feat: Implement parallel collaborative planning workflow with Plan Note

- Updated collaborative planning prompt to support parallel task generation with subagents.
- Enhanced workflow to include explicit lifecycle management for agents and conflict detection.
- Revised output structure to accommodate parallel planning results.
- Added new LocaleDropdownNavbarItem component for language selection in the documentation site.
- Introduced styles for the language icon in the dropdown.
- Modified issue execution process to streamline commit messages and report completion with full solution metadata.
This commit is contained in:
catlog22
2026-02-05 18:24:35 +08:00
parent a59baf2473
commit 1480873008
7 changed files with 1653 additions and 359 deletions


@@ -341,31 +341,59 @@ For each task:
- Do NOT commit after each task
### Step 3: Commit Solution (Once)
After ALL tasks pass, commit once with formatted summary.
After ALL tasks pass, commit once with clean conventional format.
Command:
git add -A
git commit -m "<type>(<scope>): <description>
git commit -m "<type>(<scope>): <brief description>"
Solution: ${SOLUTION_ID}
Tasks completed: <list task IDs>
Examples:
git commit -m "feat(auth): add token refresh mechanism"
git commit -m "fix(payment): resolve timeout in checkout flow"
git commit -m "refactor(api): simplify error handling"
Changes:
- <file1>: <what changed>
- <file2>: <what changed>
Verified: all tests passed"
Replace <type> with: feat|fix|refactor|docs|test
Replace <type> with: feat|fix|refactor|docs|test|chore
Replace <scope> with: affected module name
Replace <description> with: brief summary from solution
Replace <description> with: brief summary (NO solution/issue IDs)
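The format rules above can be sketched as a small check. This is illustrative only: the type list (`feat|fix|refactor|docs|test|chore`) and the "no solution/issue IDs" rule come from this doc, but the helper name, scope pattern, and ID pattern are assumptions.

```javascript
// Illustrative validator for the commit format described above.
// COMMIT_RE and the ID pattern are assumptions for this sketch.
const COMMIT_RE = /^(feat|fix|refactor|docs|test|chore)\(([a-z0-9-]+)\): (.+)$/

function checkCommitMessage(message) {
  const m = COMMIT_RE.exec(message)
  if (!m) return { ok: false, reason: 'not conventional format' }
  // Descriptions must not embed solution/issue IDs (e.g. SOL-123, ISSUE-42)
  if (/\b(SOL|ISSUE)-\d+\b/i.test(m[3])) {
    return { ok: false, reason: 'description contains an ID' }
  }
  return { ok: true, type: m[1], scope: m[2], description: m[3] }
}
```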
### Step 4: Report Completion
On success, run:
ccw issue done ${SOLUTION_ID} --result '{"summary": "<brief>", "files_modified": ["<file1>", "<file2>"], "commit": {"hash": "<hash>", "type": "<type>"}, "tasks_completed": <N>}'
ccw issue done ${SOLUTION_ID} --result '{
"solution_id": "<solution-id>",
"issue_id": "<issue-id>",
"commit": {
"hash": "<commit-hash>",
"type": "<commit-type>",
"scope": "<commit-scope>",
"message": "<commit-message>"
},
"analysis": {
"risk": "<low|medium|high>",
"impact": "<low|medium|high>",
"complexity": "<low|medium|high>"
},
"tasks_completed": [
{"id": "T1", "title": "...", "action": "...", "scope": "..."},
{"id": "T2", "title": "...", "action": "...", "scope": "..."}
],
"files_modified": ["<file1>", "<file2>"],
"tests_passed": true,
"verification": {
"all_tests_passed": true,
"acceptance_criteria_met": true,
"regression_checked": true
},
"summary": "<brief description of accomplishment>"
}'
On failure, run:
ccw issue done ${SOLUTION_ID} --fail --reason '{"task_id": "<TX>", "error_type": "<test_failure|build_error|other>", "message": "<error details>"}'
ccw issue done ${SOLUTION_ID} --fail --reason '{
"task_id": "<TX>",
"error_type": "<test_failure|build_error|other>",
"message": "<error details>",
"files_attempted": ["<file1>", "<file2>"],
"commit": null
}'
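A minimal sketch of assembling the success payload before serializing it for `--result`. The field names mirror the schema shown above; the `buildResult` helper itself is an assumption, not part of the `ccw` CLI.

```javascript
// Sketch: assemble the success payload for `ccw issue done`.
// Field names follow the schema above; buildResult is illustrative.
function buildResult({ solutionId, issueId, commit, analysis, tasks, files, summary }) {
  return {
    solution_id: solutionId,
    issue_id: issueId,
    commit,                 // { hash, type, scope, message }
    analysis,               // { risk, impact, complexity }
    tasks_completed: tasks, // [{ id, title, action, scope }, ...]
    files_modified: files,
    tests_passed: true,
    verification: {
      all_tests_passed: true,
      acceptance_criteria_met: true,
      regression_checked: true
    },
    summary
  }
}
```

The resulting object would then be passed through `JSON.stringify` into the `--result` argument.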
### Important Notes
- Do NOT clean up the worktree - it is shared by all solutions in the queue


@@ -1,6 +1,6 @@
---
name: analyze-with-file
description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding. Serial analysis for Codex.
description: Interactive collaborative analysis with documented discussions, parallel subagent exploration, and evolving understanding. Parallel analysis for Codex.
argument-hint: "TOPIC=\"<question or topic>\" [--depth=quick|standard|deep] [--continue]"
---
@@ -8,21 +8,27 @@ argument-hint: "TOPIC=\"<question or topic>\" [--depth=quick|standard|deep] [--c
## Quick Start
Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses CLI tools for deep exploration.
Interactive collaborative analysis workflow with **documented discussion process**. Records understanding evolution, facilitates multi-round Q&A, and uses **parallel subagent exploration** for deep analysis.
**Core workflow**: Topic → Explore → Discuss → Document → Refine → Conclude
**Core workflow**: Topic → Parallel Explore → Discuss → Document → Refine → Conclude
## Overview
This workflow enables iterative exploration and refinement of complex topics through sequential phases:
This workflow enables iterative exploration and refinement of complex topics through parallel-capable phases:
1. **Topic Understanding** - Parse the topic and identify analysis dimensions
2. **CLI Exploration** - Gather codebase context and perform deep analysis via Gemini
2. **Parallel Exploration** - Gather codebase context via parallel subagents (up to 4)
3. **Interactive Discussion** - Multi-round Q&A with user feedback and direction adjustments
4. **Synthesis & Conclusion** - Consolidate insights and generate actionable recommendations
The key innovation is the **documented discussion timeline**, which captures the evolution of understanding across all phases, enabling users to track how insights develop and assumptions are corrected.
**Codex-Specific Features**:
- Parallel subagent execution via `spawn_agent` + batch `wait({ ids: [...] })`
- Role loading via path (agent reads `~/.codex/agents/*.md` itself)
- Deep interaction with `send_input` for multi-round within single agent
- Explicit lifecycle management with `close_agent`
## Analysis Flow
```
@@ -34,18 +40,21 @@ Session Detection
Phase 1: Topic Understanding
├─ Parse topic/question
├─ Identify analysis dimensions (architecture, implementation, performance, security, concept, comparison, decision)
├─ Initial scoping with user (focus areas, analysis depth)
├─ Initial scoping with user (focus areas, perspectives, analysis depth)
└─ Initialize discussion.md
Phase 2: CLI Exploration (Serial Execution)
├─ Codebase context gathering (project structure, related files, constraints)
├─ Gemini CLI analysis (build on codebase findings)
└─ Aggregate findings into explorations.json
Phase 2: Parallel Exploration (Subagent Execution)
├─ Determine exploration mode (single vs multi-perspective)
├─ Parallel: spawn_agent × N (up to 4 perspectives)
├─ Batch wait: wait({ ids: [agent1, agent2, ...] })
├─ Aggregate findings from all perspectives
├─ Synthesize convergent/conflicting themes (if multi-perspective)
└─ Write explorations.json or perspectives.json
Phase 3: Interactive Discussion (Multi-Round)
├─ Present exploration findings to user
├─ Gather user feedback (deepen, adjust direction, ask questions, complete)
├─ Execute targeted CLI analysis based on user direction
├─ Execute targeted analysis via send_input or new subagent
├─ Update discussion.md with each round
└─ Repeat until clarity achieved (max 5 rounds)
@@ -61,8 +70,13 @@ Phase 4: Synthesis & Conclusion
```
.workflow/.analysis/ANL-{slug}-{date}/
├── discussion.md # ⭐ Evolution of understanding & discussions
├── exploration-codebase.json # Phase 2: Codebase context and project structure
├── explorations.json # Phase 2: CLI analysis findings aggregated
├── exploration-codebase.json # Phase 2: Codebase context (single perspective)
├── explorations/ # Phase 2: Multi-perspective explorations (if selected)
│ ├── technical.json
│ ├── architectural.json
│ └── ...
├── explorations.json # Phase 2: Single perspective findings
├── perspectives.json # Phase 2: Multi-perspective findings with synthesis
└── conclusions.json # Phase 4: Final synthesis with recommendations
```
@@ -73,22 +87,24 @@ Phase 4: Synthesis & Conclusion
| Artifact | Purpose |
|----------|---------|
| `discussion.md` | Initialized with session metadata and initial questions |
| Session variables | Topic slug, dimensions, focus areas, analysis depth |
| Session variables | Topic slug, dimensions, focus areas, perspectives, analysis depth |
### Phase 2: CLI Exploration
### Phase 2: Parallel Exploration
| Artifact | Purpose |
|----------|---------|
| `exploration-codebase.json` | Codebase context: relevant files, patterns, constraints |
| `explorations.json` | CLI analysis findings: key findings, discussion points, open questions |
| Updated `discussion.md` | Round 1-2: Exploration results and initial analysis |
| `exploration-codebase.json` | Single perspective: Codebase context (relevant files, patterns, constraints) |
| `explorations/*.json` | Multi-perspective: Individual exploration results per perspective |
| `explorations.json` | Single perspective: Aggregated findings |
| `perspectives.json` | Multi-perspective: Findings with synthesis (convergent/conflicting themes) |
| Updated `discussion.md` | Round 1: Exploration results and initial analysis |
### Phase 3: Interactive Discussion
| Artifact | Purpose |
|----------|---------|
| Updated `discussion.md` | Round N (3-5): User feedback, direction adjustments, corrected assumptions |
| CLI analysis results | Deepened analysis, adjusted perspective, or specific question answers |
| Updated `discussion.md` | Round N (2-5): User feedback, direction adjustments, corrected assumptions |
| Subagent analysis results | Deepened analysis, adjusted perspective, or specific question answers |
### Phase 4: Synthesis & Conclusion
@@ -155,10 +171,18 @@ For new analysis sessions, gather user preferences before exploration:
- 最佳实践 (Best practices)
- 问题诊断 (Problem diagnosis)
**Analysis Perspectives** (Multi-select, max 4 for parallel exploration):
- 技术视角 (Technical - implementation patterns, code structure)
- 架构视角 (Architectural - system design, component interactions)
- 安全视角 (Security - vulnerabilities, access control)
- 性能视角 (Performance - bottlenecks, optimization)
**Selection Note**: Single perspective = 1 subagent. Multiple perspectives = parallel subagents (up to 4).
**Analysis Depth** (Single-select):
- 快速概览 (Quick overview, 10-15 minutes)
- 标准分析 (Standard analysis, 30-60 minutes)
- 深度挖掘 (Deep dive, 1-2+ hours)
- 快速概览 (Quick overview, 10-15 minutes, 1 agent)
- 标准分析 (Standard analysis, 30-60 minutes, 1-2 agents)
- 深度挖掘 (Deep dive, 1-2+ hours, up to 4 parallel agents)
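The depth-to-agent mapping above can be sketched as a lookup. The agent counts come from the list; the helper name, config keys, and timeout values are assumptions for illustration.

```javascript
// Illustrative mapping of depth selection to agent budget.
// Agent caps follow the list above; timeouts are assumed values.
const DEPTH_CONFIG = {
  quick:    { maxAgents: 1, timeoutMs: 5 * 60 * 1000 },
  standard: { maxAgents: 2, timeoutMs: 10 * 60 * 1000 },
  deep:     { maxAgents: 4, timeoutMs: 10 * 60 * 1000 }
}

function agentBudget(depth, selectedPerspectives) {
  const cfg = DEPTH_CONFIG[depth] || DEPTH_CONFIG.standard
  return {
    // Never spawn more agents than perspectives, and respect the depth cap
    agents: Math.min(selectedPerspectives.length || 1, cfg.maxAgents),
    timeoutMs: cfg.timeoutMs
  }
}
```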
### Step 1.3: Initialize discussion.md
@@ -185,103 +209,234 @@ Create the main discussion document with session metadata, context, and placehol
---
## Phase 2: CLI Exploration
## Phase 2: Parallel Exploration
**Objective**: Gather codebase context and execute deep analysis via CLI tools to build understanding of the topic.
**Objective**: Gather codebase context and execute deep analysis via parallel subagents to build understanding of the topic.
**Execution Model**: Sequential (serial) execution - gather codebase context first, then perform CLI analysis building on those findings.
**Execution Model**: Parallel subagent execution - spawn multiple agents for different perspectives, batch wait for all results, then aggregate.
### Step 2.1: Codebase Context Gathering
**Key API Pattern**:
```
spawn_agent × N → wait({ ids: [...] }) → aggregate → close_agent × N
```
Use built-in tools to understand the codebase structure and identify relevant code related to the topic.
### Step 2.1: Determine Exploration Mode
**Context Gathering Activities**:
1. **Get project structure** - Execute `ccw tool exec get_modules_by_depth '{}'` to understand module organization
2. **Search for related code** - Use Grep/Glob to find files matching topic keywords
3. **Read project tech context** - Load `.workflow/project-tech.json` if available for constraints and integration points
4. **Analyze patterns** - Identify common code patterns and architecture decisions
Based on user's perspective selection in Phase 1, choose exploration mode:
**exploration-codebase.json Structure**:
- `relevant_files[]`: Files related to the topic with relevance indicators
- `patterns[]`: Common code patterns and architectural styles identified
- `constraints[]`: Project-level constraints that affect the analysis
- `integration_points[]`: Key integration points between modules
- `_metadata`: Timestamp and context information
| Mode | Condition | Subagents | Output |
|------|-----------|-----------|--------|
| Single | Default or 1 perspective selected | 1 agent | `exploration-codebase.json`, `explorations.json` |
| Multi-perspective | 2-4 perspectives selected | 2-4 agents | `explorations/*.json`, `perspectives.json` |
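The mode decision in the table above reduces to a simple branch on the perspective count. A minimal sketch, assuming a `explorationMode` helper name that is not part of the workflow itself:

```javascript
// Sketch of the mode decision from the table above.
// Output file names follow the doc; the helper is illustrative.
function explorationMode(perspectiveCount) {
  if (perspectiveCount <= 1) {
    return { mode: 'single', agents: 1,
             outputs: ['exploration-codebase.json', 'explorations.json'] }
  }
  const agents = Math.min(perspectiveCount, 4) // hard cap from the workflow
  return { mode: 'multi-perspective', agents,
           outputs: ['explorations/*.json', 'perspectives.json'] }
}
```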
**Key Information to Capture**:
- Top 5-10 most relevant files with brief descriptions
- Recurring patterns in code organization and naming
- Project constraints (frameworks, architectural styles, tech stack)
- Integration patterns between modules
- Existing solutions or similar implementations
### Step 2.2: Parallel Subagent Exploration
### Step 2.2: Gemini CLI Analysis
**⚠️ IMPORTANT**: Role files are NOT read by the main process. Pass the path in the message; the agent reads the file itself.
Execute a comprehensive CLI analysis building on the codebase context gathered in Step 2.1.
**Single Perspective Exploration**:
**CLI Execution**: Synchronous analysis via Gemini with mode=analysis
```javascript
// spawn_agent with role path (agent reads itself)
const explorationAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
**Prompt Structure**:
- **PURPOSE**: Clear goal and success criteria for the analysis
- **PRIOR CODEBASE CONTEXT**: Incorporate findings from Step 2.1 (top files, patterns, constraints)
- **TASK**: Specific investigation steps (analyze patterns, identify issues, generate insights, create discussion points)
- **MODE**: analysis (read-only)
- **CONTEXT**: Full codebase context with topic reference
- **EXPECTED**: Structured output with evidence-based insights and confidence levels
- **CONSTRAINTS**: Focus dimensions, ignore test files
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
**Analysis Output Should Include**:
- Structured analysis organized by analysis dimensions
- Specific insights tied to evidence (file references)
- Questions to deepen understanding
- Recommendations with clear rationale
- Confidence levels (high/medium/low) for conclusions
- 3-5 key findings with supporting details
---
**Execution Guideline**: Wait for CLI analysis to complete before proceeding to aggregation.
## Analysis Context
Topic: ${topic_or_question}
Dimensions: ${dimensions.join(', ')}
Session: ${sessionFolder}
## Exploration Tasks
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute relevant searches based on topic keywords
3. Analyze identified files for patterns and constraints
## Deliverables
Write findings to: ${sessionFolder}/exploration-codebase.json
Schema: {relevant_files, patterns, constraints, integration_points, key_findings, _metadata}
## Success Criteria
- [ ] Role definition read
- [ ] At least 5 relevant files identified
- [ ] Patterns and constraints documented
- [ ] JSON output follows schema
`
})
// Wait for single agent
const result = wait({ ids: [explorationAgent], timeout_ms: 600000 })
// Clean up
close_agent({ id: explorationAgent })
```
**Multi-Perspective Parallel Exploration** (up to 4 agents):
```javascript
// Define perspectives based on user selection
const selectedPerspectives = [
{ name: 'technical', focus: 'Implementation patterns and code structure' },
{ name: 'architectural', focus: 'System design and component interactions' },
{ name: 'security', focus: 'Security patterns and vulnerabilities' },
{ name: 'performance', focus: 'Performance bottlenecks and optimization' }
].slice(0, userSelectedCount) // Max 4
// Parallel spawn - all agents start immediately
const agentIds = selectedPerspectives.map(perspective => {
return spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
## Analysis Context
Topic: ${topic_or_question}
Perspective: ${perspective.name} - ${perspective.focus}
Session: ${sessionFolder}
## Perspective-Specific Exploration
Focus on ${perspective.focus} aspects of the topic.
## Exploration Tasks
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute searches focused on ${perspective.name} patterns
3. Identify ${perspective.name}-specific findings
## Deliverables
Write findings to: ${sessionFolder}/explorations/${perspective.name}.json
Schema: {
perspective: "${perspective.name}",
relevant_files, patterns, key_findings,
perspective_insights, open_questions,
_metadata
}
## Success Criteria
- [ ] Role definition read
- [ ] Perspective-specific insights identified
- [ ] At least 3 relevant findings
- [ ] JSON output follows schema
`
})
})
// Batch wait - TRUE PARALLELISM (key Codex advantage)
const results = wait({
ids: agentIds,
timeout_ms: 600000 // 10 minutes for all
})
// Handle timeout
if (results.timed_out) {
// Some agents may still be running
// Decide: continue waiting or use completed results
}
// Collect results from all perspectives
const completedFindings = {}
agentIds.forEach((agentId, index) => {
const perspective = selectedPerspectives[index]
if (results.status[agentId].completed) {
completedFindings[perspective.name] = results.status[agentId].completed
}
})
// Batch cleanup
agentIds.forEach(id => close_agent({ id }))
```
### Step 2.3: Aggregate Findings
Consolidate results from codebase context gathering and CLI analysis into a unified findings document.
**Single Perspective Aggregation**:
**explorations.json Structure**:
- `session_id`: Reference to the analysis session
Create `explorations.json` from single agent output:
- Extract key findings from exploration-codebase.json
- Organize by analysis dimensions
- Generate discussion points and open questions
**Multi-Perspective Synthesis**:
Create `perspectives.json` from parallel agent outputs:
```javascript
const synthesis = {
session_id: sessionId,
timestamp: new Date().toISOString(),
topic: topic_or_question,
dimensions: dimensions,
// Individual perspective findings
perspectives: selectedPerspectives.map(p => ({
name: p.name,
findings: completedFindings[p.name]?.key_findings || [],
insights: completedFindings[p.name]?.perspective_insights || [],
questions: completedFindings[p.name]?.open_questions || []
})),
// Cross-perspective synthesis
synthesis: {
convergent_themes: extractConvergentThemes(completedFindings),
conflicting_views: extractConflicts(completedFindings),
unique_contributions: extractUniqueInsights(completedFindings)
},
// Aggregated for discussion
aggregated_findings: mergeAllFindings(completedFindings),
discussion_points: generateDiscussionPoints(completedFindings),
open_questions: mergeOpenQuestions(completedFindings)
}
```
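The snippet above references synthesis helpers (`extractConvergentThemes`, `extractUniqueInsights`, etc.) without defining them. Minimal sketches follow, assuming each perspective's `key_findings` entries carry a `theme` string; the real implementations are not shown in this doc.

```javascript
// Assumed shape: findingsByPerspective[name].key_findings = [{ theme, ... }]
function extractConvergentThemes(findingsByPerspective) {
  const counts = {}
  for (const findings of Object.values(findingsByPerspective)) {
    for (const f of findings.key_findings || []) {
      if (f.theme) counts[f.theme] = (counts[f.theme] || 0) + 1
    }
  }
  // A theme is convergent when more than one perspective reports it
  return Object.keys(counts).filter(t => counts[t] > 1)
}

function extractUniqueInsights(findingsByPerspective) {
  const convergent = new Set(extractConvergentThemes(findingsByPerspective))
  const unique = []
  for (const [name, findings] of Object.entries(findingsByPerspective)) {
    for (const f of findings.key_findings || []) {
      if (f.theme && !convergent.has(f.theme)) {
        unique.push({ perspective: name, theme: f.theme })
      }
    }
  }
  return unique
}
```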
**perspectives.json Schema**:
- `session_id`: Session identifier
- `timestamp`: Completion time
- `topic`: Original topic/question
- `dimensions[]`: Identified analysis dimensions
- `sources[]`: List of information sources (codebase exploration, CLI analysis)
- `key_findings[]`: Main insights with evidence
- `discussion_points[]`: Questions to engage user
- `open_questions[]`: Unresolved or partially answered questions
- `_metadata`: Processing metadata
**Aggregation Activities**:
1. Extract key findings from CLI analysis output
2. Cross-reference with codebase context
3. Identify discussion points that benefit from user input
4. Note open questions for follow-up investigation
5. Organize findings by analysis dimension
- `dimensions[]`: Analysis dimensions
- `perspectives[]`: [{name, findings, insights, questions}]
- `synthesis`: {convergent_themes, conflicting_views, unique_contributions}
- `aggregated_findings[]`: Main insights across all perspectives
- `discussion_points[]`: Questions for user engagement
- `open_questions[]`: Unresolved questions
### Step 2.4: Update discussion.md
Append exploration results to the discussion timeline.
Append Round 1 with exploration results.
**Round 1-2 Sections** (Initial Understanding + Exploration Results):
- **Codebase Findings**: Top relevant files and identified patterns
- **Analysis Results**: Key findings, discussion points, recommendations
- **Sources Analyzed**: Files and code patterns examined
**Single Perspective Round 1**:
- Sources analyzed (files, patterns)
- Key findings with evidence
- Discussion points for user
- Open questions
**Documentation Standards**:
- Include direct references to analyzed files (file:line format)
- List discussion points as questions or open items
- Highlight key conclusions with confidence indicators
- Note any constraints that affect the analysis
**Multi-Perspective Round 1**:
- Per-perspective summary (brief)
- Synthesis section:
- Convergent themes (what all perspectives agree on)
- Conflicting views (where perspectives differ)
- Unique contributions (insights from specific perspectives)
- Discussion points
- Open questions
**Success Criteria**:
- `exploration-codebase.json` created with comprehensive context
- `explorations.json` created with aggregated findings
- `discussion.md` updated with Round 1-2 results
- All explorations completed successfully
- All subagents spawned and completed (or timeout handled)
- `exploration-codebase.json` OR `explorations/*.json` created
- `explorations.json` OR `perspectives.json` created with aggregated findings
- `discussion.md` updated with Round 1 results
- All agents closed properly
- Ready for interactive discussion phase
---
@@ -292,6 +447,8 @@ Append exploration results to the discussion timeline.
**Max Rounds**: 5 discussion rounds (can exit earlier if user indicates analysis is complete)
**Execution Model**: Use `send_input` for deep interaction within same agent context, or spawn new agent for significantly different analysis angles.
### Step 3.1: Present Findings & Gather Feedback
Display current understanding and exploration findings to the user.
@@ -306,69 +463,192 @@ Display current understanding and exploration findings to the user.
| Option | Purpose | Next Action |
|--------|---------|------------|
| **继续深入** | Analysis direction is correct, deepen investigation | Execute deeper CLI analysis on same topic |
| **调整方向** | Different understanding or focus needed | Ask for adjusted focus, rerun CLI analysis |
| **有具体问题** | Specific questions to ask about the topic | Capture questions, use CLI to answer them |
| **继续深入** | Analysis direction is correct, deepen investigation | `send_input` to existing agent OR spawn new deepening agent |
| **调整方向** | Different understanding or focus needed | Spawn new agent with adjusted focus |
| **有具体问题** | Specific questions to ask about the topic | `send_input` with specific questions OR spawn Q&A agent |
| **分析完成** | Sufficient information obtained | Exit discussion loop, proceed to synthesis |
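The options table above can be read as a dispatch: each choice maps to either `send_input` on the active agent or a fresh `spawn_agent`. A sketch under the assumption that the caller tracks the active agent id; `routeFeedback` and the returned action objects are illustrative, not workflow APIs.

```javascript
// Sketch: route the user's feedback choice (keys from the table above).
function routeFeedback(choice, activeAgentId) {
  switch (choice) {
    case '继续深入': // deepen: reuse agent context when still alive
      return activeAgentId
        ? { action: 'send_input', target: activeAgentId }
        : { action: 'spawn_agent', purpose: 'deepen' }
    case '调整方向': // adjust direction: always a fresh agent
      return { action: 'spawn_agent', purpose: 'adjusted-focus' }
    case '有具体问题': // specific questions
      return activeAgentId
        ? { action: 'send_input', target: activeAgentId }
        : { action: 'spawn_agent', purpose: 'qa' }
    case '分析完成': // done: exit loop, synthesize
      return { action: 'synthesize' }
    default:
      return { action: 'ask_again' }
  }
}
```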
### Step 3.2: Deepen Analysis
### Step 3.2: Deepen Analysis (via send_input or new agent)
When user selects "continue deepening", execute more detailed investigation in the same direction.
When user selects "continue deepening", execute more detailed investigation.
**Deepening Strategy**:
- Focus on previously identified findings
- Investigate edge cases and special scenarios
- Identify patterns not yet discussed
- Suggest implementation or improvement approaches
- Provide risk/impact assessments
**Option A: send_input to Existing Agent** (preferred if agent still active)
**CLI Execution**: Synchronous analysis via Gemini with emphasis on elaboration and detail.
```javascript
// Continue with existing agent context (if not closed)
send_input({
id: explorationAgent,
message: `
## CONTINUATION: Deepen Analysis
**Analysis Scope**:
- Expand on prior findings with more specifics
- Investigate corner cases and limitations
- Propose concrete improvement strategies
- Provide risk/impact ratings for findings
- Generate follow-up questions
Based on your initial exploration, the user wants deeper investigation.
### Step 3.3: Adjust Direction
## Focus Areas for Deepening
${previousFindings.discussion_points.map(p => `- ${p}`).join('\n')}
When user indicates a different focus is needed, shift the analysis angle.
## Additional Tasks
1. Investigate edge cases and special scenarios
2. Identify patterns not yet discussed
3. Suggest implementation or improvement approaches
4. Provide risk/impact assessments
## Deliverables
Append to: ${sessionFolder}/explorations.json (add "deepening_round_N" section)
## Success Criteria
- [ ] Prior findings expanded with specifics
- [ ] Corner cases and limitations identified
- [ ] Concrete improvement strategies proposed
`
})
const deepenResult = wait({ ids: [explorationAgent], timeout_ms: 600000 })
```
**Option B: Spawn New Deepening Agent** (if prior agent closed)
```javascript
const deepeningAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: ${sessionFolder}/explorations.json (prior findings)
3. Read: .workflow/project-tech.json
---
## Context
Topic: ${topic_or_question}
Prior Findings Summary: ${previousFindings.key_findings.slice(0,3).join('; ')}
## Deepening Task
Expand on prior findings with more detailed investigation.
## Focus Areas
${previousFindings.discussion_points.map(p => `- ${p}`).join('\n')}
## Deliverables
Update: ${sessionFolder}/explorations.json (add deepening insights)
`
})
const result = wait({ ids: [deepeningAgent], timeout_ms: 600000 })
close_agent({ id: deepeningAgent })
```
### Step 3.3: Adjust Direction (new agent)
When user indicates a different focus is needed, spawn new agent with adjusted perspective.
**Direction Adjustment Process**:
1. Ask user for adjusted focus area (through AskUserQuestion)
2. Determine new analysis angle (different dimension or perspective)
3. Execute CLI analysis from new perspective
4. Compare new insights with prior analysis
5. Identify what was missed and why
2. Spawn new agent with different dimension/perspective
3. Compare new insights with prior analysis
4. Identify what was missed and why
**CLI Execution**: Synchronous analysis via Gemini with new perspective.
```javascript
// Spawn agent with adjusted focus
const adjustedAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
**Analysis Scope**:
- Analyze topic from different dimension or angle
- Identify gaps in prior analysis
- Generate insights specific to new focus
- Cross-reference with prior findings
- Suggest investigation paths forward
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: ${sessionFolder}/explorations.json (prior findings)
3. Read: .workflow/project-tech.json
### Step 3.4: Answer Specific Questions
---
## Context
Topic: ${topic_or_question}
Previous Focus: ${previousDimensions.join(', ')}
**New Focus**: ${userAdjustedFocus}
## Adjusted Analysis Task
Analyze the topic from ${userAdjustedFocus} perspective.
## Tasks
1. Identify gaps in prior analysis
2. Generate insights specific to new focus
3. Cross-reference with prior findings
4. Explain what was missed and why
## Deliverables
Update: ${sessionFolder}/explorations.json (add adjusted_direction section)
`
})
const result = wait({ ids: [adjustedAgent], timeout_ms: 600000 })
close_agent({ id: adjustedAgent })
```
### Step 3.4: Answer Specific Questions (send_input preferred)
When user has specific questions, address them directly.
**Question Handling Process**:
1. Capture user questions (through AskUserQuestion)
2. Use CLI analysis or direct investigation to answer
3. Provide evidence-based answers with supporting details
4. Offer related follow-up investigations
**Preferred: send_input to Active Agent**
**CLI Execution**: Synchronous analysis via Gemini focused on specific questions.
```javascript
// Capture user questions first
const userQuestions = await AskUserQuestion({
question: "What specific questions do you have?",
options: [/* predefined + custom */]
})
**Analysis Scope**:
// Send questions to active agent
send_input({
id: activeAgent,
message: `
## USER QUESTIONS
Please answer the following questions based on your analysis:
${userQuestions.map((q, i) => `Q${i+1}: ${q}`).join('\n\n')}
## Requirements
- Answer each question directly and clearly
- Provide evidence and examples
- Clarify ambiguous or complex points
- Provide evidence and file references
- Rate confidence for each answer (high/medium/low)
- Suggest related investigation areas
- Rate confidence for each answer
`
})
const answerResult = wait({ ids: [activeAgent], timeout_ms: 300000 })
```
**Alternative: Spawn Q&A Agent**
```javascript
const qaAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: ${sessionFolder}/explorations.json (context)
---
## Q&A Task
Answer user's specific questions:
${userQuestions.map((q, i) => `Q${i+1}: ${q}`).join('\n\n')}
## Requirements
- Evidence-based answers with file references
- Confidence rating for each answer
- Suggest related investigation areas
## Deliverables
Append to: ${sessionFolder}/explorations.json (add qa_round_N section)
`
})
const result = wait({ ids: [qaAgent], timeout_ms: 300000 })
close_agent({ id: qaAgent })
```
### Step 3.5: Document Each Round
@@ -498,6 +778,21 @@ Offer user follow-up actions based on analysis results.
## Configuration
### Analysis Perspectives
Optional multi-perspective parallel exploration (single perspective is default, max 4):
| Perspective | Role File | Focus | Best For |
|------------|-----------|-------|----------|
| **Technical** | `~/.codex/agents/cli-explore-agent.md` | Implementation, code patterns, technical feasibility | Understanding how things work and technical details |
| **Architectural** | `~/.codex/agents/cli-explore-agent.md` | System design, scalability, component interactions | Understanding structure and organization |
| **Security** | `~/.codex/agents/cli-explore-agent.md` | Security patterns, vulnerabilities, access control | Identifying security risks |
| **Performance** | `~/.codex/agents/cli-explore-agent.md` | Bottlenecks, optimization, resource utilization | Finding performance issues |
**Selection**: User can multi-select up to 4 perspectives in Phase 1, or default to single comprehensive view.
**Subagent Assignment**: Each perspective gets its own subagent for true parallel exploration.
### Analysis Dimensions Reference
Dimensions guide the scope and focus of analysis:
@@ -514,11 +809,11 @@ Dimensions guide the scope and focus of analysis:
### Analysis Depth Levels
| Depth | Duration | Scope | Questions |
| Depth | Duration | Scope | Subagents |
|-------|----------|-------|-----------|
| Quick (快速概览) | 10-15 min | Surface level understanding | 3-5 key questions |
| Standard (标准分析) | 30-60 min | Moderate depth with good coverage | 5-8 key questions |
| Deep (深度挖掘) | 1-2+ hours | Comprehensive detailed analysis | 10+ key questions |
| Quick (快速概览) | 10-15 min | Surface level understanding | 1 agent, short timeout |
| Standard (标准分析) | 30-60 min | Moderate depth with good coverage | 1-2 agents |
| Deep (深度挖掘) | 1-2+ hours | Comprehensive detailed analysis | Up to 4 parallel agents |
### Focus Areas
@@ -537,27 +832,70 @@ Common focus areas that guide the analysis direction:
| Situation | Action | Recovery |
|-----------|--------|----------|
| CLI timeout | Retry with shorter, focused prompt | Skip analysis or reduce depth |
| No relevant findings | Broaden search keywords or adjust scope | Ask user for clarification |
| User disengaged | Summarize progress and offer break point | Save state for later continuation |
| Max rounds reached (5) | Force synthesis phase | Highlight remaining questions in conclusions |
| Session folder conflict | Append timestamp suffix to session ID | Create unique folder and continue |
| **Subagent timeout** | Check `results.timed_out`, continue `wait()` or use partial results | Reduce scope, spawn single agent instead of parallel |
| **Agent closed prematurely** | Cannot recover closed agent | Spawn new agent with prior context from explorations.json |
| **Parallel agent partial failure** | Some agents complete, some fail | Use completed results, note gaps in synthesis |
| **send_input to closed agent** | Error: agent not found | Spawn new agent with prior findings as context |
| **No relevant findings** | Broaden search keywords or adjust scope | Ask user for clarification |
| **User disengaged** | Summarize progress and offer break point | Save state, keep agents alive for resume |
| **Max rounds reached (5)** | Force synthesis phase | Highlight remaining questions in conclusions |
| **Session folder conflict** | Append timestamp suffix to session ID | Create unique folder and continue |
### Codex-Specific Error Patterns
```javascript
// Safe parallel execution with error handling
let agentIds = []
try {
  agentIds = perspectives.map(p => spawn_agent({ message: buildPrompt(p) }))
const results = wait({ ids: agentIds, timeout_ms: 600000 })
if (results.timed_out) {
// Handle partial completion
const completed = agentIds.filter(id => results.status[id].completed)
const pending = agentIds.filter(id => !results.status[id].completed)
// Option 1: Continue waiting for pending
// const moreResults = wait({ ids: pending, timeout_ms: 300000 })
// Option 2: Use partial results
// processPartialResults(completed, results)
}
// Process all results
processResults(agentIds, results)
} finally {
// ALWAYS cleanup, even on errors
agentIds.forEach(id => {
try { close_agent({ id }) } catch (e) { /* ignore */ }
})
}
```
---
## Iteration Patterns
### First Analysis Session (Parallel Mode)
```
User initiates: TOPIC="specific question"
├─ No session exists → New session mode
├─ Parse topic and identify dimensions
├─ Scope analysis with user (focus areas, perspectives, depth)
├─ Create discussion.md
├─ Gather codebase context
├─ Determine exploration mode:
│  ├─ Single perspective → 1 subagent
│ └─ Multi-perspective → 2-4 parallel subagents
├─ Execute parallel exploration:
│ ├─ spawn_agent × N (perspectives)
│ ├─ wait({ ids: [...] }) ← TRUE PARALLELISM
│ └─ close_agent × N
├─ Aggregate findings (+ synthesis if multi-perspective)
└─ Enter multi-round discussion loop
```
### Continuing Analysis Session
```
User resumes: TOPIC="same topic" with --continue flag
├─ Session exists → Continue mode
├─ Load previous discussion.md
├─ Load explorations.json or perspectives.json
└─ Resume from last discussion round
```
### Discussion Loop (Rounds 2-5)
```
Each round:
├─ Present current findings
├─ Gather user feedback
├─ Process response:
│ ├─ Deepen → send_input to active agent OR spawn deepening agent
│ ├─ Adjust → spawn new agent with adjusted focus
│ ├─ Questions → send_input with questions OR spawn Q&A agent
│ └─ Complete → Exit loop for synthesis
├─ wait({ ids: [...] }) for result
├─ Update discussion.md
└─ Repeat until user selects complete or max rounds reached
```
### Agent Lifecycle Management
```
Subagent lifecycle:
├─ spawn_agent({ message }) → Create with role path + task
├─ wait({ ids, timeout_ms }) → Get results (ONLY way to get output)
├─ send_input({ id, message }) → Continue interaction (if not closed)
└─ close_agent({ id }) → Cleanup (MUST do, cannot recover)
Key rules:
├─ NEVER close before you're done with an agent
├─ ALWAYS use wait() to get results, NOT close_agent()
├─ Batch wait for parallel agents: wait({ ids: [a, b, c] })
└─ Delay close_agent until all rounds complete (for send_input reuse)
```
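The lifecycle rules above can be read as a single happy-path flow. The sketch below replaces the real Codex tools (`spawn_agent`, `wait`, `send_input`, `close_agent`) with minimal in-memory stubs so the control flow is self-contained and runnable; the stub behavior is an assumption that only models one rule from this document (a closed agent cannot receive `send_input`).

```javascript
// In-memory stand-ins for the Codex subagent tools (assumed shapes, for illustration only)
const registry = new Map()
let nextId = 1
const spawn_agent = ({ message }) => { const id = nextId++; registry.set(id, { open: true, last: message }); return id }
const wait = ({ ids, timeout_ms }) => ({ timed_out: false, status: Object.fromEntries(ids.map(id => [id, { completed: true }])) })
const send_input = ({ id, message }) => {
  const agent = registry.get(id)
  if (!agent || !agent.open) throw new Error('agent not found') // rule: cannot recover a closed agent
  agent.last = message
}
const close_agent = ({ id }) => { registry.get(id).open = false }

// Lifecycle: spawn → wait → send_input (reuse same context) → wait → close
const agent = spawn_agent({ message: '~/.codex/agents/cli-explore-agent.md + task' })
let results = wait({ ids: [agent], timeout_ms: 600000 })
send_input({ id: agent, message: 'follow-up round' })  // allowed: agent still open
results = wait({ ids: [agent], timeout_ms: 300000 })
close_agent({ id: agent })                             // cleanup only after all rounds
```

Note the ordering: `close_agent` comes last, after every round that might reuse the agent via `send_input`.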
### Completion Flow
```
Final synthesis:
├─ Consolidate all insights from all rounds
├─ Generate conclusions.json
├─ Update discussion.md with final synthesis
├─ close_agent for any remaining active agents
├─ Offer follow-up options
└─ Archive session artifacts
```
### Before Analysis
1. **Clear Topic Definition**: Detailed topics lead to better dimension identification
2. **User Context**: Understanding focus preferences helps scope the analysis
3. **Perspective Selection**: Choose 2-4 perspectives for complex topics, single for focused queries
4. **Scope Understanding**: Being clear about depth expectations sets correct analysis intensity
### During Analysis
4. **Embrace Corrections**: Track wrong→right transformations as valuable learnings
5. **Iterate Thoughtfully**: Each discussion round should meaningfully refine understanding
### Codex Subagent Best Practices
1. **Role Path, Not Content**: Pass `~/.codex/agents/*.md` path in message, let agent read itself
2. **Delay close_agent**: Keep agents active for `send_input` reuse during discussion rounds
3. **Batch wait**: Use `wait({ ids: [a, b, c] })` for parallel agents, not sequential waits
4. **Handle Timeouts**: Check `results.timed_out` and decide: continue waiting or use partial results
5. **Explicit Cleanup**: Always `close_agent` when done, even on errors (use try/finally pattern)
6. **send_input vs spawn**: Prefer `send_input` for same-context continuation, `spawn` for new angles
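Practices 3 and 4 above combine into one decision point after a batch `wait`. The helper below is a sketch under the `wait` result shape used elsewhere in this document (`timed_out` flag plus a per-id `status` map); the function name is an illustration, not an existing API.

```javascript
// After one batch wait over parallel agents, split ids by completion and
// decide whether partial results are usable.
function splitByCompletion(agentIds, waitResult) {
  const completed = agentIds.filter(id => waitResult.status[id]?.completed)
  const pending = agentIds.filter(id => !waitResult.status[id]?.completed)
  return {
    completed,
    pending,
    // Use partial results when the batch timed out but something finished
    usePartial: Boolean(waitResult.timed_out) && completed.length > 0
  }
}
```

A caller would either continue with `wait({ ids: pending, ... })` or proceed to synthesis with `completed` only, noting the gaps.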
### Documentation Practices
1. **Evidence-Based**: Every conclusion should reference specific code or patterns
3. **Timeline Clarity**: Use clear timestamps for traceability
4. **Evolution Tracking**: Document how understanding changed across rounds
5. **Action Items**: Generate specific, actionable recommendations
6. **Multi-Perspective Synthesis**: When using parallel perspectives, document convergent/conflicting themes
---


---
description: Interactive brainstorming with parallel subagent collaboration, idea expansion, and documented thought evolution. Parallel multi-perspective analysis for Codex.
argument-hint: "TOPIC=\"<idea or topic>\" [--perspectives=creative,pragmatic,systematic] [--max-ideas=<n>]"
---
## Quick Start
Interactive brainstorming workflow with **documented thought evolution**. Expands initial ideas through questioning, **parallel subagent analysis**, and iterative refinement.
**Core workflow**: Seed Idea → Expand → Parallel Subagent Explore → Synthesize → Refine → Crystallize
**Key features**:
- **brainstorm.md**: Complete thought evolution timeline
- **Parallel multi-perspective**: Creative + Pragmatic + Systematic (concurrent subagents)
- **Idea expansion**: Progressive questioning and exploration
- **Diverge-Converge cycles**: Generate options then focus on best paths
**Codex-Specific Features**:
- Parallel subagent execution via `spawn_agent` + batch `wait({ ids: [...] })`
- Role loading via path (agent reads `~/.codex/agents/*.md` itself)
- Deep interaction with `send_input` for multi-round refinement within single agent
- Explicit lifecycle management with `close_agent`
## Overview
This workflow enables iterative exploration and refinement of ideas through parallel-capable phases:
1. **Seed Understanding** - Parse the initial idea and identify exploration vectors
2. **Divergent Exploration** - Gather codebase context and execute parallel multi-perspective analysis
3. **Interactive Refinement** - Multi-round idea selection, deep-dive, and refinement via send_input
4. **Convergence & Crystallization** - Synthesize final ideas and generate recommendations
The key innovation is **documented thought evolution** that captures how ideas develop, perspectives differ, and insights emerge across all phases.
.workflow/.brainstorm/BS-{slug}-{date}/
├── brainstorm.md # ⭐ Complete thought evolution timeline
├── exploration-codebase.json # Phase 2: Codebase context
├── perspectives/ # Phase 2: Individual perspective outputs
│ ├── creative.json
│ ├── pragmatic.json
│ └── systematic.json
├── perspectives.json # Phase 2: Aggregated parallel findings with synthesis
├── synthesis.json # Phase 4: Final synthesis
└── ideas/ # Phase 3: Individual idea deep-dives
├── idea-1.md
| Artifact | Purpose |
|----------|---------|
| `exploration-codebase.json` | Codebase context: relevant files, patterns, architecture constraints |
| `perspectives/*.json` | Individual perspective outputs from parallel subagents |
| `perspectives.json` | Aggregated parallel findings with synthesis (convergent/conflicting themes) |
| Updated `brainstorm.md` | Round 2: Exploration results and multi-perspective analysis |
### Phase 3: Interactive Refinement
**Matching Logic**: Compare topic text against keyword lists to identify relevant dimensions.
### Step 1.2: Role Selection
Recommend roles based on topic keywords, then let user confirm or override.
**Professional Roles** (recommended based on topic keywords):
| Role | Perspective Agent Focus | Keywords |
|------|------------------------|----------|
| system-architect | Architecture, patterns | 架构, architecture, system, 系统, design pattern |
| product-manager | Business value, roadmap | 产品, product, feature, 功能, roadmap |
| ui-designer | Visual design, interaction | UI, 界面, interface, visual, 视觉 |
| ux-expert | User research, usability | UX, 体验, experience, user, 用户 |
| data-architect | Data modeling, storage | 数据, data, database, 存储, storage |
| test-strategist | Quality, testing | 测试, test, quality, 质量, QA |
| subject-matter-expert | Domain knowledge | 领域, domain, industry, 行业, expert |
**Simple Perspectives** (fallback - always available):
| Perspective | Focus | Best For |
|-------------|-------|----------|
| creative | Innovation, cross-domain | Generating novel ideas |
| pragmatic | Implementation, feasibility | Reality-checking ideas |
| systematic | Architecture, structure | Organizing solutions |
**Selection Strategy**:
1. **Auto mode**: Select top 3 recommended professional roles based on keyword matching
2. **Manual mode**: Present recommended roles + "Use simple perspectives" option
3. **Continue mode**: Use roles from previous session
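Auto-mode selection (strategy 1 above) amounts to scoring roles by keyword hits in the topic text and taking the top three. The sketch below assumes simple substring matching; the keyword lists shown are a subset of the table above, and the function name is illustrative.

```javascript
// Keyword lists mirror (a subset of) the Professional Roles table above.
const ROLE_KEYWORDS = {
  'system-architect': ['架构', 'architecture', 'system', '系统', 'design pattern'],
  'product-manager':  ['产品', 'product', 'feature', '功能', 'roadmap'],
  'ux-expert':        ['UX', '体验', 'experience', 'user', '用户'],
  'data-architect':   ['数据', 'data', 'database', '存储', 'storage']
}

// Score each role by the number of keywords found in the topic, return top N.
function recommendRoles(topic, maxRoles = 3) {
  const lower = topic.toLowerCase()
  return Object.entries(ROLE_KEYWORDS)
    .map(([role, kws]) => [role, kws.filter(k => lower.includes(k.toLowerCase())).length])
    .filter(([, hits]) => hits > 0)
    .sort((a, b) => b[1] - a[1])
    .slice(0, maxRoles)
    .map(([role]) => role)
}
```

When no role scores a hit, the caller would fall back to the simple perspectives (creative/pragmatic/systematic).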
### Step 1.3: Initial Scoping (New Session Only)
For new brainstorm sessions, gather user preferences before exploration.
**Brainstorm Mode** (Single-select):
- 创意模式 (Creative mode - 15-20 minutes, 1 subagent)
- 平衡模式 (Balanced mode - 30-60 minutes, 3 parallel subagents)
- 深度模式 (Deep mode - 1-2+ hours, 3 parallel subagents + deep refinement)
**Focus Areas** (Multi-select):
- 技术方案 (Technical solutions)
- 创新突破 (Innovation breakthroughs)
- 可行性评估 (Feasibility assessment)
**Constraints** (Multi-select):
- 现有架构 (Existing architecture constraints)
- 时间限制 (Time constraints)
- 资源限制 (Resource constraints)
- 无约束 (No constraints)
### Step 1.4: Expand Seed into Exploration Vectors
Generate key questions that guide the brainstorming exploration. Use a subagent for vector generation.
**Exploration Vectors**:
1. **Core question**: What is the fundamental problem/opportunity?
6. **Innovation angle**: What would make this 10x better?
7. **Integration**: How does this fit with existing systems/processes?
**Subagent for Vector Generation**:
```javascript
const vectorAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
---
## Context
Topic: ${idea_or_topic}
User focus areas: ${userFocusAreas.join(', ')}
Constraints: ${constraints.join(', ')}
## Task
Generate 5-7 exploration vectors (questions/directions) to expand this idea:
1. Core question: What is the fundamental problem/opportunity?
2. User perspective: Who benefits and how?
3. Technical angle: What enables this technically?
4. Alternative approaches: What other ways could this be solved?
5. Challenges: What could go wrong or block success?
6. Innovation angle: What would make this 10x better?
7. Integration: How does this fit with existing systems/processes?
## Deliverables
Return structured exploration vectors for multi-perspective analysis.
`
})
const result = wait({ ids: [vectorAgent], timeout_ms: 120000 })
close_agent({ id: vectorAgent })
```
**Purpose**: These vectors guide each perspective subagent's analysis and ensure comprehensive exploration.
### Step 1.5: Initialize brainstorm.md
Create the main brainstorm document with session metadata and expansion content.
**brainstorm.md Structure**:
- **Header**: Session ID, topic, start time, brainstorm mode, dimensions
- **Initial Context**: Focus areas, depth level, constraints
- **Roles**: Selected roles (professional or simple perspectives)
- **Seed Expansion**: Original idea + exploration vectors
- **Thought Evolution Timeline**: Round-by-round findings
- **Current Ideas**: To be populated after exploration
**Success Criteria**:
- Session folder created successfully
- brainstorm.md initialized with all metadata
- 1-3 roles selected (professional or simple perspectives)
- Brainstorm mode and dimensions identified
- Exploration vectors generated
- User preferences captured
## Phase 2: Divergent Exploration
**Objective**: Gather codebase context and execute parallel multi-perspective analysis via subagents to generate diverse viewpoints.
**Execution Model**: Parallel subagent execution - spawn 3 perspective agents simultaneously, batch wait for all results, then aggregate.
**Key API Pattern**:
```
spawn_agent × 3 → wait({ ids: [...] }) → aggregate → close_agent × 3
```
### Step 2.1: Codebase Context Gathering
Use built-in tools to understand the codebase structure before spawning perspective agents.
**Context Gathering Activities**:
1. **Get project structure** - Execute `ccw tool exec get_modules_by_depth '{}'`
2. **Search for related code** - Use Grep/Glob to find files matching topic keywords
3. **Read project tech context** - Load `.workflow/project-tech.json` if available
4. **Analyze patterns** - Identify common code patterns and architecture decisions
- `integration_points[]`: Key integration patterns between modules
- `_metadata`: Timestamp and context information
### Step 2.2: Parallel Multi-Perspective Analysis
**⚠️ IMPORTANT**: Role files are NOT read by main process. Pass path in message, agent reads itself.
Spawn 3 perspective agents in parallel: Creative + Pragmatic + Systematic.
**Perspective Definitions**:
| Perspective | Role File | Focus |
|-------------|-----------|-------|
| Creative | `~/.codex/agents/cli-explore-agent.md` | Innovation, cross-domain inspiration, challenging assumptions |
| Pragmatic | `~/.codex/agents/cli-explore-agent.md` | Implementation feasibility, effort estimates, blockers |
| Systematic | `~/.codex/agents/cli-explore-agent.md` | Problem decomposition, patterns, scalability |
**Parallel Subagent Execution**:
```javascript
// Build shared context from codebase exploration
const explorationContext = `
CODEBASE CONTEXT:
- Key files: ${explorationResults.relevant_files.slice(0,5).map(f => f.path).join(', ')}
- Existing patterns: ${explorationResults.existing_patterns.slice(0,3).join(', ')}
- Architecture constraints: ${explorationResults.architecture_constraints.slice(0,3).join(', ')}`
// Define perspectives
const perspectives = [
{
name: 'creative',
focus: 'Innovation and novelty',
tasks: [
'Think beyond obvious solutions - what would be surprising/delightful?',
'Explore cross-domain inspiration',
'Challenge assumptions - what if the opposite were true?',
'Generate moonshot ideas alongside practical ones'
]
},
{
name: 'pragmatic',
focus: 'Implementation reality',
tasks: [
'Evaluate technical feasibility of core concept',
'Identify existing patterns/libraries that could help',
'Estimate implementation complexity',
'Highlight potential technical blockers'
]
},
{
name: 'systematic',
focus: 'Architecture thinking',
tasks: [
'Decompose the problem into sub-problems',
'Identify architectural patterns that apply',
'Map dependencies and interactions',
'Consider scalability implications'
]
}
]
// Parallel spawn - all agents start immediately
const agentIds = perspectives.map(perspective => {
return spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
## Brainstorm Context
Topic: ${idea_or_topic}
Perspective: ${perspective.name} - ${perspective.focus}
Session: ${sessionFolder}
${explorationContext}
## ${perspective.name.toUpperCase()} Perspective Tasks
${perspective.tasks.map(t => `- ${t}`).join('\n')}
## Deliverables
Write findings to: ${sessionFolder}/perspectives/${perspective.name}.json
Schema: {
perspective: "${perspective.name}",
ideas: [{ title, description, novelty, feasibility, rationale }],
key_findings: [],
challenged_assumptions: [],
open_questions: [],
_metadata: { perspective, timestamp }
}
## Success Criteria
- [ ] Role definition read
- [ ] 3-5 ideas generated with ratings
- [ ] Key findings documented
- [ ] JSON output follows schema
`
})
})
// Batch wait - TRUE PARALLELISM (key Codex advantage)
const results = wait({
ids: agentIds,
timeout_ms: 600000 // 10 minutes for all
})
// Handle timeout
if (results.timed_out) {
// Some agents may still be running
// Option: continue waiting or use completed results
}
// Collect results from all perspectives
const completedFindings = {}
agentIds.forEach((agentId, index) => {
const perspective = perspectives[index]
if (results.status[agentId].completed) {
    // Agent wrote its findings to perspectives/<name>.json; read that file
    completedFindings[perspective.name] = readJson(`${sessionFolder}/perspectives/${perspective.name}.json`)
}
})
// Batch cleanup
agentIds.forEach(id => close_agent({ id }))
```
### Step 2.3: Aggregate Multi-Perspective Findings
Consolidate results from all three parallel perspective agents.
**perspectives.json Structure**:
- `session_id`: Reference to brainstorm session
- `timestamp`: Completion time
- `topic`: Original idea/topic
- `creative`: Creative perspective findings (ideas with novelty ratings)
- `pragmatic`: Pragmatic perspective findings (approaches with effort ratings)
- `systematic`: Systematic perspective findings (architectural options)
- `synthesis`: {convergent_themes, conflicting_views, unique_contributions}
- `aggregated_ideas[]`: Merged ideas from all perspectives
- `key_findings[]`: Main insights across all perspectives
**Aggregation Activities**:
1. Extract ideas and findings from each perspective's output
2. Identify themes all perspectives agree on (convergent)
3. Note conflicting views and tradeoffs
4. Extract unique contributions from each perspective
5. Merge and deduplicate similar ideas
```javascript
const synthesis = {
session_id: sessionId,
timestamp: new Date().toISOString(),
topic: idea_or_topic,
// Individual perspective findings
creative: completedFindings.creative || {},
pragmatic: completedFindings.pragmatic || {},
systematic: completedFindings.systematic || {},
// Cross-perspective synthesis
synthesis: {
convergent_themes: extractConvergentThemes(completedFindings),
conflicting_views: extractConflicts(completedFindings),
unique_contributions: extractUniqueInsights(completedFindings)
},
// Aggregated for refinement
aggregated_ideas: mergeAllIdeas(completedFindings),
key_findings: mergeKeyFindings(completedFindings)
}
```
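The synthesis code above relies on helpers it does not define. One possible sketch of `extractConvergentThemes` is shown below, under the assumption (from the perspectives/*.json schema) that each perspective carries a `key_findings` array; "convergent" is interpreted here as a finding mentioned by at least two perspectives.

```javascript
// One possible implementation of the extractConvergentThemes helper:
// a theme is convergent when at least two perspectives report it.
function extractConvergentThemes(findings) {
  const counts = new Map()
  for (const perspective of Object.values(findings)) {
    // Deduplicate within a perspective so it only votes once per theme
    for (const theme of new Set(perspective.key_findings ?? [])) {
      counts.set(theme, (counts.get(theme) ?? 0) + 1)
    }
  }
  return [...counts.entries()].filter(([, n]) => n >= 2).map(([theme]) => theme)
}
```

`extractConflicts` and `extractUniqueInsights` would follow the same shape with inverted thresholds (exactly one perspective, or perspectives disagreeing on the same theme).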
### Step 2.4: Update brainstorm.md
- Note key assumptions and reasoning
**Success Criteria**:
- All 3 subagents spawned and completed (or timeout handled)
- `exploration-codebase.json` created with comprehensive context
- `perspectives/*.json` created for each perspective
- `perspectives.json` created with aggregated findings and synthesis
- `brainstorm.md` updated with Round 2 results
- All agents closed properly
- Ready for interactive refinement phase
---
**Max Rounds**: 6 refinement rounds (can exit earlier if user indicates completion)
**Execution Model**: Use `send_input` for deep interaction within same agent context, or spawn new agent for significantly different exploration angles.
### Step 3.1: Present Findings & Gather User Direction
Display current ideas and perspectives to the user.
| Option | Purpose | Next Action |
|--------|---------|------------|
| **深入探索** | Explore selected ideas in detail | `send_input` to active agent OR spawn deep-dive agent |
| **继续发散** | Generate more ideas | Spawn new agent with different angles |
| **挑战验证** | Test ideas critically | Spawn challenge agent (devil's advocate) |
| **合并综合** | Combine multiple ideas | Spawn merge agent to synthesize |
| **准备收敛** | Begin convergence | Exit refinement loop for synthesis |
### Step 3.2: Deep Dive on Selected Ideas (via send_input or new agent)
When user selects "deep dive", provide comprehensive analysis.
**Option A: send_input to Existing Agent** (preferred if agent still active)
```javascript
// Continue with existing agent context
send_input({
id: perspectiveAgent, // Reuse agent from Phase 2 if not closed
message: `
## CONTINUATION: Deep Dive Analysis
Based on your initial exploration, the user wants deeper investigation on these ideas:
${selectedIdeas.map((idea, i) => `${i+1}. ${idea.title}`).join('\n')}
## Deep Dive Tasks
• Elaborate each concept in detail
• Identify implementation requirements and dependencies
• Analyze potential challenges and propose mitigations
• Suggest proof-of-concept approach
• Define success metrics
## Deliverables
Write to: ${sessionFolder}/ideas/{idea-slug}.md for each selected idea
## Success Criteria
- [ ] Each idea has detailed breakdown
- [ ] Technical requirements documented
- [ ] Risk analysis with mitigations
`
})
const deepDiveResult = wait({ ids: [perspectiveAgent], timeout_ms: 600000 })
```
**Option B: Spawn New Deep-Dive Agent** (if prior agents closed)
```javascript
const deepDiveAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: ${sessionFolder}/perspectives.json (prior findings)
3. Read: .workflow/project-tech.json
---
## Deep Dive Context
Topic: ${idea_or_topic}
Selected Ideas: ${selectedIdeas.map(i => i.title).join(', ')}
## Deep Dive Tasks
${selectedIdeas.map(idea => `
### ${idea.title}
• Elaborate the core concept in detail
• Identify implementation requirements
• List potential challenges and mitigations
• Suggest proof-of-concept approach
• Define success metrics
`).join('\n')}
## Deliverables
Write: ${sessionFolder}/ideas/{idea-slug}.md for each idea
Include for each:
- Detailed concept description
- Technical requirements list
- Risk/challenge matrix
- MVP definition
- Success criteria
`
})
const result = wait({ ids: [deepDiveAgent], timeout_ms: 600000 })
close_agent({ id: deepDiveAgent })
```
### Step 3.3: Devil's Advocate Challenge (spawn new agent)
When user selects "challenge", spawn a dedicated challenge agent.
```javascript
const challengeAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: ${sessionFolder}/perspectives.json (ideas to challenge)
---
## Challenge Context
Topic: ${idea_or_topic}
Ideas to Challenge:
${selectedIdeas.map((idea, i) => `${i+1}. ${idea.title}: ${idea.description}`).join('\n')}
## Devil's Advocate Tasks
• For each idea, identify 3 strongest objections
• Challenge core assumptions
• Identify scenarios where this fails
• Consider competitive/alternative solutions
• Assess whether this solves the right problem
• Rate survivability after challenge (1-5)
## Deliverables
Return structured challenge results:
{
challenges: [{
idea: "...",
objections: [],
challenged_assumptions: [],
failure_scenarios: [],
alternatives: [],
survivability_rating: 1-5,
strengthened_version: "..."
}]
}
## Success Criteria
- [ ] 3+ objections per idea
- [ ] Assumptions explicitly challenged
- [ ] Survivability ratings assigned
`
})
const result = wait({ ids: [challengeAgent], timeout_ms: 300000 })
close_agent({ id: challengeAgent })
```
### Step 3.4: Merge Multiple Ideas (spawn merge agent)
When user selects "merge", synthesize complementary ideas.
```javascript
const mergeAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: ${sessionFolder}/perspectives.json (source ideas)
---
## Merge Context
Topic: ${idea_or_topic}
Ideas to Merge:
${selectedIdeas.map((idea, i) => `
${i+1}. ${idea.title} (${idea.source_perspective})
${idea.description}
Strengths: ${idea.strengths?.join(', ') || 'N/A'}
`).join('\n')}
## Merge Tasks
• Identify complementary elements
• Resolve contradictions
• Create unified concept
• Preserve key strengths from each
• Describe the merged solution
• Assess viability of merged idea
## Deliverables
Write to: ${sessionFolder}/ideas/merged-idea-{n}.md
Include:
- Merged concept description
- Elements taken from each source idea
- Contradictions resolved (or noted as tradeoffs)
- New combined strengths
- Implementation considerations
## Success Criteria
- [ ] Coherent merged concept
- [ ] Source attributions clear
- [ ] Contradictions addressed
`
})
const result = wait({ ids: [mergeAgent], timeout_ms: 300000 })
close_agent({ id: mergeAgent })
```
### Step 3.5: Document Each Round
- User feedback processed for each round
- `brainstorm.md` updated with all refinement rounds
- Ideas in `ideas/` folder for selected deep-dives
- All spawned agents closed properly
- Exit condition reached (user selects converge or max rounds)
---
@@ -461,11 +820,37 @@ Dimensions guide brainstorming scope and focus:
### Brainstorm Modes
| Mode | Duration | Intensity | Subagents |
|------|----------|-----------|-----------|
| Creative | 15-20 min | High novelty | 1 agent, short timeout |
| Balanced | 30-60 min | Mixed | 3 parallel agents |
| Deep | 1-2+ hours | Comprehensive | 3 parallel agents + deep refinement |
### Collaboration Patterns
| Pattern | Usage | Description |
|---------|-------|-------------|
| Parallel Divergence | New topic | All perspectives explore simultaneously via parallel subagents |
| Sequential Deep-Dive | Promising idea | `send_input` to one agent for elaboration, others critique via new agents |
| Debate Mode | Controversial approach | Spawn opposing agents to argue for/against |
| Synthesis Mode | Ready to decide | Spawn synthesis agent combining insights from all perspectives |
### Context Overflow Protection
**Per-Agent Limits**:
- Main analysis output: < 3000 words
- Sub-document (if any): < 2000 words each
- Maximum sub-documents: 5 per perspective
**Synthesis Protection**:
- If total analysis > 100KB, synthesis reads only main analysis files (not sub-documents)
- Large ideas automatically split into separate idea documents in ideas/ folder
**Recovery Steps**:
1. Check agent outputs for truncation or overflow
2. Reduce scope: fewer perspectives or simpler topic
3. Use structured brainstorm mode for more focused output
4. Split complex topics into multiple sessions
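The per-agent word limits above can be checked mechanically before synthesis. A minimal sketch, assuming whitespace-splitting is an acceptable word-count approximation; the function name and option shape are ours, not part of the workflow API.

```javascript
// Sketch: enforce the per-agent output limits (e.g. 3000 words for main analysis).
// Word counting by whitespace split is an approximation.
function checkOutputLimits(text, { maxWords = 3000 } = {}) {
  const words = text.trim().split(/\s+/).filter(Boolean).length
  return { words, withinLimit: words <= maxWords }
}
```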
---
@@ -473,18 +858,53 @@ Dimensions guide brainstorming scope and focus:
| Situation | Action | Recovery |
|-----------|--------|----------|
| **CLI timeout** | Retry with shorter prompt | Skip perspective or reduce depth |
| **Subagent timeout** | Check `results.timed_out`, continue `wait()` or use partial results | Reduce scope, use 2 perspectives instead of 3 |
| **Agent closed prematurely** | Cannot recover closed agent | Spawn new agent with prior context from perspectives.json |
| **Parallel agent partial failure** | Some perspectives complete, some fail | Use completed results, note gaps in synthesis |
| **send_input to closed agent** | Error: agent not found | Spawn new agent with prior findings as context |
| **No good ideas** | Reframe problem or adjust constraints | Try new exploration angles |
| **User disengaged** | Summarize progress and offer break | Save state, keep agents alive for resume |
| **Perspectives conflict** | Present as tradeoff options | Let user select preferred direction |
| **Max rounds reached** | Force synthesis phase | Highlight unresolved questions |
| **Session folder conflict** | Append timestamp suffix | Create unique folder |
### Codex-Specific Error Patterns
```javascript
// Safe parallel execution with error handling
let agentIds = []
try {
  agentIds = perspectives.map(p => spawn_agent({ message: buildPrompt(p) }))
const results = wait({ ids: agentIds, timeout_ms: 600000 })
if (results.timed_out) {
// Handle partial completion
const completed = agentIds.filter(id => results.status[id].completed)
const pending = agentIds.filter(id => !results.status[id].completed)
// Option 1: Continue waiting for pending
// const moreResults = wait({ ids: pending, timeout_ms: 300000 })
// Option 2: Use partial results
// processPartialResults(completed, results)
}
// Process all results
processResults(agentIds, results)
} finally {
// ALWAYS cleanup, even on errors
agentIds.forEach(id => {
try { close_agent({ id }) } catch (e) { /* ignore */ }
})
}
```
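The timeout branch above splits ids into completed and pending twice; that split can be a small pure helper. A sketch, assuming the `wait()` result shape shown in the snippet (`{ timed_out, status: { [id]: { completed } } }`); `partitionByCompletion` is a hypothetical name, not a Codex tool.

```javascript
// Sketch: split agent ids by completion, given the wait() result shape above.
// Ids absent from results.status count as pending.
function partitionByCompletion(agentIds, results) {
  const completed = agentIds.filter(id => results.status[id]?.completed)
  const pending = agentIds.filter(id => !results.status[id]?.completed)
  return { completed, pending }
}
```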
---
## Iteration Patterns
### First Brainstorm Session (Parallel Mode)
```
User initiates: TOPIC="idea or topic"
@@ -494,8 +914,13 @@ User initiates: TOPIC="idea or topic"
├─ Create brainstorm.md
├─ Expand seed into vectors
├─ Gather codebase context
├─ Execute parallel perspective exploration:
│ ├─ spawn_agent × 3 (Creative + Pragmatic + Systematic)
│ ├─ wait({ ids: [...] }) ← TRUE PARALLELISM
│ └─ close_agent × 3
├─ Aggregate findings with synthesis
└─ Enter multi-round refinement loop
```
@@ -516,15 +941,32 @@ Each round:
├─ Present current findings and top ideas
├─ Gather user feedback (deep dive/diverge/challenge/merge/converge)
├─ Process response:
│ ├─ Deep Dive → send_input to active agent OR spawn deep-dive agent
│ ├─ Diverge → spawn new agent with different angles
│ ├─ Challenge → spawn challenge agent (devil's advocate)
│ ├─ Merge → spawn merge agent to synthesize
│ └─ Converge → Exit loop for synthesis
├─ wait({ ids: [...] }) for result
├─ Update brainstorm.md
└─ Repeat until user selects converge or max rounds reached
```
### Agent Lifecycle Management
```
Subagent lifecycle:
├─ spawn_agent({ message }) → Create with role path + task
├─ wait({ ids, timeout_ms }) → Get results (ONLY way to get output)
├─ send_input({ id, message }) → Continue interaction (if not closed)
└─ close_agent({ id }) → Cleanup (MUST do, cannot recover)
Key rules:
├─ NEVER close before you're done with an agent
├─ ALWAYS use wait() to get results, NOT close_agent()
├─ Batch wait for parallel agents: wait({ ids: [a, b, c] })
└─ Consider keeping agents alive for send_input during refinement
```
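The cleanup rule above ("ALWAYS cleanup, even on errors") can be wrapped in a best-effort helper. A minimal sketch: `closeFn` stands in for the `close_agent` tool call and is injected so the logic stays testable; `closeAll` is a hypothetical name.

```javascript
// Sketch: best-effort cleanup for a batch of agents.
// Per-agent close errors are swallowed; failed ids are returned for logging.
function closeAll(agentIds, closeFn) {
  const failed = []
  for (const id of agentIds) {
    try { closeFn({ id }) } catch (e) { failed.push(id) }
  }
  return failed
}
```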
### Completion Flow
```
@@ -532,6 +974,7 @@ Final synthesis:
├─ Consolidate all findings into top ideas
├─ Generate synthesis.json
├─ Update brainstorm.md with final conclusions
├─ close_agent for any remaining active agents
├─ Offer follow-up options
└─ Archive session artifacts
```
@@ -554,6 +997,16 @@ Final synthesis:
4. **Embrace Conflicts**: Perspective conflicts often reveal important tradeoffs
5. **Iterate Thoughtfully**: Each refinement round should meaningfully advance ideas
### Codex Subagent Best Practices
1. **Role Path, Not Content**: Pass `~/.codex/agents/*.md` path in message, let agent read itself
2. **Parallel for Perspectives**: Use batch spawn + wait for 3 perspective agents
3. **Delay close_agent for Refinement**: Keep perspective agents alive for `send_input` reuse
4. **Batch wait**: Use `wait({ ids: [a, b, c] })` for parallel agents, not sequential waits
5. **Handle Timeouts**: Check `results.timed_out` and decide: continue waiting or use partial results
6. **Explicit Cleanup**: Always `close_agent` when done, even on errors (use try/finally pattern)
7. **send_input vs spawn**: Prefer `send_input` for same-context deep-dive, `spawn` for new exploration angles
### Documentation Practices
1. **Evidence-Based**: Every idea should reference codebase patterns or feasibility analysis
@@ -561,7 +1014,7 @@ Final synthesis:
3. **Timeline Clarity**: Use clear timestamps for traceability
4. **Evolution Tracking**: Document how ideas changed and evolved
5. **Action Items**: Generate specific, implementable recommendations
6. **Synthesis Quality**: Ensure convergent/conflicting themes are clearly documented
---


@@ -1,34 +1,38 @@
---
description: Parallel collaborative planning with Plan Note - Multi-agent parallel task generation, unified plan-note.md, conflict detection. Codex subagent-optimized.
argument-hint: "TASK=\"<description>\" [--max-agents=5]"
---
# Codex Collaborative-Plan-With-File Workflow
## Quick Start
Parallel collaborative planning workflow using **Plan Note** architecture. Spawns parallel subagents for each sub-domain, generates task plans concurrently, and detects conflicts across domains.
**Core workflow**: Understand → Template → Parallel Subagent Planning → Conflict Detection → Completion
**Key features**:
- **plan-note.md**: Shared collaborative document with pre-allocated sections
- **Parallel subagent planning**: Each sub-domain planned by its own subagent concurrently
- **Conflict detection**: Automatic file, dependency, and strategy conflict scanning
- **No merge needed**: Pre-allocated sections eliminate merge conflicts
**Codex-Specific Features**:
- Parallel subagent execution via `spawn_agent` + batch `wait({ ids: [...] })`
- Role loading via path (agent reads `~/.codex/agents/*.md` itself)
- Pre-allocated sections per agent = no write conflicts
- Explicit lifecycle management with `close_agent`
## Overview
This workflow enables structured planning through parallel-capable phases:
1. **Understanding & Template** - Analyze requirements, identify sub-domains, create plan-note.md template
2. **Parallel Planning** - Spawn subagent per sub-domain, batch wait for all results
3. **Conflict Detection** - Scan plan-note.md for conflicts across all domains
4. **Completion** - Generate human-readable plan.md summary
The key innovation is the **Plan Note** architecture - a shared collaborative document with pre-allocated sections per sub-domain, eliminating merge conflicts. Combined with Codex's true parallel subagent execution, all domains are planned simultaneously.
## Output Structure
@@ -55,12 +59,12 @@ The key innovation is the **Plan Note** architecture - a shared collaborative do
| `plan-note.md` | Collaborative template with pre-allocated task pool and evidence sections per domain |
| `requirement-analysis.json` | Sub-domain assignments, TASK ID ranges, complexity assessment |
### Phase 2: Parallel Planning
| Artifact | Purpose |
|----------|---------|
| `agents/{domain}/plan.json` | Detailed implementation plan per domain (from parallel subagent) |
| Updated `plan-note.md` | Task pool and evidence sections filled by each subagent |
### Phase 3: Conflict Detection
@@ -166,59 +170,163 @@ Create the sub-domain configuration document.
---
## Phase 2: Parallel Sub-Domain Planning
**Objective**: Spawn parallel subagents for each sub-domain, generating detailed plans and updating plan-note.md concurrently.
**Execution Model**: Parallel subagent execution - all domains planned simultaneously via `spawn_agent` + batch `wait`.
**Key API Pattern**:
```
spawn_agent × N → wait({ ids: [...] }) → verify outputs → close_agent × N
```
### Step 2.1: User Confirmation (unless autoMode)
Display identified sub-domains and confirm before spawning agents.
```javascript
// User confirmation
if (!autoMode) {
// Display sub-domains for user approval
  // Options: "开始规划" (start planning) / "调整拆分" (adjust split) / "取消" (cancel)
}
```
### Step 2.2: Parallel Subagent Planning
**⚠️ IMPORTANT**: Role files are NOT read by main process. Pass path in message, agent reads itself.
**Spawn All Domain Agents in Parallel**:
```javascript
// Create agent directories first
subDomains.forEach(sub => {
// mkdir: ${sessionFolder}/agents/${sub.focus_area}/
})
// Parallel spawn - all agents start immediately
const agentIds = subDomains.map(sub => {
return spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-lite-planning-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
4. Read: ${sessionFolder}/plan-note.md (understand template structure)
5. Read: ${sessionFolder}/requirement-analysis.json (understand full context)
---
## Sub-Domain Context
**Focus Area**: ${sub.focus_area}
**Description**: ${sub.description}
**TASK ID Range**: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
**Session**: ${sessionId}
## Dual Output Tasks
### Task 1: Generate Complete plan.json
Output: ${sessionFolder}/agents/${sub.focus_area}/plan.json
Include:
- Task breakdown with IDs from assigned range (${sub.task_id_range[0]}-${sub.task_id_range[1]})
- Dependencies within and across domains
- Files to modify with specific locations
- Effort and complexity estimates per task
- Conflict risk assessment for each task
### Task 2: Sync Summary to plan-note.md
**Locate Your Sections** (pre-allocated, ONLY modify these):
- Task Pool: "## 任务池 - ${toTitleCase(sub.focus_area)}"
- Evidence: "## 上下文证据 - ${toTitleCase(sub.focus_area)}"
**Task Summary Format**:
### TASK-{ID}: {Title} [${sub.focus_area}]
- **状态**: pending
- **复杂度**: Low/Medium/High
- **依赖**: TASK-xxx (if any)
- **范围**: Brief scope description
- **修改点**: file:line - change summary
- **冲突风险**: Low/Medium/High
**Evidence Format**:
- 相关文件: File list with relevance
- 现有模式: Patterns identified
- 约束: Constraints discovered
## Execution Steps
1. Explore codebase for domain-relevant files
2. Generate complete plan.json
3. Extract task summaries from plan.json
4. Read ${sessionFolder}/plan-note.md
5. Locate and fill your pre-allocated task pool section
6. Locate and fill your pre-allocated evidence section
7. Write back plan-note.md
## Important Rules
- ONLY modify your pre-allocated sections (do NOT touch other domains)
- Use assigned TASK ID range exclusively: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
- Include conflict_risk assessment for each task
## Success Criteria
- [ ] Role definition read
- [ ] plan.json generated with detailed tasks
- [ ] plan-note.md updated with task pool and evidence
- [ ] All tasks within assigned ID range
`
})
})
// Batch wait - TRUE PARALLELISM (key Codex advantage)
const results = wait({
ids: agentIds,
timeout_ms: 900000 // 15 minutes for all planning agents
})
// Handle timeout
if (results.timed_out) {
const completed = agentIds.filter(id => results.status[id].completed)
const pending = agentIds.filter(id => !results.status[id].completed)
// Option: Continue waiting or use partial results
// If most agents completed, proceed with partial results
}
// Verify outputs exist
subDomains.forEach((sub, index) => {
const agentId = agentIds[index]
if (results.status[agentId].completed) {
// Verify: agents/${sub.focus_area}/plan.json exists
// Verify: plan-note.md sections populated
}
})
// Batch cleanup
agentIds.forEach(id => close_agent({ id }))
```
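The agent prompt above instructs each subagent to fill only its pre-allocated sections. A minimal sketch of that section-replacement step: headings are matched literally against the Chinese section titles used in plan-note.md, and a section is assumed to end at the next `## ` heading. `fillSection` is a hypothetical helper name, not part of the Codex toolset.

```javascript
// Sketch: replace the body of one pre-allocated section, leaving all
// other sections untouched. Headings are matched literally.
function fillSection(note, heading, body) {
  const lines = note.split('\n')
  const start = lines.indexOf(heading)
  if (start === -1) return note // section missing: caller should recreate it defensively
  let end = lines.length
  for (let i = start + 1; i < lines.length; i++) {
    if (lines[i].startsWith('## ')) { end = i; break }
  }
  return [...lines.slice(0, start + 1), body, ...lines.slice(end)].join('\n')
}
```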
### Step 2.3: Verify plan-note.md Consistency
After all agents complete, verify the shared document.
**Verification Activities**:
1. Read final plan-note.md
2. Verify all task pool sections are populated
3. Verify all evidence sections are populated
4. Check for any accidental cross-section modifications
5. Validate TASK ID uniqueness across all domains
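Verification step 5 can be automated by scanning the merged document for duplicate task headers. A sketch, assuming the `### TASK-{ID}: ...` header format defined in the agent prompt above; the function name is ours.

```javascript
// Sketch: validate TASK ID uniqueness across the merged plan-note.md.
// Returns the list of IDs that appear more than once.
function findDuplicateTaskIds(planNote) {
  const seen = new Map()
  for (const m of planNote.matchAll(/^### TASK-(\d+):/gm)) {
    const id = m[1]
    seen.set(id, (seen.get(id) || 0) + 1)
  }
  return [...seen.entries()].filter(([, n]) => n > 1).map(([id]) => id)
}
```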
**Success Criteria**:
- All subagents spawned and completed (or timeout handled)
- `agents/{domain}/plan.json` created for each domain
- `plan-note.md` updated with all task pools and evidence sections
- Task summaries follow consistent format
- No TASK ID overlaps across domains
- All agents closed properly
---
@@ -323,17 +431,51 @@ Present session statistics and next steps.
| Situation | Action | Recovery |
|-----------|--------|----------|
| **CLI timeout** | Retry with shorter, focused prompt | Skip domain or reduce scope |
| **Subagent timeout** | Check `results.timed_out`, continue `wait()` or use partial results | Reduce scope, plan remaining domains with new agent |
| **Agent closed prematurely** | Cannot recover closed agent | Spawn new agent with domain context |
| **Parallel agent partial failure** | Some domains complete, some fail | Use completed results, re-spawn for failed domains |
| **plan-note.md write conflict** | Multiple agents write simultaneously | Pre-allocated sections prevent this; if detected, re-read and verify |
| **Section not found in plan-note** | Agent creates section defensively | Continue with new section |
| **No tasks generated** | Review domain description | Retry with refined description via new agent |
| **Conflict detection fails** | Continue with empty conflicts | Note in completion summary |
| **Session folder conflict** | Append timestamp suffix | Create unique folder |
### Codex-Specific Error Patterns
```javascript
// Safe parallel planning with error handling
let agentIds = []
try {
  agentIds = subDomains.map(sub => spawn_agent({ message: buildPlanPrompt(sub) }))
const results = wait({ ids: agentIds, timeout_ms: 900000 })
if (results.timed_out) {
const completed = agentIds.filter(id => results.status[id].completed)
const pending = agentIds.filter(id => !results.status[id].completed)
// Re-spawn for timed-out domains
    const retryIds = pending.map(id => {
const sub = subDomains[agentIds.indexOf(id)]
return spawn_agent({ message: buildPlanPrompt(sub) })
})
const retryResults = wait({ ids: retryIds, timeout_ms: 600000 })
retryIds.forEach(id => { try { close_agent({ id }) } catch(e) {} })
}
} finally {
// ALWAYS cleanup
agentIds.forEach(id => {
try { close_agent({ id }) } catch (e) { /* ignore */ }
})
}
```
---
## Iteration Patterns
### New Planning Session (Parallel Mode)
```
User initiates: TASK="task description"
@@ -341,9 +483,13 @@ User initiates: TASK="task description"
├─ Analyze task and identify sub-domains
├─ Create plan-note.md template
├─ Generate requirement-analysis.json
├─ Execute parallel planning:
spawn_agent × N (one per sub-domain)
│ ├─ wait({ ids: [...] }) ← TRUE PARALLELISM
│ └─ close_agent × N
├─ Verify plan-note.md consistency
├─ Detect conflicts
├─ Generate plan.md summary
└─ Report completion
@@ -355,8 +501,24 @@ User initiates: TASK="task description"
User resumes: TASK="same task"
├─ Session exists → Continue mode
├─ Load plan-note.md and requirement-analysis.json
├─ Identify incomplete domains (empty task pool sections)
Spawn agents for incomplete domains only
└─ Continue with conflict detection
```
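The resume flow above spawns agents only for domains whose task pool is still empty. A minimal sketch of that detection, assuming a section counts as filled once a `### TASK-` header appears between its heading and the next `## ` heading; the heading format mirrors the pre-allocated sections, and `findIncompleteDomains` is a hypothetical name.

```javascript
// Sketch: detect domains whose pre-allocated task pool is still empty.
// A section counts as filled once it contains a "### TASK-" header.
function findIncompleteDomains(planNote, domains) {
  return domains.filter(domain => {
    const heading = `## 任务池 - ${domain}`
    const start = planNote.indexOf(heading)
    if (start === -1) return true // section missing entirely
    const rest = planNote.slice(start + heading.length)
    const next = rest.indexOf('\n## ')
    const section = next === -1 ? rest : rest.slice(0, next)
    return !section.includes('### TASK-')
  })
}
```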
### Agent Lifecycle Management
```
Subagent lifecycle:
├─ spawn_agent({ message }) → Create with role path + task
├─ wait({ ids, timeout_ms }) → Get results (ONLY way to get output)
└─ close_agent({ id }) → Cleanup (MUST do, cannot recover)
Key rules:
├─ Pre-allocated sections = no write conflicts
├─ ALWAYS use wait() to get results, NOT close_agent()
├─ Batch wait for all domain agents: wait({ ids: [a, b, c, ...] })
└─ Verify plan-note.md after batch completion
```
---
@@ -376,6 +538,16 @@ User resumes: TASK="same task"
3. **Check Dependencies**: Cross-domain dependencies should be documented explicitly
4. **Inspect Details**: Review `agents/{domain}/plan.json` for specifics when needed
### Codex Subagent Best Practices
1. **Role Path, Not Content**: Pass `~/.codex/agents/*.md` path in message, let agent read itself
2. **Pre-allocated Sections**: Each agent only writes to its own sections - no write conflicts
3. **Batch wait**: Use `wait({ ids: [a, b, c] })` for all domain agents, not sequential waits
4. **Handle Timeouts**: Check `results.timed_out`, re-spawn for failed domains
5. **Explicit Cleanup**: Always `close_agent` when done, even on errors (use try/finally)
6. **Verify After Batch**: Read plan-note.md after all agents complete to verify consistency
7. **TASK ID Isolation**: Pre-assigned non-overlapping ranges prevent ID conflicts
### After Planning
1. **Resolve Conflicts**: Address high/critical conflicts before execution


@@ -583,30 +583,13 @@ After ALL tasks in the solution pass implementation, testing, and verification,
# Stage all modified files from all tasks
git add path/to/file1.ts path/to/file2.ts ...
# Commit with clean, standard format (NO solution metadata)
git commit -m "[commit_type](scope): [brief description of changes]"
# Example commits:
# feat(auth): add token refresh mechanism
# fix(payment): resolve timeout handling in checkout flow
# refactor(api): simplify error handling logic
```
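The clean-commit rule above (conventional format, no solution metadata in the subject) can be checked mechanically. A sketch validator; the accepted type list follows the `feat|fix|refactor|docs|test|chore` set used in this workflow, and the `SOL-`/`ISS-` prefixes are the solution/issue ID formats from the examples below.

```javascript
// Sketch: check a commit subject against "<type>(<scope>): <description>"
// and reject subjects that leak solution/issue IDs.
const COMMIT_RE = /^(feat|fix|refactor|docs|test|chore)\([a-z0-9-]+\): .+/
function isCleanCommitSubject(subject) {
  return COMMIT_RE.test(subject) && !/\b(SOL|ISS)-/.test(subject)
}
```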
**Commit Type Selection**:
@@ -644,16 +627,78 @@ EOF
## Step 4: Report Completion
After ALL tasks in the solution are complete and committed, report to queue system with full solution metadata:
```javascript
// ccw auto-detects worktree and uses main repo's .workflow/
// Record ALL solution context here (NOT in git commit)
shell_command({
command: `ccw issue done ${item_id} --result '${JSON.stringify({
solution_id: solution.id,
issue_id: issue_id,
commit: {
hash: commit_hash,
type: commit_type,
scope: commit_scope,
message: commit_message
},
analysis: {
risk: solution.analysis.risk,
impact: solution.analysis.impact,
complexity: solution.analysis.complexity
},
tasks_completed: solution.tasks.map(t => ({
id: t.id,
title: t.title,
action: t.action,
scope: t.scope
})),
files_modified: ["path1", "path2", ...],
tests_passed: true,
verification: {
all_tests_passed: true,
acceptance_criteria_met: true,
regression_checked: true
},
summary: "[What was accomplished - brief description]"
})}'`
})
```
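One caveat about the pattern above: interpolating `JSON.stringify` output inside a single-quoted shell argument breaks as soon as any field value contains a literal `'` (e.g. a summary like "don't retry"). A hedged sketch of the standard POSIX escaping fix; `shellSingleQuote` is a hypothetical helper, not part of `ccw`.

```javascript
// Sketch: safely wrap a JSON payload for a single-quoted shell argument.
// Each literal ' becomes '\'' (close quote, escaped quote, reopen quote).
function shellSingleQuote(s) {
  return `'${s.replace(/'/g, `'\\''`)}'`
}
// Usage (hypothetical):
//   `ccw issue done ${item_id} --result ${shellSingleQuote(JSON.stringify(result))}`
```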
**Complete Example**:
```javascript
shell_command({
command: `ccw issue done S-1 --result '${JSON.stringify({
solution_id: "SOL-ISS-20251227-001-1",
issue_id: "ISS-20251227-001",
commit: {
hash: "a1b2c3d4",
type: "feat",
scope: "auth",
message: "feat(auth): add token refresh mechanism"
},
analysis: {
risk: "low",
impact: "medium",
complexity: "medium"
},
tasks_completed: [
{ id: "T1", title: "Implement refresh token endpoint", action: "Add", scope: "src/auth/" },
{ id: "T2", title: "Add token rotation logic", action: "Create", scope: "src/auth/services/" }
],
files_modified: [
"src/auth/routes/token.ts",
"src/auth/services/refresh.ts",
"src/auth/middleware/validate.ts"
],
tests_passed: true,
verification: {
all_tests_passed: true,
acceptance_criteria_met: true,
regression_checked: true
},
summary: "Implemented token refresh mechanism with automatic rotation"
})}'`
})
```
@@ -662,7 +707,13 @@ shell_command({
```javascript
shell_command({
command: `ccw issue done ${item_id} --fail --reason '${JSON.stringify({
task_id: "TX",
error_type: "test_failure",
message: "Integration tests failed: timeout in token validation",
files_attempted: ["path1", "path2"],
commit: null
})}'`
})
```


@@ -0,0 +1,218 @@
import React, {type ReactNode} from 'react';
import {translate} from '@docusaurus/Translate';
import {useLocation} from '@docusaurus/router';
import useDocusaurusContext from '@docusaurus/useDocusaurusContext';
import {applyTrailingSlash, removeTrailingSlash} from '@docusaurus/utils-common';
import {mergeSearchStrings, useHistorySelector} from '@docusaurus/theme-common';
import DropdownNavbarItem from '@theme/NavbarItem/DropdownNavbarItem';
import type {LinkLikeNavbarItemProps} from '@theme/NavbarItem';
import type {Props} from '@theme/NavbarItem/LocaleDropdownNavbarItem';
import IconLanguage from '@theme/Icon/Language';
import styles from './styles.module.css';
function isBaseUrlPrefixOfPathname(baseUrl: string, pathname: string): boolean {
const baseUrlNoTrailingSlash = removeTrailingSlash(baseUrl);
return (
pathname === baseUrlNoTrailingSlash ||
pathname.startsWith(`${baseUrlNoTrailingSlash}/`) ||
pathname.startsWith(baseUrl)
);
}
function findBestMatchingBaseUrl({
pathname,
candidateBaseUrls,
}: {
pathname: string;
candidateBaseUrls: string[];
}): string | undefined {
let bestBaseUrl: string | undefined;
let bestBaseUrlLength = -1;
for (const baseUrl of candidateBaseUrls) {
if (!isBaseUrlPrefixOfPathname(baseUrl, pathname)) {
continue;
}
const baseUrlNoTrailingSlash = removeTrailingSlash(baseUrl);
if (baseUrlNoTrailingSlash.length > bestBaseUrlLength) {
bestBaseUrl = baseUrl;
bestBaseUrlLength = baseUrlNoTrailingSlash.length;
}
}
return bestBaseUrl;
}
function stripBaseUrlPrefix({
pathname,
baseUrl,
}: {
pathname: string;
baseUrl: string;
}): string {
const baseUrlNoTrailingSlash = removeTrailingSlash(baseUrl);
if (pathname === baseUrl || pathname === baseUrlNoTrailingSlash) {
return '';
}
if (pathname.startsWith(`${baseUrlNoTrailingSlash}/`)) {
return pathname.slice(baseUrlNoTrailingSlash.length + 1);
}
if (pathname.startsWith(baseUrl)) {
return pathname.slice(baseUrl.length);
}
return removeTrailingSlash(pathname).replace(/^\//, '');
}
function useLocaleDropdownUtils() {
const {
siteConfig,
i18n: {localeConfigs},
} = useDocusaurusContext();
const {pathname} = useLocation();
const search = useHistorySelector((history) => history.location.search);
const hash = useHistorySelector((history) => history.location.hash);
const candidateBaseUrls = Object.values(localeConfigs)
.map((localeConfig) => localeConfig.baseUrl)
.filter((baseUrl): baseUrl is string => typeof baseUrl === 'string');
const currentBaseUrl =
findBestMatchingBaseUrl({pathname, candidateBaseUrls}) ?? siteConfig.baseUrl;
const canonicalPathname = applyTrailingSlash(pathname, {
trailingSlash: siteConfig.trailingSlash,
baseUrl: currentBaseUrl,
});
// Canonical pathname, without the baseUrl of the current locale.
// We pick the longest matching locale baseUrl so that when we're already
// under /docs/zh/, we strip /docs/zh/ (not just /docs/).
const pathnameSuffix = stripBaseUrlPrefix({
pathname: canonicalPathname,
baseUrl: currentBaseUrl,
});
const getLocaleConfig = (locale: string) => {
const localeConfig = localeConfigs[locale];
if (!localeConfig) {
throw new Error(
`Docusaurus bug, no locale config found for locale=${locale}`,
);
}
return localeConfig;
};
const createUrl = ({
locale,
fullyQualified,
}: {
locale: string;
fullyQualified: boolean;
}) => {
const localeConfig = getLocaleConfig(locale);
    const newUrl = fullyQualified ? localeConfig.url : '';
return `${newUrl}${localeConfig.baseUrl}${pathnameSuffix}`;
};
const getBaseURLForLocale = (locale: string) => {
const localeConfig = getLocaleConfig(locale);
const isSameDomain = localeConfig.url === siteConfig.url;
if (isSameDomain) {
// Shorter paths if localized sites are hosted on the same domain.
// We keep the `pathname://` escape hatch so we don't rely on SPA
// navigation when baseUrl changes between locales.
return `pathname://${createUrl({locale, fullyQualified: false})}`;
}
return createUrl({locale, fullyQualified: true});
};
return {
getURL: (locale: string, options: {queryString: string | undefined}) => {
// We have 2 query strings because
// - there's the current one
// - there's one user can provide through navbar config
// see https://github.com/facebook/docusaurus/pull/8915
const finalSearch = mergeSearchStrings(
[search, options.queryString],
'append',
);
return `${getBaseURLForLocale(locale)}${finalSearch}${hash}`;
},
getLabel: (locale: string) => {
return getLocaleConfig(locale).label;
},
getLang: (locale: string) => {
return getLocaleConfig(locale).htmlLang;
},
};
}
export default function LocaleDropdownNavbarItem({
mobile,
dropdownItemsBefore,
dropdownItemsAfter,
queryString,
...props
}: Props): ReactNode {
const utils = useLocaleDropdownUtils();
const {
i18n: {currentLocale, locales},
} = useDocusaurusContext();
const localeItems = locales.map((locale): LinkLikeNavbarItemProps => {
return {
label: utils.getLabel(locale),
lang: utils.getLang(locale),
to: utils.getURL(locale, {queryString}),
target: '_self',
autoAddBaseUrl: false,
className:
// eslint-disable-next-line no-nested-ternary
locale === currentLocale
? // Similar idea as DefaultNavbarItem: select the right Infima active
// class name. This cannot be substituted with isActive, because the
// target URLs contain `pathname://` and therefore are not NavLinks!
mobile
? 'menu__link--active'
: 'dropdown__link--active'
: '',
};
});
const items = [...dropdownItemsBefore, ...localeItems, ...dropdownItemsAfter];
// Mobile is handled a bit differently
const dropdownLabel = mobile
? translate({
message: 'Languages',
id: 'theme.navbar.mobileLanguageDropdown.label',
description: 'The label for the mobile language switcher dropdown',
})
: utils.getLabel(currentLocale);
return (
<DropdownNavbarItem
{...props}
mobile={mobile}
label={
<>
<IconLanguage className={styles.iconLanguage} />
{dropdownLabel}
</>
}
items={items}
/>
);
}


@@ -0,0 +1,5 @@
.iconLanguage {
vertical-align: text-bottom;
margin-right: 5px;
}