---
|
||||
name: collaborative-plan-parallel
|
||||
description: Parallel collaborative planning with Execution Groups - Multi-codex parallel task generation, execution group assignment, multi-branch strategy. Codex-optimized.
|
||||
argument-hint: "TASK=\"<description>\" [--max-groups=3] [--group-strategy=automatic|balanced|manual]"
|
||||
---
|
||||
|
||||
# Codex Collaborative-Plan-Parallel Workflow
|
||||
|
||||
## Quick Start
|
||||
|
||||
Parallel collaborative planning workflow built on an **Execution Groups** architecture. It splits the task into sub-domains, assigns them to execution groups, and prepares for multi-branch parallel development.
|
||||
|
||||
**Core workflow**: Understand → Group Assignment → Sequential Planning → Conflict Detection → Execution Strategy
|
||||
|
||||
**Key features**:
|
||||
- **Execution Groups**: Sub-domains grouped for parallel execution by different codex instances
|
||||
- **Multi-branch strategy**: Each execution group works on independent Git branch
|
||||
- **Codex instance assignment**: Each group assigned to specific codex worker
|
||||
- **Dependency-aware grouping**: Automatic or manual group assignment based on dependencies
|
||||
- **plan-note.md**: Shared document with execution group sections
|
||||
|
||||
**Note**: Planning is still serial (Codex limitation), but output is structured for parallel execution.
|
||||
|
||||
## Overview
|
||||
|
||||
This workflow enables structured planning for parallel execution:
|
||||
|
||||
1. **Understanding & Group Assignment** - Analyze requirements, identify sub-domains, assign to execution groups
|
||||
2. **Sequential Planning** - Process each sub-domain serially via CLI analysis (planning phase only)
|
||||
3. **Conflict Detection** - Scan for conflicts across execution groups
|
||||
4. **Execution Strategy** - Generate branch strategy and codex assignment for parallel execution
|
||||
|
||||
The key innovation is **Execution Groups** - sub-domains are grouped by dependencies and complexity, enabling true parallel development with multiple codex instances.
|
||||
|
||||
## Output Structure
|
||||
|
||||
```
|
||||
.workflow/.planning/CPLAN-{slug}-{date}/
|
||||
├── plan-note.md # ⭐ Core: Requirements + Groups + Tasks
|
||||
├── requirement-analysis.json # Phase 1: Sub-domain + group assignments
|
||||
├── execution-groups.json # ⭐ Phase 1: Group metadata + codex assignment
|
||||
├── agents/ # Phase 2: Per-domain plans (serial planning)
|
||||
│ ├── {domain-1}/
|
||||
│ │ └── plan.json
|
||||
│ ├── {domain-2}/
|
||||
│ │ └── plan.json
|
||||
│ └── ...
|
||||
├── conflicts.json # Phase 3: Conflict report
|
||||
├── execution-strategy.md # ⭐ Phase 4: Branch strategy + codex commands
|
||||
└── plan.md # Phase 4: Human-readable summary
|
||||
```
|
||||
|
||||
## Output Artifacts
|
||||
|
||||
### Phase 1: Understanding & Group Assignment
|
||||
|
||||
| Artifact | Purpose |
|
||||
|----------|---------|
|
||||
| `plan-note.md` | Collaborative template with execution group sections |
|
||||
| `requirement-analysis.json` | Sub-domain assignments with group IDs |
|
||||
| `execution-groups.json` | ⭐ Group metadata, codex assignment, branch names, dependencies |
|
||||
|
||||
### Phase 2: Sequential Planning (per domain, as in the original workflow)
|
||||
|
||||
| Artifact | Purpose |
|
||||
|----------|---------|
|
||||
| `agents/{domain}/plan.json` | Detailed implementation plan per domain |
|
||||
| Updated `plan-note.md` | Task pool and evidence sections filled per domain |
|
||||
|
||||
### Phase 3: Conflict Detection (same as original)
|
||||
|
||||
| Artifact | Purpose |
|
||||
|----------|---------|
|
||||
| `conflicts.json` | Detected conflicts with types, severity, resolutions |
|
||||
| Updated `plan-note.md` | Conflict markers section populated |
|
||||
|
||||
### Phase 4: Execution Strategy Generation
|
||||
|
||||
| Artifact | Purpose |
|
||||
|----------|---------|
|
||||
| `execution-strategy.md` | ⭐ Branch creation commands, codex execution commands per group, merge strategy |
|
||||
| `plan.md` | Human-readable summary with execution groups |
|
||||
|
||||
---
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Session Initialization
|
||||
|
||||
The workflow automatically generates a unique session identifier and directory structure.
|
||||
|
||||
**Session ID Format**: `CPLAN-{slug}-{date}`
|
||||
- `slug`: Lowercase alphanumeric, max 30 chars
|
||||
- `date`: YYYY-MM-DD format (UTC+8)
|
||||
|
||||
**Session Directory**: `.workflow/.planning/{sessionId}/`
|
||||
|
||||
**Auto-Detection**: If session folder exists with plan-note.md, automatically enters continue mode.
|
||||
|
||||
**Session Variables**:
|
||||
- `sessionId`: Unique session identifier
|
||||
- `sessionFolder`: Base directory for all artifacts
|
||||
- `maxGroups`: Maximum execution groups (default: 3)
|
||||
- `groupStrategy`: automatic | balanced | manual (default: automatic)
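
A minimal initialization sketch (bash with standard GNU tools; the slug and continue-mode logic are assumptions consistent with the format described above):

```bash
# Derive session ID from the task description (assumed slug rules: lowercase alphanumeric, max 30 chars)
TASK_DESC="Add user authentication"
slug=$(echo "$TASK_DESC" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | cut -c1-30 | sed 's/-$//')
date=$(TZ='Asia/Shanghai' date +%F)          # YYYY-MM-DD in UTC+8
sessionId="CPLAN-${slug}-${date}"
sessionFolder=".workflow/.planning/${sessionId}"

# Auto-detection: an existing plan-note.md means continue mode
if [ -f "${sessionFolder}/plan-note.md" ]; then
  echo "Continue mode: resuming ${sessionId}"
else
  mkdir -p "${sessionFolder}/agents"
  echo "New session: ${sessionId}"
fi
```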
|
||||
|
||||
---
|
||||
|
||||
## Phase 1: Understanding & Group Assignment
|
||||
|
||||
**Objective**: Analyze task requirements, identify sub-domains, assign to execution groups, and create the plan-note.md template.
|
||||
|
||||
### Step 1.1: Analyze Task Description
|
||||
|
||||
Use built-in tools to understand the task scope and identify sub-domains.
|
||||
|
||||
**Analysis Activities**:
|
||||
1. **Extract task keywords** - Identify key terms and concepts
|
||||
2. **Identify sub-domains** - Split into 2-8 parallelizable focus areas
|
||||
3. **Analyze dependencies** - Map cross-domain dependencies
|
||||
4. **Assess complexity** - Evaluate task complexity per domain (Low/Medium/High)
|
||||
5. **Search for references** - Find related documentation, README, architecture guides
|
||||
|
||||
**Sub-Domain Identification Patterns**:
|
||||
|
||||
| Pattern | Keywords | Typical Group Assignment |
|
||||
|---------|----------|--------------------------|
|
||||
| Backend API | 服务, 后端, API, 接口 | Group with database if dependent |
|
||||
| Frontend | 界面, 前端, UI, 视图 | Separate group (UI-focused) |
|
||||
| Database | 数据, 存储, 数据库, 持久化 | Group with backend if tightly coupled |
|
||||
| Testing | 测试, 验证, QA | Can be separate or split across groups |
|
||||
| Infrastructure | 部署, 基础, 运维, 配置 | Usually separate group |
|
||||
|
||||
### Step 1.2: Assign Execution Groups
|
||||
|
||||
Assign sub-domains to execution groups based on strategy.
|
||||
|
||||
**Group Assignment Strategies**:
|
||||
|
||||
#### 1. Automatic Strategy (default)
|
||||
- **Logic**: Group domains by dependency relationships
|
||||
- **Rule**: Domains with direct dependencies → same group
|
||||
- **Rule**: Independent domains → separate groups (up to maxGroups)
|
||||
- **Example**:
|
||||
- Group 1: backend-api + database (dependent)
|
||||
- Group 2: frontend + ui-components (dependent)
|
||||
- Group 3: testing + documentation (independent)
|
||||
|
||||
#### 2. Balanced Strategy
|
||||
- **Logic**: Distribute domains evenly across groups by estimated effort
|
||||
- **Rule**: Balance total complexity across groups
|
||||
- **Example**:
|
||||
- Group 1: frontend (high) + testing (low)
|
||||
- Group 2: backend (high) + documentation (low)
|
||||
- Group 3: database (medium) + infrastructure (medium)
|
||||
|
||||
#### 3. Manual Strategy
|
||||
- **Logic**: Prompt user to manually assign domains to groups
|
||||
- **UI**: Present domains with dependencies, ask for group assignments
|
||||
- **Validation**: Check that dependencies are within same group or properly ordered
|
||||
|
||||
**Codex Instance Assignment**:
|
||||
- Each group assigned to `codex-{N}` (e.g., codex-1, codex-2, codex-3)
|
||||
- Instance names are logical identifiers for parallel execution
|
||||
- Actual parallel execution happens in unified-execute-parallel workflow
|
||||
|
||||
### Step 1.3: Generate execution-groups.json
|
||||
|
||||
Create the execution group metadata document.
|
||||
|
||||
**execution-groups.json Structure**:
|
||||
|
||||
```json
|
||||
{
|
||||
"session_id": "CPLAN-auth-2025-02-03",
|
||||
"total_groups": 3,
|
||||
"group_strategy": "automatic",
|
||||
"groups": [
|
||||
{
|
||||
"group_id": "EG-001",
|
||||
"codex_instance": "codex-1",
|
||||
"domains": ["frontend", "ui-components"],
|
||||
"branch_name": "feature/cplan-auth-eg-001-frontend",
|
||||
"estimated_effort": "high",
|
||||
"task_id_range": "TASK-001~200",
|
||||
"dependencies_on_groups": [],
|
||||
"cross_group_files": []
|
||||
},
|
||||
{
|
||||
"group_id": "EG-002",
|
||||
"codex_instance": "codex-2",
|
||||
"domains": ["backend-api", "database"],
|
||||
"branch_name": "feature/cplan-auth-eg-002-backend",
|
||||
"estimated_effort": "medium",
|
||||
"task_id_range": "TASK-201~400",
|
||||
"dependencies_on_groups": [],
|
||||
"cross_group_files": []
|
||||
},
|
||||
{
|
||||
"group_id": "EG-003",
|
||||
"codex_instance": "codex-3",
|
||||
"domains": ["testing"],
|
||||
"branch_name": "feature/cplan-auth-eg-003-testing",
|
||||
"estimated_effort": "low",
|
||||
"task_id_range": "TASK-401~500",
|
||||
"dependencies_on_groups": ["EG-001", "EG-002"],
|
||||
"cross_group_files": []
|
||||
}
|
||||
],
|
||||
"inter_group_dependencies": [
|
||||
{
|
||||
"from_group": "EG-003",
|
||||
"to_group": "EG-001",
|
||||
"dependency_type": "requires_completion",
|
||||
"description": "Testing requires frontend implementation"
|
||||
},
|
||||
{
|
||||
"from_group": "EG-003",
|
||||
"to_group": "EG-002",
|
||||
"dependency_type": "requires_completion",
|
||||
"description": "Testing requires backend API"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Field Descriptions**:
|
||||
|
||||
| Field | Purpose |
|
||||
|-------|---------|
|
||||
| `group_id` | Unique execution group identifier (EG-001, EG-002, ...) |
|
||||
| `codex_instance` | Logical codex worker name for parallel execution |
|
||||
| `domains[]` | Sub-domains assigned to this group |
|
||||
| `branch_name` | Git branch name for this group's work |
|
||||
| `estimated_effort` | Complexity: low/medium/high |
|
||||
| `task_id_range` | Non-overlapping TASK ID range (200 IDs per group) |
|
||||
| `dependencies_on_groups[]` | Groups that must complete before this group starts |
|
||||
| `cross_group_files[]` | Files modified by multiple groups (conflict risk) |
|
||||
| `inter_group_dependencies[]` | Explicit cross-group dependency relationships |
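
A sanity-check sketch over execution-groups.json (jq assumed available; the checks mirror the field contract above rather than any built-in validator):

```bash
EG=".workflow/.planning/CPLAN-auth-2025-02-03/execution-groups.json"

# Every dependency must reference a declared group
jq -e '
  [.groups[].group_id] as $ids
  | [.groups[].dependencies_on_groups[]?] | all(. as $d | $ids | index($d) != null)
' "$EG" >/dev/null || echo "ERROR: dependency on undeclared group"

# Group count should not exceed total_groups
jq -e '(.groups | length) <= .total_groups' "$EG" >/dev/null \
  || echo "ERROR: more groups than total_groups"

# Codex instance names must be unique
jq -e '[.groups[].codex_instance] | length == (unique | length)' "$EG" >/dev/null \
  || echo "ERROR: duplicate codex_instance assignment"
```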
|
||||
|
||||
### Step 1.4: Create plan-note.md Template with Groups
|
||||
|
||||
Generate structured template with execution group sections.
|
||||
|
||||
**plan-note.md Structure**:
|
||||
- **YAML Frontmatter**: session_id, original_requirement, total_groups, group_strategy, status
|
||||
- **Section: 需求理解**: Core objectives, key points, constraints, group strategy
|
||||
- **Section: 执行组划分**: Table of groups with domains, branches, codex assignments
|
||||
- **Section: 任务池 - {Group ID} - {Domains}**: Pre-allocated task section per execution group
|
||||
- **Section: 依赖关系**: Cross-group dependencies
|
||||
- **Section: 冲突标记**: Populated in Phase 3
|
||||
- **Section: 上下文证据 - {Group ID}**: Evidence section per execution group
|
||||
|
||||
**TASK ID Range Allocation**: Each group receives 200 non-overlapping IDs (e.g., Group 1: TASK-001~200, Group 2: TASK-201~400).
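
The range arithmetic is simple enough to compute mechanically; a sketch (group numbering assumed 1-based):

```bash
# 200 IDs per group: group N owns TASK-{(N-1)*200+1} .. TASK-{N*200}
group_index=2
range_start=$(( (group_index - 1) * 200 + 1 ))   # 201
range_end=$(( group_index * 200 ))               # 400
printf 'TASK-%03d~%03d\n' "$range_start" "$range_end"   # TASK-201~400
```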
|
||||
|
||||
### Step 1.5: Update requirement-analysis.json with Groups
|
||||
|
||||
Extend requirement-analysis.json to include execution group assignments.
|
||||
|
||||
**requirement-analysis.json Structure** (extended):
|
||||
|
||||
```json
|
||||
{
|
||||
"session_id": "CPLAN-auth-2025-02-03",
|
||||
"original_requirement": "...",
|
||||
"complexity": "high",
|
||||
"total_groups": 3,
|
||||
"group_strategy": "automatic",
|
||||
"sub_domains": [
|
||||
{
|
||||
"focus_area": "frontend",
|
||||
"description": "...",
|
||||
"execution_group": "EG-001",
|
||||
"task_id_range": "TASK-001~100",
|
||||
"estimated_effort": "high",
|
||||
"dependencies": []
|
||||
},
|
||||
{
|
||||
"focus_area": "ui-components",
|
||||
"description": "...",
|
||||
"execution_group": "EG-001",
|
||||
"task_id_range": "TASK-101~200",
|
||||
"estimated_effort": "medium",
|
||||
"dependencies": ["frontend"]
|
||||
}
|
||||
],
|
||||
"execution_groups_summary": [
|
||||
{
|
||||
"group_id": "EG-001",
|
||||
"domains": ["frontend", "ui-components"],
|
||||
"total_estimated_effort": "high"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Success Criteria**:
|
||||
- 2-3 execution groups identified (up to maxGroups)
|
||||
- Each group has 1-4 sub-domains
|
||||
- Dependencies mapped (intra-group and inter-group)
|
||||
- execution-groups.json created with complete metadata
|
||||
- plan-note.md template includes group sections
|
||||
- requirement-analysis.json extended with group assignments
|
||||
- Branch names generated for each group
|
||||
- Codex instance assigned to each group
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: Sequential Sub-Domain Planning
|
||||
|
||||
**Objective**: Process each sub-domain serially via CLI analysis (same as original workflow, but with group awareness).
|
||||
|
||||
**Note**: This phase is identical to original collaborative-plan-with-file Phase 2, with the following additions:
|
||||
- CLI prompt includes execution group context
|
||||
- Task IDs respect group's assigned range
|
||||
- Cross-group dependencies explicitly documented
|
||||
|
||||
### Step 2.1: Domain Planning Loop (Serial)
|
||||
|
||||
For each sub-domain in sequence:
|
||||
1. Execute Gemini/Codex CLI analysis for the current domain
|
||||
2. Include execution group metadata in CLI context
|
||||
3. Parse CLI output into structured plan
|
||||
4. Save detailed plan as `agents/{domain}/plan.json`
|
||||
5. Update plan-note.md group section with task summaries and evidence
|
||||
|
||||
**Planning Guideline**: Wait for each domain's CLI analysis to complete before proceeding.
|
||||
|
||||
### Step 2.2: CLI Planning with Group Context
|
||||
|
||||
Execute synchronous CLI analysis with execution group awareness.
|
||||
|
||||
**CLI Analysis Scope** (extended):
|
||||
- **PURPOSE**: Generate detailed implementation plan for domain within execution group
|
||||
- **CONTEXT**:
|
||||
- Domain description
|
||||
- Execution group ID and metadata
|
||||
- Related codebase files
|
||||
- Prior domain results within same group
|
||||
- Cross-group dependencies (if any)
|
||||
- **TASK**: Analyze domain, identify tasks within group's ID range, define dependencies
|
||||
- **EXPECTED**: JSON output with tasks, summaries, group-aware dependencies, effort estimates
|
||||
- **CONSTRAINTS**:
|
||||
- Use only TASK IDs from assigned range
|
||||
- Document any cross-group dependencies
|
||||
- Flag files that might be modified by other groups
|
||||
|
||||
**Cross-Group Dependency Handling**:
|
||||
- If a task depends on another group's completion, document as `depends_on_group: "EG-XXX"`
|
||||
- Mark files that are likely modified by multiple groups as `cross_group_risk: true`
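
An illustrative task entry in `agents/{domain}/plan.json` using these two markers. Only `depends_on_group` and `cross_group_risk` are defined above; the surrounding field names are assumptions:

```json
{
  "id": "TASK-205",
  "title": "Expose /auth/session endpoint",
  "execution_group": "EG-002",
  "depends_on_group": "EG-001",
  "modification_points": [
    { "file": "src/shared/config.ts", "cross_group_risk": true },
    { "file": "src/api/session.ts", "cross_group_risk": false }
  ]
}
```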
|
||||
|
||||
### Step 2.3: Update plan-note.md Group Sections
|
||||
|
||||
Parse CLI output and update the plan-note.md sections for the current domain's group.
|
||||
|
||||
**Task Summary Format** (extended with group info):
|
||||
- Task header: `### TASK-{ID}: {Title} [{domain}] [Group: {group_id}]`
|
||||
- Fields: 状态, 复杂度, 依赖, 范围, **执行组** (execution_group)
|
||||
- Cross-group dependencies: `依赖执行组: EG-XXX`
|
||||
- Modification points with conflict risk flag
|
||||
- Conflict risk assessment
|
||||
|
||||
**Evidence Format** (same as original)
|
||||
|
||||
**Success Criteria**:
|
||||
- All domains processed sequentially
|
||||
- `agents/{domain}/plan.json` created for each domain
|
||||
- `plan-note.md` updated with group-aware task pools
|
||||
- Cross-group dependencies explicitly documented
|
||||
- Task IDs respect group ranges
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Conflict Detection
|
||||
|
||||
**Objective**: Analyze plan-note.md for conflicts within and across execution groups.
|
||||
|
||||
**Note**: This phase extends original conflict detection with group-aware analysis.
|
||||
|
||||
### Step 3.1: Parse plan-note.md (same as original)
|
||||
|
||||
Extract all tasks from all group sections.
|
||||
|
||||
### Step 3.2: Detect Conflicts (Extended)
|
||||
|
||||
Scan all tasks for four categories of conflicts (added cross-group conflicts).
|
||||
|
||||
**Conflict Types** (extended):
|
||||
|
||||
| Type | Severity | Detection Logic | Resolution |
|
||||
|------|----------|-----------------|------------|
|
||||
| file_conflict | high | Same file:location modified by multiple domains within same group | Coordinate modification order |
|
||||
| cross_group_file_conflict | critical | Same file modified by multiple execution groups | Requires merge coordination or branch rebase strategy |
|
||||
| dependency_cycle | critical | Circular dependencies in task graph (within or across groups) | Remove or reorganize dependencies |
|
||||
| strategy_conflict | medium | Multiple high-risk tasks in same file from different domains/groups | Review approaches and align on strategy |
|
||||
|
||||
**Detection Activities**:
|
||||
1. **File Conflicts (Intra-Group)**: Group modification points by file:location within each group
|
||||
2. **Cross-Group File Conflicts**: Identify files modified by multiple execution groups
|
||||
3. **Dependency Cycles**: Build dependency graph including cross-group dependencies, detect cycles
|
||||
4. **Strategy Conflicts**: Identify files with high-risk tasks from multiple groups
|
||||
|
||||
**Cross-Group Conflict Detection**:
|
||||
- Parse `cross_group_files[]` from execution-groups.json
|
||||
- Scan all tasks for files modified by multiple groups
|
||||
- Flag as critical conflict requiring merge strategy
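
A detection sketch, assuming each group's `agents/{domain}/plan.json` exposes `execution_group` and modification points per task (the exact plan.json schema and the output file name are assumptions):

```bash
SESSION=".workflow/.planning/CPLAN-auth-2025-02-03"

# Build (group, file) pairs from every domain plan, then keep files seen in more than one group
jq -s '
  [ .[] | {group: .execution_group, files: [.tasks[]?.modification_points[]?.file]} ]
  | [ .[] | .group as $g | .files[] | {group: $g, file: .} ]
  | group_by(.file)
  | map(select((map(.group) | unique | length) > 1)
        | {file: .[0].file, conflicting_groups: (map(.group) | unique)})
' "$SESSION"/agents/*/plan.json > "$SESSION/cross-group-files.json"
```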
|
||||
|
||||
### Step 3.3: Update execution-groups.json with Conflicts
|
||||
|
||||
Append detected cross-group conflicts to execution-groups.json.
|
||||
|
||||
**Update Structure**:
|
||||
```json
|
||||
{
|
||||
"groups": [
|
||||
{
|
||||
"group_id": "EG-001",
|
||||
"cross_group_files": [
|
||||
{
|
||||
"file": "src/shared/config.ts",
|
||||
"conflicting_groups": ["EG-002"],
|
||||
"conflict_type": "both modify shared configuration",
|
||||
"resolution": "Coordinate changes or use merge strategy"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Step 3.4: Generate Conflict Artifacts (Extended)
|
||||
|
||||
Write conflict results with group context.
|
||||
|
||||
**conflicts.json Structure** (extended):
|
||||
- `detected_at`: Detection timestamp
|
||||
- `total_conflicts`: Number of conflicts
|
||||
- `intra_group_conflicts[]`: Conflicts within single group
|
||||
- `cross_group_conflicts[]`: ⭐ Conflicts across execution groups
|
||||
- `conflicts[]`: All conflict objects with group IDs
|
||||
|
||||
**plan-note.md Update**: Populate "冲突标记" section with:
|
||||
- Intra-group conflicts (can be resolved during group execution)
|
||||
- Cross-group conflicts (require coordination or merge strategy)
|
||||
|
||||
**Success Criteria**:
|
||||
- All tasks analyzed for intra-group and cross-group conflicts
|
||||
- `conflicts.json` written with group-aware detection results
|
||||
- `execution-groups.json` updated with cross_group_files
|
||||
- `plan-note.md` updated with conflict markers
|
||||
- Cross-group conflicts flagged as critical
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Execution Strategy Generation
|
||||
|
||||
**Objective**: Generate branch strategy and codex execution commands for parallel development.
|
||||
|
||||
### Step 4.1: Generate Branch Strategy
|
||||
|
||||
Create Git branch strategy for multi-branch parallel development.
|
||||
|
||||
**Branch Strategy Decisions**:
|
||||
|
||||
1. **Independent Groups** (no cross-group conflicts):
|
||||
- Each group works on independent branch from main
|
||||
- Branches can be merged independently
|
||||
- Parallel development fully supported
|
||||
|
||||
2. **Dependent Groups** (cross-group dependencies but no file conflicts):
|
||||
- Groups with dependencies must coordinate completion order
|
||||
- Independent branches, but merge order matters
|
||||
- Group A completes → merge to main → Group B starts/continues
|
||||
|
||||
3. **Conflicting Groups** (cross-group file conflicts):
|
||||
- Strategy 1: Sequential - Complete one group, merge, then start next
|
||||
- Strategy 2: Feature branch + rebase - Each group rebases on main periodically
|
||||
- Strategy 3: Shared integration branch - Both groups branch from shared base, coordinate merges
|
||||
|
||||
**Default Strategy**: Independent branches with merge order based on dependencies
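
The branch-creation commands in the next step can be derived directly from execution-groups.json; a generation sketch (jq assumed available):

```bash
EG=".workflow/.planning/CPLAN-auth-2025-02-03/execution-groups.json"

git checkout main && git pull

# One branch per execution group, created from main and pushed with upstream tracking
jq -r '.groups[] | .branch_name' "$EG" | while read -r branch; do
  git checkout main
  git checkout -b "$branch"
  git push -u origin "$branch"
done
```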
|
||||
|
||||
### Step 4.2: Generate execution-strategy.md
|
||||
|
||||
Create execution strategy document with concrete commands.
|
||||
|
||||
**execution-strategy.md Structure**:
|
||||
|
||||
````markdown
|
||||
# Execution Strategy: {session_id}
|
||||
|
||||
## Overview
|
||||
|
||||
- **Total Execution Groups**: {N}
|
||||
- **Group Strategy**: {automatic|balanced|manual}
|
||||
- **Branch Strategy**: {independent|dependent|conflicting}
|
||||
- **Estimated Total Effort**: {sum of all groups}
|
||||
|
||||
## Execution Groups
|
||||
|
||||
### EG-001: Frontend Development
|
||||
- **Codex Instance**: codex-1
|
||||
- **Domains**: frontend, ui-components
|
||||
- **Branch**: feature/cplan-auth-eg-001-frontend
|
||||
- **Dependencies**: None (can start immediately)
|
||||
- **Estimated Effort**: High
|
||||
|
||||
### EG-002: Backend Development
|
||||
- **Codex Instance**: codex-2
|
||||
- **Domains**: backend-api, database
|
||||
- **Branch**: feature/cplan-auth-eg-002-backend
|
||||
- **Dependencies**: None (can start immediately)
|
||||
- **Estimated Effort**: Medium
|
||||
|
||||
### EG-003: Testing
|
||||
- **Codex Instance**: codex-3
|
||||
- **Domains**: testing
|
||||
- **Branch**: feature/cplan-auth-eg-003-testing
|
||||
- **Dependencies**: EG-001, EG-002 (must complete first)
|
||||
- **Estimated Effort**: Low
|
||||
|
||||
## Branch Creation Commands
|
||||
|
||||
```bash
|
||||
# Create branches for all execution groups
|
||||
git checkout main
|
||||
git pull
|
||||
|
||||
# Group 1: Frontend
|
||||
git checkout -b feature/cplan-auth-eg-001-frontend
|
||||
git push -u origin feature/cplan-auth-eg-001-frontend
|
||||
|
||||
# Group 2: Backend
|
||||
git checkout main
|
||||
git checkout -b feature/cplan-auth-eg-002-backend
|
||||
git push -u origin feature/cplan-auth-eg-002-backend
|
||||
|
||||
# Group 3: Testing
|
||||
git checkout main
|
||||
git checkout -b feature/cplan-auth-eg-003-testing
|
||||
git push -u origin feature/cplan-auth-eg-003-testing
|
||||
```
|
||||
|
||||
## Parallel Execution Commands
|
||||
|
||||
Execute these commands in parallel (separate terminal sessions or background):
|
||||
|
||||
```bash
|
||||
# Terminal 1: Execute Group 1 (Frontend)
|
||||
PLAN=".workflow/.planning/CPLAN-auth-2025-02-03/plan-note.md" \
|
||||
GROUP="EG-001" \
|
||||
/workflow:unified-execute-parallel
|
||||
|
||||
# Terminal 2: Execute Group 2 (Backend)
|
||||
PLAN=".workflow/.planning/CPLAN-auth-2025-02-03/plan-note.md" \
|
||||
GROUP="EG-002" \
|
||||
/workflow:unified-execute-parallel
|
||||
|
||||
# Terminal 3: Execute Group 3 (Testing) - starts after EG-001 and EG-002 complete
|
||||
PLAN=".workflow/.planning/CPLAN-auth-2025-02-03/plan-note.md" \
|
||||
GROUP="EG-003" \
|
||||
WAIT_FOR="EG-001,EG-002" \
|
||||
/workflow:unified-execute-parallel
|
||||
```
|
||||
|
||||
## Cross-Group Conflicts
|
||||
|
||||
### Critical Conflicts Detected
|
||||
|
||||
1. **File: src/shared/config.ts**
|
||||
- Modified by: EG-001 (frontend), EG-002 (backend)
|
||||
- Resolution: Coordinate changes or use merge strategy
|
||||
- Recommendation: EG-001 completes first, EG-002 rebases before continuing
|
||||
|
||||
### Resolution Strategy
|
||||
|
||||
- **Option 1**: Sequential execution (EG-001 → merge → EG-002 rebases)
|
||||
- **Option 2**: Manual coordination (both groups align on config changes before execution)
|
||||
- **Option 3**: Split file (refactor into separate configs if feasible)
|
||||
|
||||
## Merge Strategy
|
||||
|
||||
### Independent Groups (EG-001, EG-002)
|
||||
```bash
|
||||
# After EG-001 completes
|
||||
git checkout main
|
||||
git merge feature/cplan-auth-eg-001-frontend
|
||||
git push
|
||||
|
||||
# After EG-002 completes
|
||||
git checkout main
|
||||
git merge feature/cplan-auth-eg-002-backend
|
||||
git push
|
||||
```
|
||||
|
||||
### Dependent Group (EG-003)
|
||||
```bash
|
||||
# After EG-001 and EG-002 merged to main
|
||||
git checkout feature/cplan-auth-eg-003-testing
|
||||
git rebase main # Update with latest changes
|
||||
# Continue execution...
|
||||
|
||||
# After EG-003 completes
|
||||
git checkout main
|
||||
git merge feature/cplan-auth-eg-003-testing
|
||||
git push
|
||||
```
|
||||
|
||||
## Monitoring Progress
|
||||
|
||||
Track execution progress:
|
||||
```bash
|
||||
# Check execution logs for each group
|
||||
cat .workflow/.execution/EXEC-eg-001-*/execution-events.md
|
||||
cat .workflow/.execution/EXEC-eg-002-*/execution-events.md
|
||||
cat .workflow/.execution/EXEC-eg-003-*/execution-events.md
|
||||
```
````
|
||||
|
||||
### Step 4.3: Generate plan.md Summary (Extended)
|
||||
|
||||
Create human-readable summary with execution group information.
|
||||
|
||||
**plan.md Structure** (extended):
|
||||
|
||||
| Section | Content |
|
||||
|---------|---------|
|
||||
| Header | Session ID, task description, creation time |
|
||||
| 需求 (Requirements) | From plan-note.md "需求理解" |
|
||||
| 执行组划分 (Execution Groups) | ⭐ Table of groups with domains, branches, codex assignments, dependencies |
|
||||
| 任务概览 (Task Overview) | All tasks grouped by execution group |
|
||||
| 冲突报告 (Conflict Report) | Intra-group and cross-group conflicts |
|
||||
| 执行策略 (Execution Strategy) | Branch strategy, parallel execution commands, merge order |
|
||||
|
||||
### Step 4.4: Display Completion Summary
|
||||
|
||||
Present session statistics with execution group information.
|
||||
|
||||
**Summary Content**:
|
||||
- Session ID and directory path
|
||||
- Total execution groups created
|
||||
- Total domains planned
|
||||
- Total tasks generated (per group and total)
|
||||
- Conflict status (intra-group and cross-group)
|
||||
- Execution strategy summary
|
||||
- Next step: Use `workflow:unified-execute-parallel` with GROUP parameter
|
||||
|
||||
**Success Criteria**:
|
||||
- `execution-strategy.md` generated with complete branch and execution strategy
|
||||
- `plan.md` includes execution group information
|
||||
- All artifacts present in session directory
|
||||
- User informed of parallel execution approach and commands
|
||||
- Cross-group conflicts clearly documented with resolution strategies
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
| Parameter | Default | Description |
|
||||
|-----------|---------|-------------|
|
||||
| `--max-groups` | 3 | Maximum execution groups to create |
|
||||
| `--group-strategy` | automatic | Group assignment: automatic / balanced / manual |
|
||||
|
||||
**Group Strategy Details**:
|
||||
- **automatic**: Group by dependency relationships (dependent domains in same group)
|
||||
- **balanced**: Distribute evenly by estimated effort
|
||||
- **manual**: Prompt user to assign domains to groups interactively
|
||||
|
||||
---
|
||||
|
||||
## Error Handling & Recovery
|
||||
|
||||
| Situation | Action | Recovery |
|
||||
|-----------|--------|----------|
|
||||
| Too many groups requested | Limit to maxGroups | Merge low-effort domains |
|
||||
| Circular group dependencies | Stop execution, report error | Reorganize domain assignments |
|
||||
| All domains in one group | Warning: No parallelization | Continue or prompt user to split |
|
||||
| Cross-group file conflicts | Flag as critical | Suggest resolution strategies |
|
||||
| Manual grouping timeout | Fall back to automatic | Continue with automatic strategy |
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Before Starting Planning
|
||||
|
||||
1. **Clear Task Description**: Detailed requirements for better grouping
|
||||
2. **Understand Dependencies**: Know which modules depend on each other
|
||||
3. **Choose Group Strategy**:
|
||||
- Use `automatic` for dependency-heavy tasks
|
||||
- Use `balanced` for independent features
|
||||
- Use `manual` for complex architectures you understand well
|
||||
|
||||
### During Planning
|
||||
|
||||
1. **Review Group Assignments**: Check execution-groups.json makes sense
|
||||
2. **Verify Dependencies**: Cross-group dependencies should be minimal
|
||||
3. **Check Branch Names**: Ensure branch names follow project conventions
|
||||
4. **Monitor Conflicts**: Review conflicts.json for cross-group file conflicts
|
||||
|
||||
### After Planning
|
||||
|
||||
1. **Review Execution Strategy**: Read execution-strategy.md carefully
|
||||
2. **Resolve Critical Conflicts**: Address cross-group file conflicts before execution
|
||||
3. **Prepare Environments**: Ensure multiple codex instances can run in parallel
|
||||
4. **Plan Merge Order**: Understand which groups must merge first
|
||||
|
||||
|
||||
---
|
||||
|
||||
## Migration from Original Workflow
|
||||
|
||||
Existing `collaborative-plan-with-file` sessions can be converted to parallel execution:
|
||||
|
||||
1. Read existing `plan-note.md` and `requirement-analysis.json`
|
||||
2. Assign sub-domains to execution groups (run Step 1.2 manually)
|
||||
3. Generate `execution-groups.json` and `execution-strategy.md`
|
||||
4. Use `workflow:unified-execute-parallel` for execution
|
||||
|
||||
---
|
||||
|
||||
**Now execute collaborative-plan-parallel for**: $TASK
|
||||
|
||||
---
|
||||
name: unified-execute-parallel
|
||||
description: Worktree-based parallel execution engine. Execute group tasks in isolated Git worktree. Codex-optimized.
|
||||
argument-hint: "PLAN=\"<path>\" GROUP=\"<group-id>\" [--auto-commit] [--dry-run]"
|
||||
---
|
||||
|
||||
# Codex Unified-Execute-Parallel Workflow
|
||||
|
||||
## Quick Start
|
||||
|
||||
Execute tasks for a specific execution group in isolated Git worktree.
|
||||
|
||||
**Core workflow**: Load Plan → Select Group → Create Worktree → Execute Tasks → Mark Complete
|
||||
|
||||
**Key features**:
|
||||
- **Worktree isolation**: Each group executes in `.ccw/worktree/{group-id}/`
|
||||
- **Parallel execution**: Multiple codex instances can run different groups simultaneously
|
||||
- **Simple focus**: Execute only, merge handled by separate command
|
||||
|
||||
## Overview
|
||||
|
||||
1. **Plan & Group Selection** - Load plan, select execution group
|
||||
2. **Worktree Setup** - Create Git worktree for group's branch
|
||||
3. **Task Execution** - Execute group tasks serially in worktree
|
||||
4. **Mark Complete** - Record completion status
|
||||
|
||||
**Note**: Merging is handled by `/workflow:worktree-merge` command.
|
||||
|
||||
## Output Structure
|
||||
|
||||
```
|
||||
.ccw/worktree/
|
||||
├── {group-id}/ # Git worktree for group
|
||||
│ ├── (full project checkout)
|
||||
│ └── .execution/ # Execution artifacts
|
||||
│ ├── execution.md # Task overview
|
||||
│ └── execution-events.md # Execution log
|
||||
|
||||
.workflow/.execution/
|
||||
└── worktree-status.json # ⭐ All groups completion status
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Session Variables
|
||||
|
||||
- `planPath`: Path to plan file
|
||||
- `groupId`: Selected execution group ID (required)
|
||||
- `worktreePath`: `.ccw/worktree/{groupId}`
|
||||
- `branchName`: From execution-groups.json
|
||||
- `autoCommit`: Boolean for auto-commit mode
|
||||
- `dryRun`: Boolean for dry-run mode
|
||||
|
||||
---
|
||||
|
||||
## Phase 1: Plan & Group Selection
|
||||
|
||||
**Objective**: Load plan, extract group metadata, filter tasks.
|
||||
|
||||
### Step 1.1: Load Plan File
|
||||
|
||||
Read plan file and execution-groups.json from same directory.
|
||||
|
||||
**Required Files**:
|
||||
- `plan-note.md` or `plan.json` - Task definitions
|
||||
- `execution-groups.json` - Group metadata with branch names
|
||||
|
||||
### Step 1.2: Select Group
|
||||
|
||||
Validate GROUP parameter.
|
||||
|
||||
**Validation**:
|
||||
- Group exists in execution-groups.json
|
||||
- Group has assigned tasks
|
||||
- Branch name is defined
|
||||
|
||||
### Step 1.3: Filter Group Tasks
|
||||
|
||||
Extract only tasks belonging to selected group.
|
||||
|
||||
**Task Filtering**:
|
||||
- Match by `execution_group` field, OR
|
||||
- Match by `domain` if domain in group.domains
|
||||
|
||||
Build execution order from filtered tasks.
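
When tasks are available as JSON, the filtering rule above can be expressed with jq; a sketch assuming a `plan.json` with a flat `tasks[]` array (parsing plan-note.md would need markdown-aware handling instead):

```bash
PLAN_DIR=".workflow/.planning/CPLAN-auth-2025-02-03"
GROUP="EG-002"

# Domains belonging to the selected group
domains=$(jq -r --arg g "$GROUP" '.groups[] | select(.group_id == $g) | .domains[]' \
  "$PLAN_DIR/execution-groups.json")

# Keep tasks tagged with the group ID, or whose domain is in the group's domain list
jq --arg g "$GROUP" --argjson ds "$(printf '%s\n' $domains | jq -R . | jq -s .)" '
  [ .tasks[] | select(.execution_group == $g or (.domain as $d | $ds | index($d))) ]
' "$PLAN_DIR/plan.json"
```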
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: Worktree Setup
|
||||
|
||||
**Objective**: Create Git worktree for isolated execution.
|
||||
|
||||
### Step 2.1: Create Worktree Directory
|
||||
|
||||
Ensure worktree base directory exists.
|
||||
|
||||
```bash
|
||||
mkdir -p .ccw/worktree
|
||||
```
|
||||
|
||||
### Step 2.2: Create Git Worktree
|
||||
|
||||
Create worktree with group's branch.
|
||||
|
||||
**If worktree doesn't exist**:
|
||||
```bash
|
||||
# Create branch and worktree
|
||||
git worktree add -b {branch-name} .ccw/worktree/{group-id} main
|
||||
```
|
||||
|
||||
**If worktree exists**:
|
||||
```bash
|
||||
# Verify worktree is on correct branch
|
||||
cd .ccw/worktree/{group-id}
|
||||
git status
|
||||
```
|
||||
|
||||
**Branch Naming**: From execution-groups.json `branch_name` field.
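
A combined sketch of Steps 2.1-2.2 that is safe to re-run (the branch-existence check is an assumption; adjust to project conventions):

```bash
GROUP="EG-002"
BRANCH=$(jq -r --arg g "$GROUP" '.groups[] | select(.group_id == $g) | .branch_name' \
  .workflow/.planning/CPLAN-auth-2025-02-03/execution-groups.json)
WT=".ccw/worktree/$GROUP"

mkdir -p .ccw/worktree

if [ -d "$WT" ]; then
  # Reuse existing worktree, but verify it is on the expected branch
  current=$(git -C "$WT" branch --show-current)
  [ "$current" = "$BRANCH" ] || echo "WARNING: $WT is on '$current', expected '$BRANCH'"
elif git show-ref --verify --quiet "refs/heads/$BRANCH"; then
  git worktree add "$WT" "$BRANCH"          # branch already exists
else
  git worktree add -b "$BRANCH" "$WT" main  # create branch from main
fi
```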
|
||||
|
||||
### Step 2.3: Initialize Execution Folder
|
||||
|
||||
Create execution tracking folder inside worktree.
|
||||
|
||||
```bash
|
||||
mkdir -p .ccw/worktree/{group-id}/.execution
|
||||
```
|
||||
|
||||
Create initial `execution.md`:
|
||||
- Group ID and branch name
|
||||
- Task list for this group
|
||||
- Start timestamp
|
||||
|
||||
**Success Criteria**:
|
||||
- Worktree created at `.ccw/worktree/{group-id}/`
|
||||
- Working on correct branch
|
||||
- Execution folder initialized
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Task Execution
|
||||
|
||||
**Objective**: Execute group tasks serially in worktree.
|
||||
|
||||
### Step 3.1: Change to Worktree Directory
|
||||
|
||||
All execution happens inside worktree.
|
||||
|
||||
```bash
|
||||
cd .ccw/worktree/{group-id}
|
||||
```
|
||||
|
||||
### Step 3.2: Execute Tasks Sequentially
|
||||
|
||||
For each task in group:
|
||||
1. Execute via CLI in worktree directory
|
||||
2. Record result in `.execution/execution-events.md`
|
||||
3. Auto-commit if enabled
|
||||
4. Continue to next task
|
||||
|
||||
**CLI Execution**:
|
||||
- `--cd .ccw/worktree/{group-id}` - Execute in worktree
|
||||
- `--mode write` - Allow file modifications
|
||||
|
||||
### Step 3.3: Record Progress
|
||||
|
||||
Update `.execution/execution-events.md` after each task:
|
||||
- Task ID and title
|
||||
- Timestamp
|
||||
- Status (completed/failed)
|
||||
- Files modified
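
A logging sketch for one task result (the event layout is an assumption; only the fields listed above are required):

```bash
WT=".ccw/worktree/EG-002"
TASK_ID="TASK-205"
TASK_TITLE="Expose /auth/session endpoint"
STATUS="completed"

{
  echo "## ${TASK_ID}: ${TASK_TITLE}"
  echo "- Timestamp: $(date -u +%FT%TZ)"
  echo "- Status: ${STATUS}"
  echo "- Files modified:"
  git -C "$WT" diff --name-only HEAD | sed 's/^/  - /'
  echo
} >> "$WT/.execution/execution-events.md"
```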
|
||||
|
||||
### Step 3.4: Auto-Commit (if enabled)
|
||||
|
||||
Commit task changes in worktree.
|
||||
|
||||
```bash
|
||||
cd .ccw/worktree/{group-id}
|
||||
git add .
|
||||
git commit -m "feat: {task-title} [TASK-{id}]"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Mark Complete
|
||||
|
||||
**Objective**: Record group completion status.
|
||||
|
||||
### Step 4.1: Update worktree-status.json
|
||||
|
||||
Write/update status file in main project.
|
||||
|
||||
**worktree-status.json** (in `.workflow/.execution/`):
|
||||
```json
|
||||
{
|
||||
"plan_session": "CPLAN-auth-2025-02-03",
|
||||
"groups": {
|
||||
"EG-001": {
|
||||
"worktree_path": ".ccw/worktree/EG-001",
|
||||
"branch": "feature/cplan-auth-eg-001-frontend",
|
||||
"status": "completed",
|
||||
"tasks_total": 15,
|
||||
"tasks_completed": 15,
|
||||
"tasks_failed": 0,
|
||||
"completed_at": "2025-02-03T14:30:00Z"
|
||||
},
|
||||
"EG-002": {
|
||||
"status": "in_progress",
|
||||
"tasks_completed": 8,
|
||||
"tasks_total": 12
|
||||
}
|
||||
}
|
||||
}
|
||||
```
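
A status-update sketch using jq, writing through a temp file; the field set matches the example above:

```bash
STATUS_FILE=".workflow/.execution/worktree-status.json"
GROUP="EG-001"

mkdir -p .workflow/.execution
[ -f "$STATUS_FILE" ] || echo '{"plan_session": "CPLAN-auth-2025-02-03", "groups": {}}' > "$STATUS_FILE"

jq --arg g "$GROUP" \
   --arg branch "feature/cplan-auth-eg-001-frontend" \
   --arg ts "$(date -u +%FT%TZ)" \
   '.groups[$g] = {
      worktree_path: (".ccw/worktree/" + $g),
      branch: $branch,
      status: "completed",
      tasks_total: 15,
      tasks_completed: 15,
      tasks_failed: 0,
      completed_at: $ts
    }' "$STATUS_FILE" > "${STATUS_FILE}.tmp" && mv "${STATUS_FILE}.tmp" "$STATUS_FILE"
```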
|
||||
|
||||
### Step 4.2: Display Summary
|
||||
|
||||
Report group execution results:
|
||||
- Group ID and worktree path
|
||||
- Tasks completed/failed
|
||||
- Next step: Use `/workflow:worktree-merge` to merge
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
| Parameter | Default | Description |
|
||||
|-----------|---------|-------------|
|
||||
| `PLAN` | Auto-detect | Path to plan file |
|
||||
| `GROUP` | Required | Execution group ID (e.g., EG-001) |
|
||||
| `--auto-commit` | false | Commit after each task |
|
||||
| `--dry-run` | false | Simulate without changes |
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Situation | Action |
|
||||
|-----------|--------|
|
||||
| GROUP missing | Error: Require GROUP parameter |
|
||||
| Group not found | Error: Check execution-groups.json |
|
||||
| Worktree exists with wrong branch | Warning: Clean or remove existing worktree |
|
||||
| Task fails | Record failure, continue to next |
|
||||
|
||||
---
|
||||
|
||||
## Parallel Execution Example
|
||||
|
||||
**3 terminals, 3 groups**:
|
||||
|
||||
```bash
|
||||
# Terminal 1
|
||||
PLAN=".workflow/.planning/CPLAN-auth/plan-note.md" GROUP="EG-001" --auto-commit
|
||||
|
||||
# Terminal 2
|
||||
PLAN=".workflow/.planning/CPLAN-auth/plan-note.md" GROUP="EG-002" --auto-commit
|
||||
|
||||
# Terminal 3
|
||||
PLAN=".workflow/.planning/CPLAN-auth/plan-note.md" GROUP="EG-003" --auto-commit
|
||||
```
|
||||
|
||||
Each executes in isolated worktree:
|
||||
- `.ccw/worktree/EG-001/`
|
||||
- `.ccw/worktree/EG-002/`
|
||||
- `.ccw/worktree/EG-003/`
|
||||
|
||||
After all complete, use `/workflow:worktree-merge` to merge.
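
The WAIT_FOR behaviour referenced in the planning workflow is not specified here; one possible polling sketch against worktree-status.json (an assumption, not the command's documented mechanism):

```bash
STATUS_FILE=".workflow/.execution/worktree-status.json"
WAIT_FOR="EG-001,EG-002"

# Block until every listed group reports status == "completed"
for g in ${WAIT_FOR//,/ }; do
  until jq -e --arg g "$g" '.groups[$g].status == "completed"' "$STATUS_FILE" >/dev/null 2>&1; do
    echo "Waiting for $g ..."
    sleep 60
  done
done

# Then start the dependent group, e.g.:
# PLAN=".workflow/.planning/CPLAN-auth/plan-note.md" GROUP="EG-003" /workflow:unified-execute-parallel --auto-commit
```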
|
||||
|
||||
---
|
||||
|
||||
**Now execute unified-execute-parallel for**: $PLAN with GROUP=$GROUP
|
||||
|
||||
|
||||
Orchestrates autonomous workflow execution through systematic task discovery, agent coordination, and progress tracking. **Executes entire workflow without user interruption** (except initial session selection if multiple active sessions exist), providing complete context to agents and ensuring proper flow control execution with comprehensive TodoWrite tracking.
|
||||
|
||||
**Also available as**: `workflow:plan` Phase 4 — when running `workflow:plan` with `--yes` or selecting "Start Execution" after Phase 3, the core execution logic runs inline without needing a separate `workflow:execute` call. Use this standalone command for `--resume-session` scenarios or when invoking execution independently.
|
||||
|
||||
**Resume Mode**: When called with `--resume-session` flag, skips discovery phase and directly enters TodoWrite generation and agent execution for the specified session.
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
|
||||
---
|
||||
name: workflow-plan
|
||||
|
||||
description: 4-phase planning+execution workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs, optional Phase 4 execution. Triggers on "workflow:plan".
|
||||
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
|
||||
---
|
||||
|
||||
# Workflow Plan
|
||||
|
||||
|
||||
4-phase workflow that orchestrates session discovery, context gathering (with inline conflict resolution), task generation, and conditional execution to produce and implement plans (IMPL_PLAN.md, task JSONs, TODO_LIST.md).
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
```
|
||||
┌──────────────────────────────────────────────────────────────────────┐
│              Workflow Plan Orchestrator (SKILL.md)                   │
│  → Pure coordinator: Execute phases, parse outputs, pass context     │
└───────────────┬──────────────────────────────────────────────────────┘
                │
    ┌───────────┼─────────────────────┬───────────────┐
    ↓           ↓                     ↓               ↓
┌─────────┐ ┌──────────────────┐ ┌─────────┐ ┌──────────┐
│ Phase 1 │ │     Phase 2      │ │ Phase 3 │ │ Phase 4  │
│ Session │ │  Context Gather  │ │  Task   │ │ Execute  │
│Discovery│ │& Conflict Resolve│ │Generate │ │(optional)│
└─────────┘ └──────────────────┘ └─────────┘ └──────────┘
     ↓               ↓                ↓            ↓
 sessionId      contextPath      IMPL_PLAN.md  summaries
              conflict_risk      task JSONs    completed
                resolved         TODO_LIST.md  tasks
|
||||
```
|
||||
|
||||
## Key Design Principles
|
||||
|
||||
2. **Auto-Continue**: All phases run autonomously without user intervention between phases
|
||||
3. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
|
||||
4. **Progressive Phase Loading**: Phase docs are read on-demand, not all at once
|
||||
|
||||
5. **Inline Conflict Resolution**: Conflicts detected and resolved within Phase 2 (not a separate phase)
|
||||
6. **Role Path Loading**: Subagent roles loaded via path reference in MANDATORY FIRST STEPS
|
||||
|
||||
## Auto Mode
|
||||
|
||||
|
||||
When `--yes` or `-y`: Auto-continue all phases (skip confirmations), use recommended conflict resolutions, auto-execute Phase 4.
|
||||
|
||||
When `--with-commit`: Auto-commit after each task completion in Phase 4.
|
||||
|
||||
## Execution Flow
|
||||
|
||||
Phase 1: Session Discovery
|
||||
└─ Ref: phases/01-session-discovery.md
|
||||
└─ Output: sessionId (WFS-xxx)
|
||||
|
||||
Phase 2: Context Gathering & Conflict Resolution
  └─ Ref: phases/02-context-gathering.md
     ├─ Step 1: Context-Package Detection
     ├─ Step 2: Complexity Assessment & Parallel Explore (conflict-aware)
     ├─ Step 3: Inline Conflict Resolution (conditional, if significant conflicts)
     ├─ Step 4: Invoke Context-Search Agent (with exploration + conflict results)
     ├─ Step 5: Output Verification
     └─ Output: contextPath + conflict_risk + optional conflict-resolution.json

Phase 3: Task Generation
  └─ Ref: phases/03-task-generation.md
     └─ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md

User Decision (or --yes auto):
  ├─ "Start Execution" → Phase 4
  ├─ "Verify Plan Quality" → workflow:plan-verify
  └─ "Review Status Only" → workflow:status

Phase 4: Execution (Conditional)
  └─ Ref: phases/04-execution.md
     └─ Output: completed tasks, summaries, session completion
|
||||
```
|
||||
|
||||
**Phase Reference Documents** (read on-demand when phase executes):
|
||||
|
||||
| Phase | Document | Purpose |
|
||||
|-------|----------|---------|
|
||||
| 1 | [phases/01-session-discovery.md](phases/01-session-discovery.md) | Session creation/discovery with intelligent session management |
|
||||
|
||||
| 2 | [phases/02-context-gathering.md](phases/02-context-gathering.md) | Context collection + inline conflict resolution |
|
||||
| 3 | [phases/03-task-generation.md](phases/03-task-generation.md) | Implementation plan and task JSON generation |
|
||||
| 4 | [phases/04-execution.md](phases/04-execution.md) | Task execution (conditional, triggered by user or --yes) |
|
||||
|
||||
## Core Rules
|
||||
|
||||
Phase 1: session:start --auto "structured-description"
|
||||
↓
|
||||
Phase 2: context-gather --session sessionId "structured-description"
|
||||
↓ Input: sessionId + structured description
|
||||
↓ Step 2: Parallel exploration (with conflict detection)
↓ Step 3: Inline conflict resolution (if significant conflicts detected)
↓ Step 4: Context-search-agent packaging
↓ Output: contextPath (context-package.json with prioritized_context)
↓         + optional conflict-resolution.json
↓ Update: planning-notes.md (Context Findings + Conflict Decisions + Consolidated Constraints)
↓
|
||||
Phase 3: task-generate-agent --session sessionId
|
||||
↓ Input: sessionId + planning-notes.md + context-package.json + brainstorm artifacts
|
||||
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
|
||||
↓
|
||||
|
||||
User Decision: "Start Execution" / --yes auto
|
||||
↓
|
||||
Phase 4: Execute tasks (conditional)
|
||||
↓ Input: sessionId + IMPL_PLAN.md + TODO_LIST.md + .task/*.json
|
||||
↓ Loop: lazy load → spawn_agent → wait → close_agent → commit (optional)
|
||||
↓ Output: completed tasks, summaries, session completion
|
||||
```
|
||||
|
||||
**Session Memory Flow**: Each phase receives session ID, which provides access to:
|
||||
- Previous task summaries
|
||||
- Existing context and analysis
|
||||
|
||||
- Brainstorming artifacts (potentially modified by Phase 2 conflict resolution)
|
||||
- Session-specific configuration
|
||||
|
||||
## TodoWrite Pattern
|
||||
|
||||
|
||||
1. **Task Attachment** (when phase executed):
|
||||
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
||||
|
||||
- **Phase 2**: Multiple sub-tasks attached (e.g., explore, conflict resolution, context packaging)
|
||||
- **Phase 3**: Single agent task attached
|
||||
- First attached task marked as `in_progress`, others as `pending`
|
||||
- Orchestrator **executes** these attached tasks sequentially
|
||||
|
||||
2. **Task Collapse** (after sub-tasks complete):
|
||||
|
||||
- **Applies to Phase 2**: Remove detailed sub-tasks from TodoWrite
|
||||
- **Collapse** to high-level phase summary
|
||||
|
||||
- **Phase 3**: No collapse needed (single task, just mark completed)
|
||||
- Maintains clean orchestrator-level view
|
||||
|
||||
3. **Continuous Execution**:
|
||||
|
||||
```json
|
||||
[
|
||||
{"content": "Phase 1: Session Discovery", "status": "completed"},
|
||||
{"content": "Phase 2: Context Gathering", "status": "in_progress"},
|
||||
{"content": " → Analyze codebase structure", "status": "in_progress"},
|
||||
{"content": " → Identify integration points", "status": "pending"},
|
||||
{"content": " → Generate context package", "status": "pending"},
|
||||
{"content": "Phase 4: Task Generation", "status": "pending"}
|
||||
{"content": "Phase 2: Context Gathering & Conflict Resolution", "status": "in_progress"},
|
||||
{"content": " → Parallel exploration (conflict-aware)", "status": "in_progress"},
|
||||
{"content": " → Inline conflict resolution (if needed)", "status": "pending"},
|
||||
{"content": " → Context-search-agent packaging", "status": "pending"},
|
||||
{"content": "Phase 3: Task Generation", "status": "pending"}
|
||||
]
|
||||
```
|
||||
|
||||
|
||||
```json
|
||||
[
|
||||
{"content": "Phase 1: Session Discovery", "status": "completed"},
|
||||
{"content": "Phase 2: Context Gathering", "status": "completed"},
|
||||
{"content": "Phase 4: Task Generation", "status": "pending"}
|
||||
{"content": "Phase 2: Context Gathering & Conflict Resolution", "status": "completed"},
|
||||
{"content": "Phase 3: Task Generation", "status": "pending"}
|
||||
]
|
||||
```
|
||||
|
||||
|
||||
### Phase 4 (Tasks Attached, conditional):
|
||||
```json
|
||||
[
|
||||
{"content": "Phase 1: Session Discovery", "status": "completed"},
|
||||
{"content": "Phase 2: Context Gathering", "status": "completed"},
|
||||
{"content": "Phase 3: Conflict Resolution", "status": "in_progress"},
|
||||
{"content": " → Detect conflicts with CLI analysis", "status": "in_progress"},
|
||||
{"content": " → Present conflicts to user", "status": "pending"},
|
||||
{"content": " → Apply resolution strategies", "status": "pending"},
|
||||
{"content": "Phase 4: Task Generation", "status": "pending"}
|
||||
{"content": "Phase 2: Context Gathering & Conflict Resolution", "status": "completed"},
|
||||
{"content": "Phase 3: Task Generation", "status": "completed"},
|
||||
{"content": "Phase 4: Execution", "status": "in_progress"},
|
||||
{"content": " → IMPL-1: [task title]", "status": "in_progress"},
|
||||
{"content": " → IMPL-2: [task title]", "status": "pending"},
|
||||
{"content": " → IMPL-3: [task title]", "status": "pending"}
|
||||
]
|
||||
```
|
||||
|
||||
After Phase 1, create `planning-notes.md` with this structure:
|
||||
## Context Findings (Phase 2)
|
||||
(To be filled by context-gather)
|
||||
|
||||
|
||||
## Conflict Decisions (Phase 2)
|
||||
(To be filled if conflicts detected)
|
||||
|
||||
|
||||
## Consolidated Constraints (Phase 3 Input)
|
||||
1. ${userConstraints}
|
||||
|
||||
---
|
||||
|
||||
|
||||
## Task Generation (Phase 3)
|
||||
(To be filled by action-planning-agent)
|
||||
|
||||
## N+1 Context
|
||||
|
||||
|
||||
Read context-package to extract key findings, update planning-notes.md:
|
||||
- `Context Findings (Phase 2)`: CRITICAL_FILES, ARCHITECTURE, CONFLICT_RISK, CONSTRAINTS
|
||||
- `Consolidated Constraints`: Append Phase 2 constraints
|
||||
|
||||
### After Phase 3
|
||||
|
||||
If executed, read conflict-resolution.json, update planning-notes.md:
|
||||
|
||||
- `Conflict Decisions (Phase 2)`: RESOLVED, CUSTOM_HANDLING, CONSTRAINTS (if conflicts were resolved inline)
|
||||
- `Consolidated Constraints`: Append Phase 2 constraints (context + conflict)
|
||||
|
||||
### Memory State Check
|
||||
|
||||
|
||||
After Phase 2, evaluate context window usage. If memory usage is high (>120K tokens):
|
||||
```javascript
|
||||
// Codex: Use compact command if available
|
||||
codex compact
|
||||
```
|
||||
|
||||
|
||||
## Phase 3 User Decision
|
||||
|
||||
|
||||
After Phase 3 completes, present user with action choices.
|
||||
|
||||
**Auto Mode** (`--yes`): Skip user decision, directly enter Phase 4 (Execution).
|
||||
|
||||
```javascript
|
||||
|
||||
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
|
||||
|
||||
if (autoYes) {
|
||||
// Auto mode: Skip decision, proceed to Phase 4
|
||||
console.log(`[--yes] Auto-continuing to Phase 4: Execution`)
|
||||
// Read phases/04-execution.md and execute Phase 4
|
||||
} else {
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Planning complete. What would you like to do next?",
|
||||
header: "Next Action",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{
|
||||
label: "Verify Plan Quality (Recommended)",
|
||||
description: "Run quality verification to catch issues before execution."
|
||||
},
|
||||
{
|
||||
label: "Start Execution",
|
||||
description: "Begin implementing tasks immediately (Phase 4)."
|
||||
},
|
||||
{
|
||||
label: "Review Status Only",
|
||||
description: "View task breakdown and session status without taking further action."
|
||||
}
|
||||
]
|
||||
}]
|
||||
});
|
||||
}
|
||||
|
||||
// Execute based on user choice
|
||||
// "Verify Plan Quality" → workflow:plan-verify --session sessionId
|
||||
// "Start Execution" → workflow:execute --session sessionId
|
||||
// "Start Execution" → Read phases/04-execution.md, execute Phase 4 inline
|
||||
// "Review Status Only" → workflow:status --session sessionId
|
||||
```
|
||||
|
||||
|
||||
## Coordinator Checklist
|
||||
|
||||
- **Pre-Phase**: Convert user input to structured format (GOAL/SCOPE/CONTEXT)
|
||||
|
||||
- Parse flags: `--yes`, `--with-commit`
|
||||
- Initialize TodoWrite before any command
|
||||
- Execute Phase 1 immediately with structured description
|
||||
- Parse session ID from Phase 1 output, store in memory
|
||||
- Pass session ID and structured description to Phase 2 command
|
||||
- Parse context path from Phase 2 output, store in memory
|
||||
|
||||
- **Phase 2 handles conflict resolution inline** (no separate Phase 3 decision needed)
|
||||
- **Build Phase 3 command**: workflow:tools:task-generate-agent --session [sessionId]
|
||||
- Verify all Phase 3 outputs
|
||||
- **Phase 3 User Decision**: Present choices or auto-continue if `--yes`
|
||||
- **Phase 4 (conditional)**: If user selects "Start Execution" or `--yes`, read phases/04-execution.md and execute
|
||||
- Pass `--with-commit` flag to Phase 4 if present
|
||||
- Update TodoWrite after each phase
|
||||
- After each phase, automatically continue to next phase based on TodoList status
|
||||
- **Always close_agent after wait completes**
|
||||
@@ -411,4 +426,4 @@ AskUserQuestion({
|
||||
**Follow-up Commands**:
|
||||
- `workflow:plan-verify` - Recommended: Verify plan quality before execution
|
||||
- `workflow:status` - Review task breakdown and current progress
|
||||
- `workflow:execute` - Begin implementation of generated tasks
|
||||
- `workflow:execute` - Begin implementation (also available via Phase 4 inline execution)
|
||||
@@ -95,15 +95,15 @@ Write(planningNotesPath, `# Planning Notes
## Context Findings (Phase 2)
(To be filled by context-gather)

## Conflict Decisions (Phase 3)
## Conflict Decisions (Phase 2)
(To be filled if conflicts detected)

## Consolidated Constraints (Phase 4 Input)
## Consolidated Constraints (Phase 3 Input)
1. ${userConstraints}

---

## Task Generation (Phase 4)
## Task Generation (Phase 3)
(To be filled by action-planning-agent)

## N+1 Context

@@ -0,0 +1,928 @@
|
||||
# Phase 2: Context Gathering & Conflict Resolution
|
||||
|
||||
Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON. When conflicts are detected, resolve them inline before packaging.
|
||||
|
||||
## Objective
|
||||
|
||||
- Check for existing valid context-package before executing
|
||||
- Assess task complexity and launch parallel exploration agents (with conflict detection)
|
||||
- Detect and resolve conflicts inline (conditional, when conflict indicators found)
|
||||
- Invoke context-search-agent to analyze codebase
|
||||
- Generate standardized `context-package.json` with prioritized context
|
||||
|
||||
## Core Philosophy
|
||||
|
||||
- **Agent Delegation**: Delegate all discovery to `context-search-agent` for autonomous execution
|
||||
- **Detection-First**: Check for existing context-package before executing
|
||||
- **Conflict-Aware Exploration**: Explore agents detect conflict indicators during exploration
|
||||
- **Inline Resolution**: Conflicts resolved as sub-step within this phase, not a separate phase
|
||||
- **Conditional Trigger**: Conflict resolution only executes when exploration results contain conflict indicators
|
||||
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
|
||||
- **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
|
||||
- **Explicit Lifecycle**: Manage subagent creation, waiting, and cleanup
|
||||
|
||||
## Execution Process
|
||||
|
||||
```
|
||||
Input Parsing:
|
||||
├─ Parse flags: --session
|
||||
└─ Parse: task_description (required)
|
||||
|
||||
Step 1: Context-Package Detection
|
||||
└─ Decision (existing package):
|
||||
├─ Valid package exists → Return existing (skip execution)
|
||||
└─ No valid package → Continue to Step 2
|
||||
|
||||
Step 2: Complexity Assessment & Parallel Explore (conflict-aware)
|
||||
├─ Analyze task_description → classify Low/Medium/High
|
||||
├─ Select exploration angles (1-4 based on complexity)
|
||||
├─ Launch N cli-explore-agents in parallel (spawn_agent)
|
||||
│ └─ Each outputs: exploration-{angle}.json (includes conflict_indicators)
|
||||
├─ Wait for all agents (batch wait)
|
||||
├─ Close all agents
|
||||
└─ Generate explorations-manifest.json
|
||||
|
||||
Step 3: Inline Conflict Resolution (conditional)
|
||||
├─ 3.1 Aggregate conflict_indicators from all explorations
|
||||
├─ 3.2 Decision: No significant conflicts → Skip to Step 4
|
||||
├─ 3.3 Spawn conflict-analysis agent (cli-execution-agent)
|
||||
│ └─ Gemini/Qwen CLI analysis → conflict strategies
|
||||
├─ 3.4 Iterative user clarification (send_input loop, max 10 rounds)
|
||||
│ ├─ Display conflict + strategy ONE BY ONE
|
||||
│ ├─ AskUserQuestion for user selection
|
||||
│ └─ send_input → agent re-analysis → confirm uniqueness
|
||||
├─ 3.5 Generate conflict-resolution.json
|
||||
└─ 3.6 Close conflict agent
|
||||
|
||||
Step 4: Invoke Context-Search Agent (enhanced)
|
||||
├─ Receives exploration results + resolved conflicts (if any)
|
||||
└─ Generates context-package.json with exploration_results + conflict status
|
||||
|
||||
Step 5: Output Verification (enhanced)
|
||||
└─ Verify context-package.json contains exploration_results + conflict resolution
|
||||
```
|
||||
|
||||
## Execution Flow
|
||||
|
||||
### Step 1: Context-Package Detection
|
||||
|
||||
**Execute First** - Check if valid package already exists:
|
||||
|
||||
```javascript
|
||||
const contextPackagePath = `.workflow/active/${session_id}/.process/context-package.json`;
|
||||
|
||||
if (file_exists(contextPackagePath)) {
|
||||
const existing = Read(contextPackagePath);
|
||||
|
||||
// Validate package belongs to current session
|
||||
if (existing?.metadata?.session_id === session_id) {
|
||||
console.log("Valid context-package found for session:", session_id);
|
||||
console.log("Stats:", existing.statistics);
|
||||
console.log("Conflict Risk:", existing.conflict_detection.risk_level);
|
||||
return existing; // Skip execution, return existing
|
||||
} else {
|
||||
console.warn("Invalid session_id in existing package, re-generating...");
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Step 2: Complexity Assessment & Parallel Explore
|
||||
|
||||
**Only execute if Step 1 finds no valid package**
|
||||
|
||||
```javascript
|
||||
// 2.1 Complexity Assessment
|
||||
function analyzeTaskComplexity(taskDescription) {
|
||||
const text = taskDescription.toLowerCase();
|
||||
if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
|
||||
if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
|
||||
return 'Low';
|
||||
}
|
||||
|
||||
const ANGLE_PRESETS = {
|
||||
architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
|
||||
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
|
||||
performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
|
||||
bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
|
||||
feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
|
||||
refactor: ['architecture', 'patterns', 'dependencies', 'testing']
|
||||
};
|
||||
|
||||
function selectAngles(taskDescription, complexity) {
|
||||
const text = taskDescription.toLowerCase();
|
||||
let preset = 'feature';
|
||||
if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
|
||||
else if (/security|auth|permission/.test(text)) preset = 'security';
|
||||
else if (/performance|slow|optimi/.test(text)) preset = 'performance';
|
||||
else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
|
||||
|
||||
const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
|
||||
return ANGLE_PRESETS[preset].slice(0, count);
|
||||
}
|
||||
|
||||
const complexity = analyzeTaskComplexity(task_description);
|
||||
const selectedAngles = selectAngles(task_description, complexity);
|
||||
const sessionFolder = `.workflow/active/${session_id}/.process`;
|
||||
|
||||
// 2.2 Launch Parallel Explore Agents (with conflict detection)
|
||||
const explorationAgents = [];
|
||||
|
||||
// Spawn all agents in parallel
|
||||
selectedAngles.forEach((angle, index) => {
|
||||
const agentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
## Task Objective
|
||||
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
|
||||
|
||||
**CONFLICT DETECTION**: Additionally detect conflict indicators including module overlaps, breaking changes, incompatible patterns, and scenario boundary ambiguities.
|
||||
|
||||
## Assigned Context
|
||||
- **Exploration Angle**: ${angle}
|
||||
- **Task Description**: ${task_description}
|
||||
- **Session ID**: ${session_id}
|
||||
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
|
||||
- **Output File**: ${sessionFolder}/exploration-${angle}.json
|
||||
|
||||
## MANDATORY FIRST STEPS (Execute by Agent)
|
||||
**You (cli-explore-agent) MUST execute these steps in order:**
|
||||
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
|
||||
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
|
||||
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
|
||||
|
||||
## Exploration Strategy (${angle} focus)
|
||||
|
||||
**Step 1: Structural Scan** (Bash)
|
||||
- get_modules_by_depth.sh → identify modules related to ${angle}
|
||||
- find/rg → locate files relevant to ${angle} aspect
|
||||
- Analyze imports/dependencies from ${angle} perspective
|
||||
|
||||
**Step 2: Semantic Analysis** (Gemini CLI)
|
||||
- How does existing code handle ${angle} concerns?
|
||||
- What patterns are used for ${angle}?
|
||||
- Where would new code integrate from ${angle} viewpoint?
|
||||
- **Detect conflict indicators**: module overlaps, breaking changes, incompatible patterns
|
||||
|
||||
**Step 3: Write Output**
|
||||
- Consolidate ${angle} findings into JSON
|
||||
- Identify ${angle}-specific clarification needs
|
||||
- **Include conflict_indicators array** with detected conflicts
|
||||
|
||||
## Expected Output
|
||||
|
||||
**File**: ${sessionFolder}/exploration-${angle}.json
|
||||
|
||||
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
|
||||
|
||||
**Required Fields** (all ${angle} focused):
|
||||
- project_structure: Modules/architecture relevant to ${angle}
|
||||
- relevant_files: Files affected from ${angle} perspective
|
||||
**IMPORTANT**: Use object format with relevance scores for synthesis:
|
||||
\`[{path: "src/file.ts", relevance: 0.85, rationale: "Core ${angle} logic"}]\`
|
||||
Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
|
||||
- patterns: ${angle}-related patterns to follow
|
||||
- dependencies: Dependencies relevant to ${angle}
|
||||
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
|
||||
- constraints: ${angle}-specific limitations/conventions
|
||||
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
|
||||
- **conflict_indicators**: Array of detected conflicts from ${angle} perspective
|
||||
\`[{type: "ModuleOverlap|BreakingChange|PatternConflict", severity: "high|medium|low", description: "...", affected_files: [...]}]\`
|
||||
- _metadata.exploration_angle: "${angle}"
|
||||
|
||||
## Success Criteria
|
||||
- [ ] Schema obtained via cat explore-json-schema.json
|
||||
- [ ] get_modules_by_depth.sh executed
|
||||
- [ ] At least 3 relevant files identified with ${angle} rationale
|
||||
- [ ] Patterns are actionable (code examples, not generic advice)
|
||||
- [ ] Integration points include file:line locations
|
||||
- [ ] Constraints are project-specific to ${angle}
|
||||
- [ ] conflict_indicators populated (empty array if none detected)
|
||||
- [ ] JSON output follows schema exactly
|
||||
- [ ] clarification_needs includes options + recommended
|
||||
|
||||
## Output
|
||||
Write: ${sessionFolder}/exploration-${angle}.json
|
||||
Return: 2-3 sentence summary of ${angle} findings + conflict indicators count
|
||||
`
|
||||
});
|
||||
|
||||
explorationAgents.push(agentId);
|
||||
});
|
||||
|
||||
// 2.3 Batch wait for all exploration agents
|
||||
const explorationResults = wait({
|
||||
ids: explorationAgents,
|
||||
timeout_ms: 600000 // 10 minutes
|
||||
});
|
||||
|
||||
// Check for timeouts
|
||||
if (explorationResults.timed_out) {
|
||||
console.log('Some exploration agents timed out - continuing with completed results');
|
||||
}
|
||||
|
||||
// 2.4 Close all exploration agents
|
||||
explorationAgents.forEach(agentId => {
|
||||
close_agent({ id: agentId });
|
||||
});
|
||||
|
||||
// 2.5 Generate Manifest after all complete
|
||||
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`).split('\n').filter(f => f.trim());
|
||||
const explorationManifest = {
|
||||
session_id,
|
||||
task_description,
|
||||
timestamp: new Date().toISOString(),
|
||||
complexity,
|
||||
exploration_count: selectedAngles.length,
|
||||
angles_explored: selectedAngles,
|
||||
explorations: explorationFiles.map(file => {
|
||||
const data = JSON.parse(Read(file));
|
||||
return { angle: data._metadata.exploration_angle, file: file.split('/').pop(), path: file, index: data._metadata.exploration_index };
|
||||
})
|
||||
};
|
||||
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2));
|
||||
```
|
||||
|
||||
### Step 3: Inline Conflict Resolution
|
||||
|
||||
**Conditional execution** - Only runs when exploration results contain significant conflict indicators.
|
||||
|
||||
#### 3.1 Aggregate Conflict Indicators
|
||||
|
||||
```javascript
|
||||
// Aggregate conflict_indicators from all explorations
|
||||
const allConflictIndicators = [];
|
||||
explorationFiles.forEach(file => {
|
||||
const data = JSON.parse(Read(file));
|
||||
if (data.conflict_indicators?.length > 0) {
|
||||
allConflictIndicators.push(...data.conflict_indicators.map(ci => ({
|
||||
...ci,
|
||||
source_angle: data._metadata.exploration_angle
|
||||
})));
|
||||
}
|
||||
});
|
||||
|
||||
const hasSignificantConflicts = allConflictIndicators.some(ci => ci.severity === 'high') ||
|
||||
allConflictIndicators.filter(ci => ci.severity === 'medium').length >= 2;
|
||||
```
|
||||
|
||||
#### 3.2 Decision Gate
|
||||
|
||||
```javascript
|
||||
if (!hasSignificantConflicts) {
|
||||
console.log(`No significant conflicts detected (${allConflictIndicators.length} low indicators). Skipping conflict resolution.`);
|
||||
// Skip to Step 4
|
||||
} else {
|
||||
console.log(`Significant conflicts detected: ${allConflictIndicators.length} indicators. Launching conflict analysis...`);
|
||||
// Continue to 3.3
|
||||
}
|
||||
```
|
||||
|
||||
#### 3.3 Spawn Conflict-Analysis Agent
|
||||
|
||||
```javascript
|
||||
const conflictAgentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/cli-execution-agent.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
## Context
|
||||
- Session: ${session_id}
|
||||
- Conflict Indicators: ${JSON.stringify(allConflictIndicators)}
|
||||
- Files: ${existing_files_list}
|
||||
|
||||
## Exploration Context (from exploration results)
|
||||
- Exploration Count: ${explorationManifest.exploration_count}
|
||||
- Angles Analyzed: ${JSON.stringify(explorationManifest.angles_explored)}
|
||||
- Pre-identified Conflict Indicators: ${JSON.stringify(allConflictIndicators)}
|
||||
- Critical Files: ${JSON.stringify(explorationFiles.flatMap(f => JSON.parse(Read(f)).relevant_files?.filter(rf => rf.relevance >= 0.7).map(rf => rf.path) || []))}
|
||||
|
||||
## Analysis Steps
|
||||
|
||||
### 0. Load Output Schema (MANDATORY)
|
||||
Execute: cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json
|
||||
|
||||
### 1. Load Context
|
||||
- Read existing files from conflict indicators
|
||||
- Load exploration results and use aggregated insights for enhanced analysis
|
||||
|
||||
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
|
||||
|
||||
Primary (Gemini):
|
||||
ccw cli -p "
|
||||
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
|
||||
TASK:
|
||||
• **Review pre-identified conflict_indicators from exploration results**
|
||||
• Compare architectures (use exploration key_patterns)
|
||||
• Identify breaking API changes
|
||||
• Detect data model incompatibilities
|
||||
• Assess dependency conflicts
|
||||
• **Analyze module scenario uniqueness**
|
||||
- Cross-validate with exploration critical_files
|
||||
- Generate clarification questions for boundary definition
|
||||
MODE: analysis
|
||||
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/${session_id}/**/*
|
||||
EXPECTED: Conflict list with severity ratings, including:
|
||||
- Validation of exploration conflict_indicators
|
||||
- ModuleOverlap conflicts with overlap_analysis
|
||||
- Targeted clarification questions
|
||||
CONSTRAINTS: Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
|
||||
" --tool gemini --mode analysis --rule analysis-code-patterns --cd ${project_root}
|
||||
|
||||
Fallback: Qwen (same prompt) → Claude (manual analysis)
|
||||
|
||||
### 3. Generate Strategies (2-4 per conflict)
|
||||
|
||||
Template per conflict:
|
||||
- Severity: Critical/High/Medium
|
||||
- Category: Architecture/API/Data/Dependency/ModuleOverlap
|
||||
- Affected files + impact
|
||||
- **For ModuleOverlap**: Include overlap_analysis with existing modules and scenarios
|
||||
- Options with pros/cons, effort, risk
|
||||
- **For ModuleOverlap strategies**: Add clarification_needed questions for boundary definition
|
||||
- Recommended strategy + rationale
|
||||
|
||||
### 4. Return Structured Conflict Data
|
||||
|
||||
**Schema Reference**: Execute \`cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json\` to get full schema
|
||||
|
||||
Return JSON following the schema above. Key requirements:
|
||||
- Minimum 2 strategies per conflict, max 4
|
||||
- All text in Chinese for user-facing fields (brief, name, pros, cons, modification_suggestions)
|
||||
- modifications.old_content: 20-100 chars for unique Edit tool matching
|
||||
- modifications.new_content: preserves markdown formatting
|
||||
- modification_suggestions: 2-5 actionable suggestions for custom handling
|
||||
|
||||
### 5. Planning Notes Record (REQUIRED)
|
||||
After analysis complete, append a brief execution record to planning-notes.md:
|
||||
|
||||
**File**: .workflow/active/${session_id}/planning-notes.md
|
||||
**Location**: Under "## Conflict Decisions (Phase 2)" section
|
||||
**Format**:
|
||||
\`\`\`
|
||||
### [Conflict-Resolution Agent] YYYY-MM-DD
|
||||
- **Note**: [brief summary of conflict types, resolution strategies, key decisions]
|
||||
\`\`\`
|
||||
`
|
||||
});
|
||||
|
||||
// Wait for initial analysis
|
||||
const analysisResult = wait({
|
||||
ids: [conflictAgentId],
|
||||
timeout_ms: 600000 // 10 minutes
|
||||
});
|
||||
|
||||
// Parse conflicts from result
|
||||
const conflicts = parseConflictsFromResult(analysisResult);
|
||||
```
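The `parseConflictsFromResult` helper referenced above is not defined in this phase. A minimal sketch is shown below; the `wait` result shape and the top-level `conflicts` field are assumptions based on the conflict-resolution schema.

```javascript
// Hypothetical sketch: extract the conflicts array from the agent's final output.
// The result shape (results[0].output) and the `conflicts` field are assumptions.
function parseConflictsFromResult(waitResult) {
  const raw = waitResult?.results?.[0]?.output || '';
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  if (start === -1 || end === -1) return [];
  try {
    const parsed = JSON.parse(raw.slice(start, end + 1));
    return parsed.conflicts || [];
  } catch (error) {
    console.warn('Could not parse conflict JSON, treating as no conflicts:', error.message);
    return [];
  }
}
```
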
#### Conflict Categories
|
||||
|
||||
| Category | Description |
|
||||
|----------|-------------|
|
||||
| **Architecture** | Incompatible design patterns, module structure changes, pattern migration |
|
||||
| **API** | Breaking contract changes, signature modifications, public interface impacts |
|
||||
| **Data Model** | Schema modifications, type breaking changes, data migration needs |
|
||||
| **Dependency** | Version incompatibilities, setup conflicts, breaking updates |
|
||||
| **ModuleOverlap** | Functional overlap, scenario boundary ambiguity, duplicate responsibility |
|
||||
|
||||
#### 3.4 Iterative User Clarification
|
||||
|
||||
```javascript
|
||||
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
|
||||
const resolvedConflicts = [];
|
||||
const customConflicts = [];
|
||||
|
||||
FOR each conflict:
|
||||
round = 0, clarified = false, userClarifications = []
|
||||
|
||||
WHILE (!clarified && round++ < 10):
|
||||
// 1. Display conflict info (text output for context)
|
||||
displayConflictSummary(conflict) // id, brief, severity, overlap_analysis if ModuleOverlap
|
||||
|
||||
// 2. Strategy selection
|
||||
if (autoYes) {
|
||||
console.log(`[--yes] Auto-selecting recommended strategy`)
|
||||
selectedStrategy = conflict.strategies[conflict.recommended || 0]
|
||||
clarified = true // Skip clarification loop
|
||||
} else {
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: formatStrategiesForDisplay(conflict.strategies),
|
||||
header: "策略选择",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
...conflict.strategies.map((s, i) => ({
|
||||
label: `${s.name}${i === conflict.recommended ? ' (推荐)' : ''}`,
|
||||
description: `${s.complexity}复杂度 | ${s.risk}风险${s.clarification_needed?.length ? ' | 需澄清' : ''}`
|
||||
})),
|
||||
{ label: "自定义修改", description: `建议: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
|
||||
]
|
||||
}]
|
||||
})
|
||||
|
||||
// 3. Handle selection
|
||||
if (userChoice === "自定义修改") {
|
||||
customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
|
||||
break
|
||||
}
|
||||
|
||||
selectedStrategy = findStrategyByName(userChoice)
|
||||
}
|
||||
|
||||
// 4. Clarification (if needed) - using send_input for agent re-analysis
|
||||
if (!autoYes && selectedStrategy.clarification_needed?.length > 0) {
|
||||
for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
|
||||
AskUserQuestion({
|
||||
questions: batch.map((q, i) => ({
|
||||
question: q, header: `澄清${i+1}`, multiSelect: false,
|
||||
options: [{ label: "详细说明", description: "提供答案" }]
|
||||
}))
|
||||
})
|
||||
userClarifications.push(...collectAnswers(batch))
|
||||
}
|
||||
|
||||
// 5. Agent re-analysis via send_input (key: agent stays active)
|
||||
send_input({
|
||||
id: conflictAgentId,
|
||||
message: `
|
||||
## CLARIFICATION ANSWERS
|
||||
Conflict: ${conflict.id}
|
||||
Strategy: ${selectedStrategy.name}
|
||||
User Clarifications: ${JSON.stringify(userClarifications)}
|
||||
|
||||
## REQUEST
|
||||
Based on the clarifications above, update the strategy assessment.
|
||||
Output: { uniqueness_confirmed: boolean, rationale: string, updated_strategy: {...}, remaining_questions: [...] }
|
||||
`
|
||||
});
|
||||
|
||||
// Wait for re-analysis result
|
||||
const reanalysisResult = wait({
|
||||
ids: [conflictAgentId],
|
||||
timeout_ms: 300000 // 5 minutes
|
||||
});
|
||||
|
||||
const parsedResult = parseReanalysisResult(reanalysisResult);
|
||||
|
||||
if (parsedResult.uniqueness_confirmed) {
|
||||
selectedStrategy = { ...parsedResult.updated_strategy, clarifications: userClarifications }
|
||||
clarified = true
|
||||
} else {
|
||||
selectedStrategy.clarification_needed = parsedResult.remaining_questions
|
||||
}
|
||||
} else {
|
||||
clarified = true
|
||||
}
|
||||
|
||||
if (clarified) resolvedConflicts.push({ conflict, strategy: selectedStrategy })
|
||||
END WHILE
|
||||
END FOR
|
||||
|
||||
selectedStrategies = resolvedConflicts.map(r => ({
|
||||
conflict_id: r.conflict.id, strategy: r.strategy, clarifications: r.strategy.clarifications || []
|
||||
}))
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
- AskUserQuestion: max 4 questions/call, batch if more
|
||||
- Strategy options: 2-4 strategies + "自定义修改"
|
||||
- Clarification loop via send_input: max 10 rounds, agent determines uniqueness_confirmed
|
||||
- Agent stays active throughout interaction (no close_agent until Step 3.6)
|
||||
- Custom conflicts: record overlap_analysis for subsequent manual handling
|
||||
|
||||
#### 3.5 Generate conflict-resolution.json
|
||||
|
||||
```javascript
|
||||
// Apply modifications from resolved strategies
|
||||
const modifications = [];
|
||||
selectedStrategies.forEach(item => {
|
||||
if (item.strategy && item.strategy.modifications) {
|
||||
modifications.push(...item.strategy.modifications.map(mod => ({
|
||||
...mod,
|
||||
conflict_id: item.conflict_id,
|
||||
clarifications: item.clarifications
|
||||
})));
|
||||
}
|
||||
});
|
||||
|
||||
console.log(`\nApplying ${modifications.length} modifications...`);
|
||||
|
||||
const appliedModifications = [];
|
||||
const failedModifications = [];
|
||||
const fallbackConstraints = [];
|
||||
|
||||
modifications.forEach((mod, idx) => {
|
||||
try {
|
||||
console.log(`[${idx + 1}/${modifications.length}] Modifying ${mod.file}...`);
|
||||
|
||||
if (!file_exists(mod.file)) {
|
||||
console.log(` File not found, recording as constraint`);
|
||||
fallbackConstraints.push({
|
||||
source: "conflict-resolution",
|
||||
conflict_id: mod.conflict_id,
|
||||
target_file: mod.file,
|
||||
section: mod.section,
|
||||
change_type: mod.change_type,
|
||||
content: mod.new_content,
|
||||
rationale: mod.rationale
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
if (mod.change_type === "update") {
|
||||
Edit({ file_path: mod.file, old_string: mod.old_content, new_string: mod.new_content });
|
||||
} else if (mod.change_type === "add") {
|
||||
const fileContent = Read(mod.file);
|
||||
const updated = insertContentAfterSection(fileContent, mod.section, mod.new_content);
|
||||
Write(mod.file, updated);
|
||||
} else if (mod.change_type === "remove") {
|
||||
Edit({ file_path: mod.file, old_string: mod.old_content, new_string: "" });
|
||||
}
|
||||
|
||||
appliedModifications.push(mod);
|
||||
} catch (error) {
|
||||
failedModifications.push({ ...mod, error: error.message });
|
||||
}
|
||||
});
|
||||
|
||||
// Generate conflict-resolution.json
|
||||
const resolutionOutput = {
|
||||
session_id: sessionId,
|
||||
resolved_at: new Date().toISOString(),
|
||||
summary: {
|
||||
total_conflicts: conflicts.length,
|
||||
resolved_with_strategy: selectedStrategies.length,
|
||||
custom_handling: customConflicts.length,
|
||||
fallback_constraints: fallbackConstraints.length
|
||||
},
|
||||
resolved_conflicts: selectedStrategies.map(s => ({
|
||||
conflict_id: s.conflict_id,
|
||||
strategy_name: s.strategy.name,
|
||||
strategy_approach: s.strategy.approach,
|
||||
clarifications: s.clarifications || [],
|
||||
modifications_applied: s.strategy.modifications?.filter(m =>
|
||||
appliedModifications.some(am => am.conflict_id === s.conflict_id)
|
||||
) || []
|
||||
})),
|
||||
custom_conflicts: customConflicts.map(c => ({
|
||||
id: c.id, brief: c.brief, category: c.category,
|
||||
suggestions: c.suggestions, overlap_analysis: c.overlap_analysis || null
|
||||
})),
|
||||
planning_constraints: fallbackConstraints,
|
||||
failed_modifications: failedModifications
|
||||
};
|
||||
|
||||
const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
|
||||
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
|
||||
|
||||
// Output custom conflict summary (if any)
|
||||
if (customConflicts.length > 0) {
|
||||
customConflicts.forEach(conflict => {
|
||||
console.log(`[${conflict.category}] ${conflict.id}: ${conflict.brief}`);
|
||||
if (conflict.category === 'ModuleOverlap' && conflict.overlap_analysis) {
|
||||
console.log(` New module: ${conflict.overlap_analysis.new_module.name}`);
|
||||
conflict.overlap_analysis.existing_modules.forEach(mod => {
|
||||
console.log(` Overlaps: ${mod.name} (${mod.file})`);
|
||||
});
|
||||
}
|
||||
conflict.suggestions.forEach(s => console.log(` - ${s}`));
|
||||
});
|
||||
}
|
||||
```
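The `insertContentAfterSection` helper used for `change_type === "add"` is not defined in this document; a minimal sketch under that assumption:

```javascript
// Hypothetical sketch: insert new content right after a named markdown section
// heading; append at end of file if the heading is not found.
function insertContentAfterSection(fileContent, sectionHeading, newContent) {
  const lines = fileContent.split('\n');
  const index = lines.findIndex(line => line.trim() === sectionHeading.trim());
  if (index === -1) return `${fileContent}\n\n${newContent}\n`;
  lines.splice(index + 1, 0, '', newContent);
  return lines.join('\n');
}
```
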
#### 3.6 Close Conflict Agent
|
||||
|
||||
```javascript
|
||||
close_agent({ id: conflictAgentId });
|
||||
```
|
||||
|
||||
### Step 4: Invoke Context-Search Agent
|
||||
|
||||
**Execute after Step 2 (and Step 3 if triggered)**
|
||||
|
||||
```javascript
|
||||
// Load user intent from planning-notes.md (from Phase 1)
|
||||
const planningNotesPath = `.workflow/active/${session_id}/planning-notes.md`;
|
||||
let userIntent = { goal: task_description, key_constraints: "None specified" };
|
||||
|
||||
if (file_exists(planningNotesPath)) {
|
||||
const notesContent = Read(planningNotesPath);
|
||||
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
|
||||
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
|
||||
if (goalMatch) userIntent.goal = goalMatch[1].trim();
|
||||
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
|
||||
}
|
||||
|
||||
// Prepare conflict resolution context for agent
|
||||
const conflictContext = hasSignificantConflicts
|
||||
? `Conflict Resolution: ${resolutionPath} (${selectedStrategies.length} resolved, ${customConflicts.length} custom)`
|
||||
: `Conflict Resolution: None needed (no significant conflicts detected)`;
|
||||
|
||||
// Spawn context-search-agent
|
||||
const contextAgentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/context-search-agent.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
## Execution Mode
|
||||
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
|
||||
|
||||
## Session Information
|
||||
- **Session ID**: ${session_id}
|
||||
- **Task Description**: ${task_description}
|
||||
- **Output Path**: .workflow/active/${session_id}/.process/context-package.json
|
||||
|
||||
## User Intent (from Phase 1 - Planning Notes)
|
||||
**GOAL**: ${userIntent.goal}
|
||||
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
|
||||
|
||||
This is the PRIMARY context source - all subsequent analysis must align with user intent.
|
||||
|
||||
## Exploration Input (from Step 2)
|
||||
- **Manifest**: ${sessionFolder}/explorations-manifest.json
|
||||
- **Exploration Count**: ${explorationManifest.exploration_count}
|
||||
- **Angles**: ${explorationManifest.angles_explored.join(', ')}
|
||||
- **Complexity**: ${complexity}
|
||||
|
||||
## Conflict Resolution Input (from Step 3)
|
||||
- **${conflictContext}**
|
||||
${hasSignificantConflicts ? `- **Resolution File**: ${resolutionPath}
|
||||
- **Resolved Conflicts**: ${selectedStrategies.length}
|
||||
- **Custom Conflicts**: ${customConflicts.length}
|
||||
- **Planning Constraints**: ${fallbackConstraints.length}` : ''}
|
||||
|
||||
## Mission
|
||||
Execute complete context-search-agent workflow for implementation planning:
|
||||
|
||||
### Phase 1: Initialization & Pre-Analysis
|
||||
1. **Project State Loading**:
|
||||
- Read and parse \`.workflow/project-tech.json\`. Use its \`overview\` section as the foundational \`project_context\`. This is your primary source for architecture, tech stack, and key components.
|
||||
- Read and parse \`.workflow/project-guidelines.json\`. Load \`conventions\`, \`constraints\`, and \`learnings\` into a \`project_guidelines\` section.
|
||||
- If files don't exist, proceed with fresh analysis.
|
||||
2. **Detection**: Check for existing context-package (early exit if valid)
|
||||
3. **Foundation**: Initialize CodexLens, get project structure, load docs
|
||||
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
|
||||
|
||||
### Phase 2: Multi-Source Context Discovery
|
||||
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
|
||||
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
|
||||
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
|
||||
- Map user requirements to codebase entities (files, modules, patterns)
|
||||
- Establish baseline priority scores based on user goal alignment
|
||||
- Output: user_intent_mapping.json with preliminary priority scores
|
||||
|
||||
- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
|
||||
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
|
||||
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
|
||||
- **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
|
||||
- **Track 4**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
|
||||
|
||||
### Phase 3: Synthesis, Assessment & Packaging
|
||||
1. Apply relevance scoring and build dependency graph
|
||||
2. **Synthesize 5-source data** (including Track -1): Merge findings from all sources
|
||||
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
|
||||
- **Prioritize the context from \`project-tech.json\`** for architecture and tech stack unless code analysis reveals it's outdated
|
||||
3. **Context Priority Sorting**:
|
||||
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
|
||||
b. Classify files into priority tiers:
|
||||
- **Critical** (score >= 0.85): Directly mentioned in user goal OR exploration critical_files
|
||||
- **High** (0.70-0.84): Key dependencies, patterns required for goal
|
||||
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
|
||||
- **Low** (< 0.50): Contextual awareness only
|
||||
c. Generate dependency_order: Based on dependency graph + user goal sequence
|
||||
d. Document sorting_rationale: Explain prioritization logic
|
||||
|
||||
4. **Populate \`project_context\`**: Directly use the \`overview\` from \`project-tech.json\` to fill the \`project_context\` section. Include description, technology_stack, architecture, and key_components.
|
||||
5. **Populate \`project_guidelines\`**: Load conventions, constraints, and learnings from \`project-guidelines.json\` into a dedicated section.
|
||||
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
|
||||
7. Perform conflict detection with risk assessment
|
||||
8. **Inject conflict resolution results** (if conflict-resolution.json exists) into conflict_detection
|
||||
9. **Generate prioritized_context section**:
|
||||
\`\`\`json
|
||||
{
|
||||
"prioritized_context": {
|
||||
"user_intent": {
|
||||
"goal": "...",
|
||||
"scope": "...",
|
||||
"key_constraints": ["..."]
|
||||
},
|
||||
"priority_tiers": {
|
||||
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
|
||||
"high": [...],
|
||||
"medium": [...],
|
||||
"low": [...]
|
||||
},
|
||||
"dependency_order": ["module1", "module2", "module3"],
|
||||
"sorting_rationale": "Based on user goal alignment (Track -1), exploration critical files, and dependency graph analysis"
|
||||
}
|
||||
}
|
||||
\`\`\`
|
||||
10. Generate and validate context-package.json with prioritized_context field
|
||||
|
||||
## Output Requirements
|
||||
Complete context-package.json with:
|
||||
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
|
||||
- **project_context**: description, technology_stack, architecture, key_components (sourced from \`project-tech.json\`)
|
||||
- **project_guidelines**: {conventions, constraints, quality_rules, learnings} (sourced from \`project-guidelines.json\`)
|
||||
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
|
||||
- **dependencies**: {internal[], external[]} with dependency graph
|
||||
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
|
||||
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[], resolution_file (if exists)}
|
||||
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)
|
||||
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
|
||||
|
||||
## Quality Validation
|
||||
Before completion verify:
|
||||
- [ ] Valid JSON format with all required fields
|
||||
- [ ] File relevance accuracy >80%
|
||||
- [ ] Dependency graph complete (max 2 transitive levels)
|
||||
- [ ] Conflict risk level calculated correctly
|
||||
- [ ] No sensitive data exposed
|
||||
- [ ] Total files <=50 (prioritize high-relevance)
|
||||
|
||||
## Planning Notes Record (REQUIRED)
|
||||
After completing context-package.json, append a brief execution record to planning-notes.md:
|
||||
|
||||
**File**: .workflow/active/${session_id}/planning-notes.md
|
||||
**Location**: Under "## Context Findings (Phase 2)" section
|
||||
**Format**:
|
||||
\`\`\`
|
||||
### [Context-Search Agent] YYYY-MM-DD
|
||||
- **Note**: [brief summary of key findings]
|
||||
\`\`\`
|
||||
|
||||
Execute autonomously following agent documentation.
|
||||
Report completion with statistics.
|
||||
`
|
||||
});
|
||||
|
||||
// Wait for context agent to complete
|
||||
const contextResult = wait({
|
||||
ids: [contextAgentId],
|
||||
timeout_ms: 900000 // 15 minutes
|
||||
});
|
||||
|
||||
// Close context agent
|
||||
close_agent({ id: contextAgentId });
|
||||
```
|
||||
|
||||
### Step 5: Output Verification
|
||||
|
||||
After agent completes, verify output:
|
||||
|
||||
```javascript
|
||||
// Verify file was created
|
||||
const outputPath = `.workflow/active/${session_id}/.process/context-package.json`;
|
||||
if (!file_exists(outputPath)) {
|
||||
throw new Error("Agent failed to generate context-package.json");
|
||||
}
|
||||
|
||||
// Verify exploration_results included
|
||||
const pkg = JSON.parse(Read(outputPath));
|
||||
if (pkg.exploration_results?.exploration_count > 0) {
|
||||
console.log(`Exploration results aggregated: ${pkg.exploration_results.exploration_count} angles`);
|
||||
}
|
||||
|
||||
// Verify conflict resolution status
|
||||
if (hasSignificantConflicts) {
|
||||
const resolutionFileRef = pkg.conflict_detection?.resolution_file;
|
||||
if (resolutionFileRef) {
|
||||
console.log(`Conflict resolution integrated: ${resolutionFileRef}`);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Parameter Reference
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
|-----------|------|----------|-------------|
|
||||
| `--session` | string | Yes | Workflow session ID (e.g., WFS-user-auth) |
|
||||
| `task_description` | string | Yes | Detailed task description for context extraction |
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Auto-select recommended strategy for each conflict in Step 3, skip clarification questions.
|
||||
|
||||
## Post-Phase Update
|
||||
|
||||
After context-gather completes, update planning-notes.md:
|
||||
|
||||
```javascript
|
||||
const contextPackage = JSON.parse(Read(contextPath))
|
||||
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low'
|
||||
const criticalFiles = (contextPackage.exploration_results?.aggregated_insights?.critical_files || [])
|
||||
.slice(0, 5).map(f => f.path)
|
||||
const archPatterns = contextPackage.project_context?.architecture_patterns || []
|
||||
const constraints = contextPackage.exploration_results?.aggregated_insights?.constraints || []
|
||||
|
||||
// Update Phase 2 section
|
||||
Edit(planningNotesPath, {
|
||||
old: '## Context Findings (Phase 2)\n(To be filled by context-gather)',
|
||||
new: `## Context Findings (Phase 2)
|
||||
|
||||
- **CRITICAL_FILES**: ${criticalFiles.join(', ') || 'None identified'}
|
||||
- **ARCHITECTURE**: ${archPatterns.join(', ') || 'Not detected'}
|
||||
- **CONFLICT_RISK**: ${conflictRisk}
|
||||
- **CONSTRAINTS**: ${constraints.length > 0 ? constraints.join('; ') : 'None'}`
|
||||
})
|
||||
|
||||
// If conflicts were resolved inline, update conflict decisions section
|
||||
if (hasSignificantConflicts && file_exists(resolutionPath)) {
|
||||
const conflictRes = JSON.parse(Read(resolutionPath))
|
||||
const resolved = conflictRes.resolved_conflicts || []
|
||||
const planningConstraints = conflictRes.planning_constraints || []
|
||||
|
||||
Edit(planningNotesPath, {
|
||||
old: '## Conflict Decisions (Phase 2)\n(To be filled if conflicts detected)',
|
||||
new: `## Conflict Decisions (Phase 2)
|
||||
|
||||
- **RESOLVED**: ${resolved.map(r => `${r.conflict_id} → ${r.strategy_name}`).join('; ') || 'None'}
|
||||
- **CUSTOM_HANDLING**: ${conflictRes.custom_conflicts?.map(c => c.id).join(', ') || 'None'}
|
||||
- **CONSTRAINTS**: ${planningConstraints.map(c => c.content).join('; ') || 'None'}`
|
||||
})
|
||||
|
||||
// Append conflict constraints to consolidated list
|
||||
if (planningConstraints.length > 0) {
|
||||
Edit(planningNotesPath, {
|
||||
old: '## Consolidated Constraints (Phase 3 Input)',
|
||||
new: `## Consolidated Constraints (Phase 3 Input)
|
||||
${planningConstraints.map((c, i) => `${constraintCount + i + 1}. [Conflict] ${c.content}`).join('\n')}`
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// Append Phase 2 constraints to consolidated list
|
||||
Edit(planningNotesPath, {
|
||||
old: '## Consolidated Constraints (Phase 3 Input)',
|
||||
new: `## Consolidated Constraints (Phase 3 Input)
|
||||
${constraints.map((c, i) => `${i + 2}. [Context] ${c}`).join('\n')}`
|
||||
})
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Recovery Strategy
|
||||
```
|
||||
1. Pre-check: Verify exploration results before conflict analysis
|
||||
2. Monitor: Track agents via wait with timeout
|
||||
3. Validate: Parse agent JSON output
|
||||
4. Recover:
|
||||
- Agent failure → check logs + report error
|
||||
- Invalid JSON → retry once with Claude fallback
|
||||
- CLI failure → fallback to Claude analysis
|
||||
- Edit tool failure → report affected files + rollback option
|
||||
- User cancels → mark as "unresolved", continue to Step 4
|
||||
5. Degrade: If conflict analysis fails, skip and continue with context packaging
|
||||
6. Cleanup: Always close_agent even on error path
|
||||
```
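A minimal sketch of the cleanup guarantee (point 6), assuming the spawn_agent/wait/close_agent primitives shown elsewhere in this phase:

```javascript
// Sketch: ensure close_agent runs on every path, so a failed or timed-out
// analysis degrades gracefully instead of leaking the subagent.
function runAgentSafely(message, timeoutMs) {
  const agentId = spawn_agent({ message });
  try {
    const result = wait({ ids: [agentId], timeout_ms: timeoutMs });
    if (result.timed_out) console.warn('Agent timed out - continuing with partial results');
    return result;
  } catch (error) {
    console.warn('Agent failed, skipping this analysis step:', error.message);
    return null;
  } finally {
    close_agent({ id: agentId });   // cleanup even on error path (point 6)
  }
}
```
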
### Rollback Handling
|
||||
```
|
||||
If Edit tool fails mid-application:
|
||||
1. Log all successfully applied modifications
|
||||
2. Output rollback option via text interaction
|
||||
3. If rollback selected: restore files from git or backups
|
||||
4. If continue: mark partial resolution in context-package.json
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- **Detection-first**: Always check for existing package before invoking agent
|
||||
- **User intent integration**: Load user intent from planning-notes.md (Phase 1 output)
|
||||
- **Conflict-aware exploration**: Explore agents detect conflict indicators during their work
|
||||
- **Inline conflict resolution**: Conflicts resolved within this phase when significant indicators found
|
||||
- **Output**: Generates `context-package.json` with `prioritized_context` field + optional `conflict-resolution.json`
|
||||
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call
|
||||
- **Explicit Lifecycle**: Always close_agent after wait to free resources
|
||||
- **Batch Wait**: Use single wait call for multiple parallel agents for efficiency
|
||||
|
||||
## Output
|
||||
|
||||
- **Variable**: `contextPath` (e.g., `.workflow/active/WFS-xxx/.process/context-package.json`)
|
||||
- **Variable**: `conflictRisk` (none/low/medium/high/resolved)
|
||||
- **File**: Updated `planning-notes.md` with context findings + conflict decisions (if applicable)
|
||||
- **File**: Optional `conflict-resolution.json` (when conflicts resolved inline)
|
||||
|
||||
## Next Phase
|
||||
|
||||
Return to orchestrator, then auto-continue to [Phase 3: Task Generation](03-task-generation.md).
|
||||
@@ -1,4 +1,4 @@
|
||||
# Phase 4: Task Generation
|
||||
# Phase 3: Task Generation
|
||||
|
||||
Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation.
|
||||
|
||||
@@ -10,7 +10,7 @@ When `--yes` or `-y`: Skip user questions, use defaults (no materials, Agent exe
|
||||
|
||||
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
|
||||
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
|
||||
- **NO Redundant Context Sorting**: Context priority sorting is ALREADY completed in context-gather Phase 2/3
|
||||
- **NO Redundant Context Sorting**: Context priority sorting is ALREADY completed in context-gather Phase 2
|
||||
- Use `context-package.json.prioritized_context` directly
|
||||
- DO NOT re-sort files or re-compute priorities
|
||||
- `priority_tiers` and `dependency_order` are pre-computed and ready-to-use
|
||||
@@ -164,7 +164,7 @@ const userConfig = {
|
||||
|
||||
### Phase 1: Context Preparation & Module Detection (Command Responsibility)
|
||||
|
||||
**Command prepares session paths, metadata, detects module structure. Context priority sorting is NOT performed here - it's already completed in context-gather Phase 2/3.**
|
||||
**Command prepares session paths, metadata, detects module structure. Context priority sorting is NOT performed here - it's already completed in context-gather Phase 2.**
|
||||
|
||||
**Session Path Structure**:
|
||||
```
|
||||
@@ -259,13 +259,13 @@ IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT im
|
||||
|
||||
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
|
||||
|
||||
## PLANNING NOTES (PHASE 1-3 CONTEXT)
|
||||
## PLANNING NOTES (PHASE 1-2 CONTEXT)
|
||||
Load: .workflow/active/${session_id}/planning-notes.md
|
||||
|
||||
This document contains:
|
||||
- User Intent: Original GOAL and KEY_CONSTRAINTS from Phase 1
|
||||
- Context Findings: Critical files, architecture, and constraints from Phase 2
|
||||
- Conflict Decisions: Resolved conflicts and planning constraints from Phase 3
|
||||
- Conflict Decisions: Resolved conflicts and planning constraints from Phase 2
|
||||
- Consolidated Constraints: All constraints from all phases
|
||||
|
||||
**USAGE**: Read planning-notes.md FIRST. Use Consolidated Constraints list to guide task sequencing and dependencies.
|
||||
@@ -310,7 +310,7 @@ Based on userConfig.executionMethod, set task-level meta.execution_config:
|
||||
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
|
||||
|
||||
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
|
||||
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
|
||||
Context sorting is ALREADY COMPLETED in context-gather Phase 2. DO NOT re-sort.
|
||||
Direct usage:
|
||||
- **user_intent**: Use goal/scope/key_constraints for task alignment
|
||||
- **priority_tiers.critical**: These files are PRIMARY focus for task generation
|
||||
@@ -396,11 +396,11 @@ After completing, update planning-notes.md:
|
||||
|
||||
**File**: .workflow/active/${session_id}/planning-notes.md
|
||||
|
||||
1. **Task Generation (Phase 4)**: Task count and key tasks
|
||||
1. **Task Generation (Phase 3)**: Task count and key tasks
|
||||
2. **N+1 Context**: Key decisions (with rationale) + deferred items
|
||||
|
||||
\`\`\`markdown
|
||||
## Task Generation (Phase 4)
|
||||
## Task Generation (Phase 3)
|
||||
### [Action-Planning Agent] YYYY-MM-DD
|
||||
- **Tasks**: [count] ([IDs])
|
||||
|
||||
@@ -459,7 +459,7 @@ IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md by Phase 3 Co
|
||||
|
||||
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
|
||||
|
||||
## PLANNING NOTES (PHASE 1-3 CONTEXT)
|
||||
## PLANNING NOTES (PHASE 1-2 CONTEXT)
|
||||
Load: .workflow/active/${session_id}/planning-notes.md
|
||||
|
||||
This document contains consolidated constraints and user intent to guide module-scoped task generation.
|
||||
@@ -509,7 +509,7 @@ Based on userConfig.executionMethod, set task-level meta.execution_config:
|
||||
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
|
||||
|
||||
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
|
||||
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
|
||||
Context sorting is ALREADY COMPLETED in context-gather Phase 2. DO NOT re-sort.
|
||||
Filter by module scope (${module.paths.join(', ')}):
|
||||
- **user_intent**: Use for task alignment within module
|
||||
- **priority_tiers.critical**: Filter for files in ${module.paths.join(', ')} → PRIMARY focus
|
||||
@@ -750,7 +750,7 @@ function resolveCrossModuleDependency(placeholder, allTasks) {

## Next Step

Return to orchestrator. Present user with action choices:
Return to orchestrator. Present user with action choices (or auto-continue if `--yes`):
1. Verify Plan Quality (Recommended) → `workflow:plan-verify`
2. Start Execution → `workflow:execute`
2. Start Execution → Phase 4 (phases/04-execution.md)
3. Review Status Only → `workflow:status`

.codex/skills/workflow-plan-execute/phases/04-execution.md (new file, 454 lines)
@@ -0,0 +1,454 @@
|
||||
# Phase 4: Execution (Conditional)
|
||||
|
||||
Execute implementation tasks using agent orchestration with lazy loading, progress tracking, and optional auto-commit. This phase is triggered conditionally after Phase 3 completes.
|
||||
|
||||
## Trigger Conditions
|
||||
|
||||
- **User Selection**: User chooses "Start Execution" in Phase 3 User Decision
|
||||
- **Auto Mode** (`--yes`): Automatically enters Phase 4 after Phase 3 completes
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`:
|
||||
- **Completion Choice**: Automatically completes session (runs `workflow:session:complete --yes`)
|
||||
|
||||
When `--with-commit`:
|
||||
- **Auto-Commit**: After each agent task completes, commit changes based on summary document
|
||||
- **Commit Principle**: Minimal commits - only commit files modified by the completed task
|
||||
- **Commit Message**: Generated from task summary with format: "feat/fix/refactor: {task-title} - {summary}" (see the sketch below)
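A minimal sketch of the auto-commit step, assuming each task summary lists its modified files under a `- Modified:` prefix (the summary line format is an assumption):

```javascript
// Hypothetical sketch: commit only the files recorded in the completed task's
// summary, with a commit type inferred from the task title.
function autoCommitTask(task, summaryPath) {
  const summary = Read(summaryPath);
  const files = (summary.match(/^- Modified: (.+)$/gm) || [])   // summary line format assumed
    .map(line => line.replace('- Modified: ', '').trim());
  if (files.length === 0) return;                               // minimal commits: nothing to commit
  const type = /fix|bug/i.test(task.title) ? 'fix'
             : /refactor/i.test(task.title) ? 'refactor'
             : 'feat';
  const firstLine = summary.split('\n').find(l => l.trim()) || task.title;
  bash(`git add ${files.map(f => `"${f}"`).join(' ')}`);
  bash(`git commit -m "${type}: ${task.title} - ${firstLine}"`);
}
```
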
## Key Design Principles
|
||||
|
||||
1. **No Redundant Discovery**: Session ID and planning docs already available from Phase 1-3
|
||||
2. **Autonomous Execution**: Complete entire workflow without user interruption
|
||||
3. **Lazy Loading**: Task JSONs read on-demand during execution, not upfront
|
||||
4. **ONE AGENT = ONE TASK JSON**: Each agent instance executes exactly one task JSON file
|
||||
5. **IMPL_PLAN-Driven Strategy**: Execution model derived from planning document
|
||||
6. **Continuous Progress Tracking**: TodoWrite updates throughout entire workflow
|
||||
7. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Step 1: TodoWrite Generation
|
||||
├─ Update session status to "active"
|
||||
├─ Parse TODO_LIST.md for task statuses
|
||||
├─ Generate TodoWrite for entire workflow
|
||||
└─ Prepare session context paths
|
||||
|
||||
Step 2: Execution Strategy Parsing
|
||||
├─ Parse IMPL_PLAN.md Section 4 (Execution Model)
|
||||
└─ Fallback: Analyze task structure for smart defaults
|
||||
|
||||
Step 3: Task Execution Loop
|
||||
├─ Get next in_progress task from TodoWrite
|
||||
├─ Lazy load task JSON
|
||||
├─ Launch agent (spawn_agent → wait → close_agent)
|
||||
├─ Mark task completed (update IMPL-*.json status)
|
||||
├─ [with-commit] Auto-commit changes
|
||||
└─ Advance to next task
|
||||
|
||||
Step 4: Completion
|
||||
├─ Synchronize all statuses
|
||||
├─ Generate summaries
|
||||
└─ AskUserQuestion: Review or Complete Session
|
||||
```
|
||||
|
||||
## Step 1: TodoWrite Generation
|
||||
|
||||
### Step 1.0: Update Session Status to Active
|
||||
|
||||
Before generating TodoWrite, update session status from "planning" to "active":
|
||||
```bash
|
||||
# Update session status (idempotent - safe to run if already active)
|
||||
jq '.status = "active" | .execution_started_at = (.execution_started_at // now | todate)' \
|
||||
.workflow/active/${sessionId}/workflow-session.json > tmp.json && \
|
||||
mv tmp.json .workflow/active/${sessionId}/workflow-session.json
|
||||
```
|
||||
This ensures the dashboard shows the session as "ACTIVE" during execution.
|
||||
|
||||
### Step 1.1: Parse TODO_LIST.md
|
||||
|
||||
```
|
||||
1. Create TodoWrite List: Generate task list from TODO_LIST.md (not from task JSONs)
|
||||
- Parse TODO_LIST.md to extract all tasks with current statuses
|
||||
- Identify first pending task with met dependencies
|
||||
- Generate comprehensive TodoWrite covering entire workflow
|
||||
2. Prepare Session Context: Inject workflow paths for agent use
|
||||
3. Validate Prerequisites: Ensure IMPL_PLAN.md and TODO_LIST.md exist and are valid
|
||||
```
|
||||
|
||||
**Performance Optimization**: TODO_LIST.md provides task metadata and status. Task JSONs are NOT loaded here - deferred to Step 3 (lazy loading).
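A minimal parsing sketch, assuming TODO_LIST.md uses markdown checkboxes of the form `- [ ] IMPL-001: Title` (the exact line format is an assumption):

```javascript
// Hypothetical sketch: derive task IDs and statuses from TODO_LIST.md without
// loading any task JSON (lazy loading happens later, in Step 3).
function parseTodoList(todoListPath) {
  return Read(todoListPath)
    .split('\n')
    .map(line => line.match(/^- \[([ xX])\]\s+(IMPL-[\w.-]+):\s*(.+)$/))
    .filter(Boolean)
    .map(match => ({
      id: match[2],
      title: match[3].trim(),
      status: match[1].trim() ? 'completed' : 'pending'
    }));
}
```
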
## Step 2: Execution Strategy Parsing
|
||||
|
||||
### Step 2A: Parse Execution Strategy from IMPL_PLAN.md
|
||||
|
||||
Read IMPL_PLAN.md Section 4 to extract:
|
||||
- **Execution Model**: Sequential | Parallel | Phased | TDD Cycles
|
||||
- **Parallelization Opportunities**: Which tasks can run in parallel
|
||||
- **Serialization Requirements**: Which tasks must run sequentially
|
||||
- **Critical Path**: Priority execution order (see the parsing sketch below)
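A minimal sketch of Step 2A, assuming IMPL_PLAN.md titles Section 4 "Execution Model" and names one of the models listed above (the heading text is an assumption):

```javascript
// Hypothetical sketch: pull the execution model keyword out of IMPL_PLAN.md
// Section 4; returns null when no explicit strategy is declared.
function parseExecutionModel(implPlanPath) {
  const content = Read(implPlanPath);
  const section = content.split(/^##\s+/m).find(s => /^4\.?\s.*Execution Model/i.test(s));
  if (!section) return null;                                   // triggers Step 2B fallback
  const match = section.match(/Sequential|Parallel|Phased|TDD Cycles/i);
  return match ? match[0] : null;
}
```
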
### Step 2B: Intelligent Fallback
|
||||
|
||||
If IMPL_PLAN.md lacks execution strategy, use intelligent fallback:
|
||||
1. **Analyze task structure**:
|
||||
- Check `meta.execution_group` in task JSONs
|
||||
- Analyze `depends_on` relationships
|
||||
- Understand task complexity and risk
|
||||
2. **Apply smart defaults**:
|
||||
- No dependencies + same execution_group → Parallel
|
||||
- Has dependencies → Sequential (wait for deps)
|
||||
- Critical/high-risk tasks → Sequential
|
||||
3. **Conservative approach**: When uncertain, prefer sequential execution (see the sketch below)
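A minimal sketch of the fallback grouping, assuming task JSONs expose `depends_on` and `meta.execution_group` as described above (the `meta.critical` flag is an assumption):

```javascript
// Hypothetical sketch: group dependency-free tasks by execution_group for
// parallel batches; everything else stays sequential (conservative default).
function deriveExecutionBatches(tasks) {
  const groups = {};
  const sequential = [];
  tasks.forEach(task => {
    const independent = (task.depends_on || []).length === 0;
    const group = task.meta?.execution_group;
    if (independent && group && !task.meta?.critical) {
      (groups[group] = groups[group] || []).push(task);        // same group → run together
    } else {
      sequential.push(task);                                   // deps, no group, or critical → sequential
    }
  });
  const parallelBatches = Object.values(groups).filter(batch => batch.length > 1);
  Object.values(groups)
    .filter(batch => batch.length === 1)
    .forEach(batch => sequential.push(batch[0]));
  return { parallelBatches, sequential };
}
```
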
### Execution Models
|
||||
|
||||
#### 1. Sequential Execution
|
||||
**When**: IMPL_PLAN specifies "Sequential" OR no clear parallelization guidance
|
||||
**Pattern**: Execute tasks one by one in TODO_LIST order via spawn_agent → wait → close_agent per task
|
||||
**TodoWrite**: ONE task marked as `in_progress` at a time
|
||||
|
||||
#### 2. Parallel Execution
|
||||
**When**: IMPL_PLAN specifies "Parallel" with clear parallelization opportunities
|
||||
**Pattern**: Execute independent task groups concurrently by spawning multiple agents and batch waiting
|
||||
**TodoWrite**: MULTIPLE tasks (in same batch) marked as `in_progress` simultaneously
|
||||
**Agent Instantiation**: spawn one agent per task (respects ONE AGENT = ONE TASK JSON rule), then batch wait
|
||||
|
||||
#### 3. Phased Execution
|
||||
**When**: IMPL_PLAN specifies "Phased" with phase breakdown
|
||||
**Pattern**: Execute tasks in phases, respect phase boundaries
|
||||
**TodoWrite**: Within each phase, follow Sequential or Parallel rules
|
||||
|
||||
## Step 3: Task Execution Loop
|
||||
|
||||
### Execution Loop Pattern
|
||||
```
|
||||
while (TODO_LIST.md has pending tasks) {
|
||||
next_task_id = getTodoWriteInProgressTask()
|
||||
task_json = Read(.workflow/active/{session}/.task/{next_task_id}.json) // Lazy load
|
||||
executeTaskWithAgent(task_json) // spawn_agent → wait → close_agent
|
||||
updateTodoListMarkCompleted(next_task_id)
|
||||
advanceTodoWriteToNextTask()
|
||||
}
|
||||
```
|
||||
|
||||
### Execution Process per Task
|
||||
1. **Identify Next Task**: From TodoWrite, get the next `in_progress` task ID
|
||||
2. **Load Task JSON on Demand**: Read `.task/{task-id}.json` for current task ONLY
|
||||
3. **Validate Task Structure**: Ensure all required fields exist (id, title, status, meta, context, flow_control)
|
||||
4. **Launch Agent**: Invoke specialized agent via spawn_agent with complete context including flow control steps
|
||||
5. **Wait for Completion**: wait for agent result, handle timeout
|
||||
6. **Close Agent**: close_agent to free resources
|
||||
7. **Collect Results**: Gather implementation results and outputs
|
||||
8. **[with-commit] Auto-Commit**: If `--with-commit` flag enabled, commit changes based on summary
|
||||
9. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat
|
||||
|
||||
**Note**: TODO_LIST.md updates are handled by agents (e.g., code-developer.md), not by the orchestrator.
|
||||
|
||||
### Agent Prompt Template (Sequential)
|
||||
|
||||
**Path-Based Invocation**: Pass paths and trigger markers, let agent parse task JSON autonomously.
|
||||
|
||||
```javascript
|
||||
// Step 1: Spawn agent
|
||||
const agentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/${task.meta.agent}.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
Implement task ${task.id}: ${task.title}
|
||||
|
||||
[FLOW_CONTROL]
|
||||
|
||||
**Input**:
|
||||
- Task JSON: ${session.task_json_path}
|
||||
- Context Package: ${session.context_package_path}
|
||||
|
||||
**Output Location**:
|
||||
- Workflow: ${session.workflow_dir}
|
||||
- TODO List: ${session.todo_list_path}
|
||||
- Summaries: ${session.summaries_dir}
|
||||
|
||||
**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary
|
||||
`
|
||||
});
|
||||
|
||||
// Step 2: Wait for completion
|
||||
const result = wait({
|
||||
ids: [agentId],
|
||||
timeout_ms: 600000 // 10 minutes per task
|
||||
});
|
||||
|
||||
// Step 3: Close agent (IMPORTANT: always close)
|
||||
close_agent({ id: agentId });
|
||||
```
|
||||
|
||||
**Key Markers**:
|
||||
- `Implement` keyword: Triggers tech stack detection and guidelines loading
|
||||
- `[FLOW_CONTROL]`: Triggers flow_control.pre_analysis execution
|
||||
|
||||
### Parallel Execution Pattern
|
||||
|
||||
```javascript
|
||||
// Step 1: Spawn agents for parallel batch
|
||||
const batchAgents = [];
|
||||
parallelTasks.forEach(task => {
|
||||
const agentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/${task.meta.agent}.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
Implement task ${task.id}: ${task.title}
|
||||
|
||||
[FLOW_CONTROL]
|
||||
|
||||
**Input**:
|
||||
- Task JSON: ${session.task_json_path}
|
||||
- Context Package: ${session.context_package_path}
|
||||
|
||||
**Output Location**:
|
||||
- Workflow: ${session.workflow_dir}
|
||||
- TODO List: ${session.todo_list_path}
|
||||
- Summaries: ${session.summaries_dir}
|
||||
|
||||
**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary
|
||||
`
|
||||
});
|
||||
batchAgents.push({ agentId, taskId: task.id });
|
||||
});
|
||||
|
||||
// Step 2: Batch wait for all agents
|
||||
const batchResult = wait({
|
||||
ids: batchAgents.map(a => a.agentId),
|
||||
timeout_ms: 600000
|
||||
});
|
||||
|
||||
// Step 3: Check results and handle timeouts
|
||||
if (batchResult.timed_out) {
|
||||
console.log('Some parallel tasks timed out, continuing with completed results');
|
||||
}
|
||||
|
||||
// Step 4: Close all agents
|
||||
batchAgents.forEach(a => close_agent({ id: a.agentId }));
|
||||
```
|
||||
|
||||
### Agent Assignment Rules
|
||||
```
|
||||
meta.agent specified → Use specified agent
|
||||
meta.agent missing → Infer from meta.type:
|
||||
- "feature" → @code-developer (role: ~/.codex/agents/code-developer.md)
|
||||
- "test-gen" → @code-developer (role: ~/.codex/agents/code-developer.md)
|
||||
- "test-fix" → @test-fix-agent (role: ~/.codex/agents/test-fix-agent.md)
|
||||
- "review" → @universal-executor (role: ~/.codex/agents/universal-executor.md)
|
||||
- "docs" → @doc-generator (role: ~/.codex/agents/doc-generator.md)
|
||||
```
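
The same rules expressed as a lookup table; the default agent used for unknown types is an assumption, not part of the rules above:

```javascript
// Agent inference sketch - mirrors the mapping above
const AGENT_BY_TYPE = {
  "feature": "code-developer",
  "test-gen": "code-developer",
  "test-fix": "test-fix-agent",
  "review": "universal-executor",
  "docs": "doc-generator"
};

function resolveAgent(meta) {
  const agent = meta.agent || AGENT_BY_TYPE[meta.type] || "universal-executor"; // fallback is an assumption
  return { agent, rolePath: `~/.codex/agents/${agent}.md` };
}
```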
|
||||
|
||||
### Task Status Logic
|
||||
```
|
||||
pending + dependencies_met → executable
|
||||
completed → skip
|
||||
blocked → skip until dependencies clear
|
||||
```
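
The same logic as a small predicate, assuming each task lists its prerequisites in `depends_on` (sketch only):

```javascript
// Executability sketch - pending tasks run only once every dependency is completed
function isExecutable(task, allTasks) {
  if (task.status !== "pending") return false; // completed and blocked tasks are skipped
  const byId = new Map(allTasks.map(t => [t.id, t]));
  return (task.depends_on || []).every(depId => byId.get(depId)?.status === "completed");
}
```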
|
||||
|
||||
## Step 4: Completion
|
||||
|
||||
### Process
|
||||
1. **Update Task Status**: Mark completed tasks in JSON files
|
||||
2. **Generate Summary**: Create task summary in `.summaries/`
|
||||
3. **Update TodoWrite**: Mark current task complete, advance to next
|
||||
4. **Synchronize State**: Update session state and workflow status
|
||||
5. **Check Workflow Complete**: Verify all tasks are completed
|
||||
6. **User Choice**: When all tasks finished, ask user to choose next step:
|
||||
|
||||
```javascript
|
||||
// Parse --yes flag
|
||||
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
|
||||
|
||||
if (autoYes) {
|
||||
// Auto mode: Complete session automatically
|
||||
console.log(`[--yes] Auto-selecting: Complete Session`)
|
||||
// Execute: workflow:session:complete --yes
|
||||
} else {
|
||||
// Interactive mode: Ask user
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "All tasks completed. What would you like to do next?",
|
||||
header: "Next Step",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{
|
||||
label: "Enter Review",
|
||||
description: "Run specialized review (security/architecture/quality/action-items)"
|
||||
},
|
||||
{
|
||||
label: "Complete Session",
|
||||
description: "Archive session and update manifest"
|
||||
}
|
||||
]
|
||||
}]
|
||||
})
|
||||
}
|
||||
```
|
||||
|
||||
**Based on user selection**:
|
||||
- **"Enter Review"**: Execute `workflow:review`
|
||||
- **"Complete Session"**: Execute `workflow:session:complete`
|
||||
|
||||
### Post-Completion Expansion
|
||||
|
||||
After completion, ask user whether to expand to issues (test/enhance/refactor/doc). Selected items invoke `issue:new "{summary} - {dimension}"`.
|
||||
|
||||
## Auto-Commit Mode (--with-commit)
|
||||
|
||||
**Behavior**: After each agent task completes, automatically commit changes based on summary document.
|
||||
|
||||
**Minimal Principle**: Only commit files modified by the completed task.
|
||||
|
||||
**Commit Message Format**: `{type}: {task-title} - {summary}`
|
||||
|
||||
**Type Mapping** (from `meta.type`; see the sketch after this list):
|
||||
- `feature` → `feat` | `bugfix` → `fix` | `refactor` → `refactor`
|
||||
- `test-gen` → `test` | `docs` → `docs` | `review` → `chore`
|
||||
|
||||
**Implementation**:
|
||||
```bash
|
||||
# 1. Read summary from .summaries/{task-id}-summary.md
|
||||
# 2. Extract files from "Files Modified" section
|
||||
# 3. Commit: git add <files> && git commit -m "{type}: {title} - {summary}"
|
||||
```
|
||||
|
||||
**Error Handling**: Skip commit on no changes/missing summary, log errors, continue workflow.
|
||||
|
||||
## TodoWrite Coordination
|
||||
|
||||
### TodoWrite Rules
|
||||
|
||||
**Rule 1: Initial Creation**
|
||||
- Generate TodoWrite from TODO_LIST.md pending tasks
|
||||
|
||||
**Rule 2: In-Progress Task Count (Execution-Model-Dependent)**
|
||||
- **Sequential execution**: Mark ONLY ONE task as `in_progress` at a time
|
||||
- **Parallel batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
|
||||
- **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks
|
||||
|
||||
**Rule 3: Status Updates**
|
||||
- **Immediate Updates**: Update status after each task/batch completion without user interruption
|
||||
- **Status Synchronization**: Sync with JSON task files after updates
|
||||
- **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion
|
||||
|
||||
**Rule 4: Workflow Completion Check**
|
||||
- When all tasks marked `completed`, prompt user to choose review or complete session
|
||||
|
||||
### TodoWrite Examples
|
||||
|
||||
**Sequential Execution**:
|
||||
```javascript
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{
|
||||
content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
|
||||
status: "in_progress", // ONE task in progress
|
||||
activeForm: "Executing IMPL-1.1: Design auth schema"
|
||||
},
|
||||
{
|
||||
content: "Execute IMPL-1.2: Implement auth logic [code-developer] [FLOW_CONTROL]",
|
||||
status: "pending",
|
||||
activeForm: "Executing IMPL-1.2: Implement auth logic"
|
||||
}
|
||||
]
|
||||
});
|
||||
```
|
||||
|
||||
**Parallel Batch Execution**:
|
||||
```javascript
|
||||
TodoWrite({
|
||||
todos: [
|
||||
{
|
||||
content: "Execute IMPL-1.1: Build Auth API [code-developer] [execution_group: parallel-auth-api]",
|
||||
status: "in_progress",
|
||||
activeForm: "Executing IMPL-1.1: Build Auth API"
|
||||
},
|
||||
{
|
||||
content: "Execute IMPL-1.2: Build User UI [code-developer] [execution_group: parallel-ui-comp]",
|
||||
status: "in_progress",
|
||||
activeForm: "Executing IMPL-1.2: Build User UI"
|
||||
},
|
||||
{
|
||||
content: "Execute IMPL-2.1: Integration Tests [test-fix-agent] [depends_on: IMPL-1.1, IMPL-1.2]",
|
||||
status: "pending",
|
||||
activeForm: "Executing IMPL-2.1: Integration Tests"
|
||||
}
|
||||
]
|
||||
});
|
||||
```
|
||||
|
||||
## Error Handling & Recovery
|
||||
|
||||
### Common Errors & Recovery
|
||||
|
||||
| Error Type | Cause | Recovery Strategy | Max Attempts |
|-----------|-------|------------------|--------------|
| **Execution Errors** | | | |
| Agent failure | Agent crash/timeout | Retry with simplified context (close_agent first, then spawn new) | 2 |
| Flow control error | Command failure | Skip optional, fail critical | 1 per step |
| Context loading error | Missing dependencies | Reload from JSON, use defaults | 3 |
| JSON file corruption | File system issues | Restore from backup/recreate | 1 |
| **Lifecycle Errors** | | | |
| Agent timeout | wait timed out | send_input to prompt completion, or close_agent and retry | 2 |
| Orphaned agent | Agent not closed after error | Ensure close_agent in error paths | N/A |

### Error Recovery with Lifecycle Management
|
||||
```javascript
|
||||
// Safe agent execution pattern with error handling
|
||||
let agentId = null;
|
||||
try {
|
||||
agentId = spawn_agent({ message: taskPrompt });
|
||||
const result = wait({ ids: [agentId], timeout_ms: 600000 });
|
||||
|
||||
if (result.timed_out) {
|
||||
// Option 1: Send prompt to complete
|
||||
send_input({ id: agentId, message: "Please wrap up and generate summary." });
|
||||
const retryResult = wait({ ids: [agentId], timeout_ms: 120000 });
|
||||
}
|
||||
|
||||
// Process results...
|
||||
close_agent({ id: agentId });
|
||||
} catch (error) {
|
||||
// Ensure cleanup on error
|
||||
if (agentId) close_agent({ id: agentId });
|
||||
// Handle error (retry or skip task)
|
||||
}
|
||||
```
|
||||
|
||||
### Error Prevention
|
||||
- **Lazy Loading**: Reduces upfront memory usage and validation errors
|
||||
- **Atomic Updates**: Update JSON files atomically to prevent corruption
|
||||
- **Dependency Validation**: Check all depends_on references exist
|
||||
- **Context Verification**: Ensure all required context is available
|
||||
- **Lifecycle Cleanup**: Always close_agent in both success and error paths
|
||||
|
||||
## Output
|
||||
|
||||
- **Updated**: Task JSON status fields (completed)
|
||||
- **Created**: `.summaries/IMPL-*-summary.md` per task
|
||||
- **Updated**: `TODO_LIST.md` (by agents)
|
||||
- **Updated**: `workflow-session.json` status
|
||||
- **Created**: Git commits (if `--with-commit`)
|
||||
|
||||
## Next Step
|
||||
|
||||
Return to orchestrator. Orchestrator handles completion summary output.
|
||||
@@ -1,476 +0,0 @@
|
||||
# Phase 2: Context Gathering
|
||||
|
||||
Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON.
|
||||
|
||||
## Objective
|
||||
|
||||
- Check for existing valid context-package before executing
|
||||
- Assess task complexity and launch parallel exploration agents
|
||||
- Invoke context-search-agent to analyze codebase
|
||||
- Generate standardized `context-package.json` with prioritized context
|
||||
- Detect conflict risk level for Phase 3 decision
|
||||
|
||||
## Core Philosophy
|
||||
|
||||
- **Agent Delegation**: Delegate all discovery to `context-search-agent` for autonomous execution
|
||||
- **Detection-First**: Check for existing context-package before executing
|
||||
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
|
||||
- **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
|
||||
- **Explicit Lifecycle**: Manage subagent creation, waiting, and cleanup
|
||||
|
||||
## Execution Process
|
||||
|
||||
```
|
||||
Input Parsing:
|
||||
├─ Parse flags: --session
|
||||
└─ Parse: task_description (required)
|
||||
|
||||
Step 1: Context-Package Detection
|
||||
└─ Decision (existing package):
|
||||
├─ Valid package exists → Return existing (skip execution)
|
||||
└─ No valid package → Continue to Step 2
|
||||
|
||||
Step 2: Complexity Assessment & Parallel Explore
|
||||
├─ Analyze task_description → classify Low/Medium/High
|
||||
├─ Select exploration angles (1-4 based on complexity)
|
||||
├─ Launch N cli-explore-agents in parallel (spawn_agent)
|
||||
│ └─ Each outputs: exploration-{angle}.json
|
||||
├─ Wait for all agents (batch wait)
|
||||
├─ Close all agents
|
||||
└─ Generate explorations-manifest.json
|
||||
|
||||
Step 3: Invoke Context-Search Agent (with exploration input)
|
||||
├─ Phase 1: Initialization & Pre-Analysis
|
||||
├─ Phase 2: Multi-Source Discovery
|
||||
│ ├─ Track 0: Exploration Synthesis (prioritize & deduplicate)
|
||||
│ ├─ Track 1-4: Existing tracks
|
||||
└─ Phase 3: Synthesis & Packaging
|
||||
└─ Generate context-package.json with exploration_results
|
||||
└─ Lifecycle: spawn_agent → wait → close_agent
|
||||
|
||||
Step 4: Output Verification
|
||||
└─ Verify context-package.json contains exploration_results
|
||||
```
|
||||
|
||||
## Execution Flow
|
||||
|
||||
### Step 1: Context-Package Detection
|
||||
|
||||
**Execute First** - Check if valid package already exists:
|
||||
|
||||
```javascript
|
||||
const contextPackagePath = `.workflow/active/${session_id}/.process/context-package.json`;
|
||||
|
||||
if (file_exists(contextPackagePath)) {
|
||||
const existing = Read(contextPackagePath);
|
||||
|
||||
// Validate package belongs to current session
|
||||
if (existing?.metadata?.session_id === session_id) {
|
||||
console.log("Valid context-package found for session:", session_id);
|
||||
console.log("Stats:", existing.statistics);
|
||||
console.log("Conflict Risk:", existing.conflict_detection.risk_level);
|
||||
return existing; // Skip execution, return existing
|
||||
} else {
|
||||
console.warn("Invalid session_id in existing package, re-generating...");
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Step 2: Complexity Assessment & Parallel Explore
|
||||
|
||||
**Only execute if Step 1 finds no valid package**
|
||||
|
||||
```javascript
|
||||
// 2.1 Complexity Assessment
|
||||
function analyzeTaskComplexity(taskDescription) {
|
||||
const text = taskDescription.toLowerCase();
|
||||
if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
|
||||
if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
|
||||
return 'Low';
|
||||
}
|
||||
|
||||
const ANGLE_PRESETS = {
|
||||
architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
|
||||
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
|
||||
performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
|
||||
bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
|
||||
feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
|
||||
refactor: ['architecture', 'patterns', 'dependencies', 'testing']
|
||||
};
|
||||
|
||||
function selectAngles(taskDescription, complexity) {
|
||||
const text = taskDescription.toLowerCase();
|
||||
let preset = 'feature';
|
||||
if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
|
||||
else if (/security|auth|permission/.test(text)) preset = 'security';
|
||||
else if (/performance|slow|optimi/.test(text)) preset = 'performance';
|
||||
else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
|
||||
|
||||
const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
|
||||
return ANGLE_PRESETS[preset].slice(0, count);
|
||||
}
|
||||
|
||||
const complexity = analyzeTaskComplexity(task_description);
|
||||
const selectedAngles = selectAngles(task_description, complexity);
|
||||
const sessionFolder = `.workflow/active/${session_id}/.process`;
|
||||
|
||||
// 2.2 Launch Parallel Explore Agents
|
||||
const explorationAgents = [];
|
||||
|
||||
// Spawn all agents in parallel
|
||||
selectedAngles.forEach((angle, index) => {
|
||||
const agentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
## Task Objective
|
||||
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
|
||||
|
||||
## Assigned Context
|
||||
- **Exploration Angle**: ${angle}
|
||||
- **Task Description**: ${task_description}
|
||||
- **Session ID**: ${session_id}
|
||||
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
|
||||
- **Output File**: ${sessionFolder}/exploration-${angle}.json
|
||||
|
||||
## MANDATORY FIRST STEPS (Execute by Agent)
|
||||
**You (cli-explore-agent) MUST execute these steps in order:**
|
||||
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
|
||||
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
|
||||
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
|
||||
|
||||
## Exploration Strategy (${angle} focus)
|
||||
|
||||
**Step 1: Structural Scan** (Bash)
|
||||
- get_modules_by_depth.sh → identify modules related to ${angle}
|
||||
- find/rg → locate files relevant to ${angle} aspect
|
||||
- Analyze imports/dependencies from ${angle} perspective
|
||||
|
||||
**Step 2: Semantic Analysis** (Gemini CLI)
|
||||
- How does existing code handle ${angle} concerns?
|
||||
- What patterns are used for ${angle}?
|
||||
- Where would new code integrate from ${angle} viewpoint?
|
||||
|
||||
**Step 3: Write Output**
|
||||
- Consolidate ${angle} findings into JSON
|
||||
- Identify ${angle}-specific clarification needs
|
||||
|
||||
## Expected Output
|
||||
|
||||
**File**: ${sessionFolder}/exploration-${angle}.json
|
||||
|
||||
**Schema Reference**: Use the schema obtained in MANDATORY FIRST STEPS step 3 and follow it exactly
|
||||
|
||||
**Required Fields** (all ${angle} focused):
|
||||
- project_structure: Modules/architecture relevant to ${angle}
|
||||
- relevant_files: Files affected from ${angle} perspective
|
||||
**IMPORTANT**: Use object format with relevance scores for synthesis:
|
||||
\`[{path: "src/file.ts", relevance: 0.85, rationale: "Core ${angle} logic"}]\`
|
||||
Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
|
||||
- patterns: ${angle}-related patterns to follow
|
||||
- dependencies: Dependencies relevant to ${angle}
|
||||
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
|
||||
- constraints: ${angle}-specific limitations/conventions
|
||||
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
|
||||
- _metadata.exploration_angle: "${angle}"
|
||||
|
||||
## Success Criteria
|
||||
- [ ] Schema obtained via cat explore-json-schema.json
|
||||
- [ ] get_modules_by_depth.sh executed
|
||||
- [ ] At least 3 relevant files identified with ${angle} rationale
|
||||
- [ ] Patterns are actionable (code examples, not generic advice)
|
||||
- [ ] Integration points include file:line locations
|
||||
- [ ] Constraints are project-specific to ${angle}
|
||||
- [ ] JSON output follows schema exactly
|
||||
- [ ] clarification_needs includes options + recommended
|
||||
|
||||
## Output
|
||||
Write: ${sessionFolder}/exploration-${angle}.json
|
||||
Return: 2-3 sentence summary of ${angle} findings
|
||||
`
|
||||
});
|
||||
|
||||
explorationAgents.push(agentId);
|
||||
});
|
||||
|
||||
// 2.3 Batch wait for all exploration agents
|
||||
const explorationResults = wait({
|
||||
ids: explorationAgents,
|
||||
timeout_ms: 600000 // 10 minutes
|
||||
});
|
||||
|
||||
// Check for timeouts
|
||||
if (explorationResults.timed_out) {
|
||||
console.log('Some exploration agents timed out - continuing with completed results');
|
||||
}
|
||||
|
||||
// 2.4 Close all exploration agents
|
||||
explorationAgents.forEach(agentId => {
|
||||
close_agent({ id: agentId });
|
||||
});
|
||||
|
||||
// 2.5 Generate Manifest after all complete
|
||||
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`).split('\n').filter(f => f.trim());
|
||||
const explorationManifest = {
|
||||
session_id,
|
||||
task_description,
|
||||
timestamp: new Date().toISOString(),
|
||||
complexity,
|
||||
exploration_count: selectedAngles.length,
|
||||
angles_explored: selectedAngles,
|
||||
explorations: explorationFiles.map(file => {
|
||||
const data = JSON.parse(Read(file));
|
||||
return { angle: data._metadata.exploration_angle, file: file.split('/').pop(), path: file, index: data._metadata.exploration_index };
|
||||
})
|
||||
};
|
||||
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2));
|
||||
```
|
||||
|
||||
### Step 3: Invoke Context-Search Agent
|
||||
|
||||
**Only execute after Step 2 completes**
|
||||
|
||||
```javascript
|
||||
// Load user intent from planning-notes.md (from Phase 1)
|
||||
const planningNotesPath = `.workflow/active/${session_id}/planning-notes.md`;
|
||||
let userIntent = { goal: task_description, key_constraints: "None specified" };
|
||||
|
||||
if (file_exists(planningNotesPath)) {
|
||||
const notesContent = Read(planningNotesPath);
|
||||
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
|
||||
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
|
||||
if (goalMatch) userIntent.goal = goalMatch[1].trim();
|
||||
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
|
||||
}
|
||||
|
||||
// Spawn context-search-agent
|
||||
const contextAgentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/context-search-agent.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
## Execution Mode
|
||||
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
|
||||
|
||||
## Session Information
|
||||
- **Session ID**: ${session_id}
|
||||
- **Task Description**: ${task_description}
|
||||
- **Output Path**: .workflow/active/${session_id}/.process/context-package.json
|
||||
|
||||
## User Intent (from Phase 1 - Planning Notes)
|
||||
**GOAL**: ${userIntent.goal}
|
||||
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
|
||||
|
||||
This is the PRIMARY context source - all subsequent analysis must align with user intent.
|
||||
|
||||
## Exploration Input (from Step 2)
|
||||
- **Manifest**: ${sessionFolder}/explorations-manifest.json
|
||||
- **Exploration Count**: ${explorationManifest.exploration_count}
|
||||
- **Angles**: ${explorationManifest.angles_explored.join(', ')}
|
||||
- **Complexity**: ${complexity}
|
||||
|
||||
## Mission
|
||||
Execute complete context-search-agent workflow for implementation planning:
|
||||
|
||||
### Phase 1: Initialization & Pre-Analysis
|
||||
1. **Project State Loading**:
|
||||
- Read and parse \`.workflow/project-tech.json\`. Use its \`overview\` section as the foundational \`project_context\`. This is your primary source for architecture, tech stack, and key components.
|
||||
- Read and parse \`.workflow/project-guidelines.json\`. Load \`conventions\`, \`constraints\`, and \`learnings\` into a \`project_guidelines\` section.
|
||||
- If files don't exist, proceed with fresh analysis.
|
||||
2. **Detection**: Check for existing context-package (early exit if valid)
|
||||
3. **Foundation**: Initialize CodexLens, get project structure, load docs
|
||||
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
|
||||
|
||||
### Phase 2: Multi-Source Context Discovery
|
||||
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
|
||||
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
|
||||
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
|
||||
- Map user requirements to codebase entities (files, modules, patterns)
|
||||
- Establish baseline priority scores based on user goal alignment
|
||||
- Output: user_intent_mapping.json with preliminary priority scores
|
||||
|
||||
- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
|
||||
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
|
||||
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
|
||||
- **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
|
||||
- **Track 4**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
|
||||
|
||||
### Phase 3: Synthesis, Assessment & Packaging
|
||||
1. Apply relevance scoring and build dependency graph
|
||||
2. **Synthesize 5-source data** (including Track -1): Merge findings from all sources
|
||||
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
|
||||
- **Prioritize the context from \`project-tech.json\`** for architecture and tech stack unless code analysis reveals it's outdated
|
||||
3. **Context Priority Sorting**:
|
||||
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
|
||||
b. Classify files into priority tiers:
|
||||
- **Critical** (score >= 0.85): Directly mentioned in user goal OR exploration critical_files
|
||||
- **High** (0.70-0.84): Key dependencies, patterns required for goal
|
||||
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
|
||||
- **Low** (< 0.50): Contextual awareness only
|
||||
c. Generate dependency_order: Based on dependency graph + user goal sequence
|
||||
d. Document sorting_rationale: Explain prioritization logic
|
||||
|
||||
4. **Populate \`project_context\`**: Directly use the \`overview\` from \`project-tech.json\` to fill the \`project_context\` section. Include description, technology_stack, architecture, and key_components.
|
||||
5. **Populate \`project_guidelines\`**: Load conventions, constraints, and learnings from \`project-guidelines.json\` into a dedicated section.
|
||||
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
|
||||
7. Perform conflict detection with risk assessment
|
||||
8. **Inject historical conflicts** from archive analysis into conflict_detection
|
||||
9. **Generate prioritized_context section**:
|
||||
\`\`\`json
|
||||
{
|
||||
"prioritized_context": {
|
||||
"user_intent": {
|
||||
"goal": "...",
|
||||
"scope": "...",
|
||||
"key_constraints": ["..."]
|
||||
},
|
||||
"priority_tiers": {
|
||||
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
|
||||
"high": [...],
|
||||
"medium": [...],
|
||||
"low": [...]
|
||||
},
|
||||
"dependency_order": ["module1", "module2", "module3"],
|
||||
"sorting_rationale": "Based on user goal alignment (Track -1), exploration critical files, and dependency graph analysis"
|
||||
}
|
||||
}
|
||||
\`\`\`
|
||||
10. Generate and validate context-package.json with prioritized_context field
|
||||
|
||||
## Output Requirements
|
||||
Complete context-package.json with:
|
||||
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
|
||||
- **project_context**: description, technology_stack, architecture, key_components (sourced from \`project-tech.json\`)
|
||||
- **project_guidelines**: {conventions, constraints, quality_rules, learnings} (sourced from \`project-guidelines.json\`)
|
||||
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
|
||||
- **dependencies**: {internal[], external[]} with dependency graph
|
||||
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
|
||||
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
|
||||
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)
|
||||
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
|
||||
|
||||
## Quality Validation
|
||||
Before completion verify:
|
||||
- [ ] Valid JSON format with all required fields
|
||||
- [ ] File relevance accuracy >80%
|
||||
- [ ] Dependency graph complete (max 2 transitive levels)
|
||||
- [ ] Conflict risk level calculated correctly
|
||||
- [ ] No sensitive data exposed
|
||||
- [ ] Total files <=50 (prioritize high-relevance)
|
||||
|
||||
## Planning Notes Record (REQUIRED)
|
||||
After completing context-package.json, append a brief execution record to planning-notes.md:
|
||||
|
||||
**File**: .workflow/active/${session_id}/planning-notes.md
|
||||
**Location**: Under "## Context Findings (Phase 2)" section
|
||||
**Format**:
|
||||
\`\`\`
|
||||
### [Context-Search Agent] YYYY-MM-DD
|
||||
- **Note**: [brief summary of key findings]
|
||||
\`\`\`
|
||||
|
||||
Execute autonomously following agent documentation.
|
||||
Report completion with statistics.
|
||||
`
|
||||
});
|
||||
|
||||
// Wait for context agent to complete
|
||||
const contextResult = wait({
|
||||
ids: [contextAgentId],
|
||||
timeout_ms: 900000 // 15 minutes
|
||||
});
|
||||
|
||||
// Close context agent
|
||||
close_agent({ id: contextAgentId });
|
||||
```
|
||||
|
||||
### Step 4: Output Verification
|
||||
|
||||
After agent completes, verify output:
|
||||
|
||||
```javascript
|
||||
// Verify file was created
|
||||
const outputPath = `.workflow/active/${session_id}/.process/context-package.json`;
|
||||
if (!file_exists(outputPath)) {
|
||||
throw new Error("Agent failed to generate context-package.json");
|
||||
}
|
||||
|
||||
// Verify exploration_results included
const pkg = JSON.parse(Read(outputPath));
if (pkg.exploration_results?.exploration_count > 0) {
  console.log(`Exploration results aggregated: ${pkg.exploration_results.exploration_count} angles`);
} else {
  console.warn("context-package.json has no exploration_results - check Step 2 exploration outputs");
}
```
|
||||
|
||||
## Parameter Reference
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
|-----------|------|----------|-------------|
|
||||
| `--session` | string | Yes | Workflow session ID (e.g., WFS-user-auth) |
|
||||
| `task_description` | string | Yes | Detailed task description for context extraction |
|
||||
|
||||
## Post-Phase Update
|
||||
|
||||
After context-gather completes, update planning-notes.md:
|
||||
|
||||
```javascript
|
||||
const contextPackage = JSON.parse(Read(contextPath))
|
||||
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low'
|
||||
const criticalFiles = (contextPackage.exploration_results?.aggregated_insights?.critical_files || [])
|
||||
.slice(0, 5).map(f => f.path)
|
||||
const archPatterns = contextPackage.project_context?.architecture_patterns || []
|
||||
const constraints = contextPackage.exploration_results?.aggregated_insights?.constraints || []
|
||||
|
||||
// Update Phase 2 section
|
||||
Edit(planningNotesPath, {
|
||||
old: '## Context Findings (Phase 2)\n(To be filled by context-gather)',
|
||||
new: `## Context Findings (Phase 2)
|
||||
|
||||
- **CRITICAL_FILES**: ${criticalFiles.join(', ') || 'None identified'}
|
||||
- **ARCHITECTURE**: ${archPatterns.join(', ') || 'Not detected'}
|
||||
- **CONFLICT_RISK**: ${conflictRisk}
|
||||
- **CONSTRAINTS**: ${constraints.length > 0 ? constraints.join('; ') : 'None'}`
|
||||
})
|
||||
|
||||
// Append Phase 2 constraints to consolidated list
|
||||
Edit(planningNotesPath, {
|
||||
old: '## Consolidated Constraints (Phase 4 Input)',
|
||||
new: `## Consolidated Constraints (Phase 4 Input)
|
||||
${constraints.map((c, i) => `${i + 2}. [Context] ${c}`).join('\n')}`
|
||||
})
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- **Detection-first**: Always check for existing package before invoking agent
|
||||
- **User intent integration**: Load user intent from planning-notes.md (Phase 1 output)
|
||||
- **Output**: Generates `context-package.json` with `prioritized_context` field
|
||||
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call
|
||||
- **Explicit Lifecycle**: Always close_agent after wait to free resources
|
||||
- **Batch Wait**: Use single wait call for multiple parallel agents for efficiency
|
||||
|
||||
## Output
|
||||
|
||||
- **Variable**: `contextPath` (e.g., `.workflow/active/WFS-xxx/.process/context-package.json`)
|
||||
- **Variable**: `conflictRisk` (none/low/medium/high)
|
||||
- **File**: Updated `planning-notes.md` with context findings
|
||||
- **Decision**: If `conflictRisk >= medium` → Phase 3, else → Phase 4
|
||||
|
||||
## Next Phase
|
||||
|
||||
Return to orchestrator showing Phase 2 results, then auto-continue:
|
||||
- If `conflict_risk >= medium` → [Phase 3: Conflict Resolution](03-conflict-resolution.md)
|
||||
- If `conflict_risk < medium` → [Phase 4: Task Generation](04-task-generation.md)
|
||||
@@ -1,693 +0,0 @@
|
||||
# Phase 3: Conflict Resolution
|
||||
|
||||
Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen.
|
||||
|
||||
## Objective
|
||||
|
||||
- Analyze conflicts between plan and existing code, **including module scenario uniqueness detection**
|
||||
- Generate multiple resolution strategies with **iterative clarification until boundaries are clear**
|
||||
- Apply selected modifications to brainstorm artifacts
|
||||
|
||||
**Scope**: Detection and strategy generation only - NO code modification or task creation.
|
||||
|
||||
**Trigger**: Auto-executes when `conflict_risk >= medium`.
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Auto-select recommended strategy for each conflict, skip clarification questions.
|
||||
|
||||
## Core Responsibilities
|
||||
|
||||
| Responsibility | Description |
|
||||
|---------------|-------------|
|
||||
| **Detect Conflicts** | Analyze plan vs existing code inconsistencies |
|
||||
| **Scenario Uniqueness** | Search and compare new modules with existing modules for functional overlaps |
|
||||
| **Generate Strategies** | Provide 2-4 resolution options per conflict |
|
||||
| **Iterative Clarification** | Ask unlimited questions until scenario boundaries are clear and unique |
|
||||
| **Agent Re-analysis** | Dynamically update strategies based on user clarifications |
|
||||
| **CLI Analysis** | Use Gemini/Qwen (Claude fallback) |
|
||||
| **User Decision** | Present options ONE BY ONE, never auto-apply |
|
||||
| **Direct Text Output** | Output questions via text directly, NEVER use bash echo/printf |
|
||||
| **Structured Data** | JSON output for programmatic processing, NO file generation |
|
||||
| **Explicit Lifecycle** | Manage agent lifecycle with spawn_agent → wait → send_input → close_agent |
|
||||
|
||||
## Conflict Categories
|
||||
|
||||
### 1. Architecture Conflicts
|
||||
- Incompatible design patterns
|
||||
- Module structure changes
|
||||
- Pattern migration requirements
|
||||
|
||||
### 2. API Conflicts
|
||||
- Breaking contract changes
|
||||
- Signature modifications
|
||||
- Public interface impacts
|
||||
|
||||
### 3. Data Model Conflicts
|
||||
- Schema modifications
|
||||
- Type breaking changes
|
||||
- Data migration needs
|
||||
|
||||
### 4. Dependency Conflicts
|
||||
- Version incompatibilities
|
||||
- Setup conflicts
|
||||
- Breaking updates
|
||||
|
||||
### 5. Module Scenario Overlap
|
||||
- Functional overlap between new and existing modules
|
||||
- Scenario boundary ambiguity
|
||||
- Duplicate responsibility detection
|
||||
- Module merge/split decisions
|
||||
- **Requires iterative clarification until uniqueness confirmed**
|
||||
|
||||
## Execution Process
|
||||
|
||||
```
|
||||
Input Parsing:
|
||||
├─ Parse flags: --session, --context
|
||||
└─ Validation: Both REQUIRED, conflict_risk >= medium
|
||||
|
||||
Phase 1: Validation
|
||||
├─ Step 1: Verify session directory exists
|
||||
├─ Step 2: Load context-package.json
|
||||
├─ Step 3: Check conflict_risk (skip if none/low)
|
||||
└─ Step 4: Prepare agent task prompt
|
||||
|
||||
Phase 2: CLI-Powered Analysis (Agent with Dual Role)
|
||||
├─ Spawn agent with exploration + planning capability
|
||||
├─ Execute Gemini analysis (Qwen fallback)
|
||||
├─ Detect conflicts including ModuleOverlap category
|
||||
└─ Generate 2-4 strategies per conflict with modifications
|
||||
|
||||
Phase 3: Iterative User Interaction (using send_input)
|
||||
└─ FOR each conflict (one by one):
|
||||
├─ Display conflict with overlap_analysis (if ModuleOverlap)
|
||||
├─ Display strategies (2-4 + custom option)
|
||||
├─ User selects strategy
|
||||
└─ IF clarification_needed:
|
||||
├─ Collect answers
|
||||
├─ send_input for agent re-analysis
|
||||
└─ Loop until uniqueness_confirmed (max 10 rounds)
|
||||
|
||||
Phase 4: Apply Modifications
|
||||
├─ Step 1: Extract modifications from resolved strategies
|
||||
├─ Step 2: Apply using Edit tool
|
||||
├─ Step 3: Update context-package.json (mark resolved)
|
||||
├─ Step 4: Close agent
|
||||
└─ Step 5: Output custom conflict summary (if any)
|
||||
```
|
||||
|
||||
## Execution Flow
|
||||
|
||||
### Phase 1: Validation
|
||||
```
|
||||
1. Verify session directory exists
|
||||
2. Load context-package.json
|
||||
3. Check conflict_risk (skip if none/low)
|
||||
4. Prepare agent task prompt
|
||||
```
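
A minimal sketch of these checks, reusing the `file_exists`/`Read` helpers used elsewhere in this document (error wording is illustrative):

```javascript
// Validation sketch - mirrors the four steps above; skips Phase 3 when risk is below medium
function validatePhase3Inputs(session_id, contextPath) {
  if (!file_exists(`.workflow/active/${session_id}`)) throw new Error(`Session directory missing: ${session_id}`);
  if (!file_exists(contextPath)) throw new Error(`context-package.json not found: ${contextPath}`);
  const contextPackage = JSON.parse(Read(contextPath));
  const risk = contextPackage.conflict_detection?.risk_level || "none";
  if (risk === "none" || risk === "low") return { skip: true, risk };
  return { skip: false, risk, contextPackage };
}
```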
|
||||
|
||||
### Phase 2: CLI-Powered Analysis
|
||||
|
||||
**Agent Delegation with Dual Role** (enables multi-round interaction):
|
||||
```javascript
|
||||
// Spawn agent with combined analysis + resolution capability
|
||||
const conflictAgentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/cli-execution-agent.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
## Context
|
||||
- Session: ${session_id}
|
||||
- Risk: ${conflict_risk}
|
||||
- Files: ${existing_files_list}
|
||||
|
||||
## Exploration Context (from context-package.exploration_results)
|
||||
- Exploration Count: ${contextPackage.exploration_results?.exploration_count || 0}
|
||||
- Angles Analyzed: ${JSON.stringify(contextPackage.exploration_results?.angles || [])}
|
||||
- Pre-identified Conflict Indicators: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.conflict_indicators || [])}
|
||||
- Critical Files: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.critical_files?.map(f => f.path) || [])}
|
||||
- All Patterns: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_patterns || [])}
|
||||
- All Integration Points: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_integration_points || [])}
|
||||
|
||||
## Analysis Steps
|
||||
|
||||
### 0. Load Output Schema (MANDATORY)
|
||||
Execute: cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json
|
||||
|
||||
### 1. Load Context
|
||||
- Read existing files from conflict_detection.existing_files
|
||||
- Load plan from .workflow/active/${session_id}/.process/context-package.json
|
||||
- Load exploration_results and use aggregated_insights for enhanced analysis
|
||||
- Extract role analyses and requirements
|
||||
|
||||
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
|
||||
|
||||
Primary (Gemini):
|
||||
ccw cli -p "
|
||||
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
|
||||
TASK:
|
||||
• **Review pre-identified conflict_indicators from exploration results**
|
||||
• Compare architectures (use exploration key_patterns)
|
||||
• Identify breaking API changes
|
||||
• Detect data model incompatibilities
|
||||
• Assess dependency conflicts
|
||||
• **Analyze module scenario uniqueness**
|
||||
- Use exploration integration_points for precise locations
|
||||
- Cross-validate with exploration critical_files
|
||||
- Generate clarification questions for boundary definition
|
||||
MODE: analysis
|
||||
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/${session_id}/**/*
|
||||
EXPECTED: Conflict list with severity ratings, including:
|
||||
- Validation of exploration conflict_indicators
|
||||
- ModuleOverlap conflicts with overlap_analysis
|
||||
- Targeted clarification questions
|
||||
CONSTRAINTS: Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
|
||||
" --tool gemini --mode analysis --rule analysis-code-patterns --cd ${project_root}
|
||||
|
||||
Fallback: Qwen (same prompt) → Claude (manual analysis)
|
||||
|
||||
### 3. Generate Strategies (2-4 per conflict)
|
||||
|
||||
Template per conflict:
|
||||
- Severity: Critical/High/Medium
|
||||
- Category: Architecture/API/Data/Dependency/ModuleOverlap
|
||||
- Affected files + impact
|
||||
- **For ModuleOverlap**: Include overlap_analysis with existing modules and scenarios
|
||||
- Options with pros/cons, effort, risk
|
||||
- **For ModuleOverlap strategies**: Add clarification_needed questions for boundary definition
|
||||
- Recommended strategy + rationale
|
||||
|
||||
### 4. Return Structured Conflict Data
|
||||
|
||||
⚠️ Output to conflict-resolution.json (generated in Phase 4)
|
||||
|
||||
**Schema Reference**: Execute \`cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json\` to get full schema
|
||||
|
||||
Return JSON following the schema above. Key requirements:
|
||||
- Minimum 2 strategies per conflict, max 4
|
||||
- All text in Chinese for user-facing fields (brief, name, pros, cons, modification_suggestions)
|
||||
- modifications.old_content: 20-100 chars for unique Edit tool matching
|
||||
- modifications.new_content: preserves markdown formatting
|
||||
- modification_suggestions: 2-5 actionable suggestions for custom handling
|
||||
|
||||
### 5. Planning Notes Record (REQUIRED)
|
||||
After analysis complete, append a brief execution record to planning-notes.md:
|
||||
|
||||
**File**: .workflow/active/${session_id}/planning-notes.md
|
||||
**Location**: Under "## Conflict Decisions (Phase 3)" section
|
||||
**Format**:
|
||||
\`\`\`
|
||||
### [Conflict-Resolution Agent] YYYY-MM-DD
|
||||
- **Note**: [brief summary of conflict types, resolution strategies, key decisions]
|
||||
\`\`\`
|
||||
`
|
||||
});
|
||||
|
||||
// Wait for initial analysis
|
||||
const analysisResult = wait({
|
||||
ids: [conflictAgentId],
|
||||
timeout_ms: 600000 // 10 minutes
|
||||
});
|
||||
|
||||
// Parse conflicts from result
|
||||
const conflicts = parseConflictsFromResult(analysisResult);
|
||||
```
|
||||
|
||||
### Phase 3: User Interaction Loop
|
||||
|
||||
```javascript
|
||||
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
|
||||
|
||||
FOR each conflict:
|
||||
round = 0, clarified = false, userClarifications = []
|
||||
|
||||
WHILE (!clarified && round++ < 10):
|
||||
// 1. Display conflict info (text output for context)
|
||||
displayConflictSummary(conflict) // id, brief, severity, overlap_analysis if ModuleOverlap
|
||||
|
||||
// 2. Strategy selection
|
||||
if (autoYes) {
|
||||
console.log(`[--yes] Auto-selecting recommended strategy`)
|
||||
selectedStrategy = conflict.strategies[conflict.recommended || 0]
|
||||
clarified = true // Skip clarification loop
|
||||
} else {
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: formatStrategiesForDisplay(conflict.strategies),
|
||||
header: "策略选择",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
...conflict.strategies.map((s, i) => ({
|
||||
label: `${s.name}${i === conflict.recommended ? ' (推荐)' : ''}`,
|
||||
description: `${s.complexity}复杂度 | ${s.risk}风险${s.clarification_needed?.length ? ' | ⚠️需澄清' : ''}`
|
||||
})),
|
||||
{ label: "自定义修改", description: `建议: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
|
||||
]
|
||||
}]
|
||||
})
|
||||
|
||||
// 3. Handle selection
|
||||
if (userChoice === "自定义修改") {
|
||||
customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
|
||||
break
|
||||
}
|
||||
|
||||
selectedStrategy = findStrategyByName(userChoice)
|
||||
}
|
||||
|
||||
// 4. Clarification (if needed) - using send_input for agent re-analysis
|
||||
if (!autoYes && selectedStrategy.clarification_needed?.length > 0) {
|
||||
for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
|
||||
AskUserQuestion({
|
||||
questions: batch.map((q, i) => ({
|
||||
question: q, header: `澄清${i+1}`, multiSelect: false,
|
||||
options: [{ label: "详细说明", description: "提供答案" }]
|
||||
}))
|
||||
})
|
||||
userClarifications.push(...collectAnswers(batch))
|
||||
}
|
||||
|
||||
// 5. Agent re-analysis via send_input (key: agent stays active)
|
||||
send_input({
|
||||
id: conflictAgentId,
|
||||
message: `
|
||||
## CLARIFICATION ANSWERS
|
||||
Conflict: ${conflict.id}
|
||||
Strategy: ${selectedStrategy.name}
|
||||
User Clarifications: ${JSON.stringify(userClarifications)}
|
||||
|
||||
## REQUEST
|
||||
Based on the clarifications above, update the strategy assessment.
|
||||
Output: { uniqueness_confirmed: boolean, rationale: string, updated_strategy: {...}, remaining_questions: [...] }
|
||||
`
|
||||
});
|
||||
|
||||
// Wait for re-analysis result
|
||||
const reanalysisResult = wait({
|
||||
ids: [conflictAgentId],
|
||||
timeout_ms: 300000 // 5 minutes
|
||||
});
|
||||
|
||||
const parsedResult = parseReanalysisResult(reanalysisResult);
|
||||
|
||||
if (parsedResult.uniqueness_confirmed) {
|
||||
selectedStrategy = { ...parsedResult.updated_strategy, clarifications: userClarifications }
|
||||
clarified = true
|
||||
} else {
|
||||
selectedStrategy.clarification_needed = parsedResult.remaining_questions
|
||||
}
|
||||
} else {
|
||||
clarified = true
|
||||
}
|
||||
|
||||
if (clarified) resolvedConflicts.push({ conflict, strategy: selectedStrategy })
|
||||
END WHILE
|
||||
END FOR
|
||||
|
||||
selectedStrategies = resolvedConflicts.map(r => ({
|
||||
conflict_id: r.conflict.id, strategy: r.strategy, clarifications: r.strategy.clarifications || []
|
||||
}))
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
- AskUserQuestion: max 4 questions/call, batch if more
|
||||
- Strategy options: 2-4 strategies + "自定义修改"
|
||||
- Clarification loop via send_input: max 10 rounds; the agent determines `uniqueness_confirmed`
|
||||
- Agent stays active throughout interaction (no close_agent until Phase 4 complete)
|
||||
- Custom conflicts: record overlap_analysis for later manual handling
|
||||
|
||||
### Phase 4: Apply Modifications
|
||||
|
||||
```javascript
|
||||
// 1. Extract modifications from resolved strategies
|
||||
const modifications = [];
|
||||
selectedStrategies.forEach(item => {
|
||||
if (item.strategy && item.strategy.modifications) {
|
||||
modifications.push(...item.strategy.modifications.map(mod => ({
|
||||
...mod,
|
||||
conflict_id: item.conflict_id,
|
||||
clarifications: item.clarifications
|
||||
})));
|
||||
}
|
||||
});
|
||||
|
||||
console.log(`\n正在应用 ${modifications.length} 个修改...`);
|
||||
|
||||
// 2. Apply each modification using Edit tool (with fallback to context-package.json)
|
||||
const appliedModifications = [];
|
||||
const failedModifications = [];
|
||||
const fallbackConstraints = []; // For files that don't exist
|
||||
|
||||
modifications.forEach((mod, idx) => {
|
||||
try {
|
||||
console.log(`[${idx + 1}/${modifications.length}] 修改 ${mod.file}...`);
|
||||
|
||||
// Check if target file exists (brainstorm files may not exist in lite workflow)
|
||||
if (!file_exists(mod.file)) {
|
||||
console.log(` ⚠️ 文件不存在,写入 context-package.json 作为约束`);
|
||||
fallbackConstraints.push({
|
||||
source: "conflict-resolution",
|
||||
conflict_id: mod.conflict_id,
|
||||
target_file: mod.file,
|
||||
section: mod.section,
|
||||
change_type: mod.change_type,
|
||||
content: mod.new_content,
|
||||
rationale: mod.rationale
|
||||
});
|
||||
return; // Skip to next modification
|
||||
}
|
||||
|
||||
if (mod.change_type === "update") {
|
||||
Edit({
|
||||
file_path: mod.file,
|
||||
old_string: mod.old_content,
|
||||
new_string: mod.new_content
|
||||
});
|
||||
} else if (mod.change_type === "add") {
|
||||
// Handle addition - append or insert based on section
|
||||
const fileContent = Read(mod.file);
|
||||
const updated = insertContentAfterSection(fileContent, mod.section, mod.new_content);
|
||||
Write(mod.file, updated);
|
||||
} else if (mod.change_type === "remove") {
|
||||
Edit({
|
||||
file_path: mod.file,
|
||||
old_string: mod.old_content,
|
||||
new_string: ""
|
||||
});
|
||||
}
|
||||
|
||||
appliedModifications.push(mod);
|
||||
console.log(` ✓ 成功`);
|
||||
} catch (error) {
|
||||
console.log(` ✗ 失败: ${error.message}`);
|
||||
failedModifications.push({ ...mod, error: error.message });
|
||||
}
|
||||
});
|
||||
|
||||
// 2b. Generate conflict-resolution.json output file
|
||||
const resolutionOutput = {
|
||||
session_id: sessionId,
|
||||
resolved_at: new Date().toISOString(),
|
||||
summary: {
|
||||
total_conflicts: conflicts.length,
|
||||
resolved_with_strategy: selectedStrategies.length,
|
||||
custom_handling: customConflicts.length,
|
||||
fallback_constraints: fallbackConstraints.length
|
||||
},
|
||||
resolved_conflicts: selectedStrategies.map(s => ({
|
||||
conflict_id: s.conflict_id,
|
||||
strategy_name: s.strategy.name,
|
||||
strategy_approach: s.strategy.approach,
|
||||
clarifications: s.clarifications || [],
|
||||
modifications_applied: s.strategy.modifications?.filter(m =>
|
||||
appliedModifications.some(am => am.conflict_id === s.conflict_id)
|
||||
) || []
|
||||
})),
|
||||
custom_conflicts: customConflicts.map(c => ({
|
||||
id: c.id,
|
||||
brief: c.brief,
|
||||
category: c.category,
|
||||
suggestions: c.suggestions,
|
||||
overlap_analysis: c.overlap_analysis || null
|
||||
})),
|
||||
planning_constraints: fallbackConstraints, // Constraints for files that don't exist
|
||||
failed_modifications: failedModifications
|
||||
};
|
||||
|
||||
const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
|
||||
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
|
||||
|
||||
// 3. Update context-package.json with resolution details (reference to JSON file)
|
||||
const contextPackage = JSON.parse(Read(contextPath));
|
||||
contextPackage.conflict_detection.conflict_risk = "resolved";
|
||||
contextPackage.conflict_detection.resolution_file = resolutionPath; // Reference to detailed JSON
|
||||
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
|
||||
contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
|
||||
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
|
||||
Write(contextPath, JSON.stringify(contextPackage, null, 2));
|
||||
|
||||
// 4. Close the conflict agent (IMPORTANT: explicit lifecycle management)
|
||||
close_agent({ id: conflictAgentId });
|
||||
|
||||
// 5. Output custom conflict summary with overlap analysis (if any)
|
||||
if (customConflicts.length > 0) {
|
||||
console.log(`\n${'='.repeat(60)}`);
|
||||
console.log(`需要自定义处理的冲突 (${customConflicts.length})`);
|
||||
console.log(`${'='.repeat(60)}\n`);
|
||||
|
||||
customConflicts.forEach(conflict => {
|
||||
console.log(`【${conflict.category}】${conflict.id}: ${conflict.brief}`);
|
||||
|
||||
// Show overlap analysis for ModuleOverlap conflicts
|
||||
if (conflict.category === 'ModuleOverlap' && conflict.overlap_analysis) {
|
||||
console.log(`\n场景重叠信息:`);
|
||||
console.log(` 新模块: ${conflict.overlap_analysis.new_module.name}`);
|
||||
console.log(` 场景: ${conflict.overlap_analysis.new_module.scenarios.join(', ')}`);
|
||||
console.log(`\n 与以下模块重叠:`);
|
||||
conflict.overlap_analysis.existing_modules.forEach(mod => {
|
||||
console.log(` - ${mod.name} (${mod.file})`);
|
||||
console.log(` 重叠场景: ${mod.overlap_scenarios.join(', ')}`);
|
||||
});
|
||||
}
|
||||
|
||||
console.log(`\n修改建议:`);
|
||||
conflict.suggestions.forEach(suggestion => {
|
||||
console.log(` - ${suggestion}`);
|
||||
});
|
||||
console.log();
|
||||
});
|
||||
}
|
||||
|
||||
// 6. Output failure summary (if any)
|
||||
if (failedModifications.length > 0) {
|
||||
console.log(`\n⚠️ 部分修改失败 (${failedModifications.length}):`);
|
||||
failedModifications.forEach(mod => {
|
||||
console.log(` - ${mod.file}: ${mod.error}`);
|
||||
});
|
||||
}
|
||||
|
||||
// 7. Return summary
|
||||
return {
|
||||
total_conflicts: conflicts.length,
|
||||
resolved_with_strategy: selectedStrategies.length,
|
||||
custom_handling: customConflicts.length,
|
||||
modifications_applied: appliedModifications.length,
|
||||
modifications_failed: failedModifications.length,
|
||||
modified_files: [...new Set(appliedModifications.map(m => m.file))],
|
||||
custom_conflicts: customConflicts,
|
||||
clarification_records: selectedStrategies.filter(s => s.clarifications.length > 0)
|
||||
};
|
||||
```
|
||||
|
||||
**Validation**:
|
||||
```
|
||||
✓ Agent returns valid JSON structure with ModuleOverlap conflicts
|
||||
✓ Conflicts processed ONE BY ONE (not in batches)
|
||||
✓ ModuleOverlap conflicts include overlap_analysis field
|
||||
✓ Strategies with clarification_needed display questions
|
||||
✓ User selections captured correctly per conflict
|
||||
✓ Clarification loop continues until uniqueness confirmed via send_input
|
||||
✓ Agent re-analysis with user clarifications updates strategy
|
||||
✓ Uniqueness confirmation based on clear scenario boundaries
|
||||
✓ Maximum 10 rounds per conflict safety limit enforced
|
||||
✓ Edit tool successfully applies modifications
|
||||
✓ guidance-specification.md updated
|
||||
✓ Role analyses (*.md) updated
|
||||
✓ context-package.json marked as resolved with clarification records
|
||||
✓ Custom conflicts display overlap_analysis for manual handling
|
||||
✓ Agent closed after all interactions complete (explicit lifecycle)
|
||||
✓ Agent log saved to .workflow/active/{session_id}/.chat/
|
||||
```
|
||||
|
||||
## Output Format
|
||||
|
||||
### Primary Output: conflict-resolution.json
|
||||
|
||||
**Path**: `.workflow/active/{session_id}/.process/conflict-resolution.json`
|
||||
|
||||
**Schema**:
|
||||
```json
|
||||
{
|
||||
"session_id": "WFS-xxx",
|
||||
"resolved_at": "ISO timestamp",
|
||||
"summary": {
|
||||
"total_conflicts": 3,
|
||||
"resolved_with_strategy": 2,
|
||||
"custom_handling": 1,
|
||||
"fallback_constraints": 0
|
||||
},
|
||||
"resolved_conflicts": [
|
||||
{
|
||||
"conflict_id": "CON-001",
|
||||
"strategy_name": "策略名称",
|
||||
"strategy_approach": "实现方法",
|
||||
"clarifications": [],
|
||||
"modifications_applied": []
|
||||
}
|
||||
],
|
||||
"custom_conflicts": [
|
||||
{
|
||||
"id": "CON-002",
|
||||
"brief": "冲突摘要",
|
||||
"category": "ModuleOverlap",
|
||||
"suggestions": ["建议1", "建议2"],
|
||||
"overlap_analysis": null
|
||||
}
|
||||
],
|
||||
"planning_constraints": [],
|
||||
"failed_modifications": []
|
||||
}
|
||||
```
|
||||

### Key Requirements

| Requirement | Details |
|------------|---------|
| **Conflict batching** | Max 10 conflicts per round (no total limit) |
| **Strategy count** | 2-4 strategies per conflict |
| **Modifications** | Each strategy includes file paths, old_content, new_content |
| **User-facing text** | Chinese (brief, strategy names, pros/cons) |
| **Technical fields** | English (severity, category, complexity, risk) |
| **old_content precision** | 20-100 chars for unique Edit tool matching |
| **File targets** | guidance-specification.md, role analyses (*.md) |
| **Agent lifecycle** | Keep active during interaction, close after Phase 4 |

## Error Handling

### Recovery Strategy
```
1. Pre-check: Verify conflict_risk ≥ medium
2. Monitor: Track agent via wait with timeout
3. Validate: Parse agent JSON output
4. Recover:
   - Agent failure → check logs + report error
   - Invalid JSON → retry once with Claude fallback
   - CLI failure → fallback to Claude analysis
   - Edit tool failure → report affected files + rollback option
   - User cancels → mark as "unresolved", continue to task-generate
5. Degrade: If all fail, generate minimal conflict report and skip modifications
6. Cleanup: Always close_agent even on error path
```
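
The pre-check in step 1 can run before any agent is launched. A minimal sketch, assuming `context-package.json` exposes a top-level `conflict_risk` field (low / medium / high) and `jq` is available:

```bash
# Pre-check sketch: skip conflict resolution when risk is below medium.
# Assumption: conflict_risk is a top-level field in context-package.json.
SESSION_ID="WFS-xxx"   # placeholder
CONTEXT=".workflow/active/${SESSION_ID}/.process/context-package.json"

risk=$(jq -r '.conflict_risk // "low"' "$CONTEXT")
case "$risk" in
  medium|high) echo "conflict_risk=$risk — proceeding with conflict resolution" ;;
  *)           echo "conflict_risk=$risk — below threshold, skipping phase" ;;
esac
```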

### Rollback Handling
```
If Edit tool fails mid-application:
1. Log all successfully applied modifications
2. Output rollback option via text interaction
3. If rollback selected: restore files from git or backups
4. If continue: mark partial resolution in context-package.json
```
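
One way to implement step 3, assuming the brainstorm artifacts are tracked in git and carry no other uncommitted edits worth keeping (otherwise restore from backups instead):

```bash
# Rollback sketch: discard this phase's edits to the brainstorm artifacts.
SESSION_DIR=".workflow/active/WFS-xxx"   # placeholder session path

git checkout -- \
  "${SESSION_DIR}/.brainstorm/guidance-specification.md" \
  "${SESSION_DIR}/.brainstorm/"*/analysis.md
```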

## Integration

### Interface
**Input**:
- `--session` (required): WFS-{session-id}
- `--context` (required): context-package.json path
- Requires: `conflict_risk >= medium`

**Output**:
- Generated file:
  - `.workflow/active/{session_id}/.process/conflict-resolution.json` (primary output)
- Modified files (if exist):
  - `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
  - `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
  - `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved, resolution_file reference)

**User Interaction**:
- **Iterative conflict processing**: One conflict at a time, not in batches
- Each conflict: 2-4 strategy options + "自定义修改" (custom modification) option (with suggestions)
- **Clarification loop via send_input**: Questions repeat per conflict until uniqueness is confirmed (max 10 rounds)
- **ModuleOverlap conflicts**: Display overlap_analysis with existing modules
- **Agent re-analysis**: Dynamic strategy updates based on user clarifications

### Success Criteria
```
✓ CLI analysis returns valid JSON structure with ModuleOverlap category
✓ Agent performs scenario uniqueness detection (searches existing modules)
✓ Conflicts processed ONE BY ONE with iterative clarification via send_input
✓ Min 2 strategies per conflict with modifications
✓ ModuleOverlap conflicts include overlap_analysis with existing modules
✓ Strategies requiring clarification include clarification_needed questions
✓ Each conflict includes 2-5 modification_suggestions
✓ Text output displays conflict with overlap analysis (if ModuleOverlap)
✓ User selections captured per conflict
✓ Clarification loop continues until uniqueness confirmed (max 10 rounds)
✓ Agent re-analysis with user clarifications updates strategy
✓ Uniqueness confirmation based on clear scenario boundaries
✓ Edit tool applies modifications successfully
✓ Custom conflicts displayed with overlap_analysis for manual handling
✓ guidance-specification.md updated with resolved conflicts
✓ Role analyses (*.md) updated with resolved conflicts
✓ context-package.json marked as "resolved" with clarification records
✓ conflict-resolution.json generated with full resolution details
✓ Agent explicitly closed after all interactions
✓ Modification summary includes:
  - Total conflicts
  - Resolved with strategy (count)
  - Custom handling (count)
  - Clarification records
  - Overlap analysis for custom ModuleOverlap conflicts
✓ Agent log saved to .workflow/active/{session_id}/.chat/
✓ Error handling robust (validate/retry/degrade)
```
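
A lightweight post-run check (not part of the workflow itself) that the primary output has the expected top-level shape; a sketch assuming `jq`:

```bash
# Sanity-check sketch for conflict-resolution.json structure.
RES=".workflow/active/WFS-xxx/.process/conflict-resolution.json"   # placeholder path

if jq -e 'has("session_id") and has("summary") and has("resolved_conflicts") and has("custom_conflicts")' "$RES" >/dev/null; then
  echo "conflict-resolution.json: structure OK"
else
  echo "conflict-resolution.json: missing required top-level fields"
fi
```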

## Post-Phase Update

If Phase 3 was executed, update planning-notes.md:

```javascript
// sessionId, planningNotesPath and constraintCount are assumed to be defined
// earlier in the orchestrator (constraintCount = constraints already listed).
const conflictResPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`

if (file_exists(conflictResPath)) {
  const conflictRes = JSON.parse(Read(conflictResPath))
  const resolved = conflictRes.resolved_conflicts || []
  const planningConstraints = conflictRes.planning_constraints || []

  // Update Phase 3 section
  Edit(planningNotesPath, {
    old: '## Conflict Decisions (Phase 3)\n(To be filled if conflicts detected)',
    new: `## Conflict Decisions (Phase 3)

- **RESOLVED**: ${resolved.map(r => `${r.conflict_id} → ${r.strategy_name}`).join('; ') || 'None'}
- **CUSTOM_HANDLING**: ${conflictRes.custom_conflicts?.map(c => c.id).join(', ') || 'None'}
- **CONSTRAINTS**: ${planningConstraints.map(c => c.content).join('; ') || 'None'}`
  })

  // Append Phase 3 constraints to consolidated list
  if (planningConstraints.length > 0) {
    Edit(planningNotesPath, {
      old: '## Consolidated Constraints (Phase 4 Input)',
      new: `## Consolidated Constraints (Phase 4 Input)
${planningConstraints.map((c, i) => `${constraintCount + i + 1}. [Conflict] ${c.content}`).join('\n')}`
    })
  }
}
```

## Memory State Check

After Phase 3 completion, evaluate context window usage.
If memory usage is high (>120K tokens):

```bash
# Codex: use compact command if available
codex compact
```

## Output

- **File**: `.workflow/active/{sessionId}/.process/conflict-resolution.json`
- **Modified files**: brainstorm artifacts (guidance-specification.md, role analyses)
- **Updated**: `context-package.json` with resolved conflict status

## Next Phase

Return to orchestrator, then auto-continue to [Phase 4: Task Generation](04-task-generation.md).

@@ -1,557 +0,0 @@
---
name: worktree-merge
description: Merge completed worktrees back to main branch. Handle cross-group conflicts and dependency order.
argument-hint: "[--plan=<plan-session>] [--group=<group-id>] [--all] [--cleanup]"
---

# Codex Worktree-Merge Workflow

## Quick Start

Merge completed execution group worktrees back to the main branch.

**Core workflow**: Load Status → Check Dependencies → Merge Groups → Cleanup Worktrees

**Key features**:
- **Dependency-aware merge**: Merge groups in correct order
- **Conflict detection**: Check for cross-group file conflicts
- **Selective or bulk merge**: Merge single group or all completed groups
- **Cleanup option**: Remove worktrees after successful merge

## Overview

1. **Load Status** - Read worktree-status.json and execution-groups.json
2. **Validate Dependencies** - Check group dependencies are merged first
3. **Merge Worktree** - Merge group's branch to main
4. **Update Status** - Mark group as merged
5. **Cleanup** (optional) - Remove worktree after merge

**Note**: This command only merges; execution is handled by `/workflow:unified-execute-parallel`.

## Input Files

```
.workflow/.execution/
└── worktree-status.json        # Group completion status

.workflow/.planning/{session}/
├── execution-groups.json       # Group metadata and dependencies
└── conflicts.json              # Cross-group conflicts (if any)

.ccw/worktree/
├── {group-id}/                 # Worktree to merge
│   ├── .execution/             # Execution logs
│   └── (modified files)
```

## Output

```
.workflow/.execution/
├── worktree-status.json        # Updated with merge status
└── merge-log.md                # Merge history and details
```

---

## Implementation Details

### Command Parameters

- `--plan=<session>`: Plan session ID (auto-detect if not provided)
- `--group=<id>`: Merge specific group (e.g., EG-001)
- `--all`: Merge all completed groups in dependency order
- `--cleanup`: Remove worktree after successful merge

**Examples**:
```bash
# Merge single group
--group=EG-001

# Merge all completed groups
--all

# Merge and cleanup
--group=EG-001 --cleanup
```

---

## Phase 1: Load Status

**Objective**: Read completion status and group metadata.

### Step 1.1: Load worktree-status.json

Read group completion status.

**Status File Location**: `.workflow/.execution/worktree-status.json`

**Required Fields**:
- `plan_session`: Planning session ID
- `groups[]`: Array of group status objects
  - `status`: "completed" / "in_progress" / "failed"
  - `worktree_path`: Path to worktree
  - `branch`: Branch name
  - `merge_status`: "not_merged" / "merged"

### Step 1.2: Load execution-groups.json

Read group dependencies.

**Metadata File**: `.workflow/.planning/{session}/execution-groups.json`

**Required Fields**:
- `groups[]`: Group metadata with dependencies
  - `group_id`: Group identifier
  - `dependencies_on_groups[]`: Groups that must merge first
  - `cross_group_files[]`: Files modified by multiple groups

### Step 1.3: Determine Merge Targets

Select groups to merge based on parameters.

**Selection Logic**:

| Parameter | Behavior |
|-----------|----------|
| `--group=EG-001` | Merge only specified group |
| `--all` | Merge all groups with status="completed" |
| Neither | Prompt user to select from completed groups |

**Validation**:
- Group must have status="completed"
- Group's worktree must exist
- Group must not already be merged
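
A minimal selection sketch for the `--all` case, assuming `jq` is available, the array form of worktree-status.json described in Step 1.1, and that each entry also carries a `group_id` (that field is an assumption here):

```bash
# List groups that are completed but not yet merged (merge candidates for --all).
STATUS=".workflow/.execution/worktree-status.json"

jq -r '.groups[]
       | select(.status == "completed" and .merge_status == "not_merged")
       | .group_id' "$STATUS"
```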

---

## Phase 2: Validate Dependencies

**Objective**: Ensure dependencies are merged before target group.

### Step 2.1: Build Dependency Graph

Create merge order based on inter-group dependencies.

**Dependency Analysis**:
1. For target group, check `dependencies_on_groups[]`
2. For each dependency, verify merge status
3. Build topological order for merge sequence

**Example**:
```
EG-003 depends on [EG-001, EG-002]
→ Merge order: EG-001, EG-002, then EG-003
```

### Step 2.2: Check Dependency Status

Validate all dependencies are merged.

**Check Logic**:
```
For each dependency in target.dependencies_on_groups:
├─ Check dependency.merge_status == "merged"
├─ If not merged: Error or prompt to merge dependency first
└─ If merged: Continue
```

**Options on Dependency Not Met**:
1. **Error**: Refuse to merge until dependencies merged
2. **Cascade**: Automatically merge dependencies first (if --all)
3. **Force**: Allow merge anyway (dangerous, use --force)
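
A sketch of the default (error) behavior, assuming `jq`, the Phase 1 file layouts, and that group entries in worktree-status.json carry a `group_id` (an assumption, as in Step 1.3):

```bash
# Dependency-check sketch: refuse to merge a target group while any dependency is unmerged.
TARGET="EG-003"                                               # placeholder target group
GROUPS=".workflow/.planning/CPLAN-xxx/execution-groups.json"  # placeholder plan session path
STATUS=".workflow/.execution/worktree-status.json"

for dep in $(jq -r --arg g "$TARGET" \
    '.groups[] | select(.group_id == $g) | .dependencies_on_groups[]' "$GROUPS"); do
  merged=$(jq -r --arg d "$dep" \
    '.groups[] | select(.group_id == $d) | .merge_status' "$STATUS")
  if [ "$merged" != "merged" ]; then
    echo "Dependency $dep is not merged — merge it first (or use --force)."
    exit 1
  fi
done
echo "All dependencies of $TARGET are merged."
```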

---

## Phase 3: Conflict Detection

**Objective**: Check for cross-group file conflicts before merge.

### Step 3.1: Load Cross-Group Files

Read files modified by multiple groups.

**Source**: `execution-groups.json` → `groups[].cross_group_files[]`

**Example**:
```json
{
  "group_id": "EG-001",
  "cross_group_files": [
    {
      "file": "src/shared/config.ts",
      "conflicting_groups": ["EG-002"]
    }
  ]
}
```

### Step 3.2: Check File Modifications

Compare file state across groups and main.

**Conflict Check**:
1. For each cross-group file:
   - Get version on main branch
   - Get version in target worktree
   - Get version in conflicting group worktrees
2. If all different → conflict likely
3. If same → safe to merge
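
One way to perform the per-file comparison for step 1, assuming the group's branch name is known (the branch below is a placeholder borrowed from the example in Step 5.2):

```bash
# Cross-group file check sketch: does this file differ between main and the target branch?
FILE="src/shared/config.ts"
BRANCH="feature/cplan-auth-eg-001-frontend"   # placeholder target-group branch

if git diff --quiet main.."$BRANCH" -- "$FILE"; then
  echo "$FILE: unchanged relative to main — safe to merge"
else
  echo "$FILE: differs between main and $BRANCH — manual review recommended"
fi
```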

### Step 3.3: Report Conflicts

Display potential conflicts to user.

**Conflict Report**:
```markdown
## Potential Merge Conflicts

### File: src/shared/config.ts
- Modified by: EG-001 (target), EG-002
- Status: EG-002 already merged to main
- Action: Manual review recommended

### File: package.json
- Modified by: EG-001 (target), EG-003
- Status: EG-003 not yet merged
- Action: Safe to merge (EG-003 will handle conflict)
```

**User Decision**:
- Proceed with merge (handle conflicts manually if they occur)
- Abort and review files first
- Coordinate with other group maintainers

---

## Phase 4: Merge Worktree

**Objective**: Merge group's branch from worktree to main.

### Step 4.1: Prepare Main Branch

Ensure main branch is up to date.

```bash
git checkout main
git pull origin main
```

### Step 4.2: Merge Group Branch

Merge from the worktree's branch.

**Merge Command**:
```bash
# Strategy 1: Regular merge (creates merge commit)
git merge --no-ff {branch-name} -m "Merge {group-id}: {description}"

# Strategy 2: Squash merge (single commit)
git merge --squash {branch-name}
git commit -m "feat: {group-id} - {description}"
```

**Default**: Use regular merge to preserve history.

### Step 4.3: Handle Merge Conflicts

If conflicts occur, provide resolution guidance.

**Conflict Resolution**:
```bash
# List conflicting files
git status

# For each conflict:
# 1. Open file and resolve markers
# 2. Stage resolved file
git add {file}

# Complete merge
git commit
```

**Conflict Types**:
- **Cross-group file**: Expected, requires manual merge
- **Unexpected conflict**: Investigate cause

### Step 4.4: Push to Remote

Push merged changes.

```bash
git push origin main
```

**Validation**:
- Check CI/tests pass after merge
- Verify no regressions
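
A minimal guard for this validation step, assuming the project's test command is `npm test` (as in the example workflow below); run it before pushing so a failed merge can still be reverted locally:

```bash
# Post-merge validation sketch: revert the merge commit if tests fail.
npm test || {
  echo "Tests failed after merge — reverting the merge commit"
  git revert --no-edit -m 1 HEAD
}
```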

---

## Phase 5: Update Status & Cleanup

**Objective**: Mark group as merged, optionally remove worktree.

### Step 5.1: Update worktree-status.json

Mark group as merged.

**Status Update**:
```json
{
  "groups": {
    "EG-001": {
      "merge_status": "merged",
      "merged_at": "2025-02-03T15:00:00Z",
      "merged_to": "main",
      "merge_commit": "abc123def456"
    }
  }
}
```
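
A sketch of applying this update in place, assuming `jq` and the object form shown above (note that Step 1.1 describes `groups` as an array; adapt the filter to whichever shape the file actually uses):

```bash
# Status-update sketch: mark a group as merged in worktree-status.json.
STATUS=".workflow/.execution/worktree-status.json"
GROUP="EG-001"                  # placeholder group id
COMMIT=$(git rev-parse HEAD)    # merge commit just created

jq --arg g "$GROUP" --arg c "$COMMIT" --arg t "$(date -u +%Y-%m-%dT%H:%M:%SZ)" '
  .groups[$g] += {
    merge_status: "merged",
    merged_at: $t,
    merged_to: "main",
    merge_commit: $c
  }
' "$STATUS" > "${STATUS}.tmp" && mv "${STATUS}.tmp" "$STATUS"
```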

### Step 5.2: Append to merge-log.md

Record merge details.

**Merge Log Entry**:
```markdown
## EG-001: Frontend Development

- **Merged At**: 2025-02-03 15:00:00
- **Branch**: feature/cplan-auth-eg-001-frontend
- **Commit**: abc123def456
- **Tasks Completed**: 15/15
- **Conflicts**: 1 file (src/shared/config.ts) - resolved
- **Status**: Successfully merged to main
```

### Step 5.3: Cleanup Worktree (optional)

Remove the worktree if the `--cleanup` flag is provided.

**Cleanup Commands**:
```bash
# Remove worktree
git worktree remove .ccw/worktree/{group-id}

# Delete branch (optional)
git branch -d {branch-name}
git push origin --delete {branch-name}
```

**When to Cleanup**:
- Group successfully merged
- No need to revisit worktree
- Disk space needed

**When to Keep**:
- May need to reference execution logs
- Other groups may need to coordinate
- Debugging merge issues

### Step 5.4: Display Summary

Report merge results.

**Summary Output**:
```
✓ Merged EG-001 to main
  - Branch: feature/cplan-auth-eg-001-frontend
  - Commit: abc123def456
  - Tasks: 15/15 completed
  - Conflicts: 1 resolved
  - Worktree: Cleaned up

Remaining groups:
- EG-002: completed, ready to merge
- EG-003: in progress, waiting for dependencies
```

---

## Configuration

| Parameter | Default | Description |
|-----------|---------|-------------|
| `--plan` | Auto-detect | Plan session ID |
| `--group` | Interactive | Group to merge |
| `--all` | false | Merge all completed groups |
| `--cleanup` | false | Remove worktree after merge |
| `--force` | false | Ignore dependency checks |
| `--squash` | false | Use squash merge instead of regular |

---

## Error Handling

| Situation | Action |
|-----------|--------|
| Group not completed | Error: Complete execution first |
| Group already merged | Skip with warning |
| Dependencies not merged | Error or cascade merge (--all) |
| Merge conflict | Pause for manual resolution |
| Worktree not found | Error: Check worktree path |
| Push fails | Rollback merge, report error |

---

## Merge Strategies

### Strategy 1: Sequential Merge

Merge groups one by one in dependency order.

```bash
# Merge EG-001
--group=EG-001 --cleanup

# Merge EG-002
--group=EG-002 --cleanup

# Merge EG-003 (depends on EG-001, EG-002)
--group=EG-003 --cleanup
```

**Use When**:
- Want to review each merge carefully
- High risk of conflicts
- Testing between merges

### Strategy 2: Bulk Merge

Merge all completed groups at once.

```bash
--all --cleanup
```

**Use When**:
- Groups are independent
- Low conflict risk
- Want fast integration

### Strategy 3: Dependency-First

Merge dependencies before dependent groups.

```bash
# Automatically merges EG-001, EG-002 before EG-003
--group=EG-003 --cascade
```

**Use When**:
- Complex dependency graph
- Want automatic ordering

---

## Best Practices

### Before Merge

1. **Verify Completion**: Check all tasks in group completed
2. **Review Conflicts**: Read conflicts.json for cross-group files
3. **Test Worktree**: Run tests in worktree before merge
4. **Update Main**: Ensure main branch is current

### During Merge

1. **Follow Order**: Respect dependency order
2. **Review Conflicts**: Carefully resolve cross-group conflicts
3. **Test After Merge**: Run CI/tests after each merge
4. **Commit Often**: Keep merge history clean

### After Merge

1. **Update Status**: Ensure worktree-status.json reflects merge
2. **Keep Logs**: Archive merge-log.md for reference
3. **Cleanup Gradually**: Don't rush to delete worktrees
4. **Notify Team**: Inform others of merged groups

---

## Rollback Strategy

If merge causes issues:

```bash
# Find merge commit
git log --oneline

# Revert merge
git revert -m 1 {merge-commit}
git push origin main

# Or reset (dangerous, loses history)
git reset --hard HEAD~1
git push origin main --force

# Update status
# Mark group as not_merged in worktree-status.json
```

---

## Example Workflow

### Scenario: 3 Groups Complete

**Status**:
- EG-001: Completed (no dependencies)
- EG-002: Completed (no dependencies)
- EG-003: Completed (depends on EG-001, EG-002)

### Step 1: Merge Independent Groups

```bash
# Merge EG-001
--group=EG-001

# Test after merge
npm test

# Merge EG-002
--group=EG-002

# Test after merge
npm test
```

### Step 2: Merge Dependent Group

```bash
# EG-003 depends on EG-001, EG-002 (already merged)
--group=EG-003

# Final test
npm test
```

### Step 3: Cleanup All Worktrees

```bash
# Remove all merged worktrees
--cleanup-all
```

---

## When to Use This Workflow

### Use worktree-merge when:
- Execution groups completed via unified-execute-parallel
- Ready to integrate changes to main branch
- Need dependency-aware merge order
- Want to handle cross-group conflicts systematically

### Manual merge when:
- Single group with no dependencies
- Comfortable with Git merge commands
- No cross-group conflicts to handle

---

**Now execute worktree-merge for completed execution groups**