# Phase 2: Lite Execute
## Overview
Flexible task execution phase supporting three input modes: in-memory plan (from lite-plan), direct text prompt, and file content.
- Proceed to execution with `originalUserInput` included
**User Interaction**:

```
Route by mode:
├─ --yes mode → Auto-confirm with defaults:
│   ├─ Execution method: Auto
│   └─ Code review: Skip
│
└─ Interactive mode → ASK_USER with 2 questions:
    ├─ Execution method: Agent (@code-developer) / Codex (codex CLI) / Auto (complexity-based)
    └─ Code review: Skip / Gemini Review / Codex Review (git-aware) / Agent Review
```
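A minimal sketch of this routing, in the same pseudocode style used throughout this phase; `$ARGUMENTS` and `ASK_USER` are the argument string and blocking prompt primitive of the workflow environment:

```javascript
// Parse --yes flag and route: auto-confirm defaults vs. interactive ASK_USER
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')

let userSelection
if (autoYes) {
  // --yes mode: use defaults without prompting
  userSelection = { execution_method: 'Auto', code_review_tool: 'Skip' }
} else {
  // Interactive mode: two select questions, blocks until the user responds
  userSelection = ASK_USER([
    {
      id: 'execution',
      type: 'select',
      prompt: 'Select execution method:',
      options: ['Agent', 'Codex', 'Auto'],
      default: 'Auto'
    },
    {
      id: 'review',
      type: 'select',
      prompt: 'Enable code review after execution?',
      options: ['Skip', 'Gemini Review', 'Codex Review', 'Agent Review'],
      default: 'Skip'
    }
  ])
}
```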
### Mode 3: File Content
**Input**: Path to file containing task description or plan.json

**Step 1: Read and Detect Format**

**Format Detection**:
```
1. Read file content
2. Attempt JSON parsing
   ├─ Valid JSON with summary + approach + tasks fields → plan.json format
   │   ├─ Use parsed data as planObject
   │   └─ Set originalUserInput = summary
   └─ Not valid JSON or missing required fields → plain text format
       └─ Set originalUserInput = file content (same as Mode 2)
3. User selects execution method + code review (same as Mode 2)
```
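A sketch of the detection step, assuming the `Read` file primitive used elsewhere in this workflow:

```javascript
// Distinguish plan.json (lite-plan output) from a plain-text prompt
const fileContent = Read(filePath)

let planObject = null
let originalUserInput = fileContent
let isPlanJson = false

try {
  const jsonData = JSON.parse(fileContent)
  if (jsonData.summary && jsonData.approach && jsonData.tasks) {
    // plan.json from a lite-plan session
    planObject = jsonData
    originalUserInput = jsonData.summary
    isPlanJson = true
  }
  // Valid JSON without the required fields falls through as plain text
} catch {
  // Not valid JSON: treat the whole file as a plain text prompt (Mode 2 behavior)
}
```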
## Execution Process
### Step 1: Initialize Execution Tracking
**Operations**:
- Initialize result tracking for multi-execution scenarios
- Initialize `previousExecutionResults` array for context continuity
- **In-Memory Mode**: Echo execution strategy from planning phase for transparency
  - Display: Method, Review tool, Task count, Complexity, Executor assignments (if present)
### Step 2: Task Grouping & Batch Creation
**Dependency Analysis**: Use **explicit** `depends_on` from plan.json only; no inference from file paths or keywords.

**Grouping Algorithm** (see the code sketch after this block):

```
1. Build task ID → index mapping
2. For each task:
   ├─ Resolve depends_on IDs to task indices
   └─ Filter: keep only valid IDs that reference earlier tasks
3. Group into execution batches:
   ├─ Phase 1: All tasks with NO dependencies → single parallel batch
   │   └─ Mark as processed
   └─ Phase 2+: Iterative dependency resolution
       ├─ Find tasks whose ALL dependencies are satisfied (processed)
       ├─ Group ready tasks as batch (parallel within batch if multiple)
       ├─ Mark as processed
       ├─ Repeat until no tasks remain
       └─ Safety: If no ready tasks found → circular dependency warning → force remaining
4. Assign batch IDs:
   ├─ Parallel batches: P1, P2, P3...
   └─ Sequential batches: S1, S2, S3...
5. Create TodoWrite list with batch indicators:
   ├─ ⚡ for parallel batches (concurrent)
   └─ → for sequential batches (one-by-one)
```
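A condensed sketch of steps 1-4 as pure functions over the plan's task list (`id`, `title`, and `depends_on` follow the plan.json task schema):

```javascript
// Step 1-2: resolve explicit depends_on into earlier-task indices
function extractDependencies(tasks) {
  const taskIdToIndex = {}
  tasks.forEach((t, i) => { taskIdToIndex[t.id] = i })
  return tasks.map((task, i) => ({
    ...task,
    taskIndex: i,
    dependencies: (task.depends_on || [])
      .map(depId => taskIdToIndex[depId])
      .filter(idx => idx !== undefined && idx < i)
  }))
}

// Step 3-4: batch independent tasks first, then iteratively release dependents
function createExecutionCalls(tasks, executionMethod) {
  const tasksWithDeps = extractDependencies(tasks)
  const processed = new Set()
  const calls = []
  let sequentialIndex = 1

  // Phase 1: all independent tasks in a single parallel batch
  const independent = tasksWithDeps.filter(t => t.dependencies.length === 0)
  if (independent.length > 0) {
    independent.forEach(t => processed.add(t.taskIndex))
    calls.push({
      method: executionMethod, executionType: 'parallel', groupId: 'P1',
      taskSummary: independent.map(t => t.title).join(' | '), tasks: independent
    })
  }

  // Phase 2+: release tasks whose dependencies are all processed
  let remaining = tasksWithDeps.filter(t => !processed.has(t.taskIndex))
  while (remaining.length > 0) {
    let ready = remaining.filter(t => t.dependencies.every(d => processed.has(d)))
    if (ready.length === 0) {
      console.warn('Circular dependency detected, forcing remaining tasks')
      ready = [...remaining]
    }
    ready.forEach(t => processed.add(t.taskIndex))
    calls.push({
      method: executionMethod,
      executionType: ready.length > 1 ? 'parallel' : 'sequential',
      groupId: ready.length > 1 ? `P${calls.length + 1}` : `S${sequentialIndex++}`,
      taskSummary: ready.map(t => t.title).join(ready.length > 1 ? ' | ' : ' → '),
      tasks: ready
    })
    remaining = remaining.filter(t => !processed.has(t.taskIndex))
  }
  return calls
}
```

Step 5 then maps each call to a TodoWrite item, prefixing the content with ⚡ or → based on its `executionType`.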
### Step 3: Launch Execution
#### Executor Resolution

Task-level executor assignment takes priority over the global execution method:

```
Resolution order (per task):
├─ 1. executorAssignments[task.id] → use assigned executor (gemini/codex/agent)
└─ 2. Fallback to global executionMethod:
    ├─ "Agent" → agent
    ├─ "Codex" → codex
    └─ "Auto" → Low complexity → agent; Medium/High → codex
```
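The same resolution order as a sketch; `executionContext` and `planObject` are the objects established earlier in this phase:

```javascript
// Per-task executor: explicit assignment first, then global fallback
function getTaskExecutor(task) {
  const assignments = executionContext?.executorAssignments || {}
  if (assignments[task.id]) {
    return assignments[task.id].executor   // 'gemini' | 'codex' | 'agent'
  }
  const method = executionContext?.executionMethod || 'Auto'
  if (method === 'Agent') return 'agent'
  if (method === 'Codex') return 'codex'
  // Auto: route by plan complexity
  return planObject.complexity === 'Low' ? 'agent' : 'codex'
}

// Group a batch's tasks by resolved executor
function groupTasksByExecutor(tasks) {
  const groups = { gemini: [], codex: [], agent: [] }
  tasks.forEach(task => { groups[getTaskExecutor(task)].push(task) })
  return groups
}
```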
#### Execution Flow

Parallel batches run concurrently first, then sequential batches run in order:

```
1. Separate batches into parallel and sequential groups
2. Phase 1 - Launch all parallel batches concurrently:
   ├─ Update TodoWrite: all parallel → in_progress
   ├─ Execute all parallel batches simultaneously (single message, multiple tool calls)
   ├─ Collect results → append to previousExecutionResults
   └─ Update TodoWrite: all parallel → completed
3. Phase 2+ - Execute sequential batches in order:
   ├─ For each sequential batch:
   │   ├─ Update TodoWrite: current batch → in_progress
   │   ├─ Execute batch
   │   ├─ Collect result → append to previousExecutionResults
   │   └─ Update TodoWrite: current batch → completed
   └─ Continue until all batches completed
```
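A sketch of that loop. `executeBatch` stands for dispatch to Option A/B/C below and `TodoWrite` is the progress-tracking primitive; the status map is an illustrative detail:

```javascript
// Split batches, run the parallel phase, then sequential batches one by one
const parallel = executionCalls.filter(c => c.executionType === 'parallel')
const sequential = executionCalls.filter(c => c.executionType === 'sequential')

// Track per-batch status so TodoWrite always reflects the full list
const statuses = new Map(executionCalls.map(c => [c.id, 'pending']))
const markTodos = (batches, status) => {
  batches.forEach(b => statuses.set(b.id, status))
  TodoWrite({
    todos: executionCalls.map(c => ({
      content: `${c.executionType === 'parallel' ? '⚡' : '→'} ${c.id} (${c.tasks.length} tasks)`,
      status: statuses.get(c.id),
      activeForm: `Executing ${c.id}`
    }))
  })
}

// Phase 1: all parallel batches at once
if (parallel.length > 0) {
  markTodos(parallel, 'in_progress')
  const parallelResults = await Promise.all(parallel.map(c => executeBatch(c)))
  previousExecutionResults.push(...parallelResults)
  markTodos(parallel, 'completed')
}

// Phase 2+: sequential batches in order
for (const call of sequential) {
  markTodos([call], 'in_progress')
  previousExecutionResults.push(await executeBatch(call))
  markTodos([call], 'completed')
}
```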
### Unified Task Prompt Builder
Each task is formatted as a **self-contained checklist**. The executor only needs to know what THIS task requires. Same template for Agent and CLI.

**Prompt Structure** (assembled from batch tasks; see the builder sketch after the template):

```
## Goal
{originalUserInput}

## Tasks
(For each task in batch, separated by ---)

### {task.title}
**Scope**: {task.scope} | **Action**: {task.action}

#### Modification Points
- **{file}** → `{target}`: {change}

#### Why this approach (Medium/High only)
{rationale.chosen_approach}
Key factors: {decision_factors}
Tradeoffs: {tradeoffs}

#### How to do it
{description}
{implementation steps}

#### Code skeleton (High only)
Interfaces: {interfaces}
Functions: {key_functions}
Classes: {classes}

#### Reference
- Pattern: {pattern}
- Files: {files}
- Notes: {examples}

#### Risk mitigations (High only)
- {risk.description} → **{risk.mitigation}**

#### Done when
- [ ] {acceptance criteria}
Success metrics: {verification.success_metrics}

## Context
(Assembled from available sources, reference only)

### Exploration Notes (Refined)
**Read first**: {exploration_log_refined path}
Contains: file index, task-relevant context, code reference, execution notes
**IMPORTANT**: Use to avoid re-exploring already analyzed files

### Previous Work
{previousExecutionResults summaries}

### Clarifications
{clarificationContext entries}

### Data Flow
{planObject.data_flow.diagram}

### Artifacts
Plan: {plan path}

### Project Guidelines
@.workflow/project-guidelines.json

Complete each task according to its "Done when" checklist.
```
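A condensed builder sketch that assembles the structure above (task fields such as `modification_points`, `implementation`, and `acceptance` follow the plan.json schema; the optional Medium/High sections are omitted for brevity):

```javascript
// Build one prompt string for a batch of tasks
function buildExecutionPrompt(batch) {
  const formatTask = (t) => [
    `### ${t.title}`,
    `**Scope**: \`${t.scope}\` | **Action**: ${t.action}`,
    `#### Modification Points`,
    t.modification_points.map(p => `- **${p.file}** → \`${p.target}\`: ${p.change}`).join('\n'),
    `#### How to do it`,
    t.description,
    ...t.implementation.map(step => `- ${step}`),
    `#### Reference`,
    `- Pattern: ${t.reference?.pattern || 'N/A'}`,
    `- Files: ${t.reference?.files?.join(', ') || 'N/A'}`,
    `#### Done when`,
    ...t.acceptance.map(c => `- [ ] ${c}`)
  ].join('\n')

  const sections = []
  if (originalUserInput) sections.push(`## Goal\n${originalUserInput}`)
  sections.push(`## Tasks\n${batch.tasks.map(formatTask).join('\n\n---\n\n')}`)

  // Context block (reference only)
  const context = []
  if (executionContext?.session?.artifacts?.exploration_log_refined) {
    context.push(`### Exploration Notes (Refined)\n**Read first**: ${executionContext.session.artifacts.exploration_log_refined}`)
  }
  if (previousExecutionResults.length > 0) {
    context.push(`### Previous Work\n${previousExecutionResults.map(r => `- ${r.tasksSummary}: ${r.status}`).join('\n')}`)
  }
  context.push(`### Project Guidelines\n@.workflow/project-guidelines.json`)
  sections.push(`## Context\n${context.join('\n\n')}`)

  sections.push(`Complete each task according to its "Done when" checklist.`)
  return sections.join('\n\n')
}
```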
#### Option A: Agent Execution

**When to use**: `getTaskExecutor(task) === "agent"`, or global `executionMethod = "Agent"`, or `Auto AND complexity = Low`

**Execution Flow**:

```
1. Spawn code-developer agent with prompt:
   ├─ MANDATORY FIRST STEPS:
   │   ├─ Read: ~/.codex/agents/code-developer.md
   │   ├─ Read: .workflow/project-tech.json
   │   ├─ Read: .workflow/project-guidelines.json
   │   └─ Read: {exploration_log_refined} (execution-relevant context)
   └─ Body: {buildExecutionPrompt(batch)}
2. Wait for completion (timeout: 10 minutes)
3. Close agent after collection
4. Collect result → executionResult structure
```

**Result Collection**: After completion, collect result following the `executionResult` structure (see Data Structures section)
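A sketch of the agent orchestration, using the `spawn_agent` / `wait` / `close_agent` primitives referenced in the flow above:

```javascript
// Step 1: spawn the execution agent with mandatory context reading
const executionAgentId = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/code-developer.md
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
4. **Read refined exploration log**: ${executionContext?.session?.artifacts?.exploration_log_refined || 'N/A'}

---

${buildExecutionPrompt(batch)}
`
})

// Step 2: wait for completion (10-minute timeout), then release the agent
const execResult = wait({ ids: [executionAgentId], timeout_ms: 600000 })
close_agent({ id: executionAgentId })
```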
#### Option B: CLI Execution (Codex)

**When to use**: `getTaskExecutor(task) === "codex"`, or global `executionMethod = "Codex"`, or `Auto AND complexity = Medium/High`

**Execution**:

```bash
ccw cli -p "{buildExecutionPrompt(batch)}" --tool codex --mode write --id {sessionId}-{groupId}
```

**Fixed ID Pattern**: `{sessionId}-{groupId}` (e.g., `implement-auth-2025-12-13-P1`)

**Execution Mode**: Background (`run_in_background=true`) → Stop output → Wait for task hook callback

**Resume on Failure**:

```
If status = failed or timeout:
├─ Display: Fixed ID, lookup command, resume command
├─ Lookup: ccw cli detail {fixedExecutionId}
└─ Resume: ccw cli -p "Continue tasks" --resume {fixedExecutionId} --tool codex --mode write --id {fixedExecutionId}-retry
```

**Result Collection**: After completion, analyze output and collect result following the `executionResult` structure (include `cliExecutionId` for resume capability)
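A sketch of how the fixed ID and the background launch fit together; `Bash` is the background-execution primitive and the task hook reports completion:

```javascript
// Fixed execution ID: {sessionId}-{groupId}
const sessionId = executionContext?.session?.id || 'standalone'
const fixedExecutionId = `${sessionId}-${batch.groupId}`   // e.g. "implement-auth-2025-12-13-P1"

// If a previous run failed or timed out, resume it instead of starting fresh
const cliCommand = batch.resumeFromCliId
  ? `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${batch.resumeFromCliId}`
  : `ccw cli -p "${buildExecutionPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`

// Launch in background and stop here; the task hook notifies on completion
Bash({ command: cliCommand, run_in_background: true })

// On failure or timeout, remember the ID so a retry can resume this run:
// batch.resumeFromCliId = fixedExecutionId
```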
#### Option C: CLI Execution (Gemini)

**When to use**: `getTaskExecutor(task) === "gemini"` (analysis tasks)

```bash
# Use the unified buildExecutionPrompt, switch tool and mode
ccw cli -p "{buildExecutionPrompt(batch)}" --tool gemini --mode analysis --id {sessionId}-{groupId}
```
### Step 4: Progress Tracking
Progress tracked at **batch level** (not individual task level).

| Icon | Meaning |
|------|---------|
| ⚡ | Parallel batch (concurrent execution) |
| → | Sequential batch (one-by-one execution) |
### Step 5: Code Review (Optional)
**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`

**Review Focus**: Verify implementation against plan acceptance criteria and verification requirements.

**Review Criteria**:
- **Acceptance Criteria**: Verify each criterion from plan.tasks[].acceptance
- **Verification Requirements**: Run unit/integration tests and verify success metrics from the verification field (Medium/High complexity)
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach and risk mitigations

**Shared Review Prompt Template**:
```
PURPOSE: Code review for implemented changes against plan acceptance criteria and verification requirements
TASK: • Verify plan acceptance criteria fulfillment • Check verification requirements (unit tests, success metrics) • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence and risk mitigations
EXPECTED: Quality assessment
CONSTRAINTS: Focus on plan acceptance criteria, verification requirements, and plan adherence | analysis=READ-ONLY
```
**Tool-Specific Execution**:

| Tool | Command | Notes |
|------|---------|-------|
| Agent Review | Direct review by current agent | Read plan.json, apply review criteria, report findings |
| Gemini Review | `ccw cli -p "[review prompt]" --tool gemini --mode analysis` | Recommended |
| Qwen Review | `ccw cli -p "[review prompt]" --tool qwen --mode analysis` | Alternative |
| Codex Review (A) | `ccw cli -p "[review prompt]" --tool codex --mode review` | Complex reviews with focus areas |
| Codex Review (B) | `ccw cli --tool codex --mode review --uncommitted` | Quick review, no custom prompt |

> **IMPORTANT**: `-p` prompt and target flags (`--uncommitted`/`--base`/`--commit`) are **mutually exclusive** for codex review.

**Multi-Round Review**: Generate a fixed review ID (`{sessionId}-review`). If issues remain after the first pass, resume the same review session with a follow-up:

```bash
ccw cli -p "Clarify the security concerns" --resume {reviewId} --tool gemini --mode analysis --id {reviewId}-followup
```

**Implementation Note**: Replace the `[review prompt]` placeholder with the actual Shared Review Prompt Template content, substituting:
- `@{plan.json}` → `@{executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → exploration files from artifacts (if exists)
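For the multi-round flow, the fixed-ID chain can be driven as in this sketch (following the Gemini command above; `hasUnresolvedIssues` is an assumed helper over the review output):

```javascript
// First review pass with a fixed, predictable ID
const reviewId = `${sessionId}-review`
const reviewResult = Bash({ command: `ccw cli -p "[review prompt]" --tool gemini --mode analysis --id ${reviewId}` })

// If issues remain, continue the dialog in the same review chain
if (hasUnresolvedIssues(reviewResult)) {
  Bash({ command: `ccw cli -p "Clarify the security concerns you mentioned" --resume ${reviewId} --tool gemini --mode analysis --id ${reviewId}-followup` })
}
```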
### Step 6: Update Development Index
**Skip Condition**: Skip if `.workflow/project-tech.json` does not exist

**Operations** (see the sketch after this block):
```
1. Read .workflow/project-tech.json
   └─ If not found → silent skip
2. Initialize development_index if missing
   └─ Categories: feature, enhancement, bugfix, refactor, docs
3. Detect category from task keywords:
   ├─ fix/bug/error/issue/crash → bugfix
   ├─ refactor/cleanup/reorganize → refactor
   ├─ doc/readme/comment → docs
   ├─ add/new/create/implement → feature
   └─ (default) → enhancement
4. Detect sub_feature from task file paths
   └─ Extract parent directory names, return most frequent
5. Create entry:
   ├─ title: plan summary (max 60 chars)
   ├─ sub_feature: detected from file paths
   ├─ date: current date (YYYY-MM-DD)
   ├─ description: plan approach (max 100 chars)
   ├─ status: "completed" if all results completed, else "partial"
   └─ session_id: from executionContext
6. Append entry to development_index[category]
7. Update statistics.last_updated timestamp
8. Write updated project-tech.json
```
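A sketch of steps 1-8; `Read`, `Write`, and `fileExists` are the workflow file primitives, and `planObject` / `previousExecutionResults` come from the earlier steps (the wrapper function name is illustrative):

```javascript
// Keyword-based category detection
function detectCategory(text) {
  text = text.toLowerCase()
  if (/\b(fix|bug|error|issue|crash)\b/.test(text)) return 'bugfix'
  if (/\b(refactor|cleanup|reorganize)\b/.test(text)) return 'refactor'
  if (/\b(doc|readme|comment)\b/.test(text)) return 'docs'
  if (/\b(add|new|create|implement)\b/.test(text)) return 'feature'
  return 'enhancement'
}

// Most frequent parent directory across task files
function detectSubFeature(tasks) {
  const dirs = tasks.map(t => t.file?.split('/').slice(-2, -1)[0]).filter(Boolean)
  const counts = dirs.reduce((a, d) => { a[d] = (a[d] || 0) + 1; return a }, {})
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0]?.[0] || 'general'
}

function updateDevelopmentIndex() {
  const projectJsonPath = '.workflow/project-tech.json'
  if (!fileExists(projectJsonPath)) return   // silent skip

  const projectJson = JSON.parse(Read(projectJsonPath))
  if (!projectJson.development_index) {
    projectJson.development_index = { feature: [], enhancement: [], bugfix: [], refactor: [], docs: [] }
  }

  const category = detectCategory(`${planObject.summary} ${planObject.approach}`)
  const entry = {
    title: planObject.summary.slice(0, 60),
    sub_feature: detectSubFeature(planObject.tasks),
    date: new Date().toISOString().split('T')[0],
    description: planObject.approach.slice(0, 100),
    status: previousExecutionResults.every(r => r.status === 'completed') ? 'completed' : 'partial',
    session_id: executionContext?.session?.id || null
  }

  projectJson.development_index[category].push(entry)
  projectJson.statistics.last_updated = new Date().toISOString()
  Write(projectJsonPath, JSON.stringify(projectJson, null, 2))
}
```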
## Best Practices
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
| Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
| Execution failure | Agent/Codex crashes | Display error, use fixed ID `{sessionId}-{groupId}` for resume: `ccw cli -p "Continue" --resume <fixed-id> --id <fixed-id>-retry` |
| Execution timeout | CLI exceeded timeout | Use fixed ID for resume with extended timeout |
| Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
| Fixed ID not found | Custom ID lookup failed | Check `ccw cli history` , verify date directories |
After completion, ask user whether to expand as issue (test/enhance/refactor/doc). Selected items create new issues accordingly.
**Fixed ID Pattern**: `{sessionId}-{groupId}` enables predictable lookup without auto-generated timestamps.

**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
```bash
# Lookup previous execution
ccw cli detail {fixedCliId}

# Resume with new fixed ID for retry
ccw cli -p "Continue from where we left off" --resume {fixedCliId} --tool codex --mode write --id {fixedCliId}-retry
```
---
## Post-Phase Update
After Phase 2 (Lite Execute) completes:
- **Output Created**: Executed tasks, optional code review results, updated development index
- **Execution Results**: `previousExecutionResults[]` with status per batch
- **Next Action**: Workflow complete. Optionally expand to issue (test/enhance/refactor/doc)