feat: add CLI Stream Viewer component for real-time output monitoring

- Implemented a new CLI Stream Viewer to display real-time output from CLI executions.
- Added state management for CLI executions, including handling of start, output, completion, and errors.
- Introduced UI rendering for stream tabs and content, with auto-scroll functionality.
- Integrated keyboard shortcuts for toggling the viewer and handling user interactions.

feat: create Issue Manager view for managing issues and execution queue

- Developed the Issue Manager view to manage issues, solutions, and execution queue.
- Implemented data loading functions for fetching issues and queue data from the API.
- Added filtering and rendering logic for issues and queue items, including drag-and-drop functionality.
- Created detail panel for viewing and editing issue details, including tasks and solutions.
Commit 0157e36344 (parent cdf4833977)
Author: catlog22
Date: 2025-12-27 09:46:12 +08:00
23 changed files with 6843 additions and 1293 deletions


---
name: execute
description: Execute queue with codex using endpoint-driven task fetching (single task per codex instance)
argument-hint: "[--parallel <n>] [--executor codex|gemini]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
# Issue Execute Command (/issue:execute)
## Overview
Execution orchestrator that coordinates codex instances. Each task is executed by an independent codex instance that fetches its task via CLI endpoint. **Codex does NOT read task files** - it calls `ccw issue next` to get task data dynamically.
**Core design:**
- Single task per codex instance (not loop mode)
- Endpoint-driven: `ccw issue next` → execute → `ccw issue complete`
- No file reading in codex
- Orchestrator manages parallelism
## Storage Structure (Flat JSONL)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queue.json # Execution queue
└── solutions/
├── {issue-id}.jsonl # Solutions for issue
└── ...
```
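Both commands in this workflow treat JSONL as append-friendly storage. A minimal sketch of the read/append helpers this pattern implies — the names `readJsonl`/`appendJsonl` are illustrative, since the commands below inline the same logic with `Bash`/`Read`/`Write`:
```javascript
const fs = require('fs');

// Parse a JSONL file: one JSON object per non-empty line.
function readJsonl(path) {
  if (!fs.existsSync(path)) return [];
  return fs.readFileSync(path, 'utf8')
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));
}

// Append a single record as a new line (the write pattern used for solutions).
function appendJsonl(path, record) {
  fs.appendFileSync(path, JSON.stringify(record) + '\n');
}
```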
## Usage
```bash
/issue:execute [FLAGS]
# Examples
/issue:execute # Execute all ready tasks
/issue:execute --parallel 3 # Execute up to 3 tasks in parallel
/issue:execute --executor codex # Force codex executor
# Flags
--parallel <n> Max parallel codex instances (default: 1)
--executor <type> Force executor: codex|gemini|agent
--dry-run Show what would execute without running
```
## Execution Process
```
Phase 1: Queue Loading
├─ Load queue.json
├─ Count pending/ready tasks
└─ Initialize TodoWrite tracking
Phase 2: Ready Task Detection
├─ Find tasks with satisfied dependencies
├─ Group by execution_group (parallel batches)
└─ Determine execution order
Phase 3: Codex Coordination
├─ For each ready task:
│ ├─ Launch independent codex instance
│ ├─ Codex calls: ccw issue next
│ ├─ Codex receives task data (NOT file)
│ ├─ Codex executes task
│ ├─ Codex calls: ccw issue complete <queue-id>
│ └─ Update TodoWrite
└─ Parallel execution based on --parallel flag
Phase 4: Completion
├─ Generate execution summary
├─ Update issue statuses in issues.jsonl
└─ Display results
```
## Implementation
### Phase 1: Queue Loading
```javascript
// Load queue
const queuePath = '.workflow/issues/queue.json';
if (!Bash(`test -f "${queuePath}" && echo exists`).includes('exists')) {
console.log('No queue found. Run /issue:queue first.');
return;
}
const queue = JSON.parse(Read(queuePath));
// Count by status
const pending = queue.queue.filter(q => q.status === 'pending');
const executing = queue.queue.filter(q => q.status === 'executing');
const completed = queue.queue.filter(q => q.status === 'completed');
console.log(`
## Execution Queue Status
- Pending: ${pending.length}
- Executing: ${executing.length}
- Completed: ${completed.length}
- Total: ${queue.queue.length}
`);
if (pending.length === 0 && executing.length === 0) {
console.log('All tasks completed!');
return;
}
```
### Phase 2: Ready Task Detection
```javascript
// Find ready tasks (dependencies satisfied)
function getReadyTasks() {
const completedIds = new Set(
queue.queue.filter(q => q.status === 'completed').map(q => q.queue_id)
);
return queue.queue.filter(item => {
if (item.status !== 'pending') return false;
return item.depends_on.every(depId => completedIds.has(depId));
});
}
const readyTasks = getReadyTasks();
if (readyTasks.length === 0) {
if (executing.length > 0) {
console.log('Tasks are currently executing. Wait for completion.');
} else {
console.log('No ready tasks. Check for blocked dependencies.');
}
return;
}
console.log(`Found ${readyTasks.length} ready tasks`);
// Sort by execution order
readyTasks.sort((a, b) => a.execution_order - b.execution_order);
// Initialize TodoWrite
TodoWrite({
todos: readyTasks.slice(0, parallelLimit).map(t => ({
content: `[${t.queue_id}] ${t.issue_id}:${t.task_id}`,
status: 'pending',
activeForm: `Executing ${t.queue_id}`
}))
});
```
### Phase 3: Codex Coordination (Single Task Mode)
```javascript
// Execute tasks - single codex instance per task
async function executeTask(queueItem) {
const codexPrompt = `
## Single Task Execution
You are executing ONE task from the issue queue. Follow these steps exactly:
### Step 1: Fetch Task
Run this command to get your task:
\`\`\`bash
ccw issue next
\`\`\`
This returns JSON with:
- queue_id: Queue item ID
- task: Task definition with implementation steps
- context: Exploration context
- execution_hints: Executor and time estimate
### Step 2: Execute Task
Read the returned task object and:
1. Follow task.implementation steps in order
2. Meet all task.acceptance criteria
3. Use provided context.relevant_files for reference
4. Use context.patterns for code style
### Step 3: Report Completion
When done, run:
\`\`\`bash
ccw issue complete <queue_id> --result '{"files_modified": ["path1", "path2"], "summary": "What was done"}'
\`\`\`
If task fails, run:
\`\`\`bash
ccw issue fail <queue_id> --reason "Why it failed"
\`\`\`
### Rules
- NEVER read task files directly - use ccw issue next
- Execute the FULL task before marking complete
- Do NOT loop - execute ONE task only
- Report accurate files_modified in result
### Start Now
Begin by running: ccw issue next
`;
// Execute codex
const executor = queueItem.assigned_executor || flags.executor || 'codex';
if (executor === 'codex') {
Bash(
`ccw cli -p "${escapePrompt(codexPrompt)}" --tool codex --mode write --id exec-${queueItem.queue_id}`,
timeout=3600000 // 1 hour timeout
);
} else if (executor === 'gemini') {
Bash(
`ccw cli -p "${escapePrompt(codexPrompt)}" --tool gemini --mode write --id exec-${queueItem.queue_id}`,
timeout=1800000 // 30 min timeout
);
} else {
// Agent execution
Task(
subagent_type="code-developer",
run_in_background=false,
description=`Execute ${queueItem.queue_id}`,
prompt=codexPrompt
);
}
}
// Execute with parallelism
const parallelLimit = flags.parallel || 1;
for (let i = 0; i < readyTasks.length; i += parallelLimit) {
const batch = readyTasks.slice(i, i + parallelLimit);
console.log(`\n### Executing Batch ${Math.floor(i / parallelLimit) + 1}`);
console.log(batch.map(t => `- ${t.queue_id}: ${t.issue_id}:${t.task_id}`).join('\n'));
if (parallelLimit === 1) {
// Sequential execution
for (const task of batch) {
updateTodo(task.queue_id, 'in_progress');
await executeTask(task);
updateTodo(task.queue_id, 'completed');
}
} else {
// Parallel execution - launch all at once
const executions = batch.map(task => {
updateTodo(task.queue_id, 'in_progress');
return executeTask(task);
});
await Promise.all(executions);
batch.forEach(task => updateTodo(task.queue_id, 'completed'));
}
// Refresh ready tasks after batch
const newReady = getReadyTasks();
if (newReady.length > 0) {
console.log(`${newReady.length} more tasks now ready`);
}
}
```
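`updateTodo` is used above but never defined in this command. A minimal sketch, assuming the `todos` array previously passed to TodoWrite is kept in scope (the helper name appears in the doc; its body here is an assumption). Note also that if `Bash` blocks synchronously, the `Promise.all` branch above degenerates to sequential execution; true parallelism would require launching the CLI calls in the background.
```javascript
// Hypothetical helper: flip one tracked todo entry to a new status.
// Assumes `todos` is the same array previously passed to TodoWrite.
function updateTodo(queueId, status) {
  const todo = todos.find(t => t.content.startsWith(`[${queueId}]`));
  if (todo) {
    todo.status = status;
    TodoWrite({ todos });
  }
}
```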
### Codex Task Fetch Response
When codex calls `ccw issue next`, it receives:
```json
{
"queue_id": "Q-001",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task": {
"id": "T1",
"title": "Create auth middleware",
"scope": "src/middleware/",
"action": "Create",
"description": "Create JWT validation middleware",
"modification_points": [
{ "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
],
"implementation": [
"Create auth.ts file in src/middleware/",
"Implement JWT token validation using jsonwebtoken",
"Add error handling for invalid/expired tokens",
"Export middleware function"
],
"acceptance": [
"Middleware validates JWT tokens successfully",
"Returns 401 for invalid or missing tokens",
"Passes token payload to request context"
]
},
"context": {
"relevant_files": ["src/config/auth.ts", "src/types/auth.d.ts"],
"patterns": "Follow existing middleware pattern in src/middleware/logger.ts"
},
"execution_hints": {
"executor": "codex",
"estimated_minutes": 30
}
}
```
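Seen from the executor's side, the whole contract is three CLI calls. A minimal bash sketch of that flow (assumes `jq` is available; `do_work` is a hypothetical placeholder for the actual implementation steps):
```bash
# Fetch the next ready task; queue_id drives the rest of the flow.
task_json=$(ccw issue next)
queue_id=$(echo "$task_json" | jq -r '.queue_id')

# ... perform the work described in .task.implementation ...

if do_work "$task_json"; then
  ccw issue complete "$queue_id" \
    --result '{"files_modified": ["src/middleware/auth.ts"], "summary": "Implemented JWT middleware"}'
else
  ccw issue fail "$queue_id" --reason "Tests did not pass after implementation"
fi
```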
### Phase 4: Completion Summary
```javascript
// Reload queue for final status
const finalQueue = JSON.parse(Read(queuePath));
const summary = {
completed: finalQueue.queue.filter(q => q.status === 'completed').length,
failed: finalQueue.queue.filter(q => q.status === 'failed').length,
pending: finalQueue.queue.filter(q => q.status === 'pending').length,
total: finalQueue.queue.length
};
console.log(`
## Execution Complete
**Completed**: ${summary.completed}/${summary.total}
**Failed**: ${summary.failed}
**Pending**: ${summary.pending}
### Task Results
${finalQueue.queue.map(q => {
const icon = q.status === 'completed' ? '✓' :
q.status === 'failed' ? '✗' :
q.status === 'executing' ? '⟳' : '○';
return `${icon} ${q.queue_id} [${q.issue_id}:${q.task_id}] - ${q.status}`;
}).join('\n')}
`);
// Update issue statuses in issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
const issueIds = [...new Set(finalQueue.queue.map(q => q.issue_id))];
for (const issueId of issueIds) {
const issueTasks = finalQueue.queue.filter(q => q.issue_id === issueId);
if (issueTasks.every(q => q.status === 'completed')) {
console.log(`\n✓ Issue ${issueId} fully completed!`);
// Update issue status
const issueIndex = allIssues.findIndex(i => i.id === issueId);
if (issueIndex !== -1) {
allIssues[issueIndex].status = 'completed';
allIssues[issueIndex].completed_at = new Date().toISOString();
allIssues[issueIndex].updated_at = new Date().toISOString();
}
}
}
// Write updated issues.jsonl
Write(issuesPath, allIssues.map(i => JSON.stringify(i)).join('\n'));
if (summary.pending > 0) {
console.log(`
### Continue Execution
Run \`/issue:execute\` again to execute remaining tasks.
`);
}
```
## Dry Run Mode
```javascript
if (flags.dryRun) {
console.log(`
## Dry Run - Would Execute
${readyTasks.map((t, i) => `
${i + 1}. ${t.queue_id}
Issue: ${t.issue_id}
Task: ${t.task_id}
Executor: ${t.assigned_executor}
Group: ${t.execution_group}
`).join('')}
No changes made. Remove --dry-run to execute.
`);
return;
}
```
## Error Handling
| Error | Resolution |
|-------|------------|
| Queue not found | Display message, suggest /issue:queue |
| No ready tasks | Check dependencies, show blocked tasks |
| Codex timeout | Mark as failed, allow retry |
| ccw issue next empty | All tasks done or blocked |
| Task execution failure | Marked via ccw issue fail |
## Endpoint Contract
### `ccw issue next`
- Returns next ready task as JSON
- Marks task as 'executing'
- Returns `{ status: 'empty' }` when no tasks
### `ccw issue complete <queue-id>`
- Marks task as 'completed'
- Updates queue.json
- Checks if issue is fully complete
### `ccw issue fail <queue-id>`
- Marks task as 'failed'
- Records failure reason
- Allows retry via /issue:execute
## Related Commands
- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue list` - View queue status
- `ccw issue retry` - Retry failed tasks


---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] [--batch-size 3]"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---
# Issue Plan Command (/issue:plan)
## Overview
Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow. The agent handles ACE semantic search, solution generation, and task breakdown.
**Core capabilities:**
- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and acceptance criteria
- Automatic solution registration and binding
## Storage Structure (Flat JSONL)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queue.json # Execution queue
└── solutions/
├── {issue-id}.jsonl # Solutions for issue (one per line)
└── ...
```
## Usage
```bash
/issue:plan <issue-id>[,<issue-id>,...] [FLAGS]
# Examples
/issue:plan GH-123 # Single issue
/issue:plan GH-123,GH-124,GH-125 # Batch (up to 3)
/issue:plan --all-pending # All pending issues
# Flags
--batch-size <n> Max issues per agent batch (default: 3)
```
## Execution Process
```
Phase 1: Issue Loading
├─ Parse input (single, comma-separated, or --all-pending)
├─ Load issues from .workflow/issues/issues.jsonl
├─ Validate issues exist (create if needed)
└─ Group into batches (max 3 per batch)
Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
├─ Agent performs:
│ ├─ ACE semantic search for each issue
│ ├─ Codebase exploration (files, patterns, dependencies)
│ ├─ Solution generation with task breakdown
│ └─ Conflict detection across issues
└─ Output: solution JSON per issue
Phase 3: Solution Registration & Binding
├─ Append solutions to solutions/{issue-id}.jsonl
├─ Single solution per issue → auto-bind
├─ Multiple candidates → AskUserQuestion to select
└─ Update issues.jsonl with bound_solution_id
Phase 4: Summary
├─ Display bound solutions
├─ Show task counts per issue
└─ Display next steps (/issue:queue)
```
## Implementation
### Phase 1: Issue Loading
```javascript
// Parse input
const issueIds = userInput.includes(',')
? userInput.split(',').map(s => s.trim())
: [userInput.trim()];
// Read issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
// Load and validate issues
const issues = [];
for (const id of issueIds) {
let issue = allIssues.find(i => i.id === id);
if (!issue) {
console.log(`Issue ${id} not found. Creating...`);
issue = {
id,
title: `Issue ${id}`,
status: 'registered',
priority: 3,
context: '',
created_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
// Append to issues.jsonl
Bash(`echo '${JSON.stringify(issue)}' >> "${issuesPath}"`);
}
issues.push(issue);
}
// Group into batches
const batchSize = flags.batchSize || 3;
const batches = [];
for (let i = 0; i < issues.length; i += batchSize) {
batches.push(issues.slice(i, i + batchSize));
}
TodoWrite({
todos: batches.flatMap((batch, i) => [
{ content: `Plan batch ${i+1}`, status: 'pending', activeForm: `Planning batch ${i+1}` }
])
});
```
### Phase 2: Unified Explore + Plan (issue-plan-agent)
```javascript
for (const [batchIndex, batch] of batches.entries()) {
  updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');

  // Build issue prompt for agent
  const issuePrompt = `
## Issues to Plan
${batch.map((issue, i) => `
### Issue ${i + 1}: ${issue.id}
**Title**: ${issue.title}
**Context**: ${issue.context || 'No context provided'}
`).join('\n')}
## Project Root
${process.cwd()}
## Requirements
1. Use ACE semantic search (mcp__ace-tool__search_context) for exploration
2. Generate complete solution with task breakdown
3. Each task must have:
   - implementation steps (2-7 steps)
   - acceptance criteria (1-4 testable criteria)
   - modification_points (exact file locations)
   - depends_on (task dependencies)
4. Detect file conflicts if multiple issues
`;

  // Launch issue-plan-agent (combines explore + plan)
  const result = Task(
    subagent_type="issue-plan-agent",
    run_in_background=false,
    description=`Explore & plan ${batch.length} issues`,
    prompt=issuePrompt
  );

  // Parse agent output
  const agentOutput = JSON.parse(result);

  // Register solutions for each issue (append to solutions/{issue-id}.jsonl)
  for (const item of agentOutput.solutions) {
    const solutionPath = `.workflow/issues/solutions/${item.issue_id}.jsonl`;

    // Ensure solutions directory exists
    Bash(`mkdir -p .workflow/issues/solutions`);

    // Append solution as new line
    Bash(`echo '${JSON.stringify(item.solution)}' >> "${solutionPath}"`);
  }

  // Handle conflicts if any
  if (agentOutput.conflicts?.length > 0) {
    console.log(`\n⚠ File conflicts detected:`);
    agentOutput.conflicts.forEach(c => {
      console.log(`  ${c.file}: ${c.issues.join(', ')} → suggested: ${c.suggested_order.join(' → ')}`);
    });
  }
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
```
### Phase 3: Solution Binding
```javascript
// Re-read issues.jsonl
let allIssuesUpdated = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
for (const issue of issues) {
const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
if (solutions.length === 0) {
console.log(`⚠ No solutions for ${issue.id}`);
continue;
}
let selectedSolId;
if (solutions.length === 1) {
// Auto-bind single solution
selectedSolId = solutions[0].id;
console.log(`✓ Auto-bound ${selectedSolId} to ${issue.id} (${solutions[0].tasks?.length || 0} tasks)`);
} else {
// Multiple solutions - ask user
const answer = AskUserQuestion({
questions: [{
question: `Select solution for ${issue.id}:`,
header: issue.id,
multiSelect: false,
options: solutions.map(s => ({
label: `${s.id}: ${s.description || 'Solution'}`,
description: `${s.tasks?.length || 0} tasks`
}))
}]
});
selectedSolId = extractSelectedSolutionId(answer);
console.log(`✓ Bound ${selectedSolId} to ${issue.id}`);
}
// Update issue in allIssuesUpdated
const issueIndex = allIssuesUpdated.findIndex(i => i.id === issue.id);
if (issueIndex !== -1) {
allIssuesUpdated[issueIndex].bound_solution_id = selectedSolId;
allIssuesUpdated[issueIndex].status = 'planned';
allIssuesUpdated[issueIndex].planned_at = new Date().toISOString();
allIssuesUpdated[issueIndex].updated_at = new Date().toISOString();
}
// Mark solution as bound in solutions file
const updatedSolutions = solutions.map(s => ({
...s,
is_bound: s.id === selectedSolId,
bound_at: s.id === selectedSolId ? new Date().toISOString() : s.bound_at
}));
Write(solPath, updatedSolutions.map(s => JSON.stringify(s)).join('\n'));
}
// Write updated issues.jsonl
Write(issuesPath, allIssuesUpdated.map(i => JSON.stringify(i)).join('\n'));
```
### Phase 4: Summary
```javascript
console.log(`
## Planning Complete
**Issues Planned**: ${issues.length}
### Bound Solutions
${issues.map(i => {
const issue = allIssuesUpdated.find(a => a.id === i.id);
return issue?.bound_solution_id
? `${i.id}: ${issue.bound_solution_id}`
: `${i.id}: No solution bound`;
}).join('\n')}
### Next Steps
1. Review: \`ccw issue status <issue-id>\`
2. Form queue: \`/issue:queue\`
3. Execute: \`/issue:execute\`
`);
```
## Solution Format
Each solution line in `solutions/{issue-id}.jsonl`:
```json
{"id":"TASK-001","title":"Setup auth middleware","type":"feature","description":"Implement JWT verification middleware","file_context":["src/middleware/","src/config/auth.ts"],"depends_on":[],"delivery_criteria":["Middleware validates JWT tokens","Returns 401 for invalid tokens","Passes existing auth tests"],"pause_criteria":["JWT secret configuration unclear"],"status":"pending","current_phase":"analyze","executor":"auto","priority":1,"created_at":"2025-12-26T10:00:00Z","updated_at":"2025-12-26T10:00:00Z"}
{"id":"TASK-002","title":"Protect API routes","type":"feature","description":"Apply auth middleware to /api/v1/* routes","file_context":["src/routes/api/"],"depends_on":["TASK-001"],"delivery_criteria":["All /api/v1/* routes require auth","Public routes excluded","Integration tests pass"],"pause_criteria":[],"status":"pending","current_phase":"analyze","executor":"auto","priority":2,"created_at":"2025-12-26T10:00:00Z","updated_at":"2025-12-26T10:00:00Z"}
```
## Progressive Loading Algorithm
For large task lists, only load tasks with satisfied dependencies:
```javascript
function getReadyTasks(tasks, completedIds) {
return tasks.filter(task =>
task.status === 'pending' &&
task.depends_on.every(dep => completedIds.has(dep))
)
}
// Stream JSONL line-by-line for memory efficiency
function* streamJsonl(filePath) {
const lines = readLines(filePath)
for (const line of lines) {
if (line.trim()) yield JSON.parse(line)
}
{
"id": "SOL-20251226-001",
"description": "Direct Implementation",
"tasks": [
{
"id": "T1",
"title": "Create auth middleware",
"scope": "src/middleware/",
"action": "Create",
"description": "Create JWT validation middleware",
"modification_points": [
{ "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
],
"implementation": [
"Create auth.ts file",
"Implement JWT validation",
"Add error handling",
"Export middleware"
],
"acceptance": [
"Middleware validates JWT tokens",
"Returns 401 for invalid tokens"
],
"depends_on": [],
"estimated_minutes": 30
}
],
"exploration_context": {
"relevant_files": ["src/config/auth.ts"],
"patterns": "Follow existing middleware pattern"
},
"is_bound": true,
"created_at": "2025-12-26T10:00:00Z",
"bound_at": "2025-12-26T10:05:00Z"
}
```
## Error Handling
| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |
## Agent Integration
The command uses `issue-plan-agent` which:
1. Performs ACE semantic search per issue
2. Identifies modification points and patterns
3. Generates task breakdown with dependencies
4. Detects cross-issue file conflicts
5. Outputs solution JSON for registration
See `.claude/agents/issue-plan-agent.md` for agent specification.
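The registration code above assumes the agent returns a JSON envelope with `solutions` and optional `conflicts`. A sketch of that assumed shape — field names are inferred from the parsing code, not from the agent spec:
```json
{
  "solutions": [
    {
      "issue_id": "GH-123",
      "solution": {
        "id": "SOL-20251226-001",
        "description": "Direct Implementation",
        "tasks": [ { "id": "T1", "title": "Create auth middleware", "depends_on": [] } ],
        "exploration_context": { "relevant_files": ["src/config/auth.ts"] }
      }
    }
  ],
  "conflicts": [
    {
      "file": "src/auth.ts",
      "issues": ["GH-123", "GH-124"],
      "suggested_order": ["GH-123", "GH-124"]
    }
  ]
}
```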
## Related Commands
- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue list` - List all issues
- `ccw issue status` - View issue and solution details


---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---
# Issue Queue Command (/issue:queue)
## Overview
Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves conflicts, determines dependencies, and creates an ordered execution queue. The queue is global across all issues.
**Core capabilities:**
- **Agent-driven**: issue-queue-agent handles all ordering logic
- ACE semantic search for relationship discovery
- Dependency DAG construction and cycle detection
- File conflict detection and resolution
- Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
- Output global queue.json
## Storage Structure (Flat JSONL)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queue.json # Execution queue (output)
└── solutions/
├── {issue-id}.jsonl # Solutions for issue
└── ...
```
## Usage
```bash
/issue:queue [FLAGS]
# Examples
/issue:queue # Form queue from all bound solutions
/issue:queue --rebuild # Rebuild queue (clear and regenerate)
/issue:queue --issue GH-123 # Add only specific issue to queue
# Flags
--rebuild Clear existing queue and regenerate
--issue <id> Add only specific issue's tasks
```
## Execution Process
```
Phase 1: Solution Loading
├─ Load issues.jsonl
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
└─ Extract tasks from bound solutions
Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all tasks
├─ Agent performs:
│ ├─ Build dependency DAG from depends_on
│ ├─ Detect circular dependencies
│ ├─ Identify file modification conflicts
│ ├─ Resolve conflicts using ordering rules
│ ├─ Calculate semantic priority (0.0-1.0)
│ └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered tasks
Phase 5: Queue Output
├─ Write queue.json
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```
## Implementation
### Phase 1: Solution Loading
```javascript
// Load issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
// Filter issues with bound solutions
const plannedIssues = allIssues.filter(i =>
i.status === 'planned' && i.bound_solution_id
);
if (plannedIssues.length === 0) {
console.log('No issues with bound solutions found.');
console.log('Run /issue:plan first to create and bind solutions.');
return;
}
// Load all tasks from bound solutions
const allTasks = [];
for (const issue of plannedIssues) {
const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
// Find bound solution
const boundSol = solutions.find(s => s.id === issue.bound_solution_id);
if (!boundSol) {
console.log(`⚠ Bound solution ${issue.bound_solution_id} not found for ${issue.id}`);
continue;
}
for (const task of boundSol.tasks || []) {
allTasks.push({
issue_id: issue.id,
solution_id: issue.bound_solution_id,
task,
exploration_context: boundSol.exploration_context
});
}
}
console.log(`Loaded ${allTasks.length} tasks from ${plannedIssues.length} issues`);
```
### Phase 2-4: Agent-Driven Queue Formation
```javascript
// Launch issue-queue-agent to handle all ordering logic
const agentPrompt = `
## Tasks to Order
${JSON.stringify(allTasks, null, 2)}
## Project Root
${process.cwd()}
## Requirements
1. Build dependency DAG from depends_on fields
2. Detect circular dependencies (abort if found)
3. Identify file modification conflicts
4. Resolve conflicts using ordering rules:
- Create before Update/Implement
- Foundation scopes (config/types) before implementation
- Core logic before tests
5. Calculate semantic priority (0.0-1.0) for each task
6. Assign execution groups (parallel P* / sequential S*)
7. Output queue JSON
`;
const result = Task(
subagent_type="issue-queue-agent",
run_in_background=false,
description=`Order ${allTasks.length} tasks from ${plannedIssues.length} issues`,
prompt=agentPrompt
);
// Parse agent output
const agentOutput = JSON.parse(result);
if (!agentOutput.success) {
console.error(`Queue formation failed: ${agentOutput.error}`);
if (agentOutput.cycles) {
console.error('Circular dependencies:', agentOutput.cycles.join(', '));
}
return;
}
```
### Phase 5: Queue Output & Summary
```javascript
const queueOutput = agentOutput.output;
// Write queue.json
Write('.workflow/issues/queue.json', JSON.stringify(queueOutput, null, 2));
// Update issue statuses in issues.jsonl
const updatedIssues = allIssues.map(issue => {
if (plannedIssues.find(p => p.id === issue.id)) {
return {
...issue,
status: 'queued',
queued_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
}
return issue;
});
Write(issuesPath, updatedIssues.map(i => JSON.stringify(i)).join('\n'));
// Display summary
console.log(`
## Queue Formed
**Total Tasks**: ${queueOutput.queue.length}
**Issues**: ${plannedIssues.length}
**Conflicts**: ${queueOutput.conflicts?.length || 0} (${queueOutput._metadata?.resolved_conflicts || 0} resolved)
### Execution Groups
${(queueOutput.execution_groups || []).map(g => {
const type = g.type === 'parallel' ? 'Parallel' : 'Sequential';
return `- ${g.id} (${type}): ${g.task_count} tasks`;
}).join('\n')}
### Next Steps
1. Review queue: \`ccw issue queue list\`
2. Execute: \`/issue:execute\`
`);
```
## Queue Schema
Output `queue.json`:
```json
{
"queue": [
{
"queue_id": "Q-001",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task_id": "T1",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.7,
"queued_at": "2025-12-26T10:00:00Z"
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"tasks": ["GH-123:T1", "GH-124:T2"],
"resolution": "sequential",
"resolution_order": ["GH-123:T1", "GH-124:T2"],
"rationale": "T1 creates file before T2 updates",
"resolved": true
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "task_count": 3, "tasks": ["GH-123:T1", "GH-124:T1", "GH-125:T1"] },
{ "id": "S2", "type": "sequential", "task_count": 2, "tasks": ["GH-123:T2", "GH-124:T2"] }
],
"_metadata": {
"version": "2.0",
"storage": "jsonl",
"total_tasks": 5,
"total_conflicts": 1,
"resolved_conflicts": 1,
"parallel_groups": 1,
"sequential_groups": 1,
"timestamp": "2025-12-26T10:00:00Z",
"source": "issue-queue-agent"
}
}
```
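Before execution, a lightweight structural check over `queue.json` can catch malformed agent output early. A sketch under the schema above — this validator is illustrative, not an existing `ccw` command:
```javascript
// Validate structural invariants of queue.json before executing.
function validateQueue(queue) {
  const ids = new Set(queue.queue.map(q => q.queue_id));
  const errors = [];
  for (const item of queue.queue) {
    if (!item.queue_id || !item.issue_id || !item.task_id) {
      errors.push(`Missing identifiers on ${JSON.stringify(item)}`);
    }
    for (const dep of item.depends_on || []) {
      if (!ids.has(dep)) errors.push(`${item.queue_id} depends on unknown ${dep}`);
    }
    if (item.semantic_priority < 0 || item.semantic_priority > 1) {
      errors.push(`${item.queue_id} priority out of range`);
    }
  }
  return errors;
}
```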
## Semantic Priority Rules
| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Config/Types scope | +0.1 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
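One plausible reading of these rules as code; the 0.5 base value and the clamp to [0.0, 1.0] are assumptions, since the agent's exact formula is not specified here:
```javascript
// Sum action/scope boosts onto a neutral base, clamped to [0.0, 1.0].
function semanticPriority(task) {
  const actionBoost = {
    Create: 0.2, Configure: 0.15, Implement: 0.1,
    Refactor: -0.05, Test: -0.1, Delete: -0.15
  }[task.action] || 0;
  const scopeBoost = /config|types/i.test(task.scope) ? 0.1 : 0;
  return Math.min(1, Math.max(0, 0.5 + actionBoost + scopeBoost));
}
```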
## Error Handling
| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest /issue:plan |
| Circular dependency | List cycles, abort queue formation |
| Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn |
## Agent Integration
The command uses `issue-queue-agent` which:
1. Builds dependency DAG from task depends_on fields
2. Detects circular dependencies (aborts if found)
3. Identifies file modification conflicts across issues
4. Resolves conflicts using semantic ordering rules
5. Calculates priority (0.0-1.0) for each task
6. Assigns parallel/sequential execution groups
7. Outputs structured queue JSON
See `.claude/agents/issue-queue-agent.md` for agent specification.
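Cycle detection in step 2 is standard Kahn's algorithm over the task DAG. An illustrative sketch of the check (not the agent's actual code):
```javascript
// Kahn's algorithm: if the topological sort cannot consume every node, a cycle exists.
function findCycle(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, 0]));
  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      indegree.set(t.id, indegree.get(t.id) + 1);
    }
  }
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id);
  let visited = 0;
  while (queue.length) {
    const id = queue.shift();
    visited++;
    for (const t of tasks) {
      if ((t.depends_on || []).includes(id)) {
        indegree.set(t.id, indegree.get(t.id) - 1);
        if (indegree.get(t.id) === 0) queue.push(t.id);
      }
    }
  }
  // Nodes that never reach indegree 0 are part of (or blocked by) a cycle.
  return visited === tasks.length
    ? null
    : tasks.filter(t => indegree.get(t.id) > 0).map(t => t.id);
}
```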
## Related Commands
- `/issue:plan` - Plan issues and bind solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue queue list` - View current queue