feat(issue): add DAG support for parallel execution planning and enhance task fetching

This commit is contained in:
catlog22
2025-12-28 12:04:10 +08:00
parent 35ffd3419e
commit 2c42cefa5a
2 changed files with 306 additions and 383 deletions


@@ -1,7 +1,7 @@
---
name: execute
description: Execute queue with codex using DAG-based parallel orchestration (delegates task lookup to executors)
argument-hint: "[--parallel <n>] [--executor codex|gemini|agent]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
@@ -9,26 +9,13 @@ allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
## Overview
Minimal orchestrator that dispatches task IDs to executors. **Does NOT read task details** - delegates all task lookup to the executor via `ccw issue next <item_id>`.

**Design Principles:**
- **DAG-driven**: Uses `ccw issue queue dag` to get parallel execution plan
- **ID-only dispatch**: Only passes `item_id` to executors
- **Executor responsibility**: Codex/Agent fetches task details via `ccw issue next <item_id>`
- **Parallel execution**: Launches multiple executors concurrently based on DAG batches
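The ID-only contract can be captured in types. This is an illustrative sketch, not part of `ccw`; the shape of `TaskEnvelope` follows the `ccw issue next` response example later in this document:

```typescript
// What the orchestrator passes to an executor: just the item ID.
type DispatchRequest = { item_id: string };

// What the executor fetches for itself via `ccw issue next <item_id>`
// (field names taken from the response example in this doc; simplified).
interface TaskEnvelope {
  item_id: string;
  issue_id: string;
  task: { id: string; title: string; implementation: string[] };
  context: { relevant_files: string[] };
}

// The orchestrator never constructs a TaskEnvelope - only the executor does.
const request: DispatchRequest = { item_id: 'T-1' };
console.log(request.item_id); // T-1
```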
## Usage
@@ -36,427 +23,250 @@ Execution orchestrator that coordinates codex instances. Each task is executed b
/issue:execute [FLAGS]
# Examples
/issue:execute # Execute with default parallelism
/issue:execute --parallel 4 # Execute up to 4 tasks in parallel
/issue:execute --executor agent # Use agent instead of codex
# Flags
--parallel <n> Max parallel executors (default: 3)
--executor <type> Force executor: codex|gemini|agent (default: codex)
--dry-run Show DAG and batches without executing
```
## Execution Flow
```
Phase 1: Get DAG
   ccw issue queue dag → { parallel_batches, nodes, ready_count }

Phase 2: Dispatch Batches
   ├─ For each batch in parallel_batches:
   │   └─ Launch N executors (up to --parallel limit)
   │       ├─ Each executor receives: item_id only
   │       ├─ Executor calls: ccw issue next <item_id>
   │       ├─ Executor gets full task definition
   │       ├─ Executor implements + tests + commits
   │       └─ Executor calls: ccw issue done <item_id>
   └─ Wait for batch completion before next batch

Phase 3: Summary
   ccw issue queue dag → updated status
```
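The batch-and-wave structure of Phase 2 can be sketched as a small pure function (illustrative, not part of `ccw`):

```typescript
// Split DAG batches into launch waves, capping concurrency within
// each batch at `limit`. Batches never interleave: a wave only
// contains IDs drawn from a single batch.
function* dispatchWaves(batches: string[][], limit: number): Generator<string[]> {
  for (const batch of batches) {
    for (let i = 0; i < batch.length; i += limit) {
      yield batch.slice(i, i + limit);
    }
  }
}

// Example: two batches, --parallel 2.
const waves = [...dispatchWaves([['T-1', 'T-3', 'T-4'], ['T-2']], 2)];
// waves → [['T-1', 'T-3'], ['T-4'], ['T-2']]
```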
## Implementation
### Phase 1: Get DAG
```javascript
// Get dependency graph and parallel batches
const dagJson = Bash(`ccw issue queue dag`).trim();
const dag = JSON.parse(dagJson);

if (dag.error || dag.ready_count === 0) {
  console.log(dag.error || 'No tasks ready for execution');
  console.log('Use /issue:queue to form a queue first');
  return;
}

console.log(`
## Queue DAG
- Total: ${dag.total}
- Ready: ${dag.ready_count}
- Completed: ${dag.completed_count}
- Batches: ${dag.parallel_batches.length}
- Max parallel: ${dag._summary.can_parallel}
`);

// Dry run mode
if (flags.dryRun) {
  console.log('### Parallel Batches (would execute):\n');
  dag.parallel_batches.forEach((batch, i) => {
    console.log(`Batch ${i + 1}: ${batch.join(', ')}`);
  });
  return;
}
```
### Phase 2: Dispatch Batches
```javascript
const parallelLimit = flags.parallel || 3;
const executor = flags.executor || 'codex';

// Initialize TodoWrite for tracking
const allTasks = dag.parallel_batches.flat();
TodoWrite({
  todos: allTasks.map(id => ({
    content: `Execute ${id}`,
    status: 'pending',
    activeForm: `Executing ${id}`
  }))
});

// Process each batch
for (const [batchIndex, batch] of dag.parallel_batches.entries()) {
  console.log(`\n### Batch ${batchIndex + 1}/${dag.parallel_batches.length}`);
  console.log(`Tasks: ${batch.join(', ')}`);

  // Dispatch batch with parallelism limit
  const chunks = [];
  for (let i = 0; i < batch.length; i += parallelLimit) {
    chunks.push(batch.slice(i, i + parallelLimit));
  }

  for (const chunk of chunks) {
    // Launch executors in parallel
    const executions = chunk.map(itemId => {
      updateTodo(itemId, 'in_progress');
      return dispatchExecutor(itemId, executor);
    });
    await Promise.all(executions);
    chunk.forEach(id => updateTodo(id, 'completed'));
  }

  // Refresh DAG for next batch (dependencies may now be satisfied)
  const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
  if (refreshedDag.ready_count === 0) break;
}
```
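The chunked dispatch inside a batch is the standard bounded-parallelism `Promise.all` pattern; a standalone sketch, with a stand-in `dispatch` instead of a real executor launch:

```typescript
// Run items with bounded parallelism, chunk by chunk. Everything in a
// chunk runs concurrently; the next chunk starts only after every
// promise in the current one resolves.
async function runChunked(
  items: string[],
  limit: number,
  dispatch: (id: string) => Promise<string>
): Promise<string[]> {
  const results: string[] = [];
  for (let i = 0; i < items.length; i += limit) {
    const chunk = items.slice(i, i + limit);
    results.push(...(await Promise.all(chunk.map(dispatch))));
  }
  return results;
}

// Hypothetical dispatcher that just echoes completion.
const fakeDispatch = async (id: string) => `${id}:done`;
runChunked(['T-1', 'T-2', 'T-3'], 2, fakeDispatch).then(order => {
  console.log(order.join(', ')); // T-1:done, T-2:done, T-3:done
});
```

Note that a rejected dispatch rejects the whole `Promise.all`, which matches the "fail the batch, retry later" semantics of the orchestrator.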
### Executor Dispatch (Minimal Prompt)
```javascript
function dispatchExecutor(itemId, executorType) {
  // Minimal prompt - executor fetches its own task
  const prompt = `
## Execute Task ${itemId}

### Step 1: Fetch Task
Run this command to get your task:
\`\`\`bash
ccw issue next ${itemId}
\`\`\`

### Step 2: Execute
Follow the task definition returned by the command above.
The JSON includes: implementation steps, test commands, acceptance criteria, commit spec.

### Step 3: Report
When done:
\`\`\`bash
ccw issue done ${itemId} --result '{"summary": "..."}'
\`\`\`
If failed:
\`\`\`bash
ccw issue done ${itemId} --fail --reason "..."
\`\`\`
`;

  if (executorType === 'codex') {
    return Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${itemId}`,
      { timeout: 3600000, run_in_background: true }
    );
  } else if (executorType === 'gemini') {
    return Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${itemId}`,
      { timeout: 1800000, run_in_background: true }
    );
  } else {
    // Agent execution
    return Task({
      subagent_type: 'code-developer',
      run_in_background: false,
      description: `Execute ${itemId}`,
      prompt: prompt
    });
  }
}
```
### Task Fetch Response
When an executor calls `ccw issue next <item_id>`, it receives:
```json
{
"item_id": "T-1",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task": {
"id": "T1",
"title": "Create auth middleware",
"scope": "src/middleware/",
"action": "Create",
"description": "Create JWT validation middleware",
"modification_points": [
{ "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
],
"implementation": [
"Create auth.ts file in src/middleware/",
"Implement JWT token validation using jsonwebtoken",
"Add error handling for invalid/expired tokens",
"Export middleware function"
],
"acceptance": [
"Middleware validates JWT tokens successfully",
"Returns 401 for invalid or missing tokens",
"Passes token payload to request context"
]
},
"context": {
"relevant_files": ["src/config/auth.ts", "src/types/auth.d.ts"],
"patterns": "Follow existing middleware pattern in src/middleware/logger.ts"
},
"execution_hints": {
"executor": "codex",
"estimated_minutes": 30
}
}
```
### Phase 3: Summary
```javascript
// Get final status
const finalDag = JSON.parse(Bash(`ccw issue queue dag`).trim());

console.log(`
## Execution Complete
- Completed: ${finalDag.completed_count}/${finalDag.total}
- Remaining: ${finalDag.ready_count}

### Task Status
${finalDag.nodes.map(n => {
  const icon = n.status === 'completed' ? '✓' :
               n.status === 'failed' ? '✗' :
               n.status === 'executing' ? '⟳' : '○';
  return `${icon} ${n.id} [${n.issue_id}:${n.task_id}] - ${n.status}`;
}).join('\n')}
`);

if (finalDag.ready_count > 0) {
  console.log('\nRun `/issue:execute` again for remaining tasks.');
}
```
## CLI Endpoint Contract
### `ccw issue queue dag`
Returns dependency graph with parallel batches:
```json
{
"queue_id": "QUE-...",
"total": 10,
"ready_count": 3,
"completed_count": 2,
"nodes": [{ "id": "T-1", "status": "pending", "ready": true, ... }],
"edges": [{ "from": "T-1", "to": "T-2" }],
"parallel_batches": [["T-1", "T-3"], ["T-2"]],
"_summary": { "can_parallel": 2, "batches_needed": 2 }
}
```
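Assuming `parallel_batches` is a level-order grouping of the graph, it can be reproduced from the node IDs and `edges` alone. This is a simplified reconstruction that ignores the file-conflict rule the CLI also applies; the helper name is illustrative:

```typescript
interface Edge { from: string; to: string }

// Group node IDs into waves: a node joins a wave once all of its
// incoming dependency edges point at nodes placed in earlier waves.
function levelBatches(ids: string[], edges: Edge[]): string[][] {
  const deps = new Map<string, string[]>();
  for (const id of ids) {
    deps.set(id, edges.filter(e => e.to === id).map(e => e.from));
  }
  const placed = new Set<string>();
  const batches: string[][] = [];
  let remaining = [...ids];
  while (remaining.length > 0) {
    const wave = remaining.filter(id => (deps.get(id) ?? []).every(d => placed.has(d)));
    if (wave.length === 0) throw new Error('cycle detected');
    batches.push(wave);
    wave.forEach(id => placed.add(id));
    remaining = remaining.filter(id => !placed.has(id));
  }
  return batches;
}

const batches = levelBatches(['T-1', 'T-2', 'T-3'], [{ from: 'T-1', to: 'T-2' }]);
// batches → [['T-1', 'T-3'], ['T-2']]
```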
### `ccw issue next <item_id>`
Returns full task definition for the specified item:
```json
{
"item_id": "T-1",
"issue_id": "GH-123",
"task": { "id": "T1", "title": "...", "implementation": [...], ... },
"context": { "relevant_files": [...] }
}
```
### `ccw issue done <item_id>`
Marks task completed/failed and updates queue state.
## Error Handling
| Error | Resolution |
|-------|------------|
| No queue | Run /issue:queue first |
| No ready tasks | Dependencies blocked, check DAG |
| Executor timeout | Marked as executing, can resume |
| Task failure | Use `ccw issue retry` to reset |
## Troubleshooting
### Check DAG Status
```bash
ccw issue queue dag | jq '.parallel_batches'
```

### Resume Interrupted Execution
Executors in `executing` status are resumed automatically when calling `ccw issue next <item_id>`; no manual reset is needed.

### Retry Failed Tasks
```bash
ccw issue retry    # Reset all failed tasks to pending
/issue:execute     # Re-execute
```
## Related Commands
- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue list` - View queue status
- `ccw issue queue dag` - View dependency graph
- `ccw issue retry` - Reset failed tasks


@@ -845,6 +845,101 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
return;
}
  // DAG - Return dependency graph for parallel execution planning
  if (subAction === 'dag') {
    const queue = readActiveQueue();
    if (!queue.id || queue.tasks.length === 0) {
      console.log(JSON.stringify({ error: 'No active queue', nodes: [], edges: [], groups: [] }));
      return;
    }

    // Build DAG nodes
    const completedIds = new Set(queue.tasks.filter(t => t.status === 'completed').map(t => t.item_id));
    const failedIds = new Set(queue.tasks.filter(t => t.status === 'failed').map(t => t.item_id));
    const nodes = queue.tasks.map(task => ({
      id: task.item_id,
      issue_id: task.issue_id,
      task_id: task.task_id,
      status: task.status,
      executor: task.assigned_executor,
      priority: task.semantic_priority,
      depends_on: task.depends_on,
      // Calculate if ready (dependencies satisfied)
      ready: task.status === 'pending' && task.depends_on.every(d => completedIds.has(d)),
      blocked_by: task.depends_on.filter(d => !completedIds.has(d) && !failedIds.has(d))
    }));

    // Build edges for visualization
    const edges = queue.tasks.flatMap(task =>
      task.depends_on.map(dep => ({ from: dep, to: task.item_id }))
    );

    // Group ready tasks by execution_group for parallel execution
    const readyTasks = nodes.filter(n => n.ready || n.status === 'executing');
    const groups: Record<string, string[]> = {};
    for (const task of queue.tasks) {
      if (readyTasks.some(r => r.id === task.item_id)) {
        const group = task.execution_group || 'P1';
        if (!groups[group]) groups[group] = [];
        groups[group].push(task.item_id);
      }
    }

    // Calculate parallel batches (tasks with no dependencies on each other)
    const parallelBatches: string[][] = [];
    const remainingReady = new Set(readyTasks.map(t => t.id));
    while (remainingReady.size > 0) {
      const batch: string[] = [];
      const batchFiles = new Set<string>();
      for (const taskId of remainingReady) {
        const task = queue.tasks.find(t => t.item_id === taskId);
        if (!task) continue;
        // Check for file conflicts with already-batched tasks
        const solution = findSolution(task.issue_id, task.solution_id);
        const taskDef = solution?.tasks.find(t => t.id === task.task_id);
        const taskFiles = taskDef?.modification_points?.map(mp => mp.file) || [];
        const hasConflict = taskFiles.some(f => batchFiles.has(f));
        if (!hasConflict) {
          batch.push(taskId);
          taskFiles.forEach(f => batchFiles.add(f));
        }
      }
      if (batch.length === 0) {
        // Fallback: take one at a time if all conflict
        const first = Array.from(remainingReady)[0];
        batch.push(first);
      }
      parallelBatches.push(batch);
      batch.forEach(id => remainingReady.delete(id));
    }

    console.log(JSON.stringify({
      queue_id: queue.id,
      total: nodes.length,
      ready_count: readyTasks.length,
      completed_count: completedIds.size,
      nodes,
      edges,
      groups: Object.entries(groups).map(([id, tasks]) => ({ id, tasks })),
      parallel_batches: parallelBatches,
      _summary: {
        can_parallel: parallelBatches[0]?.length || 0,
        batches_needed: parallelBatches.length
      }
    }, null, 2));
    return;
  }
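The greedy conflict-avoidance loop above can be isolated for unit testing. A minimal standalone sketch; the real code pulls file lists from each task's `modification_points`, and the names here are illustrative:

```typescript
// Pack tasks into batches: a task joins the current batch only if none
// of its files are already claimed; colliding tasks wait for a later batch.
function conflictFreeBatches(tasks: { id: string; files: string[] }[]): string[][] {
  const pending = [...tasks];
  const batches: string[][] = [];
  while (pending.length > 0) {
    const batch: string[] = [];
    const used = new Set<string>();
    for (const task of [...pending]) {
      if (task.files.some(f => used.has(f))) continue;
      batch.push(task.id);
      task.files.forEach(f => used.add(f));
      pending.splice(pending.indexOf(task), 1);
    }
    if (batch.length === 0) {
      // Fallback mirrors the CLI: take one task even if it conflicts.
      batch.push(pending.shift()!.id);
    }
    batches.push(batch);
  }
  return batches;
}

const plan = conflictFreeBatches([
  { id: 'T-1', files: ['src/a.ts'] },
  { id: 'T-2', files: ['src/a.ts'] },
  { id: 'T-3', files: ['src/b.ts'] }
]);
// plan → [['T-1', 'T-3'], ['T-2']]
```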
// Archive current queue
if (subAction === 'archive') {
const queue = readActiveQueue();
@@ -998,39 +1093,56 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
/**
* next - Get next ready task for execution (JSON output)
* Accepts optional item_id to fetch a specific task directly
*/
async function nextAction(itemId: string | undefined, options: IssueOptions): Promise<void> {
const queue = readActiveQueue();
  let nextItem: typeof queue.tasks[0] | undefined;
  let isResume = false;

  // If specific item_id provided, fetch that task directly
  if (itemId) {
    nextItem = queue.tasks.find(t => t.item_id === itemId);
    if (!nextItem) {
      console.log(JSON.stringify({ status: 'error', message: `Task ${itemId} not found` }));
      return;
    }
    if (nextItem.status === 'completed') {
      console.log(JSON.stringify({ status: 'completed', message: `Task ${itemId} already completed` }));
      return;
    }
    if (nextItem.status === 'failed') {
      console.log(JSON.stringify({ status: 'failed', message: `Task ${itemId} failed, use retry to reset` }));
      return;
    }
    isResume = nextItem.status === 'executing';
  } else {
    // Auto-select: Priority 1 - executing, Priority 2 - ready pending
    const executingTasks = queue.tasks.filter(item => item.status === 'executing');
    const pendingTasks = queue.tasks.filter(item => {
      if (item.status !== 'pending') return false;
      return item.depends_on.every(depId => {
        const dep = queue.tasks.find(q => q.item_id === depId);
        return !dep || dep.status === 'completed';
      });
    });

    // Combine: executing first, then pending
    const readyTasks = [...executingTasks, ...pendingTasks];
    if (readyTasks.length === 0) {
      console.log(JSON.stringify({
        status: 'empty',
        message: 'No ready tasks',
        queue_status: queue._metadata
      }, null, 2));
      return;
    }
    readyTasks.sort((a, b) => a.execution_order - b.execution_order);
    nextItem = readyTasks[0];
    isResume = nextItem.status === 'executing';
  }
// Load task definition
const solution = findSolution(nextItem.issue_id, nextItem.solution_id);
const taskDef = solution?.tasks.find(t => t.id === nextItem.task_id);
@@ -1054,8 +1166,8 @@ async function nextAction(options: IssueOptions): Promise<void> {
total: queue.tasks.length,
completed: queue.tasks.filter(q => q.status === 'completed').length,
failed: queue.tasks.filter(q => q.status === 'failed').length,
    executing: queue.tasks.filter(q => q.status === 'executing').length,
    pending: queue.tasks.filter(q => q.status === 'pending').length
};
const remaining = stats.pending + stats.executing;
@@ -1219,7 +1331,7 @@ export async function issueCommand(
await queueAction(argsArray[0], argsArray[1], options);
break;
case 'next':
      await nextAction(argsArray[0], options);
break;
case 'done':
await doneAction(argsArray[0], options);
@@ -1252,14 +1364,15 @@ export async function issueCommand(
console.log(chalk.gray(' queue list List all queues (history)'));
console.log(chalk.gray(' queue add <issue-id> Add issue to active queue (or create new)'));
console.log(chalk.gray(' queue switch <queue-id> Switch active queue'));
console.log(chalk.gray(' queue dag Get dependency graph (JSON) for parallel execution'));
console.log(chalk.gray(' queue archive Archive current queue'));
console.log(chalk.gray(' queue delete <queue-id> Delete queue from history'));
console.log(chalk.gray(' retry [issue-id] Retry failed tasks'));
console.log();
console.log(chalk.bold('Execution Endpoints:'));
    console.log(chalk.gray('  next [item-id]           Get task by ID or next ready task (JSON)'));
    console.log(chalk.gray('  done <item-id>           Mark task completed'));
    console.log(chalk.gray('  done <item-id> --fail    Mark task failed'));
console.log();
console.log(chalk.bold('Options:'));
console.log(chalk.gray(' --title <title> Issue/task title'));