feat(issue): add DAG support for parallel execution planning and enhance task fetching

---
name: execute
description: Execute queue with codex using DAG-based parallel orchestration (delegates task lookup to executors)
argument-hint: "[--parallel <n>] [--executor codex|gemini|agent]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---

## Overview

Minimal orchestrator that dispatches task IDs to executors. **Does NOT read task details** - delegates all task lookup to the executor via `ccw issue next <item_id>`.

**Design Principles:**
- **DAG-driven**: Uses `ccw issue queue dag` to get the parallel execution plan
- **ID-only dispatch**: Only passes `item_id` to executors
- **Executor responsibility**: Codex/Agent fetches task details via `ccw issue next <item_id>` (see the sketch below)
- **Parallel execution**: Launches multiple executors concurrently based on DAG batches

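The whole contract boils down to three CLI calls. A minimal sketch of one round trip (the item ID `T-1` is illustrative):

```bash
# 1. Orchestrator asks for the execution plan
ccw issue queue dag                  # → parallel_batches, nodes, ready_count

# 2. Executor fetches its own task definition (the orchestrator only passed the ID)
ccw issue next T-1                   # → full task JSON for T-1

# 3. Executor reports back when finished
ccw issue done T-1 --result '{"summary": "implemented and tested"}'
```
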
## Usage

```
/issue:execute [FLAGS]

# Examples
/issue:execute                    # Execute with default parallelism
/issue:execute --parallel 4       # Execute up to 4 tasks in parallel
/issue:execute --executor agent   # Use agent instead of codex

# Flags
--parallel <n>       Max parallel executors (default: 3)
--executor <type>    Force executor: codex|gemini|agent (default: codex)
--dry-run            Show DAG and batches without executing
```
## Execution Flow

```
Phase 1: Get DAG
└─ ccw issue queue dag → { parallel_batches, nodes, ready_count }

Phase 2: Dispatch Batches
├─ For each batch in parallel_batches:
│  ├─ Launch N executors (up to --parallel limit)
│  ├─ Each executor receives: item_id only
│  ├─ Executor calls: ccw issue next <item_id>
│  ├─ Executor gets full task definition
│  ├─ Executor implements + tests + commits
│  └─ Executor calls: ccw issue done <item_id>
└─ Wait for batch completion before next batch

Phase 3: Summary
└─ ccw issue queue dag → updated status
```
## Implementation
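All three phases below read from a `flags` object that this spec never constructs. A minimal sketch of how it might be derived from the raw flag strings listed under Usage (the `parseFlags` helper and its input are assumptions, not part of the `ccw` CLI):

```javascript
// Assumed helper: turn raw flag strings into the `flags` object used by the phases below.
function parseFlags(argv) {
  const flags = { parallel: 3, executor: 'codex', dryRun: false }; // defaults from Usage
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === '--parallel') flags.parallel = parseInt(argv[++i], 10);
    else if (argv[i] === '--executor') flags.executor = argv[++i];
    else if (argv[i] === '--dry-run') flags.dryRun = true;
  }
  return flags;
}

// Example: /issue:execute --parallel 4 --executor agent
const flags = parseFlags(['--parallel', '4', '--executor', 'agent']);
```
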
### Phase 1: Get DAG

```javascript
// Get dependency graph and parallel batches
const dagJson = Bash(`ccw issue queue dag`).trim();
const dag = JSON.parse(dagJson);

if (dag.error || dag.ready_count === 0) {
  console.log(dag.error || 'No tasks ready for execution');
  console.log('Use /issue:queue to form a queue first');
  return;
}

console.log(`
## Queue DAG

- Total: ${dag.total}
- Ready: ${dag.ready_count}
- Completed: ${dag.completed_count}
- Batches: ${dag.parallel_batches.length}
- Max parallel: ${dag._summary.can_parallel}
`);

// Dry run mode
if (flags.dryRun) {
  console.log('### Parallel Batches (would execute):\n');
  dag.parallel_batches.forEach((batch, i) => {
    console.log(`Batch ${i + 1}: ${batch.join(', ')}`);
  });
  return;
}
```

### Phase 2: Dispatch Batches

```javascript
const parallelLimit = flags.parallel || 3;
const executor = flags.executor || 'codex';

// Initialize TodoWrite for tracking
const allTasks = dag.parallel_batches.flat();
TodoWrite({
  todos: allTasks.map(id => ({
    content: `Execute ${id}`,
    status: 'pending',
    activeForm: `Executing ${id}`
  }))
});

// Process each batch
for (const [batchIndex, batch] of dag.parallel_batches.entries()) {
  console.log(`\n### Batch ${batchIndex + 1}/${dag.parallel_batches.length}`);
  console.log(`Tasks: ${batch.join(', ')}`);

  // Dispatch batch with parallelism limit
  const chunks = [];
  for (let i = 0; i < batch.length; i += parallelLimit) {
    chunks.push(batch.slice(i, i + parallelLimit));
  }

  for (const chunk of chunks) {
    // Launch executors in parallel
    const executions = chunk.map(itemId => {
      updateTodo(itemId, 'in_progress');
      return dispatchExecutor(itemId, executor);
    });

    await Promise.all(executions);
    chunk.forEach(id => updateTodo(id, 'completed'));
  }

  // Refresh DAG for next batch (dependencies may now be satisfied)
  const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
  if (refreshedDag.ready_count === 0) break;
}
```
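The batch loop above calls an `updateTodo` helper that the spec leaves undefined. One plausible sketch, assuming the `todos` array built during initialization is kept in scope and that `TodoWrite` rewrites the whole list on each call:

```javascript
// Assumed helper: flip one item's status and rewrite the full todo list.
function updateTodo(itemId, status) {
  const todo = todos.find(t => t.content === `Execute ${itemId}`);
  if (todo) todo.status = status;
  TodoWrite({ todos });
}
```
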
### Executor Dispatch (Minimal Prompt)

```javascript
function dispatchExecutor(itemId, executorType) {
  // Minimal prompt - executor fetches its own task
  const prompt = `
## Execute Task ${itemId}

### Step 1: Fetch Task
Run this command to get your task:
\`\`\`bash
ccw issue next ${itemId}
\`\`\`

### Step 2: Execute
Follow the task definition returned by the command above.
The JSON includes: implementation steps, test commands, acceptance criteria, commit spec.

### Step 3: Report
When done:
\`\`\`bash
ccw issue done ${itemId} --result '{"summary": "..."}'
\`\`\`

If failed:
\`\`\`bash
ccw issue done ${itemId} --fail --reason "..."
\`\`\`
`;

  if (executorType === 'codex') {
    return Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${itemId}`,
      { timeout: 3600000, run_in_background: true }
    );
  } else if (executorType === 'gemini') {
    return Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${itemId}`,
      { timeout: 1800000, run_in_background: true }
    );
  } else {
    // Agent execution
    return Task({
      subagent_type: 'code-developer',
      run_in_background: false,
      description: `Execute ${itemId}`,
      prompt: prompt
    });
  }
}
```
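
`escapePrompt` is referenced above but never defined. Since the prompt is interpolated into a double-quoted `-p "..."` argument, a minimal sketch would escape the characters the shell would otherwise interpret (the exact escaping rules are an assumption, not part of the spec):

```javascript
// Assumed helper: make the prompt safe inside the double-quoted -p "..." argument.
function escapePrompt(text) {
  return text
    .replace(/\\/g, '\\\\')  // backslashes first
    .replace(/"/g, '\\"')    // double quotes
    .replace(/\$/g, '\\$')   // prevent shell variable expansion
    .replace(/`/g, '\\`');   // prevent command substitution
}
```
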
### Phase 3: Summary
```javascript
// Get final status
const finalDag = JSON.parse(Bash(`ccw issue queue dag`).trim());

console.log(`
## Execution Complete

- Completed: ${finalDag.completed_count}/${finalDag.total}
- Remaining: ${finalDag.ready_count}

### Task Status
${finalDag.nodes.map(n => {
  const icon = n.status === 'completed' ? '✓' :
               n.status === 'failed' ? '✗' :
               n.status === 'executing' ? '⟳' : '○';
  return `${icon} ${n.id} [${n.issue_id}:${n.task_id}] - ${n.status}`;
}).join('\n')}
`);

if (finalDag.ready_count > 0) {
  console.log('\nRun `/issue:execute` again for remaining tasks.');
}
```
## CLI Endpoint Contract

### `ccw issue queue dag`
Returns dependency graph with parallel batches:
```json
{
  "queue_id": "QUE-...",
  "total": 10,
  "ready_count": 3,
  "completed_count": 2,
  "nodes": [{ "id": "T-1", "status": "pending", "ready": true, ... }],
  "edges": [{ "from": "T-1", "to": "T-2" }],
  "parallel_batches": [["T-1", "T-3"], ["T-2"]],
  "_summary": { "can_parallel": 2, "batches_needed": 2 }
}
```

### `ccw issue next <item_id>`
Returns full task definition for the specified item:
```json
{
  "item_id": "T-1",
  "issue_id": "GH-123",
  "task": { "id": "T1", "title": "...", "implementation": [...], ... },
  "context": { "relevant_files": [...] }
}
```

### `ccw issue done <item_id>`
Marks task completed/failed and updates queue state.
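
Both reporting forms used in the dispatch prompt above, shown here with illustrative values:

```bash
# Success: attach a result payload
ccw issue done T-1 --result '{"summary": "auth middleware added, tests green"}'

# Failure: record the reason so the item can be retried later
ccw issue done T-1 --fail --reason "unit tests could not be made to pass"
```
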
## Error Handling

| Error | Resolution |
|-------|------------|
| No queue | Run /issue:queue first |
| No ready tasks | Dependencies blocked, check DAG |
| Executor timeout | Marked as executing, can resume |
| Task failure | Use `ccw issue retry` to reset |

## Troubleshooting

### Check DAG Status
```bash
ccw issue queue dag | jq '.parallel_batches'
```
### Resume Interrupted Execution
Executors in `executing` status will be resumed automatically when calling `ccw issue next <item_id>`.
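
For example, to find items stuck in `executing` and hand one back to an executor (the `jq` filter reads the `nodes` fields shown in the DAG contract above; `T-2` is illustrative):

```bash
# List items currently marked executing
ccw issue queue dag | jq -r '.nodes[] | select(.status == "executing") | .id'

# Re-dispatch one of them; the same task definition is returned again
ccw issue next T-2
```
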

### Retry Failed Tasks
```bash
ccw issue retry      # Reset all failed to pending
/issue:execute       # Re-execute
```
## Related Commands

- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue dag` - View dependency graph
- `ccw issue retry` - Reset failed tasks