feat: Enhance issue management to support solution-level queues

- Added support for solution-level queues in the issue management system.
- Updated interfaces to include solution-specific properties such as `approach`, `task_count`, and `files_touched`.
- Modified queue handling to differentiate between task-level and solution-level items.
- Adjusted rendering logic in the dashboard to display solutions and their associated tasks correctly.
- Enhanced queue statistics and conflict resolution to accommodate the new solution structure.
- Updated actions (next, done, retry) to handle both tasks and solutions seamlessly.
Author: catlog22
Date: 2025-12-28 13:21:34 +08:00
Commit: 2eaefb61ab (parent: 4c6b28030f)
6 changed files with 828 additions and 491 deletions


@@ -1,31 +1,31 @@
---
name: issue-queue-agent
description: |
Solution ordering agent for queue formation with dependency analysis and conflict resolution.
Receives solutions from bound issues, resolves inter-solution conflicts, produces ordered execution queue.
Examples:
- Context: Single issue queue
user: "Order tasks for GH-123"
user: "Order solutions for GH-123"
assistant: "I'll analyze dependencies and generate execution queue"
- Context: Multi-issue queue with conflicts
user: "Order tasks for GH-123, GH-124"
assistant: "I'll detect conflicts, resolve ordering, and assign groups"
user: "Order solutions for GH-123, GH-124"
assistant: "I'll detect file conflicts between solutions, resolve ordering, and assign groups"
color: orange
---
## Overview
**Agent Role**: Queue formation agent that transforms solutions from bound issues into an ordered execution queue. Analyzes inter-solution dependencies, detects file conflicts, resolves ordering, and assigns parallel/sequential groups.
**Core Capabilities**:
- Inter-solution dependency DAG construction
- File conflict detection between solutions (based on files_touched intersection)
- Conflict resolution with semantic ordering rules
- Priority calculation (0.0-1.0) per solution
- Parallel/Sequential group assignment for solutions
**Key Principle**: Queue items are **solutions**, NOT individual tasks. Each executor receives a complete solution with all its tasks.
---
@@ -35,33 +35,31 @@ color: orange
```javascript
{
solutions: [{
  issue_id: string, // e.g., "ISS-20251227-001"
  solution_id: string, // e.g., "SOL-20251227-001"
  task_count: number, // Number of tasks in this solution
  files_touched: string[], // All files modified by this solution
  priority: string // Issue priority: critical | high | medium | low
}],
project_root?: string,
rebuild?: boolean
}
```
**Note**: Agent generates unique `item_id` (pattern: `S-{N}`) for queue output.
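For illustration only, an input payload matching this contract might look like the following sketch (all IDs and file paths are hypothetical):
```javascript
// Hypothetical agent input (illustrative; mirrors the contract above).
const input = {
  solutions: [
    {
      issue_id: "ISS-20251227-001",
      solution_id: "SOL-20251227-001",
      task_count: 3,
      files_touched: ["src/auth.ts", "src/utils.ts"],
      priority: "high"
    },
    {
      issue_id: "ISS-20251227-002",
      solution_id: "SOL-20251227-002",
      task_count: 2,
      files_touched: ["src/api.ts"],
      priority: "medium"
    }
  ],
  project_root: "/path/to/project", // hypothetical path
  rebuild: false
};
```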
### 1.2 Execution Flow
```
Phase 1: Solution Analysis (20%)
  | Parse solutions, collect files_touched, build DAG
Phase 2: Conflict Detection (30%)
  | Identify file overlaps between solutions
Phase 3: Conflict Resolution (25%)
  | Apply ordering rules, update DAG
Phase 4: Ordering & Grouping (25%)
  | Topological sort, assign parallel/sequential groups
```
---
@@ -71,26 +69,16 @@ Phase 4: Ordering & Grouping (25%)
### 2.1 Dependency Graph
```javascript
function buildDependencyGraph(solutions) {
  const graph = new Map()
  const fileModifications = new Map()

  for (const sol of solutions) {
    graph.set(sol.solution_id, { ...sol, inDegree: 0, outEdges: [] })
    for (const file of sol.files_touched || []) {
      if (!fileModifications.has(file)) fileModifications.set(file, [])
      fileModifications.get(file).push(sol.solution_id)
    }
  }
@@ -100,15 +88,15 @@ function buildDependencyGraph(tasks) {
### 2.2 Conflict Detection
A conflict exists when multiple solutions modify the same file:
```javascript
function detectConflicts(fileModifications, graph) {
  return [...fileModifications.entries()]
    .filter(([_, solutions]) => solutions.length > 1)
    .map(([file, solutions]) => ({
      type: 'file_conflict',
      file,
      solutions,
      resolved: false
    }))
}
@@ -118,42 +106,35 @@ function detectConflicts(fileModifications, graph) {
| Priority | Rule | Example |
|----------|------|---------|
| 1 | Higher issue priority first | critical > high > medium > low |
| 2 | Foundation solutions first | Solutions with fewer dependencies |
| 3 | More tasks = higher priority | Solutions with larger impact |
| 4 | Create before extend | S1: Creates module → S2: Extends it |
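As a rough sketch (not the agent's actual implementation), rules 1-3 can be expressed as a comparator between two conflicting solutions; rule 4 needs semantic knowledge of what each solution creates and is left to the agent:
```javascript
// Sketch: decide which of two conflicting solutions runs first.
// Field names follow the input contract above; the helper itself is hypothetical.
const PRIORITY_RANK = { critical: 0, high: 1, medium: 2, low: 3 };

function orderConflictingSolutions(a, b) {
  // Rule 1: higher issue priority first
  const byPriority = PRIORITY_RANK[a.priority] - PRIORITY_RANK[b.priority];
  if (byPriority !== 0) return byPriority;
  // Rule 2: foundation solutions (fewer dependencies) first
  const byDeps = (a.depends_on?.length || 0) - (b.depends_on?.length || 0);
  if (byDeps !== 0) return byDeps;
  // Rule 3: larger solutions (more tasks) first
  return (b.task_count || 0) - (a.task_count || 0);
}
```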
### 2.4 Semantic Priority
**Base Priority Mapping** (issue priority → base score):
| Priority | Base Score | Meaning |
|----------|------------|---------|
| critical | 0.9 | Highest |
| high | 0.7 | High |
| medium | 0.5 | Medium |
| low | 0.3 | Low |
**Task-count Boost** (applied to base score):
| Factor | Boost |
|--------|-------|
| task_count >= 5 | +0.1 |
| task_count >= 3 | +0.05 |
| Fewer dependencies | +0.05 |
**Formula**: `semantic_priority = clamp(baseScore + sum(boosts), 0.0, 1.0)`
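A minimal sketch of this formula, assuming each boost from the table above is applied at most once (the helper name and the "fewer dependencies" interpretation are illustrative):
```javascript
// Sketch: semantic priority per the base mapping and boost table above.
const BASE_SCORE = { critical: 0.9, high: 0.7, medium: 0.5, low: 0.3 };

function semanticPriority(sol) {
  let score = BASE_SCORE[sol.priority] ?? 0.5;
  if (sol.task_count >= 5) score += 0.1;                   // larger impact
  else if (sol.task_count >= 3) score += 0.05;
  if ((sol.depends_on?.length || 0) === 0) score += 0.05;  // fewer dependencies
  return Math.min(1.0, Math.max(0.0, score));              // clamp to [0.0, 1.0]
}
```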
### 2.5 Group Assignment
- **Parallel (P*)**: Solutions with no file overlaps between them
- **Sequential (S*)**: Solutions that share files must run in order
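A simplified sketch of the assignment, assuming dependency edges were already derived from file conflicts (so conflict-free solutions end up with an empty `depends_on`):
```javascript
// Sketch: one parallel group for conflict-free solutions, sequential groups for the rest.
function assignGroups(orderedSolutions) {
  const parallel = { id: 'P1', type: 'parallel', solutions: [] };
  const sequential = [];
  for (const sol of orderedSolutions) {
    if ((sol.depends_on?.length || 0) === 0) {
      parallel.solutions.push(sol.item_id);
    } else {
      // Each dependent solution runs after the solutions it conflicts with.
      sequential.push({ id: `S${sequential.length + 2}`, type: 'sequential', solutions: [sol.item_id] });
    }
  }
  return [parallel, ...sequential];
}
```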
---
@@ -163,23 +144,61 @@ function detectConflicts(fileModifications, graph) {
**Queue files**:
```
.workflow/issues/queues/{queue-id}.json # Full queue with solutions, conflicts, groups
.workflow/issues/queues/index.json # Update with new queue entry
```
Queue ID format: `QUE-YYYYMMDD-HHMMSS` (UTC timestamp)
Queue Item ID format: `S-N` (S-1, S-2, S-3, ...)
Schema: `cat .claude/workflows/cli-templates/schemas/queue-schema.json`
### 3.2 Queue File Schema
```json
{
"id": "QUE-20251227-143000",
"status": "active",
"solutions": [
{
"item_id": "S-1",
"issue_id": "ISS-20251227-003",
"solution_id": "SOL-20251227-003",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.8,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts", "src/utils.ts"],
"task_count": 3
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"solutions": ["S-1", "S-3"],
"resolution": "sequential",
"resolution_order": ["S-1", "S-3"],
"rationale": "S-1 creates auth module, S-3 extends it"
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"], "solution_count": 2 },
{ "id": "S2", "type": "sequential", "solutions": ["S-3"], "solution_count": 1 }
]
}
```
### 3.3 Return Summary
```json
{
"queue_id": "QUE-20251227-143000",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123", "GH-124"]
"issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```
@@ -189,11 +208,11 @@ Schema: `cat .claude/workflows/cli-templates/schemas/queue-schema.json`
### 4.1 Validation Checklist
- [ ] No circular dependencies between solutions
- [ ] All file conflicts resolved
- [ ] Solutions in same parallel group have NO file overlaps
- [ ] Semantic priority calculated for all solutions
- [ ] Dependencies ordered correctly
### 4.2 Error Handling
@@ -201,27 +220,28 @@ Schema: `cat .claude/workflows/cli-templates/schemas/queue-schema.json`
|----------|--------|
| Circular dependency | Abort, report cycles |
| Resolution creates cycle | Flag for manual resolution |
| Missing solution reference | Skip and warn |
| Empty solution list | Return empty queue |
### 4.3 Guidelines
**ALWAYS**:
1. Build dependency graph before ordering
2. Detect file overlaps between solutions
3. Apply resolution rules consistently
4. Calculate semantic priority for all solutions
5. Include rationale for conflict resolutions
6. Validate ordering before output
**NEVER**:
1. Execute solutions (ordering only)
2. Ignore circular dependencies
3. Skip conflict detection
4. Output invalid DAG
5. Merge conflicting solutions in parallel group
6. Split tasks from their solution
**OUTPUT**:
1. Write `.workflow/issues/queues/{queue-id}.json`
2. Update `.workflow/issues/queues/index.json`
3. Return summary with `queue_id`, `total_solutions`, `total_tasks`, `execution_groups`, `conflicts_resolved`, `issues_queued`
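As a sketch of output step 2, assuming the index keeps its entries in a `queues` array (the exact key is not shown in this excerpt) and using Node's `fs` purely for illustration:
```javascript
// Sketch: append the new queue entry to queues/index.json.
const fs = require('fs');

function appendQueueToIndex(indexPath, queue) {
  const index = fs.existsSync(indexPath)
    ? JSON.parse(fs.readFileSync(indexPath, 'utf8'))
    : { queues: [] };                      // assumed index shape
  index.queues.push({
    id: queue.id,
    status: queue.status,
    issue_ids: [...new Set(queue.solutions.map(s => s.issue_id))],
    total_solutions: queue.solutions.length,
    completed_solutions: 0,
    created_at: new Date().toISOString()
  });
  fs.writeFileSync(indexPath, JSON.stringify(index, null, 2));
}
```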


@@ -1,6 +1,6 @@
---
name: execute
description: Execute queue with codex using DAG-based parallel orchestration (solution-level)
argument-hint: "[--parallel <n>] [--executor codex|gemini|agent]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
@@ -9,13 +9,14 @@ allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
## Overview
Minimal orchestrator that dispatches **solution IDs** to executors. Each executor receives a complete solution with all its tasks.
**Design Principles:**
- `queue dag` → returns parallel batches with solution IDs (S-1, S-2, ...)
- `detail <id>` → READ-ONLY solution fetch (returns full solution with all tasks)
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**
## Usage
@@ -37,18 +38,19 @@ Minimal orchestrator that dispatches task IDs to executors. Uses read-only `deta
```
Phase 1: Get DAG
└─ ccw issue queue dag → { parallel_batches: [["T-1","T-2","T-3"], ...] }
└─ ccw issue queue dag → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
Phase 2: Dispatch Parallel Batch
├─ For each solution ID in batch (parallel):
│ ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
│ ├─ Executor gets FULL SOLUTION with all tasks
│ ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
│ ├─ Executor tests + commits per task
│ └─ Executor calls: ccw issue done <id>
└─ Wait for batch completion
Phase 3: Next Batch
└─ ccw issue queue dag → check for newly-ready solutions
```
## Implementation
@@ -61,15 +63,15 @@ const dagJson = Bash(`ccw issue queue dag`).trim();
const dag = JSON.parse(dagJson);
if (dag.error || dag.ready_count === 0) {
console.log(dag.error || 'No solutions ready for execution');
console.log('Use /issue:queue to form a queue first');
return;
}
console.log(`
## Queue DAG (Solution-Level)
- Total Solutions: ${dag.total}
- Ready: ${dag.ready_count}
- Completed: ${dag.completed_count}
- Parallel in batch 1: ${dag.parallel_batches[0]?.length || 0}
@@ -91,15 +93,15 @@ if (flags.dryRun) {
const parallelLimit = flags.parallel || 3;
const executor = flags.executor || 'codex';
// Process first batch (all solutions can run in parallel)
const batch = dag.parallel_batches[0] || [];
// Initialize TodoWrite
TodoWrite({
todos: batch.map(id => ({
content: `Execute solution ${id}`,
status: 'pending',
activeForm: `Executing solution ${id}`
}))
});
@@ -110,12 +112,12 @@ for (let i = 0; i < batch.length; i += parallelLimit) {
}
for (const chunk of chunks) {
console.log(`\n### Executing Solutions: ${chunk.join(', ')}`);
// Launch all in parallel
const executions = chunk.map(solutionId => {
updateTodo(solutionId, 'in_progress');
return dispatchExecutor(solutionId, executor);
});
await Promise.all(executions);
@@ -126,51 +128,55 @@ for (const chunk of chunks) {
### Executor Dispatch
```javascript
function dispatchExecutor(solutionId, executorType) {
// Executor fetches FULL SOLUTION via READ-ONLY detail command
// Executor handles all tasks within solution sequentially
// Then reports completion via done command
const prompt = `
## Execute Solution ${solutionId}
### Step 1: Get Solution (read-only)
\`\`\`bash
ccw issue detail ${solutionId}
\`\`\`
### Step 2: Execute All Tasks Sequentially
The detail command returns a FULL SOLUTION with all tasks.
Execute each task in order (T1 → T2 → T3 → ...):
For each task:
1. Follow task.implementation steps
2. Run task.test commands
3. Verify task.acceptance criteria
4. Commit using task.commit specification
### Step 3: Report Completion
When ALL tasks in solution are done:
\`\`\`bash
ccw issue done ${solutionId} --result '{"summary": "...", "files_modified": [...], "tasks_completed": N}'
\`\`\`
If any task failed:
\`\`\`bash
ccw issue done ${solutionId} --fail --reason "Task TX failed: ..."
\`\`\`
`;
if (executorType === 'codex') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${itemId}`,
{ timeout: 3600000, run_in_background: true }
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}`,
{ timeout: 7200000, run_in_background: true } // 2hr for full solution
);
} else if (executorType === 'gemini') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${itemId}`,
{ timeout: 1800000, run_in_background: true }
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}`,
{ timeout: 3600000, run_in_background: true }
);
} else {
return Task({
subagent_type: 'code-developer',
run_in_background: false,
description: `Execute solution ${solutionId}`,
prompt: prompt
});
}
@@ -186,7 +192,7 @@ const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
console.log(`
## Batch Complete
- Solutions Completed: ${refreshedDag.completed_count}/${refreshedDag.total}
- Next ready: ${refreshedDag.ready_count}
`);
@@ -198,68 +204,86 @@ if (refreshedDag.ready_count > 0) {
## Parallel Execution Model
```
┌──────────────────────────────────────────────────────────┐
│ Orchestrator
├──────────────────────────────────────────────────────────┤
│ 1. ccw issue queue dag
│    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
│
│ 2. Dispatch batch 1 (parallel):
│    ┌──────────────────────┐  ┌──────────────────────┐
│    │ Executor 1           │  │ Executor 2           │
│    │ detail S-1           │  │ detail S-2           │
│    │ → gets full solution │  │ → gets full solution │
│    │ [T1→T2→T3 sequential]│  │ [T1→T2 sequential]   │
│    │ done S-1             │  │ done S-2             │
│    └──────────────────────┘  └──────────────────────┘
│
│ 3. ccw issue queue dag (refresh)
│    → S-3 now ready (S-1 completed, file conflict resolved)
└──────────────────────────────────────────────────────────┘
```
**Why this works for parallel:**
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in same batch have NO file conflicts
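For illustration, an orchestrator could assert this last invariant before dispatching a batch. The sketch assumes `files_touched` is available for each item (it is stored in the queue file; the `queue dag` output below does not include it):
```javascript
// Sketch: defensive check that no two solutions in a batch touch the same file.
function assertNoFileOverlaps(batchItems) {
  const claimed = new Map(); // file path -> item_id that first claimed it
  for (const item of batchItems) {
    for (const file of item.files_touched || []) {
      if (claimed.has(file)) {
        throw new Error(
          `Batch invariant violated: ${claimed.get(file)} and ${item.item_id} both touch ${file}`
        );
      }
      claimed.set(file, item.item_id);
    }
  }
}
```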
## CLI Endpoint Contract
### `ccw issue queue dag`
Returns dependency graph with parallel batches (solution-level):
```json
{
"queue_id": "QUE-...",
"total": 10,
"ready_count": 3,
"completed_count": 2,
"nodes": [{ "id": "T-1", "status": "pending", "ready": true, ... }],
"parallel_batches": [["T-1", "T-2", "T-3"], ["T-4", "T-5"]]
"total": 3,
"ready_count": 2,
"completed_count": 0,
"nodes": [
{ "id": "S-1", "issue_id": "ISS-xxx", "status": "pending", "ready": true, "task_count": 3 },
{ "id": "S-2", "issue_id": "ISS-yyy", "status": "pending", "ready": true, "task_count": 2 },
{ "id": "S-3", "issue_id": "ISS-zzz", "status": "pending", "ready": false, "depends_on": ["S-1"] }
],
"parallel_batches": [["S-1", "S-2"], ["S-3"]]
}
```
### `ccw issue detail <item_id>`
Returns FULL SOLUTION with all tasks (READ-ONLY):
```json
{
"item_id": "T-1",
"issue_id": "GH-123",
"item_id": "S-1",
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx",
"status": "pending",
"task": { "id": "T1", "implementation": [...], "test": {...}, ... },
"context": { "relevant_files": [...] }
"solution": {
"id": "SOL-xxx",
"approach": "...",
"tasks": [
{ "id": "T1", "title": "...", "implementation": [...], "test": {...} },
{ "id": "T2", "title": "...", "implementation": [...], "test": {...} },
{ "id": "T3", "title": "...", "implementation": [...], "test": {...} }
],
"exploration_context": { "relevant_files": [...] }
},
"execution_hints": { "executor": "codex", "estimated_minutes": 180 }
}
```
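On the executor side, processing such a payload might look like this sketch; `runTask` is a hypothetical helper standing in for the implement/test/commit cycle described in the dispatch prompt:
```javascript
// Sketch: execute all tasks of one solution strictly in order.
async function executeSolution(detail, runTask) {
  const tasks = detail.solution?.tasks || [];
  for (const task of tasks) {
    const ok = await runTask(task);  // implement, test, commit per task spec
    if (!ok) {
      return { failed: true, reason: `Task ${task.id} failed` };
    }
  }
  return { failed: false, tasks_completed: tasks.length };
}
```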
### `ccw issue done <item_id>`
Marks solution completed/failed, updates queue state, checks for queue completion.
## Error Handling
| Error | Resolution |
|-------|------------|
| No queue | Run /issue:queue first |
| No ready solutions | Dependencies blocked, check DAG |
| Executor timeout | Solution not marked done, can retry |
| Solution failure | Use `ccw issue retry` to reset |
| Partial task failure | Executor reports which task failed via `done --fail` |
## Related Commands


@@ -1,6 +1,6 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent (solution-level)
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---
@@ -9,39 +9,43 @@ allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
## Overview
Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves **inter-solution** conflicts, and creates an ordered execution queue at **solution level**.
**Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks.
## Output Requirements
**Generate Files:**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with solutions, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry
**Return Summary:**
```json
{
"queue_id": "QUE-20251227-143000",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123", "GH-124"]
"issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```
**Completion Criteria:**
- [ ] Queue JSON generated with valid DAG (no cycles between solutions)
- [ ] All inter-solution file conflicts resolved with rationale
- [ ] Semantic priority calculated for each solution
- [ ] Execution groups assigned (parallel P* / sequential S*)
- [ ] Issue statuses updated to `queued` via `ccw issue update`
## Core Capabilities
- **Agent-driven**: issue-queue-agent handles all ordering logic
- **Solution-level granularity**: Queue items are solutions, not tasks
- Inter-solution dependency DAG (based on file conflicts)
- File conflict detection between solutions
- Semantic priority calculation per solution (0.0-1.0)
- Parallel/Sequential group assignment for solutions
## Storage Structure (Queue History)
@@ -66,20 +70,75 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
{
"id": "QUE-20251227-143000",
"status": "active",
"issue_ids": ["GH-123", "GH-124"],
"total_tasks": 8,
"completed_tasks": 3,
"issue_ids": ["ISS-xxx", "ISS-yyy"],
"total_solutions": 3,
"completed_solutions": 1,
"created_at": "2025-12-27T14:30:00Z"
}
]
}
```
### Queue File Schema (Solution-Level)
```json
{
"id": "QUE-20251227-143000",
"status": "active",
"solutions": [
{
"item_id": "S-1",
"issue_id": "ISS-20251227-003",
"solution_id": "SOL-20251227-003",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.8,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts", "src/utils.ts"],
"task_count": 3
},
{
"item_id": "S-2",
"issue_id": "ISS-20251227-001",
"solution_id": "SOL-20251227-001",
"status": "pending",
"execution_order": 2,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.7,
"assigned_executor": "codex",
"files_touched": ["src/api.ts"],
"task_count": 2
},
{
"item_id": "S-3",
"issue_id": "ISS-20251227-002",
"solution_id": "SOL-20251227-002",
"status": "pending",
"execution_order": 3,
"execution_group": "S2",
"depends_on": ["S-1"],
"semantic_priority": 0.5,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts"],
"task_count": 4
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"solutions": ["S-1", "S-3"],
"resolution": "sequential",
"resolution_order": ["S-1", "S-3"],
"rationale": "S-1 creates auth module, S-3 extends it"
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"] },
{ "id": "S2", "type": "sequential", "solutions": ["S-3"] }
]
}
```
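For illustration, the counts used by the index and status displays can be derived from such a queue file; the helper below is a hypothetical sketch, not part of the command:
```javascript
// Sketch: derive summary statistics from a solution-level queue file.
function queueStats(queue) {
  const solutions = queue.solutions || [];
  return {
    total_solutions: solutions.length,
    total_tasks: solutions.reduce((sum, s) => sum + (s.task_count || 0), 0),
    completed_solutions: solutions.filter(s => s.status === 'completed').length,
    conflicts_resolved: (queue.conflicts || []).filter(c => c.resolution).length
  };
}
```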
@@ -116,21 +175,22 @@ Phase 1: Solution Loading
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
├─ Collect files_touched from all tasks in solution
└─ Build solution objects (NOT individual tasks)
Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all solutions
├─ Agent performs:
│ ├─ Detect file overlaps between solutions
│ ├─ Build dependency DAG from file conflicts
│ ├─ Detect circular dependencies
│ ├─ Resolve conflicts using priority rules
│ ├─ Calculate semantic priority per solution
│ └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered solutions (S-1, S-2, ...)
Phase 5: Queue Output
├─ Write queue.json with solutions array
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```
@@ -158,8 +218,8 @@ if (plannedIssues.length === 0) {
return;
}
// Load bound solutions (not individual tasks)
const allSolutions = [];
for (const issue of plannedIssues) {
const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
@@ -175,74 +235,77 @@ for (const issue of plannedIssues) {
continue;
}
// Collect all files touched by this solution
const filesTouched = new Set();
for (const task of boundSol.tasks || []) {
  for (const mp of task.modification_points || []) {
    filesTouched.add(mp.file);
  }
}

allSolutions.push({
  issue_id: issue.id,
  solution_id: issue.bound_solution_id,
  task_count: boundSol.tasks?.length || 0,
  files_touched: Array.from(filesTouched),
  priority: issue.priority || 'medium'
});
}

console.log(`Loaded ${allSolutions.length} solutions from ${plannedIssues.length} issues`);
```
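Before launching the agent, a quick sanity check on the loaded solutions can catch items that cannot participate in conflict detection. This is an optional sketch that continues the snippet above (it assumes the `allSolutions` array built there):
```javascript
// Sketch: warn about solutions with no files_touched (conflict detection is blind to them).
const blindSolutions = allSolutions.filter(s => s.files_touched.length === 0);
if (blindSolutions.length > 0) {
  console.warn(
    `Warning: ${blindSolutions.length} solution(s) have no files_touched: ` +
    blindSolutions.map(s => s.solution_id).join(', ')
  );
}
```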
### Phase 2-4: Agent-Driven Queue Formation
```javascript
// Build minimal prompt - agent orders SOLUTIONS, not tasks
const agentPrompt = `
## Order Solutions
**Solutions**: ${allSolutions.length} from ${plannedIssues.length} issues
**Project Root**: ${process.cwd()}
### Input (Solution-Level)
\`\`\`json
${JSON.stringify(allSolutions, null, 2)}
\`\`\`
### Steps
1. Parse solutions: Extract solution IDs, files_touched, task_count, priority
2. Detect conflicts: Find file overlaps between solutions (files_touched intersection)
3. Build DAG: Create dependency edges where solutions share files
4. Detect cycles: Verify no circular dependencies (abort if found)
5. Resolve conflicts: Apply ordering rules based on action types
6. Calculate priority: Compute semantic priority (0.0-1.0) per solution
7. Assign groups: Parallel (P*) for no-conflict, Sequential (S*) for conflicts
8. Generate queue: Write queue JSON with ordered solutions
9. Update index: Update queues/index.json with new queue entry
### Rules
- **Solution Granularity**: Queue items are solutions, NOT individual tasks
- **DAG Validity**: Output must be valid DAG with no circular dependencies
- **Conflict Resolution**: All file conflicts must be resolved with rationale
- **Conflict Detection**: Two solutions conflict if files_touched intersect
- **Ordering Priority**:
  1. Higher issue priority first (critical > high > medium > low)
  2. Fewer dependencies first (foundation solutions)
  3. More tasks = higher priority (larger impact)
- **Parallel Safety**: Solutions in same parallel group must have NO file overlaps
- **Queue Item ID Format**: \`S-N\` (S-1, S-2, S-3, ...)
- **Queue ID Format**: \`QUE-YYYYMMDD-HHMMSS\` (UTC timestamp)
### Generate Files
1. \`.workflow/issues/queues/\${queueId}.json\` - Full queue with solutions array
2. \`.workflow/issues/queues/index.json\` - Update with new entry
### Return Summary
\`\`\`json
{
"queue_id": "QUE-YYYYMMDD-HHMMSS",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123"]
"issues_queued": ["ISS-xxx"]
}
\`\`\`
`;
@@ -250,7 +313,7 @@ ${JSON.stringify(allTasks.map(t => ({
const result = Task(
subagent_type="issue-queue-agent",
run_in_background=false,
description=`Order ${allSolutions.length} solutions`,
prompt=agentPrompt
);
@@ -264,6 +327,7 @@ const summary = JSON.parse(result);
console.log(`
## Queue Formed: ${summary.queue_id}
**Solutions**: ${summary.total_solutions}
**Tasks**: ${summary.total_tasks}
**Issues**: ${summary.issues_queued.join(', ')}
**Groups**: ${summary.execution_groups.map(g => `${g.id}(${g.count})`).join(', ')}


@@ -1,128 +1,63 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Execution Queue Schema",
"description": "Global execution queue for all issue tasks",
"description": "Execution queue supporting both task-level (T-N) and solution-level (S-N) granularity",
"type": "object",
"properties": {
"queue": {
"id": {
"type": "string",
"pattern": "^QUE-[0-9]{8}-[0-9]{6}$",
"description": "Queue ID in format QUE-YYYYMMDD-HHMMSS"
},
"status": {
"type": "string",
"enum": ["active", "paused", "completed", "archived"],
"default": "active"
},
"issue_ids": {
"type": "array",
"description": "Ordered list of tasks to execute",
"items": { "type": "string" },
"description": "Issues included in this queue"
},
"solutions": {
"type": "array",
"description": "Solution-level queue items (preferred for new queues)",
"items": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "task_id", "status"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^T-[0-9]+$",
"description": "Unique queue item identifier"
},
"issue_id": {
"type": "string",
"description": "Source issue ID"
},
"solution_id": {
"type": "string",
"description": "Source solution ID"
},
"task_id": {
"type": "string",
"description": "Task ID within solution"
},
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"execution_order": {
"type": "integer",
"description": "Order in execution sequence"
},
"execution_group": {
"type": "string",
"description": "Parallel execution group ID (e.g., P1, S1)"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"description": "Queue IDs this task depends on"
},
"semantic_priority": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Semantic importance score (0.0-1.0)"
},
"assigned_executor": {
"type": "string",
"enum": ["codex", "gemini", "agent"]
},
"queued_at": {
"type": "string",
"format": "date-time"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"completed_at": {
"type": "string",
"format": "date-time"
},
"result": {
"type": "object",
"description": "Execution result",
"properties": {
"files_modified": { "type": "array", "items": { "type": "string" } },
"files_created": { "type": "array", "items": { "type": "string" } },
"summary": { "type": "string" },
"commit_hash": { "type": "string" }
}
},
"failure_reason": {
"type": "string"
}
}
"$ref": "#/definitions/solutionItem"
}
},
"tasks": {
"type": "array",
"description": "Task-level queue items (legacy format)",
"items": {
"$ref": "#/definitions/taskItem"
}
},
"conflicts": {
"type": "array",
"description": "Detected conflicts between tasks",
"description": "Detected conflicts between items",
"items": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Queue IDs involved in conflict"
},
"file": {
"type": "string",
"description": "Conflicting file path"
},
"resolution": {
"type": "string",
"enum": ["sequential", "merge", "manual"]
},
"resolution_order": {
"type": "array",
"items": { "type": "string" }
},
"resolved": {
"type": "boolean",
"default": false
}
}
"$ref": "#/definitions/conflict"
}
},
"execution_groups": {
"type": "array",
"description": "Parallel/Sequential execution groups",
"items": {
"$ref": "#/definitions/executionGroup"
}
},
"_metadata": {
"type": "object",
"properties": {
"version": { "type": "string", "default": "1.0" },
"total_items": { "type": "integer" },
"version": { "type": "string", "default": "2.0" },
"queue_type": {
"type": "string",
"enum": ["solution", "task"],
"description": "Queue granularity level"
},
"total_solutions": { "type": "integer" },
"total_tasks": { "type": "integer" },
"pending_count": { "type": "integer" },
"ready_count": { "type": "integer" },
"executing_count": { "type": "integer" },
@@ -132,5 +67,187 @@
"last_updated": { "type": "string", "format": "date-time" }
}
}
},
"definitions": {
"solutionItem": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "status", "task_count", "files_touched"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^S-[0-9]+$",
"description": "Solution-level queue item ID (S-1, S-2, ...)"
},
"issue_id": {
"type": "string",
"description": "Source issue ID"
},
"solution_id": {
"type": "string",
"description": "Bound solution ID"
},
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"task_count": {
"type": "integer",
"minimum": 1,
"description": "Number of tasks in this solution"
},
"files_touched": {
"type": "array",
"items": { "type": "string" },
"description": "All files modified by this solution"
},
"execution_order": {
"type": "integer",
"description": "Order in execution sequence"
},
"execution_group": {
"type": "string",
"description": "Parallel (P*) or Sequential (S*) group ID"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs this item depends on"
},
"semantic_priority": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Semantic importance score (0.0-1.0)"
},
"assigned_executor": {
"type": "string",
"enum": ["codex", "gemini", "agent"]
},
"queued_at": { "type": "string", "format": "date-time" },
"started_at": { "type": "string", "format": "date-time" },
"completed_at": { "type": "string", "format": "date-time" },
"result": {
"type": "object",
"properties": {
"summary": { "type": "string" },
"files_modified": { "type": "array", "items": { "type": "string" } },
"tasks_completed": { "type": "integer" },
"commit_hashes": { "type": "array", "items": { "type": "string" } }
}
},
"failure_reason": { "type": "string" }
}
},
"taskItem": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "task_id", "status"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^T-[0-9]+$",
"description": "Task-level queue item ID (T-1, T-2, ...)"
},
"issue_id": { "type": "string" },
"solution_id": { "type": "string" },
"task_id": { "type": "string" },
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"execution_order": { "type": "integer" },
"execution_group": { "type": "string" },
"depends_on": { "type": "array", "items": { "type": "string" } },
"semantic_priority": { "type": "number", "minimum": 0, "maximum": 1 },
"assigned_executor": { "type": "string", "enum": ["codex", "gemini", "agent"] },
"queued_at": { "type": "string", "format": "date-time" },
"started_at": { "type": "string", "format": "date-time" },
"completed_at": { "type": "string", "format": "date-time" },
"result": {
"type": "object",
"properties": {
"files_modified": { "type": "array", "items": { "type": "string" } },
"files_created": { "type": "array", "items": { "type": "string" } },
"summary": { "type": "string" },
"commit_hash": { "type": "string" }
}
},
"failure_reason": { "type": "string" }
}
},
"conflict": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
},
"file": {
"type": "string",
"description": "Conflicting file path"
},
"solutions": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs involved (for solution-level queues)"
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Task IDs involved (for task-level queues)"
},
"resolution": {
"type": "string",
"enum": ["sequential", "merge", "manual"]
},
"resolution_order": {
"type": "array",
"items": { "type": "string" },
"description": "Execution order to resolve conflict"
},
"rationale": {
"type": "string",
"description": "Explanation of resolution decision"
},
"resolved": {
"type": "boolean",
"default": false
}
}
},
"executionGroup": {
"type": "object",
"required": ["id", "type"],
"properties": {
"id": {
"type": "string",
"pattern": "^[PS][0-9]+$",
"description": "Group ID (P1, P2 for parallel, S1, S2 for sequential)"
},
"type": {
"type": "string",
"enum": ["parallel", "sequential"]
},
"solutions": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs in this group"
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Task IDs in this group (legacy)"
},
"solution_count": {
"type": "integer",
"description": "Number of solutions in group"
},
"task_count": {
"type": "integer",
"description": "Number of tasks in group (legacy)"
}
}
}
}
}