feat: Enhance issue management to support solution-level queues

- Added support for solution-level queues in the issue management system.
- Updated interfaces to include solution-specific properties such as `approach`, `task_count`, and `files_touched`.
- Modified queue handling to differentiate between task-level and solution-level items.
- Adjusted rendering logic in the dashboard to display solutions and their associated tasks correctly.
- Enhanced queue statistics and conflict resolution to accommodate the new solution structure.
- Updated actions (next, done, retry) to handle both tasks and solutions seamlessly.
catlog22
2025-12-28 13:21:34 +08:00
parent 4c6b28030f
commit 2eaefb61ab
6 changed files with 828 additions and 491 deletions


@@ -1,31 +1,31 @@
---
name: issue-queue-agent
description: |
Solution ordering agent for queue formation with dependency analysis and conflict resolution.
Receives solutions from bound issues, resolves inter-solution conflicts, produces ordered execution queue.
Examples:
- Context: Single issue queue
user: "Order solutions for GH-123"
assistant: "I'll analyze dependencies and generate execution queue"
- Context: Multi-issue queue with conflicts
user: "Order solutions for GH-123, GH-124"
assistant: "I'll detect file conflicts between solutions, resolve ordering, and assign groups"
color: orange
---
## Overview
**Agent Role**: Queue formation agent that transforms solutions from bound issues into an ordered execution queue. Analyzes inter-solution dependencies, detects file conflicts, resolves ordering, and assigns parallel/sequential groups.
**Core Capabilities**:
- Inter-solution dependency DAG construction
- File conflict detection between solutions (based on files_touched intersection)
- Conflict resolution with semantic ordering rules
- Priority calculation (0.0-1.0) per solution
- Parallel/Sequential group assignment for solutions
**Key Principle**: Queue items are **solutions**, NOT individual tasks. Each executor receives a complete solution with all its tasks.
---
@@ -35,33 +35,31 @@ color: orange
```javascript
{
solutions: [{
issue_id: string, // e.g., "ISS-20251227-001"
solution_id: string, // e.g., "SOL-20251227-001"
task_count: number, // Number of tasks in this solution
files_touched: string[], // All files modified by this solution
priority: string // Issue priority: critical | high | medium | low
}],
project_root?: string,
rebuild?: boolean
}
```
**Note**: Agent generates unique `item_id` (pattern: `S-{N}`) for queue output.
### 1.2 Execution Flow
```
Phase 1: Solution Analysis (20%)
| Parse solutions, collect files_touched, build DAG
Phase 2: Conflict Detection (30%)
| Identify file overlaps between solutions
Phase 3: Conflict Resolution (25%)
| Apply ordering rules, update DAG
Phase 4: Ordering & Grouping (25%)
| Topological sort, assign parallel/sequential groups
```
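The Phase 1 cycle check can be sketched with Kahn's algorithm: if topological processing cannot consume every node, the leftover nodes lie on a cycle. The function name and input shape (`solution_id` plus derived `depends_on` edges) are illustrative, not part of the agent contract.

```javascript
// Kahn's algorithm as a cycle detector: count how many nodes can be processed
// in dependency order; any shortfall means a cycle exists.
function hasCycle(solutions) {
  const inDegree = new Map(solutions.map(s => [s.solution_id, 0]));
  const outEdges = new Map(solutions.map(s => [s.solution_id, []]));
  for (const s of solutions) {
    for (const dep of s.depends_on || []) {
      if (!outEdges.has(dep)) continue; // missing reference: skip (warned elsewhere)
      outEdges.get(dep).push(s.solution_id);
      inDegree.set(s.solution_id, inDegree.get(s.solution_id) + 1);
    }
  }
  const queue = [...inDegree].filter(([, d]) => d === 0).map(([id]) => id);
  let visited = 0;
  while (queue.length > 0) {
    const id = queue.shift();
    visited += 1;
    for (const next of outEdges.get(id)) {
      inDegree.set(next, inDegree.get(next) - 1);
      if (inDegree.get(next) === 0) queue.push(next);
    }
  }
  return visited < solutions.length; // unprocessed nodes lie on a cycle
}
```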
---
@@ -71,26 +69,16 @@ Phase 4: Ordering & Grouping (25%)
### 2.1 Dependency Graph
```javascript
function buildDependencyGraph(solutions) {
const graph = new Map()
const fileModifications = new Map()
for (const sol of solutions) {
graph.set(sol.solution_id, { ...sol, inDegree: 0, outEdges: [] })
for (const file of sol.files_touched || []) {
if (!fileModifications.has(file)) fileModifications.set(file, [])
fileModifications.get(file).push(sol.solution_id)
}
}
return { graph, fileModifications }
}
```
@@ -100,15 +88,15 @@ function buildDependencyGraph(tasks) {
### 2.2 Conflict Detection
Conflict when multiple solutions modify same file:
```javascript
function detectConflicts(fileModifications, graph) {
return [...fileModifications.entries()]
.filter(([_, solutions]) => solutions.length > 1)
.map(([file, solutions]) => ({
type: 'file_conflict',
file,
solutions,
resolved: false
}))
}
```
@@ -118,42 +106,35 @@ function detectConflicts(fileModifications, graph) {
| Priority | Rule | Example |
|----------|------|---------|
| 1 | Higher issue priority first | critical > high > medium > low |
| 2 | Foundation solutions first | Solutions with fewer dependencies |
| 3 | More tasks = higher priority | Solutions with larger impact |
| 4 | Create before extend | S1:Creates module -> S2:Extends it |
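Rules 1-3 can be expressed as a plain comparator; rule 4 (create-before-extend) needs semantic analysis of what each solution does and is not captured here. The function name is illustrative, and `depends_on` is assumed to hold the derived inter-solution dependency edges.

```javascript
// Comparator implementing resolution rules 1-3: lower sort index runs earlier.
const PRIORITY_RANK = { critical: 0, high: 1, medium: 2, low: 3 };

function compareSolutions(a, b) {
  // Rule 1: higher issue priority first
  const byPriority = PRIORITY_RANK[a.priority] - PRIORITY_RANK[b.priority];
  if (byPriority !== 0) return byPriority;
  // Rule 2: foundation solutions (fewer dependencies) first
  const byDeps = (a.depends_on || []).length - (b.depends_on || []).length;
  if (byDeps !== 0) return byDeps;
  // Rule 3: more tasks (larger impact) first
  return (b.task_count || 0) - (a.task_count || 0);
}
```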
### 2.4 Semantic Priority
**Base Priority Mapping** (issue priority -> base score):
| Priority | Base Score | Meaning |
|----------|------------|---------|
| critical | 0.9 | Highest |
| high | 0.7 | High |
| medium | 0.5 | Medium |
| low | 0.3 | Low |
**Task-count Boost** (applied to base score):
| Factor | Boost |
|--------|-------|
| task_count >= 5 | +0.1 |
| task_count >= 3 | +0.05 |
| Fewer dependencies | +0.05 |
**Formula**: `semantic_priority = clamp(baseScore + sum(boosts), 0.0, 1.0)`
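A minimal sketch of the formula. Two assumptions the tables do not pin down: the task-count rows are treated as tiers rather than cumulative, and the "fewer dependencies" boost is applied when a solution has zero dependency edges.

```javascript
// semantic_priority = clamp(baseScore + sum(boosts), 0.0, 1.0)
const BASE_SCORE = { critical: 0.9, high: 0.7, medium: 0.5, low: 0.3 };

function semanticPriority(sol) {
  let boost = 0;
  if (sol.task_count >= 5) boost += 0.1;        // task_count >= 5
  else if (sol.task_count >= 3) boost += 0.05;  // task_count >= 3 (assumed tier, not cumulative)
  if ((sol.depends_on || []).length === 0) boost += 0.05; // "fewer dependencies" (assumed: zero)
  const raw = (BASE_SCORE[sol.priority] ?? 0.5) + boost;
  return Math.min(1.0, Math.max(0.0, raw)); // clamp to [0.0, 1.0]
}
```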
### 2.5 Group Assignment
- **Parallel (P*)**: Solutions with no file overlaps between them
- **Sequential (S*)**: Solutions that share files must run in order
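A greedy sketch of this assignment: a solution joins the parallel group only while it overlaps with nothing already scheduled; anything sharing a file falls into its own sequential group. Function and group names are illustrative, not the agent's actual code.

```javascript
// True when two solutions touch at least one common file.
function overlaps(a, b) {
  const files = new Set(a.files_touched || []);
  return (b.files_touched || []).some(f => files.has(f));
}

function assignGroups(ordered) {
  const parallel = [];
  const sequential = [];
  for (const sol of ordered) {
    const conflictFree = [...parallel, ...sequential].every(p => !overlaps(p, sol));
    (conflictFree ? parallel : sequential).push(sol);
  }
  const groups = [];
  if (parallel.length > 0) {
    groups.push({ id: 'P1', type: 'parallel', solutions: parallel.map(s => s.solution_id) });
  }
  sequential.forEach((s, i) => {
    groups.push({ id: `S${i + 2}`, type: 'sequential', solutions: [s.solution_id] });
  });
  return groups;
}
```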
---
@@ -163,23 +144,61 @@ function detectConflicts(fileModifications, graph) {
**Queue files**:
```
.workflow/issues/queues/{queue-id}.json # Full queue with solutions, conflicts, groups
.workflow/issues/queues/index.json # Update with new queue entry
```
Queue ID format: `QUE-YYYYMMDD-HHMMSS` (UTC timestamp)
Queue Item ID format: `S-N` (S-1, S-2, S-3, ...)
Schema: `cat .claude/workflows/cli-templates/schemas/queue-schema.json`
### 3.2 Queue File Schema
```json
{
"id": "QUE-20251227-143000",
"status": "active",
"solutions": [
{
"item_id": "S-1",
"issue_id": "ISS-20251227-003",
"solution_id": "SOL-20251227-003",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.8,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts", "src/utils.ts"],
"task_count": 3
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"solutions": ["S-1", "S-3"],
"resolution": "sequential",
"resolution_order": ["S-1", "S-3"],
"rationale": "S-1 creates auth module, S-3 extends it"
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"], "solution_count": 2 },
{ "id": "S2", "type": "sequential", "solutions": ["S-3"], "solution_count": 1 }
]
}
```
### 3.3 Return Summary
```json
{
"queue_id": "QUE-20251227-143000",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```
@@ -189,11 +208,11 @@ Schema: `cat .claude/workflows/cli-templates/schemas/queue-schema.json`
### 4.1 Validation Checklist
- [ ] No circular dependencies between solutions
- [ ] All file conflicts resolved
- [ ] Solutions in same parallel group have NO file overlaps
- [ ] Semantic priority calculated for all solutions
- [ ] Dependencies ordered correctly
### 4.2 Error Handling
@@ -201,27 +220,28 @@ Schema: `cat .claude/workflows/cli-templates/schemas/queue-schema.json`
|----------|--------|
| Circular dependency | Abort, report cycles |
| Resolution creates cycle | Flag for manual resolution |
| Missing solution reference | Skip and warn |
| Empty solution list | Return empty queue |
### 4.3 Guidelines
**ALWAYS**:
1. Build dependency graph before ordering
2. Detect file overlaps between solutions
3. Apply resolution rules consistently
4. Calculate semantic priority for all solutions
5. Include rationale for conflict resolutions
6. Validate ordering before output
**NEVER**:
1. Execute solutions (ordering only)
2. Ignore circular dependencies
3. Skip conflict detection
4. Output invalid DAG
5. Merge conflicting solutions in parallel group
6. Split tasks from their solution
**OUTPUT**:
1. Write `.workflow/issues/queues/{queue-id}.json`
2. Update `.workflow/issues/queues/index.json`
3. Return summary with `queue_id`, `total_solutions`, `total_tasks`, `execution_groups`, `conflicts_resolved`, `issues_queued`


@@ -1,6 +1,6 @@
---
name: execute
description: Execute queue with codex using DAG-based parallel orchestration (solution-level)
argument-hint: "[--parallel <n>] [--executor codex|gemini|agent]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
@@ -9,13 +9,14 @@ allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
## Overview
Minimal orchestrator that dispatches **solution IDs** to executors. Each executor receives a complete solution with all its tasks.
**Design Principles:**
- `queue dag` → returns parallel batches with solution IDs (S-1, S-2, ...)
- `detail <id>` → READ-ONLY solution fetch (returns full solution with all tasks)
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**
## Usage
@@ -37,18 +38,19 @@ Minimal orchestrator that dispatches task IDs to executors. Uses read-only `deta
```
Phase 1: Get DAG
└─ ccw issue queue dag → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
Phase 2: Dispatch Parallel Batch
├─ For each solution ID in batch (parallel):
│ ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
│ ├─ Executor gets FULL SOLUTION with all tasks
│ ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
│ ├─ Executor tests + commits per task
│ └─ Executor calls: ccw issue done <id>
└─ Wait for batch completion
Phase 3: Next Batch
└─ ccw issue queue dag → check for newly-ready solutions
```
## Implementation
@@ -61,15 +63,15 @@ const dagJson = Bash(`ccw issue queue dag`).trim();
const dag = JSON.parse(dagJson);
if (dag.error || dag.ready_count === 0) {
console.log(dag.error || 'No solutions ready for execution');
console.log('Use /issue:queue to form a queue first');
return;
}
console.log(`
## Queue DAG (Solution-Level)
- Total Solutions: ${dag.total}
- Ready: ${dag.ready_count}
- Completed: ${dag.completed_count}
- Parallel in batch 1: ${dag.parallel_batches[0]?.length || 0}
@@ -91,15 +93,15 @@ if (flags.dryRun) {
const parallelLimit = flags.parallel || 3;
const executor = flags.executor || 'codex';
// Process first batch (all solutions can run in parallel)
const batch = dag.parallel_batches[0] || [];
// Initialize TodoWrite
TodoWrite({
todos: batch.map(id => ({
content: `Execute solution ${id}`,
status: 'pending',
activeForm: `Executing solution ${id}`
}))
});
@@ -110,12 +112,12 @@ for (let i = 0; i < batch.length; i += parallelLimit) {
}
for (const chunk of chunks) {
console.log(`\n### Executing Solutions: ${chunk.join(', ')}`);
// Launch all in parallel
const executions = chunk.map(solutionId => {
updateTodo(solutionId, 'in_progress');
return dispatchExecutor(solutionId, executor);
});
await Promise.all(executions);
@@ -126,51 +128,55 @@ for (const chunk of chunks) {
### Executor Dispatch
```javascript
function dispatchExecutor(solutionId, executorType) {
// Executor fetches FULL SOLUTION via READ-ONLY detail command
// Executor handles all tasks within solution sequentially
// Then reports completion via done command
const prompt = `
## Execute Solution ${solutionId}
### Step 1: Get Solution (read-only)
\`\`\`bash
ccw issue detail ${solutionId}
\`\`\`
### Step 2: Execute All Tasks Sequentially
The detail command returns a FULL SOLUTION with all tasks.
Execute each task in order (T1 → T2 → T3 → ...):
For each task:
1. Follow task.implementation steps
2. Run task.test commands
3. Verify task.acceptance criteria
4. Commit using task.commit specification
### Step 3: Report Completion
When ALL tasks in solution are done:
\`\`\`bash
ccw issue done ${solutionId} --result '{"summary": "...", "files_modified": [...], "tasks_completed": N}'
\`\`\`
If any task failed:
\`\`\`bash
ccw issue done ${solutionId} --fail --reason "Task TX failed: ..."
\`\`\`
`;
if (executorType === 'codex') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}`,
{ timeout: 7200000, run_in_background: true } // 2hr for full solution
);
} else if (executorType === 'gemini') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}`,
{ timeout: 3600000, run_in_background: true }
);
} else {
return Task({
subagent_type: 'code-developer',
run_in_background: false,
description: `Execute solution ${solutionId}`,
prompt: prompt
});
}
```
@@ -186,7 +192,7 @@ const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
console.log(`
## Batch Complete
- Solutions Completed: ${refreshedDag.completed_count}/${refreshedDag.total}
- Next ready: ${refreshedDag.ready_count}
`);
@@ -198,68 +204,86 @@ if (refreshedDag.ready_count > 0) {
## Parallel Execution Model
```
┌────────────────────────────────────────────────────────────┐
│ Orchestrator                                               │
├────────────────────────────────────────────────────────────┤
│ 1. ccw issue queue dag                                     │
│    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }        │
│                                                            │
│ 2. Dispatch batch 1 (parallel):                            │
│    ┌──────────────────────┐  ┌──────────────────────┐      │
│    │ Executor 1           │  │ Executor 2           │      │
│    │ detail S-1           │  │ detail S-2           │      │
│    │ → gets full solution │  │ → gets full solution │      │
│    │ [T1→T2→T3 sequential]│  │ [T1→T2 sequential]   │      │
│    │ done S-1             │  │ done S-2             │      │
│    └──────────────────────┘  └──────────────────────┘      │
│                                                            │
│ 3. ccw issue queue dag (refresh)                           │
│    → S-3 now ready (S-1 completed, file conflict resolved) │
└────────────────────────────────────────────────────────────┘
```
**Why this works for parallel:**
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in same batch have NO file conflicts
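As a defensive check, an orchestrator could verify the no-overlap invariant before dispatching a batch. `filesById` (solution ID to `files_touched`, read from the queue file) is an assumed lookup, not something the CLI returns in this shape.

```javascript
// Returns false if any two solutions in the batch touch the same file.
function batchIsSafe(batch, filesById) {
  const claimed = new Map(); // file -> solution ID that touched it first
  for (const id of batch) {
    for (const file of filesById[id] || []) {
      if (claimed.has(file) && claimed.get(file) !== id) return false;
      claimed.set(file, id);
    }
  }
  return true;
}
```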
## CLI Endpoint Contract
### `ccw issue queue dag`
Returns dependency graph with parallel batches (solution-level):
```json
{
"queue_id": "QUE-...",
"total": 3,
"ready_count": 2,
"completed_count": 0,
"nodes": [
{ "id": "S-1", "issue_id": "ISS-xxx", "status": "pending", "ready": true, "task_count": 3 },
{ "id": "S-2", "issue_id": "ISS-yyy", "status": "pending", "ready": true, "task_count": 2 },
{ "id": "S-3", "issue_id": "ISS-zzz", "status": "pending", "ready": false, "depends_on": ["S-1"] }
],
"parallel_batches": [["S-1", "S-2"], ["S-3"]]
}
```
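The `parallel_batches` field can be derived from `nodes` by repeatedly collecting the solutions whose dependencies are all satisfied. This sketch assumes the node shape shown in the JSON above; it is illustrative, not the CLI's actual implementation.

```javascript
// Layered scheduling: each batch is the set of solutions whose dependencies
// are already completed or scheduled in an earlier batch.
function parallelBatches(nodes) {
  const done = new Set(nodes.filter(n => n.status === 'completed').map(n => n.id));
  let remaining = nodes.filter(n => !done.has(n.id));
  const batches = [];
  while (remaining.length > 0) {
    const ready = remaining.filter(n => (n.depends_on || []).every(d => done.has(d)));
    if (ready.length === 0) throw new Error('cycle or missing dependency');
    batches.push(ready.map(n => n.id));
    for (const n of ready) done.add(n.id);
    remaining = remaining.filter(n => !done.has(n.id));
  }
  return batches;
}
```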
### `ccw issue detail <item_id>`
Returns FULL SOLUTION with all tasks (READ-ONLY):
```json
{
"item_id": "S-1",
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx",
"status": "pending",
"solution": {
"id": "SOL-xxx",
"approach": "...",
"tasks": [
{ "id": "T1", "title": "...", "implementation": [...], "test": {...} },
{ "id": "T2", "title": "...", "implementation": [...], "test": {...} },
{ "id": "T3", "title": "...", "implementation": [...], "test": {...} }
],
"exploration_context": { "relevant_files": [...] }
},
"execution_hints": { "executor": "codex", "estimated_minutes": 180 }
}
```
### `ccw issue done <item_id>`
Marks solution completed/failed, updates queue state, checks for queue completion.
## Error Handling
| Error | Resolution |
|-------|------------|
| No queue | Run /issue:queue first |
| No ready solutions | Dependencies blocked, check DAG |
| Executor timeout | Solution not marked done, can retry |
| Solution failure | Use `ccw issue retry` to reset |
| Partial task failure | Executor reports which task failed via `done --fail` |
## Related Commands


@@ -1,6 +1,6 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent (solution-level)
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---
@@ -9,39 +9,43 @@ allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
## Overview
Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves **inter-solution** conflicts, and creates an ordered execution queue at **solution level**.
**Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks.
## Output Requirements
**Generate Files:**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with solutions, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry
**Return Summary:**
```json
{
"queue_id": "QUE-20251227-143000",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```
**Completion Criteria:**
- [ ] Queue JSON generated with valid DAG (no cycles between solutions)
- [ ] All inter-solution file conflicts resolved with rationale
- [ ] Semantic priority calculated for each solution
- [ ] Execution groups assigned (parallel P* / sequential S*)
- [ ] Issue statuses updated to `queued` via `ccw issue update`
## Core Capabilities
- **Agent-driven**: issue-queue-agent handles all ordering logic
- **Solution-level granularity**: Queue items are solutions, not tasks
- Inter-solution dependency DAG (based on file conflicts)
- File conflict detection between solutions
- Semantic priority calculation per solution (0.0-1.0)
- Parallel/Sequential group assignment for solutions
## Storage Structure (Queue History)
@@ -66,20 +70,75 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
{
"id": "QUE-20251227-143000",
"status": "active",
"issue_ids": ["ISS-xxx", "ISS-yyy"],
"total_solutions": 3,
"completed_solutions": 1,
"created_at": "2025-12-27T14:30:00Z"
}
]
}
```
### Queue File Schema (Solution-Level)
```json
{
"id": "QUE-20251227-143000",
"status": "active",
"solutions": [
{
"item_id": "S-1",
"issue_id": "ISS-20251227-003",
"solution_id": "SOL-20251227-003",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.8,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts", "src/utils.ts"],
"task_count": 3
},
{
"item_id": "S-2",
"issue_id": "ISS-20251227-001",
"solution_id": "SOL-20251227-001",
"status": "pending",
"execution_order": 2,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.7,
"assigned_executor": "codex",
"files_touched": ["src/api.ts"],
"task_count": 2
},
{
"item_id": "S-3",
"issue_id": "ISS-20251227-002",
"solution_id": "SOL-20251227-002",
"status": "pending",
"execution_order": 3,
"execution_group": "S2",
"depends_on": ["S-1"],
"semantic_priority": 0.5,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts"],
"task_count": 4
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"solutions": ["S-1", "S-3"],
"resolution": "sequential",
"resolution_order": ["S-1", "S-3"],
"rationale": "S-1 creates auth module, S-3 extends it"
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"] },
{ "id": "S2", "type": "sequential", "solutions": ["S-3"] }
]
}
```
@@ -116,21 +175,22 @@ Phase 1: Solution Loading
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
├─ Collect files_touched from all tasks in solution
└─ Build solution objects (NOT individual tasks)
Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all solutions
├─ Agent performs:
│ ├─ Detect file overlaps between solutions
│ ├─ Build dependency DAG from file conflicts
│ ├─ Detect circular dependencies
│ ├─ Resolve conflicts using priority rules
│ ├─ Calculate semantic priority per solution
│ └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered solutions (S-1, S-2, ...)
Phase 5: Queue Output
├─ Write queue.json with solutions array
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```
@@ -158,8 +218,8 @@ if (plannedIssues.length === 0) {
return;
}
// Load bound solutions (not individual tasks)
const allSolutions = [];
for (const issue of plannedIssues) {
const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
@@ -175,74 +235,77 @@ for (const issue of plannedIssues) {
continue;
}
// Collect all files touched by this solution
const filesTouched = new Set();
for (const task of boundSol.tasks || []) {
for (const mp of task.modification_points || []) {
filesTouched.add(mp.file);
}
}
allSolutions.push({
issue_id: issue.id,
solution_id: issue.bound_solution_id,
task_count: boundSol.tasks?.length || 0,
files_touched: Array.from(filesTouched),
priority: issue.priority || 'medium'
});
}
console.log(`Loaded ${allSolutions.length} solutions from ${plannedIssues.length} issues`);
```
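The `files_touched` collection in the loop above can be factored into a small helper. The name `collectFilesTouched` is illustrative; the `tasks`/`modification_points` shape is the one used in this command.

```javascript
// Union of modification_points across all tasks in one bound solution,
// deduplicated via a Set (insertion order preserved).
function collectFilesTouched(solution) {
  const files = new Set();
  for (const task of solution.tasks || []) {
    for (const mp of task.modification_points || []) {
      files.add(mp.file);
    }
  }
  return Array.from(files);
}
```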
### Phase 2-4: Agent-Driven Queue Formation
```javascript
// Build minimal prompt - agent orders SOLUTIONS, not tasks
const agentPrompt = `
## Order Solutions
**Solutions**: ${allSolutions.length} from ${plannedIssues.length} issues
**Project Root**: ${process.cwd()}
### Input (Solution-Level)
\`\`\`json
${JSON.stringify(allSolutions, null, 2)}
\`\`\`
### Steps
1. Parse solutions: Extract solution IDs, files_touched, task_count, priority
2. Detect conflicts: Find file overlaps between solutions (files_touched intersection)
3. Build DAG: Create dependency edges where solutions share files
4. Detect cycles: Verify no circular dependencies (abort if found)
5. Resolve conflicts: Apply ordering rules based on action types
6. Calculate priority: Compute semantic priority (0.0-1.0) per solution
7. Assign groups: Parallel (P*) for no-conflict, Sequential (S*) for conflicts
8. Generate queue: Write queue JSON with ordered solutions
9. Update index: Update queues/index.json with new queue entry
### Rules
- **Solution Granularity**: Queue items are solutions, NOT individual tasks
- **DAG Validity**: Output must be valid DAG with no circular dependencies
- **Conflict Resolution**: All file conflicts must be resolved with rationale
- **Conflict Detection**: Two solutions conflict if files_touched intersect
- **Ordering Priority**:
1. Higher issue priority first (critical > high > medium > low)
2. Fewer dependencies first (foundation solutions)
3. More tasks = higher priority (larger impact)
- **Parallel Safety**: Solutions in same parallel group must have NO file overlaps
- **Queue Item ID Format**: \`S-N\` (S-1, S-2, S-3, ...)
- **Queue ID Format**: \`QUE-YYYYMMDD-HHMMSS\` (UTC timestamp)
### Generate Files
1. \`.workflow/issues/queues/\${queueId}.json\` - Full queue with solutions array
2. \`.workflow/issues/queues/index.json\` - Update with new entry
### Return Summary
\`\`\`json
{
"queue_id": "QUE-YYYYMMDD-HHMMSS",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123"]
"issues_queued": ["ISS-xxx"]
}
\`\`\`
`;
@@ -250,7 +313,7 @@ ${JSON.stringify(allTasks.map(t => ({
const result = Task(
subagent_type="issue-queue-agent",
run_in_background=false,
description=`Order ${allTasks.length} tasks`,
description=`Order ${allSolutions.length} solutions`,
prompt=agentPrompt
);
@@ -264,6 +327,7 @@ const summary = JSON.parse(result);
console.log(`
## Queue Formed: ${summary.queue_id}
**Solutions**: ${summary.total_solutions}
**Tasks**: ${summary.total_tasks}
**Issues**: ${summary.issues_queued.join(', ')}
**Groups**: ${summary.execution_groups.map(g => `${g.id}(${g.count})`).join(', ')}


@@ -1,128 +1,63 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Execution Queue Schema",
"description": "Global execution queue for all issue tasks",
"description": "Execution queue supporting both task-level (T-N) and solution-level (S-N) granularity",
"type": "object",
"properties": {
"queue": {
"id": {
"type": "string",
"pattern": "^QUE-[0-9]{8}-[0-9]{6}$",
"description": "Queue ID in format QUE-YYYYMMDD-HHMMSS"
},
"status": {
"type": "string",
"enum": ["active", "paused", "completed", "archived"],
"default": "active"
},
"issue_ids": {
"type": "array",
"description": "Ordered list of tasks to execute",
"items": { "type": "string" },
"description": "Issues included in this queue"
},
"solutions": {
"type": "array",
"description": "Solution-level queue items (preferred for new queues)",
"items": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "task_id", "status"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^T-[0-9]+$",
"description": "Unique queue item identifier"
},
"issue_id": {
"type": "string",
"description": "Source issue ID"
},
"solution_id": {
"type": "string",
"description": "Source solution ID"
},
"task_id": {
"type": "string",
"description": "Task ID within solution"
},
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"execution_order": {
"type": "integer",
"description": "Order in execution sequence"
},
"execution_group": {
"type": "string",
"description": "Parallel execution group ID (e.g., P1, S1)"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"description": "Queue IDs this task depends on"
},
"semantic_priority": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Semantic importance score (0.0-1.0)"
},
"assigned_executor": {
"type": "string",
"enum": ["codex", "gemini", "agent"]
},
"queued_at": {
"type": "string",
"format": "date-time"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"completed_at": {
"type": "string",
"format": "date-time"
},
"result": {
"type": "object",
"description": "Execution result",
"properties": {
"files_modified": { "type": "array", "items": { "type": "string" } },
"files_created": { "type": "array", "items": { "type": "string" } },
"summary": { "type": "string" },
"commit_hash": { "type": "string" }
}
},
"failure_reason": {
"type": "string"
}
}
"$ref": "#/definitions/solutionItem"
}
},
"tasks": {
"type": "array",
"description": "Task-level queue items (legacy format)",
"items": {
"$ref": "#/definitions/taskItem"
}
},
"conflicts": {
"type": "array",
"description": "Detected conflicts between tasks",
"description": "Detected conflicts between items",
"items": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Queue IDs involved in conflict"
},
"file": {
"type": "string",
"description": "Conflicting file path"
},
"resolution": {
"type": "string",
"enum": ["sequential", "merge", "manual"]
},
"resolution_order": {
"type": "array",
"items": { "type": "string" }
},
"resolved": {
"type": "boolean",
"default": false
}
}
"$ref": "#/definitions/conflict"
}
},
"execution_groups": {
"type": "array",
"description": "Parallel/Sequential execution groups",
"items": {
"$ref": "#/definitions/executionGroup"
}
},
"_metadata": {
"type": "object",
"properties": {
"version": { "type": "string", "default": "1.0" },
"total_items": { "type": "integer" },
"version": { "type": "string", "default": "2.0" },
"queue_type": {
"type": "string",
"enum": ["solution", "task"],
"description": "Queue granularity level"
},
"total_solutions": { "type": "integer" },
"total_tasks": { "type": "integer" },
"pending_count": { "type": "integer" },
"ready_count": { "type": "integer" },
"executing_count": { "type": "integer" },
@@ -132,5 +67,187 @@
"last_updated": { "type": "string", "format": "date-time" }
}
}
},
"definitions": {
"solutionItem": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "status", "task_count", "files_touched"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^S-[0-9]+$",
"description": "Solution-level queue item ID (S-1, S-2, ...)"
},
"issue_id": {
"type": "string",
"description": "Source issue ID"
},
"solution_id": {
"type": "string",
"description": "Bound solution ID"
},
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"task_count": {
"type": "integer",
"minimum": 1,
"description": "Number of tasks in this solution"
},
"files_touched": {
"type": "array",
"items": { "type": "string" },
"description": "All files modified by this solution"
},
"execution_order": {
"type": "integer",
"description": "Order in execution sequence"
},
"execution_group": {
"type": "string",
"description": "Parallel (P*) or Sequential (S*) group ID"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs this item depends on"
},
"semantic_priority": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Semantic importance score (0.0-1.0)"
},
"assigned_executor": {
"type": "string",
"enum": ["codex", "gemini", "agent"]
},
"queued_at": { "type": "string", "format": "date-time" },
"started_at": { "type": "string", "format": "date-time" },
"completed_at": { "type": "string", "format": "date-time" },
"result": {
"type": "object",
"properties": {
"summary": { "type": "string" },
"files_modified": { "type": "array", "items": { "type": "string" } },
"tasks_completed": { "type": "integer" },
"commit_hashes": { "type": "array", "items": { "type": "string" } }
}
},
"failure_reason": { "type": "string" }
}
},
"taskItem": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "task_id", "status"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^T-[0-9]+$",
"description": "Task-level queue item ID (T-1, T-2, ...)"
},
"issue_id": { "type": "string" },
"solution_id": { "type": "string" },
"task_id": { "type": "string" },
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"execution_order": { "type": "integer" },
"execution_group": { "type": "string" },
"depends_on": { "type": "array", "items": { "type": "string" } },
"semantic_priority": { "type": "number", "minimum": 0, "maximum": 1 },
"assigned_executor": { "type": "string", "enum": ["codex", "gemini", "agent"] },
"queued_at": { "type": "string", "format": "date-time" },
"started_at": { "type": "string", "format": "date-time" },
"completed_at": { "type": "string", "format": "date-time" },
"result": {
"type": "object",
"properties": {
"files_modified": { "type": "array", "items": { "type": "string" } },
"files_created": { "type": "array", "items": { "type": "string" } },
"summary": { "type": "string" },
"commit_hash": { "type": "string" }
}
},
"failure_reason": { "type": "string" }
}
},
"conflict": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
},
"file": {
"type": "string",
"description": "Conflicting file path"
},
"solutions": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs involved (for solution-level queues)"
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Task IDs involved (for task-level queues)"
},
"resolution": {
"type": "string",
"enum": ["sequential", "merge", "manual"]
},
"resolution_order": {
"type": "array",
"items": { "type": "string" },
"description": "Execution order to resolve conflict"
},
"rationale": {
"type": "string",
"description": "Explanation of resolution decision"
},
"resolved": {
"type": "boolean",
"default": false
}
}
},
"executionGroup": {
"type": "object",
"required": ["id", "type"],
"properties": {
"id": {
"type": "string",
"pattern": "^[PS][0-9]+$",
"description": "Group ID (P1, P2 for parallel, S1, S2 for sequential)"
},
"type": {
"type": "string",
"enum": ["parallel", "sequential"]
},
"solutions": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs in this group"
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Task IDs in this group (legacy)"
},
"solution_count": {
"type": "integer",
"description": "Number of solutions in group"
},
"task_count": {
"type": "integer",
"description": "Number of tasks in group (legacy)"
}
}
}
}
}


@@ -102,6 +102,7 @@ interface SolutionTask {
interface Solution {
id: string;
description?: string;
approach?: string; // Solution approach description
tasks: SolutionTask[];
exploration_context?: Record<string, any>;
analysis?: { risk?: string; impact?: string; complexity?: string };
@@ -112,10 +113,10 @@ interface Solution {
}
interface QueueItem {
item_id: string; // Task item ID in queue: T-1, T-2, ... (formerly queue_id)
item_id: string; // Item ID in queue: T-1, T-2, ... (task-level) or S-1, S-2, ... (solution-level)
issue_id: string;
solution_id: string;
task_id: string;
task_id?: string; // Only for task-level queues
title?: string;
status: 'pending' | 'ready' | 'executing' | 'completed' | 'failed' | 'blocked';
execution_order: number;
@@ -123,6 +124,8 @@ interface QueueItem {
depends_on: string[];
semantic_priority: number;
assigned_executor: 'codex' | 'gemini' | 'agent';
task_count?: number; // For solution-level queues
files_touched?: string[]; // For solution-level queues
started_at?: string;
completed_at?: string;
result?: Record<string, any>;
@@ -142,8 +145,10 @@ interface QueueConflict {
interface ExecutionGroup {
id: string; // Group ID: P1, S1, etc.
type: 'parallel' | 'sequential';
task_count: number;
tasks: string[]; // Item IDs in this group
task_count?: number; // For task-level queues
solution_count?: number; // For solution-level queues
tasks?: string[]; // Task IDs in this group (task-level)
solutions?: string[]; // Solution IDs in this group (solution-level)
}
interface Queue {
@@ -151,7 +156,8 @@ interface Queue {
name?: string; // Optional queue name
status: 'active' | 'completed' | 'archived' | 'failed';
issue_ids: string[]; // Issues in this queue
tasks: QueueItem[]; // Task items (formerly 'queue')
tasks: QueueItem[]; // Task items (task-level queue)
solutions?: QueueItem[]; // Solution items (solution-level queue)
conflicts: QueueConflict[];
execution_groups?: ExecutionGroup[];
_metadata: {
@@ -172,8 +178,10 @@ interface QueueIndex {
id: string;
status: string;
issue_ids: string[];
total_tasks: number;
completed_tasks: number;
total_tasks?: number; // For task-level queues
total_solutions?: number; // For solution-level queues
completed_tasks?: number; // For task-level queues
completed_solutions?: number; // For solution-level queues
created_at: string;
completed_at?: string;
}[];
@@ -845,92 +853,129 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
return;
}
// DAG - Return dependency graph for parallel execution planning
// DAG - Return dependency graph for parallel execution planning (solution-level)
if (subAction === 'dag') {
const queue = readActiveQueue();
if (!queue.id || queue.tasks.length === 0) {
// Support both old (tasks) and new (solutions) queue format
const items = queue.solutions || queue.tasks || [];
if (!queue.id || items.length === 0) {
console.log(JSON.stringify({ error: 'No active queue', nodes: [], edges: [], groups: [] }));
return;
}
// Build DAG nodes
const completedIds = new Set(queue.tasks.filter(t => t.status === 'completed').map(t => t.item_id));
const failedIds = new Set(queue.tasks.filter(t => t.status === 'failed').map(t => t.item_id));
// Build DAG nodes (solution-level)
const completedIds = new Set(items.filter(t => t.status === 'completed').map(t => t.item_id));
const failedIds = new Set(items.filter(t => t.status === 'failed').map(t => t.item_id));
const nodes = queue.tasks.map(task => ({
id: task.item_id,
issue_id: task.issue_id,
task_id: task.task_id,
status: task.status,
executor: task.assigned_executor,
priority: task.semantic_priority,
depends_on: task.depends_on,
const nodes = items.map(item => ({
id: item.item_id,
issue_id: item.issue_id,
solution_id: item.solution_id,
status: item.status,
executor: item.assigned_executor,
priority: item.semantic_priority,
depends_on: item.depends_on || [],
task_count: item.task_count || 1,
files_touched: item.files_touched || [],
// Calculate if ready (dependencies satisfied)
ready: task.status === 'pending' && task.depends_on.every(d => completedIds.has(d)),
blocked_by: task.depends_on.filter(d => !completedIds.has(d) && !failedIds.has(d))
ready: item.status === 'pending' && (item.depends_on || []).every(d => completedIds.has(d)),
blocked_by: (item.depends_on || []).filter(d => !completedIds.has(d) && !failedIds.has(d))
}));
// Build edges for visualization
const edges = queue.tasks.flatMap(task =>
task.depends_on.map(dep => ({ from: dep, to: task.item_id }))
const edges = items.flatMap(item =>
(item.depends_on || []).map(dep => ({ from: dep, to: item.item_id }))
);
// Group ready tasks by execution_group for parallel execution
const readyTasks = nodes.filter(n => n.ready || n.status === 'executing');
// Group ready items by execution_group
const readyItems = nodes.filter(n => n.ready || n.status === 'executing');
const groups: Record<string, string[]> = {};
for (const task of queue.tasks) {
if (readyTasks.some(r => r.id === task.item_id)) {
const group = task.execution_group || 'P1';
for (const item of items) {
if (readyItems.some(r => r.id === item.item_id)) {
const group = item.execution_group || 'P1';
if (!groups[group]) groups[group] = [];
groups[group].push(task.item_id);
groups[group].push(item.item_id);
}
}
// Calculate parallel batches (tasks with no dependencies on each other)
// Calculate parallel batches - prefer execution_groups from queue if available
const parallelBatches: string[][] = [];
const remainingReady = new Set(readyTasks.map(t => t.id));
const readyItemIds = new Set(readyItems.map(t => t.id));
while (remainingReady.size > 0) {
const batch: string[] = [];
const batchFiles = new Set<string>();
for (const taskId of remainingReady) {
const task = queue.tasks.find(t => t.item_id === taskId);
if (!task) continue;
// Check for file conflicts with already-batched tasks
const solution = findSolution(task.issue_id, task.solution_id);
const taskDef = solution?.tasks.find(t => t.id === task.task_id);
const taskFiles = taskDef?.modification_points?.map(mp => mp.file) || [];
const hasConflict = taskFiles.some(f => batchFiles.has(f));
if (!hasConflict) {
batch.push(taskId);
taskFiles.forEach(f => batchFiles.add(f));
// Check if queue has pre-assigned execution_groups
if (queue.execution_groups && queue.execution_groups.length > 0) {
// Use agent-assigned execution groups
for (const group of queue.execution_groups) {
const groupItems = (group.solutions || group.tasks || [])
.filter((id: string) => readyItemIds.has(id));
if (groupItems.length > 0) {
if (group.type === 'parallel') {
// All items in parallel group can run together
parallelBatches.push(groupItems);
} else {
// Sequential group: each item is its own batch
for (const itemId of groupItems) {
parallelBatches.push([itemId]);
}
}
}
}
} else {
// Fallback: calculate parallel batches from file conflicts
const remainingReady = new Set(readyItemIds);
if (batch.length === 0) {
// Fallback: take one at a time if all conflict
const first = Array.from(remainingReady)[0];
batch.push(first);
while (remainingReady.size > 0) {
const batch: string[] = [];
const batchFiles = new Set<string>();
for (const itemId of Array.from(remainingReady)) {
const item = items.find(t => t.item_id === itemId);
if (!item) continue;
// Get all files touched by this solution
let solutionFiles: string[] = item.files_touched || [];
// If not in queue item, fetch from solution definition
if (solutionFiles.length === 0) {
const solution = findSolution(item.issue_id, item.solution_id);
if (solution?.tasks) {
for (const task of solution.tasks) {
for (const mp of task.modification_points || []) {
solutionFiles.push(mp.file);
}
}
}
}
const hasConflict = solutionFiles.some(f => batchFiles.has(f));
if (!hasConflict) {
batch.push(itemId);
solutionFiles.forEach(f => batchFiles.add(f));
}
}
if (batch.length === 0) {
// Fallback: take one at a time if all conflict
const first = Array.from(remainingReady)[0];
batch.push(first);
}
parallelBatches.push(batch);
batch.forEach(id => remainingReady.delete(id));
}
parallelBatches.push(batch);
batch.forEach(id => remainingReady.delete(id));
}
console.log(JSON.stringify({
queue_id: queue.id,
total: nodes.length,
ready_count: readyTasks.length,
ready_count: readyItems.length,
completed_count: completedIds.size,
nodes,
edges,
groups: Object.entries(groups).map(([id, tasks]) => ({ id, tasks })),
groups: Object.entries(groups).map(([id, solutions]) => ({ id, solutions })),
parallel_batches: parallelBatches,
_summary: {
can_parallel: parallelBatches[0]?.length || 0,
@@ -1084,7 +1129,7 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
console.log(
item.item_id.padEnd(10) +
item.issue_id.substring(0, 13).padEnd(15) +
item.task_id.padEnd(8) +
(item.task_id || '-').padEnd(8) +
statusColor(item.status.padEnd(12)) +
item.assigned_executor
);
@@ -1097,91 +1142,107 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
*/
async function nextAction(itemId: string | undefined, options: IssueOptions): Promise<void> {
const queue = readActiveQueue();
let nextItem: typeof queue.tasks[0] | undefined;
// Support both old (tasks) and new (solutions) queue format
const items = queue.solutions || queue.tasks || [];
let nextItem: typeof items[0] | undefined;
let isResume = false;
// If specific item_id provided, fetch that task directly
// If specific item_id provided, fetch that item directly
if (itemId) {
nextItem = queue.tasks.find(t => t.item_id === itemId);
nextItem = items.find(t => t.item_id === itemId);
if (!nextItem) {
console.log(JSON.stringify({ status: 'error', message: `Task ${itemId} not found` }));
console.log(JSON.stringify({ status: 'error', message: `Item ${itemId} not found` }));
return;
}
if (nextItem.status === 'completed') {
console.log(JSON.stringify({ status: 'completed', message: `Task ${itemId} already completed` }));
console.log(JSON.stringify({ status: 'completed', message: `Item ${itemId} already completed` }));
return;
}
if (nextItem.status === 'failed') {
console.log(JSON.stringify({ status: 'failed', message: `Task ${itemId} failed, use retry to reset` }));
console.log(JSON.stringify({ status: 'failed', message: `Item ${itemId} failed, use retry to reset` }));
return;
}
isResume = nextItem.status === 'executing';
} else {
// Auto-select: Priority 1 - executing, Priority 2 - ready pending
const executingTasks = queue.tasks.filter(item => item.status === 'executing');
const pendingTasks = queue.tasks.filter(item => {
const executingItems = items.filter(item => item.status === 'executing');
const pendingItems = items.filter(item => {
if (item.status !== 'pending') return false;
return item.depends_on.every(depId => {
const dep = queue.tasks.find(q => q.item_id === depId);
return (item.depends_on || []).every(depId => {
const dep = items.find(q => q.item_id === depId);
return !dep || dep.status === 'completed';
});
});
const readyTasks = [...executingTasks, ...pendingTasks];
const readyItems = [...executingItems, ...pendingItems];
if (readyTasks.length === 0) {
if (readyItems.length === 0) {
console.log(JSON.stringify({
status: 'empty',
message: 'No ready tasks',
message: 'No ready items',
queue_status: queue._metadata
}, null, 2));
return;
}
readyTasks.sort((a, b) => a.execution_order - b.execution_order);
nextItem = readyTasks[0];
readyItems.sort((a, b) => a.execution_order - b.execution_order);
nextItem = readyItems[0];
isResume = nextItem.status === 'executing';
}
// Load task definition
// Load FULL solution with all tasks
const solution = findSolution(nextItem.issue_id, nextItem.solution_id);
const taskDef = solution?.tasks.find(t => t.id === nextItem.task_id);
if (!taskDef) {
console.log(JSON.stringify({ status: 'error', message: 'Task definition not found' }));
if (!solution) {
console.log(JSON.stringify({ status: 'error', message: 'Solution not found' }));
process.exit(1);
}
// Only update status if not already executing (new task)
// Only update status if not already executing
if (!isResume) {
const idx = queue.tasks.findIndex(q => q.item_id === nextItem.item_id);
queue.tasks[idx].status = 'executing';
queue.tasks[idx].started_at = new Date().toISOString();
const idx = items.findIndex(q => q.item_id === nextItem.item_id);
items[idx].status = 'executing';
items[idx].started_at = new Date().toISOString();
// Write back to correct array
if (queue.solutions) {
queue.solutions = items;
} else {
queue.tasks = items;
}
writeQueue(queue);
updateIssue(nextItem.issue_id, { status: 'executing' });
}
// Calculate queue stats for context
// Calculate queue stats
const stats = {
total: queue.tasks.length,
completed: queue.tasks.filter(q => q.status === 'completed').length,
failed: queue.tasks.filter(q => q.status === 'failed').length,
executing: queue.tasks.filter(q => q.status === 'executing').length,
pending: queue.tasks.filter(q => q.status === 'pending').length
total: items.length,
completed: items.filter(q => q.status === 'completed').length,
failed: items.filter(q => q.status === 'failed').length,
executing: items.filter(q => q.status === 'executing').length,
pending: items.filter(q => q.status === 'pending').length
};
const remaining = stats.pending + stats.executing;
// Calculate total estimated time for all tasks
const totalMinutes = solution.tasks?.reduce((sum, t) => sum + (t.estimated_minutes || 30), 0) || 30;
console.log(JSON.stringify({
item_id: nextItem.item_id,
issue_id: nextItem.issue_id,
solution_id: nextItem.solution_id,
task: taskDef,
context: solution?.exploration_context || {},
// Return full solution object with all tasks
solution: {
id: solution.id,
approach: solution.approach,
tasks: solution.tasks || [],
exploration_context: solution.exploration_context || {}
},
resumed: isResume,
resume_note: isResume ? `Resuming interrupted task (started: ${nextItem.started_at})` : undefined,
resume_note: isResume ? `Resuming interrupted item (started: ${nextItem.started_at})` : undefined,
execution_hints: {
executor: nextItem.assigned_executor,
estimated_minutes: taskDef.estimated_minutes || 30
task_count: solution.tasks?.length || 0,
estimated_minutes: totalMinutes
},
queue_progress: {
completed: stats.completed,
@@ -1203,33 +1264,43 @@ async function detailAction(itemId: string | undefined, options: IssueOptions):
}
const queue = readActiveQueue();
const queueItem = queue.tasks.find(t => t.item_id === itemId);
// Support both old (tasks) and new (solutions) queue format
const items = queue.solutions || queue.tasks || [];
const queueItem = items.find(t => t.item_id === itemId);
if (!queueItem) {
console.log(JSON.stringify({ status: 'error', message: `Task ${itemId} not found` }));
console.log(JSON.stringify({ status: 'error', message: `Item ${itemId} not found` }));
return;
}
// Load task definition from solution
// Load FULL solution with all tasks
const solution = findSolution(queueItem.issue_id, queueItem.solution_id);
const taskDef = solution?.tasks.find(t => t.id === queueItem.task_id);
if (!taskDef) {
console.log(JSON.stringify({ status: 'error', message: 'Task definition not found in solution' }));
if (!solution) {
console.log(JSON.stringify({ status: 'error', message: 'Solution not found' }));
return;
}
// Return full task info (READ-ONLY - no status update)
// Calculate total estimated time for all tasks
const totalMinutes = solution.tasks?.reduce((sum, t) => sum + (t.estimated_minutes || 30), 0) || 30;
// Return FULL SOLUTION with all tasks (READ-ONLY - no status update)
console.log(JSON.stringify({
item_id: queueItem.item_id,
issue_id: queueItem.issue_id,
solution_id: queueItem.solution_id,
status: queueItem.status,
task: taskDef,
context: solution?.exploration_context || {},
// Return full solution object with all tasks
solution: {
id: solution.id,
approach: solution.approach,
tasks: solution.tasks || [],
exploration_context: solution.exploration_context || {}
},
execution_hints: {
executor: queueItem.assigned_executor,
estimated_minutes: taskDef.estimated_minutes || 30
task_count: solution.tasks?.length || 0,
estimated_minutes: totalMinutes
}
}, null, 2));
}
@@ -1239,13 +1310,15 @@ async function detailAction(itemId: string | undefined, options: IssueOptions):
*/
async function doneAction(queueId: string | undefined, options: IssueOptions): Promise<void> {
if (!queueId) {
console.error(chalk.red('Queue ID is required'));
console.error(chalk.gray('Usage: ccw issue done <queue-id> [--fail] [--reason "..."]'));
console.error(chalk.red('Item ID is required'));
console.error(chalk.gray('Usage: ccw issue done <item-id> [--fail] [--reason "..."]'));
process.exit(1);
}
const queue = readActiveQueue();
const idx = queue.tasks.findIndex(q => q.item_id === queueId);
// Support both old (tasks) and new (solutions) queue format
const items = queue.solutions || queue.tasks || [];
const idx = items.findIndex(q => q.item_id === queueId);
if (idx === -1) {
console.error(chalk.red(`Queue item "${queueId}" not found`));
@@ -1253,66 +1326,69 @@ async function doneAction(queueId: string | undefined, options: IssueOptions): P
}
const isFail = options.fail;
queue.tasks[idx].status = isFail ? 'failed' : 'completed';
queue.tasks[idx].completed_at = new Date().toISOString();
items[idx].status = isFail ? 'failed' : 'completed';
items[idx].completed_at = new Date().toISOString();
if (isFail) {
queue.tasks[idx].failure_reason = options.reason || 'Unknown failure';
items[idx].failure_reason = options.reason || 'Unknown failure';
} else if (options.result) {
try {
queue.tasks[idx].result = JSON.parse(options.result);
items[idx].result = JSON.parse(options.result);
} catch {
console.warn(chalk.yellow('Warning: Could not parse result JSON'));
}
}
// Check if all issue tasks are complete
const issueId = queue.tasks[idx].issue_id;
const issueTasks = queue.tasks.filter(q => q.issue_id === issueId);
const allIssueComplete = issueTasks.every(q => q.status === 'completed');
const anyIssueFailed = issueTasks.some(q => q.status === 'failed');
// Update issue status (solution = issue in new model)
const issueId = items[idx].issue_id;
if (allIssueComplete) {
updateIssue(issueId, { status: 'completed', completed_at: new Date().toISOString() });
console.log(chalk.green(`${queueId} completed`));
console.log(chalk.green(`✓ Issue ${issueId} completed (all tasks done)`));
} else if (anyIssueFailed) {
if (isFail) {
updateIssue(issueId, { status: 'failed' });
console.log(chalk.red(`${queueId} failed`));
} else {
console.log(isFail ? chalk.red(`${queueId} failed`) : chalk.green(`${queueId} completed`));
updateIssue(issueId, { status: 'completed', completed_at: new Date().toISOString() });
console.log(chalk.green(`${queueId} completed`));
console.log(chalk.green(`✓ Issue ${issueId} completed`));
}
// Check if entire queue is complete
const allQueueComplete = queue.tasks.every(q => q.status === 'completed');
const anyQueueFailed = queue.tasks.some(q => q.status === 'failed');
const allQueueComplete = items.every(q => q.status === 'completed');
const anyQueueFailed = items.some(q => q.status === 'failed');
if (allQueueComplete) {
queue.status = 'completed';
console.log(chalk.green(`\n✓ Queue ${queue.id} completed (all tasks done)`));
} else if (anyQueueFailed && queue.tasks.every(q => q.status === 'completed' || q.status === 'failed')) {
console.log(chalk.green(`\n✓ Queue ${queue.id} completed (all solutions done)`));
} else if (anyQueueFailed && items.every(q => q.status === 'completed' || q.status === 'failed')) {
queue.status = 'failed';
console.log(chalk.yellow(`\n⚠ Queue ${queue.id} has failed tasks`));
console.log(chalk.yellow(`\n⚠ Queue ${queue.id} has failed solutions`));
}
// Write back to queue (update the correct array)
if (queue.solutions) {
queue.solutions = items;
} else {
queue.tasks = items;
}
writeQueue(queue);
}
/**
* retry - Reset failed tasks to pending for re-execution
* retry - Reset failed items to pending for re-execution
*/
async function retryAction(issueId: string | undefined, options: IssueOptions): Promise<void> {
const queue = readActiveQueue();
// Support both old (tasks) and new (solutions) queue format
const items = queue.solutions || queue.tasks || [];
if (!queue.id || queue.tasks.length === 0) {
if (!queue.id || items.length === 0) {
console.log(chalk.yellow('No active queue'));
return;
}
let updated = 0;
for (const item of queue.tasks) {
// Retry failed tasks only
for (const item of items) {
// Retry failed items only
if (item.status === 'failed') {
if (!issueId || item.issue_id === issueId) {
item.status = 'pending';
@@ -1325,8 +1401,7 @@ async function retryAction(issueId: string | undefined, options: IssueOptions):
}
if (updated === 0) {
console.log(chalk.yellow('No failed tasks to retry'));
console.log(chalk.gray('Note: Interrupted (executing) tasks are auto-resumed by "ccw issue next"'));
console.log(chalk.yellow('No failed items to retry'));
return;
}
@@ -1335,13 +1410,19 @@ async function retryAction(issueId: string | undefined, options: IssueOptions):
queue.status = 'active';
}
// Write back to queue
if (queue.solutions) {
queue.solutions = items;
} else {
queue.tasks = items;
}
writeQueue(queue);
if (issueId) {
updateIssue(issueId, { status: 'queued' });
}
console.log(chalk.green(`✓ Reset ${updated} task(s) to pending`));
console.log(chalk.green(`✓ Reset ${updated} item(s) to pending`));
}
// ============ Main Entry ============


@@ -6,7 +6,7 @@
// ========== Issue State ==========
var issueData = {
issues: [],
queue: { tasks: [], conflicts: [], execution_groups: [], grouped_items: {} },
queue: { tasks: [], solutions: [], conflicts: [], execution_groups: [], grouped_items: {} },
selectedIssue: null,
selectedSolution: null,
selectedSolutionIssueId: null,
@@ -65,7 +65,7 @@ async function loadQueueData() {
issueData.queue = await response.json();
} catch (err) {
console.error('Failed to load queue:', err);
issueData.queue = { tasks: [], conflicts: [], execution_groups: [], grouped_items: {} };
issueData.queue = { tasks: [], solutions: [], conflicts: [], execution_groups: [], grouped_items: {} };
}
}
@@ -360,7 +360,9 @@ function filterIssuesByStatus(status) {
// ========== Queue Section ==========
function renderQueueSection() {
const queue = issueData.queue;
const queueItems = queue.tasks || [];
// Support both solution-level and task-level queues
const queueItems = queue.solutions || queue.tasks || [];
const isSolutionLevel = !!(queue.solutions && queue.solutions.length > 0);
const metadata = queue._metadata || {};
// Check if queue is empty
@@ -443,8 +445,8 @@ function renderQueueSection() {
<!-- Queue Stats -->
<div class="queue-stats-grid mb-4">
<div class="queue-stat-card">
<span class="queue-stat-value">${metadata.total_tasks || queueItems.length}</span>
<span class="queue-stat-label">${t('issues.totalTasks') || 'Total'}</span>
<span class="queue-stat-value">${isSolutionLevel ? (metadata.total_solutions || queueItems.length) : (metadata.total_tasks || queueItems.length)}</span>
<span class="queue-stat-label">${isSolutionLevel ? (t('issues.totalSolutions') || 'Solutions') : (t('issues.totalTasks') || 'Total')}</span>
</div>
<div class="queue-stat-card pending">
<span class="queue-stat-value">${metadata.pending_count || queueItems.filter(i => i.status === 'pending').length}</span>
@@ -510,6 +512,9 @@ function renderQueueSection() {
function renderQueueGroup(group, items) {
const isParallel = group.type === 'parallel';
// Support both solution-level (solution_count) and task-level (task_count)
const itemCount = group.solution_count || group.task_count || items.length;
const itemLabel = group.solution_count ? 'solutions' : 'tasks';
return `
<div class="queue-group" data-group-id="${group.id}">
@@ -518,7 +523,7 @@ function renderQueueGroup(group, items) {
<i data-lucide="${isParallel ? 'git-merge' : 'arrow-right'}" class="w-4 h-4"></i>
${group.id} (${isParallel ? t('issues.parallelGroup') || 'Parallel' : t('issues.sequentialGroup') || 'Sequential'})
</div>
<span class="text-sm text-muted-foreground">${group.task_count} tasks</span>
<span class="text-sm text-muted-foreground">${itemCount} ${itemLabel}</span>
</div>
<div class="queue-items ${isParallel ? 'parallel' : 'sequential'}">
${items.map((item, idx) => renderQueueItem(item, idx, items.length)).join('')}
@@ -537,6 +542,9 @@ function renderQueueItem(item, index, total) {
blocked: 'blocked'
};
// Check if this is a solution-level item (has task_count) or task-level (has task_id)
const isSolutionItem = item.task_count !== undefined;
return `
<div class="queue-item ${statusColors[item.status] || ''}"
draggable="true"
@@ -545,7 +553,20 @@ function renderQueueItem(item, index, total) {
onclick="openQueueItemDetail('${item.item_id}')">
<span class="queue-item-id font-mono text-xs">${item.item_id}</span>
<span class="queue-item-issue text-xs text-muted-foreground">${item.issue_id}</span>
<span class="queue-item-task text-sm">${item.task_id}</span>
${isSolutionItem ? `
<span class="queue-item-solution text-sm" title="${item.solution_id || ''}">
<i data-lucide="package" class="w-3 h-3 inline mr-1"></i>
${item.task_count} ${t('issues.tasks') || 'tasks'}
</span>
${item.files_touched && item.files_touched.length > 0 ? `
<span class="queue-item-files text-xs text-muted-foreground" title="${item.files_touched.join(', ')}">
<i data-lucide="file" class="w-3 h-3"></i>
${item.files_touched.length}
</span>
` : ''}
` : `
<span class="queue-item-task text-sm">${item.task_id || '-'}</span>
`}
<span class="queue-item-priority" style="opacity: ${item.semantic_priority || 0.5}">
<i data-lucide="arrow-up" class="w-3 h-3"></i>
</span>
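`renderQueueItem` discriminates the two item shapes by presence of `task_count`: solution-level items carry an aggregate task count plus `files_touched`, task-level items carry their own `task_id`. A sketch of that check in isolation (the helper name and label format are illustrative, not from the commit):

```javascript
// Summarize a queue item for display. Solution-level items are
// detected by `task_count !== undefined`; everything else falls
// back to the task-level `task_id`.
function describeQueueItem(item) {
  const isSolutionItem = item.task_count !== undefined;
  if (isSolutionItem) {
    const fileCount = (item.files_touched || []).length;
    return `${item.task_count} tasks (${fileCount} files)`;
  }
  return item.task_id || '-';
}
```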
@@ -569,9 +590,12 @@ function renderConflictsSection(conflicts) {
${conflicts.map(c => `
<div class="conflict-item">
<span class="conflict-file font-mono text-xs">${c.file}</span>
<span class="conflict-tasks text-xs text-muted-foreground">${c.tasks.join(' → ')}</span>
<span class="conflict-status ${c.resolved ? 'resolved' : 'pending'}">
${c.resolved ? 'Resolved' : 'Pending'}
<span class="conflict-items text-xs text-muted-foreground">${(c.solutions || c.tasks || []).join(' → ')}</span>
${c.rationale ? `<span class="conflict-rationale text-xs text-muted-foreground" title="${c.rationale}">
<i data-lucide="info" class="w-3 h-3"></i>
</span>` : ''}
<span class="conflict-status ${c.resolved || c.resolution ? 'resolved' : 'pending'}">
${c.resolved || c.resolution ? 'Resolved' : 'Pending'}
</span>
</div>
`).join('')}
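The conflict rendering accepts both shapes: legacy conflicts list `tasks` and a boolean `resolved`, solution-level conflicts list `solutions` and may carry a `resolution` record instead. A sketch of the combined check (the function name is hypothetical):

```javascript
// Render a one-line summary of a file conflict. A conflict counts
// as resolved if it has either the legacy `resolved` flag or a
// solution-level `resolution` record.
function conflictLabel(c) {
  const participants = (c.solutions || c.tasks || []).join(' → ');
  const resolved = !!(c.resolved || c.resolution);
  return `${c.file}: ${participants} [${resolved ? 'Resolved' : 'Pending'}]`;
}
```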
@@ -1156,7 +1180,9 @@ function escapeHtml(text) {
}
function openQueueItemDetail(itemId) {
const item = issueData.queue.tasks?.find(q => q.item_id === itemId);
// Support both solution-level and task-level queues
const items = issueData.queue.solutions || issueData.queue.tasks || [];
const item = items.find(q => q.item_id === itemId);
if (item) {
openIssueDetail(item.issue_id);
}
@@ -1600,7 +1626,7 @@ async function showQueueHistoryModal() {
</span>
<span class="text-xs text-muted-foreground">
<i data-lucide="check-circle" class="w-3 h-3 inline"></i>
${q.completed_tasks || 0}/${q.total_tasks || 0} tasks
${q.completed_solutions || q.completed_tasks || 0}/${q.total_solutions || q.total_tasks || 0} ${q.total_solutions ? 'solutions' : 'tasks'}
</span>
<span class="text-xs text-muted-foreground">
<i data-lucide="calendar" class="w-3 h-3 inline"></i>
@@ -1689,17 +1715,21 @@ async function viewQueueDetail(queueId) {
throw new Error(queue.error);
}
const tasks = queue.queue || [];
// Support both solution-level and task-level queues
const items = queue.solutions || queue.queue || queue.tasks || [];
const isSolutionLevel = !!(queue.solutions && queue.solutions.length > 0);
const metadata = queue._metadata || {};
// Group by execution_group
const grouped = {};
tasks.forEach(task => {
const group = task.execution_group || 'ungrouped';
items.forEach(item => {
const group = item.execution_group || 'ungrouped';
if (!grouped[group]) grouped[group] = [];
grouped[group].push(task);
grouped[group].push(item);
});
const itemLabel = isSolutionLevel ? 'solutions' : 'tasks';
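The grouping loop above can be sketched as a standalone function (the name is illustrative): items are bucketed by `execution_group`, with anything unassigned collected under `'ungrouped'`:

```javascript
// Bucket queue items by their execution_group field, defaulting
// items without a group to the 'ungrouped' bucket.
function groupByExecutionGroup(items) {
  const grouped = {};
  for (const item of items) {
    const group = item.execution_group || 'ungrouped';
    if (!grouped[group]) grouped[group] = [];
    grouped[group].push(item);
  }
  return grouped;
}
```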
const detailHtml = `
<div class="queue-detail-view">
<div class="queue-detail-header mb-4">
@@ -1715,40 +1745,41 @@ async function viewQueueDetail(queueId) {
<div class="queue-detail-stats mb-4">
<div class="stat-item">
<span class="stat-value">${tasks.length}</span>
<span class="stat-label">Total</span>
<span class="stat-value">${items.length}</span>
<span class="stat-label">${isSolutionLevel ? 'Solutions' : 'Total'}</span>
</div>
<div class="stat-item completed">
<span class="stat-value">${tasks.filter(t => t.status === 'completed').length}</span>
<span class="stat-value">${items.filter(t => t.status === 'completed').length}</span>
<span class="stat-label">Completed</span>
</div>
<div class="stat-item pending">
<span class="stat-value">${tasks.filter(t => t.status === 'pending').length}</span>
<span class="stat-value">${items.filter(t => t.status === 'pending').length}</span>
<span class="stat-label">Pending</span>
</div>
<div class="stat-item failed">
<span class="stat-value">${tasks.filter(t => t.status === 'failed').length}</span>
<span class="stat-value">${items.filter(t => t.status === 'failed').length}</span>
<span class="stat-label">Failed</span>
</div>
</div>
<div class="queue-detail-groups">
${Object.entries(grouped).map(([groupId, items]) => `
${Object.entries(grouped).map(([groupId, groupItems]) => `
<div class="queue-group-section">
<div class="queue-group-header">
<i data-lucide="folder" class="w-4 h-4"></i>
<span>${groupId}</span>
<span class="text-xs text-muted-foreground">(${items.length} tasks)</span>
<span class="text-xs text-muted-foreground">(${groupItems.length} ${itemLabel})</span>
</div>
<div class="queue-group-items">
${items.map(item => `
${groupItems.map(item => `
<div class="queue-detail-item ${item.status || ''}">
<div class="item-main">
<span class="item-id font-mono text-xs">${item.queue_id || item.task_id || 'N/A'}</span>
<span class="item-title text-sm">${item.title || item.action || 'Untitled'}</span>
<span class="item-id font-mono text-xs">${item.item_id || item.queue_id || item.task_id || 'N/A'}</span>
<span class="item-title text-sm">${isSolutionLevel ? (item.task_count + ' tasks') : (item.title || item.action || 'Untitled')}</span>
</div>
<div class="item-meta">
<span class="item-issue text-xs">${item.issue_id || ''}</span>
${isSolutionLevel && item.files_touched ? `<span class="item-files text-xs">${item.files_touched.length} files</span>` : ''}
<span class="item-status ${item.status || ''}">${item.status || 'unknown'}</span>
</div>
</div>