refactor(issue-manager): Enhance queue detail item styling and update modal content

catlog22
2025-12-27 23:43:40 +08:00
parent 5aa0c9610d
commit 0992d27523
10 changed files with 260 additions and 393 deletions


@@ -100,10 +100,49 @@ mcp__ace-tool__search_context({
- [ ] Discover dependencies
- [ ] Locate test patterns
**Fallback**: ACE → ripgrep → Glob
**Fallback Chain**: ACE → smart_search → Grep → rg → Glob
| Tool | When to Use |
|------|-------------|
| `mcp__ace-tool__search_context` | Semantic search (primary) |
| `mcp__ccw-tools__smart_search` | Symbol/pattern search |
| `Grep` | Exact regex matching |
| `rg` / `grep` | CLI fallback |
| `Glob` | File path discovery |
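The chain reads as an ordered escalation: try the primary tool, fall back only when it returns nothing or is unavailable. A minimal sketch, assuming each tool is wrapped in a function that returns an array of matches (the wrapper names are hypothetical, not real APIs):
```javascript
// Hedged sketch: walk the fallback chain until one search strategy yields results.
// Each entry wraps one tool from the table above; `run` is a hypothetical adapter.
async function exploreWithFallback(query, chain) {
  for (const { name, run } of chain) {
    try {
      const matches = await run(query);
      if (matches && matches.length > 0) return { tool: name, matches };
    } catch (err) {
      // Tool unavailable or errored: fall through to the next link in the chain.
    }
  }
  return { tool: null, matches: [] };
}

// Order mirrors: ACE → smart_search → Grep → rg → Glob
// await exploreWithFallback("token refresh", [
//   { name: "ace.search_context", run: aceSearchContext },  // hypothetical adapter
//   { name: "ccw.smart_search",   run: smartSearch },        // hypothetical adapter
//   { name: "grep",               run: grepExact },          // hypothetical adapter
// ]);
```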
#### Phase 3: Solution Planning
**Multi-Solution Generation**:
Generate multiple candidate solutions when:
- Issue complexity is HIGH
- Multiple valid implementation approaches exist
- Trade-offs exist between approaches (e.g., performance vs. simplicity)
| Condition | Solutions |
|-----------|-----------|
| Low complexity, single approach | 1 solution, auto-bind |
| Medium complexity, clear path | 1-2 solutions |
| High complexity, multiple approaches | 2-3 solutions, user selection |
**Solution Evaluation** (for each candidate):
```javascript
{
analysis: {
risk: "low|medium|high", // Implementation risk
impact: "low|medium|high", // Scope of changes
complexity: "low|medium|high" // Technical complexity
},
score: 0.0-1.0 // Overall quality score (higher = recommended)
}
```
**Selection Flow**:
1. Generate all candidate solutions
2. Evaluate and score each
3. Single solution → auto-bind
4. Multiple solutions → return `pending_selection` for user choice
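A minimal sketch of this flow, assuming each candidate already carries the `analysis` and `score` fields from the evaluation block above (the helper shape is illustrative):
```javascript
// Sketch: rank candidates, then auto-bind a single solution or return pending_selection.
function selectSolution(issueId, candidates) {
  // candidates: [{ id, description, analysis: { risk, impact, complexity }, score, tasks }]
  const ranked = [...candidates].sort((a, b) => b.score - a.score);

  if (ranked.length === 1) {
    // Single solution: register and auto-bind.
    return {
      bound: [{ issue_id: issueId, solution_id: ranked[0].id, task_count: ranked[0].tasks.length }],
    };
  }

  // Multiple solutions: register only; hand the ranked list back for user selection.
  return {
    pending_selection: [{
      issue_id: issueId,
      solutions: ranked.map(s => ({ id: s.id, description: s.description, task_count: s.tasks.length })),
    }],
  };
}
```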
**Task Decomposition** following schema:
```javascript
function decomposeTasks(issue, exploration) {
@@ -139,56 +178,33 @@ ccw issue bind <issue-id> --solution /tmp/sol.json
---
## 2. Output Specifications
## 2. Output Requirements
### 2.1 Return Format
```json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
"pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "description": "...", "task_count": N }] }],
"conflicts": [{ "file": "...", "issues": [...] }]
}
```
### 2.2 Binding Rules
| Scenario | Action |
|----------|--------|
| Single solution | Register AND auto-bind |
| Multiple solutions | Register only, return for user selection |
### 2.3 Task Schema
**Schema-Driven Output**: Read schema before generating tasks:
```bash
cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json
```
**Required Fields**:
- `id`: Task ID (pattern: `TASK-NNN`)
- `title`: Short summary (max 100 chars)
- `type`: feature | bug | refactor | test | chore | docs
- `description`: Detailed instructions
- `depends_on`: Array of prerequisite task IDs
- `delivery_criteria`: Checklist items defining completion
- `status`: pending | ready | in_progress | completed | failed | paused | skipped
- `current_phase`: analyze | implement | test | optimize | commit | done
- `executor`: agent | codex | gemini | auto
**Optional Fields**:
- `file_context`: Relevant files/globs
- `pause_criteria`: Conditions to halt execution
- `priority`: 1-5 (1=highest)
- `phase_results`: Results from each execution phase
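For illustration, one task satisfying these fields could look like the object below (all concrete values are hypothetical):
```javascript
// Hypothetical task using the required and optional fields listed above.
const exampleTask = {
  id: "TASK-001",                    // pattern: TASK-NNN
  title: "Add token refresh to auth client",
  type: "feature",                   // feature | bug | refactor | test | chore | docs
  description: "Refresh access tokens silently before expiry in the auth client.",
  depends_on: [],                    // prerequisite task IDs
  delivery_criteria: [
    "Expired tokens are refreshed without user interaction",
    "Unit test covers the refresh-failure path",
  ],
  status: "pending",                 // pending | ready | in_progress | completed | failed | paused | skipped
  current_phase: "analyze",          // analyze | implement | test | optimize | commit | done
  executor: "auto",                  // agent | codex | gemini | auto
  // Optional
  file_context: ["src/auth/**/*.ts"],
  priority: 2,                       // 1-5 (1 = highest)
};
```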
### 2.4 Solution File Structure
### 2.1 Generate Files (Primary)
**Solution file per issue**:
```
.workflow/issues/solutions/{issue-id}.jsonl
```
Each line is a complete solution JSON.
Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
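As an illustration only (the authoritative shape is `solution-schema.json`), a single line might carry a solution with its evaluation and tasks, shown here as a JavaScript object for readability:
```javascript
// Hypothetical JSONL line; verify field names against solution-schema.json.
const solutionLine = {
  id: "SOL-001",
  description: "Refresh tokens client-side via an HTTP interceptor",
  analysis: { risk: "low", impact: "medium", complexity: "low" },
  score: 0.85,
  tasks: [/* task objects following the task schema above */],
};
```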
### 2.2 Binding
| Scenario | Action |
|----------|--------|
| Single solution | `ccw issue bind <id> --solution <file>` (auto) |
| Multiple solutions | Register only, return for selection |
### 2.3 Return Summary
```json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
"pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "SOL-001", "description": "...", "task_count": N }] }],
"conflicts": [{ "file": "...", "issues": [...] }]
}
```
---
@@ -215,12 +231,14 @@ Each line is a complete solution JSON.
### 3.3 Guidelines
**ALWAYS**:
1. Read schema first: `cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json`
1. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
2. Use ACE semantic search as PRIMARY exploration tool
3. Fetch issue details via `ccw issue status <id> --json`
4. Quantify delivery_criteria with testable conditions
4. Quantify acceptance.criteria with testable conditions
5. Validate DAG before output
6. Single solution → auto-bind; Multiple → return for selection
6. Evaluate each solution with `analysis` and `score`
7. Single solution → auto-bind; Multiple → return `pending_selection`
8. For HIGH complexity: generate 2-3 candidate solutions
**NEVER**:
1. Execute implementation (return plan only)


@@ -36,21 +36,21 @@ color: orange
```javascript
{
tasks: [{
key: string, // e.g., "GH-123:TASK-001"
issue_id: string, // e.g., "GH-123"
solution_id: string, // e.g., "SOL-001"
task: {
id: string, // e.g., "TASK-001"
title: string,
type: string,
file_context: string[],
depends_on: string[]
}
task_id: string, // e.g., "TASK-001"
type: string, // feature | bug | refactor | test | chore | docs
file_context: string[],
depends_on: string[] // composite keys, e.g., ["GH-123:TASK-001"]
}],
project_root?: string,
rebuild?: boolean
}
```
**Note**: The agent generates a unique `item_id` (pattern: `T-{N}`) for each task in the queue output.
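A minimal input example matching this schema (IDs and paths are illustrative):
```javascript
// Hypothetical queue-builder input: two tasks touching the same file, one depending on the other.
const input = {
  tasks: [
    {
      key: "GH-123:TASK-001",
      issue_id: "GH-123",
      solution_id: "SOL-001",
      task_id: "TASK-001",
      type: "feature",
      file_context: ["src/auth.ts"],
      depends_on: [],
    },
    {
      key: "GH-124:TASK-002",
      issue_id: "GH-124",
      solution_id: "SOL-001",
      task_id: "TASK-002",
      type: "refactor",
      file_context: ["src/auth.ts"],
      depends_on: ["GH-123:TASK-001"],   // composite key
    },
  ],
  rebuild: false,
};
```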
### 1.2 Execution Flow
```
@@ -76,19 +76,17 @@ function buildDependencyGraph(tasks) {
const fileModifications = new Map()
for (const item of tasks) {
const key = `${item.issue_id}:${item.task.id}`
graph.set(key, { ...item, key, inDegree: 0, outEdges: [] })
graph.set(item.key, { ...item, inDegree: 0, outEdges: [] })
for (const file of item.task.file_context || []) {
for (const file of item.file_context || []) {
if (!fileModifications.has(file)) fileModifications.set(file, [])
fileModifications.get(file).push(key)
fileModifications.get(file).push(item.key)
}
}
// Add dependency edges
for (const [key, node] of graph) {
for (const dep of node.task.depends_on || []) {
const depKey = `${node.issue_id}:${dep}`
for (const depKey of node.depends_on || []) {
if (graph.has(depKey)) {
graph.get(depKey).outEdges.push(key)
node.inDegree++
@@ -147,48 +145,29 @@ function detectConflicts(fileModifications, graph) {
---
## 3. Output Specifications
## 3. Output Requirements
### 3.1 Queue Schema
### 3.1 Generate Files (Primary)
Read schema before output:
```bash
cat .claude/workflows/cli-templates/schemas/queue-schema.json
**Queue files**:
```
.workflow/issues/queues/{queue-id}.json # Full queue with tasks, conflicts, groups
.workflow/issues/queues/index.json # Update with new queue entry
```
### 3.2 Output Format
Queue ID format: `QUE-YYYYMMDD-HHMMSS` (UTC timestamp)
Schema: `cat .claude/workflows/cli-templates/schemas/queue-schema.json`
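A small sketch of producing a queue ID in the `QUE-YYYYMMDD-HHMMSS` UTC format stated above:
```javascript
// Format the current UTC time as QUE-YYYYMMDD-HHMMSS.
function makeQueueId(date = new Date()) {
  const pad = n => String(n).padStart(2, "0");
  const day = `${date.getUTCFullYear()}${pad(date.getUTCMonth() + 1)}${pad(date.getUTCDate())}`;
  const time = `${pad(date.getUTCHours())}${pad(date.getUTCMinutes())}${pad(date.getUTCSeconds())}`;
  return `QUE-${day}-${time}`;
}
// makeQueueId() → e.g. "QUE-20251227-143000"
```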
### 3.2 Return Summary
```json
{
"tasks": [{
"item_id": "T-1",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task_id": "TASK-001",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.7
}],
"conflicts": [{
"file": "src/auth.ts",
"tasks": ["GH-123:TASK-001", "GH-124:TASK-002"],
"resolution": "sequential",
"resolution_order": ["GH-123:TASK-001", "GH-124:TASK-002"],
"rationale": "TASK-001 creates file before TASK-002 updates",
"resolved": true
}],
"execution_groups": [
{ "id": "P1", "type": "parallel", "task_count": 3, "tasks": ["T-1", "T-2", "T-3"] },
{ "id": "S2", "type": "sequential", "task_count": 2, "tasks": ["T-4", "T-5"] }
],
"_metadata": {
"total_tasks": 5,
"total_conflicts": 1,
"resolved_conflicts": 1,
"timestamp": "2025-12-27T10:00:00Z"
}
"queue_id": "QUE-20251227-143000",
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123", "GH-124"]
}
```
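How the execution groups are derived is not prescribed here; one plausible approach is Kahn-style layering over the dependency graph from section 2, treating a layer as sequential when its tasks share files. A sketch under those assumptions:
```javascript
// Hedged sketch: peel zero-in-degree layers off the graph; each layer becomes a group.
// Assumes nodes shaped as { key, inDegree, outEdges, file_context } from buildDependencyGraph.
function buildExecutionGroups(graph) {
  const inDegree = new Map([...graph].map(([key, node]) => [key, node.inDegree]));
  const remaining = new Set(graph.keys());
  const groups = [];

  while (remaining.size > 0) {
    const layer = [...remaining].filter(key => inDegree.get(key) === 0);
    if (layer.length === 0) throw new Error("Cycle detected in task dependencies");

    // Tasks touching disjoint files may run in parallel; otherwise run the layer sequentially.
    const files = layer.flatMap(key => graph.get(key).file_context || []);
    const type = new Set(files).size === files.length ? "parallel" : "sequential";
    groups.push({ id: `${type === "parallel" ? "P" : "S"}${groups.length + 1}`, type, tasks: layer });

    for (const key of layer) {
      remaining.delete(key);
      for (const next of graph.get(key).outEdges) {
        inDegree.set(next, inDegree.get(next) - 1);
      }
    }
  }
  return groups;
}
```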
@@ -231,5 +210,6 @@ cat .claude/workflows/cli-templates/schemas/queue-schema.json
5. Merge conflicting tasks into the same parallel group
**OUTPUT**:
1. Write queue via `ccw issue queue` CLI
2. Return JSON with `tasks`, `conflicts`, `execution_groups`, `_metadata`
1. Write `.workflow/issues/queues/{queue-id}.json`
2. Update `.workflow/issues/queues/index.json`
3. Return summary with `queue_id`, `total_tasks`, `execution_groups`, `conflicts_resolved`, `issues_queued`
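A minimal sketch of these OUTPUT steps in Node.js, assuming `index.json` holds a flat array of queue entries (that index shape is an assumption, not taken from `queue-schema.json`):
```javascript
// Hedged sketch: write the queue file, update the index, return the summary from 3.2.
const fs = require("node:fs");
const path = require("node:path");

function writeQueue(queue, root = ".") {
  const dir = path.join(root, ".workflow", "issues", "queues");
  fs.mkdirSync(dir, { recursive: true });

  // 1. Write the full queue document.
  fs.writeFileSync(path.join(dir, `${queue.queue_id}.json`), JSON.stringify(queue, null, 2));

  // 2. Update index.json (assumed flat array of entries).
  const indexPath = path.join(dir, "index.json");
  const index = fs.existsSync(indexPath) ? JSON.parse(fs.readFileSync(indexPath, "utf8")) : [];
  index.push({ queue_id: queue.queue_id, total_tasks: queue.tasks.length, created_at: new Date().toISOString() });
  fs.writeFileSync(indexPath, JSON.stringify(index, null, 2));

  // 3. Return the summary.
  return {
    queue_id: queue.queue_id,
    total_tasks: queue.tasks.length,
    execution_groups: queue.execution_groups.map(g => ({ id: g.id, type: g.type, count: g.tasks.length })),
    conflicts_resolved: (queue.conflicts || []).filter(c => c.resolved).length,
    issues_queued: [...new Set(queue.tasks.map(t => t.issue_id))],
  };
}
```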