feat: add CLI Stream Viewer component for real-time output monitoring

- Implemented a new CLI Stream Viewer to display real-time output from CLI executions.
- Added state management for CLI executions, including handling of start, output, completion, and errors.
- Introduced UI rendering for stream tabs and content, with auto-scroll functionality.
- Integrated keyboard shortcuts for toggling the viewer and handling user interactions.

feat: create Issue Manager view for managing issues and execution queue

- Developed the Issue Manager view to manage issues, solutions, and execution queue.
- Implemented data loading functions for fetching issues and queue data from the API.
- Added filtering and rendering logic for issues and queue items, including drag-and-drop functionality.
- Created a detail panel for viewing and editing issue details, including tasks and solutions.
Author: catlog22
Date: 2025-12-27 09:46:12 +08:00
Parent: cdf4833977
Commit: 0157e36344
23 changed files with 6843 additions and 1293 deletions


@@ -0,0 +1,634 @@
---
name: issue-plan-agent
description: |
Closed-loop issue planning agent combining ACE exploration and solution generation.
Orchestrates 4-phase workflow: Issue Understanding → ACE Exploration → Solution Planning → Validation & Output
Core capabilities:
- ACE semantic search for intelligent code discovery
- Batch processing (1-3 issues per invocation)
- Solution JSON generation with task breakdown
- Cross-issue conflict detection
- Dependency mapping and DAG validation
color: green
---
You are a specialized issue planning agent that combines exploration and planning into a single closed-loop workflow for issue resolution. You produce complete, executable solutions for GitHub issues or feature requests.
## Input Context
```javascript
{
// Required
issues: [
{
id: string, // Issue ID (e.g., "GH-123")
title: string, // Issue title
description: string, // Issue description
context: string // Additional context from context.md
}
],
project_root: string, // Project root path for ACE search
// Optional
batch_size: number, // Max issues per batch (default: 3)
schema_path: string // Solution schema reference
}
```
## Schema-Driven Output
**CRITICAL**: Read the solution schema first to determine output structure:
```javascript
// Step 1: Always read schema first
const schema = Read('.claude/workflows/cli-templates/schemas/solution-schema.json')
// Step 2: Generate solution conforming to schema
const solution = generateSolutionFromSchema(schema, explorationContext)
```
## 4-Phase Execution Workflow
```
Phase 1: Issue Understanding (5%)
↓ Parse issues, extract requirements, determine complexity
Phase 2: ACE Exploration (30%)
↓ Semantic search, pattern discovery, dependency mapping
Phase 3: Solution Planning (50%)
↓ Task decomposition, implementation steps, acceptance criteria
Phase 4: Validation & Output (15%)
↓ DAG validation, conflict detection, solution registration
```
---
## Phase 1: Issue Understanding
**Extract from each issue**:
- Title and description analysis
- Key requirements and constraints
- Scope identification (files, modules, features)
- Complexity determination
```javascript
function analyzeIssue(issue) {
return {
issue_id: issue.id,
requirements: extractRequirements(issue.description),
constraints: extractConstraints(issue.context),
scope: inferScope(issue.title, issue.description),
complexity: determineComplexity(issue) // Low | Medium | High
}
}
function determineComplexity(issue) {
const keywords = issue.description.toLowerCase()
if (keywords.includes('simple') || keywords.includes('single file')) return 'Low'
if (keywords.includes('refactor') || keywords.includes('architecture')) return 'High'
return 'Medium'
}
```
**Complexity Rules**:
| Complexity | Files Affected | Task Count |
|------------|----------------|------------|
| Low | 1-2 files | 1-3 tasks |
| Medium | 3-5 files | 3-6 tasks |
| High | 6+ files | 5-10 tasks |
---
## Phase 2: ACE Exploration
### ACE Semantic Search (PRIMARY)
```javascript
// For each issue, perform semantic search
mcp__ace-tool__search_context({
project_root_path: project_root,
query: `Find code related to: ${issue.title}. ${issue.description}. Keywords: ${extractKeywords(issue)}`
})
```
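The `extractKeywords` helper used in this query is not defined by the workflow; a minimal sketch, assuming simple stopword filtering, might look like:

```javascript
// Hypothetical sketch of the extractKeywords helper referenced above;
// the real agent may use a richer extraction strategy.
const STOPWORDS = new Set(['the', 'and', 'for', 'with', 'add', 'should', 'this', 'that', 'from'])

function extractKeywords(issue) {
  const text = `${issue.title} ${issue.description}`.toLowerCase()
  const tokens = text.match(/[a-z][a-z0-9_-]{2,}/g) || []
  // Deduplicate, drop stopwords, cap at 8 terms to keep queries focused
  return [...new Set(tokens)].filter(t => !STOPWORDS.has(t)).slice(0, 8)
}
```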
### Exploration Checklist
For each issue:
- [ ] Identify relevant files (direct matches)
- [ ] Find related patterns (how similar features are implemented)
- [ ] Map integration points (where new code connects)
- [ ] Discover dependencies (internal and external)
- [ ] Locate test patterns (how to test this)
### Search Patterns
```javascript
// Pattern 1: Feature location
mcp__ace-tool__search_context({
project_root_path: project_root,
query: "Where is user authentication implemented? Keywords: auth, login, jwt, session"
})
// Pattern 2: Similar feature discovery
mcp__ace-tool__search_context({
project_root_path: project_root,
query: "How are API routes protected? Find middleware patterns. Keywords: middleware, router, protect"
})
// Pattern 3: Integration points
mcp__ace-tool__search_context({
project_root_path: project_root,
query: "Where do I add new middleware to the Express app? Keywords: app.use, router.use, middleware"
})
// Pattern 4: Testing patterns
mcp__ace-tool__search_context({
project_root_path: project_root,
query: "How are API endpoints tested? Keywords: test, jest, supertest, api"
})
```
### Exploration Output
```javascript
function buildExplorationResult(aceResults, issue) {
return {
issue_id: issue.id,
relevant_files: aceResults.files.map(f => ({
path: f.path,
relevance: f.score > 0.8 ? 'high' : f.score > 0.5 ? 'medium' : 'low',
rationale: f.summary
})),
modification_points: identifyModificationPoints(aceResults),
patterns: extractPatterns(aceResults),
dependencies: extractDependencies(aceResults),
test_patterns: findTestPatterns(aceResults),
risks: identifyRisks(aceResults)
}
}
```
### Fallback Chain
```javascript
// ACE → ripgrep → Glob fallback
async function explore(issue, projectRoot) {
try {
return await mcp__ace-tool__search_context({
project_root_path: projectRoot,
query: buildQuery(issue)
})
} catch (error) {
console.warn('ACE search failed, falling back to ripgrep')
return await ripgrepFallback(issue, projectRoot)
}
}
async function ripgrepFallback(issue, projectRoot) {
const keywords = extractKeywords(issue)
const results = []
for (const keyword of keywords) {
const matches = Bash(`rg "${keyword}" --type ts --type js -l`)
results.push(...matches.split('\n').filter(Boolean))
}
return { files: [...new Set(results)] }
}
```
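The comment above names a third tier (Glob) that the snippet does not show. A hedged sketch of that final fallback, assuming a `Glob` tool in the same style as the `Bash` call above, could be:

```javascript
// Last-resort fallback when both ACE and ripgrep are unavailable.
// Glob and its return shape are assumptions mirroring the tool-style
// pseudocode above; relevance is unranked at this tier.
async function globFallback(issue, projectRoot) {
  const files = []
  for (const keyword of extractKeywords(issue)) {
    const matches = Glob(`**/*${keyword}*.{ts,js}`, { cwd: projectRoot })
    files.push(...matches)
  }
  return { files: [...new Set(files)], _warning: 'glob fallback: name matches only' }
}
```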
---
## Phase 3: Solution Planning
### Task Decomposition
```javascript
function decomposeTasks(issue, exploration) {
  const tasks = []
  let taskId = 1
  // Group modification points by logical unit
  const groups = groupModificationPoints(exploration.modification_points)
  for (const group of groups) {
    // Infer the action first and mirror the fields that downstream helpers
    // (implementation steps, dependency inference) read from their argument
    group.action = inferAction(group)
    group.modification_points = group.points
    const task = {
      id: `T${taskId++}`,
      title: group.title,
      scope: group.scope,
      action: group.action,
      description: group.description,
      modification_points: group.points,
      implementation: generateImplementationSteps(group, exploration),
      depends_on: inferDependencies(group, tasks),
      estimated_minutes: estimateTime(group)
    }
    // Acceptance criteria read action and modification_points, so pass
    // the composed task rather than the raw group
    task.acceptance = generateAcceptanceCriteria(task)
    tasks.push(task)
  }
  return tasks
}
```
### Action Type Inference
```javascript
function inferAction(group) {
const actionMap = {
'new file': 'Create',
'create': 'Create',
'add': 'Implement',
'implement': 'Implement',
'modify': 'Update',
'update': 'Update',
'refactor': 'Refactor',
'config': 'Configure',
'test': 'Test',
'fix': 'Fix',
'remove': 'Delete',
'delete': 'Delete'
}
for (const [keyword, action] of Object.entries(actionMap)) {
if (group.description.toLowerCase().includes(keyword)) {
return action
}
}
return 'Implement'
}
```
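Because the map is scanned in insertion order, 'new file' wins over 'create' when both appear. Restated verbatim in runnable form for a quick check:

```javascript
// Runnable restatement of inferAction above; insertion order of actionMap
// determines which keyword wins when several match.
function inferAction(group) {
  const actionMap = {
    'new file': 'Create',
    'create': 'Create',
    'add': 'Implement',
    'implement': 'Implement',
    'modify': 'Update',
    'update': 'Update',
    'refactor': 'Refactor',
    'config': 'Configure',
    'test': 'Test',
    'fix': 'Fix',
    'remove': 'Delete',
    'delete': 'Delete'
  }
  for (const [keyword, action] of Object.entries(actionMap)) {
    if (group.description.toLowerCase().includes(keyword)) {
      return action
    }
  }
  return 'Implement' // default when no keyword matches
}

const refactor = inferAction({ description: 'Refactor the session layer' })
const fallback = inferAction({ description: 'Polish the docs wording' })
```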
### Dependency Analysis
```javascript
function inferDependencies(currentTask, existingTasks) {
const deps = []
// Rule 1: Update depends on Create for same file
for (const task of existingTasks) {
if (task.action === 'Create' && currentTask.action !== 'Create') {
const taskFiles = task.modification_points.map(mp => mp.file)
const currentFiles = currentTask.modification_points.map(mp => mp.file)
if (taskFiles.some(f => currentFiles.includes(f))) {
deps.push(task.id)
}
}
}
// Rule 2: Test depends on implementation
if (currentTask.action === 'Test') {
const testTarget = currentTask.scope.replace(/__tests__|tests?|spec/gi, '')
for (const task of existingTasks) {
if (task.scope.includes(testTarget) && ['Create', 'Implement', 'Update'].includes(task.action)) {
deps.push(task.id)
}
}
}
return [...new Set(deps)]
}
function validateDAG(tasks) {
const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
const visited = new Set()
const stack = new Set()
function hasCycle(taskId) {
if (stack.has(taskId)) return true
if (visited.has(taskId)) return false
visited.add(taskId)
stack.add(taskId)
for (const dep of graph.get(taskId) || []) {
if (hasCycle(dep)) return true
}
stack.delete(taskId)
return false
}
for (const taskId of graph.keys()) {
if (hasCycle(taskId)) {
return { valid: false, error: `Circular dependency detected involving ${taskId}` }
}
}
return { valid: true }
}
```
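The DAG check can be exercised directly; this restates `validateDAG` verbatim and runs it on a two-task cycle alongside a valid graph:

```javascript
// Runnable restatement of validateDAG above, exercised on T1 → T2 → T1.
function validateDAG(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const visited = new Set()
  const stack = new Set()
  function hasCycle(taskId) {
    if (stack.has(taskId)) return true
    if (visited.has(taskId)) return false
    visited.add(taskId)
    stack.add(taskId)
    for (const dep of graph.get(taskId) || []) {
      if (hasCycle(dep)) return true
    }
    stack.delete(taskId)
    return false
  }
  for (const taskId of graph.keys()) {
    if (hasCycle(taskId)) {
      return { valid: false, error: `Circular dependency detected involving ${taskId}` }
    }
  }
  return { valid: true }
}

const cyclic = validateDAG([
  { id: 'T1', depends_on: ['T2'] },
  { id: 'T2', depends_on: ['T1'] }
])
const acyclic = validateDAG([
  { id: 'T1', depends_on: [] },
  { id: 'T2', depends_on: ['T1'] }
])
```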
### Implementation Steps Generation
```javascript
function generateImplementationSteps(group, exploration) {
const steps = []
// Step 1: Setup/Preparation
if (group.action === 'Create') {
steps.push(`Create ${group.scope} file structure`)
} else {
steps.push(`Locate ${group.points[0].target} in ${group.points[0].file}`)
}
// Step 2-N: Core implementation based on patterns
if (exploration.patterns) {
steps.push(`Follow pattern: ${exploration.patterns}`)
}
// Add modification-specific steps
for (const point of group.points) {
steps.push(`${point.change} at ${point.target}`)
}
// Final step: Integration
steps.push('Add error handling and edge cases')
steps.push('Update imports and exports as needed')
return steps.slice(0, 7) // Max 7 steps
}
```
### Acceptance Criteria Generation
```javascript
function generateAcceptanceCriteria(task) {
const criteria = []
// Action-specific criteria
const actionCriteria = {
'Create': [`${task.scope} file created and exports correctly`],
'Implement': [`Feature ${task.title} works as specified`],
'Update': [`Modified behavior matches requirements`],
'Test': [`All test cases pass`, `Coverage >= 80%`],
'Fix': [`Bug no longer reproducible`],
'Configure': [`Configuration applied correctly`]
}
criteria.push(...(actionCriteria[task.action] || []))
// Add quantified criteria
if (task.modification_points.length > 0) {
criteria.push(`${task.modification_points.length} file(s) modified correctly`)
}
return criteria.slice(0, 4) // Max 4 criteria
}
```
---
## Phase 4: Validation & Output
### Solution Validation
```javascript
function validateSolution(solution) {
const errors = []
// Validate tasks
for (const task of solution.tasks) {
const taskErrors = validateTask(task)
if (taskErrors.length > 0) {
errors.push(...taskErrors.map(e => `${task.id}: ${e}`))
}
}
// Validate DAG
const dagResult = validateDAG(solution.tasks)
if (!dagResult.valid) {
errors.push(dagResult.error)
}
// Validate modification points exist
for (const task of solution.tasks) {
for (const mp of task.modification_points) {
if (mp.target !== 'new file' && !fileExists(mp.file)) {
errors.push(`${task.id}: File not found: ${mp.file}`)
}
}
}
return { valid: errors.length === 0, errors }
}
function validateTask(task) {
const errors = []
if (!/^T\d+$/.test(task.id)) errors.push('Invalid task ID format')
if (!task.title?.trim()) errors.push('Missing title')
if (!task.scope?.trim()) errors.push('Missing scope')
if (!['Create', 'Update', 'Implement', 'Refactor', 'Configure', 'Test', 'Fix', 'Delete'].includes(task.action)) {
errors.push('Invalid action type')
}
if (!task.implementation || task.implementation.length < 2) {
errors.push('Need 2+ implementation steps')
}
if (!task.acceptance || task.acceptance.length < 1) {
errors.push('Need 1+ acceptance criteria')
}
if (task.acceptance?.some(a => /works correctly|good performance|properly/i.test(a))) {
errors.push('Vague acceptance criteria')
}
return errors
}
```
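As a quick sanity check, the vague-criteria rule above rejects wording like "works correctly" even when every other field passes (`validateTask` restated verbatim):

```javascript
// Runnable restatement of validateTask above, applied to a task whose
// only flaw is a vague acceptance criterion.
function validateTask(task) {
  const errors = []
  if (!/^T\d+$/.test(task.id)) errors.push('Invalid task ID format')
  if (!task.title?.trim()) errors.push('Missing title')
  if (!task.scope?.trim()) errors.push('Missing scope')
  if (!['Create', 'Update', 'Implement', 'Refactor', 'Configure', 'Test', 'Fix', 'Delete'].includes(task.action)) {
    errors.push('Invalid action type')
  }
  if (!task.implementation || task.implementation.length < 2) {
    errors.push('Need 2+ implementation steps')
  }
  if (!task.acceptance || task.acceptance.length < 1) {
    errors.push('Need 1+ acceptance criteria')
  }
  if (task.acceptance?.some(a => /works correctly|good performance|properly/i.test(a))) {
    errors.push('Vague acceptance criteria')
  }
  return errors
}

const errors = validateTask({
  id: 'T1',
  title: 'Add auth middleware',
  scope: 'src/middleware/',
  action: 'Implement',
  implementation: ['Create middleware file', 'Wire into Express app'],
  acceptance: ['Auth works correctly']
})
```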
### Conflict Detection (Batch Mode)
```javascript
function detectConflicts(solutions) {
const fileModifications = new Map() // file -> [issue_ids]
for (const solution of solutions) {
for (const task of solution.tasks) {
for (const mp of task.modification_points) {
if (!fileModifications.has(mp.file)) {
fileModifications.set(mp.file, [])
}
if (!fileModifications.get(mp.file).includes(solution.issue_id)) {
fileModifications.get(mp.file).push(solution.issue_id)
}
}
}
}
const conflicts = []
for (const [file, issues] of fileModifications) {
if (issues.length > 1) {
conflicts.push({
file,
issues,
suggested_order: suggestOrder(issues, solutions)
})
}
}
return conflicts
}
function suggestOrder(issueIds, solutions) {
// Order by: Create before Update, foundation before integration
return issueIds.sort((a, b) => {
const solA = solutions.find(s => s.issue_id === a)
const solB = solutions.find(s => s.issue_id === b)
const hasCreateA = solA.tasks.some(t => t.action === 'Create')
const hasCreateB = solB.tasks.some(t => t.action === 'Create')
if (hasCreateA && !hasCreateB) return -1
if (hasCreateB && !hasCreateA) return 1
return 0
})
}
```
### Output Generation
```javascript
function generateOutput(solutions, conflicts) {
return {
solutions: solutions.map(s => ({
issue_id: s.issue_id,
solution: s
})),
conflicts,
_metadata: {
timestamp: new Date().toISOString(),
source: 'issue-plan-agent',
issues_count: solutions.length,
total_tasks: solutions.reduce((sum, s) => sum + s.tasks.length, 0)
}
}
}
```
### Solution Schema
```json
{
"issue_id": "GH-123",
"approach_name": "Direct Implementation",
"summary": "Add JWT authentication middleware to protect API routes",
"tasks": [
{
"id": "T1",
"title": "Create JWT validation middleware",
"scope": "src/middleware/",
"action": "Create",
"description": "Create middleware to validate JWT tokens",
"modification_points": [
{ "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
],
"implementation": ["Step 1", "Step 2", "..."],
"acceptance": ["Criterion 1", "Criterion 2"],
"depends_on": [],
"estimated_minutes": 30
}
],
"exploration_context": {
"relevant_files": ["src/config/env.ts"],
"patterns": "Follow existing middleware pattern",
"test_patterns": "Jest + supertest"
},
"estimated_total_minutes": 70,
"complexity": "Medium"
}
```
---
## Error Handling
```javascript
// Error handling with fallback
async function executeWithFallback(issue, projectRoot) {
try {
// Primary: ACE semantic search
const exploration = await aceExplore(issue, projectRoot)
return await generateSolution(issue, exploration)
} catch (aceError) {
console.warn('ACE failed:', aceError.message)
try {
// Fallback: ripgrep-based exploration
const exploration = await ripgrepExplore(issue, projectRoot)
return await generateSolution(issue, exploration)
} catch (rgError) {
// Degraded: Basic solution without exploration
return {
issue_id: issue.id,
approach_name: 'Basic Implementation',
summary: issue.title,
tasks: [{
id: 'T1',
title: issue.title,
scope: 'TBD',
action: 'Implement',
description: issue.description,
modification_points: [{ file: 'TBD', target: 'TBD', change: issue.title }],
implementation: ['Analyze requirements', 'Implement solution', 'Test and validate'],
acceptance: ['Feature works as described'],
depends_on: [],
estimated_minutes: 60
}],
exploration_context: { relevant_files: [], patterns: 'Manual exploration required' },
estimated_total_minutes: 60,
complexity: 'Medium',
_warning: 'Degraded mode - manual exploration required'
}
}
}
}
```
| Scenario | Action |
|----------|--------|
| ACE search returns no results | Fallback to ripgrep, warn user |
| Circular task dependency | Report error, suggest fix |
| File not found in codebase | Flag as "new file", update modification_points |
| Ambiguous requirements | Add clarification_needs to output |
---
## Quality Standards
### Acceptance Criteria Quality
| Good | Bad |
|------|-----|
| "3 API endpoints: GET, POST, DELETE" | "API works correctly" |
| "Response time < 200ms p95" | "Good performance" |
| "All 4 test cases pass" | "Tests pass" |
| "JWT token validated with secret from env" | "Authentication works" |
### Task Validation Checklist
Before outputting solution:
- [ ] ACE search performed for each issue
- [ ] All modification_points verified against codebase
- [ ] Tasks have 2+ implementation steps
- [ ] Tasks have 1+ quantified acceptance criteria
- [ ] Dependencies form valid DAG (no cycles)
- [ ] Estimated time is reasonable
---
## Key Reminders
**ALWAYS**:
1. Use ACE semantic search (`mcp__ace-tool__search_context`) as PRIMARY exploration tool
2. Read schema first before generating solution output
3. Include `depends_on` field (even if empty `[]`)
4. Quantify acceptance criteria with specific, testable conditions
5. Validate DAG before output (no circular dependencies)
6. Include file:line references in modification_points where possible
7. Detect and report cross-issue file conflicts in batch mode
8. Include exploration_context with patterns and relevant_files
**NEVER**:
1. Execute implementation (return plan only)
2. Use vague acceptance criteria ("works correctly", "good performance")
3. Create circular dependencies in task graph
4. Skip task validation before output
5. Omit required fields from solution schema
6. Assume file exists without verification
7. Generate more than 10 tasks per issue
8. Skip ACE search (unless fallback triggered)


@@ -0,0 +1,702 @@
---
name: issue-queue-agent
description: |
Task ordering agent for issue queue formation with dependency analysis and conflict resolution.
Orchestrates 4-phase workflow: Dependency Analysis → Conflict Detection → Semantic Ordering → Group Assignment
Core capabilities:
- ACE semantic search for relationship discovery
- Cross-issue dependency DAG construction
- File modification conflict detection
- Conflict resolution with execution ordering
- Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
color: orange
---
You are a specialized queue formation agent that analyzes tasks from bound solutions, resolves conflicts, and produces an ordered execution queue. You focus on optimal task ordering across multiple issues.
## Input Context
```javascript
{
// Required
tasks: [
{
issue_id: string, // Issue ID (e.g., "GH-123")
solution_id: string, // Solution ID (e.g., "SOL-001")
task: {
id: string, // Task ID (e.g., "T1")
title: string,
scope: string,
action: string, // Create | Update | Implement | Refactor | Test | Fix | Delete | Configure
modification_points: [
{ file: string, target: string, change: string }
],
depends_on: string[] // Task IDs within same issue
},
exploration_context: object
}
],
// Optional
project_root: string, // Project root for ACE search
existing_conflicts: object[], // Pre-identified conflicts
rebuild: boolean // Clear and regenerate queue
}
```
## 4-Phase Execution Workflow
```
Phase 1: Dependency Analysis (20%)
↓ Parse depends_on, build DAG, detect cycles
Phase 2: Conflict Detection + ACE Enhancement (30%)
↓ Identify file conflicts, ACE semantic relationship discovery
Phase 3: Conflict Resolution (25%)
↓ Determine execution order for conflicting tasks
Phase 4: Semantic Ordering & Grouping (25%)
↓ Calculate priority, topological sort, assign groups
```
---
## Phase 1: Dependency Analysis
### Build Dependency Graph
```javascript
function buildDependencyGraph(tasks) {
const taskGraph = new Map()
const fileModifications = new Map() // file -> [taskKeys]
for (const item of tasks) {
const taskKey = `${item.issue_id}:${item.task.id}`
taskGraph.set(taskKey, {
...item,
key: taskKey,
inDegree: 0,
outEdges: []
})
// Track file modifications for conflict detection
for (const mp of item.task.modification_points || []) {
if (!fileModifications.has(mp.file)) {
fileModifications.set(mp.file, [])
}
fileModifications.get(mp.file).push(taskKey)
}
}
// Add explicit dependency edges (within same issue)
for (const [key, node] of taskGraph) {
for (const dep of node.task.depends_on || []) {
const depKey = `${node.issue_id}:${dep}`
if (taskGraph.has(depKey)) {
taskGraph.get(depKey).outEdges.push(key)
node.inDegree++
}
}
}
return { taskGraph, fileModifications }
}
```
### Cycle Detection
```javascript
function detectCycles(taskGraph) {
const visited = new Set()
const stack = new Set()
const cycles = []
function dfs(key, path = []) {
if (stack.has(key)) {
// Found cycle - extract cycle path
const cycleStart = path.indexOf(key)
cycles.push(path.slice(cycleStart).concat(key))
return true
}
if (visited.has(key)) return false
visited.add(key)
stack.add(key)
path.push(key)
for (const next of taskGraph.get(key)?.outEdges || []) {
dfs(next, [...path])
}
stack.delete(key)
return false
}
for (const key of taskGraph.keys()) {
if (!visited.has(key)) {
dfs(key)
}
}
return {
hasCycle: cycles.length > 0,
cycles
}
}
```
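Run against a minimal two-node loop, the detector returns the cycle path itself (`detectCycles` restated verbatim; graph nodes only need `outEdges` here):

```javascript
// Runnable restatement of detectCycles above on a two-task loop.
function detectCycles(taskGraph) {
  const visited = new Set()
  const stack = new Set()
  const cycles = []
  function dfs(key, path = []) {
    if (stack.has(key)) {
      // Found cycle - extract cycle path
      const cycleStart = path.indexOf(key)
      cycles.push(path.slice(cycleStart).concat(key))
      return true
    }
    if (visited.has(key)) return false
    visited.add(key)
    stack.add(key)
    path.push(key)
    for (const next of taskGraph.get(key)?.outEdges || []) {
      dfs(next, [...path])
    }
    stack.delete(key)
    return false
  }
  for (const key of taskGraph.keys()) {
    if (!visited.has(key)) dfs(key)
  }
  return { hasCycle: cycles.length > 0, cycles }
}

const graph = new Map([
  ['GH-1:T1', { outEdges: ['GH-1:T2'] }],
  ['GH-1:T2', { outEdges: ['GH-1:T1'] }]
])
const result = detectCycles(graph)
```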
---
## Phase 2: Conflict Detection
### Identify File Conflicts
```javascript
function detectFileConflicts(fileModifications, taskGraph) {
const conflicts = []
for (const [file, taskKeys] of fileModifications) {
if (taskKeys.length > 1) {
// Multiple tasks modify same file
const taskDetails = taskKeys.map(key => {
const node = taskGraph.get(key)
return {
key,
issue_id: node.issue_id,
task_id: node.task.id,
title: node.task.title,
action: node.task.action,
scope: node.task.scope
}
})
conflicts.push({
type: 'file_conflict',
file,
tasks: taskKeys,
task_details: taskDetails,
resolution: null,
resolved: false
})
}
}
return conflicts
}
```
### Conflict Classification
```javascript
function classifyConflict(conflict, taskGraph) {
const tasks = conflict.tasks.map(key => taskGraph.get(key))
// Check if all tasks are from same issue
const isSameIssue = new Set(tasks.map(t => t.issue_id)).size === 1
// Check action types
const actions = tasks.map(t => t.task.action)
const hasCreate = actions.includes('Create')
const hasDelete = actions.includes('Delete')
return {
...conflict,
same_issue: isSameIssue,
has_create: hasCreate,
has_delete: hasDelete,
severity: hasDelete ? 'high' : hasCreate ? 'medium' : 'low'
}
}
```
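A Delete anywhere in the conflict drives severity to high, regardless of the other actions (`classifyConflict` restated verbatim):

```javascript
// Runnable restatement of classifyConflict above; a cross-issue conflict
// that includes a Delete is classified as high severity.
function classifyConflict(conflict, taskGraph) {
  const tasks = conflict.tasks.map(key => taskGraph.get(key))
  const isSameIssue = new Set(tasks.map(t => t.issue_id)).size === 1
  const actions = tasks.map(t => t.task.action)
  const hasCreate = actions.includes('Create')
  const hasDelete = actions.includes('Delete')
  return {
    ...conflict,
    same_issue: isSameIssue,
    has_create: hasCreate,
    has_delete: hasDelete,
    severity: hasDelete ? 'high' : hasCreate ? 'medium' : 'low'
  }
}

const taskGraph = new Map([
  ['GH-1:T1', { issue_id: 'GH-1', task: { action: 'Create' } }],
  ['GH-2:T3', { issue_id: 'GH-2', task: { action: 'Delete' } }]
])
const classified = classifyConflict(
  { type: 'file_conflict', file: 'src/auth.ts', tasks: ['GH-1:T1', 'GH-2:T3'] },
  taskGraph
)
```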
---
## Phase 3: Conflict Resolution
### Resolution Rules
| Priority | Rule | Example |
|----------|------|---------|
| 1 | Create before Update/Implement | T1:Create → T2:Update |
| 2 | Foundation before integration | config/ → src/ |
| 3 | Types before implementation | types/ → components/ |
| 4 | Core before tests | src/ → __tests__/ |
| 5 | Same issue order preserved | T1 → T2 → T3 |
### Apply Resolution Rules
```javascript
function resolveConflict(conflict, taskGraph) {
const tasks = conflict.tasks.map(key => ({
key,
node: taskGraph.get(key)
}))
// Sort by resolution rules
tasks.sort((a, b) => {
const nodeA = a.node
const nodeB = b.node
// Rule 1: Create before others
if (nodeA.task.action === 'Create' && nodeB.task.action !== 'Create') return -1
if (nodeB.task.action === 'Create' && nodeA.task.action !== 'Create') return 1
// Rule 2: Delete last
if (nodeA.task.action === 'Delete' && nodeB.task.action !== 'Delete') return 1
if (nodeB.task.action === 'Delete' && nodeA.task.action !== 'Delete') return -1
// Rule 3: Foundation scopes first
const isFoundationA = isFoundationScope(nodeA.task.scope)
const isFoundationB = isFoundationScope(nodeB.task.scope)
if (isFoundationA && !isFoundationB) return -1
if (isFoundationB && !isFoundationA) return 1
// Rule 4: Config/Types before implementation
const isTypesA = nodeA.task.scope?.includes('types')
const isTypesB = nodeB.task.scope?.includes('types')
if (isTypesA && !isTypesB) return -1
if (isTypesB && !isTypesA) return 1
// Rule 5: Preserve issue order (same issue)
if (nodeA.issue_id === nodeB.issue_id) {
return parseInt(nodeA.task.id.replace('T', '')) - parseInt(nodeB.task.id.replace('T', ''))
}
return 0
})
const order = tasks.map(t => t.key)
const rationale = generateRationale(tasks)
return {
...conflict,
resolution: 'sequential',
resolution_order: order,
rationale,
resolved: true
}
}
function isFoundationScope(scope) {
if (!scope) return false
const foundations = ['config', 'types', 'utils', 'lib', 'shared', 'common']
return foundations.some(f => scope.toLowerCase().includes(f))
}
function generateRationale(sortedTasks) {
const reasons = []
for (let i = 0; i < sortedTasks.length - 1; i++) {
const curr = sortedTasks[i].node.task
const next = sortedTasks[i + 1].node.task
if (curr.action === 'Create') {
reasons.push(`${curr.id} creates file before ${next.id}`)
} else if (isFoundationScope(curr.scope)) {
reasons.push(`${curr.id} (foundation) before ${next.id}`)
}
}
return reasons.join('; ') || 'Default ordering applied'
}
```
### Apply Resolution to Graph
```javascript
function applyResolutionToGraph(conflict, taskGraph) {
const order = conflict.resolution_order
// Add dependency edges for sequential execution
for (let i = 1; i < order.length; i++) {
const prevKey = order[i - 1]
const currKey = order[i]
if (taskGraph.has(prevKey) && taskGraph.has(currKey)) {
const prevNode = taskGraph.get(prevKey)
const currNode = taskGraph.get(currKey)
// Avoid duplicate edges
if (!prevNode.outEdges.includes(currKey)) {
prevNode.outEdges.push(currKey)
currNode.inDegree++
}
}
}
}
```
---
## Phase 4: Semantic Ordering & Grouping
### Semantic Priority Calculation
```javascript
function calculateSemanticPriority(node) {
let priority = 0.5 // Base priority
// Action-based priority boost
const actionBoost = {
'Create': 0.2,
'Configure': 0.15,
'Implement': 0.1,
'Update': 0,
'Refactor': -0.05,
'Test': -0.1,
'Fix': 0.05,
'Delete': -0.15
}
priority += actionBoost[node.task.action] || 0
// Scope-based boost
if (isFoundationScope(node.task.scope)) {
priority += 0.1
}
if (node.task.scope?.includes('types')) {
priority += 0.05
}
// Clamp to [0, 1]
return Math.max(0, Math.min(1, priority))
}
```
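Worked through for a Create task in a foundation scope: 0.5 base + 0.2 (Create) + 0.1 (foundation) ≈ 0.8. Restated verbatim (with its `isFoundationScope` helper) for a runnable check:

```javascript
// Runnable restatement of calculateSemanticPriority and its helper.
function isFoundationScope(scope) {
  if (!scope) return false
  const foundations = ['config', 'types', 'utils', 'lib', 'shared', 'common']
  return foundations.some(f => scope.toLowerCase().includes(f))
}

function calculateSemanticPriority(node) {
  let priority = 0.5 // Base priority
  const actionBoost = {
    'Create': 0.2, 'Configure': 0.15, 'Implement': 0.1, 'Update': 0,
    'Refactor': -0.05, 'Test': -0.1, 'Fix': 0.05, 'Delete': -0.15
  }
  priority += actionBoost[node.task.action] || 0
  if (isFoundationScope(node.task.scope)) priority += 0.1
  if (node.task.scope?.includes('types')) priority += 0.05
  return Math.max(0, Math.min(1, priority)) // Clamp to [0, 1]
}

const createConfig = calculateSemanticPriority({ task: { action: 'Create', scope: 'src/config/' } })
const testTask = calculateSemanticPriority({ task: { action: 'Test', scope: '__tests__/' } })
```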
### Topological Sort with Priority
```javascript
function topologicalSortWithPriority(taskGraph) {
const result = []
const queue = []
// Initialize with zero in-degree tasks
for (const [key, node] of taskGraph) {
if (node.inDegree === 0) {
queue.push(key)
}
}
let executionOrder = 1
while (queue.length > 0) {
// Sort queue by semantic priority (descending)
queue.sort((a, b) => {
const nodeA = taskGraph.get(a)
const nodeB = taskGraph.get(b)
// 1. Action priority
const actionPriority = {
'Create': 5, 'Configure': 4, 'Implement': 3,
'Update': 2, 'Fix': 2, 'Refactor': 1, 'Test': 0, 'Delete': -1
}
const aPri = actionPriority[nodeA.task.action] ?? 2
const bPri = actionPriority[nodeB.task.action] ?? 2
if (aPri !== bPri) return bPri - aPri
// 2. Foundation scope first
const aFound = isFoundationScope(nodeA.task.scope)
const bFound = isFoundationScope(nodeB.task.scope)
if (aFound !== bFound) return aFound ? -1 : 1
// 3. Types before implementation
const aTypes = nodeA.task.scope?.includes('types')
const bTypes = nodeB.task.scope?.includes('types')
if (aTypes !== bTypes) return aTypes ? -1 : 1
return 0
})
const current = queue.shift()
const node = taskGraph.get(current)
node.execution_order = executionOrder++
node.semantic_priority = calculateSemanticPriority(node)
result.push(current)
// Process outgoing edges
for (const next of node.outEdges) {
const nextNode = taskGraph.get(next)
nextNode.inDegree--
if (nextNode.inDegree === 0) {
queue.push(next)
}
}
}
// Check for remaining nodes (cycle indication)
if (result.length !== taskGraph.size) {
const remaining = [...taskGraph.keys()].filter(k => !result.includes(k))
return { success: false, error: `Unprocessed tasks: ${remaining.join(', ')}`, result }
}
return { success: true, result }
}
```
### Execution Group Assignment
```javascript
function assignExecutionGroups(orderedTasks, taskGraph, conflicts) {
  const groups = []
  let currentGroup = { type: 'P', number: 1, tasks: [] }
  for (const key of orderedTasks) {
    const node = taskGraph.get(key)
    // Determine if this task can run in parallel with the current group
    const canParallel = canRunParallel(key, currentGroup.tasks, taskGraph, conflicts)
    if (!canParallel && currentGroup.tasks.length > 0) {
      // Save the current group and start a new sequential group
      groups.push({ ...currentGroup })
      currentGroup = { type: 'S', number: groups.length + 1, tasks: [] }
    }
    currentGroup.tasks.push(key)
    node.execution_group = `${currentGroup.type}${currentGroup.number}`
  }
  // Save the last group
  if (currentGroup.tasks.length > 0) {
    groups.push(currentGroup)
  }
  return groups
}
function canRunParallel(taskKey, groupTasks, taskGraph, conflicts) {
  if (groupTasks.length === 0) return true
  const node = taskGraph.get(taskKey)
  // Check 1: No dependencies on group tasks
  // (depends_on holds task IDs scoped to the same issue, so match both parts)
  for (const groupTask of groupTasks) {
    const [groupIssue, groupTaskId] = groupTask.split(':')
    if (groupIssue === node.issue_id && node.task.depends_on?.includes(groupTaskId)) {
      return false
    }
  }
  // Check 2: No file conflicts with group tasks
  for (const conflict of conflicts) {
    if (conflict.tasks.includes(taskKey)) {
      for (const groupTask of groupTasks) {
        if (conflict.tasks.includes(groupTask)) {
          return false
        }
      }
    }
  }
  // Check 3: Only tasks from different issues run in parallel
  const groupIssues = new Set(groupTasks.map(t => taskGraph.get(t).issue_id))
  return !groupIssues.has(node.issue_id)
}
```
---
## Output Generation
### Queue Item Format
```javascript
function generateQueueItems(orderedTasks, taskGraph, conflicts) {
  const queueItems = []
  let queueIdCounter = 1
  for (const key of orderedTasks) {
    const node = taskGraph.get(key)
    queueItems.push({
      queue_id: `Q-${String(queueIdCounter++).padStart(3, '0')}`,
      issue_id: node.issue_id,
      solution_id: node.solution_id,
      task_id: node.task.id,
      status: 'pending',
      execution_order: node.execution_order,
      execution_group: node.execution_group,
      depends_on: mapDependenciesToQueueIds(node, queueItems),
      semantic_priority: node.semantic_priority,
      queued_at: new Date().toISOString()
    })
  }
  return queueItems
}
function mapDependenciesToQueueIds(node, queueItems) {
  return (node.task.depends_on || []).map(dep => {
    const queueItem = queueItems.find(q =>
      q.issue_id === node.issue_id && q.task_id === dep
    )
    // Fall back to the raw task ID if the dependency has not been queued yet
    return queueItem?.queue_id || dep
  })
}
```
### Final Output
```javascript
function generateOutput(queueItems, conflicts, groups) {
return {
queue: queueItems,
conflicts: conflicts.map(c => ({
type: c.type,
file: c.file,
tasks: c.tasks,
resolution: c.resolution,
resolution_order: c.resolution_order,
rationale: c.rationale,
resolved: c.resolved
})),
execution_groups: groups.map(g => ({
id: `${g.type}${g.number}`,
type: g.type === 'P' ? 'parallel' : 'sequential',
task_count: g.tasks.length,
tasks: g.tasks
})),
_metadata: {
version: '1.0',
total_tasks: queueItems.length,
total_conflicts: conflicts.length,
resolved_conflicts: conflicts.filter(c => c.resolved).length,
parallel_groups: groups.filter(g => g.type === 'P').length,
sequential_groups: groups.filter(g => g.type === 'S').length,
timestamp: new Date().toISOString(),
source: 'issue-queue-agent'
}
}
}
```
---
## Error Handling
```javascript
async function executeWithValidation(tasks) {
// Phase 1: Build graph
const { taskGraph, fileModifications } = buildDependencyGraph(tasks)
// Check for cycles
const cycleResult = detectCycles(taskGraph)
if (cycleResult.hasCycle) {
return {
success: false,
error: 'Circular dependency detected',
cycles: cycleResult.cycles,
suggestion: 'Remove circular dependencies or reorder tasks manually'
}
}
// Phase 2: Detect conflicts
const conflicts = detectFileConflicts(fileModifications, taskGraph)
.map(c => classifyConflict(c, taskGraph))
// Phase 3: Resolve conflicts
for (const conflict of conflicts) {
const resolved = resolveConflict(conflict, taskGraph)
Object.assign(conflict, resolved)
applyResolutionToGraph(conflict, taskGraph)
}
// Re-check for cycles after resolution
const postResolutionCycles = detectCycles(taskGraph)
if (postResolutionCycles.hasCycle) {
return {
success: false,
error: 'Conflict resolution created circular dependency',
cycles: postResolutionCycles.cycles,
suggestion: 'Manual conflict resolution required'
}
}
// Phase 4: Sort and group
const sortResult = topologicalSortWithPriority(taskGraph)
if (!sortResult.success) {
return {
success: false,
error: sortResult.error,
partial_result: sortResult.result
}
}
const groups = assignExecutionGroups(sortResult.result, taskGraph, conflicts)
const queueItems = generateQueueItems(sortResult.result, taskGraph, conflicts)
return {
success: true,
output: generateOutput(queueItems, conflicts, groups)
}
}
```
| Scenario | Action |
|----------|--------|
| Circular dependency | Report cycles, abort with suggestion |
| Conflict resolution creates cycle | Flag for manual resolution |
| Missing task reference in depends_on | Skip and warn |
| Empty task list | Return empty queue |
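The flow above calls `detectCycles` both before and after conflict resolution, but the function itself is not shown in this section. A minimal DFS-based sketch (three-color marking), assuming `taskGraph` is a `Map` whose nodes carry a `depends_on` array of graph keys:

```javascript
// Detect cycles in the task graph via DFS with white/gray/black coloring.
// taskGraph: Map<string, { depends_on: string[] }> (shape is an assumption)
function detectCycles(taskGraph) {
  const WHITE = 0, GRAY = 1, BLACK = 2
  const color = new Map([...taskGraph.keys()].map(k => [k, WHITE]))
  const cycles = []

  function visit(key, path) {
    color.set(key, GRAY)
    path.push(key)
    for (const dep of (taskGraph.get(key).depends_on || [])) {
      if (!taskGraph.has(dep)) continue // missing refs are handled elsewhere
      if (color.get(dep) === GRAY) {
        // Back edge found: record the cycle slice of the current path
        cycles.push([...path.slice(path.indexOf(dep)), dep])
      } else if (color.get(dep) === WHITE) {
        visit(dep, path)
      }
    }
    path.pop()
    color.set(key, BLACK)
  }

  for (const key of taskGraph.keys()) {
    if (color.get(key) === WHITE) visit(key, [])
  }
  return { hasCycle: cycles.length > 0, cycles }
}
```

The `{ hasCycle, cycles }` return shape matches what `executeWithValidation` reads above.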
---
## Quality Standards
### Ordering Validation
```javascript
function validateOrdering(queueItems, taskGraph) {
const errors = []
for (const item of queueItems) {
// Check dependencies come before
for (const depQueueId of item.depends_on) {
const depItem = queueItems.find(q => q.queue_id === depQueueId)
if (depItem && depItem.execution_order >= item.execution_order) {
errors.push(`${item.queue_id} ordered before dependency ${depQueueId}`)
}
}
}
return { valid: errors.length === 0, errors }
}
```
### Semantic Priority Rules
| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Fix action | +0.05 |
| Foundation scope (config/types/utils) | +0.1 |
| Types scope | +0.05 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
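The table above can be sketched as a lookup-based scorer. A minimal version, with two stated assumptions: the `0.5` base score is illustrative, and since the table lists both a foundation boost (+0.1 for config/types/utils) and a separate types boost (+0.05), this sketch resolves the ambiguity by giving `types` the dedicated +0.05:

```javascript
// Score a task's semantic priority from its action/scope, per the rules table.
const ACTION_BOOST = {
  create: 0.2, configure: 0.15, implement: 0.1, fix: 0.05,
  refactor: -0.05, test: -0.1, delete: -0.15
}
// Assumption: types gets its dedicated +0.05 rather than the foundation +0.1
const SCOPE_BOOST = { config: 0.1, utils: 0.1, types: 0.05 }

function semanticPriority(task, base = 0.5) {
  const action = (task.action || '').toLowerCase()
  const scope = (task.scope || '').toLowerCase()
  let score = base + (ACTION_BOOST[action] || 0)
  for (const [key, boost] of Object.entries(SCOPE_BOOST)) {
    if (scope.includes(key)) { score += boost; break } // first matching scope wins
  }
  return score
}
```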
---
## Key Reminders
**ALWAYS**:
1. Build dependency graph before any ordering
2. Detect cycles before and after conflict resolution
3. Apply resolution rules consistently (Create → Update → Delete)
4. Preserve within-issue task order when no conflicts
5. Calculate semantic priority for all tasks
6. Validate ordering before output
7. Include rationale for conflict resolutions
8. Map depends_on to queue_ids in output
**NEVER**:
1. Execute tasks (ordering only)
2. Ignore circular dependencies
3. Create arbitrary ordering without rules
4. Skip conflict detection
5. Output invalid DAG
6. Merge tasks from different issues in same parallel group if conflicts exist
7. Assume task order without checking depends_on
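Several of these reminders (validate ordering before output, map `depends_on` to queue_ids) can be enforced mechanically on the final output. A hedged sketch, assuming the shape produced by `generateOutput` above (the function name is an illustration):

```javascript
// Check two Key Reminders on a generated output object:
// every depends_on entry is a known queue_id, and metadata counts match.
function checkOutputInvariants(output) {
  const queueIds = new Set(output.queue.map(q => q.queue_id))
  const errors = []
  for (const item of output.queue) {
    for (const dep of item.depends_on || []) {
      if (!queueIds.has(dep)) {
        errors.push(`${item.queue_id}: depends_on "${dep}" is not a queue_id`)
      }
    }
  }
  if (output._metadata.total_tasks !== output.queue.length) {
    errors.push('metadata total_tasks does not match queue length')
  }
  return { valid: errors.length === 0, errors }
}
```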


@@ -1,552 +1,384 @@
---
name: execute
description: Execute issue tasks with closed-loop methodology (analyze→implement→test→optimize→commit)
argument-hint: "<issue-id> [--task <task-id>] [--batch <n>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*), Edit(*), AskUserQuestion(*)
description: Execute queue with codex using endpoint-driven task fetching (single task per codex instance)
argument-hint: "[--parallel <n>] [--executor codex|gemini]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
# Issue Execute Command (/issue:execute)
## Overview
Execute tasks from a planned issue using closed-loop methodology. Each task goes through 5 phases: **Analyze → Implement → Test → Optimize → Commit**. Tasks are loaded progressively based on dependency satisfaction.
Execution orchestrator that coordinates codex instances. Each task is executed by an independent codex instance that fetches its task via CLI endpoint. **Codex does NOT read task files** - it calls `ccw issue next` to get task data dynamically.
**Core capabilities:**
- Progressive task loading (only load ready tasks)
- Closed-loop execution with 5 phases per task
- Automatic retry on test failures (up to 3 attempts)
- Pause on defined pause_criteria conditions
- Delivery criteria verification before completion
- Automatic git commit per task
**Core design:**
- Single task per codex instance (not loop mode)
- Endpoint-driven: `ccw issue next` → execute → `ccw issue complete`
- No file reading in codex
- Orchestrator manages parallelism
## Storage Structure (Flat JSONL)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queue.json # Execution queue
└── solutions/
├── {issue-id}.jsonl # Solutions for issue
└── ...
```
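The commands in this file read and rewrite these files through a `readJsonl` helper that is used but never defined here. A minimal sketch of the parse/serialize pair (the names and the blank-line handling are assumptions):

```javascript
// Parse JSONL text into an array of objects, skipping blank lines.
function parseJsonl(text) {
  return text
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line))
}

// Serialize objects back to JSONL: one compact JSON object per line.
function serializeJsonl(items) {
  return items.map(item => JSON.stringify(item)).join('\n')
}
```

A file-backed `readJsonl(path)` would simply wrap `parseJsonl` around the file's contents.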
## Usage
```bash
/issue:execute <ISSUE_ID> [FLAGS]
/issue:execute [FLAGS]
# Arguments
<issue-id> Issue ID (e.g., GH-123, TEXT-1735200000)
# Examples
/issue:execute # Execute all ready tasks
/issue:execute --parallel 3 # Execute up to 3 tasks in parallel
/issue:execute --executor codex # Force codex executor
# Flags
--task <id> Execute specific task only
--batch <n> Max concurrent tasks (default: 1)
--skip-commit Skip git commit phase
--dry-run Simulate execution without changes
--continue Continue from paused/failed state
--parallel <n> Max parallel codex instances (default: 1)
--executor <type> Force executor: codex|gemini|agent
--dry-run Show what would execute without running
```
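The flags above could be parsed with a small argv walker; a sketch only, since the command's real argument parsing is not shown in this document:

```javascript
// Parse /issue:execute flags from an argv-style array (defaults per the usage text).
function parseFlags(argv) {
  const flags = { parallel: 1, executor: null, dryRun: false }
  for (let i = 0; i < argv.length; i++) {
    switch (argv[i]) {
      case '--parallel':
        flags.parallel = parseInt(argv[++i], 10) || 1 // fall back to default on bad input
        break
      case '--executor':
        flags.executor = argv[++i] // codex | gemini | agent
        break
      case '--dry-run':
        flags.dryRun = true
        break
    }
  }
  return flags
}
```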
## Execution Process
```
Initialization:
├─ Load state.json and tasks.jsonl
├─ Build completed task index
└─ Identify ready tasks (dependencies satisfied)
Phase 1: Queue Loading
├─ Load queue.json
├─ Count pending/ready tasks
└─ Initialize TodoWrite tracking
Task Loop:
└─ For each ready task:
├─ Phase 1: ANALYZE
│ ├─ Verify task requirements
│ ├─ Check file existence
│ ├─ Validate preconditions
│ └─ Check pause_criteria (halt if triggered)
├─ Phase 2: IMPLEMENT
│ ├─ Execute code changes
│ ├─ Write/modify files
│ └─ Track modified files
├─ Phase 3: TEST
│ ├─ Run relevant tests
│ ├─ Verify functionality
│ └─ Retry loop (max 3) on failure → back to IMPLEMENT
├─ Phase 4: OPTIMIZE
│ ├─ Code quality check
│ ├─ Lint/format verification
│ └─ Apply minor improvements
├─ Phase 5: COMMIT
│ ├─ Stage modified files
│ ├─ Create commit with task reference
│ └─ Update task status to 'completed'
└─ Update state.json
Phase 2: Ready Task Detection
├─ Find tasks with satisfied dependencies
├─ Group by execution_group (parallel batches)
└─ Determine execution order
Completion:
└─ Return execution summary
Phase 3: Codex Coordination
├─ For each ready task:
│ ├─ Launch independent codex instance
│ ├─ Codex calls: ccw issue next
│ ├─ Codex receives task data (NOT file)
│ ├─ Codex executes task
│ ├─ Codex calls: ccw issue complete <queue-id>
│ └─ Update TodoWrite
└─ Parallel execution based on --parallel flag
Phase 4: Completion
├─ Generate execution summary
├─ Update issue statuses in issues.jsonl
└─ Display results
```
## Implementation
### Initialization
### Phase 1: Queue Loading
```javascript
// Load issue context
const issueDir = `.workflow/issues/${issueId}`
const state = JSON.parse(Read(`${issueDir}/state.json`))
const tasks = readJsonl(`${issueDir}/tasks.jsonl`)
// Load queue
const queuePath = '.workflow/issues/queue.json';
if (!Bash(`test -f "${queuePath}" && echo exists`).includes('exists')) {
console.log('No queue found. Run /issue:queue first.');
return;
}
// Build completed index
const completedIds = new Set(
tasks.filter(t => t.status === 'completed').map(t => t.id)
)
const queue = JSON.parse(Read(queuePath));
// Get ready tasks (dependencies satisfied)
// Count by status
const pending = queue.queue.filter(q => q.status === 'pending');
const executing = queue.queue.filter(q => q.status === 'executing');
const completed = queue.queue.filter(q => q.status === 'completed');
console.log(`
## Execution Queue Status
- Pending: ${pending.length}
- Executing: ${executing.length}
- Completed: ${completed.length}
- Total: ${queue.queue.length}
`);
if (pending.length === 0 && executing.length === 0) {
console.log('All tasks completed!');
return;
}
```
### Phase 2: Ready Task Detection
```javascript
// Find ready tasks (dependencies satisfied)
function getReadyTasks() {
return tasks.filter(task =>
task.status === 'pending' &&
task.depends_on.every(dep => completedIds.has(dep))
)
const completedIds = new Set(
queue.queue.filter(q => q.status === 'completed').map(q => q.queue_id)
);
return queue.queue.filter(item => {
if (item.status !== 'pending') return false;
return item.depends_on.every(depId => completedIds.has(depId));
});
}
let readyTasks = getReadyTasks()
const readyTasks = getReadyTasks();
if (readyTasks.length === 0) {
if (tasks.every(t => t.status === 'completed')) {
console.log('✓ All tasks completed!')
return
}
console.log('⚠ No ready tasks. Check dependencies or blocked tasks.')
return
}
// Initialize TodoWrite for tracking
TodoWrite({
todos: readyTasks.slice(0, batchSize).map(t => ({
content: `[${t.id}] ${t.title}`,
status: 'pending',
activeForm: `Executing ${t.id}`
}))
})
```
### Task Execution Loop
```javascript
for (const task of readyTasks.slice(0, batchSize)) {
console.log(`\n## Executing: ${task.id} - ${task.title}`)
// Update state
updateTaskStatus(task.id, 'in_progress', 'analyze')
try {
// Phase 1: ANALYZE
const analyzeResult = await executePhase_Analyze(task)
if (analyzeResult.paused) {
console.log(`⏸ Task paused: ${analyzeResult.reason}`)
updateTaskStatus(task.id, 'paused', 'analyze')
continue
}
// Phase 2-5: Closed Loop
let implementRetries = 0
const maxRetries = 3
while (implementRetries < maxRetries) {
// Phase 2: IMPLEMENT
const implementResult = await executePhase_Implement(task, analyzeResult)
updateTaskStatus(task.id, 'in_progress', 'test')
// Phase 3: TEST
const testResult = await executePhase_Test(task, implementResult)
if (testResult.passed) {
// Phase 4: OPTIMIZE
await executePhase_Optimize(task, implementResult)
// Phase 5: COMMIT
if (!flags.skipCommit) {
await executePhase_Commit(task, implementResult)
}
// Mark completed
updateTaskStatus(task.id, 'completed', 'done')
completedIds.add(task.id)
break
} else {
implementRetries++
console.log(`⚠ Test failed, retry ${implementRetries}/${maxRetries}`)
if (implementRetries >= maxRetries) {
updateTaskStatus(task.id, 'failed', 'test')
console.log(`✗ Task failed after ${maxRetries} retries`)
}
}
}
} catch (error) {
updateTaskStatus(task.id, 'failed', task.current_phase)
console.log(`✗ Task failed: ${error.message}`)
}
}
```
### Phase 1: ANALYZE
```javascript
async function executePhase_Analyze(task) {
console.log('### Phase 1: ANALYZE')
// Check pause criteria first
for (const criterion of task.pause_criteria || []) {
const shouldPause = await evaluatePauseCriterion(criterion, task)
if (shouldPause) {
return { paused: true, reason: criterion }
}
}
// Execute analysis via CLI
const analysisResult = await Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Analyze: ${task.id}`,
prompt=`
## Analysis Task
ID: ${task.id}
Title: ${task.title}
Description: ${task.description}
## File Context
${task.file_context.join('\n')}
## Delivery Criteria (to be achieved)
${task.delivery_criteria.map((c, i) => `${i+1}. ${c}`).join('\n')}
## Required Analysis
1. Verify all referenced files exist
2. Identify exact modification points
3. Check for potential conflicts
4. Validate approach feasibility
## Output
Return JSON:
{
"files_to_modify": ["path1", "path2"],
"integration_points": [...],
"potential_risks": [...],
"implementation_notes": "..."
}
`
)
// Parse and return
const analysis = JSON.parse(analysisResult)
// Update phase results
updatePhaseResult(task.id, 'analyze', {
status: 'completed',
findings: analysis.potential_risks,
timestamp: new Date().toISOString()
})
return { paused: false, analysis }
}
```
### Phase 2: IMPLEMENT
```javascript
async function executePhase_Implement(task, analyzeResult) {
console.log('### Phase 2: IMPLEMENT')
updateTaskStatus(task.id, 'in_progress', 'implement')
// Determine executor
const executor = task.executor === 'auto'
? (task.type === 'test' ? 'agent' : 'codex')
: task.executor
// Build implementation prompt
const prompt = `
## Implementation Task
ID: ${task.id}
Title: ${task.title}
Type: ${task.type}
## Description
${task.description}
## Analysis Results
${JSON.stringify(analyzeResult.analysis, null, 2)}
## Files to Modify
${analyzeResult.analysis.files_to_modify.join('\n')}
## Delivery Criteria (MUST achieve all)
${task.delivery_criteria.map((c, i) => `- [ ] ${c}`).join('\n')}
## Implementation Notes
${analyzeResult.analysis.implementation_notes}
## Rules
- Follow existing code patterns
- Maintain backward compatibility
- Add appropriate error handling
- Document significant changes
`
let result
if (executor === 'codex') {
result = Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write`,
timeout=3600000
)
} else if (executor === 'gemini') {
result = Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write`,
timeout=1800000
)
if (executing.length > 0) {
console.log('Tasks are currently executing. Wait for completion.');
} else {
result = await Task(
console.log('No ready tasks. Check for blocked dependencies.');
}
return;
}
console.log(`Found ${readyTasks.length} ready tasks`);
// Sort by execution order
readyTasks.sort((a, b) => a.execution_order - b.execution_order);
// Initialize TodoWrite
TodoWrite({
todos: readyTasks.slice(0, parallelLimit).map(t => ({
content: `[${t.queue_id}] ${t.issue_id}:${t.task_id}`,
status: 'pending',
activeForm: `Executing ${t.queue_id}`
}))
});
```
### Phase 3: Codex Coordination (Single Task Mode)
```javascript
// Execute tasks - single codex instance per task
async function executeTask(queueItem) {
const codexPrompt = `
## Single Task Execution
You are executing ONE task from the issue queue. Follow these steps exactly:
### Step 1: Fetch Task
Run this command to get your task:
\`\`\`bash
ccw issue next
\`\`\`
This returns JSON with:
- queue_id: Queue item ID
- task: Task definition with implementation steps
- context: Exploration context
- execution_hints: Executor and time estimate
### Step 2: Execute Task
Read the returned task object and:
1. Follow task.implementation steps in order
2. Meet all task.acceptance criteria
3. Use provided context.relevant_files for reference
4. Use context.patterns for code style
### Step 3: Report Completion
When done, run:
\`\`\`bash
ccw issue complete <queue_id> --result '{"files_modified": ["path1", "path2"], "summary": "What was done"}'
\`\`\`
If task fails, run:
\`\`\`bash
ccw issue fail <queue_id> --reason "Why it failed"
\`\`\`
### Rules
- NEVER read task files directly - use ccw issue next
- Execute the FULL task before marking complete
- Do NOT loop - execute ONE task only
- Report accurate files_modified in result
### Start Now
Begin by running: ccw issue next
`;
// Execute codex
const executor = queueItem.assigned_executor || flags.executor || 'codex';
if (executor === 'codex') {
Bash(
`ccw cli -p "${escapePrompt(codexPrompt)}" --tool codex --mode write --id exec-${queueItem.queue_id}`,
timeout=3600000 // 1 hour timeout
);
} else if (executor === 'gemini') {
Bash(
`ccw cli -p "${escapePrompt(codexPrompt)}" --tool gemini --mode write --id exec-${queueItem.queue_id}`,
timeout=1800000 // 30 min timeout
);
} else {
// Agent execution
Task(
subagent_type="code-developer",
run_in_background=false,
description=`Implement: ${task.id}`,
prompt=prompt
)
description=`Execute ${queueItem.queue_id}`,
prompt=codexPrompt
);
}
// Track modified files
const modifiedFiles = extractModifiedFiles(result)
updatePhaseResult(task.id, 'implement', {
status: 'completed',
files_modified: modifiedFiles,
timestamp: new Date().toISOString()
})
return { modifiedFiles, output: result }
}
```
### Phase 3: TEST
// Execute with parallelism
const parallelLimit = flags.parallel || 1;
```javascript
async function executePhase_Test(task, implementResult) {
console.log('### Phase 3: TEST')
for (let i = 0; i < readyTasks.length; i += parallelLimit) {
const batch = readyTasks.slice(i, i + parallelLimit);
updateTaskStatus(task.id, 'in_progress', 'test')
console.log(`\n### Executing Batch ${Math.floor(i / parallelLimit) + 1}`);
console.log(batch.map(t => `- ${t.queue_id}: ${t.issue_id}:${t.task_id}`).join('\n'));
// Determine test command based on project
const testCommand = detectTestCommand(task.file_context)
// e.g., 'npm test', 'pytest', 'go test', etc.
// Run tests
const testResult = Bash(testCommand, timeout=300000)
const passed = testResult.exitCode === 0
// Verify delivery criteria
let criteriaVerified = passed
if (passed) {
for (const criterion of task.delivery_criteria) {
const verified = await verifyCriterion(criterion, implementResult)
if (!verified) {
criteriaVerified = false
console.log(`⚠ Criterion not met: ${criterion}`)
}
if (parallelLimit === 1) {
// Sequential execution
for (const task of batch) {
updateTodo(task.queue_id, 'in_progress');
await executeTask(task);
updateTodo(task.queue_id, 'completed');
}
} else {
// Parallel execution - launch all at once
const executions = batch.map(task => {
updateTodo(task.queue_id, 'in_progress');
return executeTask(task);
});
await Promise.all(executions);
batch.forEach(task => updateTodo(task.queue_id, 'completed'));
}
updatePhaseResult(task.id, 'test', {
status: passed && criteriaVerified ? 'passed' : 'failed',
test_results: testResult.output.substring(0, 1000),
retry_count: implementResult.retryCount || 0,
timestamp: new Date().toISOString()
})
return { passed: passed && criteriaVerified, output: testResult }
}
```
### Phase 4: OPTIMIZE
```javascript
async function executePhase_Optimize(task, implementResult) {
console.log('### Phase 4: OPTIMIZE')
updateTaskStatus(task.id, 'in_progress', 'optimize')
// Run linting/formatting
const lintResult = Bash('npm run lint:fix || true', timeout=60000)
// Quick code review
const reviewResult = await Task(
subagent_type="universal-executor",
run_in_background=false,
description=`Review: ${task.id}`,
prompt=`
Quick code review for task ${task.id}
## Modified Files
${implementResult.modifiedFiles.join('\n')}
## Check
1. Code follows project conventions
2. No obvious security issues
3. Error handling is appropriate
4. No dead code or console.logs
## Output
If issues found, apply fixes directly. Otherwise confirm OK.
`
)
updatePhaseResult(task.id, 'optimize', {
status: 'completed',
improvements: extractImprovements(reviewResult),
timestamp: new Date().toISOString()
})
return { lintResult, reviewResult }
}
```
### Phase 5: COMMIT
```javascript
async function executePhase_Commit(task, implementResult) {
console.log('### Phase 5: COMMIT')
updateTaskStatus(task.id, 'in_progress', 'commit')
// Stage modified files
for (const file of implementResult.modifiedFiles) {
Bash(`git add "${file}"`)
}
// Create commit message
const typePrefix = {
'feature': 'feat',
'bug': 'fix',
'refactor': 'refactor',
'test': 'test',
'chore': 'chore',
'docs': 'docs'
}[task.type] || 'feat'
const commitMessage = `${typePrefix}(${task.id}): ${task.title}
${task.description.substring(0, 200)}
Delivery Criteria:
${task.delivery_criteria.map(c => `- [x] ${c}`).join('\n')}
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>`
// Commit
const commitResult = Bash(`git commit -m "$(cat <<'EOF'
${commitMessage}
EOF
)"`)
// Get commit hash
const commitHash = Bash('git rev-parse HEAD').trim()
updatePhaseResult(task.id, 'commit', {
status: 'completed',
commit_hash: commitHash,
message: `${typePrefix}(${task.id}): ${task.title}`,
timestamp: new Date().toISOString()
})
console.log(`✓ Committed: ${commitHash.substring(0, 7)}`)
return { commitHash }
}
```
### State Management
```javascript
// Update task status in JSONL (append-style with compaction)
function updateTaskStatus(taskId, status, phase) {
const tasks = readJsonl(`${issueDir}/tasks.jsonl`)
const taskIndex = tasks.findIndex(t => t.id === taskId)
if (taskIndex >= 0) {
tasks[taskIndex].status = status
tasks[taskIndex].current_phase = phase
tasks[taskIndex].updated_at = new Date().toISOString()
// Rewrite JSONL (compact)
const jsonlContent = tasks.map(t => JSON.stringify(t)).join('\n')
Write(`${issueDir}/tasks.jsonl`, jsonlContent)
}
// Update state.json
const state = JSON.parse(Read(`${issueDir}/state.json`))
state.current_task = status === 'in_progress' ? taskId : null
state.completed_count = tasks.filter(t => t.status === 'completed').length
state.updated_at = new Date().toISOString()
Write(`${issueDir}/state.json`, JSON.stringify(state, null, 2))
}
// Update phase result
function updatePhaseResult(taskId, phase, result) {
const tasks = readJsonl(`${issueDir}/tasks.jsonl`)
const taskIndex = tasks.findIndex(t => t.id === taskId)
if (taskIndex >= 0) {
tasks[taskIndex].phase_results = tasks[taskIndex].phase_results || {}
tasks[taskIndex].phase_results[phase] = result
const jsonlContent = tasks.map(t => JSON.stringify(t)).join('\n')
Write(`${issueDir}/tasks.jsonl`, jsonlContent)
// Refresh ready tasks after batch
const newReady = getReadyTasks();
if (newReady.length > 0) {
console.log(`${newReady.length} more tasks now ready`);
}
}
```
## Progressive Loading
### Codex Task Fetch Response
For memory efficiency with large task lists:
When codex calls `ccw issue next`, it receives:
```json
{
"queue_id": "Q-001",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task": {
"id": "T1",
"title": "Create auth middleware",
"scope": "src/middleware/",
"action": "Create",
"description": "Create JWT validation middleware",
"modification_points": [
{ "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
],
"implementation": [
"Create auth.ts file in src/middleware/",
"Implement JWT token validation using jsonwebtoken",
"Add error handling for invalid/expired tokens",
"Export middleware function"
],
"acceptance": [
"Middleware validates JWT tokens successfully",
"Returns 401 for invalid or missing tokens",
"Passes token payload to request context"
]
},
"context": {
"relevant_files": ["src/config/auth.ts", "src/types/auth.d.ts"],
"patterns": "Follow existing middleware pattern in src/middleware/logger.ts"
},
"execution_hints": {
"executor": "codex",
"estimated_minutes": 30
}
}
```
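Before following the implementation steps, a codex wrapper could sanity-check this response shape; a hedged sketch (the checks mirror the fields shown above, not a documented contract):

```javascript
// Validate the minimal shape of a `ccw issue next` task response.
function validateTaskResponse(res) {
  const errors = []
  if (!res.queue_id) errors.push('missing queue_id')
  if (!res.task || !Array.isArray(res.task.implementation)) {
    errors.push('missing task.implementation steps')
  }
  if (!res.task || !Array.isArray(res.task.acceptance)) {
    errors.push('missing task.acceptance criteria')
  }
  return { valid: errors.length === 0, errors }
}
```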
### Phase 4: Completion Summary
```javascript
// Stream JSONL and only load ready tasks
function* getReadyTasksStream(issueDir, completedIds) {
const filePath = `${issueDir}/tasks.jsonl`
const lines = readFileLines(filePath)
// Reload queue for final status
const finalQueue = JSON.parse(Read(queuePath));
for (const line of lines) {
if (!line.trim()) continue
const task = JSON.parse(line)
const summary = {
completed: finalQueue.queue.filter(q => q.status === 'completed').length,
failed: finalQueue.queue.filter(q => q.status === 'failed').length,
pending: finalQueue.queue.filter(q => q.status === 'pending').length,
total: finalQueue.queue.length
};
if (task.status === 'pending' &&
task.depends_on.every(dep => completedIds.has(dep))) {
yield task
console.log(`
## Execution Complete
**Completed**: ${summary.completed}/${summary.total}
**Failed**: ${summary.failed}
**Pending**: ${summary.pending}
### Task Results
${finalQueue.queue.map(q => {
const icon = q.status === 'completed' ? '✓' :
q.status === 'failed' ? '✗' :
q.status === 'executing' ? '⟳' : '○';
return `${icon} ${q.queue_id} [${q.issue_id}:${q.task_id}] - ${q.status}`;
}).join('\n')}
`);
// Update issue statuses in issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
const issueIds = [...new Set(finalQueue.queue.map(q => q.issue_id))];
for (const issueId of issueIds) {
const issueTasks = finalQueue.queue.filter(q => q.issue_id === issueId);
if (issueTasks.every(q => q.status === 'completed')) {
console.log(`\n✓ Issue ${issueId} fully completed!`);
// Update issue status
const issueIndex = allIssues.findIndex(i => i.id === issueId);
if (issueIndex !== -1) {
allIssues[issueIndex].status = 'completed';
allIssues[issueIndex].completed_at = new Date().toISOString();
allIssues[issueIndex].updated_at = new Date().toISOString();
}
}
}
// Usage: Only load what's needed
const iterator = getReadyTasksStream(issueDir, completedIds)
const batch = []
for (let i = 0; i < batchSize; i++) {
const { value, done } = iterator.next()
if (done) break
batch.push(value)
// Write updated issues.jsonl
Write(issuesPath, allIssues.map(i => JSON.stringify(i)).join('\n'));
if (summary.pending > 0) {
console.log(`
### Continue Execution
Run \`/issue:execute\` again to execute remaining tasks.
`);
}
```
## Pause Criteria Evaluation
## Dry Run Mode
```javascript
async function evaluatePauseCriterion(criterion, task) {
// Pattern matching for common pause conditions
const patterns = [
{ match: /unclear|undefined|missing/i, action: 'ask_user' },
{ match: /security review/i, action: 'require_approval' },
{ match: /migration required/i, action: 'check_migration' },
{ match: /external (api|service)/i, action: 'verify_external' }
]
if (flags.dryRun) {
console.log(`
## Dry Run - Would Execute
for (const pattern of patterns) {
if (pattern.match.test(criterion)) {
// Check if condition is resolved
const resolved = await checkCondition(pattern.action, criterion, task)
if (!resolved) return true // Pause
}
}
${readyTasks.map((t, i) => `
${i + 1}. ${t.queue_id}
Issue: ${t.issue_id}
Task: ${t.task_id}
Executor: ${t.assigned_executor}
Group: ${t.execution_group}
`).join('')}
return false // Don't pause
No changes made. Remove --dry-run to execute.
`);
return;
}
```
@@ -554,38 +386,32 @@ async function evaluatePauseCriterion(criterion, task) {
| Error | Resolution |
|-------|------------|
| Task not found | List available tasks, suggest correct ID |
| Dependencies unsatisfied | Show blocking tasks, suggest running those first |
| Test failure (3x) | Mark failed, save state, suggest manual intervention |
| Pause triggered | Save state, display pause reason, await user action |
| Commit conflict | Stash changes, report conflict, await resolution |
| Queue not found | Display message, suggest /issue:queue |
| No ready tasks | Check dependencies, show blocked tasks |
| Codex timeout | Mark as failed, allow retry |
| ccw issue next empty | All tasks done or blocked |
| Task execution failure | Marked via ccw issue fail |
## Output
## Endpoint Contract
```
## Execution Complete
### `ccw issue next`
- Returns next ready task as JSON
- Marks task as 'executing'
- Returns `{ status: 'empty' }` when no tasks
**Issue**: GH-123
**Tasks Executed**: 3/5
**Completed**: 3
**Failed**: 0
**Pending**: 2 (dependencies not met)
### `ccw issue complete <queue-id>`
- Marks task as 'completed'
- Updates queue.json
- Checks if issue is fully complete
### Task Status
| ID | Title | Status | Phase | Commit |
|----|-------|--------|-------|--------|
| TASK-001 | Setup auth middleware | ✓ | done | a1b2c3d |
| TASK-002 | Protect API routes | ✓ | done | e4f5g6h |
| TASK-003 | Add login endpoint | ✓ | done | i7j8k9l |
| TASK-004 | Add logout endpoint | ⏳ | pending | - |
| TASK-005 | Integration tests | ⏳ | pending | - |
### Next Steps
Run `/issue:execute GH-123` to continue with remaining tasks.
```
### `ccw issue fail <queue-id>`
- Marks task as 'failed'
- Records failure reason
- Allows retry via /issue:execute
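The contract above implies a small dispatch step in the orchestrator when it inspects a `ccw issue next` response; a sketch, assuming the JSON shapes shown earlier in this document:

```javascript
// Decide what to do with a parsed `ccw issue next` response, per the contract above.
function dispatchNext(res) {
  if (res.status === 'empty') {
    return { action: 'stop', reason: 'all tasks done or blocked' }
  }
  if (!res.queue_id) {
    return { action: 'stop', reason: 'malformed response' }
  }
  // Normal case: the task is now marked 'executing' server-side
  return { action: 'execute', queueId: res.queue_id }
}
```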
## Related Commands
- `/issue:plan` - Create issue plan with JSONL tasks
- `ccw issue status` - Check issue execution status
- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue list` - View queue status
- `ccw issue retry` - Retry failed tasks


@@ -1,7 +1,7 @@
---
name: plan
description: Plan issue resolution with JSONL task generation, delivery/pause criteria, and dependency graph
argument-hint: "\"issue description\"|github-url|file.md"
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] [--batch-size 3]"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---
@@ -9,339 +9,317 @@ allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(
## Overview
Generate a JSONL-based task plan from a GitHub issue or description. Each task includes delivery criteria, pause criteria, and dependency relationships. The plan is designed for progressive execution with the `/issue:execute` command.
Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow. The agent handles ACE semantic search, solution generation, and task breakdown.
**Core capabilities:**
- Parse issue from URL, text description, or markdown file
- Analyze codebase context for accurate task breakdown
- Generate JSONL task file with DAG (Directed Acyclic Graph) dependencies
- Define clear delivery criteria (what marks a task complete)
- Define pause criteria (conditions to halt execution)
- Interactive confirmation before finalizing
- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and acceptance criteria
- Automatic solution registration and binding
## Storage Structure (Flat JSONL)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queue.json # Execution queue
└── solutions/
├── {issue-id}.jsonl # Solutions for issue (one per line)
└── ...
```
## Usage
```bash
/issue:plan [FLAGS] <INPUT>
/issue:plan <issue-id>[,<issue-id>,...] [FLAGS]
# Input Formats
<issue-url> GitHub issue URL (e.g., https://github.com/owner/repo/issues/123)
<description> Text description of the issue
<file.md> Markdown file with issue details
# Examples
/issue:plan GH-123 # Single issue
/issue:plan GH-123,GH-124,GH-125 # Batch (up to 3)
/issue:plan --all-pending # All pending issues
# Flags
-e, --explore Force code exploration phase
--executor <type> Default executor: agent|codex|gemini|auto (default: auto)
--batch-size <n> Max issues per agent batch (default: 3)
```
## Execution Process
```
Phase 1: Input Parsing & Context
├─ Parse input (URL → fetch issue, text → use directly, file → read content)
├─ Extract: title, description, labels, acceptance criteria
└─ Store as issueContext
Phase 1: Issue Loading
├─ Parse input (single, comma-separated, or --all-pending)
├─ Load issues from .workflow/issues/issues.jsonl
├─ Validate issues exist (create if needed)
└─ Group into batches (max 3 per batch)
Phase 2: Exploration (if needed)
├─ Complexity assessment (Low/Medium/High)
├─ Launch cli-explore-agent for codebase understanding
└─ Identify: relevant files, patterns, integration points
Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
├─ Agent performs:
│ ├─ ACE semantic search for each issue
│ ├─ Codebase exploration (files, patterns, dependencies)
│ ├─ Solution generation with task breakdown
│ └─ Conflict detection across issues
└─ Output: solution JSON per issue
Phase 3: Task Breakdown
├─ Agent generates JSONL task list
├─ Each task includes:
│   ├─ delivery_criteria (completion checklist)
│ ├─ pause_criteria (halt conditions)
│ └─ depends_on (dependency graph)
└─ Validate DAG (no circular dependencies)
Phase 3: Solution Registration & Binding
├─ Append solutions to solutions/{issue-id}.jsonl
├─ Single solution per issue → auto-bind
├─ Multiple candidates → AskUserQuestion to select
└─ Update issues.jsonl with bound_solution_id
Phase 4: User Confirmation
├─ Display task summary table
├─ Show dependency graph
└─ AskUserQuestion: Approve / Refine / Cancel
Phase 5: Persistence
├─ Write tasks.jsonl to .workflow/issues/{issue-id}/
├─ Initialize state.json for status tracking
└─ Return summary and next steps
Phase 4: Summary
├─ Display bound solutions
├─ Show task counts per issue
└─ Display next steps (/issue:queue)
```
## Implementation
### Phase 1: Input Parsing
### Phase 1: Issue Loading
```javascript
// Helper: Get UTC+8 ISO string (note: shifts the clock by 8h but keeps the 'Z' suffix)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse input
const issueIds = userInput.includes(',')
? userInput.split(',').map(s => s.trim())
: [userInput.trim()];
// Parse input type
function parseInput(input) {
if (input.startsWith('https://github.com/')) {
const match = input.match(/github\.com\/(.+?)\/(.+?)\/issues\/(\d+)/)
if (match) {
return { type: 'github', owner: match[1], repo: match[2], number: match[3] }
}
// Read issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
// Load and validate issues
const issues = [];
for (const id of issueIds) {
let issue = allIssues.find(i => i.id === id);
if (!issue) {
console.log(`Issue ${id} not found. Creating...`);
issue = {
id,
title: `Issue ${id}`,
status: 'registered',
priority: 3,
context: '',
created_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
// Append to issues.jsonl
Bash(`echo '${JSON.stringify(issue)}' >> "${issuesPath}"`);
}
if (input.endsWith('.md') && fileExists(input)) {
return { type: 'file', path: input }
}
return { type: 'text', content: input }
issues.push(issue);
}
// Generate issue ID
const inputType = parseInput(userInput)
let issueId, issueTitle, issueContent
if (inputType.type === 'github') {
// Fetch via gh CLI
const issueData = Bash(`gh issue view ${inputType.number} --repo ${inputType.owner}/${inputType.repo} --json title,body,labels`)
const parsed = JSON.parse(issueData)
issueId = `GH-${inputType.number}`
issueTitle = parsed.title
issueContent = parsed.body
} else if (inputType.type === 'file') {
issueContent = Read(inputType.path)
issueId = `FILE-${Date.now()}`
issueTitle = extractTitle(issueContent) // First # heading
} else {
issueContent = inputType.content
issueId = `TEXT-${Date.now()}`
issueTitle = issueContent.substring(0, 50)
// Group into batches
const batchSize = flags.batchSize || 3;
const batches = [];
for (let i = 0; i < issues.length; i += batchSize) {
batches.push(issues.slice(i, i + batchSize));
}
// Create issue directory
const issueDir = `.workflow/issues/${issueId}`
Bash(`mkdir -p ${issueDir}`)
// Save issue context
Write(`${issueDir}/context.md`, `# ${issueTitle}\n\n${issueContent}`)
TodoWrite({
todos: batches.flatMap((batch, i) => [
{ content: `Plan batch ${i+1}`, status: 'pending', activeForm: `Planning batch ${i+1}` }
])
});
```
### Phase 2: Unified Explore + Plan (issue-plan-agent)
```javascript
for (const [batchIndex, batch] of batches.entries()) {
  updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');

  // Build issue prompt for agent
  const issuePrompt = `
## Issues to Plan
${batch.map((issue, i) => `
### Issue ${i + 1}: ${issue.id}
**Title**: ${issue.title}
**Context**: ${issue.context || 'No context provided'}
`).join('\n')}

## Requirements
1. Use ACE semantic search (mcp__ace-tool__search_context) for exploration
2. Generate complete solution with task breakdown
3. Each task must have:
   - implementation steps (2-7 steps)
   - acceptance criteria (1-4 testable criteria)
   - modification_points (exact file locations)
   - depends_on (task dependencies)
4. Detect file conflicts if multiple issues
`;

  // Launch issue-plan-agent (combines explore + plan)
  const result = Task(
    subagent_type="issue-plan-agent",
    run_in_background=false,
    description=`Explore & plan ${batch.length} issues`,
    prompt=issuePrompt
  );

  // Parse agent output
  const agentOutput = JSON.parse(result);

  // Register solutions for each issue (append to solutions/{issue-id}.jsonl)
  for (const item of agentOutput.solutions) {
    const solutionPath = `.workflow/issues/solutions/${item.issue_id}.jsonl`;

    // Ensure solutions directory exists
    Bash(`mkdir -p .workflow/issues/solutions`);

    // Append solution as a new line
    Bash(`echo '${JSON.stringify(item.solution)}' >> "${solutionPath}"`);
  }

  // Handle conflicts if any
  if (agentOutput.conflicts?.length > 0) {
    console.log(`\n⚠ File conflicts detected:`);
    agentOutput.conflicts.forEach(c => {
      console.log(`  ${c.file}: ${c.issues.join(', ')} → suggested: ${c.suggested_order.join(' → ')}`);
    });
  }

  updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
```
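The batch grouping this loop consumes (built in Phase 1) is a small pure helper worth isolating; a sketch, with the default size of 3 matching the batching behavior described above:

```javascript
// Group items into fixed-size batches (default 3, matching the 1-3 issues per invocation rule)
function toBatches(items, size = 3) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```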
### Phase 3: Solution Binding
```javascript
// Re-read issues.jsonl
let allIssuesUpdated = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
for (const issue of issues) {
const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
if (solutions.length === 0) {
console.log(`⚠ No solutions for ${issue.id}`);
continue;
}
let selectedSolId;
if (solutions.length === 1) {
// Auto-bind single solution
selectedSolId = solutions[0].id;
console.log(`✓ Auto-bound ${selectedSolId} to ${issue.id} (${solutions[0].tasks?.length || 0} tasks)`);
} else {
// Multiple solutions - ask user
const answer = AskUserQuestion({
questions: [{
question: `Select solution for ${issue.id}:`,
header: issue.id,
multiSelect: false,
options: solutions.map(s => ({
label: `${s.id}: ${s.description || 'Solution'}`,
description: `${s.tasks?.length || 0} tasks`
}))
}]
});
selectedSolId = extractSelectedSolutionId(answer);
console.log(`✓ Bound ${selectedSolId} to ${issue.id}`);
}
// Update issue in allIssuesUpdated
const issueIndex = allIssuesUpdated.findIndex(i => i.id === issue.id);
if (issueIndex !== -1) {
allIssuesUpdated[issueIndex].bound_solution_id = selectedSolId;
allIssuesUpdated[issueIndex].status = 'planned';
allIssuesUpdated[issueIndex].planned_at = new Date().toISOString();
allIssuesUpdated[issueIndex].updated_at = new Date().toISOString();
}
// Mark solution as bound in solutions file
const updatedSolutions = solutions.map(s => ({
...s,
is_bound: s.id === selectedSolId,
bound_at: s.id === selectedSolId ? new Date().toISOString() : s.bound_at
}));
Write(solPath, updatedSolutions.map(s => JSON.stringify(s)).join('\n'));
}
// Write updated issues.jsonl
Write(issuesPath, allIssuesUpdated.map(i => JSON.stringify(i)).join('\n'));
```
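The `is_bound` rewrite in the binding step is a pure transformation; extracting it clarifies that only the selected solution gains a fresh `bound_at` while the others keep theirs (a sketch):

```javascript
// Flag the selected solution as bound; leave other candidates untouched
function markBound(solutions, boundId, now) {
  return solutions.map(s => ({
    ...s,
    is_bound: s.id === boundId,
    bound_at: s.id === boundId ? now : s.bound_at
  }));
}
```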
### Phase 4: Summary
```javascript
console.log(`
## Planning Complete

**Issues Planned**: ${issues.length}

### Bound Solutions
${issues.map(i => {
  const issue = allIssuesUpdated.find(a => a.id === i.id);
  return issue?.bound_solution_id
    ? `${i.id}: ${issue.bound_solution_id}`
    : `${i.id}: No solution bound`;
}).join('\n')}

### Next Steps
1. Review: \`ccw issue status <issue-id>\`
2. Form queue: \`/issue:queue\`
3. Execute: \`/issue:execute\`
`);
```
## Solution Format
Each solution line in `solutions/{issue-id}.jsonl`:
```json
{
  "id": "SOL-20251226-001",
  "description": "Direct Implementation",
  "tasks": [
    {
      "id": "T1",
      "title": "Create auth middleware",
      "scope": "src/middleware/",
      "action": "Create",
      "description": "Create JWT validation middleware",
      "modification_points": [
        { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
      ],
      "implementation": [
        "Create auth.ts file",
        "Implement JWT validation",
        "Add error handling",
        "Export middleware"
      ],
      "acceptance": [
        "Middleware validates JWT tokens",
        "Returns 401 for invalid tokens"
      ],
      "depends_on": [],
      "estimated_minutes": 30
    }
  ],
  "exploration_context": {
    "relevant_files": ["src/config/auth.ts"],
    "patterns": "Follow existing middleware pattern"
  },
  "is_bound": true,
  "created_at": "2025-12-26T10:00:00Z",
  "bound_at": "2025-12-26T10:05:00Z"
}
```
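A registered solution line can be sanity-checked before binding. The sketch below mirrors the required fields (`id`, `tasks`, `created_at`) and the `T<number>` task-id pattern from the solutions schema; the helper name is illustrative:

```javascript
// Structural check for one line of solutions/{issue-id}.jsonl
function validateSolutionLine(sol) {
  const errors = [];
  if (!/^SOL-/.test(sol.id || '')) errors.push('id must start with SOL-');
  const tasks = Array.isArray(sol.tasks) ? sol.tasks : [];
  if (tasks.length === 0) errors.push('tasks must be a non-empty array');
  const ids = new Set(tasks.map(t => t.id));
  for (const t of tasks) {
    if (!/^T[0-9]+$/.test(t.id || '')) errors.push(`task id "${t.id}" must match T<number>`);
    for (const dep of t.depends_on || []) {
      // Dependencies must reference tasks within the same solution
      if (!ids.has(dep)) errors.push(`task ${t.id} depends on unknown task ${dep}`);
    }
  }
  if (!sol.created_at) errors.push('created_at is required');
  return errors;
}
```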
## Error Handling
| Error | Resolution |
|-------|------------|
| Invalid GitHub URL | Display correct format, ask for valid URL |
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |
## Agent Integration
The command uses `issue-plan-agent` which:
1. Performs ACE semantic search per issue
2. Identifies modification points and patterns
3. Generates task breakdown with dependencies
4. Detects cross-issue file conflicts
5. Outputs solution JSON for registration
See `.claude/agents/issue-plan-agent.md` for agent specification.
## Related Commands
- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue list` - List all issues
- `ccw issue status` - View issue and solution details
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---
# Issue Queue Command (/issue:queue)
## Overview
Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves conflicts, determines dependencies, and creates an ordered execution queue. The queue is global across all issues.
**Core capabilities:**
- **Agent-driven**: issue-queue-agent handles all ordering logic
- ACE semantic search for relationship discovery
- Dependency DAG construction and cycle detection
- File conflict detection and resolution
- Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
- Output global queue.json
## Storage Structure (Flat JSONL)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queue.json # Execution queue (output)
└── solutions/
├── {issue-id}.jsonl # Solutions for issue
└── ...
```
## Usage
```bash
/issue:queue [FLAGS]
# Examples
/issue:queue # Form queue from all bound solutions
/issue:queue --rebuild # Rebuild queue (clear and regenerate)
/issue:queue --issue GH-123 # Add only specific issue to queue
# Flags
--rebuild Clear existing queue and regenerate
--issue <id> Add only specific issue's tasks
```
## Execution Process
```
Phase 1: Solution Loading
├─ Load issues.jsonl
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
└─ Extract tasks from bound solutions
Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all tasks
├─ Agent performs:
│ ├─ Build dependency DAG from depends_on
│ ├─ Detect circular dependencies
│ ├─ Identify file modification conflicts
│ ├─ Resolve conflicts using ordering rules
│ ├─ Calculate semantic priority (0.0-1.0)
│ └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered tasks
Phase 5: Queue Output
├─ Write queue.json
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```
## Implementation
### Phase 1: Solution Loading
```javascript
// Load issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
// Filter issues with bound solutions
const plannedIssues = allIssues.filter(i =>
i.status === 'planned' && i.bound_solution_id
);
if (plannedIssues.length === 0) {
console.log('No issues with bound solutions found.');
console.log('Run /issue:plan first to create and bind solutions.');
return;
}
// Load all tasks from bound solutions
const allTasks = [];
for (const issue of plannedIssues) {
const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
// Find bound solution
const boundSol = solutions.find(s => s.id === issue.bound_solution_id);
if (!boundSol) {
console.log(`⚠ Bound solution ${issue.bound_solution_id} not found for ${issue.id}`);
continue;
}
for (const task of boundSol.tasks || []) {
allTasks.push({
issue_id: issue.id,
solution_id: issue.bound_solution_id,
task,
exploration_context: boundSol.exploration_context
});
}
}
console.log(`Loaded ${allTasks.length} tasks from ${plannedIssues.length} issues`);
```
### Phase 2-4: Agent-Driven Queue Formation
```javascript
// Launch issue-queue-agent to handle all ordering logic
const agentPrompt = `
## Tasks to Order
${JSON.stringify(allTasks, null, 2)}
## Project Root
${process.cwd()}
## Requirements
1. Build dependency DAG from depends_on fields
2. Detect circular dependencies (abort if found)
3. Identify file modification conflicts
4. Resolve conflicts using ordering rules:
- Create before Update/Implement
- Foundation scopes (config/types) before implementation
- Core logic before tests
5. Calculate semantic priority (0.0-1.0) for each task
6. Assign execution groups (parallel P* / sequential S*)
7. Output queue JSON
`;
const result = Task(
subagent_type="issue-queue-agent",
run_in_background=false,
description=`Order ${allTasks.length} tasks from ${plannedIssues.length} issues`,
prompt=agentPrompt
);
// Parse agent output
const agentOutput = JSON.parse(result);
if (!agentOutput.success) {
console.error(`Queue formation failed: ${agentOutput.error}`);
if (agentOutput.cycles) {
console.error('Circular dependencies:', agentOutput.cycles.join(', '));
}
return;
}
```
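Cycle detection in step 2 is typically done with Kahn's algorithm; a minimal sketch (the `ISSUE:TASK` keying and the `key`/`depends_on` shape are assumptions for illustration — the agent defines its own internal representation):

```javascript
// Returns a topological order of task keys, or null when a cycle exists
function topoOrder(tasks) {
  const indeg = new Map(tasks.map(t => [t.key, 0]));
  const out = new Map(tasks.map(t => [t.key, []]));
  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      if (!out.has(dep)) continue; // unknown refs skipped here; the agent warns instead
      out.get(dep).push(t.key);
      indeg.set(t.key, indeg.get(t.key) + 1);
    }
  }
  const queue = tasks.filter(t => indeg.get(t.key) === 0).map(t => t.key);
  const order = [];
  while (queue.length) {
    const k = queue.shift();
    order.push(k);
    for (const next of out.get(k)) {
      indeg.set(next, indeg.get(next) - 1);
      if (indeg.get(next) === 0) queue.push(next);
    }
  }
  return order.length === tasks.length ? order : null; // null → cycle
}
```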
### Phase 5: Queue Output & Summary
```javascript
const queueOutput = agentOutput.output;
// Write queue.json
Write('.workflow/issues/queue.json', JSON.stringify(queueOutput, null, 2));
// Update issue statuses in issues.jsonl
const updatedIssues = allIssues.map(issue => {
if (plannedIssues.find(p => p.id === issue.id)) {
return {
...issue,
status: 'queued',
queued_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
}
return issue;
});
Write(issuesPath, updatedIssues.map(i => JSON.stringify(i)).join('\n'));
// Display summary
console.log(`
## Queue Formed
**Total Tasks**: ${queueOutput.queue.length}
**Issues**: ${plannedIssues.length}
**Conflicts**: ${queueOutput.conflicts?.length || 0} (${queueOutput._metadata?.resolved_conflicts || 0} resolved)
### Execution Groups
${(queueOutput.execution_groups || []).map(g => {
const type = g.type === 'parallel' ? 'Parallel' : 'Sequential';
return `- ${g.id} (${type}): ${g.task_count} tasks`;
}).join('\n')}
### Next Steps
1. Review queue: \`ccw issue queue list\`
2. Execute: \`/issue:execute\`
`);
```
## Queue Schema
Output `queue.json`:
```json
{
"queue": [
{
"queue_id": "Q-001",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task_id": "T1",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.7,
"queued_at": "2025-12-26T10:00:00Z"
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"tasks": ["GH-123:T1", "GH-124:T2"],
"resolution": "sequential",
"resolution_order": ["GH-123:T1", "GH-124:T2"],
"rationale": "T1 creates file before T2 updates",
"resolved": true
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "task_count": 3, "tasks": ["GH-123:T1", "GH-124:T1", "GH-125:T1"] },
{ "id": "S2", "type": "sequential", "task_count": 2, "tasks": ["GH-123:T2", "GH-124:T2"] }
],
"_metadata": {
"version": "2.0",
"storage": "jsonl",
"total_tasks": 5,
"total_conflicts": 1,
"resolved_conflicts": 1,
"parallel_groups": 1,
"sequential_groups": 1,
"timestamp": "2025-12-26T10:00:00Z",
"source": "issue-queue-agent"
}
}
```
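Consumers of `queue.json` can derive the runnable set directly from `status` and `depends_on`; a sketch of that readiness check (helper name is illustrative):

```javascript
// A queue item is ready when it is pending and all depends_on queue_ids are completed
function readyItems(queue) {
  const done = new Set(
    queue.filter(q => q.status === 'completed').map(q => q.queue_id)
  );
  return queue.filter(q =>
    q.status === 'pending' &&
    (q.depends_on || []).every(d => done.has(d))
  );
}
```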
## Semantic Priority Rules
| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Config/Types scope | +0.1 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
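Applied mechanically, the table reads as a small scoring function. The sketch below assumes a 0.5 base score and clamps to the documented 0.0-1.0 range; the base value and the scope regex are assumptions for illustration — the agent spec defines the real weighting:

```javascript
// Priority boosts from the table above
const ACTION_BOOST = {
  Create: 0.2, Configure: 0.15, Implement: 0.1,
  Refactor: -0.05, Test: -0.1, Delete: -0.15
};

function semanticPriority(task) {
  let p = 0.5 + (ACTION_BOOST[task.action] || 0);
  if (/(config|types)/i.test(task.scope || '')) p += 0.1; // foundation scopes first
  return Math.min(1, Math.max(0, p));
}
```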
## Error Handling
| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest /issue:plan |
| Circular dependency | List cycles, abort queue formation |
| Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn |
## Agent Integration
The command uses `issue-queue-agent` which:
1. Builds dependency DAG from task depends_on fields
2. Detects circular dependencies (aborts if found)
3. Identifies file modification conflicts across issues
4. Resolves conflicts using semantic ordering rules
5. Calculates priority (0.0-1.0) for each task
6. Assigns parallel/sequential execution groups
7. Outputs structured queue JSON
See `.claude/agents/issue-queue-agent.md` for agent specification.
## Related Commands
- `/issue:plan` - Plan issues and bind solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue queue list` - View current queue
> **Writing style guide**: [../specs/writing-style.md](../specs/writing-style.md)
## Execution Requirements
**Mandatory**: after all Phase 3 Analysis Agents have completed, the main orchestrator **must** invoke this Consolidation Agent.
**Trigger conditions**:
- All Phase 3 agents have returned results (status: completed/partial/failed)
- The `sections/section-*.md` files have been generated
**Input sources**:
- `agent_summaries`: the JSON returned by each Phase 3 agent (contains status, output_file, summary, cross_module_notes)
- `cross_module_notes`: the array of cross-module notes extracted from the agent returns
**Invocation timing**:
```javascript
// After Phase 3 completes, the main orchestrator runs:
const phase3Results = await runPhase3Agents(); // run all analysis agents in parallel
const agentSummaries = phase3Results.map(r => JSON.parse(r));
const crossNotes = agentSummaries.flatMap(s => s.cross_module_notes || []);
// The Phase 3.5 Consolidation Agent must then be invoked
await runPhase35Consolidation(agentSummaries, crossNotes);
```
## Core Responsibilities
1. **Cross-section synthesis**: generate the synthesis (report overview)
@@ -22,7 +45,9 @@ interface ConsolidationInput {
}
```
## Agent Invocation Code
The main orchestrator invokes the Consolidation Agent with:
```javascript
Task({
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issues JSONL Schema",
"description": "Schema for each line in issues.jsonl (flat storage)",
"type": "object",
"required": ["id", "title", "status", "created_at"],
"properties": {
"id": {
"type": "string",
"description": "Issue ID (e.g., GH-123, TEXT-xxx)"
},
"title": {
"type": "string"
},
"status": {
"type": "string",
"enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
"default": "registered"
},
"priority": {
"type": "integer",
"minimum": 1,
"maximum": 5,
"default": 3
},
"context": {
"type": "string",
"description": "Issue context/description (markdown)"
},
"bound_solution_id": {
"type": "string",
"description": "ID of the bound solution (null if none bound)"
},
"solution_count": {
"type": "integer",
"default": 0,
"description": "Number of candidate solutions in solutions/{id}.jsonl"
},
"source": {
"type": "string",
"enum": ["github", "text", "file"],
"description": "Source of the issue"
},
"source_url": {
"type": "string",
"description": "Original source URL (for GitHub issues)"
},
"labels": {
"type": "array",
"items": { "type": "string" },
"description": "Issue labels/tags"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"updated_at": {
"type": "string",
"format": "date-time"
},
"planned_at": {
"type": "string",
"format": "date-time"
},
"queued_at": {
"type": "string",
"format": "date-time"
},
"completed_at": {
"type": "string",
"format": "date-time"
}
}
}
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Execution Queue Schema",
"description": "Global execution queue for all issue tasks",
"type": "object",
"properties": {
"queue": {
"type": "array",
"description": "Ordered list of tasks to execute",
"items": {
"type": "object",
"required": ["queue_id", "issue_id", "solution_id", "task_id", "status"],
"properties": {
"queue_id": {
"type": "string",
"pattern": "^Q-[0-9]+$",
"description": "Unique queue item identifier"
},
"issue_id": {
"type": "string",
"description": "Source issue ID"
},
"solution_id": {
"type": "string",
"description": "Source solution ID"
},
"task_id": {
"type": "string",
"description": "Task ID within solution"
},
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"execution_order": {
"type": "integer",
"description": "Order in execution sequence"
},
"execution_group": {
"type": "string",
"description": "Parallel execution group ID (e.g., P1, S1)"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"description": "Queue IDs this task depends on"
},
"semantic_priority": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Semantic importance score (0.0-1.0)"
},
"assigned_executor": {
"type": "string",
"enum": ["codex", "gemini", "agent"]
},
"queued_at": {
"type": "string",
"format": "date-time"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"completed_at": {
"type": "string",
"format": "date-time"
},
"result": {
"type": "object",
"description": "Execution result",
"properties": {
"files_modified": { "type": "array", "items": { "type": "string" } },
"files_created": { "type": "array", "items": { "type": "string" } },
"summary": { "type": "string" },
"commit_hash": { "type": "string" }
}
},
"failure_reason": {
"type": "string"
}
}
}
},
"conflicts": {
"type": "array",
"description": "Detected conflicts between tasks",
"items": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Queue IDs involved in conflict"
},
"file": {
"type": "string",
"description": "Conflicting file path"
},
"resolution": {
"type": "string",
"enum": ["sequential", "merge", "manual"]
},
"resolution_order": {
"type": "array",
"items": { "type": "string" }
},
"resolved": {
"type": "boolean",
"default": false
}
}
}
},
"_metadata": {
"type": "object",
"properties": {
"version": { "type": "string", "default": "1.0" },
"total_items": { "type": "integer" },
"pending_count": { "type": "integer" },
"ready_count": { "type": "integer" },
"executing_count": { "type": "integer" },
"completed_count": { "type": "integer" },
"failed_count": { "type": "integer" },
"last_queue_formation": { "type": "string", "format": "date-time" },
"last_updated": { "type": "string", "format": "date-time" }
}
}
}
}
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Registry Schema",
"description": "Global registry of all issues and their solutions",
"type": "object",
"properties": {
"issues": {
"type": "array",
"description": "List of registered issues",
"items": {
"type": "object",
"required": ["id", "title", "status", "created_at"],
"properties": {
"id": {
"type": "string",
"description": "Issue ID (e.g., GH-123, TEXT-xxx)"
},
"title": {
"type": "string"
},
"status": {
"type": "string",
"enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
"default": "registered"
},
"priority": {
"type": "integer",
"minimum": 1,
"maximum": 5,
"default": 3
},
"solution_count": {
"type": "integer",
"default": 0,
"description": "Number of candidate solutions"
},
"bound_solution_id": {
"type": "string",
"description": "ID of the bound solution (null if none bound)"
},
"source": {
"type": "string",
"enum": ["github", "text", "file"],
"description": "Source of the issue"
},
"source_url": {
"type": "string",
"description": "Original source URL (for GitHub issues)"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"updated_at": {
"type": "string",
"format": "date-time"
},
"planned_at": {
"type": "string",
"format": "date-time"
},
"queued_at": {
"type": "string",
"format": "date-time"
},
"completed_at": {
"type": "string",
"format": "date-time"
}
}
}
},
"_metadata": {
"type": "object",
"properties": {
"version": { "type": "string", "default": "1.0" },
"total_issues": { "type": "integer" },
"by_status": {
"type": "object",
"properties": {
"registered": { "type": "integer" },
"planning": { "type": "integer" },
"planned": { "type": "integer" },
"queued": { "type": "integer" },
"executing": { "type": "integer" },
"completed": { "type": "integer" },
"failed": { "type": "integer" }
}
},
"last_updated": { "type": "string", "format": "date-time" }
}
}
}
}
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Solution Schema",
"description": "Schema for solution registered to an issue",
"type": "object",
"required": ["id", "issue_id", "tasks", "status", "created_at"],
"properties": {
"id": {
"type": "string",
"description": "Unique solution identifier",
"pattern": "^SOL-[0-9]+$"
},
"issue_id": {
"type": "string",
"description": "Parent issue ID"
},
"plan_session_id": {
"type": "string",
"description": "Planning session that created this solution"
},
"tasks": {
"type": "array",
"description": "Task breakdown for this solution",
"items": {
"type": "object",
"required": ["id", "title", "scope", "action", "acceptance"],
"properties": {
"id": {
"type": "string",
"pattern": "^T[0-9]+$"
},
"title": {
"type": "string",
"description": "Action verb + target"
},
"scope": {
"type": "string",
"description": "Module path or feature area"
},
"action": {
"type": "string",
"enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
},
"description": {
"type": "string",
"description": "1-2 sentences describing what to implement"
},
"modification_points": {
"type": "array",
"items": {
"type": "object",
"properties": {
"file": { "type": "string" },
"target": { "type": "string" },
"change": { "type": "string" }
}
}
},
"implementation": {
"type": "array",
"items": { "type": "string" },
"description": "Step-by-step implementation guide"
},
"acceptance": {
"type": "array",
"items": { "type": "string" },
"description": "Quantified completion criteria"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"default": [],
"description": "Task IDs this task depends on"
},
"estimated_minutes": {
"type": "integer",
"description": "Estimated time to complete"
},
"executor": {
"type": "string",
"enum": ["codex", "gemini", "agent", "auto"],
"default": "auto"
}
}
}
},
"exploration_context": {
"type": "object",
"description": "ACE exploration results",
"properties": {
"project_structure": { "type": "string" },
"relevant_files": {
"type": "array",
"items": { "type": "string" }
},
"patterns": { "type": "string" },
"integration_points": { "type": "string" }
}
},
"status": {
"type": "string",
"enum": ["draft", "candidate", "bound", "queued", "executing", "completed", "failed"],
"default": "draft"
},
"is_bound": {
"type": "boolean",
"default": false,
"description": "Whether this solution is bound to the issue"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"bound_at": {
"type": "string",
"format": "date-time",
"description": "When this solution was bound to the issue"
}
}
}
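As a quick illustration, a minimal solution document satisfying the schema above could look like this sketch (the IDs, file paths, and task content are hypothetical, not taken from a real issue):

```typescript
// Hypothetical solution object matching the schema above.
// All IDs and task details are illustrative placeholders.
const exampleSolution = {
  id: "SOL-001",                 // must match ^SOL-[0-9]+$
  issue_id: "GH-123",
  tasks: [
    {
      id: "T1",                  // must match ^T[0-9]+$
      title: "Add retry logic to fetchIssues",
      scope: "src/api",
      action: "Update",          // one of the enum values
      acceptance: ["fetchIssues retries up to 3 times on network error"],
      depends_on: [],
      executor: "auto",
    },
  ],
  status: "draft",
  created_at: new Date().toISOString(),
};

// Spot-check the required ID patterns from the schema.
console.log(/^SOL-[0-9]+$/.test(exampleSolution.id));                      // true
console.log(exampleSolution.tasks.every((t) => /^T[0-9]+$/.test(t.id)));   // true
```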


@@ -0,0 +1,125 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Solutions JSONL Schema",
"description": "Schema for each line in solutions/{issue-id}.jsonl",
"type": "object",
"required": ["id", "tasks", "created_at"],
"properties": {
"id": {
"type": "string",
"description": "Unique solution identifier",
"pattern": "^SOL-[0-9]+$"
},
"description": {
"type": "string",
"description": "Solution approach description"
},
"tasks": {
"type": "array",
"description": "Task breakdown for this solution",
"items": {
"type": "object",
"required": ["id", "title", "scope", "action", "acceptance"],
"properties": {
"id": {
"type": "string",
"pattern": "^T[0-9]+$"
},
"title": {
"type": "string",
"description": "Action verb + target"
},
"scope": {
"type": "string",
"description": "Module path or feature area"
},
"action": {
"type": "string",
"enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
},
"description": {
"type": "string",
"description": "1-2 sentences describing what to implement"
},
"modification_points": {
"type": "array",
"items": {
"type": "object",
"properties": {
"file": { "type": "string" },
"target": { "type": "string" },
"change": { "type": "string" }
}
}
},
"implementation": {
"type": "array",
"items": { "type": "string" },
"description": "Step-by-step implementation guide"
},
"acceptance": {
"type": "array",
"items": { "type": "string" },
"description": "Quantified completion criteria"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"default": [],
"description": "Task IDs this task depends on"
},
"estimated_minutes": {
"type": "integer",
"description": "Estimated time to complete"
},
"executor": {
"type": "string",
"enum": ["codex", "gemini", "agent", "auto"],
"default": "auto"
}
}
}
},
"exploration_context": {
"type": "object",
"description": "ACE exploration results",
"properties": {
"project_structure": { "type": "string" },
"relevant_files": {
"type": "array",
"items": { "type": "string" }
},
"patterns": { "type": "string" },
"integration_points": { "type": "string" }
}
},
"analysis": {
"type": "object",
"properties": {
"risk": { "type": "string", "enum": ["low", "medium", "high"] },
"impact": { "type": "string", "enum": ["low", "medium", "high"] },
"complexity": { "type": "string", "enum": ["low", "medium", "high"] }
}
},
"score": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Solution quality score (0.0-1.0)"
},
"is_bound": {
"type": "boolean",
"default": false,
"description": "Whether this solution is bound to the issue"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"bound_at": {
"type": "string",
"format": "date-time",
"description": "When this solution was bound to the issue"
}
}
}
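Since each JSONL line carries a `score` and `is_bound` flag, one natural consumer is a picker that selects the best unbound candidate to bind. A minimal sketch (field names follow the schema above; the sample data is invented):

```typescript
// Sketch: given parsed lines of solutions/{issue-id}.jsonl, pick the
// highest-scored candidate that is not already bound.
interface SolutionLine {
  id: string;
  score?: number;     // 0.0-1.0 quality score, per the schema
  is_bound?: boolean;
}

function pickBest(solutions: SolutionLine[]): SolutionLine | undefined {
  return solutions
    .filter((s) => !s.is_bound)
    .sort((a, b) => (b.score ?? 0) - (a.score ?? 0))[0];
}

const lines: SolutionLine[] = [
  { id: "SOL-001", score: 0.6 },
  { id: "SOL-002", score: 0.85 },
  { id: "SOL-003", score: 0.9, is_bound: true }, // already bound, skipped
];
console.log(pickBest(lines)?.id); // "SOL-002"
```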


@@ -278,6 +278,11 @@ export function run(argv: string[]): void {
.option('--format <fmt>', 'Output format: json, markdown')
.option('--json', 'Output as JSON')
.option('--force', 'Force operation')
// New options for solution/queue management
.option('--solution <path>', 'Solution JSON file path')
.option('--solution-id <id>', 'Solution ID')
.option('--result <json>', 'Execution result JSON')
.option('--reason <text>', 'Failure reason')
.action((subcommand, args, options) => issueCommand(subcommand, args, options));
program.parse(argv);

File diff suppressed because it is too large


@@ -0,0 +1,512 @@
// @ts-nocheck
/**
* Issue Routes Module (Optimized - Flat JSONL Storage)
*
* Storage Structure:
* .workflow/issues/
* ├── issues.jsonl # All issues (one per line)
* ├── queue.json # Execution queue
* └── solutions/
* ├── {issue-id}.jsonl # Solutions for issue (one per line)
* └── ...
*
* API Endpoints (9 total):
* - GET /api/issues - List all issues
* - POST /api/issues - Create new issue
* - GET /api/issues/:id - Get issue detail
* - PATCH /api/issues/:id - Update issue (includes binding logic)
* - DELETE /api/issues/:id - Delete issue
* - POST /api/issues/:id/solutions - Add solution
* - PATCH /api/issues/:id/tasks/:taskId - Update task
* - GET /api/queue - Get execution queue
* - POST /api/queue/reorder - Reorder queue items
*/
import type { IncomingMessage, ServerResponse } from 'http';
import { readFileSync, existsSync, writeFileSync, mkdirSync, unlinkSync } from 'fs';
import { join } from 'path';
export interface RouteContext {
pathname: string;
url: URL;
req: IncomingMessage;
res: ServerResponse;
initialPath: string;
handlePostRequest: (req: IncomingMessage, res: ServerResponse, handler: (body: unknown) => Promise<any>) => void;
broadcastToClients: (data: unknown) => void;
}
// ========== JSONL Helper Functions ==========
function readIssuesJsonl(issuesDir: string): any[] {
const issuesPath = join(issuesDir, 'issues.jsonl');
if (!existsSync(issuesPath)) return [];
try {
const content = readFileSync(issuesPath, 'utf8');
return content.split('\n').filter(line => line.trim()).map(line => JSON.parse(line));
} catch {
return [];
}
}
function writeIssuesJsonl(issuesDir: string, issues: any[]) {
if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
const issuesPath = join(issuesDir, 'issues.jsonl');
writeFileSync(issuesPath, issues.map(i => JSON.stringify(i)).join('\n'));
}
function readSolutionsJsonl(issuesDir: string, issueId: string): any[] {
const solutionsPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
if (!existsSync(solutionsPath)) return [];
try {
const content = readFileSync(solutionsPath, 'utf8');
return content.split('\n').filter(line => line.trim()).map(line => JSON.parse(line));
} catch {
return [];
}
}
function writeSolutionsJsonl(issuesDir: string, issueId: string, solutions: any[]) {
const solutionsDir = join(issuesDir, 'solutions');
if (!existsSync(solutionsDir)) mkdirSync(solutionsDir, { recursive: true });
writeFileSync(join(solutionsDir, `${issueId}.jsonl`), solutions.map(s => JSON.stringify(s)).join('\n'));
}
function readQueue(issuesDir: string) {
const queuePath = join(issuesDir, 'queue.json');
if (!existsSync(queuePath)) {
return { queue: [], conflicts: [], execution_groups: [], _metadata: { version: '1.0', total_tasks: 0 } };
}
try {
return JSON.parse(readFileSync(queuePath, 'utf8'));
} catch {
return { queue: [], conflicts: [], execution_groups: [], _metadata: { version: '1.0', total_tasks: 0 } };
}
}
function writeQueue(issuesDir: string, queue: any) {
if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
queue._metadata = { ...queue._metadata, last_updated: new Date().toISOString(), total_tasks: queue.queue?.length || 0 };
writeFileSync(join(issuesDir, 'queue.json'), JSON.stringify(queue, null, 2));
}
function getIssueDetail(issuesDir: string, issueId: string) {
const issues = readIssuesJsonl(issuesDir);
const issue = issues.find(i => i.id === issueId);
if (!issue) return null;
const solutions = readSolutionsJsonl(issuesDir, issueId);
let tasks: any[] = [];
if (issue.bound_solution_id) {
const boundSol = solutions.find(s => s.id === issue.bound_solution_id);
if (boundSol?.tasks) tasks = boundSol.tasks;
}
return { ...issue, solutions, tasks };
}
function enrichIssues(issues: any[], issuesDir: string) {
return issues.map(issue => ({
...issue,
solution_count: readSolutionsJsonl(issuesDir, issue.id).length
}));
}
function groupQueueByExecutionGroup(queue: any) {
const groups: { [key: string]: any[] } = {};
for (const item of queue.queue || []) {
const groupId = item.execution_group || 'ungrouped';
if (!groups[groupId]) groups[groupId] = [];
groups[groupId].push(item);
}
for (const groupId of Object.keys(groups)) {
groups[groupId].sort((a, b) => (a.execution_order || 0) - (b.execution_order || 0));
}
const executionGroups = Object.entries(groups).map(([id, items]) => ({
id,
type: id.startsWith('P') ? 'parallel' : id.startsWith('S') ? 'sequential' : 'unknown',
task_count: items.length,
tasks: items.map(i => i.queue_id)
})).sort((a, b) => {
const aFirst = groups[a.id]?.[0]?.execution_order || 0;
const bFirst = groups[b.id]?.[0]?.execution_order || 0;
return aFirst - bFirst;
});
return { ...queue, execution_groups: executionGroups, grouped_items: groups };
}
/**
* Bind solution to issue with proper side effects
*/
function bindSolutionToIssue(issuesDir: string, issueId: string, solutionId: string, issues: any[], issueIndex: number) {
const solutions = readSolutionsJsonl(issuesDir, issueId);
const solIndex = solutions.findIndex(s => s.id === solutionId);
if (solIndex === -1) return { error: `Solution ${solutionId} not found` };
// Unbind all, bind new
solutions.forEach(s => { s.is_bound = false; });
solutions[solIndex].is_bound = true;
solutions[solIndex].bound_at = new Date().toISOString();
writeSolutionsJsonl(issuesDir, issueId, solutions);
// Update issue
issues[issueIndex].bound_solution_id = solutionId;
issues[issueIndex].status = 'planned';
issues[issueIndex].planned_at = new Date().toISOString();
return { success: true, bound: solutionId };
}
// ========== Route Handler ==========
export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
const { pathname, url, req, res, initialPath, handlePostRequest } = ctx;
const projectPath = url.searchParams.get('path') || initialPath;
const issuesDir = join(projectPath, '.workflow', 'issues');
// ===== Queue Routes (top-level /api/queue) =====
// GET /api/queue - Get execution queue
if (pathname === '/api/queue' && req.method === 'GET') {
const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(queue));
return true;
}
// POST /api/queue/reorder - Reorder queue items
if (pathname === '/api/queue/reorder' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const { groupId, newOrder } = body;
if (!groupId || !Array.isArray(newOrder)) {
return { error: 'groupId and newOrder (array) required' };
}
const queue = readQueue(issuesDir);
const groupItems = queue.queue.filter((item: any) => item.execution_group === groupId);
const otherItems = queue.queue.filter((item: any) => item.execution_group !== groupId);
if (groupItems.length === 0) return { error: `No items in group ${groupId}` };
const groupQueueIds = new Set(groupItems.map((i: any) => i.queue_id));
if (groupQueueIds.size !== new Set(newOrder).size) {
return { error: 'newOrder must contain all group items' };
}
for (const id of newOrder) {
if (!groupQueueIds.has(id)) return { error: `Invalid queue_id: ${id}` };
}
const itemMap = new Map(groupItems.map((i: any) => [i.queue_id, i]));
const reorderedItems = newOrder.map((qid: string, idx: number) => ({ ...itemMap.get(qid), _idx: idx }));
const newQueue = [...otherItems, ...reorderedItems].sort((a, b) => {
const aGroup = parseInt(a.execution_group?.match(/\d+/)?.[0] || '999');
const bGroup = parseInt(b.execution_group?.match(/\d+/)?.[0] || '999');
if (aGroup !== bGroup) return aGroup - bGroup;
if (a.execution_group === b.execution_group) {
return (a._idx ?? a.execution_order ?? 999) - (b._idx ?? b.execution_order ?? 999);
}
return (a.execution_order || 0) - (b.execution_order || 0);
});
newQueue.forEach((item, idx) => { item.execution_order = idx + 1; delete item._idx; });
queue.queue = newQueue;
writeQueue(issuesDir, queue);
return { success: true, groupId, reordered: newOrder.length };
});
return true;
}
// Legacy: GET /api/issues/queue (backward compat)
if (pathname === '/api/issues/queue' && req.method === 'GET') {
const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(queue));
return true;
}
// ===== Issue Routes =====
// GET /api/issues - List all issues
if (pathname === '/api/issues' && req.method === 'GET') {
const issues = enrichIssues(readIssuesJsonl(issuesDir), issuesDir);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
issues,
_metadata: { version: '2.0', storage: 'jsonl', total_issues: issues.length, last_updated: new Date().toISOString() }
}));
return true;
}
// POST /api/issues - Create issue
if (pathname === '/api/issues' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
if (!body.id || !body.title) return { error: 'id and title required' };
const issues = readIssuesJsonl(issuesDir);
if (issues.find(i => i.id === body.id)) return { error: `Issue ${body.id} exists` };
const newIssue = {
id: body.id,
title: body.title,
status: body.status || 'registered',
priority: body.priority || 3,
context: body.context || '',
source: body.source || 'text',
source_url: body.source_url || null,
labels: body.labels || [],
created_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
issues.push(newIssue);
writeIssuesJsonl(issuesDir, issues);
return { success: true, issue: newIssue };
});
return true;
}
// GET /api/issues/:id - Get issue detail
const detailMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
if (detailMatch && req.method === 'GET') {
const issueId = decodeURIComponent(detailMatch[1]);
if (issueId === 'queue') return false;
const detail = getIssueDetail(issuesDir, issueId);
if (!detail) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Issue not found' }));
return true;
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(detail));
return true;
}
// PATCH /api/issues/:id - Update issue (with binding support)
const updateMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
if (updateMatch && req.method === 'PATCH') {
const issueId = decodeURIComponent(updateMatch[1]);
if (issueId === 'queue') return false;
handlePostRequest(req, res, async (body: any) => {
const issues = readIssuesJsonl(issuesDir);
const issueIndex = issues.findIndex(i => i.id === issueId);
if (issueIndex === -1) return { error: 'Issue not found' };
const updates: string[] = [];
// Handle binding if bound_solution_id provided
if (body.bound_solution_id !== undefined) {
if (body.bound_solution_id) {
const bindResult = bindSolutionToIssue(issuesDir, issueId, body.bound_solution_id, issues, issueIndex);
if (bindResult.error) return bindResult;
updates.push('bound_solution_id');
} else {
// Unbind
const solutions = readSolutionsJsonl(issuesDir, issueId);
solutions.forEach(s => { s.is_bound = false; });
writeSolutionsJsonl(issuesDir, issueId, solutions);
issues[issueIndex].bound_solution_id = null;
updates.push('bound_solution_id (unbound)');
}
}
// Update other fields
for (const field of ['title', 'context', 'status', 'priority', 'labels']) {
if (body[field] !== undefined) {
issues[issueIndex][field] = body[field];
updates.push(field);
}
}
issues[issueIndex].updated_at = new Date().toISOString();
writeIssuesJsonl(issuesDir, issues);
return { success: true, issueId, updated: updates };
});
return true;
}
// DELETE /api/issues/:id
const deleteMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
if (deleteMatch && req.method === 'DELETE') {
const issueId = decodeURIComponent(deleteMatch[1]);
if (issueId === 'queue') return false;
const issues = readIssuesJsonl(issuesDir);
const filtered = issues.filter(i => i.id !== issueId);
if (filtered.length === issues.length) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Issue not found' }));
return true;
}
writeIssuesJsonl(issuesDir, filtered);
// Clean up solutions file
const solPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
if (existsSync(solPath)) {
try { unlinkSync(solPath); } catch {}
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, issueId }));
return true;
}
// POST /api/issues/:id/solutions - Add solution
const addSolMatch = pathname.match(/^\/api\/issues\/([^/]+)\/solutions$/);
if (addSolMatch && req.method === 'POST') {
const issueId = decodeURIComponent(addSolMatch[1]);
handlePostRequest(req, res, async (body: any) => {
if (!body.id || !body.tasks) return { error: 'id and tasks required' };
const solutions = readSolutionsJsonl(issuesDir, issueId);
if (solutions.find(s => s.id === body.id)) return { error: `Solution ${body.id} exists` };
const newSolution = {
id: body.id,
description: body.description || '',
tasks: body.tasks,
exploration_context: body.exploration_context || {},
analysis: body.analysis || {},
score: body.score || 0,
is_bound: false,
created_at: new Date().toISOString()
};
solutions.push(newSolution);
writeSolutionsJsonl(issuesDir, issueId, solutions);
// Update issue solution_count
const issues = readIssuesJsonl(issuesDir);
const idx = issues.findIndex(i => i.id === issueId);
if (idx !== -1) {
issues[idx].solution_count = solutions.length;
issues[idx].updated_at = new Date().toISOString();
writeIssuesJsonl(issuesDir, issues);
}
return { success: true, solution: newSolution };
});
return true;
}
// PATCH /api/issues/:id/tasks/:taskId - Update task
const taskMatch = pathname.match(/^\/api\/issues\/([^/]+)\/tasks\/([^/]+)$/);
if (taskMatch && req.method === 'PATCH') {
const issueId = decodeURIComponent(taskMatch[1]);
const taskId = decodeURIComponent(taskMatch[2]);
handlePostRequest(req, res, async (body: any) => {
const issues = readIssuesJsonl(issuesDir);
const issue = issues.find(i => i.id === issueId);
if (!issue?.bound_solution_id) return { error: 'Issue or bound solution not found' };
const solutions = readSolutionsJsonl(issuesDir, issueId);
const solIdx = solutions.findIndex(s => s.id === issue.bound_solution_id);
if (solIdx === -1) return { error: 'Bound solution not found' };
const taskIdx = solutions[solIdx].tasks?.findIndex((t: any) => t.id === taskId);
if (taskIdx === -1 || taskIdx === undefined) return { error: 'Task not found' };
const updates: string[] = [];
for (const field of ['status', 'priority', 'result', 'error']) {
if (body[field] !== undefined) {
solutions[solIdx].tasks[taskIdx][field] = body[field];
updates.push(field);
}
}
solutions[solIdx].tasks[taskIdx].updated_at = new Date().toISOString();
writeSolutionsJsonl(issuesDir, issueId, solutions);
return { success: true, issueId, taskId, updated: updates };
});
return true;
}
// Legacy: PUT /api/issues/:id/task/:taskId (backward compat)
const legacyTaskMatch = pathname.match(/^\/api\/issues\/([^/]+)\/task\/([^/]+)$/);
if (legacyTaskMatch && req.method === 'PUT') {
const issueId = decodeURIComponent(legacyTaskMatch[1]);
const taskId = decodeURIComponent(legacyTaskMatch[2]);
handlePostRequest(req, res, async (body: any) => {
const issues = readIssuesJsonl(issuesDir);
const issue = issues.find(i => i.id === issueId);
if (!issue?.bound_solution_id) return { error: 'Issue or bound solution not found' };
const solutions = readSolutionsJsonl(issuesDir, issueId);
const solIdx = solutions.findIndex(s => s.id === issue.bound_solution_id);
if (solIdx === -1) return { error: 'Bound solution not found' };
const taskIdx = solutions[solIdx].tasks?.findIndex((t: any) => t.id === taskId);
if (taskIdx === -1 || taskIdx === undefined) return { error: 'Task not found' };
const updates: string[] = [];
if (body.status !== undefined) { solutions[solIdx].tasks[taskIdx].status = body.status; updates.push('status'); }
if (body.priority !== undefined) { solutions[solIdx].tasks[taskIdx].priority = body.priority; updates.push('priority'); }
solutions[solIdx].tasks[taskIdx].updated_at = new Date().toISOString();
writeSolutionsJsonl(issuesDir, issueId, solutions);
return { success: true, issueId, taskId, updated: updates };
});
return true;
}
// Legacy: PUT /api/issues/:id/bind/:solutionId (backward compat)
const legacyBindMatch = pathname.match(/^\/api\/issues\/([^/]+)\/bind\/([^/]+)$/);
if (legacyBindMatch && req.method === 'PUT') {
const issueId = decodeURIComponent(legacyBindMatch[1]);
const solutionId = decodeURIComponent(legacyBindMatch[2]);
const issues = readIssuesJsonl(issuesDir);
const issueIndex = issues.findIndex(i => i.id === issueId);
if (issueIndex === -1) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Issue not found' }));
return true;
}
const result = bindSolutionToIssue(issuesDir, issueId, solutionId, issues, issueIndex);
if (result.error) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(result));
return true;
}
issues[issueIndex].updated_at = new Date().toISOString();
writeIssuesJsonl(issuesDir, issues);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, issueId, solutionId }));
return true;
}
// Legacy: PUT /api/issues/:id (backward compat for PATCH)
const legacyUpdateMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
if (legacyUpdateMatch && req.method === 'PUT') {
const issueId = decodeURIComponent(legacyUpdateMatch[1]);
if (issueId === 'queue') return false;
handlePostRequest(req, res, async (body: any) => {
const issues = readIssuesJsonl(issuesDir);
const issueIndex = issues.findIndex(i => i.id === issueId);
if (issueIndex === -1) return { error: 'Issue not found' };
const updates: string[] = [];
for (const field of ['title', 'context', 'status', 'priority', 'bound_solution_id', 'labels']) {
if (body[field] !== undefined) {
issues[issueIndex][field] = body[field];
updates.push(field);
}
}
issues[issueIndex].updated_at = new Date().toISOString();
writeIssuesJsonl(issuesDir, issues);
return { success: true, issueId, updated: updates };
});
return true;
}
return false;
}
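From the client side, binding a solution goes through the PATCH endpoint defined above. A hedged sketch of such a call — the base URL is a placeholder; only the endpoint shape and the `bound_solution_id` field come from the routes file:

```typescript
// Sketch of a client for the issue API above. BASE is a placeholder.
const BASE = "http://localhost:3000";

function issueUrl(base: string, issueId: string): string {
  return `${base}/api/issues/${encodeURIComponent(issueId)}`;
}

async function bindSolution(issueId: string, solutionId: string) {
  // PATCH with bound_solution_id triggers the binding side effects
  // (unbind other solutions, set issue status to 'planned').
  const res = await fetch(issueUrl(BASE, issueId), {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ bound_solution_id: solutionId }),
  });
  return res.json();
}

console.log(issueUrl(BASE, "GH-123")); // http://localhost:3000/api/issues/GH-123
```

Sending `bound_solution_id: null` through the same endpoint unbinds, mirroring the handler's unbind branch.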


@@ -17,6 +17,7 @@ import { handleGraphRoutes } from './routes/graph-routes.js';
import { handleSystemRoutes } from './routes/system-routes.js';
import { handleFilesRoutes } from './routes/files-routes.js';
import { handleSkillsRoutes } from './routes/skills-routes.js';
import { handleIssueRoutes } from './routes/issue-routes.js';
import { handleRulesRoutes } from './routes/rules-routes.js';
import { handleSessionRoutes } from './routes/session-routes.js';
import { handleCcwRoutes } from './routes/ccw-routes.js';
@@ -86,7 +87,8 @@ const MODULE_CSS_FILES = [
'28-mcp-manager.css',
'29-help.css',
'30-core-memory.css',
-'31-api-settings.css'
+'31-api-settings.css',
+'32-issue-manager.css'
];
// Modular JS files in dependency order
@@ -142,6 +144,7 @@ const MODULE_FILES = [
'views/claude-manager.js',
'views/api-settings.js',
'views/help.js',
'views/issue-manager.js',
'main.js'
];
@@ -244,7 +247,7 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Ser
// CORS headers for API requests
res.setHeader('Access-Control-Allow-Origin', '*');
-res.setHeader('Access-Control-Allow-Methods', 'GET, POST, DELETE, OPTIONS');
+res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, PATCH, DELETE, OPTIONS');
res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
if (req.method === 'OPTIONS') {
@@ -340,6 +343,16 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Ser
if (await handleSkillsRoutes(routeContext)) return;
}
// Queue routes (/api/queue*) - top-level queue API
if (pathname.startsWith('/api/queue')) {
if (await handleIssueRoutes(routeContext)) return;
}
// Issue routes (/api/issues*)
if (pathname.startsWith('/api/issues')) {
if (await handleIssueRoutes(routeContext)) return;
}
// Rules routes (/api/rules*)
if (pathname.startsWith('/api/rules')) {
if (await handleRulesRoutes(routeContext)) return;


@@ -0,0 +1,979 @@
/* ==========================================
ISSUE MANAGER STYLES
========================================== */
/* Issue Manager Container */
.issue-manager {
width: 100%;
}
.issue-manager.loading {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
min-height: 300px;
color: hsl(var(--muted-foreground));
}
/* View Toggle (Issues/Queue) */
.issue-view-toggle {
display: inline-flex;
background: hsl(var(--muted));
border-radius: 0.5rem;
padding: 0.25rem;
gap: 0.25rem;
}
.issue-view-toggle button {
padding: 0.5rem 1rem;
border-radius: 0.375rem;
font-size: 0.875rem;
font-weight: 500;
color: hsl(var(--muted-foreground));
background: transparent;
border: none;
cursor: pointer;
transition: all 0.15s ease;
}
.issue-view-toggle button:hover {
color: hsl(var(--foreground));
}
.issue-view-toggle button.active {
background: hsl(var(--background));
color: hsl(var(--foreground));
box-shadow: 0 1px 2px hsl(var(--foreground) / 0.05);
}
/* Issues Grid */
.issues-section {
margin-bottom: 2rem;
width: 100%;
}
.issues-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(320px, 1fr));
gap: 1rem;
width: 100%;
}
.issues-empty-state {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
min-height: 160px;
}
/* Issue Card */
.issue-card {
position: relative;
transition: all 0.2s ease;
cursor: pointer;
}
.issue-card:hover {
border-color: hsl(var(--primary));
transform: translateY(-2px);
}
.issue-card-header {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: 0.5rem;
}
.issue-card-id {
font-family: var(--font-mono);
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
.issue-card-title {
font-weight: 600;
font-size: 0.9375rem;
line-height: 1.4;
margin-top: 0.25rem;
}
.issue-card-meta {
display: flex;
align-items: center;
gap: 0.5rem;
margin-top: 0.75rem;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
.issue-card-stats {
display: flex;
align-items: center;
gap: 1rem;
margin-top: 0.5rem;
}
.issue-card-stat {
display: flex;
align-items: center;
gap: 0.25rem;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
/* Issue Status Badges */
.issue-status {
display: inline-flex;
align-items: center;
padding: 0.125rem 0.5rem;
border-radius: 9999px;
font-size: 0.6875rem;
font-weight: 500;
text-transform: uppercase;
letter-spacing: 0.025em;
}
.issue-status.registered {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
.issue-status.planned {
background: hsl(217 91% 60% / 0.15);
color: hsl(217 91% 60%);
}
.issue-status.queued {
background: hsl(262 83% 58% / 0.15);
color: hsl(262 83% 58%);
}
.issue-status.executing {
background: hsl(45 93% 47% / 0.15);
color: hsl(45 93% 47%);
}
.issue-status.completed {
background: hsl(var(--success) / 0.15);
color: hsl(var(--success));
}
.issue-status.failed {
background: hsl(var(--destructive) / 0.15);
color: hsl(var(--destructive));
}
/* Priority Badges */
.issue-priority {
display: inline-flex;
align-items: center;
padding: 0.125rem 0.375rem;
border-radius: 0.25rem;
font-size: 0.6875rem;
font-weight: 500;
}
.issue-priority.critical {
background: hsl(0 84% 60% / 0.15);
color: hsl(0 84% 60%);
}
.issue-priority.high {
background: hsl(25 95% 53% / 0.15);
color: hsl(25 95% 53%);
}
.issue-priority.medium {
background: hsl(45 93% 47% / 0.15);
color: hsl(45 93% 47%);
}
.issue-priority.low {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
/* ==========================================
QUEUE VIEW STYLES
========================================== */
.queue-section {
width: 100%;
}
.queue-timeline {
display: flex;
flex-direction: column;
gap: 1.5rem;
}
.queue-empty-state {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
min-height: 200px;
text-align: center;
}
/* Execution Group */
.queue-group {
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
overflow: hidden;
}
.queue-group-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 0.75rem 1rem;
background: hsl(var(--muted) / 0.5);
border-bottom: 1px solid hsl(var(--border));
}
.queue-group-title {
display: flex;
align-items: center;
gap: 0.5rem;
font-weight: 600;
font-size: 0.875rem;
}
.queue-group-badge {
display: inline-flex;
align-items: center;
padding: 0.125rem 0.5rem;
border-radius: 9999px;
font-size: 0.6875rem;
font-weight: 500;
}
.queue-group-badge.parallel {
background: hsl(142 71% 45% / 0.15);
color: hsl(142 71% 45%);
}
.queue-group-badge.sequential {
background: hsl(262 83% 58% / 0.15);
color: hsl(262 83% 58%);
}
.queue-group-count {
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
.queue-group-items {
padding: 0.75rem;
display: flex;
flex-direction: column;
gap: 0.5rem;
min-height: 60px;
}
/* Parallel group items display horizontally */
.queue-group.parallel .queue-group-items {
flex-direction: row;
flex-wrap: wrap;
}
/* Queue Item */
.queue-item {
display: flex;
align-items: center;
gap: 0.75rem;
padding: 0.75rem 1rem;
background: hsl(var(--background));
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
transition: all 0.15s ease;
}
.queue-item:hover {
border-color: hsl(var(--primary));
box-shadow: 0 2px 4px hsl(var(--foreground) / 0.05);
}
.queue-item[draggable="true"] {
cursor: grab;
}
.queue-item[draggable="true"]:active {
cursor: grabbing;
}
.queue-item-drag-handle {
display: flex;
align-items: center;
color: hsl(var(--muted-foreground));
cursor: grab;
}
.queue-item-drag-handle:active {
cursor: grabbing;
}
.queue-item-order {
font-family: var(--font-mono);
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
min-width: 2.5rem;
}
.queue-item-content {
flex: 1;
min-width: 0;
}
.queue-item-id {
font-family: var(--font-mono);
font-size: 0.6875rem;
color: hsl(var(--muted-foreground));
}
.queue-item-title {
font-size: 0.875rem;
font-weight: 500;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.queue-item-issue {
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
.queue-item-priority {
font-family: var(--font-mono);
font-size: 0.6875rem;
padding: 0.125rem 0.375rem;
background: hsl(var(--muted));
border-radius: 0.25rem;
color: hsl(var(--muted-foreground));
}
/* Queue Item Status */
.queue-item-status {
display: inline-flex;
align-items: center;
gap: 0.25rem;
padding: 0.125rem 0.5rem;
border-radius: 9999px;
font-size: 0.6875rem;
font-weight: 500;
}
.queue-item-status.pending {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
.queue-item-status.running {
background: hsl(217 91% 60% / 0.15);
color: hsl(217 91% 60%);
}
.queue-item-status.completed {
background: hsl(var(--success) / 0.15);
color: hsl(var(--success));
}
.queue-item-status.failed {
background: hsl(var(--destructive) / 0.15);
color: hsl(var(--destructive));
}
/* Drag and Drop States */
.queue-item.dragging {
opacity: 0.5;
border: 2px dashed hsl(var(--primary));
}
.queue-item.drag-over {
border-color: hsl(var(--primary));
background: hsl(var(--primary) / 0.05);
}
.queue-group-items.drag-over {
background: hsl(var(--primary) / 0.03);
}
/* Arrow connector between sequential items */
.queue-group.sequential .queue-item:not(:last-child)::after {
content: '';
display: block;
width: 0;
height: 0;
border-left: 6px solid transparent;
border-right: 6px solid transparent;
border-top: 8px solid hsl(var(--muted-foreground) / 0.3);
position: absolute;
bottom: -12px;
left: 50%;
transform: translateX(-50%);
}
.queue-group.sequential .queue-item {
position: relative;
}
/* ==========================================
ISSUE DETAIL PANEL
========================================== */
.issue-detail-overlay {
position: fixed;
inset: 0;
background: hsl(var(--foreground) / 0.4);
z-index: 999;
animation: fadeIn 0.15s ease-out;
}
.issue-detail-panel {
position: fixed;
top: 0;
right: 0;
width: 560px;
max-width: 100%;
height: 100vh;
background: hsl(var(--background));
border-left: 1px solid hsl(var(--border));
z-index: 1000;
display: flex;
flex-direction: column;
animation: slideInPanel 0.2s ease-out;
}
@keyframes slideInPanel {
from {
opacity: 0;
transform: translateX(100%);
}
to {
opacity: 1;
transform: translateX(0);
}
}
.issue-detail-header {
display: flex;
align-items: flex-start;
justify-content: space-between;
padding: 1.25rem 1.5rem;
border-bottom: 1px solid hsl(var(--border));
}
.issue-detail-header-content {
flex: 1;
min-width: 0;
}
.issue-detail-id {
font-family: var(--font-mono);
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
.issue-detail-title {
font-size: 1.125rem;
font-weight: 600;
margin-top: 0.25rem;
line-height: 1.4;
}
.issue-detail-title.editable {
cursor: text;
padding: 0.25rem 0.5rem;
margin: 0.25rem -0.5rem 0;
border-radius: 0.375rem;
border: 1px solid transparent;
}
.issue-detail-title.editable:hover {
background: hsl(var(--muted) / 0.5);
}
.issue-detail-title.editable:focus {
outline: none;
border-color: hsl(var(--primary));
background: hsl(var(--background));
}
.issue-detail-close {
display: flex;
align-items: center;
justify-content: center;
width: 2rem;
height: 2rem;
border-radius: 0.375rem;
border: none;
background: transparent;
cursor: pointer;
color: hsl(var(--muted-foreground));
transition: all 0.15s ease;
}
.issue-detail-close:hover {
background: hsl(var(--muted));
color: hsl(var(--foreground));
}
.issue-detail-body {
flex: 1;
overflow-y: auto;
padding: 1.5rem;
}
.issue-detail-section {
margin-bottom: 1.5rem;
}
.issue-detail-section-title {
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
color: hsl(var(--muted-foreground));
margin-bottom: 0.75rem;
display: flex;
align-items: center;
gap: 0.5rem;
}
.issue-detail-section-title button {
padding: 0.25rem;
border-radius: 0.25rem;
border: none;
background: transparent;
cursor: pointer;
color: hsl(var(--muted-foreground));
transition: all 0.15s ease;
}
.issue-detail-section-title button:hover {
background: hsl(var(--muted));
color: hsl(var(--foreground));
}
/* Context / Description */
.issue-detail-context {
font-size: 0.875rem;
line-height: 1.6;
color: hsl(var(--foreground));
white-space: pre-wrap;
}
.issue-detail-context.editable {
padding: 0.75rem;
background: hsl(var(--muted) / 0.3);
border-radius: 0.5rem;
border: 1px solid transparent;
cursor: text;
min-height: 100px;
}
.issue-detail-context.editable:hover {
border-color: hsl(var(--border));
}
.issue-detail-context.editable:focus {
outline: none;
border-color: hsl(var(--primary));
background: hsl(var(--background));
}
/* Solutions List */
.solution-list {
display: flex;
flex-direction: column;
gap: 0.5rem;
}
.solution-item {
display: flex;
align-items: center;
gap: 0.75rem;
padding: 0.75rem 1rem;
background: hsl(var(--muted) / 0.3);
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
cursor: pointer;
transition: all 0.15s ease;
}
.solution-item:hover {
border-color: hsl(var(--primary));
}
.solution-item.bound {
border-color: hsl(var(--success));
background: hsl(var(--success) / 0.05);
}
.solution-item-icon {
display: flex;
align-items: center;
justify-content: center;
width: 2rem;
height: 2rem;
border-radius: 0.375rem;
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
.solution-item.bound .solution-item-icon {
background: hsl(var(--success) / 0.15);
color: hsl(var(--success));
}
.solution-item-content {
flex: 1;
min-width: 0;
}
.solution-item-name {
font-size: 0.875rem;
font-weight: 500;
}
.solution-item-meta {
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
/* Task List */
.task-list {
display: flex;
flex-direction: column;
gap: 0.5rem;
}
.task-item {
padding: 0.75rem 1rem;
background: hsl(var(--muted) / 0.3);
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
transition: all 0.15s ease;
}
.task-item:hover {
border-color: hsl(var(--primary) / 0.5);
}
.task-item-header {
display: flex;
align-items: center;
justify-content: space-between;
gap: 0.5rem;
}
.task-item-id {
font-family: var(--font-mono);
font-size: 0.6875rem;
color: hsl(var(--muted-foreground));
}
.task-item-title {
font-size: 0.875rem;
font-weight: 500;
margin-top: 0.25rem;
}
.task-item-scope {
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
font-family: var(--font-mono);
}
.task-item-actions {
display: flex;
align-items: center;
gap: 0.5rem;
}
/* Task Action Badge (Create, Update, etc) */
.task-action-badge {
display: inline-flex;
align-items: center;
padding: 0.125rem 0.375rem;
border-radius: 0.25rem;
font-size: 0.6875rem;
font-weight: 500;
}
.task-action-badge.create {
background: hsl(142 71% 45% / 0.15);
color: hsl(142 71% 45%);
}
.task-action-badge.update {
background: hsl(217 91% 60% / 0.15);
color: hsl(217 91% 60%);
}
.task-action-badge.implement {
background: hsl(262 83% 58% / 0.15);
color: hsl(262 83% 58%);
}
.task-action-badge.configure {
background: hsl(45 93% 47% / 0.15);
color: hsl(45 93% 47%);
}
.task-action-badge.refactor {
background: hsl(25 95% 53% / 0.15);
color: hsl(25 95% 53%);
}
.task-action-badge.test {
background: hsl(199 89% 48% / 0.15);
color: hsl(199 89% 48%);
}
.task-action-badge.delete {
background: hsl(var(--destructive) / 0.15);
color: hsl(var(--destructive));
}
/* Task Status Dropdown */
.task-status-select {
font-size: 0.6875rem;
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
border: 1px solid hsl(var(--border));
background: hsl(var(--background));
cursor: pointer;
}
.task-status-select:focus {
outline: none;
border-color: hsl(var(--primary));
}
/* Modification Points */
.modification-points {
margin-top: 0.5rem;
padding-top: 0.5rem;
border-top: 1px solid hsl(var(--border) / 0.5);
}
.modification-point {
display: flex;
align-items: flex-start;
gap: 0.5rem;
font-size: 0.75rem;
padding: 0.25rem 0;
}
.modification-point-file {
font-family: var(--font-mono);
color: hsl(var(--primary));
}
.modification-point-change {
color: hsl(var(--muted-foreground));
}
/* Implementation Steps */
.implementation-steps {
margin-top: 0.5rem;
padding-left: 1rem;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
.implementation-steps li {
margin: 0.25rem 0;
}
/* Acceptance Criteria */
.acceptance-criteria {
margin-top: 0.5rem;
padding-left: 1rem;
font-size: 0.75rem;
}
.acceptance-criteria li {
margin: 0.25rem 0;
color: hsl(var(--success));
}
/* ==========================================
CONFLICTS SECTION
========================================== */
.conflicts-section {
margin-top: 1.5rem;
}
.conflict-item {
display: flex;
align-items: flex-start;
gap: 0.75rem;
padding: 0.75rem 1rem;
background: hsl(45 93% 47% / 0.1);
border: 1px solid hsl(45 93% 47% / 0.3);
border-radius: 0.5rem;
margin-bottom: 0.5rem;
}
.conflict-item.resolved {
background: hsl(var(--success) / 0.05);
border-color: hsl(var(--success) / 0.3);
}
.conflict-icon {
display: flex;
align-items: center;
justify-content: center;
width: 1.5rem;
height: 1.5rem;
border-radius: 9999px;
background: hsl(45 93% 47% / 0.2);
color: hsl(45 93% 47%);
}
.conflict-item.resolved .conflict-icon {
background: hsl(var(--success) / 0.2);
color: hsl(var(--success));
}
.conflict-content {
flex: 1;
min-width: 0;
}
.conflict-file {
font-family: var(--font-mono);
font-size: 0.8125rem;
font-weight: 500;
}
.conflict-tasks {
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
margin-top: 0.25rem;
}
.conflict-resolution {
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
margin-top: 0.25rem;
}
/* ==========================================
FILTER BAR
========================================== */
.issue-filter-bar {
display: flex;
align-items: center;
gap: 0.75rem;
flex-wrap: wrap;
}
.issue-filter-group {
display: flex;
align-items: center;
gap: 0.25rem;
}
.issue-filter-group label {
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
.issue-filter-select {
font-size: 0.8125rem;
padding: 0.375rem 0.75rem;
border-radius: 0.375rem;
border: 1px solid hsl(var(--border));
background: hsl(var(--background));
cursor: pointer;
}
.issue-filter-select:focus {
outline: none;
border-color: hsl(var(--primary));
}
/* ==========================================
RESPONSIVE ADJUSTMENTS
========================================== */
@media (max-width: 768px) {
.issues-grid {
grid-template-columns: 1fr;
}
.issue-detail-panel {
width: 100%;
}
.queue-group.parallel .queue-group-items {
flex-direction: column;
}
.issue-filter-bar {
flex-direction: column;
align-items: stretch;
}
}
@media (max-width: 480px) {
.issue-view-toggle {
width: 100%;
}
.issue-view-toggle button {
flex: 1;
text-align: center;
}
.queue-item {
flex-wrap: wrap;
}
.queue-item-content {
width: 100%;
}
}
/* ==========================================
ANIMATIONS
========================================== */
@keyframes fadeIn {
from { opacity: 0; }
to { opacity: 1; }
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.animate-pulse {
animation: pulse 2s ease-in-out infinite;
}
/* Line clamp utility */
.line-clamp-2 {
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
overflow: hidden;
}
/* Badge styles */
.issue-card .badge,
.queue-item .badge {
font-size: 0.75rem;
font-weight: 500;
}

View File

@@ -0,0 +1,467 @@
/**
* CLI Stream Viewer Styles
* Right-side popup panel for viewing CLI streaming output
*/
/* ===== Overlay ===== */
.cli-stream-overlay {
position: fixed;
inset: 0;
background: rgb(0 0 0 / 0.3);
z-index: 1050;
opacity: 0;
visibility: hidden;
transition: all 0.3s ease;
}
.cli-stream-overlay.open {
opacity: 1;
visibility: visible;
}
/* ===== Main Panel ===== */
.cli-stream-viewer {
position: fixed;
top: 60px;
right: 16px;
width: 650px;
max-width: calc(100vw - 32px);
max-height: calc(100vh - 80px);
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 8px;
box-shadow: 0 8px 32px rgb(0 0 0 / 0.2);
z-index: 1100;
display: flex;
flex-direction: column;
transform: translateX(calc(100% + 20px));
opacity: 0;
visibility: hidden;
transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1);
}
.cli-stream-viewer.open {
transform: translateX(0);
opacity: 1;
visibility: visible;
}
/* ===== Header ===== */
.cli-stream-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 12px 16px;
border-bottom: 1px solid hsl(var(--border));
background: hsl(var(--muted) / 0.3);
}
.cli-stream-title {
display: flex;
align-items: center;
gap: 8px;
font-size: 0.875rem;
font-weight: 600;
color: hsl(var(--foreground));
}
.cli-stream-title svg,
.cli-stream-title i {
width: 18px;
height: 18px;
color: hsl(var(--primary));
}
.cli-stream-count-badge {
display: inline-flex;
align-items: center;
justify-content: center;
min-width: 20px;
height: 20px;
padding: 0 6px;
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
border-radius: 10px;
font-size: 0.6875rem;
font-weight: 600;
}
.cli-stream-count-badge.has-running {
background: hsl(var(--warning));
color: hsl(var(--warning-foreground, 0 0% 100%));
}
.cli-stream-actions {
display: flex;
align-items: center;
gap: 8px;
}
.cli-stream-action-btn {
display: flex;
align-items: center;
gap: 4px;
padding: 4px 10px;
background: transparent;
border: 1px solid hsl(var(--border));
border-radius: 4px;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
cursor: pointer;
transition: all 0.15s;
}
.cli-stream-action-btn:hover {
background: hsl(var(--hover));
color: hsl(var(--foreground));
}
.cli-stream-close-btn {
display: flex;
align-items: center;
justify-content: center;
width: 28px;
height: 28px;
padding: 0;
background: transparent;
border: none;
border-radius: 4px;
font-size: 1.25rem;
color: hsl(var(--muted-foreground));
cursor: pointer;
transition: all 0.15s;
}
.cli-stream-close-btn:hover {
background: hsl(var(--destructive) / 0.1);
color: hsl(var(--destructive));
}
/* ===== Tab Bar ===== */
.cli-stream-tabs {
display: flex;
gap: 2px;
padding: 8px 12px;
border-bottom: 1px solid hsl(var(--border));
background: hsl(var(--muted) / 0.2);
overflow-x: auto;
scrollbar-width: thin;
}
.cli-stream-tabs::-webkit-scrollbar {
height: 4px;
}
.cli-stream-tabs::-webkit-scrollbar-thumb {
background: hsl(var(--border));
border-radius: 2px;
}
.cli-stream-tab {
display: flex;
align-items: center;
gap: 6px;
padding: 6px 12px;
background: transparent;
border: 1px solid transparent;
border-radius: 6px;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
cursor: pointer;
white-space: nowrap;
transition: all 0.15s;
}
.cli-stream-tab:hover {
background: hsl(var(--hover));
color: hsl(var(--foreground));
}
.cli-stream-tab.active {
background: hsl(var(--card));
border-color: hsl(var(--primary));
color: hsl(var(--foreground));
box-shadow: 0 1px 3px rgb(0 0 0 / 0.1);
}
.cli-stream-tab-status {
width: 8px;
height: 8px;
border-radius: 50%;
flex-shrink: 0;
}
.cli-stream-tab-status.running {
background: hsl(var(--warning));
animation: streamStatusPulse 1.5s ease-in-out infinite;
}
.cli-stream-tab-status.completed {
background: hsl(var(--success));
}
.cli-stream-tab-status.error {
background: hsl(var(--destructive));
}
@keyframes streamStatusPulse {
0%, 100% { opacity: 1; transform: scale(1); }
50% { opacity: 0.6; transform: scale(1.2); }
}
.cli-stream-tab-tool {
font-weight: 500;
text-transform: capitalize;
}
.cli-stream-tab-mode {
font-size: 0.625rem;
padding: 1px 4px;
background: hsl(var(--muted));
border-radius: 3px;
color: hsl(var(--muted-foreground));
}
.cli-stream-tab-close {
display: flex;
align-items: center;
justify-content: center;
width: 16px;
height: 16px;
margin-left: 4px;
background: transparent;
border: none;
border-radius: 50%;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
cursor: pointer;
opacity: 0;
transition: all 0.15s;
}
.cli-stream-tab:hover .cli-stream-tab-close {
opacity: 1;
}
.cli-stream-tab-close:hover {
background: hsl(var(--destructive) / 0.2);
color: hsl(var(--destructive));
}
.cli-stream-tab-close.disabled {
cursor: not-allowed;
opacity: 0.3 !important;
}
/* ===== Empty State ===== */
.cli-stream-empty {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
padding: 48px 24px;
color: hsl(var(--muted-foreground));
text-align: center;
}
.cli-stream-empty svg,
.cli-stream-empty i {
width: 48px;
height: 48px;
margin-bottom: 16px;
opacity: 0.5;
}
.cli-stream-empty-title {
font-size: 0.875rem;
font-weight: 500;
margin-bottom: 4px;
}
.cli-stream-empty-hint {
font-size: 0.75rem;
opacity: 0.7;
}
/* ===== Terminal Content ===== */
.cli-stream-content {
flex: 1;
min-height: 300px;
max-height: 500px;
overflow-y: auto;
padding: 12px 16px;
background: hsl(220 13% 8%);
font-family: var(--font-mono, 'Consolas', 'Monaco', 'Courier New', monospace);
font-size: 0.75rem;
line-height: 1.6;
scrollbar-width: thin;
}
.cli-stream-content::-webkit-scrollbar {
width: 8px;
}
.cli-stream-content::-webkit-scrollbar-track {
background: transparent;
}
.cli-stream-content::-webkit-scrollbar-thumb {
background: hsl(0 0% 40%);
border-radius: 4px;
}
.cli-stream-line {
white-space: pre-wrap;
word-break: break-all;
margin: 0;
padding: 0;
}
.cli-stream-line.stdout {
color: hsl(0 0% 85%);
}
.cli-stream-line.stderr {
color: hsl(8 75% 65%);
}
.cli-stream-line.system {
color: hsl(210 80% 65%);
font-style: italic;
}
.cli-stream-line.info {
color: hsl(200 80% 70%);
}
/* Auto-scroll indicator */
.cli-stream-scroll-btn {
position: sticky;
bottom: 8px;
left: 50%;
transform: translateX(-50%);
display: inline-flex;
align-items: center;
gap: 4px;
padding: 4px 12px;
background: hsl(var(--primary));
color: white;
border: none;
border-radius: 12px;
font-size: 0.625rem;
cursor: pointer;
opacity: 0;
transition: opacity 0.2s;
}
.cli-stream-content.has-new-content .cli-stream-scroll-btn {
opacity: 1;
}
/* ===== Status Bar ===== */
.cli-stream-status {
display: flex;
align-items: center;
justify-content: space-between;
padding: 8px 16px;
border-top: 1px solid hsl(var(--border));
background: hsl(var(--muted) / 0.3);
font-size: 0.6875rem;
color: hsl(var(--muted-foreground));
}
.cli-stream-status-info {
display: flex;
align-items: center;
gap: 12px;
}
.cli-stream-status-item {
display: flex;
align-items: center;
gap: 4px;
}
.cli-stream-status-item svg,
.cli-stream-status-item i {
width: 12px;
height: 12px;
}
.cli-stream-status-actions {
display: flex;
align-items: center;
gap: 8px;
}
.cli-stream-toggle-btn {
display: flex;
align-items: center;
gap: 4px;
padding: 2px 8px;
background: transparent;
border: 1px solid hsl(var(--border));
border-radius: 3px;
font-size: 0.625rem;
color: hsl(var(--muted-foreground));
cursor: pointer;
transition: all 0.15s;
}
.cli-stream-toggle-btn:hover {
background: hsl(var(--hover));
}
.cli-stream-toggle-btn.active {
background: hsl(var(--primary) / 0.1);
border-color: hsl(var(--primary));
color: hsl(var(--primary));
}
/* ===== Header Button & Badge ===== */
.cli-stream-btn {
position: relative;
}
.cli-stream-badge {
position: absolute;
top: -2px;
right: -2px;
min-width: 14px;
height: 14px;
padding: 0 4px;
background: hsl(var(--warning));
color: white;
border-radius: 7px;
font-size: 0.5625rem;
font-weight: 600;
display: none;
align-items: center;
justify-content: center;
}
.cli-stream-badge.has-running {
display: flex;
animation: streamBadgePulse 1.5s ease-in-out infinite;
}
@keyframes streamBadgePulse {
0%, 100% { transform: scale(1); }
50% { transform: scale(1.15); }
}
/* ===== Responsive ===== */
@media (max-width: 768px) {
.cli-stream-viewer {
top: 56px;
right: 8px;
left: 8px;
width: auto;
max-height: calc(100vh - 72px);
}
.cli-stream-content {
min-height: 200px;
max-height: 350px;
}
}

View File

@@ -0,0 +1,456 @@
/**
* CLI Stream Viewer Component
* Real-time streaming output viewer for CLI executions
*/
// ===== State Management =====
let cliStreamExecutions = {}; // { executionId: { tool, mode, output, status, startTime, endTime } }
let activeStreamTab = null;
let autoScrollEnabled = true;
let isCliStreamViewerOpen = false;
const MAX_OUTPUT_LINES = 5000; // Prevent memory issues
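The record shape documented above can be illustrated with a minimal, hypothetical example (the field values here are made up for illustration, not produced by real stream events):

```javascript
// Illustrative only: a hypothetical execution record matching the shape
// noted above ({ tool, mode, output, status, startTime, endTime }).
const exampleExecution = {
    tool: 'gemini',            // CLI tool name (example value)
    mode: 'analysis',          // execution mode (example value)
    output: [
        { type: 'system', content: 'CLI execution started', timestamp: 0 },
        { type: 'stdout', content: 'Scanning files...', timestamp: 1 }
    ],
    status: 'running',         // 'running' | 'completed' | 'error'
    startTime: Date.now(),
    endTime: null              // set when the execution finishes
};
console.log(exampleExecution.output.length); // 2
```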
// ===== Initialization =====
function initCliStreamViewer() {
// Initialize keyboard shortcuts
document.addEventListener('keydown', function(e) {
if (e.key === 'Escape' && isCliStreamViewerOpen) {
toggleCliStreamViewer();
}
});
// Initialize scroll detection for auto-scroll
const content = document.getElementById('cliStreamContent');
if (content) {
content.addEventListener('scroll', handleStreamContentScroll);
}
}
// ===== Panel Control =====
function toggleCliStreamViewer() {
const viewer = document.getElementById('cliStreamViewer');
const overlay = document.getElementById('cliStreamOverlay');
if (!viewer || !overlay) return;
isCliStreamViewerOpen = !isCliStreamViewerOpen;
if (isCliStreamViewerOpen) {
viewer.classList.add('open');
overlay.classList.add('open');
// If no active tab but have executions, select the first one
if (!activeStreamTab && Object.keys(cliStreamExecutions).length > 0) {
const firstId = Object.keys(cliStreamExecutions)[0];
switchStreamTab(firstId);
} else {
renderStreamContent(activeStreamTab);
}
// Re-init lucide icons
if (typeof lucide !== 'undefined') {
lucide.createIcons();
}
} else {
viewer.classList.remove('open');
overlay.classList.remove('open');
}
}
// ===== WebSocket Event Handlers =====
function handleCliStreamStarted(payload) {
const { executionId, tool, mode, timestamp } = payload;
// Create new execution record
cliStreamExecutions[executionId] = {
tool: tool || 'cli',
mode: mode || 'analysis',
output: [],
status: 'running',
startTime: timestamp ? new Date(timestamp).getTime() : Date.now(),
endTime: null
};
// Add system message
cliStreamExecutions[executionId].output.push({
type: 'system',
content: `[${new Date().toLocaleTimeString()}] CLI execution started: ${tool} (${mode} mode)`,
timestamp: Date.now()
});
// Select it when no tab is active yet, or when the panel is open (follow the newest run)
if (!activeStreamTab || isCliStreamViewerOpen) {
activeStreamTab = executionId;
}
renderStreamTabs();
renderStreamContent(activeStreamTab);
updateStreamBadge();
// Auto-open panel if configured (optional)
// if (!isCliStreamViewerOpen) toggleCliStreamViewer();
}
function handleCliStreamOutput(payload) {
const { executionId, chunkType, data } = payload;
const exec = cliStreamExecutions[executionId];
if (!exec) return;
// Parse and add output lines
const content = typeof data === 'string' ? data : JSON.stringify(data);
const lines = content.split('\n');
lines.forEach(line => {
if (line.trim() || lines.length === 1) { // Keep empty lines if it's the only content
exec.output.push({
type: chunkType || 'stdout',
content: line,
timestamp: Date.now()
});
}
});
// Trim if too long
if (exec.output.length > MAX_OUTPUT_LINES) {
exec.output = exec.output.slice(-MAX_OUTPUT_LINES);
}
// Update UI if this is the active tab
if (activeStreamTab === executionId && isCliStreamViewerOpen) {
requestAnimationFrame(() => {
renderStreamContent(executionId);
});
}
// Update badge to show activity
updateStreamBadge();
}
function handleCliStreamCompleted(payload) {
const { executionId, success, duration, timestamp } = payload;
const exec = cliStreamExecutions[executionId];
if (!exec) return;
exec.status = success ? 'completed' : 'error';
exec.endTime = timestamp ? new Date(timestamp).getTime() : Date.now();
// Add completion message
const durationText = duration ? ` (${formatDuration(duration)})` : '';
const statusText = success ? 'completed successfully' : 'failed';
exec.output.push({
type: 'system',
content: `[${new Date().toLocaleTimeString()}] CLI execution ${statusText}${durationText}`,
timestamp: Date.now()
});
renderStreamTabs();
if (activeStreamTab === executionId) {
renderStreamContent(executionId);
}
updateStreamBadge();
}
function handleCliStreamError(payload) {
const { executionId, error, timestamp } = payload;
const exec = cliStreamExecutions[executionId];
if (!exec) return;
exec.status = 'error';
exec.endTime = timestamp ? new Date(timestamp).getTime() : Date.now();
// Add error message
exec.output.push({
type: 'stderr',
content: `[ERROR] ${error || 'Unknown error occurred'}`,
timestamp: Date.now()
});
renderStreamTabs();
if (activeStreamTab === executionId) {
renderStreamContent(executionId);
}
updateStreamBadge();
}
// ===== UI Rendering =====
function renderStreamTabs() {
const tabsContainer = document.getElementById('cliStreamTabs');
if (!tabsContainer) return;
const execIds = Object.keys(cliStreamExecutions);
if (execIds.length === 0) {
tabsContainer.innerHTML = '';
return;
}
// Sort: running first, then by start time (newest first)
execIds.sort((a, b) => {
const execA = cliStreamExecutions[a];
const execB = cliStreamExecutions[b];
if (execA.status === 'running' && execB.status !== 'running') return -1;
if (execA.status !== 'running' && execB.status === 'running') return 1;
return execB.startTime - execA.startTime;
});
tabsContainer.innerHTML = execIds.map(id => {
const exec = cliStreamExecutions[id];
const isActive = id === activeStreamTab;
const canClose = exec.status !== 'running';
return `
<div class="cli-stream-tab ${isActive ? 'active' : ''}"
onclick="switchStreamTab('${id}')"
data-execution-id="${id}">
<span class="cli-stream-tab-status ${exec.status}"></span>
<span class="cli-stream-tab-tool">${escapeHtml(exec.tool)}</span>
                <span class="cli-stream-tab-mode">${escapeHtml(exec.mode)}</span>
<button class="cli-stream-tab-close ${canClose ? '' : 'disabled'}"
onclick="event.stopPropagation(); closeStream('${id}')"
title="${canClose ? t('cliStream.close') : t('cliStream.cannotCloseRunning')}"
${canClose ? '' : 'disabled'}>×</button>
</div>
`;
}).join('');
// Update count badge
const countBadge = document.getElementById('cliStreamCountBadge');
if (countBadge) {
const runningCount = execIds.filter(id => cliStreamExecutions[id].status === 'running').length;
countBadge.textContent = execIds.length;
countBadge.classList.toggle('has-running', runningCount > 0);
}
}
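The tab ordering used in `renderStreamTabs` (running executions first, then newest start time) can be sketched in isolation with hypothetical sample data:

```javascript
// Standalone sketch of the tab-ordering comparator above; the sample
// executions are hypothetical.
const executions = {
    a: { status: 'completed', startTime: 100 },
    b: { status: 'running',   startTime: 50 },
    c: { status: 'completed', startTime: 200 }
};
const ordered = Object.keys(executions).sort((x, y) => {
    const ex = executions[x], ey = executions[y];
    if (ex.status === 'running' && ey.status !== 'running') return -1;
    if (ex.status !== 'running' && ey.status === 'running') return 1;
    return ey.startTime - ex.startTime; // newest first
});
console.log(ordered); // ['b', 'c', 'a']
```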
function renderStreamContent(executionId) {
const contentContainer = document.getElementById('cliStreamContent');
if (!contentContainer) return;
const exec = executionId ? cliStreamExecutions[executionId] : null;
if (!exec) {
// Show empty state
contentContainer.innerHTML = `
<div class="cli-stream-empty">
<i data-lucide="terminal"></i>
<div class="cli-stream-empty-title" data-i18n="cliStream.noStreams">${t('cliStream.noStreams')}</div>
<div class="cli-stream-empty-hint" data-i18n="cliStream.noStreamsHint">${t('cliStream.noStreamsHint')}</div>
</div>
`;
if (typeof lucide !== 'undefined') lucide.createIcons();
return;
}
// Check if should auto-scroll
const wasAtBottom = contentContainer.scrollHeight - contentContainer.scrollTop <= contentContainer.clientHeight + 50;
// Render output lines
contentContainer.innerHTML = exec.output.map(line =>
`<div class="cli-stream-line ${line.type}">${escapeHtml(line.content)}</div>`
).join('');
// Auto-scroll if enabled and was at bottom
if (autoScrollEnabled && wasAtBottom) {
contentContainer.scrollTop = contentContainer.scrollHeight;
}
// Update status bar
renderStreamStatus(executionId);
}
function renderStreamStatus(executionId) {
const statusContainer = document.getElementById('cliStreamStatus');
if (!statusContainer) return;
const exec = executionId ? cliStreamExecutions[executionId] : null;
if (!exec) {
statusContainer.innerHTML = '';
return;
}
const duration = exec.endTime
? formatDuration(exec.endTime - exec.startTime)
: formatDuration(Date.now() - exec.startTime);
const statusLabel = exec.status === 'running'
? t('cliStream.running')
: exec.status === 'completed'
? t('cliStream.completed')
: t('cliStream.error');
statusContainer.innerHTML = `
<div class="cli-stream-status-info">
<div class="cli-stream-status-item">
<span class="cli-stream-tab-status ${exec.status}"></span>
<span>${statusLabel}</span>
</div>
<div class="cli-stream-status-item">
<i data-lucide="clock"></i>
<span>${duration}</span>
</div>
<div class="cli-stream-status-item">
<i data-lucide="file-text"></i>
<span>${exec.output.length} ${t('cliStream.lines') || 'lines'}</span>
</div>
</div>
<div class="cli-stream-status-actions">
<button class="cli-stream-toggle-btn ${autoScrollEnabled ? 'active' : ''}"
onclick="toggleAutoScroll()"
title="${t('cliStream.autoScroll')}">
<i data-lucide="arrow-down-to-line"></i>
<span data-i18n="cliStream.autoScroll">${t('cliStream.autoScroll')}</span>
</button>
</div>
`;
if (typeof lucide !== 'undefined') lucide.createIcons();
    // Update duration once per second while running; reuse a single timer so
    // repeated renders (e.g. on each output chunk) don't stack timeouts
    if (exec.status === 'running') {
        clearTimeout(renderStreamStatus._timer);
        renderStreamStatus._timer = setTimeout(() => {
            if (activeStreamTab === executionId && cliStreamExecutions[executionId]?.status === 'running') {
                renderStreamStatus(executionId);
            }
        }, 1000);
    }
}
function switchStreamTab(executionId) {
if (!cliStreamExecutions[executionId]) return;
activeStreamTab = executionId;
renderStreamTabs();
renderStreamContent(executionId);
}
function updateStreamBadge() {
const badge = document.getElementById('cliStreamBadge');
if (!badge) return;
const runningCount = Object.values(cliStreamExecutions).filter(e => e.status === 'running').length;
if (runningCount > 0) {
badge.textContent = runningCount;
badge.classList.add('has-running');
} else {
badge.textContent = '';
badge.classList.remove('has-running');
}
}
// ===== User Actions =====
function closeStream(executionId) {
const exec = cliStreamExecutions[executionId];
if (!exec || exec.status === 'running') return;
delete cliStreamExecutions[executionId];
// Switch to another tab if this was active
if (activeStreamTab === executionId) {
const remaining = Object.keys(cliStreamExecutions);
activeStreamTab = remaining.length > 0 ? remaining[0] : null;
}
renderStreamTabs();
renderStreamContent(activeStreamTab);
updateStreamBadge();
}
function clearCompletedStreams() {
const toRemove = Object.keys(cliStreamExecutions).filter(
id => cliStreamExecutions[id].status !== 'running'
);
toRemove.forEach(id => delete cliStreamExecutions[id]);
// Update active tab if needed
if (activeStreamTab && !cliStreamExecutions[activeStreamTab]) {
const remaining = Object.keys(cliStreamExecutions);
activeStreamTab = remaining.length > 0 ? remaining[0] : null;
}
renderStreamTabs();
renderStreamContent(activeStreamTab);
updateStreamBadge();
}
function toggleAutoScroll() {
autoScrollEnabled = !autoScrollEnabled;
if (autoScrollEnabled && activeStreamTab) {
const content = document.getElementById('cliStreamContent');
if (content) {
content.scrollTop = content.scrollHeight;
}
}
renderStreamStatus(activeStreamTab);
}
function handleStreamContentScroll() {
const content = document.getElementById('cliStreamContent');
if (!content) return;
// If user scrolls up, disable auto-scroll
const isAtBottom = content.scrollHeight - content.scrollTop <= content.clientHeight + 50;
if (!isAtBottom && autoScrollEnabled) {
autoScrollEnabled = false;
renderStreamStatus(activeStreamTab);
}
}
// ===== Helper Functions =====
function formatDuration(ms) {
if (ms < 1000) return `${ms}ms`;
const seconds = Math.floor(ms / 1000);
if (seconds < 60) return `${seconds}s`;
const minutes = Math.floor(seconds / 60);
const remainingSeconds = seconds % 60;
if (minutes < 60) return `${minutes}m ${remainingSeconds}s`;
const hours = Math.floor(minutes / 60);
const remainingMinutes = minutes % 60;
return `${hours}h ${remainingMinutes}m`;
}
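Each branch of `formatDuration` maps to a distinct output shape; a self-contained copy (renamed `formatDurationDemo` here purely to avoid shadowing the original) shows the boundaries:

```javascript
// Copy of formatDuration above, under a demo name, to exercise each branch.
function formatDurationDemo(ms) {
    if (ms < 1000) return `${ms}ms`;
    const seconds = Math.floor(ms / 1000);
    if (seconds < 60) return `${seconds}s`;
    const minutes = Math.floor(seconds / 60);
    const remainingSeconds = seconds % 60;
    if (minutes < 60) return `${minutes}m ${remainingSeconds}s`;
    const hours = Math.floor(minutes / 60);
    const remainingMinutes = minutes % 60;
    return `${hours}h ${remainingMinutes}m`;
}
console.log(formatDurationDemo(500));     // "500ms"
console.log(formatDurationDemo(90000));   // "1m 30s"
console.log(formatDurationDemo(3661000)); // "1h 1m"
```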
function escapeHtml(text) {
if (!text) return '';
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
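`escapeHtml` above relies on the DOM (`document.createElement`), which escapes only `&`, `<` and `>` via the textContent/innerHTML round-trip. A DOM-free sketch with the same behavior can be useful where `document` is unavailable, such as unit tests:

```javascript
// DOM-free sketch equivalent to escapeHtml above: mirrors the entity
// escaping the innerHTML round-trip performs (&, <, > only, not quotes).
function escapeHtmlPure(text) {
    if (!text) return '';
    return String(text)
        .replace(/&/g, '&amp;')   // must run first so later entities survive
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;');
}
console.log(escapeHtmlPure('<b>&')); // "&lt;b&gt;&amp;"
```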
// Translation helper with fallback. Guard against self-recursion: when this
// file is loaded as a classic script, this declaration IS window.t.
function t(key) {
    if (typeof window.t === 'function' && window.t !== t) {
        return window.t(key);
    }
// Fallback values
const fallbacks = {
'cliStream.noStreams': 'No active CLI executions',
'cliStream.noStreamsHint': 'Start a CLI command to see streaming output',
'cliStream.running': 'Running',
'cliStream.completed': 'Completed',
'cliStream.error': 'Error',
'cliStream.autoScroll': 'Auto-scroll',
'cliStream.close': 'Close',
'cliStream.cannotCloseRunning': 'Cannot close running execution',
'cliStream.lines': 'lines'
};
return fallbacks[key] || key;
}
// Initialize when DOM is ready
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', initCliStreamViewer);
} else {
initCliStreamViewer();
}

View File

@@ -155,6 +155,12 @@ function initNavigation() {
} else {
console.error('renderApiSettings not defined - please refresh the page');
}
} else if (currentView === 'issue-manager') {
if (typeof renderIssueManager === 'function') {
renderIssueManager();
} else {
console.error('renderIssueManager not defined - please refresh the page');
}
}
});
});
@@ -199,6 +205,8 @@ function updateContentTitle() {
titleEl.textContent = t('title.codexLensManager');
} else if (currentView === 'api-settings') {
titleEl.textContent = t('title.apiSettings');
} else if (currentView === 'issue-manager') {
titleEl.textContent = t('title.issueManager');
} else if (currentView === 'liteTasks') {
const names = { 'lite-plan': t('title.litePlanSessions'), 'lite-fix': t('title.liteFixSessions') };
titleEl.textContent = names[currentLiteType] || t('title.liteTasks');

View File

@@ -217,24 +217,40 @@ function handleNotification(data) {
if (typeof handleCliExecutionStarted === 'function') {
handleCliExecutionStarted(payload);
}
// Route to CLI Stream Viewer
if (typeof handleCliStreamStarted === 'function') {
handleCliStreamStarted(payload);
}
break;
case 'CLI_OUTPUT':
if (typeof handleCliOutput === 'function') {
handleCliOutput(payload);
}
// Route to CLI Stream Viewer
if (typeof handleCliStreamOutput === 'function') {
handleCliStreamOutput(payload);
}
break;
case 'CLI_EXECUTION_COMPLETED':
if (typeof handleCliExecutionCompleted === 'function') {
handleCliExecutionCompleted(payload);
}
// Route to CLI Stream Viewer
if (typeof handleCliStreamCompleted === 'function') {
handleCliStreamCompleted(payload);
}
break;
case 'CLI_EXECUTION_ERROR':
if (typeof handleCliExecutionError === 'function') {
handleCliExecutionError(payload);
}
// Route to CLI Stream Viewer
if (typeof handleCliStreamError === 'function') {
handleCliStreamError(payload);
}
break;
// CLI Review Events

View File

@@ -39,7 +39,21 @@ const i18n = {
'header.refreshWorkspace': 'Refresh workspace',
'header.toggleTheme': 'Toggle theme',
'header.language': 'Language',
'header.cliStream': 'CLI Stream Viewer',
// CLI Stream Viewer
'cliStream.title': 'CLI Stream',
'cliStream.clearCompleted': 'Clear Completed',
'cliStream.noStreams': 'No active CLI executions',
'cliStream.noStreamsHint': 'Start a CLI command to see streaming output',
'cliStream.running': 'Running',
'cliStream.completed': 'Completed',
'cliStream.error': 'Error',
'cliStream.autoScroll': 'Auto-scroll',
'cliStream.close': 'Close',
'cliStream.cannotCloseRunning': 'Cannot close running execution',
'cliStream.lines': 'lines',
// Sidebar - Project section
'nav.project': 'Project',
'nav.overview': 'Overview',
@@ -1711,6 +1725,53 @@ const i18n = {
'coreMemory.belongsToClusters': 'Belongs to Clusters',
'coreMemory.relationsError': 'Failed to load relations',
// Issue Manager
'nav.issues': 'Issues',
'nav.issueManager': 'Manager',
'title.issueManager': 'Issue Manager',
'issue.viewIssues': 'Issues',
'issue.viewQueue': 'Queue',
'issue.filterAll': 'All',
'issue.filterStatus': 'Status',
'issue.filterPriority': 'Priority',
'issue.noIssues': 'No issues found',
'issue.noIssuesHint': 'Issues will appear here when created via /issue:plan command',
'issue.noQueue': 'No tasks in queue',
'issue.noQueueHint': 'Run /issue:queue to form execution queue from bound solutions',
'issue.tasks': 'tasks',
'issue.solutions': 'solutions',
'issue.parallel': 'Parallel',
'issue.sequential': 'Sequential',
'issue.status.registered': 'Registered',
'issue.status.planned': 'Planned',
'issue.status.queued': 'Queued',
'issue.status.executing': 'Executing',
'issue.status.completed': 'Completed',
'issue.status.failed': 'Failed',
'issue.priority.critical': 'Critical',
'issue.priority.high': 'High',
'issue.priority.medium': 'Medium',
'issue.priority.low': 'Low',
'issue.detail.context': 'Context',
'issue.detail.solutions': 'Solutions',
'issue.detail.tasks': 'Tasks',
'issue.detail.noSolutions': 'No solutions available',
'issue.detail.noTasks': 'No tasks available',
'issue.detail.bound': 'Bound',
'issue.detail.modificationPoints': 'Modification Points',
'issue.detail.implementation': 'Implementation Steps',
'issue.detail.acceptance': 'Acceptance Criteria',
'issue.queue.reordered': 'Queue reordered',
'issue.queue.reorderFailed': 'Failed to reorder queue',
'issue.saved': 'Issue saved',
'issue.saveFailed': 'Failed to save issue',
'issue.taskUpdated': 'Task updated',
'issue.taskUpdateFailed': 'Failed to update task',
'issue.conflicts': 'Conflicts',
'issue.noConflicts': 'No conflicts detected',
'issue.conflict.resolved': 'Resolved',
'issue.conflict.pending': 'Pending',
// Common additions
'common.copyId': 'Copy ID',
'common.copied': 'Copied!',
@@ -1748,7 +1809,21 @@ const i18n = {
'header.refreshWorkspace': '刷新工作区',
'header.toggleTheme': '切换主题',
'header.language': '语言',
'header.cliStream': 'CLI 流式输出',
// CLI Stream Viewer
'cliStream.title': 'CLI 流式输出',
'cliStream.clearCompleted': '清除已完成',
'cliStream.noStreams': '没有活动的 CLI 执行',
'cliStream.noStreamsHint': '启动 CLI 命令以查看流式输出',
'cliStream.running': '运行中',
'cliStream.completed': '已完成',
'cliStream.error': '错误',
'cliStream.autoScroll': '自动滚动',
'cliStream.close': '关闭',
'cliStream.cannotCloseRunning': '无法关闭运行中的执行',
'cliStream.lines': '行',
// Sidebar - Project section
'nav.project': '项目',
'nav.overview': '概览',
@@ -3429,6 +3504,53 @@ const i18n = {
'coreMemory.belongsToClusters': '所属聚类',
'coreMemory.relationsError': '加载关联失败',
// Issue Manager
'nav.issues': '议题',
'nav.issueManager': '管理器',
'title.issueManager': '议题管理器',
'issue.viewIssues': '议题',
'issue.viewQueue': '队列',
'issue.filterAll': '全部',
'issue.filterStatus': '状态',
'issue.filterPriority': '优先级',
'issue.noIssues': '暂无议题',
'issue.noIssuesHint': '通过 /issue:plan 命令创建的议题将显示在此处',
'issue.noQueue': '队列中暂无任务',
'issue.noQueueHint': '运行 /issue:queue 从绑定的解决方案生成执行队列',
'issue.tasks': '任务',
'issue.solutions': '解决方案',
'issue.parallel': '并行',
'issue.sequential': '顺序',
'issue.status.registered': '已注册',
'issue.status.planned': '已规划',
'issue.status.queued': '已入队',
'issue.status.executing': '执行中',
'issue.status.completed': '已完成',
'issue.status.failed': '失败',
'issue.priority.critical': '紧急',
'issue.priority.high': '高',
'issue.priority.medium': '中',
'issue.priority.low': '低',
'issue.detail.context': '上下文',
'issue.detail.solutions': '解决方案',
'issue.detail.tasks': '任务',
'issue.detail.noSolutions': '暂无解决方案',
'issue.detail.noTasks': '暂无任务',
'issue.detail.bound': '已绑定',
'issue.detail.modificationPoints': '修改点',
'issue.detail.implementation': '实现步骤',
'issue.detail.acceptance': '验收标准',
'issue.queue.reordered': '队列已重排',
'issue.queue.reorderFailed': '队列重排失败',
'issue.saved': '议题已保存',
'issue.saveFailed': '保存议题失败',
'issue.taskUpdated': '任务已更新',
'issue.taskUpdateFailed': '更新任务失败',
'issue.conflicts': '冲突',
'issue.noConflicts': '未检测到冲突',
'issue.conflict.resolved': '已解决',
'issue.conflict.pending': '待处理',
// Common additions
'common.copyId': '复制 ID',
'common.copied': '已复制!',


@@ -0,0 +1,704 @@
// ==========================================
// ISSUE MANAGER VIEW
// Manages issues, solutions, and execution queue
// ==========================================
// ========== Issue State ==========
var issueData = {
issues: [],
queue: { queue: [], conflicts: [], execution_groups: [], grouped_items: {} },
selectedIssue: null,
selectedSolution: null,
statusFilter: 'all',
viewMode: 'issues' // 'issues' | 'queue'
};
var issueLoading = false;
var issueDragState = {
dragging: null,
groupId: null
};
// ========== Main Render Function ==========
async function renderIssueManager() {
const container = document.getElementById('mainContent');
if (!container) return;
// Hide stats grid and search
hideStatsAndCarousel();
// Show loading state
container.innerHTML = '<div class="issue-manager loading">' +
'<div class="loading-spinner"><i data-lucide="loader-2" class="w-8 h-8 animate-spin"></i></div>' +
'<p>' + t('common.loading') + '</p>' +
'</div>';
// Load data
await Promise.all([loadIssueData(), loadQueueData()]);
// Render the main view
renderIssueView();
}
// ========== Data Loading ==========
async function loadIssueData() {
issueLoading = true;
try {
const response = await fetch('/api/issues?path=' + encodeURIComponent(projectPath));
if (!response.ok) throw new Error('Failed to load issues');
const data = await response.json();
issueData.issues = data.issues || [];
updateIssueBadge();
} catch (err) {
console.error('Failed to load issues:', err);
issueData.issues = [];
} finally {
issueLoading = false;
}
}
async function loadQueueData() {
try {
const response = await fetch('/api/queue?path=' + encodeURIComponent(projectPath));
if (!response.ok) throw new Error('Failed to load queue');
issueData.queue = await response.json();
} catch (err) {
console.error('Failed to load queue:', err);
issueData.queue = { queue: [], conflicts: [], execution_groups: [], grouped_items: {} };
}
}
async function loadIssueDetail(issueId) {
try {
const response = await fetch('/api/issues/' + encodeURIComponent(issueId) + '?path=' + encodeURIComponent(projectPath));
if (!response.ok) throw new Error('Failed to load issue detail');
return await response.json();
} catch (err) {
console.error('Failed to load issue detail:', err);
return null;
}
}
function updateIssueBadge() {
const badge = document.getElementById('badgeIssues');
if (badge) {
badge.textContent = issueData.issues.length;
}
}
// ========== Main View Render ==========
function renderIssueView() {
const container = document.getElementById('mainContent');
if (!container) return;
const issues = issueData.issues || [];
const filteredIssues = issueData.statusFilter === 'all'
? issues
: issues.filter(i => i.status === issueData.statusFilter);
container.innerHTML = `
<div class="issue-manager">
<!-- Header -->
<div class="issue-header mb-6">
<div class="flex items-center justify-between flex-wrap gap-4">
<div class="flex items-center gap-3">
<div class="w-10 h-10 bg-primary/10 rounded-lg flex items-center justify-center">
<i data-lucide="clipboard-list" class="w-5 h-5 text-primary"></i>
</div>
<div>
<h2 class="text-lg font-semibold text-foreground">${t('title.issueManager') || 'Issue Manager'}</h2>
<p class="text-sm text-muted-foreground">${t('issues.description') || 'Manage issues, solutions, and execution queue'}</p>
</div>
</div>
<!-- View Toggle -->
<div class="issue-view-toggle">
<button class="${issueData.viewMode === 'issues' ? 'active' : ''}" onclick="switchIssueView('issues')">
<i data-lucide="list" class="w-4 h-4 mr-1"></i>
${t('issue.viewIssues') || 'Issues'}
</button>
<button class="${issueData.viewMode === 'queue' ? 'active' : ''}" onclick="switchIssueView('queue')">
<i data-lucide="git-branch" class="w-4 h-4 mr-1"></i>
${t('issue.viewQueue') || 'Queue'}
</button>
</div>
</div>
</div>
${issueData.viewMode === 'issues' ? renderIssueListSection(filteredIssues) : renderQueueSection()}
<!-- Detail Panel -->
<div id="issueDetailPanel" class="issue-detail-panel hidden"></div>
</div>
`;
lucide.createIcons();
// Initialize drag-drop if in queue view
if (issueData.viewMode === 'queue') {
initQueueDragDrop();
}
}
function switchIssueView(mode) {
issueData.viewMode = mode;
renderIssueView();
}
// ========== Issue List Section ==========
function renderIssueListSection(issues) {
const statuses = ['all', 'registered', 'planning', 'planned', 'queued', 'executing', 'completed', 'failed'];
return `
<!-- Filters -->
<div class="issue-filters mb-4">
<div class="flex items-center gap-2 flex-wrap">
<span class="text-sm text-muted-foreground">${t('issue.filterStatus') || 'Status'}:</span>
${statuses.map(status => `
<button class="issue-filter-btn ${issueData.statusFilter === status ? 'active' : ''}"
onclick="filterIssuesByStatus('${status}')">
${status === 'all' ? (t('issue.filterAll') || 'All') : status}
</button>
`).join('')}
</div>
</div>
<!-- Issues Grid -->
<div class="issues-grid">
${issues.length === 0 ? `
<div class="issue-empty">
<i data-lucide="inbox" class="w-12 h-12 text-muted-foreground mb-4"></i>
<p class="text-muted-foreground">${t('issue.noIssues') || 'No issues found'}</p>
<p class="text-sm text-muted-foreground mt-2">${t('issue.noIssuesHint') || 'Create issues using: ccw issue init <id>'}</p>
</div>
` : issues.map(issue => renderIssueCard(issue)).join('')}
</div>
`;
}
function renderIssueCard(issue) {
const statusColors = {
registered: 'registered',
planning: 'planning',
planned: 'planned',
queued: 'queued',
executing: 'executing',
completed: 'completed',
failed: 'failed'
};
return `
<div class="issue-card" onclick="openIssueDetail('${issue.id}')">
<div class="flex items-start justify-between mb-3">
<div class="flex items-center gap-2">
<span class="issue-id font-mono text-sm">${issue.id}</span>
<span class="issue-status ${statusColors[issue.status] || ''}">${issue.status || 'unknown'}</span>
</div>
<span class="issue-priority" title="${t('issue.filterPriority') || 'Priority'}: ${issue.priority || 3}">
${renderPriorityStars(issue.priority || 3)}
</span>
</div>
<h3 class="issue-title text-foreground font-medium mb-2">${issue.title || issue.id}</h3>
<div class="issue-meta flex items-center gap-4 text-sm text-muted-foreground">
<span class="flex items-center gap-1">
<i data-lucide="file-text" class="w-3.5 h-3.5"></i>
${issue.task_count || 0} ${t('issue.tasks') || 'tasks'}
</span>
<span class="flex items-center gap-1">
<i data-lucide="lightbulb" class="w-3.5 h-3.5"></i>
${issue.solution_count || 0} ${t('issue.solutions') || 'solutions'}
</span>
${issue.bound_solution_id ? `
<span class="flex items-center gap-1 text-primary">
<i data-lucide="link" class="w-3.5 h-3.5"></i>
${t('issue.detail.bound') || 'Bound'}
</span>
` : ''}
</div>
</div>
`;
}
function renderPriorityStars(priority) {
const maxStars = 5;
let stars = '';
for (let i = 1; i <= maxStars; i++) {
stars += `<i data-lucide="star" class="w-3 h-3 ${i <= priority ? 'text-warning fill-warning' : 'text-muted'}"></i>`;
}
return stars;
}
function filterIssuesByStatus(status) {
issueData.statusFilter = status;
renderIssueView();
}
// ========== Queue Section ==========
function renderQueueSection() {
const queue = issueData.queue;
const groups = queue.execution_groups || [];
const groupedItems = queue.grouped_items || {};
if (groups.length === 0 && (!queue.queue || queue.queue.length === 0)) {
return `
<div class="queue-empty">
<i data-lucide="git-branch" class="w-12 h-12 text-muted-foreground mb-4"></i>
<p class="text-muted-foreground">${t('issue.noQueue') || 'Queue is empty'}</p>
<p class="text-sm text-muted-foreground mt-2">${t('issue.noQueueHint') || 'Run /issue:queue to form execution queue'}</p>
</div>
`;
}
return `
<div class="queue-info mb-4">
<p class="text-sm text-muted-foreground">
<i data-lucide="info" class="w-4 h-4 inline mr-1"></i>
${t('issues.reorderHint') || 'Drag items within a group to reorder'}
</p>
</div>
<div class="queue-timeline">
${groups.map(group => renderQueueGroup(group, groupedItems[group.id] || [])).join('')}
</div>
${queue.conflicts && queue.conflicts.length > 0 ? renderConflictsSection(queue.conflicts) : ''}
`;
}
function renderQueueGroup(group, items) {
const isParallel = group.type === 'parallel';
return `
<div class="queue-group" data-group-id="${group.id}">
<div class="queue-group-header">
<div class="queue-group-type ${isParallel ? 'parallel' : 'sequential'}">
<i data-lucide="${isParallel ? 'git-merge' : 'arrow-right'}" class="w-4 h-4"></i>
${group.id} (${isParallel ? (t('issue.parallel') || 'Parallel') : (t('issue.sequential') || 'Sequential')})
</div>
<span class="text-sm text-muted-foreground">${group.task_count} tasks</span>
</div>
<div class="queue-items ${isParallel ? 'parallel' : 'sequential'}">
${items.map((item, idx) => renderQueueItem(item, idx, items.length)).join('')}
</div>
</div>
`;
}
function renderQueueItem(item, index, total) {
const statusColors = {
pending: '',
ready: 'ready',
executing: 'executing',
completed: 'completed',
failed: 'failed',
blocked: 'blocked'
};
return `
<div class="queue-item ${statusColors[item.status] || ''}"
draggable="true"
data-queue-id="${item.queue_id}"
data-group-id="${item.execution_group}"
onclick="openQueueItemDetail('${item.queue_id}')">
<span class="queue-item-id font-mono text-xs">${item.queue_id}</span>
<span class="queue-item-issue text-xs text-muted-foreground">${item.issue_id}</span>
<span class="queue-item-task text-sm">${item.task_id}</span>
<span class="queue-item-priority" style="opacity: ${item.semantic_priority || 0.5}">
<i data-lucide="arrow-up" class="w-3 h-3"></i>
</span>
${item.depends_on && item.depends_on.length > 0 ? `
<span class="queue-item-deps text-xs text-muted-foreground" title="${t('issues.dependsOn') || 'Depends on'}: ${item.depends_on.join(', ')}">
<i data-lucide="link" class="w-3 h-3"></i>
</span>
` : ''}
</div>
`;
}
function renderConflictsSection(conflicts) {
return `
<div class="conflicts-section mt-6">
<h3 class="text-sm font-semibold text-foreground mb-3">
<i data-lucide="alert-triangle" class="w-4 h-4 inline text-warning mr-1"></i>
${t('issue.conflicts') || 'Conflicts'} (${conflicts.length})
</h3>
<div class="conflicts-list">
${conflicts.map(c => `
<div class="conflict-item">
<span class="conflict-file font-mono text-xs">${c.file}</span>
<span class="conflict-tasks text-xs text-muted-foreground">${c.tasks.join(' → ')}</span>
<span class="conflict-status ${c.resolved ? 'resolved' : 'pending'}">
${c.resolved ? (t('issue.conflict.resolved') || 'Resolved') : (t('issue.conflict.pending') || 'Pending')}
</span>
</div>
`).join('')}
</div>
</div>
`;
}
// ========== Drag-Drop for Queue ==========
function initQueueDragDrop() {
const items = document.querySelectorAll('.queue-item[draggable="true"]');
items.forEach(item => {
item.addEventListener('dragstart', handleIssueDragStart);
item.addEventListener('dragend', handleIssueDragEnd);
item.addEventListener('dragover', handleIssueDragOver);
item.addEventListener('drop', handleIssueDrop);
});
}
function handleIssueDragStart(e) {
const item = e.target.closest('.queue-item');
if (!item) return;
issueDragState.dragging = item.dataset.queueId;
issueDragState.groupId = item.dataset.groupId;
item.classList.add('dragging');
e.dataTransfer.effectAllowed = 'move';
e.dataTransfer.setData('text/plain', item.dataset.queueId);
}
function handleIssueDragEnd(e) {
const item = e.target.closest('.queue-item');
if (item) {
item.classList.remove('dragging');
}
issueDragState.dragging = null;
issueDragState.groupId = null;
// Remove all placeholders
document.querySelectorAll('.queue-drop-placeholder').forEach(p => p.remove());
}
function handleIssueDragOver(e) {
e.preventDefault();
const target = e.target.closest('.queue-item');
if (!target || target.dataset.queueId === issueDragState.dragging) return;
// Only allow drag within same group
if (target.dataset.groupId !== issueDragState.groupId) {
e.dataTransfer.dropEffect = 'none';
return;
}
e.dataTransfer.dropEffect = 'move';
}
function handleIssueDrop(e) {
e.preventDefault();
const target = e.target.closest('.queue-item');
if (!target || !issueDragState.dragging) return;
// Only allow drop within same group
if (target.dataset.groupId !== issueDragState.groupId) return;
const container = target.closest('.queue-items');
if (!container) return;
// Get new order
const items = Array.from(container.querySelectorAll('.queue-item'));
const draggedItem = items.find(i => i.dataset.queueId === issueDragState.dragging);
const targetIndex = items.indexOf(target);
const draggedIndex = items.indexOf(draggedItem);
if (draggedIndex === targetIndex) return;
// Reorder in DOM
if (draggedIndex < targetIndex) {
target.after(draggedItem);
} else {
target.before(draggedItem);
}
// Get new order and save
const newOrder = Array.from(container.querySelectorAll('.queue-item')).map(i => i.dataset.queueId);
saveQueueOrder(issueDragState.groupId, newOrder);
}
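The DOM reorder above (`after`/`before` followed by a re-query) is equivalent to a remove-and-insert on the list of queue ids. A pure sketch of that rule, useful for reasoning about the order that ends up being sent to saveQueueOrder:

```javascript
// Sketch: the reorder rule used by handleIssueDrop, as a pure function.
// Moves draggedId to targetId's current position; unknown ids or a
// no-op move return an unchanged copy.
function reorderQueueIds(ids, draggedId, targetId) {
  const from = ids.indexOf(draggedId);
  const to = ids.indexOf(targetId);
  if (from === -1 || to === -1 || from === to) return ids.slice();
  const next = ids.slice();
  next.splice(from, 1);       // remove dragged item
  next.splice(to, 0, draggedId); // insert at target's position
  return next;
}
```

Dragging forward lands the item just after the target; dragging backward lands it just before, matching the `after`/`before` branches in the handler.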
async function saveQueueOrder(groupId, newOrder) {
try {
const response = await fetch('/api/queue/reorder?path=' + encodeURIComponent(projectPath), {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ groupId, newOrder })
});
if (!response.ok) {
throw new Error('Failed to save queue order');
}
const result = await response.json();
if (result.error) {
showNotification(result.error, 'error');
} else {
showNotification(t('issue.queue.reordered') || 'Queue reordered', 'success');
// Reload queue data
await loadQueueData();
}
} catch (err) {
console.error('Failed to save queue order:', err);
showNotification(t('issue.queue.reorderFailed') || 'Failed to save queue order', 'error');
// Reload to restore original order
await loadQueueData();
renderIssueView();
}
}
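saveQueueOrder posts `{ groupId, newOrder }` to `/api/queue/reorder`. The server side is not part of this diff, so the following is only a hypothetical sketch of the payload validation that endpoint would be expected to perform:

```javascript
// Sketch (assumed server-side contract): validate the reorder payload
// that saveQueueOrder sends. Shape inferred from the client code only.
function validateReorderPayload(body) {
  if (!body || typeof body.groupId !== 'string' || !body.groupId) {
    return { ok: false, error: 'groupId is required' };
  }
  if (!Array.isArray(body.newOrder) || body.newOrder.some(id => typeof id !== 'string')) {
    return { ok: false, error: 'newOrder must be an array of queue ids' };
  }
  return { ok: true };
}
```

Returning `{ error }` on failure matches the client, which surfaces `result.error` via showNotification.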
// ========== Detail Panel ==========
async function openIssueDetail(issueId) {
const panel = document.getElementById('issueDetailPanel');
if (!panel) return;
panel.innerHTML = '<div class="p-8 text-center"><i data-lucide="loader-2" class="w-8 h-8 animate-spin mx-auto"></i></div>';
panel.classList.remove('hidden');
lucide.createIcons();
const detail = await loadIssueDetail(issueId);
if (!detail) {
panel.innerHTML = '<div class="p-8 text-center text-destructive">Failed to load issue</div>';
return;
}
issueData.selectedIssue = detail;
renderIssueDetailPanel(detail);
}
function renderIssueDetailPanel(issue) {
const panel = document.getElementById('issueDetailPanel');
if (!panel) return;
const boundSolution = issue.solutions?.find(s => s.is_bound);
panel.innerHTML = `
<div class="issue-detail-header">
<div class="flex items-center justify-between">
<h3 class="text-lg font-semibold">${issue.id}</h3>
<button class="btn-icon" onclick="closeIssueDetail()">
<i data-lucide="x" class="w-5 h-5"></i>
</button>
</div>
<span class="issue-status ${issue.status || ''}">${issue.status || 'unknown'}</span>
</div>
<div class="issue-detail-content">
<!-- Title (editable) -->
<div class="detail-section">
<label class="detail-label">Title</label>
<div class="detail-editable" id="issueTitle">
<span class="detail-value">${issue.title || issue.id}</span>
<button class="btn-edit" onclick="startEditField('${issue.id}', 'title', '${(issue.title || issue.id).replace(/'/g, "\\'")}')">
<i data-lucide="pencil" class="w-3.5 h-3.5"></i>
</button>
</div>
</div>
<!-- Context (editable) -->
<div class="detail-section">
<label class="detail-label">Context</label>
<div class="detail-context" id="issueContext">
<pre class="detail-pre">${issue.context || 'No context'}</pre>
<button class="btn-edit" onclick="startEditContext('${issue.id}')">
<i data-lucide="pencil" class="w-3.5 h-3.5"></i>
</button>
</div>
</div>
<!-- Solutions -->
<div class="detail-section">
<label class="detail-label">${t('issue.solutions') || 'Solutions'} (${issue.solutions?.length || 0})</label>
<div class="solutions-list">
${(issue.solutions || []).map(sol => `
<div class="solution-item ${sol.is_bound ? 'bound' : ''}" onclick="toggleSolutionExpand('${sol.id}')">
<div class="solution-header">
<span class="solution-id font-mono text-xs">${sol.id}</span>
${sol.is_bound ? '<span class="solution-bound-badge">' + (t('issue.detail.bound') || 'Bound') + '</span>' : ''}
<span class="solution-tasks text-xs">${sol.tasks?.length || 0} tasks</span>
</div>
<div class="solution-tasks-list hidden" id="solution-${sol.id}">
${(sol.tasks || []).map(task => `
<div class="task-item">
<span class="task-id font-mono">${task.id}</span>
<span class="task-action ${task.action?.toLowerCase() || ''}">${task.action || 'Unknown'}</span>
<span class="task-title">${task.title || ''}</span>
</div>
`).join('')}
</div>
</div>
`).join('') || '<p class="text-sm text-muted-foreground">No solutions</p>'}
</div>
</div>
<!-- Tasks (from tasks.jsonl) -->
<div class="detail-section">
<label class="detail-label">${t('issue.tasks') || 'Tasks'} (${issue.tasks?.length || 0})</label>
<div class="tasks-list">
${(issue.tasks || []).map(task => `
<div class="task-item-detail">
<div class="flex items-center justify-between">
<span class="font-mono text-sm">${task.id}</span>
<select class="task-status-select" onchange="updateTaskStatus('${issue.id}', '${task.id}', this.value)">
${['pending', 'ready', 'in_progress', 'completed', 'failed', 'paused', 'skipped'].map(s =>
`<option value="${s}" ${task.status === s ? 'selected' : ''}>${s}</option>`
).join('')}
</select>
</div>
<p class="task-title-detail">${task.title || task.description || ''}</p>
</div>
`).join('') || '<p class="text-sm text-muted-foreground">No tasks</p>'}
</div>
</div>
</div>
`;
lucide.createIcons();
}
function closeIssueDetail() {
const panel = document.getElementById('issueDetailPanel');
if (panel) {
panel.classList.add('hidden');
}
issueData.selectedIssue = null;
}
function toggleSolutionExpand(solId) {
const el = document.getElementById('solution-' + solId);
if (el) {
el.classList.toggle('hidden');
}
}
function openQueueItemDetail(queueId) {
const item = issueData.queue.queue?.find(q => q.queue_id === queueId);
if (item) {
openIssueDetail(item.issue_id);
}
}
// ========== Edit Functions ==========
function startEditField(issueId, field, currentValue) {
const container = document.getElementById('issueTitle');
if (!container) return;
container.innerHTML = `
<input type="text" class="edit-input" id="editField" value="${currentValue}" />
<div class="edit-actions">
<button class="btn-save" onclick="saveFieldEdit('${issueId}', '${field}')">
<i data-lucide="check" class="w-4 h-4"></i>
</button>
<button class="btn-cancel" onclick="cancelEdit()">
<i data-lucide="x" class="w-4 h-4"></i>
</button>
</div>
`;
lucide.createIcons();
document.getElementById('editField')?.focus();
}
function startEditContext(issueId) {
const container = document.getElementById('issueContext');
const currentValue = issueData.selectedIssue?.context || '';
if (!container) return;
container.innerHTML = `
<textarea class="edit-textarea" id="editContext" rows="8">${currentValue}</textarea>
<div class="edit-actions">
<button class="btn-save" onclick="saveContextEdit('${issueId}')">
<i data-lucide="check" class="w-4 h-4"></i>
</button>
<button class="btn-cancel" onclick="cancelEdit()">
<i data-lucide="x" class="w-4 h-4"></i>
</button>
</div>
`;
lucide.createIcons();
document.getElementById('editContext')?.focus();
}
async function saveFieldEdit(issueId, field) {
const input = document.getElementById('editField');
if (!input) return;
const value = input.value.trim();
if (!value) return;
try {
const response = await fetch('/api/issues/' + encodeURIComponent(issueId) + '?path=' + encodeURIComponent(projectPath), {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ [field]: value })
});
if (!response.ok) throw new Error('Failed to update');
showNotification(t('issue.saved') || ('Updated ' + field), 'success');
// Refresh data
await loadIssueData();
const detail = await loadIssueDetail(issueId);
if (detail) {
issueData.selectedIssue = detail;
renderIssueDetailPanel(detail);
}
} catch (err) {
showNotification(t('issue.saveFailed') || 'Failed to update', 'error');
cancelEdit();
}
}
async function saveContextEdit(issueId) {
const textarea = document.getElementById('editContext');
if (!textarea) return;
const value = textarea.value;
try {
const response = await fetch('/api/issues/' + encodeURIComponent(issueId) + '?path=' + encodeURIComponent(projectPath), {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ context: value })
});
if (!response.ok) throw new Error('Failed to update');
showNotification('Context updated', 'success');
// Refresh detail
const detail = await loadIssueDetail(issueId);
if (detail) {
issueData.selectedIssue = detail;
renderIssueDetailPanel(detail);
}
} catch (err) {
showNotification('Failed to update context', 'error');
cancelEdit();
}
}
function cancelEdit() {
if (issueData.selectedIssue) {
renderIssueDetailPanel(issueData.selectedIssue);
}
}
async function updateTaskStatus(issueId, taskId, status) {
try {
const response = await fetch('/api/issues/' + encodeURIComponent(issueId) + '/tasks/' + encodeURIComponent(taskId) + '?path=' + encodeURIComponent(projectPath), {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ status })
});
if (!response.ok) throw new Error('Failed to update task');
showNotification(t('issue.taskUpdated') || 'Task status updated', 'success');
} catch (err) {
showNotification(t('issue.taskUpdateFailed') || 'Failed to update task status', 'error');
}
}


@@ -275,6 +275,18 @@
</div>
</div>
</div>
<!-- CLI Stream Viewer Button -->
<button class="cli-stream-btn p-1.5 text-muted-foreground hover:text-foreground hover:bg-hover rounded relative"
id="cliStreamBtn"
onclick="toggleCliStreamViewer()"
data-i18n-title="header.cliStream"
title="CLI Stream Viewer">
<svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<polyline points="4 17 10 11 4 5"/>
<line x1="12" y1="19" x2="20" y2="19"/>
</svg>
<span class="cli-stream-badge" id="cliStreamBadge"></span>
</button>
<!-- Refresh Button -->
<button class="refresh-btn p-1.5 text-muted-foreground hover:text-foreground hover:bg-hover rounded" id="refreshWorkspace" data-i18n-title="header.refreshWorkspace" title="Refresh workspace">
<svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
@@ -394,6 +406,21 @@
</ul>
</div>
<!-- Issues Section -->
<div class="mb-2" id="issuesNav">
<div class="flex items-center px-4 py-2 text-sm font-semibold text-muted-foreground uppercase tracking-wide">
<i data-lucide="clipboard-list" class="nav-section-icon mr-2"></i>
<span class="nav-section-title" data-i18n="nav.issues">Issues</span>
</div>
<ul class="space-y-0.5">
<li class="nav-item flex items-center gap-2 px-3 py-2.5 text-sm text-muted-foreground hover:bg-hover hover:text-foreground rounded cursor-pointer transition-colors" data-view="issue-manager" data-tooltip="Issue Manager">
<i data-lucide="list-checks" class="nav-icon"></i>
<span class="nav-text flex-1" data-i18n="nav.issueManager">Manager</span>
<span class="badge px-2 py-0.5 text-xs font-semibold rounded-full bg-hover text-muted-foreground" id="badgeIssues">0</span>
</li>
</ul>
</div>
<!-- MCP Servers Section -->
<div class="mb-2" id="mcpServersNav">
<div class="flex items-center px-4 py-2 text-sm font-semibold text-muted-foreground uppercase tracking-wide">
@@ -578,6 +605,34 @@
<div class="drawer-overlay hidden fixed inset-0 bg-black/50 z-40" id="drawerOverlay" onclick="closeTaskDrawer()"></div>
</div>
<!-- CLI Stream Viewer Panel -->
<div class="cli-stream-viewer" id="cliStreamViewer">
<div class="cli-stream-header">
<div class="cli-stream-title">
<i data-lucide="terminal"></i>
<span data-i18n="cliStream.title">CLI Stream</span>
<span class="cli-stream-count-badge" id="cliStreamCountBadge">0</span>
</div>
<div class="cli-stream-actions">
<button class="cli-stream-action-btn" onclick="clearCompletedStreams()" data-i18n="cliStream.clearCompleted">
<i data-lucide="trash-2"></i>
<span>Clear</span>
</button>
<button class="cli-stream-close-btn" onclick="toggleCliStreamViewer()" title="Close">&times;</button>
</div>
</div>
<div class="cli-stream-tabs" id="cliStreamTabs">
<!-- Dynamic tabs -->
</div>
<div class="cli-stream-content" id="cliStreamContent">
<!-- Terminal output -->
</div>
<div class="cli-stream-status" id="cliStreamStatus">
<!-- Status bar -->
</div>
</div>
<div class="cli-stream-overlay" id="cliStreamOverlay" onclick="toggleCliStreamViewer()"></div>
<!-- Markdown Preview Modal -->
<div id="markdownModal" class="markdown-modal hidden fixed inset-0 z-[100] flex items-center justify-center">
<div class="markdown-modal-backdrop absolute inset-0 bg-black/60" onclick="closeMarkdownModal()"></div>