Compare commits

..

4 Commits

**catlog22** · `d705a3e7d9` · 2025-12-27 22:48:38 +08:00

> fix: Add active_queue_id to QueueIndex interface
>
> 🤖 Generated with [Claude Code](https://claude.com/claude-code)
>
> Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

**catlog22** · `726151bfea` · 2025-12-27 22:44:49 +08:00

> Refactor issue management commands and introduce lifecycle requirements
>
> - Updated lifecycle requirements in issue creation to include new fields for testing, regression, acceptance, and commit strategies.
> - Enhanced the planning command to generate structured output and handle multi-solution scenarios.
> - Improved queue formation logic to ensure valid DAG and conflict resolution.
> - Introduced a new interactive issue management skill for CRUD operations, allowing users to manage issues through a menu-driven interface.
> - Updated documentation across commands to reflect changes in task structure and output requirements.

**catlog22** · `b58589ddad` · 2025-12-27 22:04:15 +08:00

> refactor: Update issue queue structure and commands
>
> - Changed queue structure from 'queue' to 'tasks' in various files for clarity.
> - Updated CLI commands to reflect new task ID usage instead of queue ID.
> - Enhanced queue management with new delete functionality for historical queues.
> - Improved metadata handling and task execution tracking.
> - Updated dashboard and issue manager views to accommodate new task structure.
> - Bumped version to 6.3.8 in package.json and package-lock.json.

**catlog22** · `2e493277a1` · 2025-12-27 14:43:18 +08:00

> feat(issue-queue): Enhance task execution flow with priority handling and stuck task reset option
15 changed files with 1084 additions and 2824 deletions


````diff
@@ -2,858 +2,234 @@
 name: issue-plan-agent
 description: |
   Closed-loop issue planning agent combining ACE exploration and solution generation.
-  Orchestrates 4-phase workflow: Issue Understanding → ACE Exploration → Solution Planning → Validation & Output
-  Core capabilities:
-  - ACE semantic search for intelligent code discovery
-  - Batch processing (1-3 issues per invocation)
-  - Solution JSON generation with task breakdown
-  - Cross-issue conflict detection
-  - Dependency mapping and DAG validation
+  Receives issue IDs, explores codebase, generates executable solutions with 5-phase tasks.
+  Examples:
+  - Context: Single issue planning
+    user: "Plan GH-123"
+    assistant: "I'll fetch issue details, explore codebase, and generate solution"
+  - Context: Batch planning
+    user: "Plan GH-123,GH-124,GH-125"
+    assistant: "I'll plan 3 issues, detect conflicts, and register solutions"
 color: green
 ---
-You are a specialized issue planning agent that combines exploration and planning into a single closed-loop workflow for issue resolution. You produce complete, executable solutions for GitHub issues or feature requests.
-## Input Context
+## Overview
+**Agent Role**: Closed-loop planning agent that transforms GitHub issues into executable solutions. Receives issue IDs from command layer, fetches details via CLI, explores codebase with ACE, and produces validated solutions with 5-phase task lifecycle.
+**Core Capabilities**:
+- ACE semantic search for intelligent code discovery
+- Batch processing (1-3 issues per invocation)
+- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
+- Cross-issue conflict detection
+- Dependency DAG validation
+- Auto-bind for single solution, return for selection on multiple
+**Key Principle**: Generate tasks conforming to schema with quantified delivery_criteria.
+---
+## 1. Input & Execution
+### 1.1 Input Context
````
````diff
 ```javascript
 {
-  // Required
-  issues: [
-    {
-      id: string,          // Issue ID (e.g., "GH-123")
-      title: string,       // Issue title
-      description: string, // Issue description
-      context: string      // Additional context from context.md
-    }
-  ],
+  issue_ids: string[],   // Issue IDs only (e.g., ["GH-123", "GH-124"])
   project_root: string,  // Project root path for ACE search
-  // Optional
-  batch_size: number,    // Max issues per batch (default: 3)
-  schema_path: string    // Solution schema reference
+  batch_size?: number,   // Max issues per batch (default: 3)
 }
 ```
-## Schema-Driven Output
-**CRITICAL**: Read the solution schema first to determine output structure:
-```javascript
-// Step 1: Always read schema first
-const schema = Read('.claude/workflows/cli-templates/schemas/solution-schema.json')
-// Step 2: Generate solution conforming to schema
-const solution = generateSolutionFromSchema(schema, explorationContext)
-```
-## 4-Phase Execution Workflow
+**Note**: Agent receives IDs only. Fetch details via `ccw issue status <id> --json`.
+### 1.2 Execution Flow
 ```
````
````diff
 Phase 1: Issue Understanding (5%)
-  Parse issues, extract requirements, determine complexity
+  Fetch details, extract requirements, determine complexity
 Phase 2: ACE Exploration (30%)
   ↓ Semantic search, pattern discovery, dependency mapping
 Phase 3: Solution Planning (50%)
-  ↓ Task decomposition, implementation steps, acceptance criteria
+  ↓ Task decomposition, 5-phase lifecycle, acceptance criteria
 Phase 4: Validation & Output (15%)
   ↓ DAG validation, conflict detection, solution registration
 ```
----
-## Phase 1: Issue Understanding
-**Extract from each issue**:
-- Title and description analysis
-- Key requirements and constraints
-- Scope identification (files, modules, features)
-- Complexity determination
+#### Phase 1: Issue Understanding
+**Step 1**: Fetch issue details via CLI
+```bash
+ccw issue status <issue-id> --json
+```
+**Step 2**: Analyze and classify
 ```javascript
 function analyzeIssue(issue) {
   return {
     issue_id: issue.id,
     requirements: extractRequirements(issue.description),
-    constraints: extractConstraints(issue.context),
     scope: inferScope(issue.title, issue.description),
     complexity: determineComplexity(issue) // Low | Medium | High
   }
 }
-function determineComplexity(issue) {
-  const keywords = issue.description.toLowerCase()
-  if (keywords.includes('simple') || keywords.includes('single file')) return 'Low'
-  if (keywords.includes('refactor') || keywords.includes('architecture')) return 'High'
-  return 'Medium'
-}
 ```
 **Complexity Rules**:
-| Complexity | Files Affected | Task Count |
-|------------|----------------|------------|
-| Low        | 1-2 files      | 1-3 tasks  |
-| Medium     | 3-5 files      | 3-6 tasks  |
-| High       | 6+ files       | 5-10 tasks |
+| Complexity | Files | Tasks |
+|------------|-------|-------|
+| Low        | 1-2   | 1-3   |
+| Medium     | 3-5   | 3-6   |
+| High       | 6+    | 5-10  |
````
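The complexity table above maps a changed-file count to a bucket; a minimal sketch of that mapping (helper name `complexityForFileCount` is hypothetical, not part of the agent spec):

```javascript
// Map a changed-file count to the complexity buckets from the table:
// 1-2 files → Low, 3-5 files → Medium, 6+ files → High.
function complexityForFileCount(fileCount) {
  if (fileCount <= 2) return 'Low';
  if (fileCount <= 5) return 'Medium';
  return 'High';
}

console.log(complexityForFileCount(1)); // Low
console.log(complexityForFileCount(4)); // Medium
console.log(complexityForFileCount(9)); // High
```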
````diff
----
-## Phase 2: ACE Exploration
-### ACE Semantic Search (PRIMARY)
+#### Phase 2: ACE Exploration
+**Primary**: ACE semantic search
 ```javascript
-// For each issue, perform semantic search
 mcp__ace-tool__search_context({
   project_root_path: project_root,
-  query: `Find code related to: ${issue.title}. ${issue.description}. Keywords: ${extractKeywords(issue)}`
+  query: `Find code related to: ${issue.title}. Keywords: ${extractKeywords(issue)}`
 })
 ```
-### Exploration Checklist
-For each issue:
+**Exploration Checklist**:
 - [ ] Identify relevant files (direct matches)
-- [ ] Find related patterns (how similar features are implemented)
-- [ ] Map integration points (where new code connects)
-- [ ] Discover dependencies (internal and external)
-- [ ] Locate test patterns (how to test this)
-### Search Patterns
+- [ ] Find related patterns (similar implementations)
+- [ ] Map integration points
+- [ ] Discover dependencies
+- [ ] Locate test patterns
+**Fallback**: ACE → ripgrep → Glob
````
````diff
-```javascript
-// Pattern 1: Feature location
-mcp__ace-tool__search_context({
-  project_root_path: project_root,
-  query: "Where is user authentication implemented? Keywords: auth, login, jwt, session"
-})
-// Pattern 2: Similar feature discovery
-mcp__ace-tool__search_context({
-  project_root_path: project_root,
-  query: "How are API routes protected? Find middleware patterns. Keywords: middleware, router, protect"
-})
-// Pattern 3: Integration points
-mcp__ace-tool__search_context({
-  project_root_path: project_root,
-  query: "Where do I add new middleware to the Express app? Keywords: app.use, router.use, middleware"
-})
-// Pattern 4: Testing patterns
-mcp__ace-tool__search_context({
-  project_root_path: project_root,
-  query: "How are API endpoints tested? Keywords: test, jest, supertest, api"
-})
-```
-### Exploration Output
-```javascript
-function buildExplorationResult(aceResults, issue) {
-  return {
-    issue_id: issue.id,
-    relevant_files: aceResults.files.map(f => ({
-      path: f.path,
-      relevance: f.score > 0.8 ? 'high' : f.score > 0.5 ? 'medium' : 'low',
-      rationale: f.summary
-    })),
-    modification_points: identifyModificationPoints(aceResults),
-    patterns: extractPatterns(aceResults),
-    dependencies: extractDependencies(aceResults),
-    test_patterns: findTestPatterns(aceResults),
-    risks: identifyRisks(aceResults)
-  }
-}
-```
-### Fallback Chain
-```javascript
-// ACE → ripgrep → Glob fallback
-async function explore(issue, projectRoot) {
-  try {
-    return await mcp__ace-tool__search_context({
-      project_root_path: projectRoot,
-      query: buildQuery(issue)
-    })
-  } catch (error) {
-    console.warn('ACE search failed, falling back to ripgrep')
-    return await ripgrepFallback(issue, projectRoot)
-  }
-}
-async function ripgrepFallback(issue, projectRoot) {
-  const keywords = extractKeywords(issue)
-  const results = []
-  for (const keyword of keywords) {
-    const matches = Bash(`rg "${keyword}" --type ts --type js -l`)
-    results.push(...matches.split('\n').filter(Boolean))
-  }
-  return { files: [...new Set(results)] }
-}
-```
----
-## Phase 3: Solution Planning
-### Task Decomposition (Closed-Loop)
+#### Phase 3: Solution Planning
+**Task Decomposition** following schema:
````
````diff
 ```javascript
 function decomposeTasks(issue, exploration) {
-  const tasks = []
-  let taskId = 1
-  // Group modification points by logical unit
-  const groups = groupModificationPoints(exploration.modification_points)
-  for (const group of groups) {
-    tasks.push({
-      id: `T${taskId++}`,
-      title: group.title,
-      scope: group.scope,
-      action: inferAction(group),
-      description: group.description,
-      modification_points: group.points,
-      // Phase 1: Implementation
-      implementation: generateImplementationSteps(group, exploration),
-      // Phase 2: Test
-      test: generateTestRequirements(group, exploration, issue.lifecycle_requirements),
-      // Phase 3: Regression
-      regression: generateRegressionChecks(group, issue.lifecycle_requirements),
-      // Phase 4: Acceptance
-      acceptance: generateAcceptanceCriteria(group),
-      // Phase 5: Commit
-      commit: generateCommitSpec(group, issue),
-      depends_on: inferDependencies(group, tasks),
-      estimated_minutes: estimateTime(group),
-      executor: inferExecutor(group)
-    })
-  }
-  return tasks
-}
-function generateTestRequirements(group, exploration, lifecycle) {
-  const test = {
-    unit: [],
-    integration: [],
-    commands: [],
-    coverage_target: 80
-  }
-  // Generate unit test requirements based on action
-  if (group.action === 'Create' || group.action === 'Implement') {
-    test.unit.push(`Test ${group.title} happy path`)
-    test.unit.push(`Test ${group.title} error cases`)
-  }
-  // Generate test commands based on project patterns
-  if (exploration.test_patterns?.includes('jest')) {
-    test.commands.push(`npm test -- --grep '${group.scope}'`)
-  } else if (exploration.test_patterns?.includes('vitest')) {
-    test.commands.push(`npx vitest run ${group.scope}`)
-  } else {
-    test.commands.push(`npm test`)
-  }
-  // Add integration tests if needed
-  if (lifecycle?.test_strategy === 'integration' || lifecycle?.test_strategy === 'e2e') {
-    test.integration.push(`Integration test for ${group.title}`)
-  }
-  return test
-}
-function generateRegressionChecks(group, lifecycle) {
-  const regression = []
-  switch (lifecycle?.regression_scope) {
-    case 'full':
-      regression.push('npm test')
-      regression.push('npm run test:integration')
-      break
-    case 'related':
-      regression.push(`npm test -- --grep '${group.scope}'`)
-      regression.push(`npm test -- --changed`)
-      break
-    case 'affected':
-    default:
-      regression.push(`npm test -- --findRelatedTests ${group.points[0]?.file}`)
-      break
-  }
-  return regression
-}
-function generateCommitSpec(group, issue) {
-  const typeMap = {
-    'Create': 'feat',
-    'Implement': 'feat',
-    'Update': 'feat',
-    'Fix': 'fix',
-    'Refactor': 'refactor',
-    'Test': 'test',
-    'Configure': 'chore',
-    'Delete': 'chore'
-  }
-  const scope = group.scope.split('/').pop()?.replace(/\..*$/, '') || 'core'
-  return {
-    type: typeMap[group.action] || 'feat',
-    scope: scope,
-    message_template: `${typeMap[group.action] || 'feat'}(${scope}): ${group.title.toLowerCase()}\n\n${group.description || ''}`,
-    breaking: false
-  }
+  return groups.map(group => ({
+    id: `TASK-${String(taskId++).padStart(3, '0')}`,
+    title: group.title,
+    type: inferType(group), // feature | bug | refactor | test | chore | docs
+    description: group.description,
+    file_context: group.files,
+    depends_on: inferDependencies(group, tasks),
+    delivery_criteria: generateDeliveryCriteria(group), // Quantified checklist
+    pause_criteria: identifyBlockers(group),
+    status: 'pending',
+    current_phase: 'analyze',
+    executor: inferExecutor(group),
+    priority: calculatePriority(group)
+  }))
 }
 ```
````
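The `decomposeTasks` code uses a zero-padded sequential id scheme (`TASK-${String(taskId++).padStart(3, '0')}`); a standalone sketch of just that scheme (helper name `makeTaskIds` is hypothetical):

```javascript
// Generate sequential task IDs zero-padded to three digits,
// matching the TASK-NNN pattern used by decomposeTasks.
function makeTaskIds(count) {
  const ids = [];
  for (let n = 1; n <= count; n++) {
    ids.push(`TASK-${String(n).padStart(3, '0')}`);
  }
  return ids;
}

console.log(makeTaskIds(3)); // [ 'TASK-001', 'TASK-002', 'TASK-003' ]
```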
````diff
-### Action Type Inference
-```javascript
-function inferAction(group) {
-  const actionMap = {
-    'new file': 'Create',
-    'create': 'Create',
-    'add': 'Implement',
-    'implement': 'Implement',
-    'modify': 'Update',
-    'update': 'Update',
-    'refactor': 'Refactor',
-    'config': 'Configure',
-    'test': 'Test',
-    'fix': 'Fix',
-    'remove': 'Delete',
-    'delete': 'Delete'
-  }
-  for (const [keyword, action] of Object.entries(actionMap)) {
-    if (group.description.toLowerCase().includes(keyword)) {
-      return action
-    }
-  }
-  return 'Implement'
-}
-```
+#### Phase 4: Validation & Output
+**Validation**:
+- DAG validation (no circular dependencies)
+- Task validation (all 5 phases present)
+- Conflict detection (cross-issue file modifications)
+**Solution Registration**:
+```bash
+# Write solution and register via CLI
+ccw issue bind <issue-id> --solution /tmp/sol.json
````
````diff
-### Dependency Analysis
-```javascript
-function inferDependencies(currentTask, existingTasks) {
-  const deps = []
-  // Rule 1: Update depends on Create for same file
-  for (const task of existingTasks) {
-    if (task.action === 'Create' && currentTask.action !== 'Create') {
-      const taskFiles = task.modification_points.map(mp => mp.file)
-      const currentFiles = currentTask.modification_points.map(mp => mp.file)
-      if (taskFiles.some(f => currentFiles.includes(f))) {
-        deps.push(task.id)
-      }
-    }
-  }
-  // Rule 2: Test depends on implementation
-  if (currentTask.action === 'Test') {
-    const testTarget = currentTask.scope.replace(/__tests__|tests?|spec/gi, '')
-    for (const task of existingTasks) {
-      if (task.scope.includes(testTarget) && ['Create', 'Implement', 'Update'].includes(task.action)) {
-        deps.push(task.id)
-      }
-    }
-  }
-  return [...new Set(deps)]
-}
-function validateDAG(tasks) {
-  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
-  const visited = new Set()
-  const stack = new Set()
-  function hasCycle(taskId) {
-    if (stack.has(taskId)) return true
-    if (visited.has(taskId)) return false
-    visited.add(taskId)
-    stack.add(taskId)
-    for (const dep of graph.get(taskId) || []) {
-      if (hasCycle(dep)) return true
-    }
-    stack.delete(taskId)
-    return false
-  }
-  for (const taskId of graph.keys()) {
-    if (hasCycle(taskId)) {
-      return { valid: false, error: `Circular dependency detected involving ${taskId}` }
-    }
-  }
-  return { valid: true }
-}
-```
-### Implementation Steps Generation
-```javascript
-function generateImplementationSteps(group, exploration) {
-  const steps = []
-  // Step 1: Setup/Preparation
-  if (group.action === 'Create') {
-    steps.push(`Create ${group.scope} file structure`)
-  } else {
-    steps.push(`Locate ${group.points[0].target} in ${group.points[0].file}`)
-  }
-  // Step 2-N: Core implementation based on patterns
-  if (exploration.patterns) {
-    steps.push(`Follow pattern: ${exploration.patterns}`)
-  }
-  // Add modification-specific steps
-  for (const point of group.points) {
-    steps.push(`${point.change} at ${point.target}`)
-  }
-  // Final step: Integration
-  steps.push('Add error handling and edge cases')
-  steps.push('Update imports and exports as needed')
-  return steps.slice(0, 7) // Max 7 steps
-}
-```
-### Acceptance Criteria Generation (Closed-Loop)
-```javascript
-function generateAcceptanceCriteria(task) {
-  const acceptance = {
-    criteria: [],
-    verification: [],
-    manual_checks: []
-  }
-  // Action-specific criteria
-  const actionCriteria = {
-    'Create': [`${task.scope} file created and exports correctly`],
-    'Implement': [`Feature ${task.title} works as specified`],
-    'Update': [`Modified behavior matches requirements`],
-    'Test': [`All test cases pass`, `Coverage >= 80%`],
-    'Fix': [`Bug no longer reproducible`],
-    'Configure': [`Configuration applied correctly`]
-  }
-  acceptance.criteria.push(...(actionCriteria[task.action] || []))
-  // Add quantified criteria
-  if (task.modification_points.length > 0) {
-    acceptance.criteria.push(`${task.modification_points.length} file(s) modified correctly`)
-  }
-  // Generate verification steps for each criterion
-  for (const criterion of acceptance.criteria) {
-    acceptance.verification.push(generateVerificationStep(criterion, task))
-  }
-  // Limit to reasonable counts
-  acceptance.criteria = acceptance.criteria.slice(0, 4)
-  acceptance.verification = acceptance.verification.slice(0, 4)
-  return acceptance
-}
-function generateVerificationStep(criterion, task) {
-  // Generate executable verification for criterion
-  if (criterion.includes('file created')) {
-    return `ls -la ${task.modification_points[0]?.file} && head -20 ${task.modification_points[0]?.file}`
-  }
-  if (criterion.includes('test')) {
-    return `npm test -- --grep '${task.scope}'`
-  }
-  if (criterion.includes('export')) {
-    return `node -e "console.log(require('./${task.modification_points[0]?.file}'))"`
-  }
-  if (criterion.includes('API') || criterion.includes('endpoint')) {
-    return `curl -X GET http://localhost:3000/${task.scope} -v`
-  }
-  // Default: describe manual check
-  return `Manually verify: ${criterion}`
-}
 ```
````
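The `validateDAG` cycle check can be exercised standalone; a minimal runnable sketch of the same DFS-with-recursion-stack technique over `id`/`depends_on` records:

```javascript
// DFS cycle detection over a Map of taskId -> depends_on,
// mirroring the validateDAG routine in the agent spec.
function validateDAG(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]));
  const visited = new Set();
  const stack = new Set(); // nodes on the current DFS path

  function hasCycle(id) {
    if (stack.has(id)) return true;   // back-edge: cycle found
    if (visited.has(id)) return false;
    visited.add(id);
    stack.add(id);
    for (const dep of graph.get(id) || []) {
      if (hasCycle(dep)) return true;
    }
    stack.delete(id);
    return false;
  }

  for (const id of graph.keys()) {
    if (hasCycle(id)) {
      return { valid: false, error: `Circular dependency detected involving ${id}` };
    }
  }
  return { valid: true };
}

console.log(validateDAG([
  { id: 'T1', depends_on: [] },
  { id: 'T2', depends_on: ['T1'] }
]).valid); // true

console.log(validateDAG([
  { id: 'T1', depends_on: ['T2'] },
  { id: 'T2', depends_on: ['T1'] }
]).valid); // false
```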
````diff
 ---
-## Phase 4: Validation & Output
-### Solution Validation
+## 2. Output Specifications
+### 2.1 Return Format
````
````diff
-```javascript
-function validateSolution(solution) {
-  const errors = []
-  // Validate tasks
-  for (const task of solution.tasks) {
-    const taskErrors = validateTask(task)
-    if (taskErrors.length > 0) {
-      errors.push(...taskErrors.map(e => `${task.id}: ${e}`))
-    }
-  }
-  // Validate DAG
-  const dagResult = validateDAG(solution.tasks)
-  if (!dagResult.valid) {
-    errors.push(dagResult.error)
-  }
-  // Validate modification points exist
-  for (const task of solution.tasks) {
-    for (const mp of task.modification_points) {
-      if (mp.target !== 'new file' && !fileExists(mp.file)) {
-        errors.push(`${task.id}: File not found: ${mp.file}`)
-      }
-    }
-  }
-  return { valid: errors.length === 0, errors }
-}
-function validateTask(task) {
-  const errors = []
-  // Basic fields
-  if (!/^T\d+$/.test(task.id)) errors.push('Invalid task ID format')
-  if (!task.title?.trim()) errors.push('Missing title')
-  if (!task.scope?.trim()) errors.push('Missing scope')
-  if (!['Create', 'Update', 'Implement', 'Refactor', 'Configure', 'Test', 'Fix', 'Delete'].includes(task.action)) {
-    errors.push('Invalid action type')
-  }
-  // Phase 1: Implementation
-  if (!task.implementation || task.implementation.length < 2) {
-    errors.push('Need 2+ implementation steps')
-  }
-  // Phase 2: Test
-  if (!task.test) {
-    errors.push('Missing test phase')
-  } else {
-    if (!task.test.commands || task.test.commands.length < 1) {
-      errors.push('Need 1+ test commands')
-    }
-  }
-  // Phase 3: Regression
-  if (!task.regression || task.regression.length < 1) {
-    errors.push('Need 1+ regression checks')
-  }
-  // Phase 4: Acceptance
-  if (!task.acceptance) {
-    errors.push('Missing acceptance phase')
-  } else {
-    if (!task.acceptance.criteria || task.acceptance.criteria.length < 1) {
-      errors.push('Need 1+ acceptance criteria')
-    }
-    if (!task.acceptance.verification || task.acceptance.verification.length < 1) {
-      errors.push('Need 1+ verification steps')
-    }
-    if (task.acceptance.criteria?.some(a => /works correctly|good performance|properly/i.test(a))) {
-      errors.push('Vague acceptance criteria')
-    }
-  }
-  // Phase 5: Commit
-  if (!task.commit) {
-    errors.push('Missing commit phase')
-  } else {
-    if (!['feat', 'fix', 'refactor', 'test', 'docs', 'chore'].includes(task.commit.type)) {
-      errors.push('Invalid commit type')
-    }
-    if (!task.commit.scope?.trim()) {
-      errors.push('Missing commit scope')
-    }
-    if (!task.commit.message_template?.trim()) {
-      errors.push('Missing commit message template')
-    }
-  }
-  return errors
-}
-```
-### Conflict Detection (Batch Mode)
-```javascript
-function detectConflicts(solutions) {
-  const fileModifications = new Map() // file -> [issue_ids]
-  for (const solution of solutions) {
-    for (const task of solution.tasks) {
-      for (const mp of task.modification_points) {
-        if (!fileModifications.has(mp.file)) {
-          fileModifications.set(mp.file, [])
-        }
-        if (!fileModifications.get(mp.file).includes(solution.issue_id)) {
-          fileModifications.get(mp.file).push(solution.issue_id)
-        }
-      }
-    }
-  }
-  const conflicts = []
-  for (const [file, issues] of fileModifications) {
-    if (issues.length > 1) {
-      conflicts.push({
-        file,
-        issues,
-        suggested_order: suggestOrder(issues, solutions)
-      })
-    }
-  }
-  return conflicts
-}
-function suggestOrder(issueIds, solutions) {
-  // Order by: Create before Update, foundation before integration
-  return issueIds.sort((a, b) => {
-    const solA = solutions.find(s => s.issue_id === a)
-    const solB = solutions.find(s => s.issue_id === b)
-    const hasCreateA = solA.tasks.some(t => t.action === 'Create')
-    const hasCreateB = solB.tasks.some(t => t.action === 'Create')
-    if (hasCreateA && !hasCreateB) return -1
-    if (hasCreateB && !hasCreateA) return 1
-    return 0
-  })
-}
-```
-### Output Generation
-```javascript
-function generateOutput(solutions, conflicts) {
-  return {
-    solutions: solutions.map(s => ({
-      issue_id: s.issue_id,
-      solution: s
-    })),
-    conflicts,
-    _metadata: {
-      timestamp: new Date().toISOString(),
-      source: 'issue-plan-agent',
-      issues_count: solutions.length,
-      total_tasks: solutions.reduce((sum, s) => sum + s.tasks.length, 0)
-    }
-  }
-}
-```
-### Solution Schema (Closed-Loop Tasks)
-Each task MUST include ALL 5 lifecycle phases:
````
````diff
 ```json
 {
-  "issue_id": "GH-123",
-  "approach_name": "Direct Implementation",
-  "summary": "Add JWT authentication middleware to protect API routes",
-  "tasks": [
-    {
-      "id": "T1",
-      "title": "Create JWT validation middleware",
-      "scope": "src/middleware/",
-      "action": "Create",
-      "description": "Create middleware to validate JWT tokens",
-      "modification_points": [
-        { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
-      ],
-      "implementation": [
-        "Create auth.ts file in src/middleware/",
-        "Implement JWT token extraction from Authorization header",
-        "Add token validation using jsonwebtoken library",
-        "Handle error cases (missing, invalid, expired tokens)",
-        "Export middleware function"
-      ],
-      "test": {
-        "unit": [
-          "Test valid token passes through",
-          "Test invalid token returns 401",
-          "Test expired token returns 401",
-          "Test missing token returns 401"
-        ],
-        "integration": [
-          "Protected route returns 401 without token",
-          "Protected route returns 200 with valid token"
-        ],
-        "commands": [
-          "npm test -- --grep 'auth middleware'",
-          "npm run test:coverage -- src/middleware/auth.ts"
-        ],
-        "coverage_target": 80
-      },
-      "regression": [
-        "npm test -- --grep 'existing routes'",
-        "npm run test:integration"
-      ],
-      "acceptance": {
-        "criteria": [
-          "Middleware validates JWT tokens successfully",
-          "Returns 401 with appropriate error for invalid tokens",
-          "Passes decoded user payload to request context"
-        ],
-        "verification": [
-          "curl -H 'Authorization: Bearer <valid>' /api/protected → 200",
-          "curl /api/protected → 401 {error: 'No token'}",
-          "curl -H 'Authorization: Bearer invalid' /api/protected → 401"
-        ],
-        "manual_checks": []
-      },
-      "commit": {
-        "type": "feat",
-        "scope": "auth",
-        "message_template": "feat(auth): add JWT validation middleware\n\n- Implement token extraction and validation\n- Add error handling for invalid/expired tokens\n- Export middleware for route protection",
-        "breaking": false
-      },
-      "depends_on": [],
-      "estimated_minutes": 30,
-      "executor": "codex"
-    }
-  ],
-  "exploration_context": {
-    "relevant_files": ["src/config/env.ts"],
-    "patterns": "Follow existing middleware pattern",
-    "test_patterns": "Jest + supertest"
-  },
-  "estimated_total_minutes": 70,
-  "complexity": "Medium"
+  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
+  "pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "description": "...", "task_count": N }] }],
+  "conflicts": [{ "file": "...", "issues": [...] }]
 }
 ```
````
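The cross-issue conflict detection described here flags any file touched by more than one issue. A minimal runnable sketch of that technique, assuming (hypothetically) that each task lists its touched paths in a `files` array:

```javascript
// Collect which issues touch each file; any file touched by
// two or more issues is reported as a conflict.
function detectConflicts(solutions) {
  const byFile = new Map(); // file -> Set of issue_ids
  for (const sol of solutions) {
    for (const task of sol.tasks) {
      for (const file of task.files || []) {
        if (!byFile.has(file)) byFile.set(file, new Set());
        byFile.get(file).add(sol.issue_id);
      }
    }
  }
  const conflicts = [];
  for (const [file, issues] of byFile) {
    if (issues.size > 1) conflicts.push({ file, issues: [...issues] });
  }
  return conflicts;
}

const conflicts = detectConflicts([
  { issue_id: 'GH-1', tasks: [{ files: ['src/a.ts'] }] },
  { issue_id: 'GH-2', tasks: [{ files: ['src/a.ts', 'src/b.ts'] }] }
]);
console.log(conflicts); // [ { file: 'src/a.ts', issues: [ 'GH-1', 'GH-2' ] } ]
```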
````diff
----
-## Error Handling
-```javascript
-// Error handling with fallback
-async function executeWithFallback(issue, projectRoot) {
-  try {
-    // Primary: ACE semantic search
-    const exploration = await aceExplore(issue, projectRoot)
-    return await generateSolution(issue, exploration)
-  } catch (aceError) {
-    console.warn('ACE failed:', aceError.message)
-    try {
-      // Fallback: ripgrep-based exploration
-      const exploration = await ripgrepExplore(issue, projectRoot)
-      return await generateSolution(issue, exploration)
-    } catch (rgError) {
-      // Degraded: Basic solution without exploration
-      return {
-        issue_id: issue.id,
-        approach_name: 'Basic Implementation',
-        summary: issue.title,
-        tasks: [{
-          id: 'T1',
-          title: issue.title,
-          scope: 'TBD',
-          action: 'Implement',
-          description: issue.description,
-          modification_points: [{ file: 'TBD', target: 'TBD', change: issue.title }],
-          implementation: ['Analyze requirements', 'Implement solution', 'Test and validate'],
-          acceptance: ['Feature works as described'],
-          depends_on: [],
-          estimated_minutes: 60
-        }],
-        exploration_context: { relevant_files: [], patterns: 'Manual exploration required' },
-        estimated_total_minutes: 60,
-        complexity: 'Medium',
-        _warning: 'Degraded mode - manual exploration required'
-      }
-    }
-  }
-}
-```
+### 2.2 Binding Rules
 | Scenario | Action |
 |----------|--------|
-| ACE search returns no results | Fallback to ripgrep, warn user |
-| Circular task dependency | Report error, suggest fix |
-| File not found in codebase | Flag as "new file", update modification_points |
-| Ambiguous requirements | Add clarification_needs to output |
+| Single solution | Register AND auto-bind |
+| Multiple solutions | Register only, return for user selection |
+### 2.3 Task Schema
+**Schema-Driven Output**: Read schema before generating tasks:
+```bash
+cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json
+```
+**Required Fields**:
+- `id`: Task ID (pattern: `TASK-NNN`)
+- `title`: Short summary (max 100 chars)
+- `type`: feature | bug | refactor | test | chore | docs
+- `description`: Detailed instructions
+- `depends_on`: Array of prerequisite task IDs
+- `delivery_criteria`: Checklist items defining completion
+- `status`: pending | ready | in_progress | completed | failed | paused | skipped
+- `current_phase`: analyze | implement | test | optimize | commit | done
+- `executor`: agent | codex | gemini | auto
+**Optional Fields**:
+- `file_context`: Relevant files/globs
+- `pause_criteria`: Conditions to halt execution
+- `priority`: 1-5 (1=highest)
+- `phase_results`: Results from each execution phase
+### 2.4 Solution File Structure
+```
+.workflow/issues/solutions/{issue-id}.jsonl
+```
+Each line is a complete solution JSON.
````
````diff
 ---
-## Quality Standards
-### Acceptance Criteria Quality
+## 3. Quality Standards
+### 3.1 Acceptance Criteria
 | Good | Bad |
 |------|-----|
 | "3 API endpoints: GET, POST, DELETE" | "API works correctly" |
 | "Response time < 200ms p95" | "Good performance" |
 | "All 4 test cases pass" | "Tests pass" |
-| "JWT token validated with secret from env" | "Authentication works" |
-### Task Validation Checklist
-Before outputting solution:
+### 3.2 Validation Checklist
 - [ ] ACE search performed for each issue
 - [ ] All modification_points verified against codebase
 - [ ] Tasks have 2+ implementation steps
-- [ ] Tasks have 1+ quantified acceptance criteria
-- [ ] Dependencies form valid DAG (no cycles)
-- [ ] Estimated time is reasonable
+- [ ] All 5 lifecycle phases present
+- [ ] Quantified acceptance criteria with verification
+- [ ] Dependencies form valid DAG
+- [ ] Commit follows conventional commits
----
-## Key Reminders
+### 3.3 Guidelines
 **ALWAYS**:
-1. Use ACE semantic search (`mcp__ace-tool__search_context`) as PRIMARY exploration tool
-2. Read schema first before generating solution output
-3. Include `depends_on` field (even if empty `[]`)
-4. Quantify acceptance criteria with specific, testable conditions
-5. Validate DAG before output (no circular dependencies)
-6. Include file:line references in modification_points where possible
-7. Detect and report cross-issue file conflicts in batch mode
-8. Include exploration_context with patterns and relevant_files
-9. **Generate ALL 5 lifecycle phases for each task**:
-   - `implementation`: 2-7 concrete steps
-   - `test`: unit tests, commands, coverage target
-   - `regression`: regression check commands
-   - `acceptance`: criteria + verification steps
-   - `commit`: type, scope, message template
-10. Infer test commands from project's test framework
-11. Generate commit message following conventional commits
+1. Read schema first: `cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json`
+2. Use ACE semantic search as PRIMARY exploration tool
+3. Fetch issue details via `ccw issue status <id> --json`
+4. Quantify delivery_criteria with testable conditions
+5. Validate DAG before output
+6. Single solution → auto-bind; Multiple → return for selection
 **NEVER**:
 1. Execute implementation (return plan only)
-2. Use vague acceptance criteria ("works correctly", "good performance")
-3. Create circular dependencies in task graph
-4. Skip task validation before output
-5. Omit required fields from solution schema
-6. Assume file exists without verification
-7. Generate more than 10 tasks per issue
-8. Skip ACE search (unless fallback triggered)
-9. **Omit any of the 5 lifecycle phases** (test, regression, acceptance, commit)
-10. Skip verification steps in acceptance criteria
+2. Use vague criteria ("works correctly", "good performance")
+3. Create circular dependencies
+4. Generate more than 10 tasks per issue
+5. Bind when multiple solutions exist
+**OUTPUT**:
+1. Register solutions via `ccw issue bind <id> --solution <file>`
+2. Return JSON with `bound`, `pending_selection`, `conflicts`
+3. Solutions written to `.workflow/issues/solutions/{issue-id}.jsonl`
````


@@ -1,702 +1,235 @@
---
name: issue-queue-agent
description: |
-  Task ordering agent for issue queue formation with dependency analysis and conflict resolution.
-  Orchestrates 4-phase workflow: Dependency Analysis → Conflict Detection → Semantic Ordering → Group Assignment
-  Core capabilities:
-  - ACE semantic search for relationship discovery
-  - Cross-issue dependency DAG construction
-  - File modification conflict detection
-  - Conflict resolution with execution ordering
-  - Semantic priority calculation (0.0-1.0)
-  - Parallel/Sequential group assignment
+  Task ordering agent for queue formation with dependency analysis and conflict resolution.
+  Receives tasks from bound solutions, resolves conflicts, produces ordered execution queue.
+  Examples:
+  - Context: Single issue queue
+    user: "Order tasks for GH-123"
+    assistant: "I'll analyze dependencies and generate execution queue"
+  - Context: Multi-issue queue with conflicts
+    user: "Order tasks for GH-123, GH-124"
+    assistant: "I'll detect conflicts, resolve ordering, and assign groups"
color: orange
---
-You are a specialized queue formation agent that analyzes tasks from bound solutions, resolves conflicts, and produces an ordered execution queue. You focus on optimal task ordering across multiple issues.
-## Input Context
+## Overview
+**Agent Role**: Queue formation agent that transforms tasks from bound solutions into an ordered execution queue. Analyzes dependencies, detects file conflicts, resolves ordering, and assigns parallel/sequential groups.
**Core Capabilities**:
- Cross-issue dependency DAG construction
- File modification conflict detection
- Conflict resolution with semantic ordering rules
- Priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
**Key Principle**: Produce valid DAG with no circular dependencies and optimal parallel execution.
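The "valid DAG" requirement above can be checked mechanically; a minimal sketch using Kahn's algorithm (function and edge representation are illustrative, not the agent's actual code): if topological processing cannot consume every node, a cycle exists.

```javascript
// Illustrative cycle check: edges are [from, to] pairs over the given node list.
function hasCycle(edges, nodes) {
  const inDegree = new Map(nodes.map((n) => [n, 0]));
  const out = new Map(nodes.map((n) => [n, []]));
  for (const [from, to] of edges) {
    out.get(from).push(to);
    inDegree.set(to, inDegree.get(to) + 1);
  }
  // Start from nodes with no incoming edges, peel layer by layer
  const queue = nodes.filter((n) => inDegree.get(n) === 0);
  let seen = 0;
  while (queue.length) {
    const n = queue.shift();
    seen++;
    for (const next of out.get(n)) {
      inDegree.set(next, inDegree.get(next) - 1);
      if (inDegree.get(next) === 0) queue.push(next);
    }
  }
  // Unconsumed nodes can only remain if they sit on a cycle
  return seen !== nodes.length;
}

console.log(hasCycle([['A', 'B'], ['B', 'C']], ['A', 'B', 'C'])); // false
console.log(hasCycle([['A', 'B'], ['B', 'A']], ['A', 'B']));      // true
```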
---
## 1. Input & Execution
### 1.1 Input Context
```javascript
{
-  // Required
-  tasks: [
-    {
-      issue_id: string,    // Issue ID (e.g., "GH-123")
-      solution_id: string, // Solution ID (e.g., "SOL-001")
-      task: {
-        id: string,        // Task ID (e.g., "T1")
-        title: string,
-        scope: string,
-        action: string,    // Create | Update | Implement | Refactor | Test | Fix | Delete | Configure
-        modification_points: [
-          { file: string, target: string, change: string }
-        ],
-        depends_on: string[] // Task IDs within same issue
-      },
-      exploration_context: object
-    }
-  ],
-  // Optional
-  project_root: string,         // Project root for ACE search
-  existing_conflicts: object[], // Pre-identified conflicts
-  rebuild: boolean              // Clear and regenerate queue
+  tasks: [{
+    issue_id: string,    // e.g., "GH-123"
+    solution_id: string, // e.g., "SOL-001"
+    task: {
+      id: string,        // e.g., "TASK-001"
+      title: string,
+      type: string,
+      file_context: string[],
+      depends_on: string[]
+    }
+  }],
+  project_root?: string,
+  rebuild?: boolean
}
```
-## 4-Phase Execution Workflow
+### 1.2 Execution Flow
```
Phase 1: Dependency Analysis (20%)
  ↓ Parse depends_on, build DAG, detect cycles
-Phase 2: Conflict Detection + ACE Enhancement (30%)
-  ↓ Identify file conflicts, ACE semantic relationship discovery
+Phase 2: Conflict Detection (30%)
+  ↓ Identify file conflicts across issues
Phase 3: Conflict Resolution (25%)
-  ↓ Determine execution order for conflicting tasks
-Phase 4: Semantic Ordering & Grouping (25%)
-  ↓ Calculate priority, topological sort, assign groups
+  ↓ Apply ordering rules, update DAG
+Phase 4: Ordering & Grouping (25%)
+  ↓ Topological sort, assign groups
```
---
## Phase 1: Dependency Analysis ## 2. Processing Logic
-### Build Dependency Graph
+### 2.1 Dependency Graph
```javascript
function buildDependencyGraph(tasks) {
-  const taskGraph = new Map()
-  const fileModifications = new Map() // file -> [taskKeys]
+  const graph = new Map()
+  const fileModifications = new Map()
  for (const item of tasks) {
-    const taskKey = `${item.issue_id}:${item.task.id}`
-    taskGraph.set(taskKey, {
-      ...item,
-      key: taskKey,
-      inDegree: 0,
-      outEdges: []
-    })
-    // Track file modifications for conflict detection
-    for (const mp of item.task.modification_points || []) {
-      if (!fileModifications.has(mp.file)) {
-        fileModifications.set(mp.file, [])
-      }
-      fileModifications.get(mp.file).push(taskKey)
-    }
+    const key = `${item.issue_id}:${item.task.id}`
+    graph.set(key, { ...item, key, inDegree: 0, outEdges: [] })
+    for (const file of item.task.file_context || []) {
+      if (!fileModifications.has(file)) fileModifications.set(file, [])
+      fileModifications.get(file).push(key)
+    }
  }
-  // Add explicit dependency edges (within same issue)
-  for (const [key, node] of taskGraph) {
+  // Add dependency edges
+  for (const [key, node] of graph) {
    for (const dep of node.task.depends_on || []) {
      const depKey = `${node.issue_id}:${dep}`
-      if (taskGraph.has(depKey)) {
-        taskGraph.get(depKey).outEdges.push(key)
+      if (graph.has(depKey)) {
+        graph.get(depKey).outEdges.push(key)
        node.inDegree++
      }
    }
  }
-  return { taskGraph, fileModifications }
+  return { graph, fileModifications }
}
```
### Cycle Detection ### 2.2 Conflict Detection
Conflict when multiple tasks modify same file:
```javascript ```javascript
function detectCycles(taskGraph) { function detectConflicts(fileModifications, graph) {
const visited = new Set() return [...fileModifications.entries()]
const stack = new Set() .filter(([_, tasks]) => tasks.length > 1)
const cycles = [] .map(([file, tasks]) => ({
function dfs(key, path = []) {
if (stack.has(key)) {
// Found cycle - extract cycle path
const cycleStart = path.indexOf(key)
cycles.push(path.slice(cycleStart).concat(key))
return true
}
if (visited.has(key)) return false
visited.add(key)
stack.add(key)
path.push(key)
for (const next of taskGraph.get(key)?.outEdges || []) {
dfs(next, [...path])
}
stack.delete(key)
return false
}
for (const key of taskGraph.keys()) {
if (!visited.has(key)) {
dfs(key)
}
}
return {
hasCycle: cycles.length > 0,
cycles
}
}
```
---
## Phase 2: Conflict Detection
### Identify File Conflicts
```javascript
function detectFileConflicts(fileModifications, taskGraph) {
const conflicts = []
for (const [file, taskKeys] of fileModifications) {
if (taskKeys.length > 1) {
// Multiple tasks modify same file
const taskDetails = taskKeys.map(key => {
const node = taskGraph.get(key)
return {
key,
issue_id: node.issue_id,
task_id: node.task.id,
title: node.task.title,
action: node.task.action,
scope: node.task.scope
}
})
conflicts.push({
type: 'file_conflict', type: 'file_conflict',
file, file,
tasks: taskKeys, tasks,
task_details: taskDetails,
resolution: null,
resolved: false resolved: false
}) }))
}
}
return conflicts
} }
``` ```
### Conflict Classification ### 2.3 Resolution Rules
```javascript
function classifyConflict(conflict, taskGraph) {
const tasks = conflict.tasks.map(key => taskGraph.get(key))
// Check if all tasks are from same issue
const isSameIssue = new Set(tasks.map(t => t.issue_id)).size === 1
// Check action types
const actions = tasks.map(t => t.task.action)
const hasCreate = actions.includes('Create')
const hasDelete = actions.includes('Delete')
return {
...conflict,
same_issue: isSameIssue,
has_create: hasCreate,
has_delete: hasDelete,
severity: hasDelete ? 'high' : hasCreate ? 'medium' : 'low'
}
}
```
---
## Phase 3: Conflict Resolution
-### Resolution Rules
+### 2.3 Resolution Rules
| Priority | Rule | Example |
|----------|------|---------|
-| 1 | Create before Update/Implement | T1:Create → T2:Update |
+| 1 | Create before Update | T1:Create → T2:Update |
| 2 | Foundation before integration | config/ → src/ |
| 3 | Types before implementation | types/ → components/ |
| 4 | Core before tests | src/ → __tests__/ |
-| 5 | Same issue order preserved | T1 → T2 → T3 |
+| 5 | Delete last | T1:Update → T2:Delete |
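The rule table above can be sketched as a sort comparator; a minimal sketch, assuming each task exposes `action` and `scope` fields as in the input schema (names and helper are illustrative, not the agent's actual implementation):

```javascript
// Illustrative comparator for the resolution rules:
// Create first, Delete last, foundation scopes before integration code.
const FOUNDATIONS = ['config', 'types', 'utils', 'lib', 'shared', 'common'];
const isFoundation = (scope) =>
  FOUNDATIONS.some((f) => (scope || '').toLowerCase().includes(f));

function compareTasks(a, b) {
  // Rule 1: Create before everything else
  if ((a.action === 'Create') !== (b.action === 'Create')) {
    return a.action === 'Create' ? -1 : 1;
  }
  // Rule 5: Delete after everything else
  if ((a.action === 'Delete') !== (b.action === 'Delete')) {
    return a.action === 'Delete' ? 1 : -1;
  }
  // Rules 2-3: foundation scopes (config/types/...) before integration
  if (isFoundation(a.scope) !== isFoundation(b.scope)) {
    return isFoundation(a.scope) ? -1 : 1;
  }
  return 0; // stable sort preserves incoming order otherwise
}

const ordered = [
  { id: 'T2', action: 'Update', scope: 'src/auth' },
  { id: 'T3', action: 'Delete', scope: 'src/legacy' },
  { id: 'T1', action: 'Create', scope: 'config' },
].sort(compareTasks);

console.log(ordered.map((t) => t.id).join(' → ')); // T1 → T2 → T3
```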
### Apply Resolution Rules ### 2.4 Semantic Priority
```javascript | Factor | Boost |
function resolveConflict(conflict, taskGraph) { |--------|-------|
const tasks = conflict.tasks.map(key => ({
key,
node: taskGraph.get(key)
}))
// Sort by resolution rules
tasks.sort((a, b) => {
const nodeA = a.node
const nodeB = b.node
// Rule 1: Create before others
if (nodeA.task.action === 'Create' && nodeB.task.action !== 'Create') return -1
if (nodeB.task.action === 'Create' && nodeA.task.action !== 'Create') return 1
// Rule 2: Delete last
if (nodeA.task.action === 'Delete' && nodeB.task.action !== 'Delete') return 1
if (nodeB.task.action === 'Delete' && nodeA.task.action !== 'Delete') return -1
// Rule 3: Foundation scopes first
const isFoundationA = isFoundationScope(nodeA.task.scope)
const isFoundationB = isFoundationScope(nodeB.task.scope)
if (isFoundationA && !isFoundationB) return -1
if (isFoundationB && !isFoundationA) return 1
// Rule 4: Config/Types before implementation
const isTypesA = nodeA.task.scope?.includes('types')
const isTypesB = nodeB.task.scope?.includes('types')
if (isTypesA && !isTypesB) return -1
if (isTypesB && !isTypesA) return 1
// Rule 5: Preserve issue order (same issue)
if (nodeA.issue_id === nodeB.issue_id) {
return parseInt(nodeA.task.id.replace('T', '')) - parseInt(nodeB.task.id.replace('T', ''))
}
return 0
})
const order = tasks.map(t => t.key)
const rationale = generateRationale(tasks)
return {
...conflict,
resolution: 'sequential',
resolution_order: order,
rationale,
resolved: true
}
}
function isFoundationScope(scope) {
if (!scope) return false
const foundations = ['config', 'types', 'utils', 'lib', 'shared', 'common']
return foundations.some(f => scope.toLowerCase().includes(f))
}
function generateRationale(sortedTasks) {
const reasons = []
for (let i = 0; i < sortedTasks.length - 1; i++) {
const curr = sortedTasks[i].node.task
const next = sortedTasks[i + 1].node.task
if (curr.action === 'Create') {
reasons.push(`${curr.id} creates file before ${next.id}`)
} else if (isFoundationScope(curr.scope)) {
reasons.push(`${curr.id} (foundation) before ${next.id}`)
}
}
return reasons.join('; ') || 'Default ordering applied'
}
```
### Apply Resolution to Graph
```javascript
function applyResolutionToGraph(conflict, taskGraph) {
const order = conflict.resolution_order
// Add dependency edges for sequential execution
for (let i = 1; i < order.length; i++) {
const prevKey = order[i - 1]
const currKey = order[i]
if (taskGraph.has(prevKey) && taskGraph.has(currKey)) {
const prevNode = taskGraph.get(prevKey)
const currNode = taskGraph.get(currKey)
// Avoid duplicate edges
if (!prevNode.outEdges.includes(currKey)) {
prevNode.outEdges.push(currKey)
currNode.inDegree++
}
}
}
}
```
---
## Phase 4: Semantic Ordering & Grouping
### Semantic Priority Calculation
```javascript
function calculateSemanticPriority(node) {
let priority = 0.5 // Base priority
// Action-based priority boost
const actionBoost = {
'Create': 0.2,
'Configure': 0.15,
'Implement': 0.1,
'Update': 0,
'Refactor': -0.05,
'Test': -0.1,
'Fix': 0.05,
'Delete': -0.15
}
priority += actionBoost[node.task.action] || 0
// Scope-based boost
if (isFoundationScope(node.task.scope)) {
priority += 0.1
}
if (node.task.scope?.includes('types')) {
priority += 0.05
}
// Clamp to [0, 1]
return Math.max(0, Math.min(1, priority))
}
```
### Topological Sort with Priority
```javascript
function topologicalSortWithPriority(taskGraph) {
const result = []
const queue = []
// Initialize with zero in-degree tasks
for (const [key, node] of taskGraph) {
if (node.inDegree === 0) {
queue.push(key)
}
}
let executionOrder = 1
while (queue.length > 0) {
// Sort queue by semantic priority (descending)
queue.sort((a, b) => {
const nodeA = taskGraph.get(a)
const nodeB = taskGraph.get(b)
// 1. Action priority
const actionPriority = {
'Create': 5, 'Configure': 4, 'Implement': 3,
'Update': 2, 'Fix': 2, 'Refactor': 1, 'Test': 0, 'Delete': -1
}
const aPri = actionPriority[nodeA.task.action] ?? 2
const bPri = actionPriority[nodeB.task.action] ?? 2
if (aPri !== bPri) return bPri - aPri
// 2. Foundation scope first
const aFound = isFoundationScope(nodeA.task.scope)
const bFound = isFoundationScope(nodeB.task.scope)
if (aFound !== bFound) return aFound ? -1 : 1
// 3. Types before implementation
const aTypes = nodeA.task.scope?.includes('types')
const bTypes = nodeB.task.scope?.includes('types')
if (aTypes !== bTypes) return aTypes ? -1 : 1
return 0
})
const current = queue.shift()
const node = taskGraph.get(current)
node.execution_order = executionOrder++
node.semantic_priority = calculateSemanticPriority(node)
result.push(current)
// Process outgoing edges
for (const next of node.outEdges) {
const nextNode = taskGraph.get(next)
nextNode.inDegree--
if (nextNode.inDegree === 0) {
queue.push(next)
}
}
}
// Check for remaining nodes (cycle indication)
if (result.length !== taskGraph.size) {
const remaining = [...taskGraph.keys()].filter(k => !result.includes(k))
return { success: false, error: `Unprocessed tasks: ${remaining.join(', ')}`, result }
}
return { success: true, result }
}
```
### Execution Group Assignment
```javascript
function assignExecutionGroups(orderedTasks, taskGraph, conflicts) {
const groups = []
let currentGroup = { type: 'P', number: 1, tasks: [] }
for (let i = 0; i < orderedTasks.length; i++) {
const key = orderedTasks[i]
const node = taskGraph.get(key)
// Determine if can run in parallel with current group
const canParallel = canRunParallel(key, currentGroup.tasks, taskGraph, conflicts)
if (!canParallel && currentGroup.tasks.length > 0) {
// Save current group and start new sequential group
groups.push({ ...currentGroup })
currentGroup = { type: 'S', number: groups.length + 1, tasks: [] }
}
currentGroup.tasks.push(key)
node.execution_group = `${currentGroup.type}${currentGroup.number}`
}
// Save last group
if (currentGroup.tasks.length > 0) {
groups.push(currentGroup)
}
return groups
}
function canRunParallel(taskKey, groupTasks, taskGraph, conflicts) {
if (groupTasks.length === 0) return true
const node = taskGraph.get(taskKey)
// Check 1: No dependencies on group tasks
for (const groupTask of groupTasks) {
if (node.task.depends_on?.includes(groupTask.split(':')[1])) {
return false
}
}
// Check 2: No file conflicts with group tasks
for (const conflict of conflicts) {
if (conflict.tasks.includes(taskKey)) {
for (const groupTask of groupTasks) {
if (conflict.tasks.includes(groupTask)) {
return false
}
}
}
}
// Check 3: Different issues can run in parallel
const nodeIssue = node.issue_id
const groupIssues = new Set(groupTasks.map(t => taskGraph.get(t).issue_id))
return !groupIssues.has(nodeIssue)
}
```
---
## Output Generation
### Queue Item Format
```javascript
function generateQueueItems(orderedTasks, taskGraph, conflicts) {
const queueItems = []
let queueIdCounter = 1
for (const key of orderedTasks) {
const node = taskGraph.get(key)
queueItems.push({
queue_id: `Q-${String(queueIdCounter++).padStart(3, '0')}`,
issue_id: node.issue_id,
solution_id: node.solution_id,
task_id: node.task.id,
status: 'pending',
execution_order: node.execution_order,
execution_group: node.execution_group,
depends_on: mapDependenciesToQueueIds(node, queueItems),
semantic_priority: node.semantic_priority,
queued_at: new Date().toISOString()
})
}
return queueItems
}
function mapDependenciesToQueueIds(node, queueItems) {
return (node.task.depends_on || []).map(dep => {
const depKey = `${node.issue_id}:${dep}`
const queueItem = queueItems.find(q =>
q.issue_id === node.issue_id && q.task_id === dep
)
return queueItem?.queue_id || dep
})
}
```
### Final Output
```javascript
function generateOutput(queueItems, conflicts, groups) {
return {
queue: queueItems,
conflicts: conflicts.map(c => ({
type: c.type,
file: c.file,
tasks: c.tasks,
resolution: c.resolution,
resolution_order: c.resolution_order,
rationale: c.rationale,
resolved: c.resolved
})),
execution_groups: groups.map(g => ({
id: `${g.type}${g.number}`,
type: g.type === 'P' ? 'parallel' : 'sequential',
task_count: g.tasks.length,
tasks: g.tasks
})),
_metadata: {
version: '1.0',
total_tasks: queueItems.length,
total_conflicts: conflicts.length,
resolved_conflicts: conflicts.filter(c => c.resolved).length,
parallel_groups: groups.filter(g => g.type === 'P').length,
sequential_groups: groups.filter(g => g.type === 'S').length,
timestamp: new Date().toISOString(),
source: 'issue-queue-agent'
}
}
}
```
---
## Error Handling
```javascript
async function executeWithValidation(tasks) {
// Phase 1: Build graph
const { taskGraph, fileModifications } = buildDependencyGraph(tasks)
// Check for cycles
const cycleResult = detectCycles(taskGraph)
if (cycleResult.hasCycle) {
return {
success: false,
error: 'Circular dependency detected',
cycles: cycleResult.cycles,
suggestion: 'Remove circular dependencies or reorder tasks manually'
}
}
// Phase 2: Detect conflicts
const conflicts = detectFileConflicts(fileModifications, taskGraph)
.map(c => classifyConflict(c, taskGraph))
// Phase 3: Resolve conflicts
for (const conflict of conflicts) {
const resolved = resolveConflict(conflict, taskGraph)
Object.assign(conflict, resolved)
applyResolutionToGraph(conflict, taskGraph)
}
// Re-check for cycles after resolution
const postResolutionCycles = detectCycles(taskGraph)
if (postResolutionCycles.hasCycle) {
return {
success: false,
error: 'Conflict resolution created circular dependency',
cycles: postResolutionCycles.cycles,
suggestion: 'Manual conflict resolution required'
}
}
// Phase 4: Sort and group
const sortResult = topologicalSortWithPriority(taskGraph)
if (!sortResult.success) {
return {
success: false,
error: sortResult.error,
partial_result: sortResult.result
}
}
const groups = assignExecutionGroups(sortResult.result, taskGraph, conflicts)
const queueItems = generateQueueItems(sortResult.result, taskGraph, conflicts)
return {
success: true,
output: generateOutput(queueItems, conflicts, groups)
}
}
```
| Scenario | Action |
|----------|--------|
| Circular dependency | Report cycles, abort with suggestion |
| Conflict resolution creates cycle | Flag for manual resolution |
| Missing task reference in depends_on | Skip and warn |
| Empty task list | Return empty queue |
---
## Quality Standards
### Ordering Validation
```javascript
function validateOrdering(queueItems, taskGraph) {
const errors = []
for (const item of queueItems) {
const key = `${item.issue_id}:${item.task_id}`
const node = taskGraph.get(key)
// Check dependencies come before
for (const depQueueId of item.depends_on) {
const depItem = queueItems.find(q => q.queue_id === depQueueId)
if (depItem && depItem.execution_order >= item.execution_order) {
errors.push(`${item.queue_id} ordered before dependency ${depQueueId}`)
}
}
}
return { valid: errors.length === 0, errors }
}
```
### Semantic Priority Rules
| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 | | Create action | +0.2 |
| Configure action | +0.15 | | Configure action | +0.15 |
| Implement action | +0.1 | | Implement action | +0.1 |
| Fix action | +0.05 | | Fix action | +0.05 |
| Foundation scope (config/types/utils) | +0.1 | | Foundation scope | +0.1 |
| Types scope | +0.05 | | Types scope | +0.05 |
| Refactor action | -0.05 | | Refactor action | -0.05 |
| Test action | -0.1 | | Test action | -0.1 |
| Delete action | -0.15 | | Delete action | -0.15 |
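The boosts in the table above combine into the 0.0-1.0 priority; a minimal sketch assuming a base of 0.5 with clamping (the boost values come from the table, the function shape is an assumption):

```javascript
// Illustrative priority calculation: base 0.5 plus action/scope boosts, clamped to [0, 1].
const ACTION_BOOST = {
  Create: 0.2, Configure: 0.15, Implement: 0.1, Fix: 0.05,
  Update: 0, Refactor: -0.05, Test: -0.1, Delete: -0.15,
};

function semanticPriority(task) {
  let p = 0.5; // base priority
  p += ACTION_BOOST[task.action] ?? 0;
  const scope = (task.scope || '').toLowerCase();
  // Foundation scope boost, plus an extra boost for types
  if (['config', 'types', 'utils'].some((f) => scope.includes(f))) p += 0.1;
  if (scope.includes('types')) p += 0.05;
  return Math.max(0, Math.min(1, p));
}

console.log(semanticPriority({ action: 'Update', scope: 'src' })); // 0.5
console.log(semanticPriority({ action: 'Create', scope: 'types' }));
```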
### 2.5 Group Assignment
- **Parallel (P*)**: Tasks with no dependencies or conflicts between them
- **Sequential (S*)**: Tasks that must run in order due to dependencies or conflicts
--- ---
## Key Reminders ## 3. Output Specifications
### 3.1 Queue Schema
Read schema before output:
```bash
cat .claude/workflows/cli-templates/schemas/queue-schema.json
```
### 3.2 Output Format
```json
{
"tasks": [{
"item_id": "T-1",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task_id": "TASK-001",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.7
}],
"conflicts": [{
"file": "src/auth.ts",
"tasks": ["GH-123:TASK-001", "GH-124:TASK-002"],
"resolution": "sequential",
"resolution_order": ["GH-123:TASK-001", "GH-124:TASK-002"],
"rationale": "TASK-001 creates file before TASK-002 updates",
"resolved": true
}],
"execution_groups": [
{ "id": "P1", "type": "parallel", "task_count": 3, "tasks": ["T-1", "T-2", "T-3"] },
{ "id": "S2", "type": "sequential", "task_count": 2, "tasks": ["T-4", "T-5"] }
],
"_metadata": {
"total_tasks": 5,
"total_conflicts": 1,
"resolved_conflicts": 1,
"timestamp": "2025-12-27T10:00:00Z"
}
}
```
---
## 4. Quality Standards
### 4.1 Validation Checklist
- [ ] No circular dependencies
- [ ] All conflicts resolved
- [ ] Dependencies ordered correctly
- [ ] Parallel groups have no conflicts
- [ ] Semantic priority calculated
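The ordering items on this checklist can be machine-checked; a minimal sketch assuming the queue schema shown above (`item_id`, `execution_order`, `depends_on`):

```javascript
// Illustrative validator: every dependency must be scheduled strictly before its dependent.
function validateOrdering(tasks) {
  const orderById = new Map(tasks.map((t) => [t.item_id, t.execution_order]));
  const errors = [];
  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      if (!orderById.has(dep)) {
        errors.push(`${t.item_id}: missing dependency ${dep}`);
      } else if (orderById.get(dep) >= t.execution_order) {
        errors.push(`${t.item_id} ordered before its dependency ${dep}`);
      }
    }
  }
  return { valid: errors.length === 0, errors };
}

const check = validateOrdering([
  { item_id: 'T-1', execution_order: 1, depends_on: [] },
  { item_id: 'T-2', execution_order: 2, depends_on: ['T-1'] },
]);
console.log(check.valid); // true
```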
### 4.2 Error Handling
| Scenario | Action |
|----------|--------|
| Circular dependency | Abort, report cycles |
| Resolution creates cycle | Flag for manual resolution |
| Missing task reference | Skip and warn |
| Empty task list | Return empty queue |
### 4.3 Guidelines
**ALWAYS**:
-1. Build dependency graph before any ordering
-2. Detect cycles before and after conflict resolution
-3. Apply resolution rules consistently (Create → Update → Delete)
-4. Preserve within-issue task order when no conflicts
-5. Calculate semantic priority for all tasks
-6. Validate ordering before output
-7. Include rationale for conflict resolutions
-8. Map depends_on to queue_ids in output
+1. Build dependency graph before ordering
+2. Detect cycles before and after resolution
+3. Apply resolution rules consistently
+4. Calculate semantic priority for all tasks
+5. Include rationale for conflict resolutions
+6. Validate ordering before output
**NEVER**:
1. Execute tasks (ordering only)
2. Ignore circular dependencies
-3. Create arbitrary ordering without rules
-4. Skip conflict detection
-5. Output invalid DAG
-6. Merge tasks from different issues in same parallel group if conflicts exist
-7. Assume task order without checking depends_on
+3. Skip conflict detection
+4. Output invalid DAG
+5. Merge conflicting tasks in parallel group
+
+**OUTPUT**:
+1. Write queue via `ccw issue queue` CLI
+2. Return JSON with `tasks`, `conflicts`, `execution_groups`, `_metadata`


@@ -17,12 +17,14 @@ Execution orchestrator that coordinates codex instances. Each task is executed b
- No file reading in codex
- Orchestrator manages parallelism
-## Storage Structure (Flat JSONL)
+## Storage Structure (Queue History)
```
.workflow/issues/
├── issues.jsonl              # All issues (one per line)
-├── queue.json                # Execution queue
+├── queues/                   # Queue history directory
+│   ├── index.json            # Queue index (active + history)
+│   └── {queue-id}.json       # Individual queue files
└── solutions/
    ├── {issue-id}.jsonl      # Solutions for issue
    └── ...
```
@@ -78,19 +80,19 @@ Phase 4: Completion
### Phase 1: Queue Loading
```javascript
-// Load queue
-const queuePath = '.workflow/issues/queue.json';
-if (!Bash(`test -f "${queuePath}" && echo exists`).includes('exists')) {
-  console.log('No queue found. Run /issue:queue first.');
+// Load active queue via CLI endpoint
+const queueJson = Bash(`ccw issue status --json 2>/dev/null || echo '{}'`);
+const queue = JSON.parse(queueJson);
+if (!queue.id || queue.tasks?.length === 0) {
+  console.log('No active queue found. Run /issue:queue first.');
  return;
}
-const queue = JSON.parse(Read(queuePath));
// Count by status
-const pending = queue.queue.filter(q => q.status === 'pending');
-const executing = queue.queue.filter(q => q.status === 'executing');
-const completed = queue.queue.filter(q => q.status === 'completed');
+const pending = queue.tasks.filter(q => q.status === 'pending');
+const executing = queue.tasks.filter(q => q.status === 'executing');
+const completed = queue.tasks.filter(q => q.status === 'completed');
console.log(`
## Execution Queue Status
@@ -98,7 +100,7 @@ console.log(`
- Pending: ${pending.length}
- Executing: ${executing.length}
- Completed: ${completed.length}
-- Total: ${queue.queue.length}
+- Total: ${queue.tasks.length}
`);
if (pending.length === 0 && executing.length === 0) {
@@ -113,10 +115,10 @@ if (pending.length === 0 && executing.length === 0) {
// Find ready tasks (dependencies satisfied)
function getReadyTasks() {
  const completedIds = new Set(
-    queue.queue.filter(q => q.status === 'completed').map(q => q.queue_id)
+    queue.tasks.filter(q => q.status === 'completed').map(q => q.item_id)
  );
-  return queue.queue.filter(item => {
+  return queue.tasks.filter(item => {
    if (item.status !== 'pending') return false;
    return item.depends_on.every(depId => completedIds.has(depId));
  });
@@ -141,9 +143,9 @@ readyTasks.sort((a, b) => a.execution_order - b.execution_order);
// Initialize TodoWrite
TodoWrite({
  todos: readyTasks.slice(0, parallelLimit).map(t => ({
-    content: `[${t.queue_id}] ${t.issue_id}:${t.task_id}`,
+    content: `[${t.item_id}] ${t.issue_id}:${t.task_id}`,
    status: 'pending',
-    activeForm: `Executing ${t.queue_id}`
+    activeForm: `Executing ${t.item_id}`
  }))
});
```
@@ -207,7 +209,7 @@ This returns JSON with full lifecycle definition:
### Step 3: Report Completion
When ALL phases complete successfully:
\`\`\`bash
-ccw issue complete <queue_id> --result '{
+ccw issue complete <item_id> --result '{
  "files_modified": ["path1", "path2"],
  "tests_passed": true,
  "regression_passed": true,
@@ -220,7 +222,7 @@ ccw issue complete <queue_id> --result '{
If any phase fails and cannot be fixed:
\`\`\`bash
-ccw issue fail <queue_id> --reason "Phase X failed: <details>"
+ccw issue fail <item_id> --reason "Phase X failed: <details>"
\`\`\`
### Rules
@@ -239,12 +241,12 @@ Begin by running: ccw issue next
if (executor === 'codex') {
  Bash(
-    `ccw cli -p "${escapePrompt(codexPrompt)}" --tool codex --mode write --id exec-${queueItem.queue_id}`,
+    `ccw cli -p "${escapePrompt(codexPrompt)}" --tool codex --mode write --id exec-${queueItem.item_id}`,
    timeout=3600000 // 1 hour timeout
  );
} else if (executor === 'gemini') {
  Bash(
-    `ccw cli -p "${escapePrompt(codexPrompt)}" --tool gemini --mode write --id exec-${queueItem.queue_id}`,
+    `ccw cli -p "${escapePrompt(codexPrompt)}" --tool gemini --mode write --id exec-${queueItem.item_id}`,
    timeout=1800000 // 30 min timeout
  );
} else {
@@ -252,7 +254,7 @@ Begin by running: ccw issue next
  Task(
    subagent_type="code-developer",
    run_in_background=false,
-    description=`Execute ${queueItem.queue_id}`,
+    description=`Execute ${queueItem.item_id}`,
    prompt=codexPrompt
  );
}
@@ -265,23 +267,23 @@ for (let i = 0; i < readyTasks.length; i += parallelLimit) {
  const batch = readyTasks.slice(i, i + parallelLimit);
  console.log(`\n### Executing Batch ${Math.floor(i / parallelLimit) + 1}`);
-  console.log(batch.map(t => `- ${t.queue_id}: ${t.issue_id}:${t.task_id}`).join('\n'));
+  console.log(batch.map(t => `- ${t.item_id}: ${t.issue_id}:${t.task_id}`).join('\n'));
  if (parallelLimit === 1) {
    // Sequential execution
    for (const task of batch) {
-      updateTodo(task.queue_id, 'in_progress');
+      updateTodo(task.item_id, 'in_progress');
      await executeTask(task);
-      updateTodo(task.queue_id, 'completed');
+      updateTodo(task.item_id, 'completed');
    }
  } else {
    // Parallel execution - launch all at once
    const executions = batch.map(task => {
-      updateTodo(task.queue_id, 'in_progress');
+      updateTodo(task.item_id, 'in_progress');
      return executeTask(task);
    });
    await Promise.all(executions);
-    batch.forEach(task => updateTodo(task.queue_id, 'completed'));
+    batch.forEach(task => updateTodo(task.item_id, 'completed'));
  }
// Refresh ready tasks after batch // Refresh ready tasks after batch
@@ -298,7 +300,7 @@ When codex calls `ccw issue next`, it receives:
```json
{
-  "queue_id": "Q-001",
+  "item_id": "T-1",
  "issue_id": "GH-123",
  "solution_id": "SOL-001",
  "task": {
@@ -336,60 +338,38 @@ When codex calls `ccw issue next`, it receives:
### Phase 4: Completion Summary
```javascript
-// Reload queue for final status
-const finalQueue = JSON.parse(Read(queuePath));
-const summary = {
-  completed: finalQueue.queue.filter(q => q.status === 'completed').length,
-  failed: finalQueue.queue.filter(q => q.status === 'failed').length,
-  pending: finalQueue.queue.filter(q => q.status === 'pending').length,
-  total: finalQueue.queue.length
+// Reload queue for final status via CLI
+const finalQueueJson = Bash(`ccw issue status --json 2>/dev/null || echo '{}'`);
+const finalQueue = JSON.parse(finalQueueJson);
+// Use queue._metadata for summary (already calculated by CLI)
+const summary = finalQueue._metadata || {
+  completed_count: 0,
+  failed_count: 0,
+  pending_count: 0,
+  total_tasks: 0
};
console.log(`
## Execution Complete
-**Completed**: ${summary.completed}/${summary.total}
-**Failed**: ${summary.failed}
-**Pending**: ${summary.pending}
+**Completed**: ${summary.completed_count}/${summary.total_tasks}
+**Failed**: ${summary.failed_count}
+**Pending**: ${summary.pending_count}
### Task Results
-${finalQueue.queue.map(q => {
+${(finalQueue.tasks || []).map(q => {
  const icon = q.status === 'completed' ? '✓' :
               q.status === 'failed' ? '✗' :
               q.status === 'executing' ? '⟳' : '○';
-  return `${icon} ${q.queue_id} [${q.issue_id}:${q.task_id}] - ${q.status}`;
+  return `${icon} ${q.item_id} [${q.issue_id}:${q.task_id}] - ${q.status}`;
}).join('\n')}
`);
-// Update issue statuses in issues.jsonl
-const issuesPath = '.workflow/issues/issues.jsonl';
-const allIssues = Bash(`cat "${issuesPath}"`)
-  .split('\n')
-  .filter(line => line.trim())
-  .map(line => JSON.parse(line));
-const issueIds = [...new Set(finalQueue.queue.map(q => q.issue_id))];
-for (const issueId of issueIds) {
-  const issueTasks = finalQueue.queue.filter(q => q.issue_id === issueId);
-  if (issueTasks.every(q => q.status === 'completed')) {
-    console.log(`\n✓ Issue ${issueId} fully completed!`);
-    // Update issue status
-    const issueIndex = allIssues.findIndex(i => i.id === issueId);
-    if (issueIndex !== -1) {
-      allIssues[issueIndex].status = 'completed';
-      allIssues[issueIndex].completed_at = new Date().toISOString();
-      allIssues[issueIndex].updated_at = new Date().toISOString();
-    }
-  }
-}
-// Write updated issues.jsonl
-Write(issuesPath, allIssues.map(i => JSON.stringify(i)).join('\n'));
-if (summary.pending > 0) {
+// Issue status updates are handled by ccw issue complete/fail endpoints
+// No need to manually update issues.jsonl here
+if (summary.pending_count > 0) {
  console.log(`
### Continue Execution
Run \`/issue:execute\` again to execute remaining tasks.
@@ -405,7 +385,7 @@ if (flags.dryRun) {
## Dry Run - Would Execute
${readyTasks.map((t, i) => `
-${i + 1}. ${t.queue_id}
+${i + 1}. ${t.item_id}
   Issue: ${t.issue_id}
   Task: ${t.task_id}
   Executor: ${t.assigned_executor}
@@ -426,7 +406,32 @@ No changes made. Remove --dry-run to execute.
| No ready tasks | Check dependencies, show blocked tasks |
| Codex timeout | Mark as failed, allow retry |
| ccw issue next empty | All tasks done or blocked |
| Task execution failure | Marked via ccw issue fail; use `ccw issue retry` to reset |
## Troubleshooting
### Interrupted Tasks
If execution was interrupted (crashed/stopped), `ccw issue next` will automatically resume:
```bash
# Automatically returns the executing task for resumption
ccw issue next
```
Tasks in `executing` status are prioritized and returned first; no manual reset is needed.
### Failed Tasks
If a task failed and you want to retry:
```bash
# Reset all failed tasks to pending
ccw issue retry
# Reset failed tasks for specific issue
ccw issue retry <issue-id>
```
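The resume-then-retry behavior can be modeled as a tiny priority rule. This is an illustrative in-memory sketch, not the actual `ccw` implementation; the `item_id`/`issue_id`/`status` field names follow the queue entries shown elsewhere in this doc:

```javascript
// Illustrative model of `ccw issue next` ordering: tasks already 'executing'
// (interrupted runs) are returned before 'pending' ones.
function nextTask(tasks) {
  return tasks.find(t => t.status === 'executing')
      || tasks.find(t => t.status === 'pending')
      || null;
}

// `ccw issue retry` resets failed tasks (optionally scoped to one issue) to pending.
function retryFailed(tasks, issueId) {
  return tasks.map(t =>
    t.status === 'failed' && (!issueId || t.issue_id === issueId)
      ? { ...t, status: 'pending' }
      : t
  );
}

const tasks = [
  { item_id: 'Q1', issue_id: 'ISS-1', status: 'failed' },
  { item_id: 'Q2', issue_id: 'ISS-1', status: 'executing' },
  { item_id: 'Q3', issue_id: 'ISS-2', status: 'pending' }
];

console.log(nextTask(tasks).item_id); // Q2 - the interrupted task resumes first
```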
## Endpoint Contract
@@ -435,16 +440,20 @@ No changes made. Remove --dry-run to execute.
- Marks task as 'executing'
- Returns `{ status: 'empty' }` when no tasks

### `ccw issue complete <item-id>`
- Marks task as 'completed'
- Updates queue.json
- Checks if issue is fully complete

### `ccw issue fail <item-id>`
- Marks task as 'failed'
- Records failure reason
- Allows retry via /issue:execute
### `ccw issue retry [issue-id]`
- Resets failed tasks to 'pending'
- Allows re-execution via `ccw issue next`
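The contract above implies a simple executor loop: next, execute, then complete or fail. Below is a minimal in-memory sketch of that state machine; the real flow shells out to the `ccw issue` subcommands instead of mutating state directly, and the task shape is assumed:

```javascript
// Minimal in-memory sketch of the endpoint contract's state machine.
function makeQueue(tasks) {
  return {
    next() {
      const t = tasks.find(x => x.status === 'executing')
             || tasks.find(x => x.status === 'pending');
      if (!t) return { status: 'empty' };
      t.status = 'executing'; // marked executing, so it survives interruption
      return t;
    },
    complete(id) { tasks.find(t => t.item_id === id).status = 'completed'; },
    fail(id, reason) {
      const t = tasks.find(x => x.item_id === id);
      t.status = 'failed';
      t.failure_reason = reason; // recorded for later `ccw issue retry`
    }
  };
}

const tasks = [
  { item_id: 'Q1', status: 'pending' },
  { item_id: 'Q2', status: 'pending' }
];
const q = makeQueue(tasks);

let task;
while ((task = q.next()).status !== 'empty') {
  // ... invoke the task's executor here ...
  q.complete(task.item_id);
}
console.log(tasks.map(t => t.status)); // [ 'completed', 'completed' ]
```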
## Related Commands

- `/issue:plan` - Plan issues with solutions


@@ -25,12 +25,18 @@ ccw issue list <id> --json # Get issue details
ccw issue status <id>              # Detailed status
ccw issue init <id> --title "..."  # Create issue
ccw issue task <id> --title "..."  # Add task
ccw issue bind <id> <solution-id>  # Bind solution

# Queue management
ccw issue queue                    # List current queue
ccw issue queue add <id>           # Add to queue
ccw issue queue list               # Queue history
ccw issue queue switch <queue-id>  # Switch queue
ccw issue queue archive            # Archive queue
ccw issue queue delete <queue-id>  # Delete queue
ccw issue next                     # Get next task
ccw issue done <queue-id>          # Mark completed
ccw issue complete <item-id>       # (legacy alias for done)
```
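The JSON output of `ccw issue list --json` feeds the dashboard counts used throughout this doc. A minimal fold, assuming each record carries a `status` field as described above:

```javascript
// Sketch: fold `ccw issue list --json` output into per-status counts.
function statusCounts(issues) {
  return issues.reduce((acc, i) => {
    acc[i.status] = (acc[i.status] || 0) + 1;
    return acc;
  }, {});
}

const counts = statusCounts([
  { id: 'ISS-1', status: 'registered' },
  { id: 'ISS-2', status: 'planned' },
  { id: 'ISS-3', status: 'planned' }
]);
console.log(counts); // { registered: 1, planned: 2 }
```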
## Usage
@@ -49,7 +55,9 @@ ccw issue done <queue-id> # Complete task
## Implementation

This command delegates to the `issue-manage` skill for detailed implementation.

### Entry Point

```javascript
const issueId = parseIssueId(userInput);
// ...
```
### Main Menu Flow

1. **Dashboard**: Fetch issues summary via `ccw issue list --json`
2. **Menu**: Present action options via AskUserQuestion
3. **Route**: Execute selected action (List/View/Edit/Delete/Bulk)
4. **Loop**: Return to menu after each action

### Available Actions

| Action | Description | CLI Command |
|--------|-------------|-------------|
| List | Browse with filters | `ccw issue list --json` |
| View | Detail view | `ccw issue status <id> --json` |
| Edit | Modify fields | Update `issues.jsonl` |
| Delete | Remove issue | Clean up all related files |
| Bulk | Batch operations | Multi-select + batch update |

## Data Files

| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |

### Phase 2: Main Menu

```javascript
async function showMainMenu(preselectedIssue = null) {
  // Fetch current issues summary
  const issuesResult = Bash('ccw issue list --json 2>/dev/null || echo "[]"');
  const issues = JSON.parse(issuesResult) || [];

  const queueResult = Bash('ccw issue status --json 2>/dev/null');
  const queueStatus = JSON.parse(queueResult || '{}');

  console.log(`
## Issue Management Dashboard

**Total Issues**: ${issues.length}
**Queue Status**: ${queueStatus.queue?.total_tasks || 0} tasks (${queueStatus.queue?.pending_count || 0} pending)

### Quick Stats
- Registered: ${issues.filter(i => i.status === 'registered').length}
- Planned: ${issues.filter(i => i.status === 'planned').length}
- Executing: ${issues.filter(i => i.status === 'executing').length}
- Completed: ${issues.filter(i => i.status === 'completed').length}
`);
const answer = AskUserQuestion({
questions: [{
question: 'What would you like to do?',
header: 'Action',
multiSelect: false,
options: [
{ label: 'List Issues', description: 'Browse all issues with filters' },
{ label: 'View Issue', description: 'Detailed view of specific issue' },
{ label: 'Create Issue', description: 'Add new issue from text or GitHub' },
{ label: 'Edit Issue', description: 'Modify issue fields' },
{ label: 'Delete Issue', description: 'Remove issue(s)' },
{ label: 'Bulk Operations', description: 'Batch actions on multiple issues' }
]
}]
});
const selected = parseAnswer(answer);
switch (selected) {
case 'List Issues':
await listIssuesInteractive();
break;
case 'View Issue':
await viewIssueInteractive(preselectedIssue);
break;
case 'Create Issue':
await createIssueInteractive();
break;
case 'Edit Issue':
await editIssueInteractive(preselectedIssue);
break;
case 'Delete Issue':
await deleteIssueInteractive(preselectedIssue);
break;
case 'Bulk Operations':
await bulkOperationsInteractive();
break;
}
}
```
### Phase 3: List Issues
```javascript
async function listIssuesInteractive() {
// Ask for filter
const filterAnswer = AskUserQuestion({
questions: [{
question: 'Filter issues by status?',
header: 'Filter',
multiSelect: true,
options: [
{ label: 'All', description: 'Show all issues' },
{ label: 'Registered', description: 'New, unplanned issues' },
{ label: 'Planned', description: 'Issues with bound solutions' },
{ label: 'Queued', description: 'In execution queue' },
{ label: 'Executing', description: 'Currently being worked on' },
{ label: 'Completed', description: 'Finished issues' },
{ label: 'Failed', description: 'Failed issues' }
]
}]
});
const filters = parseMultiAnswer(filterAnswer);
// Fetch and filter issues
const result = Bash('ccw issue list --json');
let issues = JSON.parse(result) || [];
if (!filters.includes('All')) {
const statusMap = {
'Registered': 'registered',
'Planned': 'planned',
'Queued': 'queued',
'Executing': 'executing',
'Completed': 'completed',
'Failed': 'failed'
};
const allowedStatuses = filters.map(f => statusMap[f]).filter(Boolean);
issues = issues.filter(i => allowedStatuses.includes(i.status));
}
if (issues.length === 0) {
console.log('No issues found matching filters.');
return showMainMenu();
}
// Display issues table
console.log(`
## Issues (${issues.length})
| ID | Status | Priority | Title |
|----|--------|----------|-------|
${issues.map(i => `| ${i.id} | ${i.status} | P${i.priority} | ${i.title.substring(0, 40)} |`).join('\n')}
`);
// Ask for action on issue
const actionAnswer = AskUserQuestion({
questions: [{
question: 'Select an issue to view/edit, or return to menu:',
header: 'Select',
multiSelect: false,
options: [
...issues.slice(0, 10).map(i => ({
label: i.id,
description: i.title.substring(0, 50)
})),
{ label: 'Back to Menu', description: 'Return to main menu' }
]
}]
});
const selected = parseAnswer(actionAnswer);
if (selected === 'Back to Menu') {
return showMainMenu();
}
// View selected issue
await viewIssueInteractive(selected);
}
```
### Phase 4: View Issue
```javascript
async function viewIssueInteractive(issueId) {
if (!issueId) {
// Ask for issue ID
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const idAnswer = AskUserQuestion({
questions: [{
question: 'Select issue to view:',
header: 'Issue',
multiSelect: false,
options: issues.slice(0, 10).map(i => ({
label: i.id,
description: `${i.status} - ${i.title.substring(0, 40)}`
}))
}]
});
issueId = parseAnswer(idAnswer);
}
// Fetch detailed status
const result = Bash(`ccw issue status ${issueId} --json`);
const data = JSON.parse(result);
const issue = data.issue;
const solutions = data.solutions || [];
const bound = data.bound;
console.log(`
## Issue: ${issue.id}
**Title**: ${issue.title}
**Status**: ${issue.status}
**Priority**: P${issue.priority}
**Created**: ${issue.created_at}
**Updated**: ${issue.updated_at}
### Context
${issue.context || 'No context provided'}
### Solutions (${solutions.length})
${solutions.length === 0 ? 'No solutions registered' :
solutions.map(s => `- ${s.is_bound ? '◉' : '○'} ${s.id}: ${s.tasks?.length || 0} tasks`).join('\n')}
${bound ? `### Bound Solution: ${bound.id}\n**Tasks**: ${bound.tasks?.length || 0}` : ''}
`);
// Show tasks if bound solution exists
if (bound?.tasks?.length > 0) {
console.log(`
### Tasks
| ID | Action | Scope | Title |
|----|--------|-------|-------|
${bound.tasks.map(t => `| ${t.id} | ${t.action} | ${t.scope?.substring(0, 20) || '-'} | ${t.title.substring(0, 30)} |`).join('\n')}
`);
}
// Action menu
const actionAnswer = AskUserQuestion({
questions: [{
question: 'What would you like to do?',
header: 'Action',
multiSelect: false,
options: [
{ label: 'Edit Issue', description: 'Modify issue fields' },
{ label: 'Plan Issue', description: 'Generate solution (/issue:plan)' },
{ label: 'Add to Queue', description: 'Queue bound solution tasks' },
{ label: 'View Queue', description: 'See queue status' },
{ label: 'Delete Issue', description: 'Remove this issue' },
{ label: 'Back to Menu', description: 'Return to main menu' }
]
}]
});
const action = parseAnswer(actionAnswer);
switch (action) {
case 'Edit Issue':
await editIssueInteractive(issueId);
break;
case 'Plan Issue':
console.log(`Running: /issue:plan ${issueId}`);
// Invoke plan skill
break;
case 'Add to Queue':
Bash(`ccw issue queue add ${issueId}`);
console.log(`✓ Added ${issueId} tasks to queue`);
break;
case 'View Queue':
const queueOutput = Bash('ccw issue queue');
console.log(queueOutput);
break;
case 'Delete Issue':
await deleteIssueInteractive(issueId);
break;
default:
return showMainMenu();
}
}
```
### Phase 5: Edit Issue
```javascript
async function editIssueInteractive(issueId) {
if (!issueId) {
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const idAnswer = AskUserQuestion({
questions: [{
question: 'Select issue to edit:',
header: 'Issue',
multiSelect: false,
options: issues.slice(0, 10).map(i => ({
label: i.id,
description: `${i.status} - ${i.title.substring(0, 40)}`
}))
}]
});
issueId = parseAnswer(idAnswer);
}
// Get current issue data
const result = Bash(`ccw issue list ${issueId} --json`);
const issueData = JSON.parse(result);
const issue = issueData.issue || issueData;
// Ask which field to edit
const fieldAnswer = AskUserQuestion({
questions: [{
question: 'Which field to edit?',
header: 'Field',
multiSelect: false,
options: [
{ label: 'Title', description: `Current: ${issue.title?.substring(0, 40)}` },
{ label: 'Priority', description: `Current: P${issue.priority}` },
{ label: 'Status', description: `Current: ${issue.status}` },
{ label: 'Context', description: 'Edit problem description' },
{ label: 'Labels', description: `Current: ${issue.labels?.join(', ') || 'none'}` },
{ label: 'Back', description: 'Return without changes' }
]
}]
});
const field = parseAnswer(fieldAnswer);
if (field === 'Back') {
return viewIssueInteractive(issueId);
}
let updatePayload = {};
switch (field) {
case 'Title':
const titleAnswer = AskUserQuestion({
questions: [{
question: 'Enter new title (or select current to keep):',
header: 'Title',
multiSelect: false,
options: [
{ label: issue.title.substring(0, 50), description: 'Keep current title' }
]
}]
});
const newTitle = parseAnswer(titleAnswer);
if (newTitle && newTitle !== issue.title.substring(0, 50)) {
updatePayload.title = newTitle;
}
break;
case 'Priority':
const priorityAnswer = AskUserQuestion({
questions: [{
question: 'Select priority:',
header: 'Priority',
multiSelect: false,
options: [
{ label: 'P1 - Critical', description: 'Production blocking' },
{ label: 'P2 - High', description: 'Major functionality' },
{ label: 'P3 - Medium', description: 'Normal priority (default)' },
{ label: 'P4 - Low', description: 'Minor issues' },
{ label: 'P5 - Trivial', description: 'Nice to have' }
]
}]
});
const priorityStr = parseAnswer(priorityAnswer);
updatePayload.priority = parseInt(priorityStr.charAt(1));
break;
case 'Status':
const statusAnswer = AskUserQuestion({
questions: [{
question: 'Select status:',
header: 'Status',
multiSelect: false,
options: [
{ label: 'registered', description: 'New issue, not yet planned' },
{ label: 'planning', description: 'Solution being generated' },
{ label: 'planned', description: 'Solution bound, ready for queue' },
{ label: 'queued', description: 'In execution queue' },
{ label: 'executing', description: 'Currently being worked on' },
{ label: 'completed', description: 'All tasks finished' },
{ label: 'failed', description: 'Execution failed' },
{ label: 'paused', description: 'Temporarily on hold' }
]
}]
});
updatePayload.status = parseAnswer(statusAnswer);
break;
case 'Context':
console.log(`Current context:\n${issue.context || '(empty)'}\n`);
const contextAnswer = AskUserQuestion({
questions: [{
question: 'Enter new context (problem description):',
header: 'Context',
multiSelect: false,
options: [
{ label: 'Keep current', description: 'No changes' }
]
}]
});
const newContext = parseAnswer(contextAnswer);
if (newContext && newContext !== 'Keep current') {
updatePayload.context = newContext;
}
break;
case 'Labels':
const labelsAnswer = AskUserQuestion({
questions: [{
question: 'Enter labels (comma-separated):',
header: 'Labels',
multiSelect: false,
options: [
{ label: issue.labels?.join(',') || '', description: 'Keep current labels' }
]
}]
});
const labelsStr = parseAnswer(labelsAnswer);
if (labelsStr) {
updatePayload.labels = labelsStr.split(',').map(l => l.trim());
}
break;
}
// Apply update if any
if (Object.keys(updatePayload).length > 0) {
// Read, update, write issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
const idx = allIssues.findIndex(i => i.id === issueId);
if (idx !== -1) {
allIssues[idx] = {
...allIssues[idx],
...updatePayload,
updated_at: new Date().toISOString()
};
Write(issuesPath, allIssues.map(i => JSON.stringify(i)).join('\n'));
console.log(`✓ Updated ${issueId}: ${Object.keys(updatePayload).join(', ')}`);
}
}
// Continue editing or return
const continueAnswer = AskUserQuestion({
questions: [{
question: 'Continue editing?',
header: 'Continue',
multiSelect: false,
options: [
{ label: 'Edit Another Field', description: 'Continue editing this issue' },
{ label: 'View Issue', description: 'See updated issue' },
{ label: 'Back to Menu', description: 'Return to main menu' }
]
}]
});
const cont = parseAnswer(continueAnswer);
if (cont === 'Edit Another Field') {
await editIssueInteractive(issueId);
} else if (cont === 'View Issue') {
await viewIssueInteractive(issueId);
} else {
return showMainMenu();
}
}
```
### Phase 6: Delete Issue
```javascript
async function deleteIssueInteractive(issueId) {
if (!issueId) {
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const idAnswer = AskUserQuestion({
questions: [{
question: 'Select issue to delete:',
header: 'Delete',
multiSelect: false,
options: issues.slice(0, 10).map(i => ({
label: i.id,
description: `${i.status} - ${i.title.substring(0, 40)}`
}))
}]
});
issueId = parseAnswer(idAnswer);
}
// Confirm deletion
const confirmAnswer = AskUserQuestion({
questions: [{
question: `Delete issue ${issueId}? This will also remove associated solutions.`,
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Delete', description: 'Permanently remove issue and solutions' },
{ label: 'Cancel', description: 'Keep issue' }
]
}]
});
if (parseAnswer(confirmAnswer) !== 'Delete') {
console.log('Deletion cancelled.');
return showMainMenu();
}
// Remove from issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
const filtered = allIssues.filter(i => i.id !== issueId);
Write(issuesPath, filtered.map(i => JSON.stringify(i)).join('\n'));
// Remove solutions file if exists
const solPath = `.workflow/issues/solutions/${issueId}.jsonl`;
Bash(`rm -f "${solPath}" 2>/dev/null || true`);
// Remove from queue if present
const queuePath = '.workflow/issues/queue.json';
if (Bash(`test -f "${queuePath}" && echo exists`) === 'exists') {
const queue = JSON.parse(Bash(`cat "${queuePath}"`));
queue.queue = queue.queue.filter(q => q.issue_id !== issueId);
Write(queuePath, JSON.stringify(queue, null, 2));
}
console.log(`✓ Deleted issue ${issueId}`);
return showMainMenu();
}
```
### Phase 7: Bulk Operations
```javascript
async function bulkOperationsInteractive() {
const bulkAnswer = AskUserQuestion({
questions: [{
question: 'Select bulk operation:',
header: 'Bulk',
multiSelect: false,
options: [
{ label: 'Update Status', description: 'Change status of multiple issues' },
{ label: 'Update Priority', description: 'Change priority of multiple issues' },
{ label: 'Add Labels', description: 'Add labels to multiple issues' },
{ label: 'Delete Multiple', description: 'Remove multiple issues' },
{ label: 'Queue All Planned', description: 'Add all planned issues to queue' },
{ label: 'Retry All Failed', description: 'Reset all failed tasks to pending' },
{ label: 'Back', description: 'Return to main menu' }
]
}]
});
const operation = parseAnswer(bulkAnswer);
if (operation === 'Back') {
return showMainMenu();
}
// Get issues for selection
const allIssues = JSON.parse(Bash('ccw issue list --json') || '[]');
if (operation === 'Queue All Planned') {
const planned = allIssues.filter(i => i.status === 'planned' && i.bound_solution_id);
for (const issue of planned) {
Bash(`ccw issue queue add ${issue.id}`);
console.log(`✓ Queued ${issue.id}`);
}
console.log(`\n✓ Queued ${planned.length} issues`);
return showMainMenu();
}
if (operation === 'Retry All Failed') {
Bash('ccw issue retry');
console.log('✓ Reset all failed tasks to pending');
return showMainMenu();
}
// Multi-select issues
const selectAnswer = AskUserQuestion({
questions: [{
question: 'Select issues (multi-select):',
header: 'Select',
multiSelect: true,
options: allIssues.slice(0, 15).map(i => ({
label: i.id,
description: `${i.status} - ${i.title.substring(0, 30)}`
}))
}]
});
const selectedIds = parseMultiAnswer(selectAnswer);
if (selectedIds.length === 0) {
console.log('No issues selected.');
return showMainMenu();
}
// Execute bulk operation
const issuesPath = '.workflow/issues/issues.jsonl';
let issues = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
switch (operation) {
case 'Update Status':
const statusAnswer = AskUserQuestion({
questions: [{
question: 'Select new status:',
header: 'Status',
multiSelect: false,
options: [
{ label: 'registered', description: 'Reset to registered' },
{ label: 'paused', description: 'Pause issues' },
{ label: 'completed', description: 'Mark completed' }
]
}]
});
const newStatus = parseAnswer(statusAnswer);
issues = issues.map(i =>
selectedIds.includes(i.id)
? { ...i, status: newStatus, updated_at: new Date().toISOString() }
: i
);
break;
case 'Update Priority':
const prioAnswer = AskUserQuestion({
questions: [{
question: 'Select new priority:',
header: 'Priority',
multiSelect: false,
options: [
{ label: 'P1', description: 'Critical' },
{ label: 'P2', description: 'High' },
{ label: 'P3', description: 'Medium' },
{ label: 'P4', description: 'Low' },
{ label: 'P5', description: 'Trivial' }
]
}]
});
const newPrio = parseInt(parseAnswer(prioAnswer).charAt(1));
issues = issues.map(i =>
selectedIds.includes(i.id)
? { ...i, priority: newPrio, updated_at: new Date().toISOString() }
: i
);
break;
case 'Add Labels':
const labelAnswer = AskUserQuestion({
questions: [{
question: 'Enter labels to add (comma-separated):',
header: 'Labels',
multiSelect: false,
options: [
{ label: 'bug', description: 'Bug fix' },
{ label: 'feature', description: 'New feature' },
{ label: 'urgent', description: 'Urgent priority' }
]
}]
});
const newLabels = parseAnswer(labelAnswer).split(',').map(l => l.trim());
issues = issues.map(i =>
selectedIds.includes(i.id)
? {
...i,
labels: [...new Set([...(i.labels || []), ...newLabels])],
updated_at: new Date().toISOString()
}
: i
);
break;
case 'Delete Multiple':
const confirmDelete = AskUserQuestion({
questions: [{
question: `Delete ${selectedIds.length} issues permanently?`,
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Delete All', description: 'Remove selected issues' },
{ label: 'Cancel', description: 'Keep issues' }
]
}]
});
if (parseAnswer(confirmDelete) === 'Delete All') {
issues = issues.filter(i => !selectedIds.includes(i.id));
// Clean up solutions
for (const id of selectedIds) {
Bash(`rm -f ".workflow/issues/solutions/${id}.jsonl" 2>/dev/null || true`);
}
} else {
console.log('Deletion cancelled.');
return showMainMenu();
}
break;
}
Write(issuesPath, issues.map(i => JSON.stringify(i)).join('\n'));
console.log(`✓ Updated ${selectedIds.length} issues`);
return showMainMenu();
}
```
### Phase 8: Create Issue (Redirect)
```javascript
async function createIssueInteractive() {
const typeAnswer = AskUserQuestion({
questions: [{
question: 'Create issue from:',
header: 'Source',
multiSelect: false,
options: [
{ label: 'GitHub URL', description: 'Import from GitHub issue' },
{ label: 'Text Description', description: 'Enter problem description' },
{ label: 'Quick Create', description: 'Just title and priority' }
]
}]
});
const type = parseAnswer(typeAnswer);
if (type === 'GitHub URL' || type === 'Text Description') {
console.log('Use /issue:new for structured issue creation');
console.log('Example: /issue:new https://github.com/org/repo/issues/123');
return showMainMenu();
}
// Quick create
const titleAnswer = AskUserQuestion({
questions: [{
question: 'Enter issue title:',
header: 'Title',
multiSelect: false,
options: [
{ label: 'Authentication Bug', description: 'Example title' }
]
}]
});
const title = parseAnswer(titleAnswer);
const prioAnswer = AskUserQuestion({
questions: [{
question: 'Select priority:',
header: 'Priority',
multiSelect: false,
options: [
{ label: 'P3 - Medium (Recommended)', description: 'Normal priority' },
{ label: 'P1 - Critical', description: 'Production blocking' },
{ label: 'P2 - High', description: 'Major functionality' }
]
}]
});
const priority = parseInt(parseAnswer(prioAnswer).charAt(1));
// Generate ID and create
const id = `ISS-${Date.now()}`;
Bash(`ccw issue init ${id} --title "${title}" --priority ${priority}`);
console.log(`✓ Created issue ${id}`);
await viewIssueInteractive(id);
}
```
## Helper Functions
```javascript
function parseAnswer(answer) {
// Extract selected option from AskUserQuestion response
if (typeof answer === 'string') return answer;
if (answer.answers) {
const values = Object.values(answer.answers);
return values[0] || '';
}
return '';
}
function parseMultiAnswer(answer) {
// Extract multiple selections
if (typeof answer === 'string') return answer.split(',').map(s => s.trim());
if (answer.answers) {
const values = Object.values(answer.answers);
return values.flatMap(v => v.split(',').map(s => s.trim()));
}
return [];
}
function parseFlags(input) {
  const flags = {};
  // Allow hyphenated flag names (e.g. --batch-size) and expose them as
  // camelCase keys (flags.batchSize); bare flags become booleans, and a
  // value starting with '-' is not consumed, so the next flag is preserved.
  for (const match of input.matchAll(/--([\w-]+)(?:\s+((?!-)\S+))?/g)) {
    const key = match[1].replace(/-(\w)/g, (_, c) => c.toUpperCase());
    flags[key] = match[2] ?? true;
  }
  return flags;
}
function parseIssueId(input) {
const match = input.match(/^([A-Z]+-\d+|ISS-\d+|GH-\d+)/i);
return match ? match[1] : null;
}
```
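These helpers can be exercised standalone. The sketch below copies `parseIssueId` and a hyphen-aware `parseFlags` variant; camelCase flag keys are an assumption, matching how `flags.batchSize` and `flags.allPending` are referenced elsewhere in these docs:

```javascript
// Standalone sketch: issue-id parsing plus a hyphen-aware flag parser
// (camelCase keys so `--batch-size 3` surfaces as flags.batchSize).
function parseIssueId(input) {
  const match = input.match(/^([A-Z]+-\d+|ISS-\d+|GH-\d+)/i);
  return match ? match[1] : null;
}

function parseFlags(input) {
  const flags = {};
  for (const m of input.matchAll(/--([\w-]+)(?:\s+((?!-)\S+))?/g)) {
    const key = m[1].replace(/-(\w)/g, (_, c) => c.toUpperCase());
    flags[key] = m[2] ?? true; // bare flags become booleans
  }
  return flags;
}

console.log(parseIssueId('ISS-42 --batch-size 3')); // ISS-42
console.log(parseFlags('ISS-42 --batch-size 3 --all-pending'));
// { batchSize: '3', allPending: true }
```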
## Error Handling
@@ -853,7 +104,6 @@ function parseIssueId(input) {
| Issue not found | Show available issues, ask for correction |
| Invalid selection | Show error, re-prompt |
| Write failure | Check permissions, show error |
| Queue operation fails | Show ccw issue error, suggest fix |
## Related Commands
@@ -861,5 +111,3 @@ function parseIssueId(input) {
- `/issue:plan` - Plan solution for issue
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute queued tasks
- `ccw issue list` - CLI list command
- `ccw issue status` - CLI status command

View File

@@ -51,51 +51,18 @@ interface Issue {
}
```
## Lifecycle Requirements

The `lifecycle_requirements` field guides downstream commands (`/issue:plan`, `/issue:execute`):

| Field | Options | Purpose |
|-------|---------|---------|
| `test_strategy` | `unit`, `integration`, `e2e`, `manual`, `auto` | Which test types to generate |
| `regression_scope` | `affected`, `related`, `full` | Which tests to run for regression |
| `acceptance_type` | `automated`, `manual`, `both` | How to verify completion |
| `commit_strategy` | `per-task`, `squash`, `atomic` | Commit granularity |

> **Note**: Task structure (SolutionTask) is defined in `/issue:plan` - see `.claude/commands/issue/plan.md`

## Task Lifecycle (Each Task is Closed-Loop)

When `/issue:plan` generates tasks, each task MUST include:

```typescript
interface SolutionTask {
  id: string;
  title: string;
  scope: string;
  action: string;

  // Phase 1: Implementation
implementation: string[]; // Step-by-step implementation
modification_points: { file: string; target: string; change: string }[];
// Phase 2: Testing
test: {
unit?: string[]; // Unit test requirements
integration?: string[]; // Integration test requirements
commands?: string[]; // Test commands to run
coverage_target?: number; // Minimum coverage %
};
// Phase 3: Regression
regression: string[]; // Regression check commands/points
// Phase 4: Acceptance
acceptance: {
criteria: string[]; // Testable acceptance criteria
verification: string[]; // How to verify each criterion
manual_checks?: string[]; // Manual verification if needed
};
// Phase 5: Commit
commit: {
type: 'feat' | 'fix' | 'refactor' | 'test' | 'docs' | 'chore';
scope: string; // e.g., "auth", "api"
message_template: string; // Commit message template
breaking?: boolean;
};
depends_on: string[];
executor: 'codex' | 'gemini' | 'agent' | 'auto';
}
```
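An issue's `lifecycle_requirements` record can be validated against the option sets from the table above. The values below are illustrative examples, not defaults:

```javascript
// Illustrative lifecycle_requirements record plus a check against the
// allowed option sets documented in the Lifecycle Requirements table.
const lifecycleRequirements = {
  test_strategy: 'unit',
  regression_scope: 'affected',
  acceptance_type: 'automated',
  commit_strategy: 'per-task'
};

const allowed = {
  test_strategy: ['unit', 'integration', 'e2e', 'manual', 'auto'],
  regression_scope: ['affected', 'related', 'full'],
  acceptance_type: ['automated', 'manual', 'both'],
  commit_strategy: ['per-task', 'squash', 'atomic']
};

for (const [field, options] of Object.entries(allowed)) {
  if (!options.includes(lifecycleRequirements[field])) {
    throw new Error(`invalid ${field}: ${lifecycleRequirements[field]}`);
  }
}
console.log('lifecycle_requirements valid');
```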
## Usage


@@ -1,7 +1,7 @@
---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] [--batch-size 3] [--all-pending]"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---
@@ -9,13 +9,35 @@ allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(
## Overview

Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow.
## Output Requirements
**Generate Files:**
1. `.workflow/issues/solutions/{issue-id}.jsonl` - Solution with tasks for each issue
**Return Summary:**
```json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
"pending_selection": [{ "issue_id": "...", "solutions": [...] }],
"conflicts": [{ "file": "...", "issues": [...] }]
}
```
**Completion Criteria:**
- [ ] Solution file generated for each issue
- [ ] Single solution → auto-bound via `ccw issue bind`
- [ ] Multiple solutions → returned for user selection
- [ ] Tasks conform to schema: `cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json`
- [ ] Each task has quantified `delivery_criteria`
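The last criterion can be spot-checked mechanically. A hedged sketch: the `delivery_criteria` field name follows the checklist above, while the rest of the task shape is assumed:

```javascript
// Sketch: flag solution tasks that lack a non-empty delivery_criteria list,
// per the completion checklist above.
function checkDeliveryCriteria(tasks) {
  return tasks
    .filter(t => !Array.isArray(t.delivery_criteria) || t.delivery_criteria.length === 0)
    .map(t => t.id);
}

const offenders = checkDeliveryCriteria([
  { id: 'T1', delivery_criteria: ['all unit tests pass'] },
  { id: 'T2', delivery_criteria: [] }
]);
console.log(offenders); // [ 'T2' ]
```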
## Core Capabilities
- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and delivery criteria
- Automatic solution registration and binding

## Storage Structure (Flat JSONL)
@@ -75,120 +97,90 @@ Phase 4: Summary
## Implementation

### Phase 1: Issue Loading (IDs Only)
```javascript
const batchSize = flags.batchSize || 3;
let issueIds = [];

if (flags.allPending) {
  // Get pending issue IDs directly via CLI
  const ids = Bash(`ccw issue list --status pending,registered --ids`).trim();
  issueIds = ids ? ids.split('\n').filter(Boolean) : [];
  if (issueIds.length === 0) {
    console.log('No pending issues found.');
    return;
  }
  console.log(`Found ${issueIds.length} pending issues`);
} else {
  // Parse comma-separated issue IDs
  issueIds = userInput.includes(',')
    ? userInput.split(',').map(s => s.trim())
    : [userInput.trim()];

  // Create if not exists
  for (const id of issueIds) {
    Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
  }
}

// Group into batches
const batches = [];
for (let i = 0; i < issueIds.length; i += batchSize) {
  batches.push(issueIds.slice(i, i + batchSize));
}

console.log(`Processing ${issueIds.length} issues in ${batches.length} batch(es)`);

TodoWrite({
  todos: batches.map((_, i) => ({
    content: `Plan batch ${i+1}`,
    status: 'pending',
    activeForm: `Planning batch ${i+1}`
  }))
});
```
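The batching loop always produces ceil(N / batchSize) groups, with the remainder in the last batch. Isolated for illustration:

```javascript
// Same batching logic as above, isolated: 7 ids with batch size 3 -> 3 batches.
const issueIds = ['A', 'B', 'C', 'D', 'E', 'F', 'G'];
const batchSize = 3;
const batches = [];
for (let i = 0; i < issueIds.length; i += batchSize) {
  batches.push(issueIds.slice(i, i + batchSize));
}
console.log(batches.map(b => b.length)); // [ 3, 3, 1 ]
```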
### Phase 2: Unified Explore + Plan (issue-plan-agent)

```javascript
Bash(`mkdir -p .workflow/issues/solutions`);

const pendingSelections = []; // Collect multi-solution issues for user selection

for (const [batchIndex, batch] of batches.entries()) {
  updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');

  // Build minimal prompt - agent handles exploration, planning, and binding
  const issuePrompt = `
## Plan Issues

**Issue IDs**: ${batch.join(', ')}
**Project Root**: ${process.cwd()}

### Steps
1. Fetch: \`ccw issue status <id> --json\`
2. Explore (ACE) → Plan solution
3. Register & bind: \`ccw issue bind <id> --solution <file>\`

### Generate Files
\`.workflow/issues/solutions/{issue-id}.jsonl\` - Solution with tasks (schema: cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json)

### Binding Rules
- **Single solution**: Auto-bind via \`ccw issue bind <id> --solution <file>\`
- **Multiple solutions**: Register only, return for user selection

### Return Summary
\`\`\`json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "description": "...", "task_count": N }] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
### 2. Test }
- test.unit: string[] (unit test requirements) \`\`\`
- test.integration: string[] (integration test requirements if needed)
- test.commands: string[] (actual test commands to run)
- test.coverage_target: number (minimum coverage %)
### 3. Regression
- regression: string[] (commands to run for regression check)
- Based on issue's regression_scope setting
### 4. Acceptance
- acceptance.criteria: string[] (testable acceptance criteria)
- acceptance.verification: string[] (how to verify each criterion)
- acceptance.manual_checks: string[] (manual checks if needed)
### 5. Commit
- commit.type: feat|fix|refactor|test|docs|chore
- commit.scope: string (module name)
- commit.message_template: string (full commit message)
- commit.breaking: boolean
## Additional Requirements
1. Use ACE semantic search (mcp__ace-tool__search_context) for exploration
2. Detect file conflicts if multiple issues
3. Generate executable test commands based on project's test framework
4. Infer commit scope from affected files
`; `;
// Launch issue-plan-agent (combines explore + plan) // Launch issue-plan-agent - agent writes solutions directly
const result = Task( const result = Task(
subagent_type="issue-plan-agent", subagent_type="issue-plan-agent",
run_in_background=false, run_in_background=false,
@@ -196,202 +188,68 @@ Each task MUST include ALL lifecycle phases:
prompt=issuePrompt prompt=issuePrompt
); );
// Parse agent output // Parse summary from agent
const agentOutput = JSON.parse(result); const summary = JSON.parse(result);
// Register solutions for each issue (append to solutions/{issue-id}.jsonl) // Display auto-bound solutions
for (const item of agentOutput.solutions) { for (const item of summary.bound || []) {
const solutionPath = `.workflow/issues/solutions/${item.issue_id}.jsonl`; console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
// Ensure solutions directory exists
Bash(`mkdir -p .workflow/issues/solutions`);
// Append solution as new line
Bash(`echo '${JSON.stringify(item.solution)}' >> "${solutionPath}"`);
} }
// Handle conflicts if any // Collect pending selections for Phase 3
if (agentOutput.conflicts?.length > 0) { pendingSelections.push(...(summary.pending_selection || []));
console.log(`\n⚠ File conflicts detected:`);
agentOutput.conflicts.forEach(c => { // Show conflicts
console.log(` ${c.file}: ${c.issues.join(', ')} → suggested: ${c.suggested_order.join(' → ')}`); if (summary.conflicts?.length > 0) {
}); console.log(`⚠ Conflicts: ${summary.conflicts.map(c => c.file).join(', ')}`);
} }
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed'); updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
} }
``` ```
### Phase 3: Solution Binding ### Phase 3: Multi-Solution Selection
```javascript ```javascript
// Re-read issues.jsonl // Only handle issues where agent generated multiple solutions
let allIssuesUpdated = Bash(`cat "${issuesPath}"`) if (pendingSelections.length > 0) {
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
for (const issue of issues) {
const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
if (solutions.length === 0) {
console.log(`⚠ No solutions for ${issue.id}`);
continue;
}
let selectedSolId;
if (solutions.length === 1) {
// Auto-bind single solution
selectedSolId = solutions[0].id;
console.log(`✓ Auto-bound ${selectedSolId} to ${issue.id} (${solutions[0].tasks?.length || 0} tasks)`);
} else {
// Multiple solutions - ask user
const answer = AskUserQuestion({ const answer = AskUserQuestion({
questions: [{ questions: pendingSelections.map(({ issue_id, solutions }) => ({
question: `Select solution for ${issue.id}:`, question: `Select solution for ${issue_id}:`,
header: issue.id, header: issue_id,
multiSelect: false, multiSelect: false,
options: solutions.map(s => ({ options: solutions.map(s => ({
label: `${s.id}: ${s.description || 'Solution'}`, label: `${s.id} (${s.task_count} tasks)`,
description: `${s.tasks?.length || 0} tasks` description: s.description
}))
})) }))
}]
}); });
selectedSolId = extractSelectedSolutionId(answer); // Bind user-selected solutions
console.log(`✓ Bound ${selectedSolId} to ${issue.id}`); for (const { issue_id } of pendingSelections) {
const selectedId = extractSelectedSolutionId(answer, issue_id);
if (selectedId) {
Bash(`ccw issue bind ${issue_id} ${selectedId}`);
console.log(`${issue_id}: ${selectedId} bound`);
} }
// Update issue in allIssuesUpdated
const issueIndex = allIssuesUpdated.findIndex(i => i.id === issue.id);
if (issueIndex !== -1) {
allIssuesUpdated[issueIndex].bound_solution_id = selectedSolId;
allIssuesUpdated[issueIndex].status = 'planned';
allIssuesUpdated[issueIndex].planned_at = new Date().toISOString();
allIssuesUpdated[issueIndex].updated_at = new Date().toISOString();
} }
// Mark solution as bound in solutions file
const updatedSolutions = solutions.map(s => ({
...s,
is_bound: s.id === selectedSolId,
bound_at: s.id === selectedSolId ? new Date().toISOString() : s.bound_at
}));
Write(solPath, updatedSolutions.map(s => JSON.stringify(s)).join('\n'));
} }
// Write updated issues.jsonl
Write(issuesPath, allIssuesUpdated.map(i => JSON.stringify(i)).join('\n'));
``` ```
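`extractSelectedSolutionId` is assumed in the flow above; a minimal sketch of pulling the chosen ID back out of the answer, assuming the answer is a map keyed by issue ID and each selected label starts with the solution ID (the actual AskUserQuestion return shape may differ):

```javascript
// Extract "SOL-..." from a selected option label such as
// "SOL-20251226-001 (3 tasks)". Answer shape is an assumption.
function extractSelectedSolutionId(answer, issueId) {
  const label = answer?.[issueId];
  if (!label) return null;
  const match = label.match(/^(SOL-[\w-]+)/);
  return match ? match[1] : null;
}

console.log(extractSelectedSolutionId({ 'GH-123': 'SOL-20251226-001 (3 tasks)' }, 'GH-123'));
// → SOL-20251226-001
```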
### Phase 4: Summary ### Phase 4: Summary
```javascript ```javascript
// Count planned issues via CLI
const plannedIds = Bash(`ccw issue list --status planned --ids`).trim();
const plannedCount = plannedIds ? plannedIds.split('\n').length : 0;
console.log(` console.log(`
## Planning Complete ## Done: ${issueIds.length} issues → ${plannedCount} planned
**Issues Planned**: ${issues.length} Next: \`/issue:queue\` → \`/issue:execute\`
### Bound Solutions
${issues.map(i => {
const issue = allIssuesUpdated.find(a => a.id === i.id);
return issue?.bound_solution_id
? `${i.id}: ${issue.bound_solution_id}`
: `${i.id}: No solution bound`;
}).join('\n')}
### Next Steps
1. Review: \`ccw issue status <issue-id>\`
2. Form queue: \`/issue:queue\`
3. Execute: \`/issue:execute\`
`); `);
``` ```
## Solution Format (Closed-Loop Tasks)
Each solution line in `solutions/{issue-id}.jsonl`:
```json
{
"id": "SOL-20251226-001",
"description": "Direct Implementation",
"tasks": [
{
"id": "T1",
"title": "Create auth middleware",
"scope": "src/middleware/",
"action": "Create",
"description": "Create JWT validation middleware",
"modification_points": [
{ "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
],
"implementation": [
"Create auth.ts file in src/middleware/",
"Implement JWT token validation using jsonwebtoken",
"Add error handling for invalid/expired tokens",
"Export middleware function"
],
"test": {
"unit": [
"Test valid token passes through",
"Test invalid token returns 401",
"Test expired token returns 401",
"Test missing token returns 401"
],
"commands": [
"npm test -- --grep 'auth middleware'",
"npm run test:coverage -- src/middleware/auth.ts"
],
"coverage_target": 80
},
"regression": [
"npm test -- --grep 'protected routes'",
"npm run test:integration -- auth"
],
"acceptance": {
"criteria": [
"Middleware validates JWT tokens successfully",
"Returns 401 for invalid or missing tokens",
"Passes decoded token to request context"
],
"verification": [
"curl -H 'Authorization: Bearer valid_token' /api/protected → 200",
"curl /api/protected → 401",
"curl -H 'Authorization: Bearer invalid' /api/protected → 401"
]
},
"commit": {
"type": "feat",
"scope": "auth",
"message_template": "feat(auth): add JWT validation middleware\n\n- Implement token validation\n- Add error handling for invalid tokens\n- Export for route protection",
"breaking": false
},
"depends_on": [],
"estimated_minutes": 30,
"executor": "codex"
}
],
"exploration_context": {
"relevant_files": ["src/config/auth.ts"],
"patterns": "Follow existing middleware pattern"
},
"is_bound": true,
"created_at": "2025-12-26T10:00:00Z",
"bound_at": "2025-12-26T10:05:00Z"
}
```
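A closed-loop task must carry all five lifecycle phases shown above. A minimal completeness check (hypothetical helper, not part of the ccw CLI) could look like:

```javascript
// Returns the names of lifecycle phases missing from a task object.
// Arrays count as missing when empty; objects when they have no keys.
const REQUIRED_PHASES = ['implementation', 'test', 'regression', 'acceptance', 'commit'];

function missingPhases(task) {
  return REQUIRED_PHASES.filter(phase => {
    const value = task[phase];
    if (value == null) return true;
    if (Array.isArray(value)) return value.length === 0;
    return Object.keys(value).length === 0;
  });
}

const draft = { id: 'T1', implementation: ['step'], test: { commands: ['npm test'] } };
console.log(missingPhases(draft)); // → [ 'regression', 'acceptance', 'commit' ]
```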
## Error Handling ## Error Handling
| Error | Resolution | | Error | Resolution |
@@ -402,17 +260,6 @@ Each solution line in `solutions/{issue-id}.jsonl`:
| User cancels selection | Skip issue, continue with others | | User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order | | File conflicts | Agent detects and suggests resolution order |
## Agent Integration
The command uses `issue-plan-agent` which:
1. Performs ACE semantic search per issue
2. Identifies modification points and patterns
3. Generates task breakdown with dependencies
4. Detects cross-issue file conflicts
5. Outputs solution JSON for registration
See `.claude/agents/issue-plan-agent.md` for agent specification.
## Related Commands ## Related Commands
- `/issue:queue` - Form execution queue from bound solutions - `/issue:queue` - Form execution queue from bound solutions
View File
@@ -9,16 +9,39 @@ allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
## Overview ## Overview
Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves conflicts, determines dependencies, and creates an ordered execution queue. The queue is global across all issues. Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves conflicts, and creates an ordered execution queue.
## Output Requirements
**Generate Files:**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with tasks, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry
**Return Summary:**
```json
{
"queue_id": "QUE-20251227-143000",
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123", "GH-124"]
}
```
**Completion Criteria:**
- [ ] Queue JSON generated with valid DAG (no cycles)
- [ ] All file conflicts resolved with rationale
- [ ] Semantic priority calculated for all tasks
- [ ] Execution groups assigned (parallel P* / sequential S*)
- [ ] Issue statuses updated to `queued` via `ccw issue update`
## Core Capabilities
**Core capabilities:**
- **Agent-driven**: issue-queue-agent handles all ordering logic - **Agent-driven**: issue-queue-agent handles all ordering logic
- ACE semantic search for relationship discovery
- Dependency DAG construction and cycle detection - Dependency DAG construction and cycle detection
- File conflict detection and resolution - File conflict detection and resolution
- Semantic priority calculation (0.0-1.0) - Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment - Parallel/Sequential group assignment
- Output global queue.json
## Storage Structure (Queue History) ## Storage Structure (Queue History)
@@ -77,10 +100,12 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
# Flags # Flags
--issue <id> Form queue for specific issue only --issue <id> Form queue for specific issue only
--append <id> Append issue to active queue (don't create new) --append <id> Append issue to active queue (don't create new)
--list List all queues with status
--switch <queue-id> Switch active queue # CLI subcommands (ccw issue queue ...)
--archive Archive current queue (mark completed) ccw issue queue list List all queues with status
--clear <queue-id> Delete a queue from history ccw issue queue switch <queue-id> Switch active queue
ccw issue queue archive Archive current queue
ccw issue queue delete <queue-id> Delete queue from history
``` ```
## Execution Process ## Execution Process
@@ -166,165 +191,93 @@ console.log(`Loaded ${allTasks.length} tasks from ${plannedIssues.length} issues
### Phase 2-4: Agent-Driven Queue Formation ### Phase 2-4: Agent-Driven Queue Formation
```javascript ```javascript
// Launch issue-queue-agent to handle all ordering logic // Build minimal prompt - agent reads schema and handles ordering
const agentPrompt = ` const agentPrompt = `
## Tasks to Order ## Order Tasks
${JSON.stringify(allTasks, null, 2)} **Tasks**: ${allTasks.length} from ${plannedIssues.length} issues
**Project Root**: ${process.cwd()}
## Project Root ### Input
${process.cwd()} \`\`\`json
${JSON.stringify(allTasks.map(t => ({
key: \`\${t.issue_id}:\${t.task.id}\`,
type: t.task.type,
file_context: t.task.file_context,
depends_on: t.task.depends_on
})), null, 2)}
\`\`\`
## Requirements ### Steps
1. Build dependency DAG from depends_on fields 1. Parse tasks: Extract task keys, types, file contexts, dependencies
2. Detect circular dependencies (abort if found) 2. Build DAG: Construct dependency graph from depends_on references
3. Identify file modification conflicts 3. Detect cycles: Verify no circular dependencies exist (abort if found)
4. Resolve conflicts using ordering rules: 4. Detect conflicts: Identify file modification conflicts across issues
- Create before Update/Implement 5. Resolve conflicts: Apply ordering rules (Create→Update→Delete, config→src→tests)
- Foundation scopes (config/types) before implementation 6. Calculate priority: Compute semantic priority (0.0-1.0) for each task
- Core logic before tests 7. Assign groups: Assign parallel (P*) or sequential (S*) execution groups
5. Calculate semantic priority (0.0-1.0) for each task 8. Generate queue: Write queue JSON with ordered tasks
6. Assign execution groups (parallel P* / sequential S*) 9. Update index: Update queues/index.json with new queue entry
7. Output queue JSON
### Rules
- **DAG Validity**: Output must be valid DAG with no circular dependencies
- **Conflict Resolution**: All file conflicts must be resolved with rationale
- **Ordering Priority**:
1. Create before Update (files must exist before modification)
2. Foundation before integration (config/ → src/)
3. Types before implementation (types/ → components/)
4. Core before tests (src/ → __tests__/)
5. Delete last (preserve dependencies until no longer needed)
- **Parallel Safety**: Tasks in same parallel group must have no file conflicts
- **Queue ID Format**: \`QUE-YYYYMMDD-HHMMSS\` (UTC timestamp)
### Generate Files
1. \`.workflow/issues/queues/\${queueId}.json\` - Full queue (schema: cat .claude/workflows/cli-templates/schemas/queue-schema.json)
2. \`.workflow/issues/queues/index.json\` - Update with new entry
### Return Summary
\`\`\`json
{
"queue_id": "QUE-YYYYMMDD-HHMMSS",
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123"]
}
\`\`\`
`; `;
const result = Task( const result = Task(
subagent_type="issue-queue-agent", subagent_type="issue-queue-agent",
run_in_background=false, run_in_background=false,
description=`Order ${allTasks.length} tasks from ${plannedIssues.length} issues`, description=`Order ${allTasks.length} tasks`,
prompt=agentPrompt prompt=agentPrompt
); );
// Parse agent output const summary = JSON.parse(result);
const agentOutput = JSON.parse(result);
if (!agentOutput.success) {
console.error(`Queue formation failed: ${agentOutput.error}`);
if (agentOutput.cycles) {
console.error('Circular dependencies:', agentOutput.cycles.join(', '));
}
return;
}
``` ```
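The cycle check the agent is asked to perform can be sketched with Kahn's algorithm: if a topological ordering cannot consume every task, a circular dependency exists. Task keys and the `depends_on` shape follow the prompt above; the helper itself is illustrative:

```javascript
// Kahn's algorithm: returns true if the depends_on graph contains a cycle.
// tasks: [{ key: 'GH-1:T1', depends_on: ['GH-1:T0'] }, ...]
function hasCycle(tasks) {
  const indegree = new Map(tasks.map(t => [t.key, 0]));
  const dependents = new Map(tasks.map(t => [t.key, []]));
  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      if (!indegree.has(dep)) continue; // dangling refs handled elsewhere
      indegree.set(t.key, indegree.get(t.key) + 1);
      dependents.get(dep).push(t.key);
    }
  }
  // Start from tasks with no unmet dependencies, peel layer by layer.
  const queue = [...indegree].filter(([, d]) => d === 0).map(([k]) => k);
  let visited = 0;
  while (queue.length) {
    const key = queue.shift();
    visited++;
    for (const next of dependents.get(key)) {
      indegree.set(next, indegree.get(next) - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  return visited < tasks.length; // unvisited nodes sit on a cycle
}
```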
### Phase 5: Queue Output & Summary ### Phase 5: Summary & Status Update
```javascript ```javascript
const queueOutput = agentOutput.output; // Agent already generated queue files, use summary
// Write queue.json
Write('.workflow/issues/queue.json', JSON.stringify(queueOutput, null, 2));
// Update issue statuses in issues.jsonl
const updatedIssues = allIssues.map(issue => {
if (plannedIssues.find(p => p.id === issue.id)) {
return {
...issue,
status: 'queued',
queued_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
}
return issue;
});
Write(issuesPath, updatedIssues.map(i => JSON.stringify(i)).join('\n'));
// Display summary
console.log(` console.log(`
## Queue Formed ## Queue Formed: ${summary.queue_id}
**Total Tasks**: ${queueOutput.queue.length} **Tasks**: ${summary.total_tasks}
**Issues**: ${plannedIssues.length} **Issues**: ${summary.issues_queued.join(', ')}
**Conflicts**: ${queueOutput.conflicts?.length || 0} (${queueOutput._metadata?.resolved_conflicts || 0} resolved) **Groups**: ${summary.execution_groups.map(g => `${g.id}(${g.count})`).join(', ')}
**Conflicts Resolved**: ${summary.conflicts_resolved}
### Execution Groups Next: \`/issue:execute\`
${(queueOutput.execution_groups || []).map(g => {
const type = g.type === 'parallel' ? 'Parallel' : 'Sequential';
return `- ${g.id} (${type}): ${g.task_count} tasks`;
}).join('\n')}
### Next Steps
1. Review queue: \`ccw issue queue list\`
2. Execute: \`/issue:execute\`
`); `);
```
## Queue Schema // Update issue statuses via CLI
for (const issueId of summary.issues_queued) {
Output `queues/{queue-id}.json`: Bash(`ccw issue update ${issueId} --status queued`);
```json
{
"id": "QUE-20251227-143000",
"name": "Auth Feature Queue",
"status": "active",
"issue_ids": ["GH-123", "GH-124"],
"queue": [
{
"queue_id": "Q-001",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task_id": "T1",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.7,
"queued_at": "2025-12-26T10:00:00Z"
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"tasks": ["GH-123:T1", "GH-124:T2"],
"resolution": "sequential",
"resolution_order": ["GH-123:T1", "GH-124:T2"],
"rationale": "T1 creates file before T2 updates",
"resolved": true
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "task_count": 3, "tasks": ["GH-123:T1", "GH-124:T1", "GH-125:T1"] },
{ "id": "S2", "type": "sequential", "task_count": 2, "tasks": ["GH-123:T2", "GH-124:T2"] }
],
"_metadata": {
"version": "2.0",
"total_tasks": 5,
"pending_count": 3,
"completed_count": 2,
"failed_count": 0,
"created_at": "2025-12-26T10:00:00Z",
"updated_at": "2025-12-26T11:00:00Z",
"source": "issue-queue-agent"
}
} }
``` ```
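The `_metadata` counters are derivable from the task list itself, so they can be recomputed rather than maintained by hand; a sketch (hypothetical helper):

```javascript
// Recompute _metadata counters from queue items so the counts never
// drift from the authoritative per-item status fields.
function computeCounts(items) {
  const counts = { total_tasks: items.length, pending_count: 0, completed_count: 0, failed_count: 0 };
  for (const item of items) {
    if (item.status === 'pending') counts.pending_count++;
    else if (item.status === 'completed') counts.completed_count++;
    else if (item.status === 'failed') counts.failed_count++;
  }
  return counts;
}

console.log(computeCounts([
  { status: 'pending' }, { status: 'completed' }, { status: 'completed' }
]));
// → { total_tasks: 3, pending_count: 1, completed_count: 2, failed_count: 0 }
```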
### Queue ID Format
```
QUE-YYYYMMDD-HHMMSS
e.g. QUE-20251227-143052
```
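Generating an ID in that shape from the current UTC time can be sketched as:

```javascript
// Build a QUE-YYYYMMDD-HHMMSS identifier from a UTC timestamp.
function makeQueueId(date = new Date()) {
  const stamp = date.toISOString()   // e.g. 2025-12-27T14:30:52.000Z
    .replace(/[-:]/g, '')            // 20251227T143052.000Z
    .replace(/T/, '-')               // 20251227-143052.000Z
    .slice(0, 15);                   // 20251227-143052
  return `QUE-${stamp}`;
}

console.log(makeQueueId(new Date('2025-12-27T14:30:52Z'))); // → QUE-20251227-143052
```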
## Semantic Priority Rules
| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Config/Types scope | +0.1 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
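A sketch of how those boosts might combine, clamped to the documented 0.0-1.0 range (the 0.5 baseline is an assumption; the table specifies only the deltas):

```javascript
// Apply action/scope boosts from the table onto a neutral 0.5 baseline,
// clamp to [0, 1], and round to two decimals. Baseline is an assumption.
const ACTION_BOOST = { Create: 0.2, Configure: 0.15, Implement: 0.1, Refactor: -0.05, Test: -0.1, Delete: -0.15 };
const SCOPE_BOOST = { config: 0.1, types: 0.1 };

function semanticPriority(task, base = 0.5) {
  const boost = (ACTION_BOOST[task.action] ?? 0) + (SCOPE_BOOST[task.scope] ?? 0);
  const clamped = Math.min(1, Math.max(0, base + boost));
  return Math.round(clamped * 100) / 100;
}

console.log(semanticPriority({ action: 'Create', scope: 'config' })); // → 0.8
console.log(semanticPriority({ action: 'Delete', scope: 'src' }));    // → 0.35
```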
## Error Handling ## Error Handling
| Error | Resolution | | Error | Resolution |
@@ -334,19 +287,6 @@ QUE-YYYYMMDD-HHMMSS
| Unresolved conflicts | Agent resolves using ordering rules | | Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn | | Invalid task reference | Skip and warn |
## Agent Integration
The command uses `issue-queue-agent` which:
1. Builds dependency DAG from task depends_on fields
2. Detects circular dependencies (aborts if found)
3. Identifies file modification conflicts across issues
4. Resolves conflicts using semantic ordering rules
5. Calculates priority (0.0-1.0) for each task
6. Assigns parallel/sequential execution groups
7. Outputs structured queue JSON
See `.claude/agents/issue-queue-agent.md` for agent specification.
## Related Commands ## Related Commands
- `/issue:plan` - Plan issues and bind solutions - `/issue:plan` - Plan issues and bind solutions
View File
@@ -0,0 +1,244 @@
---
name: issue-manage
description: Interactive issue management with menu-driven CRUD operations. Use when managing issues, viewing issue status, editing issue fields, or performing bulk operations on issues. Triggers on "manage issue", "list issues", "edit issue", "delete issue", "bulk update", "issue dashboard".
allowed-tools: Bash, Read, Write, AskUserQuestion, Task, Glob
---
# Issue Management Skill
Interactive menu-driven interface for issue CRUD operations via `ccw issue` CLI.
## Quick Start
Ask me:
- "Show all issues" → List with filters
- "View issue GH-123" → Detailed inspection
- "Edit issue priority" → Modify fields
- "Delete old issues" → Remove with confirmation
- "Bulk update status" → Batch operations
## CLI Endpoints
```bash
# Core operations
ccw issue list # List all issues
ccw issue list <id> --json # Get issue details
ccw issue status <id> # Detailed status
ccw issue init <id> --title "..." # Create issue
ccw issue task <id> --title "..." # Add task
ccw issue bind <id> <solution-id> # Bind solution
# Queue management
ccw issue queue # List current queue
ccw issue queue add <id> # Add to queue
ccw issue queue list # Queue history
ccw issue queue switch <queue-id> # Switch queue
ccw issue queue archive # Archive queue
ccw issue queue delete <queue-id> # Delete queue
ccw issue next # Get next task
ccw issue done <queue-id> # Mark completed
```
## Operations
### 1. LIST 📋
Filter and browse issues:
```
┌─ Filter by Status ─────────────────┐
│ □ All □ Registered │
│ □ Planned □ Queued │
│ □ Executing □ Completed │
└────────────────────────────────────┘
```
**Flow**:
1. Ask filter preferences → `ccw issue list --json`
2. Display table: ID | Status | Priority | Title
3. Select issue for detail view
### 2. VIEW 🔍
Detailed issue inspection:
```
┌─ Issue: GH-123 ─────────────────────┐
│ Title: Fix authentication bug │
│ Status: planned | Priority: P2 │
│ Solutions: 2 (1 bound) │
│ Tasks: 5 pending │
└─────────────────────────────────────┘
```
**Flow**:
1. Fetch `ccw issue status <id> --json`
2. Display issue + solutions + tasks
3. Offer actions: Edit | Plan | Queue | Delete
### 3. EDIT ✏️
Modify issue fields:
| Field | Options |
|-------|---------|
| Title | Free text |
| Priority | P1-P5 |
| Status | registered → completed |
| Context | Problem description |
| Labels | Comma-separated |
**Flow**:
1. Select field to edit
2. Show current value
3. Collect new value via AskUserQuestion
4. Update `.workflow/issues/issues.jsonl`
### 4. DELETE 🗑️
Remove with confirmation:
```
⚠️ Delete issue GH-123?
This will also remove:
- Associated solutions
- Queued tasks
[Delete] [Cancel]
```
**Flow**:
1. Confirm deletion via AskUserQuestion
2. Remove from `issues.jsonl`
3. Clean up `solutions/<id>.jsonl`
4. Remove from `queue.json`
### 5. BULK 📦
Batch operations:
| Operation | Description |
|-----------|-------------|
| Update Status | Change multiple issues |
| Update Priority | Batch priority change |
| Add Labels | Tag multiple issues |
| Delete Multiple | Bulk removal |
| Queue All Planned | Add all planned to queue |
| Retry All Failed | Reset failed tasks |
## Workflow
```
┌──────────────────────────────────────┐
│ Main Menu │
│ ┌────┐ ┌────┐ ┌────┐ ┌────┐ │
│ │List│ │View│ │Edit│ │Bulk│ │
│ └──┬─┘ └──┬─┘ └──┬─┘ └──┬─┘ │
└─────┼──────┼──────┼──────┼──────────┘
│ │ │ │
▼ ▼ ▼ ▼
Filter Detail Fields Multi
Select Actions Update Select
│ │ │ │
└──────┴──────┴──────┘
Back to Menu
```
## Implementation Guide
### Entry Point
```javascript
// Parse input for issue ID
const issueId = input.match(/^([A-Z]+-\d+|ISS-\d+)/i)?.[1];
// Show main menu
await showMainMenu(issueId);
```
### Main Menu Pattern
```javascript
// 1. Fetch dashboard data
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const queue = JSON.parse(Bash('ccw issue queue --json 2>/dev/null') || '{}');
// 2. Display summary
console.log(`Issues: ${issues.length} | Queue: ${queue.pending_count || 0} pending`);
// 3. Ask action via AskUserQuestion
const action = AskUserQuestion({
questions: [{
question: 'What would you like to do?',
header: 'Action',
options: [
{ label: 'List Issues', description: 'Browse with filters' },
{ label: 'View Issue', description: 'Detail view' },
{ label: 'Edit Issue', description: 'Modify fields' },
{ label: 'Bulk Operations', description: 'Batch actions' }
]
}]
});
// 4. Route to handler
```
### Filter Pattern
```javascript
const filter = AskUserQuestion({
questions: [{
question: 'Filter by status?',
header: 'Filter',
multiSelect: true,
options: [
{ label: 'All', description: 'Show all' },
{ label: 'Registered', description: 'Unplanned' },
{ label: 'Planned', description: 'Has solution' },
{ label: 'Executing', description: 'In progress' }
]
}]
});
```
### Edit Pattern
```javascript
// Select field
const field = AskUserQuestion({...});
// Get new value based on field type
// For Priority: show P1-P5 options
// For Status: show status options
// For Title: accept free text via "Other"
// Update file
const issuesPath = '.workflow/issues/issues.jsonl';
// Read → Parse → Update → Write
```
## Data Files
| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |
## Error Handling
| Error | Resolution |
|-------|------------|
| No issues found | Suggest `/issue:new` to create |
| Issue not found | Show available issues, re-prompt |
| Write failure | Check file permissions |
| Queue error | Display ccw error message |
## Related Commands
- `/issue:new` - Create structured issue
- `/issue:plan` - Generate solution
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute tasks
View File
@@ -21,7 +21,7 @@ WHILE task exists:
- TEST: Run task.test commands - TEST: Run task.test commands
- VERIFY: Check task.acceptance criteria - VERIFY: Check task.acceptance criteria
- COMMIT: Stage files, commit with task.commit.message_template - COMMIT: Stage files, commit with task.commit.message_template
3. Report completion via ccw issue complete <queue_id> 3. Report completion via ccw issue complete <item_id>
4. Fetch next task via ccw issue next 4. Fetch next task via ccw issue next
WHEN queue empty: WHEN queue empty:
@@ -37,7 +37,7 @@ ccw issue next
``` ```
This returns JSON with the full task definition: This returns JSON with the full task definition:
- `queue_id`: Unique ID for queue tracking (e.g., "Q-001") - `item_id`: Unique task identifier in queue (e.g., "T-1")
- `issue_id`: Parent issue ID (e.g., "ISSUE-20251227-001") - `issue_id`: Parent issue ID (e.g., "ISSUE-20251227-001")
- `task`: Full task definition with implementation steps - `task`: Full task definition with implementation steps
- `context`: Relevant files and patterns - `context`: Relevant files and patterns
@@ -51,7 +51,7 @@ Expected task structure:
```json ```json
{ {
"queue_id": "Q-001", "item_id": "T-1",
"issue_id": "ISSUE-20251227-001", "issue_id": "ISSUE-20251227-001",
"solution_id": "SOL-001", "solution_id": "SOL-001",
"task": { "task": {
@@ -159,7 +159,7 @@ git add path/to/file1.ts path/to/file2.ts ...
git commit -m "$(cat <<'EOF' git commit -m "$(cat <<'EOF'
[task.commit.message_template] [task.commit.message_template]
Queue-ID: [queue_id] Item-ID: [item_id]
Issue-ID: [issue_id] Issue-ID: [issue_id]
Task-ID: [task.id] Task-ID: [task.id]
EOF EOF
@@ -180,7 +180,7 @@ EOF
After commit succeeds, report to queue system: After commit succeeds, report to queue system:
```bash ```bash
ccw issue complete [queue_id] --result '{ ccw issue complete [item_id] --result '{
"files_modified": ["path1", "path2"], "files_modified": ["path1", "path2"],
"tests_passed": true, "tests_passed": true,
"acceptance_passed": true, "acceptance_passed": true,
@@ -193,7 +193,7 @@ ccw issue complete [queue_id] --result '{
**If task failed and cannot be fixed:** **If task failed and cannot be fixed:**
```bash ```bash
ccw issue fail [queue_id] --reason "Phase [X] failed: [details]" ccw issue fail [item_id] --reason "Phase [X] failed: [details]"
``` ```
## Step 5: Continue to Next Task ## Step 5: Continue to Next Task
@@ -206,7 +206,7 @@ ccw issue next
**Output progress:** **Output progress:**
``` ```
✓ [N/M] Completed: [queue_id] - [task.title] ✓ [N/M] Completed: [item_id] - [task.title]
→ Fetching next task... → Fetching next task...
``` ```
@@ -221,10 +221,10 @@ When `ccw issue next` returns `{ "status": "empty" }`:
**Total Tasks Executed**: N **Total Tasks Executed**: N
**All Commits**: **All Commits**:
| # | Queue ID | Task | Commit | | # | Item ID | Task | Commit |
|---|----------|------|--------| |---|---------|------|--------|
| 1 | Q-001 | Task title | abc123 | | 1 | T-1 | Task title | abc123 |
| 2 | Q-002 | Task title | def456 | | 2 | T-2 | Task title | def456 |
**Files Modified**: **Files Modified**:
- path/to/file1.ts - path/to/file1.ts
View File
@@ -277,6 +277,7 @@ export function run(argv: string[]): void {
.option('--priority <n>', 'Task priority (1-5)') .option('--priority <n>', 'Task priority (1-5)')
.option('--format <fmt>', 'Output format: json, markdown') .option('--format <fmt>', 'Output format: json, markdown')
.option('--json', 'Output as JSON') .option('--json', 'Output as JSON')
.option('--ids', 'List only IDs (one per line, for scripting)')
.option('--force', 'Force operation') .option('--force', 'Force operation')
// New options for solution/queue management // New options for solution/queue management
.option('--solution <path>', 'Solution JSON file path') .option('--solution <path>', 'Solution JSON file path')
View File
@@ -5,7 +5,7 @@
*/ */
import chalk from 'chalk'; import chalk from 'chalk';
import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'fs'; import { existsSync, mkdirSync, readFileSync, writeFileSync, unlinkSync } from 'fs';
import { join, resolve } from 'path'; import { join, resolve } from 'path';
// Handle EPIPE errors gracefully // Handle EPIPE errors gracefully
@@ -29,6 +29,18 @@ interface Issue {
source?: string; source?: string;
source_url?: string; source_url?: string;
labels?: string[]; labels?: string[];
// Agent workflow fields
affected_components?: string[];
lifecycle_requirements?: {
test_strategy?: 'unit' | 'integration' | 'e2e' | 'auto';
regression_scope?: 'full' | 'related' | 'affected';
commit_strategy?: 'per-task' | 'atomic' | 'squash';
};
problem_statement?: string;
expected_behavior?: string;
actual_behavior?: string;
reproduction_steps?: string[];
// Timestamps
created_at: string; created_at: string;
updated_at: string; updated_at: string;
planned_at?: string; planned_at?: string;
@@ -100,17 +112,17 @@ interface Solution {
 }

 interface QueueItem {
-  queue_id: string;
+  item_id: string; // Task item ID in queue: T-1, T-2, ... (formerly queue_id)
   issue_id: string;
   solution_id: string;
   task_id: string;
+  title?: string;
   status: 'pending' | 'ready' | 'executing' | 'completed' | 'failed' | 'blocked';
   execution_order: number;
   execution_group: string;
   depends_on: string[];
   semantic_priority: number;
   assigned_executor: 'codex' | 'gemini' | 'agent';
-  queued_at: string;
   started_at?: string;
   completed_at?: string;
   result?: Record<string, any>;
@@ -118,11 +130,11 @@ interface QueueItem {
 }

 interface Queue {
-  id: string; // Queue unique ID: QUE-YYYYMMDD-HHMMSS
+  id: string; // Queue unique ID: QUE-YYYYMMDD-HHMMSS (derived from filename)
   name?: string; // Optional queue name
   status: 'active' | 'completed' | 'archived' | 'failed';
   issue_ids: string[]; // Issues in this queue
-  queue: QueueItem[];
+  tasks: QueueItem[]; // Task items (formerly 'queue')
   conflicts: any[];
   execution_groups?: any[];
   _metadata: {
@@ -132,13 +144,13 @@ interface Queue {
     executing_count: number;
     completed_count: number;
     failed_count: number;
-    created_at: string;
     updated_at: string;
   };
 }

 interface QueueIndex {
   active_queue_id: string | null;
+  active_item_id: string | null;
   queues: {
     id: string;
     status: string;
@@ -162,6 +174,7 @@ interface IssueOptions {
   json?: boolean;
   force?: boolean;
   fail?: boolean;
+  ids?: boolean; // List only IDs (one per line)
 }

 const ISSUES_DIR = '.workflow/issues';
@@ -278,7 +291,7 @@ function ensureQueuesDir(): void {
 function readQueueIndex(): QueueIndex {
   const path = join(getQueuesDir(), 'index.json');
   if (!existsSync(path)) {
-    return { active_queue_id: null, queues: [] };
+    return { active_queue_id: null, active_item_id: null, queues: [] };
   }
   return JSON.parse(readFileSync(path, 'utf-8'));
 }
@@ -319,16 +332,15 @@ function createEmptyQueue(): Queue {
     id: generateQueueFileId(),
     status: 'active',
     issue_ids: [],
-    queue: [],
+    tasks: [],
     conflicts: [],
     _metadata: {
-      version: '2.0',
+      version: '2.1',
       total_tasks: 0,
       pending_count: 0,
       executing_count: 0,
       completed_count: 0,
       failed_count: 0,
-      created_at: new Date().toISOString(),
       updated_at: new Date().toISOString()
     }
   };
@@ -338,11 +350,11 @@ function writeQueue(queue: Queue): void {
   ensureQueuesDir();

   // Update metadata counts
-  queue._metadata.total_tasks = queue.queue.length;
-  queue._metadata.pending_count = queue.queue.filter(q => q.status === 'pending').length;
-  queue._metadata.executing_count = queue.queue.filter(q => q.status === 'executing').length;
-  queue._metadata.completed_count = queue.queue.filter(q => q.status === 'completed').length;
-  queue._metadata.failed_count = queue.queue.filter(q => q.status === 'failed').length;
+  queue._metadata.total_tasks = queue.tasks.length;
+  queue._metadata.pending_count = queue.tasks.filter(q => q.status === 'pending').length;
+  queue._metadata.executing_count = queue.tasks.filter(q => q.status === 'executing').length;
+  queue._metadata.completed_count = queue.tasks.filter(q => q.status === 'completed').length;
+  queue._metadata.failed_count = queue.tasks.filter(q => q.status === 'failed').length;
   queue._metadata.updated_at = new Date().toISOString();

   // Write queue file
@@ -359,7 +371,7 @@ function writeQueue(queue: Queue): void {
     issue_ids: queue.issue_ids,
     total_tasks: queue._metadata.total_tasks,
     completed_tasks: queue._metadata.completed_count,
-    created_at: queue._metadata.created_at,
+    created_at: queue.id.replace('QUE-', '').replace(/(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})/, '$1-$2-$3T$4:$5:$6Z'), // Derive from ID
     completed_at: queue.status === 'completed' ? new Date().toISOString() : undefined
   };
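With `created_at` dropped from `_metadata`, the timestamp is now derived from the queue ID itself. A standalone sketch of that derivation (an illustration, not the committed code): assuming IDs of the documented form `QUE-YYYYMMDD-HHMMSS`, the pattern must tolerate the hyphen between date and time, which this sketch matches optionally.

```typescript
// Sketch: derive an ISO-8601 timestamp from a queue ID such as
// "QUE-20251227-143318". The "-?" tolerates the hyphen that separates
// the date and time parts of the ID.
function createdAtFromQueueId(id: string): string {
  return id
    .replace('QUE-', '')
    .replace(/(\d{4})(\d{2})(\d{2})-?(\d{2})(\d{2})(\d{2})/, '$1-$2-$3T$4:$5:$6Z');
}
```

For example, `createdAtFromQueueId('QUE-20251227-143318')` yields `'2025-12-27T14:33:18Z'`, so the index entry no longer needs a stored creation time.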
@@ -377,11 +389,11 @@ function writeQueue(queue: Queue): void {
 }

 function generateQueueItemId(queue: Queue): string {
-  const maxNum = queue.queue.reduce((max, q) => {
-    const match = q.queue_id.match(/^Q-(\d+)$/);
+  const maxNum = queue.tasks.reduce((max, q) => {
+    const match = q.item_id.match(/^T-(\d+)$/);
     return match ? Math.max(max, parseInt(match[1])) : max;
   }, 0);
-  return `Q-${String(maxNum + 1).padStart(3, '0')}`;
+  return `T-${maxNum + 1}`;
 }

 // ============ Commands ============
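The hunk above moves item IDs from zero-padded `Q-001` to plain `T-1` numbering. A self-contained sketch of the new scheme, using only the `item_id` field (types simplified from the diff):

```typescript
interface TaskItemRef { item_id: string; }

// Next task-item ID: scan existing "T-n" IDs, take the max, increment.
// IDs that don't match the pattern are ignored rather than breaking the scan.
function nextItemId(tasks: TaskItemRef[]): string {
  const maxNum = tasks.reduce((max, t) => {
    const m = t.item_id.match(/^T-(\d+)$/);
    return m ? Math.max(max, parseInt(m[1], 10)) : max;
  }, 0);
  return `T-${maxNum + 1}`;
}
```

Because the scan keys off the maximum rather than the array length, gaps left by deleted items never cause ID collisions.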
@@ -429,7 +441,19 @@ async function initAction(issueId: string | undefined, options: IssueOptions): P
 async function listAction(issueId: string | undefined, options: IssueOptions): Promise<void> {
   if (!issueId) {
     // List all issues
-    const issues = readIssues();
+    let issues = readIssues();
+
+    // Filter by status if specified
+    if (options.status) {
+      const statuses = options.status.split(',').map(s => s.trim());
+      issues = issues.filter(i => statuses.includes(i.status));
+    }
+
+    // IDs only mode (one per line, for scripting)
+    if (options.ids) {
+      issues.forEach(i => console.log(i.id));
+      return;
+    }

     if (options.json) {
       console.log(JSON.stringify(issues, null, 2));
@@ -519,7 +543,8 @@ async function statusAction(issueId: string | undefined, options: IssueOptions):
   const index = readQueueIndex();

   if (options.json) {
-    console.log(JSON.stringify({ queue: queue._metadata, issues: issues.length, queues: index.queues.length }, null, 2));
+    // Return full queue for programmatic access
+    console.log(JSON.stringify(queue, null, 2));
     return;
   }
@@ -806,7 +831,7 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
   // Archive current queue
   if (subAction === 'archive') {
     const queue = readActiveQueue();
-    if (!queue.id || queue.queue.length === 0) {
+    if (!queue.id || queue.tasks.length === 0) {
       console.log(chalk.yellow('No active queue to archive'));
       return;
     }
@@ -822,6 +847,31 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
     return;
   }

+  // Delete queue from history
+  if ((subAction === 'clear' || subAction === 'delete') && issueId) {
+    const queueId = issueId; // issueId is actually queue ID here
+    const queuePath = join(getQueuesDir(), `${queueId}.json`);
+
+    if (!existsSync(queuePath)) {
+      console.error(chalk.red(`Queue "${queueId}" not found`));
+      process.exit(1);
+    }
+
+    // Remove from index
+    const index = readQueueIndex();
+    index.queues = index.queues.filter(q => q.id !== queueId);
+    if (index.active_queue_id === queueId) {
+      index.active_queue_id = null;
+    }
+    writeQueueIndex(index);
+
+    // Delete queue file
+    unlinkSync(queuePath);
+    console.log(chalk.green(`✓ Deleted queue ${queueId}`));
+    return;
+  }
+
   // Add issue tasks to queue
   if (subAction === 'add' && issueId) {
     const issue = findIssue(issueId);
@@ -839,7 +889,7 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
   // Get or create active queue (create new if current is completed/archived)
   let queue = readActiveQueue();
-  const isNewQueue = queue.queue.length === 0 || queue.status !== 'active';
+  const isNewQueue = queue.tasks.length === 0 || queue.status !== 'active';

   if (queue.status !== 'active') {
     // Create new queue if current is not active
@@ -853,24 +903,23 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
   let added = 0;
   for (const task of solution.tasks) {
-    const exists = queue.queue.some(q => q.issue_id === issueId && q.task_id === task.id);
+    const exists = queue.tasks.some(q => q.issue_id === issueId && q.task_id === task.id);
     if (exists) continue;

-    queue.queue.push({
-      queue_id: generateQueueItemId(queue),
+    queue.tasks.push({
+      item_id: generateQueueItemId(queue),
       issue_id: issueId,
       solution_id: solution.id,
       task_id: task.id,
       status: 'pending',
-      execution_order: queue.queue.length + 1,
+      execution_order: queue.tasks.length + 1,
       execution_group: 'P1',
       depends_on: task.depends_on.map(dep => {
-        const depItem = queue.queue.find(q => q.task_id === dep && q.issue_id === issueId);
-        return depItem?.queue_id || dep;
+        const depItem = queue.tasks.find(q => q.task_id === dep && q.issue_id === issueId);
+        return depItem?.item_id || dep;
       }),
       semantic_priority: 0.5,
-      assigned_executor: task.executor === 'auto' ? 'codex' : task.executor as any,
-      queued_at: new Date().toISOString()
+      assigned_executor: task.executor === 'auto' ? 'codex' : task.executor as any
     });
     added++;
   }
@@ -895,7 +944,7 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
   console.log(chalk.bold.cyan('\nActive Queue\n'));

-  if (!queue.id || queue.queue.length === 0) {
+  if (!queue.id || queue.tasks.length === 0) {
     console.log(chalk.yellow('No active queue'));
     console.log(chalk.gray('Create one: ccw issue queue add <issue-id>'));
     console.log(chalk.gray('Or list history: ccw issue queue list'));
@@ -910,7 +959,7 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
   console.log(chalk.gray('QueueID'.padEnd(10) + 'Issue'.padEnd(15) + 'Task'.padEnd(8) + 'Status'.padEnd(12) + 'Executor'));
   console.log(chalk.gray('-'.repeat(60)));

-  for (const item of queue.queue) {
+  for (const item of queue.tasks) {
     const statusColor = {
       'pending': chalk.gray,
       'ready': chalk.cyan,
@@ -921,7 +970,7 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
     }[item.status] || chalk.white;

     console.log(
-      item.queue_id.padEnd(10) +
+      item.item_id.padEnd(10) +
       item.issue_id.substring(0, 13).padEnd(15) +
       item.task_id.padEnd(8) +
       statusColor(item.status.padEnd(12)) +
@@ -936,15 +985,21 @@ async function queueAction(subAction: string | undefined, issueId: string | unde
 async function nextAction(options: IssueOptions): Promise<void> {
   const queue = readActiveQueue();

-  // Find ready tasks
-  const readyTasks = queue.queue.filter(item => {
+  // Priority 1: Resume executing tasks (interrupted/crashed)
+  const executingTasks = queue.tasks.filter(item => item.status === 'executing');
+
+  // Priority 2: Find pending tasks with satisfied dependencies
+  const pendingTasks = queue.tasks.filter(item => {
     if (item.status !== 'pending') return false;
     return item.depends_on.every(depId => {
-      const dep = queue.queue.find(q => q.queue_id === depId);
+      const dep = queue.tasks.find(q => q.item_id === depId);
       return !dep || dep.status === 'completed';
     });
   });

+  // Combine: executing first, then pending
+  const readyTasks = [...executingTasks, ...pendingTasks];
+
   if (readyTasks.length === 0) {
     console.log(JSON.stringify({
       status: 'empty',
@@ -957,6 +1012,7 @@ async function nextAction(options: IssueOptions): Promise<void> {
   // Sort by execution order
   readyTasks.sort((a, b) => a.execution_order - b.execution_order);
   const nextItem = readyTasks[0];
+  const isResume = nextItem.status === 'executing';

   // Load task definition
   const solution = findSolution(nextItem.issue_id, nextItem.solution_id);
@@ -967,24 +1023,42 @@ async function nextAction(options: IssueOptions): Promise<void> {
     process.exit(1);
   }

-  // Mark as executing
-  const idx = queue.queue.findIndex(q => q.queue_id === nextItem.queue_id);
-  queue.queue[idx].status = 'executing';
-  queue.queue[idx].started_at = new Date().toISOString();
-  writeQueue(queue);
-
-  // Update issue status
-  updateIssue(nextItem.issue_id, { status: 'executing' });
+  // Only update status if not already executing (new task)
+  if (!isResume) {
+    const idx = queue.tasks.findIndex(q => q.item_id === nextItem.item_id);
+    queue.tasks[idx].status = 'executing';
+    queue.tasks[idx].started_at = new Date().toISOString();
+    writeQueue(queue);
+    updateIssue(nextItem.issue_id, { status: 'executing' });
+  }
+
+  // Calculate queue stats for context
+  const stats = {
+    total: queue.tasks.length,
+    completed: queue.tasks.filter(q => q.status === 'completed').length,
+    failed: queue.tasks.filter(q => q.status === 'failed').length,
+    executing: executingTasks.length,
+    pending: pendingTasks.length
+  };
+  const remaining = stats.pending + stats.executing;

   console.log(JSON.stringify({
-    queue_id: nextItem.queue_id,
+    item_id: nextItem.item_id,
     issue_id: nextItem.issue_id,
     solution_id: nextItem.solution_id,
     task: taskDef,
     context: solution?.exploration_context || {},
+    resumed: isResume,
+    resume_note: isResume ? `Resuming interrupted task (started: ${nextItem.started_at})` : undefined,
     execution_hints: {
       executor: nextItem.assigned_executor,
       estimated_minutes: taskDef.estimated_minutes || 30
-    }
+    },
+    queue_progress: {
+      completed: stats.completed,
+      remaining: remaining,
+      total: stats.total,
+      progress: `${stats.completed}/${stats.total}`
+    }
   }, null, 2));
 }
@@ -1000,7 +1074,7 @@ async function doneAction(queueId: string | undefined, options: IssueOptions): P
   }

   const queue = readActiveQueue();
-  const idx = queue.queue.findIndex(q => q.queue_id === queueId);
+  const idx = queue.tasks.findIndex(q => q.item_id === queueId);

   if (idx === -1) {
     console.error(chalk.red(`Queue item "${queueId}" not found`));
@@ -1008,22 +1082,22 @@ async function doneAction(queueId: string | undefined, options: IssueOptions): P
   }

   const isFail = options.fail;
-  queue.queue[idx].status = isFail ? 'failed' : 'completed';
-  queue.queue[idx].completed_at = new Date().toISOString();
+  queue.tasks[idx].status = isFail ? 'failed' : 'completed';
+  queue.tasks[idx].completed_at = new Date().toISOString();

   if (isFail) {
-    queue.queue[idx].failure_reason = options.reason || 'Unknown failure';
+    queue.tasks[idx].failure_reason = options.reason || 'Unknown failure';
   } else if (options.result) {
     try {
-      queue.queue[idx].result = JSON.parse(options.result);
+      queue.tasks[idx].result = JSON.parse(options.result);
     } catch {
       console.warn(chalk.yellow('Warning: Could not parse result JSON'));
     }
   }

   // Check if all issue tasks are complete
-  const issueId = queue.queue[idx].issue_id;
-  const issueTasks = queue.queue.filter(q => q.issue_id === issueId);
+  const issueId = queue.tasks[idx].issue_id;
+  const issueTasks = queue.tasks.filter(q => q.issue_id === issueId);
   const allIssueComplete = issueTasks.every(q => q.status === 'completed');
   const anyIssueFailed = issueTasks.some(q => q.status === 'failed');
@@ -1039,13 +1113,13 @@ async function doneAction(queueId: string | undefined, options: IssueOptions): P
   }

   // Check if entire queue is complete
-  const allQueueComplete = queue.queue.every(q => q.status === 'completed');
-  const anyQueueFailed = queue.queue.some(q => q.status === 'failed');
+  const allQueueComplete = queue.tasks.every(q => q.status === 'completed');
+  const anyQueueFailed = queue.tasks.some(q => q.status === 'failed');

   if (allQueueComplete) {
     queue.status = 'completed';
     console.log(chalk.green(`\n✓ Queue ${queue.id} completed (all tasks done)`));
-  } else if (anyQueueFailed && queue.queue.every(q => q.status === 'completed' || q.status === 'failed')) {
+  } else if (anyQueueFailed && queue.tasks.every(q => q.status === 'completed' || q.status === 'failed')) {
     queue.status = 'failed';
     console.log(chalk.yellow(`\n⚠ Queue ${queue.id} has failed tasks`));
   }
@@ -1054,19 +1128,20 @@ async function doneAction(queueId: string | undefined, options: IssueOptions): P
 }

 /**
- * retry - Retry failed tasks
+ * retry - Reset failed tasks to pending for re-execution
  */
 async function retryAction(issueId: string | undefined, options: IssueOptions): Promise<void> {
   const queue = readActiveQueue();

-  if (!queue.id || queue.queue.length === 0) {
+  if (!queue.id || queue.tasks.length === 0) {
     console.log(chalk.yellow('No active queue'));
     return;
   }

   let updated = 0;
-  for (const item of queue.queue) {
+  for (const item of queue.tasks) {
+    // Retry failed tasks only
     if (item.status === 'failed') {
       if (!issueId || item.issue_id === issueId) {
         item.status = 'pending';
@@ -1080,6 +1155,7 @@ async function retryAction(issueId: string | undefined, options: IssueOptions):
   if (updated === 0) {
     console.log(chalk.yellow('No failed tasks to retry'));
+    console.log(chalk.gray('Note: Interrupted (executing) tasks are auto-resumed by "ccw issue next"'));
     return;
   }
@@ -1160,6 +1236,7 @@ export async function issueCommand(
     console.log(chalk.gray('  queue add <issue-id>     Add issue to active queue (or create new)'));
     console.log(chalk.gray('  queue switch <queue-id>  Switch active queue'));
     console.log(chalk.gray('  queue archive            Archive current queue'));
+    console.log(chalk.gray('  queue delete <queue-id>  Delete queue from history'));
     console.log(chalk.gray('  retry [issue-id]         Retry failed tasks'));
     console.log();
     console.log(chalk.bold('Execution Endpoints:'));
@@ -1169,6 +1246,8 @@ export async function issueCommand(
     console.log();
     console.log(chalk.bold('Options:'));
     console.log(chalk.gray('  --title <title>     Issue/task title'));
+    console.log(chalk.gray('  --status <status>   Filter by status (comma-separated)'));
+    console.log(chalk.gray('  --ids               List only IDs (one per line)'));
     console.log(chalk.gray('  --solution <path>   Solution JSON file'));
     console.log(chalk.gray('  --result <json>     Execution result'));
     console.log(chalk.gray('  --reason <text>     Failure reason'));
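The reworked `nextAction` above resumes interrupted tasks alongside dispatching new ones: executing tasks are gathered first, then pending tasks whose dependencies are all completed, and the combined list is ordered by `execution_order`. A condensed, runnable sketch of that selection logic (types and names simplified from the diff):

```typescript
type Status = 'pending' | 'ready' | 'executing' | 'completed' | 'failed' | 'blocked';

interface Item {
  item_id: string;
  status: Status;
  depends_on: string[];
  execution_order: number;
}

// Pick the next task to hand to an executor: interrupted (executing) tasks
// are candidates for resumption together with pending tasks whose
// dependencies are complete; the combined list is sorted by execution_order.
function pickNext(tasks: Item[]): Item | undefined {
  const executing = tasks.filter(t => t.status === 'executing');
  const pending = tasks.filter(t =>
    t.status === 'pending' &&
    t.depends_on.every(id => {
      const dep = tasks.find(q => q.item_id === id);
      return !dep || dep.status === 'completed'; // unknown deps are not blocking
    })
  );
  return [...executing, ...pending].sort((a, b) => a.execution_order - b.execution_order)[0];
}
```

Since `Array.prototype.sort` is stable, an executing task keeps its place ahead of a pending task with the same `execution_order`, which is what makes crash recovery ("stuck task reset") safe here.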

View File

@@ -5,7 +5,9 @@
  * Storage Structure:
  * .workflow/issues/
  *   ├── issues.jsonl          # All issues (one per line)
- *   ├── queue.json            # Execution queue
+ *   ├── queues/               # Queue history directory
+ *   │   ├── index.json        # Queue index (active + history)
+ *   │   └── {queue-id}.json   # Individual queue files
  *   └── solutions/
  *       ├── {issue-id}.jsonl  # Solutions for issue (one per line)
  *       └── ...
@@ -102,12 +104,12 @@ function readQueue(issuesDir: string) {
     }
   }
-  return { queue: [], conflicts: [], execution_groups: [], _metadata: { version: '1.0', total_tasks: 0 } };
+  return { tasks: [], conflicts: [], execution_groups: [], _metadata: { version: '1.0', total_tasks: 0 } };
 }

 function writeQueue(issuesDir: string, queue: any) {
   if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
-  queue._metadata = { ...queue._metadata, updated_at: new Date().toISOString(), total_tasks: queue.queue?.length || 0 };
+  queue._metadata = { ...queue._metadata, updated_at: new Date().toISOString(), total_tasks: queue.tasks?.length || 0 };

   // Check if using new multi-queue structure
   const queuesDir = join(issuesDir, 'queues');
@@ -123,8 +125,8 @@ function writeQueue(issuesDir: string, queue: any) {
       const index = JSON.parse(readFileSync(indexPath, 'utf8'));
       const queueEntry = index.queues?.find((q: any) => q.id === queue.id);
       if (queueEntry) {
-        queueEntry.total_tasks = queue.queue?.length || 0;
-        queueEntry.completed_tasks = queue.queue?.filter((i: any) => i.status === 'completed').length || 0;
+        queueEntry.total_tasks = queue.tasks?.length || 0;
+        queueEntry.completed_tasks = queue.tasks?.filter((i: any) => i.status === 'completed').length || 0;
         writeFileSync(indexPath, JSON.stringify(index, null, 2));
       }
     } catch {
@@ -151,15 +153,29 @@ function getIssueDetail(issuesDir: string, issueId: string) {
 }

 function enrichIssues(issues: any[], issuesDir: string) {
-  return issues.map(issue => ({
+  return issues.map(issue => {
+    const solutions = readSolutionsJsonl(issuesDir, issue.id);
+    let taskCount = 0;
+
+    // Get task count from bound solution
+    if (issue.bound_solution_id) {
+      const boundSol = solutions.find(s => s.id === issue.bound_solution_id);
+      if (boundSol?.tasks) {
+        taskCount = boundSol.tasks.length;
+      }
+    }
+
+    return {
       ...issue,
-    solution_count: readSolutionsJsonl(issuesDir, issue.id).length
-  }));
+      solution_count: solutions.length,
+      task_count: taskCount
+    };
+  });
 }

 function groupQueueByExecutionGroup(queue: any) {
   const groups: { [key: string]: any[] } = {};
-  for (const item of queue.queue || []) {
+  for (const item of queue.tasks || []) {
     const groupId = item.execution_group || 'ungrouped';
     if (!groups[groupId]) groups[groupId] = [];
     groups[groupId].push(item);
@@ -171,7 +187,7 @@ function groupQueueByExecutionGroup(queue: any) {
     id,
     type: id.startsWith('P') ? 'parallel' : id.startsWith('S') ? 'sequential' : 'unknown',
     task_count: items.length,
-    tasks: items.map(i => i.queue_id)
+    tasks: items.map(i => i.item_id)
   })).sort((a, b) => {
     const aFirst = groups[a.id]?.[0]?.execution_order || 0;
     const bFirst = groups[b.id]?.[0]?.execution_order || 0;
@@ -229,20 +245,20 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
       }

       const queue = readQueue(issuesDir);
-      const groupItems = queue.queue.filter((item: any) => item.execution_group === groupId);
-      const otherItems = queue.queue.filter((item: any) => item.execution_group !== groupId);
+      const groupItems = queue.tasks.filter((item: any) => item.execution_group === groupId);
+      const otherItems = queue.tasks.filter((item: any) => item.execution_group !== groupId);
       if (groupItems.length === 0) return { error: `No items in group ${groupId}` };

-      const groupQueueIds = new Set(groupItems.map((i: any) => i.queue_id));
-      if (groupQueueIds.size !== new Set(newOrder).size) {
+      const groupItemIds = new Set(groupItems.map((i: any) => i.item_id));
+      if (groupItemIds.size !== new Set(newOrder).size) {
         return { error: 'newOrder must contain all group items' };
       }
       for (const id of newOrder) {
-        if (!groupQueueIds.has(id)) return { error: `Invalid queue_id: ${id}` };
+        if (!groupItemIds.has(id)) return { error: `Invalid item_id: ${id}` };
       }

-      const itemMap = new Map(groupItems.map((i: any) => [i.queue_id, i]));
+      const itemMap = new Map(groupItems.map((i: any) => [i.item_id, i]));
       const reorderedItems = newOrder.map((qid: string, idx: number) => ({ ...itemMap.get(qid), _idx: idx }));
       const newQueue = [...otherItems, ...reorderedItems].sort((a, b) => {
         const aGroup = parseInt(a.execution_group?.match(/\d+/)?.[0] || '999');
@@ -255,7 +271,7 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
       });
       newQueue.forEach((item, idx) => { item.execution_order = idx + 1; delete item._idx; });
-      queue.queue = newQueue;
+      queue.tasks = newQueue;
       writeQueue(issuesDir, queue);
       return { success: true, groupId, reordered: newOrder.length };
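With the rename from `queue.queue` to `queue.tasks`, the dashboard's grouping helper walks `tasks` and buckets item IDs by `execution_group`. A minimal sketch of that bucketing, reduced to the part the rename touches (field names follow the new structure):

```typescript
interface GroupedTask { item_id: string; execution_group?: string; }

// Bucket task item IDs by execution group; items without a group fall
// back to an "ungrouped" bucket, mirroring groupQueueByExecutionGroup.
function groupTasks(tasks: GroupedTask[]): Record<string, string[]> {
  const groups: Record<string, string[]> = {};
  for (const t of tasks) {
    const g = t.execution_group || 'ungrouped';
    if (!groups[g]) groups[g] = [];
    groups[g].push(t.item_id);
  }
  return groups;
}
```

Group IDs starting with `P` are rendered as parallel groups and `S` as sequential, so the bucket keys double as execution semantics in the dashboard.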

View File

@@ -6,7 +6,7 @@
 // ========== Issue State ==========
 var issueData = {
   issues: [],
-  queue: { queue: [], conflicts: [], execution_groups: [], grouped_items: {} },
+  queue: { tasks: [], conflicts: [], execution_groups: [], grouped_items: {} },
   selectedIssue: null,
   selectedSolution: null,
   selectedSolutionIssueId: null,
@@ -65,7 +65,7 @@ async function loadQueueData() {
     issueData.queue = await response.json();
   } catch (err) {
     console.error('Failed to load queue:', err);
-    issueData.queue = { queue: [], conflicts: [], execution_groups: [], grouped_items: {} };
+    issueData.queue = { tasks: [], conflicts: [], execution_groups: [], grouped_items: {} };
   }
 }
@@ -360,7 +360,7 @@ function filterIssuesByStatus(status) {
 // ========== Queue Section ==========
 function renderQueueSection() {
   const queue = issueData.queue;
-  const queueItems = queue.queue || [];
+  const queueItems = queue.tasks || [];
   const metadata = queue._metadata || {};

   // Check if queue is empty
@@ -530,10 +530,10 @@ function renderQueueItem(item, index, total) {
   return `
     <div class="queue-item ${statusColors[item.status] || ''}"
          draggable="true"
-         data-queue-id="${item.queue_id}"
+         data-item-id="${item.item_id}"
          data-group-id="${item.execution_group}"
-         onclick="openQueueItemDetail('${item.queue_id}')">
-      <span class="queue-item-id font-mono text-xs">${item.queue_id}</span>
+         onclick="openQueueItemDetail('${item.item_id}')">
+      <span class="queue-item-id font-mono text-xs">${item.item_id}</span>
       <span class="queue-item-issue text-xs text-muted-foreground">${item.issue_id}</span>
       <span class="queue-item-task text-sm">${item.task_id}</span>
       <span class="queue-item-priority" style="opacity: ${item.semantic_priority || 0.5}">
@@ -586,12 +586,12 @@ function handleIssueDragStart(e) {
   const item = e.target.closest('.queue-item');
   if (!item) return;

-  issueDragState.dragging = item.dataset.queueId;
+  issueDragState.dragging = item.dataset.itemId;
   issueDragState.groupId = item.dataset.groupId;
   item.classList.add('dragging');
   e.dataTransfer.effectAllowed = 'move';
-  e.dataTransfer.setData('text/plain', item.dataset.queueId);
+  e.dataTransfer.setData('text/plain', item.dataset.itemId);
 }

 function handleIssueDragEnd(e) {
@@ -610,7 +610,7 @@ function handleIssueDragOver(e) {
   e.preventDefault();
   const target = e.target.closest('.queue-item');
-  if (!target || target.dataset.queueId === issueDragState.dragging) return;
+  if (!target || target.dataset.itemId === issueDragState.dragging) return;

   // Only allow drag within same group
   if (target.dataset.groupId !== issueDragState.groupId) {
@@ -635,7 +635,7 @@ function handleIssueDrop(e) {
// Get new order // Get new order
const items = Array.from(container.querySelectorAll('.queue-item')); const items = Array.from(container.querySelectorAll('.queue-item'));
const draggedItem = items.find(i => i.dataset.queueId === issueDragState.dragging); const draggedItem = items.find(i => i.dataset.itemId === issueDragState.dragging);
const targetIndex = items.indexOf(target); const targetIndex = items.indexOf(target);
const draggedIndex = items.indexOf(draggedItem); const draggedIndex = items.indexOf(draggedItem);
@@ -649,7 +649,7 @@ function handleIssueDrop(e) {
} }
// Get new order and save // Get new order and save
const newOrder = Array.from(container.querySelectorAll('.queue-item')).map(i => i.dataset.queueId); const newOrder = Array.from(container.querySelectorAll('.queue-item')).map(i => i.dataset.itemId);
saveQueueOrder(issueDragState.groupId, newOrder); saveQueueOrder(issueDragState.groupId, newOrder);
} }
@@ -767,7 +767,7 @@ function renderIssueDetailPanel(issue) {
<div class="flex items-center justify-between"> <div class="flex items-center justify-between">
<span class="font-mono text-sm">${task.id}</span> <span class="font-mono text-sm">${task.id}</span>
<select class="task-status-select" onchange="updateTaskStatus('${issue.id}', '${task.id}', this.value)"> <select class="task-status-select" onchange="updateTaskStatus('${issue.id}', '${task.id}', this.value)">
${['pending', 'ready', 'in_progress', 'completed', 'failed', 'paused', 'skipped'].map(s => ${['pending', 'ready', 'executing', 'completed', 'failed', 'blocked', 'paused', 'skipped'].map(s =>
`<option value="${s}" ${task.status === s ? 'selected' : ''}>${s}</option>` `<option value="${s}" ${task.status === s ? 'selected' : ''}>${s}</option>`
).join('')} ).join('')}
</select> </select>
@@ -1145,8 +1145,8 @@ function escapeHtml(text) {
return div.innerHTML; return div.innerHTML;
} }
function openQueueItemDetail(queueId) { function openQueueItemDetail(itemId) {
const item = issueData.queue.queue?.find(q => q.queue_id === queueId); const item = issueData.queue.tasks?.find(q => q.item_id === itemId);
if (item) { if (item) {
openIssueDetail(item.issue_id); openIssueDetail(item.issue_id);
} }

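The renamed lookup at the end of the hunk above can be exercised in isolation. This is a minimal sketch assuming the post-refactor `queue.tasks` shape shown in the diff; the sample ids (`Q-001`, `ISS-42`, …) are invented placeholders, and `findIssueForItem` is a hypothetical helper standing in for the lookup inside `openQueueItemDetail`:

```javascript
// Stand-in for the dashboard's issueData, using the post-refactor shape:
// queue.tasks is an array of items keyed by item_id (was queue.queue / queue_id).
const issueData = {
  queue: {
    tasks: [
      { item_id: 'Q-001', issue_id: 'ISS-42', task_id: 'T1' },
      { item_id: 'Q-002', issue_id: 'ISS-43', task_id: 'T2' },
    ],
  },
};

// Mirrors the refactored lookup: find the queue item by item_id,
// then resolve its owning issue; returns null for unknown ids.
function findIssueForItem(itemId) {
  const item = issueData.queue.tasks?.find(q => q.item_id === itemId);
  return item ? item.issue_id : null;
}

console.log(findIssueForItem('Q-002')); // → ISS-43
console.log(findIssueForItem('missing')); // → null
```

The optional chaining (`?.`) preserves the original code's tolerance for a queue file that has not been generated yet.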
package-lock.json (generated)
@@ -1,12 +1,12 @@
 {
   "name": "claude-code-workflow",
-  "version": "6.2.9",
+  "version": "6.3.8",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "claude-code-workflow",
-      "version": "6.2.9",
+      "version": "6.3.8",
       "license": "MIT",
       "dependencies": {
         "@modelcontextprotocol/sdk": "^1.0.4",


@@ -1,6 +1,6 @@
 {
   "name": "claude-code-workflow",
-  "version": "6.3.6",
+  "version": "6.3.8",
   "description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
   "type": "module",
   "main": "ccw/src/index.js",