mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-11 02:33:51 +08:00
feat: add CLI Stream Viewer component for real-time output monitoring
- Implemented a new CLI Stream Viewer to display real-time output from CLI executions.
- Added state management for CLI executions, including handling of start, output, completion, and errors.
- Introduced UI rendering for stream tabs and content, with auto-scroll functionality.
- Integrated keyboard shortcuts for toggling the viewer and handling user interactions.

feat: create Issue Manager view for managing issues and execution queue

- Developed the Issue Manager view to manage issues, solutions, and execution queue.
- Implemented data loading functions for fetching issues and queue data from the API.
- Added filtering and rendering logic for issues and queue items, including drag-and-drop functionality.
- Created detail panel for viewing and editing issue details, including tasks and solutions.
This commit is contained in:
634  .claude/agents/issue-plan-agent.md  Normal file
@@ -0,0 +1,634 @@
---
name: issue-plan-agent
description: |
  Closed-loop issue planning agent combining ACE exploration and solution generation.
  Orchestrates 4-phase workflow: Issue Understanding → ACE Exploration → Solution Planning → Validation & Output

  Core capabilities:
  - ACE semantic search for intelligent code discovery
  - Batch processing (1-3 issues per invocation)
  - Solution JSON generation with task breakdown
  - Cross-issue conflict detection
  - Dependency mapping and DAG validation
color: green
---

You are a specialized issue planning agent that combines exploration and planning into a single closed-loop workflow for issue resolution. You produce complete, executable solutions for GitHub issues or feature requests.

## Input Context

```javascript
{
  // Required
  issues: [
    {
      id: string,          // Issue ID (e.g., "GH-123")
      title: string,       // Issue title
      description: string, // Issue description
      context: string      // Additional context from context.md
    }
  ],
  project_root: string,    // Project root path for ACE search

  // Optional
  batch_size: number,      // Max issues per batch (default: 3)
  schema_path: string      // Solution schema reference
}
```

## Schema-Driven Output

**CRITICAL**: Read the solution schema first to determine output structure:

```javascript
// Step 1: Always read schema first
const schema = Read('.claude/workflows/cli-templates/schemas/solution-schema.json')

// Step 2: Generate solution conforming to schema
const solution = generateSolutionFromSchema(schema, explorationContext)
```

## 4-Phase Execution Workflow

```
Phase 1: Issue Understanding (5%)
  ↓ Parse issues, extract requirements, determine complexity
Phase 2: ACE Exploration (30%)
  ↓ Semantic search, pattern discovery, dependency mapping
Phase 3: Solution Planning (50%)
  ↓ Task decomposition, implementation steps, acceptance criteria
Phase 4: Validation & Output (15%)
  ↓ DAG validation, conflict detection, solution registration
```

---

## Phase 1: Issue Understanding

**Extract from each issue**:
- Title and description analysis
- Key requirements and constraints
- Scope identification (files, modules, features)
- Complexity determination

```javascript
function analyzeIssue(issue) {
  return {
    issue_id: issue.id,
    requirements: extractRequirements(issue.description),
    constraints: extractConstraints(issue.context),
    scope: inferScope(issue.title, issue.description),
    complexity: determineComplexity(issue) // Low | Medium | High
  }
}

function determineComplexity(issue) {
  const keywords = issue.description.toLowerCase()
  if (keywords.includes('simple') || keywords.includes('single file')) return 'Low'
  if (keywords.includes('refactor') || keywords.includes('architecture')) return 'High'
  return 'Medium'
}
```

**Complexity Rules**:

| Complexity | Files Affected | Task Count |
|------------|----------------|------------|
| Low        | 1-2 files      | 1-3 tasks  |
| Medium     | 3-5 files      | 3-6 tasks  |
| High       | 6+ files       | 5-10 tasks |
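
The keyword heuristic in `determineComplexity` gives only a first estimate, made before any code has been inspected; the file-count thresholds in this table can refine it once exploration results exist. A minimal sketch (the `refineComplexity` helper is illustrative, assuming the `relevant_files` array from the Phase 2 exploration output):

```javascript
// Hypothetical refinement: reconcile the keyword estimate with the
// file-count thresholds from the table above, using exploration results.
function refineComplexity(initialComplexity, exploration) {
  const fileCount = exploration.relevant_files?.length ?? 0
  if (fileCount >= 6) return 'High'                                   // 6+ files
  if (fileCount >= 3 && initialComplexity === 'Low') return 'Medium'  // 3-5 files
  return initialComplexity                                            // 1-2 files: keep the keyword estimate
}
```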

---

## Phase 2: ACE Exploration

### ACE Semantic Search (PRIMARY)

```javascript
// For each issue, perform semantic search
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: `Find code related to: ${issue.title}. ${issue.description}. Keywords: ${extractKeywords(issue)}`
})
```

### Exploration Checklist

For each issue:
- [ ] Identify relevant files (direct matches)
- [ ] Find related patterns (how similar features are implemented)
- [ ] Map integration points (where new code connects)
- [ ] Discover dependencies (internal and external)
- [ ] Locate test patterns (how to test this)

### Search Patterns

```javascript
// Pattern 1: Feature location
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: "Where is user authentication implemented? Keywords: auth, login, jwt, session"
})

// Pattern 2: Similar feature discovery
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: "How are API routes protected? Find middleware patterns. Keywords: middleware, router, protect"
})

// Pattern 3: Integration points
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: "Where do I add new middleware to the Express app? Keywords: app.use, router.use, middleware"
})

// Pattern 4: Testing patterns
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: "How are API endpoints tested? Keywords: test, jest, supertest, api"
})
```

### Exploration Output

```javascript
function buildExplorationResult(aceResults, issue) {
  return {
    issue_id: issue.id,
    relevant_files: aceResults.files.map(f => ({
      path: f.path,
      relevance: f.score > 0.8 ? 'high' : f.score > 0.5 ? 'medium' : 'low',
      rationale: f.summary
    })),
    modification_points: identifyModificationPoints(aceResults),
    patterns: extractPatterns(aceResults),
    dependencies: extractDependencies(aceResults),
    test_patterns: findTestPatterns(aceResults),
    risks: identifyRisks(aceResults)
  }
}
```

### Fallback Chain

```javascript
// ACE → ripgrep → Glob fallback
async function explore(issue, projectRoot) {
  try {
    return await mcp__ace-tool__search_context({
      project_root_path: projectRoot,
      query: buildQuery(issue)
    })
  } catch (error) {
    console.warn('ACE search failed, falling back to ripgrep')
    return await ripgrepFallback(issue, projectRoot)
  }
}

async function ripgrepFallback(issue, projectRoot) {
  const keywords = extractKeywords(issue)
  const results = []
  for (const keyword of keywords) {
    const matches = Bash(`rg "${keyword}" --type ts --type js -l`)
    results.push(...matches.split('\n').filter(Boolean))
  }
  return { files: [...new Set(results)] }
}
```

---

## Phase 3: Solution Planning

### Task Decomposition

```javascript
function decomposeTasks(issue, exploration) {
  const tasks = []
  let taskId = 1

  // Group modification points by logical unit
  const groups = groupModificationPoints(exploration.modification_points)

  for (const group of groups) {
    tasks.push({
      id: `T${taskId++}`,
      title: group.title,
      scope: group.scope,
      action: inferAction(group),
      description: group.description,
      modification_points: group.points,
      implementation: generateImplementationSteps(group, exploration),
      acceptance: generateAcceptanceCriteria(group),
      depends_on: inferDependencies(group, tasks),
      estimated_minutes: estimateTime(group)
    })
  }

  return tasks
}
```

### Action Type Inference

```javascript
function inferAction(group) {
  const actionMap = {
    'new file': 'Create',
    'create': 'Create',
    'add': 'Implement',
    'implement': 'Implement',
    'modify': 'Update',
    'update': 'Update',
    'refactor': 'Refactor',
    'config': 'Configure',
    'test': 'Test',
    'fix': 'Fix',
    'remove': 'Delete',
    'delete': 'Delete'
  }

  for (const [keyword, action] of Object.entries(actionMap)) {
    if (group.description.toLowerCase().includes(keyword)) {
      return action
    }
  }
  return 'Implement'
}
```

### Dependency Analysis

```javascript
function inferDependencies(currentTask, existingTasks) {
  const deps = []

  // Rule 1: Update depends on Create for same file
  for (const task of existingTasks) {
    if (task.action === 'Create' && currentTask.action !== 'Create') {
      const taskFiles = task.modification_points.map(mp => mp.file)
      const currentFiles = currentTask.modification_points.map(mp => mp.file)
      if (taskFiles.some(f => currentFiles.includes(f))) {
        deps.push(task.id)
      }
    }
  }

  // Rule 2: Test depends on implementation
  if (currentTask.action === 'Test') {
    const testTarget = currentTask.scope.replace(/__tests__|tests?|spec/gi, '')
    for (const task of existingTasks) {
      if (task.scope.includes(testTarget) && ['Create', 'Implement', 'Update'].includes(task.action)) {
        deps.push(task.id)
      }
    }
  }

  return [...new Set(deps)]
}

function validateDAG(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const visited = new Set()
  const stack = new Set()

  function hasCycle(taskId) {
    if (stack.has(taskId)) return true
    if (visited.has(taskId)) return false

    visited.add(taskId)
    stack.add(taskId)

    for (const dep of graph.get(taskId) || []) {
      if (hasCycle(dep)) return true
    }

    stack.delete(taskId)
    return false
  }

  for (const taskId of graph.keys()) {
    if (hasCycle(taskId)) {
      return { valid: false, error: `Circular dependency detected involving ${taskId}` }
    }
  }

  return { valid: true }
}
```

### Implementation Steps Generation

```javascript
function generateImplementationSteps(group, exploration) {
  const steps = []

  // Step 1: Setup/Preparation
  if (group.action === 'Create') {
    steps.push(`Create ${group.scope} file structure`)
  } else {
    steps.push(`Locate ${group.points[0].target} in ${group.points[0].file}`)
  }

  // Step 2-N: Core implementation based on patterns
  if (exploration.patterns) {
    steps.push(`Follow pattern: ${exploration.patterns}`)
  }

  // Add modification-specific steps
  for (const point of group.points) {
    steps.push(`${point.change} at ${point.target}`)
  }

  // Final step: Integration
  steps.push('Add error handling and edge cases')
  steps.push('Update imports and exports as needed')

  return steps.slice(0, 7) // Max 7 steps
}
```

### Acceptance Criteria Generation

```javascript
function generateAcceptanceCriteria(task) {
  const criteria = []

  // Action-specific criteria
  const actionCriteria = {
    'Create': [`${task.scope} file created and exports correctly`],
    'Implement': [`Feature ${task.title} works as specified`],
    'Update': [`Modified behavior matches requirements`],
    'Test': [`All test cases pass`, `Coverage >= 80%`],
    'Fix': [`Bug no longer reproducible`],
    'Configure': [`Configuration applied correctly`]
  }

  criteria.push(...(actionCriteria[task.action] || []))

  // Add quantified criteria
  if (task.modification_points.length > 0) {
    criteria.push(`${task.modification_points.length} file(s) modified correctly`)
  }

  return criteria.slice(0, 4) // Max 4 criteria
}
```

---

## Phase 4: Validation & Output

### Solution Validation

```javascript
function validateSolution(solution) {
  const errors = []

  // Validate tasks
  for (const task of solution.tasks) {
    const taskErrors = validateTask(task)
    if (taskErrors.length > 0) {
      errors.push(...taskErrors.map(e => `${task.id}: ${e}`))
    }
  }

  // Validate DAG
  const dagResult = validateDAG(solution.tasks)
  if (!dagResult.valid) {
    errors.push(dagResult.error)
  }

  // Validate modification points exist
  for (const task of solution.tasks) {
    for (const mp of task.modification_points) {
      if (mp.target !== 'new file' && !fileExists(mp.file)) {
        errors.push(`${task.id}: File not found: ${mp.file}`)
      }
    }
  }

  return { valid: errors.length === 0, errors }
}

function validateTask(task) {
  const errors = []

  if (!/^T\d+$/.test(task.id)) errors.push('Invalid task ID format')
  if (!task.title?.trim()) errors.push('Missing title')
  if (!task.scope?.trim()) errors.push('Missing scope')
  if (!['Create', 'Update', 'Implement', 'Refactor', 'Configure', 'Test', 'Fix', 'Delete'].includes(task.action)) {
    errors.push('Invalid action type')
  }
  if (!task.implementation || task.implementation.length < 2) {
    errors.push('Need 2+ implementation steps')
  }
  if (!task.acceptance || task.acceptance.length < 1) {
    errors.push('Need 1+ acceptance criteria')
  }
  if (task.acceptance?.some(a => /works correctly|good performance|properly/i.test(a))) {
    errors.push('Vague acceptance criteria')
  }

  return errors
}
```

### Conflict Detection (Batch Mode)

```javascript
function detectConflicts(solutions) {
  const fileModifications = new Map() // file -> [issue_ids]

  for (const solution of solutions) {
    for (const task of solution.tasks) {
      for (const mp of task.modification_points) {
        if (!fileModifications.has(mp.file)) {
          fileModifications.set(mp.file, [])
        }
        if (!fileModifications.get(mp.file).includes(solution.issue_id)) {
          fileModifications.get(mp.file).push(solution.issue_id)
        }
      }
    }
  }

  const conflicts = []
  for (const [file, issues] of fileModifications) {
    if (issues.length > 1) {
      conflicts.push({
        file,
        issues,
        suggested_order: suggestOrder(issues, solutions)
      })
    }
  }

  return conflicts
}

function suggestOrder(issueIds, solutions) {
  // Order by: Create before Update, foundation before integration
  return issueIds.sort((a, b) => {
    const solA = solutions.find(s => s.issue_id === a)
    const solB = solutions.find(s => s.issue_id === b)
    const hasCreateA = solA.tasks.some(t => t.action === 'Create')
    const hasCreateB = solB.tasks.some(t => t.action === 'Create')
    if (hasCreateA && !hasCreateB) return -1
    if (hasCreateB && !hasCreateA) return 1
    return 0
  })
}
```

### Output Generation

```javascript
function generateOutput(solutions, conflicts) {
  return {
    solutions: solutions.map(s => ({
      issue_id: s.issue_id,
      solution: s
    })),
    conflicts,
    _metadata: {
      timestamp: new Date().toISOString(),
      source: 'issue-plan-agent',
      issues_count: solutions.length,
      total_tasks: solutions.reduce((sum, s) => sum + s.tasks.length, 0)
    }
  }
}
```

### Solution Schema

```json
{
  "issue_id": "GH-123",
  "approach_name": "Direct Implementation",
  "summary": "Add JWT authentication middleware to protect API routes",
  "tasks": [
    {
      "id": "T1",
      "title": "Create JWT validation middleware",
      "scope": "src/middleware/",
      "action": "Create",
      "description": "Create middleware to validate JWT tokens",
      "modification_points": [
        { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
      ],
      "implementation": ["Step 1", "Step 2", "..."],
      "acceptance": ["Criterion 1", "Criterion 2"],
      "depends_on": [],
      "estimated_minutes": 30
    }
  ],
  "exploration_context": {
    "relevant_files": ["src/config/env.ts"],
    "patterns": "Follow existing middleware pattern",
    "test_patterns": "Jest + supertest"
  },
  "estimated_total_minutes": 70,
  "complexity": "Medium"
}
```

---

## Error Handling

```javascript
// Error handling with fallback
async function executeWithFallback(issue, projectRoot) {
  try {
    // Primary: ACE semantic search
    const exploration = await aceExplore(issue, projectRoot)
    return await generateSolution(issue, exploration)
  } catch (aceError) {
    console.warn('ACE failed:', aceError.message)

    try {
      // Fallback: ripgrep-based exploration
      const exploration = await ripgrepExplore(issue, projectRoot)
      return await generateSolution(issue, exploration)
    } catch (rgError) {
      // Degraded: Basic solution without exploration
      return {
        issue_id: issue.id,
        approach_name: 'Basic Implementation',
        summary: issue.title,
        tasks: [{
          id: 'T1',
          title: issue.title,
          scope: 'TBD',
          action: 'Implement',
          description: issue.description,
          modification_points: [{ file: 'TBD', target: 'TBD', change: issue.title }],
          implementation: ['Analyze requirements', 'Implement solution', 'Test and validate'],
          acceptance: ['Feature works as described'],
          depends_on: [],
          estimated_minutes: 60
        }],
        exploration_context: { relevant_files: [], patterns: 'Manual exploration required' },
        estimated_total_minutes: 60,
        complexity: 'Medium',
        _warning: 'Degraded mode - manual exploration required'
      }
    }
  }
}
```

| Scenario | Action |
|----------|--------|
| ACE search returns no results | Fallback to ripgrep, warn user |
| Circular task dependency | Report error, suggest fix |
| File not found in codebase | Flag as "new file", update modification_points |
| Ambiguous requirements | Add clarification_needs to output |

---

## Quality Standards

### Acceptance Criteria Quality

| Good | Bad |
|------|-----|
| "3 API endpoints: GET, POST, DELETE" | "API works correctly" |
| "Response time < 200ms p95" | "Good performance" |
| "All 4 test cases pass" | "Tests pass" |
| "JWT token validated with secret from env" | "Authentication works" |

### Task Validation Checklist

Before outputting solution:
- [ ] ACE search performed for each issue
- [ ] All modification_points verified against codebase
- [ ] Tasks have 2+ implementation steps
- [ ] Tasks have 1+ quantified acceptance criteria
- [ ] Dependencies form valid DAG (no cycles)
- [ ] Estimated time is reasonable

---

## Key Reminders

**ALWAYS**:
1. Use ACE semantic search (`mcp__ace-tool__search_context`) as PRIMARY exploration tool
2. Read schema first before generating solution output
3. Include `depends_on` field (even if empty `[]`)
4. Quantify acceptance criteria with specific, testable conditions
5. Validate DAG before output (no circular dependencies)
6. Include file:line references in modification_points where possible
7. Detect and report cross-issue file conflicts in batch mode
8. Include exploration_context with patterns and relevant_files

**NEVER**:
1. Execute implementation (return plan only)
2. Use vague acceptance criteria ("works correctly", "good performance")
3. Create circular dependencies in task graph
4. Skip task validation before output
5. Omit required fields from solution schema
6. Assume file exists without verification
7. Generate more than 10 tasks per issue
8. Skip ACE search (unless fallback triggered)

702  .claude/agents/issue-queue-agent.md  Normal file
@@ -0,0 +1,702 @@
---
name: issue-queue-agent
description: |
  Task ordering agent for issue queue formation with dependency analysis and conflict resolution.
  Orchestrates 4-phase workflow: Dependency Analysis → Conflict Detection → Semantic Ordering → Group Assignment

  Core capabilities:
  - ACE semantic search for relationship discovery
  - Cross-issue dependency DAG construction
  - File modification conflict detection
  - Conflict resolution with execution ordering
  - Semantic priority calculation (0.0-1.0)
  - Parallel/Sequential group assignment
color: orange
---

You are a specialized queue formation agent that analyzes tasks from bound solutions, resolves conflicts, and produces an ordered execution queue. You focus on optimal task ordering across multiple issues.

## Input Context

```javascript
{
  // Required
  tasks: [
    {
      issue_id: string,    // Issue ID (e.g., "GH-123")
      solution_id: string, // Solution ID (e.g., "SOL-001")
      task: {
        id: string,        // Task ID (e.g., "T1")
        title: string,
        scope: string,
        action: string,    // Create | Update | Implement | Refactor | Test | Fix | Delete | Configure
        modification_points: [
          { file: string, target: string, change: string }
        ],
        depends_on: string[] // Task IDs within same issue
      },
      exploration_context: object
    }
  ],

  // Optional
  project_root: string,         // Project root for ACE search
  existing_conflicts: object[], // Pre-identified conflicts
  rebuild: boolean              // Clear and regenerate queue
}
```

## 4-Phase Execution Workflow

```
Phase 1: Dependency Analysis (20%)
  ↓ Parse depends_on, build DAG, detect cycles
Phase 2: Conflict Detection + ACE Enhancement (30%)
  ↓ Identify file conflicts, ACE semantic relationship discovery
Phase 3: Conflict Resolution (25%)
  ↓ Determine execution order for conflicting tasks
Phase 4: Semantic Ordering & Grouping (25%)
  ↓ Calculate priority, topological sort, assign groups
```

---

## Phase 1: Dependency Analysis

### Build Dependency Graph

```javascript
function buildDependencyGraph(tasks) {
  const taskGraph = new Map()
  const fileModifications = new Map() // file -> [taskKeys]

  for (const item of tasks) {
    const taskKey = `${item.issue_id}:${item.task.id}`
    taskGraph.set(taskKey, {
      ...item,
      key: taskKey,
      inDegree: 0,
      outEdges: []
    })

    // Track file modifications for conflict detection
    for (const mp of item.task.modification_points || []) {
      if (!fileModifications.has(mp.file)) {
        fileModifications.set(mp.file, [])
      }
      fileModifications.get(mp.file).push(taskKey)
    }
  }

  // Add explicit dependency edges (within same issue)
  for (const [key, node] of taskGraph) {
    for (const dep of node.task.depends_on || []) {
      const depKey = `${node.issue_id}:${dep}`
      if (taskGraph.has(depKey)) {
        taskGraph.get(depKey).outEdges.push(key)
        node.inDegree++
      }
    }
  }

  return { taskGraph, fileModifications }
}
```

### Cycle Detection

```javascript
function detectCycles(taskGraph) {
  const visited = new Set()
  const stack = new Set()
  const cycles = []

  function dfs(key, path = []) {
    if (stack.has(key)) {
      // Found cycle - extract cycle path
      const cycleStart = path.indexOf(key)
      cycles.push(path.slice(cycleStart).concat(key))
      return true
    }
    if (visited.has(key)) return false

    visited.add(key)
    stack.add(key)
    path.push(key)

    for (const next of taskGraph.get(key)?.outEdges || []) {
      dfs(next, [...path])
    }

    stack.delete(key)
    return false
  }

  for (const key of taskGraph.keys()) {
    if (!visited.has(key)) {
      dfs(key)
    }
  }

  return {
    hasCycle: cycles.length > 0,
    cycles
  }
}
```

---

## Phase 2: Conflict Detection

### Identify File Conflicts

```javascript
function detectFileConflicts(fileModifications, taskGraph) {
  const conflicts = []

  for (const [file, taskKeys] of fileModifications) {
    if (taskKeys.length > 1) {
      // Multiple tasks modify same file
      const taskDetails = taskKeys.map(key => {
        const node = taskGraph.get(key)
        return {
          key,
          issue_id: node.issue_id,
          task_id: node.task.id,
          title: node.task.title,
          action: node.task.action,
          scope: node.task.scope
        }
      })

      conflicts.push({
        type: 'file_conflict',
        file,
        tasks: taskKeys,
        task_details: taskDetails,
        resolution: null,
        resolved: false
      })
    }
  }

  return conflicts
}
```

### Conflict Classification

```javascript
function classifyConflict(conflict, taskGraph) {
  const tasks = conflict.tasks.map(key => taskGraph.get(key))

  // Check if all tasks are from same issue
  const isSameIssue = new Set(tasks.map(t => t.issue_id)).size === 1

  // Check action types
  const actions = tasks.map(t => t.task.action)
  const hasCreate = actions.includes('Create')
  const hasDelete = actions.includes('Delete')

  return {
    ...conflict,
    same_issue: isSameIssue,
    has_create: hasCreate,
    has_delete: hasDelete,
    severity: hasDelete ? 'high' : hasCreate ? 'medium' : 'low'
  }
}
```

---

## Phase 3: Conflict Resolution

### Resolution Rules

| Priority | Rule | Example |
|----------|------|---------|
| 1 | Create before Update/Implement | T1:Create → T2:Update |
| 2 | Foundation before integration | config/ → src/ |
| 3 | Types before implementation | types/ → components/ |
| 4 | Core before tests | src/ → __tests__/ |
| 5 | Same issue order preserved | T1 → T2 → T3 |

### Apply Resolution Rules

```javascript
function resolveConflict(conflict, taskGraph) {
  const tasks = conflict.tasks.map(key => ({
    key,
    node: taskGraph.get(key)
  }))

  // Sort by resolution rules
  tasks.sort((a, b) => {
    const nodeA = a.node
    const nodeB = b.node

    // Rule 1: Create before others
    if (nodeA.task.action === 'Create' && nodeB.task.action !== 'Create') return -1
    if (nodeB.task.action === 'Create' && nodeA.task.action !== 'Create') return 1

    // Rule 2: Delete last
    if (nodeA.task.action === 'Delete' && nodeB.task.action !== 'Delete') return 1
    if (nodeB.task.action === 'Delete' && nodeA.task.action !== 'Delete') return -1

    // Rule 3: Foundation scopes first
    const isFoundationA = isFoundationScope(nodeA.task.scope)
    const isFoundationB = isFoundationScope(nodeB.task.scope)
    if (isFoundationA && !isFoundationB) return -1
    if (isFoundationB && !isFoundationA) return 1

    // Rule 4: Config/Types before implementation
    const isTypesA = nodeA.task.scope?.includes('types')
    const isTypesB = nodeB.task.scope?.includes('types')
    if (isTypesA && !isTypesB) return -1
    if (isTypesB && !isTypesA) return 1

    // Rule 5: Preserve issue order (same issue)
    if (nodeA.issue_id === nodeB.issue_id) {
      return parseInt(nodeA.task.id.replace('T', '')) - parseInt(nodeB.task.id.replace('T', ''))
    }

    return 0
  })

  const order = tasks.map(t => t.key)
  const rationale = generateRationale(tasks)

  return {
    ...conflict,
    resolution: 'sequential',
    resolution_order: order,
    rationale,
    resolved: true
  }
}

function isFoundationScope(scope) {
  if (!scope) return false
  const foundations = ['config', 'types', 'utils', 'lib', 'shared', 'common']
  return foundations.some(f => scope.toLowerCase().includes(f))
}

function generateRationale(sortedTasks) {
  const reasons = []
  for (let i = 0; i < sortedTasks.length - 1; i++) {
    const curr = sortedTasks[i].node.task
    const next = sortedTasks[i + 1].node.task
    if (curr.action === 'Create') {
      reasons.push(`${curr.id} creates file before ${next.id}`)
    } else if (isFoundationScope(curr.scope)) {
      reasons.push(`${curr.id} (foundation) before ${next.id}`)
    }
  }
  return reasons.join('; ') || 'Default ordering applied'
}
```

### Apply Resolution to Graph

```javascript
function applyResolutionToGraph(conflict, taskGraph) {
  const order = conflict.resolution_order

  // Add dependency edges for sequential execution
  for (let i = 1; i < order.length; i++) {
    const prevKey = order[i - 1]
    const currKey = order[i]

    if (taskGraph.has(prevKey) && taskGraph.has(currKey)) {
      const prevNode = taskGraph.get(prevKey)
      const currNode = taskGraph.get(currKey)

      // Avoid duplicate edges
      if (!prevNode.outEdges.includes(currKey)) {
        prevNode.outEdges.push(currKey)
        currNode.inDegree++
      }
    }
  }
}
```

---

## Phase 4: Semantic Ordering & Grouping

### Semantic Priority Calculation

```javascript
function calculateSemanticPriority(node) {
  let priority = 0.5 // Base priority

  // Action-based priority boost
  const actionBoost = {
    'Create': 0.2,
    'Configure': 0.15,
    'Implement': 0.1,
    'Update': 0,
    'Refactor': -0.05,
    'Test': -0.1,
    'Fix': 0.05,
    'Delete': -0.15
  }
  priority += actionBoost[node.task.action] || 0

  // Scope-based boost
  if (isFoundationScope(node.task.scope)) {
    priority += 0.1
  }
  if (node.task.scope?.includes('types')) {
    priority += 0.05
  }

  // Clamp to [0, 1]
  return Math.max(0, Math.min(1, priority))
}
```

### Topological Sort with Priority

```javascript
function topologicalSortWithPriority(taskGraph) {
  const result = []
  const queue = []

  // Initialize with zero in-degree tasks
  for (const [key, node] of taskGraph) {
    if (node.inDegree === 0) {
      queue.push(key)
    }
  }

  let executionOrder = 1
  while (queue.length > 0) {
    // Sort queue by semantic priority (descending)
    queue.sort((a, b) => {
      const nodeA = taskGraph.get(a)
      const nodeB = taskGraph.get(b)

      // 1. Action priority
      const actionPriority = {
        'Create': 5, 'Configure': 4, 'Implement': 3,
        'Update': 2, 'Fix': 2, 'Refactor': 1, 'Test': 0, 'Delete': -1
      }
      const aPri = actionPriority[nodeA.task.action] ?? 2
      const bPri = actionPriority[nodeB.task.action] ?? 2
      if (aPri !== bPri) return bPri - aPri

      // 2. Foundation scope first
      const aFound = isFoundationScope(nodeA.task.scope)
      const bFound = isFoundationScope(nodeB.task.scope)
      if (aFound !== bFound) return aFound ? -1 : 1

      // 3. Types before implementation
      const aTypes = nodeA.task.scope?.includes('types')
      const bTypes = nodeB.task.scope?.includes('types')
      if (aTypes !== bTypes) return aTypes ? -1 : 1

      return 0
    })

    const current = queue.shift()
    const node = taskGraph.get(current)
    node.execution_order = executionOrder++
    node.semantic_priority = calculateSemanticPriority(node)
    result.push(current)

    // Process outgoing edges
    for (const next of node.outEdges) {
      const nextNode = taskGraph.get(next)
      nextNode.inDegree--
      if (nextNode.inDegree === 0) {
        queue.push(next)
      }
    }
  }

  // Check for remaining nodes (cycle indication)
  if (result.length !== taskGraph.size) {
    const remaining = [...taskGraph.keys()].filter(k => !result.includes(k))
    return { success: false, error: `Unprocessed tasks: ${remaining.join(', ')}`, result }
  }

  return { success: true, result }
}
```

### Execution Group Assignment

```javascript
function assignExecutionGroups(orderedTasks, taskGraph, conflicts) {
  const groups = []
  let currentGroup = { type: 'P', number: 1, tasks: [] }

  for (let i = 0; i < orderedTasks.length; i++) {
    const key = orderedTasks[i]
    const node = taskGraph.get(key)

    // Determine if can run in parallel with current group
    const canParallel = canRunParallel(key, currentGroup.tasks, taskGraph, conflicts)

    if (!canParallel && currentGroup.tasks.length > 0) {
      // Save current group and start new sequential group
      groups.push({ ...currentGroup })
      currentGroup = { type: 'S', number: groups.length + 1, tasks: [] }
    }

    currentGroup.tasks.push(key)
    node.execution_group = `${currentGroup.type}${currentGroup.number}`
  }

  // Save last group
  if (currentGroup.tasks.length > 0) {
    groups.push(currentGroup)
  }

  return groups
}

function canRunParallel(taskKey, groupTasks, taskGraph, conflicts) {
  if (groupTasks.length === 0) return true

  const node = taskGraph.get(taskKey)

  // Check 1: No dependencies on group tasks
  for (const groupTask of groupTasks) {
    if (node.task.depends_on?.includes(groupTask.split(':')[1])) {
      return false
    }
  }

  // Check 2: No file conflicts with group tasks
  for (const conflict of conflicts) {
    if (conflict.tasks.includes(taskKey)) {
      for (const groupTask of groupTasks) {
        if (conflict.tasks.includes(groupTask)) {
          return false
        }
      }
    }
  }

  // Check 3: Different issues can run in parallel
  const nodeIssue = node.issue_id
  const groupIssues = new Set(groupTasks.map(t => taskGraph.get(t).issue_id))

  return !groupIssues.has(nodeIssue)
}
```
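
A small worked example of the grouping behavior, under assumed inputs:

```javascript
// Hypothetical ordered tasks: GH-1:T1 and GH-2:T1 touch disjoint files,
// while GH-2:T2 shares src/app.ts with GH-1:T1 (a detected file conflict).
//
//   P1: ["GH-1:T1", "GH-2:T1"]  // different issues, no conflicts → parallel
//   S2: ["GH-2:T2"]             // conflicts with a P1 member → new sequential group
```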

---

## Output Generation

### Queue Item Format

```javascript
function generateQueueItems(orderedTasks, taskGraph, conflicts) {
  const queueItems = []
  let queueIdCounter = 1

  for (const key of orderedTasks) {
    const node = taskGraph.get(key)

    queueItems.push({
      queue_id: `Q-${String(queueIdCounter++).padStart(3, '0')}`,
      issue_id: node.issue_id,
      solution_id: node.solution_id,
      task_id: node.task.id,
      status: 'pending',
      execution_order: node.execution_order,
      execution_group: node.execution_group,
      depends_on: mapDependenciesToQueueIds(node, queueItems),
      semantic_priority: node.semantic_priority,
      queued_at: new Date().toISOString()
    })
  }

  return queueItems
}

function mapDependenciesToQueueIds(node, queueItems) {
  return (node.task.depends_on || []).map(dep => {
    const queueItem = queueItems.find(q =>
      q.issue_id === node.issue_id && q.task_id === dep
    )
    return queueItem?.queue_id || dep
  })
}
```
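
A single emitted queue item then looks like this (fields as assigned above; values illustrative):

```json
{
  "queue_id": "Q-001",
  "issue_id": "GH-123",
  "solution_id": "SOL-001",
  "task_id": "T1",
  "status": "pending",
  "execution_order": 1,
  "execution_group": "P1",
  "depends_on": [],
  "semantic_priority": 0.8,
  "queued_at": "2025-01-01T00:00:00.000Z"
}
```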

### Final Output

```javascript
function generateOutput(queueItems, conflicts, groups) {
  return {
    queue: queueItems,
    conflicts: conflicts.map(c => ({
      type: c.type,
      file: c.file,
      tasks: c.tasks,
      resolution: c.resolution,
      resolution_order: c.resolution_order,
      rationale: c.rationale,
      resolved: c.resolved
    })),
    execution_groups: groups.map(g => ({
      id: `${g.type}${g.number}`,
      type: g.type === 'P' ? 'parallel' : 'sequential',
      task_count: g.tasks.length,
      tasks: g.tasks
    })),
    _metadata: {
      version: '1.0',
      total_tasks: queueItems.length,
      total_conflicts: conflicts.length,
      resolved_conflicts: conflicts.filter(c => c.resolved).length,
      parallel_groups: groups.filter(g => g.type === 'P').length,
      sequential_groups: groups.filter(g => g.type === 'S').length,
      timestamp: new Date().toISOString(),
      source: 'issue-queue-agent'
    }
  }
}
```

---

## Error Handling

```javascript
async function executeWithValidation(tasks) {
  // Phase 1: Build graph
  const { taskGraph, fileModifications } = buildDependencyGraph(tasks)

  // Check for cycles
  const cycleResult = detectCycles(taskGraph)
  if (cycleResult.hasCycle) {
    return {
      success: false,
      error: 'Circular dependency detected',
      cycles: cycleResult.cycles,
      suggestion: 'Remove circular dependencies or reorder tasks manually'
    }
  }

  // Phase 2: Detect conflicts
  const conflicts = detectFileConflicts(fileModifications, taskGraph)
    .map(c => classifyConflict(c, taskGraph))

  // Phase 3: Resolve conflicts
  for (const conflict of conflicts) {
    const resolved = resolveConflict(conflict, taskGraph)
    Object.assign(conflict, resolved)
    applyResolutionToGraph(conflict, taskGraph)
  }

  // Re-check for cycles after resolution
  const postResolutionCycles = detectCycles(taskGraph)
  if (postResolutionCycles.hasCycle) {
    return {
      success: false,
      error: 'Conflict resolution created circular dependency',
      cycles: postResolutionCycles.cycles,
      suggestion: 'Manual conflict resolution required'
    }
  }

  // Phase 4: Sort and group
  const sortResult = topologicalSortWithPriority(taskGraph)
  if (!sortResult.success) {
    return {
      success: false,
      error: sortResult.error,
      partial_result: sortResult.result
    }
  }

  const groups = assignExecutionGroups(sortResult.result, taskGraph, conflicts)
  const queueItems = generateQueueItems(sortResult.result, taskGraph, conflicts)

  return {
    success: true,
    output: generateOutput(queueItems, conflicts, groups)
  }
}
```

| Scenario | Action |
|----------|--------|
| Circular dependency | Report cycles, abort with suggestion |
| Conflict resolution creates cycle | Flag for manual resolution |
| Missing task reference in depends_on | Skip and warn |
| Empty task list | Return empty queue |

---

## Quality Standards

### Ordering Validation

```javascript
function validateOrdering(queueItems, taskGraph) {
  const errors = []

  for (const item of queueItems) {
    const key = `${item.issue_id}:${item.task_id}`
    const node = taskGraph.get(key)

    // Check dependencies come before
    for (const depQueueId of item.depends_on) {
      const depItem = queueItems.find(q => q.queue_id === depQueueId)
      if (depItem && depItem.execution_order >= item.execution_order) {
        errors.push(`${item.queue_id} ordered before dependency ${depQueueId}`)
      }
    }
  }

  return { valid: errors.length === 0, errors }
}
```

### Semantic Priority Rules

| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Fix action | +0.05 |
| Foundation scope (config/types/utils) | +0.1 |
| Types scope | +0.05 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
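
Applying these boosts through `calculateSemanticPriority` above: a `Create` task in a foundation scope scores 0.5 (base) + 0.2 + 0.1 = 0.8, while a `Test` task scores 0.5 - 0.1 = 0.4 (hypothetical task shapes, for illustration only):

```javascript
calculateSemanticPriority({ task: { action: 'Create', scope: 'src/config/' } })  // 0.8
calculateSemanticPriority({ task: { action: 'Test', scope: '__tests__/api/' } }) // 0.4
```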

---

## Key Reminders

**ALWAYS**:
1. Build dependency graph before any ordering
2. Detect cycles before and after conflict resolution
3. Apply resolution rules consistently (Create → Update → Delete)
4. Preserve within-issue task order when no conflicts
5. Calculate semantic priority for all tasks
6. Validate ordering before output
7. Include rationale for conflict resolutions
8. Map depends_on to queue_ids in output

**NEVER**:
1. Execute tasks (ordering only)
2. Ignore circular dependencies
3. Create arbitrary ordering without rules
4. Skip conflict detection
5. Output invalid DAG
6. Merge tasks from different issues in same parallel group if conflicts exist
7. Assume task order without checking depends_on

@@ -1,552 +1,384 @@
---
name: execute
description: Execute issue tasks with closed-loop methodology (analyze→implement→test→optimize→commit)
argument-hint: "<issue-id> [--task <task-id>] [--batch <n>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*), Edit(*), AskUserQuestion(*)
description: Execute queue with codex using endpoint-driven task fetching (single task per codex instance)
argument-hint: "[--parallel <n>] [--executor codex|gemini]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---

# Issue Execute Command (/issue:execute)

## Overview

Execute tasks from a planned issue using closed-loop methodology. Each task goes through 5 phases: **Analyze → Implement → Test → Optimize → Commit**. Tasks are loaded progressively based on dependency satisfaction.

Execution orchestrator that coordinates codex instances. Each task is executed by an independent codex instance that fetches its task via CLI endpoint. **Codex does NOT read task files** - it calls `ccw issue next` to get task data dynamically.

**Core capabilities:**
- Progressive task loading (only load ready tasks)
- Closed-loop execution with 5 phases per task
- Automatic retry on test failures (up to 3 attempts)
- Pause on defined pause_criteria conditions
- Delivery criteria verification before completion
- Automatic git commit per task

**Core design:**
- Single task per codex instance (not loop mode)
- Endpoint-driven: `ccw issue next` → execute → `ccw issue complete`
- No file reading in codex
- Orchestrator manages parallelism

## Storage Structure (Flat JSONL)

```
.workflow/issues/
├── issues.jsonl     # All issues (one per line)
├── queue.json       # Execution queue
└── solutions/
    ├── {issue-id}.jsonl  # Solutions for issue
    └── ...
```
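
For orientation, one `issues.jsonl` line and the top-level `queue.json` shape might look like the following (field names follow the queue item format produced by the queue agent; the concrete values are illustrative assumptions):

```
# issues.jsonl — one issue per line
{"id": "GH-123", "title": "Add JWT auth middleware", "status": "planned"}

# queue.json — queue items as produced by /issue:queue
{"queue": [{"queue_id": "Q-001", "issue_id": "GH-123", "task_id": "T1", "status": "pending", "depends_on": []}]}
```
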
## Usage

```bash
/issue:execute <ISSUE_ID> [FLAGS]
/issue:execute [FLAGS]

# Arguments
<issue-id>         Issue ID (e.g., GH-123, TEXT-1735200000)

# Examples
/issue:execute                    # Execute all ready tasks
/issue:execute --parallel 3       # Execute up to 3 tasks in parallel
/issue:execute --executor codex   # Force codex executor

# Flags
--task <id>        Execute specific task only
--batch <n>        Max concurrent tasks (default: 1)
--skip-commit      Skip git commit phase
--dry-run          Simulate execution without changes
--continue         Continue from paused/failed state
--parallel <n>     Max parallel codex instances (default: 1)
--executor <type>  Force executor: codex|gemini|agent
--dry-run          Show what would execute without running
```

## Execution Process

```
Initialization:
├─ Load state.json and tasks.jsonl
├─ Build completed task index
└─ Identify ready tasks (dependencies satisfied)
Phase 1: Queue Loading
├─ Load queue.json
├─ Count pending/ready tasks
└─ Initialize TodoWrite tracking

Task Loop:
└─ For each ready task:
    ├─ Phase 1: ANALYZE
    │   ├─ Verify task requirements
    │   ├─ Check file existence
    │   ├─ Validate preconditions
    │   └─ Check pause_criteria (halt if triggered)
    │
    ├─ Phase 2: IMPLEMENT
    │   ├─ Execute code changes
    │   ├─ Write/modify files
    │   └─ Track modified files
    │
    ├─ Phase 3: TEST
    │   ├─ Run relevant tests
    │   ├─ Verify functionality
    │   └─ Retry loop (max 3) on failure → back to IMPLEMENT
    │
    ├─ Phase 4: OPTIMIZE
    │   ├─ Code quality check
    │   ├─ Lint/format verification
    │   └─ Apply minor improvements
    │
    ├─ Phase 5: COMMIT
    │   ├─ Stage modified files
    │   ├─ Create commit with task reference
    │   └─ Update task status to 'completed'
    │
    └─ Update state.json
Phase 2: Ready Task Detection
├─ Find tasks with satisfied dependencies
├─ Group by execution_group (parallel batches)
└─ Determine execution order

Completion:
└─ Return execution summary
Phase 3: Codex Coordination
├─ For each ready task:
│   ├─ Launch independent codex instance
│   ├─ Codex calls: ccw issue next
│   ├─ Codex receives task data (NOT file)
│   ├─ Codex executes task
│   ├─ Codex calls: ccw issue complete <queue-id>
│   └─ Update TodoWrite
└─ Parallel execution based on --parallel flag

Phase 4: Completion
├─ Generate execution summary
├─ Update issue statuses in issues.jsonl
└─ Display results
```

## Implementation

### Initialization
### Phase 1: Queue Loading

```javascript
// Load issue context
const issueDir = `.workflow/issues/${issueId}`
const state = JSON.parse(Read(`${issueDir}/state.json`))
const tasks = readJsonl(`${issueDir}/tasks.jsonl`)
// Load queue
const queuePath = '.workflow/issues/queue.json';
if (!Bash(`test -f "${queuePath}" && echo exists`).includes('exists')) {
  console.log('No queue found. Run /issue:queue first.');
  return;
}

// Build completed index
const completedIds = new Set(
  tasks.filter(t => t.status === 'completed').map(t => t.id)
)
const queue = JSON.parse(Read(queuePath));

// Get ready tasks (dependencies satisfied)
// Count by status
const pending = queue.queue.filter(q => q.status === 'pending');
const executing = queue.queue.filter(q => q.status === 'executing');
const completed = queue.queue.filter(q => q.status === 'completed');

console.log(`
## Execution Queue Status

- Pending: ${pending.length}
- Executing: ${executing.length}
- Completed: ${completed.length}
- Total: ${queue.queue.length}
`);

if (pending.length === 0 && executing.length === 0) {
  console.log('All tasks completed!');
  return;
}
```

### Phase 2: Ready Task Detection

```javascript
// Find ready tasks (dependencies satisfied)
function getReadyTasks() {
  return tasks.filter(task =>
    task.status === 'pending' &&
    task.depends_on.every(dep => completedIds.has(dep))
  )
  const completedIds = new Set(
    queue.queue.filter(q => q.status === 'completed').map(q => q.queue_id)
  );

  return queue.queue.filter(item => {
    if (item.status !== 'pending') return false;
    return item.depends_on.every(depId => completedIds.has(depId));
  });
}

let readyTasks = getReadyTasks()
const readyTasks = getReadyTasks();

if (readyTasks.length === 0) {
  if (tasks.every(t => t.status === 'completed')) {
    console.log('✓ All tasks completed!')
    return
  }
  console.log('⚠ No ready tasks. Check dependencies or blocked tasks.')
  return
}

// Initialize TodoWrite for tracking
TodoWrite({
  todos: readyTasks.slice(0, batchSize).map(t => ({
    content: `[${t.id}] ${t.title}`,
    status: 'pending',
    activeForm: `Executing ${t.id}`
  }))
})
```

### Task Execution Loop

```javascript
for (const task of readyTasks.slice(0, batchSize)) {
  console.log(`\n## Executing: ${task.id} - ${task.title}`)

  // Update state
  updateTaskStatus(task.id, 'in_progress', 'analyze')

  try {
    // Phase 1: ANALYZE
    const analyzeResult = await executePhase_Analyze(task)
    if (analyzeResult.paused) {
      console.log(`⏸ Task paused: ${analyzeResult.reason}`)
      updateTaskStatus(task.id, 'paused', 'analyze')
      continue
    }

    // Phase 2-5: Closed Loop
    let implementRetries = 0
    const maxRetries = 3

    while (implementRetries < maxRetries) {
      // Phase 2: IMPLEMENT
      const implementResult = await executePhase_Implement(task, analyzeResult)
      updateTaskStatus(task.id, 'in_progress', 'test')

      // Phase 3: TEST
      const testResult = await executePhase_Test(task, implementResult)

      if (testResult.passed) {
        // Phase 4: OPTIMIZE
        await executePhase_Optimize(task, implementResult)

        // Phase 5: COMMIT
        if (!flags.skipCommit) {
          await executePhase_Commit(task, implementResult)
        }

        // Mark completed
        updateTaskStatus(task.id, 'completed', 'done')
        completedIds.add(task.id)
        break
      } else {
        implementRetries++
        console.log(`⚠ Test failed, retry ${implementRetries}/${maxRetries}`)
        if (implementRetries >= maxRetries) {
          updateTaskStatus(task.id, 'failed', 'test')
          console.log(`✗ Task failed after ${maxRetries} retries`)
        }
      }
    }
  } catch (error) {
    updateTaskStatus(task.id, 'failed', task.current_phase)
    console.log(`✗ Task failed: ${error.message}`)
  }
}
```

### Phase 1: ANALYZE

```javascript
async function executePhase_Analyze(task) {
  console.log('### Phase 1: ANALYZE')

  // Check pause criteria first
  for (const criterion of task.pause_criteria || []) {
    const shouldPause = await evaluatePauseCriterion(criterion, task)
    if (shouldPause) {
      return { paused: true, reason: criterion }
    }
  }

  // Execute analysis via CLI
  const analysisResult = await Task(
    subagent_type="cli-explore-agent",
    run_in_background=false,
    description=`Analyze: ${task.id}`,
    prompt=`
## Analysis Task
ID: ${task.id}
Title: ${task.title}
Description: ${task.description}

## File Context
${task.file_context.join('\n')}

## Delivery Criteria (to be achieved)
${task.delivery_criteria.map((c, i) => `${i+1}. ${c}`).join('\n')}

## Required Analysis
1. Verify all referenced files exist
2. Identify exact modification points
3. Check for potential conflicts
4. Validate approach feasibility

## Output
Return JSON:
{
  "files_to_modify": ["path1", "path2"],
  "integration_points": [...],
  "potential_risks": [...],
  "implementation_notes": "..."
}
`
  )

  // Parse and return
  const analysis = JSON.parse(analysisResult)

  // Update phase results
  updatePhaseResult(task.id, 'analyze', {
    status: 'completed',
    findings: analysis.potential_risks,
    timestamp: new Date().toISOString()
  })

  return { paused: false, analysis }
}
```
### Phase 2: IMPLEMENT

```javascript
async function executePhase_Implement(task, analyzeResult) {
  console.log('### Phase 2: IMPLEMENT')

  updateTaskStatus(task.id, 'in_progress', 'implement')

  // Determine executor
  const executor = task.executor === 'auto'
    ? (task.type === 'test' ? 'agent' : 'codex')
    : task.executor

  // Build implementation prompt
  const prompt = `
## Implementation Task
ID: ${task.id}
Title: ${task.title}
Type: ${task.type}

## Description
${task.description}

## Analysis Results
${JSON.stringify(analyzeResult.analysis, null, 2)}

## Files to Modify
${analyzeResult.analysis.files_to_modify.join('\n')}

## Delivery Criteria (MUST achieve all)
${task.delivery_criteria.map((c, i) => `- [ ] ${c}`).join('\n')}

## Implementation Notes
${analyzeResult.analysis.implementation_notes}

## Rules
- Follow existing code patterns
- Maintain backward compatibility
- Add appropriate error handling
- Document significant changes
`

  let result
  if (executor === 'codex') {
    result = Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write`,
      timeout=3600000
    )
  } else if (executor === 'gemini') {
    result = Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write`,
      timeout=1800000
    )
  } else {
    // Agent execution
    result = await Task(
      subagent_type="code-developer",
      run_in_background=false,
      description=`Implement: ${task.id}`,
      prompt=prompt
    )
  }

  // Track modified files
  const modifiedFiles = extractModifiedFiles(result)

  updatePhaseResult(task.id, 'implement', {
    status: 'completed',
    files_modified: modifiedFiles,
    timestamp: new Date().toISOString()
  })

  return { modifiedFiles, output: result }
}
```

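`escapePrompt` is used above but never defined in this document; a minimal sketch, assuming the prompt is passed as a double-quoted shell argument (the helper name exists in the doc, but this quoting strategy is an assumption, not the shipped implementation):

```javascript
// Hypothetical helper: make a prompt safe inside double quotes for `ccw cli -p "..."`.
// Escapes backslashes, double quotes, backticks, and `$` so the shell passes the
// text through literally; newlines are preserved as-is.
function escapePrompt(prompt) {
  return prompt
    .replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/`/g, '\\`')
    .replace(/\$/g, '\\$')
}
```
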
### Phase 3: Codex Coordination (Single Task Mode)

```javascript
// Execute tasks - single codex instance per task
async function executeTask(queueItem) {
  const codexPrompt = `
## Single Task Execution

You are executing ONE task from the issue queue. Follow these steps exactly:

### Step 1: Fetch Task
Run this command to get your task:
\`\`\`bash
ccw issue next
\`\`\`

This returns JSON with:
- queue_id: Queue item ID
- task: Task definition with implementation steps
- context: Exploration context
- execution_hints: Executor and time estimate

### Step 2: Execute Task
Read the returned task object and:
1. Follow task.implementation steps in order
2. Meet all task.acceptance criteria
3. Use provided context.relevant_files for reference
4. Use context.patterns for code style

### Step 3: Report Completion
When done, run:
\`\`\`bash
ccw issue complete <queue_id> --result '{"files_modified": ["path1", "path2"], "summary": "What was done"}'
\`\`\`

If task fails, run:
\`\`\`bash
ccw issue fail <queue_id> --reason "Why it failed"
\`\`\`

### Rules
- NEVER read task files directly - use ccw issue next
- Execute the FULL task before marking complete
- Do NOT loop - execute ONE task only
- Report accurate files_modified in result

### Start Now
Begin by running: ccw issue next
`;

  // Dispatch to the assigned executor
  const executor = queueItem.assigned_executor || flags.executor || 'codex';

  if (executor === 'codex') {
    Bash(
      `ccw cli -p "${escapePrompt(codexPrompt)}" --tool codex --mode write --id exec-${queueItem.queue_id}`,
      timeout=3600000 // 1 hour timeout
    );
  } else if (executor === 'gemini') {
    Bash(
      `ccw cli -p "${escapePrompt(codexPrompt)}" --tool gemini --mode write --id exec-${queueItem.queue_id}`,
      timeout=1800000 // 30 min timeout
    );
  } else {
    // Agent execution
    Task(
      subagent_type="code-developer",
      run_in_background=false,
      description=`Execute ${queueItem.queue_id}`,
      prompt=codexPrompt
    );
  }
}
```

### Batch Execution Loop

```javascript
// Discover ready tasks before dispatching
const readyTasks = getReadyTasks();

if (readyTasks.length === 0) {
  const executing = queue.filter(q => q.status === 'executing');
  if (executing.length > 0) {
    console.log('Tasks are currently executing. Wait for completion.');
  } else {
    console.log('No ready tasks. Check for blocked dependencies.');
  }
  return;
}

console.log(`Found ${readyTasks.length} ready tasks`);

// Sort by execution order
readyTasks.sort((a, b) => a.execution_order - b.execution_order);

// Execute with parallelism
const parallelLimit = flags.parallel || 1;

// Initialize TodoWrite
TodoWrite({
  todos: readyTasks.slice(0, parallelLimit).map(t => ({
    content: `[${t.queue_id}] ${t.issue_id}:${t.task_id}`,
    status: 'pending',
    activeForm: `Executing ${t.queue_id}`
  }))
});

for (let i = 0; i < readyTasks.length; i += parallelLimit) {
  const batch = readyTasks.slice(i, i + parallelLimit);

  console.log(`\n### Executing Batch ${Math.floor(i / parallelLimit) + 1}`);
  console.log(batch.map(t => `- ${t.queue_id}: ${t.issue_id}:${t.task_id}`).join('\n'));

  if (parallelLimit === 1) {
    // Sequential execution
    for (const task of batch) {
      updateTodo(task.queue_id, 'in_progress');
      await executeTask(task);
      updateTodo(task.queue_id, 'completed');
    }
  } else {
    // Parallel execution - launch all at once
    const executions = batch.map(task => {
      updateTodo(task.queue_id, 'in_progress');
      return executeTask(task);
    });
    await Promise.all(executions);
    batch.forEach(task => updateTodo(task.queue_id, 'completed'));
  }

  // Refresh ready tasks after batch
  const newReady = getReadyTasks();
  if (newReady.length > 0) {
    console.log(`${newReady.length} more tasks now ready`);
  }
}
```

### Phase 3: TEST

```javascript
async function executePhase_Test(task, implementResult) {
  console.log('### Phase 3: TEST')

  updateTaskStatus(task.id, 'in_progress', 'test')

  // Determine test command based on project
  const testCommand = detectTestCommand(task.file_context)
  // e.g., 'npm test', 'pytest', 'go test', etc.

  // Run tests
  const testResult = Bash(testCommand, timeout=300000)
  const passed = testResult.exitCode === 0

  // Verify delivery criteria
  let criteriaVerified = passed
  if (passed) {
    for (const criterion of task.delivery_criteria) {
      const verified = await verifyCriterion(criterion, implementResult)
      if (!verified) {
        criteriaVerified = false
        console.log(`⚠ Criterion not met: ${criterion}`)
      }
    }
  }

  updatePhaseResult(task.id, 'test', {
    status: passed && criteriaVerified ? 'passed' : 'failed',
    test_results: testResult.output.substring(0, 1000),
    retry_count: implementResult.retryCount || 0,
    timestamp: new Date().toISOString()
  })

  return { passed: passed && criteriaVerified, output: testResult }
}
```

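`detectTestCommand` is left abstract above; one plausible sketch keys off common project marker files (the mapping below is an assumption to illustrate the idea, not the actual implementation):

```javascript
// Hypothetical sketch: pick a test command from common project markers.
function detectTestCommand(fileContext) {
  if (fileExists('package.json')) return 'npm test'
  if (fileExists('pyproject.toml') || fileExists('pytest.ini')) return 'pytest'
  if (fileExists('go.mod')) return 'go test ./...'
  if (fileExists('Cargo.toml')) return 'cargo test'
  // Fall back to scanning the task's file context for language hints
  if (fileContext.some(f => f.endsWith('.ts') || f.endsWith('.js'))) return 'npm test'
  return 'echo "no test command detected" && exit 1'
}
```
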
### Phase 4: OPTIMIZE

```javascript
async function executePhase_Optimize(task, implementResult) {
  console.log('### Phase 4: OPTIMIZE')

  updateTaskStatus(task.id, 'in_progress', 'optimize')

  // Run linting/formatting
  const lintResult = Bash('npm run lint:fix || true', timeout=60000)

  // Quick code review
  const reviewResult = await Task(
    subagent_type="universal-executor",
    run_in_background=false,
    description=`Review: ${task.id}`,
    prompt=`
Quick code review for task ${task.id}

## Modified Files
${implementResult.modifiedFiles.join('\n')}

## Check
1. Code follows project conventions
2. No obvious security issues
3. Error handling is appropriate
4. No dead code or console.logs

## Output
If issues found, apply fixes directly. Otherwise confirm OK.
`
  )

  updatePhaseResult(task.id, 'optimize', {
    status: 'completed',
    improvements: extractImprovements(reviewResult),
    timestamp: new Date().toISOString()
  })

  return { lintResult, reviewResult }
}
```

### Phase 5: COMMIT

```javascript
async function executePhase_Commit(task, implementResult) {
  console.log('### Phase 5: COMMIT')

  updateTaskStatus(task.id, 'in_progress', 'commit')

  // Stage modified files
  for (const file of implementResult.modifiedFiles) {
    Bash(`git add "${file}"`)
  }

  // Create commit message
  const typePrefix = {
    'feature': 'feat',
    'bug': 'fix',
    'refactor': 'refactor',
    'test': 'test',
    'chore': 'chore',
    'docs': 'docs'
  }[task.type] || 'feat'

  const commitMessage = `${typePrefix}(${task.id}): ${task.title}

${task.description.substring(0, 200)}

Delivery Criteria:
${task.delivery_criteria.map(c => `- [x] ${c}`).join('\n')}

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>`

  // Commit
  const commitResult = Bash(`git commit -m "$(cat <<'EOF'
${commitMessage}
EOF
)"`)

  // Get commit hash
  const commitHash = Bash('git rev-parse HEAD').trim()

  updatePhaseResult(task.id, 'commit', {
    status: 'completed',
    commit_hash: commitHash,
    message: `${typePrefix}(${task.id}): ${task.title}`,
    timestamp: new Date().toISOString()
  })

  console.log(`✓ Committed: ${commitHash.substring(0, 7)}`)

  return { commitHash }
}
```

### State Management

```javascript
// Update task status in JSONL (append-style with compaction)
function updateTaskStatus(taskId, status, phase) {
  const tasks = readJsonl(`${issueDir}/tasks.jsonl`)
  const taskIndex = tasks.findIndex(t => t.id === taskId)

  if (taskIndex >= 0) {
    tasks[taskIndex].status = status
    tasks[taskIndex].current_phase = phase
    tasks[taskIndex].updated_at = new Date().toISOString()

    // Rewrite JSONL (compact)
    const jsonlContent = tasks.map(t => JSON.stringify(t)).join('\n')
    Write(`${issueDir}/tasks.jsonl`, jsonlContent)
  }

  // Update state.json
  const state = JSON.parse(Read(`${issueDir}/state.json`))
  state.current_task = status === 'in_progress' ? taskId : null
  state.completed_count = tasks.filter(t => t.status === 'completed').length
  state.updated_at = new Date().toISOString()
  Write(`${issueDir}/state.json`, JSON.stringify(state, null, 2))
}

// Update phase result
function updatePhaseResult(taskId, phase, result) {
  const tasks = readJsonl(`${issueDir}/tasks.jsonl`)
  const taskIndex = tasks.findIndex(t => t.id === taskId)

  if (taskIndex >= 0) {
    tasks[taskIndex].phase_results = tasks[taskIndex].phase_results || {}
    tasks[taskIndex].phase_results[phase] = result

    const jsonlContent = tasks.map(t => JSON.stringify(t)).join('\n')
    Write(`${issueDir}/tasks.jsonl`, jsonlContent)
  }
}
```

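`readJsonl` is assumed throughout these helpers; a minimal sketch consistent with how `Write` rewrites the file (one JSON object per non-empty line):

```javascript
// Hypothetical helper: parse a JSONL file into an array of objects.
function readJsonl(filePath) {
  return Read(filePath)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line))
}
```
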
## Progressive Loading

For memory efficiency with large task lists, stream `tasks.jsonl` and only load tasks whose dependencies are satisfied:

```javascript
// Stream JSONL and only load ready tasks
function* getReadyTasksStream(issueDir, completedIds) {
  const filePath = `${issueDir}/tasks.jsonl`
  const lines = readFileLines(filePath)

  for (const line of lines) {
    if (!line.trim()) continue
    const task = JSON.parse(line)

    if (task.status === 'pending' &&
        task.depends_on.every(dep => completedIds.has(dep))) {
      yield task
    }
  }
}

// Usage: Only load what's needed
const iterator = getReadyTasksStream(issueDir, completedIds)
const batch = []
for (let i = 0; i < batchSize; i++) {
  const { value, done } = iterator.next()
  if (done) break
  batch.push(value)
}
```

### Codex Task Fetch Response

When codex calls `ccw issue next`, it receives:

```json
{
  "queue_id": "Q-001",
  "issue_id": "GH-123",
  "solution_id": "SOL-001",
  "task": {
    "id": "T1",
    "title": "Create auth middleware",
    "scope": "src/middleware/",
    "action": "Create",
    "description": "Create JWT validation middleware",
    "modification_points": [
      { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
    ],
    "implementation": [
      "Create auth.ts file in src/middleware/",
      "Implement JWT token validation using jsonwebtoken",
      "Add error handling for invalid/expired tokens",
      "Export middleware function"
    ],
    "acceptance": [
      "Middleware validates JWT tokens successfully",
      "Returns 401 for invalid or missing tokens",
      "Passes token payload to request context"
    ]
  },
  "context": {
    "relevant_files": ["src/config/auth.ts", "src/types/auth.d.ts"],
    "patterns": "Follow existing middleware pattern in src/middleware/logger.ts"
  },
  "execution_hints": {
    "executor": "codex",
    "estimated_minutes": 30
  }
}
```

### Phase 4: Completion Summary

```javascript
// Reload queue for final status
const finalQueue = JSON.parse(Read(queuePath));

const summary = {
  completed: finalQueue.queue.filter(q => q.status === 'completed').length,
  failed: finalQueue.queue.filter(q => q.status === 'failed').length,
  pending: finalQueue.queue.filter(q => q.status === 'pending').length,
  total: finalQueue.queue.length
};

console.log(`
## Execution Complete

**Completed**: ${summary.completed}/${summary.total}
**Failed**: ${summary.failed}
**Pending**: ${summary.pending}

### Task Results
${finalQueue.queue.map(q => {
  const icon = q.status === 'completed' ? '✓' :
               q.status === 'failed' ? '✗' :
               q.status === 'executing' ? '⟳' : '○';
  return `${icon} ${q.queue_id} [${q.issue_id}:${q.task_id}] - ${q.status}`;
}).join('\n')}
`);

// Update issue statuses in issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}"`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

const issueIds = [...new Set(finalQueue.queue.map(q => q.issue_id))];
for (const issueId of issueIds) {
  const issueTasks = finalQueue.queue.filter(q => q.issue_id === issueId);

  if (issueTasks.every(q => q.status === 'completed')) {
    console.log(`\n✓ Issue ${issueId} fully completed!`);

    // Update issue status
    const issueIndex = allIssues.findIndex(i => i.id === issueId);
    if (issueIndex !== -1) {
      allIssues[issueIndex].status = 'completed';
      allIssues[issueIndex].completed_at = new Date().toISOString();
      allIssues[issueIndex].updated_at = new Date().toISOString();
    }
  }
}

// Write updated issues.jsonl
Write(issuesPath, allIssues.map(i => JSON.stringify(i)).join('\n'));

if (summary.pending > 0) {
  console.log(`
### Continue Execution
Run \`/issue:execute\` again to execute remaining tasks.
`);
}
```

## Pause Criteria Evaluation

```javascript
async function evaluatePauseCriterion(criterion, task) {
  // Pattern matching for common pause conditions
  const patterns = [
    { match: /unclear|undefined|missing/i, action: 'ask_user' },
    { match: /security review/i, action: 'require_approval' },
    { match: /migration required/i, action: 'check_migration' },
    { match: /external (api|service)/i, action: 'verify_external' }
  ]

  for (const pattern of patterns) {
    if (pattern.match.test(criterion)) {
      // Check if condition is resolved
      const resolved = await checkCondition(pattern.action, criterion, task)
      if (!resolved) return true // Pause
    }
  }

  return false // Don't pause
}
```

## Dry Run Mode

```javascript
if (flags.dryRun) {
  console.log(`
## Dry Run - Would Execute

${readyTasks.map((t, i) => `
${i + 1}. ${t.queue_id}
   Issue: ${t.issue_id}
   Task: ${t.task_id}
   Executor: ${t.assigned_executor}
   Group: ${t.execution_group}
`).join('')}

No changes made. Remove --dry-run to execute.
`);
  return;
}
```

@@ -554,38 +386,32 @@ async function evaluatePauseCriterion(criterion, task) {

## Error Handling

| Error | Resolution |
|-------|------------|
| Task not found | List available tasks, suggest correct ID |
| Dependencies unsatisfied | Show blocking tasks, suggest running those first |
| Test failure (3x) | Mark failed, save state, suggest manual intervention |
| Pause triggered | Save state, display pause reason, await user action |
| Commit conflict | Stash changes, report conflict, await resolution |
| Queue not found | Display message, suggest /issue:queue |
| No ready tasks | Check dependencies, show blocked tasks |
| Codex timeout | Mark as failed, allow retry |
| ccw issue next empty | All tasks done or blocked |
| Task execution failure | Marked via ccw issue fail |

## Output

```
## Execution Complete

**Issue**: GH-123
**Tasks Executed**: 3/5
**Completed**: 3
**Failed**: 0
**Pending**: 2 (dependencies not met)

### Task Status
| ID | Title | Status | Phase | Commit |
|----|-------|--------|-------|--------|
| TASK-001 | Setup auth middleware | ✓ | done | a1b2c3d |
| TASK-002 | Protect API routes | ✓ | done | e4f5g6h |
| TASK-003 | Add login endpoint | ✓ | done | i7j8k9l |
| TASK-004 | Add logout endpoint | ⏳ | pending | - |
| TASK-005 | Integration tests | ⏳ | pending | - |

### Next Steps
Run `/issue:execute GH-123` to continue with remaining tasks.
```

## Endpoint Contract

### `ccw issue next`
- Returns next ready task as JSON
- Marks task as 'executing'
- Returns `{ status: 'empty' }` when no tasks

### `ccw issue complete <queue-id>`
- Marks task as 'completed'
- Updates queue.json
- Checks if issue is fully complete

### `ccw issue fail <queue-id>`
- Marks task as 'failed'
- Records failure reason
- Allows retry via /issue:execute

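A minimal driver sketch tying the three endpoints together (control flow only; the JSON shapes follow the fetch response shown earlier, and the `--result` payload is illustrative):

```javascript
// Hypothetical driver showing the endpoint contract end-to-end.
const next = JSON.parse(Bash('ccw issue next'))

if (next.status === 'empty') {
  console.log('Queue empty or blocked')
} else {
  // ... perform next.task.implementation steps here ...
  Bash(`ccw issue complete ${next.queue_id} --result '{"files_modified": [], "summary": "done"}'`)
  // On failure instead:
  // Bash(`ccw issue fail ${next.queue_id} --reason "Why it failed"`)
}
```
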
## Related Commands

- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue status` - Check issue execution status
- `ccw issue queue list` - View queue status
- `ccw issue retry` - Retry failed tasks

@@ -1,7 +1,7 @@
---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] [--batch-size 3]"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---

@@ -9,339 +9,317 @@ allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(

## Overview

Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow. The agent handles ACE semantic search, solution generation, and task breakdown.

**Core capabilities:**
- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and acceptance criteria
- Automatic solution registration and binding

## Storage Structure (Flat JSONL)

```
.workflow/issues/
├── issues.jsonl          # All issues (one per line)
├── queue.json            # Execution queue
└── solutions/
    ├── {issue-id}.jsonl  # Solutions for issue (one per line)
    └── ...
```

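For reference, a single line of `issues.jsonl` might look like this (fields follow the issues JSONL schema later in this commit; the values are illustrative):

```json
{"id":"GH-123","title":"Add JWT auth","status":"planned","priority":2,"context":"...","bound_solution_id":"SOL-20251226-001","solution_count":1,"source":"github","created_at":"2025-12-26T10:00:00Z","updated_at":"2025-12-26T10:05:00Z"}
```
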
## Usage

```bash
/issue:plan <issue-id>[,<issue-id>,...] [FLAGS]

# Examples
/issue:plan GH-123                 # Single issue
/issue:plan GH-123,GH-124,GH-125   # Batch (up to 3)
/issue:plan --all-pending          # All pending issues

# Flags
--executor <type>    Default executor: agent|codex|gemini|auto (default: auto)
--batch-size <n>     Max issues per agent batch (default: 3)
```

## Execution Process

```
Phase 1: Issue Loading
├─ Parse input (single, comma-separated, or --all-pending)
├─ Load issues from .workflow/issues/issues.jsonl
├─ Validate issues exist (create if needed)
└─ Group into batches (max 3 per batch)

Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
├─ Agent performs:
│  ├─ ACE semantic search for each issue
│  ├─ Codebase exploration (files, patterns, dependencies)
│  ├─ Solution generation with task breakdown
│  └─ Conflict detection across issues
└─ Output: solution JSON per issue

Phase 3: Solution Registration & Binding
├─ Append solutions to solutions/{issue-id}.jsonl
├─ Single solution per issue → auto-bind
├─ Multiple candidates → AskUserQuestion to select
└─ Update issues.jsonl with bound_solution_id

Phase 4: Summary
├─ Display bound solutions
├─ Show task counts per issue
└─ Display next steps (/issue:queue)
```

## Implementation

### Phase 1: Issue Loading

```javascript
// Parse input
const issueIds = userInput.includes(',')
  ? userInput.split(',').map(s => s.trim())
  : [userInput.trim()];

// Read issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

// Load and validate issues
const issues = [];
for (const id of issueIds) {
  let issue = allIssues.find(i => i.id === id);

  if (!issue) {
    console.log(`Issue ${id} not found. Creating...`);
    issue = {
      id,
      title: `Issue ${id}`,
      status: 'registered',
      priority: 3,
      context: '',
      created_at: new Date().toISOString(),
      updated_at: new Date().toISOString()
    };
    // Append to issues.jsonl
    Bash(`echo '${JSON.stringify(issue)}' >> "${issuesPath}"`);
  }

  issues.push(issue);
}

// Group into batches
const batchSize = flags.batchSize || 3;
const batches = [];
for (let i = 0; i < issues.length; i += batchSize) {
  batches.push(issues.slice(i, i + batchSize));
}

TodoWrite({
  todos: batches.flatMap((batch, i) => [
    { content: `Plan batch ${i+1}`, status: 'pending', activeForm: `Planning batch ${i+1}` }
  ])
});
```

### Phase 2: Unified Explore + Plan (issue-plan-agent)

```javascript
for (const [batchIndex, batch] of batches.entries()) {
  updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');

  // Build issue prompt for agent
  const issuePrompt = `
## Issues to Plan

${batch.map((issue, i) => `
### Issue ${i + 1}: ${issue.id}
**Title**: ${issue.title}
**Context**: ${issue.context || 'No context provided'}
`).join('\n')}

## Project Root
${process.cwd()}

## Requirements
1. Use ACE semantic search (mcp__ace-tool__search_context) for exploration
2. Generate complete solution with task breakdown
3. Each task must have:
   - implementation steps (2-7 steps)
   - acceptance criteria (1-4 testable criteria)
   - modification_points (exact file locations)
   - depends_on (task dependencies)
4. Detect file conflicts if multiple issues
`;

  // Launch issue-plan-agent (combines explore + plan)
  const result = Task(
    subagent_type="issue-plan-agent",
    run_in_background=false,
    description=`Explore & plan ${batch.length} issues`,
    prompt=issuePrompt
  );

  // Parse agent output
  const agentOutput = JSON.parse(result);

  // Register solutions for each issue (append to solutions/{issue-id}.jsonl)
  // Ensure solutions directory exists
  Bash(`mkdir -p .workflow/issues/solutions`);

  for (const item of agentOutput.solutions) {
    const solutionPath = `.workflow/issues/solutions/${item.issue_id}.jsonl`;

    // Append solution as new line
    Bash(`echo '${JSON.stringify(item.solution)}' >> "${solutionPath}"`);
  }

  // Handle conflicts if any
  if (agentOutput.conflicts?.length > 0) {
    console.log(`\n⚠ File conflicts detected:`);
    agentOutput.conflicts.forEach(c => {
      console.log(`  ${c.file}: ${c.issues.join(', ')} → suggested: ${c.suggested_order.join(' → ')}`);
    });
  }

  updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
```

### Phase 3: Solution Binding

```javascript
// Re-read issues.jsonl
let allIssuesUpdated = Bash(`cat "${issuesPath}"`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

for (const issue of issues) {
  const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
  const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));

  if (solutions.length === 0) {
    console.log(`⚠ No solutions for ${issue.id}`);
    continue;
  }

  let selectedSolId;

  if (solutions.length === 1) {
    // Auto-bind single solution
    selectedSolId = solutions[0].id;
    console.log(`✓ Auto-bound ${selectedSolId} to ${issue.id} (${solutions[0].tasks?.length || 0} tasks)`);
  } else {
    // Multiple solutions - ask user
    const answer = AskUserQuestion({
      questions: [{
        question: `Select solution for ${issue.id}:`,
        header: issue.id,
        multiSelect: false,
        options: solutions.map(s => ({
          label: `${s.id}: ${s.description || 'Solution'}`,
          description: `${s.tasks?.length || 0} tasks`
        }))
      }]
    });

    selectedSolId = extractSelectedSolutionId(answer);
    console.log(`✓ Bound ${selectedSolId} to ${issue.id}`);
  }

  // Update issue in allIssuesUpdated
  const issueIndex = allIssuesUpdated.findIndex(i => i.id === issue.id);
  if (issueIndex !== -1) {
    allIssuesUpdated[issueIndex].bound_solution_id = selectedSolId;
    allIssuesUpdated[issueIndex].status = 'planned';
    allIssuesUpdated[issueIndex].planned_at = new Date().toISOString();
    allIssuesUpdated[issueIndex].updated_at = new Date().toISOString();
  }

  // Mark solution as bound in solutions file
  const updatedSolutions = solutions.map(s => ({
    ...s,
    is_bound: s.id === selectedSolId,
    bound_at: s.id === selectedSolId ? new Date().toISOString() : s.bound_at
  }));
  Write(solPath, updatedSolutions.map(s => JSON.stringify(s)).join('\n'));
}

// Write updated issues.jsonl
Write(issuesPath, allIssuesUpdated.map(i => JSON.stringify(i)).join('\n'));
```

### Phase 4: Summary

```javascript
console.log(`
## Planning Complete

**Issues Planned**: ${issues.length}

### Bound Solutions
${issues.map(i => {
  const issue = allIssuesUpdated.find(a => a.id === i.id);
  return issue?.bound_solution_id
    ? `✓ ${i.id}: ${issue.bound_solution_id}`
    : `○ ${i.id}: No solution bound`;
}).join('\n')}

### Next Steps
1. Review: \`ccw issue status <issue-id>\`
2. Form queue: \`/issue:queue\`
3. Execute: \`/issue:execute\`
`);
```

## Solution Format

Each solution line in `solutions/{issue-id}.jsonl`:

```json
{
  "id": "SOL-20251226-001",
  "description": "Direct Implementation",
  "tasks": [
    {
      "id": "T1",
      "title": "Create auth middleware",
      "scope": "src/middleware/",
      "action": "Create",
      "description": "Create JWT validation middleware",
      "modification_points": [
        { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
      ],
      "implementation": [
        "Create auth.ts file",
        "Implement JWT validation",
        "Add error handling",
        "Export middleware"
      ],
      "acceptance": [
        "Middleware validates JWT tokens",
        "Returns 401 for invalid tokens"
      ],
      "depends_on": [],
      "estimated_minutes": 30
    }
  ],
  "exploration_context": {
    "relevant_files": ["src/config/auth.ts"],
    "patterns": "Follow existing middleware pattern"
  },
  "is_bound": true,
  "created_at": "2025-12-26T10:00:00Z",
  "bound_at": "2025-12-26T10:05:00Z"
}
```

@@ -349,13 +327,26 @@ function* streamJsonl(filePath) {

| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |

## Agent Integration

The command uses `issue-plan-agent` which:
1. Performs ACE semantic search per issue
2. Identifies modification points and patterns
3. Generates task breakdown with dependencies
4. Detects cross-issue file conflicts
5. Outputs solution JSON for registration

See `.claude/agents/issue-plan-agent.md` for agent specification.

## Related Commands

- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue list` - List all issues
- `ccw issue status` - View issue and solution details

303
.claude/commands/issue/queue.md
Normal file
@@ -0,0 +1,303 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---

# Issue Queue Command (/issue:queue)

## Overview

Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves conflicts, determines dependencies, and creates an ordered execution queue. The queue is global across all issues.

**Core capabilities:**
- **Agent-driven**: issue-queue-agent handles all ordering logic
- ACE semantic search for relationship discovery
- Dependency DAG construction and cycle detection
- File conflict detection and resolution
- Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
- Output global queue.json

## Storage Structure (Flat JSONL)

```
.workflow/issues/
├── issues.jsonl          # All issues (one per line)
├── queue.json            # Execution queue (output)
└── solutions/
    ├── {issue-id}.jsonl  # Solutions for issue
    └── ...
```

## Usage

```bash
/issue:queue [FLAGS]

# Examples
/issue:queue                  # Form queue from all bound solutions
/issue:queue --rebuild        # Rebuild queue (clear and regenerate)
/issue:queue --issue GH-123   # Add only specific issue to queue

# Flags
--rebuild        Clear existing queue and regenerate
--issue <id>     Add only specific issue's tasks
```

## Execution Process

```
Phase 1: Solution Loading
├─ Load issues.jsonl
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
└─ Extract tasks from bound solutions

Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all tasks
├─ Agent performs:
│  ├─ Build dependency DAG from depends_on
│  ├─ Detect circular dependencies
│  ├─ Identify file modification conflicts
│  ├─ Resolve conflicts using ordering rules
│  ├─ Calculate semantic priority (0.0-1.0)
│  └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered tasks

Phase 5: Queue Output
├─ Write queue.json
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```

## Implementation

### Phase 1: Solution Loading

```javascript
// Load issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

// Filter issues with bound solutions
const plannedIssues = allIssues.filter(i =>
  i.status === 'planned' && i.bound_solution_id
);

if (plannedIssues.length === 0) {
  console.log('No issues with bound solutions found.');
  console.log('Run /issue:plan first to create and bind solutions.');
  return;
}

// Load all tasks from bound solutions
const allTasks = [];
for (const issue of plannedIssues) {
  const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
  const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));

  // Find bound solution
  const boundSol = solutions.find(s => s.id === issue.bound_solution_id);

  if (!boundSol) {
    console.log(`⚠ Bound solution ${issue.bound_solution_id} not found for ${issue.id}`);
    continue;
  }

  for (const task of boundSol.tasks || []) {
    allTasks.push({
      issue_id: issue.id,
      solution_id: issue.bound_solution_id,
      task,
      exploration_context: boundSol.exploration_context
    });
  }
}

console.log(`Loaded ${allTasks.length} tasks from ${plannedIssues.length} issues`);
```

### Phase 2-4: Agent-Driven Queue Formation

```javascript
// Launch issue-queue-agent to handle all ordering logic
const agentPrompt = `
## Tasks to Order

${JSON.stringify(allTasks, null, 2)}

## Project Root
${process.cwd()}

## Requirements
1. Build dependency DAG from depends_on fields
2. Detect circular dependencies (abort if found)
3. Identify file modification conflicts
4. Resolve conflicts using ordering rules:
   - Create before Update/Implement
   - Foundation scopes (config/types) before implementation
   - Core logic before tests
5. Calculate semantic priority (0.0-1.0) for each task
6. Assign execution groups (parallel P* / sequential S*)
7. Output queue JSON
`;

const result = Task(
  subagent_type="issue-queue-agent",
  run_in_background=false,
  description=`Order ${allTasks.length} tasks from ${plannedIssues.length} issues`,
  prompt=agentPrompt
);

// Parse agent output
const agentOutput = JSON.parse(result);

if (!agentOutput.success) {
  console.error(`Queue formation failed: ${agentOutput.error}`);
  if (agentOutput.cycles) {
    console.error('Circular dependencies:', agentOutput.cycles.join(', '));
  }
  return;
}
```

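The agent's cycle check (requirement 2) can be sketched with Kahn's algorithm; a minimal sketch, assuming tasks are keyed as `issue_id:task.id` and that `depends_on` references task IDs within the same issue (the helper shape is an assumption, not the agent's actual code):

```javascript
// Hypothetical sketch: detect circular dependencies via Kahn's algorithm.
// Returns [] when the DAG is valid, otherwise the node keys stuck in a cycle.
function findCycles(allTasks) {
  const key = t => `${t.issue_id}:${t.task.id}`
  const indegree = new Map(allTasks.map(t => [key(t), 0]))
  const dependents = new Map(allTasks.map(t => [key(t), []]))

  for (const t of allTasks) {
    for (const dep of t.task.depends_on || []) {
      const depKey = `${t.issue_id}:${dep}` // assumed same-issue references
      indegree.set(key(t), indegree.get(key(t)) + 1)
      dependents.get(depKey)?.push(key(t))
    }
  }

  // Repeatedly remove nodes with no unmet dependencies
  const queue = [...indegree].filter(([, d]) => d === 0).map(([k]) => k)
  let visited = 0
  while (queue.length) {
    const k = queue.shift()
    visited++
    for (const next of dependents.get(k) || []) {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }

  // Any node that never drained is part of (or blocked by) a cycle
  return visited === allTasks.length
    ? []
    : [...indegree].filter(([, d]) => d > 0).map(([k]) => k)
}
```
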
### Phase 5: Queue Output & Summary

```javascript
const queueOutput = agentOutput.output;

// Write queue.json
Write('.workflow/issues/queue.json', JSON.stringify(queueOutput, null, 2));

// Update issue statuses in issues.jsonl
const updatedIssues = allIssues.map(issue => {
  if (plannedIssues.find(p => p.id === issue.id)) {
    return {
      ...issue,
      status: 'queued',
      queued_at: new Date().toISOString(),
      updated_at: new Date().toISOString()
    };
  }
  return issue;
});

Write(issuesPath, updatedIssues.map(i => JSON.stringify(i)).join('\n'));

// Display summary
console.log(`
## Queue Formed

**Total Tasks**: ${queueOutput.queue.length}
**Issues**: ${plannedIssues.length}
**Conflicts**: ${queueOutput.conflicts?.length || 0} (${queueOutput._metadata?.resolved_conflicts || 0} resolved)

### Execution Groups
${(queueOutput.execution_groups || []).map(g => {
  const type = g.type === 'parallel' ? 'Parallel' : 'Sequential';
  return `- ${g.id} (${type}): ${g.task_count} tasks`;
}).join('\n')}

### Next Steps
1. Review queue: \`ccw issue queue list\`
2. Execute: \`/issue:execute\`
`);
```

## Queue Schema

Output `queue.json`:

```json
{
  "queue": [
    {
      "queue_id": "Q-001",
      "issue_id": "GH-123",
      "solution_id": "SOL-001",
      "task_id": "T1",
      "status": "pending",
      "execution_order": 1,
      "execution_group": "P1",
      "depends_on": [],
      "semantic_priority": 0.7,
      "queued_at": "2025-12-26T10:00:00Z"
    }
  ],
  "conflicts": [
    {
      "type": "file_conflict",
      "file": "src/auth.ts",
      "tasks": ["GH-123:T1", "GH-124:T2"],
      "resolution": "sequential",
      "resolution_order": ["GH-123:T1", "GH-124:T2"],
      "rationale": "T1 creates file before T2 updates",
      "resolved": true
    }
  ],
  "execution_groups": [
    { "id": "P1", "type": "parallel", "task_count": 3, "tasks": ["GH-123:T1", "GH-124:T1", "GH-125:T1"] },
    { "id": "S2", "type": "sequential", "task_count": 2, "tasks": ["GH-123:T2", "GH-124:T2"] }
  ],
  "_metadata": {
    "version": "2.0",
    "storage": "jsonl",
    "total_tasks": 5,
    "total_conflicts": 1,
    "resolved_conflicts": 1,
    "parallel_groups": 1,
    "sequential_groups": 1,
    "timestamp": "2025-12-26T10:00:00Z",
    "source": "issue-queue-agent"
  }
}
```

## Semantic Priority Rules

| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Config/Types scope | +0.1 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |

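A minimal sketch of how these boosts could combine into the 0.0-1.0 score (the neutral base of 0.5 and the clamping are assumptions; the agent's exact formula is not specified here):

```javascript
// Hypothetical sketch: fold the table's boosts into a clamped score.
function semanticPriority(task) {
  let score = 0.5 // assumed neutral base
  const boosts = {
    Create: 0.2, Configure: 0.15, Implement: 0.1,
    Refactor: -0.05, Test: -0.1, Delete: -0.15
  }
  score += boosts[task.action] || 0
  if (/config|types/i.test(task.scope)) score += 0.1 // Config/Types scope boost
  return Math.min(1, Math.max(0, score))
}
```
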
## Error Handling

| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest /issue:plan |
| Circular dependency | List cycles, abort queue formation |
| Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn |

## Agent Integration

The command uses `issue-queue-agent` which:
1. Builds dependency DAG from task depends_on fields
2. Detects circular dependencies (aborts if found)
3. Identifies file modification conflicts across issues
4. Resolves conflicts using semantic ordering rules
5. Calculates priority (0.0-1.0) for each task
6. Assigns parallel/sequential execution groups
7. Outputs structured queue JSON

See `.claude/agents/issue-queue-agent.md` for agent specification.

## Related Commands

- `/issue:plan` - Plan issues and bind solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue queue list` - View current queue

@@ -4,6 +4,29 @@

> **Writing style guide**: [../specs/writing-style.md](../specs/writing-style.md)

## Execution Requirements

**Mandatory**: After all Phase 3 Analysis Agents complete, the main orchestrator **must** invoke this Consolidation Agent.

**Trigger conditions**:
- All Phase 3 agents have returned results (status: completed/partial/failed)
- `sections/section-*.md` files have been generated

**Input sources**:
- `agent_summaries`: JSON returned by each Phase 3 agent (contains status, output_file, summary, cross_module_notes)
- `cross_module_notes`: array of cross-module notes extracted from each agent's return

**Invocation timing**:
```javascript
// After Phase 3 completes, the main orchestrator runs:
const phase3Results = await runPhase3Agents(); // run all analysis agents in parallel
const agentSummaries = phase3Results.map(r => JSON.parse(r));
const crossNotes = agentSummaries.flatMap(s => s.cross_module_notes || []);

// Phase 3.5 Consolidation Agent must be invoked
await runPhase35Consolidation(agentSummaries, crossNotes);
```

## Core Responsibilities

1. **Cross-section synthesis**: generate synthesis (report overview)

@@ -22,7 +45,9 @@ interface ConsolidationInput {
}
```

## Agent Invocation Code

The main orchestrator invokes the Consolidation Agent with the following code:

```javascript
Task({

@@ -0,0 +1,74 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issues JSONL Schema",
  "description": "Schema for each line in issues.jsonl (flat storage)",
  "type": "object",
  "required": ["id", "title", "status", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Issue ID (e.g., GH-123, TEXT-xxx)"
    },
    "title": {
      "type": "string"
    },
    "status": {
      "type": "string",
      "enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
      "default": "registered"
    },
    "priority": {
      "type": "integer",
      "minimum": 1,
      "maximum": 5,
      "default": 3
    },
    "context": {
      "type": "string",
      "description": "Issue context/description (markdown)"
    },
    "bound_solution_id": {
      "type": "string",
      "description": "ID of the bound solution (null if none bound)"
    },
    "solution_count": {
      "type": "integer",
      "default": 0,
      "description": "Number of candidate solutions in solutions/{id}.jsonl"
    },
    "source": {
      "type": "string",
      "enum": ["github", "text", "file"],
      "description": "Source of the issue"
    },
    "source_url": {
      "type": "string",
      "description": "Original source URL (for GitHub issues)"
    },
    "labels": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Issue labels/tags"
    },
    "created_at": {
      "type": "string",
      "format": "date-time"
    },
    "updated_at": {
      "type": "string",
      "format": "date-time"
    },
    "planned_at": {
      "type": "string",
      "format": "date-time"
    },
    "queued_at": {
      "type": "string",
      "format": "date-time"
    },
    "completed_at": {
      "type": "string",
      "format": "date-time"
    }
  }
}
136
.claude/workflows/cli-templates/schemas/queue-schema.json
Normal file
@@ -0,0 +1,136 @@
{
|
||||
"$schema": "http://json-schema.org/draft-07/schema#",
|
||||
"title": "Issue Execution Queue Schema",
|
||||
"description": "Global execution queue for all issue tasks",
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"queue": {
|
||||
"type": "array",
|
||||
"description": "Ordered list of tasks to execute",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"required": ["queue_id", "issue_id", "solution_id", "task_id", "status"],
|
||||
"properties": {
|
||||
"queue_id": {
|
||||
"type": "string",
|
||||
"pattern": "^Q-[0-9]+$",
|
||||
"description": "Unique queue item identifier"
|
||||
},
|
||||
"issue_id": {
|
||||
"type": "string",
|
||||
"description": "Source issue ID"
|
||||
},
|
||||
"solution_id": {
|
||||
"type": "string",
|
||||
"description": "Source solution ID"
|
||||
},
|
||||
"task_id": {
|
||||
"type": "string",
|
||||
"description": "Task ID within solution"
|
||||
},
|
||||
"status": {
|
||||
"type": "string",
|
||||
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
|
||||
"default": "pending"
|
||||
},
|
||||
"execution_order": {
|
||||
"type": "integer",
|
||||
"description": "Order in execution sequence"
|
||||
},
|
||||
"execution_group": {
|
||||
"type": "string",
|
||||
"description": "Parallel execution group ID (e.g., P1, S1)"
|
||||
},
|
||||
"depends_on": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "Queue IDs this task depends on"
|
||||
},
|
||||
"semantic_priority": {
|
||||
"type": "number",
|
||||
"minimum": 0,
|
||||
"maximum": 1,
|
||||
"description": "Semantic importance score (0.0-1.0)"
|
||||
},
|
||||
"assigned_executor": {
|
||||
"type": "string",
|
||||
"enum": ["codex", "gemini", "agent"]
|
||||
},
|
||||
"queued_at": {
|
||||
"type": "string",
|
||||
"format": "date-time"
|
||||
},
|
||||
"started_at": {
|
||||
"type": "string",
|
||||
"format": "date-time"
|
||||
},
|
||||
"completed_at": {
|
||||
"type": "string",
|
||||
"format": "date-time"
|
||||
},
|
||||
"result": {
|
||||
"type": "object",
|
||||
"description": "Execution result",
|
||||
"properties": {
|
||||
"files_modified": { "type": "array", "items": { "type": "string" } },
|
||||
"files_created": { "type": "array", "items": { "type": "string" } },
|
||||
"summary": { "type": "string" },
|
||||
"commit_hash": { "type": "string" }
|
||||
}
|
||||
},
|
||||
"failure_reason": {
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"conflicts": {
|
||||
"type": "array",
|
||||
"description": "Detected conflicts between tasks",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"type": {
|
||||
"type": "string",
|
||||
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
|
||||
},
|
||||
"tasks": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "Queue IDs involved in conflict"
|
||||
},
|
||||
"file": {
|
||||
"type": "string",
|
||||
"description": "Conflicting file path"
|
||||
},
|
||||
"resolution": {
|
||||
"type": "string",
|
||||
"enum": ["sequential", "merge", "manual"]
|
||||
},
|
||||
"resolution_order": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" }
|
||||
},
|
||||
"resolved": {
|
||||
"type": "boolean",
|
||||
"default": false
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"_metadata": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"version": { "type": "string", "default": "1.0" },
|
||||
"total_items": { "type": "integer" },
|
||||
"pending_count": { "type": "integer" },
|
||||
"ready_count": { "type": "integer" },
|
||||
"executing_count": { "type": "integer" },
|
||||
"completed_count": { "type": "integer" },
|
||||
"failed_count": { "type": "integer" },
|
||||
"last_queue_formation": { "type": "string", "format": "date-time" },
|
||||
"last_updated": { "type": "string", "format": "date-time" }
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
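The `status` and `depends_on` fields together drive scheduling: an item can move from `pending` to `ready` once every queue item it depends on has completed. A minimal sketch of that promotion step, with function and variable names that are illustrative rather than from the original:

```javascript
// Promote pending queue items whose dependencies have all completed.
// `queue` is the parsed `queue` array from a document matching this schema.
function promoteReadyItems(queue) {
  const completed = new Set(
    queue.filter(item => item.status === 'completed').map(item => item.queue_id)
  );
  for (const item of queue) {
    if (
      item.status === 'pending' &&
      (item.depends_on ?? []).every(dep => completed.has(dep))
    ) {
      item.status = 'ready';
    }
  }
  return queue;
}
```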
94
.claude/workflows/cli-templates/schemas/registry-schema.json
Normal file

@@ -0,0 +1,94 @@

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Registry Schema",
  "description": "Global registry of all issues and their solutions",
  "type": "object",
  "properties": {
    "issues": {
      "type": "array",
      "description": "List of registered issues",
      "items": {
        "type": "object",
        "required": ["id", "title", "status", "created_at"],
        "properties": {
          "id": {
            "type": "string",
            "description": "Issue ID (e.g., GH-123, TEXT-xxx)"
          },
          "title": {
            "type": "string"
          },
          "status": {
            "type": "string",
            "enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
            "default": "registered"
          },
          "priority": {
            "type": "integer",
            "minimum": 1,
            "maximum": 5,
            "default": 3
          },
          "solution_count": {
            "type": "integer",
            "default": 0,
            "description": "Number of candidate solutions"
          },
          "bound_solution_id": {
            "type": ["string", "null"],
            "description": "ID of the bound solution (null if none bound)"
          },
          "source": {
            "type": "string",
            "enum": ["github", "text", "file"],
            "description": "Source of the issue"
          },
          "source_url": {
            "type": "string",
            "description": "Original source URL (for GitHub issues)"
          },
          "created_at": { "type": "string", "format": "date-time" },
          "updated_at": { "type": "string", "format": "date-time" },
          "planned_at": { "type": "string", "format": "date-time" },
          "queued_at": { "type": "string", "format": "date-time" },
          "completed_at": { "type": "string", "format": "date-time" }
        }
      }
    },
    "_metadata": {
      "type": "object",
      "properties": {
        "version": { "type": "string", "default": "1.0" },
        "total_issues": { "type": "integer" },
        "by_status": {
          "type": "object",
          "properties": {
            "registered": { "type": "integer" },
            "planning": { "type": "integer" },
            "planned": { "type": "integer" },
            "queued": { "type": "integer" },
            "executing": { "type": "integer" },
            "completed": { "type": "integer" },
            "failed": { "type": "integer" }
          }
        },
        "last_updated": { "type": "string", "format": "date-time" }
      }
    }
  }
}
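Since `_metadata` duplicates information derivable from the `issues` array, it must be recomputed whenever the registry changes. A sketch of that recomputation, assuming the registry has already been parsed into an object (helper name illustrative):

```javascript
// Rebuild the registry's _metadata block from its issues array.
function rebuildRegistryMetadata(registry) {
  const byStatus = {
    registered: 0, planning: 0, planned: 0, queued: 0,
    executing: 0, completed: 0, failed: 0
  };
  for (const issue of registry.issues) {
    // `paused` is a valid issue status but has no by_status counter, so guard.
    if (issue.status in byStatus) byStatus[issue.status] += 1;
  }
  registry._metadata = {
    ...registry._metadata,
    total_issues: registry.issues.length,
    by_status: byStatus,
    last_updated: new Date().toISOString()
  };
  return registry;
}
```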
120
.claude/workflows/cli-templates/schemas/solution-schema.json
Normal file

@@ -0,0 +1,120 @@

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Solution Schema",
  "description": "Schema for solution registered to an issue",
  "type": "object",
  "required": ["id", "issue_id", "tasks", "status", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Unique solution identifier",
      "pattern": "^SOL-[0-9]+$"
    },
    "issue_id": {
      "type": "string",
      "description": "Parent issue ID"
    },
    "plan_session_id": {
      "type": "string",
      "description": "Planning session that created this solution"
    },
    "tasks": {
      "type": "array",
      "description": "Task breakdown for this solution",
      "items": {
        "type": "object",
        "required": ["id", "title", "scope", "action", "acceptance"],
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^T[0-9]+$"
          },
          "title": {
            "type": "string",
            "description": "Action verb + target"
          },
          "scope": {
            "type": "string",
            "description": "Module path or feature area"
          },
          "action": {
            "type": "string",
            "enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
          },
          "description": {
            "type": "string",
            "description": "1-2 sentences describing what to implement"
          },
          "modification_points": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "file": { "type": "string" },
                "target": { "type": "string" },
                "change": { "type": "string" }
              }
            }
          },
          "implementation": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Step-by-step implementation guide"
          },
          "acceptance": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Quantified completion criteria"
          },
          "depends_on": {
            "type": "array",
            "items": { "type": "string" },
            "default": [],
            "description": "Task IDs this task depends on"
          },
          "estimated_minutes": {
            "type": "integer",
            "description": "Estimated time to complete"
          },
          "executor": {
            "type": "string",
            "enum": ["codex", "gemini", "agent", "auto"],
            "default": "auto"
          }
        }
      }
    },
    "exploration_context": {
      "type": "object",
      "description": "ACE exploration results",
      "properties": {
        "project_structure": { "type": "string" },
        "relevant_files": {
          "type": "array",
          "items": { "type": "string" }
        },
        "patterns": { "type": "string" },
        "integration_points": { "type": "string" }
      }
    },
    "status": {
      "type": "string",
      "enum": ["draft", "candidate", "bound", "queued", "executing", "completed", "failed"],
      "default": "draft"
    },
    "is_bound": {
      "type": "boolean",
      "default": false,
      "description": "Whether this solution is bound to the issue"
    },
    "created_at": { "type": "string", "format": "date-time" },
    "bound_at": {
      "type": "string",
      "format": "date-time",
      "description": "When this solution was bound to the issue"
    }
  }
}
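Because each task's `depends_on` references other task IDs within the same solution, the task list must form a DAG before it can be ordered for execution. A sketch of a cycle check over that field (function and variable names are illustrative):

```javascript
// Verify that a solution's tasks form a DAG via depth-first search.
// Returns true when no dependency cycle exists among task IDs.
function tasksFormDag(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.depends_on ?? []]));
  const state = new Map(); // task id -> 'visiting' | 'done'

  function visit(id) {
    if (state.get(id) === 'done') return true;
    if (state.get(id) === 'visiting') return false; // back edge: cycle found
    state.set(id, 'visiting');
    for (const dep of deps.get(id) ?? []) {
      if (!visit(dep)) return false;
    }
    state.set(id, 'done');
    return true;
  }

  return tasks.every(t => visit(t.id));
}
```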
@@ -0,0 +1,125 @@

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Solutions JSONL Schema",
  "description": "Schema for each line in solutions/{issue-id}.jsonl",
  "type": "object",
  "required": ["id", "tasks", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Unique solution identifier",
      "pattern": "^SOL-[0-9]+$"
    },
    "description": {
      "type": "string",
      "description": "Solution approach description"
    },
    "tasks": {
      "type": "array",
      "description": "Task breakdown for this solution",
      "items": {
        "type": "object",
        "required": ["id", "title", "scope", "action", "acceptance"],
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^T[0-9]+$"
          },
          "title": {
            "type": "string",
            "description": "Action verb + target"
          },
          "scope": {
            "type": "string",
            "description": "Module path or feature area"
          },
          "action": {
            "type": "string",
            "enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
          },
          "description": {
            "type": "string",
            "description": "1-2 sentences describing what to implement"
          },
          "modification_points": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "file": { "type": "string" },
                "target": { "type": "string" },
                "change": { "type": "string" }
              }
            }
          },
          "implementation": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Step-by-step implementation guide"
          },
          "acceptance": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Quantified completion criteria"
          },
          "depends_on": {
            "type": "array",
            "items": { "type": "string" },
            "default": [],
            "description": "Task IDs this task depends on"
          },
          "estimated_minutes": {
            "type": "integer",
            "description": "Estimated time to complete"
          },
          "executor": {
            "type": "string",
            "enum": ["codex", "gemini", "agent", "auto"],
            "default": "auto"
          }
        }
      }
    },
    "exploration_context": {
      "type": "object",
      "description": "ACE exploration results",
      "properties": {
        "project_structure": { "type": "string" },
        "relevant_files": {
          "type": "array",
          "items": { "type": "string" }
        },
        "patterns": { "type": "string" },
        "integration_points": { "type": "string" }
      }
    },
    "analysis": {
      "type": "object",
      "properties": {
        "risk": { "type": "string", "enum": ["low", "medium", "high"] },
        "impact": { "type": "string", "enum": ["low", "medium", "high"] },
        "complexity": { "type": "string", "enum": ["low", "medium", "high"] }
      }
    },
    "score": {
      "type": "number",
      "minimum": 0,
      "maximum": 1,
      "description": "Solution quality score (0.0-1.0)"
    },
    "is_bound": {
      "type": "boolean",
      "default": false,
      "description": "Whether this solution is bound to the issue"
    },
    "created_at": { "type": "string", "format": "date-time" },
    "bound_at": {
      "type": "string",
      "format": "date-time",
      "description": "When this solution was bound to the issue"
    }
  }
}
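With `is_bound` and `score` stored per line, selecting the effective solution for an issue reduces to: prefer the bound line, otherwise fall back to the highest-scoring candidate. A sketch under the file layout this schema describes (helper name and runtime are illustrative; shown for Node.js):

```javascript
import { readFileSync } from 'node:fs';

// Pick the effective solution from solutions/{issue-id}.jsonl:
// the bound one if present, else the highest-scoring candidate.
function selectSolution(issueId) {
  const solutions = readFileSync(`solutions/${issueId}.jsonl`, 'utf8')
    .split('\n')
    .filter(Boolean)
    .map(line => JSON.parse(line));

  return (
    solutions.find(sol => sol.is_bound) ??
    solutions.reduce(
      (best, sol) => ((sol.score ?? 0) > (best?.score ?? -1) ? sol : best),
      null
    )
  );
}
```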