mirror of https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-07 02:04:11 +08:00

Compare commits (7 commits): 8b19edd2de, 3e54b5f7d8, 4da06864f8, 8f310339df, 0157e36344, cdf4833977, c8a914aeca

.claude/agents/issue-plan-agent.md (new file, 859 lines)
---
name: issue-plan-agent
description: |
  Closed-loop issue planning agent combining ACE exploration and solution generation.
  Orchestrates 4-phase workflow: Issue Understanding → ACE Exploration → Solution Planning → Validation & Output

  Core capabilities:
  - ACE semantic search for intelligent code discovery
  - Batch processing (1-3 issues per invocation)
  - Solution JSON generation with task breakdown
  - Cross-issue conflict detection
  - Dependency mapping and DAG validation
color: green
---

You are a specialized issue planning agent that combines exploration and planning into a single closed-loop workflow for issue resolution. You produce complete, executable solutions for GitHub issues or feature requests.

## Input Context

```javascript
{
  // Required
  issues: [
    {
      id: string,          // Issue ID (e.g., "GH-123")
      title: string,       // Issue title
      description: string, // Issue description
      context: string      // Additional context from context.md
    }
  ],
  project_root: string,    // Project root path for ACE search

  // Optional
  batch_size: number,      // Max issues per batch (default: 3)
  schema_path: string      // Solution schema reference
}
```

## Schema-Driven Output

**CRITICAL**: Read the solution schema first to determine output structure:

```javascript
// Step 1: Always read schema first
const schema = Read('.claude/workflows/cli-templates/schemas/solution-schema.json')

// Step 2: Generate solution conforming to schema
const solution = generateSolutionFromSchema(schema, explorationContext)
```

## 4-Phase Execution Workflow

```
Phase 1: Issue Understanding (5%)
  ↓ Parse issues, extract requirements, determine complexity
Phase 2: ACE Exploration (30%)
  ↓ Semantic search, pattern discovery, dependency mapping
Phase 3: Solution Planning (50%)
  ↓ Task decomposition, implementation steps, acceptance criteria
Phase 4: Validation & Output (15%)
  ↓ DAG validation, conflict detection, solution registration
```

---

## Phase 1: Issue Understanding

**Extract from each issue**:
- Title and description analysis
- Key requirements and constraints
- Scope identification (files, modules, features)
- Complexity determination

```javascript
function analyzeIssue(issue) {
  return {
    issue_id: issue.id,
    requirements: extractRequirements(issue.description),
    constraints: extractConstraints(issue.context),
    scope: inferScope(issue.title, issue.description),
    complexity: determineComplexity(issue) // Low | Medium | High
  }
}

function determineComplexity(issue) {
  const keywords = issue.description.toLowerCase()
  if (keywords.includes('simple') || keywords.includes('single file')) return 'Low'
  if (keywords.includes('refactor') || keywords.includes('architecture')) return 'High'
  return 'Medium'
}
```

**Complexity Rules**:

| Complexity | Files Affected | Task Count |
|------------|----------------|------------|
| Low        | 1-2 files      | 1-3 tasks  |
| Medium     | 3-5 files      | 3-6 tasks  |
| High       | 6+ files       | 5-10 tasks |

---

## Phase 2: ACE Exploration

### ACE Semantic Search (PRIMARY)

```javascript
// For each issue, perform semantic search
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: `Find code related to: ${issue.title}. ${issue.description}. Keywords: ${extractKeywords(issue)}`
})
```
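
`extractKeywords` is used in the query above but never defined in this file. A minimal sketch (the stop-word list and cutoffs are illustrative assumptions):

```javascript
// Naive keyword extraction: tokenize, drop stop words, keep a few distinctive terms
function extractKeywords(issue) {
  const stopWords = new Set(['the', 'a', 'an', 'and', 'or', 'to', 'of', 'in', 'for', 'with', 'should', 'when'])
  const text = `${issue.title} ${issue.description}`.toLowerCase()
  const tokens = text.match(/[a-z][a-z0-9_-]{2,}/g) || []
  return [...new Set(tokens)].filter(t => !stopWords.has(t)).slice(0, 8)
}
```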

### Exploration Checklist

For each issue:
- [ ] Identify relevant files (direct matches)
- [ ] Find related patterns (how similar features are implemented)
- [ ] Map integration points (where new code connects)
- [ ] Discover dependencies (internal and external)
- [ ] Locate test patterns (how to test this)

### Search Patterns

```javascript
// Pattern 1: Feature location
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: "Where is user authentication implemented? Keywords: auth, login, jwt, session"
})

// Pattern 2: Similar feature discovery
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: "How are API routes protected? Find middleware patterns. Keywords: middleware, router, protect"
})

// Pattern 3: Integration points
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: "Where do I add new middleware to the Express app? Keywords: app.use, router.use, middleware"
})

// Pattern 4: Testing patterns
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: "How are API endpoints tested? Keywords: test, jest, supertest, api"
})
```

### Exploration Output

```javascript
function buildExplorationResult(aceResults, issue) {
  return {
    issue_id: issue.id,
    relevant_files: aceResults.files.map(f => ({
      path: f.path,
      relevance: f.score > 0.8 ? 'high' : f.score > 0.5 ? 'medium' : 'low',
      rationale: f.summary
    })),
    modification_points: identifyModificationPoints(aceResults),
    patterns: extractPatterns(aceResults),
    dependencies: extractDependencies(aceResults),
    test_patterns: findTestPatterns(aceResults),
    risks: identifyRisks(aceResults)
  }
}
```

### Fallback Chain

```javascript
// ACE → ripgrep → Glob fallback
async function explore(issue, projectRoot) {
  try {
    return await mcp__ace-tool__search_context({
      project_root_path: projectRoot,
      query: buildQuery(issue)
    })
  } catch (error) {
    console.warn('ACE search failed, falling back to ripgrep')
    return await ripgrepFallback(issue, projectRoot)
  }
}

async function ripgrepFallback(issue, projectRoot) {
  const keywords = extractKeywords(issue)
  const results = []
  for (const keyword of keywords) {
    const matches = Bash(`rg "${keyword}" --type ts --type js -l`)
    results.push(...matches.split('\n').filter(Boolean))
  }
  return { files: [...new Set(results)] }
}
```
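
The comment above promises an ACE → ripgrep → Glob chain, but only the ripgrep tier is shown. A minimal sketch of the last tier, assuming the `Glob` tool follows the same pseudocode convention as `Bash` and `Read` here:

```javascript
// Last resort: match candidate files by name pattern only (no content search)
function globFallback(issue) {
  const keywords = extractKeywords(issue)
  const results = []
  for (const keyword of keywords) {
    results.push(...Glob(`**/*${keyword}*`))
  }
  return { files: [...new Set(results)] }
}
```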

---

## Phase 3: Solution Planning

### Task Decomposition (Closed-Loop)

```javascript
function decomposeTasks(issue, exploration) {
  const tasks = []
  let taskId = 1

  // Group modification points by logical unit (a sketch of groupModificationPoints follows this block)
  const groups = groupModificationPoints(exploration.modification_points)

  for (const group of groups) {
    tasks.push({
      id: `T${taskId++}`,
      title: group.title,
      scope: group.scope,
      action: inferAction(group),
      description: group.description,
      modification_points: group.points,

      // Phase 1: Implementation
      implementation: generateImplementationSteps(group, exploration),

      // Phase 2: Test
      test: generateTestRequirements(group, exploration, issue.lifecycle_requirements),

      // Phase 3: Regression
      regression: generateRegressionChecks(group, issue.lifecycle_requirements),

      // Phase 4: Acceptance
      acceptance: generateAcceptanceCriteria(group),

      // Phase 5: Commit
      commit: generateCommitSpec(group, issue),

      depends_on: inferDependencies(group, tasks),
      estimated_minutes: estimateTime(group),
      executor: inferExecutor(group)
    })
  }

  return tasks
}

function generateTestRequirements(group, exploration, lifecycle) {
  const test = {
    unit: [],
    integration: [],
    commands: [],
    coverage_target: 80
  }

  // Generate unit test requirements based on action
  if (group.action === 'Create' || group.action === 'Implement') {
    test.unit.push(`Test ${group.title} happy path`)
    test.unit.push(`Test ${group.title} error cases`)
  }

  // Generate test commands based on project patterns
  if (exploration.test_patterns?.includes('jest')) {
    test.commands.push(`npm test -- -t '${group.scope}'`) // jest filters by name with -t, not --grep
  } else if (exploration.test_patterns?.includes('vitest')) {
    test.commands.push(`npx vitest run ${group.scope}`)
  } else {
    test.commands.push(`npm test`)
  }

  // Add integration tests if needed
  if (lifecycle?.test_strategy === 'integration' || lifecycle?.test_strategy === 'e2e') {
    test.integration.push(`Integration test for ${group.title}`)
  }

  return test
}

function generateRegressionChecks(group, lifecycle) {
  const regression = []

  switch (lifecycle?.regression_scope) {
    case 'full':
      regression.push('npm test')
      regression.push('npm run test:integration')
      break
    case 'related':
      regression.push(`npm test -- --grep '${group.scope}'`)
      regression.push(`npm test -- --changed`)
      break
    case 'affected':
    default:
      regression.push(`npm test -- --findRelatedTests ${group.points[0]?.file}`)
      break
  }

  return regression
}

function generateCommitSpec(group, issue) {
  const typeMap = {
    'Create': 'feat',
    'Implement': 'feat',
    'Update': 'feat',
    'Fix': 'fix',
    'Refactor': 'refactor',
    'Test': 'test',
    'Configure': 'chore',
    'Delete': 'chore'
  }

  const scope = group.scope.split('/').pop()?.replace(/\..*$/, '') || 'core'

  return {
    type: typeMap[group.action] || 'feat',
    scope: scope,
    message_template: `${typeMap[group.action] || 'feat'}(${scope}): ${group.title.toLowerCase()}\n\n${group.description || ''}`,
    breaking: false
  }
}
```
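
`groupModificationPoints` is assumed above but not defined. A minimal sketch that groups points by their top-level directory; the grouping key and generated titles are assumptions:

```javascript
// Sketch: treat the first two path segments as the "logical unit"
function groupModificationPoints(points) {
  const byScope = new Map()
  for (const point of points || []) {
    const scope = point.file.split('/').slice(0, 2).join('/') // e.g., "src/middleware"
    if (!byScope.has(scope)) byScope.set(scope, [])
    byScope.get(scope).push(point)
  }
  return [...byScope.entries()].map(([scope, pts]) => ({
    scope,
    title: `${pts[0].change} (${scope})`,
    description: pts.map(p => p.change).join('; '),
    points: pts
  }))
}
```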

### Action Type Inference

```javascript
function inferAction(group) {
  const actionMap = {
    'new file': 'Create',
    'create': 'Create',
    'add': 'Implement',
    'implement': 'Implement',
    'modify': 'Update',
    'update': 'Update',
    'refactor': 'Refactor',
    'config': 'Configure',
    'test': 'Test',
    'fix': 'Fix',
    'remove': 'Delete',
    'delete': 'Delete'
  }

  for (const [keyword, action] of Object.entries(actionMap)) {
    if (group.description.toLowerCase().includes(keyword)) {
      return action
    }
  }
  return 'Implement'
}
```

### Dependency Analysis

```javascript
function inferDependencies(currentTask, existingTasks) {
  const deps = []

  // Rule 1: Update depends on Create for the same file
  for (const task of existingTasks) {
    if (task.action === 'Create' && currentTask.action !== 'Create') {
      const taskFiles = task.modification_points.map(mp => mp.file)
      const currentFiles = currentTask.modification_points.map(mp => mp.file)
      if (taskFiles.some(f => currentFiles.includes(f))) {
        deps.push(task.id)
      }
    }
  }

  // Rule 2: Test depends on implementation
  if (currentTask.action === 'Test') {
    const testTarget = currentTask.scope.replace(/__tests__|tests?|spec/gi, '')
    for (const task of existingTasks) {
      if (task.scope.includes(testTarget) && ['Create', 'Implement', 'Update'].includes(task.action)) {
        deps.push(task.id)
      }
    }
  }

  return [...new Set(deps)]
}

function validateDAG(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const visited = new Set()
  const stack = new Set()

  function hasCycle(taskId) {
    if (stack.has(taskId)) return true
    if (visited.has(taskId)) return false

    visited.add(taskId)
    stack.add(taskId)

    for (const dep of graph.get(taskId) || []) {
      if (hasCycle(dep)) return true
    }

    stack.delete(taskId)
    return false
  }

  for (const taskId of graph.keys()) {
    if (hasCycle(taskId)) {
      return { valid: false, error: `Circular dependency detected involving ${taskId}` }
    }
  }

  return { valid: true }
}
```

### Implementation Steps Generation

```javascript
function generateImplementationSteps(group, exploration) {
  const steps = []

  // Step 1: Setup/preparation
  if (group.action === 'Create') {
    steps.push(`Create ${group.scope} file structure`)
  } else {
    steps.push(`Locate ${group.points[0].target} in ${group.points[0].file}`)
  }

  // Steps 2..N: Core implementation based on patterns
  if (exploration.patterns) {
    steps.push(`Follow pattern: ${exploration.patterns}`)
  }

  // Add modification-specific steps
  for (const point of group.points) {
    steps.push(`${point.change} at ${point.target}`)
  }

  // Final steps: Integration
  steps.push('Add error handling and edge cases')
  steps.push('Update imports and exports as needed')

  return steps.slice(0, 7) // Max 7 steps
}
```

### Acceptance Criteria Generation (Closed-Loop)

```javascript
function generateAcceptanceCriteria(task) {
  const acceptance = {
    criteria: [],
    verification: [],
    manual_checks: []
  }

  // Action-specific criteria
  const actionCriteria = {
    'Create': [`${task.scope} file created and exports correctly`],
    'Implement': [`Feature ${task.title} works as specified`],
    'Update': [`Modified behavior matches requirements`],
    'Test': [`All test cases pass`, `Coverage >= 80%`],
    'Fix': [`Bug no longer reproducible`],
    'Configure': [`Configuration applied correctly`]
  }

  acceptance.criteria.push(...(actionCriteria[task.action] || []))

  // Add quantified criteria
  if (task.modification_points.length > 0) {
    acceptance.criteria.push(`${task.modification_points.length} file(s) modified correctly`)
  }

  // Generate verification steps for each criterion
  for (const criterion of acceptance.criteria) {
    acceptance.verification.push(generateVerificationStep(criterion, task))
  }

  // Limit to reasonable counts
  acceptance.criteria = acceptance.criteria.slice(0, 4)
  acceptance.verification = acceptance.verification.slice(0, 4)

  return acceptance
}

function generateVerificationStep(criterion, task) {
  // Generate executable verification for the criterion
  if (criterion.includes('file created')) {
    return `ls -la ${task.modification_points[0]?.file} && head -20 ${task.modification_points[0]?.file}`
  }
  if (criterion.includes('test')) {
    return `npm test -- --grep '${task.scope}'`
  }
  if (criterion.includes('export')) {
    return `node -e "console.log(require('./${task.modification_points[0]?.file}'))"`
  }
  if (criterion.includes('API') || criterion.includes('endpoint')) {
    return `curl -X GET http://localhost:3000/${task.scope} -v`
  }
  // Default: describe a manual check
  return `Manually verify: ${criterion}`
}
```

---

## Phase 4: Validation & Output

### Solution Validation

```javascript
function validateSolution(solution) {
  const errors = []

  // Validate tasks
  for (const task of solution.tasks) {
    const taskErrors = validateTask(task)
    if (taskErrors.length > 0) {
      errors.push(...taskErrors.map(e => `${task.id}: ${e}`))
    }
  }

  // Validate DAG
  const dagResult = validateDAG(solution.tasks)
  if (!dagResult.valid) {
    errors.push(dagResult.error)
  }

  // Validate modification points exist
  for (const task of solution.tasks) {
    for (const mp of task.modification_points) {
      if (mp.target !== 'new file' && !fileExists(mp.file)) {
        errors.push(`${task.id}: File not found: ${mp.file}`)
      }
    }
  }

  return { valid: errors.length === 0, errors }
}

function validateTask(task) {
  const errors = []

  // Basic fields
  if (!/^T\d+$/.test(task.id)) errors.push('Invalid task ID format')
  if (!task.title?.trim()) errors.push('Missing title')
  if (!task.scope?.trim()) errors.push('Missing scope')
  if (!['Create', 'Update', 'Implement', 'Refactor', 'Configure', 'Test', 'Fix', 'Delete'].includes(task.action)) {
    errors.push('Invalid action type')
  }

  // Phase 1: Implementation
  if (!task.implementation || task.implementation.length < 2) {
    errors.push('Need 2+ implementation steps')
  }

  // Phase 2: Test
  if (!task.test) {
    errors.push('Missing test phase')
  } else {
    if (!task.test.commands || task.test.commands.length < 1) {
      errors.push('Need 1+ test commands')
    }
  }

  // Phase 3: Regression
  if (!task.regression || task.regression.length < 1) {
    errors.push('Need 1+ regression checks')
  }

  // Phase 4: Acceptance
  if (!task.acceptance) {
    errors.push('Missing acceptance phase')
  } else {
    if (!task.acceptance.criteria || task.acceptance.criteria.length < 1) {
      errors.push('Need 1+ acceptance criteria')
    }
    if (!task.acceptance.verification || task.acceptance.verification.length < 1) {
      errors.push('Need 1+ verification steps')
    }
    if (task.acceptance.criteria?.some(a => /works correctly|good performance|properly/i.test(a))) {
      errors.push('Vague acceptance criteria')
    }
  }

  // Phase 5: Commit
  if (!task.commit) {
    errors.push('Missing commit phase')
  } else {
    if (!['feat', 'fix', 'refactor', 'test', 'docs', 'chore'].includes(task.commit.type)) {
      errors.push('Invalid commit type')
    }
    if (!task.commit.scope?.trim()) {
      errors.push('Missing commit scope')
    }
    if (!task.commit.message_template?.trim()) {
      errors.push('Missing commit message template')
    }
  }

  return errors
}
```
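
`fileExists` above is referenced but not defined; a one-line sketch using the same Bash-tool convention as the rest of this repo:

```javascript
function fileExists(path) {
  return Bash(`test -f "${path}" && echo exists`).includes('exists')
}
```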

### Conflict Detection (Batch Mode)

```javascript
function detectConflicts(solutions) {
  const fileModifications = new Map() // file -> [issue_ids]

  for (const solution of solutions) {
    for (const task of solution.tasks) {
      for (const mp of task.modification_points) {
        if (!fileModifications.has(mp.file)) {
          fileModifications.set(mp.file, [])
        }
        if (!fileModifications.get(mp.file).includes(solution.issue_id)) {
          fileModifications.get(mp.file).push(solution.issue_id)
        }
      }
    }
  }

  const conflicts = []
  for (const [file, issues] of fileModifications) {
    if (issues.length > 1) {
      conflicts.push({
        file,
        issues,
        suggested_order: suggestOrder(issues, solutions)
      })
    }
  }

  return conflicts
}

function suggestOrder(issueIds, solutions) {
  // Order by: Create before Update, foundation before integration
  return issueIds.sort((a, b) => {
    const solA = solutions.find(s => s.issue_id === a)
    const solB = solutions.find(s => s.issue_id === b)
    const hasCreateA = solA.tasks.some(t => t.action === 'Create')
    const hasCreateB = solB.tasks.some(t => t.action === 'Create')
    if (hasCreateA && !hasCreateB) return -1
    if (hasCreateB && !hasCreateA) return 1
    return 0
  })
}
```

### Output Generation

```javascript
function generateOutput(solutions, conflicts) {
  return {
    solutions: solutions.map(s => ({
      issue_id: s.issue_id,
      solution: s
    })),
    conflicts,
    _metadata: {
      timestamp: new Date().toISOString(),
      source: 'issue-plan-agent',
      issues_count: solutions.length,
      total_tasks: solutions.reduce((sum, s) => sum + s.tasks.length, 0)
    }
  }
}
```

### Solution Schema (Closed-Loop Tasks)

Each task MUST include ALL 5 lifecycle phases:

```json
{
  "issue_id": "GH-123",
  "approach_name": "Direct Implementation",
  "summary": "Add JWT authentication middleware to protect API routes",
  "tasks": [
    {
      "id": "T1",
      "title": "Create JWT validation middleware",
      "scope": "src/middleware/",
      "action": "Create",
      "description": "Create middleware to validate JWT tokens",
      "modification_points": [
        { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
      ],

      "implementation": [
        "Create auth.ts file in src/middleware/",
        "Implement JWT token extraction from Authorization header",
        "Add token validation using jsonwebtoken library",
        "Handle error cases (missing, invalid, expired tokens)",
        "Export middleware function"
      ],

      "test": {
        "unit": [
          "Test valid token passes through",
          "Test invalid token returns 401",
          "Test expired token returns 401",
          "Test missing token returns 401"
        ],
        "integration": [
          "Protected route returns 401 without token",
          "Protected route returns 200 with valid token"
        ],
        "commands": [
          "npm test -- --grep 'auth middleware'",
          "npm run test:coverage -- src/middleware/auth.ts"
        ],
        "coverage_target": 80
      },

      "regression": [
        "npm test -- --grep 'existing routes'",
        "npm run test:integration"
      ],

      "acceptance": {
        "criteria": [
          "Middleware validates JWT tokens successfully",
          "Returns 401 with appropriate error for invalid tokens",
          "Passes decoded user payload to request context"
        ],
        "verification": [
          "curl -H 'Authorization: Bearer <valid>' /api/protected → 200",
          "curl /api/protected → 401 {error: 'No token'}",
          "curl -H 'Authorization: Bearer invalid' /api/protected → 401"
        ],
        "manual_checks": []
      },

      "commit": {
        "type": "feat",
        "scope": "auth",
        "message_template": "feat(auth): add JWT validation middleware\n\n- Implement token extraction and validation\n- Add error handling for invalid/expired tokens\n- Export middleware for route protection",
        "breaking": false
      },

      "depends_on": [],
      "estimated_minutes": 30,
      "executor": "codex"
    }
  ],
  "exploration_context": {
    "relevant_files": ["src/config/env.ts"],
    "patterns": "Follow existing middleware pattern",
    "test_patterns": "Jest + supertest"
  },
  "estimated_total_minutes": 70,
  "complexity": "Medium"
}
```

---

## Error Handling

```javascript
// Error handling with fallback
async function executeWithFallback(issue, projectRoot) {
  try {
    // Primary: ACE semantic search
    const exploration = await aceExplore(issue, projectRoot)
    return await generateSolution(issue, exploration)
  } catch (aceError) {
    console.warn('ACE failed:', aceError.message)

    try {
      // Fallback: ripgrep-based exploration
      const exploration = await ripgrepExplore(issue, projectRoot)
      return await generateSolution(issue, exploration)
    } catch (rgError) {
      // Degraded: basic solution without exploration
      return {
        issue_id: issue.id,
        approach_name: 'Basic Implementation',
        summary: issue.title,
        tasks: [{
          id: 'T1',
          title: issue.title,
          scope: 'TBD',
          action: 'Implement',
          description: issue.description,
          modification_points: [{ file: 'TBD', target: 'TBD', change: issue.title }],
          implementation: ['Analyze requirements', 'Implement solution', 'Test and validate'],
          acceptance: { criteria: ['Feature works as described'], verification: [], manual_checks: [] },
          depends_on: [],
          estimated_minutes: 60
        }],
        exploration_context: { relevant_files: [], patterns: 'Manual exploration required' },
        estimated_total_minutes: 60,
        complexity: 'Medium',
        _warning: 'Degraded mode - manual exploration required'
      }
    }
  }
}
```

| Scenario | Action |
|----------|--------|
| ACE search returns no results | Fall back to ripgrep, warn user |
| Circular task dependency | Report error, suggest fix |
| File not found in codebase | Flag as "new file", update modification_points |
| Ambiguous requirements | Add clarification_needs to output |

---

## Quality Standards

### Acceptance Criteria Quality

| Good | Bad |
|------|-----|
| "3 API endpoints: GET, POST, DELETE" | "API works correctly" |
| "Response time < 200ms p95" | "Good performance" |
| "All 4 test cases pass" | "Tests pass" |
| "JWT token validated with secret from env" | "Authentication works" |

### Task Validation Checklist

Before outputting a solution:
- [ ] ACE search performed for each issue
- [ ] All modification_points verified against the codebase
- [ ] Tasks have 2+ implementation steps
- [ ] Tasks have 1+ quantified acceptance criteria
- [ ] Dependencies form a valid DAG (no cycles)
- [ ] Estimated time is reasonable

---

## Key Reminders

**ALWAYS**:
1. Use ACE semantic search (`mcp__ace-tool__search_context`) as the PRIMARY exploration tool
2. Read the schema first before generating solution output
3. Include the `depends_on` field (even if empty `[]`)
4. Quantify acceptance criteria with specific, testable conditions
5. Validate the DAG before output (no circular dependencies)
6. Include file:line references in modification_points where possible
7. Detect and report cross-issue file conflicts in batch mode
8. Include exploration_context with patterns and relevant_files
9. **Generate ALL 5 lifecycle phases for each task**:
   - `implementation`: 2-7 concrete steps
   - `test`: unit tests, commands, coverage target
   - `regression`: regression check commands
   - `acceptance`: criteria + verification steps
   - `commit`: type, scope, message template
10. Infer test commands from the project's test framework
11. Generate commit messages following Conventional Commits

**NEVER**:
1. Execute implementation (return the plan only)
2. Use vague acceptance criteria ("works correctly", "good performance")
3. Create circular dependencies in the task graph
4. Skip task validation before output
5. Omit required fields from the solution schema
6. Assume a file exists without verification
7. Generate more than 10 tasks per issue
8. Skip ACE search (unless the fallback is triggered)
9. **Omit any of the 5 lifecycle phases** (implementation, test, regression, acceptance, commit)
10. Skip verification steps in acceptance criteria

.claude/agents/issue-queue-agent.md (new file, 702 lines)

---
name: issue-queue-agent
description: |
  Task ordering agent for issue queue formation with dependency analysis and conflict resolution.
  Orchestrates 4-phase workflow: Dependency Analysis → Conflict Detection → Semantic Ordering → Group Assignment

  Core capabilities:
  - ACE semantic search for relationship discovery
  - Cross-issue dependency DAG construction
  - File modification conflict detection
  - Conflict resolution with execution ordering
  - Semantic priority calculation (0.0-1.0)
  - Parallel/Sequential group assignment
color: orange
---

You are a specialized queue formation agent that analyzes tasks from bound solutions, resolves conflicts, and produces an ordered execution queue. You focus on optimal task ordering across multiple issues.

## Input Context

```javascript
{
  // Required
  tasks: [
    {
      issue_id: string,    // Issue ID (e.g., "GH-123")
      solution_id: string, // Solution ID (e.g., "SOL-001")
      task: {
        id: string,        // Task ID (e.g., "T1")
        title: string,
        scope: string,
        action: string,    // Create | Update | Implement | Refactor | Test | Fix | Delete | Configure
        modification_points: [
          { file: string, target: string, change: string }
        ],
        depends_on: string[] // Task IDs within the same issue
      },
      exploration_context: object
    }
  ],

  // Optional
  project_root: string,         // Project root for ACE search
  existing_conflicts: object[], // Pre-identified conflicts
  rebuild: boolean              // Clear and regenerate queue
}
```

## 4-Phase Execution Workflow

```
Phase 1: Dependency Analysis (20%)
  ↓ Parse depends_on, build DAG, detect cycles
Phase 2: Conflict Detection + ACE Enhancement (30%)
  ↓ Identify file conflicts, ACE semantic relationship discovery
Phase 3: Conflict Resolution (25%)
  ↓ Determine execution order for conflicting tasks
Phase 4: Semantic Ordering & Grouping (25%)
  ↓ Calculate priority, topological sort, assign groups
```

---

## Phase 1: Dependency Analysis

### Build Dependency Graph

```javascript
function buildDependencyGraph(tasks) {
  const taskGraph = new Map()
  const fileModifications = new Map() // file -> [taskKeys]

  for (const item of tasks) {
    const taskKey = `${item.issue_id}:${item.task.id}`
    taskGraph.set(taskKey, {
      ...item,
      key: taskKey,
      inDegree: 0,
      outEdges: []
    })

    // Track file modifications for conflict detection
    for (const mp of item.task.modification_points || []) {
      if (!fileModifications.has(mp.file)) {
        fileModifications.set(mp.file, [])
      }
      fileModifications.get(mp.file).push(taskKey)
    }
  }

  // Add explicit dependency edges (within the same issue)
  for (const [key, node] of taskGraph) {
    for (const dep of node.task.depends_on || []) {
      const depKey = `${node.issue_id}:${dep}`
      if (taskGraph.has(depKey)) {
        taskGraph.get(depKey).outEdges.push(key)
        node.inDegree++
      }
    }
  }

  return { taskGraph, fileModifications }
}
```

### Cycle Detection

```javascript
function detectCycles(taskGraph) {
  const visited = new Set()
  const stack = new Set()
  const cycles = []

  function dfs(key, path = []) {
    if (stack.has(key)) {
      // Found a cycle: extract the cycle path
      const cycleStart = path.indexOf(key)
      cycles.push(path.slice(cycleStart).concat(key))
      return true
    }
    if (visited.has(key)) return false

    visited.add(key)
    stack.add(key)
    path.push(key)

    for (const next of taskGraph.get(key)?.outEdges || []) {
      dfs(next, [...path])
    }

    stack.delete(key)
    return false
  }

  for (const key of taskGraph.keys()) {
    if (!visited.has(key)) {
      dfs(key)
    }
  }

  return {
    hasCycle: cycles.length > 0,
    cycles
  }
}
```
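
A quick usage sketch on a minimal two-task cycle (the keys are illustrative; nodes only need `outEdges` here):

```javascript
const g = new Map([
  ['GH-1:T1', { outEdges: ['GH-1:T2'] }],
  ['GH-1:T2', { outEdges: ['GH-1:T1'] }]
])
detectCycles(g)
// → { hasCycle: true, cycles: [['GH-1:T1', 'GH-1:T2', 'GH-1:T1']] }
```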

---

## Phase 2: Conflict Detection

### Identify File Conflicts

```javascript
function detectFileConflicts(fileModifications, taskGraph) {
  const conflicts = []

  for (const [file, taskKeys] of fileModifications) {
    if (taskKeys.length > 1) {
      // Multiple tasks modify the same file
      const taskDetails = taskKeys.map(key => {
        const node = taskGraph.get(key)
        return {
          key,
          issue_id: node.issue_id,
          task_id: node.task.id,
          title: node.task.title,
          action: node.task.action,
          scope: node.task.scope
        }
      })

      conflicts.push({
        type: 'file_conflict',
        file,
        tasks: taskKeys,
        task_details: taskDetails,
        resolution: null,
        resolved: false
      })
    }
  }

  return conflicts
}
```

### Conflict Classification

```javascript
function classifyConflict(conflict, taskGraph) {
  const tasks = conflict.tasks.map(key => taskGraph.get(key))

  // Check if all tasks are from the same issue
  const isSameIssue = new Set(tasks.map(t => t.issue_id)).size === 1

  // Check action types
  const actions = tasks.map(t => t.task.action)
  const hasCreate = actions.includes('Create')
  const hasDelete = actions.includes('Delete')

  return {
    ...conflict,
    same_issue: isSameIssue,
    has_create: hasCreate,
    has_delete: hasDelete,
    severity: hasDelete ? 'high' : hasCreate ? 'medium' : 'low'
  }
}
```

---

## Phase 3: Conflict Resolution

### Resolution Rules

| Priority | Rule | Example |
|----------|------|---------|
| 1 | Create before Update/Implement | T1:Create → T2:Update |
| 2 | Foundation before integration | config/ → src/ |
| 3 | Types before implementation | types/ → components/ |
| 4 | Core before tests | src/ → __tests__/ |
| 5 | Same-issue order preserved | T1 → T2 → T3 |

### Apply Resolution Rules

```javascript
function resolveConflict(conflict, taskGraph) {
  const tasks = conflict.tasks.map(key => ({
    key,
    node: taskGraph.get(key)
  }))

  // Sort by resolution rules
  tasks.sort((a, b) => {
    const nodeA = a.node
    const nodeB = b.node

    // Rule 1: Create before others
    if (nodeA.task.action === 'Create' && nodeB.task.action !== 'Create') return -1
    if (nodeB.task.action === 'Create' && nodeA.task.action !== 'Create') return 1

    // Rule 2: Delete last
    if (nodeA.task.action === 'Delete' && nodeB.task.action !== 'Delete') return 1
    if (nodeB.task.action === 'Delete' && nodeA.task.action !== 'Delete') return -1

    // Rule 3: Foundation scopes first
    const isFoundationA = isFoundationScope(nodeA.task.scope)
    const isFoundationB = isFoundationScope(nodeB.task.scope)
    if (isFoundationA && !isFoundationB) return -1
    if (isFoundationB && !isFoundationA) return 1

    // Rule 4: Config/types before implementation
    const isTypesA = nodeA.task.scope?.includes('types')
    const isTypesB = nodeB.task.scope?.includes('types')
    if (isTypesA && !isTypesB) return -1
    if (isTypesB && !isTypesA) return 1

    // Rule 5: Preserve task order within the same issue
    if (nodeA.issue_id === nodeB.issue_id) {
      return parseInt(nodeA.task.id.replace('T', '')) - parseInt(nodeB.task.id.replace('T', ''))
    }

    return 0
  })

  const order = tasks.map(t => t.key)
  const rationale = generateRationale(tasks)

  return {
    ...conflict,
    resolution: 'sequential',
    resolution_order: order,
    rationale,
    resolved: true
  }
}

function isFoundationScope(scope) {
  if (!scope) return false
  const foundations = ['config', 'types', 'utils', 'lib', 'shared', 'common']
  return foundations.some(f => scope.toLowerCase().includes(f))
}

function generateRationale(sortedTasks) {
  const reasons = []
  for (let i = 0; i < sortedTasks.length - 1; i++) {
    const curr = sortedTasks[i].node.task
    const next = sortedTasks[i + 1].node.task
    if (curr.action === 'Create') {
      reasons.push(`${curr.id} creates file before ${next.id}`)
    } else if (isFoundationScope(curr.scope)) {
      reasons.push(`${curr.id} (foundation) before ${next.id}`)
    }
  }
  return reasons.join('; ') || 'Default ordering applied'
}
```

### Apply Resolution to Graph

```javascript
function applyResolutionToGraph(conflict, taskGraph) {
  const order = conflict.resolution_order

  // Add dependency edges for sequential execution
  for (let i = 1; i < order.length; i++) {
    const prevKey = order[i - 1]
    const currKey = order[i]

    if (taskGraph.has(prevKey) && taskGraph.has(currKey)) {
      const prevNode = taskGraph.get(prevKey)
      const currNode = taskGraph.get(currKey)

      // Avoid duplicate edges
      if (!prevNode.outEdges.includes(currKey)) {
        prevNode.outEdges.push(currKey)
        currNode.inDegree++
      }
    }
  }
}
```

---

## Phase 4: Semantic Ordering & Grouping

### Semantic Priority Calculation

```javascript
function calculateSemanticPriority(node) {
  let priority = 0.5 // Base priority

  // Action-based priority boost
  const actionBoost = {
    'Create': 0.2,
    'Configure': 0.15,
    'Implement': 0.1,
    'Update': 0,
    'Refactor': -0.05,
    'Test': -0.1,
    'Fix': 0.05,
    'Delete': -0.15
  }
  priority += actionBoost[node.task.action] || 0

  // Scope-based boost
  if (isFoundationScope(node.task.scope)) {
    priority += 0.1
  }
  if (node.task.scope?.includes('types')) {
    priority += 0.05
  }

  // Clamp to [0, 1]
  return Math.max(0, Math.min(1, priority))
}
```
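
Worked example: a `Create` task scoped to `src/types/` scores 0.5 + 0.2 (Create) + 0.1 (foundation, since "types" is in the foundation list) + 0.05 (types scope) = 0.85, while a `Test` task under `__tests__/` scores 0.5 - 0.1 = 0.4.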

### Topological Sort with Priority

```javascript
function topologicalSortWithPriority(taskGraph) {
  const result = []
  const queue = []

  // Initialize with zero in-degree tasks
  for (const [key, node] of taskGraph) {
    if (node.inDegree === 0) {
      queue.push(key)
    }
  }

  let executionOrder = 1
  while (queue.length > 0) {
    // Sort the queue by semantic priority (descending)
    queue.sort((a, b) => {
      const nodeA = taskGraph.get(a)
      const nodeB = taskGraph.get(b)

      // 1. Action priority
      const actionPriority = {
        'Create': 5, 'Configure': 4, 'Implement': 3,
        'Update': 2, 'Fix': 2, 'Refactor': 1, 'Test': 0, 'Delete': -1
      }
      const aPri = actionPriority[nodeA.task.action] ?? 2
      const bPri = actionPriority[nodeB.task.action] ?? 2
      if (aPri !== bPri) return bPri - aPri

      // 2. Foundation scope first
      const aFound = isFoundationScope(nodeA.task.scope)
      const bFound = isFoundationScope(nodeB.task.scope)
      if (aFound !== bFound) return aFound ? -1 : 1

      // 3. Types before implementation
      const aTypes = nodeA.task.scope?.includes('types')
      const bTypes = nodeB.task.scope?.includes('types')
      if (aTypes !== bTypes) return aTypes ? -1 : 1

      return 0
    })

    const current = queue.shift()
    const node = taskGraph.get(current)
    node.execution_order = executionOrder++
    node.semantic_priority = calculateSemanticPriority(node)
    result.push(current)

    // Process outgoing edges
    for (const next of node.outEdges) {
      const nextNode = taskGraph.get(next)
      nextNode.inDegree--
      if (nextNode.inDegree === 0) {
        queue.push(next)
      }
    }
  }

  // Check for remaining nodes (cycle indication)
  if (result.length !== taskGraph.size) {
    const remaining = [...taskGraph.keys()].filter(k => !result.includes(k))
    return { success: false, error: `Unprocessed tasks: ${remaining.join(', ')}`, result }
  }

  return { success: true, result }
}
```

### Execution Group Assignment

```javascript
function assignExecutionGroups(orderedTasks, taskGraph, conflicts) {
  const groups = []
  let currentGroup = { type: 'P', number: 1, tasks: [] }

  for (let i = 0; i < orderedTasks.length; i++) {
    const key = orderedTasks[i]
    const node = taskGraph.get(key)

    // Determine whether this task can run in parallel with the current group
    const canParallel = canRunParallel(key, currentGroup.tasks, taskGraph, conflicts)

    if (!canParallel && currentGroup.tasks.length > 0) {
      // Save the current group and start a new sequential group
      groups.push({ ...currentGroup })
      currentGroup = { type: 'S', number: groups.length + 1, tasks: [] }
    }

    currentGroup.tasks.push(key)
    node.execution_group = `${currentGroup.type}${currentGroup.number}`
  }

  // Save the last group
  if (currentGroup.tasks.length > 0) {
    groups.push(currentGroup)
  }

  return groups
}

function canRunParallel(taskKey, groupTasks, taskGraph, conflicts) {
  if (groupTasks.length === 0) return true

  const node = taskGraph.get(taskKey)

  // Check 1: No dependencies on group tasks
  for (const groupTask of groupTasks) {
    if (node.task.depends_on?.includes(groupTask.split(':')[1])) {
      return false
    }
  }

  // Check 2: No file conflicts with group tasks
  for (const conflict of conflicts) {
    if (conflict.tasks.includes(taskKey)) {
      for (const groupTask of groupTasks) {
        if (conflict.tasks.includes(groupTask)) {
          return false
        }
      }
    }
  }

  // Check 3: Only tasks from different issues can run in parallel
  const nodeIssue = node.issue_id
  const groupIssues = new Set(groupTasks.map(t => taskGraph.get(t).issue_id))

  return !groupIssues.has(nodeIssue)
}
```
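
For example, if `GH-1:T1` and `GH-2:T1` touch disjoint files, both land in `P1`; a following `GH-1:T2` fails check 3 against its sibling and opens sequential group `S2`.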

---

## Output Generation

### Queue Item Format

```javascript
function generateQueueItems(orderedTasks, taskGraph, conflicts) {
  const queueItems = []
  let queueIdCounter = 1

  for (const key of orderedTasks) {
    const node = taskGraph.get(key)

    queueItems.push({
      queue_id: `Q-${String(queueIdCounter++).padStart(3, '0')}`,
      issue_id: node.issue_id,
      solution_id: node.solution_id,
      task_id: node.task.id,
      status: 'pending',
      execution_order: node.execution_order,
      execution_group: node.execution_group,
      depends_on: mapDependenciesToQueueIds(node, queueItems),
      semantic_priority: node.semantic_priority,
      queued_at: new Date().toISOString()
    })
  }

  return queueItems
}

function mapDependenciesToQueueIds(node, queueItems) {
  return (node.task.depends_on || []).map(dep => {
    // Dependencies are topologically earlier, so they already exist in queueItems
    const queueItem = queueItems.find(q =>
      q.issue_id === node.issue_id && q.task_id === dep
    )
    return queueItem?.queue_id || dep
  })
}
```
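
A sample queue item as produced above (values illustrative):

```json
{
  "queue_id": "Q-001",
  "issue_id": "GH-123",
  "solution_id": "SOL-001",
  "task_id": "T1",
  "status": "pending",
  "execution_order": 1,
  "execution_group": "P1",
  "depends_on": [],
  "semantic_priority": 0.7,
  "queued_at": "2026-02-07T00:00:00.000Z"
}
```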

### Final Output

```javascript
function generateOutput(queueItems, conflicts, groups) {
  return {
    queue: queueItems,
    conflicts: conflicts.map(c => ({
      type: c.type,
      file: c.file,
      tasks: c.tasks,
      resolution: c.resolution,
      resolution_order: c.resolution_order,
      rationale: c.rationale,
      resolved: c.resolved
    })),
    execution_groups: groups.map(g => ({
      id: `${g.type}${g.number}`,
      type: g.type === 'P' ? 'parallel' : 'sequential',
      task_count: g.tasks.length,
      tasks: g.tasks
    })),
    _metadata: {
      version: '1.0',
      total_tasks: queueItems.length,
      total_conflicts: conflicts.length,
      resolved_conflicts: conflicts.filter(c => c.resolved).length,
      parallel_groups: groups.filter(g => g.type === 'P').length,
      sequential_groups: groups.filter(g => g.type === 'S').length,
      timestamp: new Date().toISOString(),
      source: 'issue-queue-agent'
    }
  }
}
```

---

## Error Handling

```javascript
async function executeWithValidation(tasks) {
  // Phase 1: Build graph
  const { taskGraph, fileModifications } = buildDependencyGraph(tasks)

  // Check for cycles
  const cycleResult = detectCycles(taskGraph)
  if (cycleResult.hasCycle) {
    return {
      success: false,
      error: 'Circular dependency detected',
      cycles: cycleResult.cycles,
      suggestion: 'Remove circular dependencies or reorder tasks manually'
    }
  }

  // Phase 2: Detect conflicts
  const conflicts = detectFileConflicts(fileModifications, taskGraph)
    .map(c => classifyConflict(c, taskGraph))

  // Phase 3: Resolve conflicts
  for (const conflict of conflicts) {
    const resolved = resolveConflict(conflict, taskGraph)
    Object.assign(conflict, resolved)
    applyResolutionToGraph(conflict, taskGraph)
  }

  // Re-check for cycles after resolution
  const postResolutionCycles = detectCycles(taskGraph)
  if (postResolutionCycles.hasCycle) {
    return {
      success: false,
      error: 'Conflict resolution created circular dependency',
      cycles: postResolutionCycles.cycles,
      suggestion: 'Manual conflict resolution required'
    }
  }

  // Phase 4: Sort and group
  const sortResult = topologicalSortWithPriority(taskGraph)
  if (!sortResult.success) {
    return {
      success: false,
      error: sortResult.error,
      partial_result: sortResult.result
    }
  }

  const groups = assignExecutionGroups(sortResult.result, taskGraph, conflicts)
  const queueItems = generateQueueItems(sortResult.result, taskGraph, conflicts)

  return {
    success: true,
    output: generateOutput(queueItems, conflicts, groups)
  }
}
```

| Scenario | Action |
|----------|--------|
| Circular dependency | Report cycles, abort with suggestion |
| Conflict resolution creates a cycle | Flag for manual resolution |
| Missing task reference in depends_on | Skip and warn |
| Empty task list | Return empty queue |

---

## Quality Standards

### Ordering Validation

```javascript
function validateOrdering(queueItems, taskGraph) {
  const errors = []

  for (const item of queueItems) {
    // Check that dependencies come first
    for (const depQueueId of item.depends_on) {
      const depItem = queueItems.find(q => q.queue_id === depQueueId)
      if (depItem && depItem.execution_order >= item.execution_order) {
        errors.push(`${item.queue_id} ordered before dependency ${depQueueId}`)
      }
    }
  }

  return { valid: errors.length === 0, errors }
}
```

### Semantic Priority Rules

| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Fix action | +0.05 |
| Foundation scope (config/types/utils) | +0.1 |
| Types scope | +0.05 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |

---

## Key Reminders

**ALWAYS**:
1. Build the dependency graph before any ordering
2. Detect cycles before and after conflict resolution
3. Apply resolution rules consistently (Create → Update → Delete)
4. Preserve within-issue task order when there are no conflicts
5. Calculate semantic priority for all tasks
6. Validate ordering before output
7. Include a rationale for conflict resolutions
8. Map depends_on to queue_ids in the output

**NEVER**:
1. Execute tasks (ordering only)
2. Ignore circular dependencies
3. Create arbitrary ordering without rules
4. Skip conflict detection
5. Output an invalid DAG
6. Merge tasks from different issues into the same parallel group if conflicts exist
7. Assume task order without checking depends_on

.claude/commands/issue/execute.md (new file, 453 lines)

---
name: execute
description: Execute queue with codex using endpoint-driven task fetching (single task per codex instance)
argument-hint: "[--parallel <n>] [--executor codex|gemini]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---

# Issue Execute Command (/issue:execute)

## Overview

Execution orchestrator that coordinates codex instances. Each task is executed by an independent codex instance that fetches its task via a CLI endpoint. **Codex does NOT read task files** - it calls `ccw issue next` to get task data dynamically.

**Core design:**
- Single task per codex instance (not loop mode)
- Endpoint-driven: `ccw issue next` → execute → `ccw issue complete`
- No file reading in codex
- Orchestrator manages parallelism

## Storage Structure (Flat JSONL)

```
.workflow/issues/
├── issues.jsonl          # All issues (one per line)
├── queue.json            # Execution queue
└── solutions/
    ├── {issue-id}.jsonl  # Solutions for an issue
    └── ...
```
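
A hypothetical `issues.jsonl` line; only the id/title fields are implied by this document, the rest are illustrative:

```json
{"id":"GH-123","title":"Add JWT authentication middleware","status":"planned","bound_solution":"SOL-001"}
```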

## Usage

```bash
/issue:execute [FLAGS]

# Examples
/issue:execute                   # Execute all ready tasks
/issue:execute --parallel 3      # Execute up to 3 tasks in parallel
/issue:execute --executor codex  # Force codex executor

# Flags
--parallel <n>     Max parallel codex instances (default: 1)
--executor <type>  Force executor: codex|gemini|agent
--dry-run          Show what would execute without running
```

## Execution Process

```
Phase 1: Queue Loading
├─ Load queue.json
├─ Count pending/ready tasks
└─ Initialize TodoWrite tracking

Phase 2: Ready Task Detection
├─ Find tasks with satisfied dependencies
├─ Group by execution_group (parallel batches)
└─ Determine execution order

Phase 3: Codex Coordination
├─ For each ready task:
│  ├─ Launch independent codex instance
│  ├─ Codex calls: ccw issue next
│  ├─ Codex receives task data (NOT a file)
│  ├─ Codex executes the task
│  ├─ Codex calls: ccw issue complete <queue-id>
│  └─ Update TodoWrite
└─ Parallel execution based on --parallel flag

Phase 4: Completion
├─ Generate execution summary
├─ Update issue statuses in issues.jsonl
└─ Display results
```

## Implementation

### Phase 1: Queue Loading

```javascript
// Load queue
const queuePath = '.workflow/issues/queue.json';
if (!Bash(`test -f "${queuePath}" && echo exists`).includes('exists')) {
  console.log('No queue found. Run /issue:queue first.');
  return;
}

const queue = JSON.parse(Read(queuePath));

// Count by status
const pending = queue.queue.filter(q => q.status === 'pending');
const executing = queue.queue.filter(q => q.status === 'executing');
const completed = queue.queue.filter(q => q.status === 'completed');

console.log(`
## Execution Queue Status

- Pending: ${pending.length}
- Executing: ${executing.length}
- Completed: ${completed.length}
- Total: ${queue.queue.length}
`);

if (pending.length === 0 && executing.length === 0) {
  console.log('All tasks completed!');
  return;
}
```

### Phase 2: Ready Task Detection

```javascript
// Find ready tasks (dependencies satisfied)
function getReadyTasks() {
  const completedIds = new Set(
    queue.queue.filter(q => q.status === 'completed').map(q => q.queue_id)
  );

  return queue.queue.filter(item => {
    if (item.status !== 'pending') return false;
    return item.depends_on.every(depId => completedIds.has(depId));
  });
}

const readyTasks = getReadyTasks();

if (readyTasks.length === 0) {
  if (executing.length > 0) {
    console.log('Tasks are currently executing. Wait for completion.');
  } else {
    console.log('No ready tasks. Check for blocked dependencies.');
  }
  return;
}

console.log(`Found ${readyTasks.length} ready tasks`);

// Sort by execution order
readyTasks.sort((a, b) => a.execution_order - b.execution_order);

// Initialize TodoWrite
TodoWrite({
  todos: readyTasks.slice(0, parallelLimit).map(t => ({
    content: `[${t.queue_id}] ${t.issue_id}:${t.task_id}`,
    status: 'pending',
    activeForm: `Executing ${t.queue_id}`
  }))
});
```
|
||||
|
||||
### Phase 3: Codex Coordination (Single Task Mode - Full Lifecycle)

```javascript
// Execute tasks - single codex instance per task with full lifecycle
async function executeTask(queueItem) {
  const codexPrompt = `
## Single Task Execution - CLOSED-LOOP LIFECYCLE

You are executing ONE task from the issue queue. Each task has 5 phases that MUST ALL complete successfully.

### Step 1: Fetch Task
Run this command to get your task:
\`\`\`bash
ccw issue next
\`\`\`

This returns JSON with full lifecycle definition:
- task.implementation: Implementation steps
- task.test: Test requirements and commands
- task.regression: Regression check commands
- task.acceptance: Acceptance criteria and verification
- task.commit: Commit specification

### Step 2: Execute Full Lifecycle

**Phase 1: IMPLEMENT**
1. Follow task.implementation steps in order
2. Modify files specified in modification_points
3. Use context.relevant_files for reference
4. Use context.patterns for code style

**Phase 2: TEST**
1. Run test commands from task.test.commands
2. Ensure all unit tests pass (task.test.unit)
3. Run integration tests if specified (task.test.integration)
4. Verify coverage meets task.test.coverage_target if specified
5. If tests fail → fix code and re-run, do NOT proceed until tests pass

**Phase 3: REGRESSION**
1. Run all commands in task.regression
2. Ensure no existing tests are broken
3. If regression fails → fix and re-run

**Phase 4: ACCEPTANCE**
1. Verify each criterion in task.acceptance.criteria
2. Execute verification steps in task.acceptance.verification
3. Complete any manual_checks if specified
4. All criteria MUST pass before proceeding

**Phase 5: COMMIT**
1. Stage all modified files
2. Use task.commit.message_template as commit message
3. Commit with: git commit -m "$(cat <<'EOF'\n<message>\nEOF\n)"
4. If commit_strategy is 'per-task', commit now
5. If commit_strategy is 'atomic' or 'squash', stage but don't commit

### Step 3: Report Completion
When ALL phases complete successfully:
\`\`\`bash
ccw issue complete <queue_id> --result '{
  "files_modified": ["path1", "path2"],
  "tests_passed": true,
  "regression_passed": true,
  "acceptance_passed": true,
  "committed": true,
  "commit_hash": "<hash>",
  "summary": "What was done"
}'
\`\`\`

If any phase fails and cannot be fixed:
\`\`\`bash
ccw issue fail <queue_id> --reason "Phase X failed: <details>"
\`\`\`

### Rules
- NEVER skip any lifecycle phase
- Tests MUST pass before proceeding to acceptance
- Regression MUST pass before commit
- ALL acceptance criteria MUST be verified
- Report accurate lifecycle status in result

### Start Now
Begin by running: ccw issue next
`;

  // Execute codex
  const executor = queueItem.assigned_executor || flags.executor || 'codex';

  if (executor === 'codex') {
    Bash(
      `ccw cli -p "${escapePrompt(codexPrompt)}" --tool codex --mode write --id exec-${queueItem.queue_id}`,
      timeout=3600000 // 1 hour timeout
    );
  } else if (executor === 'gemini') {
    Bash(
      `ccw cli -p "${escapePrompt(codexPrompt)}" --tool gemini --mode write --id exec-${queueItem.queue_id}`,
      timeout=1800000 // 30 min timeout
    );
  } else {
    // Agent execution
    Task(
      subagent_type="code-developer",
      run_in_background=false,
      description=`Execute ${queueItem.queue_id}`,
      prompt=codexPrompt
    );
  }
}

// Execute with parallelism
const parallelLimit = flags.parallel || 1;

for (let i = 0; i < readyTasks.length; i += parallelLimit) {
  const batch = readyTasks.slice(i, i + parallelLimit);

  console.log(`\n### Executing Batch ${Math.floor(i / parallelLimit) + 1}`);
  console.log(batch.map(t => `- ${t.queue_id}: ${t.issue_id}:${t.task_id}`).join('\n'));

  if (parallelLimit === 1) {
    // Sequential execution
    for (const task of batch) {
      updateTodo(task.queue_id, 'in_progress');
      await executeTask(task);
      updateTodo(task.queue_id, 'completed');
    }
  } else {
    // Parallel execution - launch all at once
    const executions = batch.map(task => {
      updateTodo(task.queue_id, 'in_progress');
      return executeTask(task);
    });
    await Promise.all(executions);
    batch.forEach(task => updateTodo(task.queue_id, 'completed'));
  }

  // Refresh ready tasks after batch
  const newReady = getReadyTasks();
  if (newReady.length > 0) {
    console.log(`${newReady.length} more tasks now ready`);
  }
}
```

### Codex Task Fetch Response

When codex calls `ccw issue next`, it receives:

```json
{
  "queue_id": "Q-001",
  "issue_id": "GH-123",
  "solution_id": "SOL-001",
  "task": {
    "id": "T1",
    "title": "Create auth middleware",
    "scope": "src/middleware/",
    "action": "Create",
    "description": "Create JWT validation middleware",
    "modification_points": [
      { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
    ],
    "implementation": [
      "Create auth.ts file in src/middleware/",
      "Implement JWT token validation using jsonwebtoken",
      "Add error handling for invalid/expired tokens",
      "Export middleware function"
    ],
    "acceptance": [
      "Middleware validates JWT tokens successfully",
      "Returns 401 for invalid or missing tokens",
      "Passes token payload to request context"
    ]
  },
  "context": {
    "relevant_files": ["src/config/auth.ts", "src/types/auth.d.ts"],
    "patterns": "Follow existing middleware pattern in src/middleware/logger.ts"
  },
  "execution_hints": {
    "executor": "codex",
    "estimated_minutes": 30
  }
}
```

### Phase 4: Completion Summary

```javascript
// Reload queue for final status
const finalQueue = JSON.parse(Read(queuePath));

const summary = {
  completed: finalQueue.queue.filter(q => q.status === 'completed').length,
  failed: finalQueue.queue.filter(q => q.status === 'failed').length,
  pending: finalQueue.queue.filter(q => q.status === 'pending').length,
  total: finalQueue.queue.length
};

console.log(`
## Execution Complete

**Completed**: ${summary.completed}/${summary.total}
**Failed**: ${summary.failed}
**Pending**: ${summary.pending}

### Task Results
${finalQueue.queue.map(q => {
  const icon = q.status === 'completed' ? '✓' :
               q.status === 'failed' ? '✗' :
               q.status === 'executing' ? '⟳' : '○';
  return `${icon} ${q.queue_id} [${q.issue_id}:${q.task_id}] - ${q.status}`;
}).join('\n')}
`);

// Update issue statuses in issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}"`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

const issueIds = [...new Set(finalQueue.queue.map(q => q.issue_id))];
for (const issueId of issueIds) {
  const issueTasks = finalQueue.queue.filter(q => q.issue_id === issueId);

  if (issueTasks.every(q => q.status === 'completed')) {
    console.log(`\n✓ Issue ${issueId} fully completed!`);

    // Update issue status
    const issueIndex = allIssues.findIndex(i => i.id === issueId);
    if (issueIndex !== -1) {
      allIssues[issueIndex].status = 'completed';
      allIssues[issueIndex].completed_at = new Date().toISOString();
      allIssues[issueIndex].updated_at = new Date().toISOString();
    }
  }
}

// Write updated issues.jsonl
Write(issuesPath, allIssues.map(i => JSON.stringify(i)).join('\n'));

if (summary.pending > 0) {
  console.log(`
### Continue Execution
Run \`/issue:execute\` again to execute remaining tasks.
`);
}
```

## Dry Run Mode

```javascript
if (flags.dryRun) {
  console.log(`
## Dry Run - Would Execute

${readyTasks.map((t, i) => `
${i + 1}. ${t.queue_id}
   Issue: ${t.issue_id}
   Task: ${t.task_id}
   Executor: ${t.assigned_executor}
   Group: ${t.execution_group}
`).join('')}

No changes made. Remove --dry-run to execute.
`);
  return;
}
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Queue not found | Display message, suggest /issue:queue |
| No ready tasks | Check dependencies, show blocked tasks |
| Codex timeout | Mark as failed, allow retry |
| ccw issue next empty | All tasks done or blocked |
| Task execution failure | Marked via ccw issue fail |

## Endpoint Contract

### `ccw issue next`
- Returns next ready task as JSON
- Marks task as 'executing'
- Returns `{ status: 'empty' }` when no tasks

### `ccw issue complete <queue-id>`
- Marks task as 'completed'
- Updates queue.json
- Checks if issue is fully complete

### `ccw issue fail <queue-id>`
- Marks task as 'failed'
- Records failure reason
- Allows retry via /issue:execute
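
To make the contract concrete, here is a minimal executor loop sketched in bash. It is illustrative only: it assumes `jq` is available and that `ccw issue next` prints the JSON document shown above; `run_lifecycle` is a hypothetical stand-in for the five lifecycle phases.

```bash
# Minimal sketch of an executor driving the endpoint contract (assumes jq).
# run_lifecycle is hypothetical - it stands in for the implement/test/
# regression/acceptance/commit phases.
while true; do
  task=$(ccw issue next)
  # The endpoint reports { status: 'empty' } when no tasks remain
  if [ "$(echo "$task" | jq -r '.status // empty')" = "empty" ]; then
    break
  fi
  queue_id=$(echo "$task" | jq -r '.queue_id')
  if run_lifecycle "$task"; then
    ccw issue complete "$queue_id" --result '{"summary": "done"}'
  else
    ccw issue fail "$queue_id" --reason "lifecycle failed"
  fi
done
```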

## Related Commands

- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue list` - View queue status
- `ccw issue retry` - Retry failed tasks

865
.claude/commands/issue/manage.md
Normal file
@@ -0,0 +1,865 @@
---
name: manage
description: Interactive issue management (CRUD) via ccw cli endpoints with menu-driven interface
argument-hint: "[issue-id] [--action list|view|edit|delete|bulk]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), AskUserQuestion(*), Task(*)
---

# Issue Manage Command (/issue:manage)

## Overview

Interactive menu-driven interface for issue management using `ccw issue` CLI endpoints:
- **List**: Browse and filter issues
- **View**: Detailed issue inspection
- **Edit**: Modify issue fields
- **Delete**: Remove issues
- **Bulk**: Batch operations on multiple issues

## CLI Endpoints Reference

```bash
# Core endpoints (ccw issue)
ccw issue list                      # List all issues
ccw issue list <id> --json          # Get issue details
ccw issue status <id>               # Detailed status
ccw issue init <id> --title "..."   # Create issue
ccw issue task <id> --title "..."   # Add task

# Queue management
ccw issue queue                     # List queue
ccw issue queue add <id>            # Add to queue
ccw issue next                      # Get next task
ccw issue done <queue-id>           # Complete task
```

## Usage

```bash
# Interactive mode (menu-driven)
/issue:manage

# Direct to specific issue
/issue:manage GH-123

# Direct action
/issue:manage --action list
/issue:manage GH-123 --action edit
```

## Implementation

### Phase 1: Entry Point

```javascript
const issueId = parseIssueId(userInput);
const action = flags.action;

// Show main menu if no action specified
if (!action) {
  await showMainMenu(issueId);
} else {
  await executeAction(action, issueId);
}
```

### Phase 2: Main Menu

```javascript
async function showMainMenu(preselectedIssue = null) {
  // Fetch current issues summary
  const issuesResult = Bash('ccw issue list --json 2>/dev/null || echo "[]"');
  const issues = JSON.parse(issuesResult) || [];

  const queueResult = Bash('ccw issue status --json 2>/dev/null');
  const queueStatus = JSON.parse(queueResult || '{}');

  console.log(`
## Issue Management Dashboard

**Total Issues**: ${issues.length}
**Queue Status**: ${queueStatus.queue?.total_tasks || 0} tasks (${queueStatus.queue?.pending_count || 0} pending)

### Quick Stats
- Registered: ${issues.filter(i => i.status === 'registered').length}
- Planned: ${issues.filter(i => i.status === 'planned').length}
- Executing: ${issues.filter(i => i.status === 'executing').length}
- Completed: ${issues.filter(i => i.status === 'completed').length}
`);

  const answer = AskUserQuestion({
    questions: [{
      question: 'What would you like to do?',
      header: 'Action',
      multiSelect: false,
      options: [
        { label: 'List Issues', description: 'Browse all issues with filters' },
        { label: 'View Issue', description: 'Detailed view of specific issue' },
        { label: 'Create Issue', description: 'Add new issue from text or GitHub' },
        { label: 'Edit Issue', description: 'Modify issue fields' },
        { label: 'Delete Issue', description: 'Remove issue(s)' },
        { label: 'Bulk Operations', description: 'Batch actions on multiple issues' }
      ]
    }]
  });

  const selected = parseAnswer(answer);

  switch (selected) {
    case 'List Issues':
      await listIssuesInteractive();
      break;
    case 'View Issue':
      await viewIssueInteractive(preselectedIssue);
      break;
    case 'Create Issue':
      await createIssueInteractive();
      break;
    case 'Edit Issue':
      await editIssueInteractive(preselectedIssue);
      break;
    case 'Delete Issue':
      await deleteIssueInteractive(preselectedIssue);
      break;
    case 'Bulk Operations':
      await bulkOperationsInteractive();
      break;
  }
}
```

### Phase 3: List Issues

```javascript
async function listIssuesInteractive() {
  // Ask for filter
  const filterAnswer = AskUserQuestion({
    questions: [{
      question: 'Filter issues by status?',
      header: 'Filter',
      multiSelect: true,
      options: [
        { label: 'All', description: 'Show all issues' },
        { label: 'Registered', description: 'New, unplanned issues' },
        { label: 'Planned', description: 'Issues with bound solutions' },
        { label: 'Queued', description: 'In execution queue' },
        { label: 'Executing', description: 'Currently being worked on' },
        { label: 'Completed', description: 'Finished issues' },
        { label: 'Failed', description: 'Failed issues' }
      ]
    }]
  });

  const filters = parseMultiAnswer(filterAnswer);

  // Fetch and filter issues
  const result = Bash('ccw issue list --json');
  let issues = JSON.parse(result) || [];

  if (!filters.includes('All')) {
    const statusMap = {
      'Registered': 'registered',
      'Planned': 'planned',
      'Queued': 'queued',
      'Executing': 'executing',
      'Completed': 'completed',
      'Failed': 'failed'
    };
    const allowedStatuses = filters.map(f => statusMap[f]).filter(Boolean);
    issues = issues.filter(i => allowedStatuses.includes(i.status));
  }

  if (issues.length === 0) {
    console.log('No issues found matching filters.');
    return showMainMenu();
  }

  // Display issues table
  console.log(`
## Issues (${issues.length})

| ID | Status | Priority | Title |
|----|--------|----------|-------|
${issues.map(i => `| ${i.id} | ${i.status} | P${i.priority} | ${i.title.substring(0, 40)} |`).join('\n')}
`);

  // Ask for action on issue
  const actionAnswer = AskUserQuestion({
    questions: [{
      question: 'Select an issue to view/edit, or return to menu:',
      header: 'Select',
      multiSelect: false,
      options: [
        ...issues.slice(0, 10).map(i => ({
          label: i.id,
          description: i.title.substring(0, 50)
        })),
        { label: 'Back to Menu', description: 'Return to main menu' }
      ]
    }]
  });

  const selected = parseAnswer(actionAnswer);

  if (selected === 'Back to Menu') {
    return showMainMenu();
  }

  // View selected issue
  await viewIssueInteractive(selected);
}
```

### Phase 4: View Issue

```javascript
async function viewIssueInteractive(issueId) {
  if (!issueId) {
    // Ask for issue ID
    const issues = JSON.parse(Bash('ccw issue list --json') || '[]');

    const idAnswer = AskUserQuestion({
      questions: [{
        question: 'Select issue to view:',
        header: 'Issue',
        multiSelect: false,
        options: issues.slice(0, 10).map(i => ({
          label: i.id,
          description: `${i.status} - ${i.title.substring(0, 40)}`
        }))
      }]
    });

    issueId = parseAnswer(idAnswer);
  }

  // Fetch detailed status
  const result = Bash(`ccw issue status ${issueId} --json`);
  const data = JSON.parse(result);

  const issue = data.issue;
  const solutions = data.solutions || [];
  const bound = data.bound;

  console.log(`
## Issue: ${issue.id}

**Title**: ${issue.title}
**Status**: ${issue.status}
**Priority**: P${issue.priority}
**Created**: ${issue.created_at}
**Updated**: ${issue.updated_at}

### Context
${issue.context || 'No context provided'}

### Solutions (${solutions.length})
${solutions.length === 0 ? 'No solutions registered' :
  solutions.map(s => `- ${s.is_bound ? '◉' : '○'} ${s.id}: ${s.tasks?.length || 0} tasks`).join('\n')}

${bound ? `### Bound Solution: ${bound.id}\n**Tasks**: ${bound.tasks?.length || 0}` : ''}
`);

  // Show tasks if bound solution exists
  if (bound?.tasks?.length > 0) {
    console.log(`
### Tasks
| ID | Action | Scope | Title |
|----|--------|-------|-------|
${bound.tasks.map(t => `| ${t.id} | ${t.action} | ${t.scope?.substring(0, 20) || '-'} | ${t.title.substring(0, 30)} |`).join('\n')}
`);
  }

  // Action menu
  const actionAnswer = AskUserQuestion({
    questions: [{
      question: 'What would you like to do?',
      header: 'Action',
      multiSelect: false,
      options: [
        { label: 'Edit Issue', description: 'Modify issue fields' },
        { label: 'Plan Issue', description: 'Generate solution (/issue:plan)' },
        { label: 'Add to Queue', description: 'Queue bound solution tasks' },
        { label: 'View Queue', description: 'See queue status' },
        { label: 'Delete Issue', description: 'Remove this issue' },
        { label: 'Back to Menu', description: 'Return to main menu' }
      ]
    }]
  });

  const action = parseAnswer(actionAnswer);

  switch (action) {
    case 'Edit Issue':
      await editIssueInteractive(issueId);
      break;
    case 'Plan Issue':
      console.log(`Running: /issue:plan ${issueId}`);
      // Invoke plan skill
      break;
    case 'Add to Queue':
      Bash(`ccw issue queue add ${issueId}`);
      console.log(`✓ Added ${issueId} tasks to queue`);
      break;
    case 'View Queue': {
      const queueOutput = Bash('ccw issue queue');
      console.log(queueOutput);
      break;
    }
    case 'Delete Issue':
      await deleteIssueInteractive(issueId);
      break;
    default:
      return showMainMenu();
  }
}
```

### Phase 5: Edit Issue

```javascript
async function editIssueInteractive(issueId) {
  if (!issueId) {
    const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
    const idAnswer = AskUserQuestion({
      questions: [{
        question: 'Select issue to edit:',
        header: 'Issue',
        multiSelect: false,
        options: issues.slice(0, 10).map(i => ({
          label: i.id,
          description: `${i.status} - ${i.title.substring(0, 40)}`
        }))
      }]
    });
    issueId = parseAnswer(idAnswer);
  }

  // Get current issue data
  const result = Bash(`ccw issue list ${issueId} --json`);
  const issueData = JSON.parse(result);
  const issue = issueData.issue || issueData;

  // Ask which field to edit
  const fieldAnswer = AskUserQuestion({
    questions: [{
      question: 'Which field to edit?',
      header: 'Field',
      multiSelect: false,
      options: [
        { label: 'Title', description: `Current: ${issue.title?.substring(0, 40)}` },
        { label: 'Priority', description: `Current: P${issue.priority}` },
        { label: 'Status', description: `Current: ${issue.status}` },
        { label: 'Context', description: 'Edit problem description' },
        { label: 'Labels', description: `Current: ${issue.labels?.join(', ') || 'none'}` },
        { label: 'Back', description: 'Return without changes' }
      ]
    }]
  });

  const field = parseAnswer(fieldAnswer);

  if (field === 'Back') {
    return viewIssueInteractive(issueId);
  }

  let updatePayload = {};

  switch (field) {
    case 'Title':
      const titleAnswer = AskUserQuestion({
        questions: [{
          question: 'Enter new title (or select current to keep):',
          header: 'Title',
          multiSelect: false,
          options: [
            { label: issue.title.substring(0, 50), description: 'Keep current title' }
          ]
        }]
      });
      const newTitle = parseAnswer(titleAnswer);
      if (newTitle && newTitle !== issue.title.substring(0, 50)) {
        updatePayload.title = newTitle;
      }
      break;

    case 'Priority':
      const priorityAnswer = AskUserQuestion({
        questions: [{
          question: 'Select priority:',
          header: 'Priority',
          multiSelect: false,
          options: [
            { label: 'P1 - Critical', description: 'Production blocking' },
            { label: 'P2 - High', description: 'Major functionality' },
            { label: 'P3 - Medium', description: 'Normal priority (default)' },
            { label: 'P4 - Low', description: 'Minor issues' },
            { label: 'P5 - Trivial', description: 'Nice to have' }
          ]
        }]
      });
      const priorityStr = parseAnswer(priorityAnswer);
      updatePayload.priority = parseInt(priorityStr.charAt(1));
      break;

    case 'Status':
      const statusAnswer = AskUserQuestion({
        questions: [{
          question: 'Select status:',
          header: 'Status',
          multiSelect: false,
          options: [
            { label: 'registered', description: 'New issue, not yet planned' },
            { label: 'planning', description: 'Solution being generated' },
            { label: 'planned', description: 'Solution bound, ready for queue' },
            { label: 'queued', description: 'In execution queue' },
            { label: 'executing', description: 'Currently being worked on' },
            { label: 'completed', description: 'All tasks finished' },
            { label: 'failed', description: 'Execution failed' },
            { label: 'paused', description: 'Temporarily on hold' }
          ]
        }]
      });
      updatePayload.status = parseAnswer(statusAnswer);
      break;

    case 'Context':
      console.log(`Current context:\n${issue.context || '(empty)'}\n`);
      const contextAnswer = AskUserQuestion({
        questions: [{
          question: 'Enter new context (problem description):',
          header: 'Context',
          multiSelect: false,
          options: [
            { label: 'Keep current', description: 'No changes' }
          ]
        }]
      });
      const newContext = parseAnswer(contextAnswer);
      if (newContext && newContext !== 'Keep current') {
        updatePayload.context = newContext;
      }
      break;

    case 'Labels':
      const labelsAnswer = AskUserQuestion({
        questions: [{
          question: 'Enter labels (comma-separated):',
          header: 'Labels',
          multiSelect: false,
          options: [
            { label: issue.labels?.join(',') || '', description: 'Keep current labels' }
          ]
        }]
      });
      const labelsStr = parseAnswer(labelsAnswer);
      if (labelsStr) {
        updatePayload.labels = labelsStr.split(',').map(l => l.trim());
      }
      break;
  }

  // Apply update if any
  if (Object.keys(updatePayload).length > 0) {
    // Read, update, write issues.jsonl
    const issuesPath = '.workflow/issues/issues.jsonl';
    const allIssues = Bash(`cat "${issuesPath}"`)
      .split('\n')
      .filter(line => line.trim())
      .map(line => JSON.parse(line));

    const idx = allIssues.findIndex(i => i.id === issueId);
    if (idx !== -1) {
      allIssues[idx] = {
        ...allIssues[idx],
        ...updatePayload,
        updated_at: new Date().toISOString()
      };

      Write(issuesPath, allIssues.map(i => JSON.stringify(i)).join('\n'));
      console.log(`✓ Updated ${issueId}: ${Object.keys(updatePayload).join(', ')}`);
    }
  }

  // Continue editing or return
  const continueAnswer = AskUserQuestion({
    questions: [{
      question: 'Continue editing?',
      header: 'Continue',
      multiSelect: false,
      options: [
        { label: 'Edit Another Field', description: 'Continue editing this issue' },
        { label: 'View Issue', description: 'See updated issue' },
        { label: 'Back to Menu', description: 'Return to main menu' }
      ]
    }]
  });

  const cont = parseAnswer(continueAnswer);
  if (cont === 'Edit Another Field') {
    await editIssueInteractive(issueId);
  } else if (cont === 'View Issue') {
    await viewIssueInteractive(issueId);
  } else {
    return showMainMenu();
  }
}
```

### Phase 6: Delete Issue

```javascript
async function deleteIssueInteractive(issueId) {
  if (!issueId) {
    const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
    const idAnswer = AskUserQuestion({
      questions: [{
        question: 'Select issue to delete:',
        header: 'Delete',
        multiSelect: false,
        options: issues.slice(0, 10).map(i => ({
          label: i.id,
          description: `${i.status} - ${i.title.substring(0, 40)}`
        }))
      }]
    });
    issueId = parseAnswer(idAnswer);
  }

  // Confirm deletion
  const confirmAnswer = AskUserQuestion({
    questions: [{
      question: `Delete issue ${issueId}? This will also remove associated solutions.`,
      header: 'Confirm',
      multiSelect: false,
      options: [
        { label: 'Delete', description: 'Permanently remove issue and solutions' },
        { label: 'Cancel', description: 'Keep issue' }
      ]
    }]
  });

  if (parseAnswer(confirmAnswer) !== 'Delete') {
    console.log('Deletion cancelled.');
    return showMainMenu();
  }

  // Remove from issues.jsonl
  const issuesPath = '.workflow/issues/issues.jsonl';
  const allIssues = Bash(`cat "${issuesPath}"`)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));

  const filtered = allIssues.filter(i => i.id !== issueId);
  Write(issuesPath, filtered.map(i => JSON.stringify(i)).join('\n'));

  // Remove solutions file if exists
  const solPath = `.workflow/issues/solutions/${issueId}.jsonl`;
  Bash(`rm -f "${solPath}" 2>/dev/null || true`);

  // Remove from queue if present (use .includes - Bash output may carry a trailing newline)
  const queuePath = '.workflow/issues/queue.json';
  if (Bash(`test -f "${queuePath}" && echo exists`).includes('exists')) {
    const queue = JSON.parse(Bash(`cat "${queuePath}"`));
    queue.queue = queue.queue.filter(q => q.issue_id !== issueId);
    Write(queuePath, JSON.stringify(queue, null, 2));
  }

  console.log(`✓ Deleted issue ${issueId}`);
  return showMainMenu();
}
```

### Phase 7: Bulk Operations

```javascript
async function bulkOperationsInteractive() {
  const bulkAnswer = AskUserQuestion({
    questions: [{
      question: 'Select bulk operation:',
      header: 'Bulk',
      multiSelect: false,
      options: [
        { label: 'Update Status', description: 'Change status of multiple issues' },
        { label: 'Update Priority', description: 'Change priority of multiple issues' },
        { label: 'Add Labels', description: 'Add labels to multiple issues' },
        { label: 'Delete Multiple', description: 'Remove multiple issues' },
        { label: 'Queue All Planned', description: 'Add all planned issues to queue' },
        { label: 'Retry All Failed', description: 'Reset all failed tasks to pending' },
        { label: 'Back', description: 'Return to main menu' }
      ]
    }]
  });

  const operation = parseAnswer(bulkAnswer);

  if (operation === 'Back') {
    return showMainMenu();
  }

  // Get issues for selection
  const allIssues = JSON.parse(Bash('ccw issue list --json') || '[]');

  if (operation === 'Queue All Planned') {
    const planned = allIssues.filter(i => i.status === 'planned' && i.bound_solution_id);
    for (const issue of planned) {
      Bash(`ccw issue queue add ${issue.id}`);
      console.log(`✓ Queued ${issue.id}`);
    }
    console.log(`\n✓ Queued ${planned.length} issues`);
    return showMainMenu();
  }

  if (operation === 'Retry All Failed') {
    Bash('ccw issue retry');
    console.log('✓ Reset all failed tasks to pending');
    return showMainMenu();
  }

  // Multi-select issues
  const selectAnswer = AskUserQuestion({
    questions: [{
      question: 'Select issues (multi-select):',
      header: 'Select',
      multiSelect: true,
      options: allIssues.slice(0, 15).map(i => ({
        label: i.id,
        description: `${i.status} - ${i.title.substring(0, 30)}`
      }))
    }]
  });

  const selectedIds = parseMultiAnswer(selectAnswer);

  if (selectedIds.length === 0) {
    console.log('No issues selected.');
    return showMainMenu();
  }

  // Execute bulk operation
  const issuesPath = '.workflow/issues/issues.jsonl';
  let issues = Bash(`cat "${issuesPath}"`)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));

  switch (operation) {
    case 'Update Status':
      const statusAnswer = AskUserQuestion({
        questions: [{
          question: 'Select new status:',
          header: 'Status',
          multiSelect: false,
          options: [
            { label: 'registered', description: 'Reset to registered' },
            { label: 'paused', description: 'Pause issues' },
            { label: 'completed', description: 'Mark completed' }
          ]
        }]
      });
      const newStatus = parseAnswer(statusAnswer);
      issues = issues.map(i =>
        selectedIds.includes(i.id)
          ? { ...i, status: newStatus, updated_at: new Date().toISOString() }
          : i
      );
      break;

    case 'Update Priority':
      const prioAnswer = AskUserQuestion({
        questions: [{
          question: 'Select new priority:',
          header: 'Priority',
          multiSelect: false,
          options: [
            { label: 'P1', description: 'Critical' },
            { label: 'P2', description: 'High' },
            { label: 'P3', description: 'Medium' },
            { label: 'P4', description: 'Low' },
            { label: 'P5', description: 'Trivial' }
          ]
        }]
      });
      const newPrio = parseInt(parseAnswer(prioAnswer).charAt(1));
      issues = issues.map(i =>
        selectedIds.includes(i.id)
          ? { ...i, priority: newPrio, updated_at: new Date().toISOString() }
          : i
      );
      break;

    case 'Add Labels':
      const labelAnswer = AskUserQuestion({
        questions: [{
          question: 'Enter labels to add (comma-separated):',
          header: 'Labels',
          multiSelect: false,
          options: [
            { label: 'bug', description: 'Bug fix' },
            { label: 'feature', description: 'New feature' },
            { label: 'urgent', description: 'Urgent priority' }
          ]
        }]
      });
      const newLabels = parseAnswer(labelAnswer).split(',').map(l => l.trim());
      issues = issues.map(i =>
        selectedIds.includes(i.id)
          ? {
              ...i,
              labels: [...new Set([...(i.labels || []), ...newLabels])],
              updated_at: new Date().toISOString()
            }
          : i
      );
      break;

    case 'Delete Multiple':
      const confirmDelete = AskUserQuestion({
        questions: [{
          question: `Delete ${selectedIds.length} issues permanently?`,
          header: 'Confirm',
          multiSelect: false,
          options: [
            { label: 'Delete All', description: 'Remove selected issues' },
            { label: 'Cancel', description: 'Keep issues' }
          ]
        }]
      });
      if (parseAnswer(confirmDelete) === 'Delete All') {
        issues = issues.filter(i => !selectedIds.includes(i.id));
        // Clean up solutions
        for (const id of selectedIds) {
          Bash(`rm -f ".workflow/issues/solutions/${id}.jsonl" 2>/dev/null || true`);
        }
      } else {
        console.log('Deletion cancelled.');
        return showMainMenu();
      }
      break;
  }

  Write(issuesPath, issues.map(i => JSON.stringify(i)).join('\n'));
  console.log(`✓ Updated ${selectedIds.length} issues`);
  return showMainMenu();
}
```

### Phase 8: Create Issue (Redirect)

```javascript
async function createIssueInteractive() {
  const typeAnswer = AskUserQuestion({
    questions: [{
      question: 'Create issue from:',
      header: 'Source',
      multiSelect: false,
      options: [
        { label: 'GitHub URL', description: 'Import from GitHub issue' },
        { label: 'Text Description', description: 'Enter problem description' },
        { label: 'Quick Create', description: 'Just title and priority' }
      ]
    }]
  });

  const type = parseAnswer(typeAnswer);

  if (type === 'GitHub URL' || type === 'Text Description') {
    console.log('Use /issue:new for structured issue creation');
    console.log('Example: /issue:new https://github.com/org/repo/issues/123');
    return showMainMenu();
  }

  // Quick create
  const titleAnswer = AskUserQuestion({
    questions: [{
      question: 'Enter issue title:',
      header: 'Title',
      multiSelect: false,
      options: [
        { label: 'Authentication Bug', description: 'Example title' }
      ]
    }]
  });

  const title = parseAnswer(titleAnswer);

  const prioAnswer = AskUserQuestion({
    questions: [{
      question: 'Select priority:',
      header: 'Priority',
      multiSelect: false,
      options: [
        { label: 'P3 - Medium (Recommended)', description: 'Normal priority' },
        { label: 'P1 - Critical', description: 'Production blocking' },
        { label: 'P2 - High', description: 'Major functionality' }
      ]
    }]
  });

  const priority = parseInt(parseAnswer(prioAnswer).charAt(1));

  // Generate ID and create
  const id = `ISS-${Date.now()}`;
  Bash(`ccw issue init ${id} --title "${title}" --priority ${priority}`);

  console.log(`✓ Created issue ${id}`);
  await viewIssueInteractive(id);
}
```

## Helper Functions

```javascript
function parseAnswer(answer) {
  // Extract selected option from AskUserQuestion response
  if (typeof answer === 'string') return answer;
  if (answer.answers) {
    const values = Object.values(answer.answers);
    return values[0] || '';
  }
  return '';
}

function parseMultiAnswer(answer) {
  // Extract multiple selections
  if (typeof answer === 'string') return answer.split(',').map(s => s.trim());
  if (answer.answers) {
    const values = Object.values(answer.answers);
    return values.flatMap(v => v.split(',').map(s => s.trim()));
  }
  return [];
}

function parseFlags(input) {
  const flags = {};
  const matches = input.matchAll(/--(\w+)\s+([^\s-]+)/g);
  for (const match of matches) {
    flags[match[1]] = match[2];
  }
  return flags;
}

function parseIssueId(input) {
  const match = input.match(/^([A-Z]+-\d+|ISS-\d+|GH-\d+)/i);
  return match ? match[1] : null;
}
```
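
To illustrate what these helpers are expected to return, here is a hedged sketch of inputs and outputs. The `{ answers: { Header: value } }` response shape is inferred from `parseAnswer` above, not from AskUserQuestion documentation.

```javascript
// Illustrative inputs/outputs only
parseIssueId('GH-123 --action edit');                        // => 'GH-123'
parseFlags('GH-123 --action edit');                          // => { action: 'edit' }
parseAnswer({ answers: { Action: 'List Issues' } });         // => 'List Issues'
parseMultiAnswer({ answers: { Filter: 'Planned,Queued' } }); // => ['Planned', 'Queued']
```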

## Error Handling

| Error | Resolution |
|-------|------------|
| No issues found | Suggest creating with /issue:new |
| Issue not found | Show available issues, ask for correction |
| Invalid selection | Show error, re-prompt |
| Write failure | Check permissions, show error |
| Queue operation fails | Show ccw issue error, suggest fix |

## Related Commands

- `/issue:new` - Create structured issue
- `/issue:plan` - Plan solution for issue
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute queued tasks
- `ccw issue list` - CLI list command
- `ccw issue status` - CLI status command

484
.claude/commands/issue/new.md
Normal file
@@ -0,0 +1,484 @@
---
name: new
description: Create structured issue from GitHub URL or text description, extracting key elements into issues.jsonl
argument-hint: "<github-url | text-description> [--priority 1-5] [--labels label1,label2]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), WebFetch(*), AskUserQuestion(*)
---

# Issue New Command (/issue:new)

## Overview

Creates a new structured issue from either:
1. **GitHub Issue URL** - Fetches and parses issue content via `gh` CLI
2. **Text Description** - Parses natural language into structured fields

Outputs a well-formed issue entry to `.workflow/issues/issues.jsonl`.

## Issue Structure (Closed-Loop)

```typescript
interface Issue {
  id: string;                    // GH-123 or ISS-YYYYMMDD-HHMMSS
  title: string;                 // Issue title (clear, concise)
  status: 'registered';          // Initial status
  priority: number;              // 1 (critical) to 5 (low)
  context: string;               // Problem description
  source: 'github' | 'text';     // Input source type
  source_url?: string;           // GitHub URL if applicable
  labels?: string[];             // Categorization labels

  // Structured extraction
  problem_statement: string;     // What is the problem?
  expected_behavior?: string;    // What should happen?
  actual_behavior?: string;      // What actually happens?
  affected_components?: string[];// Files/modules affected
  reproduction_steps?: string[]; // Steps to reproduce

  // Closed-loop requirements (guide plan generation)
  lifecycle_requirements: {
    test_strategy: 'unit' | 'integration' | 'e2e' | 'manual' | 'auto';
    regression_scope: 'affected' | 'related' | 'full'; // Which tests to run
    acceptance_type: 'automated' | 'manual' | 'both';  // How to verify
    commit_strategy: 'per-task' | 'squash' | 'atomic'; // Commit granularity
  };

  // Metadata
  bound_solution_id: null;
  solution_count: 0;
  created_at: string;
  updated_at: string;
}
```
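
For reference, a minimal `issues.jsonl` entry conforming to this interface might look like the following (values mirror the text-description example later in this document; they are illustrative, not a real issue):

```json
{
  "id": "ISS-20251227-142530",
  "title": "API rate limiting not working",
  "status": "registered",
  "priority": 3,
  "context": "API rate limiting not working",
  "source": "text",
  "source_url": null,
  "labels": [],
  "problem_statement": "API rate limiting not working",
  "expected_behavior": "429 after 100 requests",
  "actual_behavior": "No limit",
  "affected_components": ["src/middleware/rate-limit.ts"],
  "reproduction_steps": [],
  "lifecycle_requirements": {
    "test_strategy": "auto",
    "regression_scope": "affected",
    "acceptance_type": "automated",
    "commit_strategy": "per-task"
  },
  "bound_solution_id": null,
  "solution_count": 0,
  "created_at": "2025-12-27T14:25:30.000Z",
  "updated_at": "2025-12-27T14:25:30.000Z"
}
```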

## Task Lifecycle (Each Task is Closed-Loop)

When `/issue:plan` generates tasks, each task MUST include:

```typescript
interface SolutionTask {
  id: string;
  title: string;
  scope: string;
  action: string;

  // Phase 1: Implementation
  implementation: string[];      // Step-by-step implementation
  modification_points: { file: string; target: string; change: string }[];

  // Phase 2: Testing
  test: {
    unit?: string[];             // Unit test requirements
    integration?: string[];      // Integration test requirements
    commands?: string[];         // Test commands to run
    coverage_target?: number;    // Minimum coverage %
  };

  // Phase 3: Regression
  regression: string[];          // Regression check commands/points

  // Phase 4: Acceptance
  acceptance: {
    criteria: string[];          // Testable acceptance criteria
    verification: string[];      // How to verify each criterion
    manual_checks?: string[];    // Manual verification if needed
  };

  // Phase 5: Commit
  commit: {
    type: 'feat' | 'fix' | 'refactor' | 'test' | 'docs' | 'chore';
    scope: string;               // e.g., "auth", "api"
    message_template: string;    // Commit message template
    breaking?: boolean;
  };

  depends_on: string[];
  executor: 'codex' | 'gemini' | 'agent' | 'auto';
}
```
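
A hedged sketch of one such task, reusing the auth-middleware example from the execute command's fetch response (the test and regression commands are assumptions, not project conventions):

```json
{
  "id": "T1",
  "title": "Create auth middleware",
  "scope": "src/middleware/",
  "action": "Create",
  "implementation": ["Create auth.ts in src/middleware/", "Implement JWT validation"],
  "modification_points": [
    { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
  ],
  "test": { "unit": ["valid token passes", "expired token rejected"], "commands": ["npm test -- auth"] },
  "regression": ["npm test -- middleware"],
  "acceptance": {
    "criteria": ["Returns 401 for invalid or missing tokens"],
    "verification": ["Call a protected route with a bad token"]
  },
  "commit": {
    "type": "feat",
    "scope": "auth",
    "message_template": "feat(auth): add JWT validation middleware"
  },
  "depends_on": [],
  "executor": "codex"
}
```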

## Usage

```bash
# From GitHub URL
/issue:new https://github.com/owner/repo/issues/123

# From text description
/issue:new "Login fails when password contains special characters. Expected: successful login. Actual: 500 error. Affects src/auth/*"

# With options
/issue:new <url-or-text> --priority 2 --labels "bug,auth"
```

## Implementation

### Phase 1: Input Detection

```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --priority, --labels

// Detect input type
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
const isGitHubShort = input.match(/^#(\d+)$/); // #123 format

let issueData = {};

if (isGitHubUrl || isGitHubShort) {
  // GitHub issue - fetch via gh CLI
  issueData = await fetchGitHubIssue(input);
} else {
  // Text description - parse structure
  issueData = await parseTextDescription(input);
}
```
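
A quick sanity check of the two detection patterns (illustrative inputs only):

```javascript
'https://github.com/owner/repo/issues/123'
  .match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/); // truthy -> GitHub URL branch
'#42'.match(/^#(\d+)$/);                              // truthy -> short form, captures '42'
'Login fails with 500'.match(/^#(\d+)$/);             // null -> text description branch
```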

### Phase 2: GitHub Issue Fetching

```javascript
async function fetchGitHubIssue(urlOrNumber) {
  let issueRef;

  if (urlOrNumber.startsWith('http')) {
    // Validate the URL, then pass it straight through (gh issue view accepts a full URL)
    const match = urlOrNumber.match(/github\.com\/([\w-]+)\/([\w-]+)\/issues\/(\d+)/);
    if (!match) throw new Error('Invalid GitHub URL');
    issueRef = urlOrNumber;
  } else {
    // #123 format - use current repo
    issueRef = urlOrNumber.replace('#', '');
  }

  // Fetch via gh CLI
  const result = Bash(`gh issue view ${issueRef} --json number,title,body,labels,state,url`);
  const ghIssue = JSON.parse(result);

  // Parse body for structure
  const parsed = parseIssueBody(ghIssue.body);

  return {
    id: `GH-${ghIssue.number}`,
    title: ghIssue.title,
    source: 'github',
    source_url: ghIssue.url,
    labels: ghIssue.labels.map(l => l.name),
    context: ghIssue.body,
    ...parsed
  };
}

function parseIssueBody(body) {
  // Extract structured sections from markdown body
  const sections = {};

  // Problem/Description
  const problemMatch = body.match(/##?\s*(problem|description|issue)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (problemMatch) sections.problem_statement = problemMatch[2].trim();

  // Expected behavior
  const expectedMatch = body.match(/##?\s*(expected|should)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (expectedMatch) sections.expected_behavior = expectedMatch[2].trim();

  // Actual behavior
  const actualMatch = body.match(/##?\s*(actual|current)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (actualMatch) sections.actual_behavior = actualMatch[2].trim();

  // Steps to reproduce
  const stepsMatch = body.match(/##?\s*(steps|reproduce)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (stepsMatch) {
    const stepsText = stepsMatch[2].trim();
    sections.reproduction_steps = stepsText
      .split('\n')
      .filter(line => line.match(/^\s*[\d\-\*]/))
      .map(line => line.replace(/^\s*[\d\.\-\*]+\s*/, '').trim());
  }

  // Affected components (from file references)
  const fileMatches = body.match(/`[^`]*\.(ts|js|tsx|jsx|py|go|rs)[^`]*`/g);
  if (fileMatches) {
    sections.affected_components = [...new Set(fileMatches.map(f => f.replace(/`/g, '')))];
  }

  // Fallback: use entire body as problem statement
  if (!sections.problem_statement) {
    sections.problem_statement = body.substring(0, 500);
  }

  return sections;
}
```
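
As an illustration, given a markdown body like the one below, `parseIssueBody` would extract the sections shown in the trailing comment (sample input/output, not a real issue):

```javascript
const body = [
  '## Problem',
  'Login returns 500 for passwords with `%`.',
  '## Expected',
  'Successful login.',
  '## Steps',
  '1. Open the `src/auth/login.ts` flow',
  '2. Submit a password containing %'
].join('\n');

parseIssueBody(body);
// => {
//   problem_statement: 'Login returns 500 for passwords with `%`.',
//   expected_behavior: 'Successful login.',
//   reproduction_steps: ['Open the `src/auth/login.ts` flow', 'Submit a password containing %'],
//   affected_components: ['src/auth/login.ts']
// }
```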

### Phase 3: Text Description Parsing

```javascript
async function parseTextDescription(text) {
  // Generate unique ID
  const id = `ISS-${new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14)}`;

  // Extract structured elements using patterns
  const result = {
    id,
    source: 'text',
    title: '',
    problem_statement: '',
    expected_behavior: null,
    actual_behavior: null,
    affected_components: [],
    reproduction_steps: []
  };

  // Pattern: "Title. Description. Expected: X. Actual: Y. Affects: files"
  const sentences = text.split(/\.(?=\s|$)/);

  // First sentence as title
  result.title = sentences[0]?.trim() || 'Untitled Issue';

  // Look for keywords
  for (const sentence of sentences) {
    const s = sentence.trim();

    if (s.match(/^expected:?\s*/i)) {
      result.expected_behavior = s.replace(/^expected:?\s*/i, '');
    } else if (s.match(/^actual:?\s*/i)) {
      result.actual_behavior = s.replace(/^actual:?\s*/i, '');
    } else if (s.match(/^affects?:?\s*/i)) {
      const components = s.replace(/^affects?:?\s*/i, '').split(/[,\s]+/);
      result.affected_components = components.filter(c => c.includes('/') || c.includes('.'));
    } else if (s.match(/^steps?:?\s*/i)) {
      result.reproduction_steps = s.replace(/^steps?:?\s*/i, '').split(/[,;]/);
    } else if (!result.problem_statement && s.length > 10) {
      result.problem_statement = s;
    }
  }

  // Fallback problem statement
  if (!result.problem_statement) {
    result.problem_statement = text.substring(0, 300);
  }

  return result;
}
```

### Phase 4: Lifecycle Configuration

```javascript
// Ask for lifecycle requirements (or use smart defaults)
const lifecycleAnswer = AskUserQuestion({
  questions: [
    {
      question: 'Test strategy for this issue?',
      header: 'Test',
      multiSelect: false,
      options: [
        { label: 'auto', description: 'Auto-detect based on affected files (Recommended)' },
        { label: 'unit', description: 'Unit tests only' },
        { label: 'integration', description: 'Integration tests' },
        { label: 'e2e', description: 'End-to-end tests' },
        { label: 'manual', description: 'Manual testing only' }
      ]
    },
    {
      question: 'Regression scope?',
      header: 'Regression',
      multiSelect: false,
      options: [
        { label: 'affected', description: 'Only affected module tests (Recommended)' },
        { label: 'related', description: 'Affected + dependent modules' },
        { label: 'full', description: 'Full test suite' }
      ]
    },
    {
      question: 'Commit strategy?',
      header: 'Commit',
      multiSelect: false,
      options: [
        { label: 'per-task', description: 'One commit per task (Recommended)' },
        { label: 'atomic', description: 'Single commit for entire issue' },
        { label: 'squash', description: 'Squash at the end' }
      ]
    }
  ]
});

const lifecycle = {
  test_strategy: lifecycleAnswer.test || 'auto',
  regression_scope: lifecycleAnswer.regression || 'affected',
  acceptance_type: 'automated',
  commit_strategy: lifecycleAnswer.commit || 'per-task'
};

issueData.lifecycle_requirements = lifecycle;
```

### Phase 5: User Confirmation

```javascript
// Show parsed data and ask for confirmation
console.log(`
## Parsed Issue

**ID**: ${issueData.id}
**Title**: ${issueData.title}
**Source**: ${issueData.source}${issueData.source_url ? ` (${issueData.source_url})` : ''}

### Problem Statement
${issueData.problem_statement}

${issueData.expected_behavior ? `### Expected Behavior\n${issueData.expected_behavior}\n` : ''}
${issueData.actual_behavior ? `### Actual Behavior\n${issueData.actual_behavior}\n` : ''}
${issueData.affected_components?.length ? `### Affected Components\n${issueData.affected_components.map(c => `- ${c}`).join('\n')}\n` : ''}
${issueData.reproduction_steps?.length ? `### Reproduction Steps\n${issueData.reproduction_steps.map((s, i) => `${i+1}. ${s}`).join('\n')}\n` : ''}

### Lifecycle Configuration
- **Test Strategy**: ${lifecycle.test_strategy}
- **Regression Scope**: ${lifecycle.regression_scope}
- **Commit Strategy**: ${lifecycle.commit_strategy}
`);

// Ask user to confirm or edit
const answer = AskUserQuestion({
  questions: [{
    question: 'Create this issue?',
    header: 'Confirm',
    multiSelect: false,
    options: [
      { label: 'Create', description: 'Save issue to issues.jsonl' },
      { label: 'Edit Title', description: 'Modify the issue title' },
      { label: 'Edit Priority', description: 'Change priority (1-5)' },
      { label: 'Cancel', description: 'Discard and exit' }
    ]
  }]
});

const choice = parseAnswer(answer);

if (choice === 'Cancel') {
  console.log('Issue creation cancelled.');
  return;
}

if (choice === 'Edit Title') {
  const titleAnswer = AskUserQuestion({
    questions: [{
      question: 'Enter new title:',
      header: 'Title',
      multiSelect: false,
      options: [
        { label: issueData.title.substring(0, 40), description: 'Keep current' }
      ]
    }]
  });
  // Handle custom input via "Other"
  if (titleAnswer.customText) {
    issueData.title = titleAnswer.customText;
  }
}
```

### Phase 6: Write to JSONL

```javascript
// Construct final issue object
const priority = flags.priority ? parseInt(flags.priority) : 3;
const labels = flags.labels ? flags.labels.split(',').map(l => l.trim()) : [];

const newIssue = {
  id: issueData.id,
  title: issueData.title,
  status: 'registered',
  priority,
  context: issueData.problem_statement,
  source: issueData.source,
  source_url: issueData.source_url || null,
  labels: [...(issueData.labels || []), ...labels],

  // Structured fields
  problem_statement: issueData.problem_statement,
  expected_behavior: issueData.expected_behavior || null,
  actual_behavior: issueData.actual_behavior || null,
  affected_components: issueData.affected_components || [],
  reproduction_steps: issueData.reproduction_steps || [],

  // Closed-loop lifecycle requirements
  lifecycle_requirements: issueData.lifecycle_requirements || {
    test_strategy: 'auto',
    regression_scope: 'affected',
    acceptance_type: 'automated',
    commit_strategy: 'per-task'
  },

  // Metadata
  bound_solution_id: null,
  solution_count: 0,
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString()
};

// Ensure directory exists
Bash('mkdir -p .workflow/issues');

// Append to issues.jsonl (note: assumes the serialized JSON contains no single quotes)
const issuesPath = '.workflow/issues/issues.jsonl';
Bash(`echo '${JSON.stringify(newIssue)}' >> "${issuesPath}"`);

console.log(`
## Issue Created

**ID**: ${newIssue.id}
**Title**: ${newIssue.title}
**Priority**: ${newIssue.priority}
**Labels**: ${newIssue.labels.join(', ') || 'none'}
**Source**: ${newIssue.source}

### Next Steps
1. Plan solution: \`/issue:plan ${newIssue.id}\`
2. View details: \`ccw issue status ${newIssue.id}\`
3. Manage issues: \`/issue:manage\`
`);
```

## Examples

### GitHub Issue

```bash
/issue:new https://github.com/myorg/myrepo/issues/42 --priority 2

# Output:
## Issue Created
**ID**: GH-42
**Title**: Fix memory leak in WebSocket handler
**Priority**: 2
**Labels**: bug, performance
**Source**: github (https://github.com/myorg/myrepo/issues/42)
```

### Text Description

```bash
/issue:new "API rate limiting not working. Expected: 429 after 100 requests. Actual: No limit. Affects src/middleware/rate-limit.ts"

# Output:
## Issue Created
**ID**: ISS-20251227-142530
**Title**: API rate limiting not working
**Priority**: 3
**Labels**: none
**Source**: text
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Invalid GitHub URL | Show format hint, ask for correction |
| gh CLI not available | Fall back to WebFetch for public issues |
| Empty description | Prompt user for required fields |
| Duplicate issue ID | Auto-increment or suggest merge |
| Parse failure | Show raw input, ask for manual structuring |

## Related Commands

- `/issue:plan` - Plan solution for issue
- `/issue:manage` - Interactive issue management
- `ccw issue list` - List all issues
- `ccw issue status <id>` - View issue details

421
.claude/commands/issue/plan.md
Normal file
@@ -0,0 +1,421 @@
---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] [--batch-size 3]"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---

# Issue Plan Command (/issue:plan)

## Overview

Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow. The agent handles ACE semantic search, solution generation, and task breakdown.

**Core capabilities:**
- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and acceptance criteria
- Automatic solution registration and binding

## Storage Structure (Flat JSONL)

```
.workflow/issues/
├── issues.jsonl            # All issues (one per line)
├── queue.json              # Execution queue
└── solutions/
    ├── {issue-id}.jsonl    # Solutions for issue (one per line)
    └── ...
```
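
For orientation, each line in these JSONL files is one self-contained JSON object. A hedged sketch follows (field subsets only; the full shapes follow the issue and solution schemas):

```
# issues.jsonl - one issue per line
{"id": "GH-123", "title": "Fix memory leak in WebSocket handler", "status": "planned", "priority": 2, "bound_solution_id": "SOL-001"}

# solutions/GH-123.jsonl - one candidate solution per line
{"id": "SOL-001", "issue_id": "GH-123", "is_bound": true, "tasks": [{"id": "T1", "title": "Patch WebSocket close handler"}]}
```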

## Usage

```bash
/issue:plan <issue-id>[,<issue-id>,...] [FLAGS]

# Examples
/issue:plan GH-123                 # Single issue
/issue:plan GH-123,GH-124,GH-125   # Batch (up to 3)
/issue:plan --all-pending          # All pending issues

# Flags
--batch-size <n>    Max issues per agent batch (default: 3)
```

## Execution Process

```
Phase 1: Issue Loading
├─ Parse input (single, comma-separated, or --all-pending)
├─ Load issues from .workflow/issues/issues.jsonl
├─ Validate issues exist (create if needed)
└─ Group into batches (max 3 per batch)

Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
├─ Agent performs:
│  ├─ ACE semantic search for each issue
│  ├─ Codebase exploration (files, patterns, dependencies)
│  ├─ Solution generation with task breakdown
│  └─ Conflict detection across issues
└─ Output: solution JSON per issue

Phase 3: Solution Registration & Binding
├─ Append solutions to solutions/{issue-id}.jsonl
├─ Single solution per issue → auto-bind
├─ Multiple candidates → AskUserQuestion to select
└─ Update issues.jsonl with bound_solution_id

Phase 4: Summary
├─ Display bound solutions
├─ Show task counts per issue
└─ Display next steps (/issue:queue)
```
|
||||
|
||||
## Implementation
|
||||
|
||||
### Phase 1: Issue Loading
|
||||
|
||||
```javascript
|
||||
// Parse input
|
||||
const issueIds = userInput.includes(',')
|
||||
? userInput.split(',').map(s => s.trim())
|
||||
: [userInput.trim()];
|
||||
|
||||
// Read issues.jsonl
|
||||
const issuesPath = '.workflow/issues/issues.jsonl';
|
||||
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
|
||||
.split('\n')
|
||||
.filter(line => line.trim())
|
||||
.map(line => JSON.parse(line));
|
||||
|
||||
// Load and validate issues
|
||||
const issues = [];
|
||||
for (const id of issueIds) {
|
||||
let issue = allIssues.find(i => i.id === id);
|
||||
|
||||
if (!issue) {
|
||||
console.log(`Issue ${id} not found. Creating...`);
|
||||
issue = {
|
||||
id,
|
||||
title: `Issue ${id}`,
|
||||
status: 'registered',
|
||||
priority: 3,
|
||||
context: '',
|
||||
created_at: new Date().toISOString(),
|
||||
updated_at: new Date().toISOString()
|
||||
};
|
||||
// Append to issues.jsonl
|
||||
Bash(`echo '${JSON.stringify(issue)}' >> "${issuesPath}"`);
|
||||
}
|
||||
|
||||
issues.push(issue);
|
||||
}
|
||||
|
||||
// Group into batches
|
||||
const batchSize = flags.batchSize || 3;
|
||||
const batches = [];
|
||||
for (let i = 0; i < issues.length; i += batchSize) {
|
||||
batches.push(issues.slice(i, i + batchSize));
|
||||
}
|
||||
|
||||
TodoWrite({
|
||||
todos: batches.flatMap((batch, i) => [
|
||||
{ content: `Plan batch ${i+1}`, status: 'pending', activeForm: `Planning batch ${i+1}` }
|
||||
])
|
||||
});
|
||||
```

### Phase 2: Unified Explore + Plan (issue-plan-agent)

```javascript
for (const [batchIndex, batch] of batches.entries()) {
  updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');

  // Build issue prompt for agent with lifecycle requirements
  const issuePrompt = `
## Issues to Plan (Closed-Loop Tasks Required)

${batch.map((issue, i) => `
### Issue ${i + 1}: ${issue.id}
**Title**: ${issue.title}
**Context**: ${issue.context || 'No context provided'}
**Affected Components**: ${issue.affected_components?.join(', ') || 'Not specified'}

**Lifecycle Requirements**:
- Test Strategy: ${issue.lifecycle_requirements?.test_strategy || 'auto'}
- Regression Scope: ${issue.lifecycle_requirements?.regression_scope || 'affected'}
- Commit Strategy: ${issue.lifecycle_requirements?.commit_strategy || 'per-task'}
`).join('\n')}

## Project Root
${process.cwd()}

## Requirements - CLOSED-LOOP TASKS

Each task MUST include ALL lifecycle phases:

### 1. Implementation
- implementation: string[] (2-7 concrete steps)
- modification_points: { file, target, change }[]

### 2. Test
- test.unit: string[] (unit test requirements)
- test.integration: string[] (integration test requirements if needed)
- test.commands: string[] (actual test commands to run)
- test.coverage_target: number (minimum coverage %)

### 3. Regression
- regression: string[] (commands to run for regression check)
- Based on issue's regression_scope setting

### 4. Acceptance
- acceptance.criteria: string[] (testable acceptance criteria)
- acceptance.verification: string[] (how to verify each criterion)
- acceptance.manual_checks: string[] (manual checks if needed)

### 5. Commit
- commit.type: feat|fix|refactor|test|docs|chore
- commit.scope: string (module name)
- commit.message_template: string (full commit message)
- commit.breaking: boolean

## Additional Requirements
1. Use ACE semantic search (mcp__ace-tool__search_context) for exploration
2. Detect file conflicts if multiple issues
3. Generate executable test commands based on project's test framework
4. Infer commit scope from affected files
`;

  // Launch issue-plan-agent (combines explore + plan)
  const result = Task(
    subagent_type="issue-plan-agent",
    run_in_background=false,
    description=`Explore & plan ${batch.length} issues`,
    prompt=issuePrompt
  );

  // Parse agent output
  const agentOutput = JSON.parse(result);

  // Register solutions for each issue (append to solutions/{issue-id}.jsonl)
  for (const item of agentOutput.solutions) {
    const solutionPath = `.workflow/issues/solutions/${item.issue_id}.jsonl`;

    // Ensure solutions directory exists
    Bash(`mkdir -p .workflow/issues/solutions`);

    // Append solution as new line
    Bash(`echo '${JSON.stringify(item.solution)}' >> "${solutionPath}"`);
  }

  // Handle conflicts if any
  if (agentOutput.conflicts?.length > 0) {
    console.log(`\n⚠ File conflicts detected:`);
    agentOutput.conflicts.forEach(c => {
      console.log(`  ${c.file}: ${c.issues.join(', ')} → suggested: ${c.suggested_order.join(' → ')}`);
    });
  }

  updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
```

### Phase 3: Solution Binding

```javascript
// Re-read issues.jsonl
let allIssuesUpdated = Bash(`cat "${issuesPath}"`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

for (const issue of issues) {
  const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
  const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));

  if (solutions.length === 0) {
    console.log(`⚠ No solutions for ${issue.id}`);
    continue;
  }

  let selectedSolId;

  if (solutions.length === 1) {
    // Auto-bind single solution
    selectedSolId = solutions[0].id;
    console.log(`✓ Auto-bound ${selectedSolId} to ${issue.id} (${solutions[0].tasks?.length || 0} tasks)`);
  } else {
    // Multiple solutions - ask user
    const answer = AskUserQuestion({
      questions: [{
        question: `Select solution for ${issue.id}:`,
        header: issue.id,
        multiSelect: false,
        options: solutions.map(s => ({
          label: `${s.id}: ${s.description || 'Solution'}`,
          description: `${s.tasks?.length || 0} tasks`
        }))
      }]
    });

    selectedSolId = extractSelectedSolutionId(answer);
    console.log(`✓ Bound ${selectedSolId} to ${issue.id}`);
  }

  // Update issue in allIssuesUpdated
  const issueIndex = allIssuesUpdated.findIndex(i => i.id === issue.id);
  if (issueIndex !== -1) {
    allIssuesUpdated[issueIndex].bound_solution_id = selectedSolId;
    allIssuesUpdated[issueIndex].status = 'planned';
    allIssuesUpdated[issueIndex].planned_at = new Date().toISOString();
    allIssuesUpdated[issueIndex].updated_at = new Date().toISOString();
  }

  // Mark solution as bound in solutions file
  const updatedSolutions = solutions.map(s => ({
    ...s,
    is_bound: s.id === selectedSolId,
    bound_at: s.id === selectedSolId ? new Date().toISOString() : s.bound_at
  }));
  Write(solPath, updatedSolutions.map(s => JSON.stringify(s)).join('\n'));
}

// Write updated issues.jsonl
Write(issuesPath, allIssuesUpdated.map(i => JSON.stringify(i)).join('\n'));
```

### Phase 4: Summary

```javascript
console.log(`
## Planning Complete

**Issues Planned**: ${issues.length}

### Bound Solutions
${issues.map(i => {
  const issue = allIssuesUpdated.find(a => a.id === i.id);
  return issue?.bound_solution_id
    ? `✓ ${i.id}: ${issue.bound_solution_id}`
    : `○ ${i.id}: No solution bound`;
}).join('\n')}

### Next Steps
1. Review: \`ccw issue status <issue-id>\`
2. Form queue: \`/issue:queue\`
3. Execute: \`/issue:execute\`
`);
```

## Solution Format (Closed-Loop Tasks)

Each solution line in `solutions/{issue-id}.jsonl`:

```json
{
  "id": "SOL-20251226-001",
  "description": "Direct Implementation",
  "tasks": [
    {
      "id": "T1",
      "title": "Create auth middleware",
      "scope": "src/middleware/",
      "action": "Create",
      "description": "Create JWT validation middleware",
      "modification_points": [
        { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
      ],

      "implementation": [
        "Create auth.ts file in src/middleware/",
        "Implement JWT token validation using jsonwebtoken",
        "Add error handling for invalid/expired tokens",
        "Export middleware function"
      ],

      "test": {
        "unit": [
          "Test valid token passes through",
          "Test invalid token returns 401",
          "Test expired token returns 401",
          "Test missing token returns 401"
        ],
        "commands": [
          "npm test -- --grep 'auth middleware'",
          "npm run test:coverage -- src/middleware/auth.ts"
        ],
        "coverage_target": 80
      },

      "regression": [
        "npm test -- --grep 'protected routes'",
        "npm run test:integration -- auth"
      ],

      "acceptance": {
        "criteria": [
          "Middleware validates JWT tokens successfully",
          "Returns 401 for invalid or missing tokens",
          "Passes decoded token to request context"
        ],
        "verification": [
          "curl -H 'Authorization: Bearer valid_token' /api/protected → 200",
          "curl /api/protected → 401",
          "curl -H 'Authorization: Bearer invalid' /api/protected → 401"
        ]
      },

      "commit": {
        "type": "feat",
        "scope": "auth",
        "message_template": "feat(auth): add JWT validation middleware\n\n- Implement token validation\n- Add error handling for invalid tokens\n- Export for route protection",
        "breaking": false
      },

      "depends_on": [],
      "estimated_minutes": 30,
      "executor": "codex"
    }
  ],
  "exploration_context": {
    "relevant_files": ["src/config/auth.ts"],
    "patterns": "Follow existing middleware pattern"
  },
  "is_bound": true,
  "created_at": "2025-12-26T10:00:00Z",
  "bound_at": "2025-12-26T10:05:00Z"
}
```
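
Downstream commands assume every task carries all five lifecycle phases, so a quick structural check before registering a solution can catch incomplete agent output early. A minimal sketch (the `validateClosedLoopTask` helper is illustrative, not part of the command):

```javascript
// Verify a task includes every closed-loop lifecycle phase
function validateClosedLoopTask(task) {
  const problems = [];
  if (!task.implementation?.length) problems.push('missing implementation steps');
  if (!task.test?.commands?.length) problems.push('missing test commands');
  if (!task.regression?.length) problems.push('missing regression commands');
  if (!task.acceptance?.criteria?.length) problems.push('missing acceptance criteria');
  if (!task.commit?.type) problems.push('missing commit metadata');
  return { valid: problems.length === 0, problems };
}
```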

## Error Handling

| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |

## Agent Integration

The command uses `issue-plan-agent` which:
1. Performs ACE semantic search per issue
2. Identifies modification points and patterns
3. Generates task breakdown with dependencies
4. Detects cross-issue file conflicts
5. Outputs solution JSON for registration

See `.claude/agents/issue-plan-agent.md` for agent specification.

## Related Commands

- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue list` - List all issues
- `ccw issue status` - View issue and solution details
354
.claude/commands/issue/queue.md
Normal file
@@ -0,0 +1,354 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---

# Issue Queue Command (/issue:queue)

## Overview

Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves conflicts, determines dependencies, and creates an ordered execution queue. The queue is global across all issues.

**Core capabilities:**
- **Agent-driven**: issue-queue-agent handles all ordering logic
- ACE semantic search for relationship discovery
- Dependency DAG construction and cycle detection
- File conflict detection and resolution
- Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
- Output global queue.json

## Storage Structure (Queue History)

```
.workflow/issues/
├── issues.jsonl              # All issues (one per line)
├── queues/                   # Queue history directory
│   ├── index.json            # Queue index (active + history)
│   ├── {queue-id}.json       # Individual queue files
│   └── ...
└── solutions/
    ├── {issue-id}.jsonl      # Solutions for issue
    └── ...
```

### Queue Index Schema

```json
{
  "active_queue_id": "QUE-20251227-143000",
  "queues": [
    {
      "id": "QUE-20251227-143000",
      "status": "active",
      "issue_ids": ["GH-123", "GH-124"],
      "total_tasks": 8,
      "completed_tasks": 3,
      "created_at": "2025-12-27T14:30:00Z"
    },
    {
      "id": "QUE-20251226-100000",
      "status": "completed",
      "issue_ids": ["GH-120"],
      "total_tasks": 5,
      "completed_tasks": 5,
      "created_at": "2025-12-26T10:00:00Z",
      "completed_at": "2025-12-26T12:30:00Z"
    }
  ]
}
```
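
Flags such as `--list` and `--switch` operate purely on this index. A minimal sketch of how the active queue could be resolved and switched, assuming the index file exists (helper names are illustrative; the implementation below only covers queue formation):

```javascript
const indexPath = '.workflow/issues/queues/index.json';

// Resolve the currently active queue entry from the index
function getActiveQueue() {
  const index = JSON.parse(Read(indexPath));
  return index.queues.find(q => q.id === index.active_queue_id) || null;
}

// Switch the active queue (backs the --switch <queue-id> flag)
function switchActiveQueue(queueId) {
  const index = JSON.parse(Read(indexPath));
  if (!index.queues.some(q => q.id === queueId)) {
    throw new Error(`Queue ${queueId} not found in index`);
  }
  index.active_queue_id = queueId;
  Write(indexPath, JSON.stringify(index, null, 2));
}
```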

## Usage

```bash
/issue:queue [FLAGS]

# Examples
/issue:queue                    # Form NEW queue from all bound solutions
/issue:queue --issue GH-123     # Form queue for specific issue only
/issue:queue --append GH-124    # Append to active queue
/issue:queue --list             # List all queues (history)
/issue:queue --switch QUE-xxx   # Switch active queue
/issue:queue --archive          # Archive completed active queue

# Flags
--issue <id>          Form queue for specific issue only
--append <id>         Append issue to active queue (don't create new)
--list                List all queues with status
--switch <queue-id>   Switch active queue
--archive             Archive current queue (mark completed)
--clear <queue-id>    Delete a queue from history
```

## Execution Process

```
Phase 1: Solution Loading
├─ Load issues.jsonl
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
└─ Extract tasks from bound solutions

Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all tasks
├─ Agent performs:
│  ├─ Build dependency DAG from depends_on
│  ├─ Detect circular dependencies
│  ├─ Identify file modification conflicts
│  ├─ Resolve conflicts using ordering rules
│  ├─ Calculate semantic priority (0.0-1.0)
│  └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered tasks

Phase 5: Queue Output
├─ Write queue.json
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```

## Implementation

### Phase 1: Solution Loading

```javascript
// Load issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

// Filter issues with bound solutions
const plannedIssues = allIssues.filter(i =>
  i.status === 'planned' && i.bound_solution_id
);

if (plannedIssues.length === 0) {
  console.log('No issues with bound solutions found.');
  console.log('Run /issue:plan first to create and bind solutions.');
  return;
}

// Load all tasks from bound solutions
const allTasks = [];
for (const issue of plannedIssues) {
  const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
  const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));

  // Find bound solution
  const boundSol = solutions.find(s => s.id === issue.bound_solution_id);

  if (!boundSol) {
    console.log(`⚠ Bound solution ${issue.bound_solution_id} not found for ${issue.id}`);
    continue;
  }

  for (const task of boundSol.tasks || []) {
    allTasks.push({
      issue_id: issue.id,
      solution_id: issue.bound_solution_id,
      task,
      exploration_context: boundSol.exploration_context
    });
  }
}

console.log(`Loaded ${allTasks.length} tasks from ${plannedIssues.length} issues`);
```

### Phase 2-4: Agent-Driven Queue Formation

```javascript
// Launch issue-queue-agent to handle all ordering logic
const agentPrompt = `
## Tasks to Order

${JSON.stringify(allTasks, null, 2)}

## Project Root
${process.cwd()}

## Requirements
1. Build dependency DAG from depends_on fields
2. Detect circular dependencies (abort if found)
3. Identify file modification conflicts
4. Resolve conflicts using ordering rules:
   - Create before Update/Implement
   - Foundation scopes (config/types) before implementation
   - Core logic before tests
5. Calculate semantic priority (0.0-1.0) for each task
6. Assign execution groups (parallel P* / sequential S*)
7. Output queue JSON
`;

const result = Task(
  subagent_type="issue-queue-agent",
  run_in_background=false,
  description=`Order ${allTasks.length} tasks from ${plannedIssues.length} issues`,
  prompt=agentPrompt
);

// Parse agent output
const agentOutput = JSON.parse(result);

if (!agentOutput.success) {
  console.error(`Queue formation failed: ${agentOutput.error}`);
  if (agentOutput.cycles) {
    console.error('Circular dependencies:', agentOutput.cycles.join(', '));
  }
  return;
}
```

### Phase 5: Queue Output & Summary

```javascript
const queueOutput = agentOutput.output;

// Write queue.json
Write('.workflow/issues/queue.json', JSON.stringify(queueOutput, null, 2));

// Update issue statuses in issues.jsonl
const updatedIssues = allIssues.map(issue => {
  if (plannedIssues.find(p => p.id === issue.id)) {
    return {
      ...issue,
      status: 'queued',
      queued_at: new Date().toISOString(),
      updated_at: new Date().toISOString()
    };
  }
  return issue;
});

Write(issuesPath, updatedIssues.map(i => JSON.stringify(i)).join('\n'));

// Display summary
console.log(`
## Queue Formed

**Total Tasks**: ${queueOutput.queue.length}
**Issues**: ${plannedIssues.length}
**Conflicts**: ${queueOutput.conflicts?.length || 0} (${queueOutput._metadata?.resolved_conflicts || 0} resolved)

### Execution Groups
${(queueOutput.execution_groups || []).map(g => {
  const type = g.type === 'parallel' ? 'Parallel' : 'Sequential';
  return `- ${g.id} (${type}): ${g.task_count} tasks`;
}).join('\n')}

### Next Steps
1. Review queue: \`ccw issue queue list\`
2. Execute: \`/issue:execute\`
`);
```

## Queue Schema

Output `queues/{queue-id}.json`:

```json
{
  "id": "QUE-20251227-143000",
  "name": "Auth Feature Queue",
  "status": "active",
  "issue_ids": ["GH-123", "GH-124"],

  "queue": [
    {
      "queue_id": "Q-001",
      "issue_id": "GH-123",
      "solution_id": "SOL-001",
      "task_id": "T1",
      "status": "pending",
      "execution_order": 1,
      "execution_group": "P1",
      "depends_on": [],
      "semantic_priority": 0.7,
      "queued_at": "2025-12-26T10:00:00Z"
    }
  ],

  "conflicts": [
    {
      "type": "file_conflict",
      "file": "src/auth.ts",
      "tasks": ["GH-123:T1", "GH-124:T2"],
      "resolution": "sequential",
      "resolution_order": ["GH-123:T1", "GH-124:T2"],
      "rationale": "T1 creates file before T2 updates",
      "resolved": true
    }
  ],

  "execution_groups": [
    { "id": "P1", "type": "parallel", "task_count": 3, "tasks": ["GH-123:T1", "GH-124:T1", "GH-125:T1"] },
    { "id": "S2", "type": "sequential", "task_count": 2, "tasks": ["GH-123:T2", "GH-124:T2"] }
  ],

  "_metadata": {
    "version": "2.0",
    "total_tasks": 5,
    "pending_count": 3,
    "completed_count": 2,
    "failed_count": 0,
    "created_at": "2025-12-26T10:00:00Z",
    "updated_at": "2025-12-26T11:00:00Z",
    "source": "issue-queue-agent"
  }
}
```

### Queue ID Format

```
QUE-YYYYMMDD-HHMMSS
e.g. QUE-20251227-143052
```
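
A minimal sketch of a generator matching this timestamp format (the `generateQueueId` helper is illustrative, not part of the agent contract):

```javascript
// Build a QUE-YYYYMMDD-HHMMSS id from the current time
function generateQueueId(now = new Date()) {
  const pad = (n) => String(n).padStart(2, '0');
  const date = `${now.getFullYear()}${pad(now.getMonth() + 1)}${pad(now.getDate())}`;
  const time = `${pad(now.getHours())}${pad(now.getMinutes())}${pad(now.getSeconds())}`;
  return `QUE-${date}-${time}`;
}
```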

## Semantic Priority Rules

| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Config/Types scope | +0.1 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
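
A minimal sketch of how these boosts could combine, assuming a neutral 0.5 baseline and clamping to the 0.0-1.0 range (the baseline and helper are assumptions; the actual weighting lives in issue-queue-agent):

```javascript
// Illustrative priority calculation from the boost table above
function calculateSemanticPriority(task) {
  const ACTION_BOOSTS = {
    Create: 0.2, Configure: 0.15, Implement: 0.1,
    Refactor: -0.05, Test: -0.1, Delete: -0.15
  };
  let priority = 0.5; // assumed neutral baseline
  priority += ACTION_BOOSTS[task.action] || 0;
  if (/config|types/i.test(task.scope || '')) priority += 0.1; // foundation scopes first
  return Math.min(1, Math.max(0, priority));
}
```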

## Error Handling

| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest /issue:plan |
| Circular dependency | List cycles, abort queue formation |
| Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn |

## Agent Integration

The command uses `issue-queue-agent` which:
1. Builds dependency DAG from task depends_on fields
2. Detects circular dependencies (aborts if found)
3. Identifies file modification conflicts across issues
4. Resolves conflicts using semantic ordering rules
5. Calculates priority (0.0-1.0) for each task
6. Assigns parallel/sequential execution groups
7. Outputs structured queue JSON

See `.claude/agents/issue-queue-agent.md` for agent specification.

## Related Commands

- `/issue:plan` - Plan issues and bind solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue queue list` - View current queue
@@ -410,7 +410,6 @@ Task(subagent_type="{meta.agent}",
1. Read complete task JSON: {session.task_json_path}
2. Load context package: {session.context_package_path}

Follow complete execution guidelines in @.claude/agents/{meta.agent}.md

**Session Paths**:
- Workflow Dir: {session.workflow_dir}

@@ -15,7 +15,6 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `context-search-agent` to gather comprehensive project context for implementation planning. Generates standardized `context-package.json` with codebase analysis, dependencies, and conflict detection.

**Agent**: `context-search-agent` (`.claude/agents/context-search-agent.md`)

## Core Philosophy

@@ -431,6 +430,5 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {

- **Detection-first**: Always check for existing package before invoking agent
- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

@@ -239,15 +239,6 @@ If conflict_risk was medium/high, modifications have been applied to:

**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.

Refer to: @.claude/agents/action-planning-agent.md for:
- TDD Task Decomposition Standards
- Red-Green-Refactor Cycle Requirements
- Quantification Requirements (MANDATORY)
- 5-Field Task JSON Schema
- IMPL_PLAN.md Structure (TDD variant)
- TODO_LIST.md Format
- TDD Execution Flow & Quality Validation

### TDD-Specific Requirements Summary

#### Task Structure Philosophy

@@ -14,7 +14,7 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.

**Agent**: `test-context-search-agent` (`.claude/agents/test-context-search-agent.md`)

## Core Philosophy

@@ -89,7 +89,6 @@ Task(
run_in_background=false,
description="Gather test coverage context",
prompt=`
You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).

## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
@@ -229,7 +228,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
## Notes

- **Detection-first**: Always check for existing test-context-package before invoking agent
- **Agent autonomy**: Agent handles all coverage analysis logic per `.claude/agents/test-context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
- **Coverage focus**: Primary goal is identifying implementation files without tests

@@ -107,8 +107,6 @@ CRITICAL:
- Follow the progressive loading strategy defined in your agent specification (load context incrementally from memory-first approach)

## AGENT CONFIGURATION REFERENCE
All test task generation rules, schemas, and quality standards are defined in your agent specification:
@.claude/agents/action-planning-agent.md

Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)

@@ -806,8 +806,6 @@ Use `analysis_results.complexity` or task count to determine structure:
**Examples**:
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`

### 3.2 Planning & Organization Standards

@@ -400,7 +400,7 @@ Task(subagent_type="{meta.agent}",
1. Read complete task JSON: {session.task_json_path}
2. Load context package: {session.context_package_path}

Follow complete execution guidelines in @.claude/agents/{meta.agent}.md

**Session Paths**:
- Workflow Dir: {session.workflow_dir}

@@ -15,7 +15,6 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `context-search-agent` to gather comprehensive project context for implementation planning. Generates standardized `context-package.json` with codebase analysis, dependencies, and conflict detection.

**Agent**: `context-search-agent` (`.claude/agents/context-search-agent.md`)

## Core Philosophy

@@ -429,6 +428,6 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {

- **Detection-first**: Always check for existing package before invoking agent
- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

@@ -238,14 +238,7 @@ If conflict_risk was medium/high, modifications have been applied to:

**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.

Refer to: @.claude/agents/action-planning-agent.md for:
- TDD Task Decomposition Standards
- Red-Green-Refactor Cycle Requirements
- Quantification Requirements (MANDATORY)
- 5-Field Task JSON Schema
- IMPL_PLAN.md Structure (TDD variant)
- TODO_LIST.md Format
- TDD Execution Flow & Quality Validation

### TDD-Specific Requirements Summary

@@ -14,8 +14,6 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.

**Agent**: `test-context-search-agent` (`.claude/agents/test-context-search-agent.md`)

## Core Philosophy

- **Agent Delegation**: Delegate all test coverage analysis to `test-context-search-agent` for autonomous execution
@@ -88,7 +86,6 @@ Task(
subagent_type="test-context-search-agent",
description="Gather test coverage context",
prompt=`
You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).

## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
@@ -228,7 +225,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
## Notes

- **Detection-first**: Always check for existing test-context-package before invoking agent
- **Agent autonomy**: Agent handles all coverage analysis logic per `.claude/agents/test-context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
- **Coverage focus**: Primary goal is identifying implementation files without tests

@@ -106,8 +106,6 @@ CRITICAL:
- Follow the progressive loading strategy defined in your agent specification (load context incrementally from memory-first approach)

## AGENT CONFIGURATION REFERENCE
All test task generation rules, schemas, and quality standards are defined in your agent specification:
@.claude/agents/action-planning-agent.md

Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)
150
.claude/skills/copyright-docs/phases/01.5-project-exploration.md
Normal file
@@ -0,0 +1,150 @@
# Phase 1.5: Project Exploration

Based on the metadata, launch parallel exploration agents to gather code information.

## Execution

### Step 1: Intelligent Angle Selection

```javascript
// Select exploration angles based on software category
const ANGLE_PRESETS = {
  'CLI': ['architecture', 'commands', 'algorithms', 'exceptions'],
  'API': ['architecture', 'endpoints', 'data-structures', 'interfaces'],
  'SDK': ['architecture', 'interfaces', 'data-structures', 'algorithms'],
  'DataProcessing': ['architecture', 'algorithms', 'data-structures', 'dataflow'],
  'Automation': ['architecture', 'algorithms', 'exceptions', 'dataflow']
};

// Map metadata.category to a preset key (category values may be Chinese)
function getCategoryKey(category) {
  if (category.includes('CLI') || category.includes('命令行')) return 'CLI';
  if (category.includes('API') || category.includes('后端')) return 'API';
  if (category.includes('SDK') || category.includes('库')) return 'SDK';
  if (category.includes('数据处理')) return 'DataProcessing';
  if (category.includes('自动化')) return 'Automation';
  return 'API'; // default
}

const categoryKey = getCategoryKey(metadata.category);
const selectedAngles = ANGLE_PRESETS[categoryKey];

console.log(`
## Exploration Plan

Software: ${metadata.software_name}
Category: ${metadata.category} → ${categoryKey}
Selected Angles: ${selectedAngles.join(', ')}

Launching ${selectedAngles.length} parallel explorations...
`);
```

### Step 2: Launch Parallel Agents (Direct Output)

**⚠️ CRITICAL**: Agents write output files directly.

```javascript
const explorationTasks = selectedAngles.map((angle, index) =>
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Explore: ${angle}`,
    prompt: `
## Exploration Objective
Perform **${angle}** exploration for the CPCC software copyright application documents.

## Assigned Context
- **Exploration Angle**: ${angle}
- **Software Name**: ${metadata.software_name}
- **Scope Path**: ${metadata.scope_path}
- **Category**: ${metadata.category}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json

## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files)
3. Analyze from ${angle} perspective

## Exploration Strategy (${angle} focus)

**Step 1: Structural Scan**
- Identify modules and files related to ${angle}
- Analyze import/export relationships

**Step 2: Pattern Recognition**
- Design patterns related to ${angle}
- Code organization

**Step 3: Write Output**
- Write JSON to the specified path

## Expected Output Schema

**File**: ${sessionFolder}/exploration-${angle}.json

\`\`\`json
{
  "angle": "${angle}",
  "findings": {
    "structure": [
      { "component": "...", "type": "module|layer|service", "path": "...", "description": "..." }
    ],
    "patterns": [
      { "name": "...", "usage": "...", "files": ["path1", "path2"] }
    ],
    "key_files": [
      { "path": "src/file.ts", "relevance": 0.85, "rationale": "Core ${angle} logic" }
    ]
  },
  "insights": [
    { "observation": "...", "cpcc_section": "2|3|4|5|6|7", "recommendation": "..." }
  ],
  "_metadata": {
    "exploration_angle": "${angle}",
    "exploration_index": ${index + 1},
    "software_name": "${metadata.software_name}",
    "timestamp": "ISO8601"
  }
}
\`\`\`

## Success Criteria
- [ ] get_modules_by_depth executed
- [ ] At least 3 relevant files identified
- [ ] patterns include concrete code examples
- [ ] insights linked to CPCC sections (2-7)
- [ ] JSON written to the specified path
- [ ] Return: 2-3 sentence summary of ${angle} findings
`
  })
);

// Execute all exploration tasks in parallel
```

## Output

Session folder structure after exploration:

```
${sessionFolder}/
├── exploration-architecture.json
├── exploration-{angle2}.json
├── exploration-{angle3}.json
└── exploration-{angle4}.json
```

## Downstream Usage (Phase 2 Analysis Input)

Phase 2 agents read exploration files as context:

```javascript
// Discover exploration files by known angle pattern
const explorationData = {};
selectedAngles.forEach(angle => {
  const filePath = `${sessionFolder}/exploration-${angle}.json`;
  explorationData[angle] = JSON.parse(Read(filePath));
});
```

@@ -5,15 +5,161 @@
> **Template reference**: [../templates/agent-base.md](../templates/agent-base.md)
> **Spec reference**: [../specs/cpcc-requirements.md](../specs/cpcc-requirements.md)

## Agent Execution Prerequisites
## Exploration → Agent Auto-Assignment

**Each agent must first read the following spec files**:
Automatically assign the matching analysis agent based on the exploration file names generated in Phase 1.5.

### Mapping Rules

```javascript
// First step when the agent starts
const specs = {
  cpcc: Read(`${skillRoot}/specs/cpcc-requirements.md`)
// Exploration angle → agent mapping (identified by filename; content is not read)
const EXPLORATION_TO_AGENT = {
  'architecture': 'architecture',
  'commands': 'functions',          // CLI commands → function modules
  'endpoints': 'interfaces',        // API endpoints → interface design
  'algorithms': 'algorithms',
  'data-structures': 'data_structures',
  'dataflow': 'data_structures',    // data flow → data structures
  'interfaces': 'interfaces',
  'exceptions': 'exceptions'
};

// Extract the angle from a filename
function extractAngle(filename) {
  // exploration-architecture.json → architecture
  const match = filename.match(/exploration-(.+)\.json$/);
  return match ? match[1] : null;
}

// Assign an agent
function assignAgent(explorationFile) {
  const angle = extractAngle(path.basename(explorationFile));
  return EXPLORATION_TO_AGENT[angle] || null;
}

// Agent configs (used by buildAgentPrompt)
const AGENT_CONFIGS = {
  architecture: {
    role: 'System architect focused on layered design and module dependencies',
    section: '2',
    output: 'section-2-architecture.md',
    focus: 'Layered structure, module dependencies, data flow direction'
  },
  functions: {
    role: 'Feature analyst focused on feature identification and interactions',
    section: '3',
    output: 'section-3-functions.md',
    focus: 'Feature enumeration, module grouping, entry files, feature interactions'
  },
  algorithms: {
    role: 'Algorithm engineer focused on core logic and complexity analysis',
    section: '4',
    output: 'section-4-algorithms.md',
    focus: 'Core algorithms, process steps, complexity, inputs and outputs'
  },
  data_structures: {
    role: 'Data modeler focused on entity relationships and type definitions',
    section: '5',
    output: 'section-5-data-structures.md',
    focus: 'Entity definitions, attribute types, relationship mappings, enums'
  },
  interfaces: {
    role: 'API designer focused on interface contracts and protocols',
    section: '6',
    output: 'section-6-interfaces.md',
    focus: 'API endpoints, parameter validation, response formats, sequencing'
  },
  exceptions: {
    role: 'Reliability engineer focused on exception handling and recovery strategies',
    section: '7',
    output: 'section-7-exceptions.md',
    focus: 'Exception types, error codes, handling patterns, recovery strategies'
  }
};
```

### Auto-Discovery and Assignment Flow

```javascript
// 1. Discover all exploration files (filenames only)
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

// 2. Auto-assign agents by filename
const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return {
    exploration_file: file,
    angle: angle,
    agent: agentName,
    output_file: AGENT_CONFIGS[agentName]?.output
  };
}).filter(a => a.agent);

// 3. Backfill required agents not covered by any exploration (assign a related exploration)
const coveredAgents = new Set(agentAssignments.map(a => a.agent));
const requiredAgents = ['architecture', 'functions', 'algorithms', 'data_structures', 'interfaces', 'exceptions'];
const missingAgents = requiredAgents.filter(a => !coveredAgents.has(a));

// Relatedness mapping: assign the most relevant exploration to each missing agent
const RELATED_EXPLORATIONS = {
  architecture: ['architecture', 'dataflow', 'interfaces'],
  functions: ['commands', 'endpoints', 'architecture'],
  algorithms: ['algorithms', 'dataflow', 'architecture'],
  data_structures: ['data-structures', 'dataflow', 'architecture'],
  interfaces: ['interfaces', 'endpoints', 'architecture'],
  exceptions: ['exceptions', 'algorithms', 'architecture']
};

function findRelatedExploration(agent, availableFiles) {
  const preferences = RELATED_EXPLORATIONS[agent] || ['architecture'];
  for (const pref of preferences) {
    const match = availableFiles.find(f => f.includes(`exploration-${pref}.json`));
    if (match) return { file: match, angle: pref, isRelated: true };
  }
  // Final fallback: any exploration is better than none
  return availableFiles.length > 0
    ? { file: availableFiles[0], angle: extractAngle(path.basename(availableFiles[0])), isRelated: true }
    : { file: null, angle: null, isRelated: false };
}

missingAgents.forEach(agent => {
  const related = findRelatedExploration(agent, explorationFiles);
  agentAssignments.push({
    exploration_file: related.file,
    angle: related.angle,
    agent: agent,
    output_file: AGENT_CONFIGS[agent].output,
    is_related: related.isRelated  // marked as related rather than a direct match
  });
});

console.log(`
## Agent Auto-Assignment

Found ${explorationFiles.length} exploration files:
${agentAssignments.map(a => {
  if (!a.exploration_file) return `- ${a.agent} agent (no exploration)`;
  if (a.is_related) return `- ${a.agent} agent ← ${a.angle} (related)`;
  return `- ${a.agent} agent ← ${a.angle} (direct)`;
}).join('\n')}
`);
```

---

## Agent Execution Prerequisites

**Each agent receives the exploration file path and reads the content itself**:

```javascript
// The agent prompt contains the file path
// Order of operations after the agent starts:
// 1. Read the exploration file (if any)
// 2. Read the CPCC spec files
// 3. Execute the analysis task
```

Spec file paths (relative to the skill root):
@@ -47,26 +193,90 @@ const specs = {
## Execution Flow

```javascript
// 1. Prepare directories
// 1. Discover exploration files and auto-assign agents
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return { exploration_file: file, angle, agent: agentName };
}).filter(a => a.agent);

// Backfill required agents
const coveredAgents = new Set(agentAssignments.map(a => a.agent));
const requiredAgents = ['architecture', 'functions', 'algorithms', 'data_structures', 'interfaces', 'exceptions'];
requiredAgents.filter(a => !coveredAgents.has(a)).forEach(agent => {
  agentAssignments.push({ exploration_file: null, angle: null, agent });
});

// 2. Prepare directories
Bash(`mkdir -p ${outputDir}/sections`);

// 2. Launch 6 agents in parallel
const results = await Promise.all([
  launchAgent('architecture', metadata, outputDir),
  launchAgent('functions', metadata, outputDir),
  launchAgent('algorithms', metadata, outputDir),
  launchAgent('data_structures', metadata, outputDir),
  launchAgent('interfaces', metadata, outputDir),
  launchAgent('exceptions', metadata, outputDir)
]);
// 3. Launch all agents in parallel (passing exploration file paths)
const results = await Promise.all(
  agentAssignments.map(assignment =>
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Analyze: ${assignment.agent}`,
      prompt: buildAgentPrompt(assignment, metadata, outputDir)
    })
  )
);

// 3. Collect returned info
// 4. Collect returned info
const summaries = results.map(r => JSON.parse(r));

// 4. Pass to Phase 2.5
// 5. Pass to Phase 2.5
return { summaries, cross_notes: summaries.flatMap(s => s.cross_module_notes) };
```

### Agent Prompt Construction

```javascript
function buildAgentPrompt(assignment, metadata, outputDir) {
  const config = AGENT_CONFIGS[assignment.agent];
  let contextSection = '';

  if (assignment.exploration_file) {
    const matchType = assignment.is_related ? 'related' : 'direct match';
    contextSection = `[CONTEXT]
**Exploration file**: ${assignment.exploration_file}
**Match type**: ${matchType}
First read this file to obtain the ${assignment.angle} exploration results as analysis context.
${assignment.is_related ? `Note: this is a related exploration result (not a direct match); extract the information relevant to ${config.focus}.` : ''}
`;
  }

  return `
${contextSection}
[SPEC]
Read the spec file:
- Read: ${skillRoot}/specs/cpcc-requirements.md

[ROLE] ${config.role}

[TASK]
Analyze ${metadata.scope_path} and generate Section ${config.section}.
Output: ${outputDir}/sections/${config.output}

[CPCC_SPEC]
- Content based on code analysis, no speculation
- Figure numbering: 图${config.section}-1, 图${config.section}-2...
- Each subsection ≥100 characters
- Include file path references

[FOCUS]
${config.focus}

[RETURN JSON]
{"status":"completed","output_file":"${config.output}","summary":"<50 chars>","cross_module_notes":[],"stats":{}}
`;
}
```

---

## Agent Prompts
@@ -1,75 +1,176 @@
# Phase 2: Project Exploration

Launch parallel exploration agents based on report type.
Launch parallel exploration agents based on report type and task context.

## Execution

### Step 1: Map Exploration Angles
### Step 1: Intelligent Angle Selection

```javascript
const angleMapping = {
  architecture: ["Layer Structure", "Module Dependencies", "Entry Points", "Data Flow"],
  design: ["Design Patterns", "Class Relationships", "Interface Contracts", "State Management"],
  methods: ["Core Algorithms", "Critical Paths", "Public APIs", "Complex Logic"],
  comprehensive: ["Layer Structure", "Design Patterns", "Core Algorithms", "Data Flow"]
// Angle presets based on report type (adapted from lite-plan.md)
const ANGLE_PRESETS = {
  architecture: ['layer-structure', 'module-dependencies', 'entry-points', 'data-flow'],
  design: ['design-patterns', 'class-relationships', 'interface-contracts', 'state-management'],
  methods: ['core-algorithms', 'critical-paths', 'public-apis', 'complex-logic'],
  comprehensive: ['architecture', 'patterns', 'dependencies', 'integration-points']
};

const angles = angleMapping[config.type];
// Depth-based angle count
const angleCount = {
  shallow: 2,
  standard: 3,
  deep: 4
};

function selectAngles(reportType, depth) {
  const preset = ANGLE_PRESETS[reportType] || ANGLE_PRESETS.comprehensive;
  const count = angleCount[depth] || 3;
  return preset.slice(0, count);
}

const selectedAngles = selectAngles(config.type, config.depth);

console.log(`
## Exploration Plan

Report Type: ${config.type}
Depth: ${config.depth}
Selected Angles: ${selectedAngles.join(', ')}

Launching ${selectedAngles.length} parallel explorations...
`);
```

### Step 2: Launch Parallel Agents
### Step 2: Launch Parallel Agents (Direct Output)

For each angle, launch an exploration agent:
**⚠️ CRITICAL**: Agents write output files directly. No aggregation needed.

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: `Explore: ${angle}`,
  prompt: `
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,  // ⚠️ MANDATORY: Must wait for results
    description: `Explore: ${angle}`,
    prompt: `
## Exploration Objective
Execute **${angle}** exploration for project analysis report.
Execute **${angle}** exploration for ${config.type} project analysis report.

## Context
- **Angle**: ${angle}
## Assigned Context
- **Exploration Angle**: ${angle}
- **Report Type**: ${config.type}
- **Depth**: ${config.depth}
- **Scope**: ${config.scope}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json

## Exploration Protocol
1. Structural Discovery (get_modules_by_depth, rg, glob)
2. Pattern Recognition (conventions, naming, organization)
3. Relationship Mapping (dependencies, integration points)
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files)
3. Analyze project from ${angle} perspective

## Output Format
## Exploration Strategy (${angle} focus)

**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh → identify modules related to ${angle}
- find/rg → locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective

**Step 2: Semantic Analysis** (Gemini/Qwen CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Identify key architectural decisions related to ${angle}

**Step 3: Write Output Directly**
- Consolidate ${angle} findings into JSON
- Write to output file path specified above

## Expected Output Schema

**File**: ${sessionFolder}/exploration-${angle}.json

\`\`\`json
{
  "angle": "${angle}",
  "findings": {
    "structure": [...],
    "patterns": [...],
    "relationships": [...],
    "key_files": [{path, relevance, rationale}]
    "structure": [
      { "component": "...", "type": "module|layer|service", "description": "..." }
    ],
    "patterns": [
      { "name": "...", "usage": "...", "files": ["path1", "path2"] }
    ],
    "relationships": [
      { "from": "...", "to": "...", "type": "depends|imports|calls", "strength": "high|medium|low" }
    ],
    "key_files": [
      { "path": "src/file.ts", "relevance": 0.85, "rationale": "Core ${angle} logic" }
    ]
  },
  "insights": [...]
  "insights": [
    { "observation": "...", "impact": "high|medium|low", "recommendation": "..." }
  ],
  "_metadata": {
    "exploration_angle": "${angle}",
    "exploration_index": ${index + 1},
    "report_type": "${config.type}",
    "timestamp": "ISO8601"
  }
}
\`\`\`

## Success Criteria
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Relationships include concrete file references
- [ ] JSON output written to ${sessionFolder}/exploration-${angle}.json
- [ ] Return: 2-3 sentence summary of ${angle} findings
`
})
```
  })
);

### Step 3: Aggregate Results

Merge all exploration results into unified findings:

```javascript
const aggregatedFindings = {
  structure: [],      // from all angles
  patterns: [],       // from all angles
  relationships: [],  // from all angles
  key_files: [],      // deduplicated
  insights: []        // prioritized
};
// Execute all exploration tasks in parallel
```

## Output

Save exploration results to `exploration-{angle}.json` files.
Session folder structure after exploration:

```
${sessionFolder}/
├── exploration-{angle1}.json    # Agent 1 direct output
├── exploration-{angle2}.json    # Agent 2 direct output
├── exploration-{angle3}.json    # Agent 3 direct output (if applicable)
└── exploration-{angle4}.json    # Agent 4 direct output (if applicable)
```

## Downstream Usage (Phase 3 Analysis Input)

Subsequent analysis phases MUST read exploration outputs as input:

```javascript
// Discover exploration files by known angle pattern
const explorationData = {};
selectedAngles.forEach(angle => {
  const filePath = `${sessionFolder}/exploration-${angle}.json`;
  explorationData[angle] = JSON.parse(Read(filePath));
});

// Pass to analysis agent
Task({
  subagent_type: "analysis-agent",
  prompt: `
## Analysis Input

### Exploration Data by Angle
${Object.entries(explorationData).map(([angle, data]) => `
#### ${angle}
${JSON.stringify(data, null, 2)}
`).join('\n')}

## Analysis Task
Synthesize findings from all exploration angles...
`
});
```
@@ -5,16 +5,176 @@

> **Spec reference**: [../specs/quality-standards.md](../specs/quality-standards.md)
> **Writing style**: [../specs/writing-style.md](../specs/writing-style.md)

## Exploration → Agent Auto-Assignment

Assign each analysis agent automatically from the exploration file names produced in Phase 2.

### Mapping Rules

```javascript
// Exploration angle → agent mapping (identified by file name only; contents are not read)
const EXPLORATION_TO_AGENT = {
  // Architecture report angles
  'layer-structure': 'layers',
  'module-dependencies': 'dependencies',
  'entry-points': 'entrypoints',
  'data-flow': 'dataflow',

  // Design report angles
  'design-patterns': 'patterns',
  'class-relationships': 'classes',
  'interface-contracts': 'interfaces',
  'state-management': 'state',

  // Methods report angles
  'core-algorithms': 'algorithms',
  'critical-paths': 'paths',
  'public-apis': 'apis',
  'complex-logic': 'logic',

  // Comprehensive angles
  'architecture': 'overview',
  'patterns': 'patterns',
  'dependencies': 'dependencies',
  'integration-points': 'entrypoints'
};

// Extract the angle from a file name
function extractAngle(filename) {
  // exploration-layer-structure.json → layer-structure
  const match = filename.match(/exploration-(.+)\.json$/);
  return match ? match[1] : null;
}

// Assign an agent
function assignAgent(explorationFile) {
  const angle = extractAngle(path.basename(explorationFile));
  return EXPLORATION_TO_AGENT[angle] || null;
}

// Agent configurations (used by buildAgentPrompt)
const AGENT_CONFIGS = {
  overview: {
    role: 'Chief Systems Architect',
    task: 'From the codebase as a whole, write the "Overall Architecture" chapter, surfacing the core value proposition and top-level technical decisions',
    focus: 'Domain boundaries and positioning, architectural paradigm, core technical decisions, top-level module breakdown',
    constraint: 'Avoid listing directory structures; emphasize design intent and include at least one Mermaid architecture diagram'
  },
  layers: {
    role: 'Senior Software Designer',
    task: 'Analyze the logical layering of the system and write the "Logical Viewpoint and Layered Architecture" chapter',
    focus: 'Responsibility allocation, data flow direction and constraints, boundary isolation strategy, exception handling flow',
    constraint: 'Do not enumerate specific file names; focus on inter-layer contracts and the craft of isolation'
  },
  dependencies: {
    role: 'Integration Architecture Expert',
    task: 'Examine the external connections and internal coupling of the system and write the "Dependency Management and Ecosystem Integration" chapter',
    focus: 'External integration topology, core dependency analysis, dependency injection and inversion of control, supply chain security',
    constraint: 'Do not simply list dependency configuration; analyze the integration strategy and risk-control model'
  },
  dataflow: {
    role: 'Data Architect',
    task: 'Trace how data moves through the system and write the "Data Flow and State Management" chapter',
    focus: 'Data entry and exit points, transformation pipelines, persistence strategy, consistency guarantees',
    constraint: 'Focus on the data lifecycle and how data changes shape; do not list database table schemas'
  },
  entrypoints: {
    role: 'System Boundary Analyst',
    task: 'Identify entry point design and critical paths and write the "System Entry Points and Call Chains" chapter',
    focus: 'Entry types and responsibilities, request processing pipeline, key business paths, exception and boundary handling',
    constraint: 'Focus on the entry point design philosophy; do not enumerate every endpoint'
  },
  patterns: {
    role: 'Core Development Standards Author',
    task: 'Mine the code for reuse mechanisms and standardized practice and write the "Design Patterns and Engineering Conventions" chapter',
    focus: 'Architecture-level patterns, communication and concurrency patterns, cross-cutting concern implementations, abstraction and reuse strategy',
    constraint: 'Avoid textbook explanations; every pattern must be described in project context'
  },
  classes: {
    role: 'Domain Model Designer',
    task: 'Analyze the type system and domain model and write the "Type System and Domain Modeling" chapter',
    focus: 'Domain model design, inheritance versus composition strategy, responsibility assignment principles, type safety and constraints',
    constraint: 'Focus on the modeling ideas; use UML class diagrams to illustrate the key relationships'
  },
  interfaces: {
    role: 'Contract Design Expert',
    task: 'Analyze interface design and abstraction levels and write the "Interface Contracts and Abstraction Design" chapter',
    focus: 'Abstraction level design, separation of contract and implementation, extension point design, version evolution strategy',
    constraint: 'Focus on the interface design philosophy; do not enumerate method signatures one by one'
  },
  state: {
    role: 'State Management Architect',
    task: 'Analyze the state management mechanisms and write the "State Management and Lifecycle" chapter',
    focus: 'State model design, state lifecycle, concurrency and consistency, state recovery and fault tolerance',
    constraint: 'Focus on state management design decisions; do not list individual variable names'
  },
  algorithms: {
    role: 'Algorithm Architect',
    task: 'Analyze the core algorithm design and write the "Core Algorithms and Computational Model" chapter',
    focus: 'Algorithm selection and trade-offs, computational model design, performance and scalability, correctness guarantees',
    constraint: 'Focus on the algorithmic ideas; use flowcharts to explain complex logic'
  },
  paths: {
    role: 'Performance Architect',
    task: 'Analyze the critical execution paths and write the "Critical Paths and Performance Design" chapter',
    focus: 'Key business paths, performance-sensitive areas, bottleneck identification and mitigation, degradation and circuit breaking',
    constraint: 'Focus on the strategic considerations behind path design; do not list every code execution step'
  },
  apis: {
    role: 'API Design Standards Expert',
    task: 'Analyze the external API design conventions and write the "API Design and Conventions" chapter',
    focus: 'API design style, naming and structural conventions, version management strategy, error handling conventions',
    constraint: 'Focus on conventions and consistency; do not enumerate every API endpoint'
  },
  logic: {
    role: 'Business Logic Architect',
    task: 'Analyze the business logic modeling and write the "Business Logic and Rules Engine" chapter',
    focus: 'Business rule modeling, decision point design, edge condition handling, business process orchestration',
    constraint: 'Focus on how the business logic is organized; do not explain the code line by line'
  }
};
```
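
For example (file names illustrative):

```javascript
extractAngle('exploration-layer-structure.json');           // → 'layer-structure'
assignAgent('/session/exploration-layer-structure.json');   // → 'layers'
assignAgent('/session/exploration-unknown-angle.json');     // → null (unmapped)
```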

### Auto-Discovery and Assignment Flow

```javascript
// 1. Discover all exploration files (by file name only)
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

// 2. Assign agents automatically by file name
const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return {
    exploration_file: file,
    angle: angle,
    agent: agentName,
    output_file: `section-${agentName}.md`
  };
}).filter(a => a.agent); // drop unmapped angles

console.log(`
## Agent Auto-Assignment

Found ${explorationFiles.length} exploration files:
${agentAssignments.map(a => `- ${a.angle} → ${a.agent} agent`).join('\n')}
`);
```

---

## Agent Execution Preconditions

**Each agent receives an exploration file path and reads the content itself**:

```javascript
// The agent prompt contains the file path
// Agent startup sequence:
// 1. Read the exploration file (context input)
// 2. Read the spec files
// 3. Run the analysis task
```

Spec file paths (relative to the skill root):
@@ -617,15 +777,30 @@ Task({

## Execution Flow

```javascript
// 1. Discover exploration files and auto-assign agents
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return { exploration_file: file, angle, agent: agentName };
}).filter(a => a.agent);

// 2. Prepare output directories
Bash(`mkdir "${outputDir}\\sections"`);

// 3. Launch all agents in parallel (passing exploration file paths)
const results = await Promise.all(
  agentAssignments.map(assignment =>
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Analyze: ${assignment.agent}`,
      prompt: buildAgentPrompt(assignment, config, outputDir)
    })
  )
);

// 4. Collect brief return summaries
@@ -635,6 +810,45 @@ const summaries = results.map(r => JSON.parse(r));
return { summaries, cross_notes: summaries.flatMap(s => s.cross_module_notes) };
```

### Building the Agent Prompt

```javascript
function buildAgentPrompt(assignment, config, outputDir) {
  const agentConfig = AGENT_CONFIGS[assignment.agent];
  return `
[CONTEXT]
**Exploration file**: ${assignment.exploration_file}
Read this file first to obtain the ${assignment.angle} exploration results as analysis context.

[SPEC]
Read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md

[ROLE] ${agentConfig.role}

[TASK]
${agentConfig.task}
Output: ${outputDir}/sections/section-${assignment.agent}.md

[STYLE]
- Rigorous, professional technical writing in Chinese, keeping technical terms in English
- Strictly objective third-person voice; never "we" or "the developer"
- Paragraph-based narration using a claim-evidence-conclusion structure
- Use logical connectives to surface the design reasoning

[FOCUS]
${agentConfig.focus}

[CONSTRAINT]
${agentConfig.constraint}

[RETURN JSON]
{"status":"completed","output_file":"section-${assignment.agent}.md","summary":"<50 chars>","cross_module_notes":[],"stats":{}}
`;
}
```
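
A single assignment flows through the builder like this (values illustrative):

```javascript
const prompt = buildAgentPrompt(
  {
    exploration_file: `${sessionFolder}/exploration-layer-structure.json`,
    angle: 'layer-structure',
    agent: 'layers'
  },
  config,
  outputDir
);
// prompt now embeds the 'layers' role/task/focus/constraint from AGENT_CONFIGS
```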

## Output

Each agent writes `sections/section-xxx.md` and returns a brief JSON summary for Phase 3.5 consolidation.

@@ -4,6 +4,29 @@

> **Writing conventions**: [../specs/writing-style.md](../specs/writing-style.md)

## Execution Requirements

**Mandatory**: after all Phase 3 analysis agents complete, the main orchestrator **must** invoke this Consolidation Agent.

**Trigger conditions**:
- All Phase 3 agents have returned results (status: completed/partial/failed)
- The `sections/section-*.md` files have been generated

**Input sources**:
- `agent_summaries`: the JSON returned by each Phase 3 agent (contains status, output_file, summary, cross_module_notes)
- `cross_module_notes`: the array of cross-module notes extracted from the agent returns

**Invocation timing**:
```javascript
// After Phase 3 completes, the main orchestrator runs:
const phase3Results = await runPhase3Agents(); // run all analysis agents in parallel
const agentSummaries = phase3Results.map(r => JSON.parse(r));
const crossNotes = agentSummaries.flatMap(s => s.cross_module_notes || []);

// The Phase 3.5 Consolidation Agent must then be invoked
await runPhase35Consolidation(agentSummaries, crossNotes);
```

## Core Responsibilities

1. **Cross-section synthesis**: generate the synthesis (report overview)

@@ -22,7 +45,9 @@ interface ConsolidationInput {
}
```

## Agent Invocation Code

The main orchestrator invokes the Consolidation Agent with:

```javascript
Task({
@@ -0,0 +1,136 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Task JSONL Schema",
  "description": "Schema for individual task entries in tasks.jsonl file",
  "type": "object",
  "required": ["id", "title", "type", "description", "depends_on", "delivery_criteria", "status", "current_phase", "executor"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Unique task identifier (e.g., TASK-001)",
      "pattern": "^TASK-[0-9]+$"
    },
    "title": {
      "type": "string",
      "description": "Short summary of the task",
      "maxLength": 100
    },
    "type": {
      "type": "string",
      "enum": ["feature", "bug", "refactor", "test", "chore", "docs"],
      "description": "Task category"
    },
    "description": {
      "type": "string",
      "description": "Detailed instructions for the task"
    },
    "file_context": {
      "type": "array",
      "items": { "type": "string" },
      "description": "List of relevant files/globs",
      "default": []
    },
    "depends_on": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Array of Task IDs that must complete first",
      "default": []
    },
    "delivery_criteria": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Checklist items that define task completion",
      "minItems": 1
    },
    "pause_criteria": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Conditions that should halt execution (e.g., 'API spec unclear')",
      "default": []
    },
    "status": {
      "type": "string",
      "enum": ["pending", "ready", "in_progress", "completed", "failed", "paused", "skipped"],
      "description": "Current task status",
      "default": "pending"
    },
    "current_phase": {
      "type": "string",
      "enum": ["analyze", "implement", "test", "optimize", "commit", "done"],
      "description": "Current execution phase within the task lifecycle",
      "default": "analyze"
    },
    "executor": {
      "type": "string",
      "enum": ["agent", "codex", "gemini", "auto"],
      "description": "Preferred executor for this task",
      "default": "auto"
    },
    "priority": {
      "type": "integer",
      "minimum": 1,
      "maximum": 5,
      "description": "Task priority (1=highest, 5=lowest)",
      "default": 3
    },
    "phase_results": {
      "type": "object",
      "description": "Results from each execution phase",
      "properties": {
        "analyze": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "findings": { "type": "array", "items": { "type": "string" } },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        },
        "implement": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "files_modified": { "type": "array", "items": { "type": "string" } },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        },
        "test": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "test_results": { "type": "string" },
            "retry_count": { "type": "integer" },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        },
        "optimize": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "improvements": { "type": "array", "items": { "type": "string" } },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        },
        "commit": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "commit_hash": { "type": "string" },
            "message": { "type": "string" },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        }
      }
    },
    "created_at": {
      "type": "string",
      "format": "date-time",
      "description": "Task creation timestamp"
    },
    "updated_at": {
      "type": "string",
      "format": "date-time",
      "description": "Last update timestamp"
    }
  },
  "additionalProperties": false
}
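
A `tasks.jsonl` line conforming to this schema might look like the following (values illustrative):

```javascript
const exampleTask = {
  id: "TASK-001",
  title: "Add issue list endpoint",
  type: "feature",
  description: "Expose GET /api/issues backed by issues.jsonl",
  file_context: ["ccw/src/core/routes/issue-routes.ts"],
  depends_on: [],
  delivery_criteria: ["GET /api/issues returns all issues as JSON"],
  status: "pending",
  current_phase: "analyze",
  executor: "auto",
  priority: 3,
  created_at: "2025-12-27T00:00:00Z"
};
```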
@@ -0,0 +1,74 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issues JSONL Schema",
  "description": "Schema for each line in issues.jsonl (flat storage)",
  "type": "object",
  "required": ["id", "title", "status", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Issue ID (e.g., GH-123, TEXT-xxx)"
    },
    "title": { "type": "string" },
    "status": {
      "type": "string",
      "enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
      "default": "registered"
    },
    "priority": {
      "type": "integer",
      "minimum": 1,
      "maximum": 5,
      "default": 3
    },
    "context": {
      "type": "string",
      "description": "Issue context/description (markdown)"
    },
    "bound_solution_id": {
      "type": "string",
      "description": "ID of the bound solution (null if none bound)"
    },
    "solution_count": {
      "type": "integer",
      "default": 0,
      "description": "Number of candidate solutions in solutions/{id}.jsonl"
    },
    "source": {
      "type": "string",
      "enum": ["github", "text", "file"],
      "description": "Source of the issue"
    },
    "source_url": {
      "type": "string",
      "description": "Original source URL (for GitHub issues)"
    },
    "labels": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Issue labels/tags"
    },
    "created_at": { "type": "string", "format": "date-time" },
    "updated_at": { "type": "string", "format": "date-time" },
    "planned_at": { "type": "string", "format": "date-time" },
    "queued_at": { "type": "string", "format": "date-time" },
    "completed_at": { "type": "string", "format": "date-time" }
  }
}
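
An `issues.jsonl` line conforming to this schema might look like (values illustrative):

```javascript
const exampleIssue = {
  id: "GH-123",
  title: "Dashboard queue view shows stale counts",
  status: "registered",
  priority: 2,
  context: "Counts do not refresh after a task completes.",
  source: "github",
  labels: ["bug"],
  solution_count: 0,
  created_at: "2025-12-27T00:00:00Z"
};
```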
136 .claude/workflows/cli-templates/schemas/queue-schema.json Normal file
@@ -0,0 +1,136 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Execution Queue Schema",
  "description": "Global execution queue for all issue tasks",
  "type": "object",
  "properties": {
    "queue": {
      "type": "array",
      "description": "Ordered list of tasks to execute",
      "items": {
        "type": "object",
        "required": ["queue_id", "issue_id", "solution_id", "task_id", "status"],
        "properties": {
          "queue_id": {
            "type": "string",
            "pattern": "^Q-[0-9]+$",
            "description": "Unique queue item identifier"
          },
          "issue_id": { "type": "string", "description": "Source issue ID" },
          "solution_id": { "type": "string", "description": "Source solution ID" },
          "task_id": { "type": "string", "description": "Task ID within solution" },
          "status": {
            "type": "string",
            "enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
            "default": "pending"
          },
          "execution_order": {
            "type": "integer",
            "description": "Order in execution sequence"
          },
          "execution_group": {
            "type": "string",
            "description": "Parallel execution group ID (e.g., P1, S1)"
          },
          "depends_on": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Queue IDs this task depends on"
          },
          "semantic_priority": {
            "type": "number",
            "minimum": 0,
            "maximum": 1,
            "description": "Semantic importance score (0.0-1.0)"
          },
          "assigned_executor": {
            "type": "string",
            "enum": ["codex", "gemini", "agent"]
          },
          "queued_at": { "type": "string", "format": "date-time" },
          "started_at": { "type": "string", "format": "date-time" },
          "completed_at": { "type": "string", "format": "date-time" },
          "result": {
            "type": "object",
            "description": "Execution result",
            "properties": {
              "files_modified": { "type": "array", "items": { "type": "string" } },
              "files_created": { "type": "array", "items": { "type": "string" } },
              "summary": { "type": "string" },
              "commit_hash": { "type": "string" }
            }
          },
          "failure_reason": { "type": "string" }
        }
      }
    },
    "conflicts": {
      "type": "array",
      "description": "Detected conflicts between tasks",
      "items": {
        "type": "object",
        "properties": {
          "type": {
            "type": "string",
            "enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
          },
          "tasks": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Queue IDs involved in conflict"
          },
          "file": {
            "type": "string",
            "description": "Conflicting file path"
          },
          "resolution": {
            "type": "string",
            "enum": ["sequential", "merge", "manual"]
          },
          "resolution_order": {
            "type": "array",
            "items": { "type": "string" }
          },
          "resolved": {
            "type": "boolean",
            "default": false
          }
        }
      }
    },
    "_metadata": {
      "type": "object",
      "properties": {
        "version": { "type": "string", "default": "1.0" },
        "total_items": { "type": "integer" },
        "pending_count": { "type": "integer" },
        "ready_count": { "type": "integer" },
        "executing_count": { "type": "integer" },
        "completed_count": { "type": "integer" },
        "failed_count": { "type": "integer" },
        "last_queue_formation": { "type": "string", "format": "date-time" },
        "last_updated": { "type": "string", "format": "date-time" }
      }
    }
  }
}
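
A queue item conforming to this schema might look like (values illustrative):

```javascript
const exampleQueueItem = {
  queue_id: "Q-001",
  issue_id: "GH-123",
  solution_id: "SOL-001",
  task_id: "T1",
  status: "pending",
  execution_order: 1,
  execution_group: "P1",
  depends_on: [],
  semantic_priority: 0.8,
  queued_at: "2025-12-27T00:00:00Z"
};
```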
94 .claude/workflows/cli-templates/schemas/registry-schema.json Normal file
@@ -0,0 +1,94 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Registry Schema",
  "description": "Global registry of all issues and their solutions",
  "type": "object",
  "properties": {
    "issues": {
      "type": "array",
      "description": "List of registered issues",
      "items": {
        "type": "object",
        "required": ["id", "title", "status", "created_at"],
        "properties": {
          "id": {
            "type": "string",
            "description": "Issue ID (e.g., GH-123, TEXT-xxx)"
          },
          "title": { "type": "string" },
          "status": {
            "type": "string",
            "enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
            "default": "registered"
          },
          "priority": {
            "type": "integer",
            "minimum": 1,
            "maximum": 5,
            "default": 3
          },
          "solution_count": {
            "type": "integer",
            "default": 0,
            "description": "Number of candidate solutions"
          },
          "bound_solution_id": {
            "type": "string",
            "description": "ID of the bound solution (null if none bound)"
          },
          "source": {
            "type": "string",
            "enum": ["github", "text", "file"],
            "description": "Source of the issue"
          },
          "source_url": {
            "type": "string",
            "description": "Original source URL (for GitHub issues)"
          },
          "created_at": { "type": "string", "format": "date-time" },
          "updated_at": { "type": "string", "format": "date-time" },
          "planned_at": { "type": "string", "format": "date-time" },
          "queued_at": { "type": "string", "format": "date-time" },
          "completed_at": { "type": "string", "format": "date-time" }
        }
      }
    },
    "_metadata": {
      "type": "object",
      "properties": {
        "version": { "type": "string", "default": "1.0" },
        "total_issues": { "type": "integer" },
        "by_status": {
          "type": "object",
          "properties": {
            "registered": { "type": "integer" },
            "planning": { "type": "integer" },
            "planned": { "type": "integer" },
            "queued": { "type": "integer" },
            "executing": { "type": "integer" },
            "completed": { "type": "integer" },
            "failed": { "type": "integer" }
          }
        },
        "last_updated": { "type": "string", "format": "date-time" }
      }
    }
  }
}
120 .claude/workflows/cli-templates/schemas/solution-schema.json Normal file
@@ -0,0 +1,120 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Solution Schema",
  "description": "Schema for solution registered to an issue",
  "type": "object",
  "required": ["id", "issue_id", "tasks", "status", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Unique solution identifier",
      "pattern": "^SOL-[0-9]+$"
    },
    "issue_id": {
      "type": "string",
      "description": "Parent issue ID"
    },
    "plan_session_id": {
      "type": "string",
      "description": "Planning session that created this solution"
    },
    "tasks": {
      "type": "array",
      "description": "Task breakdown for this solution",
      "items": {
        "type": "object",
        "required": ["id", "title", "scope", "action", "acceptance"],
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^T[0-9]+$"
          },
          "title": {
            "type": "string",
            "description": "Action verb + target"
          },
          "scope": {
            "type": "string",
            "description": "Module path or feature area"
          },
          "action": {
            "type": "string",
            "enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
          },
          "description": {
            "type": "string",
            "description": "1-2 sentences describing what to implement"
          },
          "modification_points": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "file": { "type": "string" },
                "target": { "type": "string" },
                "change": { "type": "string" }
              }
            }
          },
          "implementation": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Step-by-step implementation guide"
          },
          "acceptance": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Quantified completion criteria"
          },
          "depends_on": {
            "type": "array",
            "items": { "type": "string" },
            "default": [],
            "description": "Task IDs this task depends on"
          },
          "estimated_minutes": {
            "type": "integer",
            "description": "Estimated time to complete"
          },
          "executor": {
            "type": "string",
            "enum": ["codex", "gemini", "agent", "auto"],
            "default": "auto"
          }
        }
      }
    },
    "exploration_context": {
      "type": "object",
      "description": "ACE exploration results",
      "properties": {
        "project_structure": { "type": "string" },
        "relevant_files": {
          "type": "array",
          "items": { "type": "string" }
        },
        "patterns": { "type": "string" },
        "integration_points": { "type": "string" }
      }
    },
    "status": {
      "type": "string",
      "enum": ["draft", "candidate", "bound", "queued", "executing", "completed", "failed"],
      "default": "draft"
    },
    "is_bound": {
      "type": "boolean",
      "default": false,
      "description": "Whether this solution is bound to the issue"
    },
    "created_at": { "type": "string", "format": "date-time" },
    "bound_at": {
      "type": "string",
      "format": "date-time",
      "description": "When this solution was bound to the issue"
    }
  }
}
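
A minimal solution conforming to this schema might look like (values illustrative):

```javascript
const exampleSolution = {
  id: "SOL-001",
  issue_id: "GH-123",
  status: "draft",
  is_bound: false,
  created_at: "2025-12-27T00:00:00Z",
  tasks: [
    {
      id: "T1",
      title: "Update queue count refresh",
      scope: "dashboard-js/components/",
      action: "Update",
      acceptance: ["Counts refresh within 1s of task completion"],
      depends_on: []
    }
  ]
};
```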
@@ -0,0 +1,125 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Solutions JSONL Schema",
  "description": "Schema for each line in solutions/{issue-id}.jsonl",
  "type": "object",
  "required": ["id", "tasks", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Unique solution identifier",
      "pattern": "^SOL-[0-9]+$"
    },
    "description": {
      "type": "string",
      "description": "Solution approach description"
    },
    "tasks": {
      "type": "array",
      "description": "Task breakdown for this solution",
      "items": {
        "type": "object",
        "required": ["id", "title", "scope", "action", "acceptance"],
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^T[0-9]+$"
          },
          "title": {
            "type": "string",
            "description": "Action verb + target"
          },
          "scope": {
            "type": "string",
            "description": "Module path or feature area"
          },
          "action": {
            "type": "string",
            "enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
          },
          "description": {
            "type": "string",
            "description": "1-2 sentences describing what to implement"
          },
          "modification_points": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "file": { "type": "string" },
                "target": { "type": "string" },
                "change": { "type": "string" }
              }
            }
          },
          "implementation": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Step-by-step implementation guide"
          },
          "acceptance": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Quantified completion criteria"
          },
          "depends_on": {
            "type": "array",
            "items": { "type": "string" },
            "default": [],
            "description": "Task IDs this task depends on"
          },
          "estimated_minutes": {
            "type": "integer",
            "description": "Estimated time to complete"
          },
          "executor": {
            "type": "string",
            "enum": ["codex", "gemini", "agent", "auto"],
            "default": "auto"
          }
        }
      }
    },
    "exploration_context": {
      "type": "object",
      "description": "ACE exploration results",
      "properties": {
        "project_structure": { "type": "string" },
        "relevant_files": {
          "type": "array",
          "items": { "type": "string" }
        },
        "patterns": { "type": "string" },
        "integration_points": { "type": "string" }
      }
    },
    "analysis": {
      "type": "object",
      "properties": {
        "risk": { "type": "string", "enum": ["low", "medium", "high"] },
        "impact": { "type": "string", "enum": ["low", "medium", "high"] },
        "complexity": { "type": "string", "enum": ["low", "medium", "high"] }
      }
    },
    "score": {
      "type": "number",
      "minimum": 0,
      "maximum": 1,
      "description": "Solution quality score (0.0-1.0)"
    },
    "is_bound": {
      "type": "boolean",
      "default": false,
      "description": "Whether this solution is bound to the issue"
    },
    "created_at": { "type": "string", "format": "date-time" },
    "bound_at": {
      "type": "string",
      "format": "date-time",
      "description": "When this solution was bound to the issue"
    }
  }
}
266 .codex/prompts/issue-execute.md Normal file
@@ -0,0 +1,266 @@
---
description: Execute issue queue tasks sequentially with git commit after each task
argument-hint: "[--dry-run]"
---

# Issue Execute (Codex Version)

## Core Principle

**Serial Execution**: Execute tasks ONE BY ONE from the issue queue. Complete each task fully (implement → test → commit) before moving to the next. Continue autonomously until ALL tasks complete or the queue is empty.

## Execution Flow

```
INIT: Fetch first task via ccw issue next

WHILE task exists:
  1. Receive task JSON from ccw issue next
  2. Execute full lifecycle:
     - IMPLEMENT: Follow task.implementation steps
     - TEST: Run task.test commands
     - VERIFY: Check task.acceptance criteria
     - COMMIT: Stage files, commit with task.commit.message_template
  3. Report completion via ccw issue complete <queue_id>
  4. Fetch next task via ccw issue next

WHEN queue empty:
  Output final summary
```

## Step 1: Fetch First Task

Run this command to get your first task:

```bash
ccw issue next
```

This returns JSON with the full task definition:
- `queue_id`: Unique ID for queue tracking (e.g., "Q-001")
- `issue_id`: Parent issue ID (e.g., "ISSUE-20251227-001")
- `task`: Full task definition with implementation steps
- `context`: Relevant files and patterns
- `execution_hints`: Timing and executor hints

If the response contains `{ "status": "empty" }`, all tasks are complete; skip to the final summary.

## Step 2: Parse Task Response

Expected task structure:

```json
{
  "queue_id": "Q-001",
  "issue_id": "ISSUE-20251227-001",
  "solution_id": "SOL-001",
  "task": {
    "id": "T1",
    "title": "Task title",
    "scope": "src/module/",
    "action": "Create|Modify|Fix|Refactor",
    "description": "What to do",
    "modification_points": [
      { "file": "path/to/file.ts", "target": "function name", "change": "description" }
    ],
    "implementation": [
      "Step 1: Do this",
      "Step 2: Do that"
    ],
    "test": {
      "commands": ["npm test -- --filter=xxx"],
      "unit": "Unit test requirements",
      "integration": "Integration test requirements (optional)"
    },
    "acceptance": [
      "Criterion 1: Must pass",
      "Criterion 2: Must verify"
    ],
    "commit": {
      "message_template": "feat(scope): description"
    }
  },
  "context": {
    "relevant_files": ["path/to/reference.ts"],
    "patterns": "Follow existing pattern in xxx"
  }
}
```

## Step 3: Execute Task Lifecycle

### Phase A: IMPLEMENT

1. Read all `context.relevant_files` to understand existing patterns
2. Follow `task.implementation` steps in order
3. Apply changes to `task.modification_points` files
4. Follow `context.patterns` for code style consistency

**Output format:**
```
## Implementing: [task.title]

**Scope**: [task.scope]
**Action**: [task.action]

**Steps**:
1. ✓ [implementation step 1]
2. ✓ [implementation step 2]
...

**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts
```

### Phase B: TEST

1. Run all commands in `task.test.commands`
2. Verify unit tests pass (`task.test.unit`)
3. Run integration tests if specified (`task.test.integration`)

**If tests fail**: Fix the code and re-run. Do NOT proceed until tests pass.

**Output format:**
```
## Testing: [task.title]

**Test Results**:
- [x] Unit tests: PASSED
- [x] Integration tests: PASSED (or N/A)
```

### Phase C: VERIFY

Check all `task.acceptance` criteria are met:

```
## Verifying: [task.title]

**Acceptance Criteria**:
- [x] Criterion 1: Verified
- [x] Criterion 2: Verified
...

All criteria met: YES
```

**If any criterion fails**: Go back to the IMPLEMENT phase and fix.

### Phase D: COMMIT

After all phases pass, commit the changes:

```bash
# Stage all modified files
git add path/to/file1.ts path/to/file2.ts ...

# Commit with task message template
git commit -m "$(cat <<'EOF'
[task.commit.message_template]

Queue-ID: [queue_id]
Issue-ID: [issue_id]
Task-ID: [task.id]
EOF
)"
```

**Output format:**
```
## Committed: [task.title]

**Commit**: [commit hash]
**Message**: [commit message]
**Files**: N files changed
```

## Step 4: Report Completion

After the commit succeeds, report to the queue system:

```bash
ccw issue complete [queue_id] --result '{
  "files_modified": ["path1", "path2"],
  "tests_passed": true,
  "acceptance_passed": true,
  "committed": true,
  "commit_hash": "[actual hash]",
  "summary": "[What was accomplished]"
}'
```

**If the task failed and cannot be fixed:**

```bash
ccw issue fail [queue_id] --reason "Phase [X] failed: [details]"
```

## Step 5: Continue to Next Task

Immediately fetch the next task:

```bash
ccw issue next
```

**Output progress:**
```
✓ [N/M] Completed: [queue_id] - [task.title]
→ Fetching next task...
```

**DO NOT STOP.** Return to Step 2 and continue until the queue is empty.

## Final Summary

When `ccw issue next` returns `{ "status": "empty" }`:

```markdown
## Issue Queue Execution Complete

**Total Tasks Executed**: N
**All Commits**:
| # | Queue ID | Task | Commit |
|---|----------|------|--------|
| 1 | Q-001 | Task title | abc123 |
| 2 | Q-002 | Task title | def456 |

**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts

**Summary**:
[Overall what was accomplished]
```

## Execution Rules

1. **Never stop mid-queue** - Continue until the queue is empty
2. **One task at a time** - Fully complete (including commit) before moving on
3. **Tests MUST pass** - Do not proceed to commit if tests fail
4. **Commit after each task** - Each task gets its own commit
5. **Self-verify** - All acceptance criteria must pass before commit
6. **Report accurately** - Use ccw issue complete/fail after each task
7. **Handle failures gracefully** - If a task fails, report via ccw issue fail and continue to the next

## Error Handling

| Situation | Action |
|-----------|--------|
| ccw issue next returns empty | All done - output final summary |
| Tests fail | Fix code, re-run tests |
| Verification fails | Go back to implement phase |
| Git commit fails | Check staging, retry commit |
| ccw issue complete fails | Log error, continue to next task |
| Unrecoverable error | Call ccw issue fail, continue to next |

## Start Execution

Begin by running:

```bash
ccw issue next
```

Then follow the lifecycle for each task until the queue is empty.
@@ -12,6 +12,7 @@ import { cliCommand } from './commands/cli.js';
import { memoryCommand } from './commands/memory.js';
import { coreMemoryCommand } from './commands/core-memory.js';
import { hookCommand } from './commands/hook.js';
import { issueCommand } from './commands/issue.js';
import { readFileSync, existsSync } from 'fs';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';

@@ -260,5 +261,29 @@ export function run(argv: string[]): void {
    .option('--type <type>', 'Context type: session-start, context')
    .action((subcommand, args, options) => hookCommand(subcommand, args, options));

  // Issue command - Issue lifecycle management with JSONL task tracking
  program
    .command('issue [subcommand] [args...]')
    .description('Issue lifecycle management with JSONL task tracking')
    .option('--title <title>', 'Task title')
    .option('--type <type>', 'Task type: feature, bug, refactor, test, chore, docs')
    .option('--status <status>', 'Task status')
    .option('--phase <phase>', 'Execution phase')
    .option('--description <desc>', 'Task description')
    .option('--depends-on <ids>', 'Comma-separated dependency task IDs')
    .option('--delivery-criteria <items>', 'Pipe-separated delivery criteria')
    .option('--pause-criteria <items>', 'Pipe-separated pause criteria')
    .option('--executor <type>', 'Executor: agent, codex, gemini, auto')
    .option('--priority <n>', 'Task priority (1-5)')
    .option('--format <fmt>', 'Output format: json, markdown')
    .option('--json', 'Output as JSON')
    .option('--force', 'Force operation')
    // New options for solution/queue management
    .option('--solution <path>', 'Solution JSON file path')
    .option('--solution-id <id>', 'Solution ID')
    .option('--result <json>', 'Execution result JSON')
    .option('--reason <text>', 'Failure reason')
    .action((subcommand, args, options) => issueCommand(subcommand, args, options));

  program.parse(argv);
}
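
With this registration in place, the queue commands used by `.codex/prompts/issue-execute.md` dispatch through `issueCommand`, e.g.:

```bash
ccw issue next
ccw issue complete Q-001 --result '{"tests_passed": true}'
ccw issue fail Q-001 --reason "Tests could not be fixed"
```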

1184 ccw/src/commands/issue.ts Normal file
File diff suppressed because it is too large
@@ -21,6 +21,7 @@ const MODULE_FILES = [
  'dashboard-js/components/tabs-other.js',
  'dashboard-js/components/carousel.js',
  'dashboard-js/components/notifications.js',
  'dashboard-js/components/cli-stream-viewer.js',
  'dashboard-js/components/global-notifications.js',
  'dashboard-js/components/cli-status.js',
  'dashboard-js/components/cli-history.js',
559 ccw/src/core/routes/issue-routes.ts Normal file
@@ -0,0 +1,559 @@
// @ts-nocheck
/**
 * Issue Routes Module (Optimized - Flat JSONL Storage)
 *
 * Storage Structure:
 * .workflow/issues/
 * ├── issues.jsonl          # All issues (one per line)
 * ├── queue.json            # Execution queue
 * └── solutions/
 *     ├── {issue-id}.jsonl  # Solutions for issue (one per line)
 *     └── ...
 *
 * API Endpoints (9 total):
 * - GET    /api/issues                   - List all issues
 * - POST   /api/issues                   - Create new issue
 * - GET    /api/issues/:id               - Get issue detail
 * - PATCH  /api/issues/:id               - Update issue (includes binding logic)
 * - DELETE /api/issues/:id               - Delete issue
 * - POST   /api/issues/:id/solutions     - Add solution
 * - PATCH  /api/issues/:id/tasks/:taskId - Update task
 * - GET    /api/queue                    - Get execution queue
 * - POST   /api/queue/reorder            - Reorder queue items
 */
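// Example exchange (illustrative; the dashboard server supplies host/port):
//   GET /api/issues?path=/my/project
//   → { "issues": [...], "_metadata": { "version": "2.0", "storage": "jsonl", ... } }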
import type { IncomingMessage, ServerResponse } from 'http';
import { readFileSync, existsSync, writeFileSync, mkdirSync, unlinkSync } from 'fs';
import { join } from 'path';

export interface RouteContext {
  pathname: string;
  url: URL;
  req: IncomingMessage;
  res: ServerResponse;
  initialPath: string;
  handlePostRequest: (req: IncomingMessage, res: ServerResponse, handler: (body: unknown) => Promise<any>) => void;
  broadcastToClients: (data: unknown) => void;
}

// ========== JSONL Helper Functions ==========

function readIssuesJsonl(issuesDir: string): any[] {
  const issuesPath = join(issuesDir, 'issues.jsonl');
  if (!existsSync(issuesPath)) return [];
  try {
    const content = readFileSync(issuesPath, 'utf8');
    return content.split('\n').filter(line => line.trim()).map(line => JSON.parse(line));
  } catch {
    return [];
  }
}

function writeIssuesJsonl(issuesDir: string, issues: any[]) {
  if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
  const issuesPath = join(issuesDir, 'issues.jsonl');
  writeFileSync(issuesPath, issues.map(i => JSON.stringify(i)).join('\n'));
}

function readSolutionsJsonl(issuesDir: string, issueId: string): any[] {
  const solutionsPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
  if (!existsSync(solutionsPath)) return [];
  try {
    const content = readFileSync(solutionsPath, 'utf8');
    return content.split('\n').filter(line => line.trim()).map(line => JSON.parse(line));
  } catch {
    return [];
  }
}

function writeSolutionsJsonl(issuesDir: string, issueId: string, solutions: any[]) {
  const solutionsDir = join(issuesDir, 'solutions');
  if (!existsSync(solutionsDir)) mkdirSync(solutionsDir, { recursive: true });
  writeFileSync(join(solutionsDir, `${issueId}.jsonl`), solutions.map(s => JSON.stringify(s)).join('\n'));
}
function readQueue(issuesDir: string) {
  // Try new multi-queue structure first
  const queuesDir = join(issuesDir, 'queues');
  const indexPath = join(queuesDir, 'index.json');

  if (existsSync(indexPath)) {
    try {
      const index = JSON.parse(readFileSync(indexPath, 'utf8'));
      const activeQueueId = index.active_queue_id;

      if (activeQueueId) {
        const queueFilePath = join(queuesDir, `${activeQueueId}.json`);
        if (existsSync(queueFilePath)) {
          return JSON.parse(readFileSync(queueFilePath, 'utf8'));
        }
      }
    } catch {
      // Fall through to legacy check
    }
  }

  // Fallback to legacy queue.json
  const legacyQueuePath = join(issuesDir, 'queue.json');
  if (existsSync(legacyQueuePath)) {
    try {
      return JSON.parse(readFileSync(legacyQueuePath, 'utf8'));
    } catch {
      // Return empty queue
    }
  }

  return { queue: [], conflicts: [], execution_groups: [], _metadata: { version: '1.0', total_tasks: 0 } };
}

function writeQueue(issuesDir: string, queue: any) {
  if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
  queue._metadata = { ...queue._metadata, updated_at: new Date().toISOString(), total_tasks: queue.queue?.length || 0 };

  // Check if using new multi-queue structure
  const queuesDir = join(issuesDir, 'queues');
  const indexPath = join(queuesDir, 'index.json');

  if (existsSync(indexPath) && queue.id) {
    // Write to new structure
    const queueFilePath = join(queuesDir, `${queue.id}.json`);
    writeFileSync(queueFilePath, JSON.stringify(queue, null, 2));

    // Update index metadata
    try {
      const index = JSON.parse(readFileSync(indexPath, 'utf8'));
      const queueEntry = index.queues?.find((q: any) => q.id === queue.id);
      if (queueEntry) {
        queueEntry.total_tasks = queue.queue?.length || 0;
        queueEntry.completed_tasks = queue.queue?.filter((i: any) => i.status === 'completed').length || 0;
        writeFileSync(indexPath, JSON.stringify(index, null, 2));
      }
    } catch {
      // Ignore index update errors
    }
  } else {
    // Fallback to legacy queue.json
    writeFileSync(join(issuesDir, 'queue.json'), JSON.stringify(queue, null, 2));
  }
}
function getIssueDetail(issuesDir: string, issueId: string) {
  const issues = readIssuesJsonl(issuesDir);
  const issue = issues.find(i => i.id === issueId);
  if (!issue) return null;

  const solutions = readSolutionsJsonl(issuesDir, issueId);
  let tasks: any[] = [];
  if (issue.bound_solution_id) {
    const boundSol = solutions.find(s => s.id === issue.bound_solution_id);
    if (boundSol?.tasks) tasks = boundSol.tasks;
  }
  return { ...issue, solutions, tasks };
}

function enrichIssues(issues: any[], issuesDir: string) {
  return issues.map(issue => ({
    ...issue,
    solution_count: readSolutionsJsonl(issuesDir, issue.id).length
  }));
}
function groupQueueByExecutionGroup(queue: any) {
  const groups: { [key: string]: any[] } = {};
  for (const item of queue.queue || []) {
    const groupId = item.execution_group || 'ungrouped';
    if (!groups[groupId]) groups[groupId] = [];
    groups[groupId].push(item);
  }
  for (const groupId of Object.keys(groups)) {
    groups[groupId].sort((a, b) => (a.execution_order || 0) - (b.execution_order || 0));
  }
  const executionGroups = Object.entries(groups).map(([id, items]) => ({
    id,
    type: id.startsWith('P') ? 'parallel' : id.startsWith('S') ? 'sequential' : 'unknown',
    task_count: items.length,
    tasks: items.map(i => i.queue_id)
  })).sort((a, b) => {
    const aFirst = groups[a.id]?.[0]?.execution_order || 0;
    const bFirst = groups[b.id]?.[0]?.execution_order || 0;
    return aFirst - bFirst;
  });
  return { ...queue, execution_groups: executionGroups, grouped_items: groups };
}
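
// Example (illustrative): a queue of [Q-1(P1,#1), Q-2(P1,#2), Q-3(S1,#3)]
// yields execution_groups:
//   [{ id: 'P1', type: 'parallel',   task_count: 2, tasks: ['Q-1', 'Q-2'] },
//    { id: 'S1', type: 'sequential', task_count: 1, tasks: ['Q-3'] }]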

/**
 * Bind solution to issue with proper side effects
 */
function bindSolutionToIssue(issuesDir: string, issueId: string, solutionId: string, issues: any[], issueIndex: number) {
  const solutions = readSolutionsJsonl(issuesDir, issueId);
  const solIndex = solutions.findIndex(s => s.id === solutionId);

  if (solIndex === -1) return { error: `Solution ${solutionId} not found` };

  // Unbind all, bind new
  solutions.forEach(s => { s.is_bound = false; });
  solutions[solIndex].is_bound = true;
  solutions[solIndex].bound_at = new Date().toISOString();
  writeSolutionsJsonl(issuesDir, issueId, solutions);

  // Update issue
  issues[issueIndex].bound_solution_id = solutionId;
  issues[issueIndex].status = 'planned';
  issues[issueIndex].planned_at = new Date().toISOString();

  return { success: true, bound: solutionId };
}

// ========== Route Handler ==========

export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
  const { pathname, url, req, res, initialPath, handlePostRequest } = ctx;
  const projectPath = url.searchParams.get('path') || initialPath;
  const issuesDir = join(projectPath, '.workflow', 'issues');

  // ===== Queue Routes (top-level /api/queue) =====

  // GET /api/queue - Get execution queue
  if (pathname === '/api/queue' && req.method === 'GET') {
    const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(queue));
    return true;
  }

  // POST /api/queue/reorder - Reorder queue items
  if (pathname === '/api/queue/reorder' && req.method === 'POST') {
    handlePostRequest(req, res, async (body: any) => {
      const { groupId, newOrder } = body;
      if (!groupId || !Array.isArray(newOrder)) {
        return { error: 'groupId and newOrder (array) required' };
      }

      const queue = readQueue(issuesDir);
      const groupItems = queue.queue.filter((item: any) => item.execution_group === groupId);
      const otherItems = queue.queue.filter((item: any) => item.execution_group !== groupId);

      if (groupItems.length === 0) return { error: `No items in group ${groupId}` };

      const groupQueueIds = new Set(groupItems.map((i: any) => i.queue_id));
      if (groupQueueIds.size !== new Set(newOrder).size) {
        return { error: 'newOrder must contain all group items' };
      }
      for (const id of newOrder) {
        if (!groupQueueIds.has(id)) return { error: `Invalid queue_id: ${id}` };
      }

      const itemMap = new Map(groupItems.map((i: any) => [i.queue_id, i]));
      const reorderedItems = newOrder.map((qid: string, idx: number) => ({ ...itemMap.get(qid), _idx: idx }));
      const newQueue = [...otherItems, ...reorderedItems].sort((a, b) => {
        const aGroup = parseInt(a.execution_group?.match(/\d+/)?.[0] || '999');
        const bGroup = parseInt(b.execution_group?.match(/\d+/)?.[0] || '999');
        if (aGroup !== bGroup) return aGroup - bGroup;
        if (a.execution_group === b.execution_group) {
          return (a._idx ?? a.execution_order ?? 999) - (b._idx ?? b.execution_order ?? 999);
        }
        return (a.execution_order || 0) - (b.execution_order || 0);
      });

      newQueue.forEach((item, idx) => { item.execution_order = idx + 1; delete item._idx; });
      queue.queue = newQueue;
      writeQueue(issuesDir, queue);

      return { success: true, groupId, reordered: newOrder.length };
    });
    return true;
  }

  // Legacy: GET /api/issues/queue (backward compat)
  if (pathname === '/api/issues/queue' && req.method === 'GET') {
    const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(queue));
    return true;
  }

  // ===== Issue Routes =====

  // GET /api/issues - List all issues
  if (pathname === '/api/issues' && req.method === 'GET') {
    const issues = enrichIssues(readIssuesJsonl(issuesDir), issuesDir);
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({
      issues,
      _metadata: { version: '2.0', storage: 'jsonl', total_issues: issues.length, last_updated: new Date().toISOString() }
    }));
    return true;
  }

  // POST /api/issues - Create issue
  if (pathname === '/api/issues' && req.method === 'POST') {
    handlePostRequest(req, res, async (body: any) => {
      if (!body.id || !body.title) return { error: 'id and title required' };

      const issues = readIssuesJsonl(issuesDir);
      if (issues.find(i => i.id === body.id)) return { error: `Issue ${body.id} exists` };

      const newIssue = {
        id: body.id,
        title: body.title,
        status: body.status || 'registered',
        priority: body.priority || 3,
        context: body.context || '',
        source: body.source || 'text',
        source_url: body.source_url || null,
        labels: body.labels || [],
        created_at: new Date().toISOString(),
        updated_at: new Date().toISOString()
      };

      issues.push(newIssue);
      writeIssuesJsonl(issuesDir, issues);
      return { success: true, issue: newIssue };
    });
    return true;
  }

  // GET /api/issues/:id - Get issue detail
  const detailMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
  if (detailMatch && req.method === 'GET') {
    const issueId = decodeURIComponent(detailMatch[1]);
    if (issueId === 'queue') return false;

    const detail = getIssueDetail(issuesDir, issueId);
    if (!detail) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Issue not found' }));
      return true;
    }
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(detail));
    return true;
  }

  // PATCH /api/issues/:id - Update issue (with binding support)
  const updateMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
  if (updateMatch && req.method === 'PATCH') {
    const issueId = decodeURIComponent(updateMatch[1]);
    if (issueId === 'queue') return false;

    handlePostRequest(req, res, async (body: any) => {
      const issues = readIssuesJsonl(issuesDir);
      const issueIndex = issues.findIndex(i => i.id === issueId);
      if (issueIndex === -1) return { error: 'Issue not found' };

      const updates: string[] = [];

      // Handle binding if bound_solution_id provided
      if (body.bound_solution_id !== undefined) {
        if (body.bound_solution_id) {
          const bindResult = bindSolutionToIssue(issuesDir, issueId, body.bound_solution_id, issues, issueIndex);
          if (bindResult.error) return bindResult;
          updates.push('bound_solution_id');
        } else {
          // Unbind
          const solutions = readSolutionsJsonl(issuesDir, issueId);
          solutions.forEach(s => { s.is_bound = false; });
          writeSolutionsJsonl(issuesDir, issueId, solutions);
          issues[issueIndex].bound_solution_id = null;
          updates.push('bound_solution_id (unbound)');
        }
      }

      // Update other fields
      for (const field of ['title', 'context', 'status', 'priority', 'labels']) {
        if (body[field] !== undefined) {
          issues[issueIndex][field] = body[field];
          updates.push(field);
        }
      }

      issues[issueIndex].updated_at = new Date().toISOString();
      writeIssuesJsonl(issuesDir, issues);
      return { success: true, issueId, updated: updates };
    });
    return true;
  }

  // DELETE /api/issues/:id
  const deleteMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
  if (deleteMatch && req.method === 'DELETE') {
    const issueId = decodeURIComponent(deleteMatch[1]);

    const issues = readIssuesJsonl(issuesDir);
    const filtered = issues.filter(i => i.id !== issueId);
    if (filtered.length === issues.length) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Issue not found' }));
      return true;
    }

    writeIssuesJsonl(issuesDir, filtered);

    // Clean up solutions file
    const solPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
    if (existsSync(solPath)) {
      try { unlinkSync(solPath); } catch {}
    }

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ success: true, issueId }));
    return true;
  }

  // POST /api/issues/:id/solutions - Add solution
  const addSolMatch = pathname.match(/^\/api\/issues\/([^/]+)\/solutions$/);
  if (addSolMatch && req.method === 'POST') {
    const issueId = decodeURIComponent(addSolMatch[1]);

    handlePostRequest(req, res, async (body: any) => {
      if (!body.id || !body.tasks) return { error: 'id and tasks required' };

      const solutions = readSolutionsJsonl(issuesDir, issueId);
      if (solutions.find(s => s.id === body.id)) return { error: `Solution ${body.id} exists` };

      const newSolution = {
        id: body.id,
        description: body.description || '',
        tasks: body.tasks,
        exploration_context: body.exploration_context || {},
        analysis: body.analysis || {},
        score: body.score || 0,
        is_bound: false,
        created_at: new Date().toISOString()
      };

      solutions.push(newSolution);
      writeSolutionsJsonl(issuesDir, issueId, solutions);

      // Update issue solution_count
      const issues = readIssuesJsonl(issuesDir);
      const idx = issues.findIndex(i => i.id === issueId);
      if (idx !== -1) {
        issues[idx].solution_count = solutions.length;
        issues[idx].updated_at = new Date().toISOString();
        writeIssuesJsonl(issuesDir, issues);
      }

      return { success: true, solution: newSolution };
    });
    return true;
  }

  // PATCH /api/issues/:id/tasks/:taskId - Update task
  const taskMatch = pathname.match(/^\/api\/issues\/([^/]+)\/tasks\/([^/]+)$/);
  if (taskMatch && req.method === 'PATCH') {
    const issueId = decodeURIComponent(taskMatch[1]);
    const taskId = decodeURIComponent(taskMatch[2]);

    handlePostRequest(req, res, async (body: any) => {
      const issues = readIssuesJsonl(issuesDir);
      const issue = issues.find(i => i.id === issueId);
      if (!issue?.bound_solution_id) return { error: 'Issue or bound solution not found' };

      const solutions = readSolutionsJsonl(issuesDir, issueId);
      const solIdx = solutions.findIndex(s => s.id === issue.bound_solution_id);
      if (solIdx === -1) return { error: 'Bound solution not found' };

      const taskIdx = solutions[solIdx].tasks?.findIndex((t: any) => t.id === taskId);
      if (taskIdx === -1 || taskIdx === undefined) return { error: 'Task not found' };

      const updates: string[] = [];
      for (const field of ['status', 'priority', 'result', 'error']) {
        if (body[field] !== undefined) {
          solutions[solIdx].tasks[taskIdx][field] = body[field];
          updates.push(field);
        }
      }
      solutions[solIdx].tasks[taskIdx].updated_at = new Date().toISOString();
      writeSolutionsJsonl(issuesDir, issueId, solutions);

      return { success: true, issueId, taskId, updated: updates };
    });
    return true;
  }

  // Legacy: PUT /api/issues/:id/task/:taskId (backward compat)
  const legacyTaskMatch = pathname.match(/^\/api\/issues\/([^/]+)\/task\/([^/]+)$/);
  if (legacyTaskMatch && req.method === 'PUT') {
    const issueId = decodeURIComponent(legacyTaskMatch[1]);
    const taskId = decodeURIComponent(legacyTaskMatch[2]);

    handlePostRequest(req, res, async (body: any) => {
      const issues = readIssuesJsonl(issuesDir);
      const issue = issues.find(i => i.id === issueId);
      if (!issue?.bound_solution_id) return { error: 'Issue or bound solution not found' };

      const solutions = readSolutionsJsonl(issuesDir, issueId);
      const solIdx = solutions.findIndex(s => s.id === issue.bound_solution_id);
      if (solIdx === -1) return { error: 'Bound solution not found' };

      const taskIdx = solutions[solIdx].tasks?.findIndex((t: any) => t.id === taskId);
      if (taskIdx === -1 || taskIdx === undefined) return { error: 'Task not found' };

      const updates: string[] = [];
      if (body.status !== undefined) { solutions[solIdx].tasks[taskIdx].status = body.status; updates.push('status'); }
      if (body.priority !== undefined) { solutions[solIdx].tasks[taskIdx].priority = body.priority; updates.push('priority'); }
      solutions[solIdx].tasks[taskIdx].updated_at = new Date().toISOString();
      writeSolutionsJsonl(issuesDir, issueId, solutions);

      return { success: true, issueId, taskId, updated: updates };
    });
    return true;
  }

  // Legacy: PUT /api/issues/:id/bind/:solutionId (backward compat)
  const legacyBindMatch = pathname.match(/^\/api\/issues\/([^/]+)\/bind\/([^/]+)$/);
  if (legacyBindMatch && req.method === 'PUT') {
    const issueId = decodeURIComponent(legacyBindMatch[1]);
    const solutionId = decodeURIComponent(legacyBindMatch[2]);

    const issues = readIssuesJsonl(issuesDir);
    const issueIndex = issues.findIndex(i => i.id === issueId);
    if (issueIndex === -1) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Issue not found' }));
|
||||
return true;
|
||||
}
|
||||
|
||||
const result = bindSolutionToIssue(issuesDir, issueId, solutionId, issues, issueIndex);
|
||||
if (result.error) {
|
||||
res.writeHead(404, { 'Content-Type': 'application/json' });
|
||||
res.end(JSON.stringify(result));
|
||||
return true;
|
||||
}
|
||||
|
||||
issues[issueIndex].updated_at = new Date().toISOString();
|
||||
writeIssuesJsonl(issuesDir, issues);
|
||||
|
||||
res.writeHead(200, { 'Content-Type': 'application/json' });
|
||||
res.end(JSON.stringify({ success: true, issueId, solutionId }));
|
||||
return true;
|
||||
}
|
||||
|
||||
// Legacy: PUT /api/issues/:id (backward compat for PATCH)
|
||||
const legacyUpdateMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
|
||||
if (legacyUpdateMatch && req.method === 'PUT') {
|
||||
const issueId = decodeURIComponent(legacyUpdateMatch[1]);
|
||||
if (issueId === 'queue') return false;
|
||||
|
||||
handlePostRequest(req, res, async (body: any) => {
|
||||
const issues = readIssuesJsonl(issuesDir);
|
||||
const issueIndex = issues.findIndex(i => i.id === issueId);
|
||||
if (issueIndex === -1) return { error: 'Issue not found' };
|
||||
|
||||
const updates: string[] = [];
|
||||
for (const field of ['title', 'context', 'status', 'priority', 'bound_solution_id', 'labels']) {
|
||||
if (body[field] !== undefined) {
|
||||
issues[issueIndex][field] = body[field];
|
||||
updates.push(field);
|
||||
}
|
||||
}
|
||||
|
||||
issues[issueIndex].updated_at = new Date().toISOString();
|
||||
writeIssuesJsonl(issuesDir, issues);
|
||||
return { success: true, issueId, updated: updates };
|
||||
});
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
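A minimal sketch of exercising these endpoints from a client, assuming the dashboard listens on http://localhost:3000; the port, the issue ID GH-123, and the solution/task IDs are invented for illustration and do not come from this commit:

// Sketch only: IDs and port are hypothetical.
const BASE = 'http://localhost:3000';

async function demoIssueApi() {
  // Register a solution for an issue (POST /api/issues/:id/solutions)
  await fetch(`${BASE}/api/issues/GH-123/solutions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ id: 'SOL-1', description: 'demo', tasks: [{ id: 'T1', status: 'pending' }] })
  });

  // Bind the solution to the issue (PATCH /api/issues/:id)
  await fetch(`${BASE}/api/issues/GH-123`, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ bound_solution_id: 'SOL-1' })
  });

  // Update a task on the bound solution (PATCH /api/issues/:id/tasks/:taskId)
  await fetch(`${BASE}/api/issues/GH-123/tasks/T1`, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ status: 'completed', result: 'done' })
  });

  // Remove the issue and its solutions file (DELETE /api/issues/:id)
  await fetch(`${BASE}/api/issues/GH-123`, { method: 'DELETE' });
}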
@@ -17,6 +17,7 @@ import { handleGraphRoutes } from './routes/graph-routes.js';
 import { handleSystemRoutes } from './routes/system-routes.js';
 import { handleFilesRoutes } from './routes/files-routes.js';
 import { handleSkillsRoutes } from './routes/skills-routes.js';
+import { handleIssueRoutes } from './routes/issue-routes.js';
 import { handleRulesRoutes } from './routes/rules-routes.js';
 import { handleSessionRoutes } from './routes/session-routes.js';
 import { handleCcwRoutes } from './routes/ccw-routes.js';
@@ -86,7 +87,9 @@ const MODULE_CSS_FILES = [
   '28-mcp-manager.css',
   '29-help.css',
   '30-core-memory.css',
-  '31-api-settings.css'
+  '31-api-settings.css',
+  '32-issue-manager.css',
+  '33-cli-stream-viewer.css'
 ];

 // Modular JS files in dependency order
@@ -107,6 +110,7 @@ const MODULE_FILES = [
   'components/flowchart.js',
   'components/carousel.js',
   'components/notifications.js',
+  'components/cli-stream-viewer.js',
   'components/global-notifications.js',
   'components/task-queue-sidebar.js',
   'components/cli-status.js',
@@ -142,6 +146,7 @@ const MODULE_FILES = [
   'views/claude-manager.js',
   'views/api-settings.js',
   'views/help.js',
+  'views/issue-manager.js',
   'main.js'
 ];

@@ -244,7 +249,7 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Server>

   // CORS headers for API requests
   res.setHeader('Access-Control-Allow-Origin', '*');
-  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, DELETE, OPTIONS');
+  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, PATCH, DELETE, OPTIONS');
   res.setHeader('Access-Control-Allow-Headers', 'Content-Type');

   if (req.method === 'OPTIONS') {
@@ -340,6 +345,16 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Server>
     if (await handleSkillsRoutes(routeContext)) return;
   }

+  // Queue routes (/api/queue*) - top-level queue API
+  if (pathname.startsWith('/api/queue')) {
+    if (await handleIssueRoutes(routeContext)) return;
+  }
+
+  // Issue routes (/api/issues*)
+  if (pathname.startsWith('/api/issues')) {
+    if (await handleIssueRoutes(routeContext)) return;
+  }
+
   // Rules routes (/api/rules*)
   if (pathname.startsWith('/api/rules')) {
     if (await handleRulesRoutes(routeContext)) return;
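The widened Access-Control-Allow-Methods header is what lets cross-origin browser clients reach the new PATCH/PUT endpoints (both /api/queue* and /api/issues* dispatch to handleIssueRoutes). A rough sketch, with a hypothetical port:

// Sketch only: the browser issues an OPTIONS preflight for PATCH, which now
// succeeds because PUT and PATCH are advertised by the server.
async function patchIssueStatus() {
  const res = await fetch('http://localhost:3000/api/issues/GH-123', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ status: 'planned' })
  });
  return res.json(); // e.g. { success: true, issueId: 'GH-123', updated: ['status'] }
}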
2544	ccw/src/templates/dashboard-css/32-issue-manager.css	Normal file
File diff suppressed because it is too large
467	ccw/src/templates/dashboard-css/33-cli-stream-viewer.css	Normal file
@@ -0,0 +1,467 @@
/**
 * CLI Stream Viewer Styles
 * Right-side popup panel for viewing CLI streaming output
 */

/* ===== Overlay ===== */
.cli-stream-overlay {
  position: fixed;
  inset: 0;
  background: rgb(0 0 0 / 0.3);
  z-index: 1050;
  opacity: 0;
  visibility: hidden;
  transition: all 0.3s ease;
}

.cli-stream-overlay.open {
  opacity: 1;
  visibility: visible;
}

/* ===== Main Panel ===== */
.cli-stream-viewer {
  position: fixed;
  top: 60px;
  right: 16px;
  width: 650px;
  max-width: calc(100vw - 32px);
  max-height: calc(100vh - 80px);
  background: hsl(var(--card));
  border: 1px solid hsl(var(--border));
  border-radius: 8px;
  box-shadow: 0 8px 32px rgb(0 0 0 / 0.2);
  z-index: 1100;
  display: flex;
  flex-direction: column;
  transform: translateX(calc(100% + 20px));
  opacity: 0;
  visibility: hidden;
  transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1);
}

.cli-stream-viewer.open {
  transform: translateX(0);
  opacity: 1;
  visibility: visible;
}

/* ===== Header ===== */
.cli-stream-header {
  display: flex;
  align-items: center;
  justify-content: space-between;
  padding: 12px 16px;
  border-bottom: 1px solid hsl(var(--border));
  background: hsl(var(--muted) / 0.3);
}

.cli-stream-title {
  display: flex;
  align-items: center;
  gap: 8px;
  font-size: 0.875rem;
  font-weight: 600;
  color: hsl(var(--foreground));
}

.cli-stream-title svg,
.cli-stream-title i {
  width: 18px;
  height: 18px;
  color: hsl(var(--primary));
}

.cli-stream-count-badge {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  min-width: 20px;
  height: 20px;
  padding: 0 6px;
  background: hsl(var(--muted));
  color: hsl(var(--muted-foreground));
  border-radius: 10px;
  font-size: 0.6875rem;
  font-weight: 600;
}

.cli-stream-count-badge.has-running {
  background: hsl(var(--warning));
  color: hsl(var(--warning-foreground, white));
}

.cli-stream-actions {
  display: flex;
  align-items: center;
  gap: 8px;
}

.cli-stream-action-btn {
  display: flex;
  align-items: center;
  gap: 4px;
  padding: 4px 10px;
  background: transparent;
  border: 1px solid hsl(var(--border));
  border-radius: 4px;
  font-size: 0.75rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  transition: all 0.15s;
}

.cli-stream-action-btn:hover {
  background: hsl(var(--hover));
  color: hsl(var(--foreground));
}

.cli-stream-close-btn {
  display: flex;
  align-items: center;
  justify-content: center;
  width: 28px;
  height: 28px;
  padding: 0;
  background: transparent;
  border: none;
  border-radius: 4px;
  font-size: 1.25rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  transition: all 0.15s;
}

.cli-stream-close-btn:hover {
  background: hsl(var(--destructive) / 0.1);
  color: hsl(var(--destructive));
}

/* ===== Tab Bar ===== */
.cli-stream-tabs {
  display: flex;
  gap: 2px;
  padding: 8px 12px;
  border-bottom: 1px solid hsl(var(--border));
  background: hsl(var(--muted) / 0.2);
  overflow-x: auto;
  scrollbar-width: thin;
}

.cli-stream-tabs::-webkit-scrollbar {
  height: 4px;
}

.cli-stream-tabs::-webkit-scrollbar-thumb {
  background: hsl(var(--border));
  border-radius: 2px;
}

.cli-stream-tab {
  display: flex;
  align-items: center;
  gap: 6px;
  padding: 6px 12px;
  background: transparent;
  border: 1px solid transparent;
  border-radius: 6px;
  font-size: 0.75rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  white-space: nowrap;
  transition: all 0.15s;
}

.cli-stream-tab:hover {
  background: hsl(var(--hover));
  color: hsl(var(--foreground));
}

.cli-stream-tab.active {
  background: hsl(var(--card));
  border-color: hsl(var(--primary));
  color: hsl(var(--foreground));
  box-shadow: 0 1px 3px rgb(0 0 0 / 0.1);
}

.cli-stream-tab-status {
  width: 8px;
  height: 8px;
  border-radius: 50%;
  flex-shrink: 0;
}

.cli-stream-tab-status.running {
  background: hsl(var(--warning));
  animation: streamStatusPulse 1.5s ease-in-out infinite;
}

.cli-stream-tab-status.completed {
  background: hsl(var(--success));
}

.cli-stream-tab-status.error {
  background: hsl(var(--destructive));
}

@keyframes streamStatusPulse {
  0%, 100% { opacity: 1; transform: scale(1); }
  50% { opacity: 0.6; transform: scale(1.2); }
}

.cli-stream-tab-tool {
  font-weight: 500;
  text-transform: capitalize;
}

.cli-stream-tab-mode {
  font-size: 0.625rem;
  padding: 1px 4px;
  background: hsl(var(--muted));
  border-radius: 3px;
  color: hsl(var(--muted-foreground));
}

.cli-stream-tab-close {
  display: flex;
  align-items: center;
  justify-content: center;
  width: 16px;
  height: 16px;
  margin-left: 4px;
  background: transparent;
  border: none;
  border-radius: 50%;
  font-size: 0.75rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  opacity: 0;
  transition: all 0.15s;
}

.cli-stream-tab:hover .cli-stream-tab-close {
  opacity: 1;
}

.cli-stream-tab-close:hover {
  background: hsl(var(--destructive) / 0.2);
  color: hsl(var(--destructive));
}

.cli-stream-tab-close.disabled {
  cursor: not-allowed;
  opacity: 0.3 !important;
}

/* ===== Empty State ===== */
.cli-stream-empty {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  padding: 48px 24px;
  color: hsl(var(--muted-foreground));
  text-align: center;
}

.cli-stream-empty svg,
.cli-stream-empty i {
  width: 48px;
  height: 48px;
  margin-bottom: 16px;
  opacity: 0.5;
}

.cli-stream-empty-title {
  font-size: 0.875rem;
  font-weight: 500;
  margin-bottom: 4px;
}

.cli-stream-empty-hint {
  font-size: 0.75rem;
  opacity: 0.7;
}

/* ===== Terminal Content ===== */
.cli-stream-content {
  flex: 1;
  min-height: 300px;
  max-height: 500px;
  overflow-y: auto;
  padding: 12px 16px;
  background: hsl(220 13% 8%);
  font-family: var(--font-mono, 'Consolas', 'Monaco', 'Courier New', monospace);
  font-size: 0.75rem;
  line-height: 1.6;
  scrollbar-width: thin;
}

.cli-stream-content::-webkit-scrollbar {
  width: 8px;
}

.cli-stream-content::-webkit-scrollbar-track {
  background: transparent;
}

.cli-stream-content::-webkit-scrollbar-thumb {
  background: hsl(0 0% 40%);
  border-radius: 4px;
}

.cli-stream-line {
  white-space: pre-wrap;
  word-break: break-all;
  margin: 0;
  padding: 0;
}

.cli-stream-line.stdout {
  color: hsl(0 0% 85%);
}

.cli-stream-line.stderr {
  color: hsl(8 75% 65%);
}

.cli-stream-line.system {
  color: hsl(210 80% 65%);
  font-style: italic;
}

.cli-stream-line.info {
  color: hsl(200 80% 70%);
}

/* Auto-scroll indicator */
.cli-stream-scroll-btn {
  position: sticky;
  bottom: 8px;
  left: 50%;
  transform: translateX(-50%);
  display: inline-flex;
  align-items: center;
  gap: 4px;
  padding: 4px 12px;
  background: hsl(var(--primary));
  color: white;
  border: none;
  border-radius: 12px;
  font-size: 0.625rem;
  cursor: pointer;
  opacity: 0;
  transition: opacity 0.2s;
}

.cli-stream-content.has-new-content .cli-stream-scroll-btn {
  opacity: 1;
}

/* ===== Status Bar ===== */
.cli-stream-status {
  display: flex;
  align-items: center;
  justify-content: space-between;
  padding: 8px 16px;
  border-top: 1px solid hsl(var(--border));
  background: hsl(var(--muted) / 0.3);
  font-size: 0.6875rem;
  color: hsl(var(--muted-foreground));
}

.cli-stream-status-info {
  display: flex;
  align-items: center;
  gap: 12px;
}

.cli-stream-status-item {
  display: flex;
  align-items: center;
  gap: 4px;
}

.cli-stream-status-item svg,
.cli-stream-status-item i {
  width: 12px;
  height: 12px;
}

.cli-stream-status-actions {
  display: flex;
  align-items: center;
  gap: 8px;
}

.cli-stream-toggle-btn {
  display: flex;
  align-items: center;
  gap: 4px;
  padding: 2px 8px;
  background: transparent;
  border: 1px solid hsl(var(--border));
  border-radius: 3px;
  font-size: 0.625rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  transition: all 0.15s;
}

.cli-stream-toggle-btn:hover {
  background: hsl(var(--hover));
}

.cli-stream-toggle-btn.active {
  background: hsl(var(--primary) / 0.1);
  border-color: hsl(var(--primary));
  color: hsl(var(--primary));
}

/* ===== Header Button & Badge ===== */
.cli-stream-btn {
  position: relative;
}

.cli-stream-badge {
  position: absolute;
  top: -2px;
  right: -2px;
  min-width: 14px;
  height: 14px;
  padding: 0 4px;
  background: hsl(var(--warning));
  color: white;
  border-radius: 7px;
  font-size: 0.5625rem;
  font-weight: 600;
  display: none;
  align-items: center;
  justify-content: center;
}

.cli-stream-badge.has-running {
  display: flex;
  animation: streamBadgePulse 1.5s ease-in-out infinite;
}

@keyframes streamBadgePulse {
  0%, 100% { transform: scale(1); }
  50% { transform: scale(1.15); }
}

/* ===== Responsive ===== */
@media (max-width: 768px) {
  .cli-stream-viewer {
    top: 56px;
    right: 8px;
    left: 8px;
    width: auto;
    max-height: calc(100vh - 72px);
  }

  .cli-stream-content {
    min-height: 200px;
    max-height: 350px;
  }
}
461	ccw/src/templates/dashboard-js/components/cli-stream-viewer.js	Normal file
@@ -0,0 +1,461 @@
/**
 * CLI Stream Viewer Component
 * Real-time streaming output viewer for CLI executions
 */

// ===== State Management =====
let cliStreamExecutions = {}; // { executionId: { tool, mode, output, status, startTime, endTime } }
let activeStreamTab = null;
let autoScrollEnabled = true;
let isCliStreamViewerOpen = false;

const MAX_OUTPUT_LINES = 5000; // Prevent memory issues

// ===== Initialization =====
function initCliStreamViewer() {
  // Initialize keyboard shortcuts
  document.addEventListener('keydown', function(e) {
    if (e.key === 'Escape' && isCliStreamViewerOpen) {
      toggleCliStreamViewer();
    }
  });

  // Initialize scroll detection for auto-scroll
  const content = document.getElementById('cliStreamContent');
  if (content) {
    content.addEventListener('scroll', handleStreamContentScroll);
  }
}

// ===== Panel Control =====
function toggleCliStreamViewer() {
  const viewer = document.getElementById('cliStreamViewer');
  const overlay = document.getElementById('cliStreamOverlay');

  if (!viewer || !overlay) return;

  isCliStreamViewerOpen = !isCliStreamViewerOpen;

  if (isCliStreamViewerOpen) {
    viewer.classList.add('open');
    overlay.classList.add('open');

    // If no active tab but have executions, select the first one
    if (!activeStreamTab && Object.keys(cliStreamExecutions).length > 0) {
      const firstId = Object.keys(cliStreamExecutions)[0];
      switchStreamTab(firstId);
    } else {
      renderStreamContent(activeStreamTab);
    }

    // Re-init lucide icons
    if (typeof lucide !== 'undefined') {
      lucide.createIcons();
    }
  } else {
    viewer.classList.remove('open');
    overlay.classList.remove('open');
  }
}

// ===== WebSocket Event Handlers =====
function handleCliStreamStarted(payload) {
  const { executionId, tool, mode, timestamp } = payload;

  // Create new execution record
  cliStreamExecutions[executionId] = {
    tool: tool || 'cli',
    mode: mode || 'analysis',
    output: [],
    status: 'running',
    startTime: timestamp ? new Date(timestamp).getTime() : Date.now(),
    endTime: null
  };

  // Add system message
  cliStreamExecutions[executionId].output.push({
    type: 'system',
    content: `[${new Date().toLocaleTimeString()}] CLI execution started: ${tool} (${mode} mode)`,
    timestamp: Date.now()
  });

  // If this is the first execution or panel is open, select it
  if (!activeStreamTab || isCliStreamViewerOpen) {
    activeStreamTab = executionId;
  }

  renderStreamTabs();
  renderStreamContent(activeStreamTab);
  updateStreamBadge();

  // Auto-open panel if configured (optional)
  // if (!isCliStreamViewerOpen) toggleCliStreamViewer();
}

function handleCliStreamOutput(payload) {
  const { executionId, chunkType, data } = payload;

  const exec = cliStreamExecutions[executionId];
  if (!exec) return;

  // Parse and add output lines
  const content = typeof data === 'string' ? data : JSON.stringify(data);
  const lines = content.split('\n');

  lines.forEach(line => {
    if (line.trim() || lines.length === 1) { // Keep empty lines if it's the only content
      exec.output.push({
        type: chunkType || 'stdout',
        content: line,
        timestamp: Date.now()
      });
    }
  });

  // Trim if too long
  if (exec.output.length > MAX_OUTPUT_LINES) {
    exec.output = exec.output.slice(-MAX_OUTPUT_LINES);
  }

  // Update UI if this is the active tab
  if (activeStreamTab === executionId && isCliStreamViewerOpen) {
    requestAnimationFrame(() => {
      renderStreamContent(executionId);
    });
  }

  // Update badge to show activity
  updateStreamBadge();
}

function handleCliStreamCompleted(payload) {
  const { executionId, success, duration, timestamp } = payload;

  const exec = cliStreamExecutions[executionId];
  if (!exec) return;

  exec.status = success ? 'completed' : 'error';
  exec.endTime = timestamp ? new Date(timestamp).getTime() : Date.now();

  // Add completion message
  const durationText = duration ? ` (${formatDuration(duration)})` : '';
  const statusText = success ? 'completed successfully' : 'failed';
  exec.output.push({
    type: 'system',
    content: `[${new Date().toLocaleTimeString()}] CLI execution ${statusText}${durationText}`,
    timestamp: Date.now()
  });

  renderStreamTabs();
  if (activeStreamTab === executionId) {
    renderStreamContent(executionId);
  }
  updateStreamBadge();
}

function handleCliStreamError(payload) {
  const { executionId, error, timestamp } = payload;

  const exec = cliStreamExecutions[executionId];
  if (!exec) return;

  exec.status = 'error';
  exec.endTime = timestamp ? new Date(timestamp).getTime() : Date.now();

  // Add error message
  exec.output.push({
    type: 'stderr',
    content: `[ERROR] ${error || 'Unknown error occurred'}`,
    timestamp: Date.now()
  });

  renderStreamTabs();
  if (activeStreamTab === executionId) {
    renderStreamContent(executionId);
  }
  updateStreamBadge();
}

// ===== UI Rendering =====
function renderStreamTabs() {
  const tabsContainer = document.getElementById('cliStreamTabs');
  if (!tabsContainer) return;

  const execIds = Object.keys(cliStreamExecutions);

  if (execIds.length === 0) {
    tabsContainer.innerHTML = '';
    return;
  }

  // Sort: running first, then by start time (newest first)
  execIds.sort((a, b) => {
    const execA = cliStreamExecutions[a];
    const execB = cliStreamExecutions[b];

    if (execA.status === 'running' && execB.status !== 'running') return -1;
    if (execA.status !== 'running' && execB.status === 'running') return 1;
    return execB.startTime - execA.startTime;
  });

  tabsContainer.innerHTML = execIds.map(id => {
    const exec = cliStreamExecutions[id];
    const isActive = id === activeStreamTab;
    const canClose = exec.status !== 'running';

    return `
      <div class="cli-stream-tab ${isActive ? 'active' : ''}"
           onclick="switchStreamTab('${id}')"
           data-execution-id="${id}">
        <span class="cli-stream-tab-status ${exec.status}"></span>
        <span class="cli-stream-tab-tool">${escapeHtml(exec.tool)}</span>
        <span class="cli-stream-tab-mode">${exec.mode}</span>
        <button class="cli-stream-tab-close ${canClose ? '' : 'disabled'}"
                onclick="event.stopPropagation(); closeStream('${id}')"
                title="${canClose ? _streamT('cliStream.close') : _streamT('cliStream.cannotCloseRunning')}"
                ${canClose ? '' : 'disabled'}>×</button>
      </div>
    `;
  }).join('');

  // Update count badge
  const countBadge = document.getElementById('cliStreamCountBadge');
  if (countBadge) {
    const runningCount = execIds.filter(id => cliStreamExecutions[id].status === 'running').length;
    countBadge.textContent = execIds.length;
    countBadge.classList.toggle('has-running', runningCount > 0);
  }
}

function renderStreamContent(executionId) {
  const contentContainer = document.getElementById('cliStreamContent');
  if (!contentContainer) return;

  const exec = executionId ? cliStreamExecutions[executionId] : null;

  if (!exec) {
    // Show empty state
    contentContainer.innerHTML = `
      <div class="cli-stream-empty">
        <i data-lucide="terminal"></i>
        <div class="cli-stream-empty-title" data-i18n="cliStream.noStreams">${_streamT('cliStream.noStreams')}</div>
        <div class="cli-stream-empty-hint" data-i18n="cliStream.noStreamsHint">${_streamT('cliStream.noStreamsHint')}</div>
      </div>
    `;
    if (typeof lucide !== 'undefined') lucide.createIcons();
    return;
  }

  // Check if should auto-scroll
  const wasAtBottom = contentContainer.scrollHeight - contentContainer.scrollTop <= contentContainer.clientHeight + 50;

  // Render output lines
  contentContainer.innerHTML = exec.output.map(line =>
    `<div class="cli-stream-line ${line.type}">${escapeHtml(line.content)}</div>`
  ).join('');

  // Auto-scroll if enabled and was at bottom
  if (autoScrollEnabled && wasAtBottom) {
    contentContainer.scrollTop = contentContainer.scrollHeight;
  }

  // Update status bar
  renderStreamStatus(executionId);
}

function renderStreamStatus(executionId) {
  const statusContainer = document.getElementById('cliStreamStatus');
  if (!statusContainer) return;

  const exec = executionId ? cliStreamExecutions[executionId] : null;

  if (!exec) {
    statusContainer.innerHTML = '';
    return;
  }

  const duration = exec.endTime
    ? formatDuration(exec.endTime - exec.startTime)
    : formatDuration(Date.now() - exec.startTime);

  const statusLabel = exec.status === 'running'
    ? _streamT('cliStream.running')
    : exec.status === 'completed'
      ? _streamT('cliStream.completed')
      : _streamT('cliStream.error');

  statusContainer.innerHTML = `
    <div class="cli-stream-status-info">
      <div class="cli-stream-status-item">
        <span class="cli-stream-tab-status ${exec.status}"></span>
        <span>${statusLabel}</span>
      </div>
      <div class="cli-stream-status-item">
        <i data-lucide="clock"></i>
        <span>${duration}</span>
      </div>
      <div class="cli-stream-status-item">
        <i data-lucide="file-text"></i>
        <span>${exec.output.length} ${_streamT('cliStream.lines') || 'lines'}</span>
      </div>
    </div>
    <div class="cli-stream-status-actions">
      <button class="cli-stream-toggle-btn ${autoScrollEnabled ? 'active' : ''}"
              onclick="toggleAutoScroll()"
              title="${_streamT('cliStream.autoScroll')}">
        <i data-lucide="arrow-down-to-line"></i>
        <span data-i18n="cliStream.autoScroll">${_streamT('cliStream.autoScroll')}</span>
      </button>
    </div>
  `;

  if (typeof lucide !== 'undefined') lucide.createIcons();

  // Update duration periodically for running executions
  if (exec.status === 'running') {
    setTimeout(() => {
      if (activeStreamTab === executionId && cliStreamExecutions[executionId]?.status === 'running') {
        renderStreamStatus(executionId);
      }
    }, 1000);
  }
}

function switchStreamTab(executionId) {
  if (!cliStreamExecutions[executionId]) return;

  activeStreamTab = executionId;
  renderStreamTabs();
  renderStreamContent(executionId);
}

function updateStreamBadge() {
  const badge = document.getElementById('cliStreamBadge');
  if (!badge) return;

  const runningCount = Object.values(cliStreamExecutions).filter(e => e.status === 'running').length;

  if (runningCount > 0) {
    badge.textContent = runningCount;
    badge.classList.add('has-running');
  } else {
    badge.textContent = '';
    badge.classList.remove('has-running');
  }
}

// ===== User Actions =====
function closeStream(executionId) {
  const exec = cliStreamExecutions[executionId];
  if (!exec || exec.status === 'running') return;

  delete cliStreamExecutions[executionId];

  // Switch to another tab if this was active
  if (activeStreamTab === executionId) {
    const remaining = Object.keys(cliStreamExecutions);
    activeStreamTab = remaining.length > 0 ? remaining[0] : null;
  }

  renderStreamTabs();
  renderStreamContent(activeStreamTab);
  updateStreamBadge();
}

function clearCompletedStreams() {
  const toRemove = Object.keys(cliStreamExecutions).filter(
    id => cliStreamExecutions[id].status !== 'running'
  );

  toRemove.forEach(id => delete cliStreamExecutions[id]);

  // Update active tab if needed
  if (activeStreamTab && !cliStreamExecutions[activeStreamTab]) {
    const remaining = Object.keys(cliStreamExecutions);
    activeStreamTab = remaining.length > 0 ? remaining[0] : null;
  }

  renderStreamTabs();
  renderStreamContent(activeStreamTab);
  updateStreamBadge();
}

function toggleAutoScroll() {
  autoScrollEnabled = !autoScrollEnabled;

  if (autoScrollEnabled && activeStreamTab) {
    const content = document.getElementById('cliStreamContent');
    if (content) {
      content.scrollTop = content.scrollHeight;
    }
  }

  renderStreamStatus(activeStreamTab);
}

function handleStreamContentScroll() {
  const content = document.getElementById('cliStreamContent');
  if (!content) return;

  // If user scrolls up, disable auto-scroll
  const isAtBottom = content.scrollHeight - content.scrollTop <= content.clientHeight + 50;
  if (!isAtBottom && autoScrollEnabled) {
    autoScrollEnabled = false;
    renderStreamStatus(activeStreamTab);
  }
}

// ===== Helper Functions =====
function formatDuration(ms) {
  if (ms < 1000) return `${ms}ms`;

  const seconds = Math.floor(ms / 1000);
  if (seconds < 60) return `${seconds}s`;

  const minutes = Math.floor(seconds / 60);
  const remainingSeconds = seconds % 60;
  if (minutes < 60) return `${minutes}m ${remainingSeconds}s`;

  const hours = Math.floor(minutes / 60);
  const remainingMinutes = minutes % 60;
  return `${hours}h ${remainingMinutes}m`;
}

function escapeHtml(text) {
  if (!text) return '';
  const div = document.createElement('div');
  div.textContent = text;
  return div.innerHTML;
}

// Translation helper with fallback (uses global t from i18n.js)
function _streamT(key) {
  // First try global t() from i18n.js
  if (typeof t === 'function' && t !== _streamT) {
    try {
      return t(key);
    } catch (e) {
      // Fall through to fallbacks
    }
  }
  // Fallback values
  const fallbacks = {
    'cliStream.noStreams': 'No active CLI executions',
    'cliStream.noStreamsHint': 'Start a CLI command to see streaming output',
    'cliStream.running': 'Running',
    'cliStream.completed': 'Completed',
    'cliStream.error': 'Error',
    'cliStream.autoScroll': 'Auto-scroll',
    'cliStream.close': 'Close',
    'cliStream.cannotCloseRunning': 'Cannot close running execution',
    'cliStream.lines': 'lines'
  };
  return fallbacks[key] || key;
}

// Initialize when DOM is ready
if (document.readyState === 'loading') {
  document.addEventListener('DOMContentLoaded', initCliStreamViewer);
} else {
  initCliStreamViewer();
}
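For manual testing, the four event handlers above can be driven directly with synthetic payloads from the browser console; the field names mirror the WebSocket events routed from handleNotification, but the values below are invented:

// Sketch only: synthetic events for manual testing; IDs and values are made up.
handleCliStreamStarted({ executionId: 'exec-demo', tool: 'gemini', mode: 'analysis', timestamp: new Date().toISOString() });
handleCliStreamOutput({ executionId: 'exec-demo', chunkType: 'stdout', data: 'line one\nline two' });
handleCliStreamCompleted({ executionId: 'exec-demo', success: true, duration: 1234, timestamp: new Date().toISOString() });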
@@ -155,6 +155,12 @@ function initNavigation() {
       } else {
         console.error('renderApiSettings not defined - please refresh the page');
       }
+    } else if (currentView === 'issue-manager') {
+      if (typeof renderIssueManager === 'function') {
+        renderIssueManager();
+      } else {
+        console.error('renderIssueManager not defined - please refresh the page');
+      }
     }
   });
 });
@@ -199,6 +205,8 @@ function updateContentTitle() {
     titleEl.textContent = t('title.codexLensManager');
   } else if (currentView === 'api-settings') {
     titleEl.textContent = t('title.apiSettings');
+  } else if (currentView === 'issue-manager') {
+    titleEl.textContent = t('title.issueManager');
   } else if (currentView === 'liteTasks') {
     const names = { 'lite-plan': t('title.litePlanSessions'), 'lite-fix': t('title.liteFixSessions') };
     titleEl.textContent = names[currentLiteType] || t('title.liteTasks');
@@ -217,24 +217,40 @@ function handleNotification(data) {
       if (typeof handleCliExecutionStarted === 'function') {
         handleCliExecutionStarted(payload);
       }
+      // Route to CLI Stream Viewer
+      if (typeof handleCliStreamStarted === 'function') {
+        handleCliStreamStarted(payload);
+      }
       break;

     case 'CLI_OUTPUT':
       if (typeof handleCliOutput === 'function') {
         handleCliOutput(payload);
       }
+      // Route to CLI Stream Viewer
+      if (typeof handleCliStreamOutput === 'function') {
+        handleCliStreamOutput(payload);
+      }
       break;

     case 'CLI_EXECUTION_COMPLETED':
       if (typeof handleCliExecutionCompleted === 'function') {
         handleCliExecutionCompleted(payload);
       }
+      // Route to CLI Stream Viewer
+      if (typeof handleCliStreamCompleted === 'function') {
+        handleCliStreamCompleted(payload);
+      }
       break;

     case 'CLI_EXECUTION_ERROR':
       if (typeof handleCliExecutionError === 'function') {
         handleCliExecutionError(payload);
       }
+      // Route to CLI Stream Viewer
+      if (typeof handleCliStreamError === 'function') {
+        handleCliStreamError(payload);
+      }
       break;

     // CLI Review Events
@@ -39,7 +39,21 @@ const i18n = {
     'header.refreshWorkspace': 'Refresh workspace',
     'header.toggleTheme': 'Toggle theme',
     'header.language': 'Language',

+    'header.cliStream': 'CLI Stream Viewer',
+
+    // CLI Stream Viewer
+    'cliStream.title': 'CLI Stream',
+    'cliStream.clearCompleted': 'Clear Completed',
+    'cliStream.noStreams': 'No active CLI executions',
+    'cliStream.noStreamsHint': 'Start a CLI command to see streaming output',
+    'cliStream.running': 'Running',
+    'cliStream.completed': 'Completed',
+    'cliStream.error': 'Error',
+    'cliStream.autoScroll': 'Auto-scroll',
+    'cliStream.close': 'Close',
+    'cliStream.cannotCloseRunning': 'Cannot close running execution',
+    'cliStream.lines': 'lines',

     // Sidebar - Project section
     'nav.project': 'Project',
     'nav.overview': 'Overview',
@@ -1711,6 +1725,136 @@ const i18n = {
     'coreMemory.belongsToClusters': 'Belongs to Clusters',
     'coreMemory.relationsError': 'Failed to load relations',

+    // Issue Manager
+    'nav.issues': 'Issues',
+    'nav.issueManager': 'Manager',
+    'title.issueManager': 'Issue Manager',
+    // issues.* keys (used by issue-manager.js)
+    'issues.title': 'Issue Manager',
+    'issues.description': 'Manage issues, solutions, and execution queue',
+    'issues.viewIssues': 'Issues',
+    'issues.viewQueue': 'Queue',
+    'issues.filterStatus': 'Status',
+    'issues.filterAll': 'All',
+    'issues.noIssues': 'No issues found',
+    'issues.createHint': 'Click "Create" to add your first issue',
+    'issues.priority': 'Priority',
+    'issues.tasks': 'tasks',
+    'issues.solutions': 'solutions',
+    'issues.boundSolution': 'Bound',
+    'issues.queueEmpty': 'Queue is empty',
+    'issues.reorderHint': 'Drag items within a group to reorder',
+    'issues.parallelGroup': 'Parallel',
+    'issues.sequentialGroup': 'Sequential',
+    'issues.dependsOn': 'Depends on',
+    // Create & Search
+    'issues.create': 'Create',
+    'issues.createTitle': 'Create New Issue',
+    'issues.issueId': 'Issue ID',
+    'issues.issueTitle': 'Title',
+    'issues.issueContext': 'Context',
+    'issues.issuePriority': 'Priority',
+    'issues.titlePlaceholder': 'Brief description of the issue',
+    'issues.contextPlaceholder': 'Detailed description, requirements, etc.',
+    'issues.priorityLowest': 'Lowest',
+    'issues.priorityLow': 'Low',
+    'issues.priorityMedium': 'Medium',
+    'issues.priorityHigh': 'High',
+    'issues.priorityCritical': 'Critical',
+    'issues.searchPlaceholder': 'Search issues...',
+    'issues.showing': 'Showing',
+    'issues.of': 'of',
+    'issues.issues': 'issues',
+    'issues.tryDifferentFilter': 'Try adjusting your search or filters',
+    'issues.createFirst': 'Create First Issue',
+    'issues.idRequired': 'Issue ID is required',
+    'issues.titleRequired': 'Title is required',
+    'issues.created': 'Issue created successfully',
+    'issues.confirmDelete': 'Are you sure you want to delete this issue?',
+    'issues.deleted': 'Issue deleted',
+    'issues.idAutoGenerated': 'Auto-generated',
+    'issues.regenerateId': 'Regenerate ID',
+    // Solution detail
+    'issues.solutionDetail': 'Solution Details',
+    'issues.bind': 'Bind',
+    'issues.unbind': 'Unbind',
+    'issues.bound': 'Bound',
+    'issues.totalTasks': 'Total Tasks',
+    'issues.bindStatus': 'Bind Status',
+    'issues.createdAt': 'Created',
+    'issues.taskList': 'Task List',
+    'issues.noTasks': 'No tasks in this solution',
+    'issues.noSolutions': 'No solutions',
+    'issues.viewJson': 'View Raw JSON',
+    'issues.scope': 'Scope',
+    'issues.modificationPoints': 'Modification Points',
+    'issues.implementationSteps': 'Implementation Steps',
+    'issues.acceptanceCriteria': 'Acceptance Criteria',
+    'issues.dependencies': 'Dependencies',
+    'issues.solutionBound': 'Solution bound successfully',
+    'issues.solutionUnbound': 'Solution unbound',
+    // Queue operations
+    'issues.queueEmptyHint': 'Generate execution queue from bound solutions',
+    'issues.createQueue': 'Create Queue',
+    'issues.regenerate': 'Regenerate',
+    'issues.regenerateQueue': 'Regenerate Queue',
+    'issues.refreshQueue': 'Refresh',
+    'issues.executionGroups': 'groups',
+    'issues.totalItems': 'items',
+    'issues.queueRefreshed': 'Queue refreshed',
+    'issues.confirmCreateQueue': 'This will execute /issue:queue command via Claude Code CLI to generate execution queue from bound solutions.\n\nContinue?',
+    'issues.creatingQueue': 'Creating execution queue...',
+    'issues.queueExecutionStarted': 'Queue generation started',
+    'issues.queueCreated': 'Queue created successfully',
+    'issues.queueCreationFailed': 'Queue creation failed',
+    'issues.queueCommandHint': 'Run one of the following commands in your terminal to generate the execution queue from bound solutions:',
+    'issues.queueCommandInfo': 'After running the command, click "Refresh" to see the updated queue.',
+    'issues.alternative': 'Alternative',
+    'issues.refreshAfter': 'Refresh Queue',
+    // issue.* keys (legacy)
+    'issue.viewIssues': 'Issues',
+    'issue.viewQueue': 'Queue',
+    'issue.filterAll': 'All',
+    'issue.filterStatus': 'Status',
+    'issue.filterPriority': 'Priority',
+    'issue.noIssues': 'No issues found',
+    'issue.noIssuesHint': 'Issues will appear here when created via /issue:plan command',
+    'issue.noQueue': 'No tasks in queue',
+    'issue.noQueueHint': 'Run /issue:queue to form execution queue from bound solutions',
+    'issue.tasks': 'tasks',
+    'issue.solutions': 'solutions',
+    'issue.parallel': 'Parallel',
+    'issue.sequential': 'Sequential',
+    'issue.status.registered': 'Registered',
+    'issue.status.planned': 'Planned',
+    'issue.status.queued': 'Queued',
+    'issue.status.executing': 'Executing',
+    'issue.status.completed': 'Completed',
+    'issue.status.failed': 'Failed',
+    'issue.priority.critical': 'Critical',
+    'issue.priority.high': 'High',
+    'issue.priority.medium': 'Medium',
+    'issue.priority.low': 'Low',
+    'issue.detail.context': 'Context',
+    'issue.detail.solutions': 'Solutions',
+    'issue.detail.tasks': 'Tasks',
+    'issue.detail.noSolutions': 'No solutions available',
+    'issue.detail.noTasks': 'No tasks available',
+    'issue.detail.bound': 'Bound',
+    'issue.detail.modificationPoints': 'Modification Points',
+    'issue.detail.implementation': 'Implementation Steps',
+    'issue.detail.acceptance': 'Acceptance Criteria',
+    'issue.queue.reordered': 'Queue reordered',
+    'issue.queue.reorderFailed': 'Failed to reorder queue',
+    'issue.saved': 'Issue saved',
+    'issue.saveFailed': 'Failed to save issue',
+    'issue.taskUpdated': 'Task updated',
+    'issue.taskUpdateFailed': 'Failed to update task',
+    'issue.conflicts': 'Conflicts',
+    'issue.noConflicts': 'No conflicts detected',
+    'issue.conflict.resolved': 'Resolved',
+    'issue.conflict.pending': 'Pending',

     // Common additions
     'common.copyId': 'Copy ID',
     'common.copied': 'Copied!',
@@ -1748,7 +1892,21 @@ const i18n = {
     'header.refreshWorkspace': '刷新工作区',
     'header.toggleTheme': '切换主题',
     'header.language': '语言',

+    'header.cliStream': 'CLI 流式输出',
+
+    // CLI Stream Viewer
+    'cliStream.title': 'CLI 流式输出',
+    'cliStream.clearCompleted': '清除已完成',
+    'cliStream.noStreams': '没有活动的 CLI 执行',
+    'cliStream.noStreamsHint': '启动 CLI 命令以查看流式输出',
+    'cliStream.running': '运行中',
+    'cliStream.completed': '已完成',
+    'cliStream.error': '错误',
+    'cliStream.autoScroll': '自动滚动',
+    'cliStream.close': '关闭',
+    'cliStream.cannotCloseRunning': '无法关闭运行中的执行',
+    'cliStream.lines': '行',

     // Sidebar - Project section
     'nav.project': '项目',
     'nav.overview': '概览',
@@ -3429,6 +3587,136 @@ const i18n = {
     'coreMemory.belongsToClusters': '所属聚类',
     'coreMemory.relationsError': '加载关联失败',

+    // Issue Manager
+    'nav.issues': '议题',
+    'nav.issueManager': '管理器',
+    'title.issueManager': '议题管理器',
+    // issues.* keys (used by issue-manager.js)
+    'issues.title': '议题管理器',
+    'issues.description': '管理议题、解决方案和执行队列',
+    'issues.viewIssues': '议题',
+    'issues.viewQueue': '队列',
+    'issues.filterStatus': '状态',
+    'issues.filterAll': '全部',
+    'issues.noIssues': '暂无议题',
+    'issues.createHint': '点击"创建"添加您的第一个议题',
+    'issues.priority': '优先级',
+    'issues.tasks': '任务',
+    'issues.solutions': '解决方案',
+    'issues.boundSolution': '已绑定',
+    'issues.queueEmpty': '队列为空',
+    'issues.reorderHint': '在组内拖拽项目以重新排序',
+    'issues.parallelGroup': '并行',
+    'issues.sequentialGroup': '顺序',
+    'issues.dependsOn': '依赖于',
+    // Create & Search
+    'issues.create': '创建',
+    'issues.createTitle': '创建新议题',
+    'issues.issueId': '议题ID',
+    'issues.issueTitle': '标题',
+    'issues.issueContext': '上下文',
+    'issues.issuePriority': '优先级',
+    'issues.titlePlaceholder': '简要描述议题',
+    'issues.contextPlaceholder': '详细描述、需求等',
+    'issues.priorityLowest': '最低',
+    'issues.priorityLow': '低',
+    'issues.priorityMedium': '中',
+    'issues.priorityHigh': '高',
+    'issues.priorityCritical': '紧急',
+    'issues.searchPlaceholder': '搜索议题...',
+    'issues.showing': '显示',
+    'issues.of': '共',
+    'issues.issues': '条议题',
+    'issues.tryDifferentFilter': '尝试调整搜索或筛选条件',
+    'issues.createFirst': '创建第一个议题',
+    'issues.idRequired': '议题ID为必填',
+    'issues.titleRequired': '标题为必填',
+    'issues.created': '议题创建成功',
+    'issues.confirmDelete': '确定要删除此议题吗?',
+    'issues.deleted': '议题已删除',
+    'issues.idAutoGenerated': '自动生成',
+    'issues.regenerateId': '重新生成ID',
+    // Solution detail
+    'issues.solutionDetail': '解决方案详情',
+    'issues.bind': '绑定',
+    'issues.unbind': '解绑',
+    'issues.bound': '已绑定',
+    'issues.totalTasks': '任务总数',
+    'issues.bindStatus': '绑定状态',
+    'issues.createdAt': '创建时间',
+    'issues.taskList': '任务列表',
+    'issues.noTasks': '此解决方案无任务',
+    'issues.noSolutions': '暂无解决方案',
+    'issues.viewJson': '查看原始JSON',
+    'issues.scope': '作用域',
+    'issues.modificationPoints': '修改点',
+    'issues.implementationSteps': '实现步骤',
+    'issues.acceptanceCriteria': '验收标准',
+    'issues.dependencies': '依赖项',
+    'issues.solutionBound': '解决方案已绑定',
+    'issues.solutionUnbound': '解决方案已解绑',
+    // Queue operations
+    'issues.queueEmptyHint': '从绑定的解决方案生成执行队列',
+    'issues.createQueue': '创建队列',
+    'issues.regenerate': '重新生成',
+    'issues.regenerateQueue': '重新生成队列',
+    'issues.refreshQueue': '刷新',
+    'issues.executionGroups': '个执行组',
+    'issues.totalItems': '个任务',
+    'issues.queueRefreshed': '队列已刷新',
+    'issues.confirmCreateQueue': '这将通过 Claude Code CLI 执行 /issue:queue 命令,从绑定的解决方案生成执行队列。\n\n是否继续?',
+    'issues.creatingQueue': '正在创建执行队列...',
+    'issues.queueExecutionStarted': '队列生成已启动',
+    'issues.queueCreated': '队列创建成功',
+    'issues.queueCreationFailed': '队列创建失败',
+    'issues.queueCommandHint': '在终端中运行以下命令之一,从绑定的解决方案生成执行队列:',
+    'issues.queueCommandInfo': '运行命令后,点击"刷新"查看更新后的队列。',
+    'issues.alternative': '或者',
+    'issues.refreshAfter': '刷新队列',
+    // issue.* keys (legacy)
+    'issue.viewIssues': '议题',
+    'issue.viewQueue': '队列',
+    'issue.filterAll': '全部',
+    'issue.filterStatus': '状态',
+    'issue.filterPriority': '优先级',
+    'issue.noIssues': '暂无议题',
+    'issue.noIssuesHint': '通过 /issue:plan 命令创建的议题将显示在此处',
+    'issue.noQueue': '队列中暂无任务',
+    'issue.noQueueHint': '运行 /issue:queue 从绑定的解决方案生成执行队列',
+    'issue.tasks': '任务',
+    'issue.solutions': '解决方案',
+    'issue.parallel': '并行',
+    'issue.sequential': '顺序',
+    'issue.status.registered': '已注册',
+    'issue.status.planned': '已规划',
+    'issue.status.queued': '已入队',
+    'issue.status.executing': '执行中',
+    'issue.status.completed': '已完成',
+    'issue.status.failed': '失败',
+    'issue.priority.critical': '紧急',
+    'issue.priority.high': '高',
+    'issue.priority.medium': '中',
+    'issue.priority.low': '低',
+    'issue.detail.context': '上下文',
+    'issue.detail.solutions': '解决方案',
+    'issue.detail.tasks': '任务',
+    'issue.detail.noSolutions': '暂无解决方案',
+    'issue.detail.noTasks': '暂无任务',
+    'issue.detail.bound': '已绑定',
+    'issue.detail.modificationPoints': '修改点',
+    'issue.detail.implementation': '实现步骤',
+    'issue.detail.acceptance': '验收标准',
+    'issue.queue.reordered': '队列已重排',
+    'issue.queue.reorderFailed': '队列重排失败',
+    'issue.saved': '议题已保存',
+    'issue.saveFailed': '保存议题失败',
+    'issue.taskUpdated': '任务已更新',
+    'issue.taskUpdateFailed': '更新任务失败',
+    'issue.conflicts': '冲突',
+    'issue.noConflicts': '未检测到冲突',
+    'issue.conflict.resolved': '已解决',
+    'issue.conflict.pending': '待处理',

     // Common additions
     'common.copyId': '复制 ID',
     'common.copied': '已复制!',
@@ -168,16 +168,22 @@ async function loadAvailableSkills() {
     if (!response.ok) throw new Error('Failed to load skills');
     const data = await response.json();

+    // Combine project and user skills (API returns { projectSkills: [], userSkills: [] })
+    const allSkills = [
+      ...(data.projectSkills || []).map(s => ({ ...s, scope: 'project' })),
+      ...(data.userSkills || []).map(s => ({ ...s, scope: 'user' }))
+    ];
+
     const container = document.getElementById('skill-discovery-skill-context');
-    if (container && data.skills) {
-      if (data.skills.length === 0) {
+    if (container) {
+      if (allSkills.length === 0) {
         container.innerHTML = `
           <span class="font-mono bg-muted px-1.5 py-0.5 rounded">${t('hook.wizard.availableSkills')}</span>
           <span class="text-muted-foreground ml-2">${t('hook.wizard.noSkillsFound').split('.')[0]}</span>
         `;
       } else {
-        const skillBadges = data.skills.map(skill => `
-          <span class="px-2 py-0.5 bg-emerald-500/10 text-emerald-500 rounded" title="${escapeHtml(skill.description)}">${escapeHtml(skill.name)}</span>
+        const skillBadges = allSkills.map(skill => `
+          <span class="px-2 py-0.5 bg-emerald-500/10 text-emerald-500 rounded" title="${escapeHtml(skill.description || '')}">${escapeHtml(skill.name)}</span>
         `).join('');
         container.innerHTML = `
           <span class="font-mono bg-muted px-1.5 py-0.5 rounded">${t('hook.wizard.availableSkills')}</span>
@@ -187,7 +193,7 @@ async function loadAvailableSkills() {
     }

     // Store skills for wizard use
-    window.availableSkills = data.skills || [];
+    window.availableSkills = allSkills;
   } catch (err) {
     console.error('Failed to load skills:', err);
    const container = document.getElementById('skill-discovery-skill-context');
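The rewrite above assumes the skills API now nests results by scope instead of exposing a flat data.skills array. A minimal sketch of the expected shape and the flattening it performs (the skill names and descriptions here are invented):

// Sketch only: illustrative response shape for the updated loadAvailableSkills().
const data = {
  projectSkills: [{ name: 'code-review', description: 'Review changed files' }],
  userSkills: [{ name: 'notes', description: '' }]
};
const allSkills = [
  ...(data.projectSkills || []).map(s => ({ ...s, scope: 'project' })),
  ...(data.userSkills || []).map(s => ({ ...s, scope: 'user' }))
];
// allSkills => [{ name: 'code-review', ..., scope: 'project' }, { name: 'notes', ..., scope: 'user' }]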
1546	ccw/src/templates/dashboard-js/views/issue-manager.js	Normal file
File diff suppressed because it is too large
@@ -275,6 +275,18 @@
             </div>
           </div>
         </div>
+        <!-- CLI Stream Viewer Button -->
+        <button class="cli-stream-btn p-1.5 text-muted-foreground hover:text-foreground hover:bg-hover rounded relative"
+                id="cliStreamBtn"
+                onclick="toggleCliStreamViewer()"
+                data-i18n-title="header.cliStream"
+                title="CLI Stream Viewer">
+          <svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
+            <polyline points="4 17 10 11 4 5"/>
+            <line x1="12" y1="19" x2="20" y2="19"/>
+          </svg>
+          <span class="cli-stream-badge" id="cliStreamBadge"></span>
+        </button>
         <!-- Refresh Button -->
         <button class="refresh-btn p-1.5 text-muted-foreground hover:text-foreground hover:bg-hover rounded" id="refreshWorkspace" data-i18n-title="header.refreshWorkspace" title="Refresh workspace">
           <svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
@@ -394,6 +406,21 @@
           </ul>
         </div>

+        <!-- Issues Section -->
+        <div class="mb-2" id="issuesNav">
+          <div class="flex items-center px-4 py-2 text-sm font-semibold text-muted-foreground uppercase tracking-wide">
+            <i data-lucide="clipboard-list" class="nav-section-icon mr-2"></i>
+            <span class="nav-section-title" data-i18n="nav.issues">Issues</span>
+          </div>
+          <ul class="space-y-0.5">
+            <li class="nav-item flex items-center gap-2 px-3 py-2.5 text-sm text-muted-foreground hover:bg-hover hover:text-foreground rounded cursor-pointer transition-colors" data-view="issue-manager" data-tooltip="Issue Manager">
+              <i data-lucide="list-checks" class="nav-icon"></i>
+              <span class="nav-text flex-1" data-i18n="nav.issueManager">Manager</span>
+              <span class="badge px-2 py-0.5 text-xs font-semibold rounded-full bg-hover text-muted-foreground" id="badgeIssues">0</span>
+            </li>
+          </ul>
+        </div>
+
         <!-- MCP Servers Section -->
         <div class="mb-2" id="mcpServersNav">
           <div class="flex items-center px-4 py-2 text-sm font-semibold text-muted-foreground uppercase tracking-wide">
@@ -578,6 +605,34 @@
       <div class="drawer-overlay hidden fixed inset-0 bg-black/50 z-40" id="drawerOverlay" onclick="closeTaskDrawer()"></div>
     </div>

+    <!-- CLI Stream Viewer Panel -->
+    <div class="cli-stream-viewer" id="cliStreamViewer">
+      <div class="cli-stream-header">
+        <div class="cli-stream-title">
+          <i data-lucide="terminal"></i>
+          <span data-i18n="cliStream.title">CLI Stream</span>
+          <span class="cli-stream-count-badge" id="cliStreamCountBadge">0</span>
+        </div>
+        <div class="cli-stream-actions">
+          <button class="cli-stream-action-btn" onclick="clearCompletedStreams()" data-i18n="cliStream.clearCompleted">
+            <i data-lucide="trash-2"></i>
+            <span>Clear</span>
+          </button>
+          <button class="cli-stream-close-btn" onclick="toggleCliStreamViewer()" title="Close">×</button>
+        </div>
+      </div>
+      <div class="cli-stream-tabs" id="cliStreamTabs">
+        <!-- Dynamic tabs -->
+      </div>
+      <div class="cli-stream-content" id="cliStreamContent">
+        <!-- Terminal output -->
+      </div>
+      <div class="cli-stream-status" id="cliStreamStatus">
+        <!-- Status bar -->
+      </div>
+    </div>
+    <div class="cli-stream-overlay" id="cliStreamOverlay" onclick="toggleCliStreamViewer()"></div>
+
     <!-- Markdown Preview Modal -->
     <div id="markdownModal" class="markdown-modal hidden fixed inset-0 z-[100] flex items-center justify-center">
       <div class="markdown-modal-backdrop absolute inset-0 bg-black/60" onclick="closeMarkdownModal()"></div>
|
||||
{
|
||||
"name": "claude-code-workflow",
|
||||
"version": "6.3.5",
|
||||
"version": "6.3.6",
|
||||
"description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
|
||||
"type": "module",
|
||||
"main": "ccw/src/index.js",
|
||||
|
||||