Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-07 02:04:11 +08:00)
Compare: 11 commits

d705a3e7d9, 726151bfea, b58589ddad, 2e493277a1, 8b19edd2de, 3e54b5f7d8, 4da06864f8, 8f310339df, 0157e36344, cdf4833977, c8a914aeca
# .claude/agents/issue-plan-agent.md (new file, +235 lines)
---
name: issue-plan-agent
description: |
  Closed-loop issue planning agent combining ACE exploration and solution generation.
  Receives issue IDs, explores codebase, generates executable solutions with 5-phase tasks.

  Examples:
  - Context: Single issue planning
    user: "Plan GH-123"
    assistant: "I'll fetch issue details, explore codebase, and generate solution"
  - Context: Batch planning
    user: "Plan GH-123,GH-124,GH-125"
    assistant: "I'll plan 3 issues, detect conflicts, and register solutions"
color: green
---

## Overview

**Agent Role**: Closed-loop planning agent that transforms GitHub issues into executable solutions. Receives issue IDs from the command layer, fetches details via CLI, explores the codebase with ACE, and produces validated solutions with a 5-phase task lifecycle.

**Core Capabilities**:
- ACE semantic search for intelligent code discovery
- Batch processing (1-3 issues per invocation)
- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
- Cross-issue conflict detection
- Dependency DAG validation
- Auto-bind for a single solution; return for selection on multiple

**Key Principle**: Generate tasks conforming to the schema, with quantified delivery_criteria.

---

## 1. Input & Execution

### 1.1 Input Context

```javascript
{
  issue_ids: string[],   // Issue IDs only (e.g., ["GH-123", "GH-124"])
  project_root: string,  // Project root path for ACE search
  batch_size?: number,   // Max issues per batch (default: 3)
}
```

**Note**: The agent receives IDs only. Fetch details via `ccw issue status <id> --json`.

### 1.2 Execution Flow

```
Phase 1: Issue Understanding (5%)
  ↓ Fetch details, extract requirements, determine complexity
Phase 2: ACE Exploration (30%)
  ↓ Semantic search, pattern discovery, dependency mapping
Phase 3: Solution Planning (50%)
  ↓ Task decomposition, 5-phase lifecycle, acceptance criteria
Phase 4: Validation & Output (15%)
  ↓ DAG validation, conflict detection, solution registration
```

#### Phase 1: Issue Understanding

**Step 1**: Fetch issue details via CLI
```bash
ccw issue status <issue-id> --json
```

**Step 2**: Analyze and classify
```javascript
function analyzeIssue(issue) {
  return {
    issue_id: issue.id,
    requirements: extractRequirements(issue.description),
    scope: inferScope(issue.title, issue.description),
    complexity: determineComplexity(issue) // Low | Medium | High
  }
}
```

**Complexity Rules**:

| Complexity | Files | Tasks |
|------------|-------|-------|
| Low        | 1-2   | 1-3   |
| Medium     | 3-5   | 3-6   |
| High       | 6+    | 5-10  |
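The complexity table above can be sketched as a small classifier. This is a minimal sketch, assuming a hypothetical `estimated_files` field produced by scope inference (the real signal the agent uses is not specified here):

```javascript
// Sketch of the complexity classifier implied by the table above.
// `estimated_files` is a hypothetical field from scope inference.
function determineComplexity(issue) {
  const fileCount = (issue.estimated_files || []).length;
  if (fileCount <= 2) return 'Low';
  if (fileCount <= 5) return 'Medium';
  return 'High';
}
```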
#### Phase 2: ACE Exploration

**Primary**: ACE semantic search
```javascript
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: `Find code related to: ${issue.title}. Keywords: ${extractKeywords(issue)}`
})
```

**Exploration Checklist**:
- [ ] Identify relevant files (direct matches)
- [ ] Find related patterns (similar implementations)
- [ ] Map integration points
- [ ] Discover dependencies
- [ ] Locate test patterns

**Fallback**: ACE → ripgrep → Glob

#### Phase 3: Solution Planning

**Task Decomposition** following the schema:
```javascript
function decomposeTasks(issue, exploration) {
  return groups.map(group => ({
    id: `TASK-${String(taskId++).padStart(3, '0')}`,
    title: group.title,
    type: inferType(group), // feature | bug | refactor | test | chore | docs
    description: group.description,
    file_context: group.files,
    depends_on: inferDependencies(group, tasks),
    delivery_criteria: generateDeliveryCriteria(group), // Quantified checklist
    pause_criteria: identifyBlockers(group),
    status: 'pending',
    current_phase: 'analyze',
    executor: inferExecutor(group),
    priority: calculatePriority(group)
  }))
}
```

#### Phase 4: Validation & Output

**Validation**:
- DAG validation (no circular dependencies)
- Task validation (all 5 phases present)
- Conflict detection (cross-issue file modifications)
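DAG validation can be sketched with Kahn's algorithm: if a topological sort cannot consume every task, the leftover tasks sit on a cycle. The task shape (`id`, `depends_on`) follows the decomposition above; everything else is illustrative:

```javascript
// Returns true when depends_on edges form a valid DAG (no cycles).
// Tasks are objects with { id, depends_on } as produced by decomposeTasks.
function isValidDag(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, 0]));
  const dependents = new Map(tasks.map(t => [t.id, []]));
  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      if (!inDegree.has(dep)) continue; // unknown refs are validated elsewhere
      inDegree.set(t.id, inDegree.get(t.id) + 1);
      dependents.get(dep).push(t.id);
    }
  }
  // Kahn's algorithm: repeatedly consume zero-in-degree tasks
  const queue = [...inDegree].filter(([, d]) => d === 0).map(([id]) => id);
  let visited = 0;
  while (queue.length) {
    const id = queue.shift();
    visited++;
    for (const next of dependents.get(id)) {
      inDegree.set(next, inDegree.get(next) - 1);
      if (inDegree.get(next) === 0) queue.push(next);
    }
  }
  return visited === tasks.length; // unvisited tasks lie on a cycle
}
```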
**Solution Registration**:
```bash
# Write solution and register via CLI
ccw issue bind <issue-id> --solution /tmp/sol.json
```

---

## 2. Output Specifications

### 2.1 Return Format

```json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "description": "...", "task_count": N }] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
}
```

### 2.2 Binding Rules

| Scenario | Action |
|----------|--------|
| Single solution | Register AND auto-bind |
| Multiple solutions | Register only, return for user selection |

### 2.3 Task Schema

**Schema-Driven Output**: Read the schema before generating tasks:
```bash
cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json
```

**Required Fields**:
- `id`: Task ID (pattern: `TASK-NNN`)
- `title`: Short summary (max 100 chars)
- `type`: feature | bug | refactor | test | chore | docs
- `description`: Detailed instructions
- `depends_on`: Array of prerequisite task IDs
- `delivery_criteria`: Checklist items defining completion
- `status`: pending | ready | in_progress | completed | failed | paused | skipped
- `current_phase`: analyze | implement | test | optimize | commit | done
- `executor`: agent | codex | gemini | auto

**Optional Fields**:
- `file_context`: Relevant files/globs
- `pause_criteria`: Conditions to halt execution
- `priority`: 1-5 (1 = highest)
- `phase_results`: Results from each execution phase
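A minimal task satisfying the required fields above might look like this (all values illustrative, not taken from the schema file):

```json
{
  "id": "TASK-001",
  "title": "Add JWT validation middleware",
  "type": "feature",
  "description": "Create middleware that validates JWT tokens on protected routes",
  "depends_on": [],
  "delivery_criteria": ["Returns 401 for invalid tokens", "All 3 middleware tests pass"],
  "status": "pending",
  "current_phase": "analyze",
  "executor": "auto"
}
```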
### 2.4 Solution File Structure

```
.workflow/issues/solutions/{issue-id}.jsonl
```

Each line is a complete solution JSON.

---

## 3. Quality Standards

### 3.1 Acceptance Criteria

| Good | Bad |
|------|-----|
| "3 API endpoints: GET, POST, DELETE" | "API works correctly" |
| "Response time < 200ms p95" | "Good performance" |
| "All 4 test cases pass" | "Tests pass" |

### 3.2 Validation Checklist

- [ ] ACE search performed for each issue
- [ ] All modification_points verified against codebase
- [ ] Tasks have 2+ implementation steps
- [ ] All 5 lifecycle phases present
- [ ] Quantified acceptance criteria with verification
- [ ] Dependencies form a valid DAG
- [ ] Commit follows Conventional Commits

### 3.3 Guidelines

**ALWAYS**:
1. Read the schema first: `cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json`
2. Use ACE semantic search as the PRIMARY exploration tool
3. Fetch issue details via `ccw issue status <id> --json`
4. Quantify delivery_criteria with testable conditions
5. Validate the DAG before output
6. Single solution → auto-bind; multiple → return for selection

**NEVER**:
1. Execute implementation (return plan only)
2. Use vague criteria ("works correctly", "good performance")
3. Create circular dependencies
4. Generate more than 10 tasks per issue
5. Bind when multiple solutions exist

**OUTPUT**:
1. Register solutions via `ccw issue bind <id> --solution <file>`
2. Return JSON with `bound`, `pending_selection`, `conflicts`
3. Solutions written to `.workflow/issues/solutions/{issue-id}.jsonl`
# .claude/agents/issue-queue-agent.md (new file, +235 lines)
---
name: issue-queue-agent
description: |
  Task ordering agent for queue formation with dependency analysis and conflict resolution.
  Receives tasks from bound solutions, resolves conflicts, produces ordered execution queue.

  Examples:
  - Context: Single issue queue
    user: "Order tasks for GH-123"
    assistant: "I'll analyze dependencies and generate execution queue"
  - Context: Multi-issue queue with conflicts
    user: "Order tasks for GH-123, GH-124"
    assistant: "I'll detect conflicts, resolve ordering, and assign groups"
color: orange
---

## Overview

**Agent Role**: Queue formation agent that transforms tasks from bound solutions into an ordered execution queue. Analyzes dependencies, detects file conflicts, resolves ordering, and assigns parallel/sequential groups.

**Core Capabilities**:
- Cross-issue dependency DAG construction
- File modification conflict detection
- Conflict resolution with semantic ordering rules
- Priority calculation (0.0-1.0)
- Parallel/sequential group assignment

**Key Principle**: Produce a valid DAG with no circular dependencies and optimal parallel execution.

---

## 1. Input & Execution

### 1.1 Input Context

```javascript
{
  tasks: [{
    issue_id: string,     // e.g., "GH-123"
    solution_id: string,  // e.g., "SOL-001"
    task: {
      id: string,         // e.g., "TASK-001"
      title: string,
      type: string,
      file_context: string[],
      depends_on: string[]
    }
  }],
  project_root?: string,
  rebuild?: boolean
}
```

### 1.2 Execution Flow

```
Phase 1: Dependency Analysis (20%)
  ↓ Parse depends_on, build DAG, detect cycles
Phase 2: Conflict Detection (30%)
  ↓ Identify file conflicts across issues
Phase 3: Conflict Resolution (25%)
  ↓ Apply ordering rules, update DAG
Phase 4: Ordering & Grouping (25%)
  ↓ Topological sort, assign groups
```

---

## 2. Processing Logic

### 2.1 Dependency Graph

```javascript
function buildDependencyGraph(tasks) {
  const graph = new Map()
  const fileModifications = new Map()

  for (const item of tasks) {
    const key = `${item.issue_id}:${item.task.id}`
    graph.set(key, { ...item, key, inDegree: 0, outEdges: [] })

    for (const file of item.task.file_context || []) {
      if (!fileModifications.has(file)) fileModifications.set(file, [])
      fileModifications.get(file).push(key)
    }
  }

  // Add dependency edges
  for (const [key, node] of graph) {
    for (const dep of node.task.depends_on || []) {
      const depKey = `${node.issue_id}:${dep}`
      if (graph.has(depKey)) {
        graph.get(depKey).outEdges.push(key)
        node.inDegree++
      }
    }
  }

  return { graph, fileModifications }
}
```

### 2.2 Conflict Detection

A conflict arises when multiple tasks modify the same file:
```javascript
function detectConflicts(fileModifications, graph) {
  return [...fileModifications.entries()]
    .filter(([_, tasks]) => tasks.length > 1)
    .map(([file, tasks]) => ({
      type: 'file_conflict',
      file,
      tasks,
      resolved: false
    }))
}
```

### 2.3 Resolution Rules

| Priority | Rule | Example |
|----------|------|---------|
| 1 | Create before Update | T1:Create → T2:Update |
| 2 | Foundation before integration | config/ → src/ |
| 3 | Types before implementation | types/ → components/ |
| 4 | Core before tests | src/ → __tests__/ |
| 5 | Delete last | T1:Update → T2:Delete |
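Rules 1 and 5 above can be sketched as a comparator over the conflicting tasks' actions. This is a simplified sketch: the path-based rules (2-4) are omitted, and the `action` names are assumed to match those used in the semantic-priority table:

```javascript
// Illustrative: order conflicting tasks so Create runs first and Delete last.
// Unknown actions keep their relative order (Array.prototype.sort is stable).
const ACTION_RANK = { Create: 0, Configure: 1, Implement: 2, Update: 3, Delete: 4 };

function resolutionOrder(tasks) {
  return [...tasks].sort(
    (a, b) => (ACTION_RANK[a.action] ?? 2) - (ACTION_RANK[b.action] ?? 2)
  );
}
```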
### 2.4 Semantic Priority

| Factor | Boost |
|--------|-------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Fix action | +0.05 |
| Foundation scope | +0.1 |
| Types scope | +0.05 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
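A sketch of how these boosts might combine, assuming a neutral 0.5 baseline clamped to the documented 0.0-1.0 range (the baseline and factor detection are assumptions, not from the spec):

```javascript
// Illustrative: combine action/scope boosts from the table above into a
// semantic priority in [0.0, 1.0], starting from an assumed 0.5 baseline.
const ACTION_BOOST = {
  Create: 0.2, Configure: 0.15, Implement: 0.1, Fix: 0.05,
  Refactor: -0.05, Test: -0.1, Delete: -0.15
};
const SCOPE_BOOST = { foundation: 0.1, types: 0.05 };

function semanticPriority(task) {
  let p = 0.5;
  p += ACTION_BOOST[task.action] || 0;
  p += SCOPE_BOOST[task.scope] || 0;
  return Math.min(1, Math.max(0, p)); // clamp to documented range
}
```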
### 2.5 Group Assignment

- **Parallel (P*)**: Tasks with no dependencies or conflicts between them
- **Sequential (S*)**: Tasks that must run in order due to dependencies or conflicts
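The ordering-and-grouping phase can be sketched as a level-by-level Kahn's topological sort, where each level of dependency-free tasks becomes one parallel group (conflict-driven sequential splitting is omitted for brevity):

```javascript
// Illustrative: topologically sort tasks and emit one parallel group per level.
// Tasks use { key, depends_on } where depends_on holds other task keys.
function assignGroups(tasks) {
  const inDegree = new Map(tasks.map(t => [t.key, 0]));
  const dependents = new Map(tasks.map(t => [t.key, []]));
  for (const t of tasks) {
    for (const dep of t.depends_on) {
      inDegree.set(t.key, inDegree.get(t.key) + 1);
      dependents.get(dep).push(t.key);
    }
  }
  const groups = [];
  let level = [...inDegree].filter(([, d]) => d === 0).map(([k]) => k);
  let groupId = 1;
  while (level.length) {
    groups.push({ id: `P${groupId++}`, type: 'parallel', tasks: level });
    const next = [];
    for (const key of level) {
      for (const dep of dependents.get(key)) {
        inDegree.set(dep, inDegree.get(dep) - 1);
        if (inDegree.get(dep) === 0) next.push(dep);
      }
    }
    level = next;
  }
  return groups;
}
```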
---

## 3. Output Specifications

### 3.1 Queue Schema

Read the schema before output:
```bash
cat .claude/workflows/cli-templates/schemas/queue-schema.json
```

### 3.2 Output Format

```json
{
  "tasks": [{
    "item_id": "T-1",
    "issue_id": "GH-123",
    "solution_id": "SOL-001",
    "task_id": "TASK-001",
    "status": "pending",
    "execution_order": 1,
    "execution_group": "P1",
    "depends_on": [],
    "semantic_priority": 0.7
  }],
  "conflicts": [{
    "file": "src/auth.ts",
    "tasks": ["GH-123:TASK-001", "GH-124:TASK-002"],
    "resolution": "sequential",
    "resolution_order": ["GH-123:TASK-001", "GH-124:TASK-002"],
    "rationale": "TASK-001 creates file before TASK-002 updates",
    "resolved": true
  }],
  "execution_groups": [
    { "id": "P1", "type": "parallel", "task_count": 3, "tasks": ["T-1", "T-2", "T-3"] },
    { "id": "S2", "type": "sequential", "task_count": 2, "tasks": ["T-4", "T-5"] }
  ],
  "_metadata": {
    "total_tasks": 5,
    "total_conflicts": 1,
    "resolved_conflicts": 1,
    "timestamp": "2025-12-27T10:00:00Z"
  }
}
```

---

## 4. Quality Standards

### 4.1 Validation Checklist

- [ ] No circular dependencies
- [ ] All conflicts resolved
- [ ] Dependencies ordered correctly
- [ ] Parallel groups have no conflicts
- [ ] Semantic priority calculated

### 4.2 Error Handling

| Scenario | Action |
|----------|--------|
| Circular dependency | Abort, report cycles |
| Resolution creates cycle | Flag for manual resolution |
| Missing task reference | Skip and warn |
| Empty task list | Return empty queue |

### 4.3 Guidelines

**ALWAYS**:
1. Build the dependency graph before ordering
2. Detect cycles before and after resolution
3. Apply resolution rules consistently
4. Calculate semantic priority for all tasks
5. Include a rationale for conflict resolutions
6. Validate ordering before output

**NEVER**:
1. Execute tasks (ordering only)
2. Ignore circular dependencies
3. Skip conflict detection
4. Output an invalid DAG
5. Merge conflicting tasks into a parallel group

**OUTPUT**:
1. Write queue via `ccw issue queue` CLI
2. Return JSON with `tasks`, `conflicts`, `execution_groups`, `_metadata`
# .claude/commands/issue/execute.md (new file, +462 lines)
---
name: execute
description: Execute queue with codex using endpoint-driven task fetching (single task per codex instance)
argument-hint: "[--parallel <n>] [--executor codex|gemini]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---

# Issue Execute Command (/issue:execute)

## Overview

Execution orchestrator that coordinates codex instances. Each task is executed by an independent codex instance that fetches its task via a CLI endpoint. **Codex does NOT read task files** - it calls `ccw issue next` to get task data dynamically.

**Core design:**
- Single task per codex instance (not loop mode)
- Endpoint-driven: `ccw issue next` → execute → `ccw issue complete`
- No file reading in codex
- Orchestrator manages parallelism

## Storage Structure (Queue History)

```
.workflow/issues/
├── issues.jsonl          # All issues (one per line)
├── queues/               # Queue history directory
│   ├── index.json        # Queue index (active + history)
│   └── {queue-id}.json   # Individual queue files
└── solutions/
    ├── {issue-id}.jsonl  # Solutions for issue
    └── ...
```

## Usage

```bash
/issue:execute [FLAGS]

# Examples
/issue:execute                   # Execute all ready tasks
/issue:execute --parallel 3      # Execute up to 3 tasks in parallel
/issue:execute --executor codex  # Force codex executor

# Flags
--parallel <n>     Max parallel codex instances (default: 1)
--executor <type>  Force executor: codex|gemini|agent
--dry-run          Show what would execute without running
```

## Execution Process

```
Phase 1: Queue Loading
├─ Load queue.json
├─ Count pending/ready tasks
└─ Initialize TodoWrite tracking

Phase 2: Ready Task Detection
├─ Find tasks with satisfied dependencies
├─ Group by execution_group (parallel batches)
└─ Determine execution order

Phase 3: Codex Coordination
├─ For each ready task:
│  ├─ Launch independent codex instance
│  ├─ Codex calls: ccw issue next
│  ├─ Codex receives task data (NOT file)
│  ├─ Codex executes task
│  ├─ Codex calls: ccw issue complete <queue-id>
│  └─ Update TodoWrite
└─ Parallel execution based on --parallel flag

Phase 4: Completion
├─ Generate execution summary
├─ Update issue statuses in issues.jsonl
└─ Display results
```
## Implementation

### Phase 1: Queue Loading

```javascript
// Load active queue via CLI endpoint
const queueJson = Bash(`ccw issue status --json 2>/dev/null || echo '{}'`);
const queue = JSON.parse(queueJson);

if (!queue.id || !queue.tasks?.length) {
  console.log('No active queue found. Run /issue:queue first.');
  return;
}

// Count by status
const pending = queue.tasks.filter(q => q.status === 'pending');
const executing = queue.tasks.filter(q => q.status === 'executing');
const completed = queue.tasks.filter(q => q.status === 'completed');

console.log(`
## Execution Queue Status

- Pending: ${pending.length}
- Executing: ${executing.length}
- Completed: ${completed.length}
- Total: ${queue.tasks.length}
`);

if (pending.length === 0 && executing.length === 0) {
  console.log('All tasks completed!');
  return;
}
```

### Phase 2: Ready Task Detection

```javascript
// Find ready tasks (dependencies satisfied)
function getReadyTasks() {
  const completedIds = new Set(
    queue.tasks.filter(q => q.status === 'completed').map(q => q.item_id)
  );

  return queue.tasks.filter(item => {
    if (item.status !== 'pending') return false;
    return item.depends_on.every(depId => completedIds.has(depId));
  });
}

const readyTasks = getReadyTasks();

if (readyTasks.length === 0) {
  if (executing.length > 0) {
    console.log('Tasks are currently executing. Wait for completion.');
  } else {
    console.log('No ready tasks. Check for blocked dependencies.');
  }
  return;
}

console.log(`Found ${readyTasks.length} ready tasks`);

// Sort by execution order
readyTasks.sort((a, b) => a.execution_order - b.execution_order);

// Initialize TodoWrite
TodoWrite({
  todos: readyTasks.slice(0, parallelLimit).map(t => ({
    content: `[${t.item_id}] ${t.issue_id}:${t.task_id}`,
    status: 'pending',
    activeForm: `Executing ${t.item_id}`
  }))
});
```
### Phase 3: Codex Coordination (Single Task Mode - Full Lifecycle)

```javascript
// Execute tasks - single codex instance per task with full lifecycle
async function executeTask(queueItem) {
  const codexPrompt = `
## Single Task Execution - CLOSED-LOOP LIFECYCLE

You are executing ONE task from the issue queue. Each task has 5 phases that MUST ALL complete successfully.

### Step 1: Fetch Task
Run this command to get your task:
\`\`\`bash
ccw issue next
\`\`\`

This returns JSON with the full lifecycle definition:
- task.implementation: Implementation steps
- task.test: Test requirements and commands
- task.regression: Regression check commands
- task.acceptance: Acceptance criteria and verification
- task.commit: Commit specification

### Step 2: Execute Full Lifecycle

**Phase 1: IMPLEMENT**
1. Follow task.implementation steps in order
2. Modify files specified in modification_points
3. Use context.relevant_files for reference
4. Use context.patterns for code style

**Phase 2: TEST**
1. Run test commands from task.test.commands
2. Ensure all unit tests pass (task.test.unit)
3. Run integration tests if specified (task.test.integration)
4. Verify coverage meets task.test.coverage_target if specified
5. If tests fail → fix code and re-run; do NOT proceed until tests pass

**Phase 3: REGRESSION**
1. Run all commands in task.regression
2. Ensure no existing tests are broken
3. If regression fails → fix and re-run

**Phase 4: ACCEPTANCE**
1. Verify each criterion in task.acceptance.criteria
2. Execute verification steps in task.acceptance.verification
3. Complete any manual_checks if specified
4. All criteria MUST pass before proceeding

**Phase 5: COMMIT**
1. Stage all modified files
2. Use task.commit.message_template as the commit message
3. Commit with: git commit -m "$(cat <<'EOF'\n<message>\nEOF\n)"
4. If commit_strategy is 'per-task', commit now
5. If commit_strategy is 'atomic' or 'squash', stage but don't commit

### Step 3: Report Completion
When ALL phases complete successfully:
\`\`\`bash
ccw issue complete <item_id> --result '{
  "files_modified": ["path1", "path2"],
  "tests_passed": true,
  "regression_passed": true,
  "acceptance_passed": true,
  "committed": true,
  "commit_hash": "<hash>",
  "summary": "What was done"
}'
\`\`\`

If any phase fails and cannot be fixed:
\`\`\`bash
ccw issue fail <item_id> --reason "Phase X failed: <details>"
\`\`\`

### Rules
- NEVER skip any lifecycle phase
- Tests MUST pass before proceeding to acceptance
- Regression MUST pass before commit
- ALL acceptance criteria MUST be verified
- Report accurate lifecycle status in result

### Start Now
Begin by running: ccw issue next
`;

  // Execute codex
  const executor = queueItem.assigned_executor || flags.executor || 'codex';

  if (executor === 'codex') {
    Bash(
      `ccw cli -p "${escapePrompt(codexPrompt)}" --tool codex --mode write --id exec-${queueItem.item_id}`,
      timeout=3600000 // 1 hour timeout
    );
  } else if (executor === 'gemini') {
    Bash(
      `ccw cli -p "${escapePrompt(codexPrompt)}" --tool gemini --mode write --id exec-${queueItem.item_id}`,
      timeout=1800000 // 30 min timeout
    );
  } else {
    // Agent execution
    Task(
      subagent_type="code-developer",
      run_in_background=false,
      description=`Execute ${queueItem.item_id}`,
      prompt=codexPrompt
    );
  }
}

// Execute with parallelism
const parallelLimit = flags.parallel || 1;

for (let i = 0; i < readyTasks.length; i += parallelLimit) {
  const batch = readyTasks.slice(i, i + parallelLimit);

  console.log(`\n### Executing Batch ${Math.floor(i / parallelLimit) + 1}`);
  console.log(batch.map(t => `- ${t.item_id}: ${t.issue_id}:${t.task_id}`).join('\n'));

  if (parallelLimit === 1) {
    // Sequential execution
    for (const task of batch) {
      updateTodo(task.item_id, 'in_progress');
      await executeTask(task);
      updateTodo(task.item_id, 'completed');
    }
  } else {
    // Parallel execution - launch all at once
    const executions = batch.map(task => {
      updateTodo(task.item_id, 'in_progress');
      return executeTask(task);
    });
    await Promise.all(executions);
    batch.forEach(task => updateTodo(task.item_id, 'completed'));
  }

  // Refresh ready tasks after batch
  const newReady = getReadyTasks();
  if (newReady.length > 0) {
    console.log(`${newReady.length} more tasks now ready`);
  }
}
```
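`escapePrompt` is referenced above but not defined in this file. A minimal sketch for embedding the prompt inside a double-quoted shell argument might look like the following; the exact escaping the real command layer performs is an assumption:

```javascript
// Hypothetical helper: escape characters that stay special inside
// double quotes in POSIX shells (backslash, double quote, backtick, $).
function escapePrompt(prompt) {
  return prompt.replace(/([\\"`$])/g, '\\$1');
}
```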
### Codex Task Fetch Response

When codex calls `ccw issue next`, it receives:

```json
{
  "item_id": "T-1",
  "issue_id": "GH-123",
  "solution_id": "SOL-001",
  "task": {
    "id": "T1",
    "title": "Create auth middleware",
    "scope": "src/middleware/",
    "action": "Create",
    "description": "Create JWT validation middleware",
    "modification_points": [
      { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
    ],
    "implementation": [
      "Create auth.ts file in src/middleware/",
      "Implement JWT token validation using jsonwebtoken",
      "Add error handling for invalid/expired tokens",
      "Export middleware function"
    ],
    "acceptance": [
      "Middleware validates JWT tokens successfully",
      "Returns 401 for invalid or missing tokens",
      "Passes token payload to request context"
    ]
  },
  "context": {
    "relevant_files": ["src/config/auth.ts", "src/types/auth.d.ts"],
    "patterns": "Follow existing middleware pattern in src/middleware/logger.ts"
  },
  "execution_hints": {
    "executor": "codex",
    "estimated_minutes": 30
  }
}
```

### Phase 4: Completion Summary

```javascript
// Reload queue for final status via CLI
const finalQueueJson = Bash(`ccw issue status --json 2>/dev/null || echo '{}'`);
const finalQueue = JSON.parse(finalQueueJson);

// Use queue._metadata for summary (already calculated by CLI)
const summary = finalQueue._metadata || {
  completed_count: 0,
  failed_count: 0,
  pending_count: 0,
  total_tasks: 0
};

console.log(`
## Execution Complete

**Completed**: ${summary.completed_count}/${summary.total_tasks}
**Failed**: ${summary.failed_count}
**Pending**: ${summary.pending_count}

### Task Results
${(finalQueue.tasks || []).map(q => {
  const icon = q.status === 'completed' ? '✓' :
               q.status === 'failed' ? '✗' :
               q.status === 'executing' ? '⟳' : '○';
  return `${icon} ${q.item_id} [${q.issue_id}:${q.task_id}] - ${q.status}`;
}).join('\n')}
`);

// Issue status updates are handled by the ccw issue complete/fail endpoints,
// so there is no need to manually update issues.jsonl here.

if (summary.pending_count > 0) {
  console.log(`
### Continue Execution
Run \`/issue:execute\` again to execute remaining tasks.
`);
}
```

## Dry Run Mode

```javascript
if (flags.dryRun) {
  console.log(`
## Dry Run - Would Execute

${readyTasks.map((t, i) => `
${i + 1}. ${t.item_id}
   Issue: ${t.issue_id}
   Task: ${t.task_id}
   Executor: ${t.assigned_executor}
   Group: ${t.execution_group}
`).join('')}

No changes made. Remove --dry-run to execute.
`);
  return;
}
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Queue not found | Display message, suggest /issue:queue |
| No ready tasks | Check dependencies, show blocked tasks |
| Codex timeout | Mark as failed, allow retry |
| ccw issue next empty | All tasks done or blocked |
| Task execution failure | Marked via ccw issue fail; use `ccw issue retry` to reset |

## Troubleshooting

### Interrupted Tasks

If execution was interrupted (crashed/stopped), `ccw issue next` will automatically resume:

```bash
# Automatically returns the executing task for resumption
ccw issue next
```

Tasks in `executing` status are prioritized and returned first; no manual reset is needed.

### Failed Tasks

If a task failed and you want to retry:

```bash
# Reset all failed tasks to pending
ccw issue retry

# Reset failed tasks for specific issue
ccw issue retry <issue-id>
```

## Endpoint Contract

### `ccw issue next`
- Returns next ready task as JSON
- Marks task as 'executing'
- Returns `{ status: 'empty' }` when no tasks remain

### `ccw issue complete <item-id>`
- Marks task as 'completed'
- Updates queue.json
- Checks if issue is fully complete

### `ccw issue fail <item-id>`
- Marks task as 'failed'
- Records failure reason
- Allows retry via /issue:execute

### `ccw issue retry [issue-id]`
- Resets failed tasks to 'pending'
- Allows re-execution via `ccw issue next`

## Related Commands

- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue list` - View queue status
- `ccw issue retry` - Retry failed tasks
# .claude/commands/issue/manage.md (new file, +113 lines)
---
name: manage
description: Interactive issue management (CRUD) via ccw cli endpoints with menu-driven interface
argument-hint: "[issue-id] [--action list|view|edit|delete|bulk]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), AskUserQuestion(*), Task(*)
---

# Issue Manage Command (/issue:manage)

## Overview

Interactive menu-driven interface for issue management using `ccw issue` CLI endpoints:
- **List**: Browse and filter issues
- **View**: Detailed issue inspection
- **Edit**: Modify issue fields
- **Delete**: Remove issues
- **Bulk**: Batch operations on multiple issues

## CLI Endpoints Reference

```bash
# Core endpoints (ccw issue)
ccw issue list                      # List all issues
ccw issue list <id> --json          # Get issue details
ccw issue status <id>               # Detailed status
ccw issue init <id> --title "..."   # Create issue
ccw issue task <id> --title "..."   # Add task
ccw issue bind <id> <solution-id>   # Bind solution

# Queue management
ccw issue queue                     # List current queue
ccw issue queue add <id>            # Add to queue
ccw issue queue list                # Queue history
ccw issue queue switch <queue-id>   # Switch queue
ccw issue queue archive             # Archive queue
ccw issue queue delete <queue-id>   # Delete queue
ccw issue next                      # Get next task
ccw issue done <queue-id>           # Mark completed
ccw issue complete <item-id>        # (legacy alias for done)
```

## Usage

```bash
# Interactive mode (menu-driven)
/issue:manage

# Direct to specific issue
/issue:manage GH-123

# Direct action
/issue:manage --action list
/issue:manage GH-123 --action edit
```

## Implementation

This command delegates to the `issue-manage` skill for detailed implementation.

### Entry Point

```javascript
const issueId = parseIssueId(userInput);
const action = flags.action;

// Show main menu if no action specified
if (!action) {
  await showMainMenu(issueId);
} else {
  await executeAction(action, issueId);
}
```

### Main Menu Flow

1. **Dashboard**: Fetch issues summary via `ccw issue list --json`
2. **Menu**: Present action options via AskUserQuestion
3. **Route**: Execute selected action (List/View/Edit/Delete/Bulk)
4. **Loop**: Return to menu after each action

### Available Actions

| Action | Description | CLI Command |
|--------|-------------|-------------|
| List | Browse with filters | `ccw issue list --json` |
| View | Detail view | `ccw issue status <id> --json` |
| Edit | Modify fields | Update `issues.jsonl` |
| Delete | Remove issue | Clean up all related files |
| Bulk | Batch operations | Multi-select + batch update |

## Data Files

| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |

## Error Handling

| Error | Resolution |
|-------|------------|
| No issues found | Suggest creating with /issue:new |
| Issue not found | Show available issues, ask for correction |
| Invalid selection | Show error, re-prompt |
| Write failure | Check permissions, show error |

## Related Commands

- `/issue:new` - Create structured issue
- `/issue:plan` - Plan solution for issue
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute queued tasks

451 .claude/commands/issue/new.md Normal file
@@ -0,0 +1,451 @@
---
name: new
description: Create structured issue from GitHub URL or text description, extracting key elements into issues.jsonl
argument-hint: "<github-url | text-description> [--priority 1-5] [--labels label1,label2]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), WebFetch(*), AskUserQuestion(*)
---

# Issue New Command (/issue:new)

## Overview

Creates a new structured issue from either:
1. **GitHub Issue URL** - Fetches and parses issue content via `gh` CLI
2. **Text Description** - Parses natural language into structured fields

Outputs a well-formed issue entry to `.workflow/issues/issues.jsonl`.

## Issue Structure (Closed-Loop)

```typescript
interface Issue {
  id: string;                     // GH-123 or ISS-YYYYMMDD-HHMMSS
  title: string;                  // Issue title (clear, concise)
  status: 'registered';           // Initial status
  priority: number;               // 1 (critical) to 5 (low)
  context: string;                // Problem description
  source: 'github' | 'text';      // Input source type
  source_url?: string;            // GitHub URL if applicable
  labels?: string[];              // Categorization labels

  // Structured extraction
  problem_statement: string;      // What is the problem?
  expected_behavior?: string;     // What should happen?
  actual_behavior?: string;       // What actually happens?
  affected_components?: string[]; // Files/modules affected
  reproduction_steps?: string[];  // Steps to reproduce

  // Closed-loop requirements (guide plan generation)
  lifecycle_requirements: {
    test_strategy: 'unit' | 'integration' | 'e2e' | 'manual' | 'auto';
    regression_scope: 'affected' | 'related' | 'full'; // Which tests to run
    acceptance_type: 'automated' | 'manual' | 'both';  // How to verify
    commit_strategy: 'per-task' | 'squash' | 'atomic'; // Commit granularity
  };

  // Metadata
  bound_solution_id: null;
  solution_count: 0;
  created_at: string;
  updated_at: string;
}
```

## Lifecycle Requirements

The `lifecycle_requirements` field guides downstream commands (`/issue:plan`, `/issue:execute`):

| Field | Options | Purpose |
|-------|---------|---------|
| `test_strategy` | `unit`, `integration`, `e2e`, `manual`, `auto` | Which test types to generate |
| `regression_scope` | `affected`, `related`, `full` | Which tests to run for regression |
| `acceptance_type` | `automated`, `manual`, `both` | How to verify completion |
| `commit_strategy` | `per-task`, `squash`, `atomic` | Commit granularity |

> **Note**: Task structure (SolutionTask) is defined in `/issue:plan` - see `.claude/commands/issue/plan.md`

## Usage

```bash
# From GitHub URL
/issue:new https://github.com/owner/repo/issues/123

# From text description
/issue:new "Login fails when password contains special characters. Expected: successful login. Actual: 500 error. Affects src/auth/*"

# With options
/issue:new <url-or-text> --priority 2 --labels "bug,auth"
```

## Implementation

### Phase 1: Input Detection

```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --priority, --labels

// Detect input type
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
const isGitHubShort = input.match(/^#(\d+)$/); // #123 format

let issueData = {};

if (isGitHubUrl || isGitHubShort) {
  // GitHub issue - fetch via gh CLI
  issueData = await fetchGitHubIssue(input);
} else {
  // Text description - parse structure
  issueData = await parseTextDescription(input);
}
```

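The two detection patterns distinguish the input forms as follows; a quick check using the same regexes as above:

```javascript
// Same input-detection regexes as in Phase 1.
const isGitHubUrl = s => /github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/.test(s);
const isGitHubShort = s => /^#(\d+)$/.test(s);

console.assert(isGitHubUrl('https://github.com/owner/repo/issues/123') === true);
console.assert(isGitHubShort('#123') === true);
console.assert(isGitHubUrl('Login fails with 500 error') === false);
console.assert(isGitHubShort('123') === false); // short form requires the leading '#'
```

Anything that matches neither pattern falls through to the text-description parser.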
### Phase 2: GitHub Issue Fetching

```javascript
async function fetchGitHubIssue(urlOrNumber) {
  let issueRef;

  if (urlOrNumber.startsWith('http')) {
    // Validate the URL, then pass it to gh directly (gh accepts full issue URLs)
    const match = urlOrNumber.match(/github\.com\/([\w-]+)\/([\w-]+)\/issues\/(\d+)/);
    if (!match) throw new Error('Invalid GitHub URL');
    issueRef = urlOrNumber;
  } else {
    // #123 format - use current repo
    issueRef = urlOrNumber.replace('#', '');
  }

  // Fetch via gh CLI
  const result = Bash(`gh issue view ${issueRef} --json number,title,body,labels,state,url`);
  const ghIssue = JSON.parse(result);

  // Parse body for structure
  const parsed = parseIssueBody(ghIssue.body);

  return {
    id: `GH-${ghIssue.number}`,
    title: ghIssue.title,
    source: 'github',
    source_url: ghIssue.url,
    labels: ghIssue.labels.map(l => l.name),
    context: ghIssue.body,
    ...parsed
  };
}

function parseIssueBody(body) {
  // Extract structured sections from markdown body
  const sections = {};

  // Problem/Description
  const problemMatch = body.match(/##?\s*(problem|description|issue)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (problemMatch) sections.problem_statement = problemMatch[2].trim();

  // Expected behavior
  const expectedMatch = body.match(/##?\s*(expected|should)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (expectedMatch) sections.expected_behavior = expectedMatch[2].trim();

  // Actual behavior
  const actualMatch = body.match(/##?\s*(actual|current)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (actualMatch) sections.actual_behavior = actualMatch[2].trim();

  // Steps to reproduce
  const stepsMatch = body.match(/##?\s*(steps|reproduce)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (stepsMatch) {
    const stepsText = stepsMatch[2].trim();
    sections.reproduction_steps = stepsText
      .split('\n')
      .filter(line => line.match(/^\s*[\d\-\*]/))
      .map(line => line.replace(/^\s*[\d\.\-\*]\s*/, '').trim());
  }

  // Affected components (from file references)
  const fileMatches = body.match(/`[^`]*\.(ts|js|tsx|jsx|py|go|rs)[^`]*`/g);
  if (fileMatches) {
    sections.affected_components = [...new Set(fileMatches.map(f => f.replace(/`/g, '')))];
  }

  // Fallback: use entire body as problem statement
  if (!sections.problem_statement) {
    sections.problem_statement = body.substring(0, 500);
  }

  return sections;
}
```

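A condensed, runnable excerpt of the section-extraction logic, showing what `parseIssueBody` pulls out of a typical markdown body (only the first two regexes are reproduced here):

```javascript
// Condensed excerpt of parseIssueBody: the same section regexes, on a sample body.
function extractSections(body) {
  const sections = {};
  const problem = body.match(/##?\s*(problem|description|issue)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (problem) sections.problem_statement = problem[2].trim();
  const expected = body.match(/##?\s*(expected|should)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (expected) sections.expected_behavior = expected[2].trim();
  return sections;
}

const body = '## Problem\nLogin fails for SSO users\n\n## Expected\nLogin succeeds';
const s = extractSections(body);
console.assert(s.problem_statement === 'Login fails for SSO users');
console.assert(s.expected_behavior === 'Login succeeds');
```

The lazy `([\s\S]*?)` with the `(?=##|$)` lookahead stops each section at the next heading or at the end of the body.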
### Phase 3: Text Description Parsing

```javascript
async function parseTextDescription(text) {
  // Generate unique ID in ISS-YYYYMMDD-HHMMSS format
  const ts = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
  const id = `ISS-${ts.slice(0, 8)}-${ts.slice(8)}`;

  // Extract structured elements using patterns
  const result = {
    id,
    source: 'text',
    title: '',
    problem_statement: '',
    expected_behavior: null,
    actual_behavior: null,
    affected_components: [],
    reproduction_steps: []
  };

  // Pattern: "Title. Description. Expected: X. Actual: Y. Affects: files"
  const sentences = text.split(/\.(?=\s|$)/);

  // First sentence as title
  result.title = sentences[0]?.trim() || 'Untitled Issue';

  // Look for keywords
  for (const sentence of sentences) {
    const s = sentence.trim();

    if (s.match(/^expected:?\s*/i)) {
      result.expected_behavior = s.replace(/^expected:?\s*/i, '');
    } else if (s.match(/^actual:?\s*/i)) {
      result.actual_behavior = s.replace(/^actual:?\s*/i, '');
    } else if (s.match(/^affects?:?\s*/i)) {
      const components = s.replace(/^affects?:?\s*/i, '').split(/[,\s]+/);
      result.affected_components = components.filter(c => c.includes('/') || c.includes('.'));
    } else if (s.match(/^steps?:?\s*/i)) {
      result.reproduction_steps = s.replace(/^steps?:?\s*/i, '').split(/[,;]/);
    } else if (!result.problem_statement && s.length > 10) {
      result.problem_statement = s;
    }
  }

  // Fallback problem statement
  if (!result.problem_statement) {
    result.problem_statement = text.substring(0, 300);
  }

  return result;
}
```
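The keyword extraction above can be exercised on the sample from Usage, using the same sentence split and prefix regexes:

```javascript
// Same sentence split and keyword prefixes as parseTextDescription.
const text = 'API rate limiting not working. Expected: 429 after 100 requests. Actual: No limit.';
const sentences = text.split(/\.(?=\s|$)/).map(s => s.trim()).filter(Boolean);

const title = sentences[0];
const expected = sentences.find(s => /^expected:?/i.test(s))?.replace(/^expected:?\s*/i, '');
const actual = sentences.find(s => /^actual:?/i.test(s))?.replace(/^actual:?\s*/i, '');

console.assert(title === 'API rate limiting not working');
console.assert(expected === '429 after 100 requests');
console.assert(actual === 'No limit');
```

Splitting on `\.(?=\s|$)` rather than a bare `.` keeps decimal points and file extensions like `rate-limit.ts` inside one sentence.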

### Phase 4: Lifecycle Configuration

```javascript
// Ask for lifecycle requirements (or use smart defaults)
const lifecycleAnswer = AskUserQuestion({
  questions: [
    {
      question: 'Test strategy for this issue?',
      header: 'Test',
      multiSelect: false,
      options: [
        { label: 'auto', description: 'Auto-detect based on affected files (Recommended)' },
        { label: 'unit', description: 'Unit tests only' },
        { label: 'integration', description: 'Integration tests' },
        { label: 'e2e', description: 'End-to-end tests' },
        { label: 'manual', description: 'Manual testing only' }
      ]
    },
    {
      question: 'Regression scope?',
      header: 'Regression',
      multiSelect: false,
      options: [
        { label: 'affected', description: 'Only affected module tests (Recommended)' },
        { label: 'related', description: 'Affected + dependent modules' },
        { label: 'full', description: 'Full test suite' }
      ]
    },
    {
      question: 'Commit strategy?',
      header: 'Commit',
      multiSelect: false,
      options: [
        { label: 'per-task', description: 'One commit per task (Recommended)' },
        { label: 'atomic', description: 'Single commit for entire issue' },
        { label: 'squash', description: 'Squash at the end' }
      ]
    }
  ]
});

const lifecycle = {
  test_strategy: lifecycleAnswer.test || 'auto',
  regression_scope: lifecycleAnswer.regression || 'affected',
  acceptance_type: 'automated',
  commit_strategy: lifecycleAnswer.commit || 'per-task'
};

issueData.lifecycle_requirements = lifecycle;
```

### Phase 5: User Confirmation

```javascript
// Show parsed data and ask for confirmation
console.log(`
## Parsed Issue

**ID**: ${issueData.id}
**Title**: ${issueData.title}
**Source**: ${issueData.source}${issueData.source_url ? ` (${issueData.source_url})` : ''}

### Problem Statement
${issueData.problem_statement}

${issueData.expected_behavior ? `### Expected Behavior\n${issueData.expected_behavior}\n` : ''}
${issueData.actual_behavior ? `### Actual Behavior\n${issueData.actual_behavior}\n` : ''}
${issueData.affected_components?.length ? `### Affected Components\n${issueData.affected_components.map(c => `- ${c}`).join('\n')}\n` : ''}
${issueData.reproduction_steps?.length ? `### Reproduction Steps\n${issueData.reproduction_steps.map((s, i) => `${i+1}. ${s}`).join('\n')}\n` : ''}

### Lifecycle Configuration
- **Test Strategy**: ${lifecycle.test_strategy}
- **Regression Scope**: ${lifecycle.regression_scope}
- **Commit Strategy**: ${lifecycle.commit_strategy}
`);

// Ask user to confirm or edit
const answer = AskUserQuestion({
  questions: [{
    question: 'Create this issue?',
    header: 'Confirm',
    multiSelect: false,
    options: [
      { label: 'Create', description: 'Save issue to issues.jsonl' },
      { label: 'Edit Title', description: 'Modify the issue title' },
      { label: 'Edit Priority', description: 'Change priority (1-5)' },
      { label: 'Cancel', description: 'Discard and exit' }
    ]
  }]
});

if (answer.includes('Cancel')) {
  console.log('Issue creation cancelled.');
  return;
}

if (answer.includes('Edit Title')) {
  const titleAnswer = AskUserQuestion({
    questions: [{
      question: 'Enter new title:',
      header: 'Title',
      multiSelect: false,
      options: [
        { label: issueData.title.substring(0, 40), description: 'Keep current' }
      ]
    }]
  });
  // Handle custom input via "Other"
  if (titleAnswer.customText) {
    issueData.title = titleAnswer.customText;
  }
}
```

### Phase 6: Write to JSONL

```javascript
// Construct final issue object
const priority = flags.priority ? parseInt(flags.priority) : 3;
const labels = flags.labels ? flags.labels.split(',').map(l => l.trim()) : [];

const newIssue = {
  id: issueData.id,
  title: issueData.title,
  status: 'registered',
  priority,
  context: issueData.problem_statement,
  source: issueData.source,
  source_url: issueData.source_url || null,
  labels: [...(issueData.labels || []), ...labels],

  // Structured fields
  problem_statement: issueData.problem_statement,
  expected_behavior: issueData.expected_behavior || null,
  actual_behavior: issueData.actual_behavior || null,
  affected_components: issueData.affected_components || [],
  reproduction_steps: issueData.reproduction_steps || [],

  // Closed-loop lifecycle requirements
  lifecycle_requirements: issueData.lifecycle_requirements || {
    test_strategy: 'auto',
    regression_scope: 'affected',
    acceptance_type: 'automated',
    commit_strategy: 'per-task'
  },

  // Metadata
  bound_solution_id: null,
  solution_count: 0,
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString()
};

// Ensure directory exists
Bash('mkdir -p .workflow/issues');

// Append to issues.jsonl
// Note: echo with single quotes breaks if the serialized JSON itself contains
// a single quote; prefer appending via the Write tool or a stdin pipe.
const issuesPath = '.workflow/issues/issues.jsonl';
Bash(`echo '${JSON.stringify(newIssue)}' >> "${issuesPath}"`);

console.log(`
## Issue Created

**ID**: ${newIssue.id}
**Title**: ${newIssue.title}
**Priority**: ${newIssue.priority}
**Labels**: ${newIssue.labels.join(', ') || 'none'}
**Source**: ${newIssue.source}

### Next Steps
1. Plan solution: \`/issue:plan ${newIssue.id}\`
2. View details: \`ccw issue status ${newIssue.id}\`
3. Manage issues: \`/issue:manage\`
`);
```

## Examples

### GitHub Issue

```bash
/issue:new https://github.com/myorg/myrepo/issues/42 --priority 2

# Output:
## Issue Created
**ID**: GH-42
**Title**: Fix memory leak in WebSocket handler
**Priority**: 2
**Labels**: bug, performance
**Source**: github (https://github.com/myorg/myrepo/issues/42)
```

### Text Description

```bash
/issue:new "API rate limiting not working. Expected: 429 after 100 requests. Actual: No limit. Affects src/middleware/rate-limit.ts"

# Output:
## Issue Created
**ID**: ISS-20251227-142530
**Title**: API rate limiting not working
**Priority**: 3
**Labels**: none
**Source**: text
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Invalid GitHub URL | Show format hint, ask for correction |
| gh CLI not available | Fall back to WebFetch for public issues |
| Empty description | Prompt user for required fields |
| Duplicate issue ID | Auto-increment or suggest merge |
| Parse failure | Show raw input, ask for manual structuring |

## Related Commands

- `/issue:plan` - Plan solution for issue
- `/issue:manage` - Interactive issue management
- `ccw issue list` - List all issues
- `ccw issue status <id>` - View issue details

268 .claude/commands/issue/plan.md Normal file
@@ -0,0 +1,268 @@
---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] [--batch-size 3] [--all-pending]"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---

# Issue Plan Command (/issue:plan)

## Overview

Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow.

## Output Requirements

**Generate Files:**
1. `.workflow/issues/solutions/{issue-id}.jsonl` - Solution with tasks for each issue

**Return Summary:**
```json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [...] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
}
```

**Completion Criteria:**
- [ ] Solution file generated for each issue
- [ ] Single solution → auto-bound via `ccw issue bind`
- [ ] Multiple solutions → returned for user selection
- [ ] Tasks conform to schema: `cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json`
- [ ] Each task has quantified `delivery_criteria`

## Core Capabilities

- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and delivery criteria
- Automatic solution registration and binding

## Storage Structure (Flat JSONL)

```
.workflow/issues/
├── issues.jsonl             # All issues (one per line)
├── queue.json               # Execution queue
└── solutions/
    ├── {issue-id}.jsonl     # Solutions for issue (one per line)
    └── ...
```

## Usage

```bash
/issue:plan <issue-id>[,<issue-id>,...] [FLAGS]

# Examples
/issue:plan GH-123                 # Single issue
/issue:plan GH-123,GH-124,GH-125   # Batch (up to 3)
/issue:plan --all-pending          # All pending issues

# Flags
--batch-size <n>    Max issues per agent batch (default: 3)
```

## Execution Process

```
Phase 1: Issue Loading
├─ Parse input (single, comma-separated, or --all-pending)
├─ Load issues from .workflow/issues/issues.jsonl
├─ Validate issues exist (create if needed)
└─ Group into batches (max 3 per batch)

Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
├─ Agent performs:
│  ├─ ACE semantic search for each issue
│  ├─ Codebase exploration (files, patterns, dependencies)
│  ├─ Solution generation with task breakdown
│  └─ Conflict detection across issues
└─ Output: solution JSON per issue

Phase 3: Solution Registration & Binding
├─ Append solutions to solutions/{issue-id}.jsonl
├─ Single solution per issue → auto-bind
├─ Multiple candidates → AskUserQuestion to select
└─ Update issues.jsonl with bound_solution_id

Phase 4: Summary
├─ Display bound solutions
├─ Show task counts per issue
└─ Display next steps (/issue:queue)
```

## Implementation

### Phase 1: Issue Loading (IDs Only)

```javascript
const batchSize = flags.batchSize || 3;
let issueIds = [];

if (flags.allPending) {
  // Get pending issue IDs directly via CLI
  const ids = Bash(`ccw issue list --status pending,registered --ids`).trim();
  issueIds = ids ? ids.split('\n').filter(Boolean) : [];

  if (issueIds.length === 0) {
    console.log('No pending issues found.');
    return;
  }
  console.log(`Found ${issueIds.length} pending issues`);
} else {
  // Parse comma-separated issue IDs
  issueIds = userInput.includes(',')
    ? userInput.split(',').map(s => s.trim())
    : [userInput.trim()];

  // Create if not exists
  for (const id of issueIds) {
    Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
  }
}

// Group into batches
const batches = [];
for (let i = 0; i < issueIds.length; i += batchSize) {
  batches.push(issueIds.slice(i, i + batchSize));
}

console.log(`Processing ${issueIds.length} issues in ${batches.length} batch(es)`);

TodoWrite({
  todos: batches.map((_, i) => ({
    content: `Plan batch ${i+1}`,
    status: 'pending',
    activeForm: `Planning batch ${i+1}`
  }))
});
```

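The batch grouping above slices the ID list into fixed-size chunks; a quick check of that slicing logic in isolation:

```javascript
// Same chunking logic as Phase 1: group IDs into batches of at most batchSize.
function groupIntoBatches(ids, batchSize = 3) {
  const batches = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    batches.push(ids.slice(i, i + batchSize));
  }
  return batches;
}

const batches = groupIntoBatches(['GH-1', 'GH-2', 'GH-3', 'GH-4'], 3);
console.assert(batches.length === 2);
console.assert(batches[0].length === 3 && batches[1].length === 1);
```

Four issues with the default batch size of 3 therefore produce two agent invocations: one with three issues and one with the remainder.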

### Phase 2: Unified Explore + Plan (issue-plan-agent)

```javascript
Bash(`mkdir -p .workflow/issues/solutions`);
const pendingSelections = []; // Collect multi-solution issues for user selection

for (const [batchIndex, batch] of batches.entries()) {
  updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');

  // Build minimal prompt - agent handles exploration, planning, and binding
  const issuePrompt = `
## Plan Issues

**Issue IDs**: ${batch.join(', ')}
**Project Root**: ${process.cwd()}

### Steps
1. Fetch: \`ccw issue status <id> --json\`
2. Explore (ACE) → Plan solution
3. Register & bind: \`ccw issue bind <id> --solution <file>\`

### Generate Files
\`.workflow/issues/solutions/{issue-id}.jsonl\` - Solution with tasks (schema: cat .claude/workflows/cli-templates/schemas/issue-task-jsonl-schema.json)

### Binding Rules
- **Single solution**: Auto-bind via \`ccw issue bind <id> --solution <file>\`
- **Multiple solutions**: Register only, return for user selection

### Return Summary
\`\`\`json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "description": "...", "task_count": N }] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
}
\`\`\`
`;

  // Launch issue-plan-agent - agent writes solutions directly
  const result = Task(
    subagent_type="issue-plan-agent",
    run_in_background=false,
    description=`Explore & plan ${batch.length} issues`,
    prompt=issuePrompt
  );

  // Parse summary from agent
  const summary = JSON.parse(result);

  // Display auto-bound solutions
  for (const item of summary.bound || []) {
    console.log(`✓ ${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
  }

  // Collect pending selections for Phase 3
  pendingSelections.push(...(summary.pending_selection || []));

  // Show conflicts
  if (summary.conflicts?.length > 0) {
    console.log(`⚠ Conflicts: ${summary.conflicts.map(c => c.file).join(', ')}`);
  }

  updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
```

### Phase 3: Multi-Solution Selection

```javascript
// Only handle issues where agent generated multiple solutions
if (pendingSelections.length > 0) {
  const answer = AskUserQuestion({
    questions: pendingSelections.map(({ issue_id, solutions }) => ({
      question: `Select solution for ${issue_id}:`,
      header: issue_id,
      multiSelect: false,
      options: solutions.map(s => ({
        label: `${s.id} (${s.task_count} tasks)`,
        description: s.description
      }))
    }))
  });

  // Bind user-selected solutions
  for (const { issue_id } of pendingSelections) {
    const selectedId = extractSelectedSolutionId(answer, issue_id);
    if (selectedId) {
      Bash(`ccw issue bind ${issue_id} ${selectedId}`);
      console.log(`✓ ${issue_id}: ${selectedId} bound`);
    }
  }
}
```

### Phase 4: Summary

```javascript
// Count planned issues via CLI
const plannedIds = Bash(`ccw issue list --status planned --ids`).trim();
const plannedCount = plannedIds ? plannedIds.split('\n').length : 0;

console.log(`
## Done: ${issueIds.length} issues → ${plannedCount} planned

Next: \`/issue:queue\` → \`/issue:execute\`
`);
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |

## Related Commands

- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue list` - List all issues
- `ccw issue status` - View issue and solution details

294 .claude/commands/issue/queue.md Normal file
@@ -0,0 +1,294 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---

# Issue Queue Command (/issue:queue)

## Overview

Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves conflicts, and creates an ordered execution queue.

## Output Requirements

**Generate Files:**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with tasks, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry

**Return Summary:**
```json
{
  "queue_id": "QUE-20251227-143000",
  "total_tasks": N,
  "execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
  "conflicts_resolved": N,
  "issues_queued": ["GH-123", "GH-124"]
}
```

**Completion Criteria:**
- [ ] Queue JSON generated with valid DAG (no cycles)
- [ ] All file conflicts resolved with rationale
- [ ] Semantic priority calculated for all tasks
- [ ] Execution groups assigned (parallel P* / sequential S*)
- [ ] Issue statuses updated to `queued` via `ccw issue update`

## Core Capabilities

- **Agent-driven**: issue-queue-agent handles all ordering logic
- Dependency DAG construction and cycle detection
- File conflict detection and resolution
- Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
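Cycle detection on the task dependency DAG can be done with Kahn's algorithm (topological sort). A minimal sketch; the task shape with `depends_on` is illustrative and assumes all referenced IDs are in the task list:

```javascript
// Minimal cycle check via Kahn's algorithm.
// tasks: [{ id, depends_on: [ids] }] -- illustrative task shape.
function hasCycle(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, t.depends_on.length]));
  const dependents = new Map(tasks.map(t => [t.id, []]));
  for (const t of tasks) {
    for (const dep of t.depends_on) dependents.get(dep).push(t.id);
  }
  // Start from tasks with no dependencies, peel the graph layer by layer.
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id);
  let visited = 0;
  while (queue.length) {
    const id = queue.shift();
    visited++;
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  return visited !== tasks.length; // unvisited tasks imply a cycle
}
```

The same peeling order also yields a valid sequential execution order; tasks peeled in the same layer are candidates for a parallel group.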
|
||||
## Storage Structure (Queue History)
|
||||
|
||||
```
|
||||
.workflow/issues/
|
||||
├── issues.jsonl # All issues (one per line)
|
||||
├── queues/ # Queue history directory
|
||||
│ ├── index.json # Queue index (active + history)
|
||||
│ ├── {queue-id}.json # Individual queue files
|
||||
│ └── ...
|
||||
└── solutions/
|
||||
├── {issue-id}.jsonl # Solutions for issue
|
||||
└── ...
|
||||
```
|
||||
|
||||
### Queue Index Schema

```json
{
  "active_queue_id": "QUE-20251227-143000",
  "queues": [
    {
      "id": "QUE-20251227-143000",
      "status": "active",
      "issue_ids": ["GH-123", "GH-124"],
      "total_tasks": 8,
      "completed_tasks": 3,
      "created_at": "2025-12-27T14:30:00Z"
    },
    {
      "id": "QUE-20251226-100000",
      "status": "completed",
      "issue_ids": ["GH-120"],
      "total_tasks": 5,
      "completed_tasks": 5,
      "created_at": "2025-12-26T10:00:00Z",
      "completed_at": "2025-12-26T12:30:00Z"
    }
  ]
}
```
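
Resolving the active queue from this index is a one-line lookup plus a progress ratio; a minimal sketch, assuming the index has already been parsed into an object of the shape shown above:

```javascript
// Minimal sketch: resolve the active queue entry from a parsed index.json
// object. Pure function, no file I/O; the field names follow the schema above.
function resolveActiveQueue(index) {
  const active = index.queues.find(q => q.id === index.active_queue_id);
  if (!active) return null;
  const progress = active.total_tasks > 0
    ? active.completed_tasks / active.total_tasks
    : 0;
  return { id: active.id, status: active.status, progress };
}

const index = {
  active_queue_id: "QUE-20251227-143000",
  queues: [
    { id: "QUE-20251227-143000", status: "active", total_tasks: 8, completed_tasks: 3 }
  ]
};
console.log(resolveActiveQueue(index));
// { id: 'QUE-20251227-143000', status: 'active', progress: 0.375 }
```

Switching queues (`ccw issue queue switch`) then amounts to rewriting `active_queue_id` and persisting the index.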

## Usage

```bash
/issue:queue [FLAGS]

# Examples
/issue:queue                   # Form NEW queue from all bound solutions
/issue:queue --issue GH-123    # Form queue for specific issue only
/issue:queue --append GH-124   # Append to active queue
/issue:queue --list            # List all queues (history)
/issue:queue --switch QUE-xxx  # Switch active queue
/issue:queue --archive         # Archive completed active queue

# Flags
--issue <id>     Form queue for specific issue only
--append <id>    Append issue to active queue (don't create new)

# CLI subcommands (ccw issue queue ...)
ccw issue queue list                 List all queues with status
ccw issue queue switch <queue-id>    Switch active queue
ccw issue queue archive              Archive current queue
ccw issue queue delete <queue-id>    Delete queue from history
```

## Execution Process

```
Phase 1: Solution Loading
├─ Load issues.jsonl
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
└─ Extract tasks from bound solutions

Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all tasks
├─ Agent performs:
│  ├─ Build dependency DAG from depends_on
│  ├─ Detect circular dependencies
│  ├─ Identify file modification conflicts
│  ├─ Resolve conflicts using ordering rules
│  ├─ Calculate semantic priority (0.0-1.0)
│  └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered tasks

Phase 5: Queue Output
├─ Write queue.json
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```
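
The "Detect circular dependencies" step above can be sketched as a depth-first search over the `depends_on` edges; a hypothetical standalone version, assuming each task record carries a `key` and a `depends_on` list:

```javascript
// Hedged sketch of the cycle check: tasks are assumed to be
// { key, depends_on } records; returns true if any depends_on
// chain loops back on itself (a back edge in the DFS).
function hasCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.key, t.depends_on || []]));
  const state = new Map(); // key -> 'visiting' | 'done'
  const visit = (key) => {
    if (state.get(key) === 'done') return false;
    if (state.get(key) === 'visiting') return true; // back edge => cycle
    state.set(key, 'visiting');
    for (const dep of deps.get(key) || []) {
      if (visit(dep)) return true;
    }
    state.set(key, 'done');
    return false;
  };
  return tasks.some(t => visit(t.key));
}

// Acyclic chain vs. a two-task loop:
console.log(hasCycle([
  { key: 'A', depends_on: ['B'] },
  { key: 'B', depends_on: [] }
])); // false
console.log(hasCycle([
  { key: 'A', depends_on: ['B'] },
  { key: 'B', depends_on: ['A'] }
])); // true
```

If this returns true, queue formation aborts, matching the "abort if found" rule the agent is given below.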

## Implementation

### Phase 1: Solution Loading

```javascript
// Load issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

// Filter issues with bound solutions
const plannedIssues = allIssues.filter(i =>
  i.status === 'planned' && i.bound_solution_id
);

if (plannedIssues.length === 0) {
  console.log('No issues with bound solutions found.');
  console.log('Run /issue:plan first to create and bind solutions.');
  return;
}

// Load all tasks from bound solutions
const allTasks = [];
for (const issue of plannedIssues) {
  const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
  const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));

  // Find bound solution
  const boundSol = solutions.find(s => s.id === issue.bound_solution_id);

  if (!boundSol) {
    console.log(`⚠ Bound solution ${issue.bound_solution_id} not found for ${issue.id}`);
    continue;
  }

  for (const task of boundSol.tasks || []) {
    allTasks.push({
      issue_id: issue.id,
      solution_id: issue.bound_solution_id,
      task,
      exploration_context: boundSol.exploration_context
    });
  }
}

console.log(`Loaded ${allTasks.length} tasks from ${plannedIssues.length} issues`);
```

### Phase 2-4: Agent-Driven Queue Formation

```javascript
// Build minimal prompt - agent reads schema and handles ordering
const agentPrompt = `
## Order Tasks

**Tasks**: ${allTasks.length} from ${plannedIssues.length} issues
**Project Root**: ${process.cwd()}

### Input
\`\`\`json
${JSON.stringify(allTasks.map(t => ({
  key: \`\${t.issue_id}:\${t.task.id}\`,
  type: t.task.type,
  file_context: t.task.file_context,
  depends_on: t.task.depends_on
})), null, 2)}
\`\`\`

### Steps
1. Parse tasks: Extract task keys, types, file contexts, dependencies
2. Build DAG: Construct dependency graph from depends_on references
3. Detect cycles: Verify no circular dependencies exist (abort if found)
4. Detect conflicts: Identify file modification conflicts across issues
5. Resolve conflicts: Apply ordering rules (Create→Update→Delete, config→src→tests)
6. Calculate priority: Compute semantic priority (0.0-1.0) for each task
7. Assign groups: Assign parallel (P*) or sequential (S*) execution groups
8. Generate queue: Write queue JSON with ordered tasks
9. Update index: Update queues/index.json with new queue entry

### Rules
- **DAG Validity**: Output must be valid DAG with no circular dependencies
- **Conflict Resolution**: All file conflicts must be resolved with rationale
- **Ordering Priority**:
  1. Create before Update (files must exist before modification)
  2. Foundation before integration (config/ → src/)
  3. Types before implementation (types/ → components/)
  4. Core before tests (src/ → __tests__/)
  5. Delete last (preserve dependencies until no longer needed)
- **Parallel Safety**: Tasks in same parallel group must have no file conflicts
- **Queue ID Format**: \`QUE-YYYYMMDD-HHMMSS\` (UTC timestamp)

### Generate Files
1. \`.workflow/issues/queues/\${queueId}.json\` - Full queue (schema: cat .claude/workflows/cli-templates/schemas/queue-schema.json)
2. \`.workflow/issues/queues/index.json\` - Update with new entry

### Return Summary
\`\`\`json
{
  "queue_id": "QUE-YYYYMMDD-HHMMSS",
  "total_tasks": N,
  "execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
  "conflicts_resolved": N,
  "issues_queued": ["GH-123"]
}
\`\`\`
`;

const result = Task(
  subagent_type="issue-queue-agent",
  run_in_background=false,
  description=`Order ${allTasks.length} tasks`,
  prompt=agentPrompt
);

const summary = JSON.parse(result);
```

### Phase 5: Summary & Status Update

```javascript
// Agent already generated queue files, use summary
console.log(`
## Queue Formed: ${summary.queue_id}

**Tasks**: ${summary.total_tasks}
**Issues**: ${summary.issues_queued.join(', ')}
**Groups**: ${summary.execution_groups.map(g => `${g.id}(${g.count})`).join(', ')}
**Conflicts Resolved**: ${summary.conflicts_resolved}

Next: \`/issue:execute\`
`);

// Update issue statuses via CLI
for (const issueId of summary.issues_queued) {
  Bash(`ccw issue update ${issueId} --status queued`);
}
```

## Error Handling

| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest /issue:plan |
| Circular dependency | List cycles, abort queue formation |
| Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn |

## Related Commands

- `/issue:plan` - Plan issues and bind solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue queue list` - View current queue

@@ -410,7 +410,6 @@ Task(subagent_type="{meta.agent}",
1. Read complete task JSON: {session.task_json_path}
2. Load context package: {session.context_package_path}

Follow complete execution guidelines in @.claude/agents/{meta.agent}.md

**Session Paths**:
- Workflow Dir: {session.workflow_dir}

@@ -15,7 +15,6 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `context-search-agent` to gather comprehensive project context for implementation planning. Generates standardized `context-package.json` with codebase analysis, dependencies, and conflict detection.

**Agent**: `context-search-agent` (`.claude/agents/context-search-agent.md`)

## Core Philosophy

@@ -431,6 +430,5 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {

- **Detection-first**: Always check for existing package before invoking agent
- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

@@ -239,15 +239,6 @@ If conflict_risk was medium/high, modifications have been applied to:

**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.

Refer to: @.claude/agents/action-planning-agent.md for:
- TDD Task Decomposition Standards
- Red-Green-Refactor Cycle Requirements
- Quantification Requirements (MANDATORY)
- 5-Field Task JSON Schema
- IMPL_PLAN.md Structure (TDD variant)
- TODO_LIST.md Format
- TDD Execution Flow & Quality Validation

### TDD-Specific Requirements Summary

#### Task Structure Philosophy

@@ -14,7 +14,7 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.

**Agent**: `test-context-search-agent` (`.claude/agents/test-context-search-agent.md`)

## Core Philosophy

@@ -89,7 +89,6 @@ Task(
run_in_background=false,
description="Gather test coverage context",
prompt=`
You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).

## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
@@ -229,7 +228,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
## Notes

- **Detection-first**: Always check for existing test-context-package before invoking agent
- **Agent autonomy**: Agent handles all coverage analysis logic per `.claude/agents/test-context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
- **Coverage focus**: Primary goal is identifying implementation files without tests

@@ -107,8 +107,6 @@ CRITICAL:
- Follow the progressive loading strategy defined in your agent specification (load context incrementally from memory-first approach)

## AGENT CONFIGURATION REFERENCE
All test task generation rules, schemas, and quality standards are defined in your agent specification:
@.claude/agents/action-planning-agent.md

Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)

@@ -806,8 +806,6 @@ Use `analysis_results.complexity` or task count to determine structure:
**Examples**:
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`

### 3.2 Planning & Organization Standards

@@ -400,7 +400,7 @@ Task(subagent_type="{meta.agent}",
1. Read complete task JSON: {session.task_json_path}
2. Load context package: {session.context_package_path}

Follow complete execution guidelines in @.claude/agents/{meta.agent}.md

**Session Paths**:
- Workflow Dir: {session.workflow_dir}

@@ -15,7 +15,6 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `context-search-agent` to gather comprehensive project context for implementation planning. Generates standardized `context-package.json` with codebase analysis, dependencies, and conflict detection.

**Agent**: `context-search-agent` (`.claude/agents/context-search-agent.md`)

## Core Philosophy

@@ -429,6 +428,6 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {

- **Detection-first**: Always check for existing package before invoking agent
- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

@@ -238,14 +238,7 @@ If conflict_risk was medium/high, modifications have been applied to:

**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.

Refer to: @.claude/agents/action-planning-agent.md for:
- TDD Task Decomposition Standards
- Red-Green-Refactor Cycle Requirements
- Quantification Requirements (MANDATORY)
- 5-Field Task JSON Schema
- IMPL_PLAN.md Structure (TDD variant)
- TODO_LIST.md Format
- TDD Execution Flow & Quality Validation

### TDD-Specific Requirements Summary

@@ -14,8 +14,6 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.

**Agent**: `test-context-search-agent` (`.claude/agents/test-context-search-agent.md`)

## Core Philosophy

- **Agent Delegation**: Delegate all test coverage analysis to `test-context-search-agent` for autonomous execution
@@ -88,7 +86,6 @@ Task(
subagent_type="test-context-search-agent",
description="Gather test coverage context",
prompt=`
You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).

## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
@@ -228,7 +225,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
## Notes

- **Detection-first**: Always check for existing test-context-package before invoking agent
- **Agent autonomy**: Agent handles all coverage analysis logic per `.claude/agents/test-context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
- **Coverage focus**: Primary goal is identifying implementation files without tests

@@ -106,8 +106,6 @@ CRITICAL:
- Follow the progressive loading strategy defined in your agent specification (load context incrementally from memory-first approach)

## AGENT CONFIGURATION REFERENCE
All test task generation rules, schemas, and quality standards are defined in your agent specification:
@.claude/agents/action-planning-agent.md

Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)

150 .claude/skills/copyright-docs/phases/01.5-project-exploration.md Normal file
@@ -0,0 +1,150 @@
# Phase 1.5: Project Exploration

Based on the collected metadata, launch parallel exploration agents to gather code information.

## Execution

### Step 1: Intelligent Angle Selection

```javascript
// Select exploration angles based on the software category
const ANGLE_PRESETS = {
  'CLI': ['architecture', 'commands', 'algorithms', 'exceptions'],
  'API': ['architecture', 'endpoints', 'data-structures', 'interfaces'],
  'SDK': ['architecture', 'interfaces', 'data-structures', 'algorithms'],
  'DataProcessing': ['architecture', 'algorithms', 'data-structures', 'dataflow'],
  'Automation': ['architecture', 'algorithms', 'exceptions', 'dataflow']
};

// Map metadata.category to a preset key
function getCategoryKey(category) {
  if (category.includes('CLI') || category.includes('命令行')) return 'CLI';
  if (category.includes('API') || category.includes('后端')) return 'API';
  if (category.includes('SDK') || category.includes('库')) return 'SDK';
  if (category.includes('数据处理')) return 'DataProcessing';
  if (category.includes('自动化')) return 'Automation';
  return 'API'; // default
}

const categoryKey = getCategoryKey(metadata.category);
const selectedAngles = ANGLE_PRESETS[categoryKey];

console.log(`
## Exploration Plan

Software: ${metadata.software_name}
Category: ${metadata.category} → ${categoryKey}
Selected Angles: ${selectedAngles.join(', ')}

Launching ${selectedAngles.length} parallel explorations...
`);
```

### Step 2: Launch Parallel Agents (Direct Output)

**⚠️ CRITICAL**: Agents write output files directly.

```javascript
const explorationTasks = selectedAngles.map((angle, index) =>
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Explore: ${angle}`,
    prompt: `
## Exploration Objective
Perform **${angle}** exploration for the CPCC software-copyright application document.

## Assigned Context
- **Exploration Angle**: ${angle}
- **Software Name**: ${metadata.software_name}
- **Scope Path**: ${metadata.scope_path}
- **Category**: ${metadata.category}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json

## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files)
3. Analyze from ${angle} perspective

## Exploration Strategy (${angle} focus)

**Step 1: Structural Scan**
- Identify modules and files related to ${angle}
- Analyze import/export relationships

**Step 2: Pattern Recognition**
- Design patterns related to ${angle}
- Code organization style

**Step 3: Write Output**
- Write the JSON to the specified output path

## Expected Output Schema

**File**: ${sessionFolder}/exploration-${angle}.json

\`\`\`json
{
  "angle": "${angle}",
  "findings": {
    "structure": [
      { "component": "...", "type": "module|layer|service", "path": "...", "description": "..." }
    ],
    "patterns": [
      { "name": "...", "usage": "...", "files": ["path1", "path2"] }
    ],
    "key_files": [
      { "path": "src/file.ts", "relevance": 0.85, "rationale": "Core ${angle} logic" }
    ]
  },
  "insights": [
    { "observation": "...", "cpcc_section": "2|3|4|5|6|7", "recommendation": "..." }
  ],
  "_metadata": {
    "exploration_angle": "${angle}",
    "exploration_index": ${index + 1},
    "software_name": "${metadata.software_name}",
    "timestamp": "ISO8601"
  }
}
\`\`\`

## Success Criteria
- [ ] get_modules_by_depth executed
- [ ] At least 3 relevant files identified
- [ ] patterns include concrete code examples
- [ ] insights linked to CPCC sections (2-7)
- [ ] JSON written to the specified output path
- [ ] Return: a 2-3 sentence summary of the ${angle} findings
`
  })
);

// Execute all exploration tasks in parallel
```

## Output

Session folder structure after exploration:

```
${sessionFolder}/
├── exploration-architecture.json
├── exploration-{angle2}.json
├── exploration-{angle3}.json
└── exploration-{angle4}.json
```

## Downstream Usage (Phase 2 Analysis Input)

Phase 2 agents read exploration files as context:

```javascript
// Discover exploration files by known angle pattern
const explorationData = {};
selectedAngles.forEach(angle => {
  const filePath = `${sessionFolder}/exploration-${angle}.json`;
  explorationData[angle] = JSON.parse(Read(filePath));
});
```
@@ -5,15 +5,161 @@
> **Template reference**: [../templates/agent-base.md](../templates/agent-base.md)
> **Spec reference**: [../specs/cpcc-requirements.md](../specs/cpcc-requirements.md)

## Exploration → Agent Auto-Assignment

Automatically assign the matching analysis agent based on the exploration file names produced in Phase 1.5.

### Mapping Rules

```javascript
// Exploration angle → agent mapping (matched by file name; content is not read)
const EXPLORATION_TO_AGENT = {
  'architecture': 'architecture',
  'commands': 'functions',            // CLI commands → function modules
  'endpoints': 'interfaces',          // API endpoints → interface design
  'algorithms': 'algorithms',
  'data-structures': 'data_structures',
  'dataflow': 'data_structures',      // data flow → data structures
  'interfaces': 'interfaces',
  'exceptions': 'exceptions'
};

// Extract the angle from a file name
function extractAngle(filename) {
  // exploration-architecture.json → architecture
  const match = filename.match(/exploration-(.+)\.json$/);
  return match ? match[1] : null;
}

// Assign an agent
function assignAgent(explorationFile) {
  const angle = extractAngle(path.basename(explorationFile));
  return EXPLORATION_TO_AGENT[angle] || null;
}

// Agent configs (used by buildAgentPrompt)
const AGENT_CONFIGS = {
  architecture: {
    role: 'System architect, focused on layered design and module dependencies',
    section: '2',
    output: 'section-2-architecture.md',
    focus: 'Layered structure, module dependencies, data flow'
  },
  functions: {
    role: 'Feature analyst, focused on feature identification and interaction',
    section: '3',
    output: 'section-3-functions.md',
    focus: 'Feature enumeration, module grouping, entry files, feature interaction'
  },
  algorithms: {
    role: 'Algorithm engineer, focused on core logic and complexity analysis',
    section: '4',
    output: 'section-4-algorithms.md',
    focus: 'Core algorithms, process steps, complexity, inputs and outputs'
  },
  data_structures: {
    role: 'Data modeler, focused on entity relationships and type definitions',
    section: '5',
    output: 'section-5-data-structures.md',
    focus: 'Entity definitions, attribute types, relationship mapping, enums'
  },
  interfaces: {
    role: 'API designer, focused on interface contracts and protocols',
    section: '6',
    output: 'section-6-interfaces.md',
    focus: 'API endpoints, parameter validation, response formats, sequencing'
  },
  exceptions: {
    role: 'Reliability engineer, focused on exception handling and recovery strategies',
    section: '7',
    output: 'section-7-exceptions.md',
    focus: 'Exception types, error codes, handling patterns, recovery strategies'
  }
};
```

### Auto-Discovery and Assignment Flow

```javascript
// 1. Discover all exploration files (file names only)
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

// 2. Auto-assign agents by file name
const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return {
    exploration_file: file,
    angle: angle,
    agent: agentName,
    output_file: AGENT_CONFIGS[agentName]?.output
  };
}).filter(a => a.agent);

// 3. Backfill required agents not covered by any exploration (assign a related exploration)
const coveredAgents = new Set(agentAssignments.map(a => a.agent));
const requiredAgents = ['architecture', 'functions', 'algorithms', 'data_structures', 'interfaces', 'exceptions'];
const missingAgents = requiredAgents.filter(a => !coveredAgents.has(a));

// Relevance map: assign the most relevant exploration to each missing agent
const RELATED_EXPLORATIONS = {
  architecture: ['architecture', 'dataflow', 'interfaces'],
  functions: ['commands', 'endpoints', 'architecture'],
  algorithms: ['algorithms', 'dataflow', 'architecture'],
  data_structures: ['data-structures', 'dataflow', 'architecture'],
  interfaces: ['interfaces', 'endpoints', 'architecture'],
  exceptions: ['exceptions', 'algorithms', 'architecture']
};

function findRelatedExploration(agent, availableFiles) {
  const preferences = RELATED_EXPLORATIONS[agent] || ['architecture'];
  for (const pref of preferences) {
    const match = availableFiles.find(f => f.includes(`exploration-${pref}.json`));
    if (match) return { file: match, angle: pref, isRelated: true };
  }
  // Last resort: any exploration is better than none
  return availableFiles.length > 0
    ? { file: availableFiles[0], angle: extractAngle(path.basename(availableFiles[0])), isRelated: true }
    : { file: null, angle: null, isRelated: false };
}

missingAgents.forEach(agent => {
  const related = findRelatedExploration(agent, explorationFiles);
  agentAssignments.push({
    exploration_file: related.file,
    angle: related.angle,
    agent: agent,
    output_file: AGENT_CONFIGS[agent].output,
    is_related: related.isRelated // marks a related (not direct) match
  });
});

console.log(`
## Agent Auto-Assignment

Found ${explorationFiles.length} exploration files:
${agentAssignments.map(a => {
  if (!a.exploration_file) return `- ${a.agent} agent (no exploration)`;
  if (a.is_related) return `- ${a.agent} agent ← ${a.angle} (related)`;
  return `- ${a.agent} agent ← ${a.angle} (direct)`;
}).join('\n')}
`);
```

---

## Agent Execution Preconditions

**Each agent receives the exploration file path and reads the content itself**:

```javascript
// The agent prompt includes the file path
// Order of operations after the agent starts:
// 1. Read the exploration file (if any)
// 2. Read the CPCC spec file
// 3. Run the analysis task
```

Spec file paths (relative to the skill root):
@@ -47,26 +193,90 @@ const specs = {

## Execution Flow

```javascript
// 1. Discover exploration files and auto-assign agents
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return { exploration_file: file, angle, agent: agentName };
}).filter(a => a.agent);

// Backfill required agents
const coveredAgents = new Set(agentAssignments.map(a => a.agent));
const requiredAgents = ['architecture', 'functions', 'algorithms', 'data_structures', 'interfaces', 'exceptions'];
requiredAgents.filter(a => !coveredAgents.has(a)).forEach(agent => {
  agentAssignments.push({ exploration_file: null, angle: null, agent });
});

// 2. Prepare directories
Bash(`mkdir -p ${outputDir}/sections`);

// 3. Launch all agents in parallel (passing exploration file paths)
const results = await Promise.all(
  agentAssignments.map(assignment =>
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Analyze: ${assignment.agent}`,
      prompt: buildAgentPrompt(assignment, metadata, outputDir)
    })
  )
);

// 4. Collect returned summaries
const summaries = results.map(r => JSON.parse(r));

// 5. Pass to Phase 2.5
return { summaries, cross_notes: summaries.flatMap(s => s.cross_module_notes) };
```

### Agent Prompt Construction

```javascript
function buildAgentPrompt(assignment, metadata, outputDir) {
  const config = AGENT_CONFIGS[assignment.agent];
  let contextSection = '';

  if (assignment.exploration_file) {
    const matchType = assignment.is_related ? 'related' : 'direct match';
    contextSection = `[CONTEXT]
**Exploration File**: ${assignment.exploration_file}
**Match Type**: ${matchType}
First read this file to obtain the ${assignment.angle} exploration results as analysis context.
${assignment.is_related ? `Note: this is a related exploration result (not a direct match); extract only the information relevant to ${config.focus}.` : ''}
`;
  }

  return `
${contextSection}
[SPEC]
Read the spec file:
- Read: ${skillRoot}/specs/cpcc-requirements.md

[ROLE] ${config.role}

[TASK]
Analyze ${metadata.scope_path} and generate Section ${config.section}.
Output: ${outputDir}/sections/${config.output}

[CPCC_SPEC]
- Content must be based on code analysis, no speculation
- Figure numbering: 图${config.section}-1, 图${config.section}-2...
- Each subsection ≥ 100 characters
- Include file path references

[FOCUS]
${config.focus}

[RETURN JSON]
{"status":"completed","output_file":"${config.output}","summary":"<50 chars>","cross_module_notes":[],"stats":{}}
`;
}
```

---

## Agent Prompts

244 .claude/skills/issue-manage/SKILL.md Normal file
@@ -0,0 +1,244 @@
|
||||
---
name: issue-manage
description: Interactive issue management with menu-driven CRUD operations. Use when managing issues, viewing issue status, editing issue fields, or performing bulk operations on issues. Triggers on "manage issue", "list issues", "edit issue", "delete issue", "bulk update", "issue dashboard".
allowed-tools: Bash, Read, Write, AskUserQuestion, Task, Glob
---

# Issue Management Skill

Interactive menu-driven interface for issue CRUD operations via `ccw issue` CLI.

## Quick Start

Ask me:
- "Show all issues" → List with filters
- "View issue GH-123" → Detailed inspection
- "Edit issue priority" → Modify fields
- "Delete old issues" → Remove with confirmation
- "Bulk update status" → Batch operations

## CLI Endpoints

```bash
# Core operations
ccw issue list                      # List all issues
ccw issue list <id> --json          # Get issue details
ccw issue status <id>               # Detailed status
ccw issue init <id> --title "..."   # Create issue
ccw issue task <id> --title "..."   # Add task
ccw issue bind <id> <solution-id>   # Bind solution

# Queue management
ccw issue queue                     # List current queue
ccw issue queue add <id>            # Add to queue
ccw issue queue list                # Queue history
ccw issue queue switch <queue-id>   # Switch queue
ccw issue queue archive             # Archive queue
ccw issue queue delete <queue-id>   # Delete queue
ccw issue next                      # Get next task
ccw issue done <queue-id>           # Mark completed
```
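The `--json` endpoints above feed the table view used in LIST below. A minimal sketch of turning that output into display rows — the field names (`id`, `status`, `priority`, `title`) are assumptions about the CLI's JSON shape, mirroring the table columns shown later:

```javascript
// Hypothetical helper: render rows from `ccw issue list --json` output.
// Assumes each issue object carries id, status, priority, title.
function formatIssueRow(issue) {
  const priority = `P${issue.priority ?? 3}`; // schema default is priority 3
  return [issue.id, issue.status, priority, issue.title].join(' | ');
}

function formatIssueTable(issues) {
  const header = 'ID | Status | Priority | Title';
  return [header, ...issues.map(formatIssueRow)].join('\n');
}
```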

## Operations

### 1. LIST 📋

Filter and browse issues:

```
┌─ Filter by Status ─────────────────┐
│ □ All          □ Registered        │
│ □ Planned      □ Queued            │
│ □ Executing    □ Completed         │
└────────────────────────────────────┘
```

**Flow**:
1. Ask filter preferences → `ccw issue list --json`
2. Display table: ID | Status | Priority | Title
3. Select issue for detail view

### 2. VIEW 🔍

Detailed issue inspection:

```
┌─ Issue: GH-123 ─────────────────────┐
│ Title: Fix authentication bug       │
│ Status: planned | Priority: P2      │
│ Solutions: 2 (1 bound)              │
│ Tasks: 5 pending                    │
└─────────────────────────────────────┘
```

**Flow**:
1. Fetch `ccw issue status <id> --json`
2. Display issue + solutions + tasks
3. Offer actions: Edit | Plan | Queue | Delete

### 3. EDIT ✏️

Modify issue fields:

| Field | Options |
|-------|---------|
| Title | Free text |
| Priority | P1-P5 |
| Status | registered → completed |
| Context | Problem description |
| Labels | Comma-separated |

**Flow**:
1. Select field to edit
2. Show current value
3. Collect new value via AskUserQuestion
4. Update `.workflow/issues/issues.jsonl`

### 4. DELETE 🗑️

Remove with confirmation:

```
⚠️ Delete issue GH-123?
This will also remove:
- Associated solutions
- Queued tasks

[Delete] [Cancel]
```

**Flow**:
1. Confirm deletion via AskUserQuestion
2. Remove from `issues.jsonl`
3. Clean up `solutions/<id>.jsonl`
4. Remove from `queue.json`

### 5. BULK 📦

Batch operations:

| Operation | Description |
|-----------|-------------|
| Update Status | Change multiple issues |
| Update Priority | Batch priority change |
| Add Labels | Tag multiple issues |
| Delete Multiple | Bulk removal |
| Queue All Planned | Add all planned to queue |
| Retry All Failed | Reset failed tasks |

## Workflow

```
┌──────────────────────────────────────┐
│              Main Menu               │
│  ┌────┐  ┌────┐  ┌────┐  ┌────┐      │
│  │List│  │View│  │Edit│  │Bulk│      │
│  └──┬─┘  └──┬─┘  └──┬─┘  └──┬─┘      │
└─────┼───────┼───────┼───────┼────────┘
      │       │       │       │
      ▼       ▼       ▼       ▼
   Filter  Detail  Fields   Multi
   Select  Actions Update   Select
      │       │       │       │
      └───────┴───────┴───────┘
                  │
                  ▼
            Back to Menu
```

## Implementation Guide

### Entry Point

```javascript
// Parse input for issue ID
const issueId = input.match(/^([A-Z]+-\d+|ISS-\d+)/i)?.[1];

// Show main menu
await showMainMenu(issueId);
```

### Main Menu Pattern

```javascript
// 1. Fetch dashboard data
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const queue = JSON.parse(Bash('ccw issue queue --json 2>/dev/null') || '{}');

// 2. Display summary
console.log(`Issues: ${issues.length} | Queue: ${queue.pending_count || 0} pending`);

// 3. Ask action via AskUserQuestion
const action = AskUserQuestion({
  questions: [{
    question: 'What would you like to do?',
    header: 'Action',
    options: [
      { label: 'List Issues', description: 'Browse with filters' },
      { label: 'View Issue', description: 'Detail view' },
      { label: 'Edit Issue', description: 'Modify fields' },
      { label: 'Bulk Operations', description: 'Batch actions' }
    ]
  }]
});

// 4. Route to handler
```

### Filter Pattern

```javascript
const filter = AskUserQuestion({
  questions: [{
    question: 'Filter by status?',
    header: 'Filter',
    multiSelect: true,
    options: [
      { label: 'All', description: 'Show all' },
      { label: 'Registered', description: 'Unplanned' },
      { label: 'Planned', description: 'Has solution' },
      { label: 'Executing', description: 'In progress' }
    ]
  }]
});
```

### Edit Pattern

```javascript
// Select field
const field = AskUserQuestion({...});

// Get new value based on field type
// For Priority: show P1-P5 options
// For Status: show status options
// For Title: accept free text via "Other"

// Update file
const issuesPath = '.workflow/issues/issues.jsonl';
// Read → Parse → Update → Write
```
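The Read → Parse → Update → Write step can be sketched as a pure function over the JSONL text. `updateIssueLine` is a hypothetical helper, not part of the skill; it assumes one JSON object per line as `issues.jsonl` stores them:

```javascript
// Hypothetical helper: patch one issue record inside JSONL text.
// Lines whose id does not match are passed through untouched.
function updateIssueLine(jsonlText, id, patch) {
  return jsonlText
    .split('\n')
    .filter(line => line.trim())
    .map(line => {
      const issue = JSON.parse(line);
      if (issue.id !== id) return line;
      // Merge the patch and refresh updated_at, per the issues schema.
      return JSON.stringify({ ...issue, ...patch, updated_at: new Date().toISOString() });
    })
    .join('\n');
}
```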

## Data Files

| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |

## Error Handling

| Error | Resolution |
|-------|------------|
| No issues found | Suggest `/issue:new` to create |
| Issue not found | Show available issues, re-prompt |
| Write failure | Check file permissions |
| Queue error | Display ccw error message |

## Related Commands

- `/issue:new` - Create structured issue
- `/issue:plan` - Generate solution
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute tasks

@@ -1,75 +1,176 @@
# Phase 2: Project Exploration

Launch parallel exploration agents based on report type.
Launch parallel exploration agents based on report type and task context.

## Execution

### Step 1: Map Exploration Angles
### Step 1: Intelligent Angle Selection

```javascript
const angleMapping = {
  architecture: ["Layer Structure", "Module Dependencies", "Entry Points", "Data Flow"],
  design: ["Design Patterns", "Class Relationships", "Interface Contracts", "State Management"],
  methods: ["Core Algorithms", "Critical Paths", "Public APIs", "Complex Logic"],
  comprehensive: ["Layer Structure", "Design Patterns", "Core Algorithms", "Data Flow"]
// Angle presets based on report type (adapted from lite-plan.md)
const ANGLE_PRESETS = {
  architecture: ['layer-structure', 'module-dependencies', 'entry-points', 'data-flow'],
  design: ['design-patterns', 'class-relationships', 'interface-contracts', 'state-management'],
  methods: ['core-algorithms', 'critical-paths', 'public-apis', 'complex-logic'],
  comprehensive: ['architecture', 'patterns', 'dependencies', 'integration-points']
};

const angles = angleMapping[config.type];
// Depth-based angle count
const angleCount = {
  shallow: 2,
  standard: 3,
  deep: 4
};

function selectAngles(reportType, depth) {
  const preset = ANGLE_PRESETS[reportType] || ANGLE_PRESETS.comprehensive;
  const count = angleCount[depth] || 3;
  return preset.slice(0, count);
}

const selectedAngles = selectAngles(config.type, config.depth);

console.log(`
## Exploration Plan

Report Type: ${config.type}
Depth: ${config.depth}
Selected Angles: ${selectedAngles.join(', ')}

Launching ${selectedAngles.length} parallel explorations...
`);
```
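The selection logic above can be exercised standalone. This is a worked check, with the preset table and count map copied from the block above and `config` supplied inline:

```javascript
// Copied from the selection block above, for a self-contained check.
const ANGLE_PRESETS = {
  architecture: ['layer-structure', 'module-dependencies', 'entry-points', 'data-flow'],
  design: ['design-patterns', 'class-relationships', 'interface-contracts', 'state-management'],
  methods: ['core-algorithms', 'critical-paths', 'public-apis', 'complex-logic'],
  comprehensive: ['architecture', 'patterns', 'dependencies', 'integration-points']
};
const angleCount = { shallow: 2, standard: 3, deep: 4 };

function selectAngles(reportType, depth) {
  // Unknown report types fall back to the comprehensive preset;
  // unknown depths fall back to 3 angles.
  const preset = ANGLE_PRESETS[reportType] || ANGLE_PRESETS.comprehensive;
  return preset.slice(0, angleCount[depth] || 3);
}
```

So a `design` report at `shallow` depth launches exactly the first two design angles.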

### Step 2: Launch Parallel Agents
### Step 2: Launch Parallel Agents (Direct Output)

For each angle, launch an exploration agent:
**⚠️ CRITICAL**: Agents write output files directly. No aggregation needed.

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: `Explore: ${angle}`,
  prompt: `
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false, // ⚠️ MANDATORY: Must wait for results
    description: `Explore: ${angle}`,
    prompt: `
## Exploration Objective
Execute **${angle}** exploration for project analysis report.
Execute **${angle}** exploration for ${config.type} project analysis report.

## Context
- **Angle**: ${angle}
## Assigned Context
- **Exploration Angle**: ${angle}
- **Report Type**: ${config.type}
- **Depth**: ${config.depth}
- **Scope**: ${config.scope}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json

## Exploration Protocol
1. Structural Discovery (get_modules_by_depth, rg, glob)
2. Pattern Recognition (conventions, naming, organization)
3. Relationship Mapping (dependencies, integration points)
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files)
3. Analyze project from ${angle} perspective

## Output Format
## Exploration Strategy (${angle} focus)

**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh → identify modules related to ${angle}
- find/rg → locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective

**Step 2: Semantic Analysis** (Gemini/Qwen CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Identify key architectural decisions related to ${angle}

**Step 3: Write Output Directly**
- Consolidate ${angle} findings into JSON
- Write to output file path specified above

## Expected Output Schema

**File**: ${sessionFolder}/exploration-${angle}.json

\`\`\`json
{
  "angle": "${angle}",
  "findings": {
    "structure": [...],
    "patterns": [...],
    "relationships": [...],
    "key_files": [{path, relevance, rationale}]
    "structure": [
      { "component": "...", "type": "module|layer|service", "description": "..." }
    ],
    "patterns": [
      { "name": "...", "usage": "...", "files": ["path1", "path2"] }
    ],
    "relationships": [
      { "from": "...", "to": "...", "type": "depends|imports|calls", "strength": "high|medium|low" }
    ],
    "key_files": [
      { "path": "src/file.ts", "relevance": 0.85, "rationale": "Core ${angle} logic" }
    ]
  },
  "insights": [...]
  "insights": [
    { "observation": "...", "impact": "high|medium|low", "recommendation": "..." }
  ],
  "_metadata": {
    "exploration_angle": "${angle}",
    "exploration_index": ${index + 1},
    "report_type": "${config.type}",
    "timestamp": "ISO8601"
  }
}
\`\`\`

## Success Criteria
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Relationships include concrete file references
- [ ] JSON output written to ${sessionFolder}/exploration-${angle}.json
- [ ] Return: 2-3 sentence summary of ${angle} findings
`
})
```
  })
);

### Step 3: Aggregate Results

Merge all exploration results into unified findings:

```javascript
const aggregatedFindings = {
  structure: [],      // from all angles
  patterns: [],       // from all angles
  relationships: [],  // from all angles
  key_files: [],      // deduplicated
  insights: []        // prioritized
};
// Execute all exploration tasks in parallel
```

## Output

Save exploration results to `exploration-{angle}.json` files.
Session folder structure after exploration:

```
${sessionFolder}/
├── exploration-{angle1}.json   # Agent 1 direct output
├── exploration-{angle2}.json   # Agent 2 direct output
├── exploration-{angle3}.json   # Agent 3 direct output (if applicable)
└── exploration-{angle4}.json   # Agent 4 direct output (if applicable)
```

## Downstream Usage (Phase 3 Analysis Input)

Subsequent analysis phases MUST read exploration outputs as input:

```javascript
// Discover exploration files by known angle pattern
const explorationData = {};
selectedAngles.forEach(angle => {
  const filePath = `${sessionFolder}/exploration-${angle}.json`;
  explorationData[angle] = JSON.parse(Read(filePath));
});

// Pass to analysis agent
Task({
  subagent_type: "analysis-agent",
  prompt: `
## Analysis Input

### Exploration Data by Angle
${Object.entries(explorationData).map(([angle, data]) => `
#### ${angle}
${JSON.stringify(data, null, 2)}
`).join('\n')}

## Analysis Task
Synthesize findings from all exploration angles...
`
});
```

@@ -5,16 +5,176 @@
> **Spec reference**: [../specs/quality-standards.md](../specs/quality-standards.md)
> **Writing style**: [../specs/writing-style.md](../specs/writing-style.md)

## Agent Execution Preconditions
## Exploration → Agent Auto-Assignment

**Every agent must first read the following spec files**:
Assign the matching analysis agent automatically based on the exploration file names produced in Phase 2.

### Mapping Rules

```javascript
// First action when an agent starts
const specs = {
  quality: Read(`${skillRoot}/specs/quality-standards.md`),
  style: Read(`${skillRoot}/specs/writing-style.md`)
// Exploration angle → agent mapping (identified by file name; content is not read)
const EXPLORATION_TO_AGENT = {
  // Architecture Report angles
  'layer-structure': 'layers',
  'module-dependencies': 'dependencies',
  'entry-points': 'entrypoints',
  'data-flow': 'dataflow',

  // Design Report angles
  'design-patterns': 'patterns',
  'class-relationships': 'classes',
  'interface-contracts': 'interfaces',
  'state-management': 'state',

  // Methods Report angles
  'core-algorithms': 'algorithms',
  'critical-paths': 'paths',
  'public-apis': 'apis',
  'complex-logic': 'logic',

  // Comprehensive angles
  'architecture': 'overview',
  'patterns': 'patterns',
  'dependencies': 'dependencies',
  'integration-points': 'entrypoints'
};

// Extract the angle from the file name
function extractAngle(filename) {
  // exploration-layer-structure.json → layer-structure
  const match = filename.match(/exploration-(.+)\.json$/);
  return match ? match[1] : null;
}

// Assign an agent
function assignAgent(explorationFile) {
  const angle = extractAngle(path.basename(explorationFile));
  return EXPLORATION_TO_AGENT[angle] || null;
}

// Agent configs (used by buildAgentPrompt)
const AGENT_CONFIGS = {
  overview: {
    role: 'Chief System Architect',
    task: 'Survey the full codebase and write the "Overall Architecture" section, capturing the core value proposition and top-level technical decisions',
    focus: 'Domain boundaries and positioning, architectural paradigm, core technical decisions, top-level module partitioning',
    constraint: 'Do not enumerate directory structures; emphasize design intent and include at least one Mermaid architecture diagram'
  },
  layers: {
    role: 'Senior Software Designer',
    task: 'Analyze the logical layering of the system and write the "Logical View and Layered Architecture" section',
    focus: 'Responsibility allocation, data flow and constraints, boundary isolation strategy, exception handling flow',
    constraint: 'Do not list concrete file names; focus on inter-layer contracts and the craft of isolation'
  },
  dependencies: {
    role: 'Integration Architecture Expert',
    task: 'Examine external connections and internal coupling and write the "Dependency Management and Ecosystem Integration" section',
    focus: 'External integration topology, core dependency analysis, dependency injection and inversion of control, supply-chain security',
    constraint: 'Do not simply list dependency configuration; analyze the integration strategy and risk-control model'
  },
  dataflow: {
    role: 'Data Architect',
    task: 'Trace how data moves through the system and write the "Data Flow and State Management" section',
    focus: 'Data entry and exit points, transformation pipelines, persistence strategy, consistency guarantees',
    constraint: 'Focus on the data lifecycle and how data changes shape; do not list database table structures'
  },
  entrypoints: {
    role: 'System Boundary Analyst',
    task: 'Identify entry-point design and critical paths and write the "System Entry Points and Call Chains" section',
    focus: 'Entry types and responsibilities, request processing pipeline, key business paths, exception and boundary handling',
    constraint: 'Focus on the entry design philosophy; do not enumerate every endpoint'
  },
  patterns: {
    role: 'Core Development Standards Author',
    task: 'Surface the reuse mechanisms and standardized practices in the code and write the "Design Patterns and Engineering Conventions" section',
    focus: 'Architecture-level patterns, communication and concurrency patterns, cross-cutting concern implementations, abstraction and reuse strategy',
    constraint: 'Avoid textbook explanations; describe each pattern in the context of this project'
  },
  classes: {
    role: 'Domain Model Designer',
    task: 'Analyze the type system and domain model and write the "Type System and Domain Modeling" section',
    focus: 'Domain model design, inheritance versus composition strategy, responsibility assignment principles, type safety and constraints',
    constraint: 'Focus on modeling ideas; use UML class diagrams to illustrate core relationships'
  },
  interfaces: {
    role: 'Contract Design Expert',
    task: 'Analyze interface design and abstraction levels and write the "Interface Contracts and Abstraction Design" section',
    focus: 'Abstraction level design, separation of contract and implementation, extension point design, version evolution strategy',
    constraint: 'Focus on the interface design philosophy; do not enumerate method signatures'
  },
  state: {
    role: 'State Management Architect',
    task: 'Analyze the state management mechanisms and write the "State Management and Lifecycle" section',
    focus: 'State model design, state lifecycle, concurrency and consistency, state recovery and fault tolerance',
    constraint: 'Focus on state management design decisions; do not list variable names'
  },
  algorithms: {
    role: 'Algorithm Architect',
    task: 'Analyze the core algorithm design and write the "Core Algorithms and Computation Model" section',
    focus: 'Algorithm selection and trade-offs, computation model design, performance and scalability, correctness guarantees',
    constraint: 'Focus on algorithmic ideas; use flowcharts to explain complex logic'
  },
  paths: {
    role: 'Performance Architect',
    task: 'Analyze the critical execution paths and write the "Critical Paths and Performance Design" section',
    focus: 'Key business paths, performance-sensitive areas, bottleneck identification and mitigation, degradation and circuit breaking',
    constraint: 'Focus on the strategic considerations behind path design; do not list every execution step'
  },
  apis: {
    role: 'API Design Standards Expert',
    task: 'Analyze the external API design conventions and write the "API Design and Conventions" section',
    focus: 'API design style, naming and structural conventions, version management strategy, error handling conventions',
    constraint: 'Focus on conventions and consistency; do not enumerate every API endpoint'
  },
  logic: {
    role: 'Business Logic Architect',
    task: 'Analyze how business logic is modeled and write the "Business Logic and Rules Engine" section',
    focus: 'Business rule modeling, decision point design, boundary condition handling, business process orchestration',
    constraint: 'Focus on how business logic is organized; do not explain code line by line'
  }
};
```
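The file-name round trip above can be checked end-to-end. This sketch uses a trimmed copy of the mapping table and replaces `path.basename` with a plain split so it runs standalone:

```javascript
// Trimmed copy of EXPLORATION_TO_AGENT from above, for a standalone check.
const EXPLORATION_TO_AGENT = {
  'layer-structure': 'layers',
  'data-flow': 'dataflow'
};

function extractAngle(filename) {
  // exploration-layer-structure.json → layer-structure
  const match = filename.match(/exploration-(.+)\.json$/);
  return match ? match[1] : null;
}

function assignAgent(explorationFile) {
  const base = explorationFile.split('/').pop(); // stand-in for path.basename
  const angle = extractAngle(base);
  return EXPLORATION_TO_AGENT[angle] || null;
}
```

Unmapped angles come back as `null`, which is why the discovery flow below filters assignments with `.filter(a => a.agent)`.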

### Auto-Discovery and Assignment Flow

```javascript
// 1. Discover all exploration files (file names only)
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

// 2. Auto-assign agents by file name
const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return {
    exploration_file: file,
    angle: angle,
    agent: agentName,
    output_file: `section-${agentName}.md`
  };
}).filter(a => a.agent); // Drop unmapped angles

console.log(`
## Agent Auto-Assignment

Found ${explorationFiles.length} exploration files:
${agentAssignments.map(a => `- ${a.angle} → ${a.agent} agent`).join('\n')}
`);
```

---

## Agent Execution Preconditions

**Each agent receives the exploration file path and reads the content itself**:

```javascript
// The agent prompt contains the file path
// Order of operations after the agent starts:
// 1. Read the exploration file (context input)
// 2. Read the spec files
// 3. Execute the analysis task
```

Spec file paths (relative to the skill root):

@@ -617,15 +777,30 @@ Task({
## Execution Flow

```javascript
// 1. Select agent configs by report type
const agentConfigs = getAgentConfigs(config.type);
// 1. Discover exploration files and auto-assign agents
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return { exploration_file: file, angle, agent: agentName };
}).filter(a => a.agent);

// 2. Prepare directories
Bash(`mkdir "${outputDir}\\sections"`);

// 3. Launch all agents in parallel
// 3. Launch all agents in parallel (passing exploration file paths)
const results = await Promise.all(
  agentConfigs.map(agent => launchAgent(agent, config, outputDir))
  agentAssignments.map(assignment =>
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Analyze: ${assignment.agent}`,
      prompt: buildAgentPrompt(assignment, config, outputDir)
    })
  )
);

// 4. Collect brief return summaries
@@ -635,6 +810,45 @@ const summaries = results.map(r => JSON.parse(r));
return { summaries, cross_notes: summaries.flatMap(s => s.cross_module_notes) };
```

### Agent Prompt Construction

```javascript
function buildAgentPrompt(assignment, config, outputDir) {
  const agentConfig = AGENT_CONFIGS[assignment.agent];
  return `
[CONTEXT]
**Exploration file**: ${assignment.exploration_file}
First read this file to obtain the ${assignment.angle} exploration results as analysis context.

[SPEC]
Read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md

[ROLE] ${agentConfig.role}

[TASK]
${agentConfig.task}
Output: ${outputDir}/sections/section-${assignment.agent}.md

[STYLE]
- Rigorous, professional technical writing in Chinese; keep technical terms in English
- Strictly objective third-person voice; never "we" or "the developer"
- Paragraph-style narration with a claim-evidence-conclusion structure
- Use logical connectives to surface the design reasoning

[FOCUS]
${agentConfig.focus}

[CONSTRAINT]
${agentConfig.constraint}

[RETURN JSON]
{"status":"completed","output_file":"section-${assignment.agent}.md","summary":"<50 words>","cross_module_notes":[],"stats":{}}
`;
}
```

## Output

Each agent writes `sections/section-xxx.md` and returns a brief JSON summary for Phase 3.5 consolidation.


@@ -4,6 +4,29 @@

> **Writing spec**: [../specs/writing-style.md](../specs/writing-style.md)

## Execution Requirements

**Mandatory**: after all Phase 3 analysis agents finish, the main orchestrator **must** invoke this Consolidation Agent.

**Trigger conditions**:
- All Phase 3 agents have returned results (status: completed/partial/failed)
- The `sections/section-*.md` files have been generated

**Input sources**:
- `agent_summaries`: the JSON returned by each Phase 3 agent (status, output_file, summary, cross_module_notes)
- `cross_module_notes`: the array of cross-module notes extracted from the agent returns

**Invocation timing**:
```javascript
// After Phase 3 completes, the main orchestrator runs:
const phase3Results = await runPhase3Agents(); // run all analysis agents in parallel
const agentSummaries = phase3Results.map(r => JSON.parse(r));
const crossNotes = agentSummaries.flatMap(s => s.cross_module_notes || []);

// The Phase 3.5 Consolidation Agent must be invoked
await runPhase35Consolidation(agentSummaries, crossNotes);
```

## Core Responsibilities

1. **Cross-section synthesis**: generate the synthesis (report overview)
@@ -22,7 +45,9 @@ interface ConsolidationInput {
}
```

## Execution
## Agent Invocation Code

The main orchestrator invokes the Consolidation Agent with the following code:

```javascript
Task({

@@ -0,0 +1,136 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Task JSONL Schema",
  "description": "Schema for individual task entries in tasks.jsonl file",
  "type": "object",
  "required": ["id", "title", "type", "description", "depends_on", "delivery_criteria", "status", "current_phase", "executor"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Unique task identifier (e.g., TASK-001)",
      "pattern": "^TASK-[0-9]+$"
    },
    "title": {
      "type": "string",
      "description": "Short summary of the task",
      "maxLength": 100
    },
    "type": {
      "type": "string",
      "enum": ["feature", "bug", "refactor", "test", "chore", "docs"],
      "description": "Task category"
    },
    "description": {
      "type": "string",
      "description": "Detailed instructions for the task"
    },
    "file_context": {
      "type": "array",
      "items": { "type": "string" },
      "description": "List of relevant files/globs",
      "default": []
    },
    "depends_on": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Array of Task IDs that must complete first",
      "default": []
    },
    "delivery_criteria": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Checklist items that define task completion",
      "minItems": 1
    },
    "pause_criteria": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Conditions that should halt execution (e.g., 'API spec unclear')",
      "default": []
    },
    "status": {
      "type": "string",
      "enum": ["pending", "ready", "in_progress", "completed", "failed", "paused", "skipped"],
      "description": "Current task status",
      "default": "pending"
    },
    "current_phase": {
      "type": "string",
      "enum": ["analyze", "implement", "test", "optimize", "commit", "done"],
      "description": "Current execution phase within the task lifecycle",
      "default": "analyze"
    },
    "executor": {
      "type": "string",
      "enum": ["agent", "codex", "gemini", "auto"],
      "description": "Preferred executor for this task",
      "default": "auto"
    },
    "priority": {
      "type": "integer",
      "minimum": 1,
      "maximum": 5,
      "description": "Task priority (1=highest, 5=lowest)",
      "default": 3
    },
    "phase_results": {
      "type": "object",
      "description": "Results from each execution phase",
      "properties": {
        "analyze": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "findings": { "type": "array", "items": { "type": "string" } },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        },
        "implement": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "files_modified": { "type": "array", "items": { "type": "string" } },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        },
        "test": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "test_results": { "type": "string" },
            "retry_count": { "type": "integer" },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        },
        "optimize": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "improvements": { "type": "array", "items": { "type": "string" } },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        },
        "commit": {
          "type": "object",
          "properties": {
            "status": { "type": "string" },
            "commit_hash": { "type": "string" },
            "message": { "type": "string" },
            "timestamp": { "type": "string", "format": "date-time" }
          }
        }
      }
    },
    "created_at": {
      "type": "string",
      "format": "date-time",
      "description": "Task creation timestamp"
    },
    "updated_at": {
      "type": "string",
      "format": "date-time",
      "description": "Last update timestamp"
    }
  },
  "additionalProperties": false
}
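A line of `tasks.jsonl` can be checked against this schema's required list with a small sketch. The field list is copied from the `required` array above; this is not a full JSON Schema validator, only a minimal structural gate:

```javascript
// Required fields copied from the task schema's "required" array.
const REQUIRED = ['id', 'title', 'type', 'description', 'depends_on',
                  'delivery_criteria', 'status', 'current_phase', 'executor'];

// Minimal check for one tasks.jsonl line: required keys present
// and the id matches the ^TASK-[0-9]+$ pattern.
function validateTaskLine(line) {
  const task = JSON.parse(line);
  const missing = REQUIRED.filter(k => !(k in task));
  const badId = !/^TASK-[0-9]+$/.test(task.id || '');
  return { ok: missing.length === 0 && !badId, missing };
}
```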
@@ -0,0 +1,74 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issues JSONL Schema",
  "description": "Schema for each line in issues.jsonl (flat storage)",
  "type": "object",
  "required": ["id", "title", "status", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Issue ID (e.g., GH-123, TEXT-xxx)"
    },
    "title": {
      "type": "string"
    },
    "status": {
      "type": "string",
      "enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
      "default": "registered"
    },
    "priority": {
      "type": "integer",
      "minimum": 1,
      "maximum": 5,
      "default": 3
    },
    "context": {
      "type": "string",
      "description": "Issue context/description (markdown)"
    },
    "bound_solution_id": {
      "type": "string",
      "description": "ID of the bound solution (null if none bound)"
    },
    "solution_count": {
      "type": "integer",
      "default": 0,
      "description": "Number of candidate solutions in solutions/{id}.jsonl"
    },
    "source": {
      "type": "string",
      "enum": ["github", "text", "file"],
      "description": "Source of the issue"
    },
    "source_url": {
      "type": "string",
      "description": "Original source URL (for GitHub issues)"
    },
    "labels": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Issue labels/tags"
    },
    "created_at": {
      "type": "string",
      "format": "date-time"
    },
    "updated_at": {
      "type": "string",
      "format": "date-time"
    },
    "planned_at": {
      "type": "string",
      "format": "date-time"
    },
    "queued_at": {
      "type": "string",
      "format": "date-time"
    },
    "completed_at": {
      "type": "string",
      "format": "date-time"
    }
  }
}
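A minimal record satisfying this schema looks like the following sketch; only `id`, `title`, `status`, and `created_at` are required, and the extra fields shown simply take their schema defaults:

```javascript
// A minimal issues.jsonl record covering the schema's required fields.
const issue = {
  id: 'GH-123',
  title: 'Fix authentication bug',
  status: 'registered',          // default initial status
  priority: 3,                   // schema default
  source: 'github',
  created_at: new Date().toISOString()
};

// issues.jsonl stores one JSON object per line.
const line = JSON.stringify(issue);
```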
|
||||
136
.claude/workflows/cli-templates/schemas/queue-schema.json
Normal file
136
.claude/workflows/cli-templates/schemas/queue-schema.json
Normal file
@@ -0,0 +1,136 @@
|
||||
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Execution Queue Schema",
  "description": "Global execution queue for all issue tasks",
  "type": "object",
  "properties": {
    "queue": {
      "type": "array",
      "description": "Ordered list of tasks to execute",
      "items": {
        "type": "object",
        "required": ["queue_id", "issue_id", "solution_id", "task_id", "status"],
        "properties": {
          "queue_id": {
            "type": "string",
            "pattern": "^Q-[0-9]+$",
            "description": "Unique queue item identifier"
          },
          "issue_id": {
            "type": "string",
            "description": "Source issue ID"
          },
          "solution_id": {
            "type": "string",
            "description": "Source solution ID"
          },
          "task_id": {
            "type": "string",
            "description": "Task ID within solution"
          },
          "status": {
            "type": "string",
            "enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
            "default": "pending"
          },
          "execution_order": {
            "type": "integer",
            "description": "Order in execution sequence"
          },
          "execution_group": {
            "type": "string",
            "description": "Parallel execution group ID (e.g., P1, S1)"
          },
          "depends_on": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Queue IDs this task depends on"
          },
          "semantic_priority": {
            "type": "number",
            "minimum": 0,
            "maximum": 1,
            "description": "Semantic importance score (0.0-1.0)"
          },
          "assigned_executor": {
            "type": "string",
            "enum": ["codex", "gemini", "agent"]
          },
          "queued_at": {
            "type": "string",
            "format": "date-time"
          },
          "started_at": {
            "type": "string",
            "format": "date-time"
          },
          "completed_at": {
            "type": "string",
            "format": "date-time"
          },
          "result": {
            "type": "object",
            "description": "Execution result",
            "properties": {
              "files_modified": { "type": "array", "items": { "type": "string" } },
              "files_created": { "type": "array", "items": { "type": "string" } },
              "summary": { "type": "string" },
              "commit_hash": { "type": "string" }
            }
          },
          "failure_reason": {
            "type": "string"
          }
        }
      }
    },
    "conflicts": {
      "type": "array",
      "description": "Detected conflicts between tasks",
      "items": {
        "type": "object",
        "properties": {
          "type": {
            "type": "string",
            "enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
          },
          "tasks": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Queue IDs involved in conflict"
          },
          "file": {
            "type": "string",
            "description": "Conflicting file path"
          },
          "resolution": {
            "type": "string",
            "enum": ["sequential", "merge", "manual"]
          },
          "resolution_order": {
            "type": "array",
            "items": { "type": "string" }
          },
          "resolved": {
            "type": "boolean",
            "default": false
          }
        }
      }
    },
    "_metadata": {
      "type": "object",
      "properties": {
        "version": { "type": "string", "default": "1.0" },
        "total_items": { "type": "integer" },
        "pending_count": { "type": "integer" },
        "ready_count": { "type": "integer" },
        "executing_count": { "type": "integer" },
        "completed_count": { "type": "integer" },
        "failed_count": { "type": "integer" },
        "last_queue_formation": { "type": "string", "format": "date-time" },
        "last_updated": { "type": "string", "format": "date-time" }
      }
    }
  }
}
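For illustration, a minimal queue document satisfying this schema might look as follows (all IDs and values are hypothetical, not taken from a real run):

```json
{
  "queue": [
    {
      "queue_id": "Q-1",
      "issue_id": "GH-123",
      "solution_id": "SOL-001",
      "task_id": "T1",
      "status": "pending",
      "execution_order": 1,
      "execution_group": "S1",
      "depends_on": [],
      "semantic_priority": 0.8
    }
  ],
  "conflicts": [],
  "_metadata": { "version": "1.0", "total_items": 1, "pending_count": 1 }
}
```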
94 .claude/workflows/cli-templates/schemas/registry-schema.json (Normal file)
@@ -0,0 +1,94 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Registry Schema",
  "description": "Global registry of all issues and their solutions",
  "type": "object",
  "properties": {
    "issues": {
      "type": "array",
      "description": "List of registered issues",
      "items": {
        "type": "object",
        "required": ["id", "title", "status", "created_at"],
        "properties": {
          "id": {
            "type": "string",
            "description": "Issue ID (e.g., GH-123, TEXT-xxx)"
          },
          "title": {
            "type": "string"
          },
          "status": {
            "type": "string",
            "enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
            "default": "registered"
          },
          "priority": {
            "type": "integer",
            "minimum": 1,
            "maximum": 5,
            "default": 3
          },
          "solution_count": {
            "type": "integer",
            "default": 0,
            "description": "Number of candidate solutions"
          },
          "bound_solution_id": {
            "type": "string",
            "description": "ID of the bound solution (null if none bound)"
          },
          "source": {
            "type": "string",
            "enum": ["github", "text", "file"],
            "description": "Source of the issue"
          },
          "source_url": {
            "type": "string",
            "description": "Original source URL (for GitHub issues)"
          },
          "created_at": {
            "type": "string",
            "format": "date-time"
          },
          "updated_at": {
            "type": "string",
            "format": "date-time"
          },
          "planned_at": {
            "type": "string",
            "format": "date-time"
          },
          "queued_at": {
            "type": "string",
            "format": "date-time"
          },
          "completed_at": {
            "type": "string",
            "format": "date-time"
          }
        }
      }
    },
    "_metadata": {
      "type": "object",
      "properties": {
        "version": { "type": "string", "default": "1.0" },
        "total_issues": { "type": "integer" },
        "by_status": {
          "type": "object",
          "properties": {
            "registered": { "type": "integer" },
            "planning": { "type": "integer" },
            "planned": { "type": "integer" },
            "queued": { "type": "integer" },
            "executing": { "type": "integer" },
            "completed": { "type": "integer" },
            "failed": { "type": "integer" }
          }
        },
        "last_updated": { "type": "string", "format": "date-time" }
      }
    }
  }
}
120 .claude/workflows/cli-templates/schemas/solution-schema.json (Normal file)
@@ -0,0 +1,120 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Solution Schema",
  "description": "Schema for solution registered to an issue",
  "type": "object",
  "required": ["id", "issue_id", "tasks", "status", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Unique solution identifier",
      "pattern": "^SOL-[0-9]+$"
    },
    "issue_id": {
      "type": "string",
      "description": "Parent issue ID"
    },
    "plan_session_id": {
      "type": "string",
      "description": "Planning session that created this solution"
    },
    "tasks": {
      "type": "array",
      "description": "Task breakdown for this solution",
      "items": {
        "type": "object",
        "required": ["id", "title", "scope", "action", "acceptance"],
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^T[0-9]+$"
          },
          "title": {
            "type": "string",
            "description": "Action verb + target"
          },
          "scope": {
            "type": "string",
            "description": "Module path or feature area"
          },
          "action": {
            "type": "string",
            "enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
          },
          "description": {
            "type": "string",
            "description": "1-2 sentences describing what to implement"
          },
          "modification_points": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "file": { "type": "string" },
                "target": { "type": "string" },
                "change": { "type": "string" }
              }
            }
          },
          "implementation": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Step-by-step implementation guide"
          },
          "acceptance": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Quantified completion criteria"
          },
          "depends_on": {
            "type": "array",
            "items": { "type": "string" },
            "default": [],
            "description": "Task IDs this task depends on"
          },
          "estimated_minutes": {
            "type": "integer",
            "description": "Estimated time to complete"
          },
          "executor": {
            "type": "string",
            "enum": ["codex", "gemini", "agent", "auto"],
            "default": "auto"
          }
        }
      }
    },
    "exploration_context": {
      "type": "object",
      "description": "ACE exploration results",
      "properties": {
        "project_structure": { "type": "string" },
        "relevant_files": {
          "type": "array",
          "items": { "type": "string" }
        },
        "patterns": { "type": "string" },
        "integration_points": { "type": "string" }
      }
    },
    "status": {
      "type": "string",
      "enum": ["draft", "candidate", "bound", "queued", "executing", "completed", "failed"],
      "default": "draft"
    },
    "is_bound": {
      "type": "boolean",
      "default": false,
      "description": "Whether this solution is bound to the issue"
    },
    "created_at": {
      "type": "string",
      "format": "date-time"
    },
    "bound_at": {
      "type": "string",
      "format": "date-time",
      "description": "When this solution was bound to the issue"
    }
  }
}
@@ -0,0 +1,125 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Solutions JSONL Schema",
  "description": "Schema for each line in solutions/{issue-id}.jsonl",
  "type": "object",
  "required": ["id", "tasks", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Unique solution identifier",
      "pattern": "^SOL-[0-9]+$"
    },
    "description": {
      "type": "string",
      "description": "Solution approach description"
    },
    "tasks": {
      "type": "array",
      "description": "Task breakdown for this solution",
      "items": {
        "type": "object",
        "required": ["id", "title", "scope", "action", "acceptance"],
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^T[0-9]+$"
          },
          "title": {
            "type": "string",
            "description": "Action verb + target"
          },
          "scope": {
            "type": "string",
            "description": "Module path or feature area"
          },
          "action": {
            "type": "string",
            "enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
          },
          "description": {
            "type": "string",
            "description": "1-2 sentences describing what to implement"
          },
          "modification_points": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "file": { "type": "string" },
                "target": { "type": "string" },
                "change": { "type": "string" }
              }
            }
          },
          "implementation": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Step-by-step implementation guide"
          },
          "acceptance": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Quantified completion criteria"
          },
          "depends_on": {
            "type": "array",
            "items": { "type": "string" },
            "default": [],
            "description": "Task IDs this task depends on"
          },
          "estimated_minutes": {
            "type": "integer",
            "description": "Estimated time to complete"
          },
          "executor": {
            "type": "string",
            "enum": ["codex", "gemini", "agent", "auto"],
            "default": "auto"
          }
        }
      }
    },
    "exploration_context": {
      "type": "object",
      "description": "ACE exploration results",
      "properties": {
        "project_structure": { "type": "string" },
        "relevant_files": {
          "type": "array",
          "items": { "type": "string" }
        },
        "patterns": { "type": "string" },
        "integration_points": { "type": "string" }
      }
    },
    "analysis": {
      "type": "object",
      "properties": {
        "risk": { "type": "string", "enum": ["low", "medium", "high"] },
        "impact": { "type": "string", "enum": ["low", "medium", "high"] },
        "complexity": { "type": "string", "enum": ["low", "medium", "high"] }
      }
    },
    "score": {
      "type": "number",
      "minimum": 0,
      "maximum": 1,
      "description": "Solution quality score (0.0-1.0)"
    },
    "is_bound": {
      "type": "boolean",
      "default": false,
      "description": "Whether this solution is bound to the issue"
    },
    "created_at": {
      "type": "string",
      "format": "date-time"
    },
    "bound_at": {
      "type": "string",
      "format": "date-time",
      "description": "When this solution was bound to the issue"
    }
  }
}
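Each line of `solutions/{issue-id}.jsonl` is one object conforming to this schema. A minimal hypothetical line (pretty-printed here for readability; stored as a single line on disk) could be:

```json
{
  "id": "SOL-001",
  "description": "Hypothetical example solution",
  "tasks": [
    {
      "id": "T1",
      "title": "Add input validation",
      "scope": "src/api/",
      "action": "Update",
      "acceptance": ["Invalid payloads are rejected with HTTP 400"],
      "depends_on": []
    }
  ],
  "score": 0.9,
  "is_bound": false,
  "created_at": "2025-12-27T00:00:00Z"
}
```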
266 .codex/prompts/issue-execute.md (Normal file)
@@ -0,0 +1,266 @@
---
description: Execute issue queue tasks sequentially with git commit after each task
argument-hint: "[--dry-run]"
---

# Issue Execute (Codex Version)

## Core Principle

**Serial Execution**: Execute tasks ONE BY ONE from the issue queue. Complete each task fully (implement → test → commit) before moving to the next. Continue autonomously until ALL tasks complete or the queue is empty.

## Execution Flow

```
INIT: Fetch first task via ccw issue next

WHILE task exists:
  1. Receive task JSON from ccw issue next
  2. Execute full lifecycle:
     - IMPLEMENT: Follow task.implementation steps
     - TEST: Run task.test commands
     - VERIFY: Check task.acceptance criteria
     - COMMIT: Stage files, commit with task.commit.message_template
  3. Report completion via ccw issue complete <item_id>
  4. Fetch next task via ccw issue next

WHEN queue empty:
  Output final summary
```

## Step 1: Fetch First Task

Run this command to get your first task:

```bash
ccw issue next
```

This returns JSON with the full task definition:

- `item_id`: Unique task identifier in queue (e.g., "T-1")
- `issue_id`: Parent issue ID (e.g., "ISSUE-20251227-001")
- `task`: Full task definition with implementation steps
- `context`: Relevant files and patterns
- `execution_hints`: Timing and executor hints

If the response contains `{ "status": "empty" }`, all tasks are complete - skip to the final summary.

## Step 2: Parse Task Response

Expected task structure:

```json
{
  "item_id": "T-1",
  "issue_id": "ISSUE-20251227-001",
  "solution_id": "SOL-001",
  "task": {
    "id": "T1",
    "title": "Task title",
    "scope": "src/module/",
    "action": "Create|Modify|Fix|Refactor",
    "description": "What to do",
    "modification_points": [
      { "file": "path/to/file.ts", "target": "function name", "change": "description" }
    ],
    "implementation": [
      "Step 1: Do this",
      "Step 2: Do that"
    ],
    "test": {
      "commands": ["npm test -- --filter=xxx"],
      "unit": "Unit test requirements",
      "integration": "Integration test requirements (optional)"
    },
    "acceptance": [
      "Criterion 1: Must pass",
      "Criterion 2: Must verify"
    ],
    "commit": {
      "message_template": "feat(scope): description"
    }
  },
  "context": {
    "relevant_files": ["path/to/reference.ts"],
    "patterns": "Follow existing pattern in xxx"
  }
}
```

## Step 3: Execute Task Lifecycle

### Phase A: IMPLEMENT

1. Read all `context.relevant_files` to understand existing patterns
2. Follow `task.implementation` steps in order
3. Apply changes to `task.modification_points` files
4. Follow `context.patterns` for code style consistency

**Output format:**

```
## Implementing: [task.title]

**Scope**: [task.scope]
**Action**: [task.action]

**Steps**:
1. ✓ [implementation step 1]
2. ✓ [implementation step 2]
...

**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts
```

### Phase B: TEST

1. Run all commands in `task.test.commands`
2. Verify unit tests pass (`task.test.unit`)
3. Run integration tests if specified (`task.test.integration`)

**If tests fail**: Fix the code and re-run. Do NOT proceed until tests pass.

**Output format:**

```
## Testing: [task.title]

**Test Results**:
- [x] Unit tests: PASSED
- [x] Integration tests: PASSED (or N/A)
```

### Phase C: VERIFY

Check that all `task.acceptance` criteria are met:

```
## Verifying: [task.title]

**Acceptance Criteria**:
- [x] Criterion 1: Verified
- [x] Criterion 2: Verified
...

All criteria met: YES
```

**If any criterion fails**: Go back to the IMPLEMENT phase and fix.

### Phase D: COMMIT

After all phases pass, commit the changes:

```bash
# Stage all modified files
git add path/to/file1.ts path/to/file2.ts ...

# Commit with task message template
git commit -m "$(cat <<'EOF'
[task.commit.message_template]

Item-ID: [item_id]
Issue-ID: [issue_id]
Task-ID: [task.id]
EOF
)"
```

**Output format:**

```
## Committed: [task.title]

**Commit**: [commit hash]
**Message**: [commit message]
**Files**: N files changed
```

## Step 4: Report Completion

After the commit succeeds, report to the queue system:

```bash
ccw issue complete [item_id] --result '{
  "files_modified": ["path1", "path2"],
  "tests_passed": true,
  "acceptance_passed": true,
  "committed": true,
  "commit_hash": "[actual hash]",
  "summary": "[What was accomplished]"
}'
```

**If the task failed and cannot be fixed:**

```bash
ccw issue fail [item_id] --reason "Phase [X] failed: [details]"
```

## Step 5: Continue to Next Task

Immediately fetch the next task:

```bash
ccw issue next
```

**Output progress:**

```
✓ [N/M] Completed: [item_id] - [task.title]
→ Fetching next task...
```

**DO NOT STOP.** Return to Step 2 and continue until the queue is empty.

## Final Summary

When `ccw issue next` returns `{ "status": "empty" }`:

```markdown
## Issue Queue Execution Complete

**Total Tasks Executed**: N
**All Commits**:
| # | Item ID | Task | Commit |
|---|---------|------|--------|
| 1 | T-1 | Task title | abc123 |
| 2 | T-2 | Task title | def456 |

**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts

**Summary**:
[Overall what was accomplished]
```

## Execution Rules

1. **Never stop mid-queue** - Continue until the queue is empty
2. **One task at a time** - Fully complete each task (including commit) before moving on
3. **Tests MUST pass** - Do not proceed to commit if tests fail
4. **Commit after each task** - Each task gets its own commit
5. **Self-verify** - All acceptance criteria must pass before commit
6. **Report accurately** - Use ccw issue complete/fail after each task
7. **Handle failures gracefully** - If a task fails, report via ccw issue fail and continue to the next

## Error Handling

| Situation | Action |
|-----------|--------|
| ccw issue next returns empty | All done - output final summary |
| Tests fail | Fix code, re-run tests |
| Verification fails | Go back to implement phase |
| Git commit fails | Check staging, retry commit |
| ccw issue complete fails | Log error, continue to next task |
| Unrecoverable error | Call ccw issue fail, continue to next |

## Start Execution

Begin by running:

```bash
ccw issue next
```

Then follow the lifecycle for each task until the queue is empty.
@@ -12,6 +12,7 @@ import { cliCommand } from './commands/cli.js';
import { memoryCommand } from './commands/memory.js';
import { coreMemoryCommand } from './commands/core-memory.js';
import { hookCommand } from './commands/hook.js';
import { issueCommand } from './commands/issue.js';
import { readFileSync, existsSync } from 'fs';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
@@ -260,5 +261,30 @@ export function run(argv: string[]): void {
    .option('--type <type>', 'Context type: session-start, context')
    .action((subcommand, args, options) => hookCommand(subcommand, args, options));

  // Issue command - Issue lifecycle management with JSONL task tracking
  program
    .command('issue [subcommand] [args...]')
    .description('Issue lifecycle management with JSONL task tracking')
    .option('--title <title>', 'Task title')
    .option('--type <type>', 'Task type: feature, bug, refactor, test, chore, docs')
    .option('--status <status>', 'Task status')
    .option('--phase <phase>', 'Execution phase')
    .option('--description <desc>', 'Task description')
    .option('--depends-on <ids>', 'Comma-separated dependency task IDs')
    .option('--delivery-criteria <items>', 'Pipe-separated delivery criteria')
    .option('--pause-criteria <items>', 'Pipe-separated pause criteria')
    .option('--executor <type>', 'Executor: agent, codex, gemini, auto')
    .option('--priority <n>', 'Task priority (1-5)')
    .option('--format <fmt>', 'Output format: json, markdown')
    .option('--json', 'Output as JSON')
    .option('--ids', 'List only IDs (one per line, for scripting)')
    .option('--force', 'Force operation')
    // New options for solution/queue management
    .option('--solution <path>', 'Solution JSON file path')
    .option('--solution-id <id>', 'Solution ID')
    .option('--result <json>', 'Execution result JSON')
    .option('--reason <text>', 'Failure reason')
    .action((subcommand, args, options) => issueCommand(subcommand, args, options));

  program.parse(argv);
}
1263 ccw/src/commands/issue.ts (Normal file)
File diff suppressed because it is too large
@@ -21,6 +21,7 @@ const MODULE_FILES = [
  'dashboard-js/components/tabs-other.js',
  'dashboard-js/components/carousel.js',
  'dashboard-js/components/notifications.js',
  'dashboard-js/components/cli-stream-viewer.js',
  'dashboard-js/components/global-notifications.js',
  'dashboard-js/components/cli-status.js',
  'dashboard-js/components/cli-history.js',
575 ccw/src/core/routes/issue-routes.ts (Normal file)
@@ -0,0 +1,575 @@
// @ts-nocheck
|
||||
/**
|
||||
* Issue Routes Module (Optimized - Flat JSONL Storage)
|
||||
*
|
||||
* Storage Structure:
|
||||
* .workflow/issues/
|
||||
* ├── issues.jsonl # All issues (one per line)
|
||||
* ├── queues/ # Queue history directory
|
||||
* │ ├── index.json # Queue index (active + history)
|
||||
* │ └── {queue-id}.json # Individual queue files
|
||||
* └── solutions/
|
||||
* ├── {issue-id}.jsonl # Solutions for issue (one per line)
|
||||
* └── ...
|
||||
*
|
||||
* API Endpoints (8 total):
|
||||
* - GET /api/issues - List all issues
|
||||
* - POST /api/issues - Create new issue
|
||||
* - GET /api/issues/:id - Get issue detail
|
||||
* - PATCH /api/issues/:id - Update issue (includes binding logic)
|
||||
* - DELETE /api/issues/:id - Delete issue
|
||||
* - POST /api/issues/:id/solutions - Add solution
|
||||
* - PATCH /api/issues/:id/tasks/:taskId - Update task
|
||||
* - GET /api/queue - Get execution queue
|
||||
* - POST /api/queue/reorder - Reorder queue items
|
||||
*/
|
||||
import type { IncomingMessage, ServerResponse } from 'http';
|
||||
import { readFileSync, existsSync, writeFileSync, mkdirSync, unlinkSync } from 'fs';
|
||||
import { join } from 'path';
|
||||
|
||||
export interface RouteContext {
|
||||
pathname: string;
|
||||
url: URL;
|
||||
req: IncomingMessage;
|
||||
res: ServerResponse;
|
||||
initialPath: string;
|
||||
handlePostRequest: (req: IncomingMessage, res: ServerResponse, handler: (body: unknown) => Promise<any>) => void;
|
||||
broadcastToClients: (data: unknown) => void;
|
||||
}
|
||||
|
||||
// ========== JSONL Helper Functions ==========
|
||||
|
||||
function readIssuesJsonl(issuesDir: string): any[] {
|
||||
const issuesPath = join(issuesDir, 'issues.jsonl');
|
||||
if (!existsSync(issuesPath)) return [];
|
||||
try {
|
||||
const content = readFileSync(issuesPath, 'utf8');
|
||||
return content.split('\n').filter(line => line.trim()).map(line => JSON.parse(line));
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
function writeIssuesJsonl(issuesDir: string, issues: any[]) {
|
||||
if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
|
||||
const issuesPath = join(issuesDir, 'issues.jsonl');
|
||||
writeFileSync(issuesPath, issues.map(i => JSON.stringify(i)).join('\n'));
|
||||
}
|
||||
|
||||
function readSolutionsJsonl(issuesDir: string, issueId: string): any[] {
|
||||
const solutionsPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
|
||||
if (!existsSync(solutionsPath)) return [];
|
||||
try {
|
||||
const content = readFileSync(solutionsPath, 'utf8');
|
||||
return content.split('\n').filter(line => line.trim()).map(line => JSON.parse(line));
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
function writeSolutionsJsonl(issuesDir: string, issueId: string, solutions: any[]) {
|
||||
const solutionsDir = join(issuesDir, 'solutions');
|
||||
if (!existsSync(solutionsDir)) mkdirSync(solutionsDir, { recursive: true });
|
||||
writeFileSync(join(solutionsDir, `${issueId}.jsonl`), solutions.map(s => JSON.stringify(s)).join('\n'));
|
||||
}
|
||||
|
||||
function readQueue(issuesDir: string) {
|
||||
// Try new multi-queue structure first
|
||||
const queuesDir = join(issuesDir, 'queues');
|
||||
const indexPath = join(queuesDir, 'index.json');
|
||||
|
||||
if (existsSync(indexPath)) {
|
||||
try {
|
||||
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
|
||||
const activeQueueId = index.active_queue_id;
|
||||
|
||||
if (activeQueueId) {
|
||||
const queueFilePath = join(queuesDir, `${activeQueueId}.json`);
|
||||
if (existsSync(queueFilePath)) {
|
||||
return JSON.parse(readFileSync(queueFilePath, 'utf8'));
|
||||
}
|
||||
}
|
||||
} catch {
|
||||
// Fall through to legacy check
|
||||
}
|
||||
}
|
||||
|
||||
// Fallback to legacy queue.json
|
||||
const legacyQueuePath = join(issuesDir, 'queue.json');
|
||||
if (existsSync(legacyQueuePath)) {
|
||||
try {
|
||||
return JSON.parse(readFileSync(legacyQueuePath, 'utf8'));
|
||||
} catch {
|
||||
// Return empty queue
|
||||
}
|
||||
}
|
||||
|
||||
return { tasks: [], conflicts: [], execution_groups: [], _metadata: { version: '1.0', total_tasks: 0 } };
|
||||
}
|
||||
|
||||
function writeQueue(issuesDir: string, queue: any) {
|
||||
if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
|
||||
queue._metadata = { ...queue._metadata, updated_at: new Date().toISOString(), total_tasks: queue.tasks?.length || 0 };
|
||||
|
||||
// Check if using new multi-queue structure
|
||||
const queuesDir = join(issuesDir, 'queues');
|
||||
const indexPath = join(queuesDir, 'index.json');
|
||||
|
||||
if (existsSync(indexPath) && queue.id) {
|
||||
// Write to new structure
|
||||
const queueFilePath = join(queuesDir, `${queue.id}.json`);
|
||||
writeFileSync(queueFilePath, JSON.stringify(queue, null, 2));
|
||||
|
||||
// Update index metadata
|
||||
try {
|
||||
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
|
||||
const queueEntry = index.queues?.find((q: any) => q.id === queue.id);
|
||||
if (queueEntry) {
|
||||
queueEntry.total_tasks = queue.tasks?.length || 0;
|
||||
queueEntry.completed_tasks = queue.tasks?.filter((i: any) => i.status === 'completed').length || 0;
|
||||
writeFileSync(indexPath, JSON.stringify(index, null, 2));
|
||||
}
|
||||
} catch {
|
||||
// Ignore index update errors
|
||||
}
|
||||
} else {
|
||||
// Fallback to legacy queue.json
|
||||
writeFileSync(join(issuesDir, 'queue.json'), JSON.stringify(queue, null, 2));
|
||||
}
|
||||
}
|
||||
|
||||
function getIssueDetail(issuesDir: string, issueId: string) {
|
||||
const issues = readIssuesJsonl(issuesDir);
|
||||
const issue = issues.find(i => i.id === issueId);
|
||||
if (!issue) return null;
|
||||
|
||||
const solutions = readSolutionsJsonl(issuesDir, issueId);
|
||||
let tasks: any[] = [];
|
||||
if (issue.bound_solution_id) {
|
||||
const boundSol = solutions.find(s => s.id === issue.bound_solution_id);
|
||||
if (boundSol?.tasks) tasks = boundSol.tasks;
|
||||
}
|
||||
return { ...issue, solutions, tasks };
|
||||
}
|
||||
|
||||
function enrichIssues(issues: any[], issuesDir: string) {
|
||||
return issues.map(issue => {
|
||||
const solutions = readSolutionsJsonl(issuesDir, issue.id);
|
||||
let taskCount = 0;
|
||||
|
||||
// Get task count from bound solution
|
||||
if (issue.bound_solution_id) {
|
||||
const boundSol = solutions.find(s => s.id === issue.bound_solution_id);
|
||||
if (boundSol?.tasks) {
|
||||
taskCount = boundSol.tasks.length;
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
...issue,
|
||||
solution_count: solutions.length,
|
||||
task_count: taskCount
|
||||
};
|
||||
});
|
||||
}
|
||||
|
||||
function groupQueueByExecutionGroup(queue: any) {
|
||||
const groups: { [key: string]: any[] } = {};
|
||||
for (const item of queue.tasks || []) {
|
||||
const groupId = item.execution_group || 'ungrouped';
|
||||
if (!groups[groupId]) groups[groupId] = [];
|
||||
groups[groupId].push(item);
|
||||
}
|
||||
for (const groupId of Object.keys(groups)) {
|
||||
groups[groupId].sort((a, b) => (a.execution_order || 0) - (b.execution_order || 0));
|
||||
}
|
||||
const executionGroups = Object.entries(groups).map(([id, items]) => ({
|
||||
id,
|
||||
type: id.startsWith('P') ? 'parallel' : id.startsWith('S') ? 'sequential' : 'unknown',
|
||||
task_count: items.length,
|
||||
tasks: items.map(i => i.item_id)
|
||||
})).sort((a, b) => {
|
||||
const aFirst = groups[a.id]?.[0]?.execution_order || 0;
|
||||
const bFirst = groups[b.id]?.[0]?.execution_order || 0;
|
||||
return aFirst - bFirst;
|
||||
});
|
||||
return { ...queue, execution_groups: executionGroups, grouped_items: groups };
|
||||
}
/**
 * Bind solution to issue with proper side effects
 */
function bindSolutionToIssue(issuesDir: string, issueId: string, solutionId: string, issues: any[], issueIndex: number) {
  const solutions = readSolutionsJsonl(issuesDir, issueId);
  const solIndex = solutions.findIndex(s => s.id === solutionId);

  if (solIndex === -1) return { error: `Solution ${solutionId} not found` };

  // Unbind all, bind new
  solutions.forEach(s => { s.is_bound = false; });
  solutions[solIndex].is_bound = true;
  solutions[solIndex].bound_at = new Date().toISOString();
  writeSolutionsJsonl(issuesDir, issueId, solutions);

  // Update issue
  issues[issueIndex].bound_solution_id = solutionId;
  issues[issueIndex].status = 'planned';
  issues[issueIndex].planned_at = new Date().toISOString();

  return { success: true, bound: solutionId };
}
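A hedged sketch of the invariant `bindSolutionToIssue` maintains: every solution is unbound first, so at most one `is_bound` flag is set afterwards. The `Solution` type here is a simplified stand-in, not the real record shape.

```typescript
type Solution = { id: string; is_bound: boolean };

// Unbind all, then bind exactly the requested id (mirrors the unbind-all/bind-new step)
function bindOne(solutions: Solution[], solutionId: string): Solution[] {
  return solutions.map(s => ({ ...s, is_bound: s.id === solutionId }));
}

const sols = bindOne(
  [{ id: 'SOL-1', is_bound: true }, { id: 'SOL-2', is_bound: false }],
  'SOL-2'
);
console.log(sols.filter(s => s.is_bound).map(s => s.id));
```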
// ========== Route Handler ==========

export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
  const { pathname, url, req, res, initialPath, handlePostRequest } = ctx;
  const projectPath = url.searchParams.get('path') || initialPath;
  const issuesDir = join(projectPath, '.workflow', 'issues');

  // ===== Queue Routes (top-level /api/queue) =====

  // GET /api/queue - Get execution queue
  if (pathname === '/api/queue' && req.method === 'GET') {
    const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(queue));
    return true;
  }

  // POST /api/queue/reorder - Reorder queue items
  if (pathname === '/api/queue/reorder' && req.method === 'POST') {
    handlePostRequest(req, res, async (body: any) => {
      const { groupId, newOrder } = body;
      if (!groupId || !Array.isArray(newOrder)) {
        return { error: 'groupId and newOrder (array) required' };
      }

      const queue = readQueue(issuesDir);
      const groupItems = queue.tasks.filter((item: any) => item.execution_group === groupId);
      const otherItems = queue.tasks.filter((item: any) => item.execution_group !== groupId);

      if (groupItems.length === 0) return { error: `No items in group ${groupId}` };

      const groupItemIds = new Set(groupItems.map((i: any) => i.item_id));
      if (groupItemIds.size !== new Set(newOrder).size) {
        return { error: 'newOrder must contain all group items' };
      }
      for (const id of newOrder) {
        if (!groupItemIds.has(id)) return { error: `Invalid item_id: ${id}` };
      }

      const itemMap = new Map(groupItems.map((i: any) => [i.item_id, i]));
      const reorderedItems = newOrder.map((qid: string, idx: number) => ({ ...itemMap.get(qid), _idx: idx }));
      const newQueue = [...otherItems, ...reorderedItems].sort((a, b) => {
        const aGroup = parseInt(a.execution_group?.match(/\d+/)?.[0] || '999');
        const bGroup = parseInt(b.execution_group?.match(/\d+/)?.[0] || '999');
        if (aGroup !== bGroup) return aGroup - bGroup;
        if (a.execution_group === b.execution_group) {
          return (a._idx ?? a.execution_order ?? 999) - (b._idx ?? b.execution_order ?? 999);
        }
        return (a.execution_order || 0) - (b.execution_order || 0);
      });

      newQueue.forEach((item, idx) => { item.execution_order = idx + 1; delete item._idx; });
      queue.tasks = newQueue;
      writeQueue(issuesDir, queue);

      return { success: true, groupId, reordered: newOrder.length };
    });
    return true;
  }

  // Legacy: GET /api/issues/queue (backward compat)
  if (pathname === '/api/issues/queue' && req.method === 'GET') {
    const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(queue));
    return true;
  }
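A hypothetical mini-model of what `/api/queue/reorder` guarantees after a reorder: `execution_order` is renumbered into a contiguous 1..N sequence across the queue.

```typescript
type QueueEntry = { item_id: string; execution_order: number };

// Sort by the (possibly gapped) old order, then reassign 1..N
function renumber(queue: QueueEntry[]): QueueEntry[] {
  return [...queue]
    .sort((a, b) => a.execution_order - b.execution_order)
    .map((item, idx) => ({ ...item, execution_order: idx + 1 }));
}

const renumbered = renumber([
  { item_id: 'T3', execution_order: 7 },
  { item_id: 'T1', execution_order: 2 },
  { item_id: 'T2', execution_order: 5 },
]);
console.log(renumbered.map(i => `${i.item_id}:${i.execution_order}`).join(','));
```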
  // ===== Issue Routes =====

  // GET /api/issues - List all issues
  if (pathname === '/api/issues' && req.method === 'GET') {
    const issues = enrichIssues(readIssuesJsonl(issuesDir), issuesDir);
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({
      issues,
      _metadata: { version: '2.0', storage: 'jsonl', total_issues: issues.length, last_updated: new Date().toISOString() }
    }));
    return true;
  }

  // POST /api/issues - Create issue
  if (pathname === '/api/issues' && req.method === 'POST') {
    handlePostRequest(req, res, async (body: any) => {
      if (!body.id || !body.title) return { error: 'id and title required' };

      const issues = readIssuesJsonl(issuesDir);
      if (issues.find(i => i.id === body.id)) return { error: `Issue ${body.id} exists` };

      const newIssue = {
        id: body.id,
        title: body.title,
        status: body.status || 'registered',
        priority: body.priority || 3,
        context: body.context || '',
        source: body.source || 'text',
        source_url: body.source_url || null,
        labels: body.labels || [],
        created_at: new Date().toISOString(),
        updated_at: new Date().toISOString()
      };

      issues.push(newIssue);
      writeIssuesJsonl(issuesDir, issues);
      return { success: true, issue: newIssue };
    });
    return true;
  }
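A hedged sketch of the defaults `POST /api/issues` fills in when the request body carries only the required `id` and `title` (default values mirrored from the handler above; the minimal body itself is hypothetical).

```typescript
const reqBody: { id: string; title: string; status?: string; priority?: number; labels?: string[] } =
  { id: 'GH-123', title: 'Fix login bug' };

// Optional fields fall back to the handler's defaults
const issueRecord = {
  id: reqBody.id,
  title: reqBody.title,
  status: reqBody.status || 'registered',
  priority: reqBody.priority || 3,
  labels: reqBody.labels || [],
};

console.log(issueRecord.status, issueRecord.priority);
```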
  // GET /api/issues/:id - Get issue detail
  const detailMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
  if (detailMatch && req.method === 'GET') {
    const issueId = decodeURIComponent(detailMatch[1]);
    if (issueId === 'queue') return false;

    const detail = getIssueDetail(issuesDir, issueId);
    if (!detail) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Issue not found' }));
      return true;
    }
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(detail));
    return true;
  }

  // PATCH /api/issues/:id - Update issue (with binding support)
  const updateMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
  if (updateMatch && req.method === 'PATCH') {
    const issueId = decodeURIComponent(updateMatch[1]);
    if (issueId === 'queue') return false;

    handlePostRequest(req, res, async (body: any) => {
      const issues = readIssuesJsonl(issuesDir);
      const issueIndex = issues.findIndex(i => i.id === issueId);
      if (issueIndex === -1) return { error: 'Issue not found' };

      const updates: string[] = [];

      // Handle binding if bound_solution_id provided
      if (body.bound_solution_id !== undefined) {
        if (body.bound_solution_id) {
          const bindResult = bindSolutionToIssue(issuesDir, issueId, body.bound_solution_id, issues, issueIndex);
          if (bindResult.error) return bindResult;
          updates.push('bound_solution_id');
        } else {
          // Unbind
          const solutions = readSolutionsJsonl(issuesDir, issueId);
          solutions.forEach(s => { s.is_bound = false; });
          writeSolutionsJsonl(issuesDir, issueId, solutions);
          issues[issueIndex].bound_solution_id = null;
          updates.push('bound_solution_id (unbound)');
        }
      }

      // Update other fields
      for (const field of ['title', 'context', 'status', 'priority', 'labels']) {
        if (body[field] !== undefined) {
          issues[issueIndex][field] = body[field];
          updates.push(field);
        }
      }

      issues[issueIndex].updated_at = new Date().toISOString();
      writeIssuesJsonl(issuesDir, issues);
      return { success: true, issueId, updated: updates };
    });
    return true;
  }

  // DELETE /api/issues/:id
  const deleteMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
  if (deleteMatch && req.method === 'DELETE') {
    const issueId = decodeURIComponent(deleteMatch[1]);

    const issues = readIssuesJsonl(issuesDir);
    const filtered = issues.filter(i => i.id !== issueId);
    if (filtered.length === issues.length) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Issue not found' }));
      return true;
    }

    writeIssuesJsonl(issuesDir, filtered);

    // Clean up solutions file
    const solPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
    if (existsSync(solPath)) {
      try { unlinkSync(solPath); } catch {}
    }

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ success: true, issueId }));
    return true;
  }
  // POST /api/issues/:id/solutions - Add solution
  const addSolMatch = pathname.match(/^\/api\/issues\/([^/]+)\/solutions$/);
  if (addSolMatch && req.method === 'POST') {
    const issueId = decodeURIComponent(addSolMatch[1]);

    handlePostRequest(req, res, async (body: any) => {
      if (!body.id || !body.tasks) return { error: 'id and tasks required' };

      const solutions = readSolutionsJsonl(issuesDir, issueId);
      if (solutions.find(s => s.id === body.id)) return { error: `Solution ${body.id} exists` };

      const newSolution = {
        id: body.id,
        description: body.description || '',
        tasks: body.tasks,
        exploration_context: body.exploration_context || {},
        analysis: body.analysis || {},
        score: body.score || 0,
        is_bound: false,
        created_at: new Date().toISOString()
      };

      solutions.push(newSolution);
      writeSolutionsJsonl(issuesDir, issueId, solutions);

      // Update issue solution_count
      const issues = readIssuesJsonl(issuesDir);
      const idx = issues.findIndex(i => i.id === issueId);
      if (idx !== -1) {
        issues[idx].solution_count = solutions.length;
        issues[idx].updated_at = new Date().toISOString();
        writeIssuesJsonl(issuesDir, issues);
      }

      return { success: true, solution: newSolution };
    });
    return true;
  }
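A hedged sketch of the rule encoded in the handler above: a freshly added solution always starts unbound (`is_bound: false`) and defaults `score` to 0; binding only happens later via `PATCH /api/issues/:id` or the legacy bind route. The request body here is hypothetical.

```typescript
const solBody: { id: string; tasks: { id: string }[]; score?: number } =
  { id: 'SOL-1', tasks: [{ id: 'T1' }] };

// Defaults mirrored from the POST /api/issues/:id/solutions handler
const solutionRecord = {
  id: solBody.id,
  tasks: solBody.tasks,
  score: solBody.score || 0,
  is_bound: false,
};

console.log(solutionRecord.is_bound, solutionRecord.score);
```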
  // PATCH /api/issues/:id/tasks/:taskId - Update task
  const taskMatch = pathname.match(/^\/api\/issues\/([^/]+)\/tasks\/([^/]+)$/);
  if (taskMatch && req.method === 'PATCH') {
    const issueId = decodeURIComponent(taskMatch[1]);
    const taskId = decodeURIComponent(taskMatch[2]);

    handlePostRequest(req, res, async (body: any) => {
      const issues = readIssuesJsonl(issuesDir);
      const issue = issues.find(i => i.id === issueId);
      if (!issue?.bound_solution_id) return { error: 'Issue or bound solution not found' };

      const solutions = readSolutionsJsonl(issuesDir, issueId);
      const solIdx = solutions.findIndex(s => s.id === issue.bound_solution_id);
      if (solIdx === -1) return { error: 'Bound solution not found' };

      const taskIdx = solutions[solIdx].tasks?.findIndex((t: any) => t.id === taskId);
      if (taskIdx === -1 || taskIdx === undefined) return { error: 'Task not found' };

      const updates: string[] = [];
      for (const field of ['status', 'priority', 'result', 'error']) {
        if (body[field] !== undefined) {
          solutions[solIdx].tasks[taskIdx][field] = body[field];
          updates.push(field);
        }
      }
      solutions[solIdx].tasks[taskIdx].updated_at = new Date().toISOString();
      writeSolutionsJsonl(issuesDir, issueId, solutions);

      return { success: true, issueId, taskId, updated: updates };
    });
    return true;
  }

  // Legacy: PUT /api/issues/:id/task/:taskId (backward compat)
  const legacyTaskMatch = pathname.match(/^\/api\/issues\/([^/]+)\/task\/([^/]+)$/);
  if (legacyTaskMatch && req.method === 'PUT') {
    const issueId = decodeURIComponent(legacyTaskMatch[1]);
    const taskId = decodeURIComponent(legacyTaskMatch[2]);

    handlePostRequest(req, res, async (body: any) => {
      const issues = readIssuesJsonl(issuesDir);
      const issue = issues.find(i => i.id === issueId);
      if (!issue?.bound_solution_id) return { error: 'Issue or bound solution not found' };

      const solutions = readSolutionsJsonl(issuesDir, issueId);
      const solIdx = solutions.findIndex(s => s.id === issue.bound_solution_id);
      if (solIdx === -1) return { error: 'Bound solution not found' };

      const taskIdx = solutions[solIdx].tasks?.findIndex((t: any) => t.id === taskId);
      if (taskIdx === -1 || taskIdx === undefined) return { error: 'Task not found' };

      const updates: string[] = [];
      if (body.status !== undefined) { solutions[solIdx].tasks[taskIdx].status = body.status; updates.push('status'); }
      if (body.priority !== undefined) { solutions[solIdx].tasks[taskIdx].priority = body.priority; updates.push('priority'); }
      solutions[solIdx].tasks[taskIdx].updated_at = new Date().toISOString();
      writeSolutionsJsonl(issuesDir, issueId, solutions);

      return { success: true, issueId, taskId, updated: updates };
    });
    return true;
  }

  // Legacy: PUT /api/issues/:id/bind/:solutionId (backward compat)
  const legacyBindMatch = pathname.match(/^\/api\/issues\/([^/]+)\/bind\/([^/]+)$/);
  if (legacyBindMatch && req.method === 'PUT') {
    const issueId = decodeURIComponent(legacyBindMatch[1]);
    const solutionId = decodeURIComponent(legacyBindMatch[2]);

    const issues = readIssuesJsonl(issuesDir);
    const issueIndex = issues.findIndex(i => i.id === issueId);
    if (issueIndex === -1) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Issue not found' }));
      return true;
    }

    const result = bindSolutionToIssue(issuesDir, issueId, solutionId, issues, issueIndex);
    if (result.error) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(result));
      return true;
    }

    issues[issueIndex].updated_at = new Date().toISOString();
    writeIssuesJsonl(issuesDir, issues);

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ success: true, issueId, solutionId }));
    return true;
  }

  // Legacy: PUT /api/issues/:id (backward compat for PATCH)
  const legacyUpdateMatch = pathname.match(/^\/api\/issues\/([^/]+)$/);
  if (legacyUpdateMatch && req.method === 'PUT') {
    const issueId = decodeURIComponent(legacyUpdateMatch[1]);
    if (issueId === 'queue') return false;

    handlePostRequest(req, res, async (body: any) => {
      const issues = readIssuesJsonl(issuesDir);
      const issueIndex = issues.findIndex(i => i.id === issueId);
      if (issueIndex === -1) return { error: 'Issue not found' };

      const updates: string[] = [];
      for (const field of ['title', 'context', 'status', 'priority', 'bound_solution_id', 'labels']) {
        if (body[field] !== undefined) {
          issues[issueIndex][field] = body[field];
          updates.push(field);
        }
      }

      issues[issueIndex].updated_at = new Date().toISOString();
      writeIssuesJsonl(issuesDir, issues);
      return { success: true, issueId, updated: updates };
    });
    return true;
  }

  return false;
}
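A hypothetical sketch of the handler contract this module follows: a route handler returns `true` when it handled the request and `false` to let the dispatcher fall through to later handlers (as `handleIssueRoutes` does for unmatched paths and for the reserved `queue` id). The dispatcher and handler names below are illustrative, not the server's actual code.

```typescript
type RouteHandler = (path: string) => boolean;

function dispatch(handlers: RouteHandler[], path: string): boolean {
  for (const h of handlers) {
    if (h(path)) return true; // handled: stop here
  }
  return false; // nobody claimed the path
}

const issueRoutes: RouteHandler = p => p.startsWith('/api/issues') || p.startsWith('/api/queue');
const rulesRoutes: RouteHandler = p => p.startsWith('/api/rules');

console.log(dispatch([issueRoutes, rulesRoutes], '/api/queue'));
console.log(dispatch([issueRoutes, rulesRoutes], '/api/unknown'));
```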
@@ -17,6 +17,7 @@ import { handleGraphRoutes } from './routes/graph-routes.js';
 import { handleSystemRoutes } from './routes/system-routes.js';
 import { handleFilesRoutes } from './routes/files-routes.js';
 import { handleSkillsRoutes } from './routes/skills-routes.js';
+import { handleIssueRoutes } from './routes/issue-routes.js';
 import { handleRulesRoutes } from './routes/rules-routes.js';
 import { handleSessionRoutes } from './routes/session-routes.js';
 import { handleCcwRoutes } from './routes/ccw-routes.js';
@@ -86,7 +87,9 @@ const MODULE_CSS_FILES = [
   '28-mcp-manager.css',
   '29-help.css',
   '30-core-memory.css',
-  '31-api-settings.css'
+  '31-api-settings.css',
+  '32-issue-manager.css',
+  '33-cli-stream-viewer.css'
 ];

 // Modular JS files in dependency order
@@ -107,6 +110,7 @@ const MODULE_FILES = [
   'components/flowchart.js',
   'components/carousel.js',
   'components/notifications.js',
+  'components/cli-stream-viewer.js',
   'components/global-notifications.js',
   'components/task-queue-sidebar.js',
   'components/cli-status.js',
@@ -142,6 +146,7 @@ const MODULE_FILES = [
   'views/claude-manager.js',
   'views/api-settings.js',
   'views/help.js',
+  'views/issue-manager.js',
   'main.js'
 ];

@@ -244,7 +249,7 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Ser

   // CORS headers for API requests
   res.setHeader('Access-Control-Allow-Origin', '*');
-  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, DELETE, OPTIONS');
+  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, PATCH, DELETE, OPTIONS');
   res.setHeader('Access-Control-Allow-Headers', 'Content-Type');

   if (req.method === 'OPTIONS') {
@@ -340,6 +345,16 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Ser
     if (await handleSkillsRoutes(routeContext)) return;
   }

+  // Queue routes (/api/queue*) - top-level queue API
+  if (pathname.startsWith('/api/queue')) {
+    if (await handleIssueRoutes(routeContext)) return;
+  }
+
+  // Issue routes (/api/issues*)
+  if (pathname.startsWith('/api/issues')) {
+    if (await handleIssueRoutes(routeContext)) return;
+  }
+
   // Rules routes (/api/rules*)
   if (pathname.startsWith('/api/rules')) {
     if (await handleRulesRoutes(routeContext)) return;
2544  ccw/src/templates/dashboard-css/32-issue-manager.css  (new file; diff suppressed because it is too large)
467   ccw/src/templates/dashboard-css/33-cli-stream-viewer.css  (new file)
@@ -0,0 +1,467 @@
/**
 * CLI Stream Viewer Styles
 * Right-side popup panel for viewing CLI streaming output
 */

/* ===== Overlay ===== */
.cli-stream-overlay {
  position: fixed;
  inset: 0;
  background: rgb(0 0 0 / 0.3);
  z-index: 1050;
  opacity: 0;
  visibility: hidden;
  transition: all 0.3s ease;
}

.cli-stream-overlay.open {
  opacity: 1;
  visibility: visible;
}

/* ===== Main Panel ===== */
.cli-stream-viewer {
  position: fixed;
  top: 60px;
  right: 16px;
  width: 650px;
  max-width: calc(100vw - 32px);
  max-height: calc(100vh - 80px);
  background: hsl(var(--card));
  border: 1px solid hsl(var(--border));
  border-radius: 8px;
  box-shadow: 0 8px 32px rgb(0 0 0 / 0.2);
  z-index: 1100;
  display: flex;
  flex-direction: column;
  transform: translateX(calc(100% + 20px));
  opacity: 0;
  visibility: hidden;
  transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1);
}

.cli-stream-viewer.open {
  transform: translateX(0);
  opacity: 1;
  visibility: visible;
}

/* ===== Header ===== */
.cli-stream-header {
  display: flex;
  align-items: center;
  justify-content: space-between;
  padding: 12px 16px;
  border-bottom: 1px solid hsl(var(--border));
  background: hsl(var(--muted) / 0.3);
}

.cli-stream-title {
  display: flex;
  align-items: center;
  gap: 8px;
  font-size: 0.875rem;
  font-weight: 600;
  color: hsl(var(--foreground));
}

.cli-stream-title svg,
.cli-stream-title i {
  width: 18px;
  height: 18px;
  color: hsl(var(--primary));
}

.cli-stream-count-badge {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  min-width: 20px;
  height: 20px;
  padding: 0 6px;
  background: hsl(var(--muted));
  color: hsl(var(--muted-foreground));
  border-radius: 10px;
  font-size: 0.6875rem;
  font-weight: 600;
}

.cli-stream-count-badge.has-running {
  background: hsl(var(--warning));
  color: hsl(var(--warning-foreground, white));
}

.cli-stream-actions {
  display: flex;
  align-items: center;
  gap: 8px;
}

.cli-stream-action-btn {
  display: flex;
  align-items: center;
  gap: 4px;
  padding: 4px 10px;
  background: transparent;
  border: 1px solid hsl(var(--border));
  border-radius: 4px;
  font-size: 0.75rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  transition: all 0.15s;
}

.cli-stream-action-btn:hover {
  background: hsl(var(--hover));
  color: hsl(var(--foreground));
}

.cli-stream-close-btn {
  display: flex;
  align-items: center;
  justify-content: center;
  width: 28px;
  height: 28px;
  padding: 0;
  background: transparent;
  border: none;
  border-radius: 4px;
  font-size: 1.25rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  transition: all 0.15s;
}

.cli-stream-close-btn:hover {
  background: hsl(var(--destructive) / 0.1);
  color: hsl(var(--destructive));
}

/* ===== Tab Bar ===== */
.cli-stream-tabs {
  display: flex;
  gap: 2px;
  padding: 8px 12px;
  border-bottom: 1px solid hsl(var(--border));
  background: hsl(var(--muted) / 0.2);
  overflow-x: auto;
  scrollbar-width: thin;
}

.cli-stream-tabs::-webkit-scrollbar {
  height: 4px;
}

.cli-stream-tabs::-webkit-scrollbar-thumb {
  background: hsl(var(--border));
  border-radius: 2px;
}

.cli-stream-tab {
  display: flex;
  align-items: center;
  gap: 6px;
  padding: 6px 12px;
  background: transparent;
  border: 1px solid transparent;
  border-radius: 6px;
  font-size: 0.75rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  white-space: nowrap;
  transition: all 0.15s;
}

.cli-stream-tab:hover {
  background: hsl(var(--hover));
  color: hsl(var(--foreground));
}

.cli-stream-tab.active {
  background: hsl(var(--card));
  border-color: hsl(var(--primary));
  color: hsl(var(--foreground));
  box-shadow: 0 1px 3px rgb(0 0 0 / 0.1);
}

.cli-stream-tab-status {
  width: 8px;
  height: 8px;
  border-radius: 50%;
  flex-shrink: 0;
}

.cli-stream-tab-status.running {
  background: hsl(var(--warning));
  animation: streamStatusPulse 1.5s ease-in-out infinite;
}

.cli-stream-tab-status.completed {
  background: hsl(var(--success));
}

.cli-stream-tab-status.error {
  background: hsl(var(--destructive));
}

@keyframes streamStatusPulse {
  0%, 100% { opacity: 1; transform: scale(1); }
  50% { opacity: 0.6; transform: scale(1.2); }
}

.cli-stream-tab-tool {
  font-weight: 500;
  text-transform: capitalize;
}

.cli-stream-tab-mode {
  font-size: 0.625rem;
  padding: 1px 4px;
  background: hsl(var(--muted));
  border-radius: 3px;
  color: hsl(var(--muted-foreground));
}

.cli-stream-tab-close {
  display: flex;
  align-items: center;
  justify-content: center;
  width: 16px;
  height: 16px;
  margin-left: 4px;
  background: transparent;
  border: none;
  border-radius: 50%;
  font-size: 0.75rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  opacity: 0;
  transition: all 0.15s;
}

.cli-stream-tab:hover .cli-stream-tab-close {
  opacity: 1;
}

.cli-stream-tab-close:hover {
  background: hsl(var(--destructive) / 0.2);
  color: hsl(var(--destructive));
}

.cli-stream-tab-close.disabled {
  cursor: not-allowed;
  opacity: 0.3 !important;
}

/* ===== Empty State ===== */
.cli-stream-empty {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  padding: 48px 24px;
  color: hsl(var(--muted-foreground));
  text-align: center;
}

.cli-stream-empty svg,
.cli-stream-empty i {
  width: 48px;
  height: 48px;
  margin-bottom: 16px;
  opacity: 0.5;
}

.cli-stream-empty-title {
  font-size: 0.875rem;
  font-weight: 500;
  margin-bottom: 4px;
}

.cli-stream-empty-hint {
  font-size: 0.75rem;
  opacity: 0.7;
}

/* ===== Terminal Content ===== */
.cli-stream-content {
  flex: 1;
  min-height: 300px;
  max-height: 500px;
  overflow-y: auto;
  padding: 12px 16px;
  background: hsl(220 13% 8%);
  font-family: var(--font-mono, 'Consolas', 'Monaco', 'Courier New', monospace);
  font-size: 0.75rem;
  line-height: 1.6;
  scrollbar-width: thin;
}

.cli-stream-content::-webkit-scrollbar {
  width: 8px;
}

.cli-stream-content::-webkit-scrollbar-track {
  background: transparent;
}

.cli-stream-content::-webkit-scrollbar-thumb {
  background: hsl(0 0% 40%);
  border-radius: 4px;
}

.cli-stream-line {
  white-space: pre-wrap;
  word-break: break-all;
  margin: 0;
  padding: 0;
}

.cli-stream-line.stdout {
  color: hsl(0 0% 85%);
}

.cli-stream-line.stderr {
  color: hsl(8 75% 65%);
}

.cli-stream-line.system {
  color: hsl(210 80% 65%);
  font-style: italic;
}

.cli-stream-line.info {
  color: hsl(200 80% 70%);
}

/* Auto-scroll indicator */
.cli-stream-scroll-btn {
  position: sticky;
  bottom: 8px;
  left: 50%;
  transform: translateX(-50%);
  display: inline-flex;
  align-items: center;
  gap: 4px;
  padding: 4px 12px;
  background: hsl(var(--primary));
  color: white;
  border: none;
  border-radius: 12px;
  font-size: 0.625rem;
  cursor: pointer;
  opacity: 0;
  transition: opacity 0.2s;
}

.cli-stream-content.has-new-content .cli-stream-scroll-btn {
  opacity: 1;
}

/* ===== Status Bar ===== */
.cli-stream-status {
  display: flex;
  align-items: center;
  justify-content: space-between;
  padding: 8px 16px;
  border-top: 1px solid hsl(var(--border));
  background: hsl(var(--muted) / 0.3);
  font-size: 0.6875rem;
  color: hsl(var(--muted-foreground));
}

.cli-stream-status-info {
  display: flex;
  align-items: center;
  gap: 12px;
}

.cli-stream-status-item {
  display: flex;
  align-items: center;
  gap: 4px;
}

.cli-stream-status-item svg,
.cli-stream-status-item i {
  width: 12px;
  height: 12px;
}

.cli-stream-status-actions {
  display: flex;
  align-items: center;
  gap: 8px;
}

.cli-stream-toggle-btn {
  display: flex;
  align-items: center;
  gap: 4px;
  padding: 2px 8px;
  background: transparent;
  border: 1px solid hsl(var(--border));
  border-radius: 3px;
  font-size: 0.625rem;
  color: hsl(var(--muted-foreground));
  cursor: pointer;
  transition: all 0.15s;
}

.cli-stream-toggle-btn:hover {
  background: hsl(var(--hover));
}

.cli-stream-toggle-btn.active {
  background: hsl(var(--primary) / 0.1);
  border-color: hsl(var(--primary));
  color: hsl(var(--primary));
}

/* ===== Header Button & Badge ===== */
.cli-stream-btn {
  position: relative;
}

.cli-stream-badge {
  position: absolute;
  top: -2px;
  right: -2px;
  min-width: 14px;
  height: 14px;
  padding: 0 4px;
  background: hsl(var(--warning));
  color: white;
  border-radius: 7px;
  font-size: 0.5625rem;
  font-weight: 600;
  display: none;
  align-items: center;
  justify-content: center;
}

.cli-stream-badge.has-running {
  display: flex;
  animation: streamBadgePulse 1.5s ease-in-out infinite;
}

@keyframes streamBadgePulse {
  0%, 100% { transform: scale(1); }
  50% { transform: scale(1.15); }
}

/* ===== Responsive ===== */
@media (max-width: 768px) {
  .cli-stream-viewer {
    top: 56px;
    right: 8px;
    left: 8px;
    width: auto;
    max-height: calc(100vh - 72px);
  }

  .cli-stream-content {
    min-height: 200px;
    max-height: 350px;
  }
}
461  ccw/src/templates/dashboard-js/components/cli-stream-viewer.js  (new file)
@@ -0,0 +1,461 @@
/**
 * CLI Stream Viewer Component
 * Real-time streaming output viewer for CLI executions
 */

// ===== State Management =====
let cliStreamExecutions = {}; // { executionId: { tool, mode, output, status, startTime, endTime } }
let activeStreamTab = null;
let autoScrollEnabled = true;
let isCliStreamViewerOpen = false;

const MAX_OUTPUT_LINES = 5000; // Prevent memory issues

// ===== Initialization =====
function initCliStreamViewer() {
  // Initialize keyboard shortcuts
  document.addEventListener('keydown', function(e) {
    if (e.key === 'Escape' && isCliStreamViewerOpen) {
      toggleCliStreamViewer();
    }
  });

  // Initialize scroll detection for auto-scroll
  const content = document.getElementById('cliStreamContent');
  if (content) {
    content.addEventListener('scroll', handleStreamContentScroll);
  }
}

// ===== Panel Control =====
function toggleCliStreamViewer() {
  const viewer = document.getElementById('cliStreamViewer');
  const overlay = document.getElementById('cliStreamOverlay');

  if (!viewer || !overlay) return;

  isCliStreamViewerOpen = !isCliStreamViewerOpen;

  if (isCliStreamViewerOpen) {
    viewer.classList.add('open');
    overlay.classList.add('open');

    // If no active tab but have executions, select the first one
    if (!activeStreamTab && Object.keys(cliStreamExecutions).length > 0) {
      const firstId = Object.keys(cliStreamExecutions)[0];
      switchStreamTab(firstId);
    } else {
      renderStreamContent(activeStreamTab);
    }

    // Re-init lucide icons
    if (typeof lucide !== 'undefined') {
      lucide.createIcons();
    }
  } else {
    viewer.classList.remove('open');
    overlay.classList.remove('open');
  }
}

// ===== WebSocket Event Handlers =====
function handleCliStreamStarted(payload) {
  const { executionId, tool, mode, timestamp } = payload;

  // Create new execution record
  cliStreamExecutions[executionId] = {
    tool: tool || 'cli',
    mode: mode || 'analysis',
    output: [],
    status: 'running',
    startTime: timestamp ? new Date(timestamp).getTime() : Date.now(),
    endTime: null
  };

  // Add system message
  cliStreamExecutions[executionId].output.push({
    type: 'system',
    content: `[${new Date().toLocaleTimeString()}] CLI execution started: ${tool} (${mode} mode)`,
    timestamp: Date.now()
  });

  // If this is the first execution or panel is open, select it
  if (!activeStreamTab || isCliStreamViewerOpen) {
    activeStreamTab = executionId;
  }

  renderStreamTabs();
  renderStreamContent(activeStreamTab);
  updateStreamBadge();

  // Auto-open panel if configured (optional)
  // if (!isCliStreamViewerOpen) toggleCliStreamViewer();
}

function handleCliStreamOutput(payload) {
  const { executionId, chunkType, data } = payload;

  const exec = cliStreamExecutions[executionId];
  if (!exec) return;

  // Parse and add output lines
  const content = typeof data === 'string' ? data : JSON.stringify(data);
  const lines = content.split('\n');

  lines.forEach(line => {
    if (line.trim() || lines.length === 1) { // Keep empty lines if it's the only content
      exec.output.push({
        type: chunkType || 'stdout',
        content: line,
        timestamp: Date.now()
      });
    }
  });

  // Trim if too long
  if (exec.output.length > MAX_OUTPUT_LINES) {
    exec.output = exec.output.slice(-MAX_OUTPUT_LINES);
  }

  // Update UI if this is the active tab
  if (activeStreamTab === executionId && isCliStreamViewerOpen) {
    requestAnimationFrame(() => {
      renderStreamContent(executionId);
    });
  }

  // Update badge to show activity
  updateStreamBadge();
}

function handleCliStreamCompleted(payload) {
  const { executionId, success, duration, timestamp } = payload;

  const exec = cliStreamExecutions[executionId];
  if (!exec) return;

  exec.status = success ? 'completed' : 'error';
  exec.endTime = timestamp ? new Date(timestamp).getTime() : Date.now();

  // Add completion message
  const durationText = duration ? ` (${formatDuration(duration)})` : '';
  const statusText = success ? 'completed successfully' : 'failed';
  exec.output.push({
    type: 'system',
    content: `[${new Date().toLocaleTimeString()}] CLI execution ${statusText}${durationText}`,
    timestamp: Date.now()
  });

  renderStreamTabs();
  if (activeStreamTab === executionId) {
    renderStreamContent(executionId);
  }
  updateStreamBadge();
}

function handleCliStreamError(payload) {
  const { executionId, error, timestamp } = payload;

  const exec = cliStreamExecutions[executionId];
  if (!exec) return;

  exec.status = 'error';
  exec.endTime = timestamp ? new Date(timestamp).getTime() : Date.now();

  // Add error message
  exec.output.push({
    type: 'stderr',
    content: `[ERROR] ${error || 'Unknown error occurred'}`,
    timestamp: Date.now()
  });

  renderStreamTabs();
  if (activeStreamTab === executionId) {
    renderStreamContent(executionId);
  }
  updateStreamBadge();
}

// ===== UI Rendering =====
function renderStreamTabs() {
  const tabsContainer = document.getElementById('cliStreamTabs');
  if (!tabsContainer) return;

  const execIds = Object.keys(cliStreamExecutions);

  if (execIds.length === 0) {
    tabsContainer.innerHTML = '';
    return;
  }

  // Sort: running first, then by start time (newest first)
  execIds.sort((a, b) => {
    const execA = cliStreamExecutions[a];
    const execB = cliStreamExecutions[b];

    if (execA.status === 'running' && execB.status !== 'running') return -1;
    if (execA.status !== 'running' && execB.status === 'running') return 1;
    return execB.startTime - execA.startTime;
  });

  tabsContainer.innerHTML = execIds.map(id => {
    const exec = cliStreamExecutions[id];
    const isActive = id === activeStreamTab;
    const canClose = exec.status !== 'running';

    return `
      <div class="cli-stream-tab ${isActive ? 'active' : ''}"
           onclick="switchStreamTab('${id}')"
           data-execution-id="${id}">
        <span class="cli-stream-tab-status ${exec.status}"></span>
        <span class="cli-stream-tab-tool">${escapeHtml(exec.tool)}</span>
        <span class="cli-stream-tab-mode">${exec.mode}</span>
        <button class="cli-stream-tab-close ${canClose ? '' : 'disabled'}"
                onclick="event.stopPropagation(); closeStream('${id}')"
                title="${canClose ? _streamT('cliStream.close') : _streamT('cliStream.cannotCloseRunning')}"
                ${canClose ? '' : 'disabled'}>×</button>
      </div>
    `;
  }).join('');

  // Update count badge
  const countBadge = document.getElementById('cliStreamCountBadge');
  if (countBadge) {
    const runningCount = execIds.filter(id => cliStreamExecutions[id].status === 'running').length;
    countBadge.textContent = execIds.length;
    countBadge.classList.toggle('has-running', runningCount > 0);
  }
}

function renderStreamContent(executionId) {
  const contentContainer = document.getElementById('cliStreamContent');
  if (!contentContainer) return;

  const exec = executionId ? cliStreamExecutions[executionId] : null;

  if (!exec) {
    // Show empty state
    contentContainer.innerHTML = `
      <div class="cli-stream-empty">
        <i data-lucide="terminal"></i>
        <div class="cli-stream-empty-title" data-i18n="cliStream.noStreams">${_streamT('cliStream.noStreams')}</div>
        <div class="cli-stream-empty-hint" data-i18n="cliStream.noStreamsHint">${_streamT('cliStream.noStreamsHint')}</div>
      </div>
    `;
    if (typeof lucide !== 'undefined') lucide.createIcons();
    return;
  }

  // Check if should auto-scroll
  const wasAtBottom = contentContainer.scrollHeight - contentContainer.scrollTop <= contentContainer.clientHeight + 50;

  // Render output lines
  contentContainer.innerHTML = exec.output.map(line =>
    `<div class="cli-stream-line ${line.type}">${escapeHtml(line.content)}</div>`
  ).join('');

  // Auto-scroll if enabled and was at bottom
  if (autoScrollEnabled && wasAtBottom) {
    contentContainer.scrollTop = contentContainer.scrollHeight;
  }

  // Update status bar
  renderStreamStatus(executionId);
}

function renderStreamStatus(executionId) {
  const statusContainer = document.getElementById('cliStreamStatus');
  if (!statusContainer) return;

  const exec = executionId ? cliStreamExecutions[executionId] : null;

  if (!exec) {
    statusContainer.innerHTML = '';
    return;
  }

  const duration = exec.endTime
    ? formatDuration(exec.endTime - exec.startTime)
    : formatDuration(Date.now() - exec.startTime);

  const statusLabel = exec.status === 'running'
    ? _streamT('cliStream.running')
    : exec.status === 'completed'
      ? _streamT('cliStream.completed')
      : _streamT('cliStream.error');

  statusContainer.innerHTML = `
    <div class="cli-stream-status-info">
      <div class="cli-stream-status-item">
        <span class="cli-stream-tab-status ${exec.status}"></span>
        <span>${statusLabel}</span>
      </div>
      <div class="cli-stream-status-item">
        <i data-lucide="clock"></i>
        <span>${duration}</span>
      </div>
      <div class="cli-stream-status-item">
        <i data-lucide="file-text"></i>
        <span>${exec.output.length} ${_streamT('cliStream.lines') || 'lines'}</span>
      </div>
    </div>
    <div class="cli-stream-status-actions">
      <button class="cli-stream-toggle-btn ${autoScrollEnabled ? 'active' : ''}"
              onclick="toggleAutoScroll()"
              title="${_streamT('cliStream.autoScroll')}">
        <i data-lucide="arrow-down-to-line"></i>
        <span data-i18n="cliStream.autoScroll">${_streamT('cliStream.autoScroll')}</span>
      </button>
    </div>
  `;

  if (typeof lucide !== 'undefined') lucide.createIcons();

  // Update duration periodically for running executions
  if (exec.status === 'running') {
    setTimeout(() => {
      if (activeStreamTab === executionId && cliStreamExecutions[executionId]?.status === 'running') {
        renderStreamStatus(executionId);
      }
    }, 1000);
  }
}

function switchStreamTab(executionId) {
  if (!cliStreamExecutions[executionId]) return;

  activeStreamTab = executionId;
  renderStreamTabs();
  renderStreamContent(executionId);
}

function updateStreamBadge() {
  const badge = document.getElementById('cliStreamBadge');
  if (!badge) return;

  const runningCount = Object.values(cliStreamExecutions).filter(e => e.status === 'running').length;

  if (runningCount > 0) {
    badge.textContent = runningCount;
    badge.classList.add('has-running');
  } else {
    badge.textContent = '';
    badge.classList.remove('has-running');
  }
}

// ===== User Actions =====
function closeStream(executionId) {
  const exec = cliStreamExecutions[executionId];
  if (!exec || exec.status === 'running') return;

  delete cliStreamExecutions[executionId];

  // Switch to another tab if this was active
  if (activeStreamTab === executionId) {
    const remaining = Object.keys(cliStreamExecutions);
    activeStreamTab = remaining.length > 0 ? remaining[0] : null;
  }

  renderStreamTabs();
  renderStreamContent(activeStreamTab);
  updateStreamBadge();
}

function clearCompletedStreams() {
  const toRemove = Object.keys(cliStreamExecutions).filter(
    id => cliStreamExecutions[id].status !== 'running'
  );

  toRemove.forEach(id => delete cliStreamExecutions[id]);

  // Update active tab if needed
  if (activeStreamTab && !cliStreamExecutions[activeStreamTab]) {
    const remaining = Object.keys(cliStreamExecutions);
    activeStreamTab = remaining.length > 0 ? remaining[0] : null;
  }

  renderStreamTabs();
  renderStreamContent(activeStreamTab);
  updateStreamBadge();
}

function toggleAutoScroll() {
  autoScrollEnabled = !autoScrollEnabled;

  if (autoScrollEnabled && activeStreamTab) {
    const content = document.getElementById('cliStreamContent');
    if (content) {
      content.scrollTop = content.scrollHeight;
    }
  }

  renderStreamStatus(activeStreamTab);
}

function handleStreamContentScroll() {
  const content = document.getElementById('cliStreamContent');
  if (!content) return;

  // If user scrolls up, disable auto-scroll
  const isAtBottom = content.scrollHeight - content.scrollTop <= content.clientHeight + 50;
  if (!isAtBottom && autoScrollEnabled) {
    autoScrollEnabled = false;
    renderStreamStatus(activeStreamTab);
  }
}

// ===== Helper Functions =====
function formatDuration(ms) {
  if (ms < 1000) return `${ms}ms`;

  const seconds = Math.floor(ms / 1000);
  if (seconds < 60) return `${seconds}s`;

  const minutes = Math.floor(seconds / 60);
  const remainingSeconds = seconds % 60;
  if (minutes < 60) return `${minutes}m ${remainingSeconds}s`;

  const hours = Math.floor(minutes / 60);
  const remainingMinutes = minutes % 60;
  return `${hours}h ${remainingMinutes}m`;
}

function escapeHtml(text) {
  if (!text) return '';
  const div = document.createElement('div');
  div.textContent = text;
  return div.innerHTML;
}

// Translation helper with fallback (uses global t from i18n.js)
function _streamT(key) {
  // First try global t() from i18n.js
  if (typeof t === 'function' && t !== _streamT) {
    try {
      return t(key);
    } catch (e) {
      // Fall through to fallbacks
    }
  }
  // Fallback values
  const fallbacks = {
    'cliStream.noStreams': 'No active CLI executions',
    'cliStream.noStreamsHint': 'Start a CLI command to see streaming output',
    'cliStream.running': 'Running',
    'cliStream.completed': 'Completed',
    'cliStream.error': 'Error',
    'cliStream.autoScroll': 'Auto-scroll',
    'cliStream.close': 'Close',
    'cliStream.cannotCloseRunning': 'Cannot close running execution',
    'cliStream.lines': 'lines'
  };
  return fallbacks[key] || key;
}

// Initialize when DOM is ready
if (document.readyState === 'loading') {
  document.addEventListener('DOMContentLoaded', initCliStreamViewer);
} else {
  initCliStreamViewer();
}
@@ -155,6 +155,12 @@ function initNavigation() {
        } else {
          console.error('renderApiSettings not defined - please refresh the page');
        }
      } else if (currentView === 'issue-manager') {
        if (typeof renderIssueManager === 'function') {
          renderIssueManager();
        } else {
          console.error('renderIssueManager not defined - please refresh the page');
        }
      }
    });
  });
@@ -199,6 +205,8 @@ function updateContentTitle() {
    titleEl.textContent = t('title.codexLensManager');
  } else if (currentView === 'api-settings') {
    titleEl.textContent = t('title.apiSettings');
  } else if (currentView === 'issue-manager') {
    titleEl.textContent = t('title.issueManager');
  } else if (currentView === 'liteTasks') {
    const names = { 'lite-plan': t('title.litePlanSessions'), 'lite-fix': t('title.liteFixSessions') };
    titleEl.textContent = names[currentLiteType] || t('title.liteTasks');
@@ -217,24 +217,40 @@ function handleNotification(data) {
      if (typeof handleCliExecutionStarted === 'function') {
        handleCliExecutionStarted(payload);
      }
      // Route to CLI Stream Viewer
      if (typeof handleCliStreamStarted === 'function') {
        handleCliStreamStarted(payload);
      }
      break;

    case 'CLI_OUTPUT':
      if (typeof handleCliOutput === 'function') {
        handleCliOutput(payload);
      }
      // Route to CLI Stream Viewer
      if (typeof handleCliStreamOutput === 'function') {
        handleCliStreamOutput(payload);
      }
      break;

    case 'CLI_EXECUTION_COMPLETED':
      if (typeof handleCliExecutionCompleted === 'function') {
        handleCliExecutionCompleted(payload);
      }
      // Route to CLI Stream Viewer
      if (typeof handleCliStreamCompleted === 'function') {
        handleCliStreamCompleted(payload);
      }
      break;

    case 'CLI_EXECUTION_ERROR':
      if (typeof handleCliExecutionError === 'function') {
        handleCliExecutionError(payload);
      }
      // Route to CLI Stream Viewer
      if (typeof handleCliStreamError === 'function') {
        handleCliStreamError(payload);
      }
      break;

    // CLI Review Events
@@ -39,7 +39,21 @@ const i18n = {
  'header.refreshWorkspace': 'Refresh workspace',
  'header.toggleTheme': 'Toggle theme',
  'header.language': 'Language',

  'header.cliStream': 'CLI Stream Viewer',

  // CLI Stream Viewer
  'cliStream.title': 'CLI Stream',
  'cliStream.clearCompleted': 'Clear Completed',
  'cliStream.noStreams': 'No active CLI executions',
  'cliStream.noStreamsHint': 'Start a CLI command to see streaming output',
  'cliStream.running': 'Running',
  'cliStream.completed': 'Completed',
  'cliStream.error': 'Error',
  'cliStream.autoScroll': 'Auto-scroll',
  'cliStream.close': 'Close',
  'cliStream.cannotCloseRunning': 'Cannot close running execution',
  'cliStream.lines': 'lines',

  // Sidebar - Project section
  'nav.project': 'Project',
  'nav.overview': 'Overview',
@@ -1711,6 +1725,136 @@ const i18n = {
  'coreMemory.belongsToClusters': 'Belongs to Clusters',
  'coreMemory.relationsError': 'Failed to load relations',

  // Issue Manager
  'nav.issues': 'Issues',
  'nav.issueManager': 'Manager',
  'title.issueManager': 'Issue Manager',
  // issues.* keys (used by issue-manager.js)
  'issues.title': 'Issue Manager',
  'issues.description': 'Manage issues, solutions, and execution queue',
  'issues.viewIssues': 'Issues',
  'issues.viewQueue': 'Queue',
  'issues.filterStatus': 'Status',
  'issues.filterAll': 'All',
  'issues.noIssues': 'No issues found',
  'issues.createHint': 'Click "Create" to add your first issue',
  'issues.priority': 'Priority',
  'issues.tasks': 'tasks',
  'issues.solutions': 'solutions',
  'issues.boundSolution': 'Bound',
  'issues.queueEmpty': 'Queue is empty',
  'issues.reorderHint': 'Drag items within a group to reorder',
  'issues.parallelGroup': 'Parallel',
  'issues.sequentialGroup': 'Sequential',
  'issues.dependsOn': 'Depends on',
  // Create & Search
  'issues.create': 'Create',
  'issues.createTitle': 'Create New Issue',
  'issues.issueId': 'Issue ID',
  'issues.issueTitle': 'Title',
  'issues.issueContext': 'Context',
  'issues.issuePriority': 'Priority',
  'issues.titlePlaceholder': 'Brief description of the issue',
  'issues.contextPlaceholder': 'Detailed description, requirements, etc.',
  'issues.priorityLowest': 'Lowest',
  'issues.priorityLow': 'Low',
  'issues.priorityMedium': 'Medium',
  'issues.priorityHigh': 'High',
  'issues.priorityCritical': 'Critical',
  'issues.searchPlaceholder': 'Search issues...',
  'issues.showing': 'Showing',
  'issues.of': 'of',
  'issues.issues': 'issues',
  'issues.tryDifferentFilter': 'Try adjusting your search or filters',
  'issues.createFirst': 'Create First Issue',
  'issues.idRequired': 'Issue ID is required',
  'issues.titleRequired': 'Title is required',
  'issues.created': 'Issue created successfully',
  'issues.confirmDelete': 'Are you sure you want to delete this issue?',
  'issues.deleted': 'Issue deleted',
  'issues.idAutoGenerated': 'Auto-generated',
  'issues.regenerateId': 'Regenerate ID',
  // Solution detail
  'issues.solutionDetail': 'Solution Details',
  'issues.bind': 'Bind',
  'issues.unbind': 'Unbind',
  'issues.bound': 'Bound',
  'issues.totalTasks': 'Total Tasks',
  'issues.bindStatus': 'Bind Status',
  'issues.createdAt': 'Created',
  'issues.taskList': 'Task List',
  'issues.noTasks': 'No tasks in this solution',
  'issues.noSolutions': 'No solutions',
  'issues.viewJson': 'View Raw JSON',
  'issues.scope': 'Scope',
  'issues.modificationPoints': 'Modification Points',
  'issues.implementationSteps': 'Implementation Steps',
  'issues.acceptanceCriteria': 'Acceptance Criteria',
  'issues.dependencies': 'Dependencies',
  'issues.solutionBound': 'Solution bound successfully',
  'issues.solutionUnbound': 'Solution unbound',
  // Queue operations
  'issues.queueEmptyHint': 'Generate execution queue from bound solutions',
  'issues.createQueue': 'Create Queue',
  'issues.regenerate': 'Regenerate',
  'issues.regenerateQueue': 'Regenerate Queue',
  'issues.refreshQueue': 'Refresh',
  'issues.executionGroups': 'groups',
  'issues.totalItems': 'items',
  'issues.queueRefreshed': 'Queue refreshed',
  'issues.confirmCreateQueue': 'This will execute /issue:queue command via Claude Code CLI to generate execution queue from bound solutions.\n\nContinue?',
  'issues.creatingQueue': 'Creating execution queue...',
  'issues.queueExecutionStarted': 'Queue generation started',
  'issues.queueCreated': 'Queue created successfully',
  'issues.queueCreationFailed': 'Queue creation failed',
  'issues.queueCommandHint': 'Run one of the following commands in your terminal to generate the execution queue from bound solutions:',
  'issues.queueCommandInfo': 'After running the command, click "Refresh" to see the updated queue.',
  'issues.alternative': 'Alternative',
  'issues.refreshAfter': 'Refresh Queue',
  // issue.* keys (legacy)
  'issue.viewIssues': 'Issues',
  'issue.viewQueue': 'Queue',
  'issue.filterAll': 'All',
  'issue.filterStatus': 'Status',
  'issue.filterPriority': 'Priority',
  'issue.noIssues': 'No issues found',
  'issue.noIssuesHint': 'Issues will appear here when created via /issue:plan command',
  'issue.noQueue': 'No tasks in queue',
  'issue.noQueueHint': 'Run /issue:queue to form execution queue from bound solutions',
  'issue.tasks': 'tasks',
  'issue.solutions': 'solutions',
  'issue.parallel': 'Parallel',
  'issue.sequential': 'Sequential',
  'issue.status.registered': 'Registered',
  'issue.status.planned': 'Planned',
  'issue.status.queued': 'Queued',
  'issue.status.executing': 'Executing',
  'issue.status.completed': 'Completed',
  'issue.status.failed': 'Failed',
  'issue.priority.critical': 'Critical',
  'issue.priority.high': 'High',
  'issue.priority.medium': 'Medium',
  'issue.priority.low': 'Low',
  'issue.detail.context': 'Context',
  'issue.detail.solutions': 'Solutions',
  'issue.detail.tasks': 'Tasks',
  'issue.detail.noSolutions': 'No solutions available',
  'issue.detail.noTasks': 'No tasks available',
  'issue.detail.bound': 'Bound',
  'issue.detail.modificationPoints': 'Modification Points',
  'issue.detail.implementation': 'Implementation Steps',
  'issue.detail.acceptance': 'Acceptance Criteria',
  'issue.queue.reordered': 'Queue reordered',
  'issue.queue.reorderFailed': 'Failed to reorder queue',
  'issue.saved': 'Issue saved',
  'issue.saveFailed': 'Failed to save issue',
  'issue.taskUpdated': 'Task updated',
  'issue.taskUpdateFailed': 'Failed to update task',
  'issue.conflicts': 'Conflicts',
  'issue.noConflicts': 'No conflicts detected',
  'issue.conflict.resolved': 'Resolved',
  'issue.conflict.pending': 'Pending',

  // Common additions
  'common.copyId': 'Copy ID',
  'common.copied': 'Copied!',
@@ -1748,7 +1892,21 @@ const i18n = {
  'header.refreshWorkspace': '刷新工作区',
  'header.toggleTheme': '切换主题',
  'header.language': '语言',

  'header.cliStream': 'CLI 流式输出',

  // CLI Stream Viewer
  'cliStream.title': 'CLI 流式输出',
  'cliStream.clearCompleted': '清除已完成',
  'cliStream.noStreams': '没有活动的 CLI 执行',
  'cliStream.noStreamsHint': '启动 CLI 命令以查看流式输出',
  'cliStream.running': '运行中',
  'cliStream.completed': '已完成',
  'cliStream.error': '错误',
  'cliStream.autoScroll': '自动滚动',
  'cliStream.close': '关闭',
  'cliStream.cannotCloseRunning': '无法关闭运行中的执行',
  'cliStream.lines': '行',

  // Sidebar - Project section
  'nav.project': '项目',
  'nav.overview': '概览',
@@ -3429,6 +3587,136 @@ const i18n = {
  'coreMemory.belongsToClusters': '所属聚类',
  'coreMemory.relationsError': '加载关联失败',

  // Issue Manager
  'nav.issues': '议题',
  'nav.issueManager': '管理器',
  'title.issueManager': '议题管理器',
  // issues.* keys (used by issue-manager.js)
  'issues.title': '议题管理器',
  'issues.description': '管理议题、解决方案和执行队列',
  'issues.viewIssues': '议题',
  'issues.viewQueue': '队列',
  'issues.filterStatus': '状态',
  'issues.filterAll': '全部',
  'issues.noIssues': '暂无议题',
  'issues.createHint': '点击"创建"添加您的第一个议题',
  'issues.priority': '优先级',
  'issues.tasks': '任务',
  'issues.solutions': '解决方案',
  'issues.boundSolution': '已绑定',
  'issues.queueEmpty': '队列为空',
  'issues.reorderHint': '在组内拖拽项目以重新排序',
  'issues.parallelGroup': '并行',
  'issues.sequentialGroup': '顺序',
  'issues.dependsOn': '依赖于',
  // Create & Search
  'issues.create': '创建',
  'issues.createTitle': '创建新议题',
  'issues.issueId': '议题ID',
  'issues.issueTitle': '标题',
  'issues.issueContext': '上下文',
  'issues.issuePriority': '优先级',
  'issues.titlePlaceholder': '简要描述议题',
  'issues.contextPlaceholder': '详细描述、需求等',
  'issues.priorityLowest': '最低',
  'issues.priorityLow': '低',
  'issues.priorityMedium': '中',
  'issues.priorityHigh': '高',
  'issues.priorityCritical': '紧急',
  'issues.searchPlaceholder': '搜索议题...',
  'issues.showing': '显示',
  'issues.of': '共',
  'issues.issues': '条议题',
  'issues.tryDifferentFilter': '尝试调整搜索或筛选条件',
  'issues.createFirst': '创建第一个议题',
  'issues.idRequired': '议题ID为必填',
  'issues.titleRequired': '标题为必填',
  'issues.created': '议题创建成功',
  'issues.confirmDelete': '确定要删除此议题吗?',
  'issues.deleted': '议题已删除',
  'issues.idAutoGenerated': '自动生成',
  'issues.regenerateId': '重新生成ID',
  // Solution detail
  'issues.solutionDetail': '解决方案详情',
  'issues.bind': '绑定',
  'issues.unbind': '解绑',
  'issues.bound': '已绑定',
  'issues.totalTasks': '任务总数',
  'issues.bindStatus': '绑定状态',
  'issues.createdAt': '创建时间',
  'issues.taskList': '任务列表',
  'issues.noTasks': '此解决方案无任务',
  'issues.noSolutions': '暂无解决方案',
  'issues.viewJson': '查看原始JSON',
  'issues.scope': '作用域',
  'issues.modificationPoints': '修改点',
  'issues.implementationSteps': '实现步骤',
  'issues.acceptanceCriteria': '验收标准',
  'issues.dependencies': '依赖项',
  'issues.solutionBound': '解决方案已绑定',
  'issues.solutionUnbound': '解决方案已解绑',
  // Queue operations
  'issues.queueEmptyHint': '从绑定的解决方案生成执行队列',
  'issues.createQueue': '创建队列',
  'issues.regenerate': '重新生成',
  'issues.regenerateQueue': '重新生成队列',
  'issues.refreshQueue': '刷新',
  'issues.executionGroups': '个执行组',
  'issues.totalItems': '个任务',
  'issues.queueRefreshed': '队列已刷新',
  'issues.confirmCreateQueue': '这将通过 Claude Code CLI 执行 /issue:queue 命令,从绑定的解决方案生成执行队列。\n\n是否继续?',
  'issues.creatingQueue': '正在创建执行队列...',
  'issues.queueExecutionStarted': '队列生成已启动',
  'issues.queueCreated': '队列创建成功',
  'issues.queueCreationFailed': '队列创建失败',
  'issues.queueCommandHint': '在终端中运行以下命令之一,从绑定的解决方案生成执行队列:',
  'issues.queueCommandInfo': '运行命令后,点击"刷新"查看更新后的队列。',
  'issues.alternative': '或者',
  'issues.refreshAfter': '刷新队列',
  // issue.* keys (legacy)
  'issue.viewIssues': '议题',
  'issue.viewQueue': '队列',
  'issue.filterAll': '全部',
  'issue.filterStatus': '状态',
  'issue.filterPriority': '优先级',
  'issue.noIssues': '暂无议题',
  'issue.noIssuesHint': '通过 /issue:plan 命令创建的议题将显示在此处',
  'issue.noQueue': '队列中暂无任务',
  'issue.noQueueHint': '运行 /issue:queue 从绑定的解决方案生成执行队列',
  'issue.tasks': '任务',
  'issue.solutions': '解决方案',
  'issue.parallel': '并行',
  'issue.sequential': '顺序',
  'issue.status.registered': '已注册',
  'issue.status.planned': '已规划',
  'issue.status.queued': '已入队',
  'issue.status.executing': '执行中',
  'issue.status.completed': '已完成',
  'issue.status.failed': '失败',
  'issue.priority.critical': '紧急',
  'issue.priority.high': '高',
  'issue.priority.medium': '中',
  'issue.priority.low': '低',
  'issue.detail.context': '上下文',
  'issue.detail.solutions': '解决方案',
  'issue.detail.tasks': '任务',
  'issue.detail.noSolutions': '暂无解决方案',
  'issue.detail.noTasks': '暂无任务',
  'issue.detail.bound': '已绑定',
  'issue.detail.modificationPoints': '修改点',
  'issue.detail.implementation': '实现步骤',
  'issue.detail.acceptance': '验收标准',
  'issue.queue.reordered': '队列已重排',
  'issue.queue.reorderFailed': '队列重排失败',
  'issue.saved': '议题已保存',
  'issue.saveFailed': '保存议题失败',
  'issue.taskUpdated': '任务已更新',
  'issue.taskUpdateFailed': '更新任务失败',
  'issue.conflicts': '冲突',
  'issue.noConflicts': '未检测到冲突',
  'issue.conflict.resolved': '已解决',
  'issue.conflict.pending': '待处理',

  // Common additions
  'common.copyId': '复制 ID',
  'common.copied': '已复制!',
@@ -168,16 +168,22 @@ async function loadAvailableSkills() {
    if (!response.ok) throw new Error('Failed to load skills');
    const data = await response.json();

    // Combine project and user skills (API returns { projectSkills: [], userSkills: [] })
    const allSkills = [
      ...(data.projectSkills || []).map(s => ({ ...s, scope: 'project' })),
      ...(data.userSkills || []).map(s => ({ ...s, scope: 'user' }))
    ];

    const container = document.getElementById('skill-discovery-skill-context');
    if (container && data.skills) {
      if (data.skills.length === 0) {
    if (container) {
      if (allSkills.length === 0) {
        container.innerHTML = `
          <span class="font-mono bg-muted px-1.5 py-0.5 rounded">${t('hook.wizard.availableSkills')}</span>
          <span class="text-muted-foreground ml-2">${t('hook.wizard.noSkillsFound').split('.')[0]}</span>
        `;
      } else {
        const skillBadges = data.skills.map(skill => `
          <span class="px-2 py-0.5 bg-emerald-500/10 text-emerald-500 rounded" title="${escapeHtml(skill.description)}">${escapeHtml(skill.name)}</span>
        const skillBadges = allSkills.map(skill => `
          <span class="px-2 py-0.5 bg-emerald-500/10 text-emerald-500 rounded" title="${escapeHtml(skill.description || '')}">${escapeHtml(skill.name)}</span>
        `).join('');
        container.innerHTML = `
          <span class="font-mono bg-muted px-1.5 py-0.5 rounded">${t('hook.wizard.availableSkills')}</span>
@@ -187,7 +193,7 @@ async function loadAvailableSkills() {
      }

      // Store skills for wizard use
      window.availableSkills = data.skills || [];
      window.availableSkills = allSkills;
    } catch (err) {
      console.error('Failed to load skills:', err);
      const container = document.getElementById('skill-discovery-skill-context');
1546
ccw/src/templates/dashboard-js/views/issue-manager.js
Normal file
File diff suppressed because it is too large
Load Diff
@@ -275,6 +275,18 @@
        </div>
      </div>
    </div>
    <!-- CLI Stream Viewer Button -->
    <button class="cli-stream-btn p-1.5 text-muted-foreground hover:text-foreground hover:bg-hover rounded relative"
            id="cliStreamBtn"
            onclick="toggleCliStreamViewer()"
            data-i18n-title="header.cliStream"
            title="CLI Stream Viewer">
      <svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
        <polyline points="4 17 10 11 4 5"/>
        <line x1="12" y1="19" x2="20" y2="19"/>
      </svg>
      <span class="cli-stream-badge" id="cliStreamBadge"></span>
    </button>
    <!-- Refresh Button -->
    <button class="refresh-btn p-1.5 text-muted-foreground hover:text-foreground hover:bg-hover rounded" id="refreshWorkspace" data-i18n-title="header.refreshWorkspace" title="Refresh workspace">
      <svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
@@ -394,6 +406,21 @@
           </ul>
         </div>
+
+        <!-- Issues Section -->
+        <div class="mb-2" id="issuesNav">
+          <div class="flex items-center px-4 py-2 text-sm font-semibold text-muted-foreground uppercase tracking-wide">
+            <i data-lucide="clipboard-list" class="nav-section-icon mr-2"></i>
+            <span class="nav-section-title" data-i18n="nav.issues">Issues</span>
+          </div>
+          <ul class="space-y-0.5">
+            <li class="nav-item flex items-center gap-2 px-3 py-2.5 text-sm text-muted-foreground hover:bg-hover hover:text-foreground rounded cursor-pointer transition-colors" data-view="issue-manager" data-tooltip="Issue Manager">
+              <i data-lucide="list-checks" class="nav-icon"></i>
+              <span class="nav-text flex-1" data-i18n="nav.issueManager">Manager</span>
+              <span class="badge px-2 py-0.5 text-xs font-semibold rounded-full bg-hover text-muted-foreground" id="badgeIssues">0</span>
+            </li>
+          </ul>
+        </div>

         <!-- MCP Servers Section -->
         <div class="mb-2" id="mcpServersNav">
           <div class="flex items-center px-4 py-2 text-sm font-semibold text-muted-foreground uppercase tracking-wide">
@@ -578,6 +605,34 @@
     <div class="drawer-overlay hidden fixed inset-0 bg-black/50 z-40" id="drawerOverlay" onclick="closeTaskDrawer()"></div>
   </div>

+  <!-- CLI Stream Viewer Panel -->
+  <div class="cli-stream-viewer" id="cliStreamViewer">
+    <div class="cli-stream-header">
+      <div class="cli-stream-title">
+        <i data-lucide="terminal"></i>
+        <span data-i18n="cliStream.title">CLI Stream</span>
+        <span class="cli-stream-count-badge" id="cliStreamCountBadge">0</span>
+      </div>
+      <div class="cli-stream-actions">
+        <button class="cli-stream-action-btn" onclick="clearCompletedStreams()" data-i18n="cliStream.clearCompleted">
+          <i data-lucide="trash-2"></i>
+          <span>Clear</span>
+        </button>
+        <button class="cli-stream-close-btn" onclick="toggleCliStreamViewer()" title="Close">×</button>
+      </div>
+    </div>
+    <div class="cli-stream-tabs" id="cliStreamTabs">
+      <!-- Dynamic tabs -->
+    </div>
+    <div class="cli-stream-content" id="cliStreamContent">
+      <!-- Terminal output -->
+    </div>
+    <div class="cli-stream-status" id="cliStreamStatus">
+      <!-- Status bar -->
+    </div>
+  </div>
+  <div class="cli-stream-overlay" id="cliStreamOverlay" onclick="toggleCliStreamViewer()"></div>
+
   <!-- Markdown Preview Modal -->
   <div id="markdownModal" class="markdown-modal hidden fixed inset-0 z-[100] flex items-center justify-center">
     <div class="markdown-modal-backdrop absolute inset-0 bg-black/60" onclick="closeMarkdownModal()"></div>
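The panel markup above is driven by the handlers named in its `onclick` attributes (`toggleCliStreamViewer`, `clearCompletedStreams`). Their implementations are not part of this diff; the following is a hypothetical, DOM-free sketch of the state such handlers would manage (the real dashboard code presumably toggles CSS classes on `#cliStreamViewer` and `#cliStreamOverlay`):

```javascript
// Hypothetical state model for the CLI Stream Viewer panel — not the
// dashboard's actual implementation, which manipulates DOM elements.
const cliStreamState = { open: false, streams: [] };

function toggleCliStreamViewer() {
  // Flip panel visibility; the real handler also toggles the overlay.
  cliStreamState.open = !cliStreamState.open;
  return cliStreamState.open;
}

function clearCompletedStreams() {
  // Keep only streams still running, mirroring the "Clear" button.
  cliStreamState.streams = cliStreamState.streams.filter(s => s.status !== 'completed');
  return cliStreamState.streams.length;
}

cliStreamState.streams.push({ id: 'run-1', status: 'completed' });
cliStreamState.streams.push({ id: 'run-2', status: 'running' });
console.log(toggleCliStreamViewer()); // → true
console.log(clearCompletedStreams()); // → 1
```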
4	package-lock.json	(generated)
@@ -1,12 +1,12 @@
 {
   "name": "claude-code-workflow",
-  "version": "6.2.9",
+  "version": "6.3.8",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "claude-code-workflow",
-      "version": "6.2.9",
+      "version": "6.3.8",
       "license": "MIT",
       "dependencies": {
         "@modelcontextprotocol/sdk": "^1.0.4",
@@ -1,6 +1,6 @@
 {
   "name": "claude-code-workflow",
-  "version": "6.3.5",
+  "version": "6.3.8",
   "description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
   "type": "module",
   "main": "ccw/src/index.js",