Compare commits

..

27 Commits

Author SHA1 Message Date
catlog22
847abcefce feat: add option to mark tasks as failed 2025-12-28 20:45:05 +08:00
catlog22
c24ad501b5 feat: update issue execution and queue generation logic to support solution format and enhance metadata management 2025-12-28 20:35:29 +08:00
catlog22
35c7fe28bb feat: update queue generation logic to use the provided queue ID and strictly limit output files 2025-12-28 20:20:13 +08:00
catlog22
a33cacfd75 feat: add update command to modify issue fields (status, priority, title, etc.) 2025-12-28 20:14:31 +08:00
catlog22
338c3d612c feat: update issue planning and queue commands to enhance solution registration and user selection logic 2025-12-28 19:58:44 +08:00
catlog22
8b17fad723 feat(discovery): enhance discovery progress reading with new schema support 2025-12-28 19:33:17 +08:00
catlog22
169f218f7a feat(discovery): enhance discovery index reading and issue exporting
- Improved the reading of the discovery index by adding a fallback mechanism to scan directories for discovery folders if the index.json is invalid or missing.
- Added sorting of discoveries by creation time in descending order.
- Enhanced the `appendToIssuesJsonl` function to include deduplication logic for issues based on ID and source finding ID.
- Updated the discovery route handler to reflect the number of issues added and skipped during export.
- Introduced UI elements for selecting and deselecting findings in the dashboard.
- Added CSS styles for exported findings and action buttons.
- Implemented search functionality for filtering findings based on title, file, and description.
- Added internationalization support for new UI elements.
- Created scripts for automated API extraction from various project types, including FastAPI and TypeScript.
- Documented the API extraction process and library bundling instructions.
2025-12-28 19:27:34 +08:00
catlog22
3ef1e54412 feat: update the software manual, streamlining the screenshot capture flow and rule descriptions 2025-12-28 18:19:39 +08:00
catlog22
4419c50942 feat: enhance internationalization support and improve GPU mode selector with Python environment checks 2025-12-28 17:49:40 +08:00
catlog22
7aa1cda367 feat: add issue discovery view for managing discovery sessions and findings
- Implemented main render function for the issue discovery view.
- Added data loading functions to fetch discoveries, details, findings, and progress.
- Created rendering functions for discovery list and detail sections.
- Introduced filtering and searching capabilities for findings.
- Implemented actions for exporting and dismissing findings.
- Added polling mechanism to track discovery progress.
- Included utility functions for HTML escaping and cleanup.
2025-12-28 17:21:07 +08:00
catlog22
a2c88ba885 feat: Add project guidelines support and enhance project overview rendering 2025-12-28 14:50:50 +08:00
catlog22
e16950ef1e refactor: Remove unused context-tools-ace.md and clean up CLI status management 2025-12-28 14:09:51 +08:00
catlog22
5b973b00ea feat(issue): Enhance queue management with solution-level granularity and improved item identification 2025-12-28 13:58:43 +08:00
catlog22
3a1ebf8684 refactor(issue): Simplify issue and task structures by removing unused fields 2025-12-28 13:39:52 +08:00
catlog22
2eaefb61ab feat: Enhance issue management to support solution-level queues
- Added support for solution-level queues in the issue management system.
- Updated interfaces to include solution-specific properties such as `approach`, `task_count`, and `files_touched`.
- Modified queue handling to differentiate between task-level and solution-level items.
- Adjusted rendering logic in the dashboard to display solutions and their associated tasks correctly.
- Enhanced queue statistics and conflict resolution to accommodate the new solution structure.
- Updated actions (next, done, retry) to handle both tasks and solutions seamlessly.
2025-12-28 13:21:34 +08:00
catlog22
4c6b28030f Enhance project management workflow by introducing dual file system for project guidelines and tech analysis
- Updated workflow initialization to create `.workflow/project-tech.json` and `.workflow/project-guidelines.json` for comprehensive project understanding.
- Added mandatory context reading steps in various commands to ensure compliance with user-defined constraints and technology stack.
- Implemented a new command `/workflow:session:solidify` to capture session learnings and solidify them into project guidelines.
- Introduced a detail action in issue management to retrieve task details without altering status.
- Enhanced documentation across multiple workflow commands to reflect changes in project structure and guidelines.
2025-12-28 12:47:39 +08:00
catlog22
2c42cefa5a feat(issue): add DAG support for parallel execution planning and enhance task fetching 2025-12-28 12:04:10 +08:00
catlog22
35ffd3419e chore(release): v6.3.9 - Issue System Consistency
- Unified four-layer architecture (Schema/Agent/Command/Implementation)
- Upgraded to Rich Plan model with lifecycle fields
- Added multi-solution generation support
- Consolidated schemas (deleted redundant issue-task-jsonl-schema, solutions-jsonl-schema)
- Fixed field naming consistency (acceptance, lifecycle_status, priority mapping)
- Added search tool fallback chain to issue-plan-agent

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-27 23:57:30 +08:00
catlog22
e3223edbb1 fix(plan): Update delivery criteria terminology to acceptance.criteria 2025-12-27 23:54:00 +08:00
catlog22
a061fc1428 fix(issue-plan-agent): Update acceptance criteria terminology and enhance issue loading process with metadata 2025-12-27 23:51:30 +08:00
catlog22
0992d27523 refactor(issue-manager): Enhance queue detail item styling and update modal content 2025-12-27 23:43:40 +08:00
catlog22
5aa0c9610d fix(issue-manager): Add History button to empty queue state
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-27 23:00:17 +08:00
catlog22
7620ff703d feat(issue-manager): Add queue history modal for viewing and switching queues
- Add GET /api/queue/history endpoint to fetch all queues from index
- Add GET /api/queue/:id endpoint to fetch specific queue details
- Add POST /api/queue/switch endpoint to switch active queue
- Add History button in queue toolbar
- Add queue history modal with list view and detail view
- Add switch functionality to change active queue
- Add CSS styles for queue history components

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-27 22:56:32 +08:00
catlog22
d705a3e7d9 fix: Add active_queue_id to QueueIndex interface
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-27 22:48:38 +08:00
catlog22
726151bfea Refactor issue management commands and introduce lifecycle requirements
- Updated lifecycle requirements in issue creation to include new fields for testing, regression, acceptance, and commit strategies.
- Enhanced the planning command to generate structured output and handle multi-solution scenarios.
- Improved queue formation logic to ensure valid DAG and conflict resolution.
- Introduced a new interactive issue management skill for CRUD operations, allowing users to manage issues through a menu-driven interface.
- Updated documentation across commands to reflect changes in task structure and output requirements.
2025-12-27 22:44:49 +08:00
catlog22
b58589ddad refactor: Update issue queue structure and commands
- Changed queue structure from 'queue' to 'tasks' in various files for clarity.
- Updated CLI commands to reflect new task ID usage instead of queue ID.
- Enhanced queue management with new delete functionality for historical queues.
- Improved metadata handling and task execution tracking.
- Updated dashboard and issue manager views to accommodate new task structure.
- Bumped version to 6.3.8 in package.json and package-lock.json.
2025-12-27 22:04:15 +08:00
catlog22
2e493277a1 feat(issue-queue): Enhance task execution flow with priority handling and stuck task reset option 2025-12-27 14:43:18 +08:00
76 changed files with 12765 additions and 4032 deletions

View File

@@ -2,7 +2,7 @@
- **CLI Tools Usage**: @~/.claude/workflows/cli-tools-usage.md
- **Coding Philosophy**: @~/.claude/workflows/coding-philosophy.md
- **Context Requirements**: @~/.claude/workflows/context-tools-ace.md
- **Context Requirements**: @~/.claude/workflows/context-tools.md
- **File Modification**: @~/.claude/workflows/file-modification.md
- **CLI Endpoints Config**: @.claude/cli-tools.json

File diff suppressed because it is too large

View File

@@ -1,702 +1,254 @@
---
name: issue-queue-agent
description: |
Task ordering agent for issue queue formation with dependency analysis and conflict resolution.
Orchestrates 4-phase workflow: Dependency Analysis → Conflict Detection → Semantic Ordering → Group Assignment
Solution ordering agent for queue formation with dependency analysis and conflict resolution.
Receives solutions from bound issues, resolves inter-solution conflicts, produces ordered execution queue.
Core capabilities:
- ACE semantic search for relationship discovery
- Cross-issue dependency DAG construction
- File modification conflict detection
- Conflict resolution with execution ordering
- Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
Examples:
- Context: Single issue queue
user: "Order solutions for GH-123"
assistant: "I'll analyze dependencies and generate execution queue"
- Context: Multi-issue queue with conflicts
user: "Order solutions for GH-123, GH-124"
assistant: "I'll detect file conflicts between solutions, resolve ordering, and assign groups"
color: orange
---
You are a specialized queue formation agent that analyzes tasks from bound solutions, resolves conflicts, and produces an ordered execution queue. You focus on optimal task ordering across multiple issues.
## Overview
## Input Context
**Agent Role**: Queue formation agent that transforms solutions from bound issues into an ordered execution queue. Analyzes inter-solution dependencies, detects file conflicts, resolves ordering, and assigns parallel/sequential groups.
**Core Capabilities**:
- Inter-solution dependency DAG construction
- File conflict detection between solutions (based on files_touched intersection)
- Conflict resolution with semantic ordering rules
- Priority calculation (0.0-1.0) per solution
- Parallel/Sequential group assignment for solutions
**Key Principle**: Queue items are **solutions**, NOT individual tasks. Each executor receives a complete solution with all its tasks.
---
## 1. Input & Execution
### 1.1 Input Context
```javascript
{
// Required
tasks: [
{
issue_id: string, // Issue ID (e.g., "GH-123")
solution_id: string, // Solution ID (e.g., "SOL-001")
task: {
id: string, // Task ID (e.g., "T1")
title: string,
scope: string,
action: string, // Create | Update | Implement | Refactor | Test | Fix | Delete | Configure
modification_points: [
{ file: string, target: string, change: string }
],
depends_on: string[] // Task IDs within same issue
},
exploration_context: object
}
],
// Optional
project_root: string, // Project root for ACE search
existing_conflicts: object[], // Pre-identified conflicts
rebuild: boolean // Clear and regenerate queue
solutions: [{
issue_id: string, // e.g., "ISS-20251227-001"
solution_id: string, // e.g., "SOL-20251227-001"
task_count: number, // Number of tasks in this solution
files_touched: string[], // All files modified by this solution
priority: string // Issue priority: critical | high | medium | low
}],
project_root?: string,
rebuild?: boolean
}
```
## 4-Phase Execution Workflow
**Note**: The agent generates a unique `item_id` (pattern: `S-{N}`) for each item in the queue output.
### 1.2 Execution Flow
```
Phase 1: Dependency Analysis (20%)
Parse depends_on, build DAG, detect cycles
Phase 2: Conflict Detection + ACE Enhancement (30%)
Identify file conflicts, ACE semantic relationship discovery
Phase 1: Solution Analysis (20%)
| Parse solutions, collect files_touched, build DAG
Phase 2: Conflict Detection (30%)
| Identify file overlaps between solutions
Phase 3: Conflict Resolution (25%)
↓ Determine execution order for conflicting tasks
Phase 4: Semantic Ordering & Grouping (25%)
↓ Calculate priority, topological sort, assign groups
| Apply ordering rules, update DAG
Phase 4: Ordering & Grouping (25%)
| Topological sort, assign parallel/sequential groups
```
---
## Phase 1: Dependency Analysis
## 2. Processing Logic
### Build Dependency Graph
### 2.1 Dependency Graph
```javascript
function buildDependencyGraph(tasks) {
const taskGraph = new Map()
const fileModifications = new Map() // file -> [taskKeys]
function buildDependencyGraph(solutions) {
const graph = new Map()
const fileModifications = new Map()
for (const item of tasks) {
const taskKey = `${item.issue_id}:${item.task.id}`
taskGraph.set(taskKey, {
...item,
key: taskKey,
inDegree: 0,
outEdges: []
})
for (const sol of solutions) {
graph.set(sol.solution_id, { ...sol, inDegree: 0, outEdges: [] })
// Track file modifications for conflict detection
for (const mp of item.task.modification_points || []) {
if (!fileModifications.has(mp.file)) {
fileModifications.set(mp.file, [])
}
fileModifications.get(mp.file).push(taskKey)
for (const file of sol.files_touched || []) {
if (!fileModifications.has(file)) fileModifications.set(file, [])
fileModifications.get(file).push(sol.solution_id)
}
}
// Add explicit dependency edges (within same issue)
for (const [key, node] of taskGraph) {
for (const dep of node.task.depends_on || []) {
const depKey = `${node.issue_id}:${dep}`
if (taskGraph.has(depKey)) {
taskGraph.get(depKey).outEdges.push(key)
node.inDegree++
}
}
}
return { taskGraph, fileModifications }
return { graph, fileModifications }
}
```
### Cycle Detection
### 2.2 Conflict Detection
A conflict arises when multiple solutions modify the same file:
```javascript
function detectCycles(taskGraph) {
const visited = new Set()
const stack = new Set()
const cycles = []
function dfs(key, path = []) {
if (stack.has(key)) {
// Found cycle - extract cycle path
const cycleStart = path.indexOf(key)
cycles.push(path.slice(cycleStart).concat(key))
return true
}
if (visited.has(key)) return false
visited.add(key)
stack.add(key)
path.push(key)
for (const next of taskGraph.get(key)?.outEdges || []) {
dfs(next, [...path])
}
stack.delete(key)
return false
}
for (const key of taskGraph.keys()) {
if (!visited.has(key)) {
dfs(key)
}
}
return {
hasCycle: cycles.length > 0,
cycles
}
function detectConflicts(fileModifications, graph) {
return [...fileModifications.entries()]
.filter(([_, solutions]) => solutions.length > 1)
.map(([file, solutions]) => ({
type: 'file_conflict',
file,
solutions,
resolved: false
}))
}
```
---
## Phase 2: Conflict Detection
### Identify File Conflicts
```javascript
function detectFileConflicts(fileModifications, taskGraph) {
const conflicts = []
for (const [file, taskKeys] of fileModifications) {
if (taskKeys.length > 1) {
// Multiple tasks modify same file
const taskDetails = taskKeys.map(key => {
const node = taskGraph.get(key)
return {
key,
issue_id: node.issue_id,
task_id: node.task.id,
title: node.task.title,
action: node.task.action,
scope: node.task.scope
}
})
conflicts.push({
type: 'file_conflict',
file,
tasks: taskKeys,
task_details: taskDetails,
resolution: null,
resolved: false
})
}
}
return conflicts
}
```
### Conflict Classification
```javascript
function classifyConflict(conflict, taskGraph) {
const tasks = conflict.tasks.map(key => taskGraph.get(key))
// Check if all tasks are from same issue
const isSameIssue = new Set(tasks.map(t => t.issue_id)).size === 1
// Check action types
const actions = tasks.map(t => t.task.action)
const hasCreate = actions.includes('Create')
const hasDelete = actions.includes('Delete')
return {
...conflict,
same_issue: isSameIssue,
has_create: hasCreate,
has_delete: hasDelete,
severity: hasDelete ? 'high' : hasCreate ? 'medium' : 'low'
}
}
```
---
## Phase 3: Conflict Resolution
### Resolution Rules
### 2.3 Resolution Rules
| Priority | Rule | Example |
|----------|------|---------|
| 1 | Create before Update/Implement | T1:Create → T2:Update |
| 2 | Foundation before integration | config/ → src/ |
| 3 | Types before implementation | types/ → components/ |
| 4 | Core before tests | src/ → __tests__/ |
| 5 | Same issue order preserved | T1 → T2 → T3 |
| 1 | Higher issue priority first | critical > high > medium > low |
| 2 | Foundation solutions first | Solutions with fewer dependencies |
| 3 | More tasks = higher priority | Solutions with larger impact |
| 4 | Create before extend | S1:Creates module -> S2:Extends it |
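A minimal sketch of how rules 1-3 could be encoded as a comparator over solutions (rule 4 needs semantic knowledge of what each solution creates, so it is applied separately during conflict analysis; the numeric ranks are an illustration, not part of the spec):
```javascript
const PRIORITY_RANK = { critical: 0, high: 1, medium: 2, low: 3 }

function compareForResolution(a, b) {
  // Rule 1: higher issue priority first
  const byPriority = (PRIORITY_RANK[a.priority] ?? 2) - (PRIORITY_RANK[b.priority] ?? 2)
  if (byPriority !== 0) return byPriority
  // Rule 2: foundation solutions (fewer dependencies) first
  const byDeps = (a.depends_on?.length || 0) - (b.depends_on?.length || 0)
  if (byDeps !== 0) return byDeps
  // Rule 3: more tasks = larger impact = earlier
  return (b.task_count || 0) - (a.task_count || 0)
}
```
Sorting conflicting solutions with this comparator would yield the `resolution_order` recorded in the queue file.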
### Apply Resolution Rules
### 2.4 Semantic Priority
```javascript
function resolveConflict(conflict, taskGraph) {
const tasks = conflict.tasks.map(key => ({
key,
node: taskGraph.get(key)
}))
**Base Priority Mapping** (issue priority -> base score):
| Priority | Base Score | Meaning |
|----------|------------|---------|
| critical | 0.9 | Highest |
| high | 0.7 | High |
| medium | 0.5 | Medium |
| low | 0.3 | Low |
// Sort by resolution rules
tasks.sort((a, b) => {
const nodeA = a.node
const nodeB = b.node
**Task-count Boost** (applied to base score):
| Factor | Boost |
|--------|-------|
| task_count >= 5 | +0.1 |
| task_count >= 3 | +0.05 |
| Foundation scope | +0.1 |
| Fewer dependencies | +0.05 |
// Rule 1: Create before others
if (nodeA.task.action === 'Create' && nodeB.task.action !== 'Create') return -1
if (nodeB.task.action === 'Create' && nodeA.task.action !== 'Create') return 1
**Formula**: `semantic_priority = clamp(baseScore + sum(boosts), 0.0, 1.0)`
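// Sketch of the formula above (assumptions: boost thresholds from the two
// tables; isFoundationScope is the helper shown later in this diff; the
// fewer-dependencies boost keys off an empty depends_on list)
const BASE_SCORE = { critical: 0.9, high: 0.7, medium: 0.5, low: 0.3 }

function semanticPriority(sol) {
  let score = BASE_SCORE[sol.priority] ?? 0.5
  if (sol.task_count >= 5) score += 0.1          // task-count boost
  else if (sol.task_count >= 3) score += 0.05
  if ((sol.files_touched || []).some(isFoundationScope)) score += 0.1 // foundation scope
  if ((sol.depends_on || []).length === 0) score += 0.05              // fewer dependencies
  return Math.max(0.0, Math.min(1.0, score))     // clamp to [0.0, 1.0]
}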
// Rule 2: Delete last
if (nodeA.task.action === 'Delete' && nodeB.task.action !== 'Delete') return 1
if (nodeB.task.action === 'Delete' && nodeA.task.action !== 'Delete') return -1
### 2.5 Group Assignment
// Rule 3: Foundation scopes first
const isFoundationA = isFoundationScope(nodeA.task.scope)
const isFoundationB = isFoundationScope(nodeB.task.scope)
if (isFoundationA && !isFoundationB) return -1
if (isFoundationB && !isFoundationA) return 1
- **Parallel (P*)**: Solutions with no file overlaps between them
- **Sequential (S*)**: Solutions that share files must run in order
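// Greedy group-assignment sketch for the two rules above (assumption:
// orderedSolutions is already topologically sorted; a new sequential group
// starts whenever a solution shares files with the current group)
function assignGroups(orderedSolutions) {
  const groups = []
  let current = { type: 'parallel', solutions: [], files: new Set() }
  for (const sol of orderedSolutions) {
    const overlap = (sol.files_touched || []).some(f => current.files.has(f))
    if (overlap && current.solutions.length > 0) {
      groups.push(current)
      current = { type: 'sequential', solutions: [], files: new Set() }
    }
    current.solutions.push(sol.item_id)
    for (const f of sol.files_touched || []) current.files.add(f)
  }
  if (current.solutions.length > 0) groups.push(current)
  return groups.map((g, i) => ({
    id: `${g.type === 'parallel' ? 'P' : 'S'}${i + 1}`,
    type: g.type,
    solutions: g.solutions,
    solution_count: g.solutions.length
  }))
}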
// Rule 4: Config/Types before implementation
const isTypesA = nodeA.task.scope?.includes('types')
const isTypesB = nodeB.task.scope?.includes('types')
if (isTypesA && !isTypesB) return -1
if (isTypesB && !isTypesA) return 1
---
// Rule 5: Preserve issue order (same issue)
if (nodeA.issue_id === nodeB.issue_id) {
return parseInt(nodeA.task.id.replace('T', '')) - parseInt(nodeB.task.id.replace('T', ''))
## 3. Output Requirements
### 3.1 Generate Files (Primary)
**Queue files**:
```
.workflow/issues/queues/{queue-id}.json # Full queue with solutions, conflicts, groups
.workflow/issues/queues/index.json # Update with new queue entry
```
Queue ID: use the Queue ID provided in the prompt (do NOT generate a new one)
Queue Item ID format: `S-N` (S-1, S-2, S-3, ...)
### 3.2 Queue File Schema
```json
{
"id": "QUE-20251227-143000",
"status": "active",
"solutions": [
{
"item_id": "S-1",
"issue_id": "ISS-20251227-003",
"solution_id": "SOL-20251227-003",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.8,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts", "src/utils.ts"],
"task_count": 3
}
return 0
})
const order = tasks.map(t => t.key)
const rationale = generateRationale(tasks)
return {
...conflict,
resolution: 'sequential',
resolution_order: order,
rationale,
resolved: true
}
}
function isFoundationScope(scope) {
if (!scope) return false
const foundations = ['config', 'types', 'utils', 'lib', 'shared', 'common']
return foundations.some(f => scope.toLowerCase().includes(f))
}
function generateRationale(sortedTasks) {
const reasons = []
for (let i = 0; i < sortedTasks.length - 1; i++) {
const curr = sortedTasks[i].node.task
const next = sortedTasks[i + 1].node.task
if (curr.action === 'Create') {
reasons.push(`${curr.id} creates file before ${next.id}`)
} else if (isFoundationScope(curr.scope)) {
reasons.push(`${curr.id} (foundation) before ${next.id}`)
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"solutions": ["S-1", "S-3"],
"resolution": "sequential",
"resolution_order": ["S-1", "S-3"],
"rationale": "S-1 creates auth module, S-3 extends it"
}
}
return reasons.join('; ') || 'Default ordering applied'
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"], "solution_count": 2 },
{ "id": "S2", "type": "sequential", "solutions": ["S-3"], "solution_count": 1 }
]
}
```
### Apply Resolution to Graph
### 3.3 Return Summary
```javascript
function applyResolutionToGraph(conflict, taskGraph) {
const order = conflict.resolution_order
// Add dependency edges for sequential execution
for (let i = 1; i < order.length; i++) {
const prevKey = order[i - 1]
const currKey = order[i]
if (taskGraph.has(prevKey) && taskGraph.has(currKey)) {
const prevNode = taskGraph.get(prevKey)
const currNode = taskGraph.get(currKey)
// Avoid duplicate edges
if (!prevNode.outEdges.includes(currKey)) {
prevNode.outEdges.push(currKey)
currNode.inDegree++
}
}
}
```json
{
"queue_id": "QUE-20251227-143000",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```
---
## Phase 4: Semantic Ordering & Grouping
## 4. Quality Standards
### Semantic Priority Calculation
### 4.1 Validation Checklist
```javascript
function calculateSemanticPriority(node) {
let priority = 0.5 // Base priority
- [ ] No circular dependencies between solutions
- [ ] All file conflicts resolved
- [ ] Solutions in same parallel group have NO file overlaps
- [ ] Semantic priority calculated for all solutions
- [ ] Dependencies ordered correctly
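// Sketch for the parallel-group checklist item: no two solutions in the same
// parallel group may touch the same file (field names from section 3.2)
function validateParallelGroups(queue) {
  const errors = []
  for (const group of queue.execution_groups.filter(g => g.type === 'parallel')) {
    const seen = new Map() // file -> item_id already touching it
    for (const itemId of group.solutions) {
      const sol = queue.solutions.find(s => s.item_id === itemId)
      for (const file of sol?.files_touched || []) {
        if (seen.has(file)) errors.push(`${group.id}: ${itemId} and ${seen.get(file)} both touch ${file}`)
        else seen.set(file, itemId)
      }
    }
  }
  return { valid: errors.length === 0, errors }
}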
// Action-based priority boost
const actionBoost = {
'Create': 0.2,
'Configure': 0.15,
'Implement': 0.1,
'Update': 0,
'Refactor': -0.05,
'Test': -0.1,
'Fix': 0.05,
'Delete': -0.15
}
priority += actionBoost[node.task.action] || 0
// Scope-based boost
if (isFoundationScope(node.task.scope)) {
priority += 0.1
}
if (node.task.scope?.includes('types')) {
priority += 0.05
}
// Clamp to [0, 1]
return Math.max(0, Math.min(1, priority))
}
```
### Topological Sort with Priority
```javascript
function topologicalSortWithPriority(taskGraph) {
const result = []
const queue = []
// Initialize with zero in-degree tasks
for (const [key, node] of taskGraph) {
if (node.inDegree === 0) {
queue.push(key)
}
}
let executionOrder = 1
while (queue.length > 0) {
// Sort queue by semantic priority (descending)
queue.sort((a, b) => {
const nodeA = taskGraph.get(a)
const nodeB = taskGraph.get(b)
// 1. Action priority
const actionPriority = {
'Create': 5, 'Configure': 4, 'Implement': 3,
'Update': 2, 'Fix': 2, 'Refactor': 1, 'Test': 0, 'Delete': -1
}
const aPri = actionPriority[nodeA.task.action] ?? 2
const bPri = actionPriority[nodeB.task.action] ?? 2
if (aPri !== bPri) return bPri - aPri
// 2. Foundation scope first
const aFound = isFoundationScope(nodeA.task.scope)
const bFound = isFoundationScope(nodeB.task.scope)
if (aFound !== bFound) return aFound ? -1 : 1
// 3. Types before implementation
const aTypes = nodeA.task.scope?.includes('types')
const bTypes = nodeB.task.scope?.includes('types')
if (aTypes !== bTypes) return aTypes ? -1 : 1
return 0
})
const current = queue.shift()
const node = taskGraph.get(current)
node.execution_order = executionOrder++
node.semantic_priority = calculateSemanticPriority(node)
result.push(current)
// Process outgoing edges
for (const next of node.outEdges) {
const nextNode = taskGraph.get(next)
nextNode.inDegree--
if (nextNode.inDegree === 0) {
queue.push(next)
}
}
}
// Check for remaining nodes (cycle indication)
if (result.length !== taskGraph.size) {
const remaining = [...taskGraph.keys()].filter(k => !result.includes(k))
return { success: false, error: `Unprocessed tasks: ${remaining.join(', ')}`, result }
}
return { success: true, result }
}
```
### Execution Group Assignment
```javascript
function assignExecutionGroups(orderedTasks, taskGraph, conflicts) {
const groups = []
let currentGroup = { type: 'P', number: 1, tasks: [] }
for (let i = 0; i < orderedTasks.length; i++) {
const key = orderedTasks[i]
const node = taskGraph.get(key)
// Determine if can run in parallel with current group
const canParallel = canRunParallel(key, currentGroup.tasks, taskGraph, conflicts)
if (!canParallel && currentGroup.tasks.length > 0) {
// Save current group and start new sequential group
groups.push({ ...currentGroup })
currentGroup = { type: 'S', number: groups.length + 1, tasks: [] }
}
currentGroup.tasks.push(key)
node.execution_group = `${currentGroup.type}${currentGroup.number}`
}
// Save last group
if (currentGroup.tasks.length > 0) {
groups.push(currentGroup)
}
return groups
}
function canRunParallel(taskKey, groupTasks, taskGraph, conflicts) {
if (groupTasks.length === 0) return true
const node = taskGraph.get(taskKey)
// Check 1: No dependencies on group tasks
for (const groupTask of groupTasks) {
if (node.task.depends_on?.includes(groupTask.split(':')[1])) {
return false
}
}
// Check 2: No file conflicts with group tasks
for (const conflict of conflicts) {
if (conflict.tasks.includes(taskKey)) {
for (const groupTask of groupTasks) {
if (conflict.tasks.includes(groupTask)) {
return false
}
}
}
}
// Check 3: Different issues can run in parallel
const nodeIssue = node.issue_id
const groupIssues = new Set(groupTasks.map(t => taskGraph.get(t).issue_id))
return !groupIssues.has(nodeIssue)
}
```
---
## Output Generation
### Queue Item Format
```javascript
function generateQueueItems(orderedTasks, taskGraph, conflicts) {
const queueItems = []
let queueIdCounter = 1
for (const key of orderedTasks) {
const node = taskGraph.get(key)
queueItems.push({
queue_id: `Q-${String(queueIdCounter++).padStart(3, '0')}`,
issue_id: node.issue_id,
solution_id: node.solution_id,
task_id: node.task.id,
status: 'pending',
execution_order: node.execution_order,
execution_group: node.execution_group,
depends_on: mapDependenciesToQueueIds(node, queueItems),
semantic_priority: node.semantic_priority,
queued_at: new Date().toISOString()
})
}
return queueItems
}
function mapDependenciesToQueueIds(node, queueItems) {
return (node.task.depends_on || []).map(dep => {
const depKey = `${node.issue_id}:${dep}`
const queueItem = queueItems.find(q =>
q.issue_id === node.issue_id && q.task_id === dep
)
return queueItem?.queue_id || dep
})
}
```
### Final Output
```javascript
function generateOutput(queueItems, conflicts, groups) {
return {
queue: queueItems,
conflicts: conflicts.map(c => ({
type: c.type,
file: c.file,
tasks: c.tasks,
resolution: c.resolution,
resolution_order: c.resolution_order,
rationale: c.rationale,
resolved: c.resolved
})),
execution_groups: groups.map(g => ({
id: `${g.type}${g.number}`,
type: g.type === 'P' ? 'parallel' : 'sequential',
task_count: g.tasks.length,
tasks: g.tasks
})),
_metadata: {
version: '1.0',
total_tasks: queueItems.length,
total_conflicts: conflicts.length,
resolved_conflicts: conflicts.filter(c => c.resolved).length,
parallel_groups: groups.filter(g => g.type === 'P').length,
sequential_groups: groups.filter(g => g.type === 'S').length,
timestamp: new Date().toISOString(),
source: 'issue-queue-agent'
}
}
}
```
---
## Error Handling
```javascript
async function executeWithValidation(tasks) {
// Phase 1: Build graph
const { taskGraph, fileModifications } = buildDependencyGraph(tasks)
// Check for cycles
const cycleResult = detectCycles(taskGraph)
if (cycleResult.hasCycle) {
return {
success: false,
error: 'Circular dependency detected',
cycles: cycleResult.cycles,
suggestion: 'Remove circular dependencies or reorder tasks manually'
}
}
// Phase 2: Detect conflicts
const conflicts = detectFileConflicts(fileModifications, taskGraph)
.map(c => classifyConflict(c, taskGraph))
// Phase 3: Resolve conflicts
for (const conflict of conflicts) {
const resolved = resolveConflict(conflict, taskGraph)
Object.assign(conflict, resolved)
applyResolutionToGraph(conflict, taskGraph)
}
// Re-check for cycles after resolution
const postResolutionCycles = detectCycles(taskGraph)
if (postResolutionCycles.hasCycle) {
return {
success: false,
error: 'Conflict resolution created circular dependency',
cycles: postResolutionCycles.cycles,
suggestion: 'Manual conflict resolution required'
}
}
// Phase 4: Sort and group
const sortResult = topologicalSortWithPriority(taskGraph)
if (!sortResult.success) {
return {
success: false,
error: sortResult.error,
partial_result: sortResult.result
}
}
const groups = assignExecutionGroups(sortResult.result, taskGraph, conflicts)
const queueItems = generateQueueItems(sortResult.result, taskGraph, conflicts)
return {
success: true,
output: generateOutput(queueItems, conflicts, groups)
}
}
```
### 4.2 Error Handling
| Scenario | Action |
|----------|--------|
| Circular dependency | Report cycles, abort with suggestion |
| Conflict resolution creates cycle | Flag for manual resolution |
| Missing task reference in depends_on | Skip and warn |
| Empty task list | Return empty queue |
| Circular dependency | Abort, report cycles |
| Resolution creates cycle | Flag for manual resolution |
| Missing solution reference | Skip and warn |
| Empty solution list | Return empty queue |
---
## Quality Standards
### Ordering Validation
```javascript
function validateOrdering(queueItems, taskGraph) {
const errors = []
for (const item of queueItems) {
const key = `${item.issue_id}:${item.task_id}`
const node = taskGraph.get(key)
// Check dependencies come before
for (const depQueueId of item.depends_on) {
const depItem = queueItems.find(q => q.queue_id === depQueueId)
if (depItem && depItem.execution_order >= item.execution_order) {
errors.push(`${item.queue_id} ordered before dependency ${depQueueId}`)
}
}
}
return { valid: errors.length === 0, errors }
}
```
### Semantic Priority Rules
| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Fix action | +0.05 |
| Foundation scope (config/types/utils) | +0.1 |
| Types scope | +0.05 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
---
## Key Reminders
### 4.3 Guidelines
**ALWAYS**:
1. Build dependency graph before any ordering
2. Detect cycles before and after conflict resolution
3. Apply resolution rules consistently (Create → Update → Delete)
4. Preserve within-issue task order when no conflicts
5. Calculate semantic priority for all tasks
1. Build dependency graph before ordering
2. Detect file overlaps between solutions
3. Apply resolution rules consistently
4. Calculate semantic priority for all solutions
5. Include rationale for conflict resolutions
6. Validate ordering before output
7. Include rationale for conflict resolutions
8. Map depends_on to queue_ids in output
**NEVER**:
1. Execute tasks (ordering only)
1. Execute solutions (ordering only)
2. Ignore circular dependencies
3. Create arbitrary ordering without rules
4. Skip conflict detection
5. Output invalid DAG
6. Merge tasks from different issues in same parallel group if conflicts exist
7. Assume task order without checking depends_on
3. Skip conflict detection
4. Output invalid DAG
5. Merge conflicting solutions in parallel group
6. Split tasks from their solution
**OUTPUT** (STRICT - only these 2 files):
```
.workflow/issues/queues/{Queue ID}.json # Use Queue ID from prompt
.workflow/issues/queues/index.json # Update existing index
```
- Use the Queue ID provided in the prompt; do NOT generate a new one
- Write ONLY the 2 files listed above, NO other files
- Final return: PURE JSON summary (no markdown, no prose):
```json
{"queue_id":"QUE-xxx","total_solutions":N,"total_tasks":N,"execution_groups":[...],"conflicts_resolved":N,"issues_queued":["ISS-xxx"]}
```

View File

@@ -0,0 +1,427 @@
---
name: issue:discover
description: Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.
argument-hint: "<path-pattern> [--perspectives=bug,ux,...] [--external]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*)
---
# Issue Discovery Command
## Quick Start
```bash
# Discover issues in specific module (interactive perspective selection)
/issue:discover src/auth/**
# Discover with specific perspectives
/issue:discover src/payment/** --perspectives=bug,security,test
# Discover with external research for all perspectives
/issue:discover src/api/** --external
# Discover in multiple modules
/issue:discover src/auth/**,src/payment/**
```
**Discovery Scope**: Specified modules/files only
**Output Directory**: `.workflow/issues/discoveries/{discovery-id}/`
**Available Perspectives**: bug, ux, test, quality, security, performance, maintainability, best-practices
**Exa Integration**: Auto-enabled for security and best-practices perspectives
**CLI Tools**: Gemini → Qwen → Codex (fallback chain)
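The fallback chain amounts to trying each tool in order until one succeeds; a hypothetical sketch (the exact `ccw cli` flags for read-mode exploration are an assumption, not part of this command's spec):
```javascript
async function exploreWithFallback(prompt, id) {
  for (const tool of ['gemini', 'qwen', 'codex']) {
    try {
      return await Bash(`ccw cli -p "${prompt}" --tool ${tool} --mode read --id ${id}`);
    } catch (err) {
      console.warn(`${tool} failed (${err.message}), falling back to next tool`);
    }
  }
  throw new Error('All CLI tools in fallback chain failed');
}
```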
## What & Why
### Core Concept
Multi-perspective issue discovery orchestrator that explores code from different angles to identify potential bugs, UX improvements, test gaps, and other actionable items. Unlike code review (which assesses existing code quality), discovery focuses on **finding opportunities for improvement and potential problems**.
**vs Code Review**:
- **Code Review** (`review-module-cycle`): Evaluates code quality against standards
- **Issue Discovery** (`issue:discover`): Finds actionable issues, bugs, and improvement opportunities
### Value Proposition
1. **Proactive Issue Detection**: Find problems before they become bugs
2. **Multi-Perspective Analysis**: Each perspective surfaces different types of issues
3. **External Benchmarking**: Compare against industry best practices via Exa
4. **Direct Issue Integration**: Discoveries can be exported to issue tracker
5. **Dashboard Management**: View, filter, and export discoveries via CCW dashboard
## How It Works
### Execution Flow
```
Phase 1: Discovery & Initialization
└─ Parse target pattern, create session, initialize output structure
Phase 2: Interactive Perspective Selection
└─ AskUserQuestion for perspective selection (or use --perspectives)
Phase 3: Parallel Perspective Analysis
├─ Launch N @cli-explore-agent instances (one per perspective)
├─ Security & Best-Practices auto-trigger Exa research
├─ Agent writes perspective JSON, returns summary
└─ Update discovery-progress.json
Phase 4: Aggregation & Prioritization
├─ Collect agent return summaries
├─ Load perspective JSON files
├─ Merge findings, deduplicate by file+line
└─ Calculate priority scores
Phase 5: Issue Generation & Summary
├─ Convert high-priority discoveries to issue format
├─ Write to discovery-issues.jsonl
├─ Generate single summary.md from agent returns
└─ Update discovery-state.json to complete
```
## Perspectives
### Available Perspectives
| Perspective | Focus | Categories | Exa |
|-------------|-------|------------|-----|
| **bug** | Potential Bugs | edge-case, null-check, resource-leak, race-condition, boundary, exception-handling | - |
| **ux** | User Experience | error-message, loading-state, feedback, accessibility, interaction, consistency | - |
| **test** | Test Coverage | missing-test, edge-case-test, integration-gap, coverage-hole, assertion-quality | - |
| **quality** | Code Quality | complexity, duplication, naming, documentation, code-smell, readability | - |
| **security** | Security Issues | injection, auth, encryption, input-validation, data-exposure, access-control | ✓ |
| **performance** | Performance | n-plus-one, memory-usage, caching, algorithm, blocking-operation, resource | - |
| **maintainability** | Maintainability | coupling, cohesion, tech-debt, extensibility, module-boundary, interface-design | - |
| **best-practices** | Best Practices | convention, pattern, framework-usage, anti-pattern, industry-standard | ✓ |
### Interactive Perspective Selection
When no `--perspectives` flag is provided, the command uses AskUserQuestion:
```javascript
AskUserQuestion({
questions: [{
question: "Select primary discovery focus:",
header: "Focus",
multiSelect: false,
options: [
{ label: "Bug + Test + Quality", description: "Quick scan: potential bugs, test gaps, code quality (Recommended)" },
{ label: "Security + Performance", description: "System audit: security issues, performance bottlenecks" },
{ label: "Maintainability + Best-practices", description: "Long-term health: coupling, tech debt, conventions" },
{ label: "Full analysis", description: "All 7 perspectives (comprehensive, takes longer)" }
]
}]
})
```
**Recommended Combinations**:
- Quick scan: bug, test, quality
- Full analysis: all perspectives
- Security audit: security, bug, quality
## Core Responsibilities
### Orchestrator
**Phase 1: Discovery & Initialization**
```javascript
// Step 1: Parse target pattern and resolve files
const resolvedFiles = await expandGlobPattern(targetPattern);
if (resolvedFiles.length === 0) {
throw new Error(`No files matched pattern: ${targetPattern}`);
}
// Step 2: Generate discovery ID
const discoveryId = `DSC-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
// Step 3: Create output directory
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
await mkdir(outputDir, { recursive: true });
await mkdir(`${outputDir}/perspectives`, { recursive: true });
// Step 4: Initialize unified discovery state (merged state+progress)
await writeJson(`${outputDir}/discovery-state.json`, {
discovery_id: discoveryId,
target_pattern: targetPattern,
phase: "initialization",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
target: { files_count: { total: resolvedFiles.length }, project: {} },
perspectives: [], // filled after selection: [{name, status, findings}]
external_research: { enabled: false, completed: false },
results: { total_findings: 0, issues_generated: 0, priority_distribution: {} }
});
```
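`updateDiscoveryState` is used in the phases below but not defined here; a minimal sketch, assuming dotted keys address nested fields and `updated_at` is bumped on every write:
```javascript
async function updateDiscoveryState(outputDir, updates) {
  const statePath = `${outputDir}/discovery-state.json`;
  const state = await readJson(statePath);
  for (const [key, value] of Object.entries(updates)) {
    // Dotted keys like 'results.total_findings' address nested fields
    const parts = key.split('.');
    let target = state;
    for (const part of parts.slice(0, -1)) target = target[part] ??= {};
    target[parts.at(-1)] = value;
  }
  state.updated_at = new Date().toISOString();
  await writeJson(statePath, state);
}
```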
**Phase 2: Perspective Selection**
```javascript
// Check for --perspectives flag
let selectedPerspectives = [];
if (args.perspectives) {
selectedPerspectives = args.perspectives.split(',').map(p => p.trim());
} else {
// Interactive selection via AskUserQuestion
const response = await AskUserQuestion({...});
selectedPerspectives = parseSelectedPerspectives(response);
}
// Validate and update state ('perspectives' lives at the top level of discovery-state.json)
await updateDiscoveryState(outputDir, {
  perspectives: selectedPerspectives.map(name => ({ name, status: 'pending', findings: 0 })),
  phase: 'parallel'
});
```
**Phase 3: Parallel Perspective Analysis**
Launch N agents in parallel (one per selected perspective):
```javascript
// Launch agents in parallel - agents write JSON and return summary
const agentPromises = selectedPerspectives.map(perspective =>
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Discover ${perspective} issues`,
prompt: buildPerspectivePrompt(perspective, discoveryId, resolvedFiles, outputDir)
})
);
// Wait for all agents - collect their return summaries
const results = await Promise.all(agentPromises);
// results contain agent summaries for final report
```
**Phase 4: Aggregation & Prioritization**
```javascript
// Load all perspective JSON files written by agents
const allFindings = [];
for (const perspective of selectedPerspectives) {
const jsonPath = `${outputDir}/perspectives/${perspective}.json`;
if (await fileExists(jsonPath)) {
const data = await readJson(jsonPath);
allFindings.push(...data.findings.map(f => ({ ...f, perspective })));
}
}
// Deduplicate and prioritize
const prioritizedFindings = deduplicateAndPrioritize(allFindings);
// Update unified state
await updateDiscoveryState(outputDir, {
phase: 'aggregation',
'results.total_findings': prioritizedFindings.length,
'results.priority_distribution': countByPriority(prioritizedFindings)
});
```
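`deduplicateAndPrioritize` and `countByPriority` are referenced but not shown; a sketch assuming findings carry `file`, `line`, and `priority` fields, with score weights chosen for illustration:
```javascript
const PRIORITY_SCORE = { critical: 1.0, high: 0.8, medium: 0.5, low: 0.3 };

function deduplicateAndPrioritize(findings) {
  const byLocation = new Map();
  for (const f of findings) {
    const key = `${f.file}:${f.line}`; // dedupe by file+line
    const existing = byLocation.get(key);
    // On duplicates, keep the higher-priority finding
    if (!existing || (PRIORITY_SCORE[f.priority] ?? 0) > (PRIORITY_SCORE[existing.priority] ?? 0)) {
      byLocation.set(key, f);
    }
  }
  return [...byLocation.values()]
    .map(f => ({ ...f, priority_score: PRIORITY_SCORE[f.priority] ?? 0.5 }))
    .sort((a, b) => b.priority_score - a.priority_score);
}

function countByPriority(findings) {
  return findings.reduce((acc, f) => {
    acc[f.priority] = (acc[f.priority] || 0) + 1;
    return acc;
  }, {});
}
```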
**Phase 5: Issue Generation & Summary**
```javascript
// Convert high-priority findings to issues
const issueWorthy = prioritizedFindings.filter(f =>
  f.priority === 'critical' || f.priority === 'high' || f.priority_score >= 0.7
);
const issues = issueWorthy.map(f => f.suggested_issue); // each finding carries a suggested_issue
// Write discovery-issues.jsonl
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
// Generate single summary.md from agent return summaries
// Orchestrator briefly summarizes what agents returned (NO detailed reports)
await writeSummaryFromAgentReturns(outputDir, results, prioritizedFindings, issues);
// Update final state
await updateDiscoveryState(outputDir, {
phase: 'complete',
updated_at: new Date().toISOString(),
'results.issues_generated': issues.length
});
```
### Output File Structure
```
.workflow/issues/discoveries/
├── index.json # Discovery session index
└── {discovery-id}/
├── discovery-state.json # Unified state (merged state+progress)
├── perspectives/
│ └── {perspective}.json # Per-perspective findings
├── external-research.json # Exa research results (if enabled)
├── discovery-issues.jsonl # Generated candidate issues
└── summary.md # Single summary (from agent returns)
```
### Schema References
**External Schema Files** (agent MUST read and follow exactly):
| Schema | Path | Purpose |
|--------|------|---------|
| **Discovery State** | `~/.claude/workflows/cli-templates/schemas/discovery-state-schema.json` | Session state machine |
| **Discovery Finding** | `~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json` | Perspective analysis results |
### Agent Invocation Template
**Perspective Analysis Agent**:
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Discover ${perspective} issues`,
prompt: `
## Task Objective
Discover potential ${perspective} issues in specified module files.
## Discovery Context
- Discovery ID: ${discoveryId}
- Perspective: ${perspective}
- Target Pattern: ${targetPattern}
- Resolved Files: ${resolvedFiles.length} files
- Output Directory: ${outputDir}
## MANDATORY FIRST STEPS
1. Read discovery state: ${outputDir}/discovery-state.json
2. Read schema: ~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json
3. Analyze target files for ${perspective} concerns
## Output Requirements
**1. Write JSON file**: ${outputDir}/perspectives/${perspective}.json
- Follow discovery-finding-schema.json exactly
- Each finding: id, title, priority, category, description, file, line, snippet, suggested_issue, confidence
**2. Return summary** (DO NOT write report file):
- Return a brief text summary of findings
- Include: total findings, priority breakdown, key issues
- This summary will be used by orchestrator for final report
## Perspective-Specific Guidance
${getPerspectiveGuidance(perspective)}
## Success Criteria
- [ ] JSON written to ${outputDir}/perspectives/${perspective}.json
- [ ] Summary returned with findings count and key issues
- [ ] Each finding includes actionable suggested_issue
- [ ] Priority uses lowercase enum: critical/high/medium/low
`
})
```
**Exa Research Agent** (for security and best-practices):
```javascript
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `External research for ${perspective} via Exa`,
prompt: `
## Task Objective
Research industry best practices for ${perspective} using Exa search
## Research Steps
1. Read project tech stack: .workflow/project-tech.json
2. Use Exa to search for best practices
3. Synthesize findings relevant to this project
## Output Requirements
**1. Write JSON file**: ${outputDir}/external-research.json
- Include sources, key_findings, gap_analysis, recommendations
**2. Return summary** (DO NOT write report file):
- Brief summary of external research findings
- Key recommendations for the project
## Success Criteria
- [ ] JSON written to ${outputDir}/external-research.json
- [ ] Summary returned with key recommendations
- [ ] Findings are relevant to project's tech stack
`
})
```
### Perspective Guidance Reference
```javascript
function getPerspectiveGuidance(perspective) {
const guidance = {
bug: `
Focus: Null checks, edge cases, resource leaks, race conditions, boundary conditions, exception handling
Priority: Critical=data corruption/crash, High=malfunction, Medium=edge case issues, Low=minor
`,
ux: `
Focus: Error messages, loading states, feedback, accessibility, interaction patterns, form validation
Priority: Critical=inaccessible, High=confusing, Medium=inconsistent, Low=cosmetic
`,
test: `
Focus: Missing unit tests, edge case coverage, integration gaps, assertion quality, test isolation
Priority: Critical=no security tests, High=no core logic tests, Medium=weak coverage, Low=minor gaps
`,
quality: `
Focus: Complexity, duplication, naming, documentation, code smells, readability
Priority: Critical=unmaintainable, High=significant issues, Medium=naming/docs, Low=minor refactoring
`,
security: `
Focus: Input validation, auth/authz, injection, XSS/CSRF, data exposure, access control
Priority: Critical=auth bypass/injection, High=missing authz, Medium=weak validation, Low=headers
`,
performance: `
Focus: N+1 queries, memory leaks, caching, algorithm efficiency, blocking operations
Priority: Critical=memory leaks, High=N+1/inefficient, Medium=missing cache, Low=minor optimization
`,
maintainability: `
Focus: Coupling, interface design, tech debt, extensibility, module boundaries, configuration
Priority: Critical=unrelated code changes, High=unclear boundaries, Medium=coupling, Low=refactoring
`,
'best-practices': `
Focus: Framework conventions, language patterns, anti-patterns, deprecated APIs, coding standards
Priority: Critical=anti-patterns causing bugs, High=convention violations, Medium=style, Low=cosmetic
`
};
return guidance[perspective] || 'General code discovery analysis';
}
```
## Dashboard Integration
### Viewing Discoveries
Open CCW dashboard to manage discoveries:
```bash
ccw view
```
Navigate to **Issues > Discovery** to:
- View all discovery sessions
- Filter findings by perspective and priority
- Preview finding details
- Select and export findings as issues
### Exporting to Issues
From the dashboard, select findings and click "Export as Issues" to:
1. Convert discoveries to standard issue format
2. Append to `.workflow/issues/issues.jsonl`
3. Set status to `registered`
4. Continue with `/issue:plan` workflow
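Per the commit notes, the export path deduplicates by issue ID and source finding ID; a sketch of what the append might look like (the `readJsonl`/`appendJsonl` helpers and field names are assumptions):
```javascript
async function appendToIssuesJsonl(newIssues) {
  const path = '.workflow/issues/issues.jsonl';
  const existing = await readJsonl(path).catch(() => []);
  const seenIds = new Set(existing.map(i => i.id));
  const seenFindings = new Set(existing.map(i => i.source_finding_id).filter(Boolean));
  let added = 0, skipped = 0;
  for (const issue of newIssues) {
    if (seenIds.has(issue.id) || seenFindings.has(issue.source_finding_id)) {
      skipped++; // already exported
      continue;
    }
    await appendJsonl(path, { ...issue, status: 'registered' });
    added++;
  }
  return { added, skipped }; // reported back to the dashboard
}
```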
## Related Commands
```bash
# After discovery, plan solutions for exported issues
/issue:plan DSC-001,DSC-002,DSC-003
# Or use interactive management
/issue:manage
```
## Best Practices
1. **Start Focused**: Begin with specific modules rather than entire codebase
2. **Use Quick Scan First**: Start with bug, test, quality for fast results
3. **Review Before Export**: Not all discoveries warrant issues - use dashboard to filter
4. **Combine Perspectives**: Run related perspectives together (e.g., security + bug)
5. **Enable Exa for New Tech**: When using unfamiliar frameworks, enable external research

View File

@@ -1,7 +1,7 @@
---
name: execute
description: Execute queue with codex using endpoint-driven task fetching (single task per codex instance)
argument-hint: "[--parallel <n>] [--executor codex|gemini]"
description: Execute queue with codex using DAG-based parallel orchestration (solution-level)
argument-hint: "[--parallel <n>] [--executor codex|gemini|agent]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
@@ -9,24 +9,14 @@ allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
## Overview
Execution orchestrator that coordinates codex instances. Each task is executed by an independent codex instance that fetches its task via CLI endpoint. **Codex does NOT read task files** - it calls `ccw issue next` to get task data dynamically.
Minimal orchestrator that dispatches **solution IDs** to executors. Each executor receives a complete solution with all its tasks.
**Core design:**
- Single task per codex instance (not loop mode)
- Endpoint-driven: `ccw issue next` → execute → `ccw issue complete`
- No file reading in codex
- Orchestrator manages parallelism
## Storage Structure (Flat JSONL)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queue.json # Execution queue
└── solutions/
├── {issue-id}.jsonl # Solutions for issue
└── ...
```
**Design Principles:**
- `queue dag` → returns parallel batches with solution IDs (S-1, S-2, ...)
- `detail <id>` → READ-ONLY solution fetch (returns full solution with all tasks)
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**
## Usage
@@ -34,420 +24,271 @@ Execution orchestrator that coordinates codex instances. Each task is executed b
/issue:execute [FLAGS]
# Examples
/issue:execute # Execute all ready tasks
/issue:execute --parallel 3 # Execute up to 3 tasks in parallel
/issue:execute --executor codex # Force codex executor
/issue:execute # Execute with default parallelism
/issue:execute --parallel 4 # Execute up to 4 solutions in parallel
/issue:execute --executor agent # Use agent instead of codex
# Flags
--parallel <n> Max parallel codex instances (default: 1)
--executor <type> Force executor: codex|gemini|agent
--dry-run Show what would execute without running
--parallel <n> Max parallel executors (default: 3)
--executor <type> Force executor: codex|gemini|agent (default: codex)
--dry-run Show DAG and batches without executing
```
## Execution Process
## Execution Flow
```
Phase 1: Queue Loading
Load queue.json
├─ Count pending/ready tasks
└─ Initialize TodoWrite tracking
Phase 1: Get DAG
ccw issue queue dag → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
Phase 2: Ready Task Detection
├─ Find tasks with satisfied dependencies
├─ Group by execution_group (parallel batches)
└─ Determine execution order
Phase 2: Dispatch Parallel Batch
├─ For each solution ID in batch (parallel):
├─ Executor calls: ccw issue detail <id> (READ-ONLY)
│ ├─ Executor gets FULL SOLUTION with all tasks
│ ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
│ ├─ Executor tests + commits per task
│ └─ Executor calls: ccw issue done <id>
└─ Wait for batch completion
Phase 3: Codex Coordination
For each ready task:
│ ├─ Launch independent codex instance
│ ├─ Codex calls: ccw issue next
│ ├─ Codex receives task data (NOT file)
│ ├─ Codex executes task
│ ├─ Codex calls: ccw issue complete <queue-id>
│ └─ Update TodoWrite
└─ Parallel execution based on --parallel flag
Phase 4: Completion
├─ Generate execution summary
├─ Update issue statuses in issues.jsonl
└─ Display results
Phase 3: Next Batch
ccw issue queue dag → check for newly-ready solutions
```
## Implementation
### Phase 1: Queue Loading
### Phase 1: Get DAG
```javascript
// Load queue
const queuePath = '.workflow/issues/queue.json';
if (!Bash(`test -f "${queuePath}" && echo exists`).includes('exists')) {
console.log('No queue found. Run /issue:queue first.');
// Get dependency graph and parallel batches
const dagJson = Bash(`ccw issue queue dag`).trim();
const dag = JSON.parse(dagJson);
if (dag.error || dag.ready_count === 0) {
console.log(dag.error || 'No solutions ready for execution');
console.log('Use /issue:queue to form a queue first');
return;
}
const queue = JSON.parse(Read(queuePath));
// Count by status
const pending = queue.queue.filter(q => q.status === 'pending');
const executing = queue.queue.filter(q => q.status === 'executing');
const completed = queue.queue.filter(q => q.status === 'completed');
console.log(`
## Execution Queue Status
## Queue DAG (Solution-Level)
- Pending: ${pending.length}
- Executing: ${executing.length}
- Completed: ${completed.length}
- Total: ${queue.queue.length}
- Total Solutions: ${dag.total}
- Ready: ${dag.ready_count}
- Completed: ${dag.completed_count}
- Parallel in batch 1: ${dag.parallel_batches[0]?.length || 0}
`);
if (pending.length === 0 && executing.length === 0) {
console.log('All tasks completed!');
// Dry run mode
if (flags.dryRun) {
console.log('### Parallel Batches:\n');
dag.parallel_batches.forEach((batch, i) => {
console.log(`Batch ${i + 1}: ${batch.join(', ')}`);
});
return;
}
```
### Phase 2: Ready Task Detection
### Phase 2: Dispatch Parallel Batch
```javascript
// Find ready tasks (dependencies satisfied)
function getReadyTasks() {
const completedIds = new Set(
queue.queue.filter(q => q.status === 'completed').map(q => q.queue_id)
);
const parallelLimit = flags.parallel || 3;
const executor = flags.executor || 'codex';
return queue.queue.filter(item => {
if (item.status !== 'pending') return false;
return item.depends_on.every(depId => completedIds.has(depId));
});
}
const readyTasks = getReadyTasks();
if (readyTasks.length === 0) {
if (executing.length > 0) {
console.log('Tasks are currently executing. Wait for completion.');
} else {
console.log('No ready tasks. Check for blocked dependencies.');
}
return;
}
console.log(`Found ${readyTasks.length} ready tasks`);
// Sort by execution order
readyTasks.sort((a, b) => a.execution_order - b.execution_order);
// Process first batch (all solutions can run in parallel)
const batch = dag.parallel_batches[0] || [];
// Initialize TodoWrite
TodoWrite({
todos: readyTasks.slice(0, parallelLimit).map(t => ({
content: `[${t.queue_id}] ${t.issue_id}:${t.task_id}`,
todos: batch.map(id => ({
content: `Execute solution ${id}`,
status: 'pending',
activeForm: `Executing ${t.queue_id}`
activeForm: `Executing solution ${id}`
}))
});
// Dispatch all in parallel (up to limit)
const chunks = [];
for (let i = 0; i < batch.length; i += parallelLimit) {
chunks.push(batch.slice(i, i + parallelLimit));
}
for (const chunk of chunks) {
console.log(`\n### Executing Solutions: ${chunk.join(', ')}`);
// Launch all in parallel
const executions = chunk.map(solutionId => {
updateTodo(solutionId, 'in_progress');
return dispatchExecutor(solutionId, executor);
});
await Promise.all(executions);
chunk.forEach(id => updateTodo(id, 'completed'));
}
```
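`updateTodo` is assumed to wrap `TodoWrite` with a single-item status change; a minimal sketch that mirrors the todo list built above:
```javascript
// Hypothetical helper: TodoWrite replaces the whole list, so keep a local
// mirror and flip one solution's status per transition
let todos = batch.map(id => ({
  content: `Execute solution ${id}`,
  status: 'pending',
  activeForm: `Executing solution ${id}`
}));

function updateTodo(solutionId, status) {
  todos = todos.map(t =>
    t.content === `Execute solution ${solutionId}` ? { ...t, status } : t
  );
  TodoWrite({ todos });
}
```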
### Phase 3: Codex Coordination (Single Task Mode - Full Lifecycle)
### Executor Dispatch
```javascript
// Execute tasks - single codex instance per task with full lifecycle
async function executeTask(queueItem) {
const codexPrompt = `
## Single Task Execution - CLOSED-LOOP LIFECYCLE
function dispatchExecutor(solutionId, executorType) {
// Executor fetches FULL SOLUTION via READ-ONLY detail command
// Executor handles all tasks within solution sequentially
// Then reports completion via done command
const prompt = `
## Execute Solution ${solutionId}
You are executing ONE task from the issue queue. Each task has 5 phases that MUST ALL complete successfully.
### Step 1: Fetch Task
Run this command to get your task:
### Step 1: Get Solution (read-only)
\`\`\`bash
ccw issue next
ccw issue detail ${solutionId}
\`\`\`
This returns JSON with full lifecycle definition:
- task.implementation: Implementation steps
- task.test: Test requirements and commands
- task.regression: Regression check commands
- task.acceptance: Acceptance criteria and verification
- task.commit: Commit specification
### Step 2: Execute All Tasks Sequentially
The detail command returns a FULL SOLUTION with all tasks.
Execute each task in order (T1 → T2 → T3 → ...):
### Step 2: Execute Full Lifecycle
**Phase 1: IMPLEMENT**
1. Follow task.implementation steps in order
2. Modify files specified in modification_points
3. Use context.relevant_files for reference
4. Use context.patterns for code style
**Phase 2: TEST**
1. Run test commands from task.test.commands
2. Ensure all unit tests pass (task.test.unit)
3. Run integration tests if specified (task.test.integration)
4. Verify coverage meets task.test.coverage_target if specified
5. If tests fail → fix code and re-run, do NOT proceed until tests pass
**Phase 3: REGRESSION**
1. Run all commands in task.regression
2. Ensure no existing tests are broken
3. If regression fails → fix and re-run
**Phase 4: ACCEPTANCE**
1. Verify each criterion in task.acceptance.criteria
2. Execute verification steps in task.acceptance.verification
3. Complete any manual_checks if specified
4. All criteria MUST pass before proceeding
**Phase 5: COMMIT**
1. Stage all modified files
2. Use task.commit.message_template as commit message
3. Commit with: git commit -m "$(cat <<'EOF'\n<message>\nEOF\n)"
4. If commit_strategy is 'per-task', commit now
5. If commit_strategy is 'atomic' or 'squash', stage but don't commit
For each task:
1. Follow task.implementation steps
2. Run task.test commands
3. Verify task.acceptance criteria
4. Commit using task.commit specification
### Step 3: Report Completion
When ALL phases complete successfully:
When ALL tasks in solution are done:
\`\`\`bash
ccw issue done ${solutionId} --result '{
  "summary": "What was done",
  "files_modified": ["path1", "path2"],
  "tasks_completed": N,
  "tests_passed": true,
  "regression_passed": true,
  "acceptance_passed": true,
  "committed": true,
  "commit_hash": "<hash>"
}'
\`\`\`
If any task fails and cannot be fixed:
\`\`\`bash
ccw issue done ${solutionId} --fail --reason "Task TX failed: ..."
\`\`\`
### Rules
- NEVER skip any lifecycle phase
- Tests MUST pass before proceeding to acceptance
- Regression MUST pass before commit
- ALL acceptance criteria MUST be verified
- Report accurate lifecycle status in result
### Start Now
Begin by running: ccw issue detail ${solutionId}
`;
// Dispatch to the assigned executor
if (executorType === 'codex') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}`,
{ timeout: 7200000, run_in_background: true } // 2hr for full solution
);
} else if (executorType === 'gemini') {
return Bash(
`ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}`,
{ timeout: 3600000, run_in_background: true }
);
} else {
// Agent execution
return Task({
subagent_type: 'code-developer',
run_in_background: false,
description: `Execute solution ${solutionId}`,
prompt: prompt
});
}
}
```
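`escapePrompt` is referenced above but not defined in this command. A minimal sketch of what such a helper could look like (hypothetical - the real helper may differ), escaping the characters that are special inside a double-quoted shell argument:

```javascript
// Hypothetical helper - the actual implementation may differ
function escapePrompt(text) {
  return text
    .replace(/\\/g, '\\\\')  // backslashes first, so later escapes survive
    .replace(/"/g, '\\"')    // double quotes
    .replace(/\$/g, '\\$')   // prevent shell variable expansion
    .replace(/`/g, '\\`');   // prevent command substitution
}
```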
### Phase 3: Check Next Batch
```javascript
// Refresh DAG after batch completes
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
console.log(`
## Batch Complete
- Solutions Completed: ${refreshedDag.completed_count}/${refreshedDag.total}
- Next ready: ${refreshedDag.ready_count}
`);
if (refreshedDag.ready_count > 0) {
console.log('Run `/issue:execute` again for next batch.');
}
```
## Parallel Execution Model
```
┌─────────────────────────────────────────────────────────────┐
│ Orchestrator │
├─────────────────────────────────────────────────────────────┤
│ 1. ccw issue queue dag │
│ → { parallel_batches: [["S-1","S-2"], ["S-3"]] } │
│ │
│ 2. Dispatch batch 1 (parallel): │
│ ┌──────────────────────┐ ┌──────────────────────┐ │
│ │ Executor 1 │ │ Executor 2 │ │
│ │ detail S-1 │ │ detail S-2 │ │
│ │ → gets full solution │ │ → gets full solution │ │
│ │ [T1→T2→T3 sequential]│ │ [T1→T2 sequential] │ │
│ │ done S-1 │ │ done S-2 │ │
│ └──────────────────────┘ └──────────────────────┘ │
│ │
│ 3. ccw issue queue dag (refresh) │
│ → S-3 now ready (S-1 completed, file conflict resolved) │
└─────────────────────────────────────────────────────────────┘
```
**Why this works for parallel:**
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in same batch have NO file conflicts
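Taken together, the orchestrator reduces to a loop over DAG refreshes. A sketch under the assumptions above (endpoints as documented in the contract below):

```javascript
async function orchestrate(executor) {
  while (true) {
    const dag = JSON.parse(Bash(`ccw issue queue dag`).trim());
    if (dag.completed_count === dag.total) break;   // queue finished
    const ready = dag.nodes.filter(n => n.ready).map(n => n.id);
    if (ready.length === 0) break;                  // blocked - needs retry or intervention
    await Promise.all(ready.map(id => dispatchExecutor(id, executor)));
  }
}
```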
## CLI Endpoint Contract
### `ccw issue queue dag`
Returns dependency graph with parallel batches (solution-level):
```json
{
"queue_id": "QUE-...",
"total": 3,
"ready_count": 2,
"completed_count": 0,
"nodes": [
{ "id": "S-1", "issue_id": "ISS-xxx", "status": "pending", "ready": true, "task_count": 3 },
{ "id": "S-2", "issue_id": "ISS-yyy", "status": "pending", "ready": true, "task_count": 2 },
{ "id": "S-3", "issue_id": "ISS-zzz", "status": "pending", "ready": false, "depends_on": ["S-1"] }
],
"parallel_batches": [["S-1", "S-2"], ["S-3"]]
}
```
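`parallel_batches` is derivable from `nodes` alone via level-order traversal of the dependency DAG. A sketch (assumes `depends_on` lists queue item ids, as in the example above):

```javascript
function toParallelBatches(nodes) {
  const done = new Set();
  const batches = [];
  while (done.size < nodes.length) {
    // Everything whose dependencies are already satisfied forms the next batch
    const level = nodes.filter(n =>
      !done.has(n.id) && (n.depends_on || []).every(d => done.has(d))
    );
    if (level.length === 0) throw new Error('Cycle detected in solution DAG');
    batches.push(level.map(n => n.id));
    level.forEach(n => done.add(n.id));
  }
  return batches; // e.g. [["S-1", "S-2"], ["S-3"]]
}
```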
### `ccw issue detail <item_id>`
Returns FULL SOLUTION with all tasks (READ-ONLY):
```json
{
"item_id": "S-1",
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx",
"status": "pending",
"solution": {
"id": "SOL-xxx",
"approach": "...",
"tasks": [
{ "id": "T1", "title": "...", "implementation": [...], "test": {...} },
{ "id": "T2", "title": "...", "implementation": [...], "test": {...} },
{ "id": "T3", "title": "...", "implementation": [...], "test": {...} }
],
"exploration_context": { "relevant_files": [...] }
},
"execution_hints": { "executor": "codex", "estimated_minutes": 180 }
}
```
### `ccw issue done <item_id>`
Marks solution completed/failed, updates queue state, checks for queue completion.
## Error Handling
| Error | Resolution |
|-------|------------|
| No queue | Run /issue:queue first |
| No ready solutions | Dependencies blocked, check DAG |
| Executor timeout | Solution not marked done, can retry |
| Solution failure | Use `ccw issue retry` to reset |
| Partial task failure | Executor reports which task failed via `done --fail` |
## Related Commands
- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue list` - View queue status
- `ccw issue queue dag` - View dependency graph
- `ccw issue detail <id>` - View task details
- `ccw issue retry` - Reset failed tasks

Interactive menu-driven interface for issue management using `ccw issue` CLI endpoints.
```bash
# Core endpoints (ccw issue)
ccw issue list # List all issues
ccw issue list <id> --json # Get issue details
ccw issue status <id> # Detailed status
ccw issue init <id> --title "..." # Create issue
ccw issue task <id> --title "..." # Add task
ccw issue bind <id> <solution-id> # Bind solution
# Queue management
ccw issue queue # List current queue
ccw issue queue add <id> # Add to queue
ccw issue queue list # Queue history
ccw issue queue switch <queue-id> # Switch queue
ccw issue queue archive # Archive queue
ccw issue queue delete <queue-id> # Delete queue
ccw issue next # Get next task
ccw issue done <queue-id> # Mark completed
ccw issue complete <item-id> # (legacy alias for done)
```
## Usage
## Implementation
This command delegates to the `issue-manage` skill for detailed implementation.
### Entry Point
```javascript
const issueId = parseIssueId(userInput);
// ... (argument parsing and routing elided - see issue-manage skill)
if (!action) {
}
```
### Phase 2: Main Menu Flow

1. **Dashboard**: Fetch issues summary via `ccw issue list --json`
2. **Menu**: Present action options via AskUserQuestion
3. **Route**: Execute selected action (List/View/Edit/Delete/Bulk)
4. **Loop**: Return to menu after each action

| Action | Description | CLI Command |
|--------|-------------|-------------|
| List | Browse with filters | `ccw issue list --json` |
| View | Detail view | `ccw issue status <id> --json` |
| Edit | Modify fields | Update `issues.jsonl` |
| Delete | Remove issue | Clean up all related files |
| Bulk | Batch operations | Multi-select + batch update |

```javascript
async function showMainMenu(preselectedIssue = null) {
// Fetch current issues summary
const issuesResult = Bash('ccw issue list --json 2>/dev/null || echo "[]"');
const issues = JSON.parse(issuesResult) || [];
const queueResult = Bash('ccw issue status --json 2>/dev/null');
const queueStatus = JSON.parse(queueResult || '{}');
console.log(`
## Issue Management Dashboard
**Total Issues**: ${issues.length}
**Queue Status**: ${queueStatus.queue?.total_tasks || 0} tasks (${queueStatus.queue?.pending_count || 0} pending)
### Quick Stats
- Registered: ${issues.filter(i => i.status === 'registered').length}
- Planned: ${issues.filter(i => i.status === 'planned').length}
- Executing: ${issues.filter(i => i.status === 'executing').length}
- Completed: ${issues.filter(i => i.status === 'completed').length}
### Available Actions
`);
const answer = AskUserQuestion({
questions: [{
question: 'What would you like to do?',
header: 'Action',
multiSelect: false,
options: [
{ label: 'List Issues', description: 'Browse all issues with filters' },
{ label: 'View Issue', description: 'Detailed view of specific issue' },
{ label: 'Create Issue', description: 'Add new issue from text or GitHub' },
{ label: 'Edit Issue', description: 'Modify issue fields' },
{ label: 'Delete Issue', description: 'Remove issue(s)' },
{ label: 'Bulk Operations', description: 'Batch actions on multiple issues' }
]
}]
});
const selected = parseAnswer(answer);
switch (selected) {
case 'List Issues':
await listIssuesInteractive();
break;
case 'View Issue':
await viewIssueInteractive(preselectedIssue);
break;
case 'Create Issue':
await createIssueInteractive();
break;
case 'Edit Issue':
await editIssueInteractive(preselectedIssue);
break;
case 'Delete Issue':
await deleteIssueInteractive(preselectedIssue);
break;
case 'Bulk Operations':
await bulkOperationsInteractive();
break;
}
}
```
### Phase 3: List Issues
```javascript
async function listIssuesInteractive() {
// Ask for filter
const filterAnswer = AskUserQuestion({
questions: [{
question: 'Filter issues by status?',
header: 'Filter',
multiSelect: true,
options: [
{ label: 'All', description: 'Show all issues' },
{ label: 'Registered', description: 'New, unplanned issues' },
{ label: 'Planned', description: 'Issues with bound solutions' },
{ label: 'Queued', description: 'In execution queue' },
{ label: 'Executing', description: 'Currently being worked on' },
{ label: 'Completed', description: 'Finished issues' },
{ label: 'Failed', description: 'Failed issues' }
]
}]
});
const filters = parseMultiAnswer(filterAnswer);
// Fetch and filter issues
const result = Bash('ccw issue list --json');
let issues = JSON.parse(result) || [];
if (!filters.includes('All')) {
const statusMap = {
'Registered': 'registered',
'Planned': 'planned',
'Queued': 'queued',
'Executing': 'executing',
'Completed': 'completed',
'Failed': 'failed'
};
const allowedStatuses = filters.map(f => statusMap[f]).filter(Boolean);
issues = issues.filter(i => allowedStatuses.includes(i.status));
}
if (issues.length === 0) {
console.log('No issues found matching filters.');
return showMainMenu();
}
// Display issues table
console.log(`
## Issues (${issues.length})
| ID | Status | Priority | Title |
|----|--------|----------|-------|
${issues.map(i => `| ${i.id} | ${i.status} | P${i.priority} | ${i.title.substring(0, 40)} |`).join('\n')}
`);
// Ask for action on issue
const actionAnswer = AskUserQuestion({
questions: [{
question: 'Select an issue to view/edit, or return to menu:',
header: 'Select',
multiSelect: false,
options: [
...issues.slice(0, 10).map(i => ({
label: i.id,
description: i.title.substring(0, 50)
})),
{ label: 'Back to Menu', description: 'Return to main menu' }
]
}]
});
const selected = parseAnswer(actionAnswer);
if (selected === 'Back to Menu') {
return showMainMenu();
}
// View selected issue
await viewIssueInteractive(selected);
}
```
### Phase 4: View Issue
```javascript
async function viewIssueInteractive(issueId) {
if (!issueId) {
// Ask for issue ID
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const idAnswer = AskUserQuestion({
questions: [{
question: 'Select issue to view:',
header: 'Issue',
multiSelect: false,
options: issues.slice(0, 10).map(i => ({
label: i.id,
description: `${i.status} - ${i.title.substring(0, 40)}`
}))
}]
});
issueId = parseAnswer(idAnswer);
}
// Fetch detailed status
const result = Bash(`ccw issue status ${issueId} --json`);
const data = JSON.parse(result);
const issue = data.issue;
const solutions = data.solutions || [];
const bound = data.bound;
console.log(`
## Issue: ${issue.id}
**Title**: ${issue.title}
**Status**: ${issue.status}
**Priority**: P${issue.priority}
**Created**: ${issue.created_at}
**Updated**: ${issue.updated_at}
### Context
${issue.context || 'No context provided'}
### Solutions (${solutions.length})
${solutions.length === 0 ? 'No solutions registered' :
solutions.map(s => `- ${s.is_bound ? '◉' : '○'} ${s.id}: ${s.tasks?.length || 0} tasks`).join('\n')}
${bound ? `### Bound Solution: ${bound.id}\n**Tasks**: ${bound.tasks?.length || 0}` : ''}
`);
// Show tasks if bound solution exists
if (bound?.tasks?.length > 0) {
console.log(`
### Tasks
| ID | Action | Scope | Title |
|----|--------|-------|-------|
${bound.tasks.map(t => `| ${t.id} | ${t.action} | ${t.scope?.substring(0, 20) || '-'} | ${t.title.substring(0, 30)} |`).join('\n')}
`);
}
// Action menu
const actionAnswer = AskUserQuestion({
questions: [{
question: 'What would you like to do?',
header: 'Action',
multiSelect: false,
options: [
{ label: 'Edit Issue', description: 'Modify issue fields' },
{ label: 'Plan Issue', description: 'Generate solution (/issue:plan)' },
{ label: 'Add to Queue', description: 'Queue bound solution tasks' },
{ label: 'View Queue', description: 'See queue status' },
{ label: 'Delete Issue', description: 'Remove this issue' },
{ label: 'Back to Menu', description: 'Return to main menu' }
]
}]
});
const action = parseAnswer(actionAnswer);
switch (action) {
case 'Edit Issue':
await editIssueInteractive(issueId);
break;
case 'Plan Issue':
console.log(`Running: /issue:plan ${issueId}`);
// Invoke plan skill
break;
case 'Add to Queue':
Bash(`ccw issue queue add ${issueId}`);
console.log(`✓ Added ${issueId} tasks to queue`);
break;
case 'View Queue':
const queueOutput = Bash('ccw issue queue');
console.log(queueOutput);
break;
case 'Delete Issue':
await deleteIssueInteractive(issueId);
break;
default:
return showMainMenu();
}
}
```
### Phase 5: Edit Issue
```javascript
async function editIssueInteractive(issueId) {
if (!issueId) {
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const idAnswer = AskUserQuestion({
questions: [{
question: 'Select issue to edit:',
header: 'Issue',
multiSelect: false,
options: issues.slice(0, 10).map(i => ({
label: i.id,
description: `${i.status} - ${i.title.substring(0, 40)}`
}))
}]
});
issueId = parseAnswer(idAnswer);
}
// Get current issue data
const result = Bash(`ccw issue list ${issueId} --json`);
const issueData = JSON.parse(result);
const issue = issueData.issue || issueData;
// Ask which field to edit
const fieldAnswer = AskUserQuestion({
questions: [{
question: 'Which field to edit?',
header: 'Field',
multiSelect: false,
options: [
{ label: 'Title', description: `Current: ${issue.title?.substring(0, 40)}` },
{ label: 'Priority', description: `Current: P${issue.priority}` },
{ label: 'Status', description: `Current: ${issue.status}` },
{ label: 'Context', description: 'Edit problem description' },
{ label: 'Labels', description: `Current: ${issue.labels?.join(', ') || 'none'}` },
{ label: 'Back', description: 'Return without changes' }
]
}]
});
const field = parseAnswer(fieldAnswer);
if (field === 'Back') {
return viewIssueInteractive(issueId);
}
let updatePayload = {};
switch (field) {
case 'Title':
const titleAnswer = AskUserQuestion({
questions: [{
question: 'Enter new title (or select current to keep):',
header: 'Title',
multiSelect: false,
options: [
{ label: issue.title.substring(0, 50), description: 'Keep current title' }
]
}]
});
const newTitle = parseAnswer(titleAnswer);
if (newTitle && newTitle !== issue.title.substring(0, 50)) {
updatePayload.title = newTitle;
}
break;
case 'Priority':
const priorityAnswer = AskUserQuestion({
questions: [{
question: 'Select priority:',
header: 'Priority',
multiSelect: false,
options: [
{ label: 'P1 - Critical', description: 'Production blocking' },
{ label: 'P2 - High', description: 'Major functionality' },
{ label: 'P3 - Medium', description: 'Normal priority (default)' },
{ label: 'P4 - Low', description: 'Minor issues' },
{ label: 'P5 - Trivial', description: 'Nice to have' }
]
}]
});
const priorityStr = parseAnswer(priorityAnswer);
updatePayload.priority = parseInt(priorityStr.charAt(1));
break;
case 'Status':
const statusAnswer = AskUserQuestion({
questions: [{
question: 'Select status:',
header: 'Status',
multiSelect: false,
options: [
{ label: 'registered', description: 'New issue, not yet planned' },
{ label: 'planning', description: 'Solution being generated' },
{ label: 'planned', description: 'Solution bound, ready for queue' },
{ label: 'queued', description: 'In execution queue' },
{ label: 'executing', description: 'Currently being worked on' },
{ label: 'completed', description: 'All tasks finished' },
{ label: 'failed', description: 'Execution failed' },
{ label: 'paused', description: 'Temporarily on hold' }
]
}]
});
updatePayload.status = parseAnswer(statusAnswer);
break;
case 'Context':
console.log(`Current context:\n${issue.context || '(empty)'}\n`);
const contextAnswer = AskUserQuestion({
questions: [{
question: 'Enter new context (problem description):',
header: 'Context',
multiSelect: false,
options: [
{ label: 'Keep current', description: 'No changes' }
]
}]
});
const newContext = parseAnswer(contextAnswer);
if (newContext && newContext !== 'Keep current') {
updatePayload.context = newContext;
}
break;
case 'Labels':
const labelsAnswer = AskUserQuestion({
questions: [{
question: 'Enter labels (comma-separated):',
header: 'Labels',
multiSelect: false,
options: [
{ label: issue.labels?.join(',') || '', description: 'Keep current labels' }
]
}]
});
const labelsStr = parseAnswer(labelsAnswer);
if (labelsStr) {
updatePayload.labels = labelsStr.split(',').map(l => l.trim());
}
break;
}
// Apply update if any
if (Object.keys(updatePayload).length > 0) {
// Read, update, write issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
const idx = allIssues.findIndex(i => i.id === issueId);
if (idx !== -1) {
allIssues[idx] = {
...allIssues[idx],
...updatePayload,
updated_at: new Date().toISOString()
};
Write(issuesPath, allIssues.map(i => JSON.stringify(i)).join('\n'));
console.log(`✓ Updated ${issueId}: ${Object.keys(updatePayload).join(', ')}`);
}
}
// Continue editing or return
const continueAnswer = AskUserQuestion({
questions: [{
question: 'Continue editing?',
header: 'Continue',
multiSelect: false,
options: [
{ label: 'Edit Another Field', description: 'Continue editing this issue' },
{ label: 'View Issue', description: 'See updated issue' },
{ label: 'Back to Menu', description: 'Return to main menu' }
]
}]
});
const cont = parseAnswer(continueAnswer);
if (cont === 'Edit Another Field') {
await editIssueInteractive(issueId);
} else if (cont === 'View Issue') {
await viewIssueInteractive(issueId);
} else {
return showMainMenu();
}
}
```
### Phase 6: Delete Issue
```javascript
async function deleteIssueInteractive(issueId) {
if (!issueId) {
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const idAnswer = AskUserQuestion({
questions: [{
question: 'Select issue to delete:',
header: 'Delete',
multiSelect: false,
options: issues.slice(0, 10).map(i => ({
label: i.id,
description: `${i.status} - ${i.title.substring(0, 40)}`
}))
}]
});
issueId = parseAnswer(idAnswer);
}
// Confirm deletion
const confirmAnswer = AskUserQuestion({
questions: [{
question: `Delete issue ${issueId}? This will also remove associated solutions.`,
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Delete', description: 'Permanently remove issue and solutions' },
{ label: 'Cancel', description: 'Keep issue' }
]
}]
});
if (parseAnswer(confirmAnswer) !== 'Delete') {
console.log('Deletion cancelled.');
return showMainMenu();
}
// Remove from issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
const filtered = allIssues.filter(i => i.id !== issueId);
Write(issuesPath, filtered.map(i => JSON.stringify(i)).join('\n'));
// Remove solutions file if exists
const solPath = `.workflow/issues/solutions/${issueId}.jsonl`;
Bash(`rm -f "${solPath}" 2>/dev/null || true`);
// Remove from queue if present
const queuePath = '.workflow/issues/queue.json';
if (Bash(`test -f "${queuePath}" && echo exists`) === 'exists') {
const queue = JSON.parse(Bash(`cat "${queuePath}"`));
queue.queue = queue.queue.filter(q => q.issue_id !== issueId);
Write(queuePath, JSON.stringify(queue, null, 2));
}
console.log(`✓ Deleted issue ${issueId}`);
return showMainMenu();
}
```
### Phase 7: Bulk Operations
```javascript
async function bulkOperationsInteractive() {
const bulkAnswer = AskUserQuestion({
questions: [{
question: 'Select bulk operation:',
header: 'Bulk',
multiSelect: false,
options: [
{ label: 'Update Status', description: 'Change status of multiple issues' },
{ label: 'Update Priority', description: 'Change priority of multiple issues' },
{ label: 'Add Labels', description: 'Add labels to multiple issues' },
{ label: 'Delete Multiple', description: 'Remove multiple issues' },
{ label: 'Queue All Planned', description: 'Add all planned issues to queue' },
{ label: 'Retry All Failed', description: 'Reset all failed tasks to pending' },
{ label: 'Back', description: 'Return to main menu' }
]
}]
});
const operation = parseAnswer(bulkAnswer);
if (operation === 'Back') {
return showMainMenu();
}
// Get issues for selection
const allIssues = JSON.parse(Bash('ccw issue list --json') || '[]');
if (operation === 'Queue All Planned') {
const planned = allIssues.filter(i => i.status === 'planned' && i.bound_solution_id);
for (const issue of planned) {
Bash(`ccw issue queue add ${issue.id}`);
console.log(`✓ Queued ${issue.id}`);
}
console.log(`\n✓ Queued ${planned.length} issues`);
return showMainMenu();
}
if (operation === 'Retry All Failed') {
Bash('ccw issue retry');
console.log('✓ Reset all failed tasks to pending');
return showMainMenu();
}
// Multi-select issues
const selectAnswer = AskUserQuestion({
questions: [{
question: 'Select issues (multi-select):',
header: 'Select',
multiSelect: true,
options: allIssues.slice(0, 15).map(i => ({
label: i.id,
description: `${i.status} - ${i.title.substring(0, 30)}`
}))
}]
});
const selectedIds = parseMultiAnswer(selectAnswer);
if (selectedIds.length === 0) {
console.log('No issues selected.');
return showMainMenu();
}
// Execute bulk operation
const issuesPath = '.workflow/issues/issues.jsonl';
let issues = Bash(`cat "${issuesPath}"`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
switch (operation) {
case 'Update Status':
const statusAnswer = AskUserQuestion({
questions: [{
question: 'Select new status:',
header: 'Status',
multiSelect: false,
options: [
{ label: 'registered', description: 'Reset to registered' },
{ label: 'paused', description: 'Pause issues' },
{ label: 'completed', description: 'Mark completed' }
]
}]
});
const newStatus = parseAnswer(statusAnswer);
issues = issues.map(i =>
selectedIds.includes(i.id)
? { ...i, status: newStatus, updated_at: new Date().toISOString() }
: i
);
break;
case 'Update Priority':
const prioAnswer = AskUserQuestion({
questions: [{
question: 'Select new priority:',
header: 'Priority',
multiSelect: false,
options: [
{ label: 'P1', description: 'Critical' },
{ label: 'P2', description: 'High' },
{ label: 'P3', description: 'Medium' },
{ label: 'P4', description: 'Low' },
{ label: 'P5', description: 'Trivial' }
]
}]
});
const newPrio = parseInt(parseAnswer(prioAnswer).charAt(1));
issues = issues.map(i =>
selectedIds.includes(i.id)
? { ...i, priority: newPrio, updated_at: new Date().toISOString() }
: i
);
break;
case 'Add Labels':
const labelAnswer = AskUserQuestion({
questions: [{
question: 'Enter labels to add (comma-separated):',
header: 'Labels',
multiSelect: false,
options: [
{ label: 'bug', description: 'Bug fix' },
{ label: 'feature', description: 'New feature' },
{ label: 'urgent', description: 'Urgent priority' }
]
}]
});
const newLabels = parseAnswer(labelAnswer).split(',').map(l => l.trim());
issues = issues.map(i =>
selectedIds.includes(i.id)
? {
...i,
labels: [...new Set([...(i.labels || []), ...newLabels])],
updated_at: new Date().toISOString()
}
: i
);
break;
case 'Delete Multiple':
const confirmDelete = AskUserQuestion({
questions: [{
question: `Delete ${selectedIds.length} issues permanently?`,
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Delete All', description: 'Remove selected issues' },
{ label: 'Cancel', description: 'Keep issues' }
]
}]
});
if (parseAnswer(confirmDelete) === 'Delete All') {
issues = issues.filter(i => !selectedIds.includes(i.id));
// Clean up solutions
for (const id of selectedIds) {
Bash(`rm -f ".workflow/issues/solutions/${id}.jsonl" 2>/dev/null || true`);
}
} else {
console.log('Deletion cancelled.');
return showMainMenu();
}
break;
}
Write(issuesPath, issues.map(i => JSON.stringify(i)).join('\n'));
console.log(`✓ Updated ${selectedIds.length} issues`);
return showMainMenu();
}
```
### Phase 8: Create Issue (Redirect)
```javascript
async function createIssueInteractive() {
const typeAnswer = AskUserQuestion({
questions: [{
question: 'Create issue from:',
header: 'Source',
multiSelect: false,
options: [
{ label: 'GitHub URL', description: 'Import from GitHub issue' },
{ label: 'Text Description', description: 'Enter problem description' },
{ label: 'Quick Create', description: 'Just title and priority' }
]
}]
});
const type = parseAnswer(typeAnswer);
if (type === 'GitHub URL' || type === 'Text Description') {
console.log('Use /issue:new for structured issue creation');
console.log('Example: /issue:new https://github.com/org/repo/issues/123');
return showMainMenu();
}
// Quick create
const titleAnswer = AskUserQuestion({
questions: [{
question: 'Enter issue title:',
header: 'Title',
multiSelect: false,
options: [
{ label: 'Authentication Bug', description: 'Example title' }
]
}]
});
const title = parseAnswer(titleAnswer);
const prioAnswer = AskUserQuestion({
questions: [{
question: 'Select priority:',
header: 'Priority',
multiSelect: false,
options: [
{ label: 'P3 - Medium (Recommended)', description: 'Normal priority' },
{ label: 'P1 - Critical', description: 'Production blocking' },
{ label: 'P2 - High', description: 'Major functionality' }
]
}]
});
const priority = parseInt(parseAnswer(prioAnswer).charAt(1));
// Generate ID and create
const id = `ISS-${Date.now()}`;
Bash(`ccw issue init ${id} --title "${title}" --priority ${priority}`);
console.log(`✓ Created issue ${id}`);
await viewIssueInteractive(id);
}
```
## Helper Functions
```javascript
function parseAnswer(answer) {
// Extract selected option from AskUserQuestion response
if (typeof answer === 'string') return answer;
if (answer.answers) {
const values = Object.values(answer.answers);
return values[0] || '';
}
return '';
}
function parseMultiAnswer(answer) {
// Extract multiple selections
if (typeof answer === 'string') return answer.split(',').map(s => s.trim());
if (answer.answers) {
const values = Object.values(answer.answers);
return values.flatMap(v => v.split(',').map(s => s.trim()));
}
return [];
}
function parseFlags(input) {
const flags = {};
const matches = input.matchAll(/--(\w+)\s+([^\s-]+)/g);
for (const match of matches) {
flags[match[1]] = match[2];
}
return flags;
}
function parseIssueId(input) {
const match = input.match(/^([A-Z]+-\d+|ISS-\d+|GH-\d+)/i);
return match ? match[1] : null;
}
```
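Worth noting: `parseFlags` only captures flags whose name is word characters and whose value does not start with a dash, so hyphenated or boolean flags pass through silently:

```javascript
parseFlags('--parallel 2 --executor codex');
// → { parallel: '2', executor: 'codex' }
parseFlags('--dry-run');
// → {} - '\w+' stops at the hyphen and no value follows, so nothing is captured
```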
## Data Files

| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |
## Error Handling
| Error | Resolution |
|-------|------------|
| Issue not found | Show available issues, ask for correction |
| Invalid selection | Show error, re-prompt |
| Write failure | Check permissions, show error |
| Queue operation fails | Show ccw issue error, suggest fix |
## Related Commands
- `/issue:plan` - Plan solution for issue
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute queued tasks
- `ccw issue list` - CLI list command
- `ccw issue status` - CLI status command

```typescript
interface Issue {
// ...
status: 'registered'; // Initial status
priority: number; // 1 (critical) to 5 (low)
context: string; // Problem description
source: 'github' | 'text' | 'discovery'; // Input source type
source_url?: string; // GitHub URL if applicable
labels?: string[]; // Categorization labels
// ...
affected_components?: string[];// Files/modules affected
reproduction_steps?: string[]; // Steps to reproduce
// Discovery context (when source='discovery')
discovery_context?: {
discovery_id: string; // Source discovery session
perspective: string; // bug, test, quality, etc.
category: string; // Finding category
file: string; // Primary affected file
line: number; // Line number
snippet?: string; // Code snippet
confidence: number; // Agent confidence (0-1)
suggested_fix?: string; // Suggested remediation
};
// Closed-loop requirements (guide plan generation)
lifecycle_requirements: {
test_strategy: 'unit' | 'integration' | 'e2e' | 'manual' | 'auto';
// ...
};
}
```
## Lifecycle Requirements
The `lifecycle_requirements` field guides downstream commands (`/issue:plan`, `/issue:execute`):
| Field | Options | Purpose |
|-------|---------|---------|
| `test_strategy` | `unit`, `integration`, `e2e`, `manual`, `auto` | Which test types to generate |
| `regression_scope` | `affected`, `related`, `full` | Which tests to run for regression |
| `acceptance_type` | `automated`, `manual`, `both` | How to verify completion |
| `commit_strategy` | `per-task`, `squash`, `atomic` | Commit granularity |
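For illustration, a hypothetical `issues.jsonl` record combining a discovery source with lifecycle requirements (stored as a single line; pretty-printed here, all values invented):

```json
{
  "id": "ISS-20251228-001",
  "title": "Null dereference in auth middleware",
  "status": "registered",
  "priority": 2,
  "context": "Token payload may be missing when the header is malformed",
  "source": "discovery",
  "discovery_context": {
    "discovery_id": "DIS-20251228-001",
    "perspective": "bug",
    "category": "null-safety",
    "file": "src/middleware/auth.ts",
    "line": 42,
    "confidence": 0.85,
    "suggested_fix": "Guard against missing payload before property access"
  },
  "lifecycle_requirements": {
    "test_strategy": "unit",
    "regression_scope": "affected",
    "acceptance_type": "automated",
    "commit_strategy": "per-task"
  }
}
```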
> **Note**: Task structure (SolutionTask) is defined in `/issue:plan` - see `.claude/commands/issue/plan.md`
## Usage

---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] [--batch-size 3]"
argument-hint: "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] "
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---
## Overview
Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow.
## Output Requirements
**Generate Files:**
1. `.workflow/issues/solutions/{issue-id}.jsonl` - Solution with tasks for each issue
**Return Summary:**
```json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
"pending_selection": [{ "issue_id": "...", "solutions": [...] }],
"conflicts": [{ "file": "...", "issues": [...] }]
}
```
**Completion Criteria:**
- [ ] Solution file generated for each issue
- [ ] Single solution → auto-bound via `ccw issue bind`
- [ ] Multiple solutions → returned for user selection
- [ ] Tasks conform to schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
- [ ] Each task has quantified `acceptance.criteria`
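A minimal post-run check for the first two criteria above (a sketch assuming Node.js and the flat JSONL layout):

```javascript
const fs = require('fs');

function verifyPlanned(issueIds) {
  return issueIds.map(id => {
    const path = `.workflow/issues/solutions/${id}.jsonl`;
    if (!fs.existsSync(path)) return { id, ok: false, reason: 'no solution file' };
    // One solution per line; is_bound marks the bound one
    const solutions = fs.readFileSync(path, 'utf8')
      .split('\n').filter(Boolean).map(line => JSON.parse(line));
    return { id, ok: solutions.length > 0, bound: solutions.some(s => s.is_bound) };
  });
}
```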
**Core capabilities:**
- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and delivery criteria
- Automatic solution registration and binding
## Storage Structure (Flat JSONL)
## Execution Process
```
Phase 1: Issue Loading
├─ Parse input (single, comma-separated, or --all-pending)
├─ Fetch issue metadata (ID, title, tags)
├─ Validate issues exist (create if needed)
└─ Group by similarity (shared tags or title keywords, max 3 per batch)
Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
...
```
## Implementation
### Phase 1: Issue Loading (ID + Title + Tags)
```javascript
const batchSize = flags.batchSize || 3;
let issues = []; // {id, title, tags}
if (flags.allPending) {
// Get pending issues with metadata via CLI (JSON output)
const result = Bash(`ccw issue list --status pending,registered --json`).trim();
const parsed = result ? JSON.parse(result) : [];
issues = parsed.map(i => ({ id: i.id, title: i.title || '', tags: i.tags || [] }));
if (issues.length === 0) {
console.log('No pending issues found.');
return;
}
console.log(`Found ${issues.length} pending issues`);
} else {
// Parse comma-separated issue IDs, fetch metadata
const ids = userInput.includes(',')
? userInput.split(',').map(s => s.trim())
: [userInput.trim()];
for (const id of ids) {
Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
const info = Bash(`ccw issue status ${id} --json`).trim();
const parsed = info ? JSON.parse(info) : {};
issues.push({ id, title: parsed.title || '', tags: parsed.tags || [] });
}
}
// Intelligent grouping by similarity (tags → title keywords)
function groupBySimilarity(issues, maxSize) {
const batches = [];
const used = new Set();
for (const issue of issues) {
if (used.has(issue.id)) continue;
const batch = [issue];
used.add(issue.id);
const issueTags = new Set(issue.tags);
const issueWords = new Set(issue.title.toLowerCase().split(/\s+/));
// Find similar issues
for (const other of issues) {
if (used.has(other.id) || batch.length >= maxSize) continue;
// Similarity: shared tags or shared title keywords
const sharedTags = other.tags.filter(t => issueTags.has(t)).length;
const otherWords = other.title.toLowerCase().split(/\s+/);
const sharedWords = otherWords.filter(w => issueWords.has(w) && w.length > 3).length;
if (sharedTags > 0 || sharedWords >= 2) {
batch.push(other);
used.add(other.id);
}
}
batches.push(batch);
}
return batches;
}
const batches = groupBySimilarity(issues, batchSize);
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es) (grouped by similarity)`);
TodoWrite({
todos: batches.map((_, i) => ({
content: `Plan batch ${i+1}`,
status: 'pending',
activeForm: `Planning batch ${i+1}`
}))
});
```
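For example (hypothetical issues; the first two share the `auth` tag, so they land in one batch):

```javascript
groupBySimilarity([
  { id: 'ISS-1', title: 'Fix login token refresh', tags: ['auth'] },
  { id: 'ISS-2', title: 'Login crash on token refresh', tags: ['auth', 'ui'] },
  { id: 'ISS-3', title: 'Export CSV report', tags: ['reports'] }
], 3);
// → [ [ISS-1, ISS-2], [ISS-3] ]
```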
### Phase 2: Unified Explore + Plan (issue-plan-agent) - PARALLEL
```javascript
Bash(`mkdir -p .workflow/issues/solutions`);
const pendingSelections = []; // Collect multi-solution issues for user selection
// Build prompts for all batches
const agentTasks = batches.map((batch, batchIndex) => {
const issueList = batch.map(i => `- ${i.id}: ${i.title}${i.tags.length ? ` [${i.tags.join(', ')}]` : ''}`).join('\n');
const batchIds = batch.map(i => i.id);
const issuePrompt = `
## Plan Issues
**Issues** (grouped by similarity):
${issueList}
## Project Root
${process.cwd()}
### Project Context (MANDATORY - Read Both Files First)
1. Read: .workflow/project-tech.json (technology stack, architecture, key components)
2. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)
**CRITICAL**: All solution tasks MUST comply with constraints in project-guidelines.json
### Steps
1. Fetch: \`ccw issue status <id> --json\`
2. Load project context (project-tech.json + project-guidelines.json)
3. **If source=discovery**: Use discovery_context (file, line, snippet, suggested_fix) as planning hints
4. Explore (ACE) → Plan solution (respecting guidelines)
5. Register & bind: \`ccw issue bind <id> --solution <file>\`
### Generate Files
\`.workflow/issues/solutions/{issue-id}.jsonl\` - Solution with tasks (schema: cat .claude/workflows/cli-templates/schemas/solution-schema.json)
### Binding Rules
- **Single solution**: Auto-bind via \`ccw issue bind <id> --solution <file>\`
- **Multiple solutions**: Register only, return for user selection
### Return Summary
\`\`\`json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
"pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "description": "...", "task_count": N }] }],
"conflicts": [{ "file": "...", "issues": [...] }]
}
\`\`\`
`;
return { batchIndex, batchIds, issuePrompt, batch };
});
// Launch agents in parallel (max 10 concurrent)
const MAX_PARALLEL = 10;
for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
const chunk = agentTasks.slice(i, i + MAX_PARALLEL);
const taskIds = [];
// Launch chunk in parallel
for (const { batchIndex, batchIds, issuePrompt, batch } of chunk) {
updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');
const taskId = Task(
subagent_type="issue-plan-agent",
run_in_background=true,
description=`Explore & plan ${batch.length} issues: ${batchIds.join(', ')}`,
prompt=issuePrompt
);
taskIds.push({ taskId, batchIndex });
}
console.log(`Launched ${taskIds.length} agents (batch ${i/MAX_PARALLEL + 1}/${Math.ceil(agentTasks.length/MAX_PARALLEL)})...`);
// Collect results from this chunk
for (const { taskId, batchIndex } of taskIds) {
const result = TaskOutput(task_id=taskId, block=true);
const summary = JSON.parse(result);
for (const item of summary.bound || []) {
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
}
// Collect and notify pending selections
for (const pending of summary.pending_selection || []) {
console.log(`${pending.issue_id}: ${pending.solutions.length} solutions → awaiting selection`);
pendingSelections.push(pending);
}
if (summary.conflicts?.length > 0) {
console.log(`⚠ Conflicts: ${summary.conflicts.map(c => c.file).join(', ')}`);
}
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
}
```
### Phase 3: Multi-Solution Selection (MANDATORY when pendingSelections > 0)
```javascript
// MUST trigger user selection when multiple solutions exist
if (pendingSelections.length > 0) {
console.log(`\n## User Selection Required: ${pendingSelections.length} issue(s) have multiple solutions\n`);
const answer = AskUserQuestion({
questions: pendingSelections.map(({ issue_id, solutions }) => ({
question: `Select solution for ${issue_id}:`,
header: issue_id,
multiSelect: false,
options: solutions.map(s => ({
label: `${s.id} (${s.task_count} tasks)`,
description: s.description
}))
}))
});
// Bind user-selected solutions
for (const { issue_id } of pendingSelections) {
const selectedId = extractSelectedSolutionId(answer, issue_id);
if (selectedId) {
Bash(`ccw issue bind ${issue_id} ${selectedId}`);
console.log(`${issue_id}: ${selectedId} bound`);
}
}
}
```
### Phase 4: Summary
```javascript
// Count planned issues via CLI
const plannedIds = Bash(`ccw issue list --status planned --ids`).trim();
const plannedCount = plannedIds ? plannedIds.split('\n').length : 0;
console.log(`
## Done: ${issues.length} issues → ${plannedCount} planned
Next: \`/issue:queue\` → \`/issue:execute\`
`);
```
## Solution Format (Closed-Loop Tasks)
Each solution line in `solutions/{issue-id}.jsonl`:
```json
{
"id": "SOL-20251226-001",
"description": "Direct Implementation",
"tasks": [
{
"id": "T1",
"title": "Create auth middleware",
"scope": "src/middleware/",
"action": "Create",
"description": "Create JWT validation middleware",
"modification_points": [
{ "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
],
"implementation": [
"Create auth.ts file in src/middleware/",
"Implement JWT token validation using jsonwebtoken",
"Add error handling for invalid/expired tokens",
"Export middleware function"
],
"test": {
"unit": [
"Test valid token passes through",
"Test invalid token returns 401",
"Test expired token returns 401",
"Test missing token returns 401"
],
"commands": [
"npm test -- --grep 'auth middleware'",
"npm run test:coverage -- src/middleware/auth.ts"
],
"coverage_target": 80
},
"regression": [
"npm test -- --grep 'protected routes'",
"npm run test:integration -- auth"
],
"acceptance": {
"criteria": [
"Middleware validates JWT tokens successfully",
"Returns 401 for invalid or missing tokens",
"Passes decoded token to request context"
],
"verification": [
"curl -H 'Authorization: Bearer valid_token' /api/protected → 200",
"curl /api/protected → 401",
"curl -H 'Authorization: Bearer invalid' /api/protected → 401"
]
},
"commit": {
"type": "feat",
"scope": "auth",
"message_template": "feat(auth): add JWT validation middleware\n\n- Implement token validation\n- Add error handling for invalid tokens\n- Export for route protection",
"breaking": false
},
"depends_on": [],
"estimated_minutes": 30,
"executor": "codex"
}
],
"exploration_context": {
"relevant_files": ["src/config/auth.ts"],
"patterns": "Follow existing middleware pattern"
},
"is_bound": true,
"created_at": "2025-12-26T10:00:00Z",
"bound_at": "2025-12-26T10:05:00Z"
}
```
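A lightweight structural check for such a line (a sketch; it validates only the fields the execute flow consumes, not the full schema):

```javascript
function validateSolutionLine(sol) {
  const problems = [];
  if (!sol.id) problems.push('missing solution id');
  for (const t of sol.tasks || []) {
    if (!t.implementation?.length) problems.push(`${t.id}: no implementation steps`);
    if (!t.test?.commands?.length) problems.push(`${t.id}: no test commands`);
    if (!t.acceptance?.criteria?.length) problems.push(`${t.id}: no acceptance criteria`);
    if (!t.commit?.message_template) problems.push(`${t.id}: no commit message template`);
  }
  return problems; // empty array = structurally complete
}
```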
## Error Handling
| Error | Resolution |
|-------|------------|
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |
## Related Commands
- `/issue:queue` - Form execution queue from bound solutions

---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent (solution-level)
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---
## Overview
Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves **inter-solution** conflicts, and creates an ordered execution queue at **solution level**.
**Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks.
## Output Requirements
**Generate Files:**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with solutions, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry
**Return Summary:**
```json
{
"queue_id": "QUE-20251227-143000",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```
**Completion Criteria:**
- [ ] Queue JSON generated with valid DAG (no cycles between solutions)
- [ ] All inter-solution file conflicts resolved with rationale
- [ ] Semantic priority calculated for each solution
- [ ] Execution groups assigned (parallel P* / sequential S*)
- [ ] Issue statuses updated to `queued` via `ccw issue update`
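For the no-cycles criterion above, a DFS sketch over queue items (assumes `depends_on` holds `item_id` values):

```javascript
function hasCycle(solutions) {
  const deps = new Map(solutions.map(s => [s.item_id, s.depends_on || []]));
  const state = new Map(); // 1 = on current path, 2 = fully explored
  const visit = (id) => {
    if (state.get(id) === 1) return true;   // back edge → cycle
    if (state.get(id) === 2) return false;
    state.set(id, 1);
    const cyclic = (deps.get(id) || []).some(visit);
    state.set(id, 2);
    return cyclic;
  };
  return solutions.some(s => visit(s.item_id));
}
```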
**Core capabilities:**
- **Agent-driven**: issue-queue-agent handles all ordering logic
- ACE semantic search for relationship discovery
- **Solution-level granularity**: Queue items are solutions, not tasks
- Inter-solution dependency DAG (based on file conflicts)
- File conflict detection between solutions
- Semantic priority calculation per solution (0.0-1.0)
- Parallel/Sequential group assignment for solutions
## Storage Structure (Queue History)
```json
{
"queues": [
{
"id": "QUE-20251227-143000",
"status": "active",
"issue_ids": ["GH-123", "GH-124"],
"total_tasks": 8,
"completed_tasks": 3,
"issue_ids": ["ISS-xxx", "ISS-yyy"],
"total_solutions": 3,
"completed_solutions": 1,
"created_at": "2025-12-27T14:30:00Z"
}
]
}
```
### Queue File Schema (Solution-Level)
```json
{
"id": "QUE-20251227-143000",
"status": "active",
"solutions": [
{
"item_id": "S-1",
"issue_id": "ISS-20251227-003",
"solution_id": "SOL-20251227-003",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.8,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts", "src/utils.ts"],
"task_count": 3
},
{
"id": "QUE-20251226-100000",
"status": "completed",
"issue_ids": ["GH-120"],
"total_tasks": 5,
"completed_tasks": 5,
"created_at": "2025-12-26T10:00:00Z",
"completed_at": "2025-12-26T12:30:00Z"
"item_id": "S-2",
"issue_id": "ISS-20251227-001",
"solution_id": "SOL-20251227-001",
"status": "pending",
"execution_order": 2,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.7,
"assigned_executor": "codex",
"files_touched": ["src/api.ts"],
"task_count": 2
},
{
"item_id": "S-3",
"issue_id": "ISS-20251227-002",
"solution_id": "SOL-20251227-002",
"status": "pending",
"execution_order": 3,
"execution_group": "S2",
"depends_on": ["S-1"],
"semantic_priority": 0.5,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts"],
"task_count": 4
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"solutions": ["S-1", "S-3"],
"resolution": "sequential",
"resolution_order": ["S-1", "S-3"],
"rationale": "S-1 creates auth module, S-3 extends it"
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"] },
{ "id": "S2", "type": "sequential", "solutions": ["S-3"] }
]
}
```
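The `conflicts` array can be derived mechanically from `files_touched` overlaps. A minimal sketch of that check (the helper name is illustrative, not part of the agent contract):
```javascript
// Sketch: two solutions conflict when their files_touched arrays intersect.
function detectFileConflicts(solutions) {
  const conflicts = [];
  for (let i = 0; i < solutions.length; i++) {
    for (let j = i + 1; j < solutions.length; j++) {
      const a = solutions[i], b = solutions[j];
      const shared = a.files_touched.filter(f => b.files_touched.includes(f));
      for (const file of shared) {
        conflicts.push({
          type: 'file_conflict',
          file,
          solutions: [a.item_id, b.item_id],
          resolution: 'sequential' // rationale and order are decided by the agent
        });
      }
    }
  }
  return conflicts;
}
```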
@@ -77,10 +159,12 @@ Queue formation command using **issue-queue-agent** that analyzes all bound solu
# Flags
--issue <id> Form queue for specific issue only
--append <id> Append issue to active queue (don't create new)
--list List all queues with status
--switch <queue-id> Switch active queue
--archive Archive current queue (mark completed)
--clear <queue-id> Delete a queue from history
# CLI subcommands (ccw issue queue ...)
ccw issue queue list List all queues with status
ccw issue queue switch <queue-id> Switch active queue
ccw issue queue archive Archive current queue
ccw issue queue delete <queue-id> Delete queue from history
```
## Execution Process
@@ -91,21 +175,22 @@ Phase 1: Solution Loading
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
├─ Extract tasks from bound solutions
├─ Collect files_touched from all tasks in solution
└─ Build solution objects (NOT individual tasks)
Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all tasks
├─ Launch issue-queue-agent with all solutions
├─ Agent performs:
│ ├─ Build dependency DAG from depends_on
│ ├─ Detect file overlaps between solutions
│ ├─ Build dependency DAG from file conflicts
│ ├─ Detect circular dependencies
│ ├─ Identify file modification conflicts
│ ├─ Resolve conflicts using ordering rules
│ ├─ Calculate semantic priority (0.0-1.0)
│ ├─ Resolve conflicts using priority rules
│ ├─ Calculate semantic priority per solution
│ └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered tasks
└─ Output: queue JSON with ordered solutions (S-1, S-2, ...)
Phase 5: Queue Output
├─ Write queue.json
├─ Write queue.json with solutions array
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```
@@ -114,6 +199,8 @@ Phase 5: Queue Output
### Phase 1: Solution Loading
**NOTE**: Execute code directly. DO NOT pre-read solution files - Bash cat handles all reading.
```javascript
// Load issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
@@ -133,8 +220,8 @@ if (plannedIssues.length === 0) {
return;
}
// Load all tasks from bound solutions
const allTasks = [];
// Load bound solutions (not individual tasks)
const allSolutions = [];
for (const issue of plannedIssues) {
const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
@@ -150,181 +237,121 @@ for (const issue of plannedIssues) {
continue;
}
// Collect all files touched by this solution
const filesTouched = new Set();
for (const task of boundSol.tasks || []) {
allTasks.push({
issue_id: issue.id,
solution_id: issue.bound_solution_id,
task,
exploration_context: boundSol.exploration_context
});
for (const mp of task.modification_points || []) {
filesTouched.add(mp.file);
}
}
allSolutions.push({
issue_id: issue.id,
solution_id: issue.bound_solution_id,
task_count: boundSol.tasks?.length || 0,
files_touched: Array.from(filesTouched),
priority: issue.priority || 'medium'
});
}
console.log(`Loaded ${allTasks.length} tasks from ${plannedIssues.length} issues`);
console.log(`Loaded ${allSolutions.length} solutions from ${plannedIssues.length} issues`);
```
### Phase 2-4: Agent-Driven Queue Formation
```javascript
// Launch issue-queue-agent to handle all ordering logic
// Generate queue-id ONCE here, pass to agent
const now = new Date();
const ts = now.toISOString().replace(/[-:T]/g, '').slice(0, 14); // YYYYMMDDHHMMSS
const queueId = `QUE-${ts.slice(0, 8)}-${ts.slice(8)}`; // QUE-YYYYMMDD-HHMMSS
// Build minimal prompt - agent orders SOLUTIONS, not tasks
const agentPrompt = `
## Tasks to Order
## Order Solutions
${JSON.stringify(allTasks, null, 2)}
**Queue ID**: ${queueId}
**Solutions**: ${allSolutions.length} from ${plannedIssues.length} issues
**Project Root**: ${process.cwd()}
## Project Root
${process.cwd()}
### Input (Solution-Level)
\`\`\`json
${JSON.stringify(allSolutions, null, 2)}
\`\`\`
## Requirements
1. Build dependency DAG from depends_on fields
2. Detect circular dependencies (abort if found)
3. Identify file modification conflicts
4. Resolve conflicts using ordering rules:
- Create before Update/Implement
- Foundation scopes (config/types) before implementation
- Core logic before tests
5. Calculate semantic priority (0.0-1.0) for each task
6. Assign execution groups (parallel P* / sequential S*)
7. Output queue JSON
### Steps
1. Parse solutions: Extract solution IDs, files_touched, task_count, priority
2. Detect conflicts: Find file overlaps between solutions (files_touched intersection)
3. Build DAG: Create dependency edges where solutions share files
4. Detect cycles: Verify no circular dependencies (abort if found)
5. Resolve conflicts: Apply ordering rules based on action types
6. Calculate priority: Compute semantic priority (0.0-1.0) per solution
7. Assign groups: Parallel (P*) for no-conflict, Sequential (S*) for conflicts
8. Generate queue: Write queue JSON with ordered solutions
9. Update index: Update queues/index.json with new queue entry
### Rules
- **Solution Granularity**: Queue items are solutions, NOT individual tasks
- **DAG Validity**: Output must be valid DAG with no circular dependencies
- **Conflict Detection**: Two solutions conflict if files_touched intersect
- **Ordering Priority**:
1. Higher issue priority first (critical > high > medium > low)
2. Fewer dependencies first (foundation solutions)
3. More tasks = higher priority (larger impact)
- **Parallel Safety**: Solutions in same parallel group must have NO file overlaps
- **Queue Item ID Format**: \`S-N\` (S-1, S-2, S-3, ...)
- **Queue ID**: Use the provided Queue ID (passed above), do NOT generate new one
### Generate Files (STRICT - only these 2)
1. \`.workflow/issues/queues/{Queue ID}.json\` - Use Queue ID from above
2. \`.workflow/issues/queues/index.json\` - Update existing index
Write ONLY these 2 files, using the provided Queue ID.
### Return Summary
\`\`\`json
{
"queue_id": "QUE-YYYYMMDD-HHMMSS",
"total_solutions": N,
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["ISS-xxx"]
}
\`\`\`
`;
const result = Task(
subagent_type="issue-queue-agent",
run_in_background=false,
description=`Order ${allTasks.length} tasks from ${plannedIssues.length} issues`,
description=`Order ${allSolutions.length} solutions`,
prompt=agentPrompt
);
// Parse agent output
const agentOutput = JSON.parse(result);
if (!agentOutput.success) {
console.error(`Queue formation failed: ${agentOutput.error}`);
if (agentOutput.cycles) {
console.error('Circular dependencies:', agentOutput.cycles.join(', '));
}
return;
}
const summary = JSON.parse(result);
```
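For reference, the cycle check in step 4 can be approximated with Kahn's algorithm over the `depends_on` edges. A minimal sketch, assuming solution objects shaped like the queue schema above:
```javascript
// Sketch: detect circular dependencies between queued solutions (Kahn's algorithm).
function hasCycle(solutions) {
  // indegree = number of unresolved dependencies per solution
  const indegree = new Map(solutions.map(s => [s.item_id, s.depends_on.length]));
  const ready = solutions.filter(s => indegree.get(s.item_id) === 0).map(s => s.item_id);
  let visited = 0;
  while (ready.length > 0) {
    const id = ready.shift();
    visited++;
    // Release every solution that was waiting on `id`
    for (const s of solutions) {
      if (s.depends_on.includes(id)) {
        indegree.set(s.item_id, indegree.get(s.item_id) - 1);
        if (indegree.get(s.item_id) === 0) ready.push(s.item_id);
      }
    }
  }
  return visited !== solutions.length; // true → at least one cycle, abort
}
```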
### Phase 5: Queue Output & Summary
### Phase 5: Summary & Status Update
```javascript
const queueOutput = agentOutput.output;
// Write queue.json
Write('.workflow/issues/queue.json', JSON.stringify(queueOutput, null, 2));
// Update issue statuses in issues.jsonl
const updatedIssues = allIssues.map(issue => {
if (plannedIssues.find(p => p.id === issue.id)) {
return {
...issue,
status: 'queued',
queued_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
}
return issue;
});
Write(issuesPath, updatedIssues.map(i => JSON.stringify(i)).join('\n'));
// Display summary
// Agent already generated queue files, use summary
console.log(`
## Queue Formed
## Queue Formed: ${summary.queue_id}
**Total Tasks**: ${queueOutput.queue.length}
**Issues**: ${plannedIssues.length}
**Conflicts**: ${queueOutput.conflicts?.length || 0} (${queueOutput._metadata?.resolved_conflicts || 0} resolved)
**Solutions**: ${summary.total_solutions}
**Tasks**: ${summary.total_tasks}
**Issues**: ${summary.issues_queued.join(', ')}
**Groups**: ${summary.execution_groups.map(g => `${g.id}(${g.count})`).join(', ')}
**Conflicts Resolved**: ${summary.conflicts_resolved}
### Execution Groups
${(queueOutput.execution_groups || []).map(g => {
const type = g.type === 'parallel' ? 'Parallel' : 'Sequential';
return `- ${g.id} (${type}): ${g.task_count} tasks`;
}).join('\n')}
### Next Steps
1. Review queue: \`ccw issue queue list\`
2. Execute: \`/issue:execute\`
Next: \`/issue:execute\`
`);
```
## Queue Schema
Output `queues/{queue-id}.json`:
```json
{
"id": "QUE-20251227-143000",
"name": "Auth Feature Queue",
"status": "active",
"issue_ids": ["GH-123", "GH-124"],
"queue": [
{
"queue_id": "Q-001",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task_id": "T1",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.7,
"queued_at": "2025-12-26T10:00:00Z"
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"tasks": ["GH-123:T1", "GH-124:T2"],
"resolution": "sequential",
"resolution_order": ["GH-123:T1", "GH-124:T2"],
"rationale": "T1 creates file before T2 updates",
"resolved": true
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "task_count": 3, "tasks": ["GH-123:T1", "GH-124:T1", "GH-125:T1"] },
{ "id": "S2", "type": "sequential", "task_count": 2, "tasks": ["GH-123:T2", "GH-124:T2"] }
],
"_metadata": {
"version": "2.0",
"total_tasks": 5,
"pending_count": 3,
"completed_count": 2,
"failed_count": 0,
"created_at": "2025-12-26T10:00:00Z",
"updated_at": "2025-12-26T11:00:00Z",
"source": "issue-queue-agent"
}
// Update issue statuses via CLI (use `update` for pure field changes)
// Note: `queue add` has its own logic; here we only need status update
for (const issueId of summary.issues_queued) {
Bash(`ccw issue update ${issueId} --status queued`);
}
```
### Queue ID Format
```
QUE-YYYYMMDD-HHMMSS
Example: QUE-20251227-143052
```
## Semantic Priority Rules
| Factor | Priority Boost |
|--------|---------------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Config/Types scope | +0.1 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
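A hedged sketch of how these boosts might combine into the final 0.0-1.0 score (the 0.5 base and the clamping are assumptions; the table above only defines the boosts):
```javascript
// Sketch: combine boost factors into a clamped 0.0-1.0 priority.
// The 0.5 base value is an assumption, not specified by the table.
const BOOSTS = {
  create: 0.2, configure: 0.15, implement: 0.1,
  refactor: -0.05, test: -0.1, delete: -0.15
};
function semanticPriority(actions, scopes) {
  let score = 0.5;
  for (const action of actions) score += BOOSTS[action] ?? 0;
  if (scopes.includes('config') || scopes.includes('types')) score += 0.1;
  return Math.min(1, Math.max(0, score));
}
// e.g. semanticPriority(['create', 'implement'], ['config']) // ≈ 0.9
```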
## Error Handling
| Error | Resolution |
@@ -334,19 +361,6 @@ QUE-YYYYMMDD-HHMMSS
| Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn |
## Agent Integration
The command uses `issue-queue-agent` which:
1. Builds dependency DAG from task depends_on fields
2. Detects circular dependencies (aborts if found)
3. Identifies file modification conflicts across issues
4. Resolves conflicts using semantic ordering rules
5. Calculates priority (0.0-1.0) for each task
6. Assigns parallel/sequential execution groups
7. Outputs structured queue JSON
See `.claude/agents/issue-queue-agent.md` for agent specification.
## Related Commands
- `/issue:plan` - Plan issues and bind solutions

View File

@@ -10,7 +10,11 @@ examples:
# Workflow Init Command (/workflow:init)
## Overview
Initialize `.workflow/project.json` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
Initialize `.workflow/project-tech.json` and `.workflow/project-guidelines.json` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
**Dual File System**:
- `project-tech.json`: Auto-generated technical analysis (stack, architecture, components)
- `project-guidelines.json`: User-maintained rules and constraints (created as scaffold)
**Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow without interrupting the task flow.
@@ -27,7 +31,7 @@ Input Parsing:
└─ Parse --regenerate flag → regenerate = true | false
Decision:
├─ EXISTS + no --regenerate → Exit: "Already initialized"
├─ BOTH_EXIST + no --regenerate → Exit: "Already initialized"
├─ EXISTS + --regenerate → Backup existing → Continue analysis
└─ NOT_FOUND → Continue analysis
@@ -37,11 +41,14 @@ Analysis Flow:
│ ├─ Structural scan (get_modules_by_depth.sh, find, wc)
│ ├─ Semantic analysis (Gemini CLI)
│ ├─ Synthesis and merge
│ └─ Write .workflow/project.json
│ └─ Write .workflow/project-tech.json
├─ Create guidelines scaffold (if not exists)
│ └─ Write .workflow/project-guidelines.json (empty structure)
└─ Display summary
Output:
└─ .workflow/project.json (+ .backup if regenerate)
├─ .workflow/project-tech.json (+ .backup if regenerate)
└─ .workflow/project-guidelines.json (scaffold if new)
```
## Implementation
@@ -56,13 +63,18 @@ const regenerate = $ARGUMENTS.includes('--regenerate')
**Check existing state**:
```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```
**If EXISTS and no --regenerate**: Exit early
**If BOTH_EXIST and no --regenerate**: Exit early
```
Project already initialized at .workflow/project.json
Use /workflow:init --regenerate to rebuild
Project already initialized:
- Tech analysis: .workflow/project-tech.json
- Guidelines: .workflow/project-guidelines.json
Use /workflow:init --regenerate to rebuild tech analysis
Use /workflow:session:solidify to add guidelines
Use /workflow:status --project to view state
```
@@ -78,7 +90,7 @@ bash(mkdir -p .workflow)
**For --regenerate**: Backup and preserve existing data
```bash
bash(cp .workflow/project.json .workflow/project.json.backup)
bash(cp .workflow/project-tech.json .workflow/project-tech.json.backup)
```
**Delegate analysis to agent**:
@@ -89,20 +101,17 @@ Task(
run_in_background=false,
description="Deep project analysis",
prompt=`
Analyze project for workflow initialization and generate .workflow/project.json.
Analyze project for workflow initialization and generate .workflow/project-tech.json.
## MANDATORY FIRST STEPS
1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-json-schema.json (get schema reference)
1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-tech-schema.json (get schema reference)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
## Task
Generate complete project.json with:
- project_name: ${projectName}
- initialized_at: current ISO timestamp
- overview: {description, technology_stack, architecture, key_components}
- features: ${regenerate ? 'preserve from backup' : '[] (empty)'}
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated}'}
Generate complete project-tech.json with:
- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
## Analysis Requirements
@@ -123,8 +132,8 @@ Generate complete project.json with:
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved features/development_index/statistics from .workflow/project.json.backup' : ''}
5. Write JSON: Write('.workflow/project.json', jsonContent)
4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary
Project root: ${projectRoot}
@@ -132,29 +141,66 @@ Project root: ${projectRoot}
)
```
### Step 3.5: Create Guidelines Scaffold (if not exists)
```javascript
// Only create if not exists (never overwrite user guidelines)
if (!file_exists('.workflow/project-guidelines.json')) {
const guidelinesScaffold = {
conventions: {
coding_style: [],
naming_patterns: [],
file_structure: [],
documentation: []
},
constraints: {
architecture: [],
tech_stack: [],
performance: [],
security: []
},
quality_rules: [],
learnings: [],
_metadata: {
created_at: new Date().toISOString(),
version: "1.0.0"
}
};
Write('.workflow/project-guidelines.json', JSON.stringify(guidelinesScaffold, null, 2));
}
```
### Step 4: Display Summary
```javascript
const projectJson = JSON.parse(Read('.workflow/project.json'));
const projectTech = JSON.parse(Read('.workflow/project-tech.json'));
const guidelinesExists = file_exists('.workflow/project-guidelines.json');
console.log(`
✓ Project initialized successfully
## Project Overview
Name: ${projectJson.project_name}
Description: ${projectJson.overview.description}
Name: ${projectTech.project_metadata.name}
Description: ${projectTech.technology_analysis.description}
### Technology Stack
Languages: ${projectJson.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectJson.overview.technology_stack.frameworks.join(', ')}
Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}
### Architecture
Style: ${projectJson.overview.architecture.style}
Components: ${projectJson.overview.key_components.length} core modules
Style: ${projectTech.technology_analysis.architecture.style}
Components: ${projectTech.technology_analysis.key_components.length} core modules
---
Project state: .workflow/project.json
${regenerate ? 'Backup: .workflow/project.json.backup' : ''}
Files created:
- Tech analysis: .workflow/project-tech.json
- Guidelines: .workflow/project-guidelines.json ${guidelinesExists ? '(scaffold)' : ''}
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
Next steps:
- Use /workflow:session:solidify to add project guidelines
- Use /workflow:plan to start planning
`);
```

View File

@@ -181,6 +181,8 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{error_keyword_from_bug}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/diagnosis-json-schema.json (get output schema reference)
4. Read: .workflow/project-tech.json (technology stack and architecture context)
5. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)
## Diagnosis Strategy (${angle} focus)
@@ -409,6 +411,12 @@ Generate fix plan and write fix-plan.json.
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json (get schema reference before generating plan)
## Project Context (MANDATORY - Read Both Files)
1. Read: .workflow/project-tech.json (technology stack, architecture, key components)
2. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)
**CRITICAL**: All fix tasks MUST comply with constraints in project-guidelines.json
## Bug Description
${bug_description}

View File

@@ -184,6 +184,8 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
4. Read: .workflow/project-tech.json (technology stack and architecture context)
5. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)
## Exploration Strategy (${angle} focus)
@@ -416,6 +418,12 @@ Generate implementation plan and write plan.json.
## Output Schema Reference
Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json (get schema reference before generating plan)
## Project Context (MANDATORY - Read Both Files)
1. Read: .workflow/project-tech.json (technology stack, architecture, key components)
2. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)
**CRITICAL**: All generated tasks MUST comply with constraints in project-guidelines.json
## Task Description
${task_description}

View File

@@ -409,6 +409,8 @@ Task(
2. Get target files: Read resolved_files from review-state.json
3. Validate file access: bash(ls -la ${targetFiles.join(' ')})
4. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)
5. Read: .workflow/project-tech.json (technology stack and architecture context)
6. Read: .workflow/project-guidelines.json (user-defined constraints and conventions to validate against)
## Review Context
- Review Type: module (independent)
@@ -511,6 +513,8 @@ Task(
3. Identify related code: bash(grep -r "import.*${basename(file)}" ${projectDir}/src --include="*.ts")
4. Read test files: bash(find ${projectDir}/tests -name "*${basename(file, '.ts')}*" -type f)
5. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
6. Read: .workflow/project-tech.json (technology stack and architecture context)
7. Read: .workflow/project-guidelines.json (user-defined constraints for remediation compliance)
## CLI Configuration
- Tool Priority: gemini → qwen → codex

View File

@@ -420,6 +420,8 @@ Task(
3. Get changed files: bash(cd ${workflowDir} && git log --since="${sessionCreatedAt}" --name-only --pretty=format: | sort -u)
4. Read review state: ${reviewStateJsonPath}
5. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)
6. Read: .workflow/project-tech.json (technology stack and architecture context)
7. Read: .workflow/project-guidelines.json (user-defined constraints and conventions to validate against)
## Session Context
- Session ID: ${sessionId}
@@ -522,6 +524,8 @@ Task(
3. Identify related code: bash(grep -r "import.*${basename(file)}" ${workflowDir}/src --include="*.ts")
4. Read test files: bash(find ${workflowDir}/tests -name "*${basename(file, '.ts')}*" -type f)
5. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
6. Read: .workflow/project-tech.json (technology stack and architecture context)
7. Read: .workflow/project-guidelines.json (user-defined constraints for remediation compliance)
## CLI Configuration
- Tool Priority: gemini → qwen → codex

View File

@@ -139,7 +139,7 @@ After bash validation, the model takes control to:
ccw cli -p "
PURPOSE: Security audit of completed implementation
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
CONTEXT: @.summaries/IMPL-*.md,../.. @../../project-tech.json @../../project-guidelines.json
EXPECTED: Security findings report with severity levels
RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
" --tool gemini --mode write --cd .workflow/active/${sessionId}
@@ -151,7 +151,7 @@ After bash validation, the model takes control to:
ccw cli -p "
PURPOSE: Architecture compliance review
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
CONTEXT: @.summaries/IMPL-*.md,../.. @../../project-tech.json @../../project-guidelines.json
EXPECTED: Architecture assessment with recommendations
RULES: Check for patterns, separation of concerns, modularity, scalability
" --tool qwen --mode write --cd .workflow/active/${sessionId}
@@ -163,7 +163,7 @@ After bash validation, the model takes control to:
ccw cli -p "
PURPOSE: Code quality and best practices review
TASK: Assess code readability, maintainability, adherence to best practices
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
CONTEXT: @.summaries/IMPL-*.md,../.. @../../project-tech.json @../../project-guidelines.json
EXPECTED: Quality assessment with improvement suggestions
RULES: Check for code smells, duplication, complexity, naming conventions
" --tool gemini --mode write --cd .workflow/active/${sessionId}
@@ -185,7 +185,7 @@ After bash validation, the model takes control to:
ccw cli -p "
PURPOSE: Verify all requirements and acceptance criteria are met
TASK: Cross-check implementation summaries against original requirements
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../project-tech.json @../../project-guidelines.json
EXPECTED:
- Requirements coverage matrix
- Acceptance criteria verification

View File

@@ -0,0 +1,299 @@
---
name: solidify
description: Crystallize session learnings and user-defined constraints into permanent project guidelines
argument-hint: "[--type <convention|constraint|learning>] [--category <category>] \"rule or insight\""
examples:
- /workflow:session:solidify "Use functional components for all React code" --type convention
- /workflow:session:solidify "No direct DB access from controllers" --type constraint --category architecture
- /workflow:session:solidify "Cache invalidation requires event sourcing" --type learning --category architecture
- /workflow:session:solidify --interactive
---
# Session Solidify Command (/workflow:session:solidify)
## Overview
Crystallizes ephemeral session context (insights, decisions, constraints) into permanent project guidelines stored in `.workflow/project-guidelines.json`. This ensures valuable learnings persist across sessions and inform future planning.
## Use Cases
1. **During Session**: Capture important decisions as they're made
2. **After Session**: Reflect on lessons learned before archiving
3. **Proactive**: Add team conventions or architectural rules
## Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `rule` | string | ✅ (unless --interactive) | The rule, convention, or insight to solidify |
| `--type` | enum | ❌ | Type: `convention`, `constraint`, `learning` (default: auto-detect) |
| `--category` | string | ❌ | Category for organization (see categories below) |
| `--interactive` | flag | ❌ | Launch guided wizard for adding rules |
### Type Categories
**convention** → Coding style preferences (goes to `conventions` section)
- Subcategories: `coding_style`, `naming_patterns`, `file_structure`, `documentation`
**constraint** → Hard rules that must not be violated (goes to `constraints` section)
- Subcategories: `architecture`, `tech_stack`, `performance`, `security`
**learning** → Session-specific insights (goes to `learnings` array)
- Subcategories: `architecture`, `performance`, `security`, `testing`, `process`, `other`
## Execution Process
```
Input Parsing:
├─ Parse: rule text (required unless --interactive)
├─ Parse: --type (convention|constraint|learning)
├─ Parse: --category (subcategory)
└─ Parse: --interactive (flag)
Step 1: Ensure Guidelines File Exists
└─ If not exists → Create with empty structure
Step 2: Auto-detect Type (if not specified)
└─ Analyze rule text for keywords
Step 3: Validate and Format Entry
└─ Build entry object based on type
Step 4: Update Guidelines File
└─ Add entry to appropriate section
Step 5: Display Confirmation
└─ Show what was added and where
```
## Implementation
### Step 1: Ensure Guidelines File Exists
```bash
bash(test -f .workflow/project-guidelines.json && echo "EXISTS" || echo "NOT_FOUND")
```
**If NOT_FOUND**, create scaffold:
```javascript
const scaffold = {
conventions: {
coding_style: [],
naming_patterns: [],
file_structure: [],
documentation: []
},
constraints: {
architecture: [],
tech_stack: [],
performance: [],
security: []
},
quality_rules: [],
learnings: [],
_metadata: {
created_at: new Date().toISOString(),
version: "1.0.0"
}
};
Write('.workflow/project-guidelines.json', JSON.stringify(scaffold, null, 2));
```
### Step 2: Auto-detect Type (if not specified)
```javascript
function detectType(ruleText) {
const text = ruleText.toLowerCase();
// Constraint indicators
if (/\b(no|never|must not|forbidden|prohibited|always must)\b/.test(text)) {
return 'constraint';
}
// Learning indicators
if (/\b(learned|discovered|realized|found that|turns out)\b/.test(text)) {
return 'learning';
}
// Default to convention
return 'convention';
}
function detectCategory(ruleText, type) {
const text = ruleText.toLowerCase();
if (type === 'constraint' || type === 'learning') {
if (/\b(architecture|layer|module|dependency|circular)\b/.test(text)) return 'architecture';
if (/\b(security|auth|permission|sanitize|xss|sql)\b/.test(text)) return 'security';
if (/\b(performance|cache|lazy|async|sync|slow)\b/.test(text)) return 'performance';
if (/\b(test|coverage|mock|stub)\b/.test(text)) return 'testing';
}
if (type === 'convention') {
if (/\b(name|naming|prefix|suffix|camel|pascal)\b/.test(text)) return 'naming_patterns';
if (/\b(file|folder|directory|structure|organize)\b/.test(text)) return 'file_structure';
if (/\b(doc|comment|jsdoc|readme)\b/.test(text)) return 'documentation';
return 'coding_style';
}
return type === 'constraint' ? 'tech_stack' : 'other';
}
```
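Applied to a sample rule, the two detectors above yield:
```javascript
// Usage sketch for detectType/detectCategory
const rule = 'Never allow circular dependencies between modules';
const type = detectType(rule);               // 'constraint' (matches "never")
const category = detectCategory(rule, type); // 'architecture' (matches "circular")
```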
### Step 3: Build Entry
```javascript
function buildEntry(rule, type, category, sessionId) {
if (type === 'learning') {
return {
date: new Date().toISOString().split('T')[0],
session_id: sessionId || null,
insight: rule,
category: category,
context: null
};
}
// For conventions and constraints, just return the rule string
return rule;
}
```
### Step 4: Update Guidelines File
```javascript
const guidelines = JSON.parse(Read('.workflow/project-guidelines.json'));
if (type === 'convention') {
if (!guidelines.conventions[category]) {
guidelines.conventions[category] = [];
}
if (!guidelines.conventions[category].includes(rule)) {
guidelines.conventions[category].push(rule);
}
} else if (type === 'constraint') {
if (!guidelines.constraints[category]) {
guidelines.constraints[category] = [];
}
if (!guidelines.constraints[category].includes(rule)) {
guidelines.constraints[category].push(rule);
}
} else if (type === 'learning') {
guidelines.learnings.push(buildEntry(rule, type, category, sessionId));
}
guidelines._metadata.updated_at = new Date().toISOString();
guidelines._metadata.last_solidified_by = sessionId;
Write('.workflow/project-guidelines.json', JSON.stringify(guidelines, null, 2));
```
### Step 5: Display Confirmation
```
✓ Guideline solidified
Type: ${type}
Category: ${category}
Rule: "${rule}"
Location: .workflow/project-guidelines.json → ${type}s.${category}
Total ${type}s in ${category}: ${count}
```
## Interactive Mode
When `--interactive` flag is provided:
```javascript
AskUserQuestion({
questions: [
{
question: "What type of guideline are you adding?",
header: "Type",
multiSelect: false,
options: [
{ label: "Convention", description: "Coding style preference (e.g., use functional components)" },
{ label: "Constraint", description: "Hard rule that must not be violated (e.g., no direct DB access)" },
{ label: "Learning", description: "Insight from this session (e.g., cache invalidation needs events)" }
]
}
]
});
// Follow-up based on type selection...
```
## Examples
### Add a Convention
```bash
/workflow:session:solidify "Use async/await instead of callbacks" --type convention --category coding_style
```
Result in `project-guidelines.json`:
```json
{
"conventions": {
"coding_style": ["Use async/await instead of callbacks"]
}
}
```
### Add an Architectural Constraint
```bash
/workflow:session:solidify "No direct DB access from controllers" --type constraint --category architecture
```
Result:
```json
{
"constraints": {
"architecture": ["No direct DB access from controllers"]
}
}
```
### Capture a Session Learning
```bash
/workflow:session:solidify "Cache invalidation requires event sourcing for consistency" --type learning
```
Result:
```json
{
"learnings": [
{
"date": "2024-12-28",
"session_id": "WFS-auth-feature",
"insight": "Cache invalidation requires event sourcing for consistency",
"category": "architecture"
}
]
}
```
## Integration with Planning
The `project-guidelines.json` is consumed by:
1. **`/workflow:tools:context-gather`**: Loads guidelines into context-package.json
2. **`/workflow:plan`**: Passes guidelines to task generation agent
3. **`task-generate-agent`**: Includes guidelines as "CRITICAL CONSTRAINTS" in system prompt
This ensures all future planning respects solidified rules without users needing to re-state them.
## Error Handling
- **Duplicate Rule**: Warn and skip if exact rule already exists
- **Invalid Category**: Suggest valid categories for the type
- **File Corruption**: Backup existing file before modification
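A minimal sketch of the corruption safeguard (the `.backup` suffix is an assumption):
```javascript
// Sketch: back up guidelines before modifying them
const gPath = '.workflow/project-guidelines.json';
Bash(`cp "${gPath}" "${gPath}.backup" 2>/dev/null`);
try {
  const guidelines = JSON.parse(Read(gPath)); // throws if the file is corrupted
  // ...apply the update from Step 4, then Write(gPath, ...)
} catch (e) {
  console.error(`Guidelines file unreadable; original kept at ${gPath}.backup`);
}
```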
## Related Commands
- `/workflow:session:start` - Start a session (may prompt for solidify at end)
- `/workflow:session:complete` - Complete session (prompts for learnings to solidify)
- `/workflow:init` - Creates project-guidelines.json scaffold if missing

View File

@@ -38,26 +38,29 @@ ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs
## Step 0: Initialize Project State (First-time Only)
**Executed before all modes** - Ensures project-level state file exists by calling `/workflow:init`.
**Executed before all modes** - Ensures project-level state files exist by calling `/workflow:init`.
### Check and Initialize
```bash
# Check if project state exists
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
# Check if project state exists (both files required)
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```
**If NOT_FOUND**, delegate to `/workflow:init`:
**If either NOT_FOUND**, delegate to `/workflow:init`:
```javascript
// Call workflow:init for intelligent project analysis
SlashCommand({command: "/workflow:init"});
// Wait for init completion
// project.json will be created with comprehensive project overview
// project-tech.json and project-guidelines.json will be created
```
**Output**:
- If EXISTS: `PROJECT_STATE: initialized`
- If NOT_FOUND: Calls `/workflow:init` → creates `.workflow/project.json` with full project analysis
- If BOTH_EXIST: `PROJECT_STATE: initialized`
- If NOT_FOUND: Calls `/workflow:init` → creates:
- `.workflow/project-tech.json` with full technical analysis
- `.workflow/project-guidelines.json` with empty scaffold
**Note**: `/workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.

View File

@@ -236,7 +236,10 @@ Task(
Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If file doesn't exist, proceed with fresh analysis.
1. **Project State Loading**:
- Read and parse `.workflow/project-tech.json`. Use its `technology_analysis` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
- Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
- If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize CodexLens, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
@@ -251,17 +254,19 @@ Execute all discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include description, technology_stack, architecture, and key_components.
4. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
5. Perform conflict detection with risk assessment
6. **Inject historical conflicts** from archive analysis into conflict_detection
7. Generate and validate context-package.json
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `technology_analysis` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
6. Perform conflict detection with risk assessment
7. **Inject historical conflicts** from archive analysis into conflict_detection
8. Generate and validate context-package.json
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: description, technology_stack, architecture, key_components (sourced from `project.json` overview)
- **project_context**: description, technology_stack, architecture, key_components (sourced from `project-tech.json`)
- **project_guidelines**: {conventions, constraints, quality_rules, learnings} (sourced from `project-guidelines.json`)
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
@@ -314,7 +319,8 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
**Key Sections**:
- **metadata**: Session info, keywords, complexity, tech stack
- **project_context**: Architecture patterns, conventions, tech stack (populated from `project.json` overview)
- **project_context**: Architecture patterns, conventions, tech stack (populated from `project-tech.json`)
- **project_guidelines**: Conventions, constraints, quality rules, learnings (populated from `project-guidelines.json`)
- **assets**: Categorized files with relevance scores (documentation, source_code, config, tests)
- **dependencies**: Internal and external dependency graphs
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
@@ -429,6 +435,7 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
## Notes
- **Detection-first**: Always check for existing package before invoking agent
- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
- **Dual project file integration**: Agent reads both `.workflow/project-tech.json` (tech analysis) and `.workflow/project-guidelines.json` (user constraints) as primary sources
- **Guidelines injection**: Project guidelines are included in context-package to ensure task generation respects user-defined constraints
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

View File

@@ -0,0 +1,244 @@
---
name: issue-manage
description: Interactive issue management with menu-driven CRUD operations. Use when managing issues, viewing issue status, editing issue fields, or performing bulk operations on issues. Triggers on "manage issue", "list issues", "edit issue", "delete issue", "bulk update", "issue dashboard".
allowed-tools: Bash, Read, Write, AskUserQuestion, Task, Glob
---
# Issue Management Skill
Interactive menu-driven interface for issue CRUD operations via `ccw issue` CLI.
## Quick Start
Ask me:
- "Show all issues" → List with filters
- "View issue GH-123" → Detailed inspection
- "Edit issue priority" → Modify fields
- "Delete old issues" → Remove with confirmation
- "Bulk update status" → Batch operations
## CLI Endpoints
```bash
# Core operations
ccw issue list # List all issues
ccw issue list <id> --json # Get issue details
ccw issue status <id> # Detailed status
ccw issue init <id> --title "..." # Create issue
ccw issue task <id> --title "..." # Add task
ccw issue bind <id> <solution-id> # Bind solution
# Queue management
ccw issue queue # List current queue
ccw issue queue add <id> # Add to queue
ccw issue queue list # Queue history
ccw issue queue switch <queue-id> # Switch queue
ccw issue queue archive # Archive queue
ccw issue queue delete <queue-id> # Delete queue
ccw issue next # Get next task
ccw issue done <queue-id> # Mark completed
```
## Operations
### 1. LIST 📋
Filter and browse issues:
```
┌─ Filter by Status ─────────────────┐
│ □ All □ Registered │
│ □ Planned □ Queued │
│ □ Executing □ Completed │
└────────────────────────────────────┘
```
**Flow**:
1. Ask filter preferences → `ccw issue list --json`
2. Display table: ID | Status | Priority | Title
3. Select issue for detail view
### 2. VIEW 🔍
Detailed issue inspection:
```
┌─ Issue: GH-123 ─────────────────────┐
│ Title: Fix authentication bug │
│ Status: planned | Priority: P2 │
│ Solutions: 2 (1 bound) │
│ Tasks: 5 pending │
└─────────────────────────────────────┘
```
**Flow**:
1. Fetch `ccw issue status <id> --json`
2. Display issue + solutions + tasks
3. Offer actions: Edit | Plan | Queue | Delete
### 3. EDIT ✏️
Modify issue fields:
| Field | Options |
|-------|---------|
| Title | Free text |
| Priority | P1-P5 |
| Status | registered → completed |
| Context | Problem description |
| Labels | Comma-separated |
**Flow**:
1. Select field to edit
2. Show current value
3. Collect new value via AskUserQuestion
4. Update `.workflow/issues/issues.jsonl`
### 4. DELETE 🗑️
Remove with confirmation:
```
⚠️ Delete issue GH-123?
This will also remove:
- Associated solutions
- Queued tasks
[Delete] [Cancel]
```
**Flow**:
1. Confirm deletion via AskUserQuestion
2. Remove from `issues.jsonl`
3. Clean up `solutions/<id>.jsonl`
4. Remove from `queue.json`
### 5. BULK 📦
Batch operations:
| Operation | Description |
|-----------|-------------|
| Update Status | Change multiple issues |
| Update Priority | Batch priority change |
| Add Labels | Tag multiple issues |
| Delete Multiple | Bulk removal |
| Queue All Planned | Add all planned to queue |
| Retry All Failed | Reset failed tasks |
## Workflow
```
┌──────────────────────────────────────┐
│ Main Menu │
│ ┌────┐ ┌────┐ ┌────┐ ┌────┐ │
│ │List│ │View│ │Edit│ │Bulk│ │
│ └──┬─┘ └──┬─┘ └──┬─┘ └──┬─┘ │
└─────┼──────┼──────┼──────┼──────────┘
│ │ │ │
▼ ▼ ▼ ▼
Filter Detail Fields Multi
Select Actions Update Select
│ │ │ │
└──────┴──────┴──────┘
Back to Menu
```
## Implementation Guide
### Entry Point
```javascript
// Parse input for issue ID
const issueId = input.match(/^([A-Z]+-\d+|ISS-\d+)/i)?.[1];
// Show main menu
await showMainMenu(issueId);
```
### Main Menu Pattern
```javascript
// 1. Fetch dashboard data
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const queue = JSON.parse(Bash('ccw issue queue --json 2>/dev/null') || '{}');
// 2. Display summary
console.log(`Issues: ${issues.length} | Queue: ${queue.pending_count || 0} pending`);
// 3. Ask action via AskUserQuestion
const action = AskUserQuestion({
questions: [{
question: 'What would you like to do?',
header: 'Action',
options: [
{ label: 'List Issues', description: 'Browse with filters' },
{ label: 'View Issue', description: 'Detail view' },
{ label: 'Edit Issue', description: 'Modify fields' },
{ label: 'Bulk Operations', description: 'Batch actions' }
]
}]
});
// 4. Route to handler
```
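Step 4 might route as follows (handler names and the shape of the AskUserQuestion result are illustrative assumptions):
```javascript
// Sketch: dispatch the chosen menu action (handler names illustrative)
const handlers = {
  'List Issues': () => listIssues(),
  'View Issue': () => viewIssue(issueId),
  'Edit Issue': () => editIssue(issueId),
  'Bulk Operations': () => bulkOperations()
};
const label = action.answers?.[0] ?? action; // assumed result shape
handlers[label]?.();
```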
### Filter Pattern
```javascript
const filter = AskUserQuestion({
questions: [{
question: 'Filter by status?',
header: 'Filter',
multiSelect: true,
options: [
{ label: 'All', description: 'Show all' },
{ label: 'Registered', description: 'Unplanned' },
{ label: 'Planned', description: 'Has solution' },
{ label: 'Executing', description: 'In progress' }
]
}]
});
```
### Edit Pattern
```javascript
// Select field
const field = AskUserQuestion({...});
// Get new value based on field type
// For Priority: show P1-P5 options
// For Status: show status options
// For Title: accept free text via "Other"
// Update file
const issuesPath = '.workflow/issues/issues.jsonl';
// Read → Parse → Update → Write
```
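A minimal sketch of that read → parse → update → write cycle, assuming `issueId`, `field`, and `newValue` were collected in the steps above:
```javascript
// Sketch: update a single field of one issue in issues.jsonl
const lines = Read(issuesPath).split('\n').filter(Boolean);
const updated = lines.map(line => {
  const issue = JSON.parse(line);
  if (issue.id !== issueId) return line; // leave other issues untouched
  return JSON.stringify({
    ...issue,
    [field]: newValue,
    updated_at: new Date().toISOString()
  });
});
Write(issuesPath, updated.join('\n'));
```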
## Data Files
| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |
## Error Handling
| Error | Resolution |
|-------|------------|
| No issues found | Suggest `/issue:new` to create |
| Issue not found | Show available issues, re-prompt |
| Write failure | Check file permissions |
| Queue error | Display ccw error message |
## Related Commands
- `/issue:new` - Create structured issue
- `/issue:plan` - Generate solution
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute tasks

View File

@@ -0,0 +1,184 @@
---
name: software-manual
description: Generate interactive TiddlyWiki-style HTML software manuals with screenshots, API docs, and multi-level code examples. Use when creating user guides, software documentation, or API references. Triggers on "software manual", "user guide", "generate manual", "create docs".
allowed-tools: Task, AskUserQuestion, Read, Bash, Glob, Grep, Write, mcp__chrome__*
---
# Software Manual Skill
Generate comprehensive, interactive software manuals in TiddlyWiki-style single-file HTML format.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Context-Optimized Architecture │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Phase 1: Requirements → manual-config.json │
│ ↓ │
│ Phase 2: Exploration → exploration-*.json │
│ ↓ │
│ Phase 3: Parallel Agents → sections/section-*.md │
│ ↓ (6 Agents) │
│ Phase 3.5: Consolidation → consolidation-summary.md │
│ ↓ │
│ Phase 4: Screenshot → screenshots/*.png │
│ Capture (via Chrome MCP) │
│ ↓ │
│ Phase 5: HTML Assembly → {name}-使用手册.html │
│ ↓ │
│ Phase 6: Refinement → iterations/ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Key Design Principles
1. **Main agent orchestrates, sub-agents execute**: All heavy computation is delegated to `universal-executor` sub-agents
2. **Brief Returns**: Agents return path + summary, not full content (avoid context overflow)
3. **System Agents**: Uses `cli-explore-agent` (exploration) and `universal-executor` (execution)
4. **Mature libraries embedded**: marked.js (Markdown parsing) + highlight.js (syntax highlighting), no CDN dependencies
5. **Single-File HTML**: TiddlyWiki-style interactive document with embedded resources
6. **Dynamic tags**: Navigation tags are generated automatically from the actual sections
## Execution Flow
```
┌─────────────────────────────────────────────────────────────────┐
│ Phase 1: Requirements Discovery (main agent) │
│ → AskUserQuestion: collect software type, target users, doc scope │
│ → Output: manual-config.json │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2: Project Exploration (cli-explore-agent × N) │
│ → Parallel exploration: architecture, ui-routes, api-endpoints, config │
│ → Output: exploration-*.json │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2.5: API Extraction (extract_apis.py) │
│ → Auto-extract: FastAPI/TypeDoc/pdoc │
│ → Output: api-docs/{backend,frontend,modules}/*.md │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3: Parallel Analysis (universal-executor × 6) │
│ → 6 sub-agents in parallel: overview, ui-guide, api-docs, config, │
│ troubleshooting, code-examples │
│ → Output: sections/section-*.md │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3.5: Consolidation (universal-executor) │
│ → Quality checks: consistency, cross-references, screenshot markers │
│ → Output: consolidation-summary.md, screenshots-list.json │
├─────────────────────────────────────────────────────────────────┤
│ Phase 4: Screenshot Capture (universal-executor + Chrome MCP) │
│ → Batch screenshots: call mcp__chrome__screenshot │
│ → Output: screenshots/*.png + manifest.json │
├─────────────────────────────────────────────────────────────────┤
│ Phase 5: HTML Assembly (universal-executor) │
│ → Assemble HTML: MD→tiddlers, embed CSS/JS/images │
│ → Output: {name}-使用手册.html │
├─────────────────────────────────────────────────────────────────┤
│ Phase 6: Iterative Refinement (main agent) │
│ → Preview + user feedback + iterative fixes │
│ → Output: iterations/v*.html │
└─────────────────────────────────────────────────────────────────┘
```
## Agent Configuration
| Agent | Role | Output File | Focus Areas |
|-------|------|-------------|-------------|
| overview | Product Manager | section-overview.md | Product intro, features, quick start |
| ui-guide | UX Expert | section-ui-guide.md | UI operations, step-by-step guides |
| api-docs | API Architect | section-api-reference.md | REST API, Frontend API |
| config | DevOps Engineer | section-configuration.md | Env vars, deployment, settings |
| troubleshooting | Support Engineer | section-troubleshooting.md | FAQs, error codes, solutions |
| code-examples | Developer Advocate | section-examples.md | Beginner/Intermediate/Advanced examples |
## Agent Return Format
```typescript
interface ManualAgentReturn {
status: "completed" | "partial" | "failed";
output_file: string;
summary: string; // Max 50 chars
screenshots_needed: Array<{
id: string; // e.g., "ss-login-form"
url: string; // Relative or absolute URL
description: string; // "Login form interface"
selector?: string; // CSS selector for partial screenshot
wait_for?: string; // Element to wait for
}>;
cross_references: string[]; // Other sections referenced
difficulty_level: "beginner" | "intermediate" | "advanced";
}
```
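For instance, a ui-guide agent might return (all values illustrative):
```javascript
// Illustrative ManualAgentReturn value
const exampleReturn = {
  status: 'completed',
  output_file: 'sections/section-ui-guide.md',
  summary: 'UI guide: 12 annotated flows', // kept under 50 chars
  screenshots_needed: [{
    id: 'ss-login-form',
    url: '/login',
    description: 'Login form interface',
    selector: '#login-form',
    wait_for: '#login-form button'
  }],
  cross_references: ['section-configuration.md'],
  difficulty_level: 'beginner'
};
```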
## HTML Features (TiddlyWiki-style)
1. **Search**: Full-text search with result highlighting
2. **Collapse/Expand**: Per-section collapsible content
3. **Tag Navigation**: Filter by category tags
4. **Theme Toggle**: Light/Dark mode with localStorage persistence
5. **Single File**: All CSS/JS/images embedded as Base64
6. **Offline**: Works without internet connection
7. **Print-friendly**: Optimized print stylesheet
## Directory Setup
```javascript
// Generate timestamp directory name
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const dir = `.workflow/.scratchpad/manual-${timestamp}`;
// Windows
Bash(`mkdir "${dir}\\sections" && mkdir "${dir}\\screenshots" && mkdir "${dir}\\api-docs" && mkdir "${dir}\\iterations"`);
```
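On POSIX shells, the same layout can be created in one call (a sketch, assuming `mkdir -p` is available):
```javascript
// POSIX variant: create all four subdirectories at once
Bash(`mkdir -p "${dir}/sections" "${dir}/screenshots" "${dir}/api-docs" "${dir}/iterations"`);
```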
## Output Structure
```
.workflow/.scratchpad/manual-{timestamp}/
├── manual-config.json # Phase 1
├── exploration/ # Phase 2
│ ├── exploration-architecture.json
│ ├── exploration-ui-routes.json
│ └── exploration-api-endpoints.json
├── sections/ # Phase 3
│ ├── section-overview.md
│ ├── section-ui-guide.md
│ ├── section-api-reference.md
│ ├── section-configuration.md
│ ├── section-troubleshooting.md
│ └── section-examples.md
├── consolidation-summary.md # Phase 3.5
├── api-docs/ # API documentation
│ ├── frontend/ # TypeDoc output
│ └── backend/ # Swagger/OpenAPI output
├── screenshots/ # Phase 4
│ ├── ss-*.png
│ └── screenshots-manifest.json
├── iterations/ # Phase 6
│ ├── v1.html
│ └── v2.html
└── {软件名}-使用手册.html # Final Output
```
## Reference Documents
| Document | Purpose |
|----------|---------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | User configuration collection |
| [phases/02-project-exploration.md](phases/02-project-exploration.md) | Project type detection |
| [phases/02.5-api-extraction.md](phases/02.5-api-extraction.md) | Automated API extraction |
| [phases/03-parallel-analysis.md](phases/03-parallel-analysis.md) | 6-agent parallel analysis |
| [phases/03.5-consolidation.md](phases/03.5-consolidation.md) | Consolidation and quality checks |
| [phases/04-screenshot-capture.md](phases/04-screenshot-capture.md) | Chrome MCP screenshots |
| [phases/05-html-assembly.md](phases/05-html-assembly.md) | HTML assembly |
| [phases/06-iterative-refinement.md](phases/06-iterative-refinement.md) | Iterative refinement |
| [specs/quality-standards.md](specs/quality-standards.md) | Quality standards |
| [specs/writing-style.md](specs/writing-style.md) | Writing style |
| [templates/tiddlywiki-shell.html](templates/tiddlywiki-shell.html) | HTML shell template |
| [templates/css/wiki-base.css](templates/css/wiki-base.css) | Base styles |
| [templates/css/wiki-dark.css](templates/css/wiki-dark.css) | Dark theme |
| [scripts/bundle-libraries.md](scripts/bundle-libraries.md) | Library bundling |
| [scripts/api-extractor.md](scripts/api-extractor.md) | API extraction notes |
| [scripts/extract_apis.py](scripts/extract_apis.py) | API extraction script |
| [scripts/screenshot-helper.md](scripts/screenshot-helper.md) | Screenshot helpers |

View File

@@ -0,0 +1,162 @@
# Phase 1: Requirements Discovery
Collect user requirements and generate configuration for the manual generation process.
## Objective
Gather essential information about the software project to customize the manual generation:
- Software type and characteristics
- Target user audience
- Documentation scope and depth
- Special requirements
## Execution Steps
### Step 1: Software Information Collection
Use `AskUserQuestion` to collect:
```javascript
AskUserQuestion({
questions: [
{
question: "What type of software is this project?",
header: "Software Type",
options: [
{ label: "Web Application", description: "Frontend + Backend web app with UI" },
{ label: "CLI Tool", description: "Command-line interface tool" },
{ label: "SDK/Library", description: "Developer library or SDK" },
{ label: "Desktop App", description: "Desktop application (Electron, etc.)" }
],
multiSelect: false
},
{
question: "Who is the target audience for this manual?",
header: "Target Users",
options: [
{ label: "End Users", description: "Non-technical users who use the product" },
{ label: "Developers", description: "Developers integrating or extending the product" },
{ label: "Administrators", description: "System admins deploying and maintaining" },
{ label: "All Audiences", description: "Mixed audience with different sections" }
],
multiSelect: false
},
{
question: "What documentation scope do you need?",
header: "Doc Scope",
options: [
{ label: "Quick Start", description: "Essential getting started guide only" },
{ label: "User Guide", description: "Complete user-facing documentation" },
{ label: "API Reference", description: "Focus on API documentation" },
{ label: "Comprehensive", description: "Full documentation including all sections" }
],
multiSelect: false
},
{
question: "What difficulty levels should code examples cover?",
header: "Example Levels",
options: [
{ label: "Beginner Only", description: "Simple, basic examples" },
{ label: "Beginner + Intermediate", description: "Basic to moderate complexity" },
{ label: "All Levels", description: "Beginner, Intermediate, and Advanced" }
],
multiSelect: false
}
]
});
```
### Step 2: Auto-Detection (Supplement)
Automatically detect project characteristics:
```javascript
// Detect from package.json (Read returns a string, so parse it)
const packageJson = JSON.parse(Read('package.json'));
const softwareName = packageJson.name;
const version = packageJson.version;
const description = packageJson.description;
// Detect tech stack
const hasReact = packageJson.dependencies?.react;
const hasVue = packageJson.dependencies?.vue;
const hasExpress = packageJson.dependencies?.express;
const hasNestJS = packageJson.dependencies?.['@nestjs/core'];
// Detect CLI
const hasBin = !!packageJson.bin;
// Detect UI
const hasPages = Glob('src/pages/**/*').length > 0 || Glob('pages/**/*').length > 0;
const hasRoutes = Glob('**/routes.*').length > 0;
```
### Step 3: Generate Configuration
Create `manual-config.json`:
```json
{
"software": {
"name": "{{detected or user input}}",
"version": "{{from package.json}}",
"description": "{{from package.json}}",
"type": "{{web|cli|sdk|desktop}}"
},
"target_audience": "{{end_users|developers|admins|all}}",
"doc_scope": "{{quick_start|user_guide|api_reference|comprehensive}}",
"example_levels": ["beginner", "intermediate", "advanced"],
"tech_stack": {
"frontend": "{{react|vue|angular|vanilla}}",
"backend": "{{express|nestjs|fastify|none}}",
"language": "{{typescript|javascript}}",
"ui_framework": "{{tailwind|mui|antd|none}}"
},
"features": {
"has_ui": true,
"has_api": true,
"has_cli": false,
"has_config": true
},
"agents_to_run": [
"overview",
"ui-guide",
"api-docs",
"config",
"troubleshooting",
"code-examples"
],
"screenshot_config": {
"enabled": true,
"dev_command": "npm run dev",
"dev_url": "http://localhost:3000",
"wait_timeout": 5000
},
"output": {
"filename": "{{name}}-使用手册.html",
"theme": "light",
"language": "zh-CN"
},
"timestamp": "{{ISO8601}}"
}
```
## Agent Selection Logic
Based on `doc_scope`, select agents to run:
| Scope | Agents |
|-------|--------|
| quick_start | overview |
| user_guide | overview, ui-guide, config, troubleshooting |
| api_reference | overview, api-docs, code-examples |
| comprehensive | ALL 6 agents |
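This mapping can be expressed directly in code. A minimal sketch, assuming the scope and agent names shown in the table above (the helper itself is hypothetical):
```javascript
// Hypothetical helper: resolve agents_to_run from the configured doc_scope.
const SCOPE_AGENTS = {
  quick_start: ['overview'],
  user_guide: ['overview', 'ui-guide', 'config', 'troubleshooting'],
  api_reference: ['overview', 'api-docs', 'code-examples'],
  comprehensive: ['overview', 'ui-guide', 'api-docs', 'config', 'troubleshooting', 'code-examples']
};

function selectAgents(config) {
  // Fall back to the full set if the scope is missing or unrecognized.
  return SCOPE_AGENTS[config.doc_scope] ?? SCOPE_AGENTS.comprehensive;
}
```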
## Output
- **File**: `manual-config.json`
- **Location**: `.workflow/.scratchpad/manual-{timestamp}/`
## Next Phase
Proceed to [Phase 2: Project Exploration](02-project-exploration.md) with the generated configuration.

View File

@@ -0,0 +1,101 @@
# Phase 2: Project Exploration
Use `cli-explore-agent` to explore the project structure and produce the structured data the documentation needs.
## Exploration Angles
```javascript
const EXPLORATION_ANGLES = {
web: ['architecture', 'ui-routes', 'api-endpoints', 'config'],
cli: ['architecture', 'commands', 'config'],
sdk: ['architecture', 'public-api', 'types', 'config'],
desktop: ['architecture', 'ui-screens', 'config']
};
```
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
const angles = EXPLORATION_ANGLES[config.software.type];
// Run explorations in parallel
const tasks = angles.map(angle => Task({
subagent_type: 'cli-explore-agent',
run_in_background: false,
prompt: buildExplorationPrompt(angle, config, workDir)
}));
const results = await Promise.all(tasks);
```
## Exploration Configs
```javascript
const EXPLORATION_CONFIGS = {
architecture: {
    task: 'Analyze project module structure, entry points, and dependency relationships',
patterns: ['src/*/', 'package.json', 'tsconfig.json'],
output: 'exploration-architecture.json'
},
'ui-routes': {
    task: 'Extract UI routes, page components, and navigation structure',
patterns: ['src/pages/**', 'src/views/**', 'app/**/page.*', 'src/router/**'],
output: 'exploration-ui-routes.json'
},
'api-endpoints': {
    task: 'Extract REST API endpoints and request/response types',
patterns: ['src/**/*.controller.*', 'src/routes/**', 'openapi.*', 'swagger.*'],
output: 'exploration-api-endpoints.json'
},
config: {
    task: 'Extract environment variables and config-file options',
patterns: ['.env.example', 'config/**', 'docker-compose.yml'],
output: 'exploration-config.json'
},
commands: {
    task: 'Extract CLI commands, options, and examples',
patterns: ['src/cli*', 'bin/*', 'src/commands/**'],
output: 'exploration-commands.json'
}
};
```
## Prompt Construction
```javascript
function buildExplorationPrompt(angle, config, workDir) {
const cfg = EXPLORATION_CONFIGS[angle];
return `
[TASK]
${cfg.task}
[SCOPE]
Project type: ${config.software.type}
Scan mode: deep-scan
File patterns: ${cfg.patterns.join(', ')}
[OUTPUT]
File: ${workDir}/exploration/${cfg.output}
Format: JSON (schema-compliant)
[RETURN]
Briefly report how much was found and the key findings
`;
}
```
## Output Structure
```
exploration/
├── exploration-architecture.json   # Module structure
├── exploration-ui-routes.json      # UI routes
├── exploration-api-endpoints.json  # API endpoints
├── exploration-config.json         # Config options
└── exploration-commands.json       # CLI commands (if CLI)
```
## Next Phase
→ [Phase 3: Parallel Analysis](03-parallel-analysis.md)

View File

@@ -0,0 +1,161 @@
# Phase 2.5: API Extraction
Automatically extract API documentation after project exploration and before parallel analysis.
## Core Principle
**Extract with mature tooling, and make sure the output format is compatible with the wiki templates.**
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
// Check configured project paths
const apiSources = config.api_sources || detectApiSources(config.project_path);
// Run API extraction
Bash({
  command: `python .claude/skills/software-manual/scripts/extract_apis.py -o "${workDir}" -p ${Object.keys(apiSources).join(' ')}`
});
// Validate output
const apiDocsDir = `${workDir}/api-docs`;
const extractedFiles = Glob(`${apiDocsDir}/**/*.{json,md}`);
console.log(`Extracted ${extractedFiles.length} API documentation files`);
```
## Supported Project Types
| Type | Detection | Extraction Tool | Output Format |
|------|-----------|-----------------|---------------|
| FastAPI | `app/main.py` + FastAPI import | OpenAPI JSON | `openapi.json` + `API_SUMMARY.md` |
| Next.js | `package.json` + next | TypeDoc | `*.md` (Markdown) |
| Python Module | `__init__.py` + setup.py/pyproject.toml | pdoc | `*.md` (Markdown) |
| Express | `package.json` + express | swagger-jsdoc | `openapi.json` |
| NestJS | `package.json` + @nestjs | @nestjs/swagger | `openapi.json` |
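`detectApiSources` is referenced in the execution flow above but not defined in this document. A minimal sketch, assuming the detection rules in the table and the `Read`/`Glob` pseudo-tools used throughout these phases:
```javascript
// Hypothetical fallback: infer API sources from marker files, per the table above.
function detectApiSources(projectPath) {
  const sources = {};
  if (Glob(`${projectPath}/app/main.py`).length > 0) {
    sources.backend = { path: projectPath, type: 'fastapi', entry: 'app.main:app' };
  }
  const pkgFiles = Glob(`${projectPath}/package.json`);
  if (pkgFiles.length > 0) {
    const pkg = JSON.parse(Read(pkgFiles[0]));
    if (pkg.dependencies?.next || pkg.devDependencies?.typescript) {
      sources.frontend = { path: projectPath, type: 'typescript', entries: ['lib', 'hooks', 'components'] };
    }
  }
  if (Glob(`${projectPath}/**/__init__.py`).length > 0) {
    sources.module = { path: projectPath, type: 'python' };
  }
  return sources;
}
```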
## Output Format Spec
### Markdown Compatibility Requirements
Make sure the generated Markdown is compatible with the wiki CSS styles:
```markdown
# API Reference → <h1> (wiki-base.css)
## Endpoints → <h2>
| Method | Path | Summary |       → <table> with blue header row
|--------|------|---------|
| `GET` | `/api/...` | ... |      → <code> red inline highlight
### GET /api/users                → <h3>
\`\`\`json                        → <pre><code> dark background
{
"id": 1,
"name": "example"
}
\`\`\`
- Parameter: `id` (required) → <ul><li> + <code>
```
### Format Validation Checks
```javascript
function validateApiDocsFormat(apiDocsDir) {
const issues = [];
const mdFiles = Glob(`${apiDocsDir}/**/*.md`);
for (const file of mdFiles) {
const content = Read(file);
    // Check table formatting
if (content.includes('|') && !content.match(/\|.*\|.*\|/)) {
      issues.push(`${file}: incomplete table formatting`);
}
    // Check code-block language labels
const codeBlocks = content.match(/```(\w*)\n/g) || [];
const unlabeled = codeBlocks.filter(b => b === '```\n');
if (unlabeled.length > 0) {
      issues.push(`${file}: ${unlabeled.length} code blocks missing a language label`);
}
    // Check heading hierarchy
if (!content.match(/^# /m)) {
      issues.push(`${file}: missing top-level heading`);
}
}
return issues;
}
```
## Project Configuration Example
Configure API sources in `manual-config.json`:
```json
{
"software": {
"name": "Hydro Generator Workbench",
"type": "web"
},
"api_sources": {
"backend": {
"path": "D:/dongdiankaifa9/backend",
"type": "fastapi",
"entry": "app.main:app"
},
"frontend": {
"path": "D:/dongdiankaifa9/frontend",
"type": "typescript",
"entries": ["lib", "hooks", "components"]
},
"hydro_generator_module": {
"path": "D:/dongdiankaifa9/hydro_generator_module",
"type": "python"
},
"multiphysics_network": {
"path": "D:/dongdiankaifa9/multiphysics_network",
"type": "python"
}
}
}
```
## Output Structure
```
{workDir}/api-docs/
├── backend/
│   ├── openapi.json          # OpenAPI 3.0 spec
│   └── API_SUMMARY.md        # Markdown summary (wiki-compatible)
├── frontend/
│   ├── modules.md            # TypeDoc module docs
│   ├── classes/              # Class docs
│   └── functions/            # Function docs
├── hydro_generator/
│   ├── assembler.md          # pdoc module docs
│ ├── blueprint.md
│ └── builders/
└── multiphysics/
├── analysis_domain.md
├── builders.md
└── compilers.md
```
## Quality Gates
- [ ] All configured API sources extracted
- [ ] Markdown format compatible with the wiki CSS
- [ ] Tables render correctly (blue header row)
- [ ] Code blocks carry language labels
- [ ] No empty or broken files
## Next Phase
→ [Phase 3: Parallel Analysis](03-parallel-analysis.md)

View File

@@ -0,0 +1,183 @@
# Phase 3: Parallel Analysis
Use `universal-executor` to generate the six documentation sections in parallel.
## Agent Configuration
```javascript
const AGENT_CONFIGS = {
overview: {
role: 'Product Manager',
output: 'section-overview.md',
    task: 'Write the product overview, core features, and quick-start guide',
    focus: 'Product positioning, target users, 5-step quick start, system requirements',
input: ['exploration-architecture.json', 'README.md', 'package.json'],
tag: 'getting-started'
},
'interface-guide': {
role: 'Product Designer',
output: 'section-interface.md',
    task: 'Write the interface/interaction guide (web screenshots, CLI command interaction, desktop app operations)',
    focus: 'Visual layout, interaction flows, command-line arguments, input/output examples',
input: ['exploration-ui-routes.json', 'src/**', 'pages/**', 'views/**', 'components/**', 'src/commands/**'],
tag: 'interface',
screenshot_rules: `
Annotate interaction points according to project type:
[Web] <!-- SCREENSHOT: id="ss-{feature}" url="{route}" selector="{CSS selector}" description="{description}" -->
[CLI] Show command interaction in code blocks:
\`\`\`bash
$ command --flag value
Expected output here
\`\`\`
[Desktop] <!-- SCREENSHOT: id="ss-{feature}" description="{description}" -->
`
},
'api-reference': {
role: 'Technical Architect',
output: 'section-reference.md',
    task: 'Write the API reference (REST API / library functions / CLI commands)',
    focus: 'Function signatures, endpoint definitions, parameter descriptions, return values, error codes',
pre_extract: 'python .claude/skills/software-manual/scripts/extract_apis.py -o ${workDir}',
input: [
'${workDir}/api-docs/backend/openapi.json', // FastAPI OpenAPI
'${workDir}/api-docs/backend/API_SUMMARY.md', // Backend summary
'${workDir}/api-docs/frontend/**/*.md', // TypeDoc output
'${workDir}/api-docs/hydro_generator/**/*.md', // Python module
'${workDir}/api-docs/multiphysics/**/*.md' // Python module
],
tag: 'api'
},
config: {
role: 'DevOps Engineer',
output: 'section-configuration.md',
    task: 'Write the configuration guide covering environment variables, config files, and deployment settings',
    focus: 'Environment-variable tables, config-file formats, deployment options, security settings',
input: ['exploration-config.json', '.env.example', 'config/**', '*.config.*'],
tag: 'config'
},
troubleshooting: {
role: 'Support Engineer',
output: 'section-troubleshooting.md',
    task: 'Write the troubleshooting guide covering common issues, error codes, and FAQ',
    focus: 'Common problems and solutions, error-code reference, FAQ, getting help',
input: ['docs/troubleshooting.md', 'src/**/errors.*', 'src/**/exceptions.*', 'TROUBLESHOOTING.md'],
tag: 'troubleshooting'
},
'code-examples': {
role: 'Developer Advocate',
output: 'section-examples.md',
    task: 'Write code examples at multiple difficulty levels (beginner 40% / intermediate 40% / advanced 20%)',
    focus: 'Complete runnable code, step-by-step explanations, expected output, best practices',
input: ['examples/**', 'tests/**', 'demo/**', 'samples/**'],
tag: 'examples'
}
};
```
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
// 1. Pre-extract API docs (when pre_extract is configured)
for (const [name, cfg] of Object.entries(AGENT_CONFIGS)) {
if (cfg.pre_extract) {
const cmd = cfg.pre_extract.replace(/\$\{workDir\}/g, workDir);
console.log(`[Pre-extract] ${name}: ${cmd}`);
Bash({ command: cmd });
}
}
// 2. Launch 6 universal-executor agents in parallel
const tasks = Object.entries(AGENT_CONFIGS).map(([name, cfg]) =>
Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: buildAgentPrompt(name, cfg, config, workDir)
})
);
const results = await Promise.all(tasks);
```
## Prompt Construction
```javascript
function buildAgentPrompt(name, cfg, config, workDir) {
const screenshotSection = cfg.screenshot_rules
? `\n[SCREENSHOT RULES]\n${cfg.screenshot_rules}`
: '';
return `
[ROLE] ${cfg.role}
[PROJECT CONTEXT]
Project type: ${config.software.type} (web/cli/sdk/desktop)
Language: ${config.software.language || 'auto-detect'}
Name: ${config.software.name}
[TASK]
${cfg.task}
Output: ${workDir}/sections/${cfg.output}
[INPUT]
- Config: ${workDir}/manual-config.json
- Exploration results: ${workDir}/exploration/
- Scan paths: ${cfg.input.join(', ')}
[CONTENT REQUIREMENTS]
- Heading levels: # ## ### (3 levels max)
- Code blocks: \`\`\`language ... \`\`\` (language label required)
- Tables: | col1 | col2 | format
- Lists: ordered 1. 2. 3. / unordered - - -
- Inline code: \`code\`
- Links: [text](url)
${screenshotSection}
[FOCUS]
${cfg.focus}
[OUTPUT FORMAT]
A Markdown file containing:
- A clear section structure
- Concrete code examples
- Parameter/config tables
- Common use-case descriptions
[RETURN JSON]
{
"status": "completed",
"output_file": "sections/${cfg.output}",
"summary": "<50字>",
"tag": "${cfg.tag}",
"screenshots_needed": []
}
`;
}
```
## Result Collection
```javascript
const agentResults = results.map(r => JSON.parse(r));
const allScreenshots = agentResults.flatMap(r => r.screenshots_needed);
Write(`${workDir}/agent-results.json`, JSON.stringify({
results: agentResults,
screenshots_needed: allScreenshots,
timestamp: new Date().toISOString()
}, null, 2));
```
## Quality Checks
- [ ] Markdown syntax is valid
- [ ] No placeholder text
- [ ] Code blocks carry language labels
- [ ] Screenshot markers are well-formed
- [ ] Cross-references resolve
## Next Phase
→ [Phase 3.5: Consolidation](03.5-consolidation.md)

View File

@@ -0,0 +1,82 @@
# Phase 3.5: Consolidation
Run the quality checks in a `universal-executor` sub-agent so the main agent does not run out of memory.
## Core Principle
**The main agent orchestrates; sub-agents do the heavy computation.**
## Execution Flow
```javascript
const agentResults = JSON.parse(Read(`${workDir}/agent-results.json`));
// Delegate the consolidation checks to universal-executor
const result = Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: buildConsolidationPrompt(workDir)
});
const consolidationResult = JSON.parse(result);
```
## Prompt Construction
```javascript
function buildConsolidationPrompt(workDir) {
return `
[ROLE] Quality Analyst
[TASK]
Check all sections for consistency and completeness
[INPUT]
- Section files: ${workDir}/sections/section-*.md
- Agent results: ${workDir}/agent-results.json
[CHECKS]
1. Markdown syntax validity
2. Screenshot marker format (<!-- SCREENSHOT: id="..." -->)
3. Cross-reference validity
4. Terminology consistency
5. Code-block language labels
[OUTPUT]
1. Write ${workDir}/consolidation-summary.md
2. Write ${workDir}/screenshots-list.json (screenshot inventory)
[RETURN JSON]
{
"status": "completed",
"sections_checked": <n>,
"screenshots_found": <n>,
"issues": { "errors": <n>, "warnings": <n> },
"quality_score": <0-100>
}
`;
}
```
## Agent Responsibilities
1. **Read sections** → check each section-*.md in turn
2. **Extract screenshots** → collect all screenshot markers
3. **Validate references** → verify cross-references resolve
4. **Assess quality** → compute the aggregate score (see the sketch below)
5. **Emit report** → consolidation-summary.md
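The scoring formula is left to the sub-agent; as a rough illustration of step 4, it could be a simple weighted deduction over the issue counts returned in the JSON above (the weights here are hypothetical, not the agent's actual formula):
```javascript
// Hypothetical heuristic: start from 100, deduct per error and warning.
function computeQualityScore(issues) {
  const score = 100 - issues.errors * 10 - issues.warnings * 2;
  return Math.max(0, Math.min(100, score));
}
```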
## Output
- `consolidation-summary.md` - quality report
- `screenshots-list.json` - screenshot inventory (consumed by Phase 4)
## Quality Gates
- [ ] No errors
- [ ] Overall score >= 60%
- [ ] Cross-references resolve
## Next Phase
→ [Phase 4: Screenshot Capture](04-screenshot-capture.md)

View File

@@ -0,0 +1,89 @@
# Phase 4: Screenshot Capture
Use a `universal-executor` sub-agent to capture screenshots via Chrome MCP.
## Core Principle
**The main agent orchestrates; the sub-agent does the screenshot capture.**
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
const screenshotsList = JSON.parse(Read(`${workDir}/screenshots-list.json`));
// Delegate screenshot capture to universal-executor
const result = Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: buildScreenshotPrompt(config, screenshotsList, workDir)
});
const captureResult = JSON.parse(result);
```
## Prompt Construction
```javascript
function buildScreenshotPrompt(config, screenshotsList, workDir) {
return `
[ROLE] Screenshot Capturer
[TASK]
Capture screenshots in batch via Chrome MCP
[INPUT]
- Config: ${workDir}/manual-config.json
- Screenshot list: ${workDir}/screenshots-list.json
[STEPS]
1. Check Chrome MCP availability (mcp__chrome__*)
2. Start the dev server: ${config.screenshot_config?.dev_command || 'npm run dev'}
3. Wait until the server is ready: ${config.screenshot_config?.dev_url || 'http://localhost:3000'}
4. Walk the screenshot list, calling mcp__chrome__screenshot for each entry
5. Save screenshots to ${workDir}/screenshots/
6. Generate the manifest: ${workDir}/screenshots/screenshots-manifest.json
7. Stop the dev server
[MCP CALLS]
- mcp__chrome__screenshot({ url, selector?, viewport })
- Save as PNG files
[FALLBACK]
If Chrome MCP is unavailable, generate a manual capture guide: MANUAL_CAPTURE.md
[RETURN JSON]
{
"status": "completed|skipped",
"captured": <n>,
"failed": <n>,
"manifest_file": "screenshots-manifest.json"
}
`;
}
```
## Agent Responsibilities
1. **Check MCP** → Chrome MCP availability
2. **Start server** → development server
3. **Capture in batch** → call mcp__chrome__screenshot
4. **Save files** → screenshots/*.png
5. **Generate manifest** → screenshots-manifest.json
## Output
- `screenshots/*.png` - screenshot files
- `screenshots/screenshots-manifest.json` - manifest
- `screenshots/MANUAL_CAPTURE.md` - manual guide (fallback)
## Quality Gates
- [ ] High-priority screenshots captured
- [ ] Consistent dimensions (1280×800)
- [ ] No blank screenshots
- [ ] Manifest complete (a gate-check sketch follows)
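A minimal sketch of checking these gates, assuming the manifest fields produced by `generateScreenshotManifest` in scripts/screenshot-helper.md:
```javascript
// Hypothetical gate check over screenshots-manifest.json.
function checkScreenshotGates(workDir) {
  const manifest = JSON.parse(Read(`${workDir}/screenshots/screenshots-manifest.json`));
  const problems = [];
  if (manifest.failed > 0) problems.push(`${manifest.failed} captures failed`);
  for (const ss of manifest.screenshots) {
    // A zero-byte entry usually means a blank capture.
    if (!ss.size) problems.push(`${ss.file} appears to be empty`);
  }
  return { passed: problems.length === 0, problems };
}
```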
## Next Phase
→ [Phase 5: HTML Assembly](05-html-assembly.md)

View File

@@ -0,0 +1,132 @@
# Phase 5: HTML Assembly
Use a `universal-executor` sub-agent to generate the final HTML so the main agent does not run out of memory.
## Core Principle
**The main agent orchestrates; sub-agents do the heavy computation.**
## Execution Flow
```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
// Delegate HTML assembly to universal-executor
const result = Task({
subagent_type: 'universal-executor',
run_in_background: false,
prompt: buildAssemblyPrompt(config, workDir)
});
const buildResult = JSON.parse(result);
```
## Prompt Construction
```javascript
function buildAssemblyPrompt(config, workDir) {
return `
[ROLE] HTML Assembler
[TASK]
Generate a TiddlyWiki-style interactive HTML manual (mature libraries, no external CDN dependencies)
[INPUT]
- Template: .claude/skills/software-manual/templates/tiddlywiki-shell.html
- CSS: .claude/skills/software-manual/templates/css/wiki-base.css, wiki-dark.css
- Config: ${workDir}/manual-config.json
- Sections: ${workDir}/sections/section-*.md
- Agent results: ${workDir}/agent-results.json (includes tag info)
- Screenshots: ${workDir}/screenshots/
[LIBRARIES TO EMBED]
1. marked.js (v14+) - Markdown to HTML
   - Fetch the contents of https://unpkg.com/marked/marked.min.js and inline them
2. highlight.js (v11+) - code syntax highlighting
   - Core + common language packs (js, ts, python, bash, json, yaml, html, css)
   - Use the github-dark theme
[STEPS]
1. Read the HTML template and CSS
2. Inline the marked.js and highlight.js code
3. Read agent-results.json and extract each section's tag
4. Dynamically generate {{TAG_BUTTONS_HTML}} (based on the tags actually used)
5. Read each section-*.md and convert it to HTML with marked
6. Add data-language attributes and syntax highlighting to code blocks
7. Process <!-- SCREENSHOT: id="..." --> markers, embedding Base64 images
8. Generate the table of contents and search index
9. Assemble the final HTML and write it to ${workDir}/${config.software.name}-使用手册.html
[CONTENT FORMATTING]
- Code blocks: dark background + language label + syntax highlighting
- Tables: blue header row + borders + hover effect
- Inline code: red highlight
- Lists: enhanced ordered/unordered styles
- Left navigation: fixed sidebar + TOC
[RETURN JSON]
{
"status": "completed",
"output_file": "${config.software.name}-使用手册.html",
"file_size": "<size>",
"sections_count": <n>,
"tags_generated": [],
"screenshots_embedded": <n>
}
`;
}
```
## Agent Responsibilities
1. **Read templates** → HTML + CSS
2. **Convert sections** → Markdown → HTML tiddlers
3. **Embed screenshots** → Base64 encoding
4. **Build index** → search data
5. **Assemble output** → single-file HTML
## Markdown Conversion Rules
Implemented inside the agent:
```
# H1 → <h1>
## H2 → <h2>
### H3 → <h3>
```code``` → <pre><code>
**bold** → <strong>
*italic* → <em>
[text](url) → <a href>
- item → <li>
<!-- SCREENSHOT: id="xxx" --> → <figure><img src="data:..."></figure>
```
## Tiddler Structure
```html
<article class="tiddler" id="tiddler-{name}" data-tags="..." data-difficulty="...">
<header class="tiddler-header">
<h2><button class="collapse-toggle"></button> {title}</h2>
<div class="tiddler-meta">{badges}</div>
</header>
<div class="tiddler-content">{html}</div>
</article>
```
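A sketch of how the assembly agent might emit one such tiddler from a converted section (the helper is hypothetical; it assumes marked is inlined and initialized as described above):
```javascript
// Hypothetical renderer: wrap one converted section in the tiddler structure above.
function renderTiddler(name, title, markdown, tags) {
  const html = marked.parse(markdown);
  return `
<article class="tiddler" id="tiddler-${name}" data-tags="${tags.join(' ')}">
  <header class="tiddler-header">
    <h2><button class="collapse-toggle"></button> ${title}</h2>
  </header>
  <div class="tiddler-content">${html}</div>
</article>`;
}
```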
## Output
- `{软件名}-使用手册.html` - final HTML
- `build-report.json` - build report
## Quality Gates
- [ ] HTML renders correctly
- [ ] Search works
- [ ] Collapse/expand behaves correctly
- [ ] Theme toggle persists
- [ ] Screenshots display correctly
- [ ] File size < 10MB
## Next Phase
→ [Phase 6: Iterative Refinement](06-iterative-refinement.md)

View File

@@ -0,0 +1,259 @@
# Phase 6: Iterative Refinement
Preview, collect feedback, and iterate until quality meets standards.
## Objective
- Preview generated HTML in browser
- Collect user feedback
- Address issues iteratively
- Finalize documentation
## Execution Steps
### Step 1: Preview HTML
```javascript
const buildReport = JSON.parse(Read(`${workDir}/build-report.json`));
const outputFile = `${workDir}/${buildReport.output}`;
// Open in default browser for preview
Bash({ command: `start "" "${outputFile}"` }); // Windows (empty title arg so the quoted path opens)
// Bash({ command: `open "${outputFile}"` }); // macOS
// Report to user
console.log(`
📖 Manual Preview
File: ${buildReport.output}
Size: ${buildReport.size_human}
Sections: ${buildReport.sections}
Screenshots: ${buildReport.screenshots}
Please review the manual in your browser.
`);
```
### Step 2: Collect Feedback
```javascript
const feedback = await AskUserQuestion({
questions: [
{
question: "How does the manual look overall?",
header: "Overall",
options: [
{ label: "Looks great!", description: "Ready to finalize" },
{ label: "Minor issues", description: "Small tweaks needed" },
{ label: "Major issues", description: "Significant changes required" },
{ label: "Missing content", description: "Need to add more sections" }
],
multiSelect: false
},
{
question: "Which aspects need improvement? (Select all that apply)",
header: "Improvements",
options: [
{ label: "Content accuracy", description: "Fix incorrect information" },
{ label: "More examples", description: "Add more code examples" },
{ label: "Better screenshots", description: "Retake or add screenshots" },
{ label: "Styling/Layout", description: "Improve visual appearance" }
],
multiSelect: true
}
]
});
```
### Step 3: Address Feedback
Based on feedback, take appropriate action:
#### Minor Issues
```javascript
if (feedback.overall === "Minor issues") {
// Prompt for specific changes
const details = await AskUserQuestion({
questions: [{
question: "What specific changes are needed?",
header: "Details",
options: [
{ label: "Typo fixes", description: "Fix spelling/grammar" },
{ label: "Reorder sections", description: "Change section order" },
{ label: "Update content", description: "Modify existing text" },
{ label: "Custom changes", description: "I'll describe the changes" }
],
multiSelect: true
}]
});
// Apply changes based on user input
applyMinorChanges(details);
}
```
#### Major Issues
```javascript
if (feedback.overall === "Major issues") {
// Return to relevant phase
console.log(`
Major issues require returning to an earlier phase:
- Content issues → Phase 3 (Parallel Analysis)
- Screenshot issues → Phase 4 (Screenshot Capture)
- Structure issues → Phase 2 (Project Exploration)
Which phase should we return to?
`);
const phase = await selectPhase();
return { action: 'restart', from_phase: phase };
}
```
#### Missing Content
```javascript
if (feedback.overall === "Missing content") {
// Identify missing sections
const missing = await AskUserQuestion({
questions: [{
question: "What content is missing?",
header: "Missing",
options: [
{ label: "API endpoints", description: "More API documentation" },
{ label: "UI features", description: "Additional UI guides" },
{ label: "Examples", description: "More code examples" },
{ label: "Troubleshooting", description: "More FAQ items" }
],
multiSelect: true
}]
});
// Run additional agent(s) for missing content
await runSupplementaryAgents(missing);
}
```
### Step 4: Save Iteration
```javascript
// Save current version before changes
const iterationNum = getNextIterationNumber(workDir);
const iterationDir = `${workDir}/iterations`;
// Copy current version
Bash({ command: `copy "${outputFile}" "${iterationDir}\\v${iterationNum}.html"` });
// Log iteration
const iterationLog = {
version: iterationNum,
timestamp: new Date().toISOString(),
feedback: feedback,
changes: appliedChanges
};
Write(`${iterationDir}/iteration-${iterationNum}.json`, JSON.stringify(iterationLog, null, 2));
```
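`getNextIterationNumber` is used above but not defined; one plausible sketch using the document's `Glob` pseudo-tool:
```javascript
// Hypothetical: next version number = saved iteration logs + 1.
function getNextIterationNumber(workDir) {
  const logs = Glob(`${workDir}/iterations/iteration-*.json`);
  return logs.length + 1;
}
```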
### Step 5: Regenerate if Needed
```javascript
if (changesApplied) {
// Re-run HTML assembly with updated sections
await runPhase('05-html-assembly');
// Open updated preview
  Bash({ command: `start "" "${outputFile}"` });
}
```
### Step 6: Finalize
When user approves:
```javascript
if (feedback.overall === "Looks great!") {
// Final quality check
const finalReport = {
...buildReport,
iterations: iterationNum,
finalized_at: new Date().toISOString(),
quality_score: calculateFinalQuality()
};
Write(`${workDir}/final-report.json`, JSON.stringify(finalReport, null, 2));
// Suggest final location
console.log(`
✅ Manual Finalized!
Output: ${buildReport.output}
Size: ${buildReport.size_human}
Quality: ${finalReport.quality_score}%
Iterations: ${iterationNum}
Suggested actions:
  1. Copy into the project (e.g. docs/): copy "${outputFile}" "docs/"
2. Add to version control
3. Publish to documentation site
`);
return { status: 'completed', output: outputFile };
}
```
## Iteration History
Each iteration is logged:
```
iterations/
├── v1.html # First version
├── iteration-1.json # Feedback and changes
├── v2.html # After first iteration
├── iteration-2.json # Feedback and changes
└── ...
```
## Quality Metrics
Track improvement across iterations:
```javascript
const qualityMetrics = {
content_completeness: 0, // All sections present
screenshot_coverage: 0, // Screenshots for all UI
example_diversity: 0, // Different difficulty levels
search_accuracy: 0, // Search returns relevant results
user_satisfaction: 0 // Based on feedback
};
```
## Exit Conditions
The refinement phase ends when:
1. User explicitly approves ("Looks great!")
2. Maximum iterations reached (configurable, default: 5)
3. Quality score exceeds threshold (default: 90%)
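Taken together, the loop's stop check might look like the following sketch (the option names are assumptions; only the defaults are stated above):
```javascript
// Hypothetical combination of the three exit conditions.
function shouldStopRefinement(feedback, iterationNum, qualityScore, opts = {}) {
  const { maxIterations = 5, qualityThreshold = 90 } = opts;
  return feedback.overall === 'Looks great!' ||
         iterationNum >= maxIterations ||
         qualityScore >= qualityThreshold;
}
```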
## Output
- **Final HTML**: `{软件名}-使用手册.html`
- **Final Report**: `final-report.json`
- **Iteration History**: `iterations/`
## Completion
When finalized, the skill is complete. Final output location:
```
.workflow/.scratchpad/manual-{timestamp}/
├── {软件名}-使用手册.html ← Final deliverable
├── final-report.json
└── iterations/
```
Consider copying to a permanent location like `docs/` or project root.

View File

@@ -0,0 +1,245 @@
# API Documentation Extraction Script
Automatically extracts API documentation by project type; supports FastAPI, Next.js, and Python modules.
## Supported Stacks
| Type | Stack | Tool | Output Format |
|------|-------|------|---------------|
| Backend | FastAPI | OpenAPI + widdershins | Markdown |
| Frontend | Next.js/TypeScript | TypeDoc | Markdown |
| Python Module | Python | pdoc | Markdown/HTML |
## Usage
### 1. FastAPI Backend (OpenAPI)
```bash
# Extract the OpenAPI JSON
cd D:/dongdiankaifa9/backend
python -c "
from app.main import app
import json
print(json.dumps(app.openapi(), indent=2))
" > api-docs/openapi.json
# Convert to Markdown (via widdershins)
npx widdershins api-docs/openapi.json -o api-docs/API_REFERENCE.md --language_tabs 'python:Python' 'javascript:JavaScript' 'bash:cURL'
```
**Alternative (no server start required)**:
```python
# scripts/extract_fastapi_openapi.py
import sys
sys.path.insert(0, 'D:/dongdiankaifa9/backend')
from app.main import app
import json
openapi_schema = app.openapi()
with open('api-docs/openapi.json', 'w', encoding='utf-8') as f:
json.dump(openapi_schema, f, indent=2, ensure_ascii=False)
print(f"Extracted {len(openapi_schema.get('paths', {}))} endpoints")
```
### 2. Next.js Frontend (TypeDoc)
```bash
cd D:/dongdiankaifa9/frontend
# Install TypeDoc
npm install --save-dev typedoc typedoc-plugin-markdown
# Generate the docs
npx typedoc --plugin typedoc-plugin-markdown \
--out api-docs \
--entryPoints "./lib" "./hooks" "./components" \
--entryPointStrategy expand \
--exclude "**/node_modules/**" \
--exclude "**/*.test.*" \
--readme none
```
**typedoc.json configuration**:
```json
{
"$schema": "https://typedoc.org/schema.json",
"entryPoints": ["./lib", "./hooks", "./components"],
"entryPointStrategy": "expand",
"out": "api-docs",
"plugin": ["typedoc-plugin-markdown"],
"exclude": ["**/node_modules/**", "**/*.test.*", "**/*.spec.*"],
"excludePrivate": true,
"excludeInternal": true,
"readme": "none",
"name": "Frontend API Reference"
}
```
### 3. Python Module (pdoc)
```bash
# Install pdoc
pip install pdoc
# hydro_generator_module
cd D:/dongdiankaifa9
pdoc hydro_generator_module \
--output-dir api-docs/hydro_generator \
--format markdown \
--no-show-source
# multiphysics_network
pdoc multiphysics_network \
--output-dir api-docs/multiphysics \
--format markdown \
--no-show-source
```
**Alternative: Sphinx (more powerful)**:
```bash
# Install Sphinx
pip install sphinx sphinx-markdown-builder
# Generate API docs
sphinx-apidoc -o docs/source hydro_generator_module
cd docs && make markdown
```
## Integrated Script
```python
#!/usr/bin/env python3
# scripts/extract_all_apis.py
import subprocess
import sys
from pathlib import Path
PROJECTS = {
'backend': {
'path': 'D:/dongdiankaifa9/backend',
'type': 'fastapi',
'output': 'api-docs/backend'
},
'frontend': {
'path': 'D:/dongdiankaifa9/frontend',
'type': 'typescript',
'output': 'api-docs/frontend'
},
'hydro_generator_module': {
'path': 'D:/dongdiankaifa9/hydro_generator_module',
'type': 'python',
'output': 'api-docs/hydro_generator'
},
'multiphysics_network': {
'path': 'D:/dongdiankaifa9/multiphysics_network',
'type': 'python',
'output': 'api-docs/multiphysics'
}
}
def extract_fastapi(config):
"""提取 FastAPI OpenAPI 文档"""
path = Path(config['path'])
sys.path.insert(0, str(path))
try:
from app.main import app
import json
output_dir = Path(config['output'])
output_dir.mkdir(parents=True, exist_ok=True)
# Export the OpenAPI JSON
with open(output_dir / 'openapi.json', 'w', encoding='utf-8') as f:
json.dump(app.openapi(), f, indent=2, ensure_ascii=False)
print(f"✓ FastAPI: {len(app.openapi().get('paths', {}))} endpoints")
return True
except Exception as e:
print(f"✗ FastAPI error: {e}")
return False
def extract_typescript(config):
"""提取 TypeScript 文档"""
try:
subprocess.run([
'npx', 'typedoc',
'--plugin', 'typedoc-plugin-markdown',
'--out', config['output'],
'--entryPoints', './lib', './hooks',
'--entryPointStrategy', 'expand'
], cwd=config['path'], check=True)
print(f"✓ TypeDoc: {config['path']}")
return True
except Exception as e:
print(f"✗ TypeDoc error: {e}")
return False
def extract_python(config):
"""提取 Python 模块文档"""
try:
module_name = Path(config['path']).name
subprocess.run([
'pdoc', module_name,
'--output-dir', config['output'],
'--format', 'markdown'
], cwd=Path(config['path']).parent, check=True)
print(f"✓ pdoc: {module_name}")
return True
except Exception as e:
print(f"✗ pdoc error: {e}")
return False
EXTRACTORS = {
'fastapi': extract_fastapi,
'typescript': extract_typescript,
'python': extract_python
}
if __name__ == '__main__':
for name, config in PROJECTS.items():
print(f"\n[{name}]")
extractor = EXTRACTORS.get(config['type'])
if extractor:
extractor(config)
```
## Phase 3 Integration
Add to the `api-reference` agent prompt:
```
[PRE-EXTRACTION]
Run the API extraction script to obtain structured docs:
- python scripts/extract_all_apis.py
[INPUT FILES]
- api-docs/backend/openapi.json (FastAPI endpoints)
- api-docs/frontend/*.md (TypeDoc output)
- api-docs/hydro_generator/*.md (pdoc output)
- api-docs/multiphysics/*.md (pdoc output)
```
## Output Structure
```
api-docs/
├── backend/
│ ├── openapi.json # Raw OpenAPI spec
│ └── API_REFERENCE.md # Converted Markdown
├── frontend/
│ ├── modules.md
│ ├── functions.md
│ └── classes/
├── hydro_generator/
│ ├── assembler.md
│ ├── blueprint.md
│ └── builders/
└── multiphysics/
├── analysis_domain.md
├── builders.md
└── compilers.md
```

View File

@@ -0,0 +1,85 @@
# Library Bundling Instructions
## Dependencies
The HTML assembly phase inlines the following mature libraries (no CDN dependencies):
### 1. marked.js - Markdown parsing
```bash
# Fetch the latest version
curl -o templates/libs/marked.min.js https://unpkg.com/marked/marked.min.js
```
### 2. highlight.js - Code syntax highlighting
```bash
# Fetch core + common language packs
curl -o templates/libs/highlight.min.js https://unpkg.com/@highlightjs/cdn-assets/highlight.min.js
# Fetch the github-dark theme
curl -o templates/libs/github-dark.min.css https://unpkg.com/@highlightjs/cdn-assets/styles/github-dark.min.css
```
## Inlining
The Phase 5 agent should:
1. Read `templates/libs/*.js` and `*.css`
2. Embed their contents in the HTML's `<script>` and `<style>` tags
3. Initialize after `DOMContentLoaded`:
```javascript
// Initialize marked (note: marked v5+ removed the built-in `highlight`
// option; highlighting is applied below via hljs.highlightElement instead)
marked.setOptions({
  breaks: true,
  gfm: true
});
// Apply highlighting
document.querySelectorAll('pre code').forEach(block => {
hljs.highlightElement(block);
});
```
## Fallback
If the external libraries cannot be fetched, use a built-in simplified Markdown converter:
```javascript
function simpleMarkdown(md) {
return md
.replace(/^### (.+)$/gm, '<h3>$1</h3>')
.replace(/^## (.+)$/gm, '<h2>$1</h2>')
.replace(/^# (.+)$/gm, '<h1>$1</h1>')
.replace(/```(\w+)?\n([\s\S]*?)```/g, (m, lang, code) =>
`<pre data-language="${lang || ''}"><code class="language-${lang || ''}">${escapeHtml(code)}</code></pre>`)
.replace(/`([^`]+)`/g, '<code>$1</code>')
.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
.replace(/\*(.+?)\*/g, '<em>$1</em>')
.replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2">$1</a>')
.replace(/^\|(.+)\|$/gm, processTableRow)
.replace(/^- (.+)$/gm, '<li>$1</li>')
.replace(/^\d+\. (.+)$/gm, '<li>$1</li>');
}
```
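`escapeHtml` and `processTableRow` are assumed by `simpleMarkdown` but not shown; minimal sketches of both (not the shipped implementations):
```javascript
// Minimal HTML escaping for code-block contents.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;')
          .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

// Replacer for the table-row regex above: receives (match, capturedRow).
function processTableRow(match, row) {
  const cells = row.split('|').map(c => `<td>${c.trim()}</td>`).join('');
  return `<tr>${cells}</tr>`;
}
```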
## File Layout
```
templates/
├── libs/
│ ├── marked.min.js # Markdown parser
│ ├── highlight.min.js # Syntax highlighting
│ └── github-dark.min.css # Code theme
├── tiddlywiki-shell.html
└── css/
├── wiki-base.css
└── wiki-dark.css
```

View File

@@ -0,0 +1,270 @@
#!/usr/bin/env python3
"""
API documentation extraction script.
Supports FastAPI, TypeScript, and Python modules.
"""
import subprocess
import sys
import json
from pathlib import Path
from typing import Dict, Any, Optional
# Project configuration
PROJECTS = {
'backend': {
'path': Path('D:/dongdiankaifa9/backend'),
'type': 'fastapi',
'entry': 'app.main:app',
'output': 'api-docs/backend'
},
'frontend': {
'path': Path('D:/dongdiankaifa9/frontend'),
'type': 'typescript',
'entries': ['lib', 'hooks', 'components'],
'output': 'api-docs/frontend'
},
'hydro_generator_module': {
'path': Path('D:/dongdiankaifa9/hydro_generator_module'),
'type': 'python',
'output': 'api-docs/hydro_generator'
},
'multiphysics_network': {
'path': Path('D:/dongdiankaifa9/multiphysics_network'),
'type': 'python',
'output': 'api-docs/multiphysics'
}
}
def extract_fastapi(name: str, config: Dict[str, Any], output_base: Path) -> bool:
"""提取 FastAPI OpenAPI 文档"""
path = config['path']
output_dir = output_base / config['output']
output_dir.mkdir(parents=True, exist_ok=True)
# Add the project path to sys.path
if str(path) not in sys.path:
sys.path.insert(0, str(path))
try:
# Import the app dynamically
from app.main import app
# Get the OpenAPI schema
openapi_schema = app.openapi()
# Save the JSON
json_path = output_dir / 'openapi.json'
with open(json_path, 'w', encoding='utf-8') as f:
json.dump(openapi_schema, f, indent=2, ensure_ascii=False)
# Generate the Markdown summary
md_path = output_dir / 'API_SUMMARY.md'
generate_api_markdown(openapi_schema, md_path)
endpoints = len(openapi_schema.get('paths', {}))
print(f" ✓ Extracted {endpoints} endpoints → {output_dir}")
return True
except ImportError as e:
print(f" ✗ Import error: {e}")
return False
except Exception as e:
print(f" ✗ Error: {e}")
return False
def generate_api_markdown(schema: Dict, output_path: Path):
"""从 OpenAPI schema 生成 Markdown"""
lines = [
f"# {schema.get('info', {}).get('title', 'API Reference')}",
"",
f"Version: {schema.get('info', {}).get('version', '1.0.0')}",
"",
"## Endpoints",
"",
"| Method | Path | Summary |",
"|--------|------|---------|"
]
for path, methods in schema.get('paths', {}).items():
for method, details in methods.items():
if method in ('get', 'post', 'put', 'delete', 'patch'):
summary = details.get('summary', details.get('operationId', '-'))
lines.append(f"| `{method.upper()}` | `{path}` | {summary} |")
lines.extend([
"",
"## Schemas",
""
])
for name, schema_def in schema.get('components', {}).get('schemas', {}).items():
lines.append(f"### {name}")
lines.append("")
if 'properties' in schema_def:
lines.append("| Property | Type | Required |")
lines.append("|----------|------|----------|")
required = schema_def.get('required', [])
for prop, prop_def in schema_def['properties'].items():
prop_type = prop_def.get('type', prop_def.get('$ref', 'any'))
is_required = '✓' if prop in required else '✗'
lines.append(f"| `{prop}` | {prop_type} | {is_required} |")
lines.append("")
with open(output_path, 'w', encoding='utf-8') as f:
f.write('\n'.join(lines))
def extract_typescript(name: str, config: Dict[str, Any], output_base: Path) -> bool:
"""提取 TypeScript 文档 (TypeDoc)"""
path = config['path']
output_dir = output_base / config['output']
# Check whether TypeDoc is installed
try:
result = subprocess.run(
['npx', 'typedoc', '--version'],
cwd=path,
capture_output=True,
text=True
)
if result.returncode != 0:
print(f" ⚠ TypeDoc not installed, installing...")
subprocess.run(
['npm', 'install', '--save-dev', 'typedoc', 'typedoc-plugin-markdown'],
cwd=path,
check=True
)
except FileNotFoundError:
print(f" ✗ npm/npx not found")
return False
# Run TypeDoc
try:
entries = config.get('entries', ['lib'])
cmd = [
'npx', 'typedoc',
'--plugin', 'typedoc-plugin-markdown',
'--out', str(output_dir),
'--entryPointStrategy', 'expand',
'--exclude', '**/node_modules/**',
'--exclude', '**/*.test.*',
'--readme', 'none'
]
for entry in entries:
entry_path = path / entry
if entry_path.exists():
cmd.extend(['--entryPoints', str(entry_path)])
result = subprocess.run(cmd, cwd=path, capture_output=True, text=True)
if result.returncode == 0:
print(f" ✓ TypeDoc generated → {output_dir}")
return True
else:
print(f" ✗ TypeDoc error: {result.stderr[:200]}")
return False
except Exception as e:
print(f" ✗ Error: {e}")
return False
def extract_python_module(name: str, config: Dict[str, Any], output_base: Path) -> bool:
"""提取 Python 模块文档 (pdoc)"""
path = config['path']
output_dir = output_base / config['output']
module_name = path.name
# Check pdoc
try:
subprocess.run(['pdoc', '--version'], capture_output=True, check=True)
except (FileNotFoundError, subprocess.CalledProcessError):
print(f" ⚠ pdoc not installed, installing...")
subprocess.run([sys.executable, '-m', 'pip', 'install', 'pdoc'], check=True)
# Run pdoc
try:
result = subprocess.run(
[
'pdoc', module_name,
'--output-dir', str(output_dir),
'--format', 'markdown'
],
cwd=path.parent,
capture_output=True,
text=True
)
if result.returncode == 0:
# Count generated files
md_files = list(output_dir.glob('**/*.md'))
print(f" ✓ pdoc generated {len(md_files)} files → {output_dir}")
return True
else:
print(f" ✗ pdoc error: {result.stderr[:200]}")
return False
except Exception as e:
print(f" ✗ Error: {e}")
return False
EXTRACTORS = {
'fastapi': extract_fastapi,
'typescript': extract_typescript,
'python': extract_python_module
}
def main(output_base: Optional[str] = None, projects: Optional[list] = None):
"""主入口"""
base = Path(output_base) if output_base else Path.cwd()
print("=" * 50)
print("API Documentation Extraction")
print("=" * 50)
results = {}
for name, config in PROJECTS.items():
if projects and name not in projects:
continue
print(f"\n[{name}] ({config['type']})")
if not config['path'].exists():
print(f" ✗ Path not found: {config['path']}")
results[name] = False
continue
extractor = EXTRACTORS.get(config['type'])
if extractor:
results[name] = extractor(name, config, base)
else:
print(f" ✗ Unknown type: {config['type']}")
results[name] = False
# Summary
print("\n" + "=" * 50)
print("Summary")
print("=" * 50)
success = sum(1 for v in results.values() if v)
print(f"Success: {success}/{len(results)}")
return all(results.values())
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser(description='Extract API documentation')
parser.add_argument('--output', '-o', default='.', help='Output base directory')
parser.add_argument('--projects', '-p', nargs='+', help='Specific projects to extract')
args = parser.parse_args()
success = main(args.output, args.projects)
sys.exit(0 if success else 1)

View File

@@ -0,0 +1,447 @@
# Screenshot Helper
Guide for capturing screenshots using Chrome MCP.
## Overview
This script helps capture screenshots of web interfaces for the software manual using Chrome MCP or fallback methods.
## Chrome MCP Prerequisites
### Check MCP Availability
```javascript
async function checkChromeMCPAvailability() {
try {
// Attempt to get Chrome version via MCP
const version = await mcp__chrome__getVersion();
return {
available: true,
browser: version.browser,
version: version.version
};
} catch (error) {
return {
available: false,
error: error.message
};
}
}
```
### MCP Configuration
Expected Claude configuration for Chrome MCP:
```json
{
"mcpServers": {
"chrome": {
"command": "npx",
"args": ["@anthropic-ai/mcp-chrome"],
"env": {
"CHROME_PATH": "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"
}
}
}
}
```
## Screenshot Workflow
### Step 1: Prepare Environment
```javascript
async function prepareScreenshotEnvironment(workDir, config) {
const screenshotDir = `${workDir}/screenshots`;
// Create directory
Bash({ command: `mkdir -p "${screenshotDir}"` });
// Check Chrome MCP
const chromeMCP = await checkChromeMCPAvailability();
if (!chromeMCP.available) {
console.log('Chrome MCP not available. Will generate manual guide.');
return { mode: 'manual' };
}
// Start development server if needed
if (config.screenshot_config?.dev_command) {
const server = await startDevServer(config);
return { mode: 'auto', server, screenshotDir };
}
return { mode: 'auto', screenshotDir };
}
```
### Step 2: Start Development Server
```javascript
async function startDevServer(config) {
const devCommand = config.screenshot_config.dev_command;
const devUrl = config.screenshot_config.dev_url;
// Start server in background
const server = Bash({
command: devCommand,
run_in_background: true
});
console.log(`Starting dev server: ${devCommand}`);
// Wait for server to be ready
const ready = await waitForServer(devUrl, 30000);
if (!ready) {
throw new Error(`Server at ${devUrl} did not start in time`);
}
console.log(`Dev server ready at ${devUrl}`);
return server;
}
async function waitForServer(url, timeout = 30000) {
const start = Date.now();
while (Date.now() - start < timeout) {
try {
const response = await fetch(url, { method: 'HEAD' });
if (response.ok) return true;
} catch (e) {
// Server not ready
}
await sleep(1000);
}
return false;
}
```
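`sleep` is used throughout these snippets but never defined; the usual Promise-based one-liner:
```javascript
// Resolve after the given number of milliseconds.
const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));
```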
### Step 3: Capture Screenshots
```javascript
async function captureScreenshots(screenshots, config, workDir) {
const results = {
captured: [],
failed: []
};
const devUrl = config.screenshot_config.dev_url;
const screenshotDir = `${workDir}/screenshots`;
for (const ss of screenshots) {
try {
// Build full URL
const fullUrl = new URL(ss.url, devUrl).href;
console.log(`Capturing: ${ss.id} (${fullUrl})`);
// Configure capture options
const options = {
url: fullUrl,
viewport: { width: 1280, height: 800 },
fullPage: ss.fullPage || false
};
// Wait for specific element if specified
if (ss.wait_for) {
options.waitFor = ss.wait_for;
}
// Capture specific element if selector provided
if (ss.selector) {
options.selector = ss.selector;
}
// Add delay for animations
await sleep(500);
// Capture via Chrome MCP
const result = await mcp__chrome__screenshot(options);
// Save as PNG
const filename = `${ss.id}.png`;
Write(`${screenshotDir}/${filename}`, result.data, { encoding: 'base64' });
results.captured.push({
id: ss.id,
file: filename,
url: ss.url,
description: ss.description,
size: result.data.length
});
} catch (error) {
console.error(`Failed to capture ${ss.id}:`, error.message);
results.failed.push({
id: ss.id,
url: ss.url,
error: error.message
});
}
}
return results;
}
```
### Step 4: Generate Manifest
```javascript
function generateScreenshotManifest(results, workDir) {
const manifest = {
generated: new Date().toISOString(),
total: results.captured.length + results.failed.length,
captured: results.captured.length,
failed: results.failed.length,
screenshots: results.captured,
failures: results.failed
};
Write(`${workDir}/screenshots/screenshots-manifest.json`,
JSON.stringify(manifest, null, 2));
return manifest;
}
```
### Step 5: Cleanup
```javascript
async function cleanupScreenshotEnvironment(env) {
if (env.server) {
console.log('Stopping dev server...');
KillShell({ shell_id: env.server.task_id });
}
}
```
## Main Runner
```javascript
async function runScreenshotCapture(workDir, screenshots) {
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
// Prepare environment
const env = await prepareScreenshotEnvironment(workDir, config);
if (env.mode === 'manual') {
// Generate manual capture guide
generateManualCaptureGuide(screenshots, workDir);
return { success: false, mode: 'manual' };
}
try {
// Capture screenshots
const results = await captureScreenshots(screenshots, config, workDir);
// Generate manifest
const manifest = generateScreenshotManifest(results, workDir);
// Generate manual guide for failed captures
if (results.failed.length > 0) {
generateManualCaptureGuide(results.failed, workDir);
}
return {
success: true,
captured: results.captured.length,
failed: results.failed.length,
manifest
};
} finally {
// Cleanup
await cleanupScreenshotEnvironment(env);
}
}
```
## Manual Capture Fallback
When Chrome MCP is unavailable:
```javascript
function generateManualCaptureGuide(screenshots, workDir) {
const guide = `
# Manual Screenshot Capture Guide
Chrome MCP is not available. Please capture screenshots manually.
## Prerequisites
1. Start your development server
2. Open a browser
3. Use a screenshot tool (Snipping Tool, Screenshot, etc.)
## Screenshots Required
${screenshots.map((ss, i) => `
### ${i + 1}. ${ss.id}
- **URL**: ${ss.url}
- **Description**: ${ss.description}
- **Save as**: \`screenshots/${ss.id}.png\`
${ss.selector ? `- **Capture area**: \`${ss.selector}\` element only` : '- **Type**: Full page or viewport'}
${ss.wait_for ? `- **Wait for**: \`${ss.wait_for}\` to be visible` : ''}
**Steps:**
1. Navigate to ${ss.url}
${ss.wait_for ? `2. Wait for ${ss.wait_for} to appear` : ''}
${ss.selector ? `${ss.wait_for ? 3 : 2}. Capture only the ${ss.selector} area` : `${ss.wait_for ? 3 : 2}. Capture the full viewport`}
${ss.wait_for ? 4 : 3}. Save as \`${ss.id}.png\`
`).join('\n')}
## After Capturing
1. Place all PNG files in the \`screenshots/\` directory
2. Ensure filenames match exactly (case-sensitive)
3. Run Phase 5 (HTML Assembly) to continue
## Screenshot Specifications
- **Format**: PNG
- **Width**: 1280px recommended
- **Quality**: High
- **Annotations**: None (add in post-processing if needed)
`;
Write(`${workDir}/screenshots/MANUAL_CAPTURE.md`, guide);
}
```
## Advanced Options
### Viewport Sizes
```javascript
const viewportPresets = {
desktop: { width: 1280, height: 800 },
tablet: { width: 768, height: 1024 },
mobile: { width: 375, height: 667 },
wide: { width: 1920, height: 1080 }
};
async function captureResponsive(ss, config, workDir) {
const results = [];
for (const [name, viewport] of Object.entries(viewportPresets)) {
const result = await mcp__chrome__screenshot({
url: ss.url,
viewport
});
const filename = `${ss.id}-${name}.png`;
Write(`${workDir}/screenshots/${filename}`, result.data, { encoding: 'base64' });
results.push({ viewport: name, file: filename });
}
return results;
}
```
### Before/After Comparisons
```javascript
async function captureInteraction(ss, config, workDir) {
const baseUrl = config.screenshot_config.dev_url;
const fullUrl = new URL(ss.url, baseUrl).href;
// Capture before state
const before = await mcp__chrome__screenshot({
url: fullUrl,
viewport: { width: 1280, height: 800 }
});
Write(`${workDir}/screenshots/${ss.id}-before.png`, before.data, { encoding: 'base64' });
// Perform interaction (click, type, etc.)
if (ss.interaction) {
await mcp__chrome__click({ selector: ss.interaction.click });
await sleep(500);
}
// Capture after state
const after = await mcp__chrome__screenshot({
url: fullUrl,
viewport: { width: 1280, height: 800 }
});
Write(`${workDir}/screenshots/${ss.id}-after.png`, after.data, { encoding: 'base64' });
return {
before: `${ss.id}-before.png`,
after: `${ss.id}-after.png`
};
}
```
### Screenshot Annotation
```javascript
function generateAnnotationGuide(screenshots, workDir) {
const guide = `
# Screenshot Annotation Guide
For screenshots requiring callouts or highlights:
## Tools
- macOS: Preview, Skitch
- Windows: Snipping Tool, ShareX
- Cross-platform: Greenshot, Lightshot
## Annotation Guidelines
1. **Callouts**: Use numbered circles (1, 2, 3)
2. **Highlights**: Use semi-transparent rectangles
3. **Arrows**: Point from text to element
4. **Text**: Use sans-serif font, 12-14pt
## Color Scheme
- Primary: #0d6efd (blue)
- Secondary: #6c757d (gray)
- Success: #198754 (green)
- Warning: #ffc107 (yellow)
- Danger: #dc3545 (red)
## Screenshots Needing Annotation
${screenshots.filter(s => s.annotate).map(ss => `
- **${ss.id}**: ${ss.description}
- Highlight: ${ss.annotate.highlight || 'N/A'}
- Callouts: ${ss.annotate.callouts?.join(', ') || 'N/A'}
`).join('\n')}
`;
Write(`${workDir}/screenshots/ANNOTATION_GUIDE.md`, guide);
}
```
## Troubleshooting
### Chrome MCP Not Found
1. Check Claude MCP configuration
2. Verify Chrome is installed
3. Check CHROME_PATH environment variable
### Screenshots Are Blank
1. Increase wait time before capture
2. Check if page requires authentication
3. Verify URL is correct
### Elements Not Visible
1. Scroll element into view
2. Expand collapsed sections
3. Wait for animations to complete
### Server Not Starting
1. Check if port is already in use
2. Verify dev command is correct
3. Check for startup errors in logs

View File

@@ -0,0 +1,419 @@
# Swagger/OpenAPI Runner
Guide for generating backend API documentation from OpenAPI/Swagger specifications.
## Overview
This script extracts and converts OpenAPI/Swagger specifications to Markdown format for inclusion in the software manual.
## Detection Strategy
### Check for Existing Specification
```javascript
async function detectOpenAPISpec() {
// Check for existing spec files
const specPatterns = [
'openapi.json',
'openapi.yaml',
'openapi.yml',
'swagger.json',
'swagger.yaml',
'swagger.yml',
'**/openapi*.json',
'**/swagger*.json'
];
for (const pattern of specPatterns) {
const files = Glob(pattern);
if (files.length > 0) {
return { found: true, type: 'file', path: files[0] };
}
}
// Check for swagger-jsdoc in dependencies
const packageJson = JSON.parse(Read('package.json'));
if (packageJson.dependencies?.['swagger-jsdoc'] ||
packageJson.devDependencies?.['swagger-jsdoc']) {
return { found: true, type: 'jsdoc' };
}
// Check for NestJS Swagger
if (packageJson.dependencies?.['@nestjs/swagger']) {
return { found: true, type: 'nestjs' };
}
// Check for runtime endpoint
return { found: false, suggestion: 'runtime' };
}
```
## Extraction Methods
### Method A: From Existing Spec File
```javascript
async function extractFromFile(specPath, workDir) {
const outputDir = `${workDir}/api-docs/backend`;
Bash({ command: `mkdir -p "${outputDir}"` });
// Copy spec to output
Bash({ command: `cp "${specPath}" "${outputDir}/openapi.json"` });
// Convert to Markdown using widdershins
const result = Bash({
command: `npx widdershins "${specPath}" -o "${outputDir}/api-reference.md" --language_tabs 'javascript:JavaScript' 'python:Python' 'bash:cURL'`,
timeout: 60000
});
return { success: result.exitCode === 0, outputDir };
}
```
### Method B: From swagger-jsdoc
```javascript
async function extractFromJsDoc(workDir) {
const outputDir = `${workDir}/api-docs/backend`;
// Look for swagger definition file
const defFiles = Glob('**/swagger*.js').concat(Glob('**/openapi*.js'));
if (defFiles.length === 0) {
return { success: false, error: 'No swagger definition found' };
}
// Generate spec
const result = Bash({
command: `npx swagger-jsdoc -d "${defFiles[0]}" -o "${outputDir}/openapi.json"`,
timeout: 60000
});
if (result.exitCode !== 0) {
return { success: false, error: result.stderr };
}
// Convert to Markdown
Bash({
command: `npx widdershins "${outputDir}/openapi.json" -o "${outputDir}/api-reference.md" --language_tabs 'javascript:JavaScript' 'bash:cURL'`
});
return { success: true, outputDir };
}
```
### Method C: From NestJS Swagger
```javascript
async function extractFromNestJS(workDir) {
const outputDir = `${workDir}/api-docs/backend`;
// NestJS typically exposes /api-docs-json at runtime
// We need to start the server temporarily
// Start server in background
const server = Bash({
command: 'npm run start:dev',
run_in_background: true,
timeout: 30000
});
// Wait for server to be ready
await waitForServer('http://localhost:3000', 30000);
// Fetch OpenAPI spec
const spec = await fetch('http://localhost:3000/api-docs-json');
const specJson = await spec.json();
// Save spec
Write(`${outputDir}/openapi.json`, JSON.stringify(specJson, null, 2));
// Stop server
KillShell({ shell_id: server.task_id });
// Convert to Markdown
Bash({
command: `npx widdershins "${outputDir}/openapi.json" -o "${outputDir}/api-reference.md" --language_tabs 'javascript:JavaScript' 'bash:cURL'`
});
return { success: true, outputDir };
}
```
### Method D: From Runtime Endpoint
```javascript
async function extractFromRuntime(workDir, serverUrl = 'http://localhost:3000') {
const outputDir = `${workDir}/api-docs/backend`;
// Common OpenAPI endpoint paths
const endpointPaths = [
'/api-docs-json',
'/swagger.json',
'/openapi.json',
'/docs/json',
'/api/v1/docs.json'
];
let specJson = null;
for (const path of endpointPaths) {
try {
const response = await fetch(`${serverUrl}${path}`);
if (response.ok) {
specJson = await response.json();
break;
}
} catch (e) {
continue;
}
}
if (!specJson) {
return { success: false, error: 'Could not fetch OpenAPI spec from server' };
}
// Save and convert
Write(`${outputDir}/openapi.json`, JSON.stringify(specJson, null, 2));
Bash({
command: `npx widdershins "${outputDir}/openapi.json" -o "${outputDir}/api-reference.md"`
});
return { success: true, outputDir };
}
```
## Installation
### Required Tools
```bash
# For OpenAPI to Markdown conversion
npm install -g widdershins
# Or as dev dependency
npm install --save-dev widdershins
# For generating from JSDoc comments
npm install --save-dev swagger-jsdoc
```
## Configuration
### widdershins Options
```bash
npx widdershins openapi.json \
-o api-reference.md \
--language_tabs 'javascript:JavaScript' 'python:Python' 'bash:cURL' \
--summary \
--omitHeader \
--resolve \
--expandBody
```
| Option | Description |
|--------|-------------|
| `--language_tabs` | Code example languages |
| `--summary` | Use summary as operation heading |
| `--omitHeader` | Don't include title header |
| `--resolve` | Resolve $ref references |
| `--expandBody` | Show full request body |
### swagger-jsdoc Definition
Example `swagger-def.js`:
```javascript
module.exports = {
definition: {
openapi: '3.0.0',
info: {
title: 'MyApp API',
version: '1.0.0',
description: 'API documentation for MyApp'
},
servers: [
{ url: 'http://localhost:3000/api/v1' }
]
},
apis: ['./src/routes/*.js', './src/controllers/*.js']
};
```
## Output Format
### Generated Markdown Structure
````markdown
# MyApp API
## Overview
Base URL: `http://localhost:3000/api/v1`
## Authentication
This API uses Bearer token authentication.
---
## Projects
### List Projects
`GET /projects`
Returns a list of all projects.
**Parameters**
| Name | In | Type | Required | Description |
|------|-----|------|----------|-------------|
| status | query | string | false | Filter by status |
| page | query | integer | false | Page number |
**Responses**
| Status | Description |
|--------|-------------|
| 200 | Successful response |
| 401 | Unauthorized |
**Example Request**
```javascript
fetch('/api/v1/projects?status=active')
.then(res => res.json())
.then(data => console.log(data));
```
**Example Response**
```json
{
"data": [
{ "id": "1", "name": "Project 1" }
],
"pagination": {
"page": 1,
"total": 10
}
}
```
````
## Integration
### Main Runner
```javascript
async function runSwaggerExtraction(workDir) {
const detection = await detectOpenAPISpec();
if (!detection.found) {
console.log('No OpenAPI spec detected. Skipping backend API docs.');
return { success: false, skipped: true };
}
let result;
switch (detection.type) {
case 'file':
result = await extractFromFile(detection.path, workDir);
break;
case 'jsdoc':
result = await extractFromJsDoc(workDir);
break;
case 'nestjs':
result = await extractFromNestJS(workDir);
break;
default:
result = await extractFromRuntime(workDir);
}
if (result.success) {
// Post-process the Markdown
await postProcessApiDocs(result.outputDir);
}
return result;
}
async function postProcessApiDocs(outputDir) {
const mdFile = `${outputDir}/api-reference.md`;
let content = Read(mdFile);
// Remove widdershins header
content = content.replace(/^---[\s\S]*?---\n/, '');
// Add custom styling hints
content = content.replace(/^(#{1,3} .+)$/gm, '$1\n');
Write(mdFile, content);
}
```
## Troubleshooting
### Common Issues
#### "widdershins: command not found"
```bash
npm install -g widdershins
# Or use npx
npx widdershins openapi.json -o api.md
```
#### "Error parsing OpenAPI spec"
```bash
# Validate spec first
npx @redocly/cli lint openapi.json
# Fix common issues
npx @redocly/cli bundle openapi.json -o fixed.json
```
#### "Server not responding"
Ensure the development server is running and accessible:
```bash
# Check if server is running
curl http://localhost:3000/health
# Check OpenAPI endpoint
curl http://localhost:3000/api-docs-json
```
### Manual Fallback
If automatic extraction fails, document APIs manually:
1. List all route files: `Glob('**/routes/*.js')`
2. Extract route definitions using regex
3. Build documentation structure manually
```javascript
async function manualApiExtraction(workDir) {
const routeFiles = Glob('src/routes/*.js').concat(Glob('src/routes/*.ts'));
const endpoints = [];
for (const file of routeFiles) {
const content = Read(file);
const routes = content.matchAll(/router\.(get|post|put|delete|patch)\(['"]([^'"]+)['"]/g);
for (const match of routes) {
endpoints.push({
method: match[1].toUpperCase(),
path: match[2],
file: file
});
}
}
return endpoints;
}
```

View File

@@ -0,0 +1,357 @@
# TypeDoc Runner
Guide for generating frontend API documentation using TypeDoc.
## Overview
TypeDoc generates API documentation from TypeScript source code by analyzing type annotations and JSDoc comments.
## Prerequisites
### Check TypeScript Project
```javascript
// Verify TypeScript is used
const packageJson = JSON.parse(Read('package.json'));
const hasTypeScript = packageJson.devDependencies?.typescript ||
packageJson.dependencies?.typescript;
if (!hasTypeScript) {
console.log('Not a TypeScript project. Skipping TypeDoc.');
return;
}
// Check for tsconfig.json
const hasTsConfig = Glob('tsconfig.json').length > 0;
```
## Installation
### Install TypeDoc
```bash
npm install --save-dev typedoc typedoc-plugin-markdown
```
### Optional Plugins
```bash
# For better Markdown output
npm install --save-dev typedoc-plugin-markdown
# For README inclusion
npm install --save-dev typedoc-plugin-rename-defaults
```
## Configuration
### typedoc.json
Create `typedoc.json` in project root:
```json
{
"entryPoints": ["./src/index.ts"],
"entryPointStrategy": "expand",
"out": ".workflow/.scratchpad/manual-{timestamp}/api-docs/frontend",
"plugin": ["typedoc-plugin-markdown"],
"exclude": [
"**/node_modules/**",
"**/*.test.ts",
"**/*.spec.ts",
"**/tests/**"
],
"excludePrivate": true,
"excludeProtected": true,
"excludeInternal": true,
"hideGenerator": true,
"readme": "none",
"categorizeByGroup": true,
"navigation": {
"includeCategories": true,
"includeGroups": true
}
}
```
### Alternative: CLI Options
```bash
npx typedoc \
--entryPoints src/index.ts \
--entryPointStrategy expand \
--out api-docs/frontend \
--plugin typedoc-plugin-markdown \
--exclude "**/node_modules/**" \
--exclude "**/*.test.ts" \
--excludePrivate \
--excludeProtected \
--readme none
```
## Execution
### Basic Run
```javascript
async function runTypeDoc(workDir) {
const outputDir = `${workDir}/api-docs/frontend`;
// Create output directory
Bash({ command: `mkdir -p "${outputDir}"` });
// Run TypeDoc
const result = Bash({
command: `npx typedoc --out "${outputDir}" --plugin typedoc-plugin-markdown src/`,
timeout: 120000 // 2 minutes
});
if (result.exitCode !== 0) {
console.error('TypeDoc failed:', result.stderr);
return { success: false, error: result.stderr };
}
// List generated files
const files = Glob(`${outputDir}/**/*.md`);
console.log(`Generated ${files.length} documentation files`);
return { success: true, files };
}
```
### With Custom Entry Points
```javascript
async function runTypeDocCustom(workDir, entryPoints) {
const outputDir = `${workDir}/api-docs/frontend`;
// Build entry points string
const entries = entryPoints.map(e => `--entryPoints "${e}"`).join(' ');
const result = Bash({
command: `npx typedoc ${entries} --out "${outputDir}" --plugin typedoc-plugin-markdown`,
timeout: 120000
});
return { success: result.exitCode === 0 };
}
// Example: Document specific files
await runTypeDocCustom(workDir, [
'src/api/index.ts',
'src/hooks/index.ts',
'src/utils/index.ts'
]);
```
## Output Structure
```
api-docs/frontend/
├── README.md # Index
├── modules.md # Module list
├── modules/
│ ├── api.md # API module
│ ├── hooks.md # Hooks module
│ └── utils.md # Utils module
├── classes/
│ ├── ApiClient.md # Class documentation
│ └── ...
├── interfaces/
│ ├── Config.md # Interface documentation
│ └── ...
└── functions/
├── formatDate.md # Function documentation
└── ...
```
## Integration with Manual
### Reading TypeDoc Output
```javascript
async function integrateTypeDocOutput(workDir) {
const apiDocsDir = `${workDir}/api-docs/frontend`;
const files = Glob(`${apiDocsDir}/**/*.md`);
// Build API reference content
let content = '## Frontend API Reference\n\n';
// Add modules
const modules = Glob(`${apiDocsDir}/modules/*.md`);
for (const mod of modules) {
const modContent = Read(mod);
content += `### ${extractTitle(modContent)}\n\n`;
content += summarizeModule(modContent);
}
// Add functions
const functions = Glob(`${apiDocsDir}/functions/*.md`);
content += '\n### Functions\n\n';
for (const fn of functions) {
const fnContent = Read(fn);
content += formatFunctionDoc(fnContent);
}
// Add hooks
const hooks = Glob(`${apiDocsDir}/functions/*Hook*.md`);
if (hooks.length > 0) {
content += '\n### Hooks\n\n';
for (const hook of hooks) {
const hookContent = Read(hook);
content += formatHookDoc(hookContent);
}
}
return content;
}
```
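
The helpers referenced above (`extractTitle`, `summarizeModule`, `formatFunctionDoc`, `formatHookDoc`) are not defined in this guide. Minimal sketches, assuming typedoc-plugin-markdown's usual layout of a `# Title` heading followed by prose:

```javascript
// Sketches only - the parsing heuristics are assumptions, not part of TypeDoc
function extractTitle(mdContent) {
  const match = mdContent.match(/^#\s+(.+)$/m);
  return match ? match[1].trim() : 'Untitled';
}

function summarizeModule(mdContent) {
  // Use the first non-heading paragraph as a short summary
  const para = mdContent.split('\n\n').find(p => p.trim() && !p.trim().startsWith('#'));
  return (para ? para.trim() : '') + '\n\n';
}

function formatFunctionDoc(mdContent) {
  // Keep the title plus the leading paragraph, drop the rest
  return `#### ${extractTitle(mdContent)}\n\n${summarizeModule(mdContent)}`;
}

const formatHookDoc = formatFunctionDoc; // hooks share the function page layout
```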
### Example Output Format
````markdown
## Frontend API Reference
### API Module
Functions for interacting with the backend API.
#### fetchProjects
```typescript
function fetchProjects(options?: FetchOptions): Promise<Project[]>
```
Fetches all projects for the current user.
**Parameters:**
| Name | Type | Description |
|------|------|-------------|
| options | FetchOptions | Optional fetch configuration |
**Returns:** Promise<Project[]>
### Hooks
#### useProjects
```typescript
function useProjects(options?: UseProjectsOptions): UseProjectsResult
```
React hook for managing project data.
**Parameters:**
| Name | Type | Description |
|------|------|-------------|
| options.status | string | Filter by project status |
| options.limit | number | Max projects to fetch |
**Returns:**
| Property | Type | Description |
|----------|------|-------------|
| projects | Project[] | Array of projects |
| loading | boolean | Loading state |
| error | Error \| null | Error if failed |
| refetch | () => void | Refresh data |
````
## Troubleshooting
### Common Issues
#### "Cannot find module 'typescript'"
```bash
npm install --save-dev typescript
```
#### "No entry points found"
Ensure entry points exist:
```bash
# Check entry points
ls src/index.ts
# Or use glob pattern
npx typedoc --entryPoints "src/**/*.ts"
```
#### "Unsupported TypeScript version"
```bash
# Check TypeDoc compatibility
npm info typedoc peerDependencies
# Install compatible version
npm install --save-dev typedoc@0.25.x
```
### Debugging
```bash
# Verbose output
npx typedoc --logLevel Verbose src/
# Treat warnings as errors (fails the run on any warning)
npx typedoc --treatWarningsAsErrors src/
```
## Best Practices
### Document Exports Only
```typescript
// Good: Public API documented
/**
* Fetches projects from the API.
* @param options - Fetch options
* @returns Promise resolving to projects
*/
export function fetchProjects(options?: FetchOptions): Promise<Project[]> {
// ...
}
// Internal: Not documented
function internalHelper() {
// ...
}
```
### Use JSDoc Comments
```typescript
/**
* User hook for managing authentication state.
*
* @example
* ```tsx
* const { user, login, logout } = useAuth();
* ```
*
* @returns Authentication state and methods
*/
export function useAuth(): AuthResult {
// ...
}
```
### Define Types Properly
```typescript
/**
* Configuration for the API client.
*/
export interface ApiConfig {
/** API base URL */
baseUrl: string;
/** Request timeout in milliseconds */
timeout?: number;
/** Custom headers to include */
headers?: Record<string, string>;
}
```

View File

@@ -0,0 +1,325 @@
# HTML Template Specification
Technical specification for the TiddlyWiki-style HTML output.
## Overview
The output is a single, self-contained HTML file with:
- All CSS embedded inline
- All JavaScript embedded inline
- All images embedded as Base64
- Full offline functionality
## File Structure
```html
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{SOFTWARE_NAME}} - User Manual</title>
<style>{{EMBEDDED_CSS}}</style>
</head>
<body class="wiki-container" data-theme="light">
<aside class="wiki-sidebar">...</aside>
<main class="wiki-content">...</main>
<button class="theme-toggle">...</button>
<script id="search-index" type="application/json">{{SEARCH_INDEX}}</script>
<script>{{EMBEDDED_JS}}</script>
</body>
</html>
```
## Placeholders
| Placeholder | Description | Source |
|-------------|-------------|--------|
| `{{SOFTWARE_NAME}}` | Software name | manual-config.json |
| `{{VERSION}}` | Version number | manual-config.json |
| `{{EMBEDDED_CSS}}` | All CSS content | wiki-base.css + wiki-dark.css |
| `{{TOC_HTML}}` | Table of contents | Generated from sections |
| `{{TAG_BUTTONS_HTML}}` | Tag filter buttons | Generated from section tags |
| `{{TIDDLERS_HTML}}` | All content blocks | Converted from Markdown |
| `{{SEARCH_INDEX_JSON}}` | Search data | Generated from content |
| `{{EMBEDDED_JS}}` | JavaScript code | Inline in template |
| `{{TIMESTAMP}}` | Generation timestamp | ISO 8601 format |
| `{{LOGO_BASE64}}` | Logo image | Project logo or generated |
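
A minimal substitution sketch (the function name and `values` map are illustrative, not part of the template):

```javascript
// Replace every {{KEY}} with values[KEY]; leave unknown placeholders intact
function renderTemplate(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? values[key] : match
  );
}
```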
## Component Specifications
### Sidebar (`.wiki-sidebar`)
```
Width: 280px (fixed)
Position: Fixed left
Height: 100vh
Components:
- Logo area (.wiki-logo)
- Search box (.wiki-search)
- Tag navigation (.wiki-tags)
- Table of contents (.wiki-toc)
```
### Main Content (`.wiki-content`)
```
Margin-left: 280px (sidebar width)
Max-width: 900px (content)
Components:
- Header bar (.content-header)
- Tiddler container (.tiddler-container)
- Footer (.wiki-footer)
```
### Tiddler (Content Block)
```html
<article class="tiddler"
id="tiddler-{{ID}}"
data-tags="{{TAGS}}"
data-difficulty="{{DIFFICULTY}}">
<header class="tiddler-header">
<h2 class="tiddler-title">
<button class="collapse-toggle"></button>
{{TITLE}}
</h2>
<div class="tiddler-meta">
<span class="difficulty-badge {{DIFFICULTY}}">{{DIFFICULTY_LABEL}}</span>
{{TAG_BADGES}}
</div>
</header>
<div class="tiddler-content">
{{CONTENT_HTML}}
</div>
</article>
```
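
A sketch of how a section object could be rendered into this structure (the field names and escaping helper are assumptions):

```javascript
// Render one tiddler from section data; contentHtml is already-converted Markdown
function renderTiddler(section) {
  const esc = s => String(s).replace(/[&<>"]/g, c =>
    ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c]));
  const badges = section.tags.map(t => `<span class="tag-badge">${esc(t)}</span>`).join('');
  return `<article class="tiddler" id="tiddler-${section.id}"
         data-tags="${esc(section.tags.join(' '))}"
         data-difficulty="${section.difficulty}">
  <header class="tiddler-header">
    <h2 class="tiddler-title">
      <button class="collapse-toggle">▼</button>
      ${esc(section.title)}
    </h2>
    <div class="tiddler-meta">
      <span class="difficulty-badge ${section.difficulty}">${esc(section.difficultyLabel)}</span>
      ${badges}
    </div>
  </header>
  <div class="tiddler-content">
    ${section.contentHtml}
  </div>
</article>`;
}
```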
### Search Index Format
```json
{
"tiddler-overview": {
"title": "Product Overview",
"body": "Plain text content for searching...",
"tags": ["getting-started", "overview"]
},
"tiddler-ui-guide": {
"title": "UI Guide",
"body": "Plain text content...",
"tags": ["ui-guide"]
}
}
```
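
One way the index could be built from section data (field names and the plain-text stripping heuristic are assumptions):

```javascript
// Sketch: build the search index from converted sections
function buildSearchIndex(sections) {
  const index = {};
  for (const s of sections) {
    index[`tiddler-${s.id}`] = {
      title: s.title,
      body: stripMarkdown(s.markdown), // plain text for matching
      tags: s.tags
    };
  }
  return index;
}

function stripMarkdown(md) {
  return md
    .replace(/```[\s\S]*?```/g, ' ')   // drop fenced code blocks
    .replace(/[#*_>`[\]()!-]/g, ' ')   // drop Markdown punctuation
    .replace(/\s+/g, ' ')
    .trim();
}
```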
## Interactive Features
### 1. Search
- Full-text search with result highlighting
- Searches title, body, and tags
- Shows up to 10 results
- Keyboard accessible (Enter to search, Esc to close)
### 2. Collapse/Expand
- Per-section toggle via button
- Expand All / Collapse All buttons
- State indicated by ▼ (expanded) or ▶ (collapsed)
- Smooth transition animation
### 3. Tag Filtering
- Tags: all, getting-started, ui-guide, api, config, troubleshooting, examples
- Single selection (radio behavior)
- "all" shows everything
- Hidden tiddlers via `display: none`
### 4. Theme Toggle
- Light/Dark mode switch
- Persists to localStorage (`wiki-theme`)
- Applies to entire document via `[data-theme="dark"]`
- Toggle button shows sun/moon icon
### 5. Responsive Design
```
Breakpoints:
- Desktop (> 1024px): Sidebar visible
- Tablet (768-1024px): Sidebar collapsible
- Mobile (< 768px): Sidebar hidden, hamburger menu
```
### 6. Print Support
- Hides sidebar, toggles, interactive elements
- Expands all collapsed sections
- Adjusts colors for print
- Page breaks between sections
## Accessibility
### Keyboard Navigation
- Tab through interactive elements
- Enter to activate buttons
- Escape to close search results
- Arrow keys in search results
### ARIA Attributes
```html
<input aria-label="Search">
<nav aria-label="Table of Contents">
<button aria-label="Toggle theme">
<div aria-live="polite"> <!-- For search results -->
```
### Color Contrast
- Text/background ratio ≥ 4.5:1
- Interactive elements clearly visible
- Focus indicators visible
## Performance
### Target Metrics
| Metric | Target |
|--------|--------|
| Total file size | < 10MB |
| Time to interactive | < 2s |
| Search latency | < 100ms |
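
A small sketch for checking the size budget before delivery (uses this skill's `Bash` tool convention; the `stdout` field is an assumption):

```javascript
// Verify the generated file stays under the 10MB target above
function checkSizeBudget(htmlPath, maxBytes = 10 * 1024 * 1024) {
  const result = Bash({ command: `wc -c < "${htmlPath}"` });
  const size = parseInt(result.stdout.trim(), 10);
  return { size, withinBudget: size <= maxBytes };
}
```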
### Optimization Strategies
1. **Lazy loading for images**: `loading="lazy"`
2. **Efficient search**: In-memory index, no external requests
3. **CSS containment**: Scope styles to components
4. **Minimal JavaScript**: Vanilla JS, no libraries
## CSS Variables
### Light Theme
```css
:root {
--bg-primary: #ffffff;
--bg-secondary: #f8f9fa;
--text-primary: #212529;
--text-secondary: #495057;
--accent-color: #0d6efd;
--border-color: #dee2e6;
}
```
### Dark Theme
```css
[data-theme="dark"] {
--bg-primary: #1a1a2e;
--bg-secondary: #16213e;
--text-primary: #eaeaea;
--text-secondary: #b8b8b8;
--accent-color: #4dabf7;
--border-color: #2d3748;
}
```
## Markdown to HTML Mapping
| Markdown | HTML |
|----------|------|
| `# Heading` | `<h1>` |
| `## Heading` | `<h2>` |
| `**bold**` | `<strong>` |
| `*italic*` | `<em>` |
| `` `code` `` | `<code>` |
| `[link](url)` | `<a href="url">` |
| `- item` | `<ul><li>` |
| `1. item` | `<ol><li>` |
| ```` ```js ```` fenced block | `<pre><code class="language-js">` |
| `> quote` | `<blockquote>` |
| `---` | `<hr>` |
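
An illustrative subset of this mapping as regex replacements (a real build would use a proper Markdown parser; lists, code blocks, and blockquotes are omitted here):

```javascript
// Sketch: a few of the mappings above as naive regex passes
function mdToHtmlSubset(md) {
  return md
    .replace(/^### (.+)$/gm, '<h3>$1</h3>')
    .replace(/^## (.+)$/gm, '<h2>$1</h2>')
    .replace(/^# (.+)$/gm, '<h1>$1</h1>')
    .replace(/^---$/gm, '<hr>')
    .replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')
    .replace(/\*([^*]+)\*/g, '<em>$1</em>')
    .replace(/`([^`]+)`/g, '<code>$1</code>')
    .replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2">$1</a>');
}
```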
## Screenshot Embedding
### Marker Format
```markdown
<!-- SCREENSHOT: id="ss-login" url="/login" description="Login form" -->
```
### Embedded Format
```html
<figure class="screenshot">
<img src="data:image/png;base64,{{BASE64_DATA}}"
alt="Login form"
loading="lazy">
<figcaption>Login form</figcaption>
</figure>
```
### Placeholder (if missing)
```html
<div class="screenshot-placeholder">
[Screenshot: ss-login - Login form]
</div>
```
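
A sketch of the marker-to-figure conversion (the `screenshots` map of id → Base64 PNG data is an assumption):

```javascript
// Replace SCREENSHOT markers with embedded figures, or a placeholder if missing
function embedScreenshots(html, screenshots) {
  return html.replace(
    /<!-- SCREENSHOT: id="([^"]+)".*?description="([^"]+)" -->/g,
    (match, id, desc) => screenshots[id]
      ? `<figure class="screenshot">
  <img src="data:image/png;base64,${screenshots[id]}" alt="${desc}" loading="lazy">
  <figcaption>${desc}</figcaption>
</figure>`
      : `<div class="screenshot-placeholder">[Screenshot: ${id} - ${desc}]</div>`
  );
}
```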
## File Size Optimization
### CSS
- Minify before embedding
- Remove unused styles
- Combine duplicate rules
### JavaScript
- Minify before embedding
- Remove console.log statements
- Use IIFE for scoping
### Images
- Compress before Base64 encoding
- Use appropriate dimensions (max 1280px width)
- Consider WebP format if browser support is acceptable
## Validation
### HTML Validation
- W3C HTML5 compliance
- Proper nesting
- Required attributes present
### CSS Validation
- Valid property values
- No deprecated properties
- Vendor prefixes where needed
### JavaScript
- No syntax errors
- All functions defined
- Error handling for edge cases
## Testing Checklist
- [ ] Opens in Chrome/Firefox/Safari/Edge
- [ ] Search works correctly
- [ ] Collapse/expand works
- [ ] Tag filtering works
- [ ] Theme toggle works
- [ ] Print preview correct
- [ ] Keyboard navigation works
- [ ] Mobile responsive
- [ ] Offline functionality
- [ ] All links valid
- [ ] All images display
- [ ] No console errors

View File

@@ -0,0 +1,253 @@
# Quality Standards
Quality gates and standards for software manual generation.
## Quality Dimensions
### 1. Completeness (25%)
All required sections present and adequately covered.
| Requirement | Weight | Criteria |
|-------------|--------|----------|
| Overview section | 5 | Product intro, features, quick start |
| UI Guide | 5 | All major screens documented |
| API Reference | 5 | All public APIs documented |
| Configuration | 4 | All config options explained |
| Troubleshooting | 3 | Common issues addressed |
| Examples | 3 | Multi-level examples provided |
**Scoring**:
- 100%: All sections present with adequate depth
- 80%: All sections present, some lacking depth
- 60%: Missing 1-2 sections
- 40%: Missing 3+ sections
- 0%: Critical sections missing (overview, UI guide)
### 2. Consistency (25%)
Terminology, style, and structure uniform across sections.
| Aspect | Check |
|--------|-------|
| Terminology | Same term for same concept throughout |
| Formatting | Consistent heading levels, code block styles |
| Tone | Consistent formality level |
| Cross-references | All internal links valid |
| Screenshot naming | Follow `ss-{feature}-{action}` pattern |
**Scoring**:
- 100%: Zero inconsistencies
- 80%: 1-3 minor inconsistencies
- 60%: 4-6 inconsistencies
- 40%: 7-10 inconsistencies
- 0%: Pervasive inconsistencies
### 3. Depth (25%)
Content provides sufficient detail for target audience.
| Level | Criteria |
|-------|----------|
| Shallow | Basic descriptions only |
| Standard | Descriptions + usage examples |
| Deep | Descriptions + examples + edge cases + best practices |
**Per-Section Depth Check**:
- [ ] Explains "what" (definition)
- [ ] Explains "why" (rationale)
- [ ] Explains "how" (procedure)
- [ ] Provides examples
- [ ] Covers edge cases
- [ ] Includes tips/best practices
**Scoring**:
- 100%: Deep coverage on all critical sections
- 80%: Standard coverage on all sections
- 60%: Shallow coverage on some sections
- 40%: Missing depth in critical areas
- 0%: Superficial throughout
### 4. Readability (25%)
Clear, user-friendly writing that's easy to follow.
| Metric | Target |
|--------|--------|
| Sentence length | Average < 20 words |
| Paragraph length | Average < 5 sentences |
| Heading hierarchy | Proper H1 > H2 > H3 nesting |
| Code blocks | Language specified |
| Lists | Used for 3+ items |
| Screenshots | Placed near relevant text |
**Structural Elements**:
- [ ] Clear section headers
- [ ] Numbered steps for procedures
- [ ] Bullet lists for options/features
- [ ] Tables for comparisons
- [ ] Code blocks with syntax highlighting
- [ ] Screenshots with captions
**Scoring**:
- 100%: All readability criteria met
- 80%: Minor structural issues
- 60%: Some sections hard to follow
- 40%: Significant readability problems
- 0%: Unclear, poorly structured
## Overall Quality Score
```
Overall = (Completeness × 0.25) + (Consistency × 0.25) +
(Depth × 0.25) + (Readability × 0.25)
```
**Quality Gates**:
| Gate | Threshold | Action |
|------|-----------|--------|
| Pass | ≥ 80% | Proceed to HTML generation |
| Review | 60-79% | Address warnings, proceed with caution |
| Fail | < 60% | Must address errors before continuing |
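
The gate decision follows directly from the overall score, for example:

```javascript
// Map the overall score onto the gates above
function qualityGate(score) {
  if (score >= 80) return { gate: 'pass', action: 'Proceed to HTML generation' };
  if (score >= 60) return { gate: 'review', action: 'Address warnings, proceed with caution' };
  return { gate: 'fail', action: 'Must address errors before continuing' };
}
```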
## Issue Classification
### Errors (Must Fix)
- Missing required sections
- Invalid cross-references
- Broken screenshot markers
- Code blocks without language
- Incomplete procedures (missing steps)
### Warnings (Should Fix)
- Terminology inconsistencies
- Sections lacking depth
- Missing examples
- Long paragraphs (> 7 sentences)
- Screenshots missing captions
### Info (Nice to Have)
- Optimization suggestions
- Additional example opportunities
- Alternative explanations
- Enhancement ideas
## Quality Checklist
### Pre-Generation
- [ ] All agents completed successfully
- [ ] No errors in consolidation report
- [ ] Overall score ≥ 60%
### Post-Generation
- [ ] HTML renders correctly
- [ ] Search returns relevant results
- [ ] All screenshots display
- [ ] Theme toggle works
- [ ] Print preview looks good
### Final Review
- [ ] User previewed and approved
- [ ] File size reasonable (< 10MB)
- [ ] No console errors in browser
- [ ] Accessible (keyboard navigation works)
## Automated Checks
```javascript
function runQualityChecks(workDir) {
const results = {
completeness: checkCompleteness(workDir),
consistency: checkConsistency(workDir),
depth: checkDepth(workDir),
readability: checkReadability(workDir)
};
results.overall = (
results.completeness * 0.25 +
results.consistency * 0.25 +
results.depth * 0.25 +
results.readability * 0.25
);
return results;
}
function checkCompleteness(workDir) {
const requiredSections = [
'section-overview.md',
'section-ui-guide.md',
'section-api-reference.md',
'section-configuration.md',
'section-troubleshooting.md',
'section-examples.md'
];
const existing = Glob(`${workDir}/sections/section-*.md`);
const found = requiredSections.filter(s =>
existing.some(e => e.endsWith(s))
);
return (found.length / requiredSections.length) * 100;
}
function checkConsistency(workDir) {
// Check terminology, cross-references, naming conventions
const issues = [];
// ... implementation ...
return Math.max(0, 100 - issues.length * 10);
}
function checkDepth(workDir) {
  // Check content length, examples, edge cases
  const sections = Glob(`${workDir}/sections/section-*.md`);
  if (sections.length === 0) return 0;
  let totalScore = 0;
  for (const section of sections) {
    const content = Read(section);
    let sectionScore = 0;
    if (content.length > 500) sectionScore += 20;                        // substantive length
    if (content.includes('```')) sectionScore += 20;                     // code examples
    if (content.includes('Example')) sectionScore += 20;                 // worked examples
    if ((content.match(/\d+\./g) || []).length > 3) sectionScore += 20;  // numbered steps
    if (content.includes('Note:') || content.includes('Tip:')) sectionScore += 20; // tips
    totalScore += sectionScore;
  }
  return totalScore / sections.length;
}
function checkReadability(workDir) {
  // Check structure, formatting, organization
  const sections = Glob(`${workDir}/sections/section-*.md`);
  let issues = 0;
  for (const section of sections) {
    const content = Read(section);
    // Check heading hierarchy
    if (!content.startsWith('# ')) issues++;
    // Check code block languages: only opening fences (even indices) need one;
    // closing fences are always bare ``` and must not be counted
    const fences = content.match(/^```\w*/gm) || [];
    const openers = fences.filter((f, i) => i % 2 === 0);
    if (openers.some(f => f === '```')) issues++;
    // Check paragraph length
    const paragraphs = content.split('\n\n');
    if (paragraphs.some(p => p.split('. ').length > 7)) issues++;
  }
  return Math.max(0, 100 - issues * 10);
}
```
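
As one concrete example of a check left stubbed in `checkConsistency`, a cross-reference validator might look like this (the anchor-slug heuristic is an assumption):

```javascript
// Sketch: internal links must resolve to an existing heading anchor
function checkCrossReferences(workDir) {
  const sections = Glob(`${workDir}/sections/section-*.md`);
  const anchors = new Set();
  const issues = [];
  for (const file of sections) {
    for (const h of Read(file).matchAll(/^#{1,3} (.+)$/gm)) {
      anchors.add(h[1].toLowerCase().trim().replace(/[^\w]+/g, '-'));
    }
  }
  for (const file of sections) {
    for (const link of Read(file).matchAll(/\]\(#([^)]+)\)/g)) {
      if (!anchors.has(link[1])) issues.push({ file, anchor: link[1] });
    }
  }
  return issues;
}
```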

View File

@@ -0,0 +1,298 @@
# Writing Style Guide
User-friendly writing standards for software manuals.
## Core Principles
### 1. User-Centered
Write for the user, not the developer.
**Do**:
- "Click the **Save** button to save your changes"
- "Enter your email address in the login form"
**Don't**:
- "The onClick handler triggers the save mutation"
- "POST to /api/auth/login with email in body"
### 2. Action-Oriented
Focus on what users can **do**, not what the system does.
**Do**:
- "You can export your data as CSV"
- "To create a new project, click **New Project**"
**Don't**:
- "The system exports data in CSV format"
- "A new project is created when the button is clicked"
### 3. Clear and Direct
Use simple, straightforward language.
**Do**:
- "Select a file to upload"
- "The maximum file size is 10MB"
**Don't**:
- "Utilize the file selection interface to designate a file for uploading"
- "File size constraints limit uploads to 10 megabytes"
## Tone
### Friendly but Professional
- Conversational but not casual
- Helpful but not condescending
- Confident but not arrogant
**Examples**:
| Too Casual | Just Right | Too Formal |
|------------|------------|------------|
| "Yo, here's how..." | "Here's how to..." | "The following procedure describes..." |
| "Easy peasy!" | "That's all you need to do." | "The procedure has been completed." |
| "Don't worry about it" | "You don't need to change this" | "This parameter does not require modification" |
### Second Person
Address the user directly as "you".
**Do**: "You can customize your dashboard..."
**Don't**: "Users can customize their dashboards..."
## Structure
### Headings
Use clear, descriptive headings that tell users what they'll learn.
**Good Headings**:
- "Getting Started"
- "Creating Your First Project"
- "Configuring Email Notifications"
- "Troubleshooting Login Issues"
**Weak Headings**:
- "Overview"
- "Step 1"
- "Settings"
- "FAQ"
### Procedures
Number steps for sequential tasks.
```markdown
## Creating a New User
1. Navigate to **Settings** > **Users**.
2. Click the **Add User** button.
3. Enter the user's email address.
4. Select a role from the dropdown.
5. Click **Save**.
The new user will receive an invitation email.
```
### Features/Options
Use bullet lists for non-sequential items.
```markdown
## Export Options
You can export your data in several formats:
- **CSV**: Compatible with spreadsheets
- **JSON**: Best for developers
- **PDF**: Ideal for sharing reports
```
### Comparisons
Use tables for comparing options.
```markdown
## Plan Comparison
| Feature | Free | Pro | Enterprise |
|---------|------|-----|------------|
| Projects | 3 | Unlimited | Unlimited |
| Storage | 1GB | 10GB | 100GB |
| Support | Community | Email | Dedicated |
```
## Content Types
### Conceptual (What Is)
Explain what something is and why it matters.
```markdown
## What is a Workspace?
A workspace is a container for your projects and team members. Each workspace
has its own settings, billing, and permissions. You might create separate
workspaces for different clients or departments.
```
### Procedural (How To)
Step-by-step instructions for completing a task.
```markdown
## How to Create a Workspace
1. Click your profile icon in the top-right corner.
2. Select **Create Workspace**.
3. Enter a name for your workspace.
4. Choose a plan (you can upgrade later).
5. Click **Create**.
Your new workspace is ready to use.
```
### Reference (API/Config)
Detailed specifications and parameters.
```markdown
## Configuration Options
### `DATABASE_URL`
- **Type**: String (required)
- **Format**: `postgresql://user:password@host:port/database`
- **Example**: `postgresql://admin:secret@localhost:5432/myapp`
Database connection string for PostgreSQL.
```
## Formatting
### Bold
Use for:
- UI elements: Click **Save**
- First use of key terms: **Workspaces** contain projects
- Emphasis: **Never** share your API key
### Italic
Use for:
- Introducing new terms: A *workspace* is...
- Placeholders: Replace *your-api-key* with...
- Emphasis (sparingly): This is *really* important
### Code
Use for:
- Commands: Run `npm install`
- File paths: Edit `config/settings.json`
- Environment variables: Set `DATABASE_URL`
- API endpoints: POST `/api/users`
- Code references: The `handleSubmit` function
### Code Blocks
Always specify the language.
```javascript
// Example: Fetching user data
const response = await fetch('/api/user');
const user = await response.json();
```
### Notes and Warnings
Use for important callouts.
```markdown
> **Note**: This feature requires a Pro plan.
> **Warning**: Deleting a workspace cannot be undone.
> **Tip**: Use keyboard shortcuts to work faster.
```
## Screenshots
### When to Include
- First time showing a UI element
- Complex interfaces
- Before/after comparisons
- Error states
### Guidelines
- Capture just the relevant area
- Use consistent dimensions
- Highlight important elements
- Add descriptive captions
```markdown
<!-- SCREENSHOT: id="ss-dashboard" description="Main dashboard showing project list" -->
*The dashboard displays all your projects with their status.*
```
## Examples
### Good Section Example
```markdown
## Inviting Team Members
You can invite colleagues to collaborate on your projects.
### To invite a team member:
1. Open **Settings** > **Team**.
2. Click **Invite Member**.
3. Enter their email address.
4. Select their role:
- **Admin**: Full access to all settings
- **Editor**: Can edit projects
- **Viewer**: Read-only access
5. Click **Send Invite**.
The person will receive an email with a link to join your workspace.
> **Note**: You can have up to 5 team members on the Free plan.
<!-- SCREENSHOT: id="ss-invite-team" description="Team invitation dialog" -->
```
## Language Guidelines
### Avoid Jargon
| Technical | User-Friendly |
|-----------|---------------|
| Execute | Run |
| Terminate | Stop, End |
| Instantiate | Create |
| Invoke | Call, Use |
| Parameterize | Set, Configure |
| Persist | Save |
### Be Specific
| Vague | Specific |
|-------|----------|
| "Click the button" | "Click **Save**" |
| "Enter information" | "Enter your email address" |
| "An error occurred" | "Your password must be at least 8 characters" |
| "It takes a moment" | "This typically takes 2-3 seconds" |
### Use Active Voice
| Passive | Active |
|---------|--------|
| "The file is uploaded" | "Upload the file" |
| "Settings are saved" | "Click **Save** to keep your changes" |
| "Errors are displayed" | "The form shows any errors" |

View File

@@ -0,0 +1,788 @@
/* ========================================
TiddlyWiki-Style Base CSS
Software Manual Skill
======================================== */
/* ========== CSS Variables ========== */
:root {
/* Light Theme */
--bg-primary: #ffffff;
--bg-secondary: #f8f9fa;
--bg-tertiary: #e9ecef;
--text-primary: #212529;
--text-secondary: #495057;
--text-muted: #6c757d;
--border-color: #dee2e6;
--accent-color: #0d6efd;
--accent-hover: #0b5ed7;
--success-color: #198754;
--warning-color: #ffc107;
--danger-color: #dc3545;
--info-color: #0dcaf0;
/* Layout */
--sidebar-width: 280px;
--header-height: 60px;
--content-max-width: 900px;
--spacing-xs: 0.25rem;
--spacing-sm: 0.5rem;
--spacing-md: 1rem;
--spacing-lg: 1.5rem;
--spacing-xl: 2rem;
/* Typography */
--font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
--font-family-mono: 'SF Mono', Monaco, Consolas, 'Liberation Mono', 'Courier New', monospace;
--font-size-sm: 0.875rem;
--font-size-base: 1rem;
--font-size-lg: 1.125rem;
--font-size-xl: 1.25rem;
--line-height: 1.6;
/* Shadows */
--shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.05);
--shadow-md: 0 4px 6px rgba(0, 0, 0, 0.1);
--shadow-lg: 0 10px 15px rgba(0, 0, 0, 0.1);
/* Transitions */
--transition-fast: 150ms ease;
--transition-base: 300ms ease;
}
/* ========== Reset & Base ========== */
*, *::before, *::after {
box-sizing: border-box;
margin: 0;
padding: 0;
}
html {
scroll-behavior: smooth;
}
body {
font-family: var(--font-family);
font-size: var(--font-size-base);
line-height: var(--line-height);
color: var(--text-primary);
background-color: var(--bg-secondary);
}
/* ========== Layout ========== */
.wiki-container {
display: flex;
min-height: 100vh;
}
/* ========== Sidebar ========== */
.wiki-sidebar {
position: fixed;
top: 0;
left: 0;
width: var(--sidebar-width);
height: 100vh;
background-color: var(--bg-primary);
border-right: 1px solid var(--border-color);
overflow-y: auto;
z-index: 100;
display: flex;
flex-direction: column;
transition: transform var(--transition-base);
}
/* Logo Area */
.wiki-logo {
padding: var(--spacing-lg);
text-align: center;
border-bottom: 1px solid var(--border-color);
}
.wiki-logo .logo-placeholder {
width: 60px;
height: 60px;
margin: 0 auto var(--spacing-sm);
background: linear-gradient(135deg, var(--accent-color), var(--info-color));
border-radius: 12px;
display: flex;
align-items: center;
justify-content: center;
color: white;
font-weight: bold;
font-size: var(--font-size-xl);
}
.wiki-logo h1 {
font-size: var(--font-size-lg);
font-weight: 600;
margin-bottom: var(--spacing-xs);
}
.wiki-logo .version {
font-size: var(--font-size-sm);
color: var(--text-muted);
}
/* Search */
.wiki-search {
padding: var(--spacing-md);
position: relative;
}
.wiki-search input {
width: 100%;
padding: var(--spacing-sm) var(--spacing-md);
border: 1px solid var(--border-color);
border-radius: 6px;
font-size: var(--font-size-sm);
background-color: var(--bg-secondary);
transition: border-color var(--transition-fast), box-shadow var(--transition-fast);
}
.wiki-search input:focus {
outline: none;
border-color: var(--accent-color);
box-shadow: 0 0 0 3px rgba(13, 110, 253, 0.15);
}
.search-results {
position: absolute;
top: 100%;
left: var(--spacing-md);
right: var(--spacing-md);
background: var(--bg-primary);
border: 1px solid var(--border-color);
border-radius: 6px;
box-shadow: var(--shadow-lg);
max-height: 400px;
overflow-y: auto;
z-index: 200;
}
.search-result-item {
display: block;
padding: var(--spacing-sm) var(--spacing-md);
text-decoration: none;
color: var(--text-primary);
border-bottom: 1px solid var(--border-color);
transition: background-color var(--transition-fast);
}
.search-result-item:last-child {
border-bottom: none;
}
.search-result-item:hover {
background-color: var(--bg-secondary);
}
.result-title {
font-weight: 600;
margin-bottom: var(--spacing-xs);
}
.result-excerpt {
font-size: var(--font-size-sm);
color: var(--text-secondary);
}
.result-excerpt mark {
background-color: var(--warning-color);
padding: 0 2px;
border-radius: 2px;
}
.no-results {
padding: var(--spacing-md);
text-align: center;
color: var(--text-muted);
}
/* Tags */
.wiki-tags {
padding: var(--spacing-md);
display: flex;
flex-wrap: wrap;
gap: var(--spacing-xs);
border-bottom: 1px solid var(--border-color);
}
.wiki-tags .tag {
padding: var(--spacing-xs) var(--spacing-sm);
font-size: var(--font-size-sm);
border: 1px solid var(--border-color);
border-radius: 20px;
background: var(--bg-secondary);
color: var(--text-secondary);
cursor: pointer;
transition: all var(--transition-fast);
}
.wiki-tags .tag:hover {
border-color: var(--accent-color);
color: var(--accent-color);
}
.wiki-tags .tag.active {
background-color: var(--accent-color);
border-color: var(--accent-color);
color: white;
}
/* Table of Contents */
.wiki-toc {
flex: 1;
padding: var(--spacing-md);
overflow-y: auto;
}
.wiki-toc h3 {
font-size: var(--font-size-sm);
text-transform: uppercase;
letter-spacing: 0.05em;
color: var(--text-muted);
margin-bottom: var(--spacing-md);
}
.wiki-toc ul {
list-style: none;
}
.wiki-toc li {
margin-bottom: var(--spacing-xs);
}
.wiki-toc a {
display: flex;
align-items: center;
justify-content: space-between;
padding: var(--spacing-sm);
color: var(--text-secondary);
text-decoration: none;
border-radius: 6px;
font-size: var(--font-size-sm);
transition: all var(--transition-fast);
}
.wiki-toc a:hover {
background-color: var(--bg-secondary);
color: var(--accent-color);
}
/* ========== Main Content ========== */
.wiki-content {
flex: 1;
margin-left: var(--sidebar-width);
min-height: 100vh;
display: flex;
flex-direction: column;
}
/* Header */
.content-header {
position: sticky;
top: 0;
background-color: var(--bg-primary);
border-bottom: 1px solid var(--border-color);
padding: var(--spacing-sm) var(--spacing-lg);
display: flex;
align-items: center;
justify-content: space-between;
z-index: 50;
}
.sidebar-toggle {
display: none;
flex-direction: column;
gap: 4px;
padding: var(--spacing-sm);
background: none;
border: none;
cursor: pointer;
}
.sidebar-toggle span {
display: block;
width: 20px;
height: 2px;
background-color: var(--text-primary);
transition: transform var(--transition-fast);
}
.header-actions {
display: flex;
gap: var(--spacing-sm);
}
.header-actions button {
padding: var(--spacing-xs) var(--spacing-sm);
font-size: var(--font-size-sm);
border: 1px solid var(--border-color);
border-radius: 4px;
background: var(--bg-primary);
color: var(--text-secondary);
cursor: pointer;
transition: all var(--transition-fast);
}
.header-actions button:hover {
border-color: var(--accent-color);
color: var(--accent-color);
}
/* Tiddler Container */
.tiddler-container {
flex: 1;
max-width: var(--content-max-width);
margin: 0 auto;
padding: var(--spacing-lg);
width: 100%;
}
/* ========== Tiddler (Content Block) ========== */
.tiddler {
background-color: var(--bg-primary);
border: 1px solid var(--border-color);
border-radius: 8px;
margin-bottom: var(--spacing-lg);
box-shadow: var(--shadow-sm);
transition: box-shadow var(--transition-fast);
}
.tiddler:hover {
box-shadow: var(--shadow-md);
}
.tiddler-header {
padding: var(--spacing-md) var(--spacing-lg);
border-bottom: 1px solid var(--border-color);
display: flex;
align-items: center;
justify-content: space-between;
flex-wrap: wrap;
gap: var(--spacing-sm);
}
.tiddler-title {
display: flex;
align-items: center;
gap: var(--spacing-sm);
font-size: var(--font-size-xl);
font-weight: 600;
margin: 0;
}
.collapse-toggle {
background: none;
border: none;
font-size: var(--font-size-sm);
color: var(--text-muted);
cursor: pointer;
padding: var(--spacing-xs);
transition: transform var(--transition-fast);
}
.tiddler.collapsed .collapse-toggle {
transform: rotate(-90deg);
}
.tiddler-meta {
display: flex;
gap: var(--spacing-sm);
flex-wrap: wrap;
}
.difficulty-badge {
padding: var(--spacing-xs) var(--spacing-sm);
font-size: 0.75rem;
font-weight: 500;
border-radius: 4px;
text-transform: uppercase;
}
.difficulty-badge.beginner {
background-color: #d1fae5;
color: #065f46;
}
.difficulty-badge.intermediate {
background-color: #fef3c7;
color: #92400e;
}
.difficulty-badge.advanced {
background-color: #fee2e2;
color: #991b1b;
}
.tag-badge {
padding: var(--spacing-xs) var(--spacing-sm);
font-size: 0.75rem;
background-color: var(--bg-tertiary);
color: var(--text-secondary);
border-radius: 4px;
}
.tiddler-content {
padding: var(--spacing-lg);
}
.tiddler.collapsed .tiddler-content {
display: none;
}
/* ========== Content Typography ========== */
.tiddler-content h1,
.tiddler-content h2,
.tiddler-content h3,
.tiddler-content h4 {
margin-top: var(--spacing-lg);
margin-bottom: var(--spacing-md);
font-weight: 600;
}
.tiddler-content h1 { font-size: 1.75rem; }
.tiddler-content h2 { font-size: 1.5rem; }
.tiddler-content h3 { font-size: 1.25rem; }
.tiddler-content h4 { font-size: 1.125rem; }
.tiddler-content p {
margin-bottom: var(--spacing-md);
}
/* Lists - Enhanced Styling */
.tiddler-content ul,
.tiddler-content ol {
margin: var(--spacing-md) 0;
padding-left: var(--spacing-xl);
}
.tiddler-content ul {
list-style: none;
}
.tiddler-content ul > li {
position: relative;
margin-bottom: var(--spacing-sm);
padding-left: 8px;
}
.tiddler-content ul > li::before {
content: "•";
position: absolute;
left: -16px;
color: var(--accent-color);
font-weight: bold;
}
.tiddler-content ol {
list-style: none;
counter-reset: item;
}
.tiddler-content ol > li {
position: relative;
margin-bottom: var(--spacing-sm);
padding-left: 8px;
counter-increment: item;
}
.tiddler-content ol > li::before {
content: counter(item) ".";
position: absolute;
left: -24px;
color: var(--accent-color);
font-weight: 600;
}
/* Nested lists */
.tiddler-content ul ul,
.tiddler-content ol ol,
.tiddler-content ul ol,
.tiddler-content ol ul {
margin: var(--spacing-xs) 0;
}
.tiddler-content ul ul > li::before {
content: "◦";
}
.tiddler-content ul ul ul > li::before {
content: "▪";
}
.tiddler-content a {
color: var(--accent-color);
text-decoration: none;
}
.tiddler-content a:hover {
text-decoration: underline;
}
/* Inline Code - Red Highlight */
.tiddler-content code {
font-family: var(--font-family-mono);
font-size: 0.875em;
padding: 2px 6px;
background-color: #fff5f5;
color: #c92a2a;
border-radius: 4px;
border: 1px solid #ffc9c9;
}
/* Code Blocks - Dark Background */
.tiddler-content pre {
position: relative;
margin: var(--spacing-md) 0;
padding: 0;
background-color: #1e2128;
border-radius: 8px;
overflow: hidden;
border: 1px solid #3d4450;
}
.tiddler-content pre::before {
content: attr(data-language);
display: block;
padding: 8px 16px;
background-color: #2d333b;
color: #8b949e;
font-size: 0.75rem;
font-family: var(--font-family);
text-transform: uppercase;
letter-spacing: 0.05em;
border-bottom: 1px solid #3d4450;
}
.tiddler-content pre code {
display: block;
padding: var(--spacing-md);
background: none;
color: #e6edf3;
font-size: var(--font-size-sm);
line-height: 1.6;
overflow-x: auto;
border: none;
}
.copy-code-btn {
position: absolute;
top: 6px;
right: 12px;
padding: 4px 10px;
font-size: 0.7rem;
background-color: #3d4450;
color: #8b949e;
border: 1px solid #4d5566;
border-radius: 4px;
cursor: pointer;
opacity: 0;
transition: all var(--transition-fast);
}
.copy-code-btn:hover {
background-color: #4d5566;
color: #e6edf3;
}
.tiddler-content pre:hover .copy-code-btn {
opacity: 1;
}
/* Tables - Blue Header Style */
.tiddler-content table {
width: 100%;
margin: var(--spacing-md) 0;
border-collapse: collapse;
border: 1px solid #dee2e6;
border-radius: 8px;
overflow: hidden;
}
.tiddler-content th {
padding: 12px 16px;
background: linear-gradient(135deg, #1971c2, #228be6);
color: white;
font-weight: 600;
text-align: left;
border: none;
border-bottom: 2px solid #1864ab;
}
.tiddler-content td {
padding: 10px 16px;
border: 1px solid #e9ecef;
text-align: left;
}
.tiddler-content tbody tr:nth-child(odd) {
background-color: #f8f9fa;
}
.tiddler-content tbody tr:nth-child(even) {
background-color: #ffffff;
}
.tiddler-content tbody tr:hover {
background-color: #e7f5ff;
}
/* Screenshots */
.screenshot {
margin: var(--spacing-lg) 0;
text-align: center;
}
.screenshot img {
max-width: 100%;
border: 1px solid var(--border-color);
border-radius: 8px;
box-shadow: var(--shadow-md);
}
.screenshot figcaption {
margin-top: var(--spacing-sm);
font-size: var(--font-size-sm);
color: var(--text-muted);
font-style: italic;
}
.screenshot-placeholder {
padding: var(--spacing-xl);
background-color: var(--bg-tertiary);
border: 2px dashed var(--border-color);
border-radius: 8px;
color: var(--text-muted);
text-align: center;
}
/* ========== Footer ========== */
.wiki-footer {
padding: var(--spacing-lg);
text-align: center;
color: var(--text-muted);
font-size: var(--font-size-sm);
border-top: 1px solid var(--border-color);
background-color: var(--bg-primary);
}
/* ========== Theme Toggle ========== */
.theme-toggle {
position: fixed;
bottom: var(--spacing-lg);
right: var(--spacing-lg);
width: 48px;
height: 48px;
border-radius: 50%;
border: none;
background-color: var(--bg-primary);
box-shadow: var(--shadow-lg);
cursor: pointer;
font-size: 1.5rem;
z-index: 100;
transition: transform var(--transition-fast);
}
.theme-toggle:hover {
transform: scale(1.1);
}
[data-theme="light"] .moon-icon { display: inline; }
[data-theme="light"] .sun-icon { display: none; }
[data-theme="dark"] .moon-icon { display: none; }
[data-theme="dark"] .sun-icon { display: inline; }
/* ========== Back to Top ========== */
.back-to-top {
position: fixed;
bottom: calc(var(--spacing-lg) + 60px);
right: var(--spacing-lg);
width: 40px;
height: 40px;
border-radius: 50%;
border: none;
background-color: var(--accent-color);
color: white;
font-size: 1.25rem;
cursor: pointer;
opacity: 0;
visibility: hidden;
transition: all var(--transition-fast);
z-index: 100;
}
.back-to-top.visible {
opacity: 1;
visibility: visible;
}
.back-to-top:hover {
background-color: var(--accent-hover);
}
/* ========== Responsive ========== */
@media (max-width: 1024px) {
.wiki-sidebar {
transform: translateX(-100%);
}
.wiki-sidebar.open {
transform: translateX(0);
}
.wiki-content {
margin-left: 0;
}
.sidebar-toggle {
display: flex;
}
}
@media (max-width: 640px) {
.tiddler-header {
flex-direction: column;
align-items: flex-start;
}
.header-actions {
display: none;
}
.wiki-tags {
overflow-x: auto;
flex-wrap: nowrap;
padding-bottom: var(--spacing-md);
}
}
/* ========== Print Styles ========== */
@media print {
.wiki-sidebar,
.theme-toggle,
.back-to-top,
.content-header,
.collapse-toggle,
.copy-code-btn {
display: none !important;
}
.wiki-content {
margin-left: 0;
}
.tiddler {
break-inside: avoid;
box-shadow: none;
border: 1px solid #ccc;
}
.tiddler.collapsed .tiddler-content {
display: block;
}
.tiddler-content pre {
background-color: #f5f5f5 !important;
color: #333 !important;
}
}

View File

@@ -0,0 +1,278 @@
/* ========================================
TiddlyWiki-Style Dark Theme
Software Manual Skill
======================================== */
[data-theme="dark"] {
/* Dark Theme Colors */
--bg-primary: #1a1a2e;
--bg-secondary: #16213e;
--bg-tertiary: #0f3460;
--text-primary: #eaeaea;
--text-secondary: #b8b8b8;
--text-muted: #888888;
--border-color: #2d3748;
--accent-color: #4dabf7;
--accent-hover: #339af0;
--success-color: #51cf66;
--warning-color: #ffd43b;
--danger-color: #ff6b6b;
--info-color: #22b8cf;
/* Shadows */
--shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.3);
--shadow-md: 0 4px 6px rgba(0, 0, 0, 0.4);
--shadow-lg: 0 10px 15px rgba(0, 0, 0, 0.5);
}
/* Dark theme specific overrides */
[data-theme="dark"] .wiki-logo .logo-placeholder {
background: linear-gradient(135deg, var(--accent-color), #6741d9);
}
[data-theme="dark"] .wiki-search input {
background-color: var(--bg-tertiary);
border-color: var(--border-color);
color: var(--text-primary);
}
[data-theme="dark"] .wiki-search input::placeholder {
color: var(--text-muted);
}
[data-theme="dark"] .search-results {
background-color: var(--bg-secondary);
border-color: var(--border-color);
}
[data-theme="dark"] .search-result-item {
border-color: var(--border-color);
}
[data-theme="dark"] .search-result-item:hover {
background-color: var(--bg-tertiary);
}
[data-theme="dark"] .result-excerpt mark {
background-color: rgba(255, 212, 59, 0.3);
color: var(--warning-color);
}
[data-theme="dark"] .wiki-tags .tag {
background-color: var(--bg-tertiary);
border-color: var(--border-color);
color: var(--text-secondary);
}
[data-theme="dark"] .wiki-tags .tag:hover {
border-color: var(--accent-color);
color: var(--accent-color);
}
[data-theme="dark"] .wiki-tags .tag.active {
background-color: var(--accent-color);
border-color: var(--accent-color);
color: #1a1a2e;
}
[data-theme="dark"] .wiki-toc a:hover {
background-color: var(--bg-tertiary);
}
[data-theme="dark"] .content-header {
background-color: var(--bg-primary);
border-color: var(--border-color);
}
[data-theme="dark"] .sidebar-toggle span {
background-color: var(--text-primary);
}
[data-theme="dark"] .header-actions button {
background-color: var(--bg-secondary);
border-color: var(--border-color);
color: var(--text-secondary);
}
[data-theme="dark"] .header-actions button:hover {
border-color: var(--accent-color);
color: var(--accent-color);
}
[data-theme="dark"] .tiddler {
background-color: var(--bg-primary);
border-color: var(--border-color);
}
[data-theme="dark"] .tiddler-header {
border-color: var(--border-color);
}
[data-theme="dark"] .difficulty-badge.beginner {
background-color: rgba(81, 207, 102, 0.2);
color: var(--success-color);
}
[data-theme="dark"] .difficulty-badge.intermediate {
background-color: rgba(255, 212, 59, 0.2);
color: var(--warning-color);
}
[data-theme="dark"] .difficulty-badge.advanced {
background-color: rgba(255, 107, 107, 0.2);
color: var(--danger-color);
}
[data-theme="dark"] .tag-badge {
background-color: var(--bg-tertiary);
color: var(--text-secondary);
}
[data-theme="dark"] .tiddler-content code {
background-color: var(--bg-tertiary);
color: var(--accent-color);
}
[data-theme="dark"] .tiddler-content pre {
background-color: #0d1117;
border: 1px solid var(--border-color);
}
[data-theme="dark"] .tiddler-content pre code {
color: #e6e6e6;
}
[data-theme="dark"] .copy-code-btn {
background-color: var(--bg-tertiary);
color: var(--text-secondary);
}
[data-theme="dark"] .tiddler-content th {
background-color: var(--bg-tertiary);
}
[data-theme="dark"] .tiddler-content tr:nth-child(even) {
background-color: var(--bg-secondary);
}
[data-theme="dark"] .tiddler-content th,
[data-theme="dark"] .tiddler-content td {
border-color: var(--border-color);
}
[data-theme="dark"] .screenshot img {
border-color: var(--border-color);
}
[data-theme="dark"] .screenshot-placeholder {
background-color: var(--bg-tertiary);
border-color: var(--border-color);
}
[data-theme="dark"] .wiki-footer {
background-color: var(--bg-primary);
border-color: var(--border-color);
}
[data-theme="dark"] .theme-toggle {
background-color: var(--bg-secondary);
color: var(--warning-color);
}
[data-theme="dark"] .back-to-top {
background-color: var(--accent-color);
}
[data-theme="dark"] .back-to-top:hover {
background-color: var(--accent-hover);
}
/* Scrollbar styling for dark theme */
[data-theme="dark"] ::-webkit-scrollbar {
width: 8px;
height: 8px;
}
[data-theme="dark"] ::-webkit-scrollbar-track {
background: var(--bg-secondary);
}
[data-theme="dark"] ::-webkit-scrollbar-thumb {
background: var(--bg-tertiary);
border-radius: 4px;
}
[data-theme="dark"] ::-webkit-scrollbar-thumb:hover {
background: var(--border-color);
}
/* Selection color */
[data-theme="dark"] ::selection {
background-color: rgba(77, 171, 247, 0.3);
color: var(--text-primary);
}
/* Focus styles for accessibility */
[data-theme="dark"] :focus {
outline-color: var(--accent-color);
}
[data-theme="dark"] .wiki-search input:focus {
border-color: var(--accent-color);
box-shadow: 0 0 0 3px rgba(77, 171, 247, 0.2);
}
/* Link colors */
[data-theme="dark"] .tiddler-content a {
color: var(--accent-color);
}
[data-theme="dark"] .tiddler-content a:hover {
color: var(--accent-hover);
}
/* Blockquote styling */
[data-theme="dark"] .tiddler-content blockquote {
border-left: 4px solid var(--accent-color);
background-color: var(--bg-tertiary);
padding: var(--spacing-md);
margin: var(--spacing-md) 0;
color: var(--text-secondary);
}
/* Horizontal rule */
[data-theme="dark"] .tiddler-content hr {
border: none;
border-top: 1px solid var(--border-color);
margin: var(--spacing-lg) 0;
}
/* Alert/Note boxes */
[data-theme="dark"] .note,
[data-theme="dark"] .warning,
[data-theme="dark"] .tip,
[data-theme="dark"] .danger {
padding: var(--spacing-md);
border-radius: 6px;
margin: var(--spacing-md) 0;
}
[data-theme="dark"] .note {
background-color: rgba(34, 184, 207, 0.1);
border-left: 4px solid var(--info-color);
}
[data-theme="dark"] .warning {
background-color: rgba(255, 212, 59, 0.1);
border-left: 4px solid var(--warning-color);
}
[data-theme="dark"] .tip {
background-color: rgba(81, 207, 102, 0.1);
border-left: 4px solid var(--success-color);
}
[data-theme="dark"] .danger {
background-color: rgba(255, 107, 107, 0.1);
border-left: 4px solid var(--danger-color);
}

View File

@@ -0,0 +1,327 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="{{SOFTWARE_NAME}} - Interactive Software Manual">
<meta name="generator" content="software-manual-skill">
<title>{{SOFTWARE_NAME}} v{{VERSION}} - User Manual</title>
<style>
{{EMBEDDED_CSS}}
</style>
</head>
<body class="wiki-container" data-theme="light">
<!-- Sidebar Navigation -->
<aside class="wiki-sidebar">
<!-- Logo and Title -->
<div class="wiki-logo">
<div class="logo-placeholder">{{SOFTWARE_NAME}}</div>
<h1>{{SOFTWARE_NAME}}</h1>
<span class="version">v{{VERSION}}</span>
</div>
<!-- Search Box -->
<div class="wiki-search">
<input type="text" id="searchInput" placeholder="Search documentation..." aria-label="Search">
<div id="searchResults" class="search-results" aria-live="polite"></div>
</div>
<!-- Tag Navigation (Dynamic) -->
<nav class="wiki-tags" aria-label="Filter by category">
<button class="tag active" data-tag="all">全部</button>
{{TAG_BUTTONS_HTML}}
</nav>
<!-- Table of Contents -->
{{TOC_HTML}}
</aside>
<!-- Main Content Area -->
<main class="wiki-content">
<!-- Header Bar -->
<header class="content-header">
<button class="sidebar-toggle" id="sidebarToggle" aria-label="Toggle sidebar">
<span></span>
<span></span>
<span></span>
</button>
<div class="header-actions">
<button class="expand-all" id="expandAll">Expand All</button>
<button class="collapse-all" id="collapseAll">Collapse All</button>
<button class="print-btn" id="printBtn">Print</button>
</div>
</header>
<!-- Tiddler Container -->
<div class="tiddler-container">
{{TIDDLERS_HTML}}
</div>
<!-- Footer -->
<footer class="wiki-footer">
<p>Generated by <strong>software-manual-skill</strong></p>
<p>Last updated: <time datetime="{{TIMESTAMP}}">{{TIMESTAMP}}</time></p>
</footer>
</main>
<!-- Theme Toggle Button -->
<button class="theme-toggle" id="themeToggle" aria-label="Toggle theme">
<span class="sun-icon">&#9728;</span>
<span class="moon-icon">&#9790;</span>
</button>
<!-- Back to Top Button -->
<button class="back-to-top" id="backToTop" aria-label="Back to top">&#8593;</button>
<!-- Search Index Data -->
<script id="search-index" type="application/json">
{{SEARCH_INDEX_JSON}}
</script>
<!-- Embedded JavaScript -->
<script>
(function() {
'use strict';
// ========== Search Functionality ==========
class WikiSearch {
constructor(indexData) {
this.index = indexData;
}
search(query) {
if (!query || query.length < 2) return [];
const results = [];
const lowerQuery = query.toLowerCase();
const queryWords = lowerQuery.split(/\s+/);
for (const [id, content] of Object.entries(this.index)) {
let score = 0;
// Title match (higher weight)
const titleLower = content.title.toLowerCase();
if (titleLower.includes(lowerQuery)) {
score += 10;
}
queryWords.forEach(word => {
if (titleLower.includes(word)) score += 3;
});
// Body match
const bodyLower = content.body.toLowerCase();
if (bodyLower.includes(lowerQuery)) {
score += 5;
}
queryWords.forEach(word => {
if (bodyLower.includes(word)) score += 1;
});
// Tag match
if (content.tags) {
content.tags.forEach(tag => {
if (tag.toLowerCase().includes(lowerQuery)) score += 4;
});
}
if (score > 0) {
results.push({
id,
title: content.title,
excerpt: this.highlight(content.body, query),
score
});
}
}
return results
.sort((a, b) => b.score - a.score)
.slice(0, 10);
}
highlight(text, query) {
const maxLength = 150;
const lowerText = text.toLowerCase();
const lowerQuery = query.toLowerCase();
const index = lowerText.indexOf(lowerQuery);
if (index === -1) {
return text.substring(0, maxLength) + (text.length > maxLength ? '...' : '');
}
const start = Math.max(0, index - 40);
const end = Math.min(text.length, index + query.length + 80);
let excerpt = text.substring(start, end);
if (start > 0) excerpt = '...' + excerpt;
if (end < text.length) excerpt += '...';
// Highlight matches
const regex = new RegExp('(' + query.replace(/[.*+?^${}()|[\]\\]/g, '\\$&') + ')', 'gi');
return excerpt.replace(regex, '<mark>$1</mark>');
}
}
// Initialize search
const indexData = JSON.parse(document.getElementById('search-index').textContent);
const search = new WikiSearch(indexData);
const searchInput = document.getElementById('searchInput');
const searchResults = document.getElementById('searchResults');
searchInput.addEventListener('input', function() {
const query = this.value.trim();
const results = search.search(query);
if (results.length === 0) {
searchResults.innerHTML = query.length >= 2
? '<div class="no-results">No results found</div>'
: '';
return;
}
searchResults.innerHTML = results.map(r => `
<a href="#${r.id}" class="search-result-item" data-tiddler="${r.id}">
<div class="result-title">${r.title}</div>
<div class="result-excerpt">${r.excerpt}</div>
</a>
`).join('');
});
// Clear search on result click
searchResults.addEventListener('click', function(e) {
const item = e.target.closest('.search-result-item');
if (item) {
searchInput.value = '';
searchResults.innerHTML = '';
// Expand target tiddler
const tiddlerId = item.dataset.tiddler;
const tiddler = document.getElementById(tiddlerId);
if (tiddler) {
tiddler.classList.remove('collapsed');
const toggle = tiddler.querySelector('.collapse-toggle');
if (toggle) toggle.textContent = '▼';
}
}
});
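// Close search results with Escape (the spec's "Esc to close" behavior;
// this handler is a minimal sketch of it)
searchInput.addEventListener('keydown', function(e) {
if (e.key === 'Escape') {
this.value = '';
searchResults.innerHTML = '';
this.blur();
}
});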
// ========== Collapse/Expand ==========
document.querySelectorAll('.collapse-toggle').forEach(btn => {
btn.addEventListener('click', function() {
const tiddler = this.closest('.tiddler');
tiddler.classList.toggle('collapsed');
this.textContent = tiddler.classList.contains('collapsed') ? '▶' : '▼';
});
});
// Expand/Collapse All
document.getElementById('expandAll').addEventListener('click', function() {
document.querySelectorAll('.tiddler').forEach(t => {
t.classList.remove('collapsed');
const toggle = t.querySelector('.collapse-toggle');
if (toggle) toggle.textContent = '▼';
});
});
document.getElementById('collapseAll').addEventListener('click', function() {
document.querySelectorAll('.tiddler').forEach(t => {
t.classList.add('collapsed');
const toggle = t.querySelector('.collapse-toggle');
if (toggle) toggle.textContent = '▶';
});
});
// ========== Tag Filtering ==========
document.querySelectorAll('.wiki-tags .tag').forEach(tag => {
tag.addEventListener('click', function() {
const filter = this.dataset.tag;
// Update active state
document.querySelectorAll('.wiki-tags .tag').forEach(t => t.classList.remove('active'));
this.classList.add('active');
// Filter tiddlers
document.querySelectorAll('.tiddler').forEach(tiddler => {
if (filter === 'all') {
tiddler.style.display = '';
} else {
const tags = tiddler.dataset.tags || '';
tiddler.style.display = tags.includes(filter) ? '' : 'none';
}
});
});
});
// ========== Theme Toggle ==========
const themeToggle = document.getElementById('themeToggle');
const savedTheme = localStorage.getItem('wiki-theme');
if (savedTheme) {
document.body.dataset.theme = savedTheme;
}
themeToggle.addEventListener('click', function() {
const isDark = document.body.dataset.theme === 'dark';
document.body.dataset.theme = isDark ? 'light' : 'dark';
localStorage.setItem('wiki-theme', document.body.dataset.theme);
});
// ========== Sidebar Toggle (Mobile) ==========
document.getElementById('sidebarToggle').addEventListener('click', function() {
document.querySelector('.wiki-sidebar').classList.toggle('open');
});
// ========== Back to Top ==========
const backToTop = document.getElementById('backToTop');
window.addEventListener('scroll', function() {
backToTop.classList.toggle('visible', window.scrollY > 300);
});
backToTop.addEventListener('click', function() {
window.scrollTo({ top: 0, behavior: 'smooth' });
});
// ========== Print ==========
document.getElementById('printBtn').addEventListener('click', function() {
window.print();
});
// ========== TOC Navigation ==========
document.querySelectorAll('.wiki-toc a').forEach(link => {
link.addEventListener('click', function(e) {
const tiddlerId = this.getAttribute('href').substring(1);
const tiddler = document.getElementById(tiddlerId);
if (tiddler) {
// Expand if collapsed
tiddler.classList.remove('collapsed');
const toggle = tiddler.querySelector('.collapse-toggle');
if (toggle) toggle.textContent = '▼';
// Close sidebar on mobile
document.querySelector('.wiki-sidebar').classList.remove('open');
}
});
});
// ========== Code Block Copy ==========
document.querySelectorAll('pre').forEach(pre => {
const copyBtn = document.createElement('button');
copyBtn.className = 'copy-code-btn';
copyBtn.textContent = 'Copy';
copyBtn.addEventListener('click', function() {
const code = pre.querySelector('code');
// Clipboard API can be unavailable (e.g. non-secure contexts); fail gracefully
navigator.clipboard.writeText(code.textContent).then(() => {
copyBtn.textContent = 'Copied!';
setTimeout(() => copyBtn.textContent = 'Copy', 2000);
}).catch(() => {
copyBtn.textContent = 'Copy failed';
setTimeout(() => copyBtn.textContent = 'Copy', 2000);
});
});
pre.appendChild(copyBtn);
});
})();
</script>
</body>
</html>

View File

@@ -0,0 +1,219 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "discovery-finding-schema",
"title": "Discovery Finding Schema",
"description": "Schema for perspective-based issue discovery results",
"type": "object",
"required": ["perspective", "discovery_id", "analysis_timestamp", "cli_tool_used", "summary", "findings"],
"properties": {
"perspective": {
"type": "string",
"enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"],
"description": "Discovery perspective"
},
"discovery_id": {
"type": "string",
"pattern": "^DSC-\\d{8}-\\d{6}$",
"description": "Parent discovery session ID"
},
"analysis_timestamp": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of analysis"
},
"cli_tool_used": {
"type": "string",
"enum": ["gemini", "qwen", "codex"],
"description": "CLI tool that performed the analysis"
},
"model": {
"type": "string",
"description": "Specific model version used",
"examples": ["gemini-2.5-pro", "qwen-max"]
},
"analysis_duration_ms": {
"type": "integer",
"minimum": 0,
"description": "Analysis duration in milliseconds"
},
"summary": {
"type": "object",
"required": ["total_findings"],
"properties": {
"total_findings": { "type": "integer", "minimum": 0 },
"critical": { "type": "integer", "minimum": 0 },
"high": { "type": "integer", "minimum": 0 },
"medium": { "type": "integer", "minimum": 0 },
"low": { "type": "integer", "minimum": 0 },
"files_analyzed": { "type": "integer", "minimum": 0 }
},
"description": "Summary statistics (FLAT structure, NOT nested)"
},
"findings": {
"type": "array",
"items": {
"type": "object",
"required": ["id", "title", "perspective", "priority", "category", "description", "file", "line"],
"properties": {
"id": {
"type": "string",
"pattern": "^dsc-[a-z]+-\\d{3}-[a-f0-9]{8}$",
"description": "Unique finding ID: dsc-{perspective}-{seq}-{uuid8}",
"examples": ["dsc-bug-001-a1b2c3d4"]
},
"title": {
"type": "string",
"minLength": 10,
"maxLength": 200,
"description": "Concise finding title"
},
"perspective": {
"type": "string",
"enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"]
},
"priority": {
"type": "string",
"enum": ["critical", "high", "medium", "low"],
"description": "Priority level (lowercase only)"
},
"category": {
"type": "string",
"description": "Perspective-specific category",
"examples": ["null-check", "edge-case", "missing-test", "complexity", "injection"]
},
"description": {
"type": "string",
"minLength": 20,
"description": "Detailed description of the finding"
},
"file": {
"type": "string",
"description": "File path relative to project root"
},
"line": {
"type": "integer",
"minimum": 1,
"description": "Line number of the finding"
},
"snippet": {
"type": "string",
"description": "Relevant code snippet"
},
"suggested_issue": {
"type": "object",
"required": ["title", "type", "priority"],
"properties": {
"title": {
"type": "string",
"description": "Suggested issue title for export"
},
"type": {
"type": "string",
"enum": ["bug", "feature", "enhancement", "refactor", "test", "docs"],
"description": "Issue type"
},
"priority": {
"type": "integer",
"minimum": 1,
"maximum": 5,
"description": "Priority 1-5 (1=critical, 5=low)"
},
"labels": {
"type": "array",
"items": { "type": "string" },
"description": "Suggested labels for the issue"
}
},
"description": "Pre-filled issue suggestion for export"
},
"external_reference": {
"type": ["object", "null"],
"properties": {
"source": { "type": "string" },
"url": { "type": "string", "format": "uri" },
"relevance": { "type": "string" }
},
"description": "External reference from Exa research (if applicable)"
},
"confidence": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Confidence score 0.0-1.0"
},
"impact": {
"type": "string",
"description": "Description of potential impact"
},
"recommendation": {
"type": "string",
"description": "Specific recommendation to address the finding"
},
"metadata": {
"type": "object",
"additionalProperties": true,
"description": "Additional metadata (CWE ID, OWASP category, etc.)"
}
}
},
"description": "Array of discovered findings"
},
"cross_references": {
"type": "array",
"items": {
"type": "object",
"properties": {
"finding_id": { "type": "string" },
"related_perspectives": {
"type": "array",
"items": { "type": "string" }
},
"reason": { "type": "string" }
}
},
"description": "Cross-references to findings in other perspectives"
}
},
"examples": [
{
"perspective": "bug",
"discovery_id": "DSC-20250128-143022",
"analysis_timestamp": "2025-01-28T14:35:00Z",
"cli_tool_used": "gemini",
"model": "gemini-2.5-pro",
"analysis_duration_ms": 45000,
"summary": {
"total_findings": 8,
"critical": 1,
"high": 2,
"medium": 3,
"low": 2,
"files_analyzed": 5
},
"findings": [
{
"id": "dsc-bug-001-a1b2c3d4",
"title": "Missing null check in user validation",
"perspective": "bug",
"priority": "high",
"category": "null-check",
"description": "User object is accessed without null check after database query, which may fail if user doesn't exist",
"file": "src/auth/validator.ts",
"line": 45,
"snippet": "const user = await db.findUser(id);\nreturn user.email; // user may be null",
"suggested_issue": {
"title": "Add null check in user validation",
"type": "bug",
"priority": 2,
"labels": ["bug", "auth"]
},
"external_reference": null,
"confidence": 0.85,
"impact": "Runtime error when user not found",
"recommendation": "Add null check: if (!user) throw new NotFoundError('User not found');"
}
],
"cross_references": []
}
]
}
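Consumers of these reports can validate them before export. A minimal sketch using Ajv — the library choice and the file paths here are assumptions for illustration, not part of the schema:

```typescript
// Hypothetical validation pass over a perspective report (paths are illustrative)
import Ajv from "ajv";
import addFormats from "ajv-formats";
import { readFileSync } from "fs";

const ajv = new Ajv({ allErrors: true });
addFormats(ajv); // enables the "date-time" and "uri" format checks used above

const schema = JSON.parse(readFileSync("discovery-finding-schema.json", "utf8"));
const report = JSON.parse(readFileSync("bug-findings.json", "utf8"));

const validate = ajv.compile(schema);
if (!validate(report)) {
  // e.g. a finding whose priority is "High" instead of the required lowercase "high"
  console.error(validate.errors);
  process.exit(1);
}
```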

View File

@@ -0,0 +1,125 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "discovery-state-schema",
"title": "Discovery State Schema (Merged)",
"description": "Unified schema for issue discovery session (state + progress merged)",
"type": "object",
"required": ["discovery_id", "target_pattern", "phase", "created_at"],
"properties": {
"discovery_id": {
"type": "string",
"description": "Unique discovery session ID",
"pattern": "^DSC-\\d{8}-\\d{6}$",
"examples": ["DSC-20250128-143022"]
},
"target_pattern": {
"type": "string",
"description": "File/directory pattern being analyzed",
"examples": ["src/auth/**", "codex-lens/**/*.py"]
},
"phase": {
"type": "string",
"enum": ["initialization", "parallel", "aggregation", "complete"],
"description": "Current execution phase"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"updated_at": {
"type": "string",
"format": "date-time"
},
"target": {
"type": "object",
"description": "Target module information",
"properties": {
"files_count": {
"type": "object",
"properties": {
"source": { "type": "integer" },
"tests": { "type": "integer" },
"total": { "type": "integer" }
}
},
"project": {
"type": "object",
"properties": {
"name": { "type": "string" },
"version": { "type": "string" }
}
}
}
},
"perspectives": {
"type": "array",
"description": "Perspective analysis status (merged from progress)",
"items": {
"type": "object",
"required": ["name", "status"],
"properties": {
"name": {
"type": "string",
"enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"]
},
"status": {
"type": "string",
"enum": ["pending", "in_progress", "completed", "failed"]
},
"findings": {
"type": "integer",
"minimum": 0
}
}
}
},
"external_research": {
"type": "object",
"properties": {
"enabled": { "type": "boolean", "default": false },
"completed": { "type": "boolean", "default": false }
}
},
"results": {
"type": "object",
"description": "Aggregated results (final phase)",
"properties": {
"total_findings": { "type": "integer", "minimum": 0 },
"issues_generated": { "type": "integer", "minimum": 0 },
"priority_distribution": {
"type": "object",
"properties": {
"critical": { "type": "integer" },
"high": { "type": "integer" },
"medium": { "type": "integer" },
"low": { "type": "integer" }
}
}
}
}
},
"examples": [
{
"discovery_id": "DSC-20251228-182237",
"target_pattern": "codex-lens/**/*.py",
"phase": "complete",
"created_at": "2025-12-28T18:22:37+08:00",
"updated_at": "2025-12-28T18:35:00+08:00",
"target": {
"files_count": { "source": 48, "tests": 44, "total": 93 },
"project": { "name": "codex-lens", "version": "0.1.0" }
},
"perspectives": [
{ "name": "bug", "status": "completed", "findings": 15 },
{ "name": "test", "status": "completed", "findings": 11 },
{ "name": "quality", "status": "completed", "findings": 12 }
],
"external_research": { "enabled": false, "completed": false },
"results": {
"total_findings": 37,
"issues_generated": 15,
"priority_distribution": { "critical": 4, "high": 13, "medium": 16, "low": 6 }
}
}
]
}
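An ID matching the `^DSC-\d{8}-\d{6}$` pattern is just a formatted timestamp. A minimal sketch (the helper name is illustrative):

```typescript
// Build an ID like DSC-20251228-182237 from the current local time
function newDiscoveryId(now: Date = new Date()): string {
  const pad = (n: number, width: number) => String(n).padStart(width, "0");
  const date = `${now.getFullYear()}${pad(now.getMonth() + 1, 2)}${pad(now.getDate(), 2)}`;
  const time = `${pad(now.getHours(), 2)}${pad(now.getMinutes(), 2)}${pad(now.getSeconds(), 2)}`;
  return `DSC-${date}-${time}`;
}

console.assert(/^DSC-\d{8}-\d{6}$/.test(newDiscoveryId()));
```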

View File

@@ -1,136 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Task JSONL Schema",
"description": "Schema for individual task entries in tasks.jsonl file",
"type": "object",
"required": ["id", "title", "type", "description", "depends_on", "delivery_criteria", "status", "current_phase", "executor"],
"properties": {
"id": {
"type": "string",
"description": "Unique task identifier (e.g., TASK-001)",
"pattern": "^TASK-[0-9]+$"
},
"title": {
"type": "string",
"description": "Short summary of the task",
"maxLength": 100
},
"type": {
"type": "string",
"enum": ["feature", "bug", "refactor", "test", "chore", "docs"],
"description": "Task category"
},
"description": {
"type": "string",
"description": "Detailed instructions for the task"
},
"file_context": {
"type": "array",
"items": { "type": "string" },
"description": "List of relevant files/globs",
"default": []
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"description": "Array of Task IDs that must complete first",
"default": []
},
"delivery_criteria": {
"type": "array",
"items": { "type": "string" },
"description": "Checklist items that define task completion",
"minItems": 1
},
"pause_criteria": {
"type": "array",
"items": { "type": "string" },
"description": "Conditions that should halt execution (e.g., 'API spec unclear')",
"default": []
},
"status": {
"type": "string",
"enum": ["pending", "ready", "in_progress", "completed", "failed", "paused", "skipped"],
"description": "Current task status",
"default": "pending"
},
"current_phase": {
"type": "string",
"enum": ["analyze", "implement", "test", "optimize", "commit", "done"],
"description": "Current execution phase within the task lifecycle",
"default": "analyze"
},
"executor": {
"type": "string",
"enum": ["agent", "codex", "gemini", "auto"],
"description": "Preferred executor for this task",
"default": "auto"
},
"priority": {
"type": "integer",
"minimum": 1,
"maximum": 5,
"description": "Task priority (1=highest, 5=lowest)",
"default": 3
},
"phase_results": {
"type": "object",
"description": "Results from each execution phase",
"properties": {
"analyze": {
"type": "object",
"properties": {
"status": { "type": "string" },
"findings": { "type": "array", "items": { "type": "string" } },
"timestamp": { "type": "string", "format": "date-time" }
}
},
"implement": {
"type": "object",
"properties": {
"status": { "type": "string" },
"files_modified": { "type": "array", "items": { "type": "string" } },
"timestamp": { "type": "string", "format": "date-time" }
}
},
"test": {
"type": "object",
"properties": {
"status": { "type": "string" },
"test_results": { "type": "string" },
"retry_count": { "type": "integer" },
"timestamp": { "type": "string", "format": "date-time" }
}
},
"optimize": {
"type": "object",
"properties": {
"status": { "type": "string" },
"improvements": { "type": "array", "items": { "type": "string" } },
"timestamp": { "type": "string", "format": "date-time" }
}
},
"commit": {
"type": "object",
"properties": {
"status": { "type": "string" },
"commit_hash": { "type": "string" },
"message": { "type": "string" },
"timestamp": { "type": "string", "format": "date-time" }
}
}
}
},
"created_at": {
"type": "string",
"format": "date-time",
"description": "Task creation timestamp"
},
"updated_at": {
"type": "string",
"format": "date-time",
"description": "Last update timestamp"
}
},
"additionalProperties": false
}

View File

@@ -7,7 +7,7 @@
"properties": {
"id": {
"type": "string",
"description": "Issue ID (e.g., GH-123, TEXT-xxx)"
"description": "Issue ID (GH-123, ISS-xxx, DSC-001)"
},
"title": {
"type": "string"
@@ -21,24 +21,16 @@
"type": "integer",
"minimum": 1,
"maximum": 5,
"default": 3
"default": 3,
"description": "1=critical, 2=high, 3=medium, 4=low, 5=trivial"
},
"context": {
"type": "string",
"description": "Issue context/description (markdown)"
},
"bound_solution_id": {
"type": "string",
"description": "ID of the bound solution (null if none bound)"
},
"solution_count": {
"type": "integer",
"default": 0,
"description": "Number of candidate solutions in solutions/{id}.jsonl"
},
"source": {
"type": "string",
"enum": ["github", "text", "file"],
"enum": ["github", "text", "discovery"],
"description": "Source of the issue"
},
"source_url": {
@@ -50,6 +42,81 @@
"items": { "type": "string" },
"description": "Issue labels/tags"
},
"discovery_context": {
"type": "object",
"description": "Enriched context from issue:discover (only when source=discovery)",
"properties": {
"discovery_id": {
"type": "string",
"description": "Source discovery session ID"
},
"perspective": {
"type": "string",
"enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"]
},
"category": {
"type": "string",
"description": "Finding category (e.g., edge-case, race-condition)"
},
"file": {
"type": "string",
"description": "Primary affected file"
},
"line": {
"type": "integer",
"description": "Line number in primary file"
},
"snippet": {
"type": "string",
"description": "Code snippet showing the issue"
},
"confidence": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Agent confidence score"
},
"suggested_fix": {
"type": "string",
"description": "Suggested remediation from discovery"
}
}
},
"affected_components": {
"type": "array",
"items": { "type": "string" },
"description": "Files/modules affected"
},
"lifecycle_requirements": {
"type": "object",
"properties": {
"test_strategy": {
"type": "string",
"enum": ["unit", "integration", "e2e", "manual", "auto"]
},
"regression_scope": {
"type": "string",
"enum": ["affected", "related", "full"]
},
"acceptance_type": {
"type": "string",
"enum": ["automated", "manual", "both"]
},
"commit_strategy": {
"type": "string",
"enum": ["per-task", "squash", "atomic"]
}
}
},
"bound_solution_id": {
"type": "string",
"description": "ID of the bound solution (null if none bound)"
},
"solution_count": {
"type": "integer",
"default": 0,
"description": "Number of candidate solutions"
},
"created_at": {
"type": "string",
"format": "date-time"
@@ -62,13 +129,40 @@
"type": "string",
"format": "date-time"
},
"queued_at": {
"type": "string",
"format": "date-time"
},
"completed_at": {
"type": "string",
"format": "date-time"
}
}
},
"examples": [
{
"id": "DSC-001",
"title": "Fix: SQLite connection pool memory leak",
"status": "registered",
"priority": 1,
"context": "Connection pool cleanup only happens when MAX_POOL_SIZE is reached...",
"source": "discovery",
"labels": ["bug", "resource-leak", "critical"],
"discovery_context": {
"discovery_id": "DSC-20251228-182237",
"perspective": "bug",
"category": "resource-leak",
"file": "storage/sqlite_store.py",
"line": 59,
"snippet": "if len(self._pool) >= self.MAX_POOL_SIZE:\n self._cleanup_stale_connections()",
"confidence": 0.85,
"suggested_fix": "Implement periodic cleanup or weak references"
},
"affected_components": ["storage/sqlite_store.py"],
"lifecycle_requirements": {
"test_strategy": "unit",
"regression_scope": "affected",
"acceptance_type": "automated",
"commit_strategy": "per-task"
},
"bound_solution_id": null,
"solution_count": 0,
"created_at": "2025-12-28T18:22:37Z"
}
]
}
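Because `issues.jsonl` stores one issue per line, consumers can filter on the new `source` value in a few lines. A sketch — the path follows the workflow docs, error handling omitted:

```typescript
// Read .workflow/issues/issues.jsonl and keep discovery-sourced issues
import { readFileSync } from "fs";

const issues = readFileSync(".workflow/issues/issues.jsonl", "utf8")
  .split("\n")
  .filter(Boolean) // drop the trailing empty line
  .map(line => JSON.parse(line));

const fromDiscovery = issues.filter(issue => issue.source === "discovery");
```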

View File

@@ -1,128 +1,63 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Issue Execution Queue Schema",
"description": "Global execution queue for all issue tasks",
"description": "Execution queue supporting both task-level (T-N) and solution-level (S-N) granularity",
"type": "object",
"properties": {
"queue": {
"id": {
"type": "string",
"pattern": "^QUE-[0-9]{8}-[0-9]{6}$",
"description": "Queue ID in format QUE-YYYYMMDD-HHMMSS"
},
"status": {
"type": "string",
"enum": ["active", "paused", "completed", "archived"],
"default": "active"
},
"issue_ids": {
"type": "array",
"description": "Ordered list of tasks to execute",
"items": { "type": "string" },
"description": "Issues included in this queue"
},
"solutions": {
"type": "array",
"description": "Solution-level queue items (preferred for new queues)",
"items": {
"type": "object",
"required": ["queue_id", "issue_id", "solution_id", "task_id", "status"],
"properties": {
"queue_id": {
"type": "string",
"pattern": "^Q-[0-9]+$",
"description": "Unique queue item identifier"
},
"issue_id": {
"type": "string",
"description": "Source issue ID"
},
"solution_id": {
"type": "string",
"description": "Source solution ID"
},
"task_id": {
"type": "string",
"description": "Task ID within solution"
},
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"execution_order": {
"type": "integer",
"description": "Order in execution sequence"
},
"execution_group": {
"type": "string",
"description": "Parallel execution group ID (e.g., P1, S1)"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"description": "Queue IDs this task depends on"
},
"semantic_priority": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Semantic importance score (0.0-1.0)"
},
"assigned_executor": {
"type": "string",
"enum": ["codex", "gemini", "agent"]
},
"queued_at": {
"type": "string",
"format": "date-time"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"completed_at": {
"type": "string",
"format": "date-time"
},
"result": {
"type": "object",
"description": "Execution result",
"properties": {
"files_modified": { "type": "array", "items": { "type": "string" } },
"files_created": { "type": "array", "items": { "type": "string" } },
"summary": { "type": "string" },
"commit_hash": { "type": "string" }
}
},
"failure_reason": {
"type": "string"
}
}
"$ref": "#/definitions/solutionItem"
}
},
"tasks": {
"type": "array",
"description": "Task-level queue items (legacy format)",
"items": {
"$ref": "#/definitions/taskItem"
}
},
"conflicts": {
"type": "array",
"description": "Detected conflicts between tasks",
"description": "Detected conflicts between items",
"items": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Queue IDs involved in conflict"
},
"file": {
"type": "string",
"description": "Conflicting file path"
},
"resolution": {
"type": "string",
"enum": ["sequential", "merge", "manual"]
},
"resolution_order": {
"type": "array",
"items": { "type": "string" }
},
"resolved": {
"type": "boolean",
"default": false
}
}
"$ref": "#/definitions/conflict"
}
},
"execution_groups": {
"type": "array",
"description": "Parallel/Sequential execution groups",
"items": {
"$ref": "#/definitions/executionGroup"
}
},
"_metadata": {
"type": "object",
"properties": {
"version": { "type": "string", "default": "1.0" },
"total_items": { "type": "integer" },
"version": { "type": "string", "default": "2.0" },
"queue_type": {
"type": "string",
"enum": ["solution", "task"],
"description": "Queue granularity level"
},
"total_solutions": { "type": "integer" },
"total_tasks": { "type": "integer" },
"pending_count": { "type": "integer" },
"ready_count": { "type": "integer" },
"executing_count": { "type": "integer" },
@@ -132,5 +67,187 @@
"last_updated": { "type": "string", "format": "date-time" }
}
}
},
"definitions": {
"solutionItem": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "status", "task_count", "files_touched"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^S-[0-9]+$",
"description": "Solution-level queue item ID (S-1, S-2, ...)"
},
"issue_id": {
"type": "string",
"description": "Source issue ID"
},
"solution_id": {
"type": "string",
"description": "Bound solution ID"
},
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"task_count": {
"type": "integer",
"minimum": 1,
"description": "Number of tasks in this solution"
},
"files_touched": {
"type": "array",
"items": { "type": "string" },
"description": "All files modified by this solution"
},
"execution_order": {
"type": "integer",
"description": "Order in execution sequence"
},
"execution_group": {
"type": "string",
"description": "Parallel (P*) or Sequential (S*) group ID"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs this item depends on"
},
"semantic_priority": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Semantic importance score (0.0-1.0)"
},
"assigned_executor": {
"type": "string",
"enum": ["codex", "gemini", "agent"]
},
"queued_at": { "type": "string", "format": "date-time" },
"started_at": { "type": "string", "format": "date-time" },
"completed_at": { "type": "string", "format": "date-time" },
"result": {
"type": "object",
"properties": {
"summary": { "type": "string" },
"files_modified": { "type": "array", "items": { "type": "string" } },
"tasks_completed": { "type": "integer" },
"commit_hashes": { "type": "array", "items": { "type": "string" } }
}
},
"failure_reason": { "type": "string" }
}
},
"taskItem": {
"type": "object",
"required": ["item_id", "issue_id", "solution_id", "task_id", "status"],
"properties": {
"item_id": {
"type": "string",
"pattern": "^T-[0-9]+$",
"description": "Task-level queue item ID (T-1, T-2, ...)"
},
"issue_id": { "type": "string" },
"solution_id": { "type": "string" },
"task_id": { "type": "string" },
"status": {
"type": "string",
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
"default": "pending"
},
"execution_order": { "type": "integer" },
"execution_group": { "type": "string" },
"depends_on": { "type": "array", "items": { "type": "string" } },
"semantic_priority": { "type": "number", "minimum": 0, "maximum": 1 },
"assigned_executor": { "type": "string", "enum": ["codex", "gemini", "agent"] },
"queued_at": { "type": "string", "format": "date-time" },
"started_at": { "type": "string", "format": "date-time" },
"completed_at": { "type": "string", "format": "date-time" },
"result": {
"type": "object",
"properties": {
"files_modified": { "type": "array", "items": { "type": "string" } },
"files_created": { "type": "array", "items": { "type": "string" } },
"summary": { "type": "string" },
"commit_hash": { "type": "string" }
}
},
"failure_reason": { "type": "string" }
}
},
"conflict": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
},
"file": {
"type": "string",
"description": "Conflicting file path"
},
"solutions": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs involved (for solution-level queues)"
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Task IDs involved (for task-level queues)"
},
"resolution": {
"type": "string",
"enum": ["sequential", "merge", "manual"]
},
"resolution_order": {
"type": "array",
"items": { "type": "string" },
"description": "Execution order to resolve conflict"
},
"rationale": {
"type": "string",
"description": "Explanation of resolution decision"
},
"resolved": {
"type": "boolean",
"default": false
}
}
},
"executionGroup": {
"type": "object",
"required": ["id", "type"],
"properties": {
"id": {
"type": "string",
"pattern": "^[PS][0-9]+$",
"description": "Group ID (P1, P2 for parallel, S1, S2 for sequential)"
},
"type": {
"type": "string",
"enum": ["parallel", "sequential"]
},
"solutions": {
"type": "array",
"items": { "type": "string" },
"description": "Solution IDs in this group"
},
"tasks": {
"type": "array",
"items": { "type": "string" },
"description": "Task IDs in this group (legacy)"
},
"solution_count": {
"type": "integer",
"description": "Number of solutions in group"
},
"task_count": {
"type": "integer",
"description": "Number of tasks in group (legacy)"
}
}
}
}
}
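The `_metadata` counters can be derived from the queue items rather than maintained by hand. A sketch against the `solutionItem` shape defined above:

```typescript
// Recompute status counters for a solution-level queue
type ItemStatus = "pending" | "ready" | "executing" | "completed" | "failed" | "blocked";
interface SolutionItem { item_id: string; status: ItemStatus; }

function queueCounters(solutions: SolutionItem[]) {
  const count = (s: ItemStatus) => solutions.filter(i => i.status === s).length;
  return {
    total_solutions: solutions.length,
    pending_count: count("pending"),
    ready_count: count("ready"),
    executing_count: count("executing"),
  };
}
```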

View File

@@ -3,27 +3,27 @@
"title": "Issue Solution Schema",
"description": "Schema for solution registered to an issue",
"type": "object",
"required": ["id", "issue_id", "tasks", "status", "created_at"],
"required": ["id", "tasks", "is_bound", "created_at"],
"properties": {
"id": {
"type": "string",
"description": "Unique solution identifier",
"pattern": "^SOL-[0-9]+$"
},
"issue_id": {
"description": {
"type": "string",
"description": "Parent issue ID"
"description": "High-level summary of the solution"
},
"plan_session_id": {
"approach": {
"type": "string",
"description": "Planning session that created this solution"
"description": "Technical approach or strategy"
},
"tasks": {
"type": "array",
"description": "Task breakdown for this solution",
"items": {
"type": "object",
"required": ["id", "title", "scope", "action", "acceptance"],
"required": ["id", "title", "scope", "action", "implementation", "acceptance"],
"properties": {
"id": {
"type": "string",
@@ -61,10 +61,40 @@
"items": { "type": "string" },
"description": "Step-by-step implementation guide"
},
"acceptance": {
"test": {
"type": "object",
"description": "Test requirements",
"properties": {
"unit": { "type": "array", "items": { "type": "string" } },
"integration": { "type": "array", "items": { "type": "string" } },
"commands": { "type": "array", "items": { "type": "string" } },
"coverage_target": { "type": "number" }
}
},
"regression": {
"type": "array",
"items": { "type": "string" },
"description": "Quantified completion criteria"
"description": "Regression check points"
},
"acceptance": {
"type": "object",
"description": "Acceptance criteria & verification",
"required": ["criteria", "verification"],
"properties": {
"criteria": { "type": "array", "items": { "type": "string" } },
"verification": { "type": "array", "items": { "type": "string" } },
"manual_checks": { "type": "array", "items": { "type": "string" } }
}
},
"commit": {
"type": "object",
"description": "Commit specification",
"properties": {
"type": { "type": "string", "enum": ["feat", "fix", "refactor", "test", "docs", "chore"] },
"scope": { "type": "string" },
"message_template": { "type": "string" },
"breaking": { "type": "boolean" }
}
},
"depends_on": {
"type": "array",
@@ -76,10 +106,15 @@
"type": "integer",
"description": "Estimated time to complete"
},
"executor": {
"status": {
"type": "string",
"enum": ["codex", "gemini", "agent", "auto"],
"default": "auto"
"description": "Task status (optional, for tracking)"
},
"priority": {
"type": "integer",
"minimum": 1,
"maximum": 5,
"default": 3
}
}
}
@@ -97,10 +132,20 @@
"integration_points": { "type": "string" }
}
},
"status": {
"type": "string",
"enum": ["draft", "candidate", "bound", "queued", "executing", "completed", "failed"],
"default": "draft"
"analysis": {
"type": "object",
"description": "Solution risk assessment",
"properties": {
"risk": { "type": "string", "enum": ["low", "medium", "high"] },
"impact": { "type": "string", "enum": ["low", "medium", "high"] },
"complexity": { "type": "string", "enum": ["low", "medium", "high"] }
}
},
"score": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Solution quality score (0.0-1.0)"
},
"is_bound": {
"type": "boolean",

View File

@@ -1,125 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Solutions JSONL Schema",
"description": "Schema for each line in solutions/{issue-id}.jsonl",
"type": "object",
"required": ["id", "tasks", "created_at"],
"properties": {
"id": {
"type": "string",
"description": "Unique solution identifier",
"pattern": "^SOL-[0-9]+$"
},
"description": {
"type": "string",
"description": "Solution approach description"
},
"tasks": {
"type": "array",
"description": "Task breakdown for this solution",
"items": {
"type": "object",
"required": ["id", "title", "scope", "action", "acceptance"],
"properties": {
"id": {
"type": "string",
"pattern": "^T[0-9]+$"
},
"title": {
"type": "string",
"description": "Action verb + target"
},
"scope": {
"type": "string",
"description": "Module path or feature area"
},
"action": {
"type": "string",
"enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
},
"description": {
"type": "string",
"description": "1-2 sentences describing what to implement"
},
"modification_points": {
"type": "array",
"items": {
"type": "object",
"properties": {
"file": { "type": "string" },
"target": { "type": "string" },
"change": { "type": "string" }
}
}
},
"implementation": {
"type": "array",
"items": { "type": "string" },
"description": "Step-by-step implementation guide"
},
"acceptance": {
"type": "array",
"items": { "type": "string" },
"description": "Quantified completion criteria"
},
"depends_on": {
"type": "array",
"items": { "type": "string" },
"default": [],
"description": "Task IDs this task depends on"
},
"estimated_minutes": {
"type": "integer",
"description": "Estimated time to complete"
},
"executor": {
"type": "string",
"enum": ["codex", "gemini", "agent", "auto"],
"default": "auto"
}
}
}
},
"exploration_context": {
"type": "object",
"description": "ACE exploration results",
"properties": {
"project_structure": { "type": "string" },
"relevant_files": {
"type": "array",
"items": { "type": "string" }
},
"patterns": { "type": "string" },
"integration_points": { "type": "string" }
}
},
"analysis": {
"type": "object",
"properties": {
"risk": { "type": "string", "enum": ["low", "medium", "high"] },
"impact": { "type": "string", "enum": ["low", "medium", "high"] },
"complexity": { "type": "string", "enum": ["low", "medium", "high"] }
}
},
"score": {
"type": "number",
"minimum": 0,
"maximum": 1,
"description": "Solution quality score (0.0-1.0)"
},
"is_bound": {
"type": "boolean",
"default": false,
"description": "Whether this solution is bound to the issue"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"bound_at": {
"type": "string",
"format": "date-time",
"description": "When this solution was bound to the issue"
}
}
}

View File

@@ -1,105 +0,0 @@
## MCP Tools Usage
### search_context (ACE) - Code Search (REQUIRED - HIGHEST PRIORITY)
**OVERRIDES**: All other search/discovery rules in other workflow files
**When**: ANY code discovery task, including:
- Find code, understand codebase structure, locate implementations
- Explore unknown locations
- Verify file existence before reading
- Pattern-based file discovery
- Semantic code understanding
**Priority Rule**:
1. **Always use mcp__ace-tool__search_context FIRST** for any code/file discovery
2. Only use Built-in Grep for single-file exact line search (after location confirmed)
3. Only use Built-in Read for known, confirmed file paths
**How**:
```javascript
// Natural language code search - best for understanding and exploration
mcp__ace-tool__search_context({
project_root_path: "/path/to/project",
query: "authentication logic"
})
// With keywords for better semantic matching
mcp__ace-tool__search_context({
project_root_path: "/path/to/project",
query: "I want to find where the server handles user login. Keywords: auth, login, session"
})
```
**Good Query Examples**:
- "Where is the function that handles user authentication?"
- "What tests are there for the login functionality?"
- "How is the database connected to the application?"
- "I want to find where the server handles chunk merging. Keywords: upload chunk merge"
- "Locate where the system refreshes cached data. Keywords: cache refresh, invalidation"
**Bad Query Examples** (use grep or file view instead):
- "Find definition of constructor of class Foo" (use grep tool instead)
- "Find all references to function bar" (use grep tool instead)
- "Show me how Checkout class is used in services/payment.py" (use file view tool instead)
**Key Features**:
- Real-time index of the codebase (always up-to-date)
- Cross-language retrieval support
- Semantic search with embeddings
- No manual index initialization required
---
### read_file - Read File Contents
**When**: Read files found by search_context
**How**:
```javascript
read_file(path="/path/to/file.ts") // Single file
read_file(path="/src/**/*.config.ts") // Pattern matching
```
---
### edit_file - Modify Files
**When**: Built-in Edit tool fails or need advanced features
**How**:
```javascript
edit_file(path="/file.ts", old_string="...", new_string="...", mode="update")
edit_file(path="/file.ts", line=10, content="...", mode="insert_after")
```
**Modes**: `update` (replace text), `insert_after`, `insert_before`, `delete_line`
---
### write_file - Create/Overwrite Files
**When**: Create new files or completely replace content
**How**:
```javascript
write_file(path="/new-file.ts", content="...")
```
---
### Exa - External Search
**When**: Find documentation/examples outside codebase
**How**:
```javascript
mcp__exa__search(query="React hooks 2025 documentation")
mcp__exa__search(query="FastAPI auth example", numResults=10)
mcp__exa__search(query="latest API docs", livecrawl="always")
```
**Parameters**:
- `query` (required): Search query string
- `numResults` (optional): Number of results to return (default: 5)
- `livecrawl` (optional): `"always"` or `"fallback"` for live crawling

View File

@@ -1,35 +1,27 @@
## MCP Tools Usage
## Context Acquisition (MCP Tools Priority)
### smart_search - Code Search (REQUIRED - HIGHEST PRIORITY)
**For task context gathering and analysis, ALWAYS prefer MCP tools**:
**OVERRIDES**: All other search/discovery rules in other workflow files
1. **mcp__ace-tool__search_context** - HIGHEST PRIORITY for code discovery
- Semantic search with real-time codebase index
- Use for: finding implementations, understanding architecture, locating patterns
- Example: `mcp__ace-tool__search_context(project_root_path="/path", query="authentication logic")`
**When**: ANY code discovery task, including:
- Find code, understand codebase structure, locate implementations
- Explore unknown locations
- Verify file existence before reading
- Pattern-based file discovery
2. **smart_search** - Fallback for structured search
- Use `smart_search(query="...")` for keyword/regex search
- Use `smart_search(action="find_files", pattern="*.ts")` for file discovery
- Supports modes: `auto`, `hybrid`, `exact`, `ripgrep`
**Priority Rule**:
1. **Always use smart_search FIRST** for any code/file discovery
2. Only use Built-in Grep for single-file exact line search (after location confirmed)
3. Only use Built-in Read for known, confirmed file paths
3. **read_file** - Batch file reading
- Read multiple files in parallel: `read_file(path="file1.ts")`, `read_file(path="file2.ts")`
- Supports glob patterns: `read_file(path="src/**/*.config.ts")`
**Workflow** (search first, init if needed):
```javascript
// Step 1: Try search directly (works if index exists or uses ripgrep fallback)
smart_search(query="authentication logic")
// Step 2: Only if search warns "No CodexLens index found", then init
smart_search(action="init", path=".") // Creates FTS index only
// Note: For semantic/vector search, use "ccw view" dashboard to create vector index
**Priority Order**:
```
ACE search_context (semantic) → smart_search (structured) → read_file (batch read) → shell commands (fallback)
```
**Modes**: `auto` (intelligent routing), `hybrid` (semantic, needs vector index), `exact` (FTS), `ripgrep` (no index)
---
**NEVER** use shell commands (`cat`, `find`, `grep`) when MCP tools are available.
### read_file - Read File Contents
**When**: Read files found by smart_search

View File

@@ -76,18 +76,23 @@ chcp 65001 > $null
**For task context gathering and analysis, ALWAYS prefer MCP tools**:
1. **smart_search** - First choice for code discovery
- Use `smart_search(query="...")` for semantic/keyword search
1. **mcp__ace-tool__search_context** - HIGHEST PRIORITY for code discovery
- Semantic search with real-time codebase index
- Use for: finding implementations, understanding architecture, locating patterns
- Example: `mcp__ace-tool__search_context(project_root_path="/path", query="authentication logic")`
2. **smart_search** - Fallback for structured search
- Use `smart_search(query="...")` for keyword/regex search
- Use `smart_search(action="find_files", pattern="*.ts")` for file discovery
- Supports modes: `auto`, `hybrid`, `exact`, `ripgrep`
2. **read_file** - Batch file reading
3. **read_file** - Batch file reading
- Read multiple files in parallel: `read_file(path="file1.ts")`, `read_file(path="file2.ts")`
- Supports glob patterns: `read_file(path="src/**/*.config.ts")`
**Priority Order**:
```
smart_search (discovery) → read_file (batch read) → shell commands (fallback)
ACE search_context (semantic) → smart_search (structured) → read_file (batch read) → shell commands (fallback)
```
**NEVER** use shell commands (`cat`, `find`, `grep`) when MCP tools are available.
@@ -96,7 +101,7 @@ smart_search (discovery) → read_file (batch read) → shell commands (fallback
**Before**:
- [ ] Understand PURPOSE and TASK clearly
- [ ] Use smart_search to discover relevant files
- [ ] Use ACE search_context first, fallback to smart_search for discovery
- [ ] Use read_file to batch read context files, find 3+ patterns
- [ ] Check RULES templates and constraints

View File

@@ -1,104 +1,132 @@
---
description: Execute issue queue tasks sequentially with git commit after each task
argument-hint: "[--dry-run]"
description: Execute all solutions from issue queue with git commit after each task
argument-hint: ""
---
# Issue Execute (Codex Version)
## Core Principle
**Serial Execution**: Execute tasks ONE BY ONE from the issue queue. Complete each task fully (implement → test → commit) before moving to next. Continue autonomously until ALL tasks complete or queue is empty.
**Serial Execution**: Execute solutions ONE BY ONE from the issue queue via `ccw issue next`. For each solution, complete all tasks sequentially (implement → test → commit per task). Continue autonomously until queue is empty.
## Execution Flow
```
INIT: Fetch first task via ccw issue next
INIT: Fetch first solution via ccw issue next
WHILE task exists:
1. Receive task JSON from ccw issue next
2. Execute full lifecycle:
- IMPLEMENT: Follow task.implementation steps
- TEST: Run task.test commands
- VERIFY: Check task.acceptance criteria
- COMMIT: Stage files, commit with task.commit.message_template
3. Report completion via ccw issue complete <queue_id>
4. Fetch next task via ccw issue next
WHILE solution exists:
1. Receive solution JSON from ccw issue next
2. Execute all tasks in solution.tasks sequentially:
FOR each task:
- IMPLEMENT: Follow task.implementation steps
- TEST: Run task.test commands
- VERIFY: Check task.acceptance criteria
- COMMIT: Stage files, commit with task.commit.message_template
3. Report completion via ccw issue done <item_id>
4. Fetch next solution via ccw issue next
WHEN queue empty:
Output final summary
```
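The same loop as an illustrative shell-out sketch — not part of CCW itself, and the per-task work in the inner loop is performed by the agent:

```typescript
// Drain the queue by calling the documented CLI commands in a loop
import { execSync } from "child_process";

for (;;) {
  const next = JSON.parse(execSync("ccw issue next", { encoding: "utf8" }));
  if (next.status === "empty") break; // queue drained → output final summary
  for (const task of next.solution.tasks) {
    // implement → test → verify → commit, as described in Phases A-D below
  }
  execSync(`ccw issue done ${next.item_id} --result '{"summary":"..."}'`);
}
```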
## Step 1: Fetch First Task
## Step 1: Fetch First Solution
Run this command to get your first task:
Run this command to get your first solution:
```bash
ccw issue next
```
This returns JSON with the full task definition:
- `queue_id`: Unique ID for queue tracking (e.g., "Q-001")
- `issue_id`: Parent issue ID (e.g., "ISSUE-20251227-001")
- `task`: Full task definition with implementation steps
- `context`: Relevant files and patterns
This returns JSON with the full solution definition:
- `item_id`: Solution identifier in queue (e.g., "S-1")
- `issue_id`: Parent issue ID (e.g., "ISS-20251227-001")
- `solution_id`: Solution ID (e.g., "SOL-20251227-001")
- `solution`: Full solution with all tasks
- `execution_hints`: Timing and executor hints
If response contains `{ "status": "empty" }`, all tasks are complete - skip to final summary.
If response contains `{ "status": "empty" }`, all solutions are complete - skip to final summary.
## Step 2: Parse Task Response
## Step 2: Parse Solution Response
Expected task structure:
Expected solution structure:
```json
{
"queue_id": "Q-001",
"issue_id": "ISSUE-20251227-001",
"solution_id": "SOL-001",
"task": {
"id": "T1",
"title": "Task title",
"scope": "src/module/",
"action": "Create|Modify|Fix|Refactor",
"description": "What to do",
"modification_points": [
{ "file": "path/to/file.ts", "target": "function name", "change": "description" }
"item_id": "S-1",
"issue_id": "ISS-20251227-001",
"solution_id": "SOL-20251227-001",
"status": "pending",
"solution": {
"id": "SOL-20251227-001",
"description": "Description of solution approach",
"tasks": [
{
"id": "T1",
"title": "Task title",
"scope": "src/module/",
"action": "Create|Modify|Fix|Refactor|Add",
"description": "What to do",
"modification_points": [
{ "file": "path/to/file.ts", "target": "function name", "change": "description" }
],
"implementation": [
"Step 1: Do this",
"Step 2: Do that"
],
"test": {
"commands": ["npm test -- --filter=xxx"],
"unit": ["Unit test requirement 1", "Unit test requirement 2"]
},
"regression": ["Verify existing tests still pass"],
"acceptance": {
"criteria": ["Criterion 1: Must pass", "Criterion 2: Must verify"],
"verification": ["Run test command", "Manual verification step"]
},
"commit": {
"type": "feat|fix|test|refactor",
"scope": "module",
"message_template": "feat(scope): description"
},
"depends_on": [],
"estimated_minutes": 30,
"priority": 1
}
],
"implementation": [
"Step 1: Do this",
"Step 2: Do that"
],
"test": {
"commands": ["npm test -- --filter=xxx"],
"unit": "Unit test requirements",
"integration": "Integration test requirements (optional)"
"exploration_context": {
"relevant_files": ["path/to/reference.ts"],
"patterns": "Follow existing pattern in xxx",
"integration_points": "Used by other modules"
},
"acceptance": [
"Criterion 1: Must pass",
"Criterion 2: Must verify"
],
"commit": {
"message_template": "feat(scope): description"
}
"analysis": {
"risk": "low|medium|high",
"impact": "low|medium|high",
"complexity": "low|medium|high"
},
"score": 0.95,
"is_bound": true
},
"context": {
"relevant_files": ["path/to/reference.ts"],
"patterns": "Follow existing pattern in xxx"
"execution_hints": {
"executor": "codex",
"estimated_minutes": 180
}
}
```
## Step 3: Execute Task Lifecycle
## Step 3: Execute Tasks Sequentially
Iterate through `solution.tasks` array and execute each task:
### Phase A: IMPLEMENT
1. Read all `context.relevant_files` to understand existing patterns
1. Read all `solution.exploration_context.relevant_files` to understand existing patterns
2. Follow `task.implementation` steps in order
3. Apply changes to `task.modification_points` files
4. Follow `context.patterns` for code style consistency
4. Follow `solution.exploration_context.patterns` for code style consistency
5. Run `task.regression` checks if specified to ensure no breakage
**Output format:**
```
## Implementing: [task.title]
## Implementing: [task.title] (Task [N]/[Total])
**Scope**: [task.scope]
**Action**: [task.action]
@@ -132,7 +160,7 @@ Expected task structure:
### Phase C: VERIFY
Check all `task.acceptance` criteria are met:
Check all `task.acceptance.criteria` are met using `task.acceptance.verification` steps:
```
## Verifying: [task.title]
@@ -142,6 +170,10 @@ Check all `task.acceptance` criteria are met:
- [x] Criterion 2: Verified
...
**Verification Steps**:
- [x] Run test command
- [x] Manual verification step
All criteria met: YES
```
@@ -149,7 +181,7 @@ All criteria met: YES
### Phase D: COMMIT
After all phases pass, commit the changes:
After all phases pass, commit the changes for this task:
```bash
# Stage all modified files
@@ -159,7 +191,7 @@ git add path/to/file1.ts path/to/file2.ts ...
git commit -m "$(cat <<'EOF'
[task.commit.message_template]
Queue-ID: [queue_id]
Solution-ID: [solution_id]
Issue-ID: [issue_id]
Task-ID: [task.id]
EOF
@@ -175,30 +207,37 @@ EOF
**Files**: N files changed
```
### Repeat for Next Task
Continue to next task in `solution.tasks` array until all tasks are complete.
## Step 4: Report Completion
After commit succeeds, report to queue system:
After ALL tasks in the solution are complete, report to queue system:
```bash
ccw issue complete [queue_id] --result '{
ccw issue done <item_id> --result '{
"files_modified": ["path1", "path2"],
"tests_passed": true,
"acceptance_passed": true,
"committed": true,
"commit_hash": "[actual hash]",
"commits": [
{ "task_id": "T1", "hash": "abc123" },
{ "task_id": "T2", "hash": "def456" }
],
"summary": "[What was accomplished]"
}'
```
**If task failed and cannot be fixed:**
**If solution failed and cannot be fixed:**
```bash
ccw issue fail [queue_id] --reason "Phase [X] failed: [details]"
ccw issue done <item_id> --fail --reason "Task [task.id] failed: [details]"
```
## Step 5: Continue to Next Task
## Step 5: Continue to Next Solution
Immediately fetch the next task:
Immediately fetch the next solution:
```bash
ccw issue next
@@ -206,8 +245,8 @@ ccw issue next
**Output progress:**
```
✓ [N/M] Completed: [queue_id] - [task.title]
→ Fetching next task...
✓ [N/M] Completed: [item_id] - [solution.approach]
→ Fetching next solution...
```
**DO NOT STOP.** Return to Step 2 and continue until queue is empty.
@@ -219,12 +258,15 @@ When `ccw issue next` returns `{ "status": "empty" }`:
```markdown
## Issue Queue Execution Complete
**Total Tasks Executed**: N
**Total Solutions Executed**: N
**Total Tasks Executed**: M
**All Commits**:
| # | Queue ID | Task | Commit |
| # | Solution | Task | Commit |
|---|----------|------|--------|
| 1 | Q-001 | Task title | abc123 |
| 2 | Q-002 | Task title | def456 |
| 1 | S-1 | T1 | abc123 |
| 2 | S-1 | T2 | def456 |
| 3 | S-2 | T1 | ghi789 |
**Files Modified**:
- path/to/file1.ts
@@ -237,23 +279,32 @@ When `ccw issue next` returns `{ "status": "empty" }`:
## Execution Rules
1. **Never stop mid-queue** - Continue until queue is empty
2. **One task at a time** - Fully complete (including commit) before moving on
3. **Tests MUST pass** - Do not proceed to commit if tests fail
4. **Commit after each task** - Each task gets its own commit
5. **Self-verify** - All acceptance criteria must pass before commit
6. **Report accurately** - Use ccw issue complete/fail after each task
7. **Handle failures gracefully** - If a task fails, report via ccw issue fail and continue to next
2. **One solution at a time** - Fully complete (all tasks + report) before moving on
3. **Sequential within solution** - Complete each task (including commit) before next
4. **Tests MUST pass** - Do not proceed to commit if tests fail
5. **Commit after each task** - Each task gets its own commit
6. **Self-verify** - All acceptance criteria must pass before commit
7. **Report accurately** - Use `ccw issue done` after each solution
8. **Handle failures gracefully** - If a solution fails, report via `ccw issue done --fail` and continue to next
## Error Handling
| Situation | Action |
|-----------|--------|
| ccw issue next returns empty | All done - output final summary |
| `ccw issue next` returns empty | All done - output final summary |
| Tests fail | Fix code, re-run tests |
| Verification fails | Go back to implement phase |
| Git commit fails | Check staging, retry commit |
| ccw issue complete fails | Log error, continue to next task |
| Unrecoverable error | Call ccw issue fail, continue to next |
| `ccw issue done` fails | Log error, continue to next solution |
| Unrecoverable error | Call `ccw issue done --fail`, continue to next |
## CLI Command Reference
| Command | Purpose |
|---------|---------|
| `ccw issue next` | Fetch next solution from queue |
| `ccw issue done <id>` | Mark solution complete with result |
| `ccw issue done <id> --fail` | Mark solution failed with reason |
## Start Execution
@@ -263,4 +314,4 @@ Begin by running:
ccw issue next
```
Then follow the lifecycle for each task until queue is empty.
Then follow the solution lifecycle for each solution until queue is empty.

View File

@@ -0,0 +1,106 @@
---
description: Plan issue(s) into bound solutions (writes solutions JSONL via ccw issue bind)
argument-hint: "<issue-id>[,<issue-id>,...] [--all-pending] [--batch-size 3]"
---
# Issue Plan (Codex Version)
## Goal
Create executable solution(s) for issue(s) and bind the selected solution to each issue using `ccw issue bind`.
This workflow is **planning + registration** (no implementation): it explores the codebase just enough to produce a high-quality task breakdown that can be executed later (e.g., by `issue-execute.md`).
## Inputs
- **Explicit issues**: comma-separated IDs, e.g. `ISS-123,ISS-124`
- **All pending**: `--all-pending` → plan all issues in `registered` status
- **Batch size**: `--batch-size N` (default `3`) → max issues per batch
## Output Requirements
For each issue:
- Register at least one solution and bind one solution to the issue (updates `.workflow/issues/issues.jsonl` and appends to `.workflow/issues/solutions/{issue-id}.jsonl`).
- Ensure tasks conform to `.claude/workflows/cli-templates/schemas/solution-schema.json`.
- Each task includes quantified `acceptance.criteria` and concrete `acceptance.verification`.
Return a final summary JSON:
```json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": 0 }],
"pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "task_count": 0, "description": "..." }] }],
"conflicts": [{ "file": "...", "issues": ["..."] }]
}
```
## Workflow
### Step 1: Resolve issue list
- If `--all-pending`:
- Run `ccw issue list --status registered --json` and plan all returned issues.
- Else:
- Parse IDs from user input (split by `,`), and ensure each issue exists:
- `ccw issue init <issue-id> --title "Issue <issue-id>"` (safe if already exists)
### Step 2: Load issue details
For each issue ID:
- `ccw issue status <issue-id> --json`
- Extract the issue title/context/labels and any discovery hints (affected files, snippets, etc. if present).
### Step 3: Minimal exploration (evidence-based)
- If issue context names specific files or symbols: open them first.
- Otherwise:
- Use `rg` to locate relevant code paths by keywords from the title/context.
- Read 3+ similar patterns before proposing refactors or API changes.
### Step 4: Draft solutions and tasks (schema-driven)
Default to **one** solution per issue unless there are genuinely different approaches.
Task rules (from schema):
- `id`: `T1`, `T2`, ...
- `action`: one of `Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix`
- `implementation`: step-by-step, executable instructions
- `test.commands`: include at least one command per task when feasible
- `acceptance.criteria`: testable statements
- `acceptance.verification`: concrete steps/commands mapping to criteria
- Prefer small, independently testable tasks; encode dependencies in `depends_on` (a minimal example follows this list).
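A task shaped by these rules might look like the following — the values are illustrative placeholders, not from a real issue:

```typescript
// Minimal task conforming to the rules above (illustrative values)
const task = {
  id: "T1",
  title: "Add null check to user validation",
  scope: "src/auth/",
  action: "Update",
  implementation: [
    "Locate the user lookup in src/auth/validator.ts",
    "Throw NotFoundError when the record is missing",
  ],
  test: { commands: ["npm test -- --filter=auth"] },
  acceptance: {
    criteria: ["Lookup of a missing user raises NotFoundError"],
    verification: ["npm test -- --filter=auth passes"],
  },
  depends_on: [],
};
```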
### Step 5: Register & bind solutions via CLI
Create an import JSON file per solution (NOT JSONL), then bind it:
1. Write a file (example path):
- `.workflow/issues/solutions/_imports/<issue-id>-<timestamp>.json`
2. File contents shape (minimum):
```json
{
"description": "High-level summary",
"approach": "Technical approach",
"tasks": []
}
```
3. Register+bind in one step:
- `ccw issue bind <issue-id> --solution <import-file>`
If you intentionally generated multiple solutions for the same issue:
- Register each via `ccw issue bind <issue-id> <solution-id> --solution <import-file>` (do NOT bind yet).
- Present the alternatives in `pending_selection` and stop for user choice.
- Bind chosen solution with: `ccw issue bind <issue-id> <solution-id>`.
### Step 6: Detect cross-issue file conflicts (best-effort)
Across the issues planned in this run:
- Build a set of touched files from each solution's `modification_points.file` (and/or task `scope` when explicit files are missing).
- If the same file appears in multiple issues, add it to `conflicts` with all involved issue IDs.
- Recommend a safe execution order (sequential) when conflicts exist.
## Done Criteria
- A bound solution exists for each issue unless explicitly deferred for user selection.
- All tasks validate against the solution schema fields (especially acceptance criteria + verification).
- The final summary JSON matches the required shape.

View File

@@ -0,0 +1,225 @@
---
description: Form execution queue from bound solutions (orders solutions, detects conflicts, assigns groups)
argument-hint: "[--issue <id>]"
---
# Issue Queue (Codex Version)
## Goal
Create an ordered execution queue from all bound solutions. Analyze inter-solution file conflicts, calculate semantic priorities, and assign parallel/sequential execution groups.
This workflow is **ordering only** (no execution): it reads bound solutions, detects conflicts, and produces a queue file that `issue-execute.md` can consume.
## Inputs
- **All planned**: Default behavior → queue all issues with `planned` status and bound solutions
- **Specific issue**: `--issue <id>` → queue only that issue's solution
## Output Requirements
**Generate Files (EXACTLY 2):**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with solutions, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry
**Return Summary:**
```json
{
"queue_id": "QUE-YYYYMMDD-HHMMSS",
"total_solutions": 3,
"total_tasks": 12,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": 2 }],
"conflicts_resolved": 1,
"issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```
## Workflow
### Step 1: Generate Queue ID
Generate queue ID ONCE at start, reuse throughout:
```bash
# Format: QUE-YYYYMMDD-HHMMSS (UTC)
QUEUE_ID="QUE-$(date -u +%Y%m%d-%H%M%S)"
```
### Step 2: Load Planned Issues
Get all issues with bound solutions:
```bash
ccw issue list --status planned --json
```
For each issue in the result:
- Extract `id`, `bound_solution_id`, `priority`
- Read solution from `.workflow/issues/solutions/{issue-id}.jsonl`
- Find the bound solution by matching `solution.id === bound_solution_id`
- Collect `files_touched` from all tasks' `modification_points.file`
Build solution list:
```json
[
{
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx",
"task_count": 3,
"files_touched": ["src/auth.ts", "src/utils.ts"],
"priority": "medium"
}
]
```
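A sketch of this resolution step — paths and field names follow the docs above, error handling omitted:

```typescript
// Resolve an issue's bound solution and collect the files it touches
import { readFileSync } from "fs";

function loadBoundSolution(issueId: string, boundSolutionId: string) {
  const path = `.workflow/issues/solutions/${issueId}.jsonl`;
  const solutions = readFileSync(path, "utf8")
    .split("\n").filter(Boolean).map(line => JSON.parse(line));
  const bound = solutions.find(s => s.id === boundSolutionId);
  const files = (bound?.tasks ?? []).flatMap(
    (t: any) => (t.modification_points ?? []).map((m: any) => m.file)
  );
  return { bound, files_touched: [...new Set<string>(files)] };
}
```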
### Step 3: Detect File Conflicts
Build a file → solutions mapping:
```javascript
fileModifications = {
"src/auth.ts": ["SOL-001", "SOL-003"],
"src/api.ts": ["SOL-002"]
}
```
Conflicts exist when a file has multiple solutions. For each conflict:
- Record the file and involved solutions
- Will be resolved in Step 4 (the mapping itself is sketched below)
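A sketch of the inversion, using the solution list shape from Step 2:

```typescript
// Invert files_touched into file → solutions; keep files claimed by 2+ solutions
interface PlannedSolution { solution_id: string; files_touched: string[]; }

function detectConflicts(solutions: PlannedSolution[]): Map<string, string[]> {
  const byFile = new Map<string, string[]>();
  for (const s of solutions) {
    for (const file of s.files_touched) {
      byFile.set(file, [...(byFile.get(file) ?? []), s.solution_id]);
    }
  }
  return new Map([...byFile].filter(([, ids]) => ids.length > 1));
}
```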
### Step 4: Resolve Conflicts & Build DAG
**Resolution Rules (in priority order):**
1. Higher issue priority first: `critical > high > medium > low`
2. Foundation solutions first: fewer dependencies
3. More tasks = higher priority: larger impact
For each file conflict:
- Apply resolution rules to determine order
- Add dependency edge: later solution `depends_on` earlier solution
- Record rationale
**Semantic Priority Formula:**
```
Base: critical=0.9, high=0.7, medium=0.5, low=0.3
Boost: task_count>=5 → +0.1, task_count>=3 → +0.05
Final: clamp(base + boost, 0.0, 1.0)
```
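The formula as code — a direct transcription, with an illustrative function name:

```typescript
// Base score by issue priority, boosted by task count, clamped to [0.0, 1.0]
function semanticPriority(priority: "critical" | "high" | "medium" | "low", taskCount: number): number {
  const base = { critical: 0.9, high: 0.7, medium: 0.5, low: 0.3 }[priority];
  const boost = taskCount >= 5 ? 0.1 : taskCount >= 3 ? 0.05 : 0;
  return Math.min(1, Math.max(0, base + boost));
}

semanticPriority("critical", 5); // → 1.0
semanticPriority("medium", 3);   // → 0.55
```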
### Step 5: Assign Execution Groups
- **Parallel (P1, P2, ...)**: Solutions with NO file overlaps between them
- **Sequential (S1, S2, ...)**: Solutions that share files must run in order
Group assignment:
1. Start with all solutions in potential parallel group
2. For each file conflict, move later solution to sequential group
3. Assign group IDs: P1 for first parallel batch, S2 for first sequential, etc. (see the sketch below)
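A greedy sketch of the assignment — it assumes each conflict's solution list is already in the resolved execution order from Step 4:

```typescript
// Later solutions in each conflict leave the P1 batch and get sequential groups
function assignGroups(orderedIds: string[], conflicts: Map<string, string[]>) {
  const demoted = new Set<string>();
  for (const ids of conflicts.values()) {
    ids.slice(1).forEach(id => demoted.add(id)); // keep the earliest, demote the rest
  }
  const groups: Record<string, string> = {};
  let seq = 2; // S2 is the first sequential group after P1
  for (const id of orderedIds) {
    groups[id] = demoted.has(id) ? `S${seq++}` : "P1";
  }
  return groups;
}
```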
### Step 6: Generate Queue Files
**Queue file structure** (`.workflow/issues/queues/{QUEUE_ID}.json`):
```json
{
"id": "QUE-20251228-120000",
"status": "active",
"issue_ids": ["ISS-001", "ISS-002"],
"solutions": [
{
"item_id": "S-1",
"issue_id": "ISS-001",
"solution_id": "SOL-001",
"status": "pending",
"execution_order": 1,
"execution_group": "P1",
"depends_on": [],
"semantic_priority": 0.8,
"assigned_executor": "codex",
"files_touched": ["src/auth.ts"],
"task_count": 3
}
],
"conflicts": [
{
"type": "file_conflict",
"file": "src/auth.ts",
"solutions": ["S-1", "S-3"],
"resolution": "sequential",
"resolution_order": ["S-1", "S-3"],
"rationale": "S-1 creates auth module, S-3 extends it"
}
],
"execution_groups": [
{ "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"], "solution_count": 2 },
{ "id": "S2", "type": "sequential", "solutions": ["S-3"], "solution_count": 1 }
]
}
```
**Update index** (`.workflow/issues/queues/index.json`):
```json
{
"active_queue_id": "QUE-20251228-120000",
"queues": [
{
"id": "QUE-20251228-120000",
"status": "active",
"issue_ids": ["ISS-001", "ISS-002"],
"total_solutions": 3,
"completed_solutions": 0,
"created_at": "2025-12-28T12:00:00Z"
}
]
}
```
### Step 7: Update Issue Statuses
For each queued issue, update status to `queued`:
```bash
ccw issue update <issue-id> --status queued
```
## Queue Item ID Format
- Solution items: `S-1`, `S-2`, `S-3`, ...
- Sequential numbering starting from 1
## Done Criteria
- [ ] Exactly 2 files generated: queue JSON + index update
- [ ] Queue has valid DAG (no circular dependencies)
- [ ] All file conflicts resolved with rationale
- [ ] Semantic priority calculated for each solution (0.0-1.0)
- [ ] Execution groups assigned (P* for parallel, S* for sequential)
- [ ] Issue statuses updated to `queued`
- [ ] Summary JSON returned with correct shape
## Validation Rules
1. **No cycles**: If resolution creates a cycle, abort and report
2. **Parallel safety**: Solutions in same P* group must have NO file overlaps
3. **Sequential order**: Solutions in S* group must be in correct dependency order
4. **Single queue ID**: Use the same queue ID throughout (generated in Step 1)
## Error Handling
| Situation | Action |
|-----------|--------|
| No planned issues | Return empty queue summary |
| Circular dependency detected | Abort, report cycle details |
| Missing solution file | Skip issue, log warning |
| Index file missing | Create new index |
## Start Execution
Begin by listing planned issues:
```bash
ccw issue list --status planned --json
```
Then follow the workflow to generate the queue.

View File

@@ -5,6 +5,52 @@ All notable changes to Claude Code Workflow (CCW) will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [6.3.11] - 2025-12-28
### 🔧 Issue System Enhancements | Issue系统增强
#### CLI Improvements | CLI改进
- **Added**: `ccw issue update <id> --status <status>` command for pure field updates
- **Added**: Support for `--priority`, `--title`, `--description` in update command
- **Added**: Auto-timestamp setting based on status (planned_at, queued_at, completed_at)
#### Issue Plan Command
- **Changed**: Agent execution from sequential to parallel (max 10 concurrent)
- **Added**: Multi-solution user selection prompt with clear notification
- **Added**: Explicit binding check (`solutions.length === 1`) before auto-bind
#### Issue Queue Command
- **Fixed**: Queue ID generation moved from agent to command (avoid duplicate IDs)
- **Fixed**: Strict output file control (exactly 2 files per execution)
- **Added**: Clear documentation for `update` vs `done`/`queue add` usage
#### Discovery System
- **Enhanced**: Discovery progress reading with new schema support
- **Enhanced**: Discovery index reading and issue exporting
## [6.3.9] - 2025-12-27
### 🔧 Issue System Consistency Fixes
#### Schema Unification
- **Upgraded**: `solution-schema.json` to Rich Plan model with full lifecycle fields
- **Added**: `test`, `regression`, `commit`, `lifecycle_status` objects to task schema
- **Changed**: `acceptance` from string[] to object `{criteria[], verification[]}`
- **Added**: `analysis` and `score` fields for multi-solution evaluation
- **Removed**: Redundant `issue-task-jsonl-schema.json` and `solutions-jsonl-schema.json`
- **Fixed**: `queue-schema.json` field naming (`queue_id` → `item_id`)
#### Agent Updates
- **Added**: Multi-solution generation support based on complexity
- **Added**: Search tool fallback chain (ACE → smart_search → Grep → rg → Glob)
- **Added**: `lifecycle_requirements` propagation from issue to tasks
- **Added**: Priority mapping formula (1-5 → 0.0-1.0 semantic priority)
- **Fixed**: Task decomposition to match Rich Plan schema
#### Type Safety
- **Added**: `QueueConflict` and `ExecutionGroup` interfaces to `issue.ts`
- **Fixed**: `conflicts` array typing (from `any[]` to `QueueConflict[]`)
## [6.2.0] - 2025-12-21
### 🎯 Native CodexLens & Dashboard Revolution

View File

@@ -277,12 +277,14 @@ export function run(argv: string[]): void {
.option('--priority <n>', 'Task priority (1-5)')
.option('--format <fmt>', 'Output format: json, markdown')
.option('--json', 'Output as JSON')
.option('--ids', 'List only IDs (one per line, for scripting)')
.option('--force', 'Force operation')
// New options for solution/queue management
.option('--solution <path>', 'Solution JSON file path')
.option('--solution-id <id>', 'Solution ID')
.option('--result <json>', 'Execution result JSON')
.option('--reason <text>', 'Failure reason')
.option('--fail', 'Mark task as failed')
.action((subcommand, args, options) => issueCommand(subcommand, args, options));
program.parse(argv);

File diff suppressed because it is too large

View File

@@ -47,7 +47,8 @@ const MODULE_CSS_FILES = [
'28-mcp-manager.css',
'29-help.css',
'30-core-memory.css',
'31-api-settings.css'
'31-api-settings.css',
'34-discovery.css'
];
const MODULE_FILES = [
@@ -97,6 +98,8 @@ const MODULE_FILES = [
'views/rules-manager.js',
'views/claude-manager.js',
'views/api-settings.js',
'views/issue-manager.js',
'views/issue-discovery.js',
'views/help.js',
'main.js'
];

View File

@@ -110,6 +110,34 @@ interface SessionReviewData {
findings: Array<Finding & { dimension: string }>;
}
interface ProjectGuidelines {
conventions: {
coding_style: string[];
naming_patterns: string[];
file_structure: string[];
documentation: string[];
};
constraints: {
architecture: string[];
tech_stack: string[];
performance: string[];
security: string[];
};
quality_rules: Array<{ rule: string; scope: string; enforced_by?: string }>;
learnings: Array<{
date: string;
session_id?: string;
insight: string;
context?: string;
category?: string;
}>;
_metadata?: {
created_at: string;
updated_at?: string;
version: string;
};
}
interface ProjectOverview {
projectName: string;
description: string;
@@ -144,6 +172,7 @@ interface ProjectOverview {
analysis_timestamp: string | null;
analysis_mode: string;
};
guidelines: ProjectGuidelines | null;
}
/**
@@ -156,11 +185,13 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
// Initialize cache manager
const cache = createDashboardCache(workflowDir);
// Prepare paths to watch for changes
// Prepare paths to watch for changes (includes both new dual files and legacy)
const watchPaths = [
join(workflowDir, 'active'),
join(workflowDir, 'archives'),
join(workflowDir, 'project.json'),
join(workflowDir, 'project-tech.json'),
join(workflowDir, 'project-guidelines.json'),
join(workflowDir, 'project.json'), // Legacy support
...sessions.active.map(s => s.path),
...sessions.archived.map(s => s.path)
];
@@ -516,12 +547,19 @@ function sortTaskIds(a: string, b: string): number {
}
/**
* Load project overview from project.json
* Load project overview from project-tech.json and project-guidelines.json
* Supports dual file structure with backward compatibility for legacy project.json
* @param workflowDir - Path to .workflow directory
* @returns Project overview data or null if not found
*/
function loadProjectOverview(workflowDir: string): ProjectOverview | null {
const projectFile = join(workflowDir, 'project.json');
const techFile = join(workflowDir, 'project-tech.json');
const guidelinesFile = join(workflowDir, 'project-guidelines.json');
const legacyFile = join(workflowDir, 'project.json');
// Check for new dual file structure first, fallback to legacy
const useLegacy = !existsSync(techFile) && existsSync(legacyFile);
const projectFile = useLegacy ? legacyFile : techFile;
if (!existsSync(projectFile)) {
console.log(`Project file not found at: ${projectFile}`);
@@ -532,15 +570,59 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
const fileContent = readFileSync(projectFile, 'utf8');
const projectData = JSON.parse(fileContent) as Record<string, unknown>;
console.log(`Successfully loaded project overview: ${projectData.project_name || 'Unknown'}`);
console.log(`Successfully loaded project overview: ${projectData.project_name || 'Unknown'} (${useLegacy ? 'legacy' : 'tech'})`);
// Parse tech data (compatible with both legacy and new structure)
const overview = projectData.overview as Record<string, unknown> | undefined;
const technologyStack = overview?.technology_stack as Record<string, unknown[]> | undefined;
const architecture = overview?.architecture as Record<string, unknown> | undefined;
const developmentIndex = projectData.development_index as Record<string, unknown[]> | undefined;
const statistics = projectData.statistics as Record<string, unknown> | undefined;
const technologyAnalysis = projectData.technology_analysis as Record<string, unknown> | undefined;
const developmentStatus = projectData.development_status as Record<string, unknown> | undefined;
// Support both old and new schema field names
const technologyStack = (overview?.technology_stack || technologyAnalysis?.technology_stack) as Record<string, unknown[]> | undefined;
const architecture = (overview?.architecture || technologyAnalysis?.architecture) as Record<string, unknown> | undefined;
const developmentIndex = (projectData.development_index || developmentStatus?.development_index) as Record<string, unknown[]> | undefined;
const statistics = (projectData.statistics || developmentStatus?.statistics) as Record<string, unknown> | undefined;
const metadata = projectData._metadata as Record<string, unknown> | undefined;
// Load guidelines from separate file if exists
let guidelines: ProjectGuidelines | null = null;
if (existsSync(guidelinesFile)) {
try {
const guidelinesContent = readFileSync(guidelinesFile, 'utf8');
const guidelinesData = JSON.parse(guidelinesContent) as Record<string, unknown>;
const conventions = guidelinesData.conventions as Record<string, string[]> | undefined;
const constraints = guidelinesData.constraints as Record<string, string[]> | undefined;
guidelines = {
conventions: {
coding_style: conventions?.coding_style || [],
naming_patterns: conventions?.naming_patterns || [],
file_structure: conventions?.file_structure || [],
documentation: conventions?.documentation || []
},
constraints: {
architecture: constraints?.architecture || [],
tech_stack: constraints?.tech_stack || [],
performance: constraints?.performance || [],
security: constraints?.security || []
},
quality_rules: (guidelinesData.quality_rules as Array<{ rule: string; scope: string; enforced_by?: string }>) || [],
learnings: (guidelinesData.learnings as Array<{
date: string;
session_id?: string;
insight: string;
context?: string;
category?: string;
}>) || [],
_metadata: guidelinesData._metadata as ProjectGuidelines['_metadata'] | undefined
};
console.log(`Successfully loaded project guidelines`);
} catch (guidelinesErr) {
console.error(`Failed to parse project-guidelines.json:`, (guidelinesErr as Error).message);
}
}
return {
projectName: (projectData.project_name as string) || 'Unknown',
description: (overview?.description as string) || '',
@@ -574,10 +656,11 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
initialized_by: (metadata?.initialized_by as string) || 'unknown',
analysis_timestamp: (metadata?.analysis_timestamp as string) || null,
analysis_mode: (metadata?.analysis_mode as string) || 'unknown'
}
},
guidelines
};
} catch (err) {
console.error(`Failed to parse project.json at ${projectFile}:`, (err as Error).message);
console.error(`Failed to parse project file at ${projectFile}:`, (err as Error).message);
console.error('Error stack:', (err as Error).stack);
return null;
}

View File

@@ -0,0 +1,607 @@
// @ts-nocheck
/**
* Discovery Routes Module
*
* Storage Structure:
* .workflow/issues/discoveries/
* ├── index.json # Discovery session index
* └── {discovery-id}/
* ├── discovery-state.json # State machine
* ├── discovery-progress.json # Real-time progress
* ├── perspectives/ # Per-perspective results
* │ ├── bug.json
* │ └── ...
* ├── external-research.json # Exa research results
* ├── discovery-issues.jsonl # Generated candidate issues
* └── reports/
*
* API Endpoints:
* - GET /api/discoveries - List all discovery sessions
* - GET /api/discoveries/:id - Get discovery session detail
* - GET /api/discoveries/:id/findings - Get all findings
* - GET /api/discoveries/:id/progress - Get real-time progress
* - POST /api/discoveries/:id/export - Export findings as issues
* - PATCH /api/discoveries/:id/findings/:fid - Update finding status
* - DELETE /api/discoveries/:id - Delete discovery session
*/
import type { IncomingMessage, ServerResponse } from 'http';
import { readFileSync, existsSync, writeFileSync, mkdirSync, readdirSync, rmSync } from 'fs';
import { join } from 'path';
export interface RouteContext {
pathname: string;
url: URL;
req: IncomingMessage;
res: ServerResponse;
initialPath: string;
handlePostRequest: (req: IncomingMessage, res: ServerResponse, handler: (body: unknown) => Promise<any>) => void;
broadcastToClients: (data: unknown) => void;
}
// ========== Helper Functions ==========
function getDiscoveriesDir(projectPath: string): string {
return join(projectPath, '.workflow', 'issues', 'discoveries');
}
function readDiscoveryIndex(discoveriesDir: string): { discoveries: any[]; total: number } {
const indexPath = join(discoveriesDir, 'index.json');
// Try to read index.json first
if (existsSync(indexPath)) {
try {
return JSON.parse(readFileSync(indexPath, 'utf8'));
} catch {
// Fall through to scan
}
}
// Fallback: scan directory for discovery folders
if (!existsSync(discoveriesDir)) {
return { discoveries: [], total: 0 };
}
try {
const entries = readdirSync(discoveriesDir, { withFileTypes: true });
const discoveries: any[] = [];
for (const entry of entries) {
if (entry.isDirectory() && entry.name.startsWith('DSC-')) {
const statePath = join(discoveriesDir, entry.name, 'discovery-state.json');
if (existsSync(statePath)) {
try {
const state = JSON.parse(readFileSync(statePath, 'utf8'));
discoveries.push({
discovery_id: entry.name,
target_pattern: state.target_pattern,
perspectives: state.metadata?.perspectives || [],
created_at: state.metadata?.created_at,
completed_at: state.completed_at
});
} catch {
// Skip invalid entries
}
}
}
}
// Sort by creation time descending
discoveries.sort((a, b) => {
const timeA = new Date(a.created_at || 0).getTime();
const timeB = new Date(b.created_at || 0).getTime();
return timeB - timeA;
});
return { discoveries, total: discoveries.length };
} catch {
return { discoveries: [], total: 0 };
}
}
function writeDiscoveryIndex(discoveriesDir: string, index: any) {
if (!existsSync(discoveriesDir)) {
mkdirSync(discoveriesDir, { recursive: true });
}
writeFileSync(join(discoveriesDir, 'index.json'), JSON.stringify(index, null, 2));
}
function readDiscoveryState(discoveriesDir: string, discoveryId: string): any | null {
const statePath = join(discoveriesDir, discoveryId, 'discovery-state.json');
if (!existsSync(statePath)) return null;
try {
return JSON.parse(readFileSync(statePath, 'utf8'));
} catch {
return null;
}
}
function readDiscoveryProgress(discoveriesDir: string, discoveryId: string): any | null {
// Try merged state first (new schema)
const statePath = join(discoveriesDir, discoveryId, 'discovery-state.json');
if (existsSync(statePath)) {
try {
const state = JSON.parse(readFileSync(statePath, 'utf8'));
// New merged schema: perspectives array + results object
if (state.perspectives && Array.isArray(state.perspectives)) {
const completed = state.perspectives.filter((p: any) => p.status === 'completed').length;
const total = state.perspectives.length;
return {
discovery_id: discoveryId,
phase: state.phase,
last_update: state.updated_at || state.created_at,
progress: {
perspective_analysis: {
total,
completed,
in_progress: state.perspectives.filter((p: any) => p.status === 'in_progress').length,
percent_complete: total > 0 ? Math.round((completed / total) * 100) : 0
},
external_research: state.external_research || { enabled: false, completed: false },
aggregation: { completed: state.phase === 'aggregation' || state.phase === 'complete' },
issue_generation: { completed: state.phase === 'complete', issues_count: state.results?.issues_generated || 0 }
},
agent_status: state.perspectives
};
}
// Old schema: metadata.perspectives (backward compat)
if (state.metadata?.perspectives) {
return {
discovery_id: discoveryId,
phase: state.phase,
progress: { perspective_analysis: { total: state.metadata.perspectives.length, completed: state.perspectives_completed?.length || 0 } }
};
}
} catch {
// Fall through
}
}
// Fallback: try legacy progress file
const progressPath = join(discoveriesDir, discoveryId, 'discovery-progress.json');
if (existsSync(progressPath)) {
try { return JSON.parse(readFileSync(progressPath, 'utf8')); } catch { return null; }
}
return null;
}
function readPerspectiveFindings(discoveriesDir: string, discoveryId: string): any[] {
const perspectivesDir = join(discoveriesDir, discoveryId, 'perspectives');
if (!existsSync(perspectivesDir)) return [];
const allFindings: any[] = [];
const files = readdirSync(perspectivesDir).filter(f => f.endsWith('.json'));
for (const file of files) {
try {
const content = JSON.parse(readFileSync(join(perspectivesDir, file), 'utf8'));
const perspective = file.replace('.json', '');
if (content.findings && Array.isArray(content.findings)) {
allFindings.push({
perspective,
summary: content.summary || {},
findings: content.findings.map((f: any) => ({
...f,
perspective: f.perspective || perspective
}))
});
}
} catch {
// Skip invalid files
}
}
return allFindings;
}
function readDiscoveryIssues(discoveriesDir: string, discoveryId: string): any[] {
const issuesPath = join(discoveriesDir, discoveryId, 'discovery-issues.jsonl');
if (!existsSync(issuesPath)) return [];
try {
const content = readFileSync(issuesPath, 'utf8');
return content.split('\n').filter(line => line.trim()).map(line => JSON.parse(line));
} catch {
return [];
}
}
function writeDiscoveryIssues(discoveriesDir: string, discoveryId: string, issues: any[]) {
const issuesPath = join(discoveriesDir, discoveryId, 'discovery-issues.jsonl');
writeFileSync(issuesPath, issues.map(i => JSON.stringify(i)).join('\n'));
}
function flattenFindings(perspectiveResults: any[]): any[] {
const allFindings: any[] = [];
for (const result of perspectiveResults) {
if (result.findings) {
allFindings.push(...result.findings);
}
}
return allFindings;
}
function appendToIssuesJsonl(projectPath: string, issues: any[]): { added: number; skipped: number; skippedIds: string[] } {
const issuesDir = join(projectPath, '.workflow', 'issues');
const issuesPath = join(issuesDir, 'issues.jsonl');
if (!existsSync(issuesDir)) {
mkdirSync(issuesDir, { recursive: true });
}
// Read existing issues
let existingIssues: any[] = [];
if (existsSync(issuesPath)) {
try {
const content = readFileSync(issuesPath, 'utf8');
existingIssues = content.split('\n').filter(line => line.trim()).map(line => JSON.parse(line));
} catch {
// Start fresh
}
}
// Build set of existing IDs and source_finding combinations for deduplication
const existingIds = new Set(existingIssues.map(i => i.id));
const existingSourceFindings = new Set(
existingIssues
.filter(i => i.source === 'discovery' && i.source_finding_id)
.map(i => `${i.source_discovery_id}:${i.source_finding_id}`)
);
// Convert and filter duplicates
const skippedIds: string[] = [];
const newIssues: any[] = [];
for (const di of issues) {
// Check for duplicate by ID
if (existingIds.has(di.id)) {
skippedIds.push(di.id);
continue;
}
// Check for duplicate by source_discovery_id + source_finding_id
const sourceKey = `${di.source_discovery_id}:${di.source_finding_id}`;
if (di.source_finding_id && existingSourceFindings.has(sourceKey)) {
skippedIds.push(di.id);
continue;
}
newIssues.push({
id: di.id,
title: di.title,
status: 'registered',
priority: di.priority || 3,
context: di.context || di.description || '',
source: 'discovery',
source_discovery_id: di.source_discovery_id,
source_finding_id: di.source_finding_id,
perspective: di.perspective,
file: di.file,
line: di.line,
labels: di.labels || [],
created_at: new Date().toISOString(),
updated_at: new Date().toISOString()
});
}
if (newIssues.length > 0) {
const allIssues = [...existingIssues, ...newIssues];
writeFileSync(issuesPath, allIssues.map(i => JSON.stringify(i)).join('\n'));
}
return { added: newIssues.length, skipped: skippedIds.length, skippedIds };
}
// ========== Route Handler ==========
export async function handleDiscoveryRoutes(ctx: RouteContext): Promise<boolean> {
const { pathname, url, req, res, initialPath, handlePostRequest } = ctx;
const projectPath = url.searchParams.get('path') || initialPath;
const discoveriesDir = getDiscoveriesDir(projectPath);
// GET /api/discoveries - List all discovery sessions
if (pathname === '/api/discoveries' && req.method === 'GET') {
const index = readDiscoveryIndex(discoveriesDir);
// Enrich with state info
const enrichedDiscoveries = index.discoveries.map((d: any) => {
const state = readDiscoveryState(discoveriesDir, d.discovery_id);
const progress = readDiscoveryProgress(discoveriesDir, d.discovery_id);
return {
...d,
phase: state?.phase || 'unknown',
total_findings: state?.total_findings || 0,
issues_generated: state?.issues_generated || 0,
priority_distribution: state?.priority_distribution || {},
progress: progress?.progress || null
};
});
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
discoveries: enrichedDiscoveries,
total: enrichedDiscoveries.length,
_metadata: { updated_at: new Date().toISOString() }
}));
return true;
}
// GET /api/discoveries/:id - Get discovery detail
const detailMatch = pathname.match(/^\/api\/discoveries\/([^/]+)$/);
if (detailMatch && req.method === 'GET') {
const discoveryId = detailMatch[1];
const state = readDiscoveryState(discoveriesDir, discoveryId);
if (!state) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Discovery ${discoveryId} not found` }));
return true;
}
const progress = readDiscoveryProgress(discoveriesDir, discoveryId);
const perspectiveResults = readPerspectiveFindings(discoveriesDir, discoveryId);
const discoveryIssues = readDiscoveryIssues(discoveriesDir, discoveryId);
// Read external research if exists
let externalResearch = null;
const externalPath = join(discoveriesDir, discoveryId, 'external-research.json');
if (existsSync(externalPath)) {
try {
externalResearch = JSON.parse(readFileSync(externalPath, 'utf8'));
} catch {
// Ignore
}
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
...state,
progress: progress?.progress || null,
perspectives: perspectiveResults,
external_research: externalResearch,
discovery_issues: discoveryIssues
}));
return true;
}
// GET /api/discoveries/:id/findings - Get all findings
const findingsMatch = pathname.match(/^\/api\/discoveries\/([^/]+)\/findings$/);
if (findingsMatch && req.method === 'GET') {
const discoveryId = findingsMatch[1];
if (!existsSync(join(discoveriesDir, discoveryId))) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Discovery ${discoveryId} not found` }));
return true;
}
const perspectiveResults = readPerspectiveFindings(discoveriesDir, discoveryId);
const allFindings = flattenFindings(perspectiveResults);
// Support filtering
const perspectiveFilter = url.searchParams.get('perspective');
const priorityFilter = url.searchParams.get('priority');
let filtered = allFindings;
if (perspectiveFilter) {
filtered = filtered.filter(f => f.perspective === perspectiveFilter);
}
if (priorityFilter) {
filtered = filtered.filter(f => f.priority === priorityFilter);
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
findings: filtered,
total: filtered.length,
perspectives: [...new Set(allFindings.map(f => f.perspective))],
_metadata: { discovery_id: discoveryId }
}));
return true;
}
// GET /api/discoveries/:id/progress - Get real-time progress
const progressMatch = pathname.match(/^\/api\/discoveries\/([^/]+)\/progress$/);
if (progressMatch && req.method === 'GET') {
const discoveryId = progressMatch[1];
const progress = readDiscoveryProgress(discoveriesDir, discoveryId);
if (!progress) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Progress for ${discoveryId} not found` }));
return true;
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(progress));
return true;
}
// POST /api/discoveries/:id/export - Export findings as issues
const exportMatch = pathname.match(/^\/api\/discoveries\/([^/]+)\/export$/);
if (exportMatch && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const discoveryId = exportMatch[1];
const { finding_ids, export_all } = body as { finding_ids?: string[]; export_all?: boolean };
if (!existsSync(join(discoveriesDir, discoveryId))) {
return { error: `Discovery ${discoveryId} not found` };
}
const perspectiveResults = readPerspectiveFindings(discoveriesDir, discoveryId);
const allFindings = flattenFindings(perspectiveResults);
let toExport: any[];
if (export_all) {
toExport = allFindings;
} else if (finding_ids && finding_ids.length > 0) {
toExport = allFindings.filter(f => finding_ids.includes(f.id));
} else {
return { error: 'Either finding_ids or export_all required' };
}
if (toExport.length === 0) {
return { error: 'No findings to export' };
}
// Convert findings to issue format
const issuesToExport = toExport.map((f, idx) => {
const suggestedIssue = f.suggested_issue || {};
return {
id: `ISS-${Date.now()}-${idx}`,
title: suggestedIssue.title || f.title,
priority: suggestedIssue.priority || 3,
context: f.description || '',
source: 'discovery',
source_discovery_id: discoveryId,
source_finding_id: f.id, // Track original finding ID for deduplication
perspective: f.perspective,
file: f.file,
line: f.line,
labels: suggestedIssue.labels || [f.perspective]
};
});
// Append to main issues.jsonl (with deduplication)
const result = appendToIssuesJsonl(projectPath, issuesToExport);
// Mark exported findings in perspective files
if (result.added > 0) {
const exportedFindingIds = new Set(
issuesToExport
.filter((_, idx) => !result.skippedIds.includes(issuesToExport[idx].id))
.map(i => i.source_finding_id)
);
// Update each perspective file to mark findings as exported
const perspectivesDir = join(discoveriesDir, discoveryId, 'perspectives');
if (existsSync(perspectivesDir)) {
const files = readdirSync(perspectivesDir).filter(f => f.endsWith('.json'));
for (const file of files) {
const filePath = join(perspectivesDir, file);
try {
const content = JSON.parse(readFileSync(filePath, 'utf8'));
if (content.findings) {
let modified = false;
for (const finding of content.findings) {
if (exportedFindingIds.has(finding.id) && !finding.exported) {
finding.exported = true;
finding.exported_at = new Date().toISOString();
modified = true;
}
}
if (modified) {
writeFileSync(filePath, JSON.stringify(content, null, 2));
}
}
} catch {
// Skip invalid files
}
}
}
}
// Update discovery state
const state = readDiscoveryState(discoveriesDir, discoveryId);
if (state) {
state.issues_generated = (state.issues_generated || 0) + result.added;
writeFileSync(
join(discoveriesDir, discoveryId, 'discovery-state.json'),
JSON.stringify(state, null, 2)
);
}
return {
success: true,
exported_count: result.added,
skipped_count: result.skipped,
skipped_ids: result.skippedIds,
message: result.skipped > 0
? `Exported ${result.added} issues, skipped ${result.skipped} duplicates`
: `Exported ${result.added} issues`
};
});
return true;
}
// PATCH /api/discoveries/:id/findings/:fid - Update finding status
const updateFindingMatch = pathname.match(/^\/api\/discoveries\/([^/]+)\/findings\/([^/]+)$/);
if (updateFindingMatch && req.method === 'PATCH') {
handlePostRequest(req, res, async (body: any) => {
const [, discoveryId, findingId] = updateFindingMatch;
const { status, dismissed } = body as { status?: string; dismissed?: boolean };
const perspectivesDir = join(discoveriesDir, discoveryId, 'perspectives');
if (!existsSync(perspectivesDir)) {
return { error: `Discovery ${discoveryId} not found` };
}
// Find and update the finding
const files = readdirSync(perspectivesDir).filter(f => f.endsWith('.json'));
let updated = false;
for (const file of files) {
const filePath = join(perspectivesDir, file);
try {
const content = JSON.parse(readFileSync(filePath, 'utf8'));
if (content.findings) {
const findingIndex = content.findings.findIndex((f: any) => f.id === findingId);
if (findingIndex !== -1) {
if (status !== undefined) {
content.findings[findingIndex].status = status;
}
if (dismissed !== undefined) {
content.findings[findingIndex].dismissed = dismissed;
}
content.findings[findingIndex].updated_at = new Date().toISOString();
writeFileSync(filePath, JSON.stringify(content, null, 2));
updated = true;
break;
}
}
} catch {
// Skip invalid files
}
}
if (!updated) {
return { error: `Finding ${findingId} not found` };
}
return { success: true, finding_id: findingId };
});
return true;
}
// DELETE /api/discoveries/:id - Delete discovery session
const deleteMatch = pathname.match(/^\/api\/discoveries\/([^/]+)$/);
if (deleteMatch && req.method === 'DELETE') {
const discoveryId = deleteMatch[1];
const discoveryPath = join(discoveriesDir, discoveryId);
if (!existsSync(discoveryPath)) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Discovery ${discoveryId} not found` }));
return true;
}
try {
// Remove directory
rmSync(discoveryPath, { recursive: true, force: true });
// Update index
const index = readDiscoveryIndex(discoveriesDir);
index.discoveries = index.discoveries.filter((d: any) => d.discovery_id !== discoveryId);
index.total = index.discoveries.length;
writeDiscoveryIndex(discoveriesDir, index);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, deleted: discoveryId }));
} catch (err) {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Failed to delete discovery' }));
}
return true;
}
// Not handled
return false;
}

View File

@@ -5,7 +5,9 @@
* Storage Structure:
* .workflow/issues/
* ├── issues.jsonl # All issues (one per line)
* ├── queue.json # Execution queue
* ├── queues/ # Queue history directory
* │ ├── index.json # Queue index (active + history)
* │ └── {queue-id}.json # Individual queue files
* └── solutions/
* ├── {issue-id}.jsonl # Solutions for issue (one per line)
* └── ...
@@ -102,12 +104,12 @@ function readQueue(issuesDir: string) {
}
}
return { queue: [], conflicts: [], execution_groups: [], _metadata: { version: '1.0', total_tasks: 0 } };
return { tasks: [], conflicts: [], execution_groups: [], _metadata: { version: '1.0', total_tasks: 0 } };
}
function writeQueue(issuesDir: string, queue: any) {
if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
queue._metadata = { ...queue._metadata, updated_at: new Date().toISOString(), total_tasks: queue.queue?.length || 0 };
queue._metadata = { ...queue._metadata, updated_at: new Date().toISOString(), total_tasks: queue.tasks?.length || 0 };
// Check if using new multi-queue structure
const queuesDir = join(issuesDir, 'queues');
@@ -123,8 +125,8 @@ function writeQueue(issuesDir: string, queue: any) {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
const queueEntry = index.queues?.find((q: any) => q.id === queue.id);
if (queueEntry) {
queueEntry.total_tasks = queue.queue?.length || 0;
queueEntry.completed_tasks = queue.queue?.filter((i: any) => i.status === 'completed').length || 0;
queueEntry.total_tasks = queue.tasks?.length || 0;
queueEntry.completed_tasks = queue.tasks?.filter((i: any) => i.status === 'completed').length || 0;
writeFileSync(indexPath, JSON.stringify(index, null, 2));
}
} catch {
@@ -151,15 +153,29 @@ function getIssueDetail(issuesDir: string, issueId: string) {
}
function enrichIssues(issues: any[], issuesDir: string) {
return issues.map(issue => ({
...issue,
solution_count: readSolutionsJsonl(issuesDir, issue.id).length
}));
return issues.map(issue => {
const solutions = readSolutionsJsonl(issuesDir, issue.id);
let taskCount = 0;
// Get task count from bound solution
if (issue.bound_solution_id) {
const boundSol = solutions.find(s => s.id === issue.bound_solution_id);
if (boundSol?.tasks) {
taskCount = boundSol.tasks.length;
}
}
return {
...issue,
solution_count: solutions.length,
task_count: taskCount
};
});
}
function groupQueueByExecutionGroup(queue: any) {
const groups: { [key: string]: any[] } = {};
for (const item of queue.queue || []) {
for (const item of queue.tasks || []) {
const groupId = item.execution_group || 'ungrouped';
if (!groups[groupId]) groups[groupId] = [];
groups[groupId].push(item);
@@ -171,7 +187,7 @@ function groupQueueByExecutionGroup(queue: any) {
id,
type: id.startsWith('P') ? 'parallel' : id.startsWith('S') ? 'sequential' : 'unknown',
task_count: items.length,
tasks: items.map(i => i.queue_id)
tasks: items.map(i => i.item_id)
})).sort((a, b) => {
const aFirst = groups[a.id]?.[0]?.execution_order || 0;
const bFirst = groups[b.id]?.[0]?.execution_order || 0;
@@ -220,6 +236,82 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// GET /api/queue/history - Get queue history (all queues from index)
if (pathname === '/api/queue/history' && req.method === 'GET') {
const queuesDir = join(issuesDir, 'queues');
const indexPath = join(queuesDir, 'index.json');
if (!existsSync(indexPath)) {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ queues: [], active_queue_id: null }));
return true;
}
try {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(index));
} catch {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ queues: [], active_queue_id: null }));
}
return true;
}
// GET /api/queue/:id - Get specific queue by ID
const queueDetailMatch = pathname.match(/^\/api\/queue\/([^/]+)$/);
if (queueDetailMatch && req.method === 'GET' && queueDetailMatch[1] !== 'history' && queueDetailMatch[1] !== 'reorder') {
const queueId = queueDetailMatch[1];
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
if (!existsSync(queueFilePath)) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Queue ${queueId} not found` }));
return true;
}
try {
const queue = JSON.parse(readFileSync(queueFilePath, 'utf8'));
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(groupQueueByExecutionGroup(queue)));
} catch {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Failed to read queue' }));
}
return true;
}
// POST /api/queue/switch - Switch active queue
if (pathname === '/api/queue/switch' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const { queueId } = body;
if (!queueId) return { error: 'queueId required' };
const queuesDir = join(issuesDir, 'queues');
const indexPath = join(queuesDir, 'index.json');
const queueFilePath = join(queuesDir, `${queueId}.json`);
if (!existsSync(queueFilePath)) {
return { error: `Queue ${queueId} not found` };
}
try {
const index = existsSync(indexPath)
? JSON.parse(readFileSync(indexPath, 'utf8'))
: { active_queue_id: null, queues: [] };
index.active_queue_id = queueId;
writeFileSync(indexPath, JSON.stringify(index, null, 2));
return { success: true, active_queue_id: queueId };
} catch (err) {
return { error: 'Failed to switch queue' };
}
});
return true;
}
// POST /api/queue/reorder - Reorder queue items
if (pathname === '/api/queue/reorder' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
@@ -229,20 +321,20 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
}
const queue = readQueue(issuesDir);
const groupItems = queue.queue.filter((item: any) => item.execution_group === groupId);
const otherItems = queue.queue.filter((item: any) => item.execution_group !== groupId);
const groupItems = queue.tasks.filter((item: any) => item.execution_group === groupId);
const otherItems = queue.tasks.filter((item: any) => item.execution_group !== groupId);
if (groupItems.length === 0) return { error: `No items in group ${groupId}` };
const groupQueueIds = new Set(groupItems.map((i: any) => i.queue_id));
if (groupQueueIds.size !== new Set(newOrder).size) {
const groupItemIds = new Set(groupItems.map((i: any) => i.item_id));
if (groupItemIds.size !== new Set(newOrder).size) {
return { error: 'newOrder must contain all group items' };
}
for (const id of newOrder) {
if (!groupQueueIds.has(id)) return { error: `Invalid queue_id: ${id}` };
if (!groupItemIds.has(id)) return { error: `Invalid item_id: ${id}` };
}
const itemMap = new Map(groupItems.map((i: any) => [i.queue_id, i]));
const itemMap = new Map(groupItems.map((i: any) => [i.item_id, i]));
const reorderedItems = newOrder.map((qid: string, idx: number) => ({ ...itemMap.get(qid), _idx: idx }));
const newQueue = [...otherItems, ...reorderedItems].sort((a, b) => {
const aGroup = parseInt(a.execution_group?.match(/\d+/)?.[0] || '999');
@@ -255,7 +347,7 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
});
newQueue.forEach((item, idx) => { item.execution_order = idx + 1; delete item._idx; });
queue.queue = newQueue;
queue.tasks = newQueue;
writeQueue(issuesDir, queue);
return { success: true, groupId, reordered: newOrder.length };

View File

@@ -18,6 +18,7 @@ import { handleSystemRoutes } from './routes/system-routes.js';
import { handleFilesRoutes } from './routes/files-routes.js';
import { handleSkillsRoutes } from './routes/skills-routes.js';
import { handleIssueRoutes } from './routes/issue-routes.js';
import { handleDiscoveryRoutes } from './routes/discovery-routes.js';
import { handleRulesRoutes } from './routes/rules-routes.js';
import { handleSessionRoutes } from './routes/session-routes.js';
import { handleCcwRoutes } from './routes/ccw-routes.js';
@@ -89,7 +90,8 @@ const MODULE_CSS_FILES = [
'30-core-memory.css',
'31-api-settings.css',
'32-issue-manager.css',
'33-cli-stream-viewer.css'
'33-cli-stream-viewer.css',
'34-discovery.css'
];
// Modular JS files in dependency order
@@ -147,6 +149,7 @@ const MODULE_FILES = [
'views/api-settings.js',
'views/help.js',
'views/issue-manager.js',
'views/issue-discovery.js',
'main.js'
];
@@ -355,6 +358,11 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Ser
if (await handleIssueRoutes(routeContext)) return;
}
// Discovery routes (/api/discoveries*)
if (pathname.startsWith('/api/discoveries')) {
if (await handleDiscoveryRoutes(routeContext)) return;
}
// Rules routes (/api/rules*)
if (pathname.startsWith('/api/rules')) {
if (await handleRulesRoutes(routeContext)) return;

View File

@@ -285,9 +285,23 @@
.queue-empty-container {
display: flex;
flex-direction: column;
min-height: 300px;
}
.queue-empty-toolbar {
display: flex;
justify-content: flex-end;
padding: 0.5rem 0;
margin-bottom: 0.5rem;
}
.queue-empty-container .queue-empty {
flex: 1;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
min-height: 300px;
}
.queue-empty-title {
@@ -2542,3 +2556,298 @@
min-width: 80px;
}
}
/* ==========================================
QUEUE HISTORY MODAL
========================================== */
.queue-history-list {
display: flex;
flex-direction: column;
gap: 0.75rem;
}
.queue-history-item {
padding: 1rem;
background: hsl(var(--muted) / 0.3);
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
cursor: pointer;
transition: all 0.15s ease;
}
.queue-history-item:hover {
background: hsl(var(--muted) / 0.5);
border-color: hsl(var(--primary) / 0.3);
}
.queue-history-item.active {
border-color: hsl(var(--primary));
background: hsl(var(--primary) / 0.1);
}
.queue-history-header {
display: flex;
align-items: center;
gap: 0.75rem;
margin-bottom: 0.5rem;
}
.queue-history-id {
font-size: 0.875rem;
font-weight: 600;
color: hsl(var(--foreground));
}
.queue-active-badge {
padding: 0.125rem 0.5rem;
font-size: 0.625rem;
font-weight: 600;
text-transform: uppercase;
background: hsl(var(--primary));
color: hsl(var(--primary-foreground));
border-radius: 9999px;
}
.queue-history-status {
padding: 0.125rem 0.5rem;
font-size: 0.75rem;
border-radius: 0.25rem;
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
.queue-history-status.active {
background: hsl(142 76% 36% / 0.2);
color: hsl(142 76% 36%);
}
.queue-history-status.completed {
background: hsl(142 76% 36% / 0.2);
color: hsl(142 76% 36%);
}
.queue-history-status.archived {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
.queue-history-meta {
display: flex;
flex-wrap: wrap;
gap: 1rem;
margin-bottom: 0.75rem;
}
.queue-history-actions {
display: flex;
gap: 0.5rem;
justify-content: flex-end;
}
/* Queue Detail View */
.queue-detail-view {
padding: 0.5rem 0;
}
.queue-detail-header {
display: flex;
align-items: center;
padding-bottom: 1rem;
border-bottom: 1px solid hsl(var(--border));
}
.queue-detail-stats {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 0.75rem;
}
.queue-detail-stats .stat-item {
text-align: center;
padding: 0.75rem;
background: hsl(var(--muted) / 0.3);
border-radius: 0.5rem;
}
.queue-detail-stats .stat-value {
display: block;
font-size: 1.5rem;
font-weight: 700;
color: hsl(var(--foreground));
}
.queue-detail-stats .stat-label {
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
.queue-detail-stats .stat-item.completed .stat-value {
color: hsl(142 76% 36%);
}
.queue-detail-stats .stat-item.pending .stat-value {
color: hsl(48 96% 53%);
}
.queue-detail-stats .stat-item.failed .stat-value {
color: hsl(0 84% 60%);
}
.queue-detail-groups {
display: flex;
flex-direction: column;
gap: 1rem;
}
.queue-group-section {
background: hsl(var(--muted) / 0.2);
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
overflow: hidden;
}
.queue-group-header {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.75rem 1rem;
background: hsl(var(--muted) / 0.3);
border-bottom: 1px solid hsl(var(--border));
font-weight: 500;
}
.queue-group-items {
padding: 0.5rem;
}
.queue-detail-item {
display: flex;
flex-direction: column;
gap: 0.25rem;
padding: 0.5rem 0.75rem;
border-radius: 0.25rem;
border-left: 3px solid transparent;
}
.queue-detail-item:hover {
background: hsl(var(--muted) / 0.3);
}
.queue-detail-item.completed {
border-left-color: hsl(142 76% 36%);
}
.queue-detail-item.pending {
border-left-color: hsl(48 96% 53%);
}
.queue-detail-item.executing {
border-left-color: hsl(217 91% 60%);
}
.queue-detail-item.failed {
border-left-color: hsl(0 84% 60%);
}
.queue-detail-item .item-main {
display: flex;
align-items: center;
gap: 0.75rem;
}
.queue-detail-item .item-id {
min-width: 50px;
color: hsl(var(--muted-foreground));
font-family: var(--font-mono);
}
.queue-detail-item .item-title {
flex: 1;
color: hsl(var(--foreground));
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.queue-detail-item .item-meta {
display: flex;
align-items: center;
gap: 0.75rem;
padding-left: calc(50px + 0.75rem);
}
.queue-detail-item .item-issue {
color: hsl(var(--primary));
font-family: var(--font-mono);
}
.queue-detail-item .item-status {
padding: 0.125rem 0.5rem;
font-size: 0.75rem;
border-radius: 0.25rem;
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
.queue-detail-item .item-status.completed {
background: hsl(142 76% 36% / 0.2);
color: hsl(142 76% 36%);
}
.queue-detail-item .item-status.pending {
background: hsl(48 96% 53% / 0.2);
color: hsl(48 96% 53%);
}
.queue-detail-item .item-status.executing {
background: hsl(217 91% 60% / 0.2);
color: hsl(217 91% 60%);
}
.queue-detail-item .item-status.failed {
background: hsl(0 84% 60% / 0.2);
color: hsl(0 84% 60%);
}
/* Small Buttons */
.btn-sm {
display: inline-flex;
align-items: center;
gap: 0.25rem;
padding: 0.25rem 0.5rem;
font-size: 0.75rem;
border-radius: 0.25rem;
border: none;
cursor: pointer;
transition: all 0.15s ease;
}
.btn-sm.btn-primary {
background: hsl(var(--primary));
color: hsl(var(--primary-foreground));
}
.btn-sm.btn-primary:hover {
opacity: 0.9;
}
.btn-sm.btn-secondary {
background: hsl(var(--muted));
color: hsl(var(--foreground));
}
.btn-sm.btn-secondary:hover {
background: hsl(var(--muted) / 0.8);
}
@media (max-width: 640px) {
.queue-detail-stats {
grid-template-columns: repeat(2, 1fr);
}
.queue-history-meta {
flex-direction: column;
gap: 0.25rem;
}
}

View File

@@ -0,0 +1,783 @@
/* ==========================================
ISSUE DISCOVERY STYLES
========================================== */
/* Discovery Manager Container */
.discovery-manager {
width: 100%;
}
.discovery-manager.loading {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
min-height: 300px;
color: hsl(var(--muted-foreground));
}
/* Discovery Header */
.discovery-header {
margin-bottom: 1.5rem;
}
.discovery-back-btn {
display: inline-flex;
align-items: center;
gap: 0.5rem;
padding: 0.5rem 1rem;
border-radius: 0.5rem;
font-size: 0.875rem;
font-weight: 500;
color: hsl(var(--muted-foreground));
background: hsl(var(--muted));
border: none;
cursor: pointer;
transition: all 0.15s ease;
}
.discovery-back-btn:hover {
color: hsl(var(--foreground));
background: hsl(var(--hover));
}
/* Discovery List */
.discovery-list-container {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(360px, 1fr));
gap: 1rem;
}
/* Discovery Card */
.discovery-card {
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
padding: 1rem;
cursor: pointer;
transition: all 0.15s ease;
}
.discovery-card:hover {
border-color: hsl(var(--primary) / 0.5);
box-shadow: 0 4px 12px hsl(var(--foreground) / 0.05);
}
.discovery-card.running {
border-color: hsl(var(--warning) / 0.5);
}
.discovery-card-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 0.75rem;
}
.discovery-id {
display: flex;
align-items: center;
gap: 0.5rem;
font-size: 0.875rem;
font-weight: 600;
color: hsl(var(--foreground));
}
.discovery-phase {
padding: 0.25rem 0.5rem;
border-radius: 9999px;
font-size: 0.75rem;
font-weight: 500;
text-transform: capitalize;
}
.discovery-phase.complete {
background: hsl(var(--success) / 0.1);
color: hsl(var(--success));
}
.discovery-phase.parallel,
.discovery-phase.external,
.discovery-phase.aggregation {
background: hsl(var(--warning) / 0.1);
color: hsl(var(--warning));
}
.discovery-phase.initialization {
background: hsl(var(--primary) / 0.1);
color: hsl(var(--primary));
}
.discovery-card-body {
display: flex;
flex-direction: column;
gap: 0.75rem;
}
.discovery-target {
display: flex;
align-items: center;
gap: 0.5rem;
font-size: 0.875rem;
}
/* Perspective Badges */
.discovery-perspectives {
display: flex;
flex-wrap: wrap;
gap: 0.375rem;
}
.perspective-badge {
padding: 0.125rem 0.5rem;
border-radius: 9999px;
font-size: 0.625rem;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.025em;
}
.perspective-badge.bug {
background: hsl(0 84% 60% / 0.1);
color: hsl(0 84% 60%);
}
.perspective-badge.ux {
background: hsl(262 84% 60% / 0.1);
color: hsl(262 84% 60%);
}
.perspective-badge.test {
background: hsl(200 84% 50% / 0.1);
color: hsl(200 84% 50%);
}
.perspective-badge.quality {
background: hsl(142 76% 36% / 0.1);
color: hsl(142 76% 36%);
}
.perspective-badge.security {
background: hsl(0 84% 50% / 0.1);
color: hsl(0 84% 50%);
}
.perspective-badge.performance {
background: hsl(38 92% 50% / 0.1);
color: hsl(38 92% 50%);
}
.perspective-badge.maintainability {
background: hsl(280 60% 50% / 0.1);
color: hsl(280 60% 50%);
}
.perspective-badge.best-practices {
background: hsl(170 60% 45% / 0.1);
color: hsl(170 60% 45%);
}
.perspective-badge.more {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
/* Progress Bar */
.discovery-progress-bar {
height: 4px;
background: hsl(var(--muted));
border-radius: 2px;
overflow: hidden;
}
.discovery-progress-bar .progress-fill {
height: 100%;
background: hsl(var(--primary));
transition: width 0.3s ease;
}
/* Stats */
.discovery-stats {
display: flex;
gap: 1.5rem;
}
.discovery-stats .stat {
display: flex;
flex-direction: column;
}
.discovery-stats .stat-value {
font-size: 1.25rem;
font-weight: 700;
color: hsl(var(--foreground));
}
.discovery-stats .stat-label {
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
/* Priority Distribution Bar */
.discovery-priority-bar {
display: flex;
height: 4px;
border-radius: 2px;
overflow: hidden;
}
.discovery-priority-bar .priority-segment {
height: 100%;
}
.discovery-priority-bar .priority-segment.critical {
background: hsl(0 84% 60%);
}
.discovery-priority-bar .priority-segment.high {
background: hsl(38 92% 50%);
}
.discovery-priority-bar .priority-segment.medium {
background: hsl(48 96% 53%);
}
.discovery-priority-bar .priority-segment.low {
background: hsl(142 76% 36%);
}
.discovery-card-footer {
display: flex;
justify-content: flex-end;
margin-top: 0.75rem;
padding-top: 0.75rem;
border-top: 1px solid hsl(var(--border));
}
.discovery-action-btn {
padding: 0.375rem;
border-radius: 0.375rem;
color: hsl(var(--muted-foreground));
background: transparent;
border: none;
cursor: pointer;
transition: all 0.15s ease;
}
.discovery-action-btn:hover {
color: hsl(var(--destructive));
background: hsl(var(--destructive) / 0.1);
}
/* Empty State */
.discovery-empty {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
min-height: 300px;
text-align: center;
color: hsl(var(--muted-foreground));
}
.discovery-empty .empty-icon {
margin-bottom: 1rem;
opacity: 0.5;
}
/* Discovery Detail Container */
.discovery-detail-container {
display: grid;
grid-template-columns: 1fr 400px;
gap: 1.5rem;
height: calc(100vh - 200px);
min-height: 500px;
}
@media (max-width: 1024px) {
.discovery-detail-container {
grid-template-columns: 1fr;
height: auto;
}
}
/* Findings Panel */
.discovery-findings-panel {
display: flex;
flex-direction: column;
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
overflow: hidden;
}
.discovery-toolbar {
display: flex;
align-items: center;
justify-content: space-between;
gap: 1rem;
padding: 0.75rem 1rem;
border-bottom: 1px solid hsl(var(--border));
flex-wrap: wrap;
}
.toolbar-filters {
display: flex;
gap: 0.5rem;
}
.filter-select {
padding: 0.375rem 0.75rem;
border-radius: 0.375rem;
font-size: 0.75rem;
color: hsl(var(--foreground));
background: hsl(var(--muted));
border: 1px solid hsl(var(--border));
cursor: pointer;
}
.toolbar-search {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.375rem 0.75rem;
border-radius: 0.375rem;
background: hsl(var(--muted));
border: 1px solid hsl(var(--border));
flex: 1;
max-width: 250px;
}
.toolbar-search input {
flex: 1;
background: transparent;
border: none;
font-size: 0.75rem;
color: hsl(var(--foreground));
outline: none;
}
.findings-count {
display: flex;
align-items: center;
justify-content: space-between;
padding: 0.5rem 1rem;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
border-bottom: 1px solid hsl(var(--border));
}
.findings-count-left {
display: flex;
align-items: center;
gap: 0.5rem;
}
.findings-count .selected-count {
color: hsl(var(--primary));
font-weight: 500;
}
.findings-count-actions {
display: flex;
gap: 0.5rem;
}
.select-action-btn {
display: inline-flex;
align-items: center;
gap: 0.25rem;
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
font-size: 0.625rem;
font-weight: 500;
color: hsl(var(--muted-foreground));
background: transparent;
border: 1px solid hsl(var(--border));
cursor: pointer;
transition: all 0.15s ease;
}
.select-action-btn:hover {
color: hsl(var(--foreground));
background: hsl(var(--muted));
border-color: hsl(var(--primary) / 0.3);
}
/* Findings List */
.findings-list {
flex: 1;
overflow-y: auto;
padding: 0.5rem;
}
.findings-empty {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
height: 200px;
color: hsl(var(--muted-foreground));
text-align: center;
}
/* Finding Item */
.finding-item {
display: flex;
gap: 0.75rem;
padding: 0.75rem;
border-radius: 0.5rem;
cursor: pointer;
transition: all 0.15s ease;
}
.finding-item:hover {
background: hsl(var(--muted) / 0.5);
}
.finding-item.active {
background: hsl(var(--primary) / 0.1);
border: 1px solid hsl(var(--primary) / 0.3);
}
.finding-item.selected {
background: hsl(var(--primary) / 0.05);
}
.finding-item.dismissed {
opacity: 0.5;
}
.finding-item.exported {
opacity: 0.6;
background: hsl(var(--success) / 0.05);
border: 1px solid hsl(var(--success) / 0.2);
}
.finding-item.exported:hover {
background: hsl(var(--success) / 0.08);
}
/* Exported Badge */
.exported-badge {
display: inline-flex;
align-items: center;
gap: 0.25rem;
padding: 0.125rem 0.5rem;
border-radius: 9999px;
font-size: 0.625rem;
font-weight: 600;
text-transform: uppercase;
background: hsl(var(--success) / 0.1);
color: hsl(var(--success));
}
.finding-checkbox input:disabled {
opacity: 0.5;
cursor: not-allowed;
}
.finding-checkbox {
display: flex;
align-items: flex-start;
padding-top: 0.125rem;
}
.finding-checkbox input {
width: 16px;
height: 16px;
cursor: pointer;
}
.finding-content {
flex: 1;
min-width: 0;
}
.finding-header {
display: flex;
gap: 0.375rem;
margin-bottom: 0.375rem;
}
.finding-title {
font-size: 0.875rem;
font-weight: 500;
color: hsl(var(--foreground));
line-height: 1.3;
margin-bottom: 0.25rem;
}
.finding-location {
display: flex;
align-items: center;
gap: 0.25rem;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
}
/* Priority Badge */
.priority-badge {
padding: 0.125rem 0.375rem;
border-radius: 0.25rem;
font-size: 0.625rem;
font-weight: 600;
text-transform: uppercase;
}
.priority-badge.critical {
background: hsl(0 84% 60% / 0.1);
color: hsl(0 84% 60%);
}
.priority-badge.high {
background: hsl(38 92% 50% / 0.1);
color: hsl(38 92% 50%);
}
.priority-badge.medium {
background: hsl(48 96% 53% / 0.1);
color: hsl(48 70% 40%);
}
.priority-badge.low {
background: hsl(142 76% 36% / 0.1);
color: hsl(142 76% 36%);
}
/* Bulk Actions */
.bulk-actions {
display: flex;
align-items: center;
gap: 0.75rem;
padding: 0.75rem 1rem;
border-top: 1px solid hsl(var(--border));
background: hsl(var(--muted) / 0.3);
}
.bulk-count {
font-size: 0.75rem;
font-weight: 500;
color: hsl(var(--foreground));
}
.bulk-action-btn {
display: inline-flex;
align-items: center;
gap: 0.375rem;
padding: 0.375rem 0.75rem;
border-radius: 0.375rem;
font-size: 0.75rem;
font-weight: 500;
border: none;
cursor: pointer;
transition: all 0.15s ease;
}
.bulk-action-btn.export {
background: hsl(var(--primary));
color: hsl(var(--primary-foreground));
}
.bulk-action-btn.export:hover {
opacity: 0.9;
}
.bulk-action-btn.dismiss {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
.bulk-action-btn.dismiss:hover {
background: hsl(var(--destructive) / 0.1);
color: hsl(var(--destructive));
}
/* Preview Panel */
.discovery-preview-panel {
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
overflow: hidden;
}
.preview-empty {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
height: 100%;
min-height: 300px;
color: hsl(var(--muted-foreground));
text-align: center;
}
/* Finding Preview */
.finding-preview {
padding: 1.25rem;
height: 100%;
overflow-y: auto;
}
.preview-header {
margin-bottom: 1.25rem;
}
.preview-badges {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
margin-bottom: 0.75rem;
}
.confidence-badge {
padding: 0.125rem 0.5rem;
border-radius: 9999px;
font-size: 0.625rem;
font-weight: 500;
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
.preview-title {
font-size: 1.125rem;
font-weight: 600;
color: hsl(var(--foreground));
line-height: 1.3;
}
.preview-section {
margin-bottom: 1rem;
}
.preview-section h4 {
display: flex;
align-items: center;
gap: 0.5rem;
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
color: hsl(var(--muted-foreground));
margin-bottom: 0.5rem;
}
.preview-location code {
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
font-size: 0.75rem;
background: hsl(var(--muted));
color: hsl(var(--primary));
}
.preview-snippet {
padding: 0.75rem;
border-radius: 0.5rem;
background: hsl(var(--muted));
overflow-x: auto;
}
.preview-snippet code {
font-size: 0.75rem;
white-space: pre;
}
.preview-description,
.preview-impact,
.preview-recommendation {
font-size: 0.875rem;
line-height: 1.5;
color: hsl(var(--foreground));
}
/* Suggested Issue */
.preview-section.suggested-issue {
padding: 0.75rem;
background: hsl(var(--primary) / 0.05);
border: 1px solid hsl(var(--primary) / 0.2);
border-radius: 0.5rem;
}
.suggested-issue-content {
margin-top: 0.5rem;
}
.suggested-issue-content .issue-title {
font-size: 0.875rem;
font-weight: 500;
color: hsl(var(--foreground));
margin-bottom: 0.5rem;
}
.suggested-issue-content .issue-meta {
display: flex;
flex-wrap: wrap;
gap: 0.375rem;
}
.issue-type,
.issue-priority,
.issue-label {
padding: 0.125rem 0.5rem;
border-radius: 0.25rem;
font-size: 0.625rem;
font-weight: 500;
}
.issue-type {
background: hsl(var(--primary) / 0.1);
color: hsl(var(--primary));
}
.issue-priority {
background: hsl(var(--warning) / 0.1);
color: hsl(var(--warning));
}
.issue-label {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
/* Preview Actions */
.preview-actions {
display: flex;
gap: 0.75rem;
margin-top: 1.5rem;
padding-top: 1rem;
border-top: 1px solid hsl(var(--border));
}
.preview-action-btn {
display: inline-flex;
align-items: center;
justify-content: center;
gap: 0.5rem;
padding: 0.5rem 1rem;
border-radius: 0.5rem;
font-size: 0.875rem;
font-weight: 500;
border: none;
cursor: pointer;
transition: all 0.15s ease;
flex: 1;
}
.preview-action-btn.primary {
background: hsl(var(--primary));
color: hsl(var(--primary-foreground));
}
.preview-action-btn.primary:hover {
opacity: 0.9;
}
.preview-action-btn.secondary {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
.preview-action-btn.secondary:hover {
color: hsl(var(--destructive));
background: hsl(var(--destructive) / 0.1);
}

View File

@@ -21,24 +21,6 @@ let nativeResumeEnabled = localStorage.getItem('ccw-native-resume') !== 'false';
// Recursive Query settings (for hierarchical storage aggregation)
let recursiveQueryEnabled = localStorage.getItem('ccw-recursive-query') !== 'false'; // default true
// Code Index MCP provider (codexlens, ace, or none)
let codeIndexMcpProvider = 'codexlens';
// ========== Helper Functions ==========
/**
* Get the context-tools filename based on provider
*/
function getContextToolsFileName(provider) {
switch (provider) {
case 'ace':
return 'context-tools-ace.md';
case 'none':
return 'context-tools-none.md';
default:
return 'context-tools.md';
}
}
// ========== Initialization ==========
function initCliStatus() {
// Load all statuses in one call using aggregated endpoint
@@ -259,12 +241,7 @@ async function loadCliToolsConfig() {
defaultCliTool = data.defaultTool;
}
// Load Code Index MCP provider from config
if (data.settings?.codeIndexMcp) {
codeIndexMcpProvider = data.settings.codeIndexMcp;
}
console.log('[CLI Config] Loaded from:', data._configInfo?.source || 'unknown', '| Default:', data.defaultTool, '| CodeIndexMCP:', codeIndexMcpProvider);
console.log('[CLI Config] Loaded from:', data._configInfo?.source || 'unknown', '| Default:', data.defaultTool);
return data;
} catch (err) {
console.error('Failed to load CLI tools config:', err);
@@ -637,33 +614,6 @@ function renderCliStatus() {
</div>
<p class="cli-setting-desc">Cache prefix/suffix injection mode for prompts</p>
</div>
<div class="cli-setting-item">
<label class="cli-setting-label">
<i data-lucide="search" class="w-3 h-3"></i>
Code Index MCP
</label>
<div class="cli-setting-control">
<div class="flex items-center bg-muted rounded-lg p-0.5">
<button class="code-mcp-btn px-3 py-1.5 text-xs font-medium rounded-md transition-all ${codeIndexMcpProvider === 'codexlens' ? 'bg-primary text-primary-foreground shadow-sm' : 'text-muted-foreground hover:text-foreground'}"
onclick="setCodeIndexMcpProvider('codexlens')">
CodexLens
</button>
<button class="code-mcp-btn px-3 py-1.5 text-xs font-medium rounded-md transition-all ${codeIndexMcpProvider === 'ace' ? 'bg-primary text-primary-foreground shadow-sm' : 'text-muted-foreground hover:text-foreground'}"
onclick="setCodeIndexMcpProvider('ace')">
ACE
</button>
<button class="code-mcp-btn px-3 py-1.5 text-xs font-medium rounded-md transition-all ${codeIndexMcpProvider === 'none' ? 'bg-primary text-primary-foreground shadow-sm' : 'text-muted-foreground hover:text-foreground'}"
onclick="setCodeIndexMcpProvider('none')">
None
</button>
</div>
</div>
<p class="cli-setting-desc">Code search provider (updates CLAUDE.md context-tools reference)</p>
<p class="cli-setting-desc text-xs text-muted-foreground mt-1">
<i data-lucide="file-text" class="w-3 h-3 inline-block mr-1"></i>
Current: <code class="bg-muted px-1 rounded">${getContextToolsFileName(codeIndexMcpProvider)}</code>
</p>
</div>
</div>
</div>
`;
@@ -786,33 +736,6 @@ async function setCacheInjectionMode(mode) {
}
}
async function setCodeIndexMcpProvider(provider) {
try {
const response = await fetch('/api/cli/code-index-mcp', {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ provider: provider })
});
if (response.ok) {
codeIndexMcpProvider = provider;
if (window.claudeCliToolsConfig && window.claudeCliToolsConfig.settings) {
window.claudeCliToolsConfig.settings.codeIndexMcp = provider;
}
const providerName = provider === 'ace' ? 'ACE (Augment)' : provider === 'none' ? 'None (Built-in only)' : 'CodexLens';
showRefreshToast(`Code Index MCP switched to ${providerName}`, 'success');
// Re-render both CLI status and settings section
if (typeof renderCliStatus === 'function') renderCliStatus();
if (typeof renderCliSettingsSection === 'function') renderCliSettingsSection();
} else {
const data = await response.json();
showRefreshToast(`Failed to switch Code Index MCP: ${data.error}`, 'error');
}
} catch (err) {
console.error('Failed to switch Code Index MCP:', err);
showRefreshToast('Failed to switch Code Index MCP', 'error');
}
}
async function refreshAllCliStatus() {
await loadAllStatuses();
renderCliStatus();

View File

@@ -161,6 +161,12 @@ function initNavigation() {
} else {
console.error('renderIssueManager not defined - please refresh the page');
}
} else if (currentView === 'issue-discovery') {
if (typeof renderIssueDiscovery === 'function') {
renderIssueDiscovery();
} else {
console.error('renderIssueDiscovery not defined - please refresh the page');
}
}
});
});
@@ -207,6 +213,8 @@ function updateContentTitle() {
titleEl.textContent = t('title.apiSettings');
} else if (currentView === 'issue-manager') {
titleEl.textContent = t('title.issueManager');
} else if (currentView === 'issue-discovery') {
titleEl.textContent = t('title.issueDiscovery');
} else if (currentView === 'liteTasks') {
const names = { 'lite-plan': t('title.litePlanSessions'), 'lite-fix': t('title.liteFixSessions') };
titleEl.textContent = names[currentLiteType] || t('title.liteTasks');

View File

@@ -28,6 +28,8 @@ const i18n = {
'common.deleteFailed': 'Delete failed',
'common.retry': 'Retry',
'common.refresh': 'Refresh',
'common.back': 'Back',
'common.search': 'Search...',
'common.minutes': 'minutes',
'common.enabled': 'Enabled',
'common.disabled': 'Disabled',
@@ -560,8 +562,6 @@ const i18n = {
'cli.recursiveQueryDesc': 'Aggregate CLI history and memory data from parent and child projects',
'cli.maxContextFiles': 'Max Context Files',
'cli.maxContextFilesDesc': 'Maximum files to include in smart context',
'cli.codeIndexMcp': 'Code Index MCP',
'cli.codeIndexMcpDesc': 'Code search provider (updates CLAUDE.md context-tools reference)',
// CCW Install
'ccw.install': 'CCW Install',
@@ -1728,7 +1728,74 @@ const i18n = {
// Issue Manager
'nav.issues': 'Issues',
'nav.issueManager': 'Manager',
'nav.issueDiscovery': 'Discovery',
'title.issueManager': 'Issue Manager',
'title.issueDiscovery': 'Issue Discovery',
// Issue Discovery
'discovery.title': 'Issue Discovery',
'discovery.description': 'Discover potential issues from multiple perspectives',
'discovery.noSessions': 'No discovery sessions',
'discovery.noDiscoveries': 'No discoveries yet',
'discovery.runHint': 'Run /issue:discover to start discovering issues',
'discovery.runCommand': 'Run /issue:discover to start discovering issues',
'discovery.sessions': 'Sessions',
'discovery.findings': 'Findings',
'discovery.phase': 'Phase',
'discovery.perspectives': 'Perspectives',
'discovery.progress': 'Progress',
'discovery.total': 'Total',
'discovery.exported': 'Exported',
'discovery.dismissed': 'Dismissed',
'discovery.pending': 'Pending',
'discovery.external': 'External Research',
'discovery.selectAll': 'Select All',
'discovery.deselectAll': 'Deselect All',
'discovery.exportSelected': 'Export Selected',
'discovery.dismissSelected': 'Dismiss Selected',
'discovery.exportAsIssue': 'Export as Issue',
'discovery.dismiss': 'Dismiss',
'discovery.keep': 'Keep',
'discovery.priority.critical': 'Critical',
'discovery.priority.high': 'High',
'discovery.priority.medium': 'Medium',
'discovery.priority.low': 'Low',
'discovery.perspective.bug': 'Bug',
'discovery.perspective.ux': 'UX',
'discovery.perspective.test': 'Test',
'discovery.perspective.quality': 'Quality',
'discovery.perspective.security': 'Security',
'discovery.perspective.performance': 'Performance',
'discovery.perspective.maintainability': 'Maintainability',
'discovery.perspective.best-practices': 'Best Practices',
'discovery.file': 'File',
'discovery.line': 'Line',
'discovery.confidence': 'Confidence',
'discovery.suggestedIssue': 'Suggested Issue',
'discovery.externalRef': 'External Reference',
'discovery.noFindings': 'No findings match your filters',
'discovery.filterPerspective': 'Filter by Perspective',
'discovery.filterPriority': 'Filter by Priority',
'discovery.filterAll': 'All',
'discovery.allPerspectives': 'All Perspectives',
'discovery.allPriorities': 'All Priorities',
'discovery.selectFinding': 'Select a finding to preview',
'discovery.location': 'Location',
'discovery.code': 'Code',
'discovery.impact': 'Impact',
'discovery.recommendation': 'Recommendation',
'discovery.exportAsIssues': 'Export as Issues',
'discovery.findingDescription': 'Description',
'discovery.deleteSession': 'Delete Session',
'discovery.confirmDelete': 'Are you sure you want to delete this discovery session?',
'discovery.deleted': 'Discovery session deleted',
'discovery.exportSuccess': 'Findings exported as issues',
'discovery.dismissSuccess': 'Findings dismissed',
'discovery.backToList': 'Back to Sessions',
'discovery.viewDetails': 'View Details',
'discovery.inProgress': 'In Progress',
'discovery.completed': 'Completed',
// issues.* keys (used by issue-manager.js)
'issues.title': 'Issue Manager',
'issues.description': 'Manage issues, solutions, and execution queue',
@@ -1881,6 +1948,8 @@ const i18n = {
'common.deleteFailed': '删除失败',
'common.retry': '重试',
'common.refresh': '刷新',
'common.back': '返回',
'common.search': '搜索...',
'common.minutes': '分钟',
'common.enabled': '已启用',
'common.disabled': '已禁用',
@@ -2414,8 +2483,6 @@ const i18n = {
'cli.recursiveQueryDesc': '聚合显示父项目和子项目的 CLI 历史与内存数据',
'cli.maxContextFiles': '最大上下文文件数',
'cli.maxContextFilesDesc': '智能上下文包含的最大文件数',
'cli.codeIndexMcp': '代码索引 MCP',
'cli.codeIndexMcpDesc': '代码搜索提供者 (更新 CLAUDE.md 的 context-tools 引用)',
// CCW Install
'ccw.install': 'CCW 安装',
@@ -3466,6 +3533,8 @@ const i18n = {
'common.edit': '编辑',
'common.close': '关闭',
'common.refresh': '刷新',
'common.back': '返回',
'common.search': '搜索...',
'common.refreshed': '已刷新',
'common.refreshing': '刷新中...',
'common.loading': '加载中...',
@@ -3590,7 +3659,74 @@ const i18n = {
// Issue Manager
'nav.issues': '议题',
'nav.issueManager': '管理器',
'nav.issueDiscovery': '发现',
'title.issueManager': '议题管理器',
'title.issueDiscovery': '议题发现',
// Issue Discovery
'discovery.title': '议题发现',
'discovery.description': '从多个视角发现潜在问题',
'discovery.noSessions': '暂无发现会话',
'discovery.noDiscoveries': '暂无发现',
'discovery.runHint': '运行 /issue:discover 开始发现问题',
'discovery.runCommand': '运行 /issue:discover 开始发现问题',
'discovery.sessions': '会话',
'discovery.findings': '发现',
'discovery.phase': '阶段',
'discovery.perspectives': '视角',
'discovery.progress': '进度',
'discovery.total': '总计',
'discovery.exported': '已导出',
'discovery.dismissed': '已忽略',
'discovery.pending': '待处理',
'discovery.external': '外部研究',
'discovery.selectAll': '全选',
'discovery.deselectAll': '取消全选',
'discovery.exportSelected': '导出选中',
'discovery.dismissSelected': '忽略选中',
'discovery.exportAsIssue': '导出为议题',
'discovery.dismiss': '忽略',
'discovery.keep': '保留',
'discovery.priority.critical': '紧急',
'discovery.priority.high': '高',
'discovery.priority.medium': '中',
'discovery.priority.low': '低',
'discovery.perspective.bug': 'Bug',
'discovery.perspective.ux': '用户体验',
'discovery.perspective.test': '测试',
'discovery.perspective.quality': '代码质量',
'discovery.perspective.security': '安全',
'discovery.perspective.performance': '性能',
'discovery.perspective.maintainability': '可维护性',
'discovery.perspective.best-practices': '最佳实践',
'discovery.file': '文件',
'discovery.line': '行号',
'discovery.confidence': '置信度',
'discovery.suggestedIssue': '建议议题',
'discovery.externalRef': '外部参考',
'discovery.noFindings': '没有匹配的发现',
'discovery.filterPerspective': '按视角筛选',
'discovery.filterPriority': '按优先级筛选',
'discovery.filterAll': '全部',
'discovery.allPerspectives': '所有视角',
'discovery.allPriorities': '所有优先级',
'discovery.selectFinding': '选择一个发现以预览',
'discovery.location': '位置',
'discovery.code': '代码',
'discovery.impact': '影响',
'discovery.recommendation': '建议',
'discovery.exportAsIssues': '导出为议题',
'discovery.findingDescription': '描述',
'discovery.deleteSession': '删除会话',
'discovery.confirmDelete': '确定要删除此发现会话吗?',
'discovery.deleted': '发现会话已删除',
'discovery.exportSuccess': '发现已导出为议题',
'discovery.dismissSuccess': '发现已忽略',
'discovery.backToList': '返回列表',
'discovery.viewDetails': '查看详情',
'discovery.inProgress': '进行中',
'discovery.completed': '已完成',
// issues.* keys (used by issue-manager.js)
'issues.title': '议题管理器',
'issues.description': '管理议题、解决方案和执行队列',
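Every call site in the new views reads these keys defensively (`t('discovery.title') || 'Issue Discovery'`), so the translation helper is assumed to return a falsy value for unknown keys rather than throwing. A minimal sketch of a lookup with that contract, assuming the dictionary is keyed by language code and a `currentLang` variable exists (both are assumptions; this diff only shows flat key/value additions):
// Minimal sketch of the assumed t() contract (hypothetical; not the actual helper):
function t(key) {
  // currentLang and the per-language nesting are assumptions for illustration.
  const dict = i18n[currentLang] || i18n.en || {};
  return dict[key]; // undefined for missing keys, so callers can do t(key) || 'Fallback'
}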

View File

@@ -987,24 +987,6 @@ function renderCliSettingsSection() {
'</div>' +
'<p class="cli-setting-desc">' + t('cli.maxContextFilesDesc') + '</p>' +
'</div>' +
'<div class="cli-setting-item">' +
'<label class="cli-setting-label">' +
'<i data-lucide="search" class="w-3 h-3"></i>' +
t('cli.codeIndexMcp') +
'</label>' +
'<div class="cli-setting-control">' +
'<select class="cli-setting-select" onchange="setCodeIndexMcpProvider(this.value)">' +
'<option value="codexlens"' + (codeIndexMcpProvider === 'codexlens' ? ' selected' : '') + '>CodexLens</option>' +
'<option value="ace"' + (codeIndexMcpProvider === 'ace' ? ' selected' : '') + '>ACE (Augment)</option>' +
'<option value="none"' + (codeIndexMcpProvider === 'none' ? ' selected' : '') + '>None (Built-in)</option>' +
'</select>' +
'</div>' +
'<p class="cli-setting-desc">' + t('cli.codeIndexMcpDesc') + '</p>' +
'<p class="cli-setting-desc text-xs text-muted-foreground">' +
'<i data-lucide="file-text" class="w-3 h-3 inline-block mr-1"></i>' +
'Current: <code class="bg-muted px-1 rounded">' + getContextToolsFileName(codeIndexMcpProvider) + '</code>' +
'</p>' +
'</div>' +
'</div>';
container.innerHTML = settingsHtml;

View File

@@ -446,6 +446,12 @@ async function loadSemanticDepsStatus() {
* Build GPU mode selector HTML
*/
function buildGpuModeSelector(gpuInfo) {
// Check if DirectML is unavailable due to Python environment
var directmlUnavailableReason = null;
if (!gpuInfo.available.includes('directml') && gpuInfo.pythonEnv && gpuInfo.pythonEnv.error) {
directmlUnavailableReason = gpuInfo.pythonEnv.error;
}
var modes = [
{
id: 'cpu',
@@ -457,10 +463,13 @@ function buildGpuModeSelector(gpuInfo) {
{
id: 'directml',
label: 'DirectML',
desc: t('codexlens.directmlModeDesc') || 'Windows GPU (NVIDIA/AMD/Intel)',
desc: directmlUnavailableReason
? directmlUnavailableReason
: (t('codexlens.directmlModeDesc') || 'Windows GPU (NVIDIA/AMD/Intel)'),
icon: 'cpu',
available: gpuInfo.available.includes('directml'),
recommended: gpuInfo.mode === 'directml'
recommended: gpuInfo.mode === 'directml',
warning: directmlUnavailableReason
},
{
id: 'cuda',
@@ -487,6 +496,7 @@ function buildGpuModeSelector(gpuInfo) {
var isDisabled = !mode.available;
var isRecommended = mode.recommended;
var isDefault = mode.id === gpuInfo.mode;
var hasWarning = mode.warning;
html +=
'<label class="flex items-center gap-3 p-2 rounded border cursor-pointer hover:bg-muted/50 transition-colors ' +
@@ -502,7 +512,7 @@ function buildGpuModeSelector(gpuInfo) {
(isRecommended ? '<span class="text-xs bg-primary/20 text-primary px-1.5 py-0.5 rounded">' + (t('common.recommended') || 'Recommended') + '</span>' : '') +
(isDisabled ? '<span class="text-xs text-muted-foreground">(' + (t('common.unavailable') || 'Unavailable') + ')</span>' : '') +
'</div>' +
'<div class="text-xs text-muted-foreground">' + mode.desc + '</div>' +
'<div class="text-xs ' + (hasWarning ? 'text-warning' : 'text-muted-foreground') + '">' + mode.desc + '</div>' +
'</div>' +
'</label>';
});
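For reference, a `gpuInfo` payload shaped roughly as follows would drive the new DirectML branch above; the field names are inferred from the accesses in this hunk (`available`, `mode`, `pythonEnv.error`), not from a documented schema:
// Hypothetical status payload consistent with buildGpuModeSelector's accesses:
var gpuInfo = {
  mode: 'cpu',                    // currently configured mode
  available: ['cpu', 'cuda'],     // 'directml' absent -> option rendered as disabled
  pythonEnv: {
    // when present, this message replaces the normal DirectML description
    // and switches the desc text to the warning color
    error: 'onnxruntime-directml not importable in current Python env'
  }
};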

View File

@@ -0,0 +1,730 @@
// ==========================================
// ISSUE DISCOVERY VIEW
// Manages discovery sessions and findings
// ==========================================
// ========== Discovery State ==========
var discoveryData = {
discoveries: [],
selectedDiscovery: null,
selectedFinding: null,
findings: [],
perspectiveFilter: 'all',
priorityFilter: 'all',
searchQuery: '',
selectedFindings: new Set(),
viewMode: 'list' // 'list' | 'detail'
};
var discoveryLoading = false;
var discoveryPollingInterval = null;
// ========== Helper Functions ==========
function getFilteredFindings() {
const findings = discoveryData.findings || [];
let filtered = findings;
if (discoveryData.perspectiveFilter !== 'all') {
filtered = filtered.filter(f => f.perspective === discoveryData.perspectiveFilter);
}
if (discoveryData.priorityFilter !== 'all') {
filtered = filtered.filter(f => f.priority === discoveryData.priorityFilter);
}
if (discoveryData.searchQuery) {
const q = discoveryData.searchQuery.toLowerCase();
filtered = filtered.filter(f =>
(f.title && f.title.toLowerCase().includes(q)) ||
(f.file && f.file.toLowerCase().includes(q)) ||
(f.description && f.description.toLowerCase().includes(q))
);
}
return filtered;
}
// ========== Main Render Function ==========
async function renderIssueDiscovery() {
const container = document.getElementById('mainContent');
if (!container) return;
// Hide stats grid and carousel
hideStatsAndCarousel();
// Show loading state
container.innerHTML = '<div class="discovery-manager loading">' +
'<div class="loading-spinner"><i data-lucide="loader-2" class="w-8 h-8 animate-spin"></i></div>' +
'<p>' + t('common.loading') + '</p>' +
'</div>';
lucide.createIcons();
// Load data
await loadDiscoveryData();
// Render the main view
renderDiscoveryView();
}
// ========== Data Loading ==========
async function loadDiscoveryData() {
discoveryLoading = true;
try {
const response = await fetch('/api/discoveries?path=' + encodeURIComponent(projectPath));
if (!response.ok) throw new Error('Failed to load discoveries');
const data = await response.json();
discoveryData.discoveries = data.discoveries || [];
updateDiscoveryBadge();
} catch (err) {
console.error('Failed to load discoveries:', err);
discoveryData.discoveries = [];
} finally {
discoveryLoading = false;
}
}
async function loadDiscoveryDetail(discoveryId) {
try {
const response = await fetch('/api/discoveries/' + encodeURIComponent(discoveryId) + '?path=' + encodeURIComponent(projectPath));
if (!response.ok) throw new Error('Failed to load discovery detail');
return await response.json();
} catch (err) {
console.error('Failed to load discovery detail:', err);
return null;
}
}
async function loadDiscoveryFindings(discoveryId) {
try {
let url = '/api/discoveries/' + encodeURIComponent(discoveryId) + '/findings?path=' + encodeURIComponent(projectPath);
if (discoveryData.perspectiveFilter !== 'all') {
url += '&perspective=' + encodeURIComponent(discoveryData.perspectiveFilter);
}
if (discoveryData.priorityFilter !== 'all') {
url += '&priority=' + encodeURIComponent(discoveryData.priorityFilter);
}
const response = await fetch(url);
if (!response.ok) throw new Error('Failed to load findings');
const data = await response.json();
return data.findings || [];
} catch (err) {
console.error('Failed to load findings:', err);
return [];
}
}
async function loadDiscoveryProgress(discoveryId) {
try {
const response = await fetch('/api/discoveries/' + encodeURIComponent(discoveryId) + '/progress?path=' + encodeURIComponent(projectPath));
if (!response.ok) return null;
return await response.json();
} catch (err) {
return null;
}
}
function updateDiscoveryBadge() {
const badge = document.getElementById('badgeDiscovery');
if (badge) {
badge.textContent = discoveryData.discoveries.length;
}
}
// ========== Main View Render ==========
function renderDiscoveryView() {
const container = document.getElementById('mainContent');
if (!container) return;
container.innerHTML = `
<div class="discovery-manager">
<!-- Header -->
<div class="discovery-header mb-6">
<div class="flex items-center justify-between flex-wrap gap-4">
<div class="flex items-center gap-3">
<div class="w-10 h-10 bg-primary/10 rounded-lg flex items-center justify-center">
<i data-lucide="search-code" class="w-5 h-5 text-primary"></i>
</div>
<div>
<h2 class="text-lg font-semibold text-foreground">${t('discovery.title') || 'Issue Discovery'}</h2>
<p class="text-sm text-muted-foreground">${t('discovery.description') || 'Discover potential issues from multiple perspectives'}</p>
</div>
</div>
<div class="flex items-center gap-3">
${discoveryData.viewMode === 'detail' ? `
<button class="discovery-back-btn" onclick="backToDiscoveryList()">
<i data-lucide="arrow-left" class="w-4 h-4"></i>
<span>${t('common.back') || 'Back'}</span>
</button>
` : ''}
</div>
</div>
</div>
${discoveryData.viewMode === 'list' ? renderDiscoveryListSection() : renderDiscoveryDetailSection()}
</div>
`;
lucide.createIcons();
}
// ========== Discovery List Section ==========
function renderDiscoveryListSection() {
const discoveries = discoveryData.discoveries || [];
if (discoveries.length === 0) {
return `
<div class="discovery-empty">
<div class="empty-icon">
<i data-lucide="search-x" class="w-12 h-12 text-muted-foreground"></i>
</div>
<h3 class="text-lg font-medium text-foreground mt-4">${t('discovery.noDiscoveries') || 'No discoveries yet'}</h3>
<p class="text-sm text-muted-foreground mt-2">${t('discovery.runCommand') || 'Run /issue:discover to start discovering issues'}</p>
<div class="mt-4 p-3 bg-muted/50 rounded-lg">
<code class="text-sm text-primary">/issue:discover src/auth/**</code>
</div>
</div>
`;
}
return `
<div class="discovery-list-container">
${discoveries.map(d => renderDiscoveryCard(d)).join('')}
</div>
`;
}
function renderDiscoveryCard(discovery) {
const { discovery_id, target_pattern, perspectives, phase, total_findings, issues_generated, priority_distribution, progress } = discovery;
const isComplete = phase === 'complete';
const isRunning = phase && phase !== 'complete' && phase !== 'failed';
// Calculate progress percentage
let progressPercent = 0;
if (progress && progress.perspective_analysis) {
progressPercent = progress.perspective_analysis.percent_complete || 0;
} else if (isComplete) {
progressPercent = 100;
}
// Priority distribution bar
const critical = priority_distribution?.critical || 0;
const high = priority_distribution?.high || 0;
const medium = priority_distribution?.medium || 0;
const low = priority_distribution?.low || 0;
const total = critical + high + medium + low || 1;
return `
<div class="discovery-card ${isComplete ? 'complete' : ''} ${isRunning ? 'running' : ''}" onclick="viewDiscoveryDetail('${discovery_id}')">
<div class="discovery-card-header">
<div class="discovery-id">
<i data-lucide="search" class="w-4 h-4"></i>
<span>${discovery_id}</span>
</div>
<span class="discovery-phase ${phase}">${phase || 'unknown'}</span>
</div>
<div class="discovery-card-body">
<div class="discovery-target">
<i data-lucide="folder" class="w-4 h-4 text-muted-foreground"></i>
<span class="text-sm text-foreground">${target_pattern || 'N/A'}</span>
</div>
${perspectives && perspectives.length > 0 ? `
<div class="discovery-perspectives">
${perspectives.slice(0, 5).map(p => `<span class="perspective-badge ${p}">${p}</span>`).join('')}
${perspectives.length > 5 ? `<span class="perspective-badge more">+${perspectives.length - 5}</span>` : ''}
</div>
` : ''}
${isRunning ? `
<div class="discovery-progress-bar">
<div class="progress-fill" style="width: ${progressPercent}%"></div>
</div>
<div class="text-xs text-muted-foreground mt-1">${progressPercent}% complete</div>
` : ''}
<div class="discovery-stats">
<div class="stat">
<span class="stat-value">${total_findings || 0}</span>
<span class="stat-label">${t('discovery.findings') || 'Findings'}</span>
</div>
<div class="stat">
<span class="stat-value">${issues_generated || 0}</span>
<span class="stat-label">${t('discovery.exported') || 'Exported'}</span>
</div>
</div>
${total_findings > 0 ? `
<div class="discovery-priority-bar">
<div class="priority-segment critical" style="width: ${(critical / total) * 100}%" title="Critical: ${critical}"></div>
<div class="priority-segment high" style="width: ${(high / total) * 100}%" title="High: ${high}"></div>
<div class="priority-segment medium" style="width: ${(medium / total) * 100}%" title="Medium: ${medium}"></div>
<div class="priority-segment low" style="width: ${(low / total) * 100}%" title="Low: ${low}"></div>
</div>
` : ''}
</div>
<div class="discovery-card-footer">
<button class="discovery-action-btn" onclick="event.stopPropagation(); deleteDiscovery('${discovery_id}')">
<i data-lucide="trash-2" class="w-4 h-4"></i>
</button>
</div>
</div>
`;
}
// ========== Discovery Detail Section ==========
function renderDiscoveryDetailSection() {
const discovery = discoveryData.selectedDiscovery;
if (!discovery) {
return '<div class="loading-spinner"><i data-lucide="loader-2" class="w-8 h-8 animate-spin"></i></div>';
}
const findings = discoveryData.findings || [];
const perspectives = [...new Set(findings.map(f => f.perspective))];
const filteredFindings = getFilteredFindings();
return `
<div class="discovery-detail-container">
<!-- Left Panel: Findings List -->
<div class="discovery-findings-panel">
<!-- Toolbar -->
<div class="discovery-toolbar">
<div class="toolbar-filters">
<select class="filter-select" onchange="filterDiscoveryByPerspective(this.value)">
<option value="all" ${discoveryData.perspectiveFilter === 'all' ? 'selected' : ''}>${t('discovery.allPerspectives') || 'All Perspectives'}</option>
${perspectives.map(p => `<option value="${p}" ${discoveryData.perspectiveFilter === p ? 'selected' : ''}>${p}</option>`).join('')}
</select>
<select class="filter-select" onchange="filterDiscoveryByPriority(this.value)">
<option value="all" ${discoveryData.priorityFilter === 'all' ? 'selected' : ''}>${t('discovery.allPriorities') || 'All Priorities'}</option>
<option value="critical" ${discoveryData.priorityFilter === 'critical' ? 'selected' : ''}>Critical</option>
<option value="high" ${discoveryData.priorityFilter === 'high' ? 'selected' : ''}>High</option>
<option value="medium" ${discoveryData.priorityFilter === 'medium' ? 'selected' : ''}>Medium</option>
<option value="low" ${discoveryData.priorityFilter === 'low' ? 'selected' : ''}>Low</option>
</select>
</div>
<div class="toolbar-search">
<i data-lucide="search" class="w-4 h-4"></i>
<input type="text" placeholder="${t('common.search') || 'Search...'}"
value="${discoveryData.searchQuery}"
oninput="searchDiscoveryFindings(this.value)">
</div>
</div>
<!-- Findings Count -->
<div class="findings-count">
<div class="findings-count-left">
<span>${filteredFindings.length} ${t('discovery.findings') || 'findings'}</span>
${discoveryData.selectedFindings.size > 0 ? `
<span class="selected-count">(${discoveryData.selectedFindings.size} selected)</span>
` : ''}
</div>
<div class="findings-count-actions">
<button class="select-action-btn" onclick="selectAllFindings()">
<i data-lucide="check-square" class="w-3 h-3"></i>
<span>${t('discovery.selectAll') || 'Select All'}</span>
</button>
<button class="select-action-btn" onclick="deselectAllFindings()">
<i data-lucide="square" class="w-3 h-3"></i>
<span>${t('discovery.deselectAll') || 'Deselect All'}</span>
</button>
</div>
</div>
<!-- Findings List -->
<div class="findings-list">
${filteredFindings.length === 0 ? `
<div class="findings-empty">
<i data-lucide="inbox" class="w-8 h-8 text-muted-foreground"></i>
<p>${t('discovery.noFindings') || 'No findings match your filters'}</p>
</div>
` : filteredFindings.map(f => renderFindingItem(f)).join('')}
</div>
<!-- Bulk Actions -->
${discoveryData.selectedFindings.size > 0 ? `
<div class="bulk-actions">
<span class="bulk-count">${discoveryData.selectedFindings.size} selected</span>
<button class="bulk-action-btn export" onclick="exportSelectedFindings()">
<i data-lucide="upload" class="w-4 h-4"></i>
<span>${t('discovery.exportAsIssues') || 'Export as Issues'}</span>
</button>
<button class="bulk-action-btn dismiss" onclick="dismissSelectedFindings()">
<i data-lucide="x" class="w-4 h-4"></i>
<span>${t('discovery.dismiss') || 'Dismiss'}</span>
</button>
</div>
` : ''}
</div>
<!-- Right Panel: Finding Preview -->
<div class="discovery-preview-panel">
${discoveryData.selectedFinding ? renderFindingPreview(discoveryData.selectedFinding) : `
<div class="preview-empty">
<i data-lucide="mouse-pointer-click" class="w-12 h-12 text-muted-foreground"></i>
<p>${t('discovery.selectFinding') || 'Select a finding to preview'}</p>
</div>
`}
</div>
</div>
`;
}
function renderFindingItem(finding) {
const isSelected = discoveryData.selectedFindings.has(finding.id);
const isActive = discoveryData.selectedFinding?.id === finding.id;
const isExported = finding.exported === true;
return `
<div class="finding-item ${isActive ? 'active' : ''} ${isSelected ? 'selected' : ''} ${finding.dismissed ? 'dismissed' : ''} ${isExported ? 'exported' : ''}"
onclick="selectFinding('${finding.id}')">
<div class="finding-checkbox" onclick="event.stopPropagation(); toggleFindingSelection('${finding.id}')">
<input type="checkbox" ${isSelected ? 'checked' : ''} ${isExported ? 'disabled' : ''}>
</div>
<div class="finding-content">
<div class="finding-header">
<span class="perspective-badge ${finding.perspective}">${finding.perspective}</span>
<span class="priority-badge ${finding.priority}">${finding.priority}</span>
${isExported ? '<span class="exported-badge">' + (t('discovery.exported') || 'Exported') + '</span>' : ''}
</div>
<div class="finding-title">${finding.title || 'Untitled'}</div>
<div class="finding-location">
<i data-lucide="file" class="w-3 h-3"></i>
<span>${finding.file || 'Unknown'}${finding.line ? ':' + finding.line : ''}</span>
</div>
</div>
</div>
`;
}
function renderFindingPreview(finding) {
return `
<div class="finding-preview">
<div class="preview-header">
<div class="preview-badges">
<span class="perspective-badge ${finding.perspective}">${finding.perspective}</span>
<span class="priority-badge ${finding.priority}">${finding.priority}</span>
${finding.confidence ? `<span class="confidence-badge">${Math.round(finding.confidence * 100)}% confidence</span>` : ''}
</div>
<h3 class="preview-title">${finding.title || 'Untitled'}</h3>
</div>
<div class="preview-section">
<h4><i data-lucide="file-code" class="w-4 h-4"></i> ${t('discovery.location') || 'Location'}</h4>
<div class="preview-location">
<code>${finding.file || 'Unknown'}${finding.line ? ':' + finding.line : ''}</code>
</div>
</div>
${finding.snippet ? `
<div class="preview-section">
<h4><i data-lucide="code" class="w-4 h-4"></i> ${t('discovery.code') || 'Code'}</h4>
<pre class="preview-snippet"><code>${escapeHtml(finding.snippet)}</code></pre>
</div>
` : ''}
<div class="preview-section">
<h4><i data-lucide="info" class="w-4 h-4"></i> ${t('discovery.description') || 'Description'}</h4>
<p class="preview-description">${finding.description || 'No description'}</p>
</div>
${finding.impact ? `
<div class="preview-section">
<h4><i data-lucide="alert-triangle" class="w-4 h-4"></i> ${t('discovery.impact') || 'Impact'}</h4>
<p class="preview-impact">${finding.impact}</p>
</div>
` : ''}
${finding.recommendation ? `
<div class="preview-section">
<h4><i data-lucide="lightbulb" class="w-4 h-4"></i> ${t('discovery.recommendation') || 'Recommendation'}</h4>
<p class="preview-recommendation">${finding.recommendation}</p>
</div>
` : ''}
${finding.suggested_issue ? `
<div class="preview-section suggested-issue">
<h4><i data-lucide="clipboard-list" class="w-4 h-4"></i> ${t('discovery.suggestedIssue') || 'Suggested Issue'}</h4>
<div class="suggested-issue-content">
<div class="issue-title">${finding.suggested_issue.title || finding.title}</div>
<div class="issue-meta">
<span class="issue-type">${finding.suggested_issue.type || 'bug'}</span>
<span class="issue-priority">P${finding.suggested_issue.priority || 3}</span>
${finding.suggested_issue.labels ? finding.suggested_issue.labels.map(l => `<span class="issue-label">${l}</span>`).join('') : ''}
</div>
</div>
</div>
` : ''}
<div class="preview-actions">
<button class="preview-action-btn primary" onclick="exportSingleFinding('${finding.id}')">
<i data-lucide="upload" class="w-4 h-4"></i>
<span>${t('discovery.exportAsIssue') || 'Export as Issue'}</span>
</button>
<button class="preview-action-btn secondary" onclick="dismissFinding('${finding.id}')">
<i data-lucide="x" class="w-4 h-4"></i>
<span>${t('discovery.dismiss') || 'Dismiss'}</span>
</button>
</div>
</div>
`;
}
// ========== Actions ==========
async function viewDiscoveryDetail(discoveryId) {
discoveryData.viewMode = 'detail';
discoveryData.selectedFinding = null;
discoveryData.selectedFindings.clear();
discoveryData.perspectiveFilter = 'all';
discoveryData.priorityFilter = 'all';
discoveryData.searchQuery = '';
// Show loading
renderDiscoveryView();
// Load detail
const detail = await loadDiscoveryDetail(discoveryId);
if (detail) {
discoveryData.selectedDiscovery = detail;
// Flatten findings from perspectives
const allFindings = [];
if (detail.perspectives) {
for (const p of detail.perspectives) {
if (p.findings) {
allFindings.push(...p.findings);
}
}
}
discoveryData.findings = allFindings;
}
// Start polling if running
if (detail && detail.phase && detail.phase !== 'complete' && detail.phase !== 'failed') {
startDiscoveryPolling(discoveryId);
}
renderDiscoveryView();
}
function backToDiscoveryList() {
stopDiscoveryPolling();
discoveryData.viewMode = 'list';
discoveryData.selectedDiscovery = null;
discoveryData.selectedFinding = null;
discoveryData.findings = [];
discoveryData.selectedFindings.clear();
renderDiscoveryView();
}
function selectFinding(findingId) {
const finding = discoveryData.findings.find(f => f.id === findingId);
discoveryData.selectedFinding = finding || null;
renderDiscoveryView();
}
function toggleFindingSelection(findingId) {
if (discoveryData.selectedFindings.has(findingId)) {
discoveryData.selectedFindings.delete(findingId);
} else {
discoveryData.selectedFindings.add(findingId);
}
renderDiscoveryView();
}
function selectAllFindings() {
// Get filtered findings (respecting current filters)
const filteredFindings = getFilteredFindings();
// Select only non-exported findings
for (const finding of filteredFindings) {
if (!finding.exported) {
discoveryData.selectedFindings.add(finding.id);
}
}
renderDiscoveryView();
}
function deselectAllFindings() {
discoveryData.selectedFindings.clear();
renderDiscoveryView();
}
function filterDiscoveryByPerspective(perspective) {
discoveryData.perspectiveFilter = perspective;
renderDiscoveryView();
}
function filterDiscoveryByPriority(priority) {
discoveryData.priorityFilter = priority;
renderDiscoveryView();
}
function searchDiscoveryFindings(query) {
discoveryData.searchQuery = query;
renderDiscoveryView();
}
async function exportSelectedFindings() {
if (discoveryData.selectedFindings.size === 0) return;
const discoveryId = discoveryData.selectedDiscovery?.discovery_id;
if (!discoveryId) return;
try {
const response = await fetch('/api/discoveries/' + encodeURIComponent(discoveryId) + '/export?path=' + encodeURIComponent(projectPath), {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ finding_ids: Array.from(discoveryData.selectedFindings) })
});
const result = await response.json();
if (result.success) {
// Show detailed message if duplicates were skipped
const msg = result.skipped_count > 0
? `Exported ${result.exported_count} issues, skipped ${result.skipped_count} duplicates`
: `Exported ${result.exported_count} issues`;
showNotification(msg, 'success');
discoveryData.selectedFindings.clear();
// Reload discovery data to reflect exported status
await loadDiscoveryData();
if (discoveryData.selectedDiscovery) {
await viewDiscoveryDetail(discoveryData.selectedDiscovery.discovery_id);
} else {
renderDiscoveryView();
}
} else {
showNotification(result.error || 'Export failed', 'error');
}
} catch (err) {
console.error('Export failed:', err);
showNotification('Export failed', 'error');
}
}
async function exportSingleFinding(findingId) {
const discoveryId = discoveryData.selectedDiscovery?.discovery_id;
if (!discoveryId) return;
try {
const response = await fetch('/api/discoveries/' + encodeURIComponent(discoveryId) + '/export?path=' + encodeURIComponent(projectPath), {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ finding_ids: [findingId] })
});
const result = await response.json();
if (result.success) {
showNotification('Exported 1 issue', 'success');
// Reload discovery data
await loadDiscoveryData();
renderDiscoveryView();
} else {
showNotification(result.error || 'Export failed', 'error');
}
} catch (err) {
console.error('Export failed:', err);
showNotification('Export failed', 'error');
}
}
async function dismissFinding(findingId) {
const discoveryId = discoveryData.selectedDiscovery?.discovery_id;
if (!discoveryId) return;
try {
const response = await fetch('/api/discoveries/' + encodeURIComponent(discoveryId) + '/findings/' + encodeURIComponent(findingId) + '?path=' + encodeURIComponent(projectPath), {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ dismissed: true })
});
const result = await response.json();
if (result.success) {
// Update local state
const finding = discoveryData.findings.find(f => f.id === findingId);
if (finding) {
finding.dismissed = true;
}
if (discoveryData.selectedFinding?.id === findingId) {
discoveryData.selectedFinding = null;
}
renderDiscoveryView();
}
} catch (err) {
console.error('Dismiss failed:', err);
}
}
async function dismissSelectedFindings() {
for (const findingId of discoveryData.selectedFindings) {
await dismissFinding(findingId);
}
discoveryData.selectedFindings.clear();
renderDiscoveryView();
}
async function deleteDiscovery(discoveryId) {
if (!confirm(`Delete discovery ${discoveryId}? This cannot be undone.`)) return;
try {
const response = await fetch('/api/discoveries/' + encodeURIComponent(discoveryId) + '?path=' + encodeURIComponent(projectPath), {
method: 'DELETE'
});
const result = await response.json();
if (result.success) {
showNotification('Discovery deleted', 'success');
await loadDiscoveryData();
renderDiscoveryView();
} else {
showNotification(result.error || 'Delete failed', 'error');
}
} catch (err) {
console.error('Delete failed:', err);
showNotification('Delete failed', 'error');
}
}
// ========== Progress Polling ==========
function startDiscoveryPolling(discoveryId) {
stopDiscoveryPolling();
discoveryPollingInterval = setInterval(async () => {
const progress = await loadDiscoveryProgress(discoveryId);
if (progress) {
// Cache latest progress on the selected discovery; the full view re-renders on completion
if (discoveryData.selectedDiscovery) {
discoveryData.selectedDiscovery.progress = progress.progress;
discoveryData.selectedDiscovery.phase = progress.phase;
}
// Stop polling if complete
if (progress.phase === 'complete' || progress.phase === 'failed') {
stopDiscoveryPolling();
// Reload full detail
viewDiscoveryDetail(discoveryId);
}
}
}, 3000); // Poll every 3 seconds
}
function stopDiscoveryPolling() {
if (discoveryPollingInterval) {
clearInterval(discoveryPollingInterval);
discoveryPollingInterval = null;
}
}
// ========== Utilities ==========
function escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
// ========== Cleanup ==========
function cleanupDiscoveryView() {
stopDiscoveryPolling();
discoveryData.selectedDiscovery = null;
discoveryData.selectedFinding = null;
discoveryData.findings = [];
discoveryData.selectedFindings.clear();
discoveryData.viewMode = 'list';
}
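The file above never declares the shape of a finding, so as a reading aid, here is a sketch of one that would exercise every branch of renderFindingItem and renderFindingPreview; the field list is inferred from the property accesses in this file and is not a documented schema:
// Sketch of a finding as consumed by this view (inferred, illustrative values):
var exampleFinding = {
  id: 'SEC-001',
  perspective: 'security',   // badge + perspective filter
  priority: 'high',          // critical | high | medium | low
  title: 'Token secret logged on auth failure',
  file: 'src/auth/token.js',
  line: 42,
  confidence: 0.85,          // rendered as "85% confidence"
  snippet: 'console.log(secret);',   // escaped via escapeHtml before display
  description: 'Secret material is written to the server log.',
  impact: 'Credential leakage via log aggregation.',
  recommendation: 'Redact secrets before logging.',
  suggested_issue: { title: 'Redact token secret in auth logs', type: 'bug', priority: 2, labels: ['auth'] },
  exported: false,           // true disables the checkbox and shows the Exported badge
  dismissed: false
};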

View File

@@ -6,7 +6,7 @@
// ========== Issue State ==========
var issueData = {
issues: [],
queue: { queue: [], conflicts: [], execution_groups: [], grouped_items: {} },
queue: { tasks: [], solutions: [], conflicts: [], execution_groups: [], grouped_items: {} },
selectedIssue: null,
selectedSolution: null,
selectedSolutionIssueId: null,
@@ -65,7 +65,7 @@ async function loadQueueData() {
issueData.queue = await response.json();
} catch (err) {
console.error('Failed to load queue:', err);
issueData.queue = { queue: [], conflicts: [], execution_groups: [], grouped_items: {} };
issueData.queue = { tasks: [], solutions: [], conflicts: [], execution_groups: [], grouped_items: {} };
}
}
@@ -360,13 +360,21 @@ function filterIssuesByStatus(status) {
// ========== Queue Section ==========
function renderQueueSection() {
const queue = issueData.queue;
const queueItems = queue.queue || [];
// Support both solution-level and task-level queues
const queueItems = queue.solutions || queue.tasks || [];
const isSolutionLevel = !!(queue.solutions && queue.solutions.length > 0);
const metadata = queue._metadata || {};
// Check if queue is empty
if (queueItems.length === 0) {
return `
<div class="queue-empty-container">
<div class="queue-empty-toolbar">
<button class="btn-secondary" onclick="showQueueHistoryModal()" title="${t('issues.queueHistory') || 'Queue History'}">
<i data-lucide="history" class="w-4 h-4"></i>
<span>${t('issues.history') || 'History'}</span>
</button>
</div>
<div class="queue-empty">
<i data-lucide="git-branch" class="w-16 h-16"></i>
<p class="queue-empty-title">${t('issues.queueEmpty') || 'Queue is empty'}</p>
@@ -423,6 +431,10 @@ function renderQueueSection() {
<button class="btn-secondary" onclick="refreshQueue()" title="${t('issues.refreshQueue') || 'Refresh'}">
<i data-lucide="refresh-cw" class="w-4 h-4"></i>
</button>
<button class="btn-secondary" onclick="showQueueHistoryModal()" title="${t('issues.queueHistory') || 'Queue History'}">
<i data-lucide="history" class="w-4 h-4"></i>
<span>${t('issues.history') || 'History'}</span>
</button>
<button class="btn-secondary" onclick="createExecutionQueue()" title="${t('issues.regenerateQueue') || 'Regenerate Queue'}">
<i data-lucide="rotate-cw" class="w-4 h-4"></i>
<span>${t('issues.regenerate') || 'Regenerate'}</span>
@@ -433,8 +445,8 @@ function renderQueueSection() {
<!-- Queue Stats -->
<div class="queue-stats-grid mb-4">
<div class="queue-stat-card">
<span class="queue-stat-value">${metadata.total_tasks || queueItems.length}</span>
<span class="queue-stat-label">${t('issues.totalTasks') || 'Total'}</span>
<span class="queue-stat-value">${isSolutionLevel ? (metadata.total_solutions || queueItems.length) : (metadata.total_tasks || queueItems.length)}</span>
<span class="queue-stat-label">${isSolutionLevel ? (t('issues.totalSolutions') || 'Solutions') : (t('issues.totalTasks') || 'Total')}</span>
</div>
<div class="queue-stat-card pending">
<span class="queue-stat-value">${metadata.pending_count || queueItems.filter(i => i.status === 'pending').length}</span>
@@ -500,6 +512,9 @@ function renderQueueSection() {
function renderQueueGroup(group, items) {
const isParallel = group.type === 'parallel';
// Support both solution-level (solution_count) and task-level (task_count)
const itemCount = group.solution_count || group.task_count || items.length;
const itemLabel = group.solution_count ? 'solutions' : 'tasks';
return `
<div class="queue-group" data-group-id="${group.id}">
@@ -508,7 +523,7 @@ function renderQueueGroup(group, items) {
<i data-lucide="${isParallel ? 'git-merge' : 'arrow-right'}" class="w-4 h-4"></i>
${group.id} (${isParallel ? t('issues.parallelGroup') || 'Parallel' : t('issues.sequentialGroup') || 'Sequential'})
</div>
<span class="text-sm text-muted-foreground">${group.task_count} tasks</span>
<span class="text-sm text-muted-foreground">${itemCount} ${itemLabel}</span>
</div>
<div class="queue-items ${isParallel ? 'parallel' : 'sequential'}">
${items.map((item, idx) => renderQueueItem(item, idx, items.length)).join('')}
@@ -527,15 +542,31 @@ function renderQueueItem(item, index, total) {
blocked: 'blocked'
};
// Check if this is a solution-level item (has task_count) or task-level (has task_id)
const isSolutionItem = item.task_count !== undefined;
return `
<div class="queue-item ${statusColors[item.status] || ''}"
draggable="true"
data-queue-id="${item.queue_id}"
data-item-id="${item.item_id}"
data-group-id="${item.execution_group}"
onclick="openQueueItemDetail('${item.queue_id}')">
<span class="queue-item-id font-mono text-xs">${item.queue_id}</span>
onclick="openQueueItemDetail('${item.item_id}')">
<span class="queue-item-id font-mono text-xs">${item.item_id}</span>
<span class="queue-item-issue text-xs text-muted-foreground">${item.issue_id}</span>
<span class="queue-item-task text-sm">${item.task_id}</span>
${isSolutionItem ? `
<span class="queue-item-solution text-sm" title="${item.solution_id || ''}">
<i data-lucide="package" class="w-3 h-3 inline mr-1"></i>
${item.task_count} ${t('issues.tasks') || 'tasks'}
</span>
${item.files_touched && item.files_touched.length > 0 ? `
<span class="queue-item-files text-xs text-muted-foreground" title="${item.files_touched.join(', ')}">
<i data-lucide="file" class="w-3 h-3"></i>
${item.files_touched.length}
</span>
` : ''}
` : `
<span class="queue-item-task text-sm">${item.task_id || '-'}</span>
`}
<span class="queue-item-priority" style="opacity: ${item.semantic_priority || 0.5}">
<i data-lucide="arrow-up" class="w-3 h-3"></i>
</span>
@@ -559,9 +590,12 @@ function renderConflictsSection(conflicts) {
${conflicts.map(c => `
<div class="conflict-item">
<span class="conflict-file font-mono text-xs">${c.file}</span>
<span class="conflict-tasks text-xs text-muted-foreground">${c.tasks.join(' → ')}</span>
<span class="conflict-status ${c.resolved ? 'resolved' : 'pending'}">
${c.resolved ? 'Resolved' : 'Pending'}
<span class="conflict-items text-xs text-muted-foreground">${(c.solutions || c.tasks || []).join(' → ')}</span>
${c.rationale ? `<span class="conflict-rationale text-xs text-muted-foreground" title="${c.rationale}">
<i data-lucide="info" class="w-3 h-3"></i>
</span>` : ''}
<span class="conflict-status ${c.resolved || c.resolution ? 'resolved' : 'pending'}">
${c.resolved || c.resolution ? 'Resolved' : 'Pending'}
</span>
</div>
`).join('')}
@@ -586,12 +620,12 @@ function handleIssueDragStart(e) {
const item = e.target.closest('.queue-item');
if (!item) return;
issueDragState.dragging = item.dataset.queueId;
issueDragState.dragging = item.dataset.itemId;
issueDragState.groupId = item.dataset.groupId;
item.classList.add('dragging');
e.dataTransfer.effectAllowed = 'move';
e.dataTransfer.setData('text/plain', item.dataset.queueId);
e.dataTransfer.setData('text/plain', item.dataset.itemId);
}
function handleIssueDragEnd(e) {
@@ -610,7 +644,7 @@ function handleIssueDragOver(e) {
e.preventDefault();
const target = e.target.closest('.queue-item');
if (!target || target.dataset.queueId === issueDragState.dragging) return;
if (!target || target.dataset.itemId === issueDragState.dragging) return;
// Only allow drag within same group
if (target.dataset.groupId !== issueDragState.groupId) {
@@ -635,7 +669,7 @@ function handleIssueDrop(e) {
// Get new order
const items = Array.from(container.querySelectorAll('.queue-item'));
const draggedItem = items.find(i => i.dataset.queueId === issueDragState.dragging);
const draggedItem = items.find(i => i.dataset.itemId === issueDragState.dragging);
const targetIndex = items.indexOf(target);
const draggedIndex = items.indexOf(draggedItem);
@@ -649,7 +683,7 @@ function handleIssueDrop(e) {
}
// Get new order and save
const newOrder = Array.from(container.querySelectorAll('.queue-item')).map(i => i.dataset.queueId);
const newOrder = Array.from(container.querySelectorAll('.queue-item')).map(i => i.dataset.itemId);
saveQueueOrder(issueDragState.groupId, newOrder);
}
@@ -767,7 +801,7 @@ function renderIssueDetailPanel(issue) {
<div class="flex items-center justify-between">
<span class="font-mono text-sm">${task.id}</span>
<select class="task-status-select" onchange="updateTaskStatus('${issue.id}', '${task.id}', this.value)">
${['pending', 'ready', 'in_progress', 'completed', 'failed', 'paused', 'skipped'].map(s =>
${['pending', 'ready', 'executing', 'completed', 'failed', 'blocked', 'paused', 'skipped'].map(s =>
`<option value="${s}" ${task.status === s ? 'selected' : ''}>${s}</option>`
).join('')}
</select>
@@ -1145,8 +1179,10 @@ function escapeHtml(text) {
return div.innerHTML;
}
function openQueueItemDetail(queueId) {
const item = issueData.queue.queue?.find(q => q.queue_id === queueId);
function openQueueItemDetail(itemId) {
// Support both solution-level and task-level queues
const items = issueData.queue.solutions || issueData.queue.tasks || [];
const item = items.find(q => q.item_id === itemId);
if (item) {
openIssueDetail(item.issue_id);
}
@@ -1529,6 +1565,253 @@ function hideQueueCommandModal() {
}
}
// ========== Queue History Modal ==========
async function showQueueHistoryModal() {
// Create modal if not exists
let modal = document.getElementById('queueHistoryModal');
if (!modal) {
modal = document.createElement('div');
modal.id = 'queueHistoryModal';
modal.className = 'issue-modal';
document.body.appendChild(modal);
}
// Show loading state
modal.innerHTML = `
<div class="issue-modal-backdrop" onclick="hideQueueHistoryModal()"></div>
<div class="issue-modal-content" style="max-width: 700px; max-height: 80vh;">
<div class="issue-modal-header">
<h3><i data-lucide="history" class="w-5 h-5 inline mr-2"></i>Queue History</h3>
<button class="btn-icon" onclick="hideQueueHistoryModal()">
<i data-lucide="x" class="w-5 h-5"></i>
</button>
</div>
<div class="issue-modal-body" style="overflow-y: auto; max-height: calc(80vh - 120px);">
<div class="flex items-center justify-center py-8">
<i data-lucide="loader-2" class="w-6 h-6 animate-spin"></i>
<span class="ml-2">Loading...</span>
</div>
</div>
</div>
`;
modal.classList.remove('hidden');
lucide.createIcons();
// Fetch queue history
try {
const response = await fetch(`/api/queue/history?path=${encodeURIComponent(projectPath)}`);
const data = await response.json();
const queues = data.queues || [];
const activeQueueId = data.active_queue_id;
// Render queue list
const queueListHtml = queues.length === 0
? `<div class="text-center py-8 text-muted-foreground">
<i data-lucide="inbox" class="w-12 h-12 mx-auto mb-2 opacity-50"></i>
<p>No queue history found</p>
</div>`
: `<div class="queue-history-list">
${queues.map(q => `
<div class="queue-history-item ${q.id === activeQueueId ? 'active' : ''}" onclick="viewQueueDetail('${q.id}')">
<div class="queue-history-header">
<span class="queue-history-id font-mono">${q.id}</span>
${q.id === activeQueueId ? '<span class="queue-active-badge">Active</span>' : ''}
<span class="queue-history-status ${q.status || ''}">${q.status || 'unknown'}</span>
</div>
<div class="queue-history-meta">
<span class="text-xs text-muted-foreground">
<i data-lucide="layers" class="w-3 h-3 inline"></i>
${q.issue_ids?.length || 0} issues
</span>
<span class="text-xs text-muted-foreground">
<i data-lucide="check-circle" class="w-3 h-3 inline"></i>
${q.completed_solutions || q.completed_tasks || 0}/${q.total_solutions || q.total_tasks || 0} ${q.total_solutions ? 'solutions' : 'tasks'}
</span>
<span class="text-xs text-muted-foreground">
<i data-lucide="calendar" class="w-3 h-3 inline"></i>
${q.created_at ? new Date(q.created_at).toLocaleDateString() : 'N/A'}
</span>
</div>
<div class="queue-history-actions">
${q.id !== activeQueueId ? `
<button class="btn-sm btn-primary" onclick="event.stopPropagation(); switchToQueue('${q.id}')">
<i data-lucide="arrow-right-circle" class="w-3 h-3"></i>
Switch
</button>
` : ''}
<button class="btn-sm btn-secondary" onclick="event.stopPropagation(); viewQueueDetail('${q.id}')">
<i data-lucide="eye" class="w-3 h-3"></i>
View
</button>
</div>
</div>
`).join('')}
</div>`;
modal.querySelector('.issue-modal-body').innerHTML = queueListHtml;
lucide.createIcons();
} catch (err) {
console.error('Failed to load queue history:', err);
modal.querySelector('.issue-modal-body').innerHTML = `
<div class="text-center py-8 text-red-500">
<i data-lucide="alert-circle" class="w-8 h-8 mx-auto mb-2"></i>
<p>Failed to load queue history</p>
</div>
`;
lucide.createIcons();
}
}
function hideQueueHistoryModal() {
const modal = document.getElementById('queueHistoryModal');
if (modal) {
modal.classList.add('hidden');
}
}
async function switchToQueue(queueId) {
try {
const response = await fetch(`/api/queue/switch?path=${encodeURIComponent(projectPath)}`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ queueId })
});
const result = await response.json();
if (result.success) {
showNotification(t('issues.queueSwitched') || 'Switched to queue: ' + queueId, 'success');
hideQueueHistoryModal();
await loadQueueData();
renderIssueView();
} else {
showNotification(result.error || 'Failed to switch queue', 'error');
}
} catch (err) {
console.error('Failed to switch queue:', err);
showNotification('Failed to switch queue', 'error');
}
}
async function viewQueueDetail(queueId) {
const modal = document.getElementById('queueHistoryModal');
if (!modal) return;
// Show loading
modal.querySelector('.issue-modal-body').innerHTML = `
<div class="flex items-center justify-center py-8">
<i data-lucide="loader-2" class="w-6 h-6 animate-spin"></i>
<span class="ml-2">${t('common.loading') || 'Loading...'}</span>
</div>
`;
lucide.createIcons();
try {
const response = await fetch(`/api/queue/${queueId}?path=${encodeURIComponent(projectPath)}`);
const queue = await response.json();
if (queue.error) {
throw new Error(queue.error);
}
// Support both solution-level and task-level queues
const items = queue.solutions || queue.queue || queue.tasks || [];
const isSolutionLevel = !!(queue.solutions && queue.solutions.length > 0);
const metadata = queue._metadata || {};
// Group by execution_group
const grouped = {};
items.forEach(item => {
const group = item.execution_group || 'ungrouped';
if (!grouped[group]) grouped[group] = [];
grouped[group].push(item);
});
const itemLabel = isSolutionLevel ? 'solutions' : 'tasks';
const detailHtml = `
<div class="queue-detail-view">
<div class="queue-detail-header mb-4">
<button class="btn-sm btn-secondary" onclick="showQueueHistoryModal()">
<i data-lucide="arrow-left" class="w-3 h-3"></i>
Back
</button>
<div class="ml-4">
<h4 class="text-lg font-semibold">${queue.name || queue.id || queueId}</h4>
${queue.name ? `<span class="text-xs text-muted-foreground font-mono">${queue.id}</span>` : ''}
</div>
</div>
<div class="queue-detail-stats mb-4">
<div class="stat-item">
<span class="stat-value">${items.length}</span>
<span class="stat-label">${isSolutionLevel ? 'Solutions' : 'Total'}</span>
</div>
<div class="stat-item completed">
<span class="stat-value">${items.filter(t => t.status === 'completed').length}</span>
<span class="stat-label">Completed</span>
</div>
<div class="stat-item pending">
<span class="stat-value">${items.filter(t => t.status === 'pending').length}</span>
<span class="stat-label">Pending</span>
</div>
<div class="stat-item failed">
<span class="stat-value">${items.filter(t => t.status === 'failed').length}</span>
<span class="stat-label">Failed</span>
</div>
</div>
<div class="queue-detail-groups">
${Object.entries(grouped).map(([groupId, groupItems]) => `
<div class="queue-group-section">
<div class="queue-group-header">
<i data-lucide="folder" class="w-4 h-4"></i>
<span>${groupId}</span>
<span class="text-xs text-muted-foreground">(${groupItems.length} ${itemLabel})</span>
</div>
<div class="queue-group-items">
${groupItems.map(item => `
<div class="queue-detail-item ${item.status || ''}">
<div class="item-main">
<span class="item-id font-mono text-xs">${item.item_id || item.queue_id || item.task_id || 'N/A'}</span>
<span class="item-title text-sm">${isSolutionLevel ? (item.task_count + ' tasks') : (item.title || item.action || 'Untitled')}</span>
</div>
<div class="item-meta">
<span class="item-issue text-xs">${item.issue_id || ''}</span>
${isSolutionLevel && item.files_touched ? `<span class="item-files text-xs">${item.files_touched.length} files</span>` : ''}
<span class="item-status ${item.status || ''}">${item.status || 'unknown'}</span>
</div>
</div>
`).join('')}
</div>
</div>
`).join('')}
</div>
</div>
`;
modal.querySelector('.issue-modal-body').innerHTML = detailHtml;
lucide.createIcons();
} catch (err) {
console.error('Failed to load queue detail:', err);
modal.querySelector('.issue-modal-body').innerHTML = `
<div class="text-center py-8">
<button class="btn-sm btn-secondary mb-4" onclick="showQueueHistoryModal()">
<i data-lucide="arrow-left" class="w-3 h-3"></i>
Back
</button>
<div class="text-red-500">
<i data-lucide="alert-circle" class="w-8 h-8 mx-auto mb-2"></i>
<p>Failed to load queue detail</p>
</div>
</div>
`;
lucide.createIcons();
}
}
function copyCommand(command) {
navigator.clipboard.writeText(command).then(() => {
showNotification(t('common.copied') || 'Copied to clipboard', 'success');

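The rewritten queue code discriminates solution-level items from task-level ones solely by the presence of `task_count` and by which array the queue file carries; items along these lines would take the two rendering paths (shapes inferred from the accesses in this diff, not a documented schema):
// Task-level item (legacy path): identified by task_id, no task_count.
var taskItem = {
  item_id: 'Q-001', issue_id: 'ISS-42', task_id: 'T-1',
  execution_group: 'G1', status: 'pending', semantic_priority: 0.8
};
// Solution-level item: task_count triggers the package badge and the
// files_touched counter instead of a single task id.
var solutionItem = {
  item_id: 'Q-002', issue_id: 'ISS-42', solution_id: 'SOL-7',
  task_count: 3, files_touched: ['src/a.js', 'src/b.js'],
  execution_group: 'G1', status: 'pending', semantic_priority: 0.6
};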
View File

@@ -169,6 +169,9 @@ function renderProjectOverview() {
${renderDevelopmentIndex(project.developmentIndex)}
</div>
<!-- Project Guidelines -->
${renderProjectGuidelines(project.guidelines)}
<!-- Statistics -->
<div class="bg-card border border-border rounded-lg p-6">
<h3 class="text-lg font-semibold text-foreground mb-4 flex items-center gap-2">
@@ -248,3 +251,153 @@ function renderDevelopmentIndex(devIndex) {
</div>
`;
}
function renderProjectGuidelines(guidelines) {
if (!guidelines) {
return `
<div class="bg-card border border-border rounded-lg p-6 mb-6">
<h3 class="text-lg font-semibold text-foreground mb-4 flex items-center gap-2">
<i data-lucide="scroll-text" class="w-5 h-5"></i> Project Guidelines
</h3>
<p class="text-muted-foreground text-sm">
No guidelines configured. Run <code class="px-2 py-1 bg-muted rounded text-xs font-mono">/session:solidify</code> to add project constraints and conventions.
</p>
</div>
`;
}
// Count total items
const conventionCount = Object.values(guidelines.conventions || {}).flat().length;
const constraintCount = Object.values(guidelines.constraints || {}).flat().length;
const rulesCount = (guidelines.quality_rules || []).length;
const learningsCount = (guidelines.learnings || []).length;
const totalCount = conventionCount + constraintCount + rulesCount + learningsCount;
if (totalCount === 0) {
return `
<div class="bg-card border border-border rounded-lg p-6 mb-6">
<h3 class="text-lg font-semibold text-foreground mb-4 flex items-center gap-2">
<i data-lucide="scroll-text" class="w-5 h-5"></i> Project Guidelines
</h3>
<p class="text-muted-foreground text-sm">
Guidelines file exists but is empty. Run <code class="px-2 py-1 bg-muted rounded text-xs font-mono">/session:solidify</code> to add project constraints and conventions.
</p>
</div>
`;
}
return `
<div class="bg-card border border-border rounded-lg p-6 mb-6">
<h3 class="text-lg font-semibold text-foreground mb-4 flex items-center gap-2">
<i data-lucide="scroll-text" class="w-5 h-5"></i> Project Guidelines
<span class="text-xs px-2 py-0.5 bg-primary-light text-primary rounded-full">${totalCount} items</span>
</h3>
<div class="space-y-6">
<!-- Conventions -->
${renderGuidelinesSection('Conventions', guidelines.conventions, 'book-marked', 'bg-success-light text-success', [
{ key: 'coding_style', label: 'Coding Style' },
{ key: 'naming_patterns', label: 'Naming Patterns' },
{ key: 'file_structure', label: 'File Structure' },
{ key: 'documentation', label: 'Documentation' }
])}
<!-- Constraints -->
${renderGuidelinesSection('Constraints', guidelines.constraints, 'shield-alert', 'bg-destructive/10 text-destructive', [
{ key: 'architecture', label: 'Architecture' },
{ key: 'tech_stack', label: 'Tech Stack' },
{ key: 'performance', label: 'Performance' },
{ key: 'security', label: 'Security' }
])}
<!-- Quality Rules -->
${renderQualityRules(guidelines.quality_rules)}
<!-- Learnings -->
${renderLearnings(guidelines.learnings)}
</div>
</div>
`;
}
function renderGuidelinesSection(title, data, icon, badgeClass, categories) {
if (!data) return '';
const items = categories.flatMap(cat => (data[cat.key] || []).map(item => ({ category: cat.label, value: item })));
if (items.length === 0) return '';
return `
<div>
<h4 class="text-sm font-semibold text-foreground mb-3 flex items-center gap-2">
<i data-lucide="${icon}" class="w-4 h-4"></i>
<span>${title}</span>
<span class="text-xs px-2 py-0.5 ${badgeClass} rounded-full">${items.length}</span>
</h4>
<div class="space-y-2">
${items.slice(0, 8).map(item => `
<div class="flex items-start gap-3 p-3 bg-background border border-border rounded-lg">
<span class="text-xs px-2 py-0.5 bg-muted text-muted-foreground rounded whitespace-nowrap">${escapeHtml(item.category)}</span>
<span class="text-sm text-foreground">${escapeHtml(item.value)}</span>
</div>
`).join('')}
${items.length > 8 ? `<div class="text-sm text-muted-foreground text-center py-2">... and ${items.length - 8} more</div>` : ''}
</div>
</div>
`;
}
function renderQualityRules(rules) {
if (!rules || rules.length === 0) return '';
return `
<div>
<h4 class="text-sm font-semibold text-foreground mb-3 flex items-center gap-2">
<i data-lucide="check-square" class="w-4 h-4"></i>
<span>Quality Rules</span>
<span class="text-xs px-2 py-0.5 bg-warning-light text-warning rounded-full">${rules.length}</span>
</h4>
<div class="space-y-2">
${rules.slice(0, 6).map(rule => `
<div class="p-3 bg-background border border-border rounded-lg">
<div class="flex items-start justify-between mb-1">
<span class="text-sm text-foreground font-medium">${escapeHtml(rule.rule)}</span>
${rule.enforced_by ? `<span class="text-xs px-2 py-0.5 bg-muted text-muted-foreground rounded">${escapeHtml(rule.enforced_by)}</span>` : ''}
</div>
<span class="text-xs text-muted-foreground">Scope: ${escapeHtml(rule.scope)}</span>
</div>
`).join('')}
${rules.length > 6 ? `<div class="text-sm text-muted-foreground text-center py-2">... and ${rules.length - 6} more</div>` : ''}
</div>
</div>
`;
}
function renderLearnings(learnings) {
if (!learnings || learnings.length === 0) return '';
return `
<div>
<h4 class="text-sm font-semibold text-foreground mb-3 flex items-center gap-2">
<i data-lucide="lightbulb" class="w-4 h-4"></i>
<span>Session Learnings</span>
<span class="text-xs px-2 py-0.5 bg-accent text-accent-foreground rounded-full">${learnings.length}</span>
</h4>
<div class="space-y-2">
${learnings.slice(0, 5).map(learning => `
<div class="p-3 bg-background border border-border rounded-lg border-l-4 border-l-primary">
<div class="flex items-start justify-between mb-2">
<span class="text-sm text-foreground">${escapeHtml(learning.insight)}</span>
<span class="text-xs text-muted-foreground whitespace-nowrap ml-2">${formatDate(learning.date)}</span>
</div>
<div class="flex items-center gap-2 text-xs">
${learning.category ? `<span class="px-2 py-0.5 bg-muted text-muted-foreground rounded">${escapeHtml(learning.category)}</span>` : ''}
${learning.session_id ? `<span class="px-2 py-0.5 bg-primary-light text-primary rounded font-mono">${escapeHtml(learning.session_id)}</span>` : ''}
</div>
${learning.context ? `<p class="text-xs text-muted-foreground mt-2">${escapeHtml(learning.context)}</p>` : ''}
</div>
`).join('')}
${learnings.length > 5 ? `<div class="text-sm text-muted-foreground text-center py-2">... and ${learnings.length - 5} more</div>` : ''}
</div>
</div>
`;
}
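// Example guidelines object accepted by the renderers above — a sketch inferred
// from the field accesses; values are illustrative and the on-disk schema may
// carry additional fields:
// {
//   conventions: { coding_style: ['Prefer const over let'], naming_patterns: [], file_structure: [], documentation: [] },
//   constraints: { architecture: [], tech_stack: ['Node 18+ only'], performance: [], security: [] },
//   quality_rules: [{ rule: 'No console.log in shipped code', scope: 'src/**', enforced_by: 'eslint' }],
//   learnings: [{ insight: '...', date: '2025-12-28', category: 'build', session_id: 'WFS-001', context: '...' }]
// }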

View File

@@ -418,6 +418,11 @@
<span class="nav-text flex-1" data-i18n="nav.issueManager">Manager</span>
<span class="badge px-2 py-0.5 text-xs font-semibold rounded-full bg-hover text-muted-foreground" id="badgeIssues">0</span>
</li>
<li class="nav-item flex items-center gap-2 px-3 py-2.5 text-sm text-muted-foreground hover:bg-hover hover:text-foreground rounded cursor-pointer transition-colors" data-view="issue-discovery" data-tooltip="Issue Discovery">
<i data-lucide="search-code" class="nav-icon"></i>
<span class="nav-text flex-1" data-i18n="nav.issueDiscovery">Discovery</span>
<span class="badge px-2 py-0.5 text-xs font-semibold rounded-full bg-hover text-muted-foreground" id="badgeDiscovery">0</span>
</li>
</ul>
</div>

View File

@@ -379,11 +379,63 @@ async function ensureLiteLLMEmbedderReady(): Promise<BootstrapResult> {
*/
type GpuMode = 'cpu' | 'cuda' | 'directml';
/**
* Python environment info for compatibility checks
*/
interface PythonEnvInfo {
version: string; // e.g., "3.11.5"
majorMinor: string; // e.g., "3.11"
architecture: number; // 32 or 64
compatible: boolean; // true if 64-bit and Python 3.8-3.12
error?: string;
}
/**
* Check Python environment in venv for DirectML compatibility
* DirectML requires: 64-bit Python, version 3.8-3.12
*/
async function checkPythonEnvForDirectML(): Promise<PythonEnvInfo> {
const pythonPath =
process.platform === 'win32'
? join(CODEXLENS_VENV, 'Scripts', 'python.exe')
: join(CODEXLENS_VENV, 'bin', 'python');
if (!existsSync(pythonPath)) {
return { version: '', majorMinor: '', architecture: 0, compatible: false, error: 'Python not found in venv' };
}
try {
// Get Python version and pointer width in one call. Only single quotes are
// used inside the Python snippet, so the double quotes wrapping -c below
// survive shell quoting on both cmd.exe and POSIX shells; sys.maxsize > 2**32
// is the standard 64-bit probe.
const checkScript = `import sys; print(f'{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}|{64 if sys.maxsize > 2**32 else 32}')`;
const result = execSync(`"${pythonPath}" -c "${checkScript}"`, { encoding: 'utf-8', timeout: 10000 }).trim();
const [version, archStr] = result.split('|');
const architecture = parseInt(archStr, 10);
const [major, minor] = version.split('.').map(Number);
const majorMinor = `${major}.${minor}`;
// DirectML wheels available for Python 3.8-3.12, 64-bit only
const versionCompatible = major === 3 && minor >= 8 && minor <= 12;
const archCompatible = architecture === 64;
const compatible = versionCompatible && archCompatible;
let error: string | undefined;
if (!archCompatible) {
error = `Python is ${architecture}-bit. onnxruntime-directml requires 64-bit Python. Please reinstall Python as 64-bit.`;
} else if (!versionCompatible) {
error = `Python ${majorMinor} is not supported. onnxruntime-directml requires Python 3.8-3.12.`;
}
return { version, majorMinor, architecture, compatible, error };
} catch (e) {
return { version: '', majorMinor: '', architecture: 0, compatible: false, error: `Failed to check Python: ${(e as Error).message}` };
}
}
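// Illustrative result (assumed environment, not from this diff): the probe
// prints e.g. "3.11.5|64", which parses to
// { version: '3.11.5', majorMinor: '3.11', architecture: 64, compatible: true }.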
/**
* Detect available GPU acceleration
* @returns Detected GPU mode and info
*/
async function detectGpuSupport(): Promise<{ mode: GpuMode; available: GpuMode[]; info: string }> {
async function detectGpuSupport(): Promise<{ mode: GpuMode; available: GpuMode[]; info: string; pythonEnv?: PythonEnvInfo }> {
const available: GpuMode[] = ['cpu'];
let detectedInfo = 'CPU only';
@@ -402,19 +454,20 @@ async function detectGpuSupport(): Promise<{ mode: GpuMode; available: GpuMode[]
// NVIDIA not available
}
// On Windows, DirectML is always available if DirectX 12 is supported
// On Windows, DirectML requires 64-bit Python 3.8-3.12
let pythonEnv: PythonEnvInfo | undefined;
if (process.platform === 'win32') {
try {
// Check for DirectX 12 support via dxdiag or registry
// DirectML works on most modern Windows 10/11 systems
pythonEnv = await checkPythonEnvForDirectML();
if (pythonEnv.compatible) {
available.push('directml');
if (available.includes('cuda')) {
detectedInfo = 'NVIDIA GPU detected (CUDA & DirectML available)';
} else {
detectedInfo = 'DirectML available (Windows GPU acceleration)';
}
} catch {
// DirectML check failed
} else if (pythonEnv.error) {
// DirectML not available due to Python environment
console.log(`[CodexLens] DirectML unavailable: ${pythonEnv.error}`);
}
}
@@ -426,7 +479,7 @@ async function detectGpuSupport(): Promise<{ mode: GpuMode; available: GpuMode[]
recommendedMode = 'cuda';
}
return { mode: recommendedMode, available, info: detectedInfo };
return { mode: recommendedMode, available, info: detectedInfo, pythonEnv };
}
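// Usage sketch (hypothetical caller; nothing below is part of this diff):
// const gpu = await detectGpuSupport();
// console.log(gpu.info); // e.g. 'DirectML available (Windows GPU acceleration)'
// if (gpu.pythonEnv && !gpu.pythonEnv.compatible) {
//   console.warn(`[CodexLens] DirectML unavailable: ${gpu.pythonEnv.error}`);
// }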
/**
@@ -441,6 +494,19 @@ async function installSemantic(gpuMode: GpuMode = 'cpu'): Promise<BootstrapResul
return { success: false, error: 'CodexLens not installed. Install CodexLens first.' };
}
// Check Python environment compatibility for DirectML
if (gpuMode === 'directml') {
const pythonEnv = await checkPythonEnvForDirectML();
if (!pythonEnv.compatible) {
const errorDetails = pythonEnv.error || 'Unknown compatibility issue';
return {
success: false,
error: `DirectML installation failed: ${errorDetails}\n\nTo fix this:\n1. Uninstall current Python\n2. Install 64-bit Python 3.10, 3.11, or 3.12 from python.org\n3. Delete ~/.codexlens/venv folder\n4. Reinstall CodexLens`
};
}
console.log(`[CodexLens] Python ${pythonEnv.version} (${pythonEnv.architecture}-bit) - DirectML compatible`);
}
const pipPath =
process.platform === 'win32'
? join(CODEXLENS_VENV, 'Scripts', 'pip.exe')
@@ -1411,7 +1477,7 @@ export {
cancelIndexing,
isIndexingInProgress,
};
export type { GpuMode };
export type { GpuMode, PythonEnvInfo };
// Backward-compatible export for tests
export const codexLensTool = {

View File

@@ -41,6 +41,7 @@ src/codexlens/semantic/vector_store.py
src/codexlens/storage/__init__.py
src/codexlens/storage/dir_index.py
src/codexlens/storage/file_cache.py
src/codexlens/storage/global_index.py
src/codexlens/storage/index_tree.py
src/codexlens/storage/migration_manager.py
src/codexlens/storage/path_mapper.py
@@ -64,6 +65,8 @@ tests/test_enrichment.py
tests/test_entities.py
tests/test_errors.py
tests/test_file_cache.py
tests/test_global_index.py
tests/test_global_symbol_index.py
tests/test_hybrid_chunker.py
tests/test_hybrid_search_e2e.py
tests/test_incremental_indexing.py

package-lock.json generated
View File

@@ -1,12 +1,12 @@
{
"name": "claude-code-workflow",
"version": "6.2.9",
"version": "6.3.8",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "claude-code-workflow",
"version": "6.2.9",
"version": "6.3.8",
"license": "MIT",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.0.4",

View File

@@ -1,6 +1,6 @@
{
"name": "claude-code-workflow",
"version": "6.3.6",
"version": "6.3.11",
"description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
"type": "module",
"main": "ccw/src/index.js",