Compare commits

...

13 Commits

Author SHA1 Message Date
catlog22
623afc1d35 6.3.31 2026-01-15 22:30:57 +08:00
catlog22
085652560a refactor: remove ccw cli's internal timeout parameter; timeout is now controlled by the external bash caller
- Remove the --timeout command-line option and the internal timeout handling logic
- Process lifetime now follows the parent (bash) process
- Simplify the code; timeout control is delegated to the external caller
2026-01-15 22:30:22 +08:00
catlog22
af4ddb1280 feat: add queue and issue deletion, with support for archiving issues 2026-01-15 19:58:54 +08:00
catlog22
7db659f0e1 feat: enhance issue search and polish the multi-queue card UI
Search enhancements:
- Add debouncing to fix the page freeze caused by rapid typing
- Extend the search scope to solution description and approach fields
- Highlight matched keywords in search results
- Add search dropdown suggestions with keyboard navigation

Multi-queue UI:
- Rework the expanded queue view's card layout with CSS Grid
- Add a queue deactivation feature and API endpoint
- Improve status color distribution and stat card styling
- Add Chinese i18n for the activate/deactivate buttons

Fixes:
- Fix the deactivate 404 error caused by a route conflict
- Fix drag-and-drop ordering breaking after async loading
2026-01-15 19:44:44 +08:00
catlog22
ba526ea09e fix: fix the Dashboard overview page failing to display project information
Add an extractStringArray helper to handle mixed array types (string arrays and object arrays),
so that loadProjectOverview can correctly process the data structures in project-tech.json.

Fixed fields include:
- languages: object array [{name, file_count, primary}] → string array
- frameworks: string arrays remain supported
- key_components: object array [{name, description, path}] → string array
- layers/patterns: mixed types remain supported

Closes #79
2026-01-15 18:58:42 +08:00
catlog22
c308e429f8 feat: add an incremental update command to support single-file index updates 2026-01-15 18:14:51 +08:00
catlog22
c24ed016cb feat: update the execution command docs with the queue ID requirement and user prompting 2026-01-15 16:22:48 +08:00
catlog22
0c9a6d4154 chore: bump version to 6.3.29
Release 6.3.29 with:
- Multi-CLI task and discussion tabs i18n support
- Collapsible sections for discussion and summary tabs
- Post-Completion Expansion for execution commands
- Enhanced multi-CLI session handling
- Code structure refactoring
2026-01-15 15:38:15 +08:00
catlog22
7b5c3cacaa feat: add internationalization support for multi-CLI task and discussion tabs 2026-01-15 15:35:09 +08:00
catlog22
e6e7876b38 feat: Add collapsible sections and enhance layout for discussion and summary tabs 2026-01-15 15:30:11 +08:00
catlog22
0eda520fd7 feat: Enhance multi-CLI session handling and UI updates
- Added loading of plan.json in scanMultiCliDir to improve task extraction.
- Implemented normalization of tasks from plan.json format to support new UI.
- Updated CSS for multi-CLI plan summary and task item badges for better visibility.
- Refactored hook-manager to use Node.js for cross-platform compatibility in command execution.
- Improved i18n support for new CLI tool configuration in the hook wizard.
- Enhanced lite-tasks view to utilize normalized tasks and provide better fallback mechanisms.
- Updated memory-update-queue to return string messages for better integration with hooks.
2026-01-15 15:20:20 +08:00
catlog22
e22b525e9c feat: add Post-Completion Expansion to execution commands
After the execution command completes, ask the user whether to expand the work into issues (test/enhance/refactor/doc); selected dimensions invoke /issue:new
2026-01-15 13:00:50 +08:00
catlog22
86536aaa10 Refactor code structure for improved readability and maintainability 2026-01-15 11:51:19 +08:00
30 changed files with 4469 additions and 887 deletions

View File

@@ -1,7 +1,7 @@
---
name: execute
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
@@ -19,14 +19,57 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
- **Executor handles all tasks within a solution sequentially**
- **Single worktree for entire queue**: One worktree isolates ALL queue execution from main workspace
## Queue ID Requirement (MANDATORY)
**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
### If Queue ID Not Provided
When `--queue` parameter is missing, you MUST:
1. **List available queues** by running:
```javascript
const result = Bash('ccw issue queue list --brief --json');
const index = JSON.parse(result);
```
2. **Display available queues** to user:
```
Available Queues:
ID Status Progress Issues
-----------------------------------------------------------
→ QUE-20251215-001 active 3/10 ISS-001, ISS-002
QUE-20251210-002 active 0/5 ISS-003
QUE-20251205-003 completed 8/8 ISS-004
```
3. **Stop and ask user** to specify which queue to execute:
```javascript
AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: index.queues
.filter(q => q.status === 'active')
.map(q => ({
label: q.id,
description: `${q.status}, ${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
})
```
4. **After user selection**, continue execution with the selected queue ID.
**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of wrong queue.
## Usage
```bash
/issue:execute # Execute active queue(s)
/issue:execute --queue QUE-xxx # Execute specific queue
/issue:execute --worktree # Execute entire queue in isolated worktree
/issue:execute --worktree --queue QUE-xxx
/issue:execute --worktree /path/to/existing/worktree # Resume in existing worktree
/issue:execute --queue QUE-xxx # Execute specific queue (REQUIRED)
/issue:execute --queue QUE-xxx --worktree # Execute in isolated worktree
/issue:execute --queue QUE-xxx --worktree /path/to/existing/worktree # Resume
```
**Parallelism**: Determined automatically by task dependency DAG (no manual control)
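
The parallel batches returned by `ccw issue queue dag` correspond to topological layers of the solution dependency graph. A minimal sketch of that layering, assuming a simplified `{ id, deps }` shape per solution (the real CLI computes this internally; the shape and function name here are illustrative):

```javascript
// Illustrative only: repeatedly take every solution whose dependencies are
// already satisfied (Kahn-style layering) to form parallel batches.
function computeParallelBatches(solutions) {
  const done = new Set();
  let remaining = [...solutions];
  const batches = [];
  while (remaining.length > 0) {
    const ready = remaining.filter(s => s.deps.every(d => done.has(d)));
    if (ready.length === 0) throw new Error('Dependency cycle detected');
    batches.push(ready.map(s => s.id));
    ready.forEach(s => done.add(s.id));
    remaining = remaining.filter(s => !done.has(s.id));
  }
  return batches;
}

// computeParallelBatches([
//   { id: 'S-1', deps: [] },
//   { id: 'S-2', deps: [] },
//   { id: 'S-3', deps: ['S-1', 'S-2'] }
// ]) → [["S-1", "S-2"], ["S-3"]]
```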
@@ -44,13 +87,18 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo
## Execution Flow
```
Phase 0 (if --worktree): Setup Queue Worktree
Phase 0: Validate Queue ID (REQUIRED)
├─ If --queue provided → use specified queue
├─ If --queue missing → list queues, prompt user to select
└─ Store QUEUE_ID for all subsequent commands
Phase 0.5 (if --worktree): Setup Queue Worktree
├─ Create ONE worktree for entire queue: .ccw/worktrees/queue-<timestamp>
├─ All subsequent execution happens in this worktree
└─ Main workspace remains clean and untouched
Phase 1: Get DAG & User Selection
├─ ccw issue queue dag [--queue QUE-xxx] → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
├─ ccw issue queue dag --queue ${QUEUE_ID} → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
└─ AskUserQuestion → executor type (codex|gemini|agent), dry-run mode, worktree mode
Phase 2: Dispatch Parallel Batch (DAG-driven)
@@ -75,11 +123,65 @@ Phase 4 (if --worktree): Worktree Completion
## Implementation
### Phase 0: Validate Queue ID
```javascript
// Check if --queue was provided
let QUEUE_ID = args.queue;
if (!QUEUE_ID) {
// List available queues
const listResult = Bash('ccw issue queue list --brief --json').trim();
const index = JSON.parse(listResult);
if (index.queues.length === 0) {
console.log('No queues found. Use /issue:queue to create one first.');
return;
}
// Filter active queues only
const activeQueues = index.queues.filter(q => q.status === 'active');
if (activeQueues.length === 0) {
console.log('No active queues found.');
console.log('Available queues:', index.queues.map(q => `${q.id} (${q.status})`).join(', '));
return;
}
// Display and prompt user
console.log('\nAvailable Queues:');
console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
console.log('-'.repeat(70));
for (const q of index.queues) {
const marker = q.id === index.active_queue_id ? '→ ' : ' ';
console.log(marker + q.id.padEnd(20) + q.status.padEnd(12) +
`${q.completed_solutions || 0}/${q.total_solutions || 0}`.padEnd(12) +
q.issue_ids.join(', '));
}
const answer = AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: activeQueues.map(q => ({
label: q.id,
description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
}))
}]
});
QUEUE_ID = answer['Queue'];
}
console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
```
### Phase 1: Get DAG & User Selection
```javascript
// Get dependency graph and parallel batches
const dagJson = Bash(`ccw issue queue dag`).trim();
// Get dependency graph and parallel batches (QUEUE_ID required)
const dagJson = Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim();
const dag = JSON.parse(dagJson);
if (dag.error || dag.ready_count === 0) {
@@ -298,8 +400,8 @@ ccw issue done ${solutionId} --fail --reason '{"task_id": "TX", "error_type": "t
### Phase 3: Check Next Batch
```javascript
// Refresh DAG after batch completes
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
// Refresh DAG after batch completes (use same QUEUE_ID)
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());
console.log(`
## Batch Complete
@@ -309,9 +411,9 @@ console.log(`
`);
if (refreshedDag.ready_count > 0) {
console.log('Run `/issue:execute` again for next batch.');
console.log(`Run \`/issue:execute --queue ${QUEUE_ID}\` again for next batch.`);
// Note: If resuming, pass existing worktree path:
// /issue:execute --worktree <worktreePath>
// /issue:execute --queue ${QUEUE_ID} --worktree <worktreePath>
}
```
@@ -367,10 +469,12 @@ if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_coun
┌─────────────────────────────────────────────────────────────────┐
│ Orchestrator │
├─────────────────────────────────────────────────────────────────┤
│ 0. (if --worktree) Create ONE worktree for entire queue
│ 0. Validate QUEUE_ID (required, or prompt user to select)
│ │
│ 0.5 (if --worktree) Create ONE worktree for entire queue │
│ → .ccw/worktrees/queue-exec-<queue-id> │
│ │
│ 1. ccw issue queue dag
│ 1. ccw issue queue dag --queue ${QUEUE_ID}
│ → { parallel_batches: [["S-1","S-2"], ["S-3"]] } │
│ │
│ 2. Dispatch batch 1 (parallel, SAME worktree): │
@@ -405,8 +509,19 @@ if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_coun
## CLI Endpoint Contract
### `ccw issue queue dag`
Returns dependency graph with parallel batches (solution-level):
### `ccw issue queue list --brief --json`
Returns queue index for selection (used when --queue not provided):
```json
{
"active_queue_id": "QUE-20251215-001",
"queues": [
{ "id": "QUE-20251215-001", "status": "active", "issue_ids": ["ISS-001"], "total_solutions": 5, "completed_solutions": 2 }
]
}
```
### `ccw issue queue dag --queue <queue-id>`
Returns dependency graph with parallel batches (solution-level, **--queue required**):
```json
{
"queue_id": "QUE-...",

View File

@@ -311,6 +311,12 @@ Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
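
A minimal sketch of this prompt, following the AskUserQuestion pattern used elsewhere in these commands; the option labels, the `{summary}` placeholder, and the shape of the multi-select answer are assumptions:

```javascript
// Illustrative sketch: offer to expand completed work into follow-up issues.
const expansion = AskUserQuestion({
  questions: [{
    question: "Expand this work into follow-up issues?",
    header: "Expand",
    multiSelect: true,
    options: ['test', 'enhance', 'refactor', 'doc'].map(dimension => ({
      label: dimension,
      description: `Create a follow-up issue for the ${dimension} dimension`
    }))
  }]
});

// Each selected dimension maps to one invocation of:
//   /issue:new "{summary} - {dimension}"
```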
---
## Error Handling
| Situation | Action |

View File

@@ -275,6 +275,10 @@ AskUserQuestion({
- **"Enter Review"**: Execute `/workflow:review`
- **"Complete Session"**: Execute `/workflow:session:complete`
### Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
## Execution Strategy (IMPL_PLAN-Driven)
### Strategy Priority

View File

@@ -664,6 +664,10 @@ Collected after each execution call completes:
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:

View File

@@ -10,63 +10,33 @@ allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), mcp_
## Quick Start
```bash
# Basic usage
/workflow:lite-lite-lite "Fix the login bug"
# Complex task
/workflow:lite-lite-lite "Refactor payment module for multi-gateway support"
```
**Core Philosophy**: Minimal friction, maximum velocity. No files, no artifacts - just analyze and execute.
## What & Why
## Overview
### Core Concept
**Zero-artifact workflow**: Clarify → Select Tools → Multi-Mode Analysis → Decision → Direct Execution
**Zero-artifact workflow**: Clarify requirements → Auto-select tools → Mixed tool analysis → User decision → Direct execution. All state in memory, all decisions via AskUser.
**vs multi-cli-plan**:
- **multi-cli-plan**: Full artifacts (IMPL_PLAN.md, plan.json, synthesis.json)
- **lite-lite-lite**: No files, direct in-memory flow, immediate execution
### Value Proposition
1. **Ultra-Fast**: No file I/O overhead, no session management
2. **Smart Selection**: Auto-select optimal tool combination based on task
3. **Interactive**: Key decisions validated via AskUser
4. **Direct**: Analysis → Execution without intermediate artifacts
**vs multi-cli-plan**: No IMPL_PLAN.md, plan.json, synthesis.json - all state in memory.
## Execution Flow
```
Phase 1: Clarify Requirements
└─ Parse input → AskUser for missing details (if needed)
Phase 2: Auto-Select Tools
└─ Analyze task → Match to tool strengths → Confirm selection
Phase 3: Mixed Tool Analysis
└─ Execute selected tools in parallel → Aggregate results
Phase 4: User Decision
├─ Present analysis summary
├─ AskUser: Execute / Refine / Change tools / Cancel
└─ Loop to Phase 3 if refinement needed
Phase 5: Direct Execution
└─ Execute solution directly (no plan files)
Phase 1: Clarify Requirements → AskUser for missing details
Phase 2: Select Tools (CLI → Mode → Agent) → 3-step selection
Phase 3: Multi-Mode Analysis → Execute with --resume chaining
Phase 4: User Decision → Execute / Refine / Change / Cancel
Phase 5: Direct Execution → No plan files, immediate implementation
```
## Phase Details
## Phase 1: Clarify Requirements
### Phase 1: Clarify Requirements
**Parse Task Description**:
```javascript
// Extract intent from user input
const taskDescription = $ARGUMENTS
// Check if clarification needed
if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {
AskUserQuestion({
questions: [{
@@ -80,173 +50,72 @@ if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {
}]
})
}
```
**Quick ACE Context** (optional, for complex tasks):
```javascript
// Only if task seems to need codebase context
// Optional: Quick ACE Context for complex tasks
mcp__ace-tool__search_context({
project_root_path: process.cwd(),
query: `${taskDescription} implementation patterns`
})
```
### Phase 2: Auto-Select Analysis Tools
## Phase 2: Select Tools
**Tool Categories**:
### Tool Definitions
| Category | Source | Execution |
|----------|--------|-----------|
| **CLI Tools** | cli-tools.json | `ccw cli -p "..." --tool <name>` |
| **Sub Agents** | Task tool | `Task({ subagent_type: "...", prompt: "..." })` |
**Task Analysis Dimensions**:
**CLI Tools** (from cli-tools.json):
```javascript
function analyzeTask(taskDescription) {
return {
complexity: detectComplexity(taskDescription), // simple, medium, complex
taskType: detectTaskType(taskDescription), // bugfix, feature, refactor, analysis, etc.
domain: detectDomain(taskDescription), // frontend, backend, fullstack
needsExecution: detectExecutionNeed(taskDescription) // analysis-only vs needs-write
}
}
```
**CLI Tools** (dynamically loaded from cli-tools.json):
```javascript
// Load CLI tools from config file
const cliConfig = JSON.parse(Read("~/.claude/cli-tools.json"))
const cliTools = Object.entries(cliConfig.tools)
.filter(([_, config]) => config.enabled)
.map(([name, config]) => ({
name,
type: 'cli',
name, type: 'cli',
tags: config.tags || [],
model: config.primaryModel,
toolType: config.type // builtin, cli-wrapper, api-endpoint
}))
```
**Tags** (user-defined in cli-tools.json, no fixed specification):
Tags are completely user-defined. Users can create any tags that match their workflow needs.
**Config Example** (cli-tools.json):
```json
{
"tools": {
"gemini": {
"enabled": true,
"tags": ["architecture", "reasoning", "performance"],
"primaryModel": "gemini-2.5-pro"
},
"codex": {
"enabled": true,
"tags": ["implementation", "fast"],
"primaryModel": "gpt-5.2"
},
"qwen": {
"enabled": true,
"tags": ["implementation", "chinese", "documentation"],
"primaryModel": "coder-model"
}
}
}
```
**Sub Agents** (predefined, canExecute marks execution capability):
```javascript
const agents = [
{ name: 'code-developer', type: 'agent', strength: 'Code implementation, test writing', canExecute: true },
{ name: 'Explore', type: 'agent', strength: 'Fast code exploration', canExecute: false },
{ name: 'cli-explore-agent', type: 'agent', strength: 'Dual-source deep analysis', canExecute: false },
{ name: 'cli-discuss-agent', type: 'agent', strength: 'Multi-CLI collaborative verification', canExecute: false },
{ name: 'debug-explore-agent', type: 'agent', strength: 'Hypothesis-driven debugging', canExecute: false },
{ name: 'context-search-agent', type: 'agent', strength: 'Context collection', canExecute: false },
{ name: 'test-fix-agent', type: 'agent', strength: 'Test execution and fixing', canExecute: true },
{ name: 'universal-executor', type: 'agent', strength: 'General multi-step execution', canExecute: true }
]
```
**Sub Agents**:
| Agent | Strengths | canExecute |
|-------|-----------|------------|
| **code-developer** | Code implementation, test writing, incremental development | ✅ |
| **Explore** | Fast code exploration, file search, pattern discovery | ❌ |
| **cli-explore-agent** | Dual-source analysis (Bash+CLI), read-only exploration | ❌ |
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification, solution synthesis | ❌ |
| **debug-explore-agent** | Hypothesis-driven debugging, NDJSON logging, iterative verification | ❌ |
| **context-search-agent** | Multi-layer file discovery, dependency analysis, conflict assessment | ❌ |
| **code-developer** | Code implementation, test writing | ✅ |
| **Explore** | Fast code exploration, pattern discovery | ❌ |
| **cli-explore-agent** | Dual-source analysis (Bash+CLI) | ❌ |
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification | ❌ |
| **debug-explore-agent** | Hypothesis-driven debugging | ❌ |
| **context-search-agent** | Multi-layer file discovery, dependency analysis | ❌ |
| **test-fix-agent** | Test execution, failure diagnosis, code fixing | ✅ |
| **universal-executor** | General execution, multi-domain adaptation | ✅ |
**Three-Step Selection Flow** (CLI → Mode → Agent):
**Analysis Modes**:
| Mode | Pattern | Use Case | minCLIs |
|------|---------|----------|---------|
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective | 1+ |
| **Sequential** | `A → B(resume) → C(resume)` | Incremental deepening | 2+ |
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Adversarial validation | 2 |
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |
### Three-Step Selection Flow
```javascript
// Step 1: Present CLI options from config (multiSelect for multi-CLI modes)
function getCliDescription(cli) {
return cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
}
const cliOptions = cliTools.map(cli => ({
label: cli.name,
description: getCliDescription(cli)
}))
// Step 1: Select CLIs (multiSelect)
AskUserQuestion({
questions: [{
question: "Select CLI tools for analysis (select 1-3 for collaboration modes)",
question: "Select CLI tools for analysis (1-3 for collaboration modes)",
header: "CLI Tools",
options: cliOptions,
multiSelect: true // Allow multiple selection for collaboration modes
options: cliTools.map(cli => ({
label: cli.name,
description: cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
})),
multiSelect: true
}]
})
```
```javascript
// Step 2: Select Analysis Mode
const analysisModes = [
{
name: 'parallel',
label: 'Parallel',
description: 'All CLIs analyze simultaneously, aggregate results',
minCLIs: 1,
pattern: 'A || B || C → Aggregate'
},
{
name: 'sequential',
label: 'Sequential',
description: 'Chain analysis: each CLI builds on previous via --resume',
minCLIs: 2,
pattern: 'A → B(resume A) → C(resume B)'
},
{
name: 'collaborative',
label: 'Collaborative',
description: 'Multi-round synthesis: CLIs take turns refining analysis',
minCLIs: 2,
pattern: 'A → B(resume A) → A(resume B) → Synthesize'
},
{
name: 'debate',
label: 'Debate',
description: 'Adversarial: CLI B challenges CLI A findings, A responds',
minCLIs: 2,
pattern: 'A(propose) → B(challenge, resume A) → A(defend, resume B)'
},
{
name: 'challenge',
label: 'Challenge',
description: 'Stress test: CLI B finds flaws/alternatives in CLI A analysis',
minCLIs: 2,
pattern: 'A(analyze) → B(challenge, resume A) → Evaluate'
}
]
// Filter modes based on selected CLI count
// Step 2: Select Mode (filtered by CLI count)
const availableModes = analysisModes.filter(m => selectedCLIs.length >= m.minCLIs)
AskUserQuestion({
questions: [{
question: "Select analysis mode",
@@ -258,43 +127,24 @@ AskUserQuestion({
multiSelect: false
}]
})
```
```javascript
// Step 3: Present Agent options for execution
const agentOptions = agents.map(agent => ({
label: agent.name,
description: agent.strength
}))
// Step 3: Select Agent for execution
AskUserQuestion({
questions: [{
question: "Select Sub Agent for execution",
header: "Agent",
options: agentOptions,
options: agents.map(a => ({ label: a.name, description: a.strength })),
multiSelect: false
}]
})
```
**Selection Summary**:
```javascript
console.log(`
## Selected Configuration
**CLI Tools**: ${selectedCLIs.map(c => c.name).join(' → ')}
**Analysis Mode**: ${selectedMode.label} - ${selectedMode.pattern}
**Execution Agent**: ${selectedAgent.name} - ${selectedAgent.strength}
> Mode determines how CLIs collaborate, Agent handles final execution
`)
// Confirm selection
AskUserQuestion({
questions: [{
question: "Confirm selection?",
header: "Confirm",
options: [
{ label: "Confirm and continue", description: `${selectedMode.label} mode with ${selectedCLIs.length} CLIs` },
{ label: "Confirm and continue", description: `${selectedMode.label} with ${selectedCLIs.length} CLIs` },
{ label: "Re-select CLIs", description: "Choose different CLI tools" },
{ label: "Re-select Mode", description: "Choose different analysis mode" },
{ label: "Re-select Agent", description: "Choose different Sub Agent" }
@@ -304,409 +154,226 @@ AskUserQuestion({
})
```
### Phase 3: Multi-Mode Analysis
## Phase 3: Multi-Mode Analysis
**Mode-Specific Execution Patterns**:
### Universal CLI Prompt Template
#### Mode 1: Parallel (并行)
```javascript
// All CLIs run simultaneously, no resume dependency
async function executeParallel(clis, taskDescription) {
const promises = clis.map(cli => Bash({
command: `ccw cli -p "
PURPOSE: Analyze and provide solution for: ${taskDescription}
TASK: • Identify affected files • Analyze implementation approach • List specific changes needed
// Unified prompt builder - used by all modes
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
return `
PURPOSE: ${purpose}: ${taskDescription}
TASK: ${tasks.map(t => `• ${t}`).join(' ')}
MODE: analysis
CONTEXT: @**/*
EXPECTED: Concise analysis with: 1) Root cause/approach 2) Files to modify 3) Key changes 4) Risks
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Focus on actionable insights
" --tool ${cli.name} --mode analysis`,
run_in_background: true
}))
EXPECTED: ${expected}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | ${rules}
`
}
return await Promise.all(promises)
// Execute CLI with prompt
function execCLI(cli, prompt, options = {}) {
const { resume, background = false } = options
const resumeFlag = resume ? `--resume ${resume}` : ''
return Bash({
command: `ccw cli -p "${prompt}" --tool ${cli.name} --mode analysis ${resumeFlag}`,
run_in_background: background
})
}
```
#### Mode 2: Sequential (串联)
### Prompt Presets by Role
| Role | PURPOSE | TASKS | EXPECTED | RULES |
|------|---------|-------|----------|-------|
| **initial** | Initial analysis | Identify files, Analyze approach, List changes | Root cause, files, changes, risks | Focus on actionable insights |
| **extend** | Build on previous | Review previous, Extend, Add insights | Extended analysis building on findings | Build incrementally, avoid repetition |
| **synthesize** | Refine and synthesize | Review, Identify gaps, Synthesize | Refined synthesis with new perspectives | Add value not repetition |
| **propose** | Propose comprehensive analysis | Analyze thoroughly, Propose solution, State assumptions | Well-reasoned proposal with trade-offs | Be clear about assumptions |
| **challenge** | Challenge and stress-test | Identify weaknesses, Question assumptions, Suggest alternatives | Critique with counter-arguments | Be adversarial but constructive |
| **defend** | Respond to challenges | Address challenges, Defend valid aspects, Propose refined solution | Refined proposal incorporating feedback | Be open to criticism, synthesize |
| **criticize** | Find flaws ruthlessly | Find logical flaws, Identify edge cases, Rate criticisms | Critique with severity: [CRITICAL]/[HIGH]/[MEDIUM]/[LOW] | Be ruthlessly critical |
```javascript
// Chain analysis: each CLI builds on previous via --resume
async function executeSequential(clis, taskDescription) {
const PROMPTS = {
initial: { purpose: 'Initial analysis', tasks: ['Identify affected files', 'Analyze implementation approach', 'List specific changes'], expected: 'Root cause, files to modify, key changes, risks', rules: 'Focus on actionable insights' },
extend: { purpose: 'Build on previous analysis', tasks: ['Review previous findings', 'Extend analysis', 'Add new insights'], expected: 'Extended analysis building on previous', rules: 'Build incrementally, avoid repetition' },
synthesize: { purpose: 'Refine and synthesize', tasks: ['Review previous', 'Identify gaps', 'Add insights', 'Synthesize findings'], expected: 'Refined synthesis with new perspectives', rules: 'Build collaboratively, add value' },
propose: { purpose: 'Propose comprehensive analysis', tasks: ['Analyze thoroughly', 'Propose solution', 'State assumptions clearly'], expected: 'Well-reasoned proposal with trade-offs', rules: 'Be clear about assumptions' },
challenge: { purpose: 'Challenge and stress-test', tasks: ['Identify weaknesses', 'Question assumptions', 'Suggest alternatives', 'Highlight overlooked risks'], expected: 'Constructive critique with counter-arguments', rules: 'Be adversarial but constructive' },
defend: { purpose: 'Respond to challenges', tasks: ['Address each challenge', 'Defend valid aspects', 'Acknowledge valid criticisms', 'Propose refined solution'], expected: 'Refined proposal incorporating alternatives', rules: 'Be open to criticism, synthesize best ideas' },
criticize: { purpose: 'Stress-test and find weaknesses', tasks: ['Find logical flaws', 'Identify missed edge cases', 'Propose alternatives', 'Rate criticisms (High/Medium/Low)'], expected: 'Detailed critique with severity ratings', rules: 'Be ruthlessly critical, find every flaw' }
}
```
### Mode Implementations
```javascript
// Parallel: All CLIs run simultaneously
async function executeParallel(clis, task) {
return await Promise.all(clis.map(cli =>
execCLI(cli, buildPrompt({ ...PROMPTS.initial, taskDescription: task }), { background: true })
))
}
// Sequential: Each CLI builds on previous via --resume
async function executeSequential(clis, task) {
const results = []
let previousSessionId = null
let prevId = null
for (const cli of clis) {
const resumeFlag = previousSessionId ? `--resume ${previousSessionId}` : ''
const result = await Bash({
command: `ccw cli -p "
PURPOSE: ${previousSessionId ? 'Build on previous analysis and deepen' : 'Initial analysis'}: ${taskDescription}
TASK: • ${previousSessionId ? 'Review previous findings • Extend analysis • Add new insights' : 'Identify affected files • Analyze implementation approach'}
MODE: analysis
CONTEXT: @**/*
EXPECTED: ${previousSessionId ? 'Extended analysis building on previous findings' : 'Initial analysis with root cause and approach'}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | ${previousSessionId ? 'Build incrementally, avoid repetition' : 'Focus on actionable insights'}
" --tool ${cli.name} --mode analysis ${resumeFlag}`,
run_in_background: false
})
const preset = prevId ? PROMPTS.extend : PROMPTS.initial
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
results.push(result)
previousSessionId = extractSessionId(result) // Extract session ID for next iteration
prevId = extractSessionId(result)
}
return results
}
```
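
The `--resume` chaining above depends on `extractSessionId`, which is referenced but not defined in this command. A minimal sketch, assuming the ccw CLI echoes a session/execution identifier in its output (the `Session ID:` marker is an assumption, not the documented format):

```javascript
// Illustrative only: pull a session/execution ID out of CLI output so the
// next call can pass it via --resume. Returns null if no ID is found,
// in which case execCLI simply omits the --resume flag.
function extractSessionId(output) {
  const match = String(output).match(/Session ID:\s*([\w-]+)/i);
  return match ? match[1] : null;
}
```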
#### Mode 3: Collaborative (协同)
```javascript
// Multi-round synthesis: CLIs take turns refining analysis
async function executeCollaborative(clis, taskDescription, rounds = 2) {
// Collaborative: Multi-round synthesis
async function executeCollaborative(clis, task, rounds = 2) {
const results = []
let previousSessionId = null
for (let round = 0; round < rounds; round++) {
let prevId = null
for (let r = 0; r < rounds; r++) {
for (const cli of clis) {
const resumeFlag = previousSessionId ? `--resume ${previousSessionId}` : ''
const roundContext = round === 0 ? 'Initial analysis' : `Round ${round + 1}: Refine and synthesize`
const result = await Bash({
command: `ccw cli -p "
PURPOSE: ${roundContext} for: ${taskDescription}
TASK: • ${round === 0 ? 'Initial analysis of the problem' : 'Review previous analysis • Identify gaps • Add complementary insights • Synthesize findings'}
MODE: analysis
CONTEXT: @**/*
EXPECTED: ${round === 0 ? 'Foundational analysis' : 'Refined synthesis with new perspectives'}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | ${round === 0 ? 'Be thorough' : 'Build collaboratively, add value not repetition'}
" --tool ${cli.name} --mode analysis ${resumeFlag}`,
run_in_background: false
})
results.push({ cli: cli.name, round, result })
previousSessionId = extractSessionId(result)
const preset = !prevId ? PROMPTS.initial : PROMPTS.synthesize
const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
results.push({ cli: cli.name, round: r, result })
prevId = extractSessionId(result)
}
}
return results
}
```
#### Mode 4: Debate (辩论)
```javascript
// Adversarial: CLI B challenges CLI A findings, A responds
async function executeDebate(clis, taskDescription) {
// Debate: Propose → Challenge → Defend
async function executeDebate(clis, task) {
const [cliA, cliB] = clis
const results = []
// Step 1: CLI A proposes initial analysis
const proposeResult = await Bash({
command: `ccw cli -p "
PURPOSE: Propose comprehensive analysis for: ${taskDescription}
TASK: • Analyze problem thoroughly • Propose solution approach • Identify implementation details • State assumptions clearly
MODE: analysis
CONTEXT: @**/*
EXPECTED: Well-reasoned proposal with clear assumptions and trade-offs stated
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Be clear about assumptions and trade-offs
" --tool ${cliA.name} --mode analysis`,
run_in_background: false
})
results.push({ phase: 'propose', cli: cliA.name, result: proposeResult })
const proposeSessionId = extractSessionId(proposeResult)
const propose = await execCLI(cliA, buildPrompt({ ...PROMPTS.propose, taskDescription: task }))
results.push({ phase: 'propose', cli: cliA.name, result: propose })
// Step 2: CLI B challenges the proposal
const challengeResult = await Bash({
command: `ccw cli -p "
PURPOSE: Challenge and stress-test the previous analysis for: ${taskDescription}
TASK: • Identify weaknesses in proposed approach • Question assumptions • Suggest alternative approaches • Highlight potential risks overlooked
MODE: analysis
CONTEXT: @**/*
EXPECTED: Constructive critique with specific counter-arguments and alternatives
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Be adversarial but constructive, focus on improving the solution
" --tool ${cliB.name} --mode analysis --resume ${proposeSessionId}`,
run_in_background: false
})
results.push({ phase: 'challenge', cli: cliB.name, result: challengeResult })
const challengeSessionId = extractSessionId(challengeResult)
const challenge = await execCLI(cliB, buildPrompt({ ...PROMPTS.challenge, taskDescription: task }), { resume: extractSessionId(propose) })
results.push({ phase: 'challenge', cli: cliB.name, result: challenge })
// Step 3: CLI A defends and refines
const defendResult = await Bash({
command: `ccw cli -p "
PURPOSE: Respond to challenges and refine analysis for: ${taskDescription}
TASK: • Address each challenge point • Defend valid aspects • Acknowledge valid criticisms • Propose refined solution incorporating feedback
MODE: analysis
CONTEXT: @**/*
EXPECTED: Refined proposal that addresses criticisms and incorporates valid alternatives
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Be open to valid criticism, synthesize best ideas
" --tool ${cliA.name} --mode analysis --resume ${challengeSessionId}`,
run_in_background: false
})
results.push({ phase: 'defend', cli: cliA.name, result: defendResult })
const defend = await execCLI(cliA, buildPrompt({ ...PROMPTS.defend, taskDescription: task }), { resume: extractSessionId(challenge) })
results.push({ phase: 'defend', cli: cliA.name, result: defend })
return results
}
```
#### Mode 5: Challenge (挑战)
```javascript
// Stress test: CLI B finds flaws/alternatives in CLI A analysis
async function executeChallenge(clis, taskDescription) {
// Challenge: Analyze → Criticize
async function executeChallenge(clis, task) {
const [cliA, cliB] = clis
const results = []
// Step 1: CLI A provides initial analysis
const analyzeResult = await Bash({
command: `ccw cli -p "
PURPOSE: Provide comprehensive analysis for: ${taskDescription}
TASK: • Deep analysis of problem space • Propose implementation approach • List specific changes • Identify risks
MODE: analysis
CONTEXT: @**/*
EXPECTED: Thorough analysis with clear reasoning
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Be thorough and explicit about reasoning
" --tool ${cliA.name} --mode analysis`,
run_in_background: false
})
results.push({ phase: 'analyze', cli: cliA.name, result: analyzeResult })
const analyzeSessionId = extractSessionId(analyzeResult)
const analyze = await execCLI(cliA, buildPrompt({ ...PROMPTS.initial, taskDescription: task }))
results.push({ phase: 'analyze', cli: cliA.name, result: analyze })
// Step 2: CLI B challenges with focus on finding flaws
const challengeResult = await Bash({
command: `ccw cli -p "
PURPOSE: Stress-test and find weaknesses in the analysis for: ${taskDescription}
TASK: • Find logical flaws in reasoning • Identify missed edge cases • Propose better alternatives • Rate confidence in each criticism (High/Medium/Low)
MODE: analysis
CONTEXT: @**/*
EXPECTED: Detailed critique with severity ratings: [CRITICAL] [HIGH] [MEDIUM] [LOW] for each issue found
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Be ruthlessly critical, find every possible flaw
" --tool ${cliB.name} --mode analysis --resume ${analyzeSessionId}`,
run_in_background: false
})
results.push({ phase: 'challenge', cli: cliB.name, result: challengeResult })
const criticize = await execCLI(cliB, buildPrompt({ ...PROMPTS.criticize, taskDescription: task }), { resume: extractSessionId(analyze) })
results.push({ phase: 'challenge', cli: cliB.name, result: criticize })
return results
}
```
**Mode Router**:
### Mode Router & Result Aggregation
```javascript
async function executeAnalysis(mode, clis, taskDescription) {
switch (mode.name) {
case 'parallel':
return await executeParallel(clis, taskDescription)
case 'sequential':
return await executeSequential(clis, taskDescription)
case 'collaborative':
return await executeCollaborative(clis, taskDescription)
case 'debate':
return await executeDebate(clis, taskDescription)
case 'challenge':
return await executeChallenge(clis, taskDescription)
default:
return await executeParallel(clis, taskDescription)
case 'parallel': return await executeParallel(clis, taskDescription)
case 'sequential': return await executeSequential(clis, taskDescription)
case 'collaborative': return await executeCollaborative(clis, taskDescription)
case 'debate': return await executeDebate(clis, taskDescription)
case 'challenge': return await executeChallenge(clis, taskDescription)
}
}
// Execute based on selected mode
const analysisResults = await executeAnalysis(selectedMode, selectedCLIs, taskDescription)
```
**Result Aggregation** (mode-aware):
```javascript
function aggregateResults(mode, results) {
const base = {
mode: mode.name,
pattern: mode.pattern,
tools_used: results.map(r => r.cli || 'unknown')
}
const base = { mode: mode.name, pattern: mode.pattern, tools_used: results.map(r => r.cli || 'unknown') }
switch (mode.name) {
case 'parallel':
return {
...base,
findings: results.map(r => parseOutput(r)),
consensus: findCommonPoints(results),
divergences: findDifferences(results)
}
return { ...base, findings: results.map(parseOutput), consensus: findCommonPoints(results), divergences: findDifferences(results) }
case 'sequential':
return {
...base,
evolution: results.map((r, i) => ({ step: i + 1, analysis: parseOutput(r) })),
finalAnalysis: parseOutput(results[results.length - 1])
}
return { ...base, evolution: results.map((r, i) => ({ step: i + 1, analysis: parseOutput(r) })), finalAnalysis: parseOutput(results.at(-1)) }
case 'collaborative':
return {
...base,
rounds: groupByRound(results),
synthesis: extractSynthesis(results[results.length - 1])
}
return { ...base, rounds: groupByRound(results), synthesis: extractSynthesis(results.at(-1)) }
case 'debate':
return {
...base,
proposal: parseOutput(results.find(r => r.phase === 'propose')?.result),
return { ...base, proposal: parseOutput(results.find(r => r.phase === 'propose')?.result),
challenges: parseOutput(results.find(r => r.phase === 'challenge')?.result),
resolution: parseOutput(results.find(r => r.phase === 'defend')?.result),
confidence: calculateDebateConfidence(results)
}
resolution: parseOutput(results.find(r => r.phase === 'defend')?.result), confidence: calculateDebateConfidence(results) }
case 'challenge':
return {
...base,
originalAnalysis: parseOutput(results.find(r => r.phase === 'analyze')?.result),
critiques: parseCritiques(results.find(r => r.phase === 'challenge')?.result),
riskScore: calculateRiskScore(results)
}
return { ...base, originalAnalysis: parseOutput(results.find(r => r.phase === 'analyze')?.result),
critiques: parseCritiques(results.find(r => r.phase === 'challenge')?.result), riskScore: calculateRiskScore(results) }
}
}
const aggregatedAnalysis = aggregateResults(selectedMode, analysisResults)
```
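
`parseOutput`, `findCommonPoints`, and `findDifferences` are assumed by the aggregation above but not defined in this command. A rough sketch under the assumption that CLI results are plain text (the line-based heuristics are illustrative, not part of the spec):

```javascript
// Illustrative heuristics only.
function parseOutput(result) {
  const text = String(result || '');
  return { summary: text.split('\n').find(line => line.trim()) || '', raw: text };
}

function findCommonPoints(results) {
  // Lines present in every CLI's output are treated as consensus.
  const lineSets = results.map(r =>
    new Set(String(r).split('\n').map(l => l.trim()).filter(Boolean)));
  if (lineSets.length === 0) return [];
  return [...lineSets[0]].filter(line => lineSets.every(set => set.has(line)));
}

function findDifferences(results) {
  const common = new Set(findCommonPoints(results));
  return results.flatMap(r =>
    String(r).split('\n').map(l => l.trim()).filter(l => l && !common.has(l)));
}
```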
### Phase 4: User Decision
**Present Mode-Specific Summary**:
## Phase 4: User Decision
```javascript
function presentSummary(aggregatedAnalysis) {
const { mode, pattern } = aggregatedAnalysis
function presentSummary(analysis) {
console.log(`## Analysis Result\n**Mode**: ${analysis.mode} (${analysis.pattern})\n**Tools**: ${analysis.tools_used.join(' → ')}`)
console.log(`
## Analysis Result Summary
**Mode**: ${mode} (${pattern})
**Tools**: ${aggregatedAnalysis.tools_used.join(' → ')}
`)
switch (mode) {
switch (analysis.mode) {
case 'parallel':
console.log(`
### Consensus Points
${aggregatedAnalysis.consensus.map(c => `- ${c}`).join('\n')}
### Divergence Points
${aggregatedAnalysis.divergences.map(d => `- ${d}`).join('\n')}
`)
console.log(`### Consensus\n${analysis.consensus.map(c => `- ${c}`).join('\n')}\n### Divergences\n${analysis.divergences.map(d => `- ${d}`).join('\n')}`)
break
case 'sequential':
console.log(`
### Analysis Evolution
${aggregatedAnalysis.evolution.map(e => `**Step ${e.step}**: ${e.analysis.summary}`).join('\n')}
### Final Analysis
${aggregatedAnalysis.finalAnalysis.summary}
`)
console.log(`### Evolution\n${analysis.evolution.map(e => `**Step ${e.step}**: ${e.analysis.summary}`).join('\n')}\n### Final\n${analysis.finalAnalysis.summary}`)
break
case 'collaborative':
console.log(`
### Collaboration Rounds
${Object.entries(aggregatedAnalysis.rounds).map(([round, analyses]) =>
`**Round ${round}**: ${analyses.map(a => a.cli).join(' + ')}`
).join('\n')}
### Synthesized Result
${aggregatedAnalysis.synthesis}
`)
console.log(`### Rounds\n${Object.entries(analysis.rounds).map(([r, a]) => `**Round ${r}**: ${a.map(x => x.cli).join(' + ')}`).join('\n')}\n### Synthesis\n${analysis.synthesis}`)
break
case 'debate':
console.log(`
### Debate Summary
**Proposal**: ${aggregatedAnalysis.proposal.summary}
**Challenges**: ${aggregatedAnalysis.challenges.points?.length || 0} points raised
**Resolution**: ${aggregatedAnalysis.resolution.summary}
**Confidence**: ${aggregatedAnalysis.confidence}%
`)
console.log(`### Debate\n**Proposal**: ${analysis.proposal.summary}\n**Challenges**: ${analysis.challenges.points?.length || 0} points\n**Resolution**: ${analysis.resolution.summary}\n**Confidence**: ${analysis.confidence}%`)
break
case 'challenge':
console.log(`
### Challenge Summary
**Original Analysis**: ${aggregatedAnalysis.originalAnalysis.summary}
**Critiques Found**: ${aggregatedAnalysis.critiques.length} issues
${aggregatedAnalysis.critiques.map(c => `- [${c.severity}] ${c.description}`).join('\n')}
**Risk Score**: ${aggregatedAnalysis.riskScore}/100
`)
console.log(`### Challenge\n**Original**: ${analysis.originalAnalysis.summary}\n**Critiques**: ${analysis.critiques.length} issues\n${analysis.critiques.map(c => `- [${c.severity}] ${c.description}`).join('\n')}\n**Risk Score**: ${analysis.riskScore}/100`)
break
}
}
presentSummary(aggregatedAnalysis)
```
**Decision Options**:
```javascript
AskUserQuestion({
questions: [{
question: "How to proceed?",
header: "Next Step",
options: [
{ label: "Execute directly", description: "Implement immediately based on analysis" },
{ label: "Refine analysis", description: "Provide more constraints, re-analyze" },
{ label: "Change tools", description: "Select different tool combination" },
{ label: "Cancel", description: "End current workflow" }
{ label: "Execute directly", description: "Implement immediately" },
{ label: "Refine analysis", description: "Add constraints, re-analyze" },
{ label: "Change tools", description: "Different tool combination" },
{ label: "Cancel", description: "End workflow" }
],
multiSelect: false
}]
})
// Routing: Execute → Phase 5 | Refine → Phase 3 | Change → Phase 2 | Cancel → End
```
**Routing Logic**:
- **Execute directly** → Phase 5
- **Refine analysis** → Collect feedback, return to Phase 3
- **Change tools** → Return to Phase 2
- **Cancel** → End workflow
## Phase 5: Direct Execution
### Phase 5: Direct Execution
**No Artifacts - Direct Implementation**:
```javascript
// Use the aggregated analysis directly
// No IMPL_PLAN.md, no plan.json, no session files
console.log("Starting direct execution based on analysis...")
// Execution-capable agents (canExecute: true)
// No IMPL_PLAN.md, no plan.json - direct implementation
const executionAgents = agents.filter(a => a.canExecute)
// Select execution tool: prefer execution-capable agent, fallback to CLI
const executionTool = selectedTools.find(t =>
t.type === 'agent' && executionAgents.some(ea => ea.name === t.name)
) || selectedTools.find(t => t.type === 'cli')
const executionTool = selectedAgent.canExecute ? selectedAgent : selectedCLIs[0]
if (executionTool.type === 'agent') {
// Use Agent for execution (preferred if available)
Task({
subagent_type: executionTool.name,
run_in_background: false,
description: `Execute: ${taskDescription.slice(0, 30)}`,
prompt: `
## Task
${taskDescription}
## Analysis Results (from previous tools)
${JSON.stringify(aggregatedAnalysis, null, 2)}
## Instructions
Based on the analysis above, implement the solution:
1. Apply changes to identified files
2. Follow the recommended approach
3. Handle identified risks
4. Verify changes work correctly
`
prompt: `## Task\n${taskDescription}\n\n## Analysis Results\n${JSON.stringify(aggregatedAnalysis, null, 2)}\n\n## Instructions\n1. Apply changes to identified files\n2. Follow recommended approach\n3. Handle identified risks\n4. Verify changes work correctly`
})
} else {
// Use CLI with write mode
Bash({
command: `ccw cli -p "
PURPOSE: Implement the solution based on analysis: ${taskDescription}
PURPOSE: Implement solution: ${taskDescription}
TASK: ${extractedTasks.join(' • ')}
MODE: write
CONTEXT: @${affectedFiles.join(' @')}
EXPECTED: Working implementation with all changes applied
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Apply analysis findings directly
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md)
" --tool ${executionTool.name} --mode write`,
run_in_background: false
})
@@ -718,81 +385,49 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Ap
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Clarify requirements", status: "in_progress", activeForm: "Clarifying requirements" },
{ content: "Phase 2: Auto-select tools", status: "pending", activeForm: "Analyzing task" },
{ content: "Phase 3: Mixed tool analysis", status: "pending", activeForm: "Running analysis" },
{ content: "Phase 2: Select tools", status: "pending", activeForm: "Selecting tools" },
{ content: "Phase 3: Multi-mode analysis", status: "pending", activeForm: "Running analysis" },
{ content: "Phase 4: User decision", status: "pending", activeForm: "Awaiting decision" },
{ content: "Phase 5: Direct execution", status: "pending", activeForm: "Executing implementation" }
{ content: "Phase 5: Direct execution", status: "pending", activeForm: "Executing" }
]})
```
## Iteration Patterns
### Pattern A: Direct Path (Most Common)
```
Phase 1 → Phase 2 (auto) → Phase 3 → Phase 4 (execute) → Phase 5
```
### Pattern B: Refinement Loop
```
Phase 3 → Phase 4 (refine) → Phase 3 → Phase 4 → Phase 5
```
### Pattern C: Tool Adjustment
```
Phase 2 (adjust) → Phase 3 → Phase 4 → Phase 5
```
| Pattern | Flow |
|---------|------|
| **Direct** | Phase 1 → 2 → 3 → 4(execute) → 5 |
| **Refinement** | Phase 3 → 4(refine) → 3 → 4 → 5 |
| **Tool Adjust** | Phase 2(adjust) → 3 → 4 → 5 |
## Error Handling
| Error | Resolution |
|-------|------------|
| CLI timeout | Retry with secondary model |
| No enabled tools | Load cli-tools.json, ask user to enable tools |
| Task type unclear | Default to first available CLI + code-developer |
| No enabled tools | Ask user to enable tools in cli-tools.json |
| Task unclear | Default to first CLI + code-developer |
| Ambiguous task | Force clarification via AskUser |
| Execution fails | Present error, ask user for direction |
## Analysis Modes Reference
| Mode | Pattern | Use Case | CLI Count |
|------|---------|----------|-----------|
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective analysis | 1+ |
| **Sequential** | `A → B(resume) → C(resume)` | Deep incremental analysis | 2+ |
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Stress-test solutions | 2 |
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |
## Comparison
## Comparison with multi-cli-plan
| Aspect | lite-lite-lite | multi-cli-plan |
|--------|----------------|----------------|
| **Artifacts** | None | IMPL_PLAN.md, plan.json, synthesis.json |
| **Session** | Stateless (uses --resume for chaining) | Persistent session folder |
| **Tool Selection** | Multi-CLI + Agent via 3-step selection | Config-driven with fixed tools |
| **Analysis Modes** | 5 modes (parallel/sequential/collaborative/debate/challenge) | Fixed synthesis rounds |
| **CLI Collaboration** | Auto --resume chaining | Manual session management |
| **Iteration** | Via AskUser | Via rounds/synthesis |
| **Execution** | Direct | Via lite-execute |
| **Best For** | Quick analysis, adversarial validation, rapid iteration | Complex multi-step implementations |
| **Session** | Stateless (--resume chaining) | Persistent session folder |
| **Tool Selection** | 3-step (CLI → Mode → Agent) | Config-driven fixed tools |
| **Analysis Modes** | 5 modes with --resume | Fixed synthesis rounds |
| **Best For** | Quick analysis, adversarial validation | Complex multi-step implementations |
## Best Practices
## Post-Completion Expansion
1. **Be Specific**: Clear task description improves auto-selection accuracy
2. **Trust Auto-Selection**: Algorithm matches task type to tool strengths
3. **Adjust When Needed**: Use "Adjust tools" if auto-selection doesn't fit
4. **Trust Consensus**: When tools agree, confidence is high
5. **Iterate Fast**: Use refinement loop for complex requirements
6. **Direct is Fast**: Skip artifacts when task is straightforward
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
## Related Commands
```bash
# Full planning workflow
/workflow:multi-cli-plan "complex task"
# Single CLI planning
/workflow:lite-plan "task"
# Direct execution
/workflow:lite-execute --in-memory
/workflow:multi-cli-plan "complex task" # Full planning workflow
/workflow:lite-plan "task" # Single CLI planning
/workflow:lite-execute --in-memory # Direct execution
```

View File

@@ -585,6 +585,10 @@ TodoWrite({
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
## Best Practices
1. **Trust AI Planning**: Planning agent's grouping and execution strategy are based on dependency analysis

View File

@@ -491,6 +491,10 @@ The orchestrator automatically creates git commits at key checkpoints to enable
**Note**: Final session completion creates additional commit with full summary.
## Post-Completion Expansion
After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); each selected dimension invokes `/issue:new "{summary} - {dimension}"`
## Best Practices
1. **Default Settings Work**: 10 iterations sufficient for most cases

View File

@@ -1,6 +1,6 @@
---
description: Execute all solutions from issue queue with git commit after each solution
argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
---
# Issue Execute (Codex Version)
@@ -9,6 +9,49 @@ argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
**Serial Execution**: Execute solutions ONE BY ONE from the issue queue via `ccw issue next`. For each solution, complete all tasks sequentially (implement → test → verify), then commit once per solution with formatted summary. Continue autonomously until queue is empty.
## Queue ID Requirement (MANDATORY)
**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
### If Queue ID Not Provided
When `--queue` parameter is missing, you MUST:
1. **List available queues** by running:
```javascript
const result = shell_command({ command: "ccw issue queue list --brief --json" })
```
2. **Parse and display queues** to user:
```
Available Queues:
ID Status Progress Issues
-----------------------------------------------------------
→ QUE-20251215-001 active 3/10 ISS-001, ISS-002
QUE-20251210-002 active 0/5 ISS-003
QUE-20251205-003 completed 8/8 ISS-004
```
3. **Stop and ask user** to specify which queue to execute:
```javascript
AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: [
// Generate from parsed queue list - only show active/pending queues
{ label: "QUE-20251215-001", description: "active, 3/10 completed, Issues: ISS-001, ISS-002" },
{ label: "QUE-20251210-002", description: "active, 0/5 completed, Issues: ISS-003" }
]
}]
})
```
4. **After user selection**, continue execution with the selected queue ID.
**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of wrong queue.
## Worktree Mode (Recommended for Parallel Execution)
When `--worktree` is specified, create or use a git worktree to isolate work.
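
A minimal sketch of that setup, assuming the `.ccw/worktrees/queue-exec-<queue-id>` layout used by the orchestrator version (the branch naming is an assumption):

```javascript
// Illustrative sketch: create an isolated worktree for this queue run.
const WORKTREE_NAME = `queue-exec-${QUEUE_ID}`;
const WORKTREE_PATH = `.ccw/worktrees/${WORKTREE_NAME}`;

shell_command({
  command: `git worktree add "${WORKTREE_PATH}" -b "${WORKTREE_NAME}"`
});

// All subsequent commands run inside WORKTREE_PATH; ccw auto-detects the
// worktree and redirects queue state to the main repo's .workflow/ directory.
```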
@@ -77,7 +120,8 @@ cd "${WORKTREE_PATH}"
**Worktree Execution Pattern**:
```
1. [WORKTREE] ccw issue next → auto-redirects to main repo's .workflow/
0. [MAIN REPO] Validate queue ID (--queue required, or prompt user to select)
1. [WORKTREE] ccw issue next --queue <queue-id> → auto-redirects to main repo's .workflow/
2. [WORKTREE] Implement all tasks, run tests, git commit
3. [WORKTREE] ccw issue done <item_id> → auto-redirects to main repo
4. Repeat from step 1
@@ -177,10 +221,12 @@ echo "Branch '${WORKTREE_NAME}' kept. Merge manually when ready."
## Execution Flow
```
INIT: Fetch first solution via ccw issue next
STEP 0: Validate queue ID (--queue required, or prompt user to select)
INIT: Fetch first solution via ccw issue next --queue <queue-id>
WHILE solution exists:
1. Receive solution JSON from ccw issue next
1. Receive solution JSON from ccw issue next --queue <queue-id>
2. Execute all tasks in solution.tasks sequentially:
FOR each task:
- IMPLEMENT: Follow task.implementation steps
@@ -188,7 +234,7 @@ WHILE solution exists:
- VERIFY: Check task.acceptance criteria
3. COMMIT: Stage all files, commit once with formatted summary
4. Report completion via ccw issue done <item_id>
5. Fetch next solution via ccw issue next
5. Fetch next solution via ccw issue next --queue <queue-id>
WHEN queue empty:
Output final summary
@@ -196,11 +242,14 @@ WHEN queue empty:
## Step 1: Fetch First Solution
**Prerequisite**: Queue ID must be determined (either from `--queue` argument or user selection in Step 0).
Run this command to get your first solution:
```javascript
// ccw auto-detects worktree and uses main repo's .workflow/
const result = shell_command({ command: "ccw issue next" })
// QUEUE_ID is required - obtained from --queue argument or user selection
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```
This returns JSON with the full solution definition:
@@ -494,11 +543,12 @@ shell_command({
## Step 5: Continue to Next Solution
Fetch next solution:
Fetch next solution (using same QUEUE_ID from Step 0/1):
```javascript
// ccw auto-detects worktree
const result = shell_command({ command: "ccw issue next" })
// Continue using the same QUEUE_ID throughout execution
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```
**Output progress:**
@@ -567,18 +617,28 @@ When `ccw issue next` returns `{ "status": "empty" }`:
| Command | Purpose |
|---------|---------|
| `ccw issue next` | Fetch next solution from queue (auto-selects from active queues) |
| `ccw issue next --queue QUE-xxx` | Fetch from specific queue |
| `ccw issue queue list --brief --json` | List all queues (for queue selection) |
| `ccw issue next --queue QUE-xxx` | Fetch next solution from specified queue (**--queue required**) |
| `ccw issue done <id>` | Mark solution complete with result (auto-detects queue) |
| `ccw issue done <id> --fail --reason "..."` | Mark solution failed with structured reason |
| `ccw issue retry --queue QUE-xxx` | Reset failed items in specific queue |
## Start Execution
Begin by running:
**Step 0: Validate Queue ID**
If `--queue` was NOT provided in the command arguments:
1. Run `ccw issue queue list --brief --json`
2. Display available queues to user
3. Ask user to select a queue via `AskUserQuestion`
4. Store selected queue ID for all subsequent commands
**Step 1: Fetch First Solution**
Once queue ID is confirmed, begin by running:
```bash
ccw issue next
ccw issue next --queue <queue-id>
```
Then follow the solution lifecycle for each solution until queue is empty.

View File

@@ -5,6 +5,21 @@ All notable changes to Claude Code Workflow (CCW) will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [6.3.29] - 2026-01-15
### ✨ New Features | 新功能
#### Multi-CLI Task & Discussion Enhancements | 多CLI任务与讨论增强
- **Added**: Internationalization support for multi-CLI tasks and discussion tabs | 多CLI任务和讨论标签的国际化支持
- **Added**: Collapsible sections for discussion and summary tabs with enhanced layout | 讨论和摘要标签的可折叠区域及增强布局
- **Added**: Post-Completion Expansion feature for execution commands | 执行命令的完成后扩展功能
#### Session & UI Improvements | 会话与UI改进
- **Enhanced**: Multi-CLI session handling with improved UI updates | 多CLI会话处理及UI更新优化
- **Refactored**: Code structure for improved readability and maintainability | 代码结构重构以提升可读性和可维护性
---
## [6.3.19] - 2026-01-12
### 🚀 Major New Features | 主要新功能

View File

@@ -281,6 +281,9 @@ CCW provides comprehensive documentation to help you get started quickly and mas
- [**Dashboard Guide**](DASHBOARD_GUIDE.md) - Dashboard user guide and interface overview
- [**Dashboard Operations**](DASHBOARD_OPERATIONS_EN.md) - Detailed operation instructions
### 🔄 **Workflow Guides**
- [**Issue Loop Workflow**](docs/workflows/ISSUE_LOOP_WORKFLOW.md) - Batch issue processing with two-phase lifecycle (accumulate → resolve)
### 🏗️ **Architecture & Design**
- [**Architecture Overview**](ARCHITECTURE.md) - System design and core components
- [**Project Introduction**](PROJECT_INTRODUCTION.md) - Detailed project overview

View File

@@ -177,7 +177,7 @@ export function run(argv: string[]): void {
.option('--model <model>', 'Model override')
.option('--cd <path>', 'Working directory')
.option('--includeDirs <dirs>', 'Additional directories (--include-directories for gemini/qwen, --add-dir for codex/claude)')
.option('--timeout <ms>', 'Timeout in milliseconds (0=disabled, controlled by external caller)', '0')
// --timeout removed - controlled by external caller (bash timeout)
.option('--stream', 'Enable streaming output (default: non-streaming with caching)')
.option('--limit <n>', 'History limit')
.option('--status <status>', 'Filter by status')

View File

@@ -116,7 +116,7 @@ interface CliExecOptions {
model?: string;
cd?: string;
includeDirs?: string;
timeout?: string;
// timeout removed - controlled by external caller (bash timeout)
stream?: boolean; // Enable streaming (default: false, caches output)
resume?: string | boolean; // true = last, string = execution ID, comma-separated for merge
id?: string; // Custom execution ID (e.g., IMPL-001-step1)
@@ -535,7 +535,7 @@ async function statusAction(debug?: boolean): Promise<void> {
* @param {Object} options - CLI options
*/
async function execAction(positionalPrompt: string | undefined, options: CliExecOptions): Promise<void> {
const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, timeout, stream, resume, id, noNative, cache, injectMode, debug } = options;
const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, stream, resume, id, noNative, cache, injectMode, debug } = options;
// Enable debug mode if --debug flag is set
if (debug) {
@@ -842,7 +842,7 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
model,
cd,
includeDirs,
timeout: timeout ? parseInt(timeout, 10) : 0, // 0 = no internal timeout, controlled by external caller
// timeout removed - controlled by external caller (bash timeout)
resume,
id, // custom execution ID
noNative,
@@ -1221,7 +1221,7 @@ export async function cliCommand(
console.log(chalk.gray(' --model <model> Model override'));
console.log(chalk.gray(' --cd <path> Working directory'));
console.log(chalk.gray(' --includeDirs <dirs> Additional directories'));
console.log(chalk.gray(' --timeout <ms> Timeout (default: 0=disabled)'));
// --timeout removed - controlled by external caller (bash timeout)
console.log(chalk.gray(' --resume [id] Resume previous session'));
console.log(chalk.gray(' --cache <items> Cache: comma-separated @patterns and text'));
console.log(chalk.gray(' --inject-mode <m> Inject mode: none, full, progressive'));
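With the internal `--timeout` option removed, the deadline belongs to the external caller (the in-diff comments point at an external bash `timeout`). An equivalent Node-side sketch, with illustrative `ccw` arguments rather than a verified command line:

```typescript
import { spawn } from "node:child_process";

// Illustrative only: enforce a 5-minute budget from outside the ccw process.
const child = spawn("ccw", ["cli", "exec", "analyze this module"], { stdio: "inherit" });
const timer = setTimeout(() => child.kill("SIGTERM"), 5 * 60 * 1000);
child.on("exit", () => clearTimeout(timer));
```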

View File

@@ -589,6 +589,18 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
const statistics = (projectData.statistics || developmentStatus?.statistics) as Record<string, unknown> | undefined;
const metadata = projectData._metadata as Record<string, unknown> | undefined;
// Helper to extract string array from mixed array (handles both string[] and {name: string}[])
const extractStringArray = (arr: unknown[] | undefined): string[] => {
if (!arr) return [];
return arr.map(item => {
if (typeof item === 'string') return item;
if (typeof item === 'object' && item !== null && 'name' in item) {
return String((item as { name: unknown }).name);
}
return String(item);
});
};
// Load guidelines from separate file if exists
let guidelines: ProjectGuidelines | null = null;
if (existsSync(guidelinesFile)) {
@@ -633,17 +645,17 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
description: (overview?.description as string) || '',
initializedAt: (projectData.initialized_at as string) || null,
technologyStack: {
languages: (technologyStack?.languages as string[]) || [],
frameworks: (technologyStack?.frameworks as string[]) || [],
build_tools: (technologyStack?.build_tools as string[]) || [],
test_frameworks: (technologyStack?.test_frameworks as string[]) || []
languages: extractStringArray(technologyStack?.languages),
frameworks: extractStringArray(technologyStack?.frameworks),
build_tools: extractStringArray(technologyStack?.build_tools),
test_frameworks: extractStringArray(technologyStack?.test_frameworks)
},
architecture: {
style: (architecture?.style as string) || 'Unknown',
layers: (architecture?.layers as string[]) || [],
patterns: (architecture?.patterns as string[]) || []
layers: extractStringArray(architecture?.layers as unknown[] | undefined),
patterns: extractStringArray(architecture?.patterns as unknown[] | undefined)
},
keyComponents: (overview?.key_components as string[]) || [],
keyComponents: extractStringArray(overview?.key_components as unknown[] | undefined),
features: (projectData.features as unknown[]) || [],
developmentIndex: {
feature: (developmentIndex?.feature as unknown[]) || [],
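As a standalone sketch of the helper's behaviour with the mixed shapes it is meant to absorb (sample values are invented):

```typescript
// Mirrors extractStringArray above: strings pass through, objects contribute
// their `name`, and anything else is stringified.
const extractStringArray = (arr: unknown[] | undefined): string[] => {
  if (!arr) return [];
  return arr.map(item => {
    if (typeof item === 'string') return item;
    if (typeof item === 'object' && item !== null && 'name' in item) {
      return String((item as { name: unknown }).name);
    }
    return String(item);
  });
};

extractStringArray([{ name: 'TypeScript', file_count: 120, primary: true }, 'CSS']);
// => ['TypeScript', 'CSS']
extractStringArray(undefined);
// => []
```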

View File

@@ -238,10 +238,11 @@ async function scanMultiCliDir(dir: string): Promise<MultiCliSession[]> {
.map(async (entry) => {
const sessionPath = join(dir, entry.name);
const [createdAt, syntheses, sessionState] = await Promise.all([
const [createdAt, syntheses, sessionState, planJson] = await Promise.all([
getCreatedTime(sessionPath),
loadRoundSyntheses(sessionPath),
loadSessionState(sessionPath),
loadPlanJson(sessionPath),
]);
// Extract data from syntheses
@@ -258,13 +259,20 @@ async function scanMultiCliDir(dir: string): Promise<MultiCliSession[]> {
const status = sessionState?.status ||
(latestSynthesis?.convergence?.recommendation === 'converged' ? 'converged' : 'analyzing');
// Use plan.json if available, otherwise extract from synthesis
const plan = planJson || latestSynthesis;
// Use tasks from plan.json if available, otherwise extract from synthesis
const tasks = (planJson as any)?.tasks?.length > 0
? normalizePlanJsonTasks((planJson as any).tasks)
: extractTasksFromSyntheses(syntheses);
const session: MultiCliSession = {
id: entry.name,
type: 'multi-cli-plan',
path: sessionPath,
createdAt,
plan: latestSynthesis,
tasks: extractTasksFromSyntheses(syntheses),
plan,
tasks,
progress,
// Extended multi-cli specific fields
roundCount,
@@ -548,6 +556,53 @@ function normalizeSolutionTask(task: SolutionTask, solution: Solution): Normaliz
};
}
/**
* Normalize tasks from plan.json format to NormalizedTask[]
* plan.json tasks have: id, name, description, depends_on, status, files, key_point, acceptance_criteria
* @param tasks - Tasks array from plan.json
* @returns Normalized tasks
*/
function normalizePlanJsonTasks(tasks: unknown[]): NormalizedTask[] {
if (!Array.isArray(tasks)) return [];
return tasks.map((task: any): NormalizedTask | null => {
if (!task || !task.id) return null;
return {
id: task.id,
title: task.name || task.title || 'Untitled Task',
status: task.status || 'pending',
meta: {
type: 'implementation',
agent: null,
scope: task.scope || null,
module: null
},
context: {
requirements: task.description ? [task.description] : (task.key_point ? [task.key_point] : []),
focus_paths: task.files?.map((f: any) => typeof f === 'string' ? f : f.file) || [],
acceptance: task.acceptance_criteria || [],
depends_on: task.depends_on || []
},
flow_control: {
implementation_approach: task.files?.map((f: any, i: number) => {
const filePath = typeof f === 'string' ? f : f.file;
const action = typeof f === 'string' ? 'modify' : f.action;
const line = typeof f === 'string' ? null : f.line;
return {
step: `Step ${i + 1}`,
action: `${action} ${filePath}${line ? ` at line ${line}` : ''}`
};
}) || []
},
_raw: {
task,
estimated_complexity: task.estimated_complexity
}
};
}).filter((task): task is NormalizedTask => task !== null);
}
/**
* Load plan.json or fix-plan.json from session directory
* @param sessionPath - Session directory path
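To illustrate the mapping above, a plan.json task such as the following (invented values) would normalize roughly as shown in the trailing comment:

```typescript
// Invented sample using the fields read by normalizePlanJsonTasks.
const planJsonTask = {
  id: "T1",
  name: "Add deactivate endpoint",
  description: "Expose POST /api/queue/deactivate",
  depends_on: [],
  status: "pending",
  files: [{ file: "src/routes/issues.ts", action: "modify", line: 412 }],
  acceptance_criteria: ["Returns previous_active_id"]
};

// Expected normalized result (abridged):
// {
//   id: "T1",
//   title: "Add deactivate endpoint",
//   status: "pending",
//   context: {
//     requirements: ["Expose POST /api/queue/deactivate"],
//     focus_paths: ["src/routes/issues.ts"],
//     acceptance: ["Returns previous_active_id"],
//     depends_on: []
//   },
//   flow_control: {
//     implementation_approach: [
//       { step: "Step 1", action: "modify src/routes/issues.ts at line 412" }
//     ]
//   }
// }
```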

View File

@@ -23,7 +23,7 @@
* - POST /api/queue/reorder - Reorder queue items
*/
import { readFileSync, existsSync, writeFileSync, mkdirSync, unlinkSync } from 'fs';
import { join } from 'path';
import { join, resolve, normalize } from 'path';
import type { RouteContext } from './types.js';
// ========== JSONL Helper Functions ==========
@@ -67,6 +67,12 @@ function readIssueHistoryJsonl(issuesDir: string): any[] {
}
}
function writeIssueHistoryJsonl(issuesDir: string, issues: any[]) {
if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
const historyPath = join(issuesDir, 'issue-history.jsonl');
writeFileSync(historyPath, issues.map(i => JSON.stringify(i)).join('\n'));
}
function writeSolutionsJsonl(issuesDir: string, issueId: string, solutions: any[]) {
const solutionsDir = join(issuesDir, 'solutions');
if (!existsSync(solutionsDir)) mkdirSync(solutionsDir, { recursive: true });
@@ -156,7 +162,30 @@ function writeQueue(issuesDir: string, queue: any) {
function getIssueDetail(issuesDir: string, issueId: string) {
const issues = readIssuesJsonl(issuesDir);
const issue = issues.find(i => i.id === issueId);
let issue = issues.find(i => i.id === issueId);
// Fallback: Reconstruct issue from solution file if issue not in issues.jsonl
if (!issue) {
const solutionPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
if (existsSync(solutionPath)) {
const solutions = readSolutionsJsonl(issuesDir, issueId);
if (solutions.length > 0) {
const boundSolution = solutions.find(s => s.is_bound) || solutions[0];
issue = {
id: issueId,
title: boundSolution?.description || issueId,
status: 'completed',
priority: 3,
context: boundSolution?.approach || '',
bound_solution_id: boundSolution?.id || null,
created_at: boundSolution?.created_at || new Date().toISOString(),
updated_at: new Date().toISOString(),
_reconstructed: true
};
}
}
}
if (!issue) return null;
const solutions = readSolutionsJsonl(issuesDir, issueId);
@@ -254,11 +283,46 @@ function bindSolutionToIssue(issuesDir: string, issueId: string, solutionId: str
return { success: true, bound: solutionId };
}
// ========== Path Validation ==========
/**
* Validate that the provided path is safe (no path traversal)
* Returns the resolved, normalized path or null if invalid
*/
function validateProjectPath(requestedPath: string, basePath: string): string | null {
if (!requestedPath) return basePath;
// Resolve to absolute path and normalize
const resolvedPath = resolve(normalize(requestedPath));
const resolvedBase = resolve(normalize(basePath));
// For local development tool, we allow any absolute path
// but prevent obvious traversal attempts
if (requestedPath.includes('..') && !resolvedPath.startsWith(resolvedBase)) {
// Check if it's trying to escape with ..
const normalizedRequested = normalize(requestedPath);
if (normalizedRequested.startsWith('..')) {
return null;
}
}
return resolvedPath;
}
// ========== Route Handler ==========
export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
const { pathname, url, req, res, initialPath, handlePostRequest } = ctx;
const projectPath = url.searchParams.get('path') || initialPath;
const rawProjectPath = url.searchParams.get('path') || initialPath;
// Validate project path to prevent path traversal
const projectPath = validateProjectPath(rawProjectPath, initialPath);
if (!projectPath) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Invalid project path' }));
return true;
}
const issuesDir = join(projectPath, '.workflow', 'issues');
// ===== Queue Routes (top-level /api/queue) =====
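A rough behaviour sketch of the validation above, with invented POSIX paths:

```typescript
// validateProjectPath with invented inputs:
validateProjectPath("", "/home/user/project");
// => "/home/user/project"   (empty path falls back to the base)
validateProjectPath("/tmp/other-project", "/home/user/project");
// => "/tmp/other-project"   (absolute paths are accepted for this local tool)
validateProjectPath("../../etc", "/home/user/project");
// => null                   (traversal attempts that start with ".." are rejected)
```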
@@ -295,7 +359,8 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
// GET /api/queue/:id - Get specific queue by ID
const queueDetailMatch = pathname.match(/^\/api\/queue\/([^/]+)$/);
if (queueDetailMatch && req.method === 'GET' && queueDetailMatch[1] !== 'history' && queueDetailMatch[1] !== 'reorder') {
const reservedQueuePaths = ['history', 'reorder', 'switch', 'deactivate', 'merge'];
if (queueDetailMatch && req.method === 'GET' && !reservedQueuePaths.includes(queueDetailMatch[1])) {
const queueId = queueDetailMatch[1];
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
@@ -347,6 +412,29 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// POST /api/queue/deactivate - Deactivate current queue (set active to null)
if (pathname === '/api/queue/deactivate' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const queuesDir = join(issuesDir, 'queues');
const indexPath = join(queuesDir, 'index.json');
try {
const index = existsSync(indexPath)
? JSON.parse(readFileSync(indexPath, 'utf8'))
: { active_queue_id: null, queues: [] };
const previousActiveId = index.active_queue_id;
index.active_queue_id = null;
writeFileSync(indexPath, JSON.stringify(index, null, 2));
return { success: true, previous_active_id: previousActiveId };
} catch (err) {
return { error: 'Failed to deactivate queue' };
}
});
return true;
}
// POST /api/queue/reorder - Reorder queue items (supports both solutions and tasks)
if (pathname === '/api/queue/reorder' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
@@ -399,6 +487,237 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// DELETE /api/queue/:queueId/item/:itemId - Delete item from queue
const queueItemDeleteMatch = pathname.match(/^\/api\/queue\/([^/]+)\/item\/([^/]+)$/);
if (queueItemDeleteMatch && req.method === 'DELETE') {
const queueId = queueItemDeleteMatch[1];
const itemId = decodeURIComponent(queueItemDeleteMatch[2]);
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
if (!existsSync(queueFilePath)) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Queue ${queueId} not found` }));
return true;
}
try {
const queue = JSON.parse(readFileSync(queueFilePath, 'utf8'));
const items = queue.solutions || queue.tasks || [];
const filteredItems = items.filter((item: any) => item.item_id !== itemId);
if (filteredItems.length === items.length) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Item ${itemId} not found in queue` }));
return true;
}
// Update queue items
if (queue.solutions) {
queue.solutions = filteredItems;
} else {
queue.tasks = filteredItems;
}
// Recalculate metadata
const completedCount = filteredItems.filter((i: any) => i.status === 'completed').length;
queue._metadata = {
...queue._metadata,
updated_at: new Date().toISOString(),
...(queue.solutions
? { total_solutions: filteredItems.length, completed_solutions: completedCount }
: { total_tasks: filteredItems.length, completed_tasks: completedCount })
};
writeFileSync(queueFilePath, JSON.stringify(queue, null, 2));
// Update index counts
const indexPath = join(queuesDir, 'index.json');
if (existsSync(indexPath)) {
try {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
const queueEntry = index.queues?.find((q: any) => q.id === queueId);
if (queueEntry) {
if (queue.solutions) {
queueEntry.total_solutions = filteredItems.length;
queueEntry.completed_solutions = completedCount;
} else {
queueEntry.total_tasks = filteredItems.length;
queueEntry.completed_tasks = completedCount;
}
writeFileSync(indexPath, JSON.stringify(index, null, 2));
}
} catch (err) {
console.error('Failed to update queue index:', err);
}
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, queueId, deletedItemId: itemId }));
} catch (err) {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Failed to delete item' }));
}
return true;
}
// DELETE /api/queue/:queueId - Delete entire queue
const queueDeleteMatch = pathname.match(/^\/api\/queue\/([^/]+)$/);
if (queueDeleteMatch && req.method === 'DELETE') {
const queueId = queueDeleteMatch[1];
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
const indexPath = join(queuesDir, 'index.json');
if (!existsSync(queueFilePath)) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Queue ${queueId} not found` }));
return true;
}
try {
// Delete queue file
unlinkSync(queueFilePath);
// Update index
if (existsSync(indexPath)) {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
// Remove from queues array
index.queues = (index.queues || []).filter((q: any) => q.id !== queueId);
// Clear active if this was the active queue
if (index.active_queue_id === queueId) {
index.active_queue_id = null;
}
writeFileSync(indexPath, JSON.stringify(index, null, 2));
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, deletedQueueId: queueId }));
} catch (err) {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Failed to delete queue' }));
}
return true;
}
// POST /api/queue/merge - Merge source queue into target queue
if (pathname === '/api/queue/merge' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const { sourceQueueId, targetQueueId } = body;
if (!sourceQueueId || !targetQueueId) {
return { error: 'sourceQueueId and targetQueueId required' };
}
if (sourceQueueId === targetQueueId) {
return { error: 'Cannot merge queue into itself' };
}
const queuesDir = join(issuesDir, 'queues');
const sourcePath = join(queuesDir, `${sourceQueueId}.json`);
const targetPath = join(queuesDir, `${targetQueueId}.json`);
if (!existsSync(sourcePath)) return { error: `Source queue ${sourceQueueId} not found` };
if (!existsSync(targetPath)) return { error: `Target queue ${targetQueueId} not found` };
try {
const sourceQueue = JSON.parse(readFileSync(sourcePath, 'utf8'));
const targetQueue = JSON.parse(readFileSync(targetPath, 'utf8'));
const sourceItems = sourceQueue.solutions || sourceQueue.tasks || [];
const targetItems = targetQueue.solutions || targetQueue.tasks || [];
const isSolutionBased = !!targetQueue.solutions;
// Re-index source items to avoid ID conflicts
const maxOrder = targetItems.reduce((max: number, i: any) => Math.max(max, i.execution_order || 0), 0);
const reindexedSourceItems = sourceItems.map((item: any, idx: number) => ({
...item,
item_id: `${item.item_id}-merged`,
execution_order: maxOrder + idx + 1,
execution_group: item.execution_group ? `M-${item.execution_group}` : 'M-ungrouped'
}));
// Merge items
const mergedItems = [...targetItems, ...reindexedSourceItems];
if (isSolutionBased) {
targetQueue.solutions = mergedItems;
} else {
targetQueue.tasks = mergedItems;
}
// Merge issue_ids
const mergedIssueIds = [...new Set([
...(targetQueue.issue_ids || []),
...(sourceQueue.issue_ids || [])
])];
targetQueue.issue_ids = mergedIssueIds;
// Update metadata
const completedCount = mergedItems.filter((i: any) => i.status === 'completed').length;
targetQueue._metadata = {
...targetQueue._metadata,
updated_at: new Date().toISOString(),
...(isSolutionBased
? { total_solutions: mergedItems.length, completed_solutions: completedCount }
: { total_tasks: mergedItems.length, completed_tasks: completedCount })
};
// Write merged queue
writeFileSync(targetPath, JSON.stringify(targetQueue, null, 2));
// Update source queue status
sourceQueue.status = 'merged';
sourceQueue._metadata = {
...sourceQueue._metadata,
merged_into: targetQueueId,
merged_at: new Date().toISOString()
};
writeFileSync(sourcePath, JSON.stringify(sourceQueue, null, 2));
// Update index
const indexPath = join(queuesDir, 'index.json');
if (existsSync(indexPath)) {
try {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
const sourceEntry = index.queues?.find((q: any) => q.id === sourceQueueId);
const targetEntry = index.queues?.find((q: any) => q.id === targetQueueId);
if (sourceEntry) {
sourceEntry.status = 'merged';
}
if (targetEntry) {
if (isSolutionBased) {
targetEntry.total_solutions = mergedItems.length;
targetEntry.completed_solutions = completedCount;
} else {
targetEntry.total_tasks = mergedItems.length;
targetEntry.completed_tasks = completedCount;
}
targetEntry.issue_ids = mergedIssueIds;
}
writeFileSync(indexPath, JSON.stringify(index, null, 2));
} catch {
// Ignore index update errors
}
}
return {
success: true,
sourceQueueId,
targetQueueId,
mergedItemCount: sourceItems.length,
totalItems: mergedItems.length
};
} catch (err) {
return { error: 'Failed to merge queues' };
}
});
return true;
}
// Legacy: GET /api/issues/queue (backward compat)
if (pathname === '/api/issues/queue' && req.method === 'GET') {
const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
@@ -546,6 +865,39 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// POST /api/issues/:id/archive - Archive issue (move to history)
const archiveMatch = pathname.match(/^\/api\/issues\/([^/]+)\/archive$/);
if (archiveMatch && req.method === 'POST') {
const issueId = decodeURIComponent(archiveMatch[1]);
const issues = readIssuesJsonl(issuesDir);
const issueIndex = issues.findIndex(i => i.id === issueId);
if (issueIndex === -1) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Issue not found' }));
return true;
}
// Get the issue and add archive metadata
const issue = issues[issueIndex];
issue.archived_at = new Date().toISOString();
issue.status = 'completed';
// Move to history
const history = readIssueHistoryJsonl(issuesDir);
history.push(issue);
writeIssueHistoryJsonl(issuesDir, history);
// Remove from active issues
issues.splice(issueIndex, 1);
writeIssuesJsonl(issuesDir, issues);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, issueId, archivedAt: issue.archived_at }));
return true;
}
// POST /api/issues/:id/solutions - Add solution
const addSolMatch = pathname.match(/^\/api\/issues\/([^/]+)\/solutions$/);
if (addSolMatch && req.method === 'POST') {
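Tying the new routes together, a hypothetical dashboard-side sequence might look like this (base URL, port, project path, and IDs are assumptions, not values from this diff):

```typescript
// Illustrative only - exercises the endpoints added in this diff.
async function demoQueueAndIssueOps(): Promise<void> {
  const base = "http://localhost:3000";            // assumed dashboard port
  const q = (p: string) => `${base}${p}?path=${encodeURIComponent("/home/user/project")}`;
  const json = { "Content-Type": "application/json" };

  await fetch(q("/api/queue/deactivate"), { method: "POST", headers: json, body: "{}" });
  await fetch(q("/api/queue/QUE-001/item/SOL-003"), { method: "DELETE" });
  await fetch(q("/api/queue/merge"), {
    method: "POST",
    headers: json,
    body: JSON.stringify({ sourceQueueId: "QUE-002", targetQueueId: "QUE-001" }),
  });
  await fetch(q("/api/issues/ISSUE-042/archive"), { method: "POST", headers: json, body: "{}" });
  await fetch(q("/api/queue/QUE-002"), { method: "DELETE" });  // remove the merged-out queue
}
```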

View File

@@ -302,9 +302,14 @@
.collapsible-content {
padding: 1rem;
display: block;
}
.collapsible-content.collapsed {
display: none;
}
/* Legacy .open class support */
.collapsible-content.open {
display: block;
}

View File

@@ -406,6 +406,7 @@
}
.collapsible-content {
display: block;
padding: 1rem;
background: hsl(var(--muted));
}
@@ -1281,7 +1282,7 @@
.multi-cli-status.pending,
.multi-cli-status.exploring,
.multi-cli-status.initialized {
) background: hsl(var(--muted));
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
@@ -3440,6 +3441,309 @@
transform: rotate(-90deg);
}
/* Discussion Round using collapsible-section pattern */
.discussion-round.collapsible-section {
margin-bottom: 0.75rem;
border: 1px solid hsl(var(--border));
border-radius: 8px;
overflow: hidden;
background: hsl(var(--card));
}
.discussion-round.collapsible-section .collapsible-header {
display: flex;
align-items: center;
gap: 0.75rem;
padding: 0.75rem 1rem;
background: hsl(var(--muted) / 0.3);
cursor: pointer;
transition: background-color 0.2s;
}
.discussion-round.collapsible-section .collapsible-header:hover {
background: hsl(var(--muted) / 0.5);
}
.discussion-round.collapsible-section .collapsible-content {
padding: 1rem;
border-top: 1px solid hsl(var(--border) / 0.5);
background: hsl(var(--card));
}
.discussion-round.collapsible-section .collapsible-content.collapsed {
display: none;
}
/* ========== Summary Tab Content ========== */
.summary-tab-content .summary-section {
margin-bottom: 1rem;
padding: 1rem;
border: 1px solid hsl(var(--border));
border-radius: 8px;
background: hsl(var(--card));
}
.summary-section-title {
font-size: 0.9rem;
font-weight: 600;
color: hsl(var(--foreground));
margin-bottom: 0.75rem;
display: flex;
align-items: center;
gap: 0.375rem;
}
.summary-content {
font-size: 0.875rem;
color: hsl(var(--muted-foreground));
line-height: 1.6;
}
.convergence-info {
display: flex;
align-items: center;
gap: 0.75rem;
}
.convergence-level {
font-size: 0.75rem;
padding: 0.25rem 0.5rem;
border-radius: 4px;
text-transform: capitalize;
background: hsl(var(--muted));
}
.convergence-level.full { background: hsl(var(--success) / 0.15); color: hsl(var(--success)); }
.convergence-level.partial { background: hsl(var(--warning) / 0.15); color: hsl(var(--warning)); }
.convergence-level.low { background: hsl(var(--error) / 0.15); color: hsl(var(--error)); }
.convergence-rec {
font-size: 0.75rem;
padding: 0.25rem 0.5rem;
border-radius: 4px;
text-transform: capitalize;
background: hsl(var(--info) / 0.15);
color: hsl(var(--info));
}
.convergence-rec.converged { background: hsl(var(--success) / 0.15); color: hsl(var(--success)); }
.convergence-rec.continue { background: hsl(var(--info) / 0.15); color: hsl(var(--info)); }
/* Summary collapsible Solutions section */
.summary-section.collapsible-section {
padding: 0;
overflow: hidden;
}
.summary-section.collapsible-section .collapsible-header {
padding: 0.75rem 1rem;
background: hsl(var(--card));
border-bottom: 1px solid transparent;
}
.summary-section.collapsible-section .collapsible-header:hover {
background: hsl(var(--muted) / 0.5);
}
.summary-section.collapsible-section .collapsible-content {
padding: 1rem;
background: hsl(var(--muted) / 0.3);
border-top: 1px solid hsl(var(--border) / 0.5);
}
.solution-summary-item {
display: flex;
align-items: center;
gap: 0.75rem;
padding: 0.5rem 0;
border-bottom: 1px solid hsl(var(--border) / 0.3);
}
.solution-summary-item:last-child {
border-bottom: none;
}
.solution-num {
font-size: 0.75rem;
font-weight: 600;
color: hsl(var(--primary));
min-width: 1.5rem;
}
.solution-name {
flex: 1;
font-size: 0.875rem;
}
.feasibility-badge {
font-size: 0.7rem;
padding: 0.125rem 0.375rem;
border-radius: 4px;
background: hsl(var(--success) / 0.15);
color: hsl(var(--success));
}
/* ========== Context Tab Content (Multi-CLI) ========== */
.context-tab-content {
display: flex;
flex-direction: column;
gap: 1rem;
padding: 1rem;
}
.context-tab-content .context-section {
padding: 1rem;
border: 1px solid hsl(var(--border));
border-radius: 8px;
background: hsl(var(--card));
}
.context-tab-content .context-section-title {
font-size: 0.9rem;
font-weight: 600;
color: hsl(var(--foreground));
margin-bottom: 0.75rem;
display: flex;
align-items: center;
gap: 0.375rem;
}
.context-tab-content .context-description {
font-size: 0.875rem;
color: hsl(var(--muted-foreground));
line-height: 1.6;
margin: 0;
}
.context-tab-content .constraints-list {
margin: 0;
padding-left: 1.25rem;
font-size: 0.875rem;
color: hsl(var(--muted-foreground));
}
.context-tab-content .constraints-list li {
margin-bottom: 0.375rem;
}
.context-tab-content .path-tags {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
}
.context-tab-content .path-tag {
font-family: monospace;
font-size: 0.75rem;
padding: 0.25rem 0.5rem;
background: hsl(var(--muted));
border-radius: 4px;
color: hsl(var(--foreground));
}
.context-tab-content .session-id-code {
font-family: monospace;
font-size: 0.8rem;
padding: 0.5rem 0.75rem;
background: hsl(var(--muted));
border-radius: 4px;
display: inline-block;
}
/* Context tab collapsible sections */
.context-tab-content .context-section.collapsible-section {
padding: 0;
overflow: hidden;
}
.context-tab-content .context-section.collapsible-section .collapsible-header {
padding: 0.75rem 1rem;
background: hsl(var(--card));
}
.context-tab-content .context-section.collapsible-section .collapsible-header:hover {
background: hsl(var(--muted) / 0.5);
}
.context-tab-content .context-section.collapsible-section .collapsible-content {
padding: 1rem;
background: hsl(var(--muted) / 0.3);
border-top: 1px solid hsl(var(--border) / 0.5);
}
.context-tab-content .files-list {
margin: 0;
padding: 0;
list-style: none;
}
.context-tab-content .file-item {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.375rem 0;
border-bottom: 1px solid hsl(var(--border) / 0.3);
font-size: 0.8rem;
}
.context-tab-content .file-item:last-child {
border-bottom: none;
}
.context-tab-content .file-icon {
flex-shrink: 0;
}
.context-tab-content .file-item code {
font-family: monospace;
font-size: 0.75rem;
background: hsl(var(--muted));
padding: 0.125rem 0.375rem;
border-radius: 3px;
}
.context-tab-content .file-reason {
color: hsl(var(--muted-foreground));
font-size: 0.75rem;
margin-left: auto;
}
.context-tab-content .deps-list {
margin: 0;
padding-left: 1.25rem;
font-size: 0.8rem;
color: hsl(var(--foreground));
}
.context-tab-content .deps-list li {
margin-bottom: 0.25rem;
}
.context-tab-content .risks-list {
margin: 0;
padding-left: 1.25rem;
}
.context-tab-content .risk-item {
font-size: 0.875rem;
color: hsl(var(--warning));
margin-bottom: 0.375rem;
}
.context-tab-content .json-content {
font-family: monospace;
font-size: 0.75rem;
line-height: 1.5;
margin: 0;
white-space: pre-wrap;
word-break: break-all;
max-height: 400px;
overflow-y: auto;
background: hsl(var(--background));
padding: 0.75rem;
border-radius: 4px;
}
/* ========== Association Section Styles ========== */
.association-section {
margin-bottom: 1.5rem;
@@ -3621,3 +3925,328 @@
}
}
/* ===================================
Multi-CLI Plan Summary Section
=================================== */
/* Plan Summary Section - card-like styling */
.plan-summary-section {
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
padding: 1rem 1.25rem;
margin-bottom: 1.25rem;
}
.plan-summary-section:hover {
border-color: hsl(var(--purple, 280 60% 50%) / 0.3);
}
/* Plan text styles */
.plan-summary-text,
.plan-solution-text,
.plan-approach-text {
font-size: 0.875rem;
line-height: 1.6;
color: hsl(var(--foreground));
margin: 0 0 0.75rem 0;
}
.plan-summary-text:last-child,
.plan-solution-text:last-child,
.plan-approach-text:last-child {
margin-bottom: 0;
}
.plan-summary-text strong,
.plan-solution-text strong,
.plan-approach-text strong {
color: hsl(var(--muted-foreground));
font-weight: 600;
margin-right: 0.5rem;
}
/* Plan meta badges container */
.plan-meta-badges {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
margin-top: 0.75rem;
padding-top: 0.75rem;
border-top: 1px solid hsl(var(--border) / 0.5);
}
/* Feasibility badge */
.feasibility-badge {
display: inline-flex;
align-items: center;
padding: 0.25rem 0.625rem;
background: hsl(var(--primary) / 0.1);
color: hsl(var(--primary));
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 500;
}
/* Effort badge variants */
.effort-badge {
display: inline-flex;
align-items: center;
padding: 0.25rem 0.625rem;
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 500;
}
.effort-badge.low {
background: hsl(var(--success-light, 142 70% 95%));
color: hsl(var(--success, 142 70% 45%));
}
.effort-badge.medium {
background: hsl(var(--warning-light, 45 90% 95%));
color: hsl(var(--warning, 45 90% 40%));
}
.effort-badge.high {
background: hsl(var(--destructive) / 0.1);
color: hsl(var(--destructive));
}
/* Complexity badge */
.complexity-badge {
display: inline-flex;
align-items: center;
padding: 0.25rem 0.625rem;
background: hsl(var(--muted));
color: hsl(var(--foreground));
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 500;
}
/* Time badge */
.time-badge {
display: inline-flex;
align-items: center;
padding: 0.25rem 0.625rem;
background: hsl(var(--info-light, 220 80% 95%));
color: hsl(var(--info, 220 80% 55%));
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 500;
}
/* ===================================
Multi-CLI Task Item Additional Badges
=================================== */
/* Files meta badge */
.meta-badge.files {
background: hsl(var(--purple, 280 60% 50%) / 0.1);
color: hsl(var(--purple, 280 60% 50%));
}
/* Depends meta badge */
.meta-badge.depends {
background: hsl(var(--info-light, 220 80% 95%));
color: hsl(var(--info, 220 80% 55%));
}
/* Multi-CLI Task Item Full - enhanced padding */
.detail-task-item-full.multi-cli-task-item {
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
padding: 0.875rem 1rem;
transition: all 0.2s ease;
border-left: 3px solid hsl(var(--primary) / 0.5);
}
.detail-task-item-full.multi-cli-task-item:hover {
border-color: hsl(var(--primary) / 0.4);
border-left-color: hsl(var(--primary));
box-shadow: 0 2px 8px hsl(var(--primary) / 0.1);
background: hsl(var(--hover));
}
/* Task ID badge enhancement */
.task-id-badge {
display: inline-flex;
align-items: center;
justify-content: center;
min-width: 2.5rem;
padding: 0.25rem 0.5rem;
background: hsl(var(--purple, 280 60% 50%));
color: white;
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 600;
flex-shrink: 0;
}
/* Tasks list container */
.tasks-list {
display: flex;
flex-direction: column;
gap: 0.625rem;
}
/* Plan section styling (for Plan tab) */
.plan-section {
background: hsl(var(--muted) / 0.3);
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
padding: 1rem;
margin-bottom: 1rem;
}
.plan-section:last-child {
margin-bottom: 0;
}
.plan-section-title {
font-size: 0.9rem;
font-weight: 600;
color: hsl(var(--foreground));
margin-bottom: 0.75rem;
display: flex;
align-items: center;
gap: 0.5rem;
}
.plan-tab-content {
display: flex;
flex-direction: column;
gap: 0;
}
.tasks-tab-content {
display: flex;
flex-direction: column;
gap: 1rem;
}
/* ===================================
Plan Summary Meta Badges
=================================== */
/* Base meta badge style (plan summary) */
.plan-meta-badges .meta-badge {
display: inline-block;
padding: 0.25rem 0.625rem;
border-radius: 0.375rem;
font-size: 0.75rem;
font-weight: 500;
white-space: nowrap;
}
/* Feasibility badge */
.meta-badge.feasibility {
background: hsl(var(--success) / 0.15);
color: hsl(var(--success));
border: 1px solid hsl(var(--success) / 0.3);
}
/* Effort badges */
.meta-badge.effort {
background: hsl(var(--muted));
color: hsl(var(--foreground));
}
.meta-badge.effort.low {
background: hsl(142 70% 50% / 0.15);
color: hsl(142 70% 35%);
}
.meta-badge.effort.medium {
background: hsl(30 90% 50% / 0.15);
color: hsl(30 90% 40%);
}
.meta-badge.effort.high {
background: hsl(0 70% 50% / 0.15);
color: hsl(0 70% 45%);
}
/* Risk badges */
.meta-badge.risk {
background: hsl(var(--muted));
color: hsl(var(--foreground));
}
.meta-badge.risk.low {
background: hsl(142 70% 50% / 0.15);
color: hsl(142 70% 35%);
}
.meta-badge.risk.medium {
background: hsl(30 90% 50% / 0.15);
color: hsl(30 90% 40%);
}
.meta-badge.risk.high {
background: hsl(0 70% 50% / 0.15);
color: hsl(0 70% 45%);
}
/* Severity badges */
.meta-badge.severity {
background: hsl(var(--muted));
color: hsl(var(--foreground));
}
.meta-badge.severity.low {
background: hsl(142 70% 50% / 0.15);
color: hsl(142 70% 35%);
}
.meta-badge.severity.medium {
background: hsl(30 90% 50% / 0.15);
color: hsl(30 90% 40%);
}
.meta-badge.severity.high,
.meta-badge.severity.critical {
background: hsl(0 70% 50% / 0.15);
color: hsl(0 70% 45%);
}
/* Complexity badge */
.meta-badge.complexity {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
}
/* Time badge */
.meta-badge.time {
background: hsl(220 80% 50% / 0.15);
color: hsl(220 80% 45%);
}
/* Task item action badge */
.meta-badge.action {
background: hsl(var(--primary) / 0.15);
color: hsl(var(--primary));
}
/* Task item scope badge */
.meta-badge.scope {
background: hsl(var(--muted));
color: hsl(var(--muted-foreground));
font-family: var(--font-mono);
font-size: 0.7rem;
}
/* Task item impl steps badge */
.meta-badge.impl {
background: hsl(280 60% 50% / 0.1);
color: hsl(280 60% 50%);
}
/* Task item acceptance criteria badge */
.meta-badge.accept {
background: hsl(var(--success) / 0.1);
color: hsl(var(--success));
}

View File

@@ -429,14 +429,16 @@
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
overflow: hidden;
margin-bottom: 1rem;
box-shadow: 0 1px 3px hsl(var(--foreground) / 0.04);
}
.queue-group-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 0.75rem 1rem;
background: hsl(var(--muted) / 0.5);
padding: 0.875rem 1.25rem;
background: hsl(var(--muted) / 0.3);
border-bottom: 1px solid hsl(var(--border));
}
@@ -1256,6 +1258,68 @@
color: hsl(var(--destructive));
}
/* Search Highlight */
.search-highlight {
background: hsl(45 93% 47% / 0.3);
color: inherit;
padding: 0 2px;
border-radius: 2px;
font-weight: 500;
}
/* Search Suggestions Dropdown */
.search-suggestions {
position: absolute;
top: 100%;
left: 0;
right: 0;
margin-top: 0.25rem;
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
box-shadow: 0 4px 12px hsl(var(--foreground) / 0.1);
max-height: 300px;
overflow-y: auto;
z-index: 50;
display: none;
}
.search-suggestions.show {
display: block;
}
.search-suggestion-item {
padding: 0.625rem 0.875rem;
cursor: pointer;
border-bottom: 1px solid hsl(var(--border) / 0.5);
transition: background 0.15s ease;
}
.search-suggestion-item:hover,
.search-suggestion-item.selected {
background: hsl(var(--muted));
}
.search-suggestion-item:last-child {
border-bottom: none;
}
.suggestion-id {
font-family: var(--font-mono);
font-size: 0.7rem;
color: hsl(var(--muted-foreground));
margin-bottom: 0.125rem;
}
.suggestion-title {
font-size: 0.8125rem;
color: hsl(var(--foreground));
line-height: 1.3;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
/* ==========================================
CREATE BUTTON
========================================== */
@@ -1780,61 +1844,147 @@
}
.queue-items {
padding: 0.75rem;
padding: 1rem;
display: flex;
flex-direction: column;
gap: 0.5rem;
gap: 0.75rem;
}
/* Parallel items use CSS Grid for uniform sizing */
.queue-items.parallel {
flex-direction: row;
flex-wrap: wrap;
display: grid;
grid-template-columns: repeat(auto-fill, minmax(130px, 1fr));
gap: 0.75rem;
}
.queue-items.parallel .queue-item {
flex: 1;
min-width: 200px;
display: grid;
grid-template-areas:
"id id delete"
"issue issue issue"
"solution solution solution";
grid-template-columns: 1fr 1fr auto;
grid-template-rows: auto auto 1fr;
align-items: start;
padding: 0.75rem;
min-height: 90px;
gap: 0.25rem;
}
/* Card content layout */
.queue-items.parallel .queue-item .queue-item-id {
grid-area: id;
font-size: 0.875rem;
font-weight: 700;
color: hsl(var(--foreground));
}
.queue-items.parallel .queue-item .queue-item-issue {
grid-area: issue;
font-size: 0.6875rem;
color: hsl(var(--muted-foreground));
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
line-height: 1.3;
}
.queue-items.parallel .queue-item .queue-item-solution {
grid-area: solution;
display: flex;
align-items: center;
gap: 0.25rem;
font-size: 0.75rem;
font-weight: 500;
color: hsl(var(--foreground));
align-self: end;
}
/* Hide extra elements in parallel view */
.queue-items.parallel .queue-item .queue-item-files,
.queue-items.parallel .queue-item .queue-item-priority,
.queue-items.parallel .queue-item .queue-item-deps,
.queue-items.parallel .queue-item .queue-item-task {
display: none;
}
/* Delete button positioned in corner */
.queue-items.parallel .queue-item .queue-item-delete {
grid-area: delete;
justify-self: end;
padding: 0.125rem;
opacity: 0;
}
.queue-group-type {
display: flex;
display: inline-flex;
align-items: center;
gap: 0.375rem;
font-size: 0.875rem;
font-weight: 600;
padding: 0.25rem 0.625rem;
border-radius: 0.375rem;
}
.queue-group-type.parallel {
color: hsl(142 71% 45%);
color: hsl(142 71% 40%);
background: hsl(142 71% 45% / 0.1);
}
.queue-group-type.sequential {
color: hsl(262 83% 58%);
color: hsl(262 83% 50%);
background: hsl(262 83% 58% / 0.1);
}
/* Queue Item Status Colors */
/* Queue Item Status Colors - Enhanced visual distinction */
/* Pending - Default subtle state */
.queue-item.pending,
.queue-item:not(.ready):not(.executing):not(.completed):not(.failed):not(.blocked) {
border-color: hsl(var(--border));
background: hsl(var(--card));
}
/* Ready - Blue tint, ready to execute */
.queue-item.ready {
border-color: hsl(199 89% 48%);
background: hsl(199 89% 48% / 0.06);
border-left: 3px solid hsl(199 89% 48%);
}
/* Executing - Amber with pulse animation */
.queue-item.executing {
border-color: hsl(45 93% 47%);
background: hsl(45 93% 47% / 0.05);
border-color: hsl(38 92% 50%);
background: hsl(38 92% 50% / 0.08);
border-left: 3px solid hsl(38 92% 50%);
animation: executing-pulse 2s ease-in-out infinite;
}
@keyframes executing-pulse {
0%, 100% { box-shadow: 0 0 0 0 hsl(38 92% 50% / 0.3); }
50% { box-shadow: 0 0 8px 2px hsl(38 92% 50% / 0.2); }
}
/* Completed - Green success state */
.queue-item.completed {
border-color: hsl(var(--success));
background: hsl(var(--success) / 0.05);
border-color: hsl(142 71% 45%);
background: hsl(142 71% 45% / 0.06);
border-left: 3px solid hsl(142 71% 45%);
}
/* Failed - Red error state */
.queue-item.failed {
border-color: hsl(var(--destructive));
background: hsl(var(--destructive) / 0.05);
border-color: hsl(0 84% 60%);
background: hsl(0 84% 60% / 0.06);
border-left: 3px solid hsl(0 84% 60%);
}
/* Blocked - Purple/violet blocked state */
.queue-item.blocked {
border-color: hsl(262 83% 58%);
opacity: 0.7;
background: hsl(262 83% 58% / 0.05);
border-left: 3px solid hsl(262 83% 58%);
opacity: 0.8;
}
/* Priority indicator */
@@ -2236,61 +2386,89 @@
flex-direction: column;
align-items: center;
justify-content: center;
padding: 0.75rem 1rem;
background: hsl(var(--muted) / 0.3);
padding: 1rem 1.25rem;
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
border-radius: 0.75rem;
text-align: center;
transition: all 0.2s ease;
}
.queue-stat-card:hover {
transform: translateY(-1px);
box-shadow: 0 2px 8px hsl(var(--foreground) / 0.06);
}
.queue-stat-card .queue-stat-value {
font-size: 1.5rem;
font-size: 1.75rem;
font-weight: 700;
color: hsl(var(--foreground));
line-height: 1.2;
}
.queue-stat-card .queue-stat-label {
font-size: 0.75rem;
font-size: 0.6875rem;
color: hsl(var(--muted-foreground));
text-transform: uppercase;
letter-spacing: 0.025em;
margin-top: 0.25rem;
letter-spacing: 0.05em;
margin-top: 0.375rem;
font-weight: 500;
}
/* Pending - Slate/Gray with subtle blue tint */
.queue-stat-card.pending {
border-color: hsl(var(--muted-foreground) / 0.3);
border-color: hsl(215 20% 65% / 0.4);
background: linear-gradient(135deg, hsl(215 20% 95%) 0%, hsl(var(--card)) 100%);
}
.queue-stat-card.pending .queue-stat-value {
color: hsl(var(--muted-foreground));
color: hsl(215 20% 45%);
}
.queue-stat-card.pending .queue-stat-label {
color: hsl(215 20% 55%);
}
/* Executing - Amber/Orange - attention-grabbing */
.queue-stat-card.executing {
border-color: hsl(45 93% 47% / 0.5);
background: hsl(45 93% 47% / 0.05);
border-color: hsl(38 92% 50% / 0.5);
background: linear-gradient(135deg, hsl(38 92% 95%) 0%, hsl(45 93% 97%) 100%);
}
.queue-stat-card.executing .queue-stat-value {
color: hsl(45 93% 47%);
color: hsl(38 92% 40%);
}
.queue-stat-card.executing .queue-stat-label {
color: hsl(38 70% 45%);
}
/* Completed - Green - success indicator */
.queue-stat-card.completed {
border-color: hsl(var(--success) / 0.5);
background: hsl(var(--success) / 0.05);
border-color: hsl(142 71% 45% / 0.5);
background: linear-gradient(135deg, hsl(142 71% 95%) 0%, hsl(142 50% 97%) 100%);
}
.queue-stat-card.completed .queue-stat-value {
color: hsl(var(--success));
color: hsl(142 71% 35%);
}
.queue-stat-card.completed .queue-stat-label {
color: hsl(142 50% 40%);
}
/* Failed - Red - error indicator */
.queue-stat-card.failed {
border-color: hsl(var(--destructive) / 0.5);
background: hsl(var(--destructive) / 0.05);
border-color: hsl(0 84% 60% / 0.5);
background: linear-gradient(135deg, hsl(0 84% 95%) 0%, hsl(0 70% 97%) 100%);
}
.queue-stat-card.failed .queue-stat-value {
color: hsl(var(--destructive));
color: hsl(0 84% 45%);
}
.queue-stat-card.failed .queue-stat-label {
color: hsl(0 60% 50%);
}
/* ==========================================
@@ -2874,3 +3052,251 @@
gap: 0.25rem;
}
}
/* ==========================================
MULTI-QUEUE CARDS VIEW
========================================== */
/* Queue Cards Header */
.queue-cards-header {
display: flex;
align-items: center;
justify-content: space-between;
flex-wrap: wrap;
gap: 1rem;
}
/* Queue Cards Grid */
.queue-cards-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(280px, 1fr));
gap: 1rem;
margin-bottom: 1.5rem;
}
/* Individual Queue Card */
.queue-card {
position: relative;
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
padding: 1rem;
cursor: pointer;
transition: all 0.2s ease;
}
.queue-card:hover {
border-color: hsl(var(--primary) / 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px hsl(var(--foreground) / 0.08);
}
.queue-card.active {
border-color: hsl(var(--primary));
background: hsl(var(--primary) / 0.05);
}
.queue-card.merged {
opacity: 0.6;
border-style: dashed;
}
.queue-card.merged:hover {
opacity: 0.8;
}
/* Queue Card Header */
.queue-card-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 0.75rem;
}
.queue-card-id {
font-size: 0.875rem;
font-weight: 600;
color: hsl(var(--foreground));
}
.queue-card-badges {
display: flex;
align-items: center;
gap: 0.5rem;
}
/* Queue Card Stats - Progress Bar */
.queue-card-stats {
margin-bottom: 0.75rem;
}
.queue-card-stats .progress-bar {
height: 6px;
background: hsl(var(--muted));
border-radius: 3px;
overflow: hidden;
margin-bottom: 0.5rem;
}
.queue-card-stats .progress-fill {
height: 100%;
background: hsl(var(--primary));
border-radius: 3px;
transition: width 0.3s ease;
}
.queue-card-stats .progress-fill.completed {
background: hsl(var(--success, 142 76% 36%));
}
.queue-card-progress {
display: flex;
justify-content: space-between;
font-size: 0.75rem;
color: hsl(var(--foreground));
}
/* Queue Card Meta */
.queue-card-meta {
display: flex;
gap: 1rem;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
margin-bottom: 0.75rem;
}
/* Queue Card Actions */
.queue-card-actions {
display: flex;
gap: 0.5rem;
padding-top: 0.75rem;
border-top: 1px solid hsl(var(--border));
}
/* Queue Detail Header */
.queue-detail-header {
display: flex;
align-items: center;
gap: 1rem;
flex-wrap: wrap;
}
.queue-detail-title {
flex: 1;
display: flex;
align-items: center;
gap: 1rem;
}
.queue-detail-actions {
display: flex;
gap: 0.5rem;
}
/* Queue Item Delete Button */
.queue-item-delete {
margin-left: auto;
padding: 0.25rem;
opacity: 0;
transition: opacity 0.15s ease;
color: hsl(var(--muted-foreground));
border-radius: 0.25rem;
}
.queue-item:hover .queue-item-delete {
opacity: 1;
}
.queue-item-delete:hover {
color: hsl(var(--destructive, 0 84% 60%));
background: hsl(var(--destructive, 0 84% 60%) / 0.1);
}
/* Queue Error State */
.queue-error {
padding: 2rem;
text-align: center;
}
/* Responsive adjustments for queue cards */
@media (max-width: 640px) {
.queue-cards-grid {
grid-template-columns: 1fr;
}
.queue-cards-header {
flex-direction: column;
align-items: flex-start;
}
.queue-detail-header {
flex-direction: column;
align-items: flex-start;
}
.queue-detail-title {
flex-direction: column;
align-items: flex-start;
gap: 0.5rem;
}
}
/* ==========================================
WARNING BUTTON STYLE
========================================== */
.btn-warning,
.btn-secondary.btn-warning {
color: hsl(38 92% 40%);
border-color: hsl(38 92% 50% / 0.5);
background: hsl(38 92% 50% / 0.08);
}
.btn-warning:hover,
.btn-secondary.btn-warning:hover {
background: hsl(38 92% 50% / 0.15);
border-color: hsl(38 92% 50%);
}
.btn-danger,
.btn-secondary.btn-danger,
.btn-sm.btn-danger {
color: hsl(var(--destructive));
border-color: hsl(var(--destructive) / 0.5);
background: hsl(var(--destructive) / 0.08);
}
.btn-danger:hover,
.btn-secondary.btn-danger:hover,
.btn-sm.btn-danger:hover {
background: hsl(var(--destructive) / 0.15);
border-color: hsl(var(--destructive));
}
/* Issue Detail Actions */
.issue-detail-actions {
margin-top: 1rem;
padding-top: 1rem;
border-top: 1px solid hsl(var(--border));
}
.issue-detail-actions .flex {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
/* Active queue badge enhancement */
.queue-active-badge {
display: inline-flex;
align-items: center;
padding: 0.125rem 0.5rem;
font-size: 0.6875rem;
font-weight: 600;
color: hsl(142 71% 35%);
background: hsl(142 71% 45% / 0.15);
border: 1px solid hsl(142 71% 45% / 0.3);
border-radius: 9999px;
text-transform: uppercase;
letter-spacing: 0.025em;
}

View File

@@ -52,12 +52,13 @@ const HOOK_TEMPLATES = {
'memory-update-queue': {
event: 'Stop',
matcher: '',
command: 'bash',
args: ['-c', 'ccw tool exec memory_queue "{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\"}"'],
command: 'node',
args: ['-e', "require('child_process').spawnSync(process.platform==='win32'?'cmd':'ccw',process.platform==='win32'?['/c','ccw','tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'gemini'})]:['tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'gemini'})],{stdio:'inherit'})"],
description: 'Queue CLAUDE.md update when session ends (batched by threshold/timeout)',
category: 'memory',
configurable: true,
config: {
tool: { type: 'select', default: 'gemini', options: ['gemini', 'qwen', 'codex', 'opencode'], label: 'CLI Tool' },
threshold: { type: 'number', default: 5, min: 1, max: 20, label: 'Threshold (paths)', step: 1 },
timeout: { type: 'number', default: 300, min: 60, max: 1800, label: 'Timeout (seconds)', step: 60 }
}
@@ -66,8 +67,8 @@ const HOOK_TEMPLATES = {
'skill-context-keyword': {
event: 'UserPromptSubmit',
matcher: '',
command: 'bash',
args: ['-c', 'ccw tool exec skill_context_loader --stdin'],
command: 'node',
args: ['-e', "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({prompt:p.user_prompt||''})],{stdio:'inherit'})"],
description: 'Load SKILL context based on keyword matching in user prompt',
category: 'skill',
configurable: true,
@@ -79,8 +80,8 @@ const HOOK_TEMPLATES = {
'skill-context-auto': {
event: 'UserPromptSubmit',
matcher: '',
command: 'bash',
args: ['-c', 'ccw tool exec skill_context_loader --stdin --mode auto'],
command: 'node',
args: ['-e', "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({mode:'auto',prompt:p.user_prompt||''})],{stdio:'inherit'})"],
description: 'Auto-detect and load SKILL based on skill name in prompt',
category: 'skill',
configurable: false
@@ -195,6 +196,7 @@ const WIZARD_TEMPLATES = {
}
],
configFields: [
{ key: 'tool', type: 'select', label: 'CLI Tool', default: 'gemini', options: ['gemini', 'qwen', 'codex', 'opencode'], description: 'CLI tool for CLAUDE.md generation' },
{ key: 'threshold', type: 'number', label: 'Threshold (paths)', default: 5, min: 1, max: 20, step: 1, description: 'Number of paths to trigger batch update' },
{ key: 'timeout', type: 'number', label: 'Timeout (seconds)', default: 300, min: 60, max: 1800, step: 60, description: 'Auto-flush queue after this time' }
]
@@ -748,6 +750,7 @@ function renderWizardModalContent() {
// Helper to get translated field labels
const getFieldLabel = (fieldKey) => {
const labels = {
'tool': t('hook.wizard.cliTool') || 'CLI Tool',
'threshold': t('hook.wizard.thresholdPaths') || 'Threshold (paths)',
'timeout': t('hook.wizard.timeoutSeconds') || 'Timeout (seconds)'
};
@@ -756,6 +759,7 @@ function renderWizardModalContent() {
const getFieldDesc = (fieldKey) => {
const descs = {
'tool': t('hook.wizard.cliToolDesc') || 'CLI tool for CLAUDE.md generation',
'threshold': t('hook.wizard.thresholdPathsDesc') || 'Number of paths to trigger batch update',
'timeout': t('hook.wizard.timeoutSecondsDesc') || 'Auto-flush queue after this time'
};
@@ -1121,20 +1125,19 @@ function generateWizardCommand() {
keywords: c.keywords.split(',').map(k => k.trim()).filter(k => k)
}));
const params = JSON.stringify({ configs: configJson, prompt: '$CLAUDE_PROMPT' });
return `ccw tool exec skill_context_loader '${params}'`;
// Use node + spawnSync for cross-platform JSON handling
const paramsObj = { configs: configJson, prompt: '${p.user_prompt}' };
return `node -e "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify(${JSON.stringify(paramsObj).replace('${p.user_prompt}', "'+p.user_prompt+'")})],{stdio:'inherit'})"`;
} else {
// auto mode
const params = JSON.stringify({ mode: 'auto', prompt: '$CLAUDE_PROMPT' });
return `ccw tool exec skill_context_loader '${params}'`;
// auto mode - use node + spawnSync
return `node -e "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({mode:'auto',prompt:p.user_prompt||''})],{stdio:'inherit'})"`;
}
}
// Handle memory-update wizard (default)
// Now uses memory_queue for batched updates with configurable threshold/timeout
// The command adds to queue, configuration is applied separately via submitHookWizard
const params = `"{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\"}"`;
return `ccw tool exec memory_queue ${params}`;
// Use node + spawnSync for cross-platform JSON handling
const selectedTool = wizardConfig.tool || 'gemini';
return `node -e "require('child_process').spawnSync(process.platform==='win32'?'cmd':'ccw',process.platform==='win32'?['/c','ccw','tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})]:['tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})],{stdio:'inherit'})"`;
}
async function submitHookWizard() {
@@ -1217,13 +1220,18 @@ async function submitHookWizard() {
const baseTemplate = HOOK_TEMPLATES[selectedOption.templateId];
if (!baseTemplate) return;
const command = generateWizardCommand();
const hookData = {
command: 'bash',
args: ['-c', command]
// Build hook data with configured values
let hookData = {
command: baseTemplate.command,
args: [...baseTemplate.args]
};
// For memory-update wizard, use configured tool in args (cross-platform)
if (wizard.id === 'memory-update') {
const selectedTool = wizardConfig.tool || 'gemini';
hookData.args = ['-e', `require('child_process').spawnSync(process.platform==='win32'?'cmd':'ccw',process.platform==='win32'?['/c','ccw','tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})]:['tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})],{stdio:'inherit'})`];
}
if (baseTemplate.matcher) {
hookData.matcher = baseTemplate.matcher;
}
@@ -1232,6 +1240,7 @@ async function submitHookWizard() {
// For memory-update wizard, also configure queue settings
if (wizard.id === 'memory-update') {
const selectedTool = wizardConfig.tool || 'gemini';
const threshold = wizardConfig.threshold || 5;
const timeout = wizardConfig.timeout || 300;
try {
@@ -1242,7 +1251,7 @@ async function submitHookWizard() {
body: JSON.stringify({ tool: 'memory_queue', params: configParams })
});
if (response.ok) {
showRefreshToast(`Queue configured: threshold=${threshold}, timeout=${timeout}s`, 'success');
showRefreshToast(`Queue configured: tool=${selectedTool}, threshold=${threshold}, timeout=${timeout}s`, 'success');
}
} catch (e) {
console.warn('Failed to configure memory queue:', e);
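The inline `node -e` commands above all follow one cross-platform pattern; extracted into a readable sketch (the payload values are illustrative):

```typescript
import { spawnSync } from "node:child_process";

// Same idea as the one-liner hooks: go through `cmd /c` on Windows, call `ccw`
// directly elsewhere, and hand the JSON payload over as a single argument.
const payload = JSON.stringify({
  action: "add",
  path: process.env.CLAUDE_PROJECT_DIR,
  tool: "gemini",
});
const isWin = process.platform === "win32";
spawnSync(
  isWin ? "cmd" : "ccw",
  isWin
    ? ["/c", "ccw", "tool", "exec", "memory_queue", payload]
    : ["tool", "exec", "memory_queue", payload],
  { stdio: "inherit" },
);
```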

View File

@@ -1107,6 +1107,8 @@ const i18n = {
'hook.wizard.memoryUpdateDesc': 'Queue-based CLAUDE.md updates with configurable threshold and timeout',
'hook.wizard.queueBasedUpdate': 'Queue-Based Update',
'hook.wizard.queueBasedUpdateDesc': 'Batch updates when threshold reached or timeout expires',
'hook.wizard.cliTool': 'CLI Tool',
'hook.wizard.cliToolDesc': 'CLI tool for CLAUDE.md generation',
'hook.wizard.thresholdPaths': 'Threshold (paths)',
'hook.wizard.thresholdPathsDesc': 'Number of paths to trigger batch update',
'hook.wizard.timeoutSeconds': 'Timeout (seconds)',
@@ -1283,6 +1285,54 @@ const i18n = {
'multiCli.toolbar.noTasks': 'No tasks available',
'multiCli.toolbar.scrollToTask': 'Click to scroll to task',
// Context Tab
'multiCli.context.taskDescription': 'Task Description',
'multiCli.context.constraints': 'Constraints',
'multiCli.context.focusPaths': 'Focus Paths',
'multiCli.context.relevantFiles': 'Relevant Files',
'multiCli.context.dependencies': 'Dependencies',
'multiCli.context.conflictRisks': 'Conflict Risks',
'multiCli.context.sessionId': 'Session ID',
'multiCli.context.rawJson': 'Raw JSON',
// Summary Tab
'multiCli.summary.title': 'Summary',
'multiCli.summary.convergence': 'Convergence',
'multiCli.summary.solutions': 'Solutions',
'multiCli.summary.solution': 'Solution',
// Task Overview
'multiCli.task.description': 'Description',
'multiCli.task.keyPoint': 'Key Point',
'multiCli.task.scope': 'Scope',
'multiCli.task.dependencies': 'Dependencies',
'multiCli.task.targetFiles': 'Target Files',
'multiCli.task.acceptanceCriteria': 'Acceptance Criteria',
'multiCli.task.reference': 'Reference',
'multiCli.task.pattern': 'PATTERN',
'multiCli.task.files': 'FILES',
'multiCli.task.examples': 'EXAMPLES',
'multiCli.task.noOverviewData': 'No overview data available',
// Task Implementation
'multiCli.task.implementationSteps': 'Implementation Steps',
'multiCli.task.modificationPoints': 'Modification Points',
'multiCli.task.verification': 'Verification',
'multiCli.task.noImplementationData': 'No implementation details available',
'multiCli.task.noFilesSpecified': 'No files specified',
// Discussion Tab
'multiCli.discussion.title': 'Discussion',
'multiCli.discussion.discussionTopic': 'Discussion Topic',
'multiCli.solutions': 'Solutions',
'multiCli.decision': 'Decision',
// Plan
'multiCli.plan.objective': 'Objective',
'multiCli.plan.solution': 'Solution',
'multiCli.plan.approach': 'Approach',
'multiCli.plan.risk': 'risk',
// Modals
'modal.contentPreview': 'Content Preview',
'modal.raw': 'Raw',
@@ -2219,6 +2269,25 @@ const i18n = {
'issues.queueCommandInfo': 'After running the command, click "Refresh" to see the updated queue.',
'issues.alternative': 'Alternative',
'issues.refreshAfter': 'Refresh Queue',
'issues.activate': 'Activate',
'issues.deactivate': 'Deactivate',
'issues.queueActivated': 'Queue activated',
'issues.queueDeactivated': 'Queue deactivated',
'issues.deleteQueue': 'Delete queue',
'issues.confirmDeleteQueue': 'Are you sure you want to delete this queue? This action cannot be undone.',
'issues.queueDeleted': 'Queue deleted successfully',
'issues.actions': 'Actions',
'issues.archive': 'Archive',
'issues.delete': 'Delete',
'issues.confirmDeleteIssue': 'Are you sure you want to delete this issue? This action cannot be undone.',
'issues.confirmArchiveIssue': 'Archive this issue? It will be moved to history.',
'issues.issueDeleted': 'Issue deleted successfully',
'issues.issueArchived': 'Issue archived successfully',
'issues.executionQueues': 'Execution Queues',
'issues.queues': 'queues',
'issues.noQueues': 'No queues found',
'issues.queueEmptyHint': 'Generate execution queue from bound solutions',
'issues.refresh': 'Refresh',
// issue.* keys (legacy)
'issue.viewIssues': 'Issues',
'issue.viewQueue': 'Queue',
@@ -3347,6 +3416,8 @@ const i18n = {
'hook.wizard.memoryUpdateDesc': '基于队列的 CLAUDE.md 更新,支持阈值和超时配置',
'hook.wizard.queueBasedUpdate': '队列批量更新',
'hook.wizard.queueBasedUpdateDesc': '达到路径数量阈值或超时时批量更新',
'hook.wizard.cliTool': 'CLI 工具',
'hook.wizard.cliToolDesc': '用于生成 CLAUDE.md 的 CLI 工具',
'hook.wizard.thresholdPaths': '阈值(路径数)',
'hook.wizard.thresholdPathsDesc': '触发批量更新的路径数量',
'hook.wizard.timeoutSeconds': '超时(秒)',
@@ -3523,6 +3594,54 @@ const i18n = {
'multiCli.toolbar.noTasks': '暂无任务',
'multiCli.toolbar.scrollToTask': '点击定位到任务',
// Context Tab
'multiCli.context.taskDescription': '任务描述',
'multiCli.context.constraints': '约束条件',
'multiCli.context.focusPaths': '焦点路径',
'multiCli.context.relevantFiles': '相关文件',
'multiCli.context.dependencies': '依赖项',
'multiCli.context.conflictRisks': '冲突风险',
'multiCli.context.sessionId': '会话ID',
'multiCli.context.rawJson': '原始JSON',
// Summary Tab
'multiCli.summary.title': '摘要',
'multiCli.summary.convergence': '收敛状态',
'multiCli.summary.solutions': '解决方案',
'multiCli.summary.solution': '方案',
// Task Overview
'multiCli.task.description': '描述',
'multiCli.task.keyPoint': '关键点',
'multiCli.task.scope': '范围',
'multiCli.task.dependencies': '依赖项',
'multiCli.task.targetFiles': '目标文件',
'multiCli.task.acceptanceCriteria': '验收标准',
'multiCli.task.reference': '参考资料',
'multiCli.task.pattern': '模式',
'multiCli.task.files': '文件',
'multiCli.task.examples': '示例',
'multiCli.task.noOverviewData': '无概览数据',
// Task Implementation
'multiCli.task.implementationSteps': '实现步骤',
'multiCli.task.modificationPoints': '修改点',
'multiCli.task.verification': '验证',
'multiCli.task.noImplementationData': '无实现详情',
'multiCli.task.noFilesSpecified': '未指定文件',
// Discussion Tab
'multiCli.discussion.title': '讨论',
'multiCli.discussion.discussionTopic': '讨论主题',
'multiCli.solutions': '解决方案',
'multiCli.decision': '决策',
// Plan
'multiCli.plan.objective': '目标',
'multiCli.plan.solution': '解决方案',
'multiCli.plan.approach': '实现方式',
'multiCli.plan.risk': '风险',
// Modals
'modal.contentPreview': '内容预览',
'modal.raw': '原始',
@@ -4492,6 +4611,25 @@ const i18n = {
'issues.queueCommandInfo': '运行命令后,点击"刷新"查看更新后的队列。',
'issues.alternative': '或者',
'issues.refreshAfter': '刷新队列',
'issues.activate': '激活',
'issues.deactivate': '取消激活',
'issues.queueActivated': '队列已激活',
'issues.queueDeactivated': '队列已取消激活',
'issues.deleteQueue': '删除队列',
'issues.confirmDeleteQueue': '确定要删除此队列吗?此操作无法撤销。',
'issues.queueDeleted': '队列删除成功',
'issues.actions': '操作',
'issues.archive': '归档',
'issues.delete': '删除',
'issues.confirmDeleteIssue': '确定要删除此议题吗?此操作无法撤销。',
'issues.confirmArchiveIssue': '归档此议题?它将被移动到历史记录中。',
'issues.issueDeleted': '议题删除成功',
'issues.issueArchived': '议题归档成功',
'issues.executionQueues': '执行队列',
'issues.queues': '个队列',
'issues.noQueues': '暂无队列',
'issues.queueEmptyHint': '从绑定的解决方案生成执行队列',
'issues.refresh': '刷新',
// issue.* keys (legacy)
'issue.viewIssues': '议题',
'issue.viewQueue': '队列',

View File

@@ -6381,12 +6381,12 @@ async function showWatcherControlModal() {
// Get first indexed project path as default
let defaultPath = '';
if (indexes.success && indexes.projects && indexes.projects.length > 0) {
// Sort by last_indexed desc and pick the most recent
const sorted = indexes.projects.sort((a, b) =>
new Date(b.last_indexed || 0) - new Date(a.last_indexed || 0)
if (indexes.success && indexes.indexes && indexes.indexes.length > 0) {
// Sort by lastModified desc and pick the most recent
const sorted = indexes.indexes.sort((a, b) =>
new Date(b.lastModified || 0) - new Date(a.lastModified || 0)
);
defaultPath = sorted[0].source_root || '';
defaultPath = sorted[0].path || '';
}
const modalHtml = buildWatcherControlContent(status, defaultPath);

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -956,15 +956,13 @@ function renderSkillFileModal() {
</div>
<!-- Content -->
<div class="flex-1 overflow-hidden p-4">
<div class="flex-1 min-h-0 overflow-auto p-4">
${isEditing ? `
<textarea id="skillFileContent"
class="w-full h-full min-h-[400px] px-4 py-3 bg-background border border-border rounded-lg text-sm font-mono focus:outline-none focus:ring-2 focus:ring-primary resize-none"
spellcheck="false">${escapeHtml(content)}</textarea>
` : `
<div class="w-full h-full min-h-[400px] overflow-auto">
<pre class="px-4 py-3 bg-muted/30 rounded-lg text-sm font-mono whitespace-pre-wrap break-words">${escapeHtml(content)}</pre>
</div>
<pre class="px-4 py-3 bg-muted/30 rounded-lg text-sm font-mono whitespace-pre-wrap break-words">${escapeHtml(content)}</pre>
`}
</div>

View File

@@ -356,7 +356,7 @@ const ParamsSchema = z.object({
model: z.string().optional(),
cd: z.string().optional(),
includeDirs: z.string().optional(),
timeout: z.number().default(0), // 0 = no internal timeout, controlled by external caller (e.g., bash timeout)
// timeout removed - controlled by external caller (bash timeout)
resume: z.union([z.boolean(), z.string()]).optional(), // true = last, string = single ID or comma-separated IDs
id: z.string().optional(), // Custom execution ID (e.g., IMPL-001-step1)
noNative: z.boolean().optional(), // Force prompt concatenation instead of native resume
@@ -388,7 +388,7 @@ async function executeCliTool(
throw new Error(`Invalid params: ${parsed.error.message}`);
}
const { tool, prompt, mode, format, model, cd, includeDirs, timeout, resume, id: customId, noNative, category, parentExecutionId, outputFormat } = parsed.data;
const { tool, prompt, mode, format, model, cd, includeDirs, resume, id: customId, noNative, category, parentExecutionId, outputFormat } = parsed.data;
// Validate and determine working directory early (needed for conversation lookup)
let workingDir: string;
@@ -862,7 +862,6 @@ async function executeCliTool(
let stdout = '';
let stderr = '';
let timedOut = false;
// Handle stdout
child.stdout!.on('data', (data: Buffer) => {
@@ -924,18 +923,14 @@ async function executeCliTool(
debugLog('CLOSE', `Process closed`, {
exitCode: code,
duration: `${duration}ms`,
timedOut,
stdoutLength: stdout.length,
stderrLength: stderr.length,
outputUnitsCount: allOutputUnits.length
});
// Determine status - prioritize output content over exit code
let status: 'success' | 'error' | 'timeout' = 'success';
if (timedOut) {
status = 'timeout';
debugLog('STATUS', `Execution timed out after ${duration}ms`);
} else if (code !== 0) {
let status: 'success' | 'error' = 'success';
if (code !== 0) {
// Non-zero exit code doesn't always mean failure
// Check if there's valid output (AI response) - treat as success
const hasValidOutput = stdout.trim().length > 0;
@@ -1169,25 +1164,8 @@ async function executeCliTool(
reject(new Error(`Failed to spawn ${tool}: ${error.message}\n Command: ${command} ${args.join(' ')}\n Working Dir: ${workingDir}`));
});
// Timeout handling (timeout=0 disables internal timeout, controlled by external caller)
let timeoutId: NodeJS.Timeout | null = null;
if (timeout > 0) {
timeoutId = setTimeout(() => {
timedOut = true;
child.kill('SIGTERM');
setTimeout(() => {
if (!child.killed) {
child.kill('SIGKILL');
}
}, 5000);
}, timeout);
}
child.on('close', () => {
if (timeoutId) {
clearTimeout(timeoutId);
}
});
// Timeout controlled by external caller (bash timeout)
// When parent process terminates, child will be cleaned up via process exit handler
});
}
@@ -1228,12 +1206,8 @@ Modes:
includeDirs: {
type: 'string',
description: 'Additional directories (comma-separated). Maps to --include-directories for gemini/qwen, --add-dir for codex'
},
timeout: {
type: 'number',
description: 'Timeout in milliseconds (default: 0 = disabled, controlled by external caller)',
default: 0
}
// timeout removed - controlled by external caller (bash timeout)
},
required: ['tool', 'prompt']
}

View File

@@ -391,11 +391,7 @@ async function execute(params) {
if (timeoutCheck.flushed) {
// Queue was flushed due to timeout, add to fresh queue
const result = addToQueue(path, { tool, strategy });
return {
...result,
timeoutFlushed: true,
flushResult: timeoutCheck.result
};
return `[MemoryQueue] Timeout flush (${timeoutCheck.result.processed} items) → ${result.message}`;
}
const addResult = addToQueue(path, { tool, strategy });
@@ -403,14 +399,12 @@ async function execute(params) {
// Auto-flush if threshold reached
if (addResult.willFlush) {
const flushResult = await flushQueue();
return {
...addResult,
flushed: true,
flushResult
};
// Return string for hook-friendly output
return `[MemoryQueue] ${addResult.message} → Flushed ${flushResult.processed} items`;
}
return addResult;
// Return string for hook-friendly output
return `[MemoryQueue] ${addResult.message}`;
case 'status':
// Check timeout first

View File

@@ -3645,6 +3645,84 @@ def index_status(
console.print(f" SPLADE encoder: {'[green]Yes[/green]' if splade_available else f'[red]No[/red] ({splade_err})'}")
# ==================== Index Update Command ====================
@index_app.command("update")
def index_update(
file_path: Path = typer.Argument(..., exists=True, file_okay=True, dir_okay=False, help="Path to the file to update in the index."),
json_mode: bool = typer.Option(False, "--json", help="Output JSON response."),
verbose: bool = typer.Option(False, "--verbose", "-v", help="Enable debug logging."),
) -> None:
"""Update the index for a single file incrementally.
This is a lightweight command designed for use in hooks (e.g., Claude Code PostToolUse).
It updates only the specified file without scanning the entire directory.
The file's parent directory must already be indexed via 'codexlens index init'.
Examples:
codexlens index update src/main.py # Update single file
codexlens index update ./foo.ts --json # JSON output for hooks
"""
_configure_logging(verbose, json_mode)
from codexlens.watcher.incremental_indexer import IncrementalIndexer
registry: RegistryStore | None = None
indexer: IncrementalIndexer | None = None
try:
registry = RegistryStore()
registry.initialize()
mapper = PathMapper()
config = Config()
resolved_path = file_path.resolve()
# Check if project is indexed
source_root = mapper.get_project_root(resolved_path)
if not source_root or not registry.get_project(source_root):
error_msg = f"Project containing file is not indexed: {file_path}"
if json_mode:
print_json(success=False, error=error_msg)
else:
console.print(f"[red]Error:[/red] {error_msg}")
console.print("[dim]Run 'codexlens index init' on the project root first.[/dim]")
raise typer.Exit(code=1)
indexer = IncrementalIndexer(registry, mapper, config)
result = indexer._index_file(resolved_path)
if result.success:
if json_mode:
print_json(success=True, result={
"path": str(result.path),
"symbols_count": result.symbols_count,
"status": "updated",
})
else:
console.print(f"[green]✓[/green] Updated index for [bold]{result.path.name}[/bold] ({result.symbols_count} symbols)")
else:
error_msg = result.error or f"Failed to update index for {file_path}"
if json_mode:
print_json(success=False, error=error_msg)
else:
console.print(f"[red]Error:[/red] {error_msg}")
raise typer.Exit(code=1)
except CodexLensError as exc:
if json_mode:
print_json(success=False, error=str(exc))
else:
console.print(f"[red]Update failed:[/red] {exc}")
raise typer.Exit(code=1)
finally:
if indexer:
indexer.close()
if registry:
registry.close()
# ==================== Index All Command ====================
@index_app.command("all")

View File

@@ -0,0 +1,435 @@
# CCW Issue Loop Workflow: Complete Guide

> A two-phase lifecycle design for accumulating issues during project iteration and resolving them in batches

---

## Table of Contents

1. [What Is the Issue Loop Workflow](#what-is-the-issue-loop-workflow)
2. [Core Architecture](#core-architecture)
3. [Two-Phase Lifecycle](#two-phase-lifecycle)
4. [Command Reference](#command-reference)
5. [Use Cases](#use-cases)
6. [Recommended Strategies](#recommended-strategies)
7. [Serial Unsupervised Execution](#serial-unsupervised-execution)
8. [Best Practices](#best-practices)

---

## What Is the Issue Loop Workflow

Issue Loop is the batch issue-handling workflow in CCW (Claude Code Workflow), designed for the many problems that accumulate while a project iterates. Unlike one-off fixes, Issue Loop follows an **"accumulate → plan → queue → execute"** model, so problems are discovered in bulk and resolved in one concentrated pass.

### Core Idea

```
Traditional:  find problem → fix immediately → find problem → fix immediately → ...
Issue Loop:   accumulate continuously → plan centrally → optimize the queue → execute in batch
```

**Benefits**:
- Avoids frequent context switching
- Conflict detection and dependency ordering
- Parallel execution support
- Full tracking and audit trail

---

## Core Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                      Issue Loop Workflow                         │
├─────────────────────────────────────────────────────────────────┤
│ Phase 1: Accumulation                                            │
│   /issue:discover, /issue:discover-by-prompt, /issue:new         │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2: Batch Resolution                                        │
│   /issue:plan → /issue:queue → /issue:execute                    │
└─────────────────────────────────────────────────────────────────┘
```

### Data Flow

```
issues.jsonl  →  solutions/<id>.jsonl  →  queues/<queue-id>.json  →  execution
     ↓                    ↓                          ↓
Issue records         Solutions           Priority ordering + conflict detection
```
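
For orientation, here is a minimal sketch of what a single `issues.jsonl` record might contain. Only `title`, `context`, and `priority` are fields documented later in this guide; the `id` and `status` fields are illustrative assumptions, not the exact on-disk schema.

```javascript
// Hypothetical shape of one issues.jsonl entry (one JSON object per line).
// Only title/context/priority come from this guide; id and status are assumptions.
const exampleIssue = {
  id: 'ISS-20251227-001',
  title: 'Login endpoint returns 500 on empty password',
  context: {
    problem: 'POST /login throws instead of validating input',
    impact: 'All clients that submit an empty form',
    reproduction: 'Send POST /login with an empty body'
  },
  priority: 'P2',
  status: 'registered' // lifecycle: registered → planned → queued → executing → completed
};

console.log(JSON.stringify(exampleIssue)); // one JSONL line
```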
---

## Two-Phase Lifecycle

### Phase 1: Accumulation

Keep discovering and recording problems while the project iterates normally:

| Trigger | Command | Notes |
|---------|---------|-------|
| Review after finishing a task | `/issue:discover` | Analyzes the code automatically to surface potential problems |
| Finding during code review | `/issue:new` | Creates a structured issue by hand |
| Test failure | `/issue:discover-by-prompt` | Creates an issue from a description |
| User feedback | `/issue:new` | Records the reported problem manually |

**Issue status lifecycle**:

```
registered → planned → queued → executing → completed
                                                ↓
                                      issue-history.jsonl
```
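
Since the lifecycle is linear, it can be summarized as a simple transition table. The sketch below only illustrates the documented order; it is not code taken from ccw.

```javascript
// Illustration of the documented status order; not ccw's actual implementation.
const NEXT_STATUS = {
  registered: 'planned',
  planned: 'queued',
  queued: 'executing',
  executing: 'completed',
  completed: null // completed issues are archived to issue-history.jsonl
};

function advance(status) {
  if (!(status in NEXT_STATUS)) throw new Error(`unknown status: ${status}`);
  return NEXT_STATUS[status];
}

console.log(advance('queued')); // 'executing'
```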
### Phase 2: Batch Resolution

Once enough issues have accumulated, resolve them in one concentrated pass:

```
Step 1: /issue:plan --all-pending   # Generate solutions for all pending issues
Step 2: /issue:queue                # Build the execution queue (conflict detection + ordering)
Step 3: /issue:execute              # Execute in batch (serial or parallel)
```

---

## Command Reference

### Accumulation Commands

#### `/issue:new`

Create a structured issue manually:

```bash
ccw issue init <id> --title "Issue title" --priority P2
```

#### `/issue:discover`

Analyze the code automatically to discover problems:

```bash
# Uses gemini for multi-perspective analysis
# Finds: bugs, security issues, performance issues, coding-standard violations, etc.
```

#### `/issue:discover-by-prompt`

Create an issue from a description:

```bash
# Provide a problem description; a structured issue is generated automatically
```

### Batch Resolution Commands

#### `/issue:plan`

Generate solutions for issues:

```bash
ccw issue plan --all-pending   # Plan all pending issues
ccw issue plan ISS-001         # Plan a single issue
```

**Output**: each issue gets a solution containing the following (a hypothetical sketch follows the list):
- Modification points (modification_points)
- Implementation steps (implementation)
- Test requirements (test)
- Acceptance criteria (acceptance)
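
A minimal sketch of what such a solution record might look like. Only the four field names above come from this guide; the `id`, `issue_id`, and `tasks` wrapper are assumptions for illustration, and the exact shape stored under `solutions/<id>.jsonl` may differ.

```javascript
// Hypothetical solution record; field names beyond the four documented ones are assumed.
const exampleSolution = {
  id: 'SOL-ISS-20251227-001-1',
  issue_id: 'ISS-20251227-001',
  tasks: [
    {
      id: 'T1',
      modification_points: ['src/auth/login.ts: validate the password before hashing'],
      implementation: ['Return 400 early when the password field is empty'],
      test: ['Add tests/auth/login.test.ts covering the empty-password case'],
      acceptance: ['POST /login with an empty password returns 400, not 500']
    }
  ]
};

console.log(JSON.stringify(exampleSolution, null, 2));
```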
#### `/issue:queue`

Build the execution queue:

```bash
ccw issue queue            # Create a new queue
ccw issue queue add <id>   # Add to the current queue
ccw issue queue list       # Show queue history
```

**Key features**:
- Conflict detection: uses the Gemini CLI to analyze file conflicts between solutions
- Dependency ordering: derives the execution order from dependency relationships
- Priority weighting: higher-priority issues run first

#### `/issue:execute`

Execute the solutions in the queue:

```bash
ccw issue next              # Fetch the next pending solution
ccw issue done <item_id>    # Mark as completed
ccw issue done <id> --fail  # Mark as failed
```

### Management Commands

```bash
ccw issue list                 # List active issues
ccw issue status <id>          # Show issue details
ccw issue history              # Show completed issues
ccw issue update --from-queue  # Sync status from the queue
```

---

## Use Cases

### Scenario 1: Technical Debt Cleanup After an Iteration

```
1. Finish the sprint's feature work
2. Run /issue:discover to surface technical debt
3. After a week of accumulation, run /issue:plan --all-pending
4. Build the queue with /issue:queue
5. Process the batch with codex via /issue:execute
```

### Scenario 2: Batch Fixes After Code Review

```
1. Finish the PR code review
2. Run /issue:new for each finding
3. Accumulate all findings from the review
4. Run /issue:plan → /issue:queue → /issue:execute
```

### Scenario 3: Batch Handling of Test Failures

```
1. Run the test suite
2. Run /issue:discover-by-prompt for each failing test
3. Plan all the fixes in one pass
4. Execute serially to avoid introducing new problems
```

### Scenario 4: Batch Fixes for Security Vulnerabilities

```
1. A security scan finds multiple vulnerabilities
2. Create an issue for each one and mark it P1
3. Let /issue:queue order them by severity automatically
4. Execute the fixes and verify
```

---

## Recommended Strategies

### When to Use Issue Loop

| Condition | Recommendation |
|-----------|----------------|
| 3 or more problems | Issue Loop |
| Problems span multiple modules | Issue Loop |
| Problems may depend on each other | Issue Loop |
| Conflict detection needed | Issue Loop |
| A single simple bug | `/workflow:lite-fix` |
| Urgent production incident | `/workflow:lite-fix --hotfix` |

### Accumulation Strategy

**Recommended thresholds**:
- Process in batch after accumulating 5-10 issues
- Or process on a fixed cadence (e.g. every Friday afternoon)
- Urgent problems are the exception: mark them P1 and handle them immediately

### Queue Strategy

```javascript
// Conflict detection rule (pseudocode)
if (solution_A.files ∩ solution_B.files !== ∅) {
  // File conflict: the two solutions must run serially
  queue.addDependency(solution_A, solution_B)
}

// Priority ordering
sort by:
  1. priority (P1 > P2 > P3)
  2. dependencies (depended-on solutions run first)
  3. complexity (lower complexity runs first)
```
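
As a rough, runnable sketch of that rule: the snippet below assumes each solution carries `id`, `priority`, `complexity`, and a `files` array, which are assumptions for illustration. The real queue builder, which delegates the conflict analysis to the Gemini CLI, is considerably more involved.

```javascript
// Sketch of the queue strategy above: pairwise file-overlap check plus ordering.
function buildQueue(solutions) {
  const deps = new Map(solutions.map(s => [s.id, new Set()]));

  // Conflict detection: overlapping files force the pair to run serially
  for (let i = 0; i < solutions.length; i++) {
    for (let j = i + 1; j < solutions.length; j++) {
      const a = solutions[i], b = solutions[j];
      if (a.files.some(f => b.files.includes(f))) {
        deps.get(b.id).add(a.id); // record that b must not run in parallel with a
      }
    }
  }

  // Priority ordering: P1 before P2 before P3, then lower complexity first
  const rank = { P1: 1, P2: 2, P3: 3, P4: 4, P5: 5 };
  const ordered = [...solutions].sort(
    (a, b) => (rank[a.priority] - rank[b.priority]) || (a.complexity - b.complexity)
  );

  return { ordered, deps };
}

// Usage
const { ordered, deps } = buildQueue([
  { id: 'S1', priority: 'P2', complexity: 1, files: ['src/auth/login.ts'] },
  { id: 'S2', priority: 'P1', complexity: 2, files: ['src/auth/login.ts', 'src/auth/token.ts'] }
]);
console.log(ordered.map(s => s.id)); // [ 'S2', 'S1' ] by priority
console.log(deps);                   // S2 → { S1 }: overlapping files, must be serialized
```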
---

## Serial Unsupervised Execution

**Recommended: use the Codex command for serial, unsupervised execution**:

```bash
codex -p "@.codex/prompts/issue-execute.md"
```

### Execution Flow

```
INIT: ccw issue next
WHILE a solution exists:
  ├── 1. Parse the solution JSON
  ├── 2. Execute the tasks one by one:
  │     ├── IMPLEMENT: implement step by step
  │     ├── TEST: run the tests to verify
  │     └── VERIFY: check the acceptance criteria
  ├── 3. Commit the code (one commit per solution)
  ├── 4. Report completion: ccw issue done <id>
  └── 5. Fetch the next one: ccw issue next
COMPLETE: print the final report
```
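
The shape of that loop can be sketched as a small Node script. This is only an outline: it assumes `ccw issue next` prints one solution as JSON and prints nothing once the queue is empty, which is an assumption about its output format, and the real executor additionally implements, tests, and commits each task instead of just logging it.

```javascript
// Sketch of the serial driver loop (output format of `ccw issue next` is assumed).
const { execFileSync } = require('node:child_process');

function ccw(...args) {
  return execFileSync('ccw', args, { encoding: 'utf8' }).trim();
}

function runQueue() {
  for (;;) {
    const raw = ccw('issue', 'next');   // INIT / step 5: fetch the next solution
    if (!raw) break;                    // queue empty → COMPLETE

    const solution = JSON.parse(raw);   // step 1: parse the solution JSON
    try {
      for (const task of solution.tasks) {
        // step 2: IMPLEMENT / TEST / VERIFY happen here in the real executor
        console.log(`executing ${solution.id} / ${task.id}`);
      }
      // step 3 (one commit per solution) omitted; step 4: report completion
      ccw('issue', 'done', solution.id);
    } catch (err) {
      // report the failure and continue with the next solution
      ccw('issue', 'done', solution.id, '--fail');
    }
  }
  console.log('Queue drained: final report goes here');
}

runQueue();
```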
### Worktree Mode (Recommended for Parallel Execution)

```bash
# Create an isolated worktree
codex -p "@.codex/prompts/issue-execute.md --worktree"

# Resume an interrupted run
codex -p "@.codex/prompts/issue-execute.md --worktree /path/to/existing"
```

**Benefits**:
- Parallel executors do not conflict
- The main working directory stays clean
- Easy to clean up after execution
- Supports resuming after interruption

### Execution Rules

1. **Never stop midway** - keep executing until the queue is empty
2. **One solution at a time** - finish it completely (all tasks + commit + report) before moving on
3. **Serial within a solution** - each task's implement/test/verify steps run in order
4. **Tests must pass** - if any task's tests fail, fix them before continuing
5. **One commit per solution** - all of a solution's tasks share a single commit
6. **Self-verification** - every acceptance criterion must pass
7. **Accurate reporting** - report completion with `ccw issue done`
8. **Handle failures gracefully** - on failure, report it and continue with the next solution

### Commit Format

```
[commit_type](scope): [solution.description]

## Solution Summary
- **Solution-ID**: SOL-ISS-20251227-001-1
- **Issue-ID**: ISS-20251227-001
- **Risk/Impact/Complexity**: low/medium/low

## Tasks Completed
- [T1] Implement user authentication: Modify src/auth/
- [T2] Add test coverage: Add tests/auth/

## Files Modified
- src/auth/login.ts
- tests/auth/login.test.ts

## Verification
- All unit tests passed
- All acceptance criteria verified
```
---

## Best Practices

### 1. Issue Quality

Write high-quality issue descriptions:

```json
{
  "title": "A clear, concise title",
  "context": {
    "problem": "Concrete description of the problem",
    "impact": "Scope of the impact",
    "reproduction": "Reproduction steps (if applicable)"
  },
  "priority": "P1-P5"
}
```

### 2. Solution Review

Review the generated solutions before executing them:

```bash
ccw issue status <id>   # Show solution details
```

Checkpoints:
- Are the modification points accurate?
- Is the test coverage sufficient?
- Are the acceptance criteria verifiable?

### 3. Queue Monitoring

```bash
ccw issue queue        # Show current queue status
ccw issue queue list   # Show queue history
```

### 4. Failure Handling

```bash
# A single failure
ccw issue done <id> --fail --reason '{"task_id": "T1", "error": "..."}'

# Retry failed items
ccw issue retry --queue QUE-xxx
```

### 5. History and Traceability

```bash
ccw issue history         # Show completed issues
ccw issue history --json  # Export as JSON
```
---

## Workflow Comparison

| Dimension | Issue Loop | lite-fix | coupled |
|-----------|------------|----------|---------|
| **Best for** | Batches of problems | A single bug | Complex features |
| **Problem count** | 3+ | 1 | 1 |
| **Lifecycle** | Two phases | Single pass | Multi-phase |
| **Conflict detection** | Yes | No | No |
| **Parallel support** | Worktree mode | No | No |
| **Tracking & audit** | Full | Basic | Full |

---

## Quick Reference

### Full Workflow Commands

```bash
# 1. Accumulation
/issue:new        # Create manually
/issue:discover   # Discover automatically

# 2. Planning
/issue:plan --all-pending

# 3. Queueing
/issue:queue

# 4. Execution (codex recommended)
codex -p "@.codex/prompts/issue-execute.md"

# Or execute manually
/issue:execute
```

### CLI Cheat Sheet

```bash
ccw issue list                 # List issues
ccw issue status <id>          # Show details
ccw issue plan --all-pending   # Plan in batch
ccw issue queue                # Create a queue
ccw issue next                 # Fetch the next solution
ccw issue done <id>            # Mark as completed
ccw issue history              # Show history
```

---

## Summary

The Issue Loop workflow is the best choice in CCW for handling problems in batches: its two-phase lifecycle lets you accumulate issues efficiently and resolve them in a concentrated pass. Combined with Codex's serial unsupervised execution, it raises throughput substantially without sacrificing quality.

**Remember**:
- Accumulate enough issues (5-10) before processing them in batch
- Use Codex for serial unsupervised execution
- Use Worktree mode for parallel execution
- Keep issue descriptions high quality

4
package-lock.json generated
View File

@@ -1,12 +1,12 @@
{
"name": "claude-code-workflow",
"version": "6.3.23",
"version": "6.3.31",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "claude-code-workflow",
"version": "6.3.23",
"version": "6.3.31",
"license": "MIT",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.0.4",

View File

@@ -1,6 +1,6 @@
{
"name": "claude-code-workflow",
"version": "6.3.28",
"version": "6.3.31",
"description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
"type": "module",
"main": "ccw/src/index.js",