Mirror of https://github.com/catlog22/Claude-Code-Workflow.git
Synced 2026-02-06 01:54:11 +08:00

Compare commits (26 commits)
| SHA1 |
|---|
| 464f3343f3 |
| bb6cf42df6 |
| 0f0cb7e08e |
| 39d070eab6 |
| 9ccaa7e2fd |
| eeb90949ce |
| 7b677b20fb |
| e2d56bc08a |
| d515090097 |
| d81dfaf143 |
| d7e5ee44cc |
| dde39fc6f5 |
| 9b4fdc1868 |
| 623afc1d35 |
| 085652560a |
| af4ddb1280 |
| 7db659f0e1 |
| ba526ea09e |
| c308e429f8 |
| c24ed016cb |
| 0c9a6d4154 |
| 7b5c3cacaa |
| e6e7876b38 |
| 0eda520fd7 |
| e22b525e9c |
| 86536aaa10 |
@@ -29,9 +29,8 @@ Available CLI endpoints are dynamically defined by the config file:

```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```
- **After CLI call**: Stop immediately - let CLI execute in background, do NOT poll with TaskOutput
- **After CLI call**: Stop immediately - let CLI execute in background

### CLI Analysis Calls
- **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:
@@ -308,3 +308,14 @@ When analysis is complete, ensure:

- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations

## Output Size Limits

**Per-role limits** (prevent context overflow):
- `analysis.md`: < 3000 words
- `analysis-*.md`: < 2000 words each (max 5 sub-documents)
- Total: < 15000 words per role

**Strategies**: Be concise, use bullet points, reference don't repeat, prioritize top 3-5 items, defer details

**If exceeded**: Split essential vs nice-to-have, move extras to `analysis-appendix.md` (counts toward limit), use executive summary style
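The limits above can be checked mechanically. A minimal sketch, assuming word counts are approximated by whitespace splitting (the function and document shape here are illustrative, not part of the workflow's tooling):

```javascript
// Hedged sketch: enforce the per-role output size limits listed above.
const LIMITS = {
  main: 3000,    // analysis.md
  sub: 2000,     // each analysis-*.md, max 5 sub-documents
  total: 15000   // per role
};

// Naive whitespace-based word count (an approximation).
function countWords(text) {
  const trimmed = text.trim();
  return trimmed === '' ? 0 : trimmed.split(/\s+/).length;
}

// docs: array of { name, content }; returns a list of violation strings.
function checkRoleOutput(docs) {
  const violations = [];
  let total = 0;
  const subDocs = docs.filter(d => /^analysis-.+\.md$/.test(d.name));
  if (subDocs.length > 5) violations.push('more than 5 sub-documents');
  for (const d of docs) {
    const words = countWords(d.content);
    total += words;
    const limit = d.name === 'analysis.md' ? LIMITS.main : LIMITS.sub;
    if (words > limit) violations.push(`${d.name}: ${words} > ${limit} words`);
  }
  if (total > LIMITS.total) violations.push(`total ${total} > ${LIMITS.total} words`);
  return violations;
}
```

A role that stays within all three limits yields an empty violation list.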
@@ -1,7 +1,7 @@

---
name: execute
description: Execute queue with DAG-based parallel orchestration (one commit per solution)
argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
@@ -19,14 +19,57 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo

- **Executor handles all tasks within a solution sequentially**
- **Single worktree for entire queue**: One worktree isolates ALL queue execution from main workspace

## Queue ID Requirement (MANDATORY)

**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.

### If Queue ID Not Provided

When the `--queue` parameter is missing, you MUST:

1. **List available queues** by running:
   ```javascript
   const result = Bash('ccw issue queue list --brief --json');
   const index = JSON.parse(result);
   ```

2. **Display available queues** to the user:
   ```
   Available Queues:
   ID                    Status      Progress    Issues
   -----------------------------------------------------------
   → QUE-20251215-001    active      3/10        ISS-001, ISS-002
     QUE-20251210-002    active      0/5         ISS-003
     QUE-20251205-003    completed   8/8         ISS-004
   ```

3. **Stop and ask the user** to specify which queue to execute:
   ```javascript
   AskUserQuestion({
     questions: [{
       question: "Which queue would you like to execute?",
       header: "Queue",
       multiSelect: false,
       options: index.queues
         .filter(q => q.status === 'active')
         .map(q => ({
           label: q.id,
           description: `${q.status}, ${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
         }))
     }]
   })
   ```

4. **After user selection**, continue execution with the selected queue ID.

**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of the wrong queue.

## Usage

```bash
/issue:execute                                # Execute active queue(s)
/issue:execute --queue QUE-xxx                # Execute specific queue
/issue:execute --worktree                     # Execute entire queue in isolated worktree
/issue:execute --worktree --queue QUE-xxx
/issue:execute --worktree /path/to/existing/worktree   # Resume in existing worktree
/issue:execute --queue QUE-xxx                # Execute specific queue (REQUIRED)
/issue:execute --queue QUE-xxx --worktree     # Execute in isolated worktree
/issue:execute --queue QUE-xxx --worktree /path/to/existing/worktree   # Resume
```

**Parallelism**: Determined automatically by task dependency DAG (no manual control)
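The flag grammar in the usage block above can be sketched as a tiny parser. This is a minimal illustration, not the command's real argument handling; the function name and result shape are assumptions:

```javascript
// Hedged sketch: parse /issue:execute arguments into { queue, worktree, worktreePath }.
// --queue takes a required value; --worktree takes an optional existing path.
function parseExecuteArgs(argv) {
  const args = { queue: null, worktree: false, worktreePath: null };
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === '--queue') {
      // Consume the queue id that must follow the flag
      args.queue = argv[++i] || null;
    } else if (argv[i] === '--worktree') {
      args.worktree = true;
      // Optional path: consume the next token only if it is not another flag
      if (argv[i + 1] && !argv[i + 1].startsWith('--')) {
        args.worktreePath = argv[++i];
      }
    }
  }
  return args;
}
```

Both flag orders shown in the usage block resolve to the same structure.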
@@ -44,13 +44,18 @@ Minimal orchestrator that dispatches **solution IDs** to executors. Each executo

## Execution Flow

```
Phase 0 (if --worktree): Setup Queue Worktree
Phase 0: Validate Queue ID (REQUIRED)
├─ If --queue provided → use specified queue
├─ If --queue missing → list queues, prompt user to select
└─ Store QUEUE_ID for all subsequent commands

Phase 0.5 (if --worktree): Setup Queue Worktree
├─ Create ONE worktree for entire queue: .ccw/worktrees/queue-<timestamp>
├─ All subsequent execution happens in this worktree
└─ Main workspace remains clean and untouched

Phase 1: Get DAG & User Selection
├─ ccw issue queue dag [--queue QUE-xxx] → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
├─ ccw issue queue dag --queue ${QUEUE_ID} → { parallel_batches: [["S-1","S-2"], ["S-3"]] }
└─ AskUserQuestion → executor type (codex|gemini|agent), dry-run mode, worktree mode

Phase 2: Dispatch Parallel Batch (DAG-driven)
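The DAG-driven dispatch described above can be sketched as a loop over `parallel_batches`: solutions inside one batch are independent and run concurrently, while batches run in order. `dispatchSolution` is a stand-in for the real executor call, not part of the ccw CLI:

```javascript
// Hedged sketch: drive queue execution from the DAG's parallel_batches.
async function runQueue(dag, dispatchSolution) {
  const completed = [];
  for (const batch of dag.parallel_batches) {
    // All solutions in one batch are independent → dispatch in parallel
    await Promise.all(batch.map(id => dispatchSolution(id)));
    completed.push(...batch);
  }
  return completed;
}
```

With `parallel_batches: [["S-1","S-2"], ["S-3"]]`, S-1 and S-2 run concurrently and S-3 starts only after both finish.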
@@ -75,11 +123,65 @@ Phase 4 (if --worktree): Worktree Completion

## Implementation

### Phase 0: Validate Queue ID

```javascript
// Check if --queue was provided
let QUEUE_ID = args.queue;

if (!QUEUE_ID) {
  // List available queues
  const listResult = Bash('ccw issue queue list --brief --json').trim();
  const index = JSON.parse(listResult);

  if (index.queues.length === 0) {
    console.log('No queues found. Use /issue:queue to create one first.');
    return;
  }

  // Filter active queues only
  const activeQueues = index.queues.filter(q => q.status === 'active');

  if (activeQueues.length === 0) {
    console.log('No active queues found.');
    console.log('Available queues:', index.queues.map(q => `${q.id} (${q.status})`).join(', '));
    return;
  }

  // Display and prompt user
  console.log('\nAvailable Queues:');
  console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
  console.log('-'.repeat(70));
  for (const q of index.queues) {
    const marker = q.id === index.active_queue_id ? '→ ' : '  ';
    console.log(marker + q.id.padEnd(20) + q.status.padEnd(12) +
      `${q.completed_solutions || 0}/${q.total_solutions || 0}`.padEnd(12) +
      q.issue_ids.join(', '));
  }

  const answer = AskUserQuestion({
    questions: [{
      question: "Which queue would you like to execute?",
      header: "Queue",
      multiSelect: false,
      options: activeQueues.map(q => ({
        label: q.id,
        description: `${q.completed_solutions || 0}/${q.total_solutions || 0} completed, Issues: ${q.issue_ids.join(', ')}`
      }))
    }]
  });

  QUEUE_ID = answer['Queue'];
}

console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
```

### Phase 1: Get DAG & User Selection

```javascript
// Get dependency graph and parallel batches
const dagJson = Bash(`ccw issue queue dag`).trim();
// Get dependency graph and parallel batches (QUEUE_ID required)
const dagJson = Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim();
const dag = JSON.parse(dagJson);

if (dag.error || dag.ready_count === 0) {
@@ -298,8 +400,8 @@ ccw issue done ${solutionId} --fail --reason '{"task_id": "TX", "error_type": "t

### Phase 3: Check Next Batch

```javascript
// Refresh DAG after batch completes
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());
// Refresh DAG after batch completes (use same QUEUE_ID)
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag --queue ${QUEUE_ID}`).trim());

console.log(`
## Batch Complete
@@ -309,9 +411,9 @@ console.log(`
`);

if (refreshedDag.ready_count > 0) {
  console.log('Run `/issue:execute` again for next batch.');
  console.log(`Run \`/issue:execute --queue ${QUEUE_ID}\` again for next batch.`);
  // Note: If resuming, pass existing worktree path:
  //   /issue:execute --worktree <worktreePath>
  //   /issue:execute --queue ${QUEUE_ID} --worktree <worktreePath>
}
```
@@ -367,10 +469,12 @@ if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_coun

┌─────────────────────────────────────────────────────────────────┐
│ Orchestrator                                                    │
├─────────────────────────────────────────────────────────────────┤
│ 0. (if --worktree) Create ONE worktree for entire queue         │
│ 0. Validate QUEUE_ID (required, or prompt user to select)       │
│                                                                 │
│ 0.5 (if --worktree) Create ONE worktree for entire queue        │
│     → .ccw/worktrees/queue-exec-<queue-id>                      │
│                                                                 │
│ 1. ccw issue queue dag                                          │
│ 1. ccw issue queue dag --queue ${QUEUE_ID}                      │
│    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }             │
│                                                                 │
│ 2. Dispatch batch 1 (parallel, SAME worktree):                  │
@@ -405,8 +509,19 @@ if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_coun

## CLI Endpoint Contract

### `ccw issue queue dag`
Returns dependency graph with parallel batches (solution-level):
### `ccw issue queue list --brief --json`
Returns queue index for selection (used when --queue not provided):
```json
{
  "active_queue_id": "QUE-20251215-001",
  "queues": [
    { "id": "QUE-20251215-001", "status": "active", "issue_ids": ["ISS-001"], "total_solutions": 5, "completed_solutions": 2 }
  ]
}
```

### `ccw issue queue dag --queue <queue-id>`
Returns dependency graph with parallel batches (solution-level, **--queue required**):
```json
{
  "queue_id": "QUE-...",
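Given the queue list contract above, rendering one row of the selection table reduces to a small formatter. A sketch over the documented fields only (the function name is illustrative):

```javascript
// Hedged sketch: format one row of the queue table from the
// `ccw issue queue list --brief --json` contract shown above.
function formatQueueRow(q, activeQueueId) {
  const marker = q.id === activeQueueId ? '→ ' : '  ';
  const progress = `${q.completed_solutions || 0}/${q.total_solutions || 0}`;
  return marker + q.id.padEnd(20) + q.status.padEnd(12) +
    progress.padEnd(12) + q.issue_ids.join(', ');
}
```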
@@ -424,6 +424,17 @@ CONTEXT_VARS:

- **Agent execution failure**: Agent-specific retry with minimal dependencies
- **Template loading issues**: Agent handles graceful degradation
- **Synthesis conflicts**: Synthesis highlights disagreements without resolution
- **Context overflow protection**: See below for automatic context management

## Context Overflow Protection

**Per-role limits**: See `conceptual-planning-agent.md` (< 3000 words main, < 2000 words sub-docs, max 5 sub-docs)

**Synthesis protection**: If total analysis > 100KB, synthesis reads only `analysis.md` files (not sub-documents)

**Recovery**: Check logs → reduce scope (--count 2) → use --summary-only → manual synthesis

**Prevention**: Start with --count 3, use structured topic format, review output sizes before synthesis

## Reference Information
@@ -132,7 +132,7 @@ Scan and analyze workflow session directories:

**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
- Archives: >30 days old + no feature references in project.json
- Archives: >30 days old + no feature references in project-tech.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits
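The staleness criteria above can be expressed as a single predicate. A minimal sketch, assuming the git/reference checks have already been evaluated into booleans (the entry shape and field names are illustrative):

```javascript
// Hedged sketch: the staleness criteria above as a predicate.
// `now` and `modifiedAt` are millisecond timestamps; the boolean
// fields stand in for the git/reference checks described above.
const DAY_MS = 24 * 60 * 60 * 1000;

function isStale(entry, now) {
  const ageDays = (now - entry.modifiedAt) / DAY_MS;
  switch (entry.kind) {
    case 'active-session': return ageDays > 7 && !entry.hasRelatedCommits;
    case 'archive':        return ageDays > 30 && !entry.referencedInProjectTech;
    case 'lite-plan':      return ageDays > 7 && !entry.planExecuted;
    case 'debug':          return ageDays > 3 && !entry.issueInRecentCommits;
    default:               return false;
  }
}
```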
@@ -443,8 +443,8 @@ if (selectedCategories.includes('Sessions')) {

  }
}

// Update project.json if features referenced deleted sessions
const projectPath = '.workflow/project.json'
// Update project-tech.json if features referenced deleted sessions
const projectPath = '.workflow/project-tech.json'
if (fileExists(projectPath)) {
  const project = JSON.parse(Read(projectPath))
  const deletedPaths = new Set(results.deleted)
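The update step above then removes references to deleted sessions. A self-contained sketch of that pruning, assuming a `development_index` whose category arrays hold entries with a `session_path` field (an assumed shape, shown here for illustration):

```javascript
// Hedged sketch: drop references to deleted session paths from a
// development_index of shape { feature: [...], bugfix: [...], ... }.
function pruneDeletedSessions(developmentIndex, deletedPaths) {
  const deleted = new Set(deletedPaths);
  const pruned = {};
  for (const [category, entries] of Object.entries(developmentIndex)) {
    // Keep only entries whose session still exists
    pruned[category] = entries.filter(e => !deleted.has(e.session_path));
  }
  return pruned;
}
```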
@@ -311,6 +311,12 @@ Output:

└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

---

## Error Handling

| Situation | Action |
@@ -275,6 +275,10 @@ AskUserQuestion({

- **"Enter Review"**: Execute `/workflow:review`
- **"Complete Session"**: Execute `/workflow:session:complete`

### Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

## Execution Strategy (IMPL_PLAN-Driven)

### Strategy Priority
@@ -108,11 +108,24 @@ Analyze project for workflow initialization and generate .workflow/project-tech.

2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)

## Task
Generate complete project-tech.json with:
- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
Generate complete project-tech.json following the schema structure:
- project_name: "${projectName}"
- initialized_at: ISO 8601 timestamp
- overview: {
    description: "Brief project description",
    technology_stack: {
      languages: [{name, file_count, primary}],
      frameworks: ["string"],
      build_tools: ["string"],
      test_frameworks: ["string"]
    },
    architecture: {style, layers: [], patterns: []},
    key_components: [{name, path, description, importance}]
  }
- features: []
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}

## Analysis Requirements

@@ -132,7 +145,7 @@ Generate complete project-tech.json with:

1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary
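The schema fields listed above can be captured as an empty skeleton builder. A minimal sketch using only the documented field names, with placeholder values (the function name itself is illustrative):

```javascript
// Hedged sketch: an empty project-tech.json skeleton matching the
// schema structure listed above.
function emptyProjectTech(projectName, now = new Date().toISOString()) {
  return {
    project_name: projectName,
    initialized_at: now,
    overview: {
      description: '',
      technology_stack: { languages: [], frameworks: [], build_tools: [], test_frameworks: [] },
      architecture: { style: '', layers: [], patterns: [] },
      key_components: []
    },
    features: [],
    development_index: { feature: [], enhancement: [], bugfix: [], refactor: [], docs: [] },
    statistics: { total_features: 0, total_sessions: 0, last_updated: now },
    _metadata: { initialized_by: 'cli-explore-agent', analysis_timestamp: now, analysis_mode: 'deep-scan' }
  };
}
```

On regeneration, `development_index` and `statistics` would be replaced by the values preserved from the backup rather than these empty defaults.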
@@ -181,16 +194,16 @@ console.log(`

✓ Project initialized successfully

## Project Overview
Name: ${projectTech.project_metadata.name}
Description: ${projectTech.technology_analysis.description}
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}

### Technology Stack
Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}

### Architecture
Style: ${projectTech.technology_analysis.architecture.style}
Components: ${projectTech.technology_analysis.key_components.length} core modules
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules

---
Files created:
@@ -531,11 +531,11 @@ if (hasUnresolvedIssues(reviewResult)) {

**Trigger**: After all executions complete (regardless of code review)

**Skip Condition**: Skip if `.workflow/project.json` does not exist
**Skip Condition**: Skip if `.workflow/project-tech.json` does not exist

**Operations**:
```javascript
const projectJsonPath = '.workflow/project.json'
const projectJsonPath = '.workflow/project-tech.json'
if (!fileExists(projectJsonPath)) return // Silent skip

const projectJson = JSON.parse(Read(projectJsonPath))
@@ -664,6 +664,10 @@ Collected after each execution call completes:

Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.

**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
@@ -10,63 +10,33 @@ allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Bash(*), mcp_

## Quick Start

```bash
# Basic usage
/workflow:lite-lite-lite "Fix the login bug"

# Complex task
/workflow:lite-lite-lite "Refactor payment module for multi-gateway support"
```

**Core Philosophy**: Minimal friction, maximum velocity. No files, no artifacts - just analyze and execute.

## What & Why
## Overview

### Core Concept
**Zero-artifact workflow**: Clarify → Select Tools → Multi-Mode Analysis → Decision → Direct Execution

**Zero-artifact workflow**: Clarify requirements → Auto-select tools → Mixed tool analysis → User decision → Direct execution. All state in memory, all decisions via AskUser.

**vs multi-cli-plan**:
- **multi-cli-plan**: Full artifacts (IMPL_PLAN.md, plan.json, synthesis.json)
- **lite-lite-lite**: No files, direct in-memory flow, immediate execution

### Value Proposition

1. **Ultra-Fast**: No file I/O overhead, no session management
2. **Smart Selection**: Auto-select optimal tool combination based on task
3. **Interactive**: Key decisions validated via AskUser
4. **Direct**: Analysis → Execution without intermediate artifacts
**vs multi-cli-plan**: No IMPL_PLAN.md, plan.json, synthesis.json - all state in memory.

## Execution Flow

```
Phase 1: Clarify Requirements
└─ Parse input → AskUser for missing details (if needed)

Phase 2: Auto-Select Tools
└─ Analyze task → Match to tool strengths → Confirm selection

Phase 3: Mixed Tool Analysis
└─ Execute selected tools in parallel → Aggregate results

Phase 4: User Decision
├─ Present analysis summary
├─ AskUser: Execute / Refine / Change tools / Cancel
└─ Loop to Phase 3 if refinement needed

Phase 5: Direct Execution
└─ Execute solution directly (no plan files)
Phase 1: Clarify Requirements → AskUser for missing details
Phase 2: Select Tools (CLI → Mode → Agent) → 3-step selection
Phase 3: Multi-Mode Analysis → Execute with --resume chaining
Phase 4: User Decision → Execute / Refine / Change / Cancel
Phase 5: Direct Execution → No plan files, immediate implementation
```

## Phase Details
## Phase 1: Clarify Requirements

### Phase 1: Clarify Requirements

**Parse Task Description**:
```javascript
// Extract intent from user input
const taskDescription = $ARGUMENTS

// Check if clarification needed
if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {
  AskUserQuestion({
    questions: [{
@@ -80,173 +50,72 @@ if (taskDescription.length < 20 || isAmbiguous(taskDescription)) {

    }]
  })
}
```

**Quick ACE Context** (optional, for complex tasks):
```javascript
// Only if task seems to need codebase context
// Optional: Quick ACE Context for complex tasks
mcp__ace-tool__search_context({
  project_root_path: process.cwd(),
  query: `${taskDescription} implementation patterns`
})
```

### Phase 2: Auto-Select Analysis Tools
## Phase 2: Select Tools

**Tool Categories**:
### Tool Definitions

| Category | Source | Execution |
|----------|--------|-----------|
| **CLI Tools** | cli-tools.json | `ccw cli -p "..." --tool <name>` |
| **Sub Agents** | Task tool | `Task({ subagent_type: "...", prompt: "..." })` |

**Task Analysis Dimensions**:
**CLI Tools** (from cli-tools.json):
```javascript
function analyzeTask(taskDescription) {
  return {
    complexity: detectComplexity(taskDescription),        // simple, medium, complex
    taskType: detectTaskType(taskDescription),            // bugfix, feature, refactor, analysis, etc.
    domain: detectDomain(taskDescription),                // frontend, backend, fullstack
    needsExecution: detectExecutionNeed(taskDescription)  // analysis-only vs needs-write
  }
}
```

**CLI Tools** (dynamically loaded from cli-tools.json):

```javascript
// Load CLI tools from config file
const cliConfig = JSON.parse(Read("~/.claude/cli-tools.json"))
const cliTools = Object.entries(cliConfig.tools)
  .filter(([_, config]) => config.enabled)
  .map(([name, config]) => ({
    name,
    type: 'cli',
    name, type: 'cli',
    tags: config.tags || [],
    model: config.primaryModel,
    toolType: config.type // builtin, cli-wrapper, api-endpoint
  }))
```

**Tags** (user-defined in cli-tools.json, no fixed specification):

Tags are completely user-defined. Users can create any tags that match their workflow needs.

**Config Example** (cli-tools.json):
```json
{
  "tools": {
    "gemini": {
      "enabled": true,
      "tags": ["architecture", "reasoning", "performance"],
      "primaryModel": "gemini-2.5-pro"
    },
    "codex": {
      "enabled": true,
      "tags": ["implementation", "fast"],
      "primaryModel": "gpt-5.2"
    },
    "qwen": {
      "enabled": true,
      "tags": ["implementation", "chinese", "documentation"],
      "primaryModel": "coder-model"
    }
  }
}
```

**Sub Agents** (predefined, canExecute marks execution capability):

```javascript
const agents = [
  { name: 'code-developer', type: 'agent', strength: 'Code implementation, test writing', canExecute: true },
  { name: 'Explore', type: 'agent', strength: 'Fast code exploration', canExecute: false },
  { name: 'cli-explore-agent', type: 'agent', strength: 'Dual-source deep analysis', canExecute: false },
  { name: 'cli-discuss-agent', type: 'agent', strength: 'Multi-CLI collaborative verification', canExecute: false },
  { name: 'debug-explore-agent', type: 'agent', strength: 'Hypothesis-driven debugging', canExecute: false },
  { name: 'context-search-agent', type: 'agent', strength: 'Context collection', canExecute: false },
  { name: 'test-fix-agent', type: 'agent', strength: 'Test execution and fixing', canExecute: true },
  { name: 'universal-executor', type: 'agent', strength: 'General multi-step execution', canExecute: true }
]
```
**Sub Agents**:

| Agent | Strengths | canExecute |
|-------|-----------|------------|
| **code-developer** | Code implementation, test writing, incremental development | ✅ |
| **Explore** | Fast code exploration, file search, pattern discovery | ❌ |
| **cli-explore-agent** | Dual-source analysis (Bash+CLI), read-only exploration | ❌ |
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification, solution synthesis | ❌ |
| **debug-explore-agent** | Hypothesis-driven debugging, NDJSON logging, iterative verification | ❌ |
| **context-search-agent** | Multi-layer file discovery, dependency analysis, conflict assessment | ❌ |
| **code-developer** | Code implementation, test writing | ✅ |
| **Explore** | Fast code exploration, pattern discovery | ❌ |
| **cli-explore-agent** | Dual-source analysis (Bash+CLI) | ❌ |
| **cli-discuss-agent** | Multi-CLI collaboration, cross-verification | ❌ |
| **debug-explore-agent** | Hypothesis-driven debugging | ❌ |
| **context-search-agent** | Multi-layer file discovery, dependency analysis | ❌ |
| **test-fix-agent** | Test execution, failure diagnosis, code fixing | ✅ |
| **universal-executor** | General execution, multi-domain adaptation | ✅ |

**Three-Step Selection Flow** (CLI → Mode → Agent):
**Analysis Modes**:

| Mode | Pattern | Use Case | minCLIs |
|------|---------|----------|---------|
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective | 1+ |
| **Sequential** | `A → B(resume) → C(resume)` | Incremental deepening | 2+ |
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Adversarial validation | 2 |
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |

### Three-Step Selection Flow

```javascript
// Step 1: Present CLI options from config (multiSelect for multi-CLI modes)
function getCliDescription(cli) {
  return cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
}

const cliOptions = cliTools.map(cli => ({
  label: cli.name,
  description: getCliDescription(cli)
}))

// Step 1: Select CLIs (multiSelect)
AskUserQuestion({
  questions: [{
    question: "Select CLI tools for analysis (select 1-3 for collaboration modes)",
    question: "Select CLI tools for analysis (1-3 for collaboration modes)",
    header: "CLI Tools",
    options: cliOptions,
    multiSelect: true // Allow multiple selection for collaboration modes
    options: cliTools.map(cli => ({
      label: cli.name,
      description: cli.tags.length > 0 ? cli.tags.join(', ') : cli.model || 'general'
    })),
    multiSelect: true
  }]
})
```

```javascript
// Step 2: Select Analysis Mode
const analysisModes = [
  {
    name: 'parallel',
    label: 'Parallel',
    description: 'All CLIs analyze simultaneously, aggregate results',
    minCLIs: 1,
    pattern: 'A || B || C → Aggregate'
  },
  {
    name: 'sequential',
    label: 'Sequential',
    description: 'Chain analysis: each CLI builds on previous via --resume',
    minCLIs: 2,
    pattern: 'A → B(resume A) → C(resume B)'
  },
  {
    name: 'collaborative',
    label: 'Collaborative',
    description: 'Multi-round synthesis: CLIs take turns refining analysis',
    minCLIs: 2,
    pattern: 'A → B(resume A) → A(resume B) → Synthesize'
  },
  {
    name: 'debate',
    label: 'Debate',
    description: 'Adversarial: CLI B challenges CLI A findings, A responds',
    minCLIs: 2,
    pattern: 'A(propose) → B(challenge, resume A) → A(defend, resume B)'
  },
  {
    name: 'challenge',
    label: 'Challenge',
    description: 'Stress test: CLI B finds flaws/alternatives in CLI A analysis',
    minCLIs: 2,
    pattern: 'A(analyze) → B(challenge, resume A) → Evaluate'
  }
]

// Filter modes based on selected CLI count
// Step 2: Select Mode (filtered by CLI count)
const availableModes = analysisModes.filter(m => selectedCLIs.length >= m.minCLIs)

AskUserQuestion({
  questions: [{
    question: "Select analysis mode",
@@ -258,43 +127,24 @@ AskUserQuestion({

    multiSelect: false
  }]
})
```

```javascript
// Step 3: Present Agent options for execution
const agentOptions = agents.map(agent => ({
  label: agent.name,
  description: agent.strength
}))

// Step 3: Select Agent for execution
AskUserQuestion({
  questions: [{
    question: "Select Sub Agent for execution",
    header: "Agent",
    options: agentOptions,
    options: agents.map(a => ({ label: a.name, description: a.strength })),
    multiSelect: false
  }]
})
```

**Selection Summary**:
```javascript
console.log(`
## Selected Configuration

**CLI Tools**: ${selectedCLIs.map(c => c.name).join(' → ')}
**Analysis Mode**: ${selectedMode.label} - ${selectedMode.pattern}
**Execution Agent**: ${selectedAgent.name} - ${selectedAgent.strength}

> Mode determines how CLIs collaborate, Agent handles final execution
`)

// Confirm selection
AskUserQuestion({
  questions: [{
    question: "Confirm selection?",
    header: "Confirm",
    options: [
      { label: "Confirm and continue", description: `${selectedMode.label} mode with ${selectedCLIs.length} CLIs` },
      { label: "Confirm and continue", description: `${selectedMode.label} with ${selectedCLIs.length} CLIs` },
      { label: "Re-select CLIs", description: "Choose different CLI tools" },
      { label: "Re-select Mode", description: "Choose different analysis mode" },
      { label: "Re-select Agent", description: "Choose different Sub Agent" }
@@ -304,409 +154,226 @@ AskUserQuestion({
|
||||
})
|
||||
```
|
||||
|
||||
## Phase 3: Multi-Mode Analysis

### Universal CLI Prompt Template

```javascript
// Unified prompt builder - used by all modes
function buildPrompt({ purpose, tasks, expected, rules, taskDescription }) {
  return `
PURPOSE: ${purpose}: ${taskDescription}
TASK: ${tasks.map(t => `• ${t}`).join(' ')}
MODE: analysis
CONTEXT: @**/*
EXPECTED: ${expected}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | ${rules}
`
}

// Execute CLI with prompt
function execCLI(cli, prompt, options = {}) {
  const { resume, background = false } = options
  const resumeFlag = resume ? `--resume ${resume}` : ''
  return Bash({
    command: `ccw cli -p "${prompt}" --tool ${cli.name} --mode analysis ${resumeFlag}`,
    run_in_background: background
  })
}
```

### Prompt Presets by Role

| Role | PURPOSE | TASKS | EXPECTED | RULES |
|------|---------|-------|----------|-------|
| **initial** | Initial analysis | Identify files, Analyze approach, List changes | Root cause, files, changes, risks | Focus on actionable insights |
| **extend** | Build on previous | Review previous, Extend, Add insights | Extended analysis building on findings | Build incrementally, avoid repetition |
| **synthesize** | Refine and synthesize | Review, Identify gaps, Synthesize | Refined synthesis with new perspectives | Add value not repetition |
| **propose** | Propose comprehensive analysis | Analyze thoroughly, Propose solution, State assumptions | Well-reasoned proposal with trade-offs | Be clear about assumptions |
| **challenge** | Challenge and stress-test | Identify weaknesses, Question assumptions, Suggest alternatives | Critique with counter-arguments | Be adversarial but constructive |
| **defend** | Respond to challenges | Address challenges, Defend valid aspects, Propose refined solution | Refined proposal incorporating feedback | Be open to criticism, synthesize |
| **criticize** | Find flaws ruthlessly | Find logical flaws, Identify edge cases, Rate criticisms | Critique with severity: [CRITICAL]/[HIGH]/[MEDIUM]/[LOW] | Be ruthlessly critical |

```javascript
const PROMPTS = {
  initial: { purpose: 'Initial analysis', tasks: ['Identify affected files', 'Analyze implementation approach', 'List specific changes'], expected: 'Root cause, files to modify, key changes, risks', rules: 'Focus on actionable insights' },
  extend: { purpose: 'Build on previous analysis', tasks: ['Review previous findings', 'Extend analysis', 'Add new insights'], expected: 'Extended analysis building on previous', rules: 'Build incrementally, avoid repetition' },
  synthesize: { purpose: 'Refine and synthesize', tasks: ['Review previous', 'Identify gaps', 'Add insights', 'Synthesize findings'], expected: 'Refined synthesis with new perspectives', rules: 'Build collaboratively, add value' },
  propose: { purpose: 'Propose comprehensive analysis', tasks: ['Analyze thoroughly', 'Propose solution', 'State assumptions clearly'], expected: 'Well-reasoned proposal with trade-offs', rules: 'Be clear about assumptions' },
  challenge: { purpose: 'Challenge and stress-test', tasks: ['Identify weaknesses', 'Question assumptions', 'Suggest alternatives', 'Highlight overlooked risks'], expected: 'Constructive critique with counter-arguments', rules: 'Be adversarial but constructive' },
  defend: { purpose: 'Respond to challenges', tasks: ['Address each challenge', 'Defend valid aspects', 'Acknowledge valid criticisms', 'Propose refined solution'], expected: 'Refined proposal incorporating alternatives', rules: 'Be open to criticism, synthesize best ideas' },
  criticize: { purpose: 'Stress-test and find weaknesses', tasks: ['Find logical flaws', 'Identify missed edge cases', 'Propose alternatives', 'Rate criticisms (High/Medium/Low)'], expected: 'Detailed critique with severity ratings', rules: 'Be ruthlessly critical, find every flaw' }
}
```

### Mode Implementations

```javascript
// Parallel: All CLIs run simultaneously
async function executeParallel(clis, task) {
  return await Promise.all(clis.map(cli =>
    execCLI(cli, buildPrompt({ ...PROMPTS.initial, taskDescription: task }), { background: true })
  ))
}

// Sequential: Each CLI builds on previous via --resume
async function executeSequential(clis, task) {
  const results = []
  let prevId = null
  for (const cli of clis) {
    const preset = prevId ? PROMPTS.extend : PROMPTS.initial
    const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
    results.push(result)
    prevId = extractSessionId(result)
  }
  return results
}
```

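The sequential loop relies on an `extractSessionId` helper that the workflow leaves undefined. A minimal sketch, assuming the CLI echoes a `SESSION_ID: <id>` line in its output — the actual marker depends on ccw's output format:

```javascript
// Hypothetical sketch: pull a session ID out of raw CLI output so the
// next call can pass it to --resume. The "SESSION_ID:" marker is an
// assumption, not a documented ccw contract.
function extractSessionId(output) {
  const text = typeof output === 'string' ? output : (output && output.stdout) || ''
  const match = text.match(/SESSION_ID:\s*([\w-]+)/)
  return match ? match[1] : null
}
```

Returning `null` when no marker is found lets the callers fall back to a fresh (non-resumed) session.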
```javascript
// Collaborative: Multi-round synthesis
async function executeCollaborative(clis, task, rounds = 2) {
  const results = []
  let prevId = null
  for (let r = 0; r < rounds; r++) {
    for (const cli of clis) {
      const preset = !prevId ? PROMPTS.initial : PROMPTS.synthesize
      const result = await execCLI(cli, buildPrompt({ ...preset, taskDescription: task }), { resume: prevId })
      results.push({ cli: cli.name, round: r, result })
      prevId = extractSessionId(result)
    }
  }
  return results
}
```

```javascript
// Debate: Propose → Challenge → Defend
async function executeDebate(clis, task) {
  const [cliA, cliB] = clis
  const results = []

  // Step 1: CLI A proposes initial analysis
  const propose = await execCLI(cliA, buildPrompt({ ...PROMPTS.propose, taskDescription: task }))
  results.push({ phase: 'propose', cli: cliA.name, result: propose })

  // Step 2: CLI B challenges the proposal
  const challenge = await execCLI(cliB, buildPrompt({ ...PROMPTS.challenge, taskDescription: task }), { resume: extractSessionId(propose) })
  results.push({ phase: 'challenge', cli: cliB.name, result: challenge })

  // Step 3: CLI A defends and refines
  const defend = await execCLI(cliA, buildPrompt({ ...PROMPTS.defend, taskDescription: task }), { resume: extractSessionId(challenge) })
  results.push({ phase: 'defend', cli: cliA.name, result: defend })

  return results
}
```

```javascript
// Challenge: Analyze → Criticize
async function executeChallenge(clis, task) {
  const [cliA, cliB] = clis
  const results = []

  // Step 1: CLI A provides initial analysis
  const analyze = await execCLI(cliA, buildPrompt({ ...PROMPTS.initial, taskDescription: task }))
  results.push({ phase: 'analyze', cli: cliA.name, result: analyze })

  // Step 2: CLI B challenges with focus on finding flaws
  const criticize = await execCLI(cliB, buildPrompt({ ...PROMPTS.criticize, taskDescription: task }), { resume: extractSessionId(analyze) })
  results.push({ phase: 'challenge', cli: cliB.name, result: criticize })

  return results
}
```

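The criticize output produced here is later consumed by `parseCritiques` and `calculateRiskScore` during aggregation, neither of which is defined in this document. A hypothetical sketch, assuming critiques arrive as `[SEVERITY] description` lines (matching the criticize preset's severity tags) and using made-up severity weights:

```javascript
// Hypothetical sketch: parse "[SEVERITY] description" lines and fold them
// into a 0-100 risk score. The weights below are illustrative assumptions,
// not part of the workflow spec, and the signature is simplified to take
// already-extracted critique objects.
const SEVERITY_WEIGHTS = { CRITICAL: 40, HIGH: 20, MEDIUM: 10, LOW: 5 }

function parseCritiques(output) {
  const text = typeof output === 'string' ? output : (output && output.stdout) || ''
  return [...text.matchAll(/\[(CRITICAL|HIGH|MEDIUM|LOW)\]\s*(.+)/g)]
    .map(m => ({ severity: m[1], description: m[2].trim() }))
}

function calculateRiskScore(critiques) {
  const raw = critiques.reduce((sum, c) => sum + (SEVERITY_WEIGHTS[c.severity] || 0), 0)
  return Math.min(100, raw)  // cap at 100 so many low findings cannot exceed one page
}
```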
### Mode Router & Result Aggregation

```javascript
async function executeAnalysis(mode, clis, taskDescription) {
  switch (mode.name) {
    case 'parallel': return await executeParallel(clis, taskDescription)
    case 'sequential': return await executeSequential(clis, taskDescription)
    case 'collaborative': return await executeCollaborative(clis, taskDescription)
    case 'debate': return await executeDebate(clis, taskDescription)
    case 'challenge': return await executeChallenge(clis, taskDescription)
    default: return await executeParallel(clis, taskDescription)
  }
}

// Execute based on selected mode
const analysisResults = await executeAnalysis(selectedMode, selectedCLIs, taskDescription)
```

**Result Aggregation** (mode-aware):

```javascript
function aggregateResults(mode, results) {
  const base = { mode: mode.name, pattern: mode.pattern, tools_used: results.map(r => r.cli || 'unknown') }

  switch (mode.name) {
    case 'parallel':
      return { ...base, findings: results.map(parseOutput), consensus: findCommonPoints(results), divergences: findDifferences(results) }
    case 'sequential':
      return { ...base, evolution: results.map((r, i) => ({ step: i + 1, analysis: parseOutput(r) })), finalAnalysis: parseOutput(results.at(-1)) }
    case 'collaborative':
      return { ...base, rounds: groupByRound(results), synthesis: extractSynthesis(results.at(-1)) }
    case 'debate':
      return {
        ...base,
        proposal: parseOutput(results.find(r => r.phase === 'propose')?.result),
        challenges: parseOutput(results.find(r => r.phase === 'challenge')?.result),
        resolution: parseOutput(results.find(r => r.phase === 'defend')?.result),
        confidence: calculateDebateConfidence(results)
      }
    case 'challenge':
      return {
        ...base,
        originalAnalysis: parseOutput(results.find(r => r.phase === 'analyze')?.result),
        critiques: parseCritiques(results.find(r => r.phase === 'challenge')?.result),
        riskScore: calculateRiskScore(results)
      }
  }
}

const aggregatedAnalysis = aggregateResults(selectedMode, analysisResults)
```

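The aggregation above leans on several helpers (`findCommonPoints`, `findDifferences`, `groupByRound`) that are never defined. Hypothetical sketches, assuming each parsed finding exposes a `points: string[]` array — the real shape depends on whatever `parseOutput` produces:

```javascript
// Hypothetical sketches of the aggregation helpers, operating on
// already-parsed findings of the assumed shape { points: string[] }.
function findCommonPoints(findings) {
  if (findings.length === 0) return []
  return findings
    .map(f => f.points)
    .reduce((common, pts) => common.filter(p => pts.includes(p)))
}

function findDifferences(findings) {
  const counts = new Map()
  for (const f of findings) {
    for (const p of new Set(f.points)) counts.set(p, (counts.get(p) || 0) + 1)
  }
  // Points raised by some tools but not all of them
  return [...counts.entries()].filter(([, n]) => n < findings.length).map(([p]) => p)
}

// Collaborative-mode results carry { cli, round, result }
function groupByRound(results) {
  const rounds = {}
  for (const r of results) (rounds[r.round] ??= []).push(r)
  return rounds
}
```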
## Phase 4: User Decision

```javascript
function presentSummary(analysis) {
  console.log(`## Analysis Result\n**Mode**: ${analysis.mode} (${analysis.pattern})\n**Tools**: ${analysis.tools_used.join(' → ')}`)

  switch (analysis.mode) {
    case 'parallel':
      console.log(`### Consensus\n${analysis.consensus.map(c => `- ${c}`).join('\n')}\n### Divergences\n${analysis.divergences.map(d => `- ${d}`).join('\n')}`)
      break
    case 'sequential':
      console.log(`### Evolution\n${analysis.evolution.map(e => `**Step ${e.step}**: ${e.analysis.summary}`).join('\n')}\n### Final\n${analysis.finalAnalysis.summary}`)
      break
    case 'collaborative':
      console.log(`### Rounds\n${Object.entries(analysis.rounds).map(([r, a]) => `**Round ${r}**: ${a.map(x => x.cli).join(' + ')}`).join('\n')}\n### Synthesis\n${analysis.synthesis}`)
      break
    case 'debate':
      console.log(`### Debate\n**Proposal**: ${analysis.proposal.summary}\n**Challenges**: ${analysis.challenges.points?.length || 0} points\n**Resolution**: ${analysis.resolution.summary}\n**Confidence**: ${analysis.confidence}%`)
      break
    case 'challenge':
      console.log(`### Challenge\n**Original**: ${analysis.originalAnalysis.summary}\n**Critiques**: ${analysis.critiques.length} issues\n${analysis.critiques.map(c => `- [${c.severity}] ${c.description}`).join('\n')}\n**Risk Score**: ${analysis.riskScore}/100`)
      break
  }
}

presentSummary(aggregatedAnalysis)
```

**Decision Options**:

```javascript
AskUserQuestion({
  questions: [{
    question: "How to proceed?",
    header: "Next Step",
    options: [
      { label: "Execute directly", description: "Implement immediately" },
      { label: "Refine analysis", description: "Add constraints, re-analyze" },
      { label: "Change tools", description: "Different tool combination" },
      { label: "Cancel", description: "End workflow" }
    ],
    multiSelect: false
  }]
})
// Routing: Execute → Phase 5 | Refine → Phase 3 | Change → Phase 2 | Cancel → End
```

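The routing rule above maps one-to-one onto a small lookup; a minimal sketch, where the labels must match the `AskUserQuestion` options exactly and the phase names are illustrative:

```javascript
// Minimal sketch of the routing rule. Unknown answers fall through to
// ending the workflow rather than guessing a phase.
function routeDecision(label) {
  const routes = {
    'Execute directly': 'phase-5',
    'Refine analysis': 'phase-3',
    'Change tools': 'phase-2',
    'Cancel': 'end'
  }
  return routes[label] ?? 'end'
}
```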
## Phase 5: Direct Execution

**No Artifacts - Direct Implementation**:

```javascript
// No IMPL_PLAN.md, no plan.json - direct implementation
const executionTool = selectedAgent.canExecute ? selectedAgent : selectedCLIs[0]

if (executionTool.type === 'agent') {
  // Use Agent for execution (preferred if available)
  Task({
    subagent_type: executionTool.name,
    run_in_background: false,
    description: `Execute: ${taskDescription.slice(0, 30)}`,
    prompt: `## Task\n${taskDescription}\n\n## Analysis Results\n${JSON.stringify(aggregatedAnalysis, null, 2)}\n\n## Instructions\n1. Apply changes to identified files\n2. Follow recommended approach\n3. Handle identified risks\n4. Verify changes work correctly`
  })
} else {
  // Use CLI with write mode
  Bash({
    command: `ccw cli -p "
PURPOSE: Implement solution: ${taskDescription}
TASK: ${extractedTasks.join(' • ')}
MODE: write
CONTEXT: @${affectedFiles.join(' @')}
EXPECTED: Working implementation with all changes applied
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md)
" --tool ${executionTool.name} --mode write`,
    run_in_background: false
  })
}
```

```javascript
TodoWrite({ todos: [
  { content: "Phase 1: Clarify requirements", status: "in_progress", activeForm: "Clarifying requirements" },
  { content: "Phase 2: Select tools", status: "pending", activeForm: "Selecting tools" },
  { content: "Phase 3: Multi-mode analysis", status: "pending", activeForm: "Running analysis" },
  { content: "Phase 4: User decision", status: "pending", activeForm: "Awaiting decision" },
  { content: "Phase 5: Direct execution", status: "pending", activeForm: "Executing" }
]})
```

## Iteration Patterns

| Pattern | Flow |
|---------|------|
| **Direct** | Phase 1 → 2 → 3 → 4(execute) → 5 |
| **Refinement** | Phase 3 → 4(refine) → 3 → 4 → 5 |
| **Tool Adjust** | Phase 2(adjust) → 3 → 4 → 5 |

## Error Handling

| Error | Resolution |
|-------|------------|
| CLI timeout | Retry with secondary model |
| No enabled tools | Ask user to enable tools in cli-tools.json |
| Task unclear | Default to first CLI + code-developer |
| Ambiguous task | Force clarification via AskUser |
| Execution fails | Present error, ask user for direction |

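The timeout row can be sketched as a race-based fallback. A hypothetical sketch — `runCli`, the model names, and the timeout value are all assumptions, not part of the ccw CLI:

```javascript
// Hypothetical sketch of "CLI timeout -> retry with secondary model".
// Races the CLI call against a timer; on timeout only, retries once
// with the secondary model. Any other error is surfaced unchanged.
async function withFallback(runCli, { primary, secondary, timeoutMs = 120000 }) {
  const attempt = model =>
    Promise.race([
      runCli(model),
      new Promise((_, reject) => setTimeout(() => reject(new Error('timeout')), timeoutMs))
    ])
  try {
    return await attempt(primary)
  } catch (err) {
    if (err.message !== 'timeout') throw err
    return await attempt(secondary)  // single retry on the secondary model
  }
}
```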
## Analysis Modes Reference

| Mode | Pattern | Use Case | CLI Count |
|------|---------|----------|-----------|
| **Parallel** | `A \|\| B \|\| C → Aggregate` | Fast multi-perspective analysis | 1+ |
| **Sequential** | `A → B(resume) → C(resume)` | Deep incremental analysis | 2+ |
| **Collaborative** | `A → B → A → B → Synthesize` | Multi-round refinement | 2+ |
| **Debate** | `A(propose) → B(challenge) → A(defend)` | Stress-test solutions | 2 |
| **Challenge** | `A(analyze) → B(challenge)` | Find flaws and risks | 2 |

## Comparison with multi-cli-plan

| Aspect | lite-lite-lite | multi-cli-plan |
|--------|----------------|----------------|
| **Artifacts** | None | IMPL_PLAN.md, plan.json, synthesis.json |
| **Session** | Stateless (--resume chaining) | Persistent session folder |
| **Tool Selection** | 3-step (CLI → Mode → Agent) | Config-driven fixed tools |
| **Analysis Modes** | 5 modes with --resume | Fixed synthesis rounds |
| **Best For** | Quick analysis, adversarial validation | Complex multi-step implementations |

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.

## Related Commands

```bash
/workflow:multi-cli-plan "complex task"   # Full planning workflow
/workflow:lite-plan "task"                # Single CLI planning
/workflow:lite-execute --in-memory        # Direct execution
```

@@ -585,6 +585,10 @@ TodoWrite({
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.

## Best Practices

1. **Trust AI Planning**: The planning agent's grouping and execution strategy are based on dependency analysis

@@ -107,13 +107,13 @@ rm -f .workflow/archives/$SESSION_ID/.archiving
Manifest: Updated with N total sessions
```

### Phase 4: Update project-tech.json (Optional)

**Skip if**: `.workflow/project-tech.json` doesn't exist

```bash
# Check
test -f .workflow/project-tech.json || echo "SKIP"
```

**If exists**, add feature entry:

@@ -134,6 +134,32 @@ test -f .workflow/project.json || echo "SKIP"
|
||||
✓ Feature added to project registry
|
||||
```
|
||||
|
||||
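The Phase 4 feature-entry update can be sketched as a small helper. `addFeature` and the entry fields other than the `features` array itself are hypothetical illustrations, not part of the documented registry format:

```javascript
// Hypothetical sketch of the Phase 4 update: append a feature entry to
// the registry's 'features' array. Entry fields are illustrative.
function addFeature(tech, entry) {
  (tech.features = tech.features || []).push(entry);
  return tech;
}

const registry = { project_name: 'demo', features: [] };
addFeature(registry, { name: 'session-archive', archived_at: '2026-01-15' });
console.log(registry.features.length); // → 1
```

In practice the registry would be read from and written back to `.workflow/project-tech.json`, with the step skipped when the file is absent.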
### Phase 5: Ask About Solidify (Always)

After successful archival, prompt user to capture learnings:

```javascript
AskUserQuestion({
  questions: [{
    question: "Would you like to solidify learnings from this session into project guidelines?",
    header: "Solidify",
    options: [
      { label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
      { label: "Skip", description: "Archive complete, no learnings to capture" }
    ],
    multiSelect: false
  }]
})
```

**If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.

**Output**:
```
Session archived successfully.
→ Run /workflow:session:solidify to capture learnings (recommended)
```

## Error Recovery

| Phase | Symptom | Recovery |
@@ -149,5 +175,6 @@ test -f .workflow/project.json || echo "SKIP"
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
Phase 4: update project-tech.json features array (optional)
Phase 5: ask user → solidify learnings (optional)
```

@@ -16,7 +16,7 @@ examples:
Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.

**Dual Responsibility**:
1. **Project-level initialization** (first-time only): Creates `.workflow/project-tech.json` for feature registry
2. **Session-level initialization** (always): Creates session directory structure

## Session Types

@@ -491,6 +491,10 @@ The orchestrator automatically creates git commits at key checkpoints to enable

**Note**: Final session completion creates additional commit with full summary.

## Post-Completion Expansion

After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.

## Best Practices

1. **Default Settings Work**: 10 iterations sufficient for most cases

@@ -237,7 +237,7 @@ Execute complete context-search-agent workflow for implementation planning:

### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
   - Read and parse `.workflow/project-tech.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
   - Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
   - If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
@@ -255,7 +255,7 @@ Execute all discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
6. Perform conflict detection with risk assessment

@@ -1,7 +1,7 @@
{
  "_metadata": {
    "version": "2.0.0",
    "total_commands": 45,
    "total_agents": 16,
    "description": "Unified CCW-Help command index"
  },
@@ -485,6 +485,15 @@
      "category": "general",
      "difficulty": "Intermediate",
      "source": "../../../commands/enhance-prompt.md"
    },
    {
      "name": "cli-init",
      "command": "/cli:cli-init",
      "description": "Initialize CLI tool configurations (.gemini/, .qwen/) with technology-aware ignore rules",
      "arguments": "[--tool gemini|qwen|all] [--preview] [--output path]",
      "category": "cli",
      "difficulty": "Intermediate",
      "source": "../../../commands/cli/cli-init.md"
    }
  ],

@@ -4,7 +4,7 @@

- All replies use Simplified Chinese
- Technical terms stay in English; a Chinese explanation may be added on first occurrence
- Code variable names stay in English; comments use Chinese

## Formatting Rules

@@ -0,0 +1,141 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Project Guidelines Schema",
  "description": "Schema for project-guidelines.json - user-maintained rules and constraints",
  "type": "object",
  "required": ["conventions", "constraints", "_metadata"],
  "properties": {
    "conventions": {
      "type": "object",
      "description": "Coding conventions and standards",
      "required": ["coding_style", "naming_patterns", "file_structure", "documentation"],
      "properties": {
        "coding_style": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Coding style rules (e.g., 'Use strict TypeScript mode', 'Prefer const over let')"
        },
        "naming_patterns": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Naming conventions (e.g., 'Use camelCase for variables', 'Use PascalCase for components')"
        },
        "file_structure": {
          "type": "array",
          "items": { "type": "string" },
          "description": "File organization rules (e.g., 'One component per file', 'Tests alongside source files')"
        },
        "documentation": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Documentation requirements (e.g., 'JSDoc for public APIs', 'README for each module')"
        }
      }
    },
    "constraints": {
      "type": "object",
      "description": "Technical constraints and boundaries",
      "required": ["architecture", "tech_stack", "performance", "security"],
      "properties": {
        "architecture": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Architecture constraints (e.g., 'No circular dependencies', 'Services must be stateless')"
        },
        "tech_stack": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Technology constraints (e.g., 'No new dependencies without review', 'Use native fetch over axios')"
        },
        "performance": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Performance requirements (e.g., 'API response < 200ms', 'Bundle size < 500KB')"
        },
        "security": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Security requirements (e.g., 'Sanitize all user input', 'No secrets in code')"
        }
      }
    },
    "quality_rules": {
      "type": "array",
      "description": "Enforceable quality rules",
      "items": {
        "type": "object",
        "required": ["rule", "scope"],
        "properties": {
          "rule": {
            "type": "string",
            "description": "The quality rule statement"
          },
          "scope": {
            "type": "string",
            "description": "Where the rule applies (e.g., 'all', 'src/**', 'tests/**')"
          },
          "enforced_by": {
            "type": "string",
            "description": "How the rule is enforced (e.g., 'eslint', 'pre-commit', 'code-review')"
          }
        }
      }
    },
    "learnings": {
      "type": "array",
      "description": "Project learnings captured from workflow sessions",
      "items": {
        "type": "object",
        "required": ["date", "insight"],
        "properties": {
          "date": {
            "type": "string",
            "format": "date",
            "description": "Date the learning was captured (YYYY-MM-DD)"
          },
          "session_id": {
            "type": "string",
            "description": "WFS session ID where the learning originated"
          },
          "insight": {
            "type": "string",
            "description": "The learning or insight captured"
          },
          "context": {
            "type": "string",
            "description": "Additional context about when/why this learning applies"
          },
          "category": {
            "type": "string",
            "enum": ["architecture", "performance", "security", "testing", "workflow", "other"],
            "description": "Category of the learning"
          }
        }
      }
    },
    "_metadata": {
      "type": "object",
      "required": ["created_at", "version"],
      "properties": {
        "created_at": {
          "type": "string",
          "format": "date-time",
          "description": "ISO 8601 timestamp of creation"
        },
        "version": {
          "type": "string",
          "description": "Schema version (e.g., '1.0.0')"
        },
        "last_updated": {
          "type": "string",
          "format": "date-time",
          "description": "ISO 8601 timestamp of last update"
        },
        "updated_by": {
          "type": "string",
          "description": "Who/what last updated the file (e.g., 'user', 'workflow:session:solidify')"
        }
      }
    }
  }
}
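A minimal instance satisfying the schema above might look like the following sketch; the rule strings are illustrative examples, not prescribed values:

```javascript
// Minimal project-guidelines.json content; rule strings are examples only.
const guidelines = {
  conventions: {
    coding_style: ['Use strict TypeScript mode'],
    naming_patterns: ['Use camelCase for variables'],
    file_structure: ['One component per file'],
    documentation: ['JSDoc for public APIs'],
  },
  constraints: {
    architecture: ['No circular dependencies'],
    tech_stack: ['No new dependencies without review'],
    performance: ['API response < 200ms'],
    security: ['No secrets in code'],
  },
  _metadata: { created_at: '2026-01-15T00:00:00Z', version: '1.0.0' },
};

// Spot-check the schema's top-level required keys without a validator.
const missing = ['conventions', 'constraints', '_metadata']
  .filter((key) => !(key in guidelines));
console.log(JSON.stringify(missing)); // → []
```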
@@ -1,7 +1,7 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Project Tech Schema",
  "description": "Schema for project-tech.json - auto-generated technical analysis (stack, architecture, components)",
  "type": "object",
  "required": [
    "project_name",

@@ -85,11 +85,14 @@ Tools are selected based on **tags** defined in the configuration. Use tags to m

```bash
# Explicit tool selection
ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write|review>

# Model override
ccw cli -p "<PROMPT>" --tool <tool-id> --model <model-id> --mode <analysis|write>

# Code review (codex only)
ccw cli -p "<PROMPT>" --tool codex --mode review

# Tag-based auto-selection (future)
ccw cli -p "<PROMPT>" --tags <tag1,tag2> --mode <analysis|write>
```
@@ -330,6 +333,14 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
  - Use For: Feature implementation, bug fixes, documentation, code creation, file modifications
  - Specification: Requires explicit `--mode write`

- **`review`**
  - Permission: Read-only (code review output)
  - Use For: Git-aware code review of uncommitted changes, branch diffs, specific commits
  - Specification: **codex only** - uses `codex review` subcommand with `--uncommitted` by default
  - Tool Behavior:
    - `codex`: Executes `codex review --uncommitted [prompt]` for structured code review
    - Other tools (gemini/qwen/claude): Accept mode but no operation change (treated as analysis)

### Command Options

- **`--tool <tool>`**
@@ -337,8 +348,9 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
  - Default: First enabled tool in config

- **`--mode <mode>`**
  - Description: **REQUIRED**: analysis, write, review
  - Default: **NONE** (must specify)
  - Note: `review` mode triggers `codex review` subcommand for codex tool only

- **`--model <model>`**
  - Description: Model override

@@ -463,6 +475,17 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
" --tool <tool-id> --mode write
```

**Code Review Task** (codex review mode):
```bash
# Review uncommitted changes (default)
ccw cli -p "Focus on security vulnerabilities and error handling" --tool codex --mode review

# Review with custom instructions
ccw cli -p "Check for breaking changes in API contracts and backward compatibility" --tool codex --mode review
```

> **Note**: `--mode review` only triggers special behavior for `codex` tool (uses `codex review --uncommitted`). Other tools accept the mode but execute as standard analysis.

---

### Permission Framework
@@ -472,6 +495,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
**Mode Hierarchy**:
- `analysis`: Read-only, safe for auto-execution
- `write`: Create/Modify/Delete files, full operations - requires explicit `--mode write`
- `review`: Git-aware code review (codex only), read-only output - requires explicit `--mode review`
- **Exception**: User provides clear instructions like "modify", "create", "implement"

---
@@ -502,7 +526,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
### Planning Checklist

- [ ] **Purpose defined** - Clear goal and intent
- [ ] **Mode selected** - `--mode analysis|write|review`
- [ ] **Context gathered** - File references + memory (default `@**/*`)
- [ ] **Directory navigation** - `--cd` and/or `--includeDirs`
- [ ] **Tool selected** - Explicit `--tool` or tag-based auto-selection
@@ -514,5 +538,5 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
1. **Load configuration** - Read `cli-tools.json` for available tools
2. **Match by tags** - Select tool based on task requirements
3. **Validate enabled** - Ensure selected tool is enabled
4. **Execute with mode** - Always specify `--mode analysis|write|review`
5. **Fallback gracefully** - Use secondary model or next matching tool on failure

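The five-step selection flow can be sketched as follows. `pickTool` is a hypothetical helper (not part of ccw), and the config shape mirrors the `cli-tools.json` examples used elsewhere in these docs:

```javascript
// Hypothetical tag-based selection over a cli-tools.json-shaped config.
function pickTool(config, requiredTags) {
  const match = Object.entries(config.tools)
    .filter(([, tool]) => tool.enabled) // step 3: only enabled tools
    .find(([, tool]) => requiredTags.every((t) => (tool.tags || []).includes(t)));
  return match ? match[0] : null; // null → step 5: caller falls back
}

const config = {
  tools: {
    gemini: { enabled: true, tags: ['analysis'] },
    codex: { enabled: true, tags: ['write', 'review'] },
  },
};
console.log(pickTool(config, ['review'])); // → codex
```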
@@ -1,6 +1,6 @@
---
description: Execute all solutions from issue queue with git commit after each solution
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
---

# Issue Execute (Codex Version)
@@ -9,6 +9,24 @@ argument-hint: "[--worktree [<existing-path>]] [--queue <queue-id>]"

**Serial Execution**: Execute solutions ONE BY ONE from the issue queue via `ccw issue next`. For each solution, complete all tasks sequentially (implement → test → verify), then commit once per solution with formatted summary. Continue autonomously until queue is empty.

## Queue ID Requirement (MANDATORY)

**`--queue <queue-id>` parameter is REQUIRED**

### When Queue ID Not Provided

```
List queues → Output options → Stop and wait for user
```

**Actions**:

1. `ccw issue queue list --brief --json` - Fetch queue list
2. Filter active/pending status, output formatted list
3. **Stop execution**, prompt user to rerun with `codex -p "@.codex/prompts/issue-execute.md --queue QUE-xxx"`

**No auto-selection** - User MUST explicitly specify queue-id

## Worktree Mode (Recommended for Parallel Execution)

When `--worktree` is specified, create or use a git worktree to isolate work.

@@ -77,7 +95,8 @@ cd "${WORKTREE_PATH}"

**Worktree Execution Pattern**:
```
0. [MAIN REPO] Validate queue ID (--queue required, or prompt user to select)
1. [WORKTREE] ccw issue next --queue <queue-id> → auto-redirects to main repo's .workflow/
2. [WORKTREE] Implement all tasks, run tests, git commit
3. [WORKTREE] ccw issue done <item_id> → auto-redirects to main repo
4. Repeat from step 1

@@ -103,33 +122,19 @@ codex -p "@.codex/prompts/issue-execute.md --worktree /path/to/existing/worktree

**Completion - User Choice:**

When all solutions are complete, output options and wait for user to specify:

```
All solutions completed in worktree. Choose next action:

1. Merge to main - Merge worktree branch into main and cleanup
2. Create PR - Push branch and create pull request (Recommended for parallel execution)
3. Keep branch - Keep branch for manual handling, cleanup worktree only

Please respond with: 1, 2, or 3
```

**Based on user response:**

```bash
# Disable cleanup trap before intentional cleanup

@@ -177,10 +182,12 @@ echo "Branch '${WORKTREE_NAME}' kept. Merge manually when ready."
## Execution Flow

```
STEP 0: Validate queue ID (--queue required, or prompt user to select)

INIT: Fetch first solution via ccw issue next --queue <queue-id>

WHILE solution exists:
  1. Receive solution JSON from ccw issue next --queue <queue-id>
  2. Execute all tasks in solution.tasks sequentially:
     FOR each task:
     - IMPLEMENT: Follow task.implementation steps
@@ -188,7 +195,7 @@ WHILE solution exists:
     - VERIFY: Check task.acceptance criteria
  3. COMMIT: Stage all files, commit once with formatted summary
  4. Report completion via ccw issue done <item_id>
  5. Fetch next solution via ccw issue next --queue <queue-id>

WHEN queue empty:
  Output final summary

@@ -196,11 +203,14 @@

## Step 1: Fetch First Solution

**Prerequisite**: Queue ID must be determined (either from `--queue` argument or user selection in Step 0).

Run this command to get your first solution:

```javascript
// QUEUE_ID is required - obtained from --queue argument or user selection
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```

This returns JSON with the full solution definition:
@@ -278,9 +288,154 @@ Expected solution structure:
}
```

## Step 2.1: Determine Execution Strategy

After parsing the solution, analyze the issue type and task actions to determine the appropriate execution strategy. The strategy defines additional verification steps and quality gates beyond the basic implement-test-verify cycle.

### Strategy Auto-Matching

**Matching Priority**:
1. Explicit `solution.strategy_type` if provided
2. Infer from `task.action` keywords (Debug, Fix, Feature, Refactor, Test, etc.)
3. Infer from `solution.description` and `task.title` content
4. Default to "standard" if no clear match

**Strategy Types and Matching Keywords**:

| Strategy Type | Match Keywords | Description |
|---------------|----------------|-------------|
| `debug` | Debug, Diagnose, Trace, Investigate | Bug diagnosis with logging and debugging |
| `bugfix` | Fix, Patch, Resolve, Correct | Bug fixing with root cause analysis |
| `feature` | Feature, Add, Implement, Create, Build | New feature development with full testing |
| `refactor` | Refactor, Restructure, Optimize, Cleanup | Code restructuring with behavior preservation |
| `test` | Test, Coverage, E2E, Integration | Test implementation with coverage checks |
| `performance` | Performance, Optimize, Speed, Memory | Performance optimization with benchmarking |
| `security` | Security, Vulnerability, CVE, Audit | Security fixes with vulnerability checks |
| `hotfix` | Hotfix, Urgent, Critical, Emergency | Urgent fixes with minimal changes |
| `documentation` | Documentation, Docs, Comment, README | Documentation updates with example validation |
| `chore` | Chore, Dependency, Config, Maintenance | Maintenance tasks with compatibility checks |
| `standard` | (default) | Standard implementation without extra steps |

### Strategy-Specific Execution Phases

Each strategy extends the basic cycle with additional quality gates:

#### 1. Debug → Reproduce → Instrument → Diagnose → Implement → Test → Verify → Cleanup

```
REPRODUCE → INSTRUMENT → DIAGNOSE → IMPLEMENT → TEST → VERIFY → CLEANUP
```

#### 2. Bugfix → Root Cause → Implement → Test → Edge Cases → Regression → Verify

```
ROOT_CAUSE → IMPLEMENT → TEST → EDGE_CASES → REGRESSION → VERIFY
```

#### 3. Feature → Design Review → Unit Tests → Implement → Integration Tests → Code Review → Docs → Verify

```
DESIGN_REVIEW → UNIT_TESTS → IMPLEMENT → INTEGRATION_TESTS → TEST → CODE_REVIEW → DOCS → VERIFY
```

#### 4. Refactor → Baseline Tests → Implement → Test → Behavior Check → Performance Compare → Verify

```
BASELINE_TESTS → IMPLEMENT → TEST → BEHAVIOR_PRESERVATION → PERFORMANCE_CMP → VERIFY
```

#### 5. Test → Coverage Baseline → Test Design → Implement → Coverage Check → Verify

```
COVERAGE_BASELINE → TEST_DESIGN → IMPLEMENT → COVERAGE_CHECK → VERIFY
```

#### 6. Performance → Profiling → Bottleneck → Implement → Benchmark → Test → Verify

```
PROFILING → BOTTLENECK → IMPLEMENT → BENCHMARK → TEST → VERIFY
```

#### 7. Security → Vulnerability Scan → Implement → Security Test → Penetration Test → Verify

```
VULNERABILITY_SCAN → IMPLEMENT → SECURITY_TEST → PENETRATION_TEST → VERIFY
```

#### 8. Hotfix → Impact Assessment → Implement → Test → Quick Verify → Verify

```
IMPACT_ASSESSMENT → IMPLEMENT → TEST → QUICK_VERIFY → VERIFY
```

#### 9. Documentation → Implement → Example Validation → Format Check → Link Validation → Verify

```
IMPLEMENT → EXAMPLE_VALIDATION → FORMAT_CHECK → LINK_VALIDATION → VERIFY
```

#### 10. Chore → Implement → Compatibility Check → Test → Changelog → Verify

```
IMPLEMENT → COMPATIBILITY_CHECK → TEST → CHANGELOG → VERIFY
```

#### 11. Standard → Implement → Test → Verify

```
IMPLEMENT → TEST → VERIFY
```

### Strategy Selection Implementation

**Pseudo-code for strategy matching**:

```javascript
function determineStrategy(solution) {
  // Priority 1: Explicit strategy type
  if (solution.strategy_type) {
    return solution.strategy_type
  }

  // Priority 2: Infer from task actions
  const actions = solution.tasks.map(t => t.action.toLowerCase())
  const titles = solution.tasks.map(t => t.title.toLowerCase())
  const description = solution.description.toLowerCase()
  const allText = [...actions, ...titles, description].join(' ')

  // Match keywords (order matters - more specific first)
  if (/hotfix|urgent|critical|emergency/.test(allText)) return 'hotfix'
  if (/debug|diagnose|trace|investigate/.test(allText)) return 'debug'
  if (/security|vulnerability|cve|audit/.test(allText)) return 'security'
  if (/performance|optimize|speed|memory|benchmark/.test(allText)) return 'performance'
  if (/refactor|restructure|cleanup/.test(allText)) return 'refactor'
  if (/test|coverage|e2e|integration/.test(allText)) return 'test'
  if (/documentation|docs|comment|readme/.test(allText)) return 'documentation'
  if (/chore|dependency|config|maintenance/.test(allText)) return 'chore'
  if (/fix|patch|resolve|correct/.test(allText)) return 'bugfix'
  if (/feature|add|implement|create|build/.test(allText)) return 'feature'

  // Default
  return 'standard'
}
```

**Usage in execution flow**:

```javascript
// After parsing solution (Step 2)
const strategy = determineStrategy(solution)
console.log(`Strategy selected: ${strategy}`)

// During task execution (Step 3), follow strategy-specific phases
for (const task of solution.tasks) {
  executeTaskWithStrategy(task, strategy)
}
```

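The matcher can also be exercised standalone. This condensed sketch keeps only a subset of the keyword rules (in the documented priority order) and runs it against a hypothetical sample solution:

```javascript
// Condensed determineStrategy with a subset of the keyword rules;
// the sample solution below is hypothetical, not from a real queue.
function determineStrategy(solution) {
  if (solution.strategy_type) return solution.strategy_type;
  const allText = [
    ...solution.tasks.map((t) => t.action.toLowerCase()),
    ...solution.tasks.map((t) => t.title.toLowerCase()),
    solution.description.toLowerCase(),
  ].join(' ');
  const rules = [
    ['hotfix', /hotfix|urgent|critical|emergency/],
    ['debug', /debug|diagnose|trace|investigate/],
    ['security', /security|vulnerability|cve|audit/],
    ['bugfix', /fix|patch|resolve|correct/],
    ['feature', /feature|add|implement|create|build/],
  ];
  for (const [name, re] of rules) if (re.test(allText)) return name;
  return 'standard';
}

const sample = {
  description: 'Resolve crash when queue is empty',
  tasks: [{ action: 'Fix', title: 'Guard empty queue handling' }],
};
console.log(determineStrategy(sample)); // → bugfix
```

Because the rules are checked in order, 'fix'/'resolve' only win here after the more specific hotfix/debug/security patterns fail to match.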
## Step 2.5: Initialize Task Tracking

After parsing solution and determining strategy, use `update_plan` to track each task:

```javascript
// Initialize plan with all tasks from solution

@@ -454,18 +609,19 @@ EOF
## Solution Committed: [solution_id]

**Commit**: [commit hash]
**Type**: [commit_type]([scope])

**Changes**:
- [Feature/Fix/Improvement]: [What functionality was added/fixed/improved]
- [Specific change 1]
- [Specific change 2]

**Files Modified**:
- path/to/file1.ts - [Brief description of changes]
- path/to/file2.ts - [Brief description of changes]
- path/to/file3.ts - [Brief description of changes]

**Solution**: [solution_id] ([N] tasks completed)
```

## Step 4: Report Completion
@@ -494,11 +650,12 @@ shell_command({

## Step 5: Continue to Next Solution

Fetch next solution (using same QUEUE_ID from Step 0/1):

```javascript
// Continue using the same QUEUE_ID throughout execution
const result = shell_command({ command: `ccw issue next --queue ${QUEUE_ID}` })
```

**Output progress:**
@@ -567,18 +724,27 @@ When `ccw issue next` returns `{ "status": "empty" }`:

| Command | Purpose |
|---------|---------|
| `ccw issue queue list --brief --json` | List all queues (for queue selection) |
| `ccw issue next --queue QUE-xxx` | Fetch next solution from specified queue (**--queue required**) |
| `ccw issue done <id>` | Mark solution complete with result (auto-detects queue) |
| `ccw issue done <id> --fail --reason "..."` | Mark solution failed with structured reason |
| `ccw issue retry --queue QUE-xxx` | Reset failed items in specific queue |

## Start Execution

**Step 0: Validate Queue ID**

If `--queue` was NOT provided in the command arguments:

1. Run `ccw issue queue list --brief --json`
2. Filter and display active/pending queues to user
3. **Stop execution**, prompt user to rerun with `--queue QUE-xxx`

**Step 1: Fetch First Solution**

Once queue ID is confirmed, begin by running:

```bash
ccw issue next --queue <queue-id>
```

Then follow the solution lifecycle for each solution until queue is empty.

15
CHANGELOG.md
@@ -5,6 +5,21 @@ All notable changes to Claude Code Workflow (CCW) will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [6.3.29] - 2026-01-15

### ✨ New Features | 新功能

#### Multi-CLI Task & Discussion Enhancements | 多CLI任务与讨论增强
- **Added**: Internationalization support for multi-CLI tasks and discussion tabs | 多CLI任务和讨论标签的国际化支持
- **Added**: Collapsible sections for discussion and summary tabs with enhanced layout | 讨论和摘要标签的可折叠区域及增强布局
- **Added**: Post-Completion Expansion feature for execution commands | 执行命令的完成后扩展功能

#### Session & UI Improvements | 会话与UI改进
- **Enhanced**: Multi-CLI session handling with improved UI updates | 多CLI会话处理及UI更新优化
- **Refactored**: Code structure for improved readability and maintainability | 代码结构重构以提升可读性和可维护性

---

## [6.3.19] - 2026-01-12

### 🚀 Major New Features | 主要新功能

@@ -148,6 +148,36 @@ The CCW Dashboard is a single-page application (SPA) whose interface consists of four core sections
- **Model configuration**: Configure each tool's primary and secondary models
- **Install/Uninstall**: Install or uninstall tools via the wizard

#### API Endpoint Configuration (no CLI installation required)

If you do not have the Gemini/Qwen CLI installed but do have API access (for example via a reverse-proxy service), you can configure a tool of type `api-endpoint` in `~/.claude/cli-tools.json`:

```json
{
  "version": "3.2.0",
  "tools": {
    "gemini-api": {
      "enabled": true,
      "type": "api-endpoint",
      "id": "your-api-id",
      "primaryModel": "gemini-2.5-pro",
      "secondaryModel": "gemini-2.5-flash",
      "tags": ["analysis"]
    }
  }
}
```

**Configuration notes**:
- `type: "api-endpoint"`: use API calls instead of a CLI
- `id`: API endpoint identifier, used to route requests
- API Endpoint tools support **analysis mode** (read-only) only; file-write operations are not supported

**Usage example**:
```bash
ccw cli -p "Analyze the code structure" --tool gemini-api --mode analysis
```

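Given the configuration above, a caller first needs to locate an enabled `api-endpoint` tool in the parsed `cli-tools.json`. A minimal sketch, assuming the config shape from the example; the helper itself is hypothetical and not part of CCW:

```typescript
// Hypothetical helper: find the first enabled api-endpoint tool in a parsed cli-tools.json.
interface ToolConfig {
  enabled: boolean;
  type: string;          // 'api-endpoint' or a CLI tool type
  id?: string;           // endpoint identifier used to route requests
  primaryModel?: string;
}

function findApiEndpointTool(tools: Record<string, ToolConfig>): string | null {
  for (const [name, cfg] of Object.entries(tools)) {
    if (cfg.enabled && cfg.type === 'api-endpoint') return name;
  }
  return null;
}

const tools: Record<string, ToolConfig> = {
  'gemini-api': { enabled: true, type: 'api-endpoint', id: 'your-api-id', primaryModel: 'gemini-2.5-pro' }
};
console.log(findApiEndpointTool(tools)); // prints "gemini-api"
```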
#### CodexLens Management
- **Index path**: View and modify the index storage location
- **Index operations**:

@@ -281,6 +281,9 @@ CCW provides comprehensive documentation to help you get started quickly and mas
- [**Dashboard Guide**](DASHBOARD_GUIDE.md) - Dashboard user guide and interface overview
- [**Dashboard Operations**](DASHBOARD_OPERATIONS_EN.md) - Detailed operation instructions

### 🔄 **Workflow Guides**
- [**Issue Loop Workflow**](docs/workflows/ISSUE_LOOP_WORKFLOW.md) - Batch issue processing with two-phase lifecycle (accumulate → resolve)

### 🏗️ **Architecture & Design**
- [**Architecture Overview**](ARCHITECTURE.md) - System design and core components
- [**Project Introduction**](PROJECT_INTRODUCTION.md) - Detailed project overview

@@ -177,7 +177,7 @@ export function run(argv: string[]): void {
    .option('--model <model>', 'Model override')
    .option('--cd <path>', 'Working directory')
    .option('--includeDirs <dirs>', 'Additional directories (--include-directories for gemini/qwen, --add-dir for codex/claude)')
    .option('--timeout <ms>', 'Timeout in milliseconds (0=disabled, controlled by external caller)', '0')
    // --timeout removed - controlled by external caller (bash timeout)
    .option('--stream', 'Enable streaming output (default: non-streaming with caching)')
    .option('--limit <n>', 'History limit')
    .option('--status <status>', 'Filter by status')

@@ -116,7 +116,7 @@ interface CliExecOptions {
  model?: string;
  cd?: string;
  includeDirs?: string;
  timeout?: string;
  // timeout removed - controlled by external caller (bash timeout)
  stream?: boolean; // Enable streaming (default: false, caches output)
  resume?: string | boolean; // true = last, string = execution ID, comma-separated for merge
  id?: string; // Custom execution ID (e.g., IMPL-001-step1)
@@ -535,7 +535,7 @@ async function statusAction(debug?: boolean): Promise<void> {
 * @param {Object} options - CLI options
 */
async function execAction(positionalPrompt: string | undefined, options: CliExecOptions): Promise<void> {
  const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, timeout, stream, resume, id, noNative, cache, injectMode, debug } = options;
  const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, stream, resume, id, noNative, cache, injectMode, debug } = options;

  // Enable debug mode if --debug flag is set
  if (debug) {
@@ -842,7 +842,7 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
    model,
    cd,
    includeDirs,
    timeout: timeout ? parseInt(timeout, 10) : 0, // 0 = no internal timeout, controlled by external caller
    // timeout removed - controlled by external caller (bash timeout)
    resume,
    id, // custom execution ID
    noNative,
@@ -1216,12 +1216,12 @@ export async function cliCommand(
  console.log(chalk.gray('  -f, --file <file>     Read prompt from file (recommended for multi-line prompts)'));
  console.log(chalk.gray('  -p, --prompt <text>   Prompt text (single-line)'));
  console.log(chalk.gray('  --tool <tool>         Tool: gemini, qwen, codex (default: gemini)'));
  console.log(chalk.gray('  --mode <mode>         Mode: analysis, write, auto (default: analysis)'));
  console.log(chalk.gray('  --mode <mode>         Mode: analysis, write, auto, review (default: analysis)'));
  console.log(chalk.gray('  -d, --debug           Enable debug logging for troubleshooting'));
  console.log(chalk.gray('  --model <model>       Model override'));
  console.log(chalk.gray('  --cd <path>           Working directory'));
  console.log(chalk.gray('  --includeDirs <dirs>  Additional directories'));
  console.log(chalk.gray('  --timeout <ms>        Timeout (default: 0=disabled)'));
  // --timeout removed - controlled by external caller (bash timeout)
  console.log(chalk.gray('  --resume [id]         Resume previous session'));
  console.log(chalk.gray('  --cache <items>       Cache: comma-separated @patterns and text'));
  console.log(chalk.gray('  --inject-mode <m>     Inject mode: none, full, progressive'));

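The repeated "controlled by external caller (bash timeout)" comments refer to wrapping the invocation itself instead of passing `--timeout`. A hedged sketch of that pattern (the `ccw` line is illustrative only; the pattern is demonstrated with a harmless command):

```shell
# Pattern replacing the removed --timeout flag: wrap the call in coreutils `timeout`.
# Illustrative (requires ccw installed):
#   timeout 600 ccw cli -p "Analyze the code structure" --tool gemini
# Same pattern with a harmless command:
timeout 5 sleep 1 && echo "completed before timeout"
```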
@@ -140,12 +140,25 @@ interface ProjectGuidelines {
  };
}

interface Language {
  name: string;
  file_count: number;
  primary: boolean;
}

interface KeyComponent {
  name: string;
  path: string;
  description: string;
  importance: 'high' | 'medium' | 'low';
}

interface ProjectOverview {
  projectName: string;
  description: string;
  initializedAt: string | null;
  technologyStack: {
    languages: string[];
    languages: Language[];
    frameworks: string[];
    build_tools: string[];
    test_frameworks: string[];
@@ -155,7 +168,7 @@ interface ProjectOverview {
    layers: string[];
    patterns: string[];
  };
  keyComponents: string[];
  keyComponents: KeyComponent[];
  features: unknown[];
  developmentIndex: {
    feature: unknown[];
@@ -187,13 +200,12 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
  // Initialize cache manager
  const cache = createDashboardCache(workflowDir);

  // Prepare paths to watch for changes (includes both new dual files and legacy)
  // Prepare paths to watch for changes
  const watchPaths = [
    join(workflowDir, 'active'),
    join(workflowDir, 'archives'),
    join(workflowDir, 'project-tech.json'),
    join(workflowDir, 'project-guidelines.json'),
    join(workflowDir, 'project.json'), // Legacy support
    ...sessions.active.map(s => s.path),
    ...sessions.archived.map(s => s.path)
  ];
@@ -266,7 +278,7 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
    console.error('Error scanning lite tasks:', (err as Error).message);
  }

  // Load project overview from project.json
  // Load project overview from project-tech.json
  try {
    data.projectOverview = loadProjectOverview(workflowDir);
  } catch (err) {
@@ -553,31 +565,25 @@ function sortTaskIds(a: string, b: string): number {

/**
 * Load project overview from project-tech.json and project-guidelines.json
 * Supports dual file structure with backward compatibility for legacy project.json
 * @param workflowDir - Path to .workflow directory
 * @returns Project overview data or null if not found
 */
function loadProjectOverview(workflowDir: string): ProjectOverview | null {
  const techFile = join(workflowDir, 'project-tech.json');
  const guidelinesFile = join(workflowDir, 'project-guidelines.json');
  const legacyFile = join(workflowDir, 'project.json');

  // Check for new dual file structure first, fallback to legacy
  const useLegacy = !existsSync(techFile) && existsSync(legacyFile);
  const projectFile = useLegacy ? legacyFile : techFile;

  if (!existsSync(projectFile)) {
    console.log(`Project file not found at: ${projectFile}`);
  if (!existsSync(techFile)) {
    console.log(`Project file not found at: ${techFile}`);
    return null;
  }

  try {
    const fileContent = readFileSync(projectFile, 'utf8');
    const fileContent = readFileSync(techFile, 'utf8');
    const projectData = JSON.parse(fileContent) as Record<string, unknown>;

    console.log(`Successfully loaded project overview: ${projectData.project_name || 'Unknown'} (${useLegacy ? 'legacy' : 'tech'})`);
    console.log(`Successfully loaded project overview: ${projectData.project_name || 'Unknown'}`);

    // Parse tech data (compatible with both legacy and new structure)
    // Parse tech data from project-tech.json structure
    const overview = projectData.overview as Record<string, unknown> | undefined;
    const technologyAnalysis = projectData.technology_analysis as Record<string, unknown> | undefined;
    const developmentStatus = projectData.development_status as Record<string, unknown> | undefined;
@@ -589,6 +595,18 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
    const statistics = (projectData.statistics || developmentStatus?.statistics) as Record<string, unknown> | undefined;
    const metadata = projectData._metadata as Record<string, unknown> | undefined;

    // Helper to extract string array from mixed array (handles both string[] and {name: string}[])
    const extractStringArray = (arr: unknown[] | undefined): string[] => {
      if (!arr) return [];
      return arr.map(item => {
        if (typeof item === 'string') return item;
        if (typeof item === 'object' && item !== null && 'name' in item) {
          return String((item as { name: unknown }).name);
        }
        return String(item);
      });
    };

    // Load guidelines from separate file if exists
    let guidelines: ProjectGuidelines | null = null;
    if (existsSync(guidelinesFile)) {
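The `extractStringArray` helper introduced in this hunk can be exercised on its own; below is a standalone copy with a made-up sample array (the sample values are illustrative, not from the project):

```typescript
// Copy of the helper for illustration: normalizes arrays mixing string and {name: string} items.
const extractStringArray = (arr: unknown[] | undefined): string[] => {
  if (!arr) return [];
  return arr.map(item => {
    if (typeof item === 'string') return item;
    if (typeof item === 'object' && item !== null && 'name' in item) {
      return String((item as { name: unknown }).name);
    }
    return String(item);
  });
};

console.log(extractStringArray(['React', { name: 'Vite' }, 42])); // prints [ 'React', 'Vite', '42' ]
```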
@@ -633,17 +651,17 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
      description: (overview?.description as string) || '',
      initializedAt: (projectData.initialized_at as string) || null,
      technologyStack: {
        languages: (technologyStack?.languages as string[]) || [],
        frameworks: (technologyStack?.frameworks as string[]) || [],
        build_tools: (technologyStack?.build_tools as string[]) || [],
        test_frameworks: (technologyStack?.test_frameworks as string[]) || []
        languages: (technologyStack?.languages as Language[]) || [],
        frameworks: extractStringArray(technologyStack?.frameworks),
        build_tools: extractStringArray(technologyStack?.build_tools),
        test_frameworks: extractStringArray(technologyStack?.test_frameworks)
      },
      architecture: {
        style: (architecture?.style as string) || 'Unknown',
        layers: (architecture?.layers as string[]) || [],
        patterns: (architecture?.patterns as string[]) || []
        layers: extractStringArray(architecture?.layers as unknown[] | undefined),
        patterns: extractStringArray(architecture?.patterns as unknown[] | undefined)
      },
      keyComponents: (overview?.key_components as string[]) || [],
      keyComponents: (overview?.key_components as KeyComponent[]) || [],
      features: (projectData.features as unknown[]) || [],
      developmentIndex: {
        feature: (developmentIndex?.feature as unknown[]) || [],
@@ -665,7 +683,7 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
      guidelines
    };
  } catch (err) {
    console.error(`Failed to parse project file at ${projectFile}:`, (err as Error).message);
    console.error(`Failed to parse project file at ${techFile}:`, (err as Error).message);
    console.error('Error stack:', (err as Error).stack);
    return null;
  }

@@ -238,10 +238,11 @@ async function scanMultiCliDir(dir: string): Promise<MultiCliSession[]> {
    .map(async (entry) => {
      const sessionPath = join(dir, entry.name);

      const [createdAt, syntheses, sessionState] = await Promise.all([
      const [createdAt, syntheses, sessionState, planJson] = await Promise.all([
        getCreatedTime(sessionPath),
        loadRoundSyntheses(sessionPath),
        loadSessionState(sessionPath),
        loadPlanJson(sessionPath),
      ]);

      // Extract data from syntheses
@@ -258,13 +259,20 @@ async function scanMultiCliDir(dir: string): Promise<MultiCliSession[]> {
      const status = sessionState?.status ||
        (latestSynthesis?.convergence?.recommendation === 'converged' ? 'converged' : 'analyzing');

      // Use plan.json if available, otherwise extract from synthesis
      const plan = planJson || latestSynthesis;
      // Use tasks from plan.json if available, otherwise extract from synthesis
      const tasks = (planJson as any)?.tasks?.length > 0
        ? normalizePlanJsonTasks((planJson as any).tasks)
        : extractTasksFromSyntheses(syntheses);

      const session: MultiCliSession = {
        id: entry.name,
        type: 'multi-cli-plan',
        path: sessionPath,
        createdAt,
        plan: latestSynthesis,
        tasks: extractTasksFromSyntheses(syntheses),
        plan,
        tasks,
        progress,
        // Extended multi-cli specific fields
        roundCount,
@@ -548,6 +556,53 @@ function normalizeSolutionTask(task: SolutionTask, solution: Solution): Normaliz
  };
}

/**
 * Normalize tasks from plan.json format to NormalizedTask[]
 * plan.json tasks have: id, name, description, depends_on, status, files, key_point, acceptance_criteria
 * @param tasks - Tasks array from plan.json
 * @returns Normalized tasks
 */
function normalizePlanJsonTasks(tasks: unknown[]): NormalizedTask[] {
  if (!Array.isArray(tasks)) return [];

  return tasks.map((task: any): NormalizedTask | null => {
    if (!task || !task.id) return null;

    return {
      id: task.id,
      title: task.name || task.title || 'Untitled Task',
      status: task.status || 'pending',
      meta: {
        type: 'implementation',
        agent: null,
        scope: task.scope || null,
        module: null
      },
      context: {
        requirements: task.description ? [task.description] : (task.key_point ? [task.key_point] : []),
        focus_paths: task.files?.map((f: any) => typeof f === 'string' ? f : f.file) || [],
        acceptance: task.acceptance_criteria || [],
        depends_on: task.depends_on || []
      },
      flow_control: {
        implementation_approach: task.files?.map((f: any, i: number) => {
          const filePath = typeof f === 'string' ? f : f.file;
          const action = typeof f === 'string' ? 'modify' : f.action;
          const line = typeof f === 'string' ? null : f.line;
          return {
            step: `Step ${i + 1}`,
            action: `${action} ${filePath}${line ? ` at line ${line}` : ''}`
          };
        }) || []
      },
      _raw: {
        task,
        estimated_complexity: task.estimated_complexity
      }
    };
  }).filter((task): task is NormalizedTask => task !== null);
}

/**
 * Load plan.json or fix-plan.json from session directory
 * @param sessionPath - Session directory path

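The `files` → `implementation_approach` mapping inside `normalizePlanJsonTasks` can be isolated for a quick check; the function name and sample data below are hypothetical:

```typescript
// Hypothetical extraction of the files → implementation_approach mapping shown above.
type FileRef = string | { file: string; action: string; line?: number };

function toSteps(files: FileRef[]): { step: string; action: string }[] {
  return files.map((f, i) => {
    const filePath = typeof f === 'string' ? f : f.file;
    const action = typeof f === 'string' ? 'modify' : f.action;
    const line = typeof f === 'string' ? null : (f.line ?? null);
    return {
      step: `Step ${i + 1}`,
      action: `${action} ${filePath}${line ? ` at line ${line}` : ''}`
    };
  });
}

console.log(toSteps(['src/a.ts', { file: 'src/b.ts', action: 'create' }]));
// [ { step: 'Step 1', action: 'modify src/a.ts' }, { step: 'Step 2', action: 'create src/b.ts' } ]
```

Plain string entries default to a `modify` action, matching the diff's handling of the two `files` shapes.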
@@ -8,23 +8,23 @@ import { homedir } from 'os';
import type { RouteContext } from './types.js';

/**
 * Get the ccw-help index directory path (pure function)
 * Priority: project path (.claude/skills/ccw-help/index) > user path (~/.claude/skills/ccw-help/index)
 * Get the ccw-help command.json file path (pure function)
 * Priority: project path (.claude/skills/ccw-help/command.json) > user path (~/.claude/skills/ccw-help/command.json)
 * @param projectPath - The project path to check first
 */
function getIndexDir(projectPath: string | null): string | null {
function getCommandFilePath(projectPath: string | null): string | null {
  // Try project path first
  if (projectPath) {
    const projectIndexDir = join(projectPath, '.claude', 'skills', 'ccw-help', 'index');
    if (existsSync(projectIndexDir)) {
      return projectIndexDir;
    const projectFilePath = join(projectPath, '.claude', 'skills', 'ccw-help', 'command.json');
    if (existsSync(projectFilePath)) {
      return projectFilePath;
    }
  }

  // Fall back to user path
  const userIndexDir = join(homedir(), '.claude', 'skills', 'ccw-help', 'index');
  if (existsSync(userIndexDir)) {
    return userIndexDir;
  const userFilePath = join(homedir(), '.claude', 'skills', 'ccw-help', 'command.json');
  if (existsSync(userFilePath)) {
    return userFilePath;
  }

  return null;
@@ -83,46 +83,48 @@ function invalidateCache(key: string): void {
let watchersInitialized = false;

/**
 * Initialize file watchers for JSON indexes
 * @param projectPath - The project path to resolve index directory
 * Initialize file watcher for command.json
 * @param projectPath - The project path to resolve command file
 */
function initializeFileWatchers(projectPath: string | null): void {
  if (watchersInitialized) return;

  const indexDir = getIndexDir(projectPath);
  const commandFilePath = getCommandFilePath(projectPath);

  if (!indexDir) {
    console.warn(`ccw-help index directory not found in project or user paths`);
  if (!commandFilePath) {
    console.warn(`ccw-help command.json not found in project or user paths`);
    return;
  }

  try {
    // Watch all JSON files in index directory
    const watcher = watch(indexDir, { recursive: false }, (eventType, filename) => {
      if (!filename || !filename.endsWith('.json')) return;
    // Watch the command.json file
    const watcher = watch(commandFilePath, (eventType) => {
      console.log(`File change detected: command.json (${eventType})`);

      console.log(`File change detected: ${filename} (${eventType})`);

      // Invalidate relevant cache entries
      if (filename === 'all-commands.json') {
        invalidateCache('all-commands');
      } else if (filename === 'command-relationships.json') {
        invalidateCache('command-relationships');
      } else if (filename === 'by-category.json') {
        invalidateCache('by-category');
      }
      // Invalidate all cache entries when command.json changes
      invalidateCache('command-data');
    });

    watchersInitialized = true;
    (watcher as any).unref?.();
    console.log(`File watchers initialized for: ${indexDir}`);
    console.log(`File watcher initialized for: ${commandFilePath}`);
  } catch (error) {
    console.error('Failed to initialize file watchers:', error);
    console.error('Failed to initialize file watcher:', error);
  }
}

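The watcher change above reduces to a common watch-and-invalidate pattern: one `fs.watch` on a single file, dropping a cache key whenever it fires. A minimal sketch assuming a Map-backed cache; the names are hypothetical, not CCW's actual cache API:

```typescript
// Hypothetical sketch: watch one path and drop its cache entry on change.
import { watch } from 'fs';

const cache = new Map<string, unknown>();

function watchAndInvalidate(filePath: string, cacheKey: string): void {
  const watcher = watch(filePath, (eventType) => {
    console.log(`File change detected: ${filePath} (${eventType})`);
    cache.delete(cacheKey); // next read repopulates from disk
  });
  (watcher as any).unref?.(); // don't let the watcher keep the process alive
}
```

As in the diff, `unref()` leaves the server free to exit even while the watcher is active.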
// ========== Helper Functions ==========

/**
 * Get command data from command.json (with caching)
 */
function getCommandData(projectPath: string | null): any {
  const filePath = getCommandFilePath(projectPath);
  if (!filePath) return null;

  return getCachedData('command-data', filePath);
}

/**
 * Filter commands by search query
 */
@@ -138,6 +140,15 @@ function filterCommands(commands: any[], query: string): any[] {
  );
}

/**
 * Category merge mapping for frontend compatibility
 * Merges additional categories into target category for display
 * Format: { targetCategory: [additionalCategoriesToMerge] }
 */
const CATEGORY_MERGES: Record<string, string[]> = {
  'cli': ['general'], // CLI tab shows both 'cli' and 'general' commands
};

/**
 * Group commands by category with subcategories
 */
@@ -166,9 +177,104 @@ function groupCommandsByCategory(commands: any[]): any {
    }
  }

  // Apply category merges for frontend compatibility
  for (const [target, sources] of Object.entries(CATEGORY_MERGES)) {
    // Initialize target category if not exists
    if (!grouped[target]) {
      grouped[target] = {
        name: target,
        commands: [],
        subcategories: {}
      };
    }

    // Merge commands from source categories into target
    for (const source of sources) {
      if (grouped[source]) {
        // Merge direct commands
        grouped[target].commands = [
          ...grouped[target].commands,
          ...grouped[source].commands
        ];
        // Merge subcategories
        for (const [subcat, cmds] of Object.entries(grouped[source].subcategories)) {
          if (!grouped[target].subcategories[subcat]) {
            grouped[target].subcategories[subcat] = [];
          }
          grouped[target].subcategories[subcat] = [
            ...grouped[target].subcategories[subcat],
            ...(cmds as any[])
          ];
        }
      }
    }
  }

  return grouped;
}

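The `CATEGORY_MERGES` pass above can be reduced to its essence; below is a minimal sketch with only direct commands (subcategories omitted, data made up):

```typescript
// Hypothetical minimal version of the category-merge pass: 'general' folds into the 'cli' tab.
const CATEGORY_MERGES: Record<string, string[]> = { cli: ['general'] };

interface Group { name: string; commands: string[]; }

function applyMerges(grouped: Record<string, Group>): Record<string, Group> {
  for (const [target, sources] of Object.entries(CATEGORY_MERGES)) {
    if (!grouped[target]) grouped[target] = { name: target, commands: [] };
    for (const source of sources) {
      if (grouped[source]) grouped[target].commands.push(...grouped[source].commands);
    }
  }
  return grouped;
}

const merged = applyMerges({ general: { name: 'general', commands: ['ccw-help'] } });
console.log(merged.cli.commands); // prints [ 'ccw-help' ]
```

Note the source category is left in place, as in the diff: the merge duplicates commands into the target tab rather than moving them.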
/**
 * Build workflow relationships from command flow data
 */
function buildWorkflowRelationships(commands: any[]): any {
  const relationships: any = {
    workflows: [],
    dependencies: {},
    alternatives: {}
  };

  for (const cmd of commands) {
    if (!cmd.flow) continue;

    const cmdName = cmd.command;

    // Build next_steps relationships
    if (cmd.flow.next_steps) {
      if (!relationships.dependencies[cmdName]) {
        relationships.dependencies[cmdName] = { next: [], prev: [] };
      }
      relationships.dependencies[cmdName].next = cmd.flow.next_steps;

      // Add reverse relationship
      for (const nextCmd of cmd.flow.next_steps) {
        if (!relationships.dependencies[nextCmd]) {
          relationships.dependencies[nextCmd] = { next: [], prev: [] };
        }
        if (!relationships.dependencies[nextCmd].prev.includes(cmdName)) {
          relationships.dependencies[nextCmd].prev.push(cmdName);
        }
      }
    }

    // Build prerequisites relationships
    if (cmd.flow.prerequisites) {
      if (!relationships.dependencies[cmdName]) {
        relationships.dependencies[cmdName] = { next: [], prev: [] };
      }
      relationships.dependencies[cmdName].prev = [
        ...new Set([...relationships.dependencies[cmdName].prev, ...cmd.flow.prerequisites])
      ];
    }

    // Build alternatives
    if (cmd.flow.alternatives) {
      relationships.alternatives[cmdName] = cmd.flow.alternatives;
    }

    // Add to workflows list
    if (cmd.category === 'workflow') {
      relationships.workflows.push({
        name: cmd.name,
        command: cmd.command,
        description: cmd.description,
        flow: cmd.flow
      });
    }
  }

  return relationships;
}

// ========== API Routes ==========

/**
@@ -181,25 +287,17 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
  // Initialize file watchers on first request
  initializeFileWatchers(initialPath);

  const indexDir = getIndexDir(initialPath);

  // API: Get all commands with optional search
  if (pathname === '/api/help/commands') {
    if (!indexDir) {
    const commandData = getCommandData(initialPath);
    if (!commandData) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'ccw-help index directory not found' }));
      res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
      return true;
    }

    const searchQuery = url.searchParams.get('q') || '';
    const filePath = join(indexDir, 'all-commands.json');

    let commands = getCachedData('all-commands', filePath);

    if (!commands) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Commands data not found' }));
      return true;
    }
    let commands = commandData.commands || [];

    // Filter by search query if provided
    if (searchQuery) {
@@ -213,26 +311,24 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
    res.end(JSON.stringify({
      commands: commands,
      grouped: grouped,
      total: commands.length
      total: commands.length,
      essential: commandData.essential_commands || [],
      metadata: commandData._metadata
    }));
    return true;
  }

  // API: Get workflow command relationships
  if (pathname === '/api/help/workflows') {
    if (!indexDir) {
    const commandData = getCommandData(initialPath);
    if (!commandData) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'ccw-help index directory not found' }));
      res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
      return true;
    }
    const filePath = join(indexDir, 'command-relationships.json');
    const relationships = getCachedData('command-relationships', filePath);

    if (!relationships) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Workflow relationships not found' }));
      return true;
    }
    const commands = commandData.commands || [];
    const relationships = buildWorkflowRelationships(commands);

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(relationships));
@@ -241,22 +337,38 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {

  // API: Get commands by category
  if (pathname === '/api/help/commands/by-category') {
    if (!indexDir) {
    const commandData = getCommandData(initialPath);
    if (!commandData) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'ccw-help index directory not found' }));
      res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
      return true;
    }
    const filePath = join(indexDir, 'by-category.json');
    const byCategory = getCachedData('by-category', filePath);

    if (!byCategory) {
    const commands = commandData.commands || [];
    const byCategory = groupCommandsByCategory(commands);

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({
      categories: commandData.categories || [],
      grouped: byCategory
    }));
    return true;
  }

  // API: Get agents list
  if (pathname === '/api/help/agents') {
    const commandData = getCommandData(initialPath);
    if (!commandData) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Category data not found' }));
      res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
      return true;
    }

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(byCategory));
    res.end(JSON.stringify({
      agents: commandData.agents || [],
      total: (commandData.agents || []).length
    }));
    return true;
  }

@@ -23,7 +23,7 @@
 * - POST /api/queue/reorder - Reorder queue items
 */
import { readFileSync, existsSync, writeFileSync, mkdirSync, unlinkSync } from 'fs';
import { join } from 'path';
import { join, resolve, normalize } from 'path';
import type { RouteContext } from './types.js';

// ========== JSONL Helper Functions ==========
@@ -67,6 +67,12 @@ function readIssueHistoryJsonl(issuesDir: string): any[] {
  }
}

function writeIssueHistoryJsonl(issuesDir: string, issues: any[]) {
  if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
  const historyPath = join(issuesDir, 'issue-history.jsonl');
  writeFileSync(historyPath, issues.map(i => JSON.stringify(i)).join('\n'));
}

function writeSolutionsJsonl(issuesDir: string, issueId: string, solutions: any[]) {
  const solutionsDir = join(issuesDir, 'solutions');
  if (!existsSync(solutionsDir)) mkdirSync(solutionsDir, { recursive: true });
@@ -156,7 +162,30 @@ function writeQueue(issuesDir: string, queue: any) {

function getIssueDetail(issuesDir: string, issueId: string) {
  const issues = readIssuesJsonl(issuesDir);
  const issue = issues.find(i => i.id === issueId);
  let issue = issues.find(i => i.id === issueId);

  // Fallback: Reconstruct issue from solution file if issue not in issues.jsonl
  if (!issue) {
    const solutionPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
    if (existsSync(solutionPath)) {
      const solutions = readSolutionsJsonl(issuesDir, issueId);
      if (solutions.length > 0) {
        const boundSolution = solutions.find(s => s.is_bound) || solutions[0];
        issue = {
          id: issueId,
          title: boundSolution?.description || issueId,
          status: 'completed',
          priority: 3,
          context: boundSolution?.approach || '',
          bound_solution_id: boundSolution?.id || null,
          created_at: boundSolution?.created_at || new Date().toISOString(),
          updated_at: new Date().toISOString(),
          _reconstructed: true
        };
      }
    }
  }

  if (!issue) return null;

  const solutions = readSolutionsJsonl(issuesDir, issueId);
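The fallback added to `getIssueDetail` rebuilds a minimal issue record from its solutions file when the issue is missing from `issues.jsonl`. A standalone sketch of that reconstruction (the interface and sample data are hypothetical):

```typescript
// Hypothetical distillation of the fallback: rebuild a minimal issue from its bound solution.
interface Solution {
  id: string;
  description?: string;
  approach?: string;
  is_bound?: boolean;
  created_at?: string;
}

function reconstructIssue(issueId: string, solutions: Solution[]) {
  if (solutions.length === 0) return null;
  const bound = solutions.find(s => s.is_bound) || solutions[0];
  return {
    id: issueId,
    title: bound.description || issueId,
    status: 'completed',
    bound_solution_id: bound.id,
    _reconstructed: true // marks the record as synthesized, as in the diff
  };
}

console.log(reconstructIssue('ISSUE-7', [{ id: 'SOL-1', description: 'Fix crash', is_bound: true }]));
```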
@@ -254,11 +283,46 @@ function bindSolutionToIssue(issuesDir: string, issueId: string, solutionId: str
  return { success: true, bound: solutionId };
}

// ========== Path Validation ==========

/**
 * Validate that the provided path is safe (no path traversal)
 * Returns the resolved, normalized path or null if invalid
 */
function validateProjectPath(requestedPath: string, basePath: string): string | null {
  if (!requestedPath) return basePath;

  // Resolve to absolute path and normalize
  const resolvedPath = resolve(normalize(requestedPath));
  const resolvedBase = resolve(normalize(basePath));

  // For local development tool, we allow any absolute path
  // but prevent obvious traversal attempts
  if (requestedPath.includes('..') && !resolvedPath.startsWith(resolvedBase)) {
    // Check if it's trying to escape with ..
    const normalizedRequested = normalize(requestedPath);
    if (normalizedRequested.startsWith('..')) {
      return null;
    }
  }

  return resolvedPath;
}

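The path check above is small enough to exercise in isolation; here is a copy of it with two sample calls (behavior for other inputs depends on the process working directory, so only the clear-cut cases are shown):

```typescript
// Copy of validateProjectPath for illustration: rejects requests that start with `..`.
import { resolve, normalize } from 'path';

function validateProjectPath(requestedPath: string, basePath: string): string | null {
  if (!requestedPath) return basePath;

  const resolvedPath = resolve(normalize(requestedPath));
  const resolvedBase = resolve(normalize(basePath));

  // Allow absolute paths, but reject obvious traversal attempts
  if (requestedPath.includes('..') && !resolvedPath.startsWith(resolvedBase)) {
    const normalizedRequested = normalize(requestedPath);
    if (normalizedRequested.startsWith('..')) {
      return null;
    }
  }

  return resolvedPath;
}

console.log(validateProjectPath('', '/srv/app'));          // falls back to the base path unchanged
console.log(validateProjectPath('../../etc', '/srv/app')); // null, unless the cwd is under /srv/app
```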
// ========== Route Handler ==========
|
||||
|
||||
export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
|
||||
const { pathname, url, req, res, initialPath, handlePostRequest } = ctx;
|
||||
const projectPath = url.searchParams.get('path') || initialPath;
|
||||
const rawProjectPath = url.searchParams.get('path') || initialPath;
|
||||
|
||||
// Validate project path to prevent path traversal
|
||||
const projectPath = validateProjectPath(rawProjectPath, initialPath);
|
||||
if (!projectPath) {
|
||||
res.writeHead(400, { 'Content-Type': 'application/json' });
|
||||
res.end(JSON.stringify({ error: 'Invalid project path' }));
|
||||
return true;
|
||||
}
|
||||
|
||||
const issuesDir = join(projectPath, '.workflow', 'issues');
|
||||
|
||||
// ===== Queue Routes (top-level /api/queue) =====
|
||||
@@ -295,7 +359,8 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {

  // GET /api/queue/:id - Get specific queue by ID
  const queueDetailMatch = pathname.match(/^\/api\/queue\/([^/]+)$/);
  const reservedQueuePaths = ['history', 'reorder', 'switch', 'deactivate', 'merge'];
  if (queueDetailMatch && req.method === 'GET' && !reservedQueuePaths.includes(queueDetailMatch[1])) {
    const queueId = queueDetailMatch[1];
    const queuesDir = join(issuesDir, 'queues');
    const queueFilePath = join(queuesDir, `${queueId}.json`);
@@ -347,6 +412,29 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
    return true;
  }

  // POST /api/queue/deactivate - Deactivate current queue (set active to null)
  if (pathname === '/api/queue/deactivate' && req.method === 'POST') {
    handlePostRequest(req, res, async (body: any) => {
      const queuesDir = join(issuesDir, 'queues');
      const indexPath = join(queuesDir, 'index.json');

      try {
        const index = existsSync(indexPath)
          ? JSON.parse(readFileSync(indexPath, 'utf8'))
          : { active_queue_id: null, queues: [] };

        const previousActiveId = index.active_queue_id;
        index.active_queue_id = null;
        writeFileSync(indexPath, JSON.stringify(index, null, 2));

        return { success: true, previous_active_id: previousActiveId };
      } catch (err) {
        return { error: 'Failed to deactivate queue' };
      }
    });
    return true;
  }

  // POST /api/queue/reorder - Reorder queue items (supports both solutions and tasks)
  if (pathname === '/api/queue/reorder' && req.method === 'POST') {
    handlePostRequest(req, res, async (body: any) => {
@@ -399,6 +487,237 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
    return true;
  }

  // DELETE /api/queue/:queueId/item/:itemId - Delete item from queue
  const queueItemDeleteMatch = pathname.match(/^\/api\/queue\/([^/]+)\/item\/([^/]+)$/);
  if (queueItemDeleteMatch && req.method === 'DELETE') {
    const queueId = queueItemDeleteMatch[1];
    const itemId = decodeURIComponent(queueItemDeleteMatch[2]);

    const queuesDir = join(issuesDir, 'queues');
    const queueFilePath = join(queuesDir, `${queueId}.json`);

    if (!existsSync(queueFilePath)) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: `Queue ${queueId} not found` }));
      return true;
    }

    try {
      const queue = JSON.parse(readFileSync(queueFilePath, 'utf8'));
      const items = queue.solutions || queue.tasks || [];
      const filteredItems = items.filter((item: any) => item.item_id !== itemId);

      if (filteredItems.length === items.length) {
        res.writeHead(404, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ error: `Item ${itemId} not found in queue` }));
        return true;
      }

      // Update queue items
      if (queue.solutions) {
        queue.solutions = filteredItems;
      } else {
        queue.tasks = filteredItems;
      }

      // Recalculate metadata
      const completedCount = filteredItems.filter((i: any) => i.status === 'completed').length;
      queue._metadata = {
        ...queue._metadata,
        updated_at: new Date().toISOString(),
        ...(queue.solutions
          ? { total_solutions: filteredItems.length, completed_solutions: completedCount }
          : { total_tasks: filteredItems.length, completed_tasks: completedCount })
      };

      writeFileSync(queueFilePath, JSON.stringify(queue, null, 2));

      // Update index counts
      const indexPath = join(queuesDir, 'index.json');
      if (existsSync(indexPath)) {
        try {
          const index = JSON.parse(readFileSync(indexPath, 'utf8'));
          const queueEntry = index.queues?.find((q: any) => q.id === queueId);
          if (queueEntry) {
            if (queue.solutions) {
              queueEntry.total_solutions = filteredItems.length;
              queueEntry.completed_solutions = completedCount;
            } else {
              queueEntry.total_tasks = filteredItems.length;
              queueEntry.completed_tasks = completedCount;
            }
            writeFileSync(indexPath, JSON.stringify(index, null, 2));
          }
        } catch (err) {
          console.error('Failed to update queue index:', err);
        }
      }

      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ success: true, queueId, deletedItemId: itemId }));
    } catch (err) {
      res.writeHead(500, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Failed to delete item' }));
    }
    return true;
  }

  // DELETE /api/queue/:queueId - Delete entire queue
  const queueDeleteMatch = pathname.match(/^\/api\/queue\/([^/]+)$/);
  if (queueDeleteMatch && req.method === 'DELETE') {
    const queueId = queueDeleteMatch[1];
    const queuesDir = join(issuesDir, 'queues');
    const queueFilePath = join(queuesDir, `${queueId}.json`);
    const indexPath = join(queuesDir, 'index.json');

    if (!existsSync(queueFilePath)) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: `Queue ${queueId} not found` }));
      return true;
    }

    try {
      // Delete queue file
      unlinkSync(queueFilePath);

      // Update index
      if (existsSync(indexPath)) {
        const index = JSON.parse(readFileSync(indexPath, 'utf8'));

        // Remove from queues array
        index.queues = (index.queues || []).filter((q: any) => q.id !== queueId);

        // Clear active if this was the active queue
        if (index.active_queue_id === queueId) {
          index.active_queue_id = null;
        }

        writeFileSync(indexPath, JSON.stringify(index, null, 2));
      }

      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ success: true, deletedQueueId: queueId }));
    } catch (err) {
      res.writeHead(500, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Failed to delete queue' }));
    }
    return true;
  }

  // POST /api/queue/merge - Merge source queue into target queue
  if (pathname === '/api/queue/merge' && req.method === 'POST') {
    handlePostRequest(req, res, async (body: any) => {
      const { sourceQueueId, targetQueueId } = body;
      if (!sourceQueueId || !targetQueueId) {
        return { error: 'sourceQueueId and targetQueueId required' };
      }

      if (sourceQueueId === targetQueueId) {
        return { error: 'Cannot merge queue into itself' };
      }

      const queuesDir = join(issuesDir, 'queues');
      const sourcePath = join(queuesDir, `${sourceQueueId}.json`);
      const targetPath = join(queuesDir, `${targetQueueId}.json`);

      if (!existsSync(sourcePath)) return { error: `Source queue ${sourceQueueId} not found` };
      if (!existsSync(targetPath)) return { error: `Target queue ${targetQueueId} not found` };

      try {
        const sourceQueue = JSON.parse(readFileSync(sourcePath, 'utf8'));
        const targetQueue = JSON.parse(readFileSync(targetPath, 'utf8'));

        const sourceItems = sourceQueue.solutions || sourceQueue.tasks || [];
        const targetItems = targetQueue.solutions || targetQueue.tasks || [];
        const isSolutionBased = !!targetQueue.solutions;

        // Re-index source items to avoid ID conflicts
        const maxOrder = targetItems.reduce((max: number, i: any) => Math.max(max, i.execution_order || 0), 0);
        const reindexedSourceItems = sourceItems.map((item: any, idx: number) => ({
          ...item,
          item_id: `${item.item_id}-merged`,
          execution_order: maxOrder + idx + 1,
          execution_group: item.execution_group ? `M-${item.execution_group}` : 'M-ungrouped'
        }));

        // Merge items
        const mergedItems = [...targetItems, ...reindexedSourceItems];

        if (isSolutionBased) {
          targetQueue.solutions = mergedItems;
        } else {
          targetQueue.tasks = mergedItems;
        }

        // Merge issue_ids
        const mergedIssueIds = [...new Set([
          ...(targetQueue.issue_ids || []),
          ...(sourceQueue.issue_ids || [])
        ])];
        targetQueue.issue_ids = mergedIssueIds;

        // Update metadata
        const completedCount = mergedItems.filter((i: any) => i.status === 'completed').length;
        targetQueue._metadata = {
          ...targetQueue._metadata,
          updated_at: new Date().toISOString(),
          ...(isSolutionBased
            ? { total_solutions: mergedItems.length, completed_solutions: completedCount }
            : { total_tasks: mergedItems.length, completed_tasks: completedCount })
        };

        // Write merged queue
        writeFileSync(targetPath, JSON.stringify(targetQueue, null, 2));

        // Update source queue status
        sourceQueue.status = 'merged';
        sourceQueue._metadata = {
          ...sourceQueue._metadata,
          merged_into: targetQueueId,
          merged_at: new Date().toISOString()
        };
        writeFileSync(sourcePath, JSON.stringify(sourceQueue, null, 2));

        // Update index
        const indexPath = join(queuesDir, 'index.json');
        if (existsSync(indexPath)) {
          try {
            const index = JSON.parse(readFileSync(indexPath, 'utf8'));
            const sourceEntry = index.queues?.find((q: any) => q.id === sourceQueueId);
            const targetEntry = index.queues?.find((q: any) => q.id === targetQueueId);
            if (sourceEntry) {
              sourceEntry.status = 'merged';
            }
            if (targetEntry) {
              if (isSolutionBased) {
                targetEntry.total_solutions = mergedItems.length;
                targetEntry.completed_solutions = completedCount;
              } else {
                targetEntry.total_tasks = mergedItems.length;
                targetEntry.completed_tasks = completedCount;
              }
              targetEntry.issue_ids = mergedIssueIds;
            }
            writeFileSync(indexPath, JSON.stringify(index, null, 2));
          } catch {
            // Ignore index update errors
          }
        }

        return {
          success: true,
          sourceQueueId,
          targetQueueId,
          mergedItemCount: sourceItems.length,
          totalItems: mergedItems.length
        };
      } catch (err) {
        return { error: 'Failed to merge queues' };
      }
    });
    return true;
  }

  // Legacy: GET /api/issues/queue (backward compat)
  if (pathname === '/api/issues/queue' && req.method === 'GET') {
    const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
@@ -546,6 +865,39 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
    return true;
  }

  // POST /api/issues/:id/archive - Archive issue (move to history)
  const archiveMatch = pathname.match(/^\/api\/issues\/([^/]+)\/archive$/);
  if (archiveMatch && req.method === 'POST') {
    const issueId = decodeURIComponent(archiveMatch[1]);

    const issues = readIssuesJsonl(issuesDir);
    const issueIndex = issues.findIndex(i => i.id === issueId);

    if (issueIndex === -1) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'Issue not found' }));
      return true;
    }

    // Get the issue and add archive metadata
    const issue = issues[issueIndex];
    issue.archived_at = new Date().toISOString();
    issue.status = 'completed';

    // Move to history
    const history = readIssueHistoryJsonl(issuesDir);
    history.push(issue);
    writeIssueHistoryJsonl(issuesDir, history);

    // Remove from active issues
    issues.splice(issueIndex, 1);
    writeIssuesJsonl(issuesDir, issues);

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ success: true, issueId, archivedAt: issue.archived_at }));
    return true;
  }

  // POST /api/issues/:id/solutions - Add solution
  const addSolMatch = pathname.match(/^\/api\/issues\/([^/]+)\/solutions$/);
  if (addSolMatch && req.method === 'POST') {

@@ -302,9 +302,14 @@

.collapsible-content {
  padding: 1rem;
  display: block;
}

.collapsible-content.collapsed {
  display: none;
}

/* Legacy .open class support */
.collapsible-content.open {
  display: block;
}

@@ -406,6 +406,7 @@
}

.collapsible-content {
  display: block;
  padding: 1rem;
  background: hsl(var(--muted));
}
@@ -1281,7 +1282,7 @@
.multi-cli-status.pending,
.multi-cli-status.exploring,
.multi-cli-status.initialized {
  background: hsl(var(--muted));
  color: hsl(var(--muted-foreground));
}

@@ -3440,6 +3441,309 @@
  transform: rotate(-90deg);
}

/* Discussion Round using collapsible-section pattern */
.discussion-round.collapsible-section {
  margin-bottom: 0.75rem;
  border: 1px solid hsl(var(--border));
  border-radius: 8px;
  overflow: hidden;
  background: hsl(var(--card));
}

.discussion-round.collapsible-section .collapsible-header {
  display: flex;
  align-items: center;
  gap: 0.75rem;
  padding: 0.75rem 1rem;
  background: hsl(var(--muted) / 0.3);
  cursor: pointer;
  transition: background-color 0.2s;
}

.discussion-round.collapsible-section .collapsible-header:hover {
  background: hsl(var(--muted) / 0.5);
}

.discussion-round.collapsible-section .collapsible-content {
  padding: 1rem;
  border-top: 1px solid hsl(var(--border) / 0.5);
  background: hsl(var(--card));
}

.discussion-round.collapsible-section .collapsible-content.collapsed {
  display: none;
}

/* ========== Summary Tab Content ========== */
.summary-tab-content .summary-section {
  margin-bottom: 1rem;
  padding: 1rem;
  border: 1px solid hsl(var(--border));
  border-radius: 8px;
  background: hsl(var(--card));
}

.summary-section-title {
  font-size: 0.9rem;
  font-weight: 600;
  color: hsl(var(--foreground));
  margin-bottom: 0.75rem;
  display: flex;
  align-items: center;
  gap: 0.375rem;
}

.summary-content {
  font-size: 0.875rem;
  color: hsl(var(--muted-foreground));
  line-height: 1.6;
}

.convergence-info {
  display: flex;
  align-items: center;
  gap: 0.75rem;
}

.convergence-level {
  font-size: 0.75rem;
  padding: 0.25rem 0.5rem;
  border-radius: 4px;
  text-transform: capitalize;
  background: hsl(var(--muted));
}

.convergence-level.full { background: hsl(var(--success) / 0.15); color: hsl(var(--success)); }
.convergence-level.partial { background: hsl(var(--warning) / 0.15); color: hsl(var(--warning)); }
.convergence-level.low { background: hsl(var(--error) / 0.15); color: hsl(var(--error)); }

.convergence-rec {
  font-size: 0.75rem;
  padding: 0.25rem 0.5rem;
  border-radius: 4px;
  text-transform: capitalize;
  background: hsl(var(--info) / 0.15);
  color: hsl(var(--info));
}

.convergence-rec.converged { background: hsl(var(--success) / 0.15); color: hsl(var(--success)); }
.convergence-rec.continue { background: hsl(var(--info) / 0.15); color: hsl(var(--info)); }

/* Summary collapsible Solutions section */
.summary-section.collapsible-section {
  padding: 0;
  overflow: hidden;
}

.summary-section.collapsible-section .collapsible-header {
  padding: 0.75rem 1rem;
  background: hsl(var(--card));
  border-bottom: 1px solid transparent;
}

.summary-section.collapsible-section .collapsible-header:hover {
  background: hsl(var(--muted) / 0.5);
}

.summary-section.collapsible-section .collapsible-content {
  padding: 1rem;
  background: hsl(var(--muted) / 0.3);
  border-top: 1px solid hsl(var(--border) / 0.5);
}

.solution-summary-item {
  display: flex;
  align-items: center;
  gap: 0.75rem;
  padding: 0.5rem 0;
  border-bottom: 1px solid hsl(var(--border) / 0.3);
}

.solution-summary-item:last-child {
  border-bottom: none;
}

.solution-num {
  font-size: 0.75rem;
  font-weight: 600;
  color: hsl(var(--primary));
  min-width: 1.5rem;
}

.solution-name {
  flex: 1;
  font-size: 0.875rem;
}

.feasibility-badge {
  font-size: 0.7rem;
  padding: 0.125rem 0.375rem;
  border-radius: 4px;
  background: hsl(var(--success) / 0.15);
  color: hsl(var(--success));
}

/* ========== Context Tab Content (Multi-CLI) ========== */
.context-tab-content {
  display: flex;
  flex-direction: column;
  gap: 1rem;
  padding: 1rem;
}

.context-tab-content .context-section {
  padding: 1rem;
  border: 1px solid hsl(var(--border));
  border-radius: 8px;
  background: hsl(var(--card));
}

.context-tab-content .context-section-title {
  font-size: 0.9rem;
  font-weight: 600;
  color: hsl(var(--foreground));
  margin-bottom: 0.75rem;
  display: flex;
  align-items: center;
  gap: 0.375rem;
}

.context-tab-content .context-description {
  font-size: 0.875rem;
  color: hsl(var(--muted-foreground));
  line-height: 1.6;
  margin: 0;
}

.context-tab-content .constraints-list {
  margin: 0;
  padding-left: 1.25rem;
  font-size: 0.875rem;
  color: hsl(var(--muted-foreground));
}

.context-tab-content .constraints-list li {
  margin-bottom: 0.375rem;
}

.context-tab-content .path-tags {
  display: flex;
  flex-wrap: wrap;
  gap: 0.5rem;
}

.context-tab-content .path-tag {
  font-family: monospace;
  font-size: 0.75rem;
  padding: 0.25rem 0.5rem;
  background: hsl(var(--muted));
  border-radius: 4px;
  color: hsl(var(--foreground));
}

.context-tab-content .session-id-code {
  font-family: monospace;
  font-size: 0.8rem;
  padding: 0.5rem 0.75rem;
  background: hsl(var(--muted));
  border-radius: 4px;
  display: inline-block;
}

/* Context tab collapsible sections */
.context-tab-content .context-section.collapsible-section {
  padding: 0;
  overflow: hidden;
}

.context-tab-content .context-section.collapsible-section .collapsible-header {
  padding: 0.75rem 1rem;
  background: hsl(var(--card));
}

.context-tab-content .context-section.collapsible-section .collapsible-header:hover {
  background: hsl(var(--muted) / 0.5);
}

.context-tab-content .context-section.collapsible-section .collapsible-content {
  padding: 1rem;
  background: hsl(var(--muted) / 0.3);
  border-top: 1px solid hsl(var(--border) / 0.5);
}

.context-tab-content .files-list {
  margin: 0;
  padding: 0;
  list-style: none;
}

.context-tab-content .file-item {
  display: flex;
  align-items: center;
  gap: 0.5rem;
  padding: 0.375rem 0;
  border-bottom: 1px solid hsl(var(--border) / 0.3);
  font-size: 0.8rem;
}

.context-tab-content .file-item:last-child {
  border-bottom: none;
}

.context-tab-content .file-icon {
  flex-shrink: 0;
}

.context-tab-content .file-item code {
  font-family: monospace;
  font-size: 0.75rem;
  background: hsl(var(--muted));
  padding: 0.125rem 0.375rem;
  border-radius: 3px;
}

.context-tab-content .file-reason {
  color: hsl(var(--muted-foreground));
  font-size: 0.75rem;
  margin-left: auto;
}

.context-tab-content .deps-list {
  margin: 0;
  padding-left: 1.25rem;
  font-size: 0.8rem;
  color: hsl(var(--foreground));
}

.context-tab-content .deps-list li {
  margin-bottom: 0.25rem;
}

.context-tab-content .risks-list {
  margin: 0;
  padding-left: 1.25rem;
}

.context-tab-content .risk-item {
  font-size: 0.875rem;
  color: hsl(var(--warning));
  margin-bottom: 0.375rem;
}

.context-tab-content .json-content {
  font-family: monospace;
  font-size: 0.75rem;
  line-height: 1.5;
  margin: 0;
  white-space: pre-wrap;
  word-break: break-all;
  max-height: 400px;
  overflow-y: auto;
  background: hsl(var(--background));
  padding: 0.75rem;
  border-radius: 4px;
}

/* ========== Association Section Styles ========== */
.association-section {
  margin-bottom: 1.5rem;
@@ -3621,3 +3925,328 @@
  }
}

/* ===================================
   Multi-CLI Plan Summary Section
   =================================== */

/* Plan Summary Section - card-like styling */
.plan-summary-section {
  background: hsl(var(--card));
  border: 1px solid hsl(var(--border));
  border-radius: 0.5rem;
  padding: 1rem 1.25rem;
  margin-bottom: 1.25rem;
}

.plan-summary-section:hover {
  border-color: hsl(var(--purple, 280 60% 50%) / 0.3);
}

/* Plan text styles */
.plan-summary-text,
.plan-solution-text,
.plan-approach-text {
  font-size: 0.875rem;
  line-height: 1.6;
  color: hsl(var(--foreground));
  margin: 0 0 0.75rem 0;
}

.plan-summary-text:last-child,
.plan-solution-text:last-child,
.plan-approach-text:last-child {
  margin-bottom: 0;
}

.plan-summary-text strong,
.plan-solution-text strong,
.plan-approach-text strong {
  color: hsl(var(--muted-foreground));
  font-weight: 600;
  margin-right: 0.5rem;
}

/* Plan meta badges container */
.plan-meta-badges {
  display: flex;
  flex-wrap: wrap;
  gap: 0.5rem;
  margin-top: 0.75rem;
  padding-top: 0.75rem;
  border-top: 1px solid hsl(var(--border) / 0.5);
}

/* Feasibility badge */
.feasibility-badge {
  display: inline-flex;
  align-items: center;
  padding: 0.25rem 0.625rem;
  background: hsl(var(--primary) / 0.1);
  color: hsl(var(--primary));
  border-radius: 0.25rem;
  font-size: 0.75rem;
  font-weight: 500;
}

/* Effort badge variants */
.effort-badge {
  display: inline-flex;
  align-items: center;
  padding: 0.25rem 0.625rem;
  border-radius: 0.25rem;
  font-size: 0.75rem;
  font-weight: 500;
}

.effort-badge.low {
  background: hsl(var(--success-light, 142 70% 95%));
  color: hsl(var(--success, 142 70% 45%));
}

.effort-badge.medium {
  background: hsl(var(--warning-light, 45 90% 95%));
  color: hsl(var(--warning, 45 90% 40%));
}

.effort-badge.high {
  background: hsl(var(--destructive) / 0.1);
  color: hsl(var(--destructive));
}

/* Complexity badge */
.complexity-badge {
  display: inline-flex;
  align-items: center;
  padding: 0.25rem 0.625rem;
  background: hsl(var(--muted));
  color: hsl(var(--foreground));
  border-radius: 0.25rem;
  font-size: 0.75rem;
  font-weight: 500;
}

/* Time badge */
.time-badge {
  display: inline-flex;
  align-items: center;
  padding: 0.25rem 0.625rem;
  background: hsl(var(--info-light, 220 80% 95%));
  color: hsl(var(--info, 220 80% 55%));
  border-radius: 0.25rem;
  font-size: 0.75rem;
  font-weight: 500;
}

/* ===================================
   Multi-CLI Task Item Additional Badges
   =================================== */

/* Files meta badge */
.meta-badge.files {
  background: hsl(var(--purple, 280 60% 50%) / 0.1);
  color: hsl(var(--purple, 280 60% 50%));
}

/* Depends meta badge */
.meta-badge.depends {
  background: hsl(var(--info-light, 220 80% 95%));
  color: hsl(var(--info, 220 80% 55%));
}

/* Multi-CLI Task Item Full - enhanced padding */
.detail-task-item-full.multi-cli-task-item {
  background: hsl(var(--card));
  border: 1px solid hsl(var(--border));
  border-radius: 0.5rem;
  padding: 0.875rem 1rem;
  transition: all 0.2s ease;
  border-left: 3px solid hsl(var(--primary) / 0.5);
}

.detail-task-item-full.multi-cli-task-item:hover {
  border-color: hsl(var(--primary) / 0.4);
  border-left-color: hsl(var(--primary));
  box-shadow: 0 2px 8px hsl(var(--primary) / 0.1);
  background: hsl(var(--hover));
}

/* Task ID badge enhancement */
.task-id-badge {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  min-width: 2.5rem;
  padding: 0.25rem 0.5rem;
  background: hsl(var(--purple, 280 60% 50%));
  color: white;
  border-radius: 0.25rem;
  font-size: 0.75rem;
  font-weight: 600;
  flex-shrink: 0;
}

/* Tasks list container */
.tasks-list {
  display: flex;
  flex-direction: column;
  gap: 0.625rem;
}

/* Plan section styling (for Plan tab) */
.plan-section {
  background: hsl(var(--muted) / 0.3);
  border: 1px solid hsl(var(--border));
  border-radius: 0.5rem;
  padding: 1rem;
  margin-bottom: 1rem;
}

.plan-section:last-child {
  margin-bottom: 0;
}

.plan-section-title {
  font-size: 0.9rem;
  font-weight: 600;
  color: hsl(var(--foreground));
  margin-bottom: 0.75rem;
  display: flex;
  align-items: center;
  gap: 0.5rem;
}

.plan-tab-content {
  display: flex;
  flex-direction: column;
  gap: 0;
}

.tasks-tab-content {
  display: flex;
  flex-direction: column;
  gap: 1rem;
}

/* ===================================
   Plan Summary Meta Badges
   =================================== */

/* Base meta badge style (plan summary) */
.plan-meta-badges .meta-badge {
  display: inline-block;
  padding: 0.25rem 0.625rem;
  border-radius: 0.375rem;
  font-size: 0.75rem;
  font-weight: 500;
  white-space: nowrap;
}

/* Feasibility badge */
.meta-badge.feasibility {
  background: hsl(var(--success) / 0.15);
  color: hsl(var(--success));
  border: 1px solid hsl(var(--success) / 0.3);
}

/* Effort badges */
.meta-badge.effort {
  background: hsl(var(--muted));
  color: hsl(var(--foreground));
}

.meta-badge.effort.low {
  background: hsl(142 70% 50% / 0.15);
  color: hsl(142 70% 35%);
}

.meta-badge.effort.medium {
  background: hsl(30 90% 50% / 0.15);
  color: hsl(30 90% 40%);
}

.meta-badge.effort.high {
  background: hsl(0 70% 50% / 0.15);
  color: hsl(0 70% 45%);
}

/* Risk badges */
.meta-badge.risk {
  background: hsl(var(--muted));
  color: hsl(var(--foreground));
}

.meta-badge.risk.low {
  background: hsl(142 70% 50% / 0.15);
  color: hsl(142 70% 35%);
}

.meta-badge.risk.medium {
  background: hsl(30 90% 50% / 0.15);
  color: hsl(30 90% 40%);
}

.meta-badge.risk.high {
  background: hsl(0 70% 50% / 0.15);
  color: hsl(0 70% 45%);
}

/* Severity badges */
.meta-badge.severity {
  background: hsl(var(--muted));
  color: hsl(var(--foreground));
}

.meta-badge.severity.low {
  background: hsl(142 70% 50% / 0.15);
  color: hsl(142 70% 35%);
}

.meta-badge.severity.medium {
  background: hsl(30 90% 50% / 0.15);
  color: hsl(30 90% 40%);
}

.meta-badge.severity.high,
.meta-badge.severity.critical {
  background: hsl(0 70% 50% / 0.15);
  color: hsl(0 70% 45%);
}

/* Complexity badge */
.meta-badge.complexity {
  background: hsl(var(--muted));
  color: hsl(var(--muted-foreground));
}

/* Time badge */
.meta-badge.time {
  background: hsl(220 80% 50% / 0.15);
  color: hsl(220 80% 45%);
}

/* Task item action badge */
.meta-badge.action {
  background: hsl(var(--primary) / 0.15);
  color: hsl(var(--primary));
}

/* Task item scope badge */
.meta-badge.scope {
  background: hsl(var(--muted));
  color: hsl(var(--muted-foreground));
  font-family: var(--font-mono);
  font-size: 0.7rem;
}

/* Task item impl steps badge */
.meta-badge.impl {
  background: hsl(280 60% 50% / 0.1);
  color: hsl(280 60% 50%);
}

/* Task item acceptance criteria badge */
.meta-badge.accept {
  background: hsl(var(--success) / 0.1);
  color: hsl(var(--success));
}

@@ -429,14 +429,16 @@
  border: 1px solid hsl(var(--border));
  border-radius: 0.75rem;
  overflow: hidden;
  margin-bottom: 1rem;
  box-shadow: 0 1px 3px hsl(var(--foreground) / 0.04);
}

.queue-group-header {
  display: flex;
  align-items: center;
  justify-content: space-between;
  padding: 0.875rem 1.25rem;
  background: hsl(var(--muted) / 0.3);
  border-bottom: 1px solid hsl(var(--border));
}

@@ -1256,6 +1258,68 @@
  color: hsl(var(--destructive));
}

/* Search Highlight */
.search-highlight {
  background: hsl(45 93% 47% / 0.3);
  color: inherit;
  padding: 0 2px;
  border-radius: 2px;
  font-weight: 500;
}

/* Search Suggestions Dropdown */
.search-suggestions {
  position: absolute;
  top: 100%;
  left: 0;
  right: 0;
  margin-top: 0.25rem;
  background: hsl(var(--card));
  border: 1px solid hsl(var(--border));
  border-radius: 0.5rem;
  box-shadow: 0 4px 12px hsl(var(--foreground) / 0.1);
  max-height: 300px;
  overflow-y: auto;
  z-index: 50;
  display: none;
}

.search-suggestions.show {
  display: block;
}

.search-suggestion-item {
  padding: 0.625rem 0.875rem;
  cursor: pointer;
  border-bottom: 1px solid hsl(var(--border) / 0.5);
  transition: background 0.15s ease;
}

.search-suggestion-item:hover,
.search-suggestion-item.selected {
  background: hsl(var(--muted));
}

.search-suggestion-item:last-child {
  border-bottom: none;
}

.suggestion-id {
  font-family: var(--font-mono);
  font-size: 0.7rem;
  color: hsl(var(--muted-foreground));
  margin-bottom: 0.125rem;
}

.suggestion-title {
  font-size: 0.8125rem;
  color: hsl(var(--foreground));
  line-height: 1.3;
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}

/* ==========================================
   CREATE BUTTON
   ========================================== */
@@ -1780,61 +1844,147 @@
}

.queue-items {
  padding: 0.75rem;
  padding: 1rem;
  display: flex;
  flex-direction: column;
  gap: 0.5rem;
  gap: 0.75rem;
}

/* Parallel items use CSS Grid for uniform sizing */
.queue-items.parallel {
  flex-direction: row;
  flex-wrap: wrap;
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(130px, 1fr));
  gap: 0.75rem;
}

.queue-items.parallel .queue-item {
  flex: 1;
  min-width: 200px;
  display: grid;
  grid-template-areas:
    "id id delete"
    "issue issue issue"
    "solution solution solution";
  grid-template-columns: 1fr 1fr auto;
  grid-template-rows: auto auto 1fr;
  align-items: start;
  padding: 0.75rem;
  min-height: 90px;
  gap: 0.25rem;
}

/* Card content layout */
.queue-items.parallel .queue-item .queue-item-id {
  grid-area: id;
  font-size: 0.875rem;
  font-weight: 700;
  color: hsl(var(--foreground));
}

.queue-items.parallel .queue-item .queue-item-issue {
  grid-area: issue;
  font-size: 0.6875rem;
  color: hsl(var(--muted-foreground));
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
  line-height: 1.3;
}

.queue-items.parallel .queue-item .queue-item-solution {
  grid-area: solution;
  display: flex;
  align-items: center;
  gap: 0.25rem;
  font-size: 0.75rem;
  font-weight: 500;
  color: hsl(var(--foreground));
  align-self: end;
}

/* Hide extra elements in parallel view */
.queue-items.parallel .queue-item .queue-item-files,
.queue-items.parallel .queue-item .queue-item-priority,
.queue-items.parallel .queue-item .queue-item-deps,
.queue-items.parallel .queue-item .queue-item-task {
  display: none;
}

/* Delete button positioned in corner */
.queue-items.parallel .queue-item .queue-item-delete {
  grid-area: delete;
  justify-self: end;
  padding: 0.125rem;
  opacity: 0;
}

.queue-group-type {
  display: flex;
  display: inline-flex;
  align-items: center;
  gap: 0.375rem;
  font-size: 0.875rem;
  font-weight: 600;
  padding: 0.25rem 0.625rem;
  border-radius: 0.375rem;
}

.queue-group-type.parallel {
  color: hsl(142 71% 45%);
  color: hsl(142 71% 40%);
  background: hsl(142 71% 45% / 0.1);
}

.queue-group-type.sequential {
  color: hsl(262 83% 58%);
  color: hsl(262 83% 50%);
  background: hsl(262 83% 58% / 0.1);
}

/* Queue Item Status Colors */
/* Queue Item Status Colors - Enhanced visual distinction */

/* Pending - Default subtle state */
.queue-item.pending,
.queue-item:not(.ready):not(.executing):not(.completed):not(.failed):not(.blocked) {
  border-color: hsl(var(--border));
  background: hsl(var(--card));
}

/* Ready - Blue tint, ready to execute */
.queue-item.ready {
  border-color: hsl(199 89% 48%);
  background: hsl(199 89% 48% / 0.06);
  border-left: 3px solid hsl(199 89% 48%);
}

/* Executing - Amber with pulse animation */
.queue-item.executing {
  border-color: hsl(45 93% 47%);
  background: hsl(45 93% 47% / 0.05);
  border-color: hsl(38 92% 50%);
  background: hsl(38 92% 50% / 0.08);
  border-left: 3px solid hsl(38 92% 50%);
  animation: executing-pulse 2s ease-in-out infinite;
}

@keyframes executing-pulse {
  0%, 100% { box-shadow: 0 0 0 0 hsl(38 92% 50% / 0.3); }
  50% { box-shadow: 0 0 8px 2px hsl(38 92% 50% / 0.2); }
}

/* Completed - Green success state */
.queue-item.completed {
  border-color: hsl(var(--success));
  background: hsl(var(--success) / 0.05);
  border-color: hsl(142 71% 45%);
  background: hsl(142 71% 45% / 0.06);
  border-left: 3px solid hsl(142 71% 45%);
}

/* Failed - Red error state */
.queue-item.failed {
  border-color: hsl(var(--destructive));
  background: hsl(var(--destructive) / 0.05);
  border-color: hsl(0 84% 60%);
  background: hsl(0 84% 60% / 0.06);
  border-left: 3px solid hsl(0 84% 60%);
}

/* Blocked - Purple/violet blocked state */
.queue-item.blocked {
  border-color: hsl(262 83% 58%);
  opacity: 0.7;
  background: hsl(262 83% 58% / 0.05);
  border-left: 3px solid hsl(262 83% 58%);
  opacity: 0.8;
}

/* Priority indicator */
@@ -2236,61 +2386,89 @@
  flex-direction: column;
  align-items: center;
  justify-content: center;
  padding: 0.75rem 1rem;
  background: hsl(var(--muted) / 0.3);
  padding: 1rem 1.25rem;
  background: hsl(var(--card));
  border: 1px solid hsl(var(--border));
  border-radius: 0.5rem;
  border-radius: 0.75rem;
  text-align: center;
  transition: all 0.2s ease;
}

.queue-stat-card:hover {
  transform: translateY(-1px);
  box-shadow: 0 2px 8px hsl(var(--foreground) / 0.06);
}

.queue-stat-card .queue-stat-value {
  font-size: 1.5rem;
  font-size: 1.75rem;
  font-weight: 700;
  color: hsl(var(--foreground));
  line-height: 1.2;
}

.queue-stat-card .queue-stat-label {
  font-size: 0.75rem;
  font-size: 0.6875rem;
  color: hsl(var(--muted-foreground));
  text-transform: uppercase;
  letter-spacing: 0.025em;
  margin-top: 0.25rem;
  letter-spacing: 0.05em;
  margin-top: 0.375rem;
  font-weight: 500;
}

/* Pending - Slate/Gray with subtle blue tint */
.queue-stat-card.pending {
  border-color: hsl(var(--muted-foreground) / 0.3);
  border-color: hsl(215 20% 65% / 0.4);
  background: linear-gradient(135deg, hsl(215 20% 95%) 0%, hsl(var(--card)) 100%);
}

.queue-stat-card.pending .queue-stat-value {
  color: hsl(var(--muted-foreground));
  color: hsl(215 20% 45%);
}

.queue-stat-card.pending .queue-stat-label {
  color: hsl(215 20% 55%);
}

/* Executing - Amber/Orange - attention-grabbing */
.queue-stat-card.executing {
  border-color: hsl(45 93% 47% / 0.5);
  background: hsl(45 93% 47% / 0.05);
  border-color: hsl(38 92% 50% / 0.5);
  background: linear-gradient(135deg, hsl(38 92% 95%) 0%, hsl(45 93% 97%) 100%);
}

.queue-stat-card.executing .queue-stat-value {
  color: hsl(45 93% 47%);
  color: hsl(38 92% 40%);
}

.queue-stat-card.executing .queue-stat-label {
  color: hsl(38 70% 45%);
}

/* Completed - Green - success indicator */
.queue-stat-card.completed {
  border-color: hsl(var(--success) / 0.5);
  background: hsl(var(--success) / 0.05);
  border-color: hsl(142 71% 45% / 0.5);
  background: linear-gradient(135deg, hsl(142 71% 95%) 0%, hsl(142 50% 97%) 100%);
}

.queue-stat-card.completed .queue-stat-value {
  color: hsl(var(--success));
  color: hsl(142 71% 35%);
}

.queue-stat-card.completed .queue-stat-label {
  color: hsl(142 50% 40%);
}

/* Failed - Red - error indicator */
.queue-stat-card.failed {
  border-color: hsl(var(--destructive) / 0.5);
  background: hsl(var(--destructive) / 0.05);
  border-color: hsl(0 84% 60% / 0.5);
  background: linear-gradient(135deg, hsl(0 84% 95%) 0%, hsl(0 70% 97%) 100%);
}

.queue-stat-card.failed .queue-stat-value {
  color: hsl(var(--destructive));
  color: hsl(0 84% 45%);
}

.queue-stat-card.failed .queue-stat-label {
  color: hsl(0 60% 50%);
}

/* ==========================================
@@ -2874,3 +3052,251 @@
  gap: 0.25rem;
}
}

/* ==========================================
   MULTI-QUEUE CARDS VIEW
   ========================================== */

/* Queue Cards Header */
.queue-cards-header {
  display: flex;
  align-items: center;
  justify-content: space-between;
  flex-wrap: wrap;
  gap: 1rem;
}

/* Queue Cards Grid */
.queue-cards-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(280px, 1fr));
  gap: 1rem;
  margin-bottom: 1.5rem;
}

/* Individual Queue Card */
.queue-card {
  position: relative;
  background: hsl(var(--card));
  border: 1px solid hsl(var(--border));
  border-radius: 0.75rem;
  padding: 1rem;
  cursor: pointer;
  transition: all 0.2s ease;
}

.queue-card:hover {
  border-color: hsl(var(--primary) / 0.5);
  transform: translateY(-2px);
  box-shadow: 0 4px 12px hsl(var(--foreground) / 0.08);
}

.queue-card.active {
  border-color: hsl(var(--primary));
  background: hsl(var(--primary) / 0.05);
}

.queue-card.merged {
  opacity: 0.6;
  border-style: dashed;
}

.queue-card.merged:hover {
  opacity: 0.8;
}

/* Queue Card Header */
.queue-card-header {
  display: flex;
  align-items: center;
  justify-content: space-between;
  margin-bottom: 0.75rem;
}

.queue-card-id {
  font-size: 0.875rem;
  font-weight: 600;
  color: hsl(var(--foreground));
}

.queue-card-badges {
  display: flex;
  align-items: center;
  gap: 0.5rem;
}

/* Queue Card Stats - Progress Bar */
.queue-card-stats {
  margin-bottom: 0.75rem;
}

.queue-card-stats .progress-bar {
  height: 6px;
  background: hsl(var(--muted));
  border-radius: 3px;
  overflow: hidden;
  margin-bottom: 0.5rem;
}

.queue-card-stats .progress-fill {
  height: 100%;
  background: hsl(var(--primary));
  border-radius: 3px;
  transition: width 0.3s ease;
}

.queue-card-stats .progress-fill.completed {
  background: hsl(var(--success, 142 76% 36%));
}

.queue-card-progress {
  display: flex;
  justify-content: space-between;
  font-size: 0.75rem;
  color: hsl(var(--foreground));
}

/* Queue Card Meta */
.queue-card-meta {
  display: flex;
  gap: 1rem;
  font-size: 0.75rem;
  color: hsl(var(--muted-foreground));
  margin-bottom: 0.75rem;
}

/* Queue Card Actions */
.queue-card-actions {
  display: flex;
  gap: 0.5rem;
  padding-top: 0.75rem;
  border-top: 1px solid hsl(var(--border));
}

/* Queue Detail Header */
.queue-detail-header {
  display: flex;
  align-items: center;
  gap: 1rem;
  flex-wrap: wrap;
}

.queue-detail-title {
  flex: 1;
  display: flex;
  align-items: center;
  gap: 1rem;
}

.queue-detail-actions {
  display: flex;
  gap: 0.5rem;
}

/* Queue Item Delete Button */
.queue-item-delete {
  margin-left: auto;
  padding: 0.25rem;
  opacity: 0;
  transition: opacity 0.15s ease;
  color: hsl(var(--muted-foreground));
  border-radius: 0.25rem;
}

.queue-item:hover .queue-item-delete {
  opacity: 1;
}

.queue-item-delete:hover {
  color: hsl(var(--destructive, 0 84% 60%));
  background: hsl(var(--destructive, 0 84% 60%) / 0.1);
}

/* Queue Error State */
.queue-error {
  padding: 2rem;
  text-align: center;
}

/* Responsive adjustments for queue cards */
@media (max-width: 640px) {
  .queue-cards-grid {
    grid-template-columns: 1fr;
  }

  .queue-cards-header {
    flex-direction: column;
    align-items: flex-start;
  }

  .queue-detail-header {
    flex-direction: column;
    align-items: flex-start;
  }

  .queue-detail-title {
    flex-direction: column;
    align-items: flex-start;
    gap: 0.5rem;
  }
}

/* ==========================================
   WARNING BUTTON STYLE
   ========================================== */

.btn-warning,
.btn-secondary.btn-warning {
  color: hsl(38 92% 40%);
  border-color: hsl(38 92% 50% / 0.5);
  background: hsl(38 92% 50% / 0.08);
}

.btn-warning:hover,
.btn-secondary.btn-warning:hover {
  background: hsl(38 92% 50% / 0.15);
  border-color: hsl(38 92% 50%);
}

.btn-danger,
.btn-secondary.btn-danger,
.btn-sm.btn-danger {
  color: hsl(var(--destructive));
  border-color: hsl(var(--destructive) / 0.5);
  background: hsl(var(--destructive) / 0.08);
}

.btn-danger:hover,
.btn-secondary.btn-danger:hover,
.btn-sm.btn-danger:hover {
  background: hsl(var(--destructive) / 0.15);
  border-color: hsl(var(--destructive));
}

/* Issue Detail Actions */
.issue-detail-actions {
  margin-top: 1rem;
  padding-top: 1rem;
  border-top: 1px solid hsl(var(--border));
}

.issue-detail-actions .flex {
  display: flex;
  gap: 0.5rem;
  flex-wrap: wrap;
}

/* Active queue badge enhancement */
.queue-active-badge {
  display: inline-flex;
  align-items: center;
  padding: 0.125rem 0.5rem;
  font-size: 0.6875rem;
  font-weight: 600;
  color: hsl(142 71% 35%);
  background: hsl(142 71% 45% / 0.15);
  border: 1px solid hsl(142 71% 45% / 0.3);
  border-radius: 9999px;
  text-transform: uppercase;
  letter-spacing: 0.025em;
}

@@ -1,6 +1,103 @@
// Hook Manager Component
// Manages Claude Code hooks configuration from settings.json

// ========== Platform Detection ==========
const PlatformUtils = {
  // Detect current platform
  detect() {
    if (typeof navigator !== 'undefined') {
      const platform = navigator.platform.toLowerCase();
      if (platform.includes('win')) return 'windows';
      if (platform.includes('mac')) return 'macos';
      return 'linux';
    }
    if (typeof process !== 'undefined') {
      if (process.platform === 'win32') return 'windows';
      if (process.platform === 'darwin') return 'macos';
      return 'linux';
    }
    return 'unknown';
  },

  isWindows() {
    return this.detect() === 'windows';
  },

  isUnix() {
    const platform = this.detect();
    return platform === 'macos' || platform === 'linux';
  },

  // Get default shell for platform
  getShell() {
    return this.isWindows() ? 'cmd' : 'bash';
  },

  // Check if template is compatible with current platform
  checkCompatibility(template) {
    const platform = this.detect();
    const issues = [];

    // bash commands require Unix or Git Bash on Windows
    if (template.command === 'bash' && platform === 'windows') {
      issues.push({
        level: 'warning',
        message: 'bash command may not work on Windows without Git Bash or WSL'
      });
    }

    // Check for Unix-specific shell features in args
    if (template.args && Array.isArray(template.args)) {
      const argStr = template.args.join(' ');

      if (platform === 'windows') {
        // Unix shell features that won't work in cmd
        if (argStr.includes('$HOME') || argStr.includes('${HOME}')) {
          issues.push({ level: 'warning', message: 'Uses $HOME - use %USERPROFILE% on Windows' });
        }
        if (argStr.includes('$(') || argStr.includes('`')) {
          issues.push({ level: 'warning', message: 'Uses command substitution - not supported in cmd' });
        }
        if (argStr.includes(' | ')) {
          issues.push({ level: 'info', message: 'Uses pipes - works in cmd but syntax may differ' });
        }
      }
    }

    return {
      compatible: issues.filter(i => i.level === 'error').length === 0,
      issues
    };
  },

  // Get platform-specific command variant if available
  getVariant(template) {
    const platform = this.detect();

    // Check if template has platform-specific variants
    if (template.variants && template.variants[platform]) {
      return { ...template, ...template.variants[platform] };
    }

    return template;
  },

  // Escape script for specific shell type
  escapeForShell(script, shell) {
    if (shell === 'bash' || shell === 'sh') {
      // Unix: use single quotes, escape internal single quotes
      return script.replace(/'/g, "'\\''");
    } else if (shell === 'cmd') {
      // Windows cmd: escape double quotes and special chars
      return script.replace(/"/g, '\\"').replace(/%/g, '%%');
    } else if (shell === 'powershell') {
      // PowerShell: escape single quotes by doubling
      return script.replace(/'/g, "''");
    }
    return script;
  }
};

// ========== Hook State ==========
let hookConfig = {
  global: { hooks: {} },
@@ -52,12 +149,13 @@ const HOOK_TEMPLATES = {
  'memory-update-queue': {
    event: 'Stop',
    matcher: '',
    command: 'bash',
    args: ['-c', 'ccw tool exec memory_queue "{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\"}"'],
    command: 'node',
    args: ['-e', "require('child_process').spawnSync(process.platform==='win32'?'cmd':'ccw',process.platform==='win32'?['/c','ccw','tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'gemini'})]:['tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'gemini'})],{stdio:'inherit'})"],
    description: 'Queue CLAUDE.md update when session ends (batched by threshold/timeout)',
    category: 'memory',
    configurable: true,
    config: {
      tool: { type: 'select', default: 'gemini', options: ['gemini', 'qwen', 'codex', 'opencode'], label: 'CLI Tool' },
      threshold: { type: 'number', default: 5, min: 1, max: 20, label: 'Threshold (paths)', step: 1 },
      timeout: { type: 'number', default: 300, min: 60, max: 1800, label: 'Timeout (seconds)', step: 60 }
    }
@@ -66,8 +164,8 @@ const HOOK_TEMPLATES = {
  'skill-context-keyword': {
    event: 'UserPromptSubmit',
    matcher: '',
    command: 'bash',
    args: ['-c', 'ccw tool exec skill_context_loader --stdin'],
    command: 'node',
    args: ['-e', "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({prompt:p.user_prompt||''})],{stdio:'inherit'})"],
    description: 'Load SKILL context based on keyword matching in user prompt',
    category: 'skill',
    configurable: true,
@@ -79,8 +177,8 @@ const HOOK_TEMPLATES = {
  'skill-context-auto': {
    event: 'UserPromptSubmit',
    matcher: '',
    command: 'bash',
    args: ['-c', 'ccw tool exec skill_context_loader --stdin --mode auto'],
    command: 'node',
    args: ['-e', "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({mode:'auto',prompt:p.user_prompt||''})],{stdio:'inherit'})"],
    description: 'Auto-detect and load SKILL based on skill name in prompt',
    category: 'skill',
    configurable: false
@@ -195,6 +293,7 @@ const WIZARD_TEMPLATES = {
    }
  ],
  configFields: [
    { key: 'tool', type: 'select', label: 'CLI Tool', default: 'gemini', options: ['gemini', 'qwen', 'codex', 'opencode'], description: 'CLI tool for CLAUDE.md generation' },
    { key: 'threshold', type: 'number', label: 'Threshold (paths)', default: 5, min: 1, max: 20, step: 1, description: 'Number of paths to trigger batch update' },
    { key: 'timeout', type: 'number', label: 'Timeout (seconds)', default: 300, min: 60, max: 1800, step: 60, description: 'Auto-flush queue after this time' }
  ]
@@ -392,6 +491,29 @@ function convertToClaudeCodeFormat(hookData) {
      });
      commandStr += ' ' + additionalArgs.join(' ');
    }
  } else if (commandStr === 'node' && hookData.args.length >= 2 && hookData.args[0] === '-e') {
    // Special handling for node -e commands using PlatformUtils
    const script = hookData.args[1];

    if (PlatformUtils.isWindows()) {
      // Windows: use double quotes, escape internal quotes
      const escapedScript = PlatformUtils.escapeForShell(script, 'cmd');
      commandStr = `node -e "${escapedScript}"`;
    } else {
      // Unix: use single quotes to prevent shell interpretation
      const escapedScript = PlatformUtils.escapeForShell(script, 'bash');
      commandStr = `node -e '${escapedScript}'`;
    }
    // Handle any additional args after the script
    if (hookData.args.length > 2) {
      const additionalArgs = hookData.args.slice(2).map(arg => {
        if (arg.includes(' ') && !arg.startsWith('"') && !arg.startsWith("'")) {
          return `"${arg.replace(/"/g, '\\"')}"`;
        }
        return arg;
      });
      commandStr += ' ' + additionalArgs.join(' ');
    }
  } else {
    // Default handling for other commands
    const quotedArgs = hookData.args.map(arg => {
@@ -748,6 +870,7 @@ function renderWizardModalContent() {
  // Helper to get translated field labels
  const getFieldLabel = (fieldKey) => {
    const labels = {
      'tool': t('hook.wizard.cliTool') || 'CLI Tool',
      'threshold': t('hook.wizard.thresholdPaths') || 'Threshold (paths)',
      'timeout': t('hook.wizard.timeoutSeconds') || 'Timeout (seconds)'
    };
@@ -756,6 +879,7 @@ function renderWizardModalContent() {

  const getFieldDesc = (fieldKey) => {
    const descs = {
      'tool': t('hook.wizard.cliToolDesc') || 'CLI tool for CLAUDE.md generation',
      'threshold': t('hook.wizard.thresholdPathsDesc') || 'Number of paths to trigger batch update',
      'timeout': t('hook.wizard.timeoutSecondsDesc') || 'Auto-flush queue after this time'
    };
@@ -1121,20 +1245,19 @@ function generateWizardCommand() {
      keywords: c.keywords.split(',').map(k => k.trim()).filter(k => k)
    }));

    const params = JSON.stringify({ configs: configJson, prompt: '$CLAUDE_PROMPT' });
    return `ccw tool exec skill_context_loader '${params}'`;
    // Use node + spawnSync for cross-platform JSON handling
    const paramsObj = { configs: configJson, prompt: '${p.user_prompt}' };
    return `node -e "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify(${JSON.stringify(paramsObj).replace('${p.user_prompt}', "'+p.user_prompt+'")})],{stdio:'inherit'})"`;
  } else {
    // auto mode
    const params = JSON.stringify({ mode: 'auto', prompt: '$CLAUDE_PROMPT' });
    return `ccw tool exec skill_context_loader '${params}'`;
    // auto mode - use node + spawnSync
    return `node -e "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({mode:'auto',prompt:p.user_prompt||''})],{stdio:'inherit'})"`;
  }
}

// Handle memory-update wizard (default)
// Now uses memory_queue for batched updates with configurable threshold/timeout
// The command adds to queue, configuration is applied separately via submitHookWizard
const params = `"{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\"}"`;
return `ccw tool exec memory_queue ${params}`;
// Use node + spawnSync for cross-platform JSON handling
const selectedTool = wizardConfig.tool || 'gemini';
return `node -e "require('child_process').spawnSync(process.platform==='win32'?'cmd':'ccw',process.platform==='win32'?['/c','ccw','tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})]:['tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})],{stdio:'inherit'})"`;
}

async function submitHookWizard() {
@@ -1217,13 +1340,18 @@ async function submitHookWizard() {
  const baseTemplate = HOOK_TEMPLATES[selectedOption.templateId];
  if (!baseTemplate) return;

  const command = generateWizardCommand();

  const hookData = {
    command: 'bash',
    args: ['-c', command]
  // Build hook data with configured values
  let hookData = {
    command: baseTemplate.command,
    args: [...baseTemplate.args]
  };

  // For memory-update wizard, use configured tool in args (cross-platform)
  if (wizard.id === 'memory-update') {
    const selectedTool = wizardConfig.tool || 'gemini';
    hookData.args = ['-e', `require('child_process').spawnSync(process.platform==='win32'?'cmd':'ccw',process.platform==='win32'?['/c','ccw','tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})]:['tool','exec','memory_queue',JSON.stringify({action:'add',path:process.env.CLAUDE_PROJECT_DIR,tool:'${selectedTool}'})],{stdio:'inherit'})`];
  }

  if (baseTemplate.matcher) {
    hookData.matcher = baseTemplate.matcher;
  }
@@ -1232,6 +1360,7 @@ async function submitHookWizard() {

  // For memory-update wizard, also configure queue settings
  if (wizard.id === 'memory-update') {
    const selectedTool = wizardConfig.tool || 'gemini';
    const threshold = wizardConfig.threshold || 5;
    const timeout = wizardConfig.timeout || 300;
    try {
@@ -1242,7 +1371,7 @@ async function submitHookWizard() {
        body: JSON.stringify({ tool: 'memory_queue', params: configParams })
      });
      if (response.ok) {
        showRefreshToast(`Queue configured: threshold=${threshold}, timeout=${timeout}s`, 'success');
        showRefreshToast(`Queue configured: tool=${selectedTool}, threshold=${threshold}, timeout=${timeout}s`, 'success');
      }
    } catch (e) {
      console.warn('Failed to configure memory queue:', e);

@@ -1107,6 +1107,8 @@ const i18n = {
  'hook.wizard.memoryUpdateDesc': 'Queue-based CLAUDE.md updates with configurable threshold and timeout',
  'hook.wizard.queueBasedUpdate': 'Queue-Based Update',
  'hook.wizard.queueBasedUpdateDesc': 'Batch updates when threshold reached or timeout expires',
  'hook.wizard.cliTool': 'CLI Tool',
  'hook.wizard.cliToolDesc': 'CLI tool for CLAUDE.md generation',
  'hook.wizard.thresholdPaths': 'Threshold (paths)',
  'hook.wizard.thresholdPathsDesc': 'Number of paths to trigger batch update',
  'hook.wizard.timeoutSeconds': 'Timeout (seconds)',
@@ -1283,6 +1285,54 @@ const i18n = {
  'multiCli.toolbar.noTasks': 'No tasks available',
  'multiCli.toolbar.scrollToTask': 'Click to scroll to task',

  // Context Tab
  'multiCli.context.taskDescription': 'Task Description',
  'multiCli.context.constraints': 'Constraints',
  'multiCli.context.focusPaths': 'Focus Paths',
  'multiCli.context.relevantFiles': 'Relevant Files',
  'multiCli.context.dependencies': 'Dependencies',
  'multiCli.context.conflictRisks': 'Conflict Risks',
  'multiCli.context.sessionId': 'Session ID',
  'multiCli.context.rawJson': 'Raw JSON',

  // Summary Tab
  'multiCli.summary.title': 'Summary',
  'multiCli.summary.convergence': 'Convergence',
  'multiCli.summary.solutions': 'Solutions',
  'multiCli.summary.solution': 'Solution',

  // Task Overview
  'multiCli.task.description': 'Description',
  'multiCli.task.keyPoint': 'Key Point',
  'multiCli.task.scope': 'Scope',
  'multiCli.task.dependencies': 'Dependencies',
  'multiCli.task.targetFiles': 'Target Files',
  'multiCli.task.acceptanceCriteria': 'Acceptance Criteria',
  'multiCli.task.reference': 'Reference',
  'multiCli.task.pattern': 'PATTERN',
  'multiCli.task.files': 'FILES',
  'multiCli.task.examples': 'EXAMPLES',
  'multiCli.task.noOverviewData': 'No overview data available',

  // Task Implementation
  'multiCli.task.implementationSteps': 'Implementation Steps',
  'multiCli.task.modificationPoints': 'Modification Points',
  'multiCli.task.verification': 'Verification',
  'multiCli.task.noImplementationData': 'No implementation details available',
  'multiCli.task.noFilesSpecified': 'No files specified',

  // Discussion Tab
  'multiCli.discussion.title': 'Discussion',
  'multiCli.discussion.discussionTopic': 'Discussion Topic',
  'multiCli.solutions': 'Solutions',
  'multiCli.decision': 'Decision',

  // Plan
  'multiCli.plan.objective': 'Objective',
  'multiCli.plan.solution': 'Solution',
  'multiCli.plan.approach': 'Approach',
  'multiCli.plan.risk': 'risk',

  // Modals
  'modal.contentPreview': 'Content Preview',
  'modal.raw': 'Raw',
@@ -2219,6 +2269,25 @@ const i18n = {
  'issues.queueCommandInfo': 'After running the command, click "Refresh" to see the updated queue.',
  'issues.alternative': 'Alternative',
  'issues.refreshAfter': 'Refresh Queue',
  'issues.activate': 'Activate',
  'issues.deactivate': 'Deactivate',
  'issues.queueActivated': 'Queue activated',
  'issues.queueDeactivated': 'Queue deactivated',
  'issues.deleteQueue': 'Delete queue',
  'issues.confirmDeleteQueue': 'Are you sure you want to delete this queue? This action cannot be undone.',
  'issues.queueDeleted': 'Queue deleted successfully',
  'issues.actions': 'Actions',
  'issues.archive': 'Archive',
  'issues.delete': 'Delete',
  'issues.confirmDeleteIssue': 'Are you sure you want to delete this issue? This action cannot be undone.',
  'issues.confirmArchiveIssue': 'Archive this issue? It will be moved to history.',
  'issues.issueDeleted': 'Issue deleted successfully',
  'issues.issueArchived': 'Issue archived successfully',
  'issues.executionQueues': 'Execution Queues',
  'issues.queues': 'queues',
  'issues.noQueues': 'No queues found',
  'issues.queueEmptyHint': 'Generate execution queue from bound solutions',
  'issues.refresh': 'Refresh',
  // issue.* keys (legacy)
  'issue.viewIssues': 'Issues',
  'issue.viewQueue': 'Queue',
@@ -3347,6 +3416,8 @@ const i18n = {
'hook.wizard.memoryUpdateDesc': '基于队列的 CLAUDE.md 更新,支持阈值和超时配置',
'hook.wizard.queueBasedUpdate': '队列批量更新',
'hook.wizard.queueBasedUpdateDesc': '达到路径数量阈值或超时时批量更新',
'hook.wizard.cliTool': 'CLI 工具',
'hook.wizard.cliToolDesc': '用于生成 CLAUDE.md 的 CLI 工具',
'hook.wizard.thresholdPaths': '阈值(路径数)',
'hook.wizard.thresholdPathsDesc': '触发批量更新的路径数量',
'hook.wizard.timeoutSeconds': '超时(秒)',
@@ -3523,6 +3594,54 @@ const i18n = {
'multiCli.toolbar.noTasks': '暂无任务',
'multiCli.toolbar.scrollToTask': '点击定位到任务',

// Context Tab
'multiCli.context.taskDescription': '任务描述',
'multiCli.context.constraints': '约束条件',
'multiCli.context.focusPaths': '焦点路径',
'multiCli.context.relevantFiles': '相关文件',
'multiCli.context.dependencies': '依赖项',
'multiCli.context.conflictRisks': '冲突风险',
'multiCli.context.sessionId': '会话ID',
'multiCli.context.rawJson': '原始JSON',

// Summary Tab
'multiCli.summary.title': '摘要',
'multiCli.summary.convergence': '收敛状态',
'multiCli.summary.solutions': '解决方案',
'multiCli.summary.solution': '方案',

// Task Overview
'multiCli.task.description': '描述',
'multiCli.task.keyPoint': '关键点',
'multiCli.task.scope': '范围',
'multiCli.task.dependencies': '依赖项',
'multiCli.task.targetFiles': '目标文件',
'multiCli.task.acceptanceCriteria': '验收标准',
'multiCli.task.reference': '参考资料',
'multiCli.task.pattern': '模式',
'multiCli.task.files': '文件',
'multiCli.task.examples': '示例',
'multiCli.task.noOverviewData': '无概览数据',

// Task Implementation
'multiCli.task.implementationSteps': '实现步骤',
'multiCli.task.modificationPoints': '修改点',
'multiCli.task.verification': '验证',
'multiCli.task.noImplementationData': '无实现详情',
'multiCli.task.noFilesSpecified': '未指定文件',

// Discussion Tab
'multiCli.discussion.title': '讨论',
'multiCli.discussion.discussionTopic': '讨论主题',
'multiCli.solutions': '解决方案',
'multiCli.decision': '决策',

// Plan
'multiCli.plan.objective': '目标',
'multiCli.plan.solution': '解决方案',
'multiCli.plan.approach': '实现方式',
'multiCli.plan.risk': '风险',

// Modals
'modal.contentPreview': '内容预览',
'modal.raw': '原始',
@@ -4492,6 +4611,25 @@ const i18n = {
'issues.queueCommandInfo': '运行命令后,点击"刷新"查看更新后的队列。',
'issues.alternative': '或者',
'issues.refreshAfter': '刷新队列',
'issues.activate': '激活',
'issues.deactivate': '取消激活',
'issues.queueActivated': '队列已激活',
'issues.queueDeactivated': '队列已取消激活',
'issues.deleteQueue': '删除队列',
'issues.confirmDeleteQueue': '确定要删除此队列吗?此操作无法撤销。',
'issues.queueDeleted': '队列删除成功',
'issues.actions': '操作',
'issues.archive': '归档',
'issues.delete': '删除',
'issues.confirmDeleteIssue': '确定要删除此议题吗?此操作无法撤销。',
'issues.confirmArchiveIssue': '归档此议题?它将被移动到历史记录中。',
'issues.issueDeleted': '议题删除成功',
'issues.issueArchived': '议题归档成功',
'issues.executionQueues': '执行队列',
'issues.queues': '个队列',
'issues.noQueues': '暂无队列',
'issues.queueEmptyHint': '从绑定的解决方案生成执行队列',
'issues.refresh': '刷新',
// issue.* keys (legacy)
'issue.viewIssues': '议题',
'issue.viewQueue': '队列',
@@ -398,6 +398,7 @@ async function updateCliToolConfig(tool, updates) {
  // Invalidate cache to ensure fresh data on page refresh
  if (window.cacheManager) {
    window.cacheManager.invalidate('cli-config');
    window.cacheManager.invalidate('cli-tools-config');
  }
}
return data;
@@ -6381,12 +6381,12 @@ async function showWatcherControlModal() {
// Get first indexed project path as default
let defaultPath = '';
if (indexes.success && indexes.projects && indexes.projects.length > 0) {
  // Sort by last_indexed desc and pick the most recent
  const sorted = indexes.projects.sort((a, b) =>
    new Date(b.last_indexed || 0) - new Date(a.last_indexed || 0)
if (indexes.success && indexes.indexes && indexes.indexes.length > 0) {
  // Sort by lastModified desc and pick the most recent
  const sorted = indexes.indexes.sort((a, b) =>
    new Date(b.lastModified || 0) - new Date(a.lastModified || 0)
  );
  defaultPath = sorted[0].source_root || '';
  defaultPath = sorted[0].path || '';
}

const modalHtml = buildWatcherControlContent(status, defaultPath);
@@ -524,16 +524,32 @@ async function installHookTemplate(templateId, scope) {
return;
}

const hookData = {
  command: template.command,
  args: template.args
};

if (template.matcher) {
  hookData.matcher = template.matcher;
// Platform compatibility check
const compatibility = PlatformUtils.checkCompatibility(template);
if (compatibility.issues.length > 0) {
  const warnings = compatibility.issues.filter(i => i.level === 'warning');
  if (warnings.length > 0) {
    const platform = PlatformUtils.detect();
    const warningMsg = warnings.map(w => w.message).join('; ');
    console.warn(`[Hook Install] Platform: ${platform}, Warnings: ${warningMsg}`);
    // Show warning but continue installation
    showRefreshToast(`Warning: ${warningMsg}`, 'warning', 5000);
  }
}

await saveHook(scope, template.event, hookData);
// Get platform-specific variant if available
const adaptedTemplate = PlatformUtils.getVariant(template);

const hookData = {
  command: adaptedTemplate.command,
  args: adaptedTemplate.args
};

if (adaptedTemplate.matcher) {
  hookData.matcher = adaptedTemplate.matcher;
}

await saveHook(scope, adaptedTemplate.event, hookData);
}

async function uninstallHookTemplate(templateId) {

File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -956,15 +956,13 @@ function renderSkillFileModal() {
</div>

<!-- Content -->
<div class="flex-1 overflow-hidden p-4">
<div class="flex-1 min-h-0 overflow-auto p-4">
${isEditing ? `
  <textarea id="skillFileContent"
    class="w-full h-full min-h-[400px] px-4 py-3 bg-background border border-border rounded-lg text-sm font-mono focus:outline-none focus:ring-2 focus:ring-primary resize-none"
    spellcheck="false">${escapeHtml(content)}</textarea>
` : `
  <div class="w-full h-full min-h-[400px] overflow-auto">
    <pre class="px-4 py-3 bg-muted/30 rounded-lg text-sm font-mono whitespace-pre-wrap break-words">${escapeHtml(content)}</pre>
  </div>
  <pre class="px-4 py-3 bg-muted/30 rounded-lg text-sm font-mono whitespace-pre-wrap break-words">${escapeHtml(content)}</pre>
`}
</div>

@@ -160,7 +160,7 @@ interface ClaudeWithSettingsParams {
prompt: string;
settingsPath: string;
endpointId: string;
mode: 'analysis' | 'write' | 'auto';
mode: 'analysis' | 'write' | 'auto' | 'review';
workingDir: string;
cd?: string;
includeDirs?: string[];
@@ -351,12 +351,12 @@ type BuiltinCliTool = typeof BUILTIN_CLI_TOOLS[number];
const ParamsSchema = z.object({
  tool: z.string().min(1, 'Tool is required'), // Accept any tool ID (built-in or custom endpoint)
  prompt: z.string().min(1, 'Prompt is required'),
  mode: z.enum(['analysis', 'write', 'auto']).default('analysis'),
  mode: z.enum(['analysis', 'write', 'auto', 'review']).default('analysis'),
  format: z.enum(['plain', 'yaml', 'json']).default('plain'), // Multi-turn prompt concatenation format
  model: z.string().optional(),
  cd: z.string().optional(),
  includeDirs: z.string().optional(),
  timeout: z.number().default(0), // 0 = no internal timeout, controlled by external caller (e.g., bash timeout)
  // timeout removed - controlled by external caller (bash timeout)
  resume: z.union([z.boolean(), z.string()]).optional(), // true = last, string = single ID or comma-separated IDs
  id: z.string().optional(), // Custom execution ID (e.g., IMPL-001-step1)
  noNative: z.boolean().optional(), // Force prompt concatenation instead of native resume
@@ -388,7 +388,7 @@ async function executeCliTool(
throw new Error(`Invalid params: ${parsed.error.message}`);
}

const { tool, prompt, mode, format, model, cd, includeDirs, timeout, resume, id: customId, noNative, category, parentExecutionId, outputFormat } = parsed.data;
const { tool, prompt, mode, format, model, cd, includeDirs, resume, id: customId, noNative, category, parentExecutionId, outputFormat } = parsed.data;

// Validate and determine working directory early (needed for conversation lookup)
let workingDir: string;
@@ -862,7 +862,6 @@ async function executeCliTool(
let stdout = '';
let stderr = '';
let timedOut = false;

// Handle stdout
child.stdout!.on('data', (data: Buffer) => {
@@ -924,18 +923,14 @@ async function executeCliTool(
debugLog('CLOSE', `Process closed`, {
  exitCode: code,
  duration: `${duration}ms`,
  timedOut,
  stdoutLength: stdout.length,
  stderrLength: stderr.length,
  outputUnitsCount: allOutputUnits.length
});

// Determine status - prioritize output content over exit code
let status: 'success' | 'error' | 'timeout' = 'success';
if (timedOut) {
  status = 'timeout';
  debugLog('STATUS', `Execution timed out after ${duration}ms`);
} else if (code !== 0) {
let status: 'success' | 'error' = 'success';
if (code !== 0) {
  // Non-zero exit code doesn't always mean failure
  // Check if there's valid output (AI response) - treat as success
  const hasValidOutput = stdout.trim().length > 0;
|
reject(new Error(`Failed to spawn ${tool}: ${error.message}\n Command: ${command} ${args.join(' ')}\n Working Dir: ${workingDir}`));
});

// Timeout handling (timeout=0 disables internal timeout, controlled by external caller)
let timeoutId: NodeJS.Timeout | null = null;
if (timeout > 0) {
  timeoutId = setTimeout(() => {
    timedOut = true;
    child.kill('SIGTERM');
    setTimeout(() => {
      if (!child.killed) {
        child.kill('SIGKILL');
      }
    }, 5000);
  }, timeout);
}

child.on('close', () => {
  if (timeoutId) {
    clearTimeout(timeoutId);
  }
});
// Timeout controlled by external caller (bash timeout)
// When parent process terminates, child will be cleaned up via process exit handler
});
}
@@ -1198,7 +1176,8 @@ export const schema: ToolSchema = {
Modes:
- analysis: Read-only operations (default)
- write: File modifications allowed
- auto: Full autonomous operations (codex only)`,
- auto: Full autonomous operations (codex only)
- review: Code review mode (codex uses 'codex review' subcommand, others accept but no operation change)`,
inputSchema: {
  type: 'object',
  properties: {
@@ -1213,8 +1192,8 @@ Modes:
},
mode: {
  type: 'string',
  enum: ['analysis', 'write', 'auto'],
  description: 'Execution mode (default: analysis)',
  enum: ['analysis', 'write', 'auto', 'review'],
  description: 'Execution mode (default: analysis). review mode uses codex review subcommand for codex tool.',
  default: 'analysis'
},
model: {
@@ -1228,12 +1207,8 @@ Modes:
includeDirs: {
  type: 'string',
  description: 'Additional directories (comma-separated). Maps to --include-directories for gemini/qwen, --add-dir for codex'
},
timeout: {
  type: 'number',
  description: 'Timeout in milliseconds (default: 0 = disabled, controlled by external caller)',
  default: 0
}
// timeout removed - controlled by external caller (bash timeout)
},
required: ['tool', 'prompt']
}
@@ -223,7 +223,21 @@ export function buildCommand(params: {
case 'codex':
  useStdin = true;
  if (nativeResume?.enabled) {
  if (mode === 'review') {
    // codex review mode: non-interactive code review
    // Format: codex review [OPTIONS] [PROMPT]
    args.push('review');
    // Default to --uncommitted if no specific review target in prompt
    args.push('--uncommitted');
    if (model) {
      args.push('-m', model);
    }
    // codex review uses positional prompt argument, not stdin
    useStdin = false;
    if (prompt) {
      args.push(prompt);
    }
  } else if (nativeResume?.enabled) {
    args.push('resume');
    if (nativeResume.isLatest) {
      args.push('--last');
@@ -391,11 +391,7 @@ async function execute(params) {
if (timeoutCheck.flushed) {
  // Queue was flushed due to timeout, add to fresh queue
  const result = addToQueue(path, { tool, strategy });
  return {
    ...result,
    timeoutFlushed: true,
    flushResult: timeoutCheck.result
  };
  return `[MemoryQueue] Timeout flush (${timeoutCheck.result.processed} items) → ${result.message}`;
}

const addResult = addToQueue(path, { tool, strategy });
@@ -403,14 +399,12 @@ async function execute(params) {
// Auto-flush if threshold reached
if (addResult.willFlush) {
  const flushResult = await flushQueue();
  return {
    ...addResult,
    flushed: true,
    flushResult
  };
  // Return string for hook-friendly output
  return `[MemoryQueue] ${addResult.message} → Flushed ${flushResult.processed} items`;
}

return addResult;
// Return string for hook-friendly output
return `[MemoryQueue] ${addResult.message}`;

case 'status':
  // Check timeout first
316
codex-lens/docs/LSP_INTEGRATION_CHECKLIST.md
Normal file
@@ -0,0 +1,316 @@
# codex-lens LSP Integration Execution Checklist

> Generated: 2026-01-15
> Based on: Gemini multi-round deep analysis
> Status: Ready for implementation

---

## Phase 1: LSP Server Foundation (Priority: HIGH)

### 1.1 Create LSP Server Entry Point
- [ ] **Install pygls dependency**
  ```bash
  pip install pygls
  ```
- [ ] **Create `src/codexlens/lsp/__init__.py`**
  - Export: `CodexLensServer`, `start_server`
- [ ] **Create `src/codexlens/lsp/server.py`**
  - Class: `CodexLensServer(LanguageServer)`
  - Initialize: `ChainSearchEngine`, `GlobalSymbolIndex`, `WatcherManager`
  - Lifecycle: Start `WatcherManager` on `initialize` request

### 1.2 Implement Core LSP Handlers
- [ ] **`textDocument/definition`** handler
  - Source: `GlobalSymbolIndex.search()` exact match
  - Reference: `storage/global_index.py:173`
  - Return: `Location(uri, Range)`

- [ ] **`textDocument/completion`** handler
  - Source: `GlobalSymbolIndex.search(prefix_mode=True)`
  - Reference: `storage/global_index.py:173`
  - Return: `CompletionItem[]`

- [ ] **`workspace/symbol`** handler
  - Source: `ChainSearchEngine.search_symbols()`
  - Reference: `search/chain_search.py:618`
  - Return: `SymbolInformation[]`

### 1.3 Wire File Watcher to LSP Events
- [ ] **`workspace/didChangeWatchedFiles`** handler
  - Delegate to: `WatcherManager.process_changes()`
  - Reference: `watcher/manager.py:53`

- [ ] **`textDocument/didSave`** handler
  - Trigger: `IncrementalIndexer` for single file
  - Reference: `watcher/incremental_indexer.py`

### 1.4 Deliverables
- [ ] Unit tests for LSP handlers
- [ ] Integration test: definition lookup
- [ ] Integration test: completion prefix search
- [ ] Benchmark: query latency < 50ms

---
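The handler logic in 1.2 can be sketched independently of the transport: map an exact-match index hit to an LSP `Location`-shaped payload. This is a minimal sketch with an in-memory stand-in for `GlobalSymbolIndex`; the real class and its field names are assumptions here, not the actual codex-lens API.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Symbol:
    name: str
    file_path: str
    line: int       # 0-based start line
    end_line: int   # 0-based end line


class InMemorySymbolIndex:
    """Stand-in for GlobalSymbolIndex: maps symbol name -> symbols."""

    def __init__(self) -> None:
        self._by_name: Dict[str, List[Symbol]] = {}

    def add(self, sym: Symbol) -> None:
        self._by_name.setdefault(sym.name, []).append(sym)

    def search(self, name: str, prefix_mode: bool = False) -> List[Symbol]:
        # prefix_mode backs textDocument/completion; exact match backs definition
        if prefix_mode:
            return [s for k, v in self._by_name.items() if k.startswith(name) for s in v]
        return list(self._by_name.get(name, []))


def definition_locations(index: InMemorySymbolIndex, name: str) -> List[dict]:
    """Map exact-match symbols to LSP-style Location dicts."""
    return [
        {
            "uri": f"file://{s.file_path}",
            "range": {
                "start": {"line": s.line, "character": 0},
                "end": {"line": s.end_line, "character": 0},
            },
        }
        for s in index.search(name)
    ]
```

In a pygls server, a `textDocument/definition` feature handler would resolve the symbol under the cursor and return these locations.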
## Phase 2: Find References Implementation (Priority: MEDIUM)

### 2.1 Create `search_references` Method
- [ ] **Add to `src/codexlens/search/chain_search.py`**
  ```python
  def search_references(
      self,
      symbol_name: str,
      source_path: Path,
      depth: int = -1
  ) -> List[ReferenceResult]:
      """Find all references to a symbol across the project."""
  ```

### 2.2 Implement Parallel Query Orchestration
- [ ] **Collect index paths**
  - Use: `_collect_index_paths()` existing method

- [ ] **Parallel query execution**
  - ThreadPoolExecutor across all `_index.db`
  - SQL: `SELECT * FROM code_relationships WHERE target_qualified_name = ?`
  - Reference: `storage/sqlite_store.py:348`

- [ ] **Result aggregation**
  - Deduplicate by file:line
  - Sort by file path, then line number

### 2.3 LSP Handler
- [ ] **`textDocument/references`** handler
  - Call: `ChainSearchEngine.search_references()`
  - Return: `Location[]`

### 2.4 Deliverables
- [ ] Unit test: single-index reference lookup
- [ ] Integration test: cross-directory references
- [ ] Benchmark: < 200ms for 10+ index files

---
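The fan-out in 2.2 (ThreadPoolExecutor across index databases, then deduplicate and sort) can be sketched with stdlib `sqlite3`. The `source_file`/`source_line` column names below are assumptions for illustration; only `target_qualified_name` and the `code_relationships` table appear in the checklist.

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor
from typing import List, Tuple


def _query_one(db_path: str, qualified_name: str) -> List[Tuple[str, int]]:
    # Each worker opens its own connection (sqlite3 connections are not shared across threads).
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT source_file, source_line FROM code_relationships "
            "WHERE target_qualified_name = ?",
            (qualified_name,),
        ).fetchall()
        return [(r[0], r[1]) for r in rows]
    finally:
        con.close()


def search_references(db_paths: List[str], qualified_name: str) -> List[Tuple[str, int]]:
    """Fan out across index DBs, then deduplicate and sort by (file, line)."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(lambda p: _query_one(p, qualified_name), db_paths))
    seen = set()
    merged: List[Tuple[str, int]] = []
    for chunk in results:
        for ref in chunk:
            if ref not in seen:
                seen.add(ref)
                merged.append(ref)
    return sorted(merged)
```

The same shape would back the `textDocument/references` handler in 2.3, with each `(file, line)` pair mapped to an LSP `Location`.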
## Phase 3: Enhanced Hover Information (Priority: MEDIUM)

### 3.1 Implement Hover Data Extraction
- [ ] **Create `src/codexlens/lsp/hover_provider.py`**
  ```python
  class HoverProvider:
      def get_hover_info(self, symbol: Symbol) -> HoverInfo:
          """Extract hover information for a symbol."""
  ```

### 3.2 Data Sources
- [ ] **Symbol metadata**
  - Source: `GlobalSymbolIndex.search()`
  - Fields: `kind`, `name`, `file_path`, `range`

- [ ] **Source code extraction**
  - Source: `SQLiteStore.files` table
  - Reference: `storage/sqlite_store.py:284`
  - Extract: Lines from `range[0]` to `range[1]`

### 3.3 LSP Handler
- [ ] **`textDocument/hover`** handler
  - Return: `Hover(contents=MarkupContent)`
  - Format: Markdown with code fence

### 3.4 Deliverables
- [ ] Unit test: hover for function/class/variable
- [ ] Integration test: multi-line function signature

---
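The "Markdown with code fence" format in 3.3 can be sketched as a pure function: slice the stored file content by the symbol's range and wrap it in a fence. The exact layout of the hover text is an assumption here.

```python
from typing import List


def format_hover(name: str, kind: str, file_path: str,
                 source_lines: List[str], start: int, end: int) -> str:
    """Render hover contents as Markdown with a fenced code block (assumed layout)."""
    snippet = "\n".join(source_lines[start:end + 1])
    fence = "`" * 3  # built programmatically so the literal fence doesn't break docs
    return f"**{kind}** `{name}` ({file_path})\n\n{fence}python\n{snippet}\n{fence}"
```

The resulting string would go into `MarkupContent(kind=MarkupKind.Markdown, value=...)` on the LSP side.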
## Phase 4: MCP Bridge for Claude Code (Priority: HIGH VALUE)

### 4.1 Define MCP Schema
- [ ] **Create `src/codexlens/mcp/__init__.py`**
- [ ] **Create `src/codexlens/mcp/schema.py`**
  ```python
  @dataclass
  class MCPContext:
      context_type: str
      symbol: Optional[SymbolInfo]
      definition: Optional[str]
      references: List[ReferenceInfo]
      related_symbols: List[SymbolInfo]
      version: str = "1.0"
  ```

### 4.2 Create MCP Provider
- [ ] **Create `src/codexlens/mcp/provider.py`**
  ```python
  class MCPProvider:
      def build_context(
          self,
          symbol_name: str,
          context_type: str = "symbol_explanation"
      ) -> MCPContext:
          """Build structured context for LLM consumption."""
  ```

### 4.3 Context Building Logic
- [ ] **Symbol lookup**
  - Use: `GlobalSymbolIndex.search()`

- [ ] **Definition extraction**
  - Use: `SQLiteStore` file content

- [ ] **References collection**
  - Use: `ChainSearchEngine.search_references()`

- [ ] **Related symbols**
  - Use: `code_relationships` for imports/calls

### 4.4 Hook Integration Points
- [ ] **Document `pre-tool` hook interface**
  ```python
  def pre_tool_hook(action: str, params: dict) -> MCPContext:
      """Called before LLM action to gather context."""
  ```

- [ ] **Document `post-tool` hook interface**
  ```python
  def post_tool_hook(action: str, result: Any) -> None:
      """Called after LSP action for proactive caching."""
  ```

### 4.5 Deliverables
- [ ] MCP schema JSON documentation
- [ ] Unit test: context building
- [ ] Integration test: hook → MCP → JSON output

---
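The 4.1 schema and 4.5 "hook → MCP → JSON output" deliverable can be sketched end to end with dataclasses plus `asdict`. The field sets of `SymbolInfo` and `ReferenceInfo` below are assumptions; the checklist only names the types.

```python
import json
from dataclasses import asdict, dataclass, field
from typing import List, Optional


@dataclass
class SymbolInfo:
    name: str
    kind: str
    file_path: str


@dataclass
class ReferenceInfo:
    file_path: str
    line: int


@dataclass
class MCPContext:
    # Fields without defaults must precede defaulted ones in a dataclass.
    context_type: str
    symbol: Optional[SymbolInfo] = None
    definition: Optional[str] = None
    references: List[ReferenceInfo] = field(default_factory=list)
    related_symbols: List[SymbolInfo] = field(default_factory=list)
    version: str = "1.0"


def build_context_json(symbol: SymbolInfo, definition: str,
                       refs: List[ReferenceInfo]) -> str:
    """Assemble an MCPContext and serialize it for LLM consumption."""
    ctx = MCPContext(context_type="symbol_explanation", symbol=symbol,
                     definition=definition, references=refs)
    return json.dumps(asdict(ctx))
```

A real `MCPProvider.build_context` would fill these fields from the index and reference search instead of taking them as arguments.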
## Phase 5: Advanced Features (Priority: LOW)

### 5.1 Custom LSP Commands
- [ ] **`codexlens/hybridSearch`**
  - Expose: `HybridSearchEngine.search()`
  - Reference: `search/hybrid_search.py`

- [ ] **`codexlens/symbolGraph`**
  - Return: Symbol relationship graph
  - Source: `code_relationships` table

### 5.2 Proactive Context Caching
- [ ] **Implement `post-tool` hook caching**
  - After `go-to-definition`: pre-fetch references
  - Cache TTL: 5 minutes
  - Storage: In-memory LRU

### 5.3 Performance Optimizations
- [ ] **Connection pooling**
  - Reference: `storage/sqlite_store.py` thread-local

- [ ] **Result caching**
  - LRU cache for frequent queries
  - Invalidate on file change

---
## File Structure After Implementation

```
src/codexlens/
├── lsp/                       # NEW
│   ├── __init__.py
│   ├── server.py              # Main LSP server
│   ├── handlers.py            # LSP request handlers
│   ├── hover_provider.py      # Hover information
│   └── utils.py               # LSP utilities
│
├── mcp/                       # NEW
│   ├── __init__.py
│   ├── schema.py              # MCP data models
│   ├── provider.py            # Context builder
│   └── hooks.py               # Hook interfaces
│
├── search/
│   ├── chain_search.py        # MODIFY: add search_references()
│   └── ...
│
└── ...
```

---
## Dependencies to Add

```toml
# pyproject.toml
[project.optional-dependencies]
lsp = [
    "pygls>=1.3.0",
]
```

---
## Testing Strategy

### Unit Tests
```
tests/
├── lsp/
│   ├── test_definition.py
│   ├── test_completion.py
│   ├── test_references.py
│   └── test_hover.py
│
└── mcp/
    ├── test_schema.py
    └── test_provider.py
```

### Integration Tests
- [ ] Full LSP handshake test
- [ ] Multi-file project navigation
- [ ] Incremental index update via didSave

### Performance Benchmarks
| Operation | Target | Acceptable |
|-----------|--------|------------|
| Definition lookup | < 30ms | < 50ms |
| Completion (100 items) | < 50ms | < 100ms |
| Find references (10 files) | < 150ms | < 200ms |
| Initial indexing (1000 files) | < 60s | < 120s |

---
## Execution Order

```
Week 1: Phase 1.1 → 1.2 → 1.3 → 1.4
Week 2: Phase 2.1 → 2.2 → 2.3 → 2.4
Week 3: Phase 3 + Phase 4.1 → 4.2
Week 4: Phase 4.3 → 4.4 → 4.5
Week 5: Phase 5 (optional) + Polish
```

---
## Quick Start Commands

```bash
# Install LSP dependencies
pip install pygls

# Run LSP server (after implementation)
python -m codexlens.lsp --stdio

# Test LSP connection
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}' | python -m codexlens.lsp --stdio
```

---
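One caveat for the Quick Start connection test: the LSP base protocol frames every JSON-RPC message with a `Content-Length` header and a blank line, so a server that implements the spec strictly may not accept the bare JSON from `echo`. A minimal framing sketch:

```python
import json


def frame(message: dict) -> bytes:
    """Wrap a JSON-RPC message in the LSP base-protocol header."""
    body = json.dumps(message).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body
```

Piping `frame({"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}})` to the server's stdin is the spec-conformant equivalent of the `echo` one-liner.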
## Reference Links

- pygls Documentation: https://pygls.readthedocs.io/
- LSP Specification: https://microsoft.github.io/language-server-protocol/
- codex-lens GlobalSymbolIndex: `storage/global_index.py:173`
- codex-lens ChainSearchEngine: `search/chain_search.py:618`
- codex-lens WatcherManager: `watcher/manager.py:53`
2588
codex-lens/docs/LSP_INTEGRATION_PLAN.md
Normal file
File diff suppressed because it is too large
@@ -3645,6 +3645,84 @@ def index_status(
console.print(f" SPLADE encoder: {'[green]Yes[/green]' if splade_available else f'[red]No[/red] ({splade_err})'}")


# ==================== Index Update Command ====================

@index_app.command("update")
def index_update(
    file_path: Path = typer.Argument(..., exists=True, file_okay=True, dir_okay=False, help="Path to the file to update in the index."),
    json_mode: bool = typer.Option(False, "--json", help="Output JSON response."),
    verbose: bool = typer.Option(False, "--verbose", "-v", help="Enable debug logging."),
) -> None:
    """Update the index for a single file incrementally.

    This is a lightweight command designed for use in hooks (e.g., Claude Code PostToolUse).
    It updates only the specified file without scanning the entire directory.

    The file's parent directory must already be indexed via 'codexlens index init'.

    Examples:
        codexlens index update src/main.py    # Update single file
        codexlens index update ./foo.ts --json    # JSON output for hooks
    """
    _configure_logging(verbose, json_mode)

    from codexlens.watcher.incremental_indexer import IncrementalIndexer

    registry: RegistryStore | None = None
    indexer: IncrementalIndexer | None = None

    try:
        registry = RegistryStore()
        registry.initialize()
        mapper = PathMapper()
        config = Config()

        resolved_path = file_path.resolve()

        # Check if project is indexed
        source_root = mapper.get_project_root(resolved_path)
        if not source_root or not registry.get_project(source_root):
            error_msg = f"Project containing file is not indexed: {file_path}"
            if json_mode:
                print_json(success=False, error=error_msg)
            else:
                console.print(f"[red]Error:[/red] {error_msg}")
                console.print("[dim]Run 'codexlens index init' on the project root first.[/dim]")
            raise typer.Exit(code=1)

        indexer = IncrementalIndexer(registry, mapper, config)
        result = indexer._index_file(resolved_path)

        if result.success:
            if json_mode:
                print_json(success=True, result={
                    "path": str(result.path),
                    "symbols_count": result.symbols_count,
                    "status": "updated",
                })
            else:
                console.print(f"[green]✓[/green] Updated index for [bold]{result.path.name}[/bold] ({result.symbols_count} symbols)")
        else:
            error_msg = result.error or f"Failed to update index for {file_path}"
            if json_mode:
                print_json(success=False, error=error_msg)
            else:
                console.print(f"[red]Error:[/red] {error_msg}")
            raise typer.Exit(code=1)

    except CodexLensError as exc:
        if json_mode:
            print_json(success=False, error=str(exc))
        else:
            console.print(f"[red]Update failed:[/red] {exc}")
        raise typer.Exit(code=1)
    finally:
        if indexer:
            indexer.close()
        if registry:
            registry.close()


# ==================== Index All Command ====================

@index_app.command("all")
435
docs/workflows/ISSUE_LOOP_WORKFLOW.md
Normal file
@@ -0,0 +1,435 @@
# CCW Issue Loop Workflow: Complete Guide

> A two-phase lifecycle design for accumulating issues during project iteration and resolving them in batches

---

## Table of Contents

1. [What Is the Issue Loop Workflow](#what-is-the-issue-loop-workflow)
2. [Core Architecture](#core-architecture)
3. [Two-Phase Lifecycle](#two-phase-lifecycle)
4. [Command Reference](#command-reference)
5. [Use Cases](#use-cases)
6. [Recommended Strategy](#recommended-strategy)
7. [Serial Unattended Execution](#serial-unattended-execution)
8. [Best Practices](#best-practices)

---

## What Is the Issue Loop Workflow

Issue Loop is the batch issue-handling workflow in CCW (Claude Code Workflow), designed for the many issues that accumulate over project iterations. Unlike one-off fixes, Issue Loop follows an **"accumulate → plan → queue → execute"** pattern: issues are discovered in bulk and resolved in a concentrated pass.

### Core Idea

```
Traditional: find issue → fix immediately → find issue → fix immediately → ...
                              ↓
Issue Loop:  accumulate continuously → plan centrally → optimize the queue → execute in batch
```

**Advantages**:
- Avoids frequent context switching
- Conflict detection and dependency ordering
- Parallel execution support
- Complete tracking and auditing

---

## Core Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                      Issue Loop Workflow                        │
├─────────────────────────────────────────────────────────────────┤
│ Phase 1: Accumulation                                           │
│   /issue:discover, /issue:discover-by-prompt, /issue:new        │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2: Batch Resolution                                       │
│   /issue:plan → /issue:queue → /issue:execute                   │
└─────────────────────────────────────────────────────────────────┘
```

### Data Flow

```
issues.jsonl → solutions/<id>.jsonl → queues/<queue-id>.json → execution
     ↓                  ↓                        ↓
Issue records       Solutions       Priority ordering + conflict detection
```

---

## Two-Phase Lifecycle

### Phase 1: Accumulation

During normal project iteration, issues are discovered and recorded continuously:

| Trigger | Command | Notes |
|---------|---------|-------|
| Review after task completion | `/issue:discover` | Automatically analyzes code for potential issues |
| Code review finding | `/issue:new` | Manually create a structured issue |
| Test failure | `/issue:discover-by-prompt` | Create an issue from a description |
| User feedback | `/issue:new` | Record the reported problem manually |

**Issue state transitions**:
```
registered → planned → queued → executing → completed
                                               ↓
                                    issue-history.jsonl
```

### Phase 2: Batch Resolution

Once enough issues have accumulated, resolve them together:

```
Step 1: /issue:plan --all-pending   # Generate solutions for all pending issues
Step 2: /issue:queue                # Build the execution queue (conflict detection + ordering)
Step 3: /issue:execute              # Execute in batch (serial or parallel)
```

---
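The state transitions above form a small linear state machine. A minimal sketch of enforcing them (the transition table mirrors the diagram; the function names are illustrative, not CCW's actual API):

```python
# Legal transitions taken from the lifecycle diagram; 'completed' is terminal
# (completed issues are archived to issue-history.jsonl).
TRANSITIONS = {
    "registered": {"planned"},
    "planned": {"queued"},
    "queued": {"executing"},
    "executing": {"completed"},
    "completed": set(),
}


def advance(state: str, new_state: str) -> str:
    """Return the new state, rejecting any transition not in the table."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

Guarding transitions this way keeps commands like `ccw issue done` from, say, completing an issue that was never queued.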
## Command Reference

### Accumulation-Phase Commands

#### `/issue:new`
Create a structured issue manually:
```bash
ccw issue init <id> --title "Issue title" --priority P2
```

#### `/issue:discover`
Automatically analyze code to discover issues:
```bash
# Multi-perspective analysis using gemini
# Finds: bugs, security issues, performance problems, style violations, etc.
```

#### `/issue:discover-by-prompt`
Create an issue from a description:
```bash
# Provide a problem description; a structured issue is generated automatically
```

### Batch-Resolution-Phase Commands

#### `/issue:plan`
Generate solutions for issues:
```bash
ccw issue plan --all-pending   # Plan all pending issues
ccw issue plan ISS-001         # Plan a single issue
```

**Output**: each issue receives a solution containing:
- Modification points (modification_points)
- Implementation steps (implementation)
- Test requirements (test)
- Acceptance criteria (acceptance)

#### `/issue:queue`
Build the execution queue:
```bash
ccw issue queue            # Create a new queue
ccw issue queue add <id>   # Add to the current queue
ccw issue queue list       # List queue history
```

**Key features**:
- Conflict detection: uses the Gemini CLI to analyze file conflicts between solutions
- Dependency ordering: derives execution order from dependency relationships
- Priority weighting: high-priority issues run first
#### `/issue:execute`
|
||||
执行队列中的解决方案:
|
||||
```bash
|
||||
ccw issue next # 获取下一个待执行解决方案
|
||||
ccw issue done <item_id> # 标记完成
|
||||
ccw issue done <id> --fail # 标记失败
|
||||
```
|
||||
|
||||
### Management Commands

```bash
ccw issue list                 # List active issues
ccw issue status <id>          # Show issue details
ccw issue history              # View completed issues
ccw issue update --from-queue  # Sync status from the queue
```

---

## Usage Scenarios

### Scenario 1: Technical Debt Cleanup After an Iteration

```
1. Finish sprint feature development
2. Run /issue:discover to surface technical debt
3. After a week of accumulation, run /issue:plan --all-pending
4. Build the queue with /issue:queue
5. Batch-process with codex via /issue:execute
```

### Scenario 2: Batch Fixes After Code Review

```
1. Complete the PR code review
2. Run /issue:new for each finding
3. Accumulate all findings from this review
4. Run /issue:plan → /issue:queue → /issue:execute
```

### Scenario 3: Batch Handling of Test Failures

```
1. Run the test suite
2. Run /issue:discover-by-prompt for each failing test
3. Plan all failure fixes in one pass
4. Execute serially to avoid introducing new problems
```

### Scenario 4: Batch Security Vulnerability Fixes

```
1. A security scan finds multiple vulnerabilities
2. Create an issue for each one and mark it P1
3. Let /issue:queue sort automatically by severity
4. Execute the fixes and verify
```

---

## Recommended Strategies

### When to Use the Issue Loop

| Condition | Recommendation |
|-----------|----------------|
| 3 or more problems | Issue Loop |
| Problems span multiple modules | Issue Loop |
| Problems may depend on each other | Issue Loop |
| Conflict detection needed | Issue Loop |
| Single simple bug | `/workflow:lite-fix` |
| Urgent production issue | `/workflow:lite-fix --hotfix` |

### Accumulation Strategy

**Recommended thresholds**:
- Batch-process after accumulating 5-10 issues
- Or process on a fixed schedule (e.g. Friday afternoons)
- Exception: urgent problems are marked P1 immediately and handled individually

### Queue Strategy

```javascript
// Conflict detection rule: solutions that touch the same files must run serially.
// (solutionA/solutionB/queue are illustrative names for queue-builder internals.)
const shared = solutionA.files.filter((f) => solutionB.files.includes(f));
if (shared.length > 0) {
  // File conflict detected: add an ordering dependency so they never run in parallel
  queue.addDependency(solutionA, solutionB);
}

// Priority ordering:
//   1. priority     (P1 > P2 > P3)
//   2. dependencies (depended-upon solutions execute first)
//   3. complexity   (lower complexity executes first)
```
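The ordering rules can be sketched as a runnable comparator (ignoring the dependency pass; the `priority` and `complexity` field names are illustrative assumptions, not the actual queue schema):

```javascript
// Sort queue items by priority (P1 highest), then by complexity (low first).
// Field names are illustrative, not the real ccw queue schema.
function compareSolutions(a, b) {
  const prio = (s) => parseInt(s.priority.slice(1), 10); // "P1" -> 1
  if (prio(a) !== prio(b)) return prio(a) - prio(b);     // P1 before P2
  const rank = { low: 0, medium: 1, high: 2 };
  return rank[a.complexity] - rank[b.complexity];        // low complexity first
}

const queue = [
  { id: "SOL-2", priority: "P2", complexity: "low" },
  { id: "SOL-1", priority: "P1", complexity: "high" },
  { id: "SOL-3", priority: "P1", complexity: "low" },
];
queue.sort(compareSolutions);
console.log(queue.map((s) => s.id).join(",")); // "SOL-3,SOL-1,SOL-2"
```

The dependency pass runs before this sort: any solution that others depend on is pinned ahead of its dependents regardless of priority.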

---

## Serial Unsupervised Execution

**Recommended: use the Codex command for serial, unsupervised execution**:

```bash
codex -p "@.codex/prompts/issue-execute.md"
```

### Execution Flow

```
INIT: ccw issue next
  ↓
WHILE solution exists:
  ├── 1. Parse the solution JSON
  ├── 2. Execute tasks one by one:
  │     ├── IMPLEMENT: implement step by step
  │     ├── TEST: run tests to verify
  │     └── VERIFY: check acceptance criteria
  ├── 3. Commit the code (one commit per solution)
  ├── 4. Report completion: ccw issue done <id>
  └── 5. Fetch the next one: ccw issue next
  ↓
COMPLETE: output the final report
```
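The control flow can be simulated in miniature. This is an illustrative sketch only: `nextSolution` and `markDone` stand in for `ccw issue next` and `ccw issue done <id>`, and the task work is elided.

```javascript
// Simulate the serial loop: fetch next, run its tasks, report done, repeat.
const pending = ["SOL-A", "SOL-B", "SOL-C"];
const completed = [];

function nextSolution() { return pending.shift() ?? null; } // stands in for `ccw issue next`
function markDone(id)   { completed.push(id); }             // stands in for `ccw issue done <id>`

let sol;
while ((sol = nextSolution()) !== null) { // never stop mid-queue
  // ... IMPLEMENT -> TEST -> VERIFY each task, then one commit per solution ...
  markDone(sol);
}
console.log(completed.join(",")); // "SOL-A,SOL-B,SOL-C"
```

The key property is that the loop condition is queue emptiness, not task success: a failed solution is reported with `--fail` and the loop still advances.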

### Worktree Mode (Recommended for Parallel Execution)

```bash
# Create an isolated worktree
codex -p "@.codex/prompts/issue-execute.md --worktree"

# Resume an interrupted run
codex -p "@.codex/prompts/issue-execute.md --worktree /path/to/existing"
```

**Advantages**:
- Parallel executors do not conflict
- The main working directory stays clean
- Easy cleanup after execution
- Supports resuming after interruption

### Execution Rules

1. **Never stop mid-run** - keep executing until the queue is empty
2. **One solution at a time** - finish completely (all tasks + commit + report) before moving on
3. **Serial within a solution** - each task's implement/test/verify steps run in order
4. **Tests must pass** - if any task's tests fail, fix them before continuing
5. **One commit per solution** - all of a solution's tasks share a single commit
6. **Self-verify** - all acceptance criteria must pass
7. **Report accurately** - report completion with `ccw issue done`
8. **Handle failures gracefully** - on failure, report it and continue to the next solution

### Commit Format

```
[commit_type](scope): [solution.description]

## Solution Summary
- **Solution-ID**: SOL-ISS-20251227-001-1
- **Issue-ID**: ISS-20251227-001
- **Risk/Impact/Complexity**: low/medium/low

## Tasks Completed
- [T1] Implement user authentication: Modify src/auth/
- [T2] Add test coverage: Add tests/auth/

## Files Modified
- src/auth/login.ts
- tests/auth/login.test.ts

## Verification
- All unit tests passed
- All acceptance criteria verified
```

---

## Best Practices

### 1. Issue Quality

Write high-quality issue descriptions:

```json
{
  "title": "A clear, concise title",
  "context": {
    "problem": "Concrete problem description",
    "impact": "Scope of impact",
    "reproduction": "Reproduction steps (if applicable)"
  },
  "priority": "P1-P5"
}
```
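A lightweight completeness check against this template might look like the following. This is an illustrative sketch, not part of CCW; the field names mirror the template above.

```javascript
// Check that an issue draft carries the fields from the template above.
function isWellFormed(issue) {
  return Boolean(
    issue.title &&
    issue.context?.problem &&
    issue.context?.impact &&
    /^P[1-5]$/.test(issue.priority ?? "")
  );
}

const good = isWellFormed({
  title: "Login times out after 30s",
  context: { problem: "Session token refresh fails", impact: "All logged-in users" },
  priority: "P1",
});
const bad = isWellFormed({ title: "Vague bug" });
console.log(good, bad); // true false
```

Running such a check before `/issue:plan` keeps low-context issues from producing vague solutions.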

### 2. Solution Review

Review generated solutions before executing them:

```bash
ccw issue status <id>  # Inspect solution details
```

Checkpoints:
- Are the modification points accurate?
- Is test coverage sufficient?
- Are the acceptance criteria verifiable?

### 3. Queue Monitoring

```bash
ccw issue queue       # Check current queue status
ccw issue queue list  # View queue history
```

### 4. Failure Handling

```bash
# A single failure
ccw issue done <id> --fail --reason '{"task_id": "T1", "error": "..."}'

# Retry failed items
ccw issue retry --queue QUE-xxx
```

### 5. History and Auditing

```bash
ccw issue history         # View completed issues
ccw issue history --json  # Export as JSON
```

---

## Workflow Comparison

| Dimension | Issue Loop | lite-fix | coupled |
|-----------|------------|----------|---------|
| **Best for** | Batches of problems | A single bug | Complex features |
| **Problem count** | 3+ | 1 | 1 |
| **Lifecycle** | Two-phase | One-shot | Multi-phase |
| **Conflict detection** | Yes | No | No |
| **Parallel support** | Worktree mode | No | No |
| **Audit trail** | Full | Basic | Full |

---

## Quick Reference

### Full Workflow Commands

```bash
# 1. Accumulation phase
/issue:new       # Create manually
/issue:discover  # Discover automatically

# 2. Planning phase
/issue:plan --all-pending

# 3. Queue phase
/issue:queue

# 4. Execution phase (codex recommended)
codex -p "@.codex/prompts/issue-execute.md"

# Or execute manually
/issue:execute
```

### CLI Cheat Sheet

```bash
ccw issue list                # List issues
ccw issue status <id>         # Show details
ccw issue plan --all-pending  # Plan in batch
ccw issue queue               # Create a queue
ccw issue next                # Fetch the next item
ccw issue done <id>           # Mark complete
ccw issue history             # View history
```

---

## Summary

The Issue Loop workflow is CCW's best option for handling problems in batches. Its two-phase lifecycle enables efficient accumulation followed by concentrated resolution, and combined with Codex's serial unsupervised execution it can significantly raise throughput without sacrificing quality.

**Remember**:
- Accumulate enough issues (5-10) before batch-processing
- Use Codex for serial unsupervised execution
- Use worktree mode for parallel execution
- Keep issue descriptions high quality