Compare commits

..

19 Commits

Author SHA1 Message Date
catlog22
464f3343f3 chore: bump version to 6.3.33 2026-01-16 15:50:32 +08:00
catlog22
bb6cf42df6 fix: update issue execution docs to clarify queue ID requirement and user interaction flow 2026-01-16 15:49:26 +08:00
catlog22
0f0cb7e08e refactor: streamline brainstorm context-overflow protection docs
- conceptual-planning-agent.md: 34 lines → 10 lines (-71%)
- auto-parallel.md: 42 lines → 9 lines (-79%)
- Remove duplicate definitions; the workflow now references the agent's limits
- Drop redundant strategy lists, self-check checklists, and code examples
- Keep the essentials: limit figures, brief strategies, recovery steps
2026-01-16 15:36:59 +08:00
catlog22
39d070eab6 fix: resolve GitHub issues (#50, #54)
- #54: Add API endpoint configuration documentation to DASHBOARD_GUIDE.md
- #50: Add brainstorm context overflow protection with output size limits

Note: #72 and #53 not changed per user feedback - existing behavior is sufficient
(users can configure envFile themselves; default Python version is appropriate)
2026-01-16 15:09:31 +08:00
catlog22
9ccaa7e2fd fix: update CLI tool configuration cache invalidation logic 2026-01-16 14:28:10 +08:00
catlog22
eeb90949ce chore: bump version to 6.3.32
- Fix: Dashboard project overview display issue (#80)
- Refactor: Update project structure to use project-tech.json
2026-01-16 14:09:09 +08:00
catlog22
7b677b20fb fix: update project docs to correct descriptions of project context and the learning-solidification flow 2026-01-16 14:01:27 +08:00
catlog22
e2d56bc08a refactor: update project structure, replacing project.json with project-tech.json and adding new architecture and technology analysis 2026-01-16 13:33:38 +08:00
catlog22
d515090097 feat: add --mode review support for codex CLI
- Add 'review' to mode enum in ParamsSchema and schema
- Implement codex review subcommand in buildCommand (uses --uncommitted by default)
- Other tools (gemini/qwen/claude) accept review mode but no operation change
- Update cli-tools-usage.md with review mode documentation
2026-01-16 13:01:02 +08:00
catlog22
d81dfaf143 fix: add cross-platform support for hook installation (#82)
- Add PlatformUtils module for platform detection (Windows/macOS/Linux)
- Add escapeForShell() for platform-specific shell escaping
- Add checkCompatibility() to warn about incompatible hooks before install
- Add getVariant() to support platform-specific template variants
- Fix node -e commands: use double quotes on Windows, single quotes on Unix
2026-01-16 12:54:56 +08:00
catlog22
d7e5ee44cc fix: adapt help-routes.ts to new command.json structure (fixes #81)
- Replace getIndexDir() with getCommandFilePath() to find command.json
- Update file watcher to monitor command.json instead of index/ directory
- Modify API routes to read from unified command.json structure
- Add buildWorkflowRelationships() to dynamically build workflow data from flow fields
- Add /api/help/agents endpoint for agents list
- Add category merge logic for frontend compatibility (cli includes general)
- Add cli-init command to command.json
2026-01-16 12:46:50 +08:00
catlog22
dde39fc6f5 fix: update post-CLI-call guidance, removing the unnecessary polling advice 2026-01-16 09:40:21 +08:00
catlog22
9b4fdc1868 Refactor code structure for improved readability and maintainability 2026-01-15 22:43:44 +08:00
catlog22
623afc1d35 6.3.31 2026-01-15 22:30:57 +08:00
catlog22
085652560a refactor: remove ccw cli internal timeout parameter; control moves to external bash
- Remove the --timeout command-line option and internal timeout handling
- Process lifecycle now follows the parent (bash) process state
- Simplify the code; timeout control is left to the external caller
2026-01-15 22:30:22 +08:00
catlog22
af4ddb1280 feat: add queue and issue deletion, with support for archiving issues 2026-01-15 19:58:54 +08:00
catlog22
7db659f0e1 feat: enhance issue search and polish the multi-queue card UI
Search enhancements:
- Add debouncing to fix the page freeze caused by rapid typing
- Extend search scope to solution description and approach fields
- Highlight matched keywords in search results
- Add search dropdown suggestions with keyboard navigation

Multi-queue UI:
- Switch the queue expanded-view card layout to CSS Grid
- Add a queue deactivation feature and API endpoint
- Improve status color distribution and stat-card styling
- Add Chinese i18n for the activate/deactivate buttons

Fixes:
- Fix the deactivate 404 caused by a route conflict
- Fix drag-and-drop ordering breaking after async loading
2026-01-15 19:44:44 +08:00
catlog22
ba526ea09e fix: fix the Dashboard overview page failing to display project info
Add an extractStringArray helper to handle mixed array types (string arrays and object arrays) so that loadProjectOverview can correctly process the data structures in project-tech.json.

Fixed fields include:
- languages: object array [{name, file_count, primary}] → string array
- frameworks: string array, kept compatible
- key_components: object array [{name, description, path}] → string array
- layers/patterns: mixed types, kept compatible

Closes #79
2026-01-15 18:58:42 +08:00
catlog22
c308e429f8 feat: add an incremental update command to support single-file index updates 2026-01-15 18:14:51 +08:00
36 changed files with 5569 additions and 323 deletions

View File

@@ -29,9 +29,8 @@ Available CLI endpoints are dynamically defined by the config file:
```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```
- **After CLI call**: Stop immediately - let CLI execute in background, do NOT poll with TaskOutput
- **After CLI call**: Stop immediately - let CLI execute in background
### CLI Analysis Calls
- **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:

View File

@@ -308,3 +308,14 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations
## Output Size Limits
**Per-role limits** (prevent context overflow):
- `analysis.md`: < 3000 words
- `analysis-*.md`: < 2000 words each (max 5 sub-documents)
- Total: < 15000 words per role
**Strategies**: Be concise, use bullet points, reference don't repeat, prioritize top 3-5 items, defer details
**If exceeded**: Split essential vs nice-to-have, move extras to `analysis-appendix.md` (counts toward limit), use executive summary style
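As an illustration, a minimal Node.js sketch of how these limits could be checked before synthesis; the file layout and the whitespace-based word count are assumptions, not part of the agent spec:

```javascript
const fs = require('fs');
const path = require('path');

// Assumed layout: one analysis.md plus analysis-*.md sub-documents per role directory
const LIMITS = { main: 3000, sub: 2000, subCount: 5, total: 15000 };

function wordCount(file) {
  return fs.readFileSync(file, 'utf8').split(/\s+/).filter(Boolean).length;
}

function checkRoleLimits(roleDir) {
  const files = fs.readdirSync(roleDir).filter(f => f.endsWith('.md'));
  const subs = files.filter(f => /^analysis-.+\.md$/.test(f));
  const counts = Object.fromEntries(files.map(f => [f, wordCount(path.join(roleDir, f))]));
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  const violations = [];
  if ((counts['analysis.md'] || 0) > LIMITS.main) violations.push('analysis.md exceeds 3000 words');
  if (subs.length > LIMITS.subCount) violations.push('more than 5 sub-documents');
  for (const f of subs) if (counts[f] > LIMITS.sub) violations.push(`${f} exceeds 2000 words`);
  if (total > LIMITS.total) violations.push('role total exceeds 15000 words');
  return violations; // empty array means the role is within limits
}
```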

View File

@@ -424,6 +424,17 @@ CONTEXT_VARS:
- **Agent execution failure**: Agent-specific retry with minimal dependencies
- **Template loading issues**: Agent handles graceful degradation
- **Synthesis conflicts**: Synthesis highlights disagreements without resolution
- **Context overflow protection**: See below for automatic context management
## Context Overflow Protection
**Per-role limits**: See `conceptual-planning-agent.md` (< 3000 words main, < 2000 words sub-docs, max 5 sub-docs)
**Synthesis protection**: If total analysis > 100KB, synthesis reads only `analysis.md` files (not sub-documents)
**Recovery**: Check logs → reduce scope (--count 2) → use --summary-only → manual synthesis
**Prevention**: Start with --count 3, use structured topic format, review output sizes before synthesis
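A sketch of the synthesis guard described above; the directory layout is an illustrative assumption:

```javascript
const fs = require('fs');
const path = require('path');

// If combined analysis output exceeds 100KB, feed synthesis only the
// top-level analysis.md files and skip the analysis-*.md sub-documents.
function selectSynthesisInputs(roleDirs) {
  const all = roleDirs.flatMap(dir =>
    fs.readdirSync(dir).filter(f => f.endsWith('.md')).map(f => path.join(dir, f)));
  const totalBytes = all.reduce((sum, f) => sum + fs.statSync(f).size, 0);
  if (totalBytes > 100 * 1024) {
    return all.filter(f => path.basename(f) === 'analysis.md');
  }
  return all;
}
```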
## Reference Information

View File

@@ -132,7 +132,7 @@ Scan and analyze workflow session directories:
**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
- Archives: >30 days old + no feature references in project.json
- Archives: >30 days old + no feature references in project-tech.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits
@@ -443,8 +443,8 @@ if (selectedCategories.includes('Sessions')) {
}
}
// Update project.json if features referenced deleted sessions
const projectPath = '.workflow/project.json'
// Update project-tech.json if features referenced deleted sessions
const projectPath = '.workflow/project-tech.json'
if (fileExists(projectPath)) {
const project = JSON.parse(Read(projectPath))
const deletedPaths = new Set(results.deleted)

View File

@@ -108,11 +108,24 @@ Analyze project for workflow initialization and generate .workflow/project-tech.
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
## Task
Generate complete project-tech.json with:
- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
Generate complete project-tech.json following the schema structure:
- project_name: "${projectName}"
- initialized_at: ISO 8601 timestamp
- overview: {
description: "Brief project description",
technology_stack: {
languages: [{name, file_count, primary}],
frameworks: ["string"],
build_tools: ["string"],
test_frameworks: ["string"]
},
architecture: {style, layers: [], patterns: []},
key_components: [{name, path, description, importance}]
}
- features: []
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}
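For orientation, a hypothetical project-tech.json instance following the structure above (all values invented):

```json
{
  "project_name": "my-app",
  "initialized_at": "2026-01-16T13:30:00Z",
  "overview": {
    "description": "CLI workflow orchestrator",
    "technology_stack": {
      "languages": [{ "name": "TypeScript", "file_count": 120, "primary": true }],
      "frameworks": ["Express"],
      "build_tools": ["esbuild"],
      "test_frameworks": ["vitest"]
    },
    "architecture": { "style": "modular-monolith", "layers": ["cli", "routes", "services"], "patterns": ["command"] },
    "key_components": [{ "name": "cli", "path": "src/cli.ts", "description": "Command entry point", "importance": "high" }]
  },
  "features": [],
  "development_index": { "feature": [], "enhancement": [], "bugfix": [], "refactor": [], "docs": [] },
  "statistics": { "total_features": 0, "total_sessions": 0, "last_updated": "2026-01-16T13:30:00Z" },
  "_metadata": { "initialized_by": "cli-explore-agent", "analysis_timestamp": "2026-01-16T13:30:00Z", "analysis_mode": "deep-scan" }
}
```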
## Analysis Requirements
@@ -132,7 +145,7 @@ Generate complete project-tech.json with:
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary
@@ -181,16 +194,16 @@ console.log(`
✓ Project initialized successfully
## Project Overview
Name: ${projectTech.project_metadata.name}
Description: ${projectTech.technology_analysis.description}
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}
### Technology Stack
Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
### Architecture
Style: ${projectTech.technology_analysis.architecture.style}
Components: ${projectTech.technology_analysis.key_components.length} core modules
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules
---
Files created:

View File

@@ -531,11 +531,11 @@ if (hasUnresolvedIssues(reviewResult)) {
**Trigger**: After all executions complete (regardless of code review)
**Skip Condition**: Skip if `.workflow/project.json` does not exist
**Skip Condition**: Skip if `.workflow/project-tech.json` does not exist
**Operations**:
```javascript
const projectJsonPath = '.workflow/project.json'
const projectJsonPath = '.workflow/project-tech.json'
if (!fileExists(projectJsonPath)) return // Silent skip
const projectJson = JSON.parse(Read(projectJsonPath))

View File

@@ -107,13 +107,13 @@ rm -f .workflow/archives/$SESSION_ID/.archiving
Manifest: Updated with N total sessions
```
### Phase 4: Update project.json (Optional)
### Phase 4: Update project-tech.json (Optional)
**Skip if**: `.workflow/project.json` doesn't exist
**Skip if**: `.workflow/project-tech.json` doesn't exist
```bash
# Check
test -f .workflow/project.json || echo "SKIP"
test -f .workflow/project-tech.json || echo "SKIP"
```
**If exists**, add feature entry:
@@ -134,6 +134,32 @@ test -f .workflow/project.json || echo "SKIP"
✓ Feature added to project registry
```
### Phase 5: Ask About Solidify (Always)
After successful archival, prompt user to capture learnings:
```javascript
AskUserQuestion({
questions: [{
question: "Would you like to solidify learnings from this session into project guidelines?",
header: "Solidify",
options: [
{ label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
{ label: "Skip", description: "Archive complete, no learnings to capture" }
],
multiSelect: false
}]
})
```
**If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.
**Output**:
```
Session archived successfully.
→ Run /workflow:session:solidify to capture learnings (recommended)
```
## Error Recovery
| Phase | Symptom | Recovery |
@@ -149,5 +175,6 @@ test -f .workflow/project.json || echo "SKIP"
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
Phase 4: update project.json features array (optional)
Phase 4: update project-tech.json features array (optional)
Phase 5: ask user → solidify learnings (optional)
```

View File

@@ -16,7 +16,7 @@ examples:
Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.
**Dual Responsibility**:
1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
1. **Project-level initialization** (first-time only): Creates `.workflow/project-tech.json` for feature registry
2. **Session-level initialization** (always): Creates session directory structure
## Session Types

View File

@@ -237,7 +237,7 @@ Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
- Read and parse `.workflow/project-tech.json`. Use its `technology_analysis` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
- Read and parse `.workflow/project-tech.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
- Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
- If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
@@ -255,7 +255,7 @@ Execute all discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `technology_analysis` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
3. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
6. Perform conflict detection with risk assessment

View File

@@ -1,7 +1,7 @@
{
"_metadata": {
"version": "2.0.0",
"total_commands": 88,
"total_commands": 45,
"total_agents": 16,
"description": "Unified CCW-Help command index"
},
@@ -485,6 +485,15 @@
"category": "general",
"difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md"
},
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Initialize CLI tool configurations (.gemini/, .qwen/) with technology-aware ignore rules",
"arguments": "[--tool gemini|qwen|all] [--preview] [--output path]",
"category": "cli",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
}
],

View File

@@ -4,7 +4,7 @@
- All replies use Simplified Chinese
- Keep technical terms in English; a Chinese explanation may be added on first occurrence
- Code content (variable names, comments) stays in English
- Code variable names stay in English; comments use Chinese
## Formatting conventions

View File

@@ -0,0 +1,141 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Project Guidelines Schema",
"description": "Schema for project-guidelines.json - user-maintained rules and constraints",
"type": "object",
"required": ["conventions", "constraints", "_metadata"],
"properties": {
"conventions": {
"type": "object",
"description": "Coding conventions and standards",
"required": ["coding_style", "naming_patterns", "file_structure", "documentation"],
"properties": {
"coding_style": {
"type": "array",
"items": { "type": "string" },
"description": "Coding style rules (e.g., 'Use strict TypeScript mode', 'Prefer const over let')"
},
"naming_patterns": {
"type": "array",
"items": { "type": "string" },
"description": "Naming conventions (e.g., 'Use camelCase for variables', 'Use PascalCase for components')"
},
"file_structure": {
"type": "array",
"items": { "type": "string" },
"description": "File organization rules (e.g., 'One component per file', 'Tests alongside source files')"
},
"documentation": {
"type": "array",
"items": { "type": "string" },
"description": "Documentation requirements (e.g., 'JSDoc for public APIs', 'README for each module')"
}
}
},
"constraints": {
"type": "object",
"description": "Technical constraints and boundaries",
"required": ["architecture", "tech_stack", "performance", "security"],
"properties": {
"architecture": {
"type": "array",
"items": { "type": "string" },
"description": "Architecture constraints (e.g., 'No circular dependencies', 'Services must be stateless')"
},
"tech_stack": {
"type": "array",
"items": { "type": "string" },
"description": "Technology constraints (e.g., 'No new dependencies without review', 'Use native fetch over axios')"
},
"performance": {
"type": "array",
"items": { "type": "string" },
"description": "Performance requirements (e.g., 'API response < 200ms', 'Bundle size < 500KB')"
},
"security": {
"type": "array",
"items": { "type": "string" },
"description": "Security requirements (e.g., 'Sanitize all user input', 'No secrets in code')"
}
}
},
"quality_rules": {
"type": "array",
"description": "Enforceable quality rules",
"items": {
"type": "object",
"required": ["rule", "scope"],
"properties": {
"rule": {
"type": "string",
"description": "The quality rule statement"
},
"scope": {
"type": "string",
"description": "Where the rule applies (e.g., 'all', 'src/**', 'tests/**')"
},
"enforced_by": {
"type": "string",
"description": "How the rule is enforced (e.g., 'eslint', 'pre-commit', 'code-review')"
}
}
}
},
"learnings": {
"type": "array",
"description": "Project learnings captured from workflow sessions",
"items": {
"type": "object",
"required": ["date", "insight"],
"properties": {
"date": {
"type": "string",
"format": "date",
"description": "Date the learning was captured (YYYY-MM-DD)"
},
"session_id": {
"type": "string",
"description": "WFS session ID where the learning originated"
},
"insight": {
"type": "string",
"description": "The learning or insight captured"
},
"context": {
"type": "string",
"description": "Additional context about when/why this learning applies"
},
"category": {
"type": "string",
"enum": ["architecture", "performance", "security", "testing", "workflow", "other"],
"description": "Category of the learning"
}
}
}
},
"_metadata": {
"type": "object",
"required": ["created_at", "version"],
"properties": {
"created_at": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of creation"
},
"version": {
"type": "string",
"description": "Schema version (e.g., '1.0.0')"
},
"last_updated": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of last update"
},
"updated_by": {
"type": "string",
"description": "Who/what last updated the file (e.g., 'user', 'workflow:session:solidify')"
}
}
}
}
}
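A minimal document that validates against this schema might look like the following (values are illustrative):

```json
{
  "conventions": {
    "coding_style": ["Use strict TypeScript mode"],
    "naming_patterns": ["Use camelCase for variables"],
    "file_structure": ["Tests alongside source files"],
    "documentation": ["JSDoc for public APIs"]
  },
  "constraints": {
    "architecture": ["No circular dependencies"],
    "tech_stack": ["No new dependencies without review"],
    "performance": ["Bundle size < 500KB"],
    "security": ["No secrets in code"]
  },
  "learnings": [
    { "date": "2026-01-15", "insight": "Debounce search input to avoid UI freezes", "category": "performance" }
  ],
  "_metadata": { "created_at": "2026-01-15T10:00:00Z", "version": "1.0.0" }
}
```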

View File

@@ -1,7 +1,7 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Project Metadata Schema",
"description": "Workflow initialization metadata for project-level context",
"title": "Project Tech Schema",
"description": "Schema for project-tech.json - auto-generated technical analysis (stack, architecture, components)",
"type": "object",
"required": [
"project_name",

View File

@@ -85,11 +85,14 @@ Tools are selected based on **tags** defined in the configuration. Use tags to m
```bash
# Explicit tool selection
ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write>
ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write|review>
# Model override
ccw cli -p "<PROMPT>" --tool <tool-id> --model <model-id> --mode <analysis|write>
# Code review (codex only)
ccw cli -p "<PROMPT>" --tool codex --mode review
# Tag-based auto-selection (future)
ccw cli -p "<PROMPT>" --tags <tag1,tag2> --mode <analysis|write>
```
@@ -330,6 +333,14 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
- Use For: Feature implementation, bug fixes, documentation, code creation, file modifications
- Specification: Requires explicit `--mode write`
- **`review`**
- Permission: Read-only (code review output)
- Use For: Git-aware code review of uncommitted changes, branch diffs, specific commits
- Specification: **codex only** - uses `codex review` subcommand with `--uncommitted` by default
- Tool Behavior:
- `codex`: Executes `codex review --uncommitted [prompt]` for structured code review
- Other tools (gemini/qwen/claude): Accept mode but no operation change (treated as analysis)
### Command Options
- **`--tool <tool>`**
@@ -337,8 +348,9 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
- Default: First enabled tool in config
- **`--mode <mode>`**
- Description: **REQUIRED**: analysis, write
- Description: **REQUIRED**: analysis, write, review
- Default: **NONE** (must specify)
- Note: `review` mode triggers `codex review` subcommand for codex tool only
- **`--model <model>`**
- Description: Model override
@@ -463,6 +475,17 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
" --tool <tool-id> --mode write
```
**Code Review Task** (codex review mode):
```bash
# Review uncommitted changes (default)
ccw cli -p "Focus on security vulnerabilities and error handling" --tool codex --mode review
# Review with custom instructions
ccw cli -p "Check for breaking changes in API contracts and backward compatibility" --tool codex --mode review
```
> **Note**: `--mode review` only triggers special behavior for `codex` tool (uses `codex review --uncommitted`). Other tools accept the mode but execute as standard analysis.
---
### Permission Framework
@@ -472,6 +495,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
**Mode Hierarchy**:
- `analysis`: Read-only, safe for auto-execution
- `write`: Create/Modify/Delete files, full operations - requires explicit `--mode write`
- `review`: Git-aware code review (codex only), read-only output - requires explicit `--mode review`
- **Exception**: User provides clear instructions like "modify", "create", "implement"
---
@@ -502,7 +526,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
### Planning Checklist
- [ ] **Purpose defined** - Clear goal and intent
- [ ] **Mode selected** - `--mode analysis|write`
- [ ] **Mode selected** - `--mode analysis|write|review`
- [ ] **Context gathered** - File references + memory (default `@**/*`)
- [ ] **Directory navigation** - `--cd` and/or `--includeDirs`
- [ ] **Tool selected** - Explicit `--tool` or tag-based auto-selection
@@ -514,5 +538,5 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
1. **Load configuration** - Read `cli-tools.json` for available tools
2. **Match by tags** - Select tool based on task requirements
3. **Validate enabled** - Ensure selected tool is enabled
4. **Execute with mode** - Always specify `--mode analysis|write`
4. **Execute with mode** - Always specify `--mode analysis|write|review`
5. **Fallback gracefully** - Use secondary model or next matching tool on failure

View File

@@ -11,46 +11,21 @@ argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
## Queue ID Requirement (MANDATORY)
**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
**`--queue <queue-id>` parameter is REQUIRED**
### If Queue ID Not Provided
### When Queue ID Not Provided
When `--queue` parameter is missing, you MUST:
1. **List available queues** by running:
```javascript
const result = shell_command({ command: "ccw issue queue list --brief --json" })
```
```
List queues → Output options → Stop and wait for user
```
2. **Parse and display queues** to user:
```
Available Queues:
ID Status Progress Issues
-----------------------------------------------------------
→ QUE-20251215-001 active 3/10 ISS-001, ISS-002
QUE-20251210-002 active 0/5 ISS-003
QUE-20251205-003 completed 8/8 ISS-004
```
**Actions**:
3. **Stop and ask user** to specify which queue to execute:
```javascript
AskUserQuestion({
questions: [{
question: "Which queue would you like to execute?",
header: "Queue",
multiSelect: false,
options: [
// Generate from parsed queue list - only show active/pending queues
{ label: "QUE-20251215-001", description: "active, 3/10 completed, Issues: ISS-001, ISS-002" },
{ label: "QUE-20251210-002", description: "active, 0/5 completed, Issues: ISS-003" }
]
}]
})
```
1. `ccw issue queue list --brief --json` - Fetch queue list
2. Filter active/pending status, output formatted list
3. **Stop execution**, prompt user to rerun with `codex -p "@.codex/prompts/issue-execute.md --queue QUE-xxx"`
4. **After user selection**, continue execution with the selected queue ID.
**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of the wrong queue.
**No auto-selection** - User MUST explicitly specify queue-id
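A sketch of steps 1-2 above; the exact field names in the `--json` output (`id`, `status`, `completed`, `total`) are assumptions:

```javascript
const { execSync } = require('child_process');

// Fetch the queue list, keep only active/pending queues, and print options.
const queues = JSON.parse(execSync('ccw issue queue list --brief --json', { encoding: 'utf8' }));
const selectable = queues.filter(q => q.status === 'active' || q.status === 'pending');
for (const q of selectable) {
  console.log(`${q.id}  ${q.status}  ${q.completed}/${q.total}`);
}
console.log('Rerun with: codex -p "@.codex/prompts/issue-execute.md --queue QUE-xxx"');
```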
## Worktree Mode (Recommended for Parallel Execution)
@@ -147,33 +122,19 @@ codex -p "@.codex/prompts/issue-execute.md --worktree /path/to/existing/worktree
**Completion - User Choice:**
When all solutions are complete, ask user what to do with the worktree branch:
When all solutions are complete, output options and wait for user to specify:
```javascript
AskUserQuestion({
questions: [{
question: "All solutions completed in worktree. What would you like to do with the changes?",
header: "Merge",
multiSelect: false,
options: [
{
label: "Merge to main",
description: "Merge worktree branch into main branch and cleanup"
},
{
label: "Create PR",
description: "Push branch and create a pull request for review"
},
{
label: "Keep branch",
description: "Keep the branch for manual handling, cleanup worktree only"
}
]
}]
})
```
```
All solutions completed in worktree. Choose next action:
1. Merge to main - Merge worktree branch into main and cleanup
2. Create PR - Push branch and create pull request (Recommended for parallel execution)
3. Keep branch - Keep branch for manual handling, cleanup worktree only
Please respond with: 1, 2, or 3
```
**Based on user selection:**
**Based on user response:**
```bash
# Disable cleanup trap before intentional cleanup
@@ -327,9 +288,154 @@ Expected solution structure:
}
```
## Step 2.1: Determine Execution Strategy
After parsing the solution, analyze the issue type and task actions to determine the appropriate execution strategy. The strategy defines additional verification steps and quality gates beyond the basic implement-test-verify cycle.
### Strategy Auto-Matching
**Matching Priority**:
1. Explicit `solution.strategy_type` if provided
2. Infer from `task.action` keywords (Debug, Fix, Feature, Refactor, Test, etc.)
3. Infer from `solution.description` and `task.title` content
4. Default to "standard" if no clear match
**Strategy Types and Matching Keywords**:
| Strategy Type | Match Keywords | Description |
|---------------|----------------|-------------|
| `debug` | Debug, Diagnose, Trace, Investigate | Bug diagnosis with logging and debugging |
| `bugfix` | Fix, Patch, Resolve, Correct | Bug fixing with root cause analysis |
| `feature` | Feature, Add, Implement, Create, Build | New feature development with full testing |
| `refactor` | Refactor, Restructure, Optimize, Cleanup | Code restructuring with behavior preservation |
| `test` | Test, Coverage, E2E, Integration | Test implementation with coverage checks |
| `performance` | Performance, Optimize, Speed, Memory | Performance optimization with benchmarking |
| `security` | Security, Vulnerability, CVE, Audit | Security fixes with vulnerability checks |
| `hotfix` | Hotfix, Urgent, Critical, Emergency | Urgent fixes with minimal changes |
| `documentation` | Documentation, Docs, Comment, README | Documentation updates with example validation |
| `chore` | Chore, Dependency, Config, Maintenance | Maintenance tasks with compatibility checks |
| `standard` | (default) | Standard implementation without extra steps |
### Strategy-Specific Execution Phases
Each strategy extends the basic cycle with additional quality gates:
#### 1. Debug → Reproduce → Instrument → Diagnose → Implement → Test → Verify → Cleanup
```
REPRODUCE → INSTRUMENT → DIAGNOSE → IMPLEMENT → TEST → VERIFY → CLEANUP
```
#### 2. Bugfix → Root Cause → Implement → Test → Edge Cases → Regression → Verify
```
ROOT_CAUSE → IMPLEMENT → TEST → EDGE_CASES → REGRESSION → VERIFY
```
#### 3. Feature → Design Review → Unit Tests → Implement → Integration Tests → Code Review → Docs → Verify
```
DESIGN_REVIEW → UNIT_TESTS → IMPLEMENT → INTEGRATION_TESTS → TEST → CODE_REVIEW → DOCS → VERIFY
```
#### 4. Refactor → Baseline Tests → Implement → Test → Behavior Check → Performance Compare → Verify
```
BASELINE_TESTS → IMPLEMENT → TEST → BEHAVIOR_PRESERVATION → PERFORMANCE_CMP → VERIFY
```
#### 5. Test → Coverage Baseline → Test Design → Implement → Coverage Check → Verify
```
COVERAGE_BASELINE → TEST_DESIGN → IMPLEMENT → COVERAGE_CHECK → VERIFY
```
#### 6. Performance → Profiling → Bottleneck → Implement → Benchmark → Test → Verify
```
PROFILING → BOTTLENECK → IMPLEMENT → BENCHMARK → TEST → VERIFY
```
#### 7. Security → Vulnerability Scan → Implement → Security Test → Penetration Test → Verify
```
VULNERABILITY_SCAN → IMPLEMENT → SECURITY_TEST → PENETRATION_TEST → VERIFY
```
#### 8. Hotfix → Impact Assessment → Implement → Test → Quick Verify → Verify
```
IMPACT_ASSESSMENT → IMPLEMENT → TEST → QUICK_VERIFY → VERIFY
```
#### 9. Documentation → Implement → Example Validation → Format Check → Link Validation → Verify
```
IMPLEMENT → EXAMPLE_VALIDATION → FORMAT_CHECK → LINK_VALIDATION → VERIFY
```
#### 10. Chore → Implement → Compatibility Check → Test → Changelog → Verify
```
IMPLEMENT → COMPATIBILITY_CHECK → TEST → CHANGELOG → VERIFY
```
#### 11. Standard → Implement → Test → Verify
```
IMPLEMENT → TEST → VERIFY
```
### Strategy Selection Implementation
**Pseudo-code for strategy matching**:
```javascript
function determineStrategy(solution) {
// Priority 1: Explicit strategy type
if (solution.strategy_type) {
return solution.strategy_type
}
// Priority 2: Infer from task actions
const actions = solution.tasks.map(t => t.action.toLowerCase())
const titles = solution.tasks.map(t => t.title.toLowerCase())
const description = solution.description.toLowerCase()
const allText = [...actions, ...titles, description].join(' ')
// Match keywords (order matters - more specific first)
if (/hotfix|urgent|critical|emergency/.test(allText)) return 'hotfix'
if (/debug|diagnose|trace|investigate/.test(allText)) return 'debug'
if (/security|vulnerability|cve|audit/.test(allText)) return 'security'
if (/performance|optimize|speed|memory|benchmark/.test(allText)) return 'performance'
if (/refactor|restructure|cleanup/.test(allText)) return 'refactor'
if (/test|coverage|e2e|integration/.test(allText)) return 'test'
if (/documentation|docs|comment|readme/.test(allText)) return 'documentation'
if (/chore|dependency|config|maintenance/.test(allText)) return 'chore'
if (/fix|patch|resolve|correct/.test(allText)) return 'bugfix'
if (/feature|add|implement|create|build/.test(allText)) return 'feature'
// Default
return 'standard'
}
```
**Usage in execution flow**:
```javascript
// After parsing solution (Step 2)
const strategy = determineStrategy(solution)
console.log(`Strategy selected: ${strategy}`)
// During task execution (Step 3), follow strategy-specific phases
for (const task of solution.tasks) {
executeTaskWithStrategy(task, strategy)
}
```
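`executeTaskWithStrategy` is not defined in this document; one plausible shape, assuming a per-phase handler `runPhase`, is a lookup from strategy to the phase sequences listed above:

```javascript
// Phase sequences copied from the strategy list above.
const STRATEGY_PHASES = {
  debug: ['REPRODUCE', 'INSTRUMENT', 'DIAGNOSE', 'IMPLEMENT', 'TEST', 'VERIFY', 'CLEANUP'],
  bugfix: ['ROOT_CAUSE', 'IMPLEMENT', 'TEST', 'EDGE_CASES', 'REGRESSION', 'VERIFY'],
  feature: ['DESIGN_REVIEW', 'UNIT_TESTS', 'IMPLEMENT', 'INTEGRATION_TESTS', 'TEST', 'CODE_REVIEW', 'DOCS', 'VERIFY'],
  refactor: ['BASELINE_TESTS', 'IMPLEMENT', 'TEST', 'BEHAVIOR_PRESERVATION', 'PERFORMANCE_CMP', 'VERIFY'],
  test: ['COVERAGE_BASELINE', 'TEST_DESIGN', 'IMPLEMENT', 'COVERAGE_CHECK', 'VERIFY'],
  performance: ['PROFILING', 'BOTTLENECK', 'IMPLEMENT', 'BENCHMARK', 'TEST', 'VERIFY'],
  security: ['VULNERABILITY_SCAN', 'IMPLEMENT', 'SECURITY_TEST', 'PENETRATION_TEST', 'VERIFY'],
  hotfix: ['IMPACT_ASSESSMENT', 'IMPLEMENT', 'TEST', 'QUICK_VERIFY', 'VERIFY'],
  documentation: ['IMPLEMENT', 'EXAMPLE_VALIDATION', 'FORMAT_CHECK', 'LINK_VALIDATION', 'VERIFY'],
  chore: ['IMPLEMENT', 'COMPATIBILITY_CHECK', 'TEST', 'CHANGELOG', 'VERIFY'],
  standard: ['IMPLEMENT', 'TEST', 'VERIFY']
};

function executeTaskWithStrategy(task, strategy) {
  const phases = STRATEGY_PHASES[strategy] || STRATEGY_PHASES.standard;
  for (const phase of phases) {
    runPhase(task, phase); // runPhase: hypothetical handler for a single quality gate
  }
}
```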
## Step 2.5: Initialize Task Tracking
After parsing solution, use `update_plan` to track each task:
After parsing solution and determining strategy, use `update_plan` to track each task:
```javascript
// Initialize plan with all tasks from solution
@@ -503,18 +609,19 @@ EOF
## Solution Committed: [solution_id]
**Commit**: [commit hash]
**Type**: [commit_type]
**Scope**: [scope]
**Type**: [commit_type]([scope])
**Summary**:
[solution.description]
**Changes**:
- [Feature/Fix/Improvement]: [What functionality was added/fixed/improved]
- [Specific change 1]
- [Specific change 2]
**Tasks**: [N] tasks completed
- [x] T1: [task1.title]
- [x] T2: [task2.title]
...
**Files Modified**:
- path/to/file1.ts - [Brief description of changes]
- path/to/file2.ts - [Brief description of changes]
- path/to/file3.ts - [Brief description of changes]
**Files**: [M] files changed
**Solution**: [solution_id] ([N] tasks completed)
```
## Step 4: Report Completion
@@ -629,9 +736,8 @@ When `ccw issue next` returns `{ "status": "empty" }`:
If `--queue` was NOT provided in the command arguments:
1. Run `ccw issue queue list --brief --json`
2. Display available queues to user
3. Ask user to select a queue via `AskUserQuestion`
4. Store selected queue ID for all subsequent commands
2. Filter and display active/pending queues to user
3. **Stop execution**, prompt user to rerun with `--queue QUE-xxx`
**Step 1: Fetch First Solution**

View File

@@ -148,6 +148,36 @@ The CCW Dashboard is a single-page application (SPA); the interface consists of four core sections
- **Model configuration**: configure each tool's primary and secondary model
- **Install/Uninstall**: install or uninstall tools via the wizard
#### API Endpoint Configuration (no CLI installation required)
If you do not have the Gemini/Qwen CLI installed but do have API access (e.g., a reverse-proxy service), you can configure an `api-endpoint` type tool in `~/.claude/cli-tools.json`:
```json
{
"version": "3.2.0",
"tools": {
"gemini-api": {
"enabled": true,
"type": "api-endpoint",
"id": "your-api-id",
"primaryModel": "gemini-2.5-pro",
"secondaryModel": "gemini-2.5-flash",
"tags": ["analysis"]
}
}
}
```
**Configuration notes**:
- `type: "api-endpoint"`: use API calls instead of a CLI
- `id`: API endpoint identifier, used to route requests
- API endpoint tools support **analysis mode** only (read-only); file-write operations are not supported
**Usage example**:
```bash
ccw cli -p "Analyze the code structure" --tool gemini-api --mode analysis
```
#### CodexLens Management
- **Index path**: view and change where the index is stored
- **Index operations**:

View File

@@ -177,7 +177,7 @@ export function run(argv: string[]): void {
.option('--model <model>', 'Model override')
.option('--cd <path>', 'Working directory')
.option('--includeDirs <dirs>', 'Additional directories (--include-directories for gemini/qwen, --add-dir for codex/claude)')
.option('--timeout <ms>', 'Timeout in milliseconds (0=disabled, controlled by external caller)', '0')
// --timeout removed - controlled by external caller (bash timeout)
.option('--stream', 'Enable streaming output (default: non-streaming with caching)')
.option('--limit <n>', 'History limit')
.option('--status <status>', 'Filter by status')

View File

@@ -116,7 +116,7 @@ interface CliExecOptions {
model?: string;
cd?: string;
includeDirs?: string;
timeout?: string;
// timeout removed - controlled by external caller (bash timeout)
stream?: boolean; // Enable streaming (default: false, caches output)
resume?: string | boolean; // true = last, string = execution ID, comma-separated for merge
id?: string; // Custom execution ID (e.g., IMPL-001-step1)
@@ -535,7 +535,7 @@ async function statusAction(debug?: boolean): Promise<void> {
* @param {Object} options - CLI options
*/
async function execAction(positionalPrompt: string | undefined, options: CliExecOptions): Promise<void> {
const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, timeout, stream, resume, id, noNative, cache, injectMode, debug } = options;
const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, stream, resume, id, noNative, cache, injectMode, debug } = options;
// Enable debug mode if --debug flag is set
if (debug) {
@@ -842,7 +842,7 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
model,
cd,
includeDirs,
timeout: timeout ? parseInt(timeout, 10) : 0, // 0 = no internal timeout, controlled by external caller
// timeout removed - controlled by external caller (bash timeout)
resume,
id, // custom execution ID
noNative,
@@ -1216,12 +1216,12 @@ export async function cliCommand(
console.log(chalk.gray(' -f, --file <file> Read prompt from file (recommended for multi-line prompts)'));
console.log(chalk.gray(' -p, --prompt <text> Prompt text (single-line)'));
console.log(chalk.gray(' --tool <tool> Tool: gemini, qwen, codex (default: gemini)'));
console.log(chalk.gray(' --mode <mode> Mode: analysis, write, auto (default: analysis)'));
console.log(chalk.gray(' --mode <mode> Mode: analysis, write, auto, review (default: analysis)'));
console.log(chalk.gray(' -d, --debug Enable debug logging for troubleshooting'));
console.log(chalk.gray(' --model <model> Model override'));
console.log(chalk.gray(' --cd <path> Working directory'));
console.log(chalk.gray(' --includeDirs <dirs> Additional directories'));
console.log(chalk.gray(' --timeout <ms> Timeout (default: 0=disabled)'));
// --timeout removed - controlled by external caller (bash timeout)
console.log(chalk.gray(' --resume [id] Resume previous session'));
console.log(chalk.gray(' --cache <items> Cache: comma-separated @patterns and text'));
console.log(chalk.gray(' --inject-mode <m> Inject mode: none, full, progressive'));
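With the internal `--timeout` removed, callers are expected to bound execution themselves, for example via the coreutils `timeout` wrapper in the documented background-call pattern (the 300-second budget is illustrative):

```javascript
// External timeout control: timeout(1) kills ccw cli after 300 seconds.
Bash({ command: "timeout 300 ccw cli -p '...' --tool gemini --mode analysis", run_in_background: true })
```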

View File

@@ -140,12 +140,25 @@ interface ProjectGuidelines {
};
}
interface Language {
name: string;
file_count: number;
primary: boolean;
}
interface KeyComponent {
name: string;
path: string;
description: string;
importance: 'high' | 'medium' | 'low';
}
interface ProjectOverview {
projectName: string;
description: string;
initializedAt: string | null;
technologyStack: {
languages: string[];
languages: Language[];
frameworks: string[];
build_tools: string[];
test_frameworks: string[];
@@ -155,7 +168,7 @@ interface ProjectOverview {
layers: string[];
patterns: string[];
};
keyComponents: string[];
keyComponents: KeyComponent[];
features: unknown[];
developmentIndex: {
feature: unknown[];
@@ -187,13 +200,12 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
// Initialize cache manager
const cache = createDashboardCache(workflowDir);
// Prepare paths to watch for changes (includes both new dual files and legacy)
// Prepare paths to watch for changes
const watchPaths = [
join(workflowDir, 'active'),
join(workflowDir, 'archives'),
join(workflowDir, 'project-tech.json'),
join(workflowDir, 'project-guidelines.json'),
join(workflowDir, 'project.json'), // Legacy support
...sessions.active.map(s => s.path),
...sessions.archived.map(s => s.path)
];
@@ -266,7 +278,7 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
console.error('Error scanning lite tasks:', (err as Error).message);
}
// Load project overview from project.json
// Load project overview from project-tech.json
try {
data.projectOverview = loadProjectOverview(workflowDir);
} catch (err) {
@@ -553,31 +565,25 @@ function sortTaskIds(a: string, b: string): number {
/**
* Load project overview from project-tech.json and project-guidelines.json
* Supports dual file structure with backward compatibility for legacy project.json
* @param workflowDir - Path to .workflow directory
* @returns Project overview data or null if not found
*/
function loadProjectOverview(workflowDir: string): ProjectOverview | null {
const techFile = join(workflowDir, 'project-tech.json');
const guidelinesFile = join(workflowDir, 'project-guidelines.json');
const legacyFile = join(workflowDir, 'project.json');
// Check for new dual file structure first, fallback to legacy
const useLegacy = !existsSync(techFile) && existsSync(legacyFile);
const projectFile = useLegacy ? legacyFile : techFile;
if (!existsSync(projectFile)) {
console.log(`Project file not found at: ${projectFile}`);
if (!existsSync(techFile)) {
console.log(`Project file not found at: ${techFile}`);
return null;
}
try {
const fileContent = readFileSync(projectFile, 'utf8');
const fileContent = readFileSync(techFile, 'utf8');
const projectData = JSON.parse(fileContent) as Record<string, unknown>;
console.log(`Successfully loaded project overview: ${projectData.project_name || 'Unknown'} (${useLegacy ? 'legacy' : 'tech'})`);
console.log(`Successfully loaded project overview: ${projectData.project_name || 'Unknown'}`);
// Parse tech data (compatible with both legacy and new structure)
// Parse tech data from project-tech.json structure
const overview = projectData.overview as Record<string, unknown> | undefined;
const technologyAnalysis = projectData.technology_analysis as Record<string, unknown> | undefined;
const developmentStatus = projectData.development_status as Record<string, unknown> | undefined;
@@ -589,6 +595,18 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
const statistics = (projectData.statistics || developmentStatus?.statistics) as Record<string, unknown> | undefined;
const metadata = projectData._metadata as Record<string, unknown> | undefined;
// Helper to extract string array from mixed array (handles both string[] and {name: string}[])
const extractStringArray = (arr: unknown[] | undefined): string[] => {
if (!arr) return [];
return arr.map(item => {
if (typeof item === 'string') return item;
if (typeof item === 'object' && item !== null && 'name' in item) {
return String((item as { name: unknown }).name);
}
return String(item);
});
};
// Load guidelines from separate file if exists
let guidelines: ProjectGuidelines | null = null;
if (existsSync(guidelinesFile)) {
@@ -633,17 +651,17 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
description: (overview?.description as string) || '',
initializedAt: (projectData.initialized_at as string) || null,
technologyStack: {
languages: (technologyStack?.languages as string[]) || [],
frameworks: (technologyStack?.frameworks as string[]) || [],
build_tools: (technologyStack?.build_tools as string[]) || [],
test_frameworks: (technologyStack?.test_frameworks as string[]) || []
languages: (technologyStack?.languages as Language[]) || [],
frameworks: extractStringArray(technologyStack?.frameworks),
build_tools: extractStringArray(technologyStack?.build_tools),
test_frameworks: extractStringArray(technologyStack?.test_frameworks)
},
architecture: {
style: (architecture?.style as string) || 'Unknown',
layers: (architecture?.layers as string[]) || [],
patterns: (architecture?.patterns as string[]) || []
layers: extractStringArray(architecture?.layers as unknown[] | undefined),
patterns: extractStringArray(architecture?.patterns as unknown[] | undefined)
},
keyComponents: (overview?.key_components as string[]) || [],
keyComponents: (overview?.key_components as KeyComponent[]) || [],
features: (projectData.features as unknown[]) || [],
developmentIndex: {
feature: (developmentIndex?.feature as unknown[]) || [],
@@ -665,7 +683,7 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
guidelines
};
} catch (err) {
console.error(`Failed to parse project file at ${projectFile}:`, (err as Error).message);
console.error(`Failed to parse project file at ${techFile}:`, (err as Error).message);
console.error('Error stack:', (err as Error).stack);
return null;
}

View File

@@ -8,23 +8,23 @@ import { homedir } from 'os';
import type { RouteContext } from './types.js';
/**
* Get the ccw-help index directory path (pure function)
* Priority: project path (.claude/skills/ccw-help/index) > user path (~/.claude/skills/ccw-help/index)
* Get the ccw-help command.json file path (pure function)
* Priority: project path (.claude/skills/ccw-help/command.json) > user path (~/.claude/skills/ccw-help/command.json)
* @param projectPath - The project path to check first
*/
function getIndexDir(projectPath: string | null): string | null {
function getCommandFilePath(projectPath: string | null): string | null {
// Try project path first
if (projectPath) {
const projectIndexDir = join(projectPath, '.claude', 'skills', 'ccw-help', 'index');
if (existsSync(projectIndexDir)) {
return projectIndexDir;
const projectFilePath = join(projectPath, '.claude', 'skills', 'ccw-help', 'command.json');
if (existsSync(projectFilePath)) {
return projectFilePath;
}
}
// Fall back to user path
const userIndexDir = join(homedir(), '.claude', 'skills', 'ccw-help', 'index');
if (existsSync(userIndexDir)) {
return userIndexDir;
const userFilePath = join(homedir(), '.claude', 'skills', 'ccw-help', 'command.json');
if (existsSync(userFilePath)) {
return userFilePath;
}
return null;
@@ -83,46 +83,48 @@ function invalidateCache(key: string): void {
let watchersInitialized = false;
/**
* Initialize file watchers for JSON indexes
* @param projectPath - The project path to resolve index directory
* Initialize file watcher for command.json
* @param projectPath - The project path to resolve command file
*/
function initializeFileWatchers(projectPath: string | null): void {
if (watchersInitialized) return;
const indexDir = getIndexDir(projectPath);
const commandFilePath = getCommandFilePath(projectPath);
if (!indexDir) {
console.warn(`ccw-help index directory not found in project or user paths`);
if (!commandFilePath) {
console.warn(`ccw-help command.json not found in project or user paths`);
return;
}
try {
// Watch all JSON files in index directory
const watcher = watch(indexDir, { recursive: false }, (eventType, filename) => {
if (!filename || !filename.endsWith('.json')) return;
// Watch the command.json file
const watcher = watch(commandFilePath, (eventType) => {
console.log(`File change detected: ${filename} (${eventType})`);
console.log(`File change detected: command.json (${eventType})`);
// Invalidate relevant cache entries
if (filename === 'all-commands.json') {
invalidateCache('all-commands');
} else if (filename === 'command-relationships.json') {
invalidateCache('command-relationships');
} else if (filename === 'by-category.json') {
invalidateCache('by-category');
}
// Invalidate all cache entries when command.json changes
invalidateCache('command-data');
});
watchersInitialized = true;
(watcher as any).unref?.();
console.log(`File watchers initialized for: ${indexDir}`);
console.log(`File watcher initialized for: ${commandFilePath}`);
} catch (error) {
console.error('Failed to initialize file watchers:', error);
console.error('Failed to initialize file watcher:', error);
}
}
// ========== Helper Functions ==========
/**
* Get command data from command.json (with caching)
*/
function getCommandData(projectPath: string | null): any {
const filePath = getCommandFilePath(projectPath);
if (!filePath) return null;
return getCachedData('command-data', filePath);
}
/**
* Filter commands by search query
*/
@@ -138,6 +140,15 @@ function filterCommands(commands: any[], query: string): any[] {
);
}
/**
* Category merge mapping for frontend compatibility
* Merges additional categories into target category for display
* Format: { targetCategory: [additionalCategoriesToMerge] }
*/
const CATEGORY_MERGES: Record<string, string[]> = {
'cli': ['general'], // CLI tab shows both 'cli' and 'general' commands
};
/**
* Group commands by category with subcategories
*/
@@ -166,9 +177,104 @@ function groupCommandsByCategory(commands: any[]): any {
}
}
// Apply category merges for frontend compatibility
for (const [target, sources] of Object.entries(CATEGORY_MERGES)) {
// Initialize target category if not exists
if (!grouped[target]) {
grouped[target] = {
name: target,
commands: [],
subcategories: {}
};
}
// Merge commands from source categories into target
for (const source of sources) {
if (grouped[source]) {
// Merge direct commands
grouped[target].commands = [
...grouped[target].commands,
...grouped[source].commands
];
// Merge subcategories
for (const [subcat, cmds] of Object.entries(grouped[source].subcategories)) {
if (!grouped[target].subcategories[subcat]) {
grouped[target].subcategories[subcat] = [];
}
grouped[target].subcategories[subcat] = [
...grouped[target].subcategories[subcat],
...(cmds as any[])
];
}
}
}
}
return grouped;
}
/**
* Build workflow relationships from command flow data
*/
function buildWorkflowRelationships(commands: any[]): any {
const relationships: any = {
workflows: [],
dependencies: {},
alternatives: {}
};
for (const cmd of commands) {
if (!cmd.flow) continue;
const cmdName = cmd.command;
// Build next_steps relationships
if (cmd.flow.next_steps) {
if (!relationships.dependencies[cmdName]) {
relationships.dependencies[cmdName] = { next: [], prev: [] };
}
relationships.dependencies[cmdName].next = cmd.flow.next_steps;
// Add reverse relationship
for (const nextCmd of cmd.flow.next_steps) {
if (!relationships.dependencies[nextCmd]) {
relationships.dependencies[nextCmd] = { next: [], prev: [] };
}
if (!relationships.dependencies[nextCmd].prev.includes(cmdName)) {
relationships.dependencies[nextCmd].prev.push(cmdName);
}
}
}
// Build prerequisites relationships
if (cmd.flow.prerequisites) {
if (!relationships.dependencies[cmdName]) {
relationships.dependencies[cmdName] = { next: [], prev: [] };
}
relationships.dependencies[cmdName].prev = [
...new Set([...relationships.dependencies[cmdName].prev, ...cmd.flow.prerequisites])
];
}
// Build alternatives
if (cmd.flow.alternatives) {
relationships.alternatives[cmdName] = cmd.flow.alternatives;
}
// Add to workflows list
if (cmd.category === 'workflow') {
relationships.workflows.push({
name: cmd.name,
command: cmd.command,
description: cmd.description,
flow: cmd.flow
});
}
}
return relationships;
}
// ========== API Routes ==========
/**
@@ -181,25 +287,17 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
// Initialize file watchers on first request
initializeFileWatchers(initialPath);
const indexDir = getIndexDir(initialPath);
// API: Get all commands with optional search
if (pathname === '/api/help/commands') {
if (!indexDir) {
const commandData = getCommandData(initialPath);
if (!commandData) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'ccw-help index directory not found' }));
res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
return true;
}
const searchQuery = url.searchParams.get('q') || '';
const filePath = join(indexDir, 'all-commands.json');
let commands = getCachedData('all-commands', filePath);
if (!commands) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Commands data not found' }));
return true;
}
let commands = commandData.commands || [];
// Filter by search query if provided
if (searchQuery) {
@@ -213,26 +311,24 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
res.end(JSON.stringify({
commands: commands,
grouped: grouped,
total: commands.length
total: commands.length,
essential: commandData.essential_commands || [],
metadata: commandData._metadata
}));
return true;
}
// API: Get workflow command relationships
if (pathname === '/api/help/workflows') {
if (!indexDir) {
const commandData = getCommandData(initialPath);
if (!commandData) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'ccw-help index directory not found' }));
res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
return true;
}
const filePath = join(indexDir, 'command-relationships.json');
const relationships = getCachedData('command-relationships', filePath);
if (!relationships) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Workflow relationships not found' }));
return true;
}
const commands = commandData.commands || [];
const relationships = buildWorkflowRelationships(commands);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(relationships));
@@ -241,22 +337,38 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
// API: Get commands by category
if (pathname === '/api/help/commands/by-category') {
if (!indexDir) {
const commandData = getCommandData(initialPath);
if (!commandData) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'ccw-help index directory not found' }));
res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
return true;
}
const filePath = join(indexDir, 'by-category.json');
const byCategory = getCachedData('by-category', filePath);
if (!byCategory) {
const commands = commandData.commands || [];
const byCategory = groupCommandsByCategory(commands);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
categories: commandData.categories || [],
grouped: byCategory
}));
return true;
}
// API: Get agents list
if (pathname === '/api/help/agents') {
const commandData = getCommandData(initialPath);
if (!commandData) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Category data not found' }));
res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
return true;
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(byCategory));
res.end(JSON.stringify({
agents: commandData.agents || [],
total: (commandData.agents || []).length
}));
return true;
}
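For reference, a hedged sketch of querying the new endpoint; the dashboard port and the shape of each agent entry are assumptions:

```javascript
// GET /api/help/agents returns { agents: [...], total: N } per the handler above.
fetch('http://localhost:3000/api/help/agents')
  .then(res => res.json())
  .then(({ agents, total }) => {
    console.log(`${total} agents available`, agents);
  });
```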

View File

@@ -23,7 +23,7 @@
* - POST /api/queue/reorder - Reorder queue items
*/
import { readFileSync, existsSync, writeFileSync, mkdirSync, unlinkSync } from 'fs';
import { join } from 'path';
import { join, resolve, normalize } from 'path';
import type { RouteContext } from './types.js';
// ========== JSONL Helper Functions ==========
@@ -67,6 +67,12 @@ function readIssueHistoryJsonl(issuesDir: string): any[] {
}
}
function writeIssueHistoryJsonl(issuesDir: string, issues: any[]) {
if (!existsSync(issuesDir)) mkdirSync(issuesDir, { recursive: true });
const historyPath = join(issuesDir, 'issue-history.jsonl');
writeFileSync(historyPath, issues.map(i => JSON.stringify(i)).join('\n'));
}
function writeSolutionsJsonl(issuesDir: string, issueId: string, solutions: any[]) {
const solutionsDir = join(issuesDir, 'solutions');
if (!existsSync(solutionsDir)) mkdirSync(solutionsDir, { recursive: true });
@@ -156,7 +162,30 @@ function writeQueue(issuesDir: string, queue: any) {
function getIssueDetail(issuesDir: string, issueId: string) {
const issues = readIssuesJsonl(issuesDir);
const issue = issues.find(i => i.id === issueId);
let issue = issues.find(i => i.id === issueId);
// Fallback: Reconstruct issue from solution file if issue not in issues.jsonl
if (!issue) {
const solutionPath = join(issuesDir, 'solutions', `${issueId}.jsonl`);
if (existsSync(solutionPath)) {
const solutions = readSolutionsJsonl(issuesDir, issueId);
if (solutions.length > 0) {
const boundSolution = solutions.find(s => s.is_bound) || solutions[0];
issue = {
id: issueId,
title: boundSolution?.description || issueId,
status: 'completed',
priority: 3,
context: boundSolution?.approach || '',
bound_solution_id: boundSolution?.id || null,
created_at: boundSolution?.created_at || new Date().toISOString(),
updated_at: new Date().toISOString(),
_reconstructed: true
};
}
}
}
if (!issue) return null;
const solutions = readSolutionsJsonl(issuesDir, issueId);
@@ -254,11 +283,46 @@ function bindSolutionToIssue(issuesDir: string, issueId: string, solutionId: str
return { success: true, bound: solutionId };
}
// ========== Path Validation ==========
/**
* Validate that the provided path is safe (no path traversal)
* Returns the resolved, normalized path or null if invalid
*/
function validateProjectPath(requestedPath: string, basePath: string): string | null {
if (!requestedPath) return basePath;
// Resolve to absolute path and normalize
const resolvedPath = resolve(normalize(requestedPath));
const resolvedBase = resolve(normalize(basePath));
// For local development tool, we allow any absolute path
// but prevent obvious traversal attempts
if (requestedPath.includes('..') && !resolvedPath.startsWith(resolvedBase)) {
// Check if it's trying to escape with ..
const normalizedRequested = normalize(requestedPath);
if (normalizedRequested.startsWith('..')) {
return null;
}
}
return resolvedPath;
}
// ========== Route Handler ==========
export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
const { pathname, url, req, res, initialPath, handlePostRequest } = ctx;
const projectPath = url.searchParams.get('path') || initialPath;
const rawProjectPath = url.searchParams.get('path') || initialPath;
// Validate project path to prevent path traversal
const projectPath = validateProjectPath(rawProjectPath, initialPath);
if (!projectPath) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Invalid project path' }));
return true;
}
const issuesDir = join(projectPath, '.workflow', 'issues');
// ===== Queue Routes (top-level /api/queue) =====
@@ -295,7 +359,8 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
// GET /api/queue/:id - Get specific queue by ID
const queueDetailMatch = pathname.match(/^\/api\/queue\/([^/]+)$/);
const reservedQueuePaths = ['history', 'reorder', 'switch', 'deactivate', 'merge'];
if (queueDetailMatch && req.method === 'GET' && !reservedQueuePaths.includes(queueDetailMatch[1])) {
const queueId = queueDetailMatch[1];
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
@@ -347,6 +412,29 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// POST /api/queue/deactivate - Deactivate current queue (set active to null)
if (pathname === '/api/queue/deactivate' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const queuesDir = join(issuesDir, 'queues');
const indexPath = join(queuesDir, 'index.json');
try {
const index = existsSync(indexPath)
? JSON.parse(readFileSync(indexPath, 'utf8'))
: { active_queue_id: null, queues: [] };
const previousActiveId = index.active_queue_id;
index.active_queue_id = null;
writeFileSync(indexPath, JSON.stringify(index, null, 2));
return { success: true, previous_active_id: previousActiveId };
} catch (err) {
return { error: 'Failed to deactivate queue' };
}
});
return true;
}
// POST /api/queue/reorder - Reorder queue items (supports both solutions and tasks)
if (pathname === '/api/queue/reorder' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
@@ -399,6 +487,237 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// DELETE /api/queue/:queueId/item/:itemId - Delete item from queue
const queueItemDeleteMatch = pathname.match(/^\/api\/queue\/([^/]+)\/item\/([^/]+)$/);
if (queueItemDeleteMatch && req.method === 'DELETE') {
const queueId = queueItemDeleteMatch[1];
const itemId = decodeURIComponent(queueItemDeleteMatch[2]);
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
if (!existsSync(queueFilePath)) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Queue ${queueId} not found` }));
return true;
}
try {
const queue = JSON.parse(readFileSync(queueFilePath, 'utf8'));
const items = queue.solutions || queue.tasks || [];
const filteredItems = items.filter((item: any) => item.item_id !== itemId);
if (filteredItems.length === items.length) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Item ${itemId} not found in queue` }));
return true;
}
// Update queue items
if (queue.solutions) {
queue.solutions = filteredItems;
} else {
queue.tasks = filteredItems;
}
// Recalculate metadata
const completedCount = filteredItems.filter((i: any) => i.status === 'completed').length;
queue._metadata = {
...queue._metadata,
updated_at: new Date().toISOString(),
...(queue.solutions
? { total_solutions: filteredItems.length, completed_solutions: completedCount }
: { total_tasks: filteredItems.length, completed_tasks: completedCount })
};
writeFileSync(queueFilePath, JSON.stringify(queue, null, 2));
// Update index counts
const indexPath = join(queuesDir, 'index.json');
if (existsSync(indexPath)) {
try {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
const queueEntry = index.queues?.find((q: any) => q.id === queueId);
if (queueEntry) {
if (queue.solutions) {
queueEntry.total_solutions = filteredItems.length;
queueEntry.completed_solutions = completedCount;
} else {
queueEntry.total_tasks = filteredItems.length;
queueEntry.completed_tasks = completedCount;
}
writeFileSync(indexPath, JSON.stringify(index, null, 2));
}
} catch (err) {
console.error('Failed to update queue index:', err);
}
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, queueId, deletedItemId: itemId }));
} catch (err) {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Failed to delete item' }));
}
return true;
}
// DELETE /api/queue/:queueId - Delete entire queue
const queueDeleteMatch = pathname.match(/^\/api\/queue\/([^/]+)$/);
if (queueDeleteMatch && req.method === 'DELETE') {
const queueId = queueDeleteMatch[1];
const queuesDir = join(issuesDir, 'queues');
const queueFilePath = join(queuesDir, `${queueId}.json`);
const indexPath = join(queuesDir, 'index.json');
if (!existsSync(queueFilePath)) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: `Queue ${queueId} not found` }));
return true;
}
try {
// Delete queue file
unlinkSync(queueFilePath);
// Update index
if (existsSync(indexPath)) {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
// Remove from queues array
index.queues = (index.queues || []).filter((q: any) => q.id !== queueId);
// Clear active if this was the active queue
if (index.active_queue_id === queueId) {
index.active_queue_id = null;
}
writeFileSync(indexPath, JSON.stringify(index, null, 2));
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, deletedQueueId: queueId }));
} catch (err) {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Failed to delete queue' }));
}
return true;
}
// POST /api/queue/merge - Merge source queue into target queue
if (pathname === '/api/queue/merge' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const { sourceQueueId, targetQueueId } = body;
if (!sourceQueueId || !targetQueueId) {
return { error: 'sourceQueueId and targetQueueId required' };
}
if (sourceQueueId === targetQueueId) {
return { error: 'Cannot merge queue into itself' };
}
const queuesDir = join(issuesDir, 'queues');
const sourcePath = join(queuesDir, `${sourceQueueId}.json`);
const targetPath = join(queuesDir, `${targetQueueId}.json`);
if (!existsSync(sourcePath)) return { error: `Source queue ${sourceQueueId} not found` };
if (!existsSync(targetPath)) return { error: `Target queue ${targetQueueId} not found` };
try {
const sourceQueue = JSON.parse(readFileSync(sourcePath, 'utf8'));
const targetQueue = JSON.parse(readFileSync(targetPath, 'utf8'));
const sourceItems = sourceQueue.solutions || sourceQueue.tasks || [];
const targetItems = targetQueue.solutions || targetQueue.tasks || [];
const isSolutionBased = !!targetQueue.solutions;
// Re-index source items to avoid ID conflicts
const maxOrder = targetItems.reduce((max: number, i: any) => Math.max(max, i.execution_order || 0), 0);
const reindexedSourceItems = sourceItems.map((item: any, idx: number) => ({
...item,
item_id: `${item.item_id}-merged`,
execution_order: maxOrder + idx + 1,
execution_group: item.execution_group ? `M-${item.execution_group}` : 'M-ungrouped'
}));
// Merge items
const mergedItems = [...targetItems, ...reindexedSourceItems];
if (isSolutionBased) {
targetQueue.solutions = mergedItems;
} else {
targetQueue.tasks = mergedItems;
}
// Merge issue_ids
const mergedIssueIds = [...new Set([
...(targetQueue.issue_ids || []),
...(sourceQueue.issue_ids || [])
])];
targetQueue.issue_ids = mergedIssueIds;
// Update metadata
const completedCount = mergedItems.filter((i: any) => i.status === 'completed').length;
targetQueue._metadata = {
...targetQueue._metadata,
updated_at: new Date().toISOString(),
...(isSolutionBased
? { total_solutions: mergedItems.length, completed_solutions: completedCount }
: { total_tasks: mergedItems.length, completed_tasks: completedCount })
};
// Write merged queue
writeFileSync(targetPath, JSON.stringify(targetQueue, null, 2));
// Update source queue status
sourceQueue.status = 'merged';
sourceQueue._metadata = {
...sourceQueue._metadata,
merged_into: targetQueueId,
merged_at: new Date().toISOString()
};
writeFileSync(sourcePath, JSON.stringify(sourceQueue, null, 2));
// Update index
const indexPath = join(queuesDir, 'index.json');
if (existsSync(indexPath)) {
try {
const index = JSON.parse(readFileSync(indexPath, 'utf8'));
const sourceEntry = index.queues?.find((q: any) => q.id === sourceQueueId);
const targetEntry = index.queues?.find((q: any) => q.id === targetQueueId);
if (sourceEntry) {
sourceEntry.status = 'merged';
}
if (targetEntry) {
if (isSolutionBased) {
targetEntry.total_solutions = mergedItems.length;
targetEntry.completed_solutions = completedCount;
} else {
targetEntry.total_tasks = mergedItems.length;
targetEntry.completed_tasks = completedCount;
}
targetEntry.issue_ids = mergedIssueIds;
}
writeFileSync(indexPath, JSON.stringify(index, null, 2));
} catch {
// Ignore index update errors
}
}
return {
success: true,
sourceQueueId,
targetQueueId,
mergedItemCount: sourceItems.length,
totalItems: mergedItems.length
};
} catch (err) {
return { error: 'Failed to merge queues' };
}
});
return true;
}
// Legacy: GET /api/issues/queue (backward compat)
if (pathname === '/api/issues/queue' && req.method === 'GET') {
const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
@@ -546,6 +865,39 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// POST /api/issues/:id/archive - Archive issue (move to history)
const archiveMatch = pathname.match(/^\/api\/issues\/([^/]+)\/archive$/);
if (archiveMatch && req.method === 'POST') {
const issueId = decodeURIComponent(archiveMatch[1]);
const issues = readIssuesJsonl(issuesDir);
const issueIndex = issues.findIndex(i => i.id === issueId);
if (issueIndex === -1) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Issue not found' }));
return true;
}
// Get the issue and add archive metadata
const issue = issues[issueIndex];
issue.archived_at = new Date().toISOString();
issue.status = 'completed';
// Move to history
const history = readIssueHistoryJsonl(issuesDir);
history.push(issue);
writeIssueHistoryJsonl(issuesDir, history);
// Remove from active issues
issues.splice(issueIndex, 1);
writeIssuesJsonl(issuesDir, issues);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true, issueId, archivedAt: issue.archived_at }));
return true;
}
// POST /api/issues/:id/solutions - Add solution
const addSolMatch = pathname.match(/^\/api\/issues\/([^/]+)\/solutions$/);
if (addSolMatch && req.method === 'POST') {


@@ -429,14 +429,16 @@
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
overflow: hidden;
margin-bottom: 1rem;
box-shadow: 0 1px 3px hsl(var(--foreground) / 0.04);
}
.queue-group-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 0.875rem 1.25rem;
background: hsl(var(--muted) / 0.3);
border-bottom: 1px solid hsl(var(--border));
}
@@ -1256,6 +1258,68 @@
color: hsl(var(--destructive));
}
/* Search Highlight */
.search-highlight {
background: hsl(45 93% 47% / 0.3);
color: inherit;
padding: 0 2px;
border-radius: 2px;
font-weight: 500;
}
/* Search Suggestions Dropdown */
.search-suggestions {
position: absolute;
top: 100%;
left: 0;
right: 0;
margin-top: 0.25rem;
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.5rem;
box-shadow: 0 4px 12px hsl(var(--foreground) / 0.1);
max-height: 300px;
overflow-y: auto;
z-index: 50;
display: none;
}
.search-suggestions.show {
display: block;
}
.search-suggestion-item {
padding: 0.625rem 0.875rem;
cursor: pointer;
border-bottom: 1px solid hsl(var(--border) / 0.5);
transition: background 0.15s ease;
}
.search-suggestion-item:hover,
.search-suggestion-item.selected {
background: hsl(var(--muted));
}
.search-suggestion-item:last-child {
border-bottom: none;
}
.suggestion-id {
font-family: var(--font-mono);
font-size: 0.7rem;
color: hsl(var(--muted-foreground));
margin-bottom: 0.125rem;
}
.suggestion-title {
font-size: 0.8125rem;
color: hsl(var(--foreground));
line-height: 1.3;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
/* ==========================================
CREATE BUTTON
========================================== */
@@ -1780,61 +1844,147 @@
}
.queue-items {
padding: 1rem;
display: flex;
flex-direction: column;
gap: 0.75rem;
}
/* Parallel items use CSS Grid for uniform sizing */
.queue-items.parallel {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(130px, 1fr));
gap: 0.75rem;
}
.queue-items.parallel .queue-item {
display: grid;
grid-template-areas:
"id id delete"
"issue issue issue"
"solution solution solution";
grid-template-columns: 1fr 1fr auto;
grid-template-rows: auto auto 1fr;
align-items: start;
padding: 0.75rem;
min-height: 90px;
gap: 0.25rem;
}
/* Card content layout */
.queue-items.parallel .queue-item .queue-item-id {
grid-area: id;
font-size: 0.875rem;
font-weight: 700;
color: hsl(var(--foreground));
}
.queue-items.parallel .queue-item .queue-item-issue {
grid-area: issue;
font-size: 0.6875rem;
color: hsl(var(--muted-foreground));
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
line-height: 1.3;
}
.queue-items.parallel .queue-item .queue-item-solution {
grid-area: solution;
display: flex;
align-items: center;
gap: 0.25rem;
font-size: 0.75rem;
font-weight: 500;
color: hsl(var(--foreground));
align-self: end;
}
/* Hide extra elements in parallel view */
.queue-items.parallel .queue-item .queue-item-files,
.queue-items.parallel .queue-item .queue-item-priority,
.queue-items.parallel .queue-item .queue-item-deps,
.queue-items.parallel .queue-item .queue-item-task {
display: none;
}
/* Delete button positioned in corner */
.queue-items.parallel .queue-item .queue-item-delete {
grid-area: delete;
justify-self: end;
padding: 0.125rem;
opacity: 0;
}
.queue-group-type {
display: inline-flex;
align-items: center;
gap: 0.375rem;
font-size: 0.875rem;
font-weight: 600;
padding: 0.25rem 0.625rem;
border-radius: 0.375rem;
}
.queue-group-type.parallel {
color: hsl(142 71% 40%);
background: hsl(142 71% 45% / 0.1);
}
.queue-group-type.sequential {
color: hsl(262 83% 50%);
background: hsl(262 83% 58% / 0.1);
}
/* Queue Item Status Colors - Enhanced visual distinction */
/* Pending - Default subtle state */
.queue-item.pending,
.queue-item:not(.ready):not(.executing):not(.completed):not(.failed):not(.blocked) {
border-color: hsl(var(--border));
background: hsl(var(--card));
}
/* Ready - Blue tint, ready to execute */
.queue-item.ready {
border-color: hsl(199 89% 48%);
background: hsl(199 89% 48% / 0.06);
border-left: 3px solid hsl(199 89% 48%);
}
/* Executing - Amber with pulse animation */
.queue-item.executing {
border-color: hsl(38 92% 50%);
background: hsl(38 92% 50% / 0.08);
border-left: 3px solid hsl(38 92% 50%);
animation: executing-pulse 2s ease-in-out infinite;
}
@keyframes executing-pulse {
0%, 100% { box-shadow: 0 0 0 0 hsl(38 92% 50% / 0.3); }
50% { box-shadow: 0 0 8px 2px hsl(38 92% 50% / 0.2); }
}
/* Completed - Green success state */
.queue-item.completed {
border-color: hsl(142 71% 45%);
background: hsl(142 71% 45% / 0.06);
border-left: 3px solid hsl(142 71% 45%);
}
/* Failed - Red error state */
.queue-item.failed {
border-color: hsl(0 84% 60%);
background: hsl(0 84% 60% / 0.06);
border-left: 3px solid hsl(0 84% 60%);
}
/* Blocked - Purple/violet blocked state */
.queue-item.blocked {
border-color: hsl(262 83% 58%);
background: hsl(262 83% 58% / 0.05);
border-left: 3px solid hsl(262 83% 58%);
opacity: 0.8;
}
/* Priority indicator */
@@ -2236,61 +2386,89 @@
flex-direction: column;
align-items: center;
justify-content: center;
padding: 1rem 1.25rem;
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
text-align: center;
transition: all 0.2s ease;
}
.queue-stat-card:hover {
transform: translateY(-1px);
box-shadow: 0 2px 8px hsl(var(--foreground) / 0.06);
}
.queue-stat-card .queue-stat-value {
font-size: 1.75rem;
font-weight: 700;
color: hsl(var(--foreground));
line-height: 1.2;
}
.queue-stat-card .queue-stat-label {
font-size: 0.6875rem;
color: hsl(var(--muted-foreground));
text-transform: uppercase;
letter-spacing: 0.05em;
margin-top: 0.375rem;
font-weight: 500;
}
/* Pending - Slate/Gray with subtle blue tint */
.queue-stat-card.pending {
border-color: hsl(215 20% 65% / 0.4);
background: linear-gradient(135deg, hsl(215 20% 95%) 0%, hsl(var(--card)) 100%);
}
.queue-stat-card.pending .queue-stat-value {
color: hsl(215 20% 45%);
}
.queue-stat-card.pending .queue-stat-label {
color: hsl(215 20% 55%);
}
/* Executing - Amber/Orange - attention-grabbing */
.queue-stat-card.executing {
border-color: hsl(38 92% 50% / 0.5);
background: linear-gradient(135deg, hsl(38 92% 95%) 0%, hsl(45 93% 97%) 100%);
}
.queue-stat-card.executing .queue-stat-value {
color: hsl(38 92% 40%);
}
.queue-stat-card.executing .queue-stat-label {
color: hsl(38 70% 45%);
}
/* Completed - Green - success indicator */
.queue-stat-card.completed {
border-color: hsl(142 71% 45% / 0.5);
background: linear-gradient(135deg, hsl(142 71% 95%) 0%, hsl(142 50% 97%) 100%);
}
.queue-stat-card.completed .queue-stat-value {
color: hsl(142 71% 35%);
}
.queue-stat-card.completed .queue-stat-label {
color: hsl(142 50% 40%);
}
/* Failed - Red - error indicator */
.queue-stat-card.failed {
border-color: hsl(0 84% 60% / 0.5);
background: linear-gradient(135deg, hsl(0 84% 95%) 0%, hsl(0 70% 97%) 100%);
}
.queue-stat-card.failed .queue-stat-value {
color: hsl(0 84% 45%);
}
.queue-stat-card.failed .queue-stat-label {
color: hsl(0 60% 50%);
}
/* ==========================================
@@ -2874,3 +3052,251 @@
gap: 0.25rem;
}
}
/* ==========================================
MULTI-QUEUE CARDS VIEW
========================================== */
/* Queue Cards Header */
.queue-cards-header {
display: flex;
align-items: center;
justify-content: space-between;
flex-wrap: wrap;
gap: 1rem;
}
/* Queue Cards Grid */
.queue-cards-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(280px, 1fr));
gap: 1rem;
margin-bottom: 1.5rem;
}
/* Individual Queue Card */
.queue-card {
position: relative;
background: hsl(var(--card));
border: 1px solid hsl(var(--border));
border-radius: 0.75rem;
padding: 1rem;
cursor: pointer;
transition: all 0.2s ease;
}
.queue-card:hover {
border-color: hsl(var(--primary) / 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px hsl(var(--foreground) / 0.08);
}
.queue-card.active {
border-color: hsl(var(--primary));
background: hsl(var(--primary) / 0.05);
}
.queue-card.merged {
opacity: 0.6;
border-style: dashed;
}
.queue-card.merged:hover {
opacity: 0.8;
}
/* Queue Card Header */
.queue-card-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 0.75rem;
}
.queue-card-id {
font-size: 0.875rem;
font-weight: 600;
color: hsl(var(--foreground));
}
.queue-card-badges {
display: flex;
align-items: center;
gap: 0.5rem;
}
/* Queue Card Stats - Progress Bar */
.queue-card-stats {
margin-bottom: 0.75rem;
}
.queue-card-stats .progress-bar {
height: 6px;
background: hsl(var(--muted));
border-radius: 3px;
overflow: hidden;
margin-bottom: 0.5rem;
}
.queue-card-stats .progress-fill {
height: 100%;
background: hsl(var(--primary));
border-radius: 3px;
transition: width 0.3s ease;
}
.queue-card-stats .progress-fill.completed {
background: hsl(var(--success, 142 76% 36%));
}
.queue-card-progress {
display: flex;
justify-content: space-between;
font-size: 0.75rem;
color: hsl(var(--foreground));
}
/* Queue Card Meta */
.queue-card-meta {
display: flex;
gap: 1rem;
font-size: 0.75rem;
color: hsl(var(--muted-foreground));
margin-bottom: 0.75rem;
}
/* Queue Card Actions */
.queue-card-actions {
display: flex;
gap: 0.5rem;
padding-top: 0.75rem;
border-top: 1px solid hsl(var(--border));
}
/* Queue Detail Header */
.queue-detail-header {
display: flex;
align-items: center;
gap: 1rem;
flex-wrap: wrap;
}
.queue-detail-title {
flex: 1;
display: flex;
align-items: center;
gap: 1rem;
}
.queue-detail-actions {
display: flex;
gap: 0.5rem;
}
/* Queue Item Delete Button */
.queue-item-delete {
margin-left: auto;
padding: 0.25rem;
opacity: 0;
transition: opacity 0.15s ease;
color: hsl(var(--muted-foreground));
border-radius: 0.25rem;
}
.queue-item:hover .queue-item-delete {
opacity: 1;
}
.queue-item-delete:hover {
color: hsl(var(--destructive, 0 84% 60%));
background: hsl(var(--destructive, 0 84% 60%) / 0.1);
}
/* Queue Error State */
.queue-error {
padding: 2rem;
text-align: center;
}
/* Responsive adjustments for queue cards */
@media (max-width: 640px) {
.queue-cards-grid {
grid-template-columns: 1fr;
}
.queue-cards-header {
flex-direction: column;
align-items: flex-start;
}
.queue-detail-header {
flex-direction: column;
align-items: flex-start;
}
.queue-detail-title {
flex-direction: column;
align-items: flex-start;
gap: 0.5rem;
}
}
/* ==========================================
WARNING BUTTON STYLE
========================================== */
.btn-warning,
.btn-secondary.btn-warning {
color: hsl(38 92% 40%);
border-color: hsl(38 92% 50% / 0.5);
background: hsl(38 92% 50% / 0.08);
}
.btn-warning:hover,
.btn-secondary.btn-warning:hover {
background: hsl(38 92% 50% / 0.15);
border-color: hsl(38 92% 50%);
}
.btn-danger,
.btn-secondary.btn-danger,
.btn-sm.btn-danger {
color: hsl(var(--destructive));
border-color: hsl(var(--destructive) / 0.5);
background: hsl(var(--destructive) / 0.08);
}
.btn-danger:hover,
.btn-secondary.btn-danger:hover,
.btn-sm.btn-danger:hover {
background: hsl(var(--destructive) / 0.15);
border-color: hsl(var(--destructive));
}
/* Issue Detail Actions */
.issue-detail-actions {
margin-top: 1rem;
padding-top: 1rem;
border-top: 1px solid hsl(var(--border));
}
.issue-detail-actions .flex {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
/* Active queue badge enhancement */
.queue-active-badge {
display: inline-flex;
align-items: center;
padding: 0.125rem 0.5rem;
font-size: 0.6875rem;
font-weight: 600;
color: hsl(142 71% 35%);
background: hsl(142 71% 45% / 0.15);
border: 1px solid hsl(142 71% 45% / 0.3);
border-radius: 9999px;
text-transform: uppercase;
letter-spacing: 0.025em;
}


@@ -1,6 +1,103 @@
// Hook Manager Component
// Manages Claude Code hooks configuration from settings.json
// ========== Platform Detection ==========
const PlatformUtils = {
// Detect current platform
detect() {
if (typeof navigator !== 'undefined') {
const platform = navigator.platform.toLowerCase();
if (platform.includes('win')) return 'windows';
if (platform.includes('mac')) return 'macos';
return 'linux';
}
if (typeof process !== 'undefined') {
if (process.platform === 'win32') return 'windows';
if (process.platform === 'darwin') return 'macos';
return 'linux';
}
return 'unknown';
},
isWindows() {
return this.detect() === 'windows';
},
isUnix() {
const platform = this.detect();
return platform === 'macos' || platform === 'linux';
},
// Get default shell for platform
getShell() {
return this.isWindows() ? 'cmd' : 'bash';
},
// Check if template is compatible with current platform
checkCompatibility(template) {
const platform = this.detect();
const issues = [];
// bash commands require Unix or Git Bash on Windows
if (template.command === 'bash' && platform === 'windows') {
issues.push({
level: 'warning',
message: 'bash command may not work on Windows without Git Bash or WSL'
});
}
// Check for Unix-specific shell features in args
if (template.args && Array.isArray(template.args)) {
const argStr = template.args.join(' ');
if (platform === 'windows') {
// Unix shell features that won't work in cmd
if (argStr.includes('$HOME') || argStr.includes('${HOME}')) {
issues.push({ level: 'warning', message: 'Uses $HOME - use %USERPROFILE% on Windows' });
}
if (argStr.includes('$(') || argStr.includes('`')) {
issues.push({ level: 'warning', message: 'Uses command substitution - not supported in cmd' });
}
if (argStr.includes(' | ')) {
issues.push({ level: 'info', message: 'Uses pipes - works in cmd but syntax may differ' });
}
}
}
return {
compatible: issues.filter(i => i.level === 'error').length === 0,
issues
};
},
// Get platform-specific command variant if available
getVariant(template) {
const platform = this.detect();
// Check if template has platform-specific variants
if (template.variants && template.variants[platform]) {
return { ...template, ...template.variants[platform] };
}
return template;
},
// Escape script for specific shell type
escapeForShell(script, shell) {
if (shell === 'bash' || shell === 'sh') {
// Unix: use single quotes, escape internal single quotes
return script.replace(/'/g, "'\\''");
} else if (shell === 'cmd') {
// Windows cmd: escape double quotes and special chars
return script.replace(/"/g, '\\"').replace(/%/g, '%%');
} else if (shell === 'powershell') {
// PowerShell: escape single quotes by doubling
return script.replace(/'/g, "''");
}
return script;
}
};
// ========== Hook State ==========
let hookConfig = {
global: { hooks: {} },
@@ -394,6 +491,29 @@ function convertToClaudeCodeFormat(hookData) {
});
commandStr += ' ' + additionalArgs.join(' ');
}
} else if (commandStr === 'node' && hookData.args.length >= 2 && hookData.args[0] === '-e') {
// Special handling for node -e commands using PlatformUtils
const script = hookData.args[1];
if (PlatformUtils.isWindows()) {
// Windows: use double quotes, escape internal quotes
const escapedScript = PlatformUtils.escapeForShell(script, 'cmd');
commandStr = `node -e "${escapedScript}"`;
} else {
// Unix: use single quotes to prevent shell interpretation
const escapedScript = PlatformUtils.escapeForShell(script, 'bash');
commandStr = `node -e '${escapedScript}'`;
}
// Handle any additional args after the script
if (hookData.args.length > 2) {
const additionalArgs = hookData.args.slice(2).map(arg => {
if (arg.includes(' ') && !arg.startsWith('"') && !arg.startsWith("'")) {
return `"${arg.replace(/"/g, '\\"')}"`;
}
return arg;
});
commandStr += ' ' + additionalArgs.join(' ');
}
} else {
// Default handling for other commands
const quotedArgs = hookData.args.map(arg => {


@@ -2269,6 +2269,25 @@ const i18n = {
'issues.queueCommandInfo': 'After running the command, click "Refresh" to see the updated queue.',
'issues.alternative': 'Alternative',
'issues.refreshAfter': 'Refresh Queue',
'issues.activate': 'Activate',
'issues.deactivate': 'Deactivate',
'issues.queueActivated': 'Queue activated',
'issues.queueDeactivated': 'Queue deactivated',
'issues.deleteQueue': 'Delete queue',
'issues.confirmDeleteQueue': 'Are you sure you want to delete this queue? This action cannot be undone.',
'issues.queueDeleted': 'Queue deleted successfully',
'issues.actions': 'Actions',
'issues.archive': 'Archive',
'issues.delete': 'Delete',
'issues.confirmDeleteIssue': 'Are you sure you want to delete this issue? This action cannot be undone.',
'issues.confirmArchiveIssue': 'Archive this issue? It will be moved to history.',
'issues.issueDeleted': 'Issue deleted successfully',
'issues.issueArchived': 'Issue archived successfully',
'issues.executionQueues': 'Execution Queues',
'issues.queues': 'queues',
'issues.noQueues': 'No queues found',
'issues.queueEmptyHint': 'Generate execution queue from bound solutions',
'issues.refresh': 'Refresh',
// issue.* keys (legacy)
'issue.viewIssues': 'Issues',
'issue.viewQueue': 'Queue',
@@ -4592,6 +4611,25 @@ const i18n = {
'issues.queueCommandInfo': '运行命令后,点击"刷新"查看更新后的队列。',
'issues.alternative': '或者',
'issues.refreshAfter': '刷新队列',
'issues.activate': '激活',
'issues.deactivate': '取消激活',
'issues.queueActivated': '队列已激活',
'issues.queueDeactivated': '队列已取消激活',
'issues.deleteQueue': '删除队列',
'issues.confirmDeleteQueue': '确定要删除此队列吗?此操作无法撤销。',
'issues.queueDeleted': '队列删除成功',
'issues.actions': '操作',
'issues.archive': '归档',
'issues.delete': '删除',
'issues.confirmDeleteIssue': '确定要删除此议题吗?此操作无法撤销。',
'issues.confirmArchiveIssue': '归档此议题?它将被移动到历史记录中。',
'issues.issueDeleted': '议题删除成功',
'issues.issueArchived': '议题归档成功',
'issues.executionQueues': '执行队列',
'issues.queues': '个队列',
'issues.noQueues': '暂无队列',
'issues.queueEmptyHint': '从绑定的解决方案生成执行队列',
'issues.refresh': '刷新',
// issue.* keys (legacy)
'issue.viewIssues': '议题',
'issue.viewQueue': '队列',


@@ -398,6 +398,7 @@ async function updateCliToolConfig(tool, updates) {
// Invalidate cache to ensure fresh data on page refresh
if (window.cacheManager) {
window.cacheManager.invalidate('cli-config');
window.cacheManager.invalidate('cli-tools-config');
}
}
return data;


@@ -6381,12 +6381,12 @@ async function showWatcherControlModal() {
// Get first indexed project path as default
let defaultPath = '';
if (indexes.success && indexes.indexes && indexes.indexes.length > 0) {
// Sort by lastModified desc and pick the most recent
const sorted = indexes.indexes.sort((a, b) =>
new Date(b.lastModified || 0) - new Date(a.lastModified || 0)
);
defaultPath = sorted[0].path || '';
}
const modalHtml = buildWatcherControlContent(status, defaultPath);


@@ -524,16 +524,32 @@ async function installHookTemplate(templateId, scope) {
return;
}
// Platform compatibility check
const compatibility = PlatformUtils.checkCompatibility(template);
if (compatibility.issues.length > 0) {
const warnings = compatibility.issues.filter(i => i.level === 'warning');
if (warnings.length > 0) {
const platform = PlatformUtils.detect();
const warningMsg = warnings.map(w => w.message).join('; ');
console.warn(`[Hook Install] Platform: ${platform}, Warnings: ${warningMsg}`);
// Show warning but continue installation
showRefreshToast(`Warning: ${warningMsg}`, 'warning', 5000);
}
}
// Get platform-specific variant if available
const adaptedTemplate = PlatformUtils.getVariant(template);
const hookData = {
command: adaptedTemplate.command,
args: adaptedTemplate.args
};
if (adaptedTemplate.matcher) {
hookData.matcher = adaptedTemplate.matcher;
}
await saveHook(scope, adaptedTemplate.event, hookData);
}
async function uninstallHookTemplate(templateId) {

File diff suppressed because it is too large.


@@ -956,15 +956,13 @@ function renderSkillFileModal() {
</div>
<!-- Content -->
<div class="flex-1 min-h-0 overflow-auto p-4">
${isEditing ? `
<textarea id="skillFileContent"
class="w-full h-full min-h-[400px] px-4 py-3 bg-background border border-border rounded-lg text-sm font-mono focus:outline-none focus:ring-2 focus:ring-primary resize-none"
spellcheck="false">${escapeHtml(content)}</textarea>
` : `
<pre class="px-4 py-3 bg-muted/30 rounded-lg text-sm font-mono whitespace-pre-wrap break-words">${escapeHtml(content)}</pre>
`}
</div>


@@ -160,7 +160,7 @@ interface ClaudeWithSettingsParams {
prompt: string;
settingsPath: string;
endpointId: string;
mode: 'analysis' | 'write' | 'auto' | 'review';
workingDir: string;
cd?: string;
includeDirs?: string[];
@@ -351,12 +351,12 @@ type BuiltinCliTool = typeof BUILTIN_CLI_TOOLS[number];
const ParamsSchema = z.object({
tool: z.string().min(1, 'Tool is required'), // Accept any tool ID (built-in or custom endpoint)
prompt: z.string().min(1, 'Prompt is required'),
mode: z.enum(['analysis', 'write', 'auto', 'review']).default('analysis'),
format: z.enum(['plain', 'yaml', 'json']).default('plain'), // Multi-turn prompt concatenation format
model: z.string().optional(),
cd: z.string().optional(),
includeDirs: z.string().optional(),
// timeout removed - controlled by external caller (bash timeout)
resume: z.union([z.boolean(), z.string()]).optional(), // true = last, string = single ID or comma-separated IDs
id: z.string().optional(), // Custom execution ID (e.g., IMPL-001-step1)
noNative: z.boolean().optional(), // Force prompt concatenation instead of native resume
@@ -388,7 +388,7 @@ async function executeCliTool(
throw new Error(`Invalid params: ${parsed.error.message}`);
}
const { tool, prompt, mode, format, model, cd, includeDirs, resume, id: customId, noNative, category, parentExecutionId, outputFormat } = parsed.data;
// Validate and determine working directory early (needed for conversation lookup)
let workingDir: string;
@@ -862,7 +862,6 @@ async function executeCliTool(
let stdout = '';
let stderr = '';
// Handle stdout
child.stdout!.on('data', (data: Buffer) => {
@@ -924,18 +923,14 @@ async function executeCliTool(
debugLog('CLOSE', `Process closed`, {
exitCode: code,
duration: `${duration}ms`,
stdoutLength: stdout.length,
stderrLength: stderr.length,
outputUnitsCount: allOutputUnits.length
});
// Determine status - prioritize output content over exit code
let status: 'success' | 'error' = 'success';
if (code !== 0) {
// Non-zero exit code doesn't always mean failure
// Check if there's valid output (AI response) - treat as success
const hasValidOutput = stdout.trim().length > 0;
@@ -1169,25 +1164,8 @@ async function executeCliTool(
reject(new Error(`Failed to spawn ${tool}: ${error.message}\n Command: ${command} ${args.join(' ')}\n Working Dir: ${workingDir}`));
});
// Timeout controlled by external caller (bash timeout)
// When parent process terminates, child will be cleaned up via process exit handler
});
}
@@ -1198,7 +1176,8 @@ export const schema: ToolSchema = {
Modes:
- analysis: Read-only operations (default)
- write: File modifications allowed
- auto: Full autonomous operations (codex only)
- review: Code review mode (codex uses 'codex review' subcommand, others accept but no operation change)`,
inputSchema: {
type: 'object',
properties: {
@@ -1213,8 +1192,8 @@ Modes:
},
mode: {
type: 'string',
enum: ['analysis', 'write', 'auto', 'review'],
description: 'Execution mode (default: analysis). review mode uses codex review subcommand for codex tool.',
default: 'analysis'
},
model: {
@@ -1228,12 +1207,8 @@ Modes:
includeDirs: {
type: 'string',
description: 'Additional directories (comma-separated). Maps to --include-directories for gemini/qwen, --add-dir for codex'
},
// timeout removed - controlled by external caller (bash timeout)
},
required: ['tool', 'prompt']
}


@@ -223,7 +223,21 @@ export function buildCommand(params: {
case 'codex':
useStdin = true;
if (mode === 'review') {
// codex review mode: non-interactive code review
// Format: codex review [OPTIONS] [PROMPT]
args.push('review');
// Default to --uncommitted if no specific review target in prompt
args.push('--uncommitted');
if (model) {
args.push('-m', model);
}
// codex review uses positional prompt argument, not stdin
useStdin = false;
if (prompt) {
args.push(prompt);
}
} else if (nativeResume?.enabled) {
args.push('resume');
if (nativeResume.isLatest) {
args.push('--last');


@@ -0,0 +1,316 @@
# codex-lens LSP Integration Execution Checklist
> Generated: 2026-01-15
> Based on: Gemini multi-round deep analysis
> Status: Ready for implementation
---
## Phase 1: LSP Server Foundation (Priority: HIGH)
### 1.1 Create LSP Server Entry Point
- [ ] **Install pygls dependency**
```bash
pip install pygls
```
- [ ] **Create `src/codexlens/lsp/__init__.py`**
- Export: `CodexLensServer`, `start_server`
- [ ] **Create `src/codexlens/lsp/server.py`**
- Class: `CodexLensServer(LanguageServer)`
- Initialize: `ChainSearchEngine`, `GlobalSymbolIndex`, `WatcherManager`
- Lifecycle: Start `WatcherManager` on `initialize` request
### 1.2 Implement Core LSP Handlers
- [ ] **`textDocument/definition`** handler
- Source: `GlobalSymbolIndex.search()` exact match
- Reference: `storage/global_index.py:173`
- Return: `Location(uri, Range)`
- [ ] **`textDocument/completion`** handler
- Source: `GlobalSymbolIndex.search(prefix_mode=True)`
- Reference: `storage/global_index.py:173`
- Return: `CompletionItem[]`
- [ ] **`workspace/symbol`** handler
- Source: `ChainSearchEngine.search_symbols()`
- Reference: `search/chain_search.py:618`
- Return: `SymbolInformation[]`
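A minimal sketch of the 1.2 definition handler, assuming pygls >= 1.3; the `GlobalSymbolIndex` import path, constructor, and result shape (`file_path`, `range`) are assumptions drawn from the references above, not verified APIs:
```python
from lsprotocol.types import (
    TEXT_DOCUMENT_DEFINITION,
    DefinitionParams,
    Location,
    Position,
    Range,
)
from pygls.server import LanguageServer

from codexlens.storage.global_index import GlobalSymbolIndex  # assumed path

server = LanguageServer("codex-lens", "v0.1.0")
index = GlobalSymbolIndex()  # assumed zero-arg constructor


@server.feature(TEXT_DOCUMENT_DEFINITION)
def on_definition(ls: LanguageServer, params: DefinitionParams) -> list[Location]:
    # Extract the token under the cursor, then exact-match it in the index.
    doc = ls.workspace.get_text_document(params.text_document.uri)
    word = doc.word_at_position(params.position)
    results = []
    for sym in index.search(word):
        start, end = sym.range  # assumed (start_line, end_line) pair
        results.append(Location(
            uri=f"file://{sym.file_path}",
            range=Range(start=Position(line=start, character=0),
                        end=Position(line=end, character=0)),
        ))
    return results


if __name__ == "__main__":
    server.start_io()
```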
### 1.3 Wire File Watcher to LSP Events
- [ ] **`workspace/didChangeWatchedFiles`** handler
- Delegate to: `WatcherManager.process_changes()`
- Reference: `watcher/manager.py:53`
- [ ] **`textDocument/didSave`** handler
- Trigger: `IncrementalIndexer` for single file
- Reference: `watcher/incremental_indexer.py`
### 1.4 Deliverables
- [ ] Unit tests for LSP handlers
- [ ] Integration test: definition lookup
- [ ] Integration test: completion prefix search
- [ ] Benchmark: query latency < 50ms
---
## Phase 2: Find References Implementation (Priority: MEDIUM)
### 2.1 Create `search_references` Method
- [ ] **Add to `src/codexlens/search/chain_search.py`**
```python
def search_references(
    self,
    symbol_name: str,
    source_path: Path,
    depth: int = -1
) -> List[ReferenceResult]:
    """Find all references to a symbol across the project."""
```
### 2.2 Implement Parallel Query Orchestration
- [ ] **Collect index paths**
- Use: `_collect_index_paths()` existing method
- [ ] **Parallel query execution**
- ThreadPoolExecutor across all `_index.db`
- SQL: `SELECT * FROM code_relationships WHERE target_qualified_name = ?`
- Reference: `storage/sqlite_store.py:348`
- [ ] **Result aggregation**
- Deduplicate by file:line
- Sort by file path, then line number
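A sketch of the fan-out described above, assuming each `_index.db` exposes the `code_relationships` table referenced; the column names `source_file` and `source_line` are illustrative assumptions:
```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from typing import List, Tuple


def _query_one(db_path: Path, symbol: str) -> List[Tuple[str, int]]:
    # One read-only connection per task: sqlite3 connections are not
    # shared across threads here.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT source_file, source_line FROM code_relationships "
            "WHERE target_qualified_name = ?",
            (symbol,),
        ).fetchall()
        return [(path, line) for path, line in rows]
    finally:
        conn.close()


def search_references_parallel(index_paths: List[Path], symbol: str) -> List[Tuple[str, int]]:
    with ThreadPoolExecutor(max_workers=8) as pool:
        chunks = pool.map(lambda p: _query_one(p, symbol), index_paths)
    # Deduplicate by (file, line), then sort by file path and line number.
    return sorted({loc for chunk in chunks for loc in chunk})
```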
### 2.3 LSP Handler
- [ ] **`textDocument/references`** handler
- Call: `ChainSearchEngine.search_references()`
- Return: `Location[]`
### 2.4 Deliverables
- [ ] Unit test: single-index reference lookup
- [ ] Integration test: cross-directory references
- [ ] Benchmark: < 200ms for 10+ index files
---
## Phase 3: Enhanced Hover Information (Priority: MEDIUM)
### 3.1 Implement Hover Data Extraction
- [ ] **Create `src/codexlens/lsp/hover_provider.py`**
```python
class HoverProvider:
    def get_hover_info(self, symbol: Symbol) -> HoverInfo:
        """Extract hover information for a symbol."""
```
### 3.2 Data Sources
- [ ] **Symbol metadata**
- Source: `GlobalSymbolIndex.search()`
- Fields: `kind`, `name`, `file_path`, `range`
- [ ] **Source code extraction**
- Source: `SQLiteStore.files` table
- Reference: `storage/sqlite_store.py:284`
- Extract: Lines from `range[0]` to `range[1]`
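As a sketch of how the two sources above might be combined into hover markdown (field names per 3.2; everything else is assumption):
```python
def format_hover(kind: str, name: str, file_path: str,
                 source_lines: list[str]) -> str:
    # Markdown with a code fence, as required by the 3.3 handler.
    fence = "`" * 3  # avoids a literal fence inside this snippet
    snippet = "\n".join(source_lines)
    return (
        f"**{kind}** `{name}`  \n"
        f"Defined in `{file_path}`\n\n"
        f"{fence}python\n{snippet}\n{fence}"
    )
```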
### 3.3 LSP Handler
- [ ] **`textDocument/hover`** handler
- Return: `Hover(contents=MarkupContent)`
- Format: Markdown with code fence
### 3.4 Deliverables
- [ ] Unit test: hover for function/class/variable
- [ ] Integration test: multi-line function signature
---
## Phase 4: MCP Bridge for Claude Code (Priority: HIGH VALUE)
### 4.1 Define MCP Schema
- [ ] **Create `src/codexlens/mcp/__init__.py`**
- [ ] **Create `src/codexlens/mcp/schema.py`**
```python
@dataclass
class MCPContext:
    # Non-default fields must precede defaulted ones, and mutable defaults
    # need field(default_factory=...). Requires:
    # from dataclasses import dataclass, field
    context_type: str
    symbol: Optional[SymbolInfo] = None
    definition: Optional[str] = None
    references: List[ReferenceInfo] = field(default_factory=list)
    related_symbols: List[SymbolInfo] = field(default_factory=list)
    version: str = "1.0"
```
### 4.2 Create MCP Provider
- [ ] **Create `src/codexlens/mcp/provider.py`**
```python
class MCPProvider:
    def build_context(
        self,
        symbol_name: str,
        context_type: str = "symbol_explanation"
    ) -> MCPContext:
        """Build structured context for LLM consumption."""
```
### 4.3 Context Building Logic
- [ ] **Symbol lookup**
- Use: `GlobalSymbolIndex.search()`
- [ ] **Definition extraction**
- Use: `SQLiteStore` file content
- [ ] **References collection**
- Use: `ChainSearchEngine.search_references()`
- [ ] **Related symbols**
- Use: `code_relationships` for imports/calls
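A sketch of how 4.2 and 4.3 might fit together, reusing the `MCPContext` dataclass from 4.1; every engine and store method name here is an assumption based on this checklist's references:
```python
from pathlib import Path
from typing import Optional


class MCPProvider:
    def __init__(self, index, store, engine):
        # GlobalSymbolIndex, SQLiteStore, ChainSearchEngine (assumed APIs)
        self.index, self.store, self.engine = index, store, engine

    def build_context(self, symbol_name: str,
                      context_type: str = "symbol_explanation") -> "MCPContext":
        matches = self.index.search(symbol_name)
        symbol = matches[0] if matches else None
        definition: Optional[str] = None
        if symbol is not None:
            lines = self.store.read_lines(symbol.file_path)  # assumed helper
            start, end = symbol.range
            definition = "\n".join(lines[start:end + 1])
        references = self.engine.search_references(symbol_name, Path("."))
        return MCPContext(
            context_type=context_type,
            symbol=symbol,
            definition=definition,
            references=references,
        )
```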
### 4.4 Hook Integration Points
- [ ] **Document `pre-tool` hook interface**
```python
def pre_tool_hook(action: str, params: dict) -> MCPContext:
    """Called before LLM action to gather context."""
```
- [ ] **Document `post-tool` hook interface**
```python
def post_tool_hook(action: str, result: Any) -> None:
    """Called after LSP action for proactive caching."""
```
### 4.5 Deliverables
- [ ] MCP schema JSON documentation
- [ ] Unit test: context building
- [ ] Integration test: hook → MCP → JSON output
---
## Phase 5: Advanced Features (Priority: LOW)
### 5.1 Custom LSP Commands
- [ ] **`codexlens/hybridSearch`**
- Expose: `HybridSearchEngine.search()`
- Reference: `search/hybrid_search.py`
- [ ] **`codexlens/symbolGraph`**
- Return: Symbol relationship graph
- Source: `code_relationships` table
### 5.2 Proactive Context Caching
- [ ] **Implement `post-tool` hook caching**
- After `go-to-definition`: pre-fetch references
- Cache TTL: 5 minutes
- Storage: In-memory LRU
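A sketch of the cache described above: an in-memory LRU whose entries expire after the 5-minute TTL (sizes and key shape are assumptions):
```python
import time
from collections import OrderedDict
from typing import Any, Hashable, Optional


class TTLLRUCache:
    def __init__(self, max_items: int = 256, ttl_s: float = 300.0):
        self._data: "OrderedDict[Hashable, tuple[float, Any]]" = OrderedDict()
        self.max_items, self.ttl_s = max_items, ttl_s

    def get(self, key: Hashable) -> Optional[Any]:
        hit = self._data.get(key)
        if hit is None:
            return None
        ts, value = hit
        if time.monotonic() - ts > self.ttl_s:
            del self._data[key]  # expired
            return None
        self._data.move_to_end(key)  # LRU touch
        return value

    def put(self, key: Hashable, value: Any) -> None:
        self._data[key] = (time.monotonic(), value)
        self._data.move_to_end(key)
        if len(self._data) > self.max_items:
            self._data.popitem(last=False)  # evict least-recently-used
```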
### 5.3 Performance Optimizations
- [ ] **Connection pooling**
- Reference: `storage/sqlite_store.py` thread-local
- [ ] **Result caching**
- LRU cache for frequent queries
- Invalidate on file change
---
## File Structure After Implementation
```
src/codexlens/
├── lsp/                      # NEW
│   ├── __init__.py
│   ├── server.py             # Main LSP server
│   ├── handlers.py           # LSP request handlers
│   ├── hover_provider.py     # Hover information
│   └── utils.py              # LSP utilities
├── mcp/                      # NEW
│   ├── __init__.py
│   ├── schema.py             # MCP data models
│   ├── provider.py           # Context builder
│   └── hooks.py              # Hook interfaces
├── search/
│   ├── chain_search.py       # MODIFY: add search_references()
│   └── ...
└── ...
```
---
## Dependencies to Add
```toml
# pyproject.toml
[project.optional-dependencies]
lsp = [
"pygls>=1.3.0",
]
```
---
## Testing Strategy
### Unit Tests
```
tests/
├── lsp/
│   ├── test_definition.py
│   ├── test_completion.py
│   ├── test_references.py
│   └── test_hover.py
└── mcp/
    ├── test_schema.py
    └── test_provider.py
```
### Integration Tests
- [ ] Full LSP handshake test
- [ ] Multi-file project navigation
- [ ] Incremental index update via didSave
### Performance Benchmarks
| Operation | Target | Acceptable |
|-----------|--------|------------|
| Definition lookup | < 30ms | < 50ms |
| Completion (100 items) | < 50ms | < 100ms |
| Find references (10 files) | < 150ms | < 200ms |
| Initial indexing (1000 files) | < 60s | < 120s |
---
## Execution Order
```
Week 1: Phase 1.1 → 1.2 → 1.3 → 1.4
Week 2: Phase 2.1 → 2.2 → 2.3 → 2.4
Week 3: Phase 3 + Phase 4.1 → 4.2
Week 4: Phase 4.3 → 4.4 → 4.5
Week 5: Phase 5 (optional) + Polish
```
---
## Quick Start Commands
```bash
# Install LSP dependencies
pip install pygls
# Run LSP server (after implementation)
python -m codexlens.lsp --stdio
# Test LSP connection
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}' | python -m codexlens.lsp --stdio
```
---
## Reference Links
- pygls Documentation: https://pygls.readthedocs.io/
- LSP Specification: https://microsoft.github.io/language-server-protocol/
- codex-lens GlobalSymbolIndex: `storage/global_index.py:173`
- codex-lens ChainSearchEngine: `search/chain_search.py:618`
- codex-lens WatcherManager: `watcher/manager.py:53`

File diff suppressed because it is too large.


@@ -3645,6 +3645,84 @@ def index_status(
console.print(f" SPLADE encoder: {'[green]Yes[/green]' if splade_available else f'[red]No[/red] ({splade_err})'}")
# ==================== Index Update Command ====================
@index_app.command("update")
def index_update(
    file_path: Path = typer.Argument(..., exists=True, file_okay=True, dir_okay=False, help="Path to the file to update in the index."),
    json_mode: bool = typer.Option(False, "--json", help="Output JSON response."),
    verbose: bool = typer.Option(False, "--verbose", "-v", help="Enable debug logging."),
) -> None:
    """Update the index for a single file incrementally.

    This is a lightweight command designed for use in hooks (e.g., Claude Code PostToolUse).
    It updates only the specified file without scanning the entire directory.
    The file's parent directory must already be indexed via 'codexlens index init'.

    Examples:
        codexlens index update src/main.py       # Update single file
        codexlens index update ./foo.ts --json   # JSON output for hooks
    """
    _configure_logging(verbose, json_mode)
    from codexlens.watcher.incremental_indexer import IncrementalIndexer

    registry: RegistryStore | None = None
    indexer: IncrementalIndexer | None = None
    try:
        registry = RegistryStore()
        registry.initialize()
        mapper = PathMapper()
        config = Config()
        resolved_path = file_path.resolve()
        # Check if project is indexed
        source_root = mapper.get_project_root(resolved_path)
        if not source_root or not registry.get_project(source_root):
            error_msg = f"Project containing file is not indexed: {file_path}"
            if json_mode:
                print_json(success=False, error=error_msg)
            else:
                console.print(f"[red]Error:[/red] {error_msg}")
                console.print("[dim]Run 'codexlens index init' on the project root first.[/dim]")
            raise typer.Exit(code=1)
        indexer = IncrementalIndexer(registry, mapper, config)
        result = indexer._index_file(resolved_path)
        if result.success:
            if json_mode:
                print_json(success=True, result={
                    "path": str(result.path),
                    "symbols_count": result.symbols_count,
                    "status": "updated",
                })
            else:
                console.print(f"[green]✓[/green] Updated index for [bold]{result.path.name}[/bold] ({result.symbols_count} symbols)")
        else:
            error_msg = result.error or f"Failed to update index for {file_path}"
            if json_mode:
                print_json(success=False, error=error_msg)
            else:
                console.print(f"[red]Error:[/red] {error_msg}")
            raise typer.Exit(code=1)
    except CodexLensError as exc:
        if json_mode:
            print_json(success=False, error=str(exc))
        else:
            console.print(f"[red]Update failed:[/red] {exc}")
        raise typer.Exit(code=1)
    finally:
        if indexer:
            indexer.close()
        if registry:
            registry.close()
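# Hypothetical hook-side usage of the command above: a PostToolUse-style
# wrapper calling `codexlens index update --json` and parsing the contract
# it prints; the `codexlens` entry point being on PATH is an assumption.
import json
import subprocess
import sys


def hook_update_index(path: str) -> bool:
    proc = subprocess.run(
        ["codexlens", "index", "update", path, "--json"],
        capture_output=True, text=True,
    )
    try:
        payload = json.loads(proc.stdout or "{}")
    except json.JSONDecodeError:
        return False
    if payload.get("success"):
        result = payload.get("result", {})
        print(f"updated {result.get('path')} ({result.get('symbols_count')} symbols)")
        return True
    print(payload.get("error", "index update failed"), file=sys.stderr)
    return False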
# ==================== Index All Command ====================
@index_app.command("all")

package-lock.json (generated)

@@ -1,12 +1,12 @@
{
"name": "claude-code-workflow",
"version": "6.3.23",
"version": "6.3.31",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "claude-code-workflow",
"version": "6.3.23",
"version": "6.3.31",
"license": "MIT",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.0.4",


@@ -1,6 +1,6 @@
{
"name": "claude-code-workflow",
"version": "6.3.29",
"version": "6.3.33",
"description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
"type": "module",
"main": "ccw/src/index.js",