Add benchmark results for fast3 and fast4, implement KeepAliveLspBridge, and add tests for staged strategies

- Added new benchmark result files: compare_2026-02-09_score_fast3.json and compare_2026-02-09_score_fast4.json.
- Implemented KeepAliveLspBridge to maintain a persistent LSP connection across multiple queries, improving performance.
- Created unit tests for staged clustering strategies in test_staged_stage3_fast_strategies.py, ensuring correct behavior of score and dir_rr strategies.
catlog22
2026-02-09 20:45:29 +08:00
parent c62d26183b
commit 4344e79e68
64 changed files with 6154 additions and 123 deletions

View File

@@ -1,6 +1,6 @@
 ---
 name: parallel-dev-cycle
-description: Multi-agent parallel development cycle with requirement analysis, exploration planning, code development, and validation. Supports continuous iteration with markdown progress documentation. Triggers on "parallel-dev-cycle".
+description: Multi-agent parallel development cycle with requirement analysis, exploration planning, code development, and validation. Orchestration runs inline in main flow (no separate orchestrator agent). Supports continuous iteration with markdown progress documentation. Triggers on "parallel-dev-cycle".
 allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
 ---
@@ -12,6 +12,8 @@ Multi-agent parallel development cycle using Codex subagent pattern with four sp
 3. **Code Development** (CD) - Code development with debug strategy support
 4. **Validation & Archival Summary** (VAS) - Validation and archival summary
+Orchestration logic (phase management, state updates, feedback coordination) runs **inline in the main flow** — no separate orchestrator agent is spawned. Only 4 worker agents are allocated.
 Each agent **maintains one main document** (e.g., requirements.md, plan.json, implementation.md) that is completely rewritten per iteration, plus auxiliary logs (changes.log, debug-log.ndjson) that are append-only.
 ## Architecture Overview
@@ -22,10 +24,10 @@ Each agent **maintains one main document** (e.g., requirements.md, plan.json, im
 └────────────────────────────┬────────────────────────────────┘
                              v
-┌──────────────────────┐
-│  Orchestrator Agent  │  (Coordinator)
-│    (spawned once)    │
-└──────────────────────┘
+┌──────────────────────────────┐
+│ Main Flow (Inline Orchestration) │
+│      Phase 1 → 2 → 3 → 4     │
+└──────────────────────────────┘
 ┌────────────────────┼────────────────────┐
 │                    │                    │
@@ -44,10 +46,10 @@ Each agent **maintains one main document** (e.g., requirements.md, plan.json, im
 └────────┘
     v
-┌──────────────────────┐
-│   Summary Report     │
-│   & Markdown Docs    │
-└──────────────────────┘
+┌──────────────────────────────┐
+│        Summary Report        │
+│       & Markdown Docs        │
+└──────────────────────────────┘
 ```
 ## Key Design Principles
@@ -56,7 +58,7 @@ Each agent **maintains one main document** (e.g., requirements.md, plan.json, im
 2. **Version-Based Overwrite**: Main documents completely rewritten per version; logs append-only
 3. **Automatic Archival**: Old main document versions automatically archived to `history/` directory
 4. **Complete Audit Trail**: Changes.log (NDJSON) preserves all change history
-5. **Parallel Coordination**: Four agents launched simultaneously; coordination via shared state and orchestrator
+5. **Parallel Coordination**: Four agents launched simultaneously; coordination via shared state and inline main flow
 6. **File References**: Use short file paths instead of content passing
 7. **Self-Enhancement**: RA agent proactively extends requirements based on context
 8. **Shared Discovery Board**: All agents share exploration findings via `discoveries.ndjson` — read on start, write as you discover, eliminating redundant codebase exploration
@@ -315,7 +317,7 @@ All agents share a real-time discovery board at `coordination/discoveries.ndjson
 3. Deduplicate — check existing entries; skip if same `type` + dedup key value already exists
 4. Append-only — never modify or delete existing lines
-### Agent → Orchestrator Communication
+### Agent → Main Flow Communication
 ```
 PHASE_RESULT:
 - phase: ra | ep | cd | vas
@@ -325,7 +327,7 @@ PHASE_RESULT:
 - issues: []
 ```
-### Orchestrator → Agent Communication
+### Main Flow → Agent Communication
 Feedback via `send_input` (file refs + issue summary, never full content):
 ```
@@ -337,7 +339,7 @@ Feedback via `send_input` (file refs + issue summary, never full content):
 1. [Specific fix]
 ```
-**Rules**: Only orchestrator writes state file. Agents read state, write to own `.progress/{agent}/` directory only.
+**Rules**: Only main flow writes state file. Agents read state, write to own `.progress/{agent}/` directory only.
 ## Core Rules
@@ -346,7 +348,7 @@ Feedback via `send_input` (file refs + issue summary, never full content):
 3. **Parse Every Output**: Extract PHASE_RESULT data from each agent for next phase
 4. **Auto-Continue**: After each phase, execute next pending phase automatically
 5. **Track Progress**: Update TodoWrite dynamically with attachment/collapse pattern
-6. **Single Writer**: Only orchestrator writes to master state file; agents report via PHASE_RESULT
+6. **Single Writer**: Only main flow writes to master state file; agents report via PHASE_RESULT
 7. **File References**: Pass file paths between agents, not content
 8. **DO NOT STOP**: Continuous execution until all phases complete or max iterations reached
@@ -357,11 +359,11 @@ Feedback via `send_input` (file refs + issue summary, never full content):
 | Agent timeout | send_input requesting convergence, then retry |
 | State corrupted | Rebuild from progress markdown files and changes.log |
 | Agent failed | Re-spawn agent with previous context |
-| Conflicting results | Orchestrator sends reconciliation request |
+| Conflicting results | Main flow sends reconciliation request |
 | Missing files | RA/EP agents identify and request clarification |
 | Max iterations reached | Generate summary with remaining issues documented |
-## Coordinator Checklist
+## Coordinator Checklist (Main Flow)
 ### Before Each Phase

View File

@@ -258,4 +258,4 @@ function checkControlSignals(cycleId) {
 ## Next Phase
-Return to orchestrator, then auto-continue to [Phase 2: Agent Execution](02-agent-execution.md).
+Return to main flow, then auto-continue to [Phase 2: Agent Execution](02-agent-execution.md).

View File

@@ -446,4 +446,4 @@ Execution timeout reached. Please:
 ## Next Phase
-Return to orchestrator, then auto-continue to [Phase 3: Result Aggregation & Iteration](03-result-aggregation.md).
+Return to main flow, then auto-continue to [Phase 3: Result Aggregation & Iteration](03-result-aggregation.md).

View File

@@ -227,4 +227,4 @@ Phase 3: Result Aggregation
 ## Next Phase
 If iteration continues: Return to Phase 2.
-If iteration completes: Return to orchestrator, then auto-continue to [Phase 4: Completion & Summary](04-completion-summary.md).
+If iteration completes: Return to main flow, then auto-continue to [Phase 4: Completion & Summary](04-completion-summary.md).

View File

@@ -83,7 +83,7 @@ Object.values(agents).forEach(id => {
 ### Step 4.4: Return Result
 ```javascript
-console.log('\n=== Parallel Dev Cycle Orchestrator Finished ===')
+console.log('\n=== Parallel Dev Cycle Finished ===')
 return {
 status: 'completed',

View File

@@ -181,7 +181,7 @@ When tests fail during implementation, the CD agent MUST initiate the hypothesis
 |---------|-----------|--------|
 | **Test Failure** | Automated tests fail during implementation | Start debug workflow |
 | **Integration Conflict** | Blockers logged in `issues.md` | Start debug workflow |
-| **VAS Feedback** | Orchestrator provides validation failure feedback | Start debug workflow |
+| **VAS Feedback** | Main flow provides validation failure feedback | Start debug workflow |
 ### Debug Workflow Phases
@@ -356,7 +356,7 @@ PHASE_RESULT:
 - Used to guide development
 - **RA (Requirements Analyst)**: "Requirement FR-X means..."
 - Used for clarification
-- **Orchestrator**: "Fix these issues in next iteration"
+- **Main Flow**: "Fix these issues in next iteration"
 - Used for priority setting
 ### Sends To:
@@ -364,7 +364,7 @@ PHASE_RESULT:
 - Used for test generation
 - **RA (Requirements Analyst)**: "FR-X is unclear, need clarification"
 - Used for requirement updates
-- **Orchestrator**: "Found blocker X, need help"
+- **Main Flow**: "Found blocker X, need help"
 - Used for decision making
 ## Code Quality Standards

View File

@@ -333,7 +333,7 @@ PHASE_RESULT:
 ### Receives From:
 - **RA (Requirements Analyst)**: "Definitive requirements, version X.Y.Z"
 - Used to structure plan
-- **Orchestrator**: "Continue planning with iteration X"
+- **Main Flow**: "Continue planning with iteration X"
 - Used to update plan for extensions
 ### Sends To:

View File

@@ -427,7 +427,7 @@ This section documents auto-generated requirements by the RA agent.
 ### Integration Notes
-- Self-enhancement is **internal to RA agent** - no orchestrator changes needed
+- Self-enhancement is **internal to RA agent** - no main flow changes needed
 - Read-only access to codebase and cycle state required
 - Enhanced requirements are **transparently marked** for user review
 - User can accept, modify, or reject enhanced requirements in next iteration

View File

@@ -420,7 +420,7 @@ PHASE_RESULT:
 ### Sends To:
 - **CD (Developer)**: "These tests are failing, needs fixes"
 - Used for prioritizing work
-- **Orchestrator**: "Quality report and recommendations"
+- **Main Flow**: "Quality report and recommendations"
 - Used for final sign-off
 ## Quality Standards

View File

@@ -0,0 +1,313 @@
# EnsoAI → CCW Frontend Integration Plan (Architecture Review / Multi-CLI / Remote Connectivity / Feature Inventory)
Date: 2026-02-09
Goal: bring the core "multi-channel intelligent parallel workflow" experience of `G:\github_lib\EnsoAI` into `D:\Claude_dms3\ccw\frontend` (Web Dashboard) as a planned integration, staying within CCW's current capability boundaries.
> Note: because the `read_file` tool in this environment can only read `D:\Claude_dms3`, EnsoAI code references come from local search results and PowerShell reads; this does not affect the accuracy of the conclusions.
---
## 1) EnsoAI Architecture at a Glance (the "architecture skeleton" you asked about)
EnsoAI is an Electron desktop application with the typical three-part split:
- **Main Process**: `G:\github_lib\EnsoAI\src\main`
  - IPC handlers, system service integration, Git/PTY/remote sharing/proxy, Claude ecosystem integration (Provider, MCP, IDE Bridge)
- **Preload (bridge layer)**: `G:\github_lib\EnsoAI\src\preload`
  - Exposes `window.electronAPI.*` to the renderer (settings, hapi, cloudflared, mcp, git, terminal, etc.)
- **Renderer (UI)**: `G:\github_lib\EnsoAI\src\renderer`
  - React UI: Worktree/Git, Monaco editor, xterm terminal, multi-agent chat, settings panels, etc.
- **Shared**: `G:\github_lib\EnsoAI\src\shared`
  - types/i18n/IPC channels, etc.
Core idea: **ground "multi-agent parallelism" in "multiple worktrees + multiple terminal sessions + resumable AI sessions"**, and ship a bundle of system integrations (Claude IDE Bridge / MCP / remote sharing / proxy).
---
## 2) How EnsoAI Invokes Multiple CLIs (two key paths)
### 2.1 Non-interactive task invocation (more like a "tool/capability call")
Entry point: `G:\github_lib\EnsoAI\src\main\services\ai\providers.ts`
- Selects the CLI by provider:
  - Claude: `claude -p --output-format ... --model ... [--session-id] [--no-session-persistence]`
  - Codex: `codex exec -m <model> ...` (prompt goes over stdin)
  - Gemini: `gemini -o ... -m ... --yolo` (prompt goes over stdin)
  - Cursor: `agent -p --output-format ... --model ...` (some Claude flags unsupported)
- Unified strategy: **spawn wrapped in the user's shell**, with the prompt always written to stdin; includes kill logic and output parsing (notably version compatibility for Claude's JSON output).
Typical use cases: commit message generation, code review, branch naming, and other "non-interactive one-shot tasks".
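A rough TypeScript sketch of this spawn-through-shell pattern (not EnsoAI's actual code; the helper name, option shape, and default shell are illustrative):

```ts
import { spawn } from 'node:child_process';

interface OneShotTask {
  command: string; // e.g. 'claude -p --output-format json --model ...'
  prompt: string;  // always delivered over stdin, never as an argv item
  shell?: string;  // the user's shell, e.g. '/bin/bash'
}

// Spawn the provider CLI through the user's shell, feed the prompt via stdin,
// collect stdout, and expose a kill handle for cancellation.
function runOneShot(task: OneShotTask): { result: Promise<string>; kill: () => void } {
  const child = spawn(task.shell ?? '/bin/bash', ['-lc', task.command]);
  child.stdin.write(task.prompt);
  child.stdin.end();

  let stdout = '';
  child.stdout.on('data', (chunk) => { stdout += chunk.toString(); });

  const result = new Promise<string>((resolve, reject) => {
    child.on('error', reject);
    child.on('close', (code) =>
      code === 0 ? resolve(stdout) : reject(new Error(`CLI exited with code ${code}`)),
    );
  });
  return { result, kill: () => child.kill('SIGTERM') };
}
```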
### 2.2 Interactive terminal invocation (more like a "conversation / operator console")
Entry point: `G:\github_lib\EnsoAI\src\renderer\components\chat\AgentTerminal.tsx`
Key points:
- **Session resume**: `--resume <id>` / `--session-id <id>` (with compatibility handling for the Cursor/Claude "initialized" differences)
- **IDE integration**: appends `--ide` for Claude
- **Environment wrapper**:
  - `native`: run `<agentCommand> ...` directly
  - `hapi`: `hapi <agent> ...` or `npx -y @twsxtd/hapi <agent> ...`, injecting `CLI_API_TOKEN`
  - `happy`: `happy <agent> ...` (same idea)
- **Cross-platform compatibility**:
  - WSL: `wsl.exe -e sh -lc ...` to enter a login shell
  - PowerShell: `& { <command> }` so that `--session-id` is not parsed as a PowerShell argument
- **tmux support (non-Windows + Claude)**: wraps the session in tmux so it can be re-attached after the UI disconnects
Typical use cases: parallel terminal-style multi-agent conversations, cross-session resume, long-running resident tasks.
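A rough sketch of how a command line could be assembled from these options (illustrative only: the function and field names are not EnsoAI's, and real flag handling is per tool and per version):

```ts
type EnvWrapper = 'native' | 'hapi' | 'happy';

interface InteractiveLaunch {
  agentCommand: string; // e.g. 'claude' or 'codex'
  resumeId?: string;    // maps to --resume / --session-id depending on the CLI
  wrapper: EnvWrapper;
  useIde?: boolean;     // Claude only: append --ide
}

// Assemble the interactive agent command, prefixing the chosen environment wrapper.
function buildInteractiveCommand(launch: InteractiveLaunch): string {
  const parts: string[] = [];
  if (launch.wrapper === 'hapi') parts.push('hapi');
  if (launch.wrapper === 'happy') parts.push('happy');
  parts.push(launch.agentCommand);
  if (launch.resumeId) parts.push('--resume', launch.resumeId);
  if (launch.useIde) parts.push('--ide');
  return parts.join(' ');
}

// e.g. "hapi claude --resume abc123 --ide"
console.log(buildInteractiveCommand({ agentCommand: 'claude', wrapper: 'hapi', resumeId: 'abc123', useIde: true }));
```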
### 2.3 CLI detection/installation ("availability governance" for multiple CLIs)
- Detection: `G:\github_lib\EnsoAI\src\main\services\cli\CliDetector.ts`
  - Built-in: `claude/codex/droid/gemini/auggie/cursor-agent/opencode`
  - Runs `<cmd> --version` via PTY (with a relaxed timeout on Windows)
- Installation (EnsoAI's own CLI, used for opening directories etc.): `G:\github_lib\EnsoAI\src\main\services\cli\CliInstaller.ts`
  - Cross-platform script generation (macOS/Windows/Linux), plus possible admin-privilege handling
---
## 3) What EnsoAI's "remote connectivity / system integration" actually does
### 3.1 Claude IDE Bridge (IDE integration for the Claude Code CLI)
Entry point: `G:\github_lib\EnsoAI\src\main\services\claude\ClaudeIdeBridge.ts`
Mechanism highlights:
- Starts a local WebSocket server (bound to 127.0.0.1, random port)
- Writes a lock file to:
  - `~/.claude/ide/<port>.lock` (or `CLAUDE_CONFIG_DIR/ide/<port>.lock`)
  - Payload includes: `pid/workspaceFolders/ideName/transport/ws/runningInWindows/authToken`
- Claude Code clients must send headers when connecting:
  - `x-claude-code-ide-authorization: <authToken>`
  - Optional `x-claude-code-workspace` to assist routing
- Can broadcast notifications to Claude Code:
  - `selection_changed`
  - `at_mentioned`
- Injects env vars for Claude CLI child processes:
  - `ENABLE_IDE_INTEGRATION=true`
  - `CLAUDE_CODE_SSE_PORT=<port>` (used together with `claude --ide`)
This is system-integration-layer work, not ordinary API configuration; what it solves is **state interchange between the Claude Code CLI and the IDE/host**.
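A minimal sketch of the lock-file half of that handshake (the directory and payload fields follow the list above; everything else, including the helper name, is illustrative):

```ts
import { mkdirSync, writeFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

interface IdeLockPayload {
  pid: number;
  workspaceFolders: string[];
  ideName: string;
  transport: 'ws';
  runningInWindows: boolean;
  authToken: string;
}

// Write the discovery lock file that Claude Code scans for; the file name is the WS port.
function writeIdeLock(port: number, payload: IdeLockPayload): string {
  const dir = process.env.CLAUDE_CONFIG_DIR
    ? join(process.env.CLAUDE_CONFIG_DIR, 'ide')
    : join(homedir(), '.claude', 'ide');
  mkdirSync(dir, { recursive: true });
  const lockPath = join(dir, `${port}.lock`);
  writeFileSync(lockPath, JSON.stringify(payload, null, 2), 'utf8');
  return lockPath;
}
```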
### 3.2 MCP server management (wired into Claude ecosystem config)
Entry point: `G:\github_lib\EnsoAI\src\main\services\claude\McpManager.ts`
- Reads/writes `mcpServers` in `~/.claude.json`
- Supports:
  - stdio MCP: `command/args/env` (editable in the UI)
  - HTTP/SSE MCP: `type/url/headers` (generally kept as-is to avoid breaking existing config)
### 3.3 Remote sharing (Hapi) and public exposure (Cloudflared)
- Hapi: `G:\github_lib\EnsoAI\src\main\services\hapi\HapiServerManager.ts` + `src/main/ipc/hapi.ts`
  - `hapi server`, with fallback `npx -y @twsxtd/hapi server`
  - Env: `WEBAPP_PORT/CLI_API_TOKEN/TELEGRAM_BOT_TOKEN/WEBAPP_URL/ALLOWED_CHAT_IDS`
  - The UI copy is explicit: **remote sharing of agent sessions via Web + Telegram**
- Cloudflared: `G:\github_lib\EnsoAI\src\main\services\hapi\CloudflaredManager.ts`
  - Installed into `userData/bin` (avoiding the read-only asar)
  - Supports quick tunnels / token-auth tunnels (optionally `--protocol http2`)
### 3.4 Proxy
Entry point: `G:\github_lib\EnsoAI\src\main\services\proxy\ProxyConfig.ts`
- Electron session-level proxy + env-var injection for child processes (`HTTP(S)_PROXY` / `NO_PROXY`)
- Built-in proxy connectivity test (HEAD requests against several probe URLs)
---
## 4) Full EnsoAI Feature Inventory (by module)
> This is the "list every feature" summary view; the integration plan below maps against it.
### 4.1 Multi-agent / multi-CLI (core selling point)
- Parallel multi-agent terminal sessions (Claude/Codex/Gemini, plus detection of Droid/Auggie/Cursor/OpenCode)
- Session persistence and resume (session-id / resume)
- CLI availability detection (installed/version/timeout)
- Environment wrappers (native + hapi/happy) + tmux + WSL/PowerShell compatibility
### 4.2 Git / worktree workflow (core vehicle)
- Worktree create/switch/delete; branches are first-class citizens
- Source Control panel:
  - File change list, stage/unstage, commit, history, branch switching
  - Diff viewer / diff review modal
  - Three-pane merge editor (conflict resolution)
  - Remote sync (fetch/pull/push, etc.)
### 4.3 Editor and file system
- Monaco editor, multiple tabs, drag-to-reorder
- File tree CRUD (create/rename/delete)
- Preview (Markdown / PDF / image)
- Unsaved-change protection, conflict prompts
### 4.4 AI-assisted development
- AI commit message generation
- AI code review (review/refinement around diffs)
- Branch naming assistance (inferred)
### 4.5 Claude ecosystem integration
- Claude provider management (watches/applies `~/.claude/settings.json`)
- MCP server management (`~/.claude.json`, stdio + HTTP/SSE)
- Prompts / plugins management
- Claude IDE Bridge (WS + lock file + selection/@ notifications)
### 4.6 Remote and networking
- Hapi remote sharing (Web + Telegram)
- Cloudflared tunnels to expose local services
- Proxy settings and testing
### 4.7 General experience
- Multiple windows, multiple workspaces
- Theme sync (terminal theme)
- Keyboard shortcuts / command palette
- Settings persisted as JSON
- Web inspector, update notifications
---
## 5) CCW Frontend Baseline (reusable alignment points with EnsoAI)
Key vehicles/foundations the CCW frontend already has:
- **CLI Viewer (multi-pane)**: `ccw/frontend/src/pages/CliViewerPage.tsx`
  - Already supports split/grid layouts, pane focus, execution recovery, and WebSocket output streaming
- **CLI Endpoints / Installations**
  - `ccw/frontend/src/pages/EndpointsPage.tsx` (litellm/custom/wrapper)
  - `ccw/frontend/src/pages/InstallationsPage.tsx` (install/upgrade/uninstall)
- **MCP Manager (incl. Cross-CLI)**: `ccw/frontend/src/pages/McpManagerPage.tsx`
- **Backend CLI executor stack**: `ccw/src/tools/cli-executor-core.ts` + `ccw/src/core/routes/cli-routes.ts`
  - Already covers active-execution recovery, history/sqlite, tool availability, wrapper/tools routes, etc.
Key differences between CCW and EnsoAI (these determine the integration approach):
- EnsoAI is a "desktop IDE/workbench"; CCW is a "Web Dashboard + Orchestrator"
- EnsoAI's worktree/Git/editor/PTY features are heavyweight IDE assets; CCW does not carry these today (unless we explicitly expand scope)
- EnsoAI's IDE Bridge / Hapi / Cloudflared sit in the system integration layer; CCW would need new backend capabilities to match them
---
## 6) Integration Strategy (bring EnsoAI's strengths into CCW without turning CCW into an IDE)
Suggested priorities:
1) **Port the "multi-agent parallel dispatch + resumable output experience" first** (a close match for CCW's existing CLI viewer / CLI executor)
2) Then consolidate the **"multi-CLI governance and config sync" experience** (align the information architecture across Installations/Endpoints/MCP)
3) Only then consider the **system integration layer** (Claude IDE Bridge, remote sharing, tunnel/proxy, etc.)
---
## 7) Roadmap (phased landing into `ccw/frontend`)
### Phase 0 — Alignment and design (1-2 days)
- Deliverable: unified terminology and information architecture
  - Definitions of Agent / Tool / Endpoint / Wrapper
  - "Parallel sessions" = multiple concurrent executions + multi-pane display in the CLI viewer
- Deliverable: UI sketches (Agent Matrix + single-agent details + quick actions)
### Phase 1 (MVP) — Agent Matrix (closest to EnsoAI's selling point, and most feasible)
Goal: manage parallel multi-agent execution on one screen in CCW, and open multiple executions into multiple CLI Viewer panes with one click.
Frontend deliverables (suggested):
- New page: `/agents` (or a "Matrix mode" folded into `/cli-viewer`)
- Cards: each agent (claude/codex/gemini/...) shows installed/version/status
- Quick actions:
  - Run prompt (choose mode: analysis/plan/review…)
  - Open in CLI Viewer (auto-assign pane)
  - Resume / View history (open the corresponding conversation/execution)
- Parallel launch: select several agents at once → trigger concurrent executions → create viewer tabs in step
- Reuse: `cliStreamStore` + `viewerStore`
Backend dependencies (minimized):
- If the existing `/api/cli/*` already covers execution and active-execution recovery: only an "agents list/status" aggregation endpoint is needed (can be assembled from existing installations/status)
Acceptance criteria:
- The same prompt can run against multiple agents in parallel, with output reliably landing in multiple CLI Viewer panes
- Running executions can still be recovered after a page refresh (reusing `/api/cli/active`)
- Every execution is traceable in history (existing sqlite/history stack)
### Phase 2 — Wrapper/endpoint experience consolidation (aligned with EnsoAI's environment-wrapper idea)
Goal: map EnsoAI's `native/hapi/happy/tmux/wsl` idea onto CCW's endpoint/wrapper configuration, forming a consistent "runtime environment selection" experience.
Frontend deliverables (suggested):
- Add to Agent Matrix cards:
  - A "runtime environment" dropdown (native / wrapper endpoints)
  - "Session policy" options (new / resume / no-persistence, etc., depending on backend executor support)
- Annotate EndpointsPage with which agents/tools each endpoint applies to (so users who create a wrapper know its scope)
Backend dependencies:
- The capability of wrapper endpoints must be clarified: is it equivalent to EnsoAI's `hapi/happy` prefix wrapping? (If so, the CLI executor's command builder must support this class of wrapper)
### Phase 3 — Claude IDE Bridge (optional, large change, system integration layer)
Goal: implement an EnsoAI-like IDE Bridge on the CCW (Node service) side, with a status panel in the frontend.
Frontend deliverables (suggested):
- Add "IDE Integration (Claude Code)" to Settings:
  - Bridge enable/disable
  - Current port, lock file path, connection status, most recent selection/@ notification
  - "Copy token / copy diagnostics" buttons
Backend dependencies:
- The CCW server needs a WS server plus lock-file writing logic (see EnsoAI's `ClaudeIdeBridge.ts`)
- Claude Code CLI compatibility requirements must be clarified (protocol version, headers, env keys)
### Phase 4 — Remote sharing and public exposure (optional, more of a product feature)
Goal: turn EnsoAI's Hapi + Cloudflared capabilities into CCW "remote access / collaboration" capabilities.
Frontend deliverables (suggested):
- Add "Remote Sharing" to Settings:
  - Start/stop service, port, token, allowlist
  - Cloudflared install/start/stop, URL display, one-click copy
Backend dependencies:
- Requires hapi/cloudflared management modules (or a more CCW-native option, e.g. exposing only the CCW server via cloudflared)
- Security boundaries must be defined (authentication, token rotation, log redaction, closed by default)
---
## 8) EnsoAI Feature → CCW Status → Integration Recommendation (mapping table)
| EnsoAI feature | CCW status | Suggested integration (priority) |
|---|---|---|
| Parallel multi-agent sessions | Has CLI viewer + executor, but no "one-screen matrix" | Phase 1: add Agent Matrix + automatic pane assignment |
| CLI detection/installation | Has InstallationsPage | Phase 1: reuse Installations data in the Matrix |
| Environment wrapper (hapi/happy/tmux/wsl) | Has a wrapper-endpoints concept | Phase 2: wire wrapper endpoints explicitly into agent execution selection |
| Claude MCP server management | Has MCP Manager (incl. Cross-CLI) | Keep enhancing the existing feature (templates/sync policies) |
| Claude IDE Bridge | None yet | Phase 3 (optional): system-integration module + Settings panel |
| Hapi remote sharing + Cloudflared | None yet | Phase 4 (optional) |
| Worktree/Git/editor/merge | None (and in the IDE domain) | Not integrated by default; would need its own project if pursued |
---
## 9) Next Steps (if you want me to continue)
Pick an entry point and I can start implementing directly in `ccw/frontend`:
1) **Phase 1 (recommended)**: add the `/agents` page (Agent Matrix), reusing the existing `/api/cli/*` and CLI viewer
2) **Phase 2**: connect wrapper endpoints to "agent runtime environment selection"
3) **Phase 3/4**: confirm product boundaries and security policy first, then design the backend

View File

@@ -0,0 +1,232 @@
# CCW Issue Board × Native CLI Windows × Queue Management × Orchestrator (tmux-like) Integration Plan
Date: 2026-02-09
Input reference: `.workflow/.analysis/ANL-codemoss-issue-panel-2026-02-09/report.md` (CodeMoss board/session mechanics)
## 0) Key decisions already confirmed (from this discussion)
- **Terminal form factor**: web-embedded terminal (`xterm.js`) + backend **PTY** (TTY behavior is required)
- **Default session shell**: start `bash` first (on Windows, prefer WSL bash, then Git-Bash, and only degrade when neither is available)
- **Continuity**: use `resumeKey` as the "logical session key" (mappable to a CLI's `--resume/--continue/--session-id`, etc.)
- **Queue enhancement priority**: build the **execution side** first (queue item → deliver to a CLI session and execute); the management side comes later
- **resumeKey mapping**: support both strategies (`nativeResume` and `promptConcat`, see 4.6)
## 1) The product shape to build (in one sentence)
Turn CCW's `Issues + Queue + Orchestrator + CLI Viewer` into a unified **Issue Workbench**:
- Center: Issue board (Kanban) + queue management (visual / executable / reorderable)
- Right: switchable **CLI session windows** (Claude/Codex/Gemini/…) hosted in a web-embedded terminal, able to send messages/commands to different sessions
- Orchestrator: can **route** node output/instructions to a specific CLI session (like tmux send-keys into a given pane/session)
---
## 2) Key foundations CCW already has (directly reusable)
### 2.1 IssueHub's "right-side drawer" pattern
- IssueHub tabs: `ccw/frontend/src/components/issue/hub/IssueHubTabs.tsx` (currently issues/queue/discovery)
- IssueDrawer (right-side drawer + tabs): `ccw/frontend/src/components/issue/hub/IssueDrawer.tsx`
- SolutionDrawer (right-side drawer): `ccw/frontend/src/components/issue/queue/SolutionDrawer.tsx`
This keeps the UI cost of "a CLI window embedded in a right-side drawer" low: add one tab, or swap in a more general drawer.
### 2.2 Kanban DnD component
- `ccw/frontend/src/components/shared/KanbanBoard.tsx` (@hello-pangea/dnd)
Usable directly for the Issue Board (and a Queue Board).
### 2.3 The L0 form of "native CLI output" already exists (streaming + WS)
- Backend: `ccw/src/core/routes/cli-routes.ts`
  - `/api/cli/execute` supports streaming and broadcasts `CLI_EXECUTION_STARTED` + `CLI_OUTPUT` over WS (plus completed/error)
- Frontend: `ccw/frontend/src/pages/CliViewerPage.tsx` + `ccw/frontend/src/stores/cliStreamStore.ts`
  - Multi-pane output, execution recovery (`/api/cli/active`)
This capability is still valuable (e.g. one-shot non-TTY execution, replay, aggregated output). But this MVP is confirmed to need a real terminal experience (**PTY + xterm**), so a new "CLI Session (PTY)" lifecycle with a bidirectional WS channel will be added.
### 2.4 The orchestrator's slide-out ExecutionMonitor offers reusable interaction patterns
- `ccw/frontend/src/pages/orchestrator/ExecutionMonitor.tsx`
Can serve as the interaction template for the Issue board's right-side "CLI / execution monitoring" panel (layout, scrolling, auto-follow, fixed/variable width).
---
## 3) Reference: key CodeMoss design points (worth absorbing)
From `.workflow/.analysis/ANL-codemoss-issue-panel-2026-02-09/report.md`:
1) **Kanban task data carries execution-related fields directly** (threadId/engineType/modelId/autoStart/branchName, etc.)
2) **Dragging into in-progress can trigger automatic actions** (start a session/execution)
3) The "native CLI" behaves more like a **long-lived process/protocol** (codex app-server JSON-lines); the UI is just a consumer of events
CCW's differing reality:
- CCW is a Web Dashboard, not Tauri/Electron, but we still choose **node-pty + xterm.js** (the backend owns the PTY sessions, the frontend attaches for display) to satisfy the "native CLI window / tmux-like routing" requirement.
---
## 4) Target capability breakdown (feature checklist)
### 4.1 Issue Board (Kanban)
- New tab: `board` (i.e. `/issues?tab=board`)
- Columns: `open` / `in_progress` / `resolved` / `completed` (`closed` can be its own column or hidden behind a filter)
- Supports:
  - In-column ordering (sortOrder)
  - Cross-column drag (updates `issue.status`)
  - Selecting a card → right-side Issue Workbench drawer
- Automatic actions (aligned with CodeMoss):
  - Dragging from a non-`in_progress` column into `in_progress`: triggers `onDragToInProgress(issue)` (configurable)
### 4.2 Right-side "native CLI window" (PTY + xterm.js)
Add a `Terminal`/`CLI` tab to the right drawer (or upgrade IssueDrawer into an "IssueWorkbenchDrawer"):
- Session list (per issue / per queue / global)
- Multi-CLI (Claude/Codex/Gemini/…)
- Each session is bound to:
  - `sessionKey`: the backend PTY session ID (used for attach/close/send/resize)
  - `resumeKey`: logical continuity (used to derive the CLI's resume/continue arguments, or to build command templates within the same session)
  - `tool/model/workingDir/envWrapper` (aligned with the capabilities in `ENSOAI_INTEGRATION_PLAN.md`)
- Terminal capabilities (MVP):
  - attach: open/switch to a session's xterm view
  - send: send text to the session (send-keys / paste), optionally with an automatic newline
  - resize: sync the PTY size when the frontend dimensions change
  - close: close the session, releasing the process and resources
- Actions:
  - `Send`: deliver an instruction/message straight into the current PTY session (tmux send-keys semantics)
  - `Open in CLI Viewer`: optional (if `/api/cli/execute` replay/aggregation is kept, open the corresponding execution; otherwise open the standalone CLI Viewer's "session view")
### 4.3 Queue management enhancements (Queue Workbench)
The queue already has an API structure (multi-queue index + active_queue_ids, merge/split/activate, etc.; see `ccw/src/core/routes/issue-routes.ts`), but the frontend is still mostly "display plus a few operations".
Suggested enhancements:
- Multi-queue: list/switch active queues (UI wired to `/api/queue/history` + `/api/queue/activate`)
- Visual reordering:
  - List reordering (grouped by execution_group, drag reorder within a group)
  - Dragging an item into a different execution_group (updates execution_group + execution_order)
  - Dependency graph / blocking reasons (depends_on / blocked)
- Execution integration:
  - Per-item `Execute in Session` (deliver the queue item straight to a chosen CLI session and execute it; confirmed as the priority this round)
  - Per-item `Send to Orchestrator` (create/trigger a flow execution)
### 4.4 "tmux-like routing" between the orchestrator and CLI sessions
Goal: the orchestrator should not only run one-shot CLIs but also "send messages/instructions to a specific session".
Suggested abstraction: `CliMux` (CLI Multiplexer):
- **MVP (confirmed this round): `CliMuxPersistent`**: built around PTY sessions, providing `send(sessionKey, text)` / `resize` / `close`
- **Supplementary (optional): `CliMuxStateless`**: keep `/api/cli/execute` as the non-TTY execution and audit/replay channel
Minimal-change approach on the orchestrator side:
- Keep the node data model unified (prompt-template) but add optional fields:
  - `targetSessionKey?: string` (or an `issueId` + `sessionName` combination)
  - `delivery?: 'newExecution' | 'sendToSession'` (default `newExecution`)
- When FlowExecutor runs a node:
  - `delivery=newExecution`: keep using the existing `executeCliTool`
  - `delivery=sendToSession`: call `CliMux.send(targetSessionKey, instruction)`
This yields the ability to land a message in a specific pane/session, tmux-style.
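A minimal sketch of that delivery branch (field names follow the plan above; the executor and `executeCliTool` signatures are placeholders, not CCW's current code):

```ts
interface CliMux {
  send(sessionKey: string, text: string): Promise<void>;
  resize(sessionKey: string, cols: number, rows: number): Promise<void>;
  close(sessionKey: string): Promise<void>;
}

interface FlowNode {
  instruction: string;
  targetSessionKey?: string;
  delivery?: 'newExecution' | 'sendToSession';
}

// Route a node either to a fresh one-shot execution or into an existing PTY session.
async function runNode(
  node: FlowNode,
  mux: CliMux,
  executeCliTool: (instruction: string) => Promise<void>,
): Promise<void> {
  if (node.delivery === 'sendToSession' && node.targetSessionKey) {
    // tmux send-keys semantics: the instruction lands in the live session.
    await mux.send(node.targetSessionKey, node.instruction);
    return;
  }
  // Default path: the existing one-shot CLI execution.
  await executeCliTool(node.instruction);
}
```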
### 4.5 Backend CLI Session (PTY) protocol draft (for frontend/backend alignment)
Goal: the frontend's xterm works by "attaching to a sessionKey"; Queue/Orchestrator work by "sending to a sessionKey".
- REST (proposed):
  - `POST /api/cli-sessions`: create a session (in: `tool/model/workingDir/envWrapper/resumeKey/resumeStrategy/shell=bash`; out: `sessionKey`)
  - `GET /api/cli-sessions`: list sessions (for switching/recovery)
  - `POST /api/cli-sessions/:sessionKey/send`: send text (supports `appendNewline`)
  - `POST /api/cli-sessions/:sessionKey/execute`: build and send an "executable command" from tool + resumeStrategy (the more robust path for Queue/Orchestrator)
  - `POST /api/cli-sessions/:sessionKey/resize`: sync rows/cols
  - `POST /api/cli-sessions/:sessionKey/close`: close and reclaim
- WS events (proposed):
  - `CLI_SESSION_CREATED` / `CLI_SESSION_CLOSED`
  - `CLI_SESSION_OUTPUT`: `sessionKey`, `data`, `stream=stdout|stderr`
  - `CLI_SESSION_ERROR`: `sessionKey`, `message`
- Compatibility strategy:
  - Keep the existing `/api/cli/execute` + `CLI_OUTPUT`: for non-TTY one-shot execution, auditable replay, and compatibility with the existing `CliViewerPage`
  - The new "session view" can initially reuse `cliStreamStore`'s pane structure (extending the key from `transactionId` to `sessionKey`)
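A thin frontend client sketch against these proposed routes (request and response shapes are assumptions based on the draft above, not an existing API):

```ts
// Assumed response shapes for the proposed CLI Session (PTY) routes.
export interface CliSession {
  sessionKey: string;
  tool?: string;
  createdAt: string;
}

async function post<T>(path: string, body: unknown): Promise<T> {
  const res = await fetch(path, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${path} failed: ${res.status}`);
  return (await res.json()) as T;
}

export const createCliSession = (opts: { tool?: string; workingDir?: string; resumeKey?: string }) =>
  post<{ session: CliSession }>('/api/cli-sessions', opts);

export const sendCliSessionText = (sessionKey: string, text: string, appendNewline = true) =>
  post<{ ok: boolean }>(`/api/cli-sessions/${sessionKey}/send`, { text, appendNewline });

export const resizeCliSession = (sessionKey: string, cols: number, rows: number) =>
  post<{ ok: boolean }>(`/api/cli-sessions/${sessionKey}/resize`, { cols, rows });

export const closeCliSession = (sessionKey: string) =>
  post<{ ok: boolean }>(`/api/cli-sessions/${sessionKey}/close`, {});
```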
### 4.6 `resumeKey` mapping strategies (implement both)
Positioning of `resumeKey`: a "logical session key" shared across UI/Queue/Orchestrator. The same `resumeKey` may map to either of:
- **Strategy A: `nativeResume` (preferred)**
  - The CLI saves and restores context itself (e.g. `--resume/--continue/--session-id`)
  - Pros: more reliable context and more stable output format; suits CLIs with an explicit session concept, such as Claude/Codex
  - Risks: flags/behavior vary widely across CLIs, requiring a per-tool adapter and version-compatibility tracking
- **Strategy B: `promptConcat` (must be supported)**
  - CCW maintains `resumeKey -> transcript/context` itself, and each execution concatenates "history + current instruction" into the prompt
  - Pros: less dependence on the CLI; continuity even when the CLI has no resume support; suits Gemini and other tools
  - Risks: token cost, context truncation policy, and sensitive data/audit concerns (retention must be defined)
Implementation suggestion: add a backend `ToolAdapter` (per `tool`, it yields the command template + resume arguments + prompt injection method); `/api/cli-sessions/:sessionKey/execute` always goes through the adapter, so Queue/Orchestrator never assemble shell commands directly.
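One possible shape for that adapter (a sketch under the assumptions above; the concrete flags per tool still need the version-by-version verification called out in section 7):

```ts
type ResumeStrategy = 'nativeResume' | 'promptConcat';

interface BuildArgs {
  prompt: string;
  resumeKey?: string;
  resumeStrategy: ResumeStrategy;
  priorTranscript?: string; // only consulted by promptConcat
}

interface ToolAdapter {
  tool: string;
  // Produce the command to run in the PTY plus the text to feed over stdin.
  buildCommand(args: BuildArgs): { command: string; stdin: string };
}

// Illustrative Claude adapter: nativeResume maps the resumeKey to a session flag,
// promptConcat folds the prior transcript into the prompt instead.
const claudeAdapter: ToolAdapter = {
  tool: 'claude',
  buildCommand({ prompt, resumeKey, resumeStrategy, priorTranscript }) {
    if (resumeStrategy === 'nativeResume' && resumeKey) {
      return { command: `claude -p --session-id ${resumeKey}`, stdin: prompt };
    }
    const stdin = priorTranscript ? `${priorTranscript}\n\n${prompt}` : prompt;
    return { command: 'claude -p', stdin };
  },
};
```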
---
## 5) Data model suggestions (aligned with CodeMoss, but keeping CCW lean)
### 5.1 Issue extension fields (suggested for persistence in the backend issue schema)
Add to `Issue`:
- `sortOrder?: number` (for in-column ordering on the board)
- `sessions?: Array<{ sessionKey: string; resumeKey: string; tool: string; model?: string; workingDir?: string; envWrapper?: string; createdAt: string }>`
Note:
- The backend `ccw/src/commands/issue.ts` already shows statuses such as `status: 'queued'`; the frontend `Issue['status']` must be aligned, otherwise the board/filters will miss statuses.
### 5.2 Binding a QueueItem to a session (optional)
- `QueueItem.sessionKey?: string` (allows pinning a queue item's delivery to a specific session)
---
## 6) Phased delivery (fastest first)
### Phase 1 (2-6 days): PTY session foundation + drawer-embedded terminal + queue "execution side"
- [BE] Add `CLI Session (PTY)`: create/list/attach/close/send/resize; bidirectional WS (output + input + resize)
- [FE] Add a `Terminal` tab to IssueDrawer: xterm attaches to a session, switch sessions, send text, resize, close
- [FE] Add `Execute in Session` to QueuePanel: pick or create a sessionKey, render the queue item as an executable instruction, and deliver it to the session
- [Auto] Support "drag to in_progress auto-executes": triggers `Execute in Session` (configurable toggle)
### Phase 2 (2-5 days): Issue Board (Kanban) + queue management side (multi-queue / reorder / grouping)
- [UI] Add `board` to IssueHubTabs; IssueBoard reuses `KanbanBoard` (status columns / ordering / drag)
- [UI] Add multi-queue switching, execution_group grouping with drag reorder, and blocked/depends_on visualization to QueuePanel
- [API] If missing: endpoints for persisting cross-group updates/ordering (distinct from the existing `/api/queue/reorder`)
### Phase 3 (3-8 days): orchestrator + CliMux integration (tmux-like routing)
- [Core] Introduce `CliMuxPersistent` (PTY sessions): send/resize/close
- [Orchestrator] Add `targetSessionKey/delivery` to prompt-template
- [UI] Add a "route to session" quick action to ExecutionMonitor (session picker)
### Phase 4 (long-term): security, isolation, observability, remote connectivity
- Resource isolation and security policy (command/path allowlists, audit logs, rate limiting)
- Session reclamation (idle timeout, max concurrency, OOM protection)
- Remote connectivity (optional): session sharing, read-only attach, bridging to external agents/daemons (aligned with EnsoAI's Hapi/Cloudflared approach)
---
## 7) Undecided (to clarify next round)
1) How `bash` lands (cross-platform):
   - Linux/macOS: `bash -l` (or the user-configured default shell)
   - Windows: prefer `wsl.exe -e bash -li`, then Git-Bash (`bash.exe`); decide whether degrading to `pwsh` is allowed when neither is available
2) Per-tool "nativeResume" flags and trigger mechanics (need version-by-version verification):
   - Claude: choosing between `--resume/--continue/--session-id`, and how to reliably trigger one "send" under a PTY (command-style `-p` vs interactive send-keys)
   - Codex/Gemini: their respective session/resume flags and output formats (are they reliably machine-readable?)
3) Security boundaries:
   - Whether per-workspace path allowlists, command allowlists, and audit logs are needed (especially for automatic delivery from Queue/Orchestrator)

View File

@@ -55,7 +55,9 @@
"tailwind-merge": "^2.5.0", "tailwind-merge": "^2.5.0",
"web-vitals": "^5.1.0", "web-vitals": "^5.1.0",
"zod": "^4.1.13", "zod": "^4.1.13",
"zustand": "^5.0.0" "zustand": "^5.0.0",
"xterm": "^5.3.0",
"xterm-addon-fit": "^0.8.0"
}, },
"devDependencies": { "devDependencies": {
"@playwright/test": "^1.57.0", "@playwright/test": "^1.57.0",

View File

@@ -0,0 +1,444 @@
// ========================================
// Issue Board Panel
// ========================================
// Kanban board view for issues (status-driven) with local ordering.
import { useCallback, useEffect, useMemo, useState } from 'react';
import { useIntl } from 'react-intl';
import type { DropResult } from '@hello-pangea/dnd';
import { AlertCircle, LayoutGrid } from 'lucide-react';
import { Card } from '@/components/ui/Card';
import { Switch } from '@/components/ui/Switch';
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from '@/components/ui/Select';
import { KanbanBoard, type KanbanColumn, type KanbanItem } from '@/components/shared/KanbanBoard';
import { IssueCard } from '@/components/shared/IssueCard';
import { IssueDrawer } from '@/components/issue/hub/IssueDrawer';
import { cn } from '@/lib/utils';
import { useIssues, useIssueMutations } from '@/hooks';
import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';
import { createCliSession, executeInCliSession } from '@/lib/api';
import type { Issue } from '@/lib/api';
type IssueBoardStatus = Issue['status'];
type ToolName = 'claude' | 'codex' | 'gemini';
type ResumeStrategy = 'nativeResume' | 'promptConcat';
const BOARD_COLUMNS: Array<{ id: IssueBoardStatus; titleKey: string }> = [
{ id: 'open', titleKey: 'issues.status.open' },
{ id: 'in_progress', titleKey: 'issues.status.inProgress' },
{ id: 'resolved', titleKey: 'issues.status.resolved' },
{ id: 'completed', titleKey: 'issues.status.completed' },
{ id: 'closed', titleKey: 'issues.status.closed' },
];
type BoardOrder = Partial<Record<IssueBoardStatus, string[]>>;
function storageKey(projectPath: string | null | undefined): string {
const base = projectPath ? encodeURIComponent(projectPath) : 'global';
return `ccw.issueBoard.order:${base}`;
}
interface AutoStartConfig {
enabled: boolean;
tool: ToolName;
mode: 'analysis' | 'write';
resumeStrategy: ResumeStrategy;
}
function autoStartStorageKey(projectPath: string | null | undefined): string {
const base = projectPath ? encodeURIComponent(projectPath) : 'global';
return `ccw.issueBoard.autoStart:${base}`;
}
function safeParseAutoStart(value: string | null): AutoStartConfig {
const defaults: AutoStartConfig = {
enabled: false,
tool: 'claude',
mode: 'write',
resumeStrategy: 'nativeResume',
};
if (!value) return defaults;
try {
const parsed = JSON.parse(value) as Partial<AutoStartConfig>;
return {
enabled: Boolean(parsed.enabled),
tool: parsed.tool === 'codex' || parsed.tool === 'gemini' ? parsed.tool : 'claude',
mode: parsed.mode === 'analysis' ? 'analysis' : 'write',
resumeStrategy: parsed.resumeStrategy === 'promptConcat' ? 'promptConcat' : 'nativeResume',
};
} catch {
return defaults;
}
}
function safeParseOrder(value: string | null): BoardOrder {
if (!value) return {};
try {
const parsed = JSON.parse(value) as unknown;
if (!parsed || typeof parsed !== 'object') return {};
return parsed as BoardOrder;
} catch {
return {};
}
}
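// Build one Kanban column per status: ids in the stored manual order first, then any remaining issues with that status sorted by most recent update.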
function buildColumns(
issues: Issue[],
order: BoardOrder,
formatTitle: (statusId: IssueBoardStatus) => string
): KanbanColumn<Issue & KanbanItem>[] {
const byId = new Map(issues.map((i) => [i.id, i]));
const columns: KanbanColumn<Issue & KanbanItem>[] = [];
for (const col of BOARD_COLUMNS) {
const desired = (order[col.id] ?? []).map((id) => byId.get(id)).filter(Boolean) as Issue[];
const desiredIds = new Set(desired.map((i) => i.id));
const remaining = issues
.filter((i) => i.status === col.id && !desiredIds.has(i.id))
.sort((a, b) => {
const at = a.updatedAt || a.createdAt;
const bt = b.updatedAt || b.createdAt;
return bt.localeCompare(at);
});
const items = [...desired, ...remaining].map((issue) => ({
...issue,
id: issue.id,
title: issue.title,
status: issue.status,
}));
columns.push({
id: col.id,
title: formatTitle(col.id),
items,
icon: <LayoutGrid className="w-4 h-4" />,
});
}
return columns;
}
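// Reconcile the stored order with the live issue list: drop ids whose status changed or that were deleted, and append newly seen issues.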
function syncOrderWithIssues(prev: BoardOrder, issues: Issue[]): BoardOrder {
const statusById = new Map(issues.map((i) => [i.id, i.status]));
const next: BoardOrder = {};
for (const { id: status } of BOARD_COLUMNS) {
const existing = prev[status] ?? [];
const filtered = existing.filter((id) => statusById.get(id) === status);
const present = new Set(filtered);
const missing = issues
.filter((i) => i.status === status && !present.has(i.id))
.map((i) => i.id);
next[status] = [...filtered, ...missing];
}
return next;
}
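// Move a single id within a column's order (returns the original list if the source index is empty).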
function reorderIds(list: string[], from: number, to: number): string[] {
const next = [...list];
const [moved] = next.splice(from, 1);
if (moved === undefined) return list;
next.splice(to, 0, moved);
return next;
}
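// Compose the auto-start prompt delivered to the CLI session: issue metadata, context, known solutions, then the work instruction.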
function buildIssueAutoPrompt(issue: Issue): string {
const lines: string[] = [];
lines.push(`Issue: ${issue.id}`);
lines.push(`Status: ${issue.status}`);
lines.push(`Priority: ${issue.priority}`);
lines.push('');
lines.push(`Title: ${issue.title}`);
if (issue.context) {
lines.push('');
lines.push('Context:');
lines.push(String(issue.context));
}
if (Array.isArray(issue.solutions) && issue.solutions.length > 0) {
lines.push('');
lines.push('Solutions:');
for (const s of issue.solutions) {
lines.push(`- [${s.status}] ${s.description}`);
if (s.approach) lines.push(` Approach: ${s.approach}`);
}
}
lines.push('');
lines.push('Instruction:');
lines.push(
'Start working on this issue in this repository. Prefer small, testable changes; run relevant tests; report blockers if any.'
);
return lines.join('\n');
}
export function IssueBoardPanel() {
const { formatMessage } = useIntl();
const projectPath = useWorkflowStore(selectProjectPath);
const { issues, isLoading, error } = useIssues();
const { updateIssue } = useIssueMutations();
const [order, setOrder] = useState<BoardOrder>({});
const [selectedIssue, setSelectedIssue] = useState<Issue | null>(null);
const [drawerInitialTab, setDrawerInitialTab] = useState<'overview' | 'terminal'>('overview');
const [optimisticError, setOptimisticError] = useState<string | null>(null);
const [autoStart, setAutoStart] = useState<AutoStartConfig>(() => safeParseAutoStart(null));
// Load order when project changes
useEffect(() => {
const key = storageKey(projectPath);
const loaded = safeParseOrder(localStorage.getItem(key));
setOrder(loaded);
}, [projectPath]);
// Load auto-start config when project changes
useEffect(() => {
const key = autoStartStorageKey(projectPath);
setAutoStart(safeParseAutoStart(localStorage.getItem(key)));
}, [projectPath]);
// Keep order consistent with current issues (status moves, deletions, new issues)
useEffect(() => {
setOrder((prev) => syncOrderWithIssues(prev, issues));
}, [issues]);
// Persist order
useEffect(() => {
const key = storageKey(projectPath);
try {
localStorage.setItem(key, JSON.stringify(order));
} catch {
// ignore quota errors
}
}, [order, projectPath]);
// Persist auto-start config
useEffect(() => {
const key = autoStartStorageKey(projectPath);
try {
localStorage.setItem(key, JSON.stringify(autoStart));
} catch {
// ignore quota errors
}
}, [autoStart, projectPath]);
const columns = useMemo(
() =>
buildColumns(issues, order, (statusId) => {
const col = BOARD_COLUMNS.find((c) => c.id === statusId);
if (!col) return statusId;
return formatMessage({ id: col.titleKey });
}),
[issues, order, formatMessage]
);
const idsByStatus = useMemo(() => {
const map: Record<string, string[]> = {};
for (const col of columns) {
map[col.id] = col.items.map((i) => i.id);
}
return map;
}, [columns]);
const handleItemClick = useCallback((issue: Issue) => {
setDrawerInitialTab('overview');
setSelectedIssue(issue);
}, []);
const handleCloseDrawer = useCallback(() => {
setSelectedIssue(null);
setOptimisticError(null);
}, []);
const handleDragEnd = useCallback(
async (result: DropResult, sourceColumn: string, destColumn: string) => {
const issueId = result.draggableId;
const issue = issues.find((i) => i.id === issueId);
if (!issue) return;
setOptimisticError(null);
const sourceStatus = sourceColumn as IssueBoardStatus;
const destStatus = destColumn as IssueBoardStatus;
const sourceIds = idsByStatus[sourceStatus] ?? [];
const destIds = idsByStatus[destStatus] ?? [];
// Update local order first (optimistic)
setOrder((prev) => {
const next = { ...prev };
if (sourceStatus === destStatus) {
next[sourceStatus] = reorderIds(sourceIds, result.source.index, result.destination!.index);
return next;
}
const nextSource = [...sourceIds];
nextSource.splice(result.source.index, 1);
const nextDest = [...destIds];
nextDest.splice(result.destination!.index, 0, issueId);
next[sourceStatus] = nextSource;
next[destStatus] = nextDest;
return next;
});
// Status update
if (sourceStatus !== destStatus) {
try {
await updateIssue(issueId, { status: destStatus });
// Auto action: drag to in_progress opens the drawer on terminal tab.
if (destStatus === 'in_progress' && sourceStatus !== 'in_progress') {
setDrawerInitialTab('terminal');
setSelectedIssue({ ...issue, status: destStatus });
if (autoStart.enabled) {
if (!projectPath) {
setOptimisticError('Auto-start failed: no project path selected');
return;
}
try {
const created = await createCliSession({
workingDir: projectPath,
preferredShell: 'bash',
tool: autoStart.tool,
resumeKey: issueId,
});
await executeInCliSession(created.session.sessionKey, {
tool: autoStart.tool,
prompt: buildIssueAutoPrompt({ ...issue, status: destStatus }),
mode: autoStart.mode,
resumeKey: issueId,
resumeStrategy: autoStart.resumeStrategy,
});
} catch (e) {
setOptimisticError(`Auto-start failed: ${e instanceof Error ? e.message : String(e)}`);
}
}
}
} catch (e) {
setOptimisticError(e instanceof Error ? e.message : String(e));
}
}
},
[issues, idsByStatus, updateIssue, autoStart, projectPath]
);
if (error) {
return (
<Card className="p-12 text-center">
<AlertCircle className="w-16 h-16 mx-auto text-destructive/50" />
<h3 className="mt-4 text-lg font-medium text-foreground">
{formatMessage({ id: 'issues.queue.error.title' })}
</h3>
<p className="mt-2 text-muted-foreground">{error.message}</p>
</Card>
);
}
return (
<>
<div className="mb-3 flex flex-col gap-2">
<div className="flex flex-col md:flex-row md:items-center justify-between gap-3">
<div className="flex items-center gap-2">
<Switch
checked={autoStart.enabled}
onCheckedChange={(checked) => setAutoStart((prev) => ({ ...prev, enabled: checked }))}
/>
<div className="text-sm text-foreground">
{formatMessage({ id: 'issues.board.autoStart.label' })}
</div>
</div>
<div className="flex items-center gap-2">
<Select
value={autoStart.tool}
onValueChange={(v) => setAutoStart((prev) => ({ ...prev, tool: v as ToolName }))}
disabled={!autoStart.enabled}
>
<SelectTrigger className="w-[140px]">
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="claude">claude</SelectItem>
<SelectItem value="codex">codex</SelectItem>
<SelectItem value="gemini">gemini</SelectItem>
</SelectContent>
</Select>
<Select
value={autoStart.mode}
onValueChange={(v) => setAutoStart((prev) => ({ ...prev, mode: v as 'analysis' | 'write' }))}
disabled={!autoStart.enabled}
>
<SelectTrigger className="w-[120px]">
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="analysis">analysis</SelectItem>
<SelectItem value="write">write</SelectItem>
</SelectContent>
</Select>
<Select
value={autoStart.resumeStrategy}
onValueChange={(v) => setAutoStart((prev) => ({ ...prev, resumeStrategy: v as ResumeStrategy }))}
disabled={!autoStart.enabled}
>
<SelectTrigger className="w-[160px]">
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="nativeResume">nativeResume</SelectItem>
<SelectItem value="promptConcat">promptConcat</SelectItem>
</SelectContent>
</Select>
</div>
</div>
</div>
{optimisticError && (
<div className="text-sm text-destructive">
{optimisticError}
</div>
)}
<KanbanBoard<Issue & KanbanItem>
columns={columns}
onDragEnd={handleDragEnd}
onItemClick={(item) => handleItemClick(item as unknown as Issue)}
isLoading={isLoading}
emptyColumnMessage={formatMessage({ id: 'issues.emptyState.message' })}
className={cn('gap-4', 'grid')}
renderItem={(item, provided) => (
<IssueCard
issue={item as unknown as Issue}
compact
showActions={false}
onClick={(i) => handleItemClick(i)}
innerRef={provided.innerRef}
draggableProps={provided.draggableProps}
dragHandleProps={provided.dragHandleProps}
className="w-full"
/>
)}
/>
<IssueDrawer
issue={selectedIssue}
isOpen={Boolean(selectedIssue)}
onClose={handleCloseDrawer}
initialTab={drawerInitialTab}
/>
</>
);
}
export default IssueBoardPanel;

View File

@@ -3,23 +3,25 @@
 // ========================================
 // Right-side issue detail drawer with Overview/Solutions/History tabs
-import { useState } from 'react';
+import { useEffect, useState } from 'react';
 import { useIntl } from 'react-intl';
-import { X, FileText, CheckCircle, Circle, Loader2, Tag, History, Hash } from 'lucide-react';
+import { X, FileText, CheckCircle, Circle, Loader2, Tag, History, Hash, Terminal } from 'lucide-react';
 import { Badge } from '@/components/ui/Badge';
 import { Button } from '@/components/ui/Button';
 import { Tabs, TabsList, TabsTrigger, TabsContent } from '@/components/ui/Tabs';
 import { cn } from '@/lib/utils';
 import type { Issue } from '@/lib/api';
+import { IssueTerminalTab } from './IssueTerminalTab';
 // ========== Types ==========
 export interface IssueDrawerProps {
 issue: Issue | null;
 isOpen: boolean;
 onClose: () => void;
+initialTab?: TabValue;
 }
-type TabValue = 'overview' | 'solutions' | 'history' | 'json';
+type TabValue = 'overview' | 'solutions' | 'history' | 'terminal' | 'json';
 // ========== Status Configuration ==========
 const statusConfig: Record<string, { label: string; variant: 'default' | 'secondary' | 'destructive' | 'outline' | 'success' | 'warning' | 'info'; icon: React.ComponentType<{ className?: string }> }> = {
@@ -39,20 +41,25 @@ const priorityConfig: Record<string, { label: string; variant: 'default' | 'seco
 // ========== Component ==========
-export function IssueDrawer({ issue, isOpen, onClose }: IssueDrawerProps) {
+export function IssueDrawer({ issue, isOpen, onClose, initialTab = 'overview' }: IssueDrawerProps) {
 const { formatMessage } = useIntl();
-const [activeTab, setActiveTab] = useState<TabValue>('overview');
+const [activeTab, setActiveTab] = useState<TabValue>(initialTab);
-// Reset to overview when issue changes
-useState(() => {
+// Reset to initial tab when opening/switching issues
+useEffect(() => {
+if (!isOpen || !issue) return;
+setActiveTab(initialTab);
+}, [initialTab, isOpen, issue?.id]);
+// ESC key to close
+useEffect(() => {
+if (!isOpen) return;
 const handleEsc = (e: KeyboardEvent) => {
-if (e.key === 'Escape' && isOpen) {
-onClose();
-}
+if (e.key === 'Escape') onClose();
 };
 window.addEventListener('keydown', handleEsc);
 return () => window.removeEventListener('keydown', handleEsc);
-});
+}, [isOpen, onClose]);
 if (!issue || !isOpen) {
 return null;
@@ -126,6 +133,10 @@ export function IssueDrawer({ issue, isOpen, onClose }: IssueDrawerProps) {
 <History className="h-4 w-4 mr-2" />
 {formatMessage({ id: 'issues.detail.tabs.history' })}
 </TabsTrigger>
+<TabsTrigger value="terminal" className="flex-1">
+<Terminal className="h-4 w-4 mr-2" />
+{formatMessage({ id: 'issues.detail.tabs.terminal' })}
+</TabsTrigger>
 <TabsTrigger value="json" className="flex-1">
 <Hash className="h-4 w-4 mr-2" />
 JSON
@@ -213,6 +224,11 @@ export function IssueDrawer({ issue, isOpen, onClose }: IssueDrawerProps) {
 )}
 </TabsContent>
+{/* Terminal Tab */}
+<TabsContent value="terminal" className="mt-4 pb-6 focus-visible:outline-none">
+<IssueTerminalTab issueId={issue.id} />
+</TabsContent>
 {/* History Tab */}
 <TabsContent value="history" className="mt-4 pb-6 focus-visible:outline-none">
 <div className="text-center py-12 text-muted-foreground">

View File

@@ -4,9 +4,9 @@
 // Dynamic header component for IssueHub
 import { useIntl } from 'react-intl';
-import { AlertCircle, Radar, ListTodo } from 'lucide-react';
+import { AlertCircle, Radar, ListTodo, LayoutGrid } from 'lucide-react';
-type IssueTab = 'issues' | 'queue' | 'discovery';
+type IssueTab = 'issues' | 'board' | 'queue' | 'discovery';
 interface IssueHubHeaderProps {
 currentTab: IssueTab;
@@ -22,6 +22,11 @@ export function IssueHubHeader({ currentTab }: IssueHubHeaderProps) {
 title: formatMessage({ id: 'issues.title' }),
 description: formatMessage({ id: 'issues.description' }),
 },
+board: {
+icon: <LayoutGrid className="w-6 h-6 text-primary" />,
+title: formatMessage({ id: 'issues.board.pageTitle' }),
+description: formatMessage({ id: 'issues.board.description' }),
+},
 queue: {
 icon: <ListTodo className="w-6 h-6 text-primary" />,
 title: formatMessage({ id: 'issues.queue.pageTitle' }),

View File

@@ -7,7 +7,7 @@ import { useIntl } from 'react-intl';
 import { Button } from '@/components/ui/Button';
 import { cn } from '@/lib/utils';
-export type IssueTab = 'issues' | 'queue' | 'discovery';
+export type IssueTab = 'issues' | 'board' | 'queue' | 'discovery';
 interface IssueHubTabsProps {
 currentTab: IssueTab;
@@ -19,6 +19,7 @@ export function IssueHubTabs({ currentTab, onTabChange }: IssueHubTabsProps) {
 const tabs: Array<{ value: IssueTab; label: string }> = [
 { value: 'issues', label: formatMessage({ id: 'issues.hub.tabs.issues' }) },
+{ value: 'board', label: formatMessage({ id: 'issues.hub.tabs.board' }) },
 { value: 'queue', label: formatMessage({ id: 'issues.hub.tabs.queue' }) },
 { value: 'discovery', label: formatMessage({ id: 'issues.hub.tabs.discovery' }) },
 ];

View File

@@ -0,0 +1,402 @@
// ========================================
// IssueTerminalTab
// ========================================
// Embedded xterm.js terminal for PTY-backed CLI sessions.
import { useEffect, useMemo, useRef, useState } from 'react';
import { useIntl } from 'react-intl';
import { Plus, RefreshCw, XCircle } from 'lucide-react';
import { Terminal as XTerm } from 'xterm';
import { FitAddon } from 'xterm-addon-fit';
import { Button } from '@/components/ui/Button';
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from '@/components/ui/Select';
import { Input } from '@/components/ui/Input';
import { cn } from '@/lib/utils';
import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';
import {
closeCliSession,
createCliSession,
executeInCliSession,
fetchCliSessionBuffer,
fetchCliSessions,
resizeCliSession,
sendCliSessionText,
type CliSession,
} from '@/lib/api';
import { useCliSessionStore } from '@/stores/cliSessionStore';
type ToolName = 'claude' | 'codex' | 'gemini';
type ResumeStrategy = 'nativeResume' | 'promptConcat';
export function IssueTerminalTab({ issueId }: { issueId: string }) {
const { formatMessage } = useIntl();
const projectPath = useWorkflowStore(selectProjectPath);
const sessionsByKey = useCliSessionStore((s) => s.sessions);
const outputChunks = useCliSessionStore((s) => s.outputChunks);
const setSessions = useCliSessionStore((s) => s.setSessions);
const upsertSession = useCliSessionStore((s) => s.upsertSession);
const setBuffer = useCliSessionStore((s) => s.setBuffer);
const clearOutput = useCliSessionStore((s) => s.clearOutput);
const sessions = useMemo(() => Object.values(sessionsByKey).sort((a, b) => a.createdAt.localeCompare(b.createdAt)), [sessionsByKey]);
const [selectedSessionKey, setSelectedSessionKey] = useState<string>('');
const [isLoadingSessions, setIsLoadingSessions] = useState(false);
const [isCreating, setIsCreating] = useState(false);
const [isClosing, setIsClosing] = useState(false);
const [error, setError] = useState<string | null>(null);
const [tool, setTool] = useState<ToolName>('claude');
const [mode, setMode] = useState<'analysis' | 'write'>('analysis');
const [resumeKey, setResumeKey] = useState(issueId);
const [resumeStrategy, setResumeStrategy] = useState<ResumeStrategy>('nativeResume');
const [prompt, setPrompt] = useState('');
const [isExecuting, setIsExecuting] = useState(false);
const terminalHostRef = useRef<HTMLDivElement | null>(null);
const xtermRef = useRef<XTerm | null>(null);
const fitAddonRef = useRef<FitAddon | null>(null);
const lastChunkIndexRef = useRef<number>(0);
const pendingInputRef = useRef<string>('');
const flushTimerRef = useRef<number | null>(null);
// Keep the latest session key in a ref so the xterm onData handler (registered once on mount) never reads a stale value.
const selectedSessionKeyRef = useRef<string>('');
selectedSessionKeyRef.current = selectedSessionKey;
const flushInput = async () => {
const sessionKey = selectedSessionKeyRef.current;
if (!sessionKey) return;
const pending = pendingInputRef.current;
pendingInputRef.current = '';
if (!pending) return;
try {
await sendCliSessionText(sessionKey, { text: pending, appendNewline: false });
} catch (e) {
// Ignore transient failures (WS output still shows process state)
}
};
const scheduleFlush = () => {
if (flushTimerRef.current !== null) return;
flushTimerRef.current = window.setTimeout(async () => {
flushTimerRef.current = null;
await flushInput();
}, 30);
};
useEffect(() => {
setIsLoadingSessions(true);
setError(null);
fetchCliSessions()
.then((r) => {
setSessions(r.sessions as unknown as CliSession[]);
})
.catch((e) => setError(e instanceof Error ? e.message : String(e)))
.finally(() => setIsLoadingSessions(false));
}, [setSessions]);
// Auto-select a session if none selected yet
useEffect(() => {
if (selectedSessionKey) return;
if (sessions.length === 0) return;
setSelectedSessionKey(sessions[sessions.length - 1]?.sessionKey ?? '');
}, [sessions, selectedSessionKey]);
// Init xterm
useEffect(() => {
if (!terminalHostRef.current) return;
if (xtermRef.current) return;
const term = new XTerm({
convertEol: true,
cursorBlink: true,
fontFamily: 'ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace',
fontSize: 12,
scrollback: 5000,
});
const fitAddon = new FitAddon();
term.loadAddon(fitAddon);
term.open(terminalHostRef.current);
fitAddon.fit();
// Forward keystrokes to backend (batched)
term.onData((data) => {
if (!selectedSessionKeyRef.current) return;
pendingInputRef.current += data;
scheduleFlush();
});
xtermRef.current = term;
fitAddonRef.current = fitAddon;
return () => {
try {
term.dispose();
} finally {
xtermRef.current = null;
fitAddonRef.current = null;
}
};
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
// Attach to selected session: clear terminal and load buffer
useEffect(() => {
const term = xtermRef.current;
const fitAddon = fitAddonRef.current;
if (!term || !fitAddon) return;
lastChunkIndexRef.current = 0;
term.reset();
term.clear();
if (!selectedSessionKey) return;
clearOutput(selectedSessionKey);
fetchCliSessionBuffer(selectedSessionKey)
.then(({ buffer }) => {
setBuffer(selectedSessionKey, buffer || '');
})
.catch(() => {
// ignore
})
.finally(() => {
fitAddon.fit();
});
}, [selectedSessionKey, setBuffer, clearOutput]);
// Stream new output chunks into xterm
useEffect(() => {
const term = xtermRef.current;
if (!term) return;
if (!selectedSessionKey) return;
const chunks = outputChunks[selectedSessionKey] ?? [];
const start = lastChunkIndexRef.current;
if (start >= chunks.length) return;
for (let i = start; i < chunks.length; i++) {
term.write(chunks[i].data);
}
lastChunkIndexRef.current = chunks.length;
}, [outputChunks, selectedSessionKey]);
// Resize observer -> fit + resize backend
useEffect(() => {
const host = terminalHostRef.current;
const term = xtermRef.current;
const fitAddon = fitAddonRef.current;
if (!host || !term || !fitAddon) return;
const resize = () => {
fitAddon.fit();
if (selectedSessionKey) {
void (async () => {
try {
await resizeCliSession(selectedSessionKey, { cols: term.cols, rows: term.rows });
} catch {
// ignore
}
})();
}
};
const ro = new ResizeObserver(resize);
ro.observe(host);
return () => ro.disconnect();
}, [selectedSessionKey]);
const handleCreateSession = async () => {
setIsCreating(true);
setError(null);
try {
const created = await createCliSession({
workingDir: projectPath || undefined,
preferredShell: 'bash',
cols: xtermRef.current?.cols,
rows: xtermRef.current?.rows,
tool,
model: undefined,
resumeKey,
});
upsertSession(created.session as unknown as CliSession);
setSelectedSessionKey(created.session.sessionKey);
} catch (e) {
setError(e instanceof Error ? e.message : String(e));
} finally {
setIsCreating(false);
}
};
const handleCloseSession = async () => {
if (!selectedSessionKey) return;
setIsClosing(true);
setError(null);
try {
await closeCliSession(selectedSessionKey);
setSelectedSessionKey('');
} catch (e) {
setError(e instanceof Error ? e.message : String(e));
} finally {
setIsClosing(false);
}
};
const handleExecute = async () => {
if (!selectedSessionKey) return;
if (!prompt.trim()) return;
setIsExecuting(true);
setError(null);
try {
await executeInCliSession(selectedSessionKey, {
tool,
prompt: prompt.trim(),
mode,
resumeKey: resumeKey.trim() || undefined,
resumeStrategy,
category: 'user',
});
setPrompt('');
} catch (e) {
setError(e instanceof Error ? e.message : String(e));
} finally {
setIsExecuting(false);
}
};
const handleRefreshSessions = async () => {
setIsLoadingSessions(true);
setError(null);
try {
const r = await fetchCliSessions();
setSessions(r.sessions as unknown as CliSession[]);
} catch (e) {
setError(e instanceof Error ? e.message : String(e));
} finally {
setIsLoadingSessions(false);
}
};
return (
<div className="space-y-3">
<div className="flex items-center gap-2 flex-wrap">
<div className="min-w-[240px] flex-1">
<Select value={selectedSessionKey} onValueChange={setSelectedSessionKey}>
<SelectTrigger>
<SelectValue placeholder={formatMessage({ id: 'issues.terminal.session.select' })} />
</SelectTrigger>
<SelectContent>
{sessions.map((s) => (
<SelectItem key={s.sessionKey} value={s.sessionKey}>
{(s.tool || 'cli') + ' · ' + s.sessionKey}
</SelectItem>
))}
{sessions.length === 0 && (
<SelectItem value="__none__" disabled>
{formatMessage({ id: 'issues.terminal.session.none' })}
</SelectItem>
)}
</SelectContent>
</Select>
</div>
<Button variant="outline" onClick={handleRefreshSessions} disabled={isLoadingSessions}>
<RefreshCw className="w-4 h-4 mr-2" />
{formatMessage({ id: 'issues.terminal.session.refresh' })}
</Button>
<Button onClick={handleCreateSession} disabled={isCreating}>
<Plus className="w-4 h-4 mr-2" />
{formatMessage({ id: 'issues.terminal.session.new' })}
</Button>
<Button
variant="destructive"
onClick={handleCloseSession}
disabled={!selectedSessionKey || isClosing}
>
<XCircle className="w-4 h-4 mr-2" />
{formatMessage({ id: 'issues.terminal.session.close' })}
</Button>
</div>
<div className="grid grid-cols-2 gap-2">
<div className="space-y-1">
<div className="text-xs text-muted-foreground">{formatMessage({ id: 'issues.terminal.exec.tool' })}</div>
<Select value={tool} onValueChange={(v) => setTool(v as ToolName)}>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="claude">claude</SelectItem>
<SelectItem value="codex">codex</SelectItem>
<SelectItem value="gemini">gemini</SelectItem>
</SelectContent>
</Select>
</div>
<div className="space-y-1">
<div className="text-xs text-muted-foreground">{formatMessage({ id: 'issues.terminal.exec.mode' })}</div>
<Select value={mode} onValueChange={(v) => setMode(v as 'analysis' | 'write')}>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="analysis">analysis</SelectItem>
<SelectItem value="write">write</SelectItem>
</SelectContent>
</Select>
</div>
</div>
<div className="grid grid-cols-2 gap-2">
<div className="space-y-1">
<div className="text-xs text-muted-foreground">{formatMessage({ id: 'issues.terminal.exec.resumeKey' })}</div>
<Input value={resumeKey} onChange={(e) => setResumeKey(e.target.value)} placeholder={issueId} />
</div>
<div className="space-y-1">
<div className="text-xs text-muted-foreground">
{formatMessage({ id: 'issues.terminal.exec.resumeStrategy' })}
</div>
<Select value={resumeStrategy} onValueChange={(v) => setResumeStrategy(v as ResumeStrategy)}>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="nativeResume">nativeResume</SelectItem>
<SelectItem value="promptConcat">promptConcat</SelectItem>
</SelectContent>
</Select>
</div>
</div>
<div className="space-y-2">
<div className="text-xs text-muted-foreground">{formatMessage({ id: 'issues.terminal.exec.prompt.label' })}</div>
<textarea
value={prompt}
onChange={(e) => setPrompt(e.target.value)}
placeholder={formatMessage({ id: 'issues.terminal.exec.prompt.placeholder' })}
className={cn(
'w-full min-h-[90px] p-3 bg-background border border-input rounded-md text-sm resize-none',
'focus:outline-none focus:ring-2 focus:ring-primary'
)}
/>
<div className="flex justify-end">
<Button onClick={handleExecute} disabled={!selectedSessionKey || isExecuting || !prompt.trim()}>
{formatMessage({ id: 'issues.terminal.exec.run' })}
</Button>
</div>
</div>
{error && (
<div className="text-sm text-destructive">
{error}
</div>
)}
<div className="rounded-md border border-border bg-black/90 overflow-hidden">
<div ref={terminalHostRef} className="h-[420px] w-full" />
</div>
</div>
);
}

View File

@@ -3,7 +3,7 @@
// ========================================
// Content panel for Queue tab in IssueHub
-import { useState } from 'react';
+import { useEffect, useState } from 'react';
import { useIntl } from 'react-intl';
import {
AlertCircle,
@@ -14,9 +14,12 @@ import {
} from 'lucide-react';
import { Card } from '@/components/ui/Card';
import { Badge } from '@/components/ui/Badge';
+import { Button } from '@/components/ui/Button';
+import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from '@/components/ui/Select';
import { QueueCard } from '@/components/issue/queue/QueueCard';
+import { QueueBoard } from '@/components/issue/queue/QueueBoard';
import { SolutionDrawer } from '@/components/issue/queue/SolutionDrawer';
-import { useIssueQueue, useQueueMutations } from '@/hooks';
+import { useIssueQueue, useQueueHistory, useQueueMutations } from '@/hooks';
import type { QueueItem } from '@/lib/api';
// ========== Loading Skeleton ==========
@@ -73,6 +76,7 @@ export function QueuePanel() {
const [selectedItem, setSelectedItem] = useState<QueueItem | null>(null);
const { data: queueData, isLoading, error } = useIssueQueue();
+const { data: historyIndex } = useQueueHistory();
const {
activateQueue,
deactivateQueue,
@@ -93,6 +97,16 @@ export function QueuePanel() {
const conflictCount = queue?.conflicts?.length || 0;
const groupCount = Object.keys(queue?.grouped_items || {}).length;
const totalItems = taskCount + solutionCount;
+const activeQueueId = historyIndex?.active_queue_id || null;
+const activeQueueIds = historyIndex?.active_queue_ids || [];
+const queueId = queue?.id;
+const [selectedQueueId, setSelectedQueueId] = useState<string>('');
+// Keep selector in sync with active queue id
+useEffect(() => {
+if (activeQueueId) setSelectedQueueId(activeQueueId);
+else if (queueId) setSelectedQueueId(queueId);
+}, [activeQueueId, queueId]);
const handleActivate = async (queueId: string) => {
try {
@@ -164,11 +178,62 @@ export function QueuePanel() {
return <QueueEmptyState />;
}
-// Check if queue is active (has items and no conflicts)
+// Check if queue is active (multi-queue index preferred)
-const isActive = totalItems > 0 && conflictCount === 0;
+const isActive = queueId ? activeQueueIds.includes(queueId) : totalItems > 0 && conflictCount === 0;
return (
<div className="space-y-6">
{/* Queue History / Active Queue Selector */}
{historyIndex && (
<Card className="p-4">
<div className="flex flex-col md:flex-row md:items-center gap-3 justify-between">
<div className="min-w-0">
<div className="text-sm font-semibold text-foreground">
{formatMessage({ id: 'issues.queue.history.title' })}
</div>
<div className="text-xs text-muted-foreground mt-1 font-mono">
{formatMessage({ id: 'issues.queue.history.active' })}:{' '}
{activeQueueId || '—'}
</div>
</div>
<div className="flex items-center gap-2">
<Select value={selectedQueueId} onValueChange={(v) => setSelectedQueueId(v)}>
<SelectTrigger className="w-[260px]">
<SelectValue placeholder={formatMessage({ id: 'issues.queue.history.select' })} />
</SelectTrigger>
<SelectContent>
{(historyIndex.queues || []).length === 0 ? (
<SelectItem value="" disabled>
{formatMessage({ id: 'issues.queue.history.empty' })}
</SelectItem>
) : (
historyIndex.queues.map((q) => (
<SelectItem key={q.id} value={q.id}>
{q.id}
</SelectItem>
))
)}
</SelectContent>
</Select>
<Button
variant="outline"
disabled={!selectedQueueId || isActivating}
onClick={() => activateQueue(selectedQueueId)}
>
{formatMessage({ id: 'issues.queue.history.activate' })}
</Button>
<Button
variant="outline"
disabled={isDeactivating}
onClick={() => deactivateQueue()}
>
{formatMessage({ id: 'issues.queue.actions.deactivate' })}
</Button>
</div>
</div>
</Card>
)}
{/* Stats Cards */}
<div className="grid grid-cols-2 md:grid-cols-4 gap-4">
<Card className="p-4">
@@ -226,25 +291,26 @@ export function QueuePanel() {
</Card>
)}
-{/* Queue Card */}
-<div className="grid gap-4 md:grid-cols-2">
+{/* Queue Card (actions + summary) */}
<QueueCard
-key="current"
+key={queue.id || 'legacy'}
queue={queue}
isActive={isActive}
onActivate={handleActivate}
onDeactivate={handleDeactivate}
onDelete={handleDelete}
onMerge={handleMerge}
onSplit={handleSplit}
onItemClick={handleItemClick}
isActivating={isActivating}
isDeactivating={isDeactivating}
isDeleting={isDeleting}
isMerging={isMerging}
isSplitting={isSplitting}
/>
-</div>
+{/* Queue Board (DnD reorder/move) */}
+<QueueBoard queue={queue} onItemClick={handleItemClick} />
{/* Solution Detail Drawer */}
<SolutionDrawer

View File

@@ -33,6 +33,7 @@ import type { IssueQueue, QueueItem } from '@/lib/api';
export interface QueueActionsProps {
queue: IssueQueue;
+queueId?: string;
isActive?: boolean;
onActivate?: (queueId: string) => void;
onDeactivate?: () => void;
@@ -50,6 +51,7 @@ export interface QueueActionsProps {
export function QueueActions({
queue,
+queueId: queueIdProp,
isActive = false,
onActivate,
onDeactivate,
@@ -69,20 +71,20 @@ export function QueueActions({
const [mergeTargetId, setMergeTargetId] = useState('');
const [selectedItemIds, setSelectedItemIds] = useState<string[]>([]);
-// Use "current" as the queue ID for single-queue model
-// This matches the API pattern where deactivate works on the current queue
-const queueId = 'current';
+const queueId = queueIdProp;
// Get all items from grouped_items for split dialog
const allItems: QueueItem[] = Object.values(queue.grouped_items || {}).flat();
const handleDelete = () => {
+if (!queueId) return;
onDelete?.(queueId);
setIsDeleteOpen(false);
};
const handleMerge = () => {
if (mergeTargetId.trim()) {
+if (!queueId) return;
onMerge?.(queueId, mergeTargetId.trim());
setIsMergeOpen(false);
setMergeTargetId('');
@@ -91,6 +93,7 @@ export function QueueActions({
const handleSplit = () => {
if (selectedItemIds.length > 0 && selectedItemIds.length < allItems.length) {
+if (!queueId) return;
onSplit?.(queueId, selectedItemIds);
setIsSplitOpen(false);
setSelectedItemIds([]);
@@ -128,7 +131,7 @@ export function QueueActions({
size="sm" size="sm"
className="h-8 w-8 p-0" className="h-8 w-8 p-0"
onClick={() => onActivate(queueId)} onClick={() => onActivate(queueId)}
disabled={isActivating} disabled={isActivating || !queueId}
title={formatMessage({ id: 'issues.queue.actions.activate' })} title={formatMessage({ id: 'issues.queue.actions.activate' })}
> >
{isActivating ? ( {isActivating ? (
@@ -161,7 +164,7 @@ export function QueueActions({
size="sm" size="sm"
className="h-8 w-8 p-0" className="h-8 w-8 p-0"
onClick={() => setIsMergeOpen(true)} onClick={() => setIsMergeOpen(true)}
disabled={isMerging} disabled={isMerging || !queueId}
title={formatMessage({ id: 'issues.queue.actions.merge' })} title={formatMessage({ id: 'issues.queue.actions.merge' })}
> >
{isMerging ? ( {isMerging ? (
@@ -178,7 +181,7 @@ export function QueueActions({
size="sm" size="sm"
className="h-8 w-8 p-0" className="h-8 w-8 p-0"
onClick={() => setIsSplitOpen(true)} onClick={() => setIsSplitOpen(true)}
disabled={isSplitting} disabled={isSplitting || !queueId}
title={formatMessage({ id: 'issues.queue.actions.split' })} title={formatMessage({ id: 'issues.queue.actions.split' })}
> >
{isSplitting ? ( {isSplitting ? (
@@ -195,7 +198,7 @@ export function QueueActions({
size="sm" size="sm"
className="h-8 w-8 p-0" className="h-8 w-8 p-0"
onClick={() => setIsDeleteOpen(true)} onClick={() => setIsDeleteOpen(true)}
disabled={isDeleting} disabled={isDeleting || !queueId}
title={formatMessage({ id: 'issues.queue.actions.delete' })} title={formatMessage({ id: 'issues.queue.actions.delete' })}
> >
{isDeleting ? ( {isDeleting ? (

View File

@@ -0,0 +1,171 @@
// ========================================
// QueueBoard
// ========================================
// Kanban-style view of queue execution groups with DnD reordering/moving.
import { useCallback, useEffect, useMemo, useState } from 'react';
import type { DropResult } from '@hello-pangea/dnd';
import { useIntl } from 'react-intl';
import { LayoutGrid } from 'lucide-react';
import { KanbanBoard, type KanbanColumn, type KanbanItem } from '@/components/shared/KanbanBoard';
import { Badge } from '@/components/ui/Badge';
import { cn } from '@/lib/utils';
import { useQueueMutations } from '@/hooks';
import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';
import type { IssueQueue, QueueItem } from '@/lib/api';
type QueueBoardItem = QueueItem & KanbanItem;
function groupSortKey(groupId: string): [number, string] {
const n = parseInt(groupId.match(/\d+/)?.[0] || '999');
return [Number.isFinite(n) ? n : 999, groupId];
}
function buildColumns(queue: IssueQueue): KanbanColumn<QueueBoardItem>[] {
const entries = Object.entries(queue.grouped_items || {});
entries.sort(([a], [b]) => {
const [an, aid] = groupSortKey(a);
const [bn, bid] = groupSortKey(b);
if (an !== bn) return an - bn;
return aid.localeCompare(bid);
});
return entries.map(([groupId, items]) => {
const sorted = [...(items || [])].sort((a, b) => (a.execution_order || 0) - (b.execution_order || 0));
const mapped = sorted.map((it) => ({
...it,
id: it.item_id,
title: `${it.issue_id} · ${it.solution_id}`,
status: it.status,
}));
return {
id: groupId,
title: groupId,
items: mapped,
icon: <LayoutGrid className="w-4 h-4" />,
};
});
}
function applyDrag(columns: KanbanColumn<QueueBoardItem>[], result: DropResult): KanbanColumn<QueueBoardItem>[] {
if (!result.destination) return columns;
const { source, destination, draggableId } = result;
const next = columns.map((c) => ({ ...c, items: [...c.items] }));
const src = next.find((c) => c.id === source.droppableId);
const dst = next.find((c) => c.id === destination.droppableId);
if (!src || !dst) return columns;
const srcIndex = src.items.findIndex((i) => i.id === draggableId);
if (srcIndex === -1) return columns;
const [moved] = src.items.splice(srcIndex, 1);
if (!moved) return columns;
dst.items.splice(destination.index, 0, moved);
return next;
}
export function QueueBoard({
queue,
onItemClick,
className,
}: {
queue: IssueQueue;
onItemClick?: (item: QueueItem) => void;
className?: string;
}) {
const { formatMessage } = useIntl();
const projectPath = useWorkflowStore(selectProjectPath);
const { reorderQueueGroup, moveQueueItem, isReordering, isMoving } = useQueueMutations();
const baseColumns = useMemo(() => buildColumns(queue), [queue]);
const [columns, setColumns] = useState<KanbanColumn<QueueBoardItem>[]>(baseColumns);
useEffect(() => {
setColumns(baseColumns);
}, [baseColumns]);
const handleDragEnd = useCallback(
async (result: DropResult, sourceColumn: string, destColumn: string) => {
if (!projectPath) return;
if (!result.destination) return;
if (sourceColumn === destColumn && result.source.index === result.destination.index) return;
try {
const nextColumns = applyDrag(columns, result);
setColumns(nextColumns);
const itemId = result.draggableId;
if (sourceColumn === destColumn) {
const column = nextColumns.find((c) => c.id === sourceColumn);
const nextOrder = (column?.items ?? []).map((i) => i.item_id);
await reorderQueueGroup(sourceColumn, nextOrder);
} else {
await moveQueueItem(itemId, destColumn, result.destination.index);
}
} catch (e) {
// Revert by resetting to server-derived columns
setColumns(baseColumns);
}
},
[baseColumns, columns, moveQueueItem, projectPath, reorderQueueGroup]
);
return (
<div className={cn('space-y-2', className)}>
<div className="flex items-center gap-2 text-xs text-muted-foreground">
<Badge variant="secondary" className="gap-1">
{formatMessage({ id: 'issues.queue.stats.executionGroups' })}
</Badge>
{(isReordering || isMoving) && (
<span>
{formatMessage({ id: 'common.status.running' })}
</span>
)}
</div>
<KanbanBoard<QueueBoardItem>
columns={columns}
onDragEnd={handleDragEnd}
onItemClick={(item) => onItemClick?.(item as unknown as QueueItem)}
emptyColumnMessage={formatMessage({ id: 'issues.queue.empty' })}
renderItem={(item, provided) => (
<div
ref={provided.innerRef}
{...provided.draggableProps}
{...provided.dragHandleProps}
onClick={() => onItemClick?.(item as unknown as QueueItem)}
className={cn(
'p-3 bg-card border border-border rounded-lg shadow-sm cursor-pointer',
'hover:shadow-md hover:border-primary/50 transition-all',
item.status === 'blocked' && 'border-destructive/50 bg-destructive/5',
item.status === 'failed' && 'border-destructive/50 bg-destructive/5',
item.status === 'executing' && 'border-primary/40'
)}
>
<div className="flex items-start justify-between gap-2">
<div className="min-w-0">
<div className="text-xs font-mono text-muted-foreground">{item.item_id}</div>
<div className="text-sm font-medium text-foreground truncate">
{item.issue_id} · {item.solution_id}
</div>
</div>
<Badge variant="outline" className="text-xs shrink-0">
{formatMessage({ id: `issues.queue.status.${item.status}` })}
</Badge>
</div>
{item.depends_on?.length ? (
<div className="mt-2 text-xs text-muted-foreground">
{formatMessage({ id: 'issues.solution.overview.dependencies' })}: {item.depends_on.length}
</div>
) : null}
</div>
)}
/>
</div>
);
}
export default QueueBoard;

View File

@@ -51,8 +51,7 @@ export function QueueCard({
}: QueueCardProps) {
const { formatMessage } = useIntl();
-// Use "current" for queue ID display
-const queueId = 'current';
+const queueId = queue.id;
// Calculate item counts
const taskCount = queue.tasks?.length || 0;
@@ -88,7 +87,7 @@ export function QueueCard({
{formatMessage({ id: 'issues.queue.title' })}
</h3>
<p className="text-xs text-muted-foreground mt-0.5">
-{queueId.substring(0, 20)}{queueId.length > 20 ? '...' : ''}
+{(queueId || 'legacy').substring(0, 20)}{(queueId || 'legacy').length > 20 ? '...' : ''}
</p>
</div>
</div>
@@ -102,6 +101,7 @@ export function QueueCard({
<QueueActions
queue={queue}
+queueId={queueId}
isActive={isActive}
onActivate={onActivate}
onDeactivate={onDeactivate}

View File

@@ -0,0 +1,290 @@
// ========================================
// QueueExecuteInSession
// ========================================
// Minimal “execution plane” for queue items:
// pick/create a PTY session and submit a generated prompt to it.
import { useEffect, useMemo, useState } from 'react';
import { useIntl } from 'react-intl';
import { Plus, RefreshCw } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from '@/components/ui/Select';
import { cn } from '@/lib/utils';
import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';
import { useIssues } from '@/hooks';
import {
createCliSession,
executeInCliSession,
fetchCliSessions,
type CliSession,
type QueueItem,
} from '@/lib/api';
import { useCliSessionStore } from '@/stores/cliSessionStore';
type ToolName = 'claude' | 'codex' | 'gemini';
type ResumeStrategy = 'nativeResume' | 'promptConcat';
function buildQueueItemPrompt(item: QueueItem, issue: any | undefined): string {
const lines: string[] = [];
lines.push(`Queue Item: ${item.item_id}`);
lines.push(`Issue: ${item.issue_id}`);
lines.push(`Solution: ${item.solution_id}`);
if (item.task_id) lines.push(`Task: ${item.task_id}`);
lines.push('');
if (issue) {
if (issue.title) lines.push(`Title: ${issue.title}`);
if (issue.context) {
lines.push('');
lines.push('Context:');
lines.push(String(issue.context));
}
const solution = Array.isArray(issue.solutions)
? issue.solutions.find((s: any) => s?.id === item.solution_id)
: undefined;
if (solution) {
lines.push('');
lines.push('Solution Description:');
if (solution.description) lines.push(String(solution.description));
if (solution.approach) {
lines.push('');
lines.push('Approach:');
lines.push(String(solution.approach));
}
// Best-effort: if the solution has embedded tasks, include the matched task.
const tasks = Array.isArray(solution.tasks) ? solution.tasks : [];
const task = item.task_id ? tasks.find((t: any) => t?.id === item.task_id) : undefined;
if (task) {
lines.push('');
lines.push('Task:');
if (task.title) lines.push(`- ${task.title}`);
if (task.description) lines.push(String(task.description));
}
}
}
lines.push('');
lines.push('Instruction:');
lines.push(
'Implement the above queue item in this repository. Prefer small, testable changes; run relevant tests; report blockers if any.'
);
return lines.join('\n');
}
export function QueueExecuteInSession({ item, className }: { item: QueueItem; className?: string }) {
const { formatMessage } = useIntl();
const projectPath = useWorkflowStore(selectProjectPath);
const { issues } = useIssues();
const issue = useMemo(() => issues.find((i) => i.id === item.issue_id) as any, [issues, item.issue_id]);
const sessionsByKey = useCliSessionStore((s) => s.sessions);
const setSessions = useCliSessionStore((s) => s.setSessions);
const upsertSession = useCliSessionStore((s) => s.upsertSession);
const sessions = useMemo(
() => Object.values(sessionsByKey).sort((a, b) => a.createdAt.localeCompare(b.createdAt)),
[sessionsByKey]
);
const [selectedSessionKey, setSelectedSessionKey] = useState<string>('');
const [tool, setTool] = useState<ToolName>('claude');
const [mode, setMode] = useState<'analysis' | 'write'>('write');
const [resumeStrategy, setResumeStrategy] = useState<ResumeStrategy>('nativeResume');
const [isLoading, setIsLoading] = useState(false);
const [isExecuting, setIsExecuting] = useState(false);
const [error, setError] = useState<string | null>(null);
const [lastExecution, setLastExecution] = useState<{ executionId: string; command: string } | null>(null);
const refreshSessions = async () => {
setIsLoading(true);
setError(null);
try {
const r = await fetchCliSessions();
setSessions(r.sessions as unknown as CliSession[]);
} catch (e) {
setError(e instanceof Error ? e.message : String(e));
} finally {
setIsLoading(false);
}
};
useEffect(() => {
void refreshSessions();
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
useEffect(() => {
if (selectedSessionKey) return;
if (sessions.length === 0) return;
setSelectedSessionKey(sessions[sessions.length - 1]?.sessionKey ?? '');
}, [sessions, selectedSessionKey]);
const ensureSession = async (): Promise<string> => {
if (selectedSessionKey) return selectedSessionKey;
if (!projectPath) throw new Error('No project path selected');
const created = await createCliSession({
workingDir: projectPath,
preferredShell: 'bash',
resumeKey: item.issue_id,
});
upsertSession(created.session as unknown as CliSession);
setSelectedSessionKey(created.session.sessionKey);
return created.session.sessionKey;
};
const handleCreateSession = async () => {
setError(null);
try {
if (!projectPath) throw new Error('No project path selected');
const created = await createCliSession({
workingDir: projectPath,
preferredShell: 'bash',
resumeKey: item.issue_id,
});
upsertSession(created.session as unknown as CliSession);
setSelectedSessionKey(created.session.sessionKey);
await refreshSessions();
} catch (e) {
setError(e instanceof Error ? e.message : String(e));
}
};
const handleExecute = async () => {
setIsExecuting(true);
setError(null);
setLastExecution(null);
try {
const sessionKey = await ensureSession();
const prompt = buildQueueItemPrompt(item, issue);
const result = await executeInCliSession(sessionKey, {
tool,
prompt,
mode,
workingDir: projectPath,
category: 'user',
resumeKey: item.issue_id,
resumeStrategy,
});
setLastExecution({ executionId: result.executionId, command: result.command });
} catch (e) {
setError(e instanceof Error ? e.message : String(e));
} finally {
setIsExecuting(false);
}
};
return (
<div className={cn('space-y-3', className)}>
<div className="flex items-center justify-between gap-2">
<h3 className="text-sm font-semibold text-foreground">
{formatMessage({ id: 'issues.queue.exec.title' })}
</h3>
<div className="flex items-center gap-2">
<Button
variant="outline"
size="sm"
onClick={refreshSessions}
disabled={isLoading}
className="gap-2"
>
<RefreshCw className={cn('h-4 w-4', isLoading && 'animate-spin')} />
{formatMessage({ id: 'issues.terminal.session.refresh' })}
</Button>
<Button variant="outline" size="sm" onClick={handleCreateSession} className="gap-2">
<Plus className="h-4 w-4" />
{formatMessage({ id: 'issues.terminal.session.new' })}
</Button>
</div>
</div>
<div className="grid grid-cols-1 md:grid-cols-2 gap-3">
<div>
<label className="block text-xs font-medium text-muted-foreground mb-1">
{formatMessage({ id: 'issues.terminal.session.select' })}
</label>
<Select value={selectedSessionKey} onValueChange={(v) => setSelectedSessionKey(v)}>
<SelectTrigger>
<SelectValue placeholder={formatMessage({ id: 'issues.terminal.session.none' })} />
</SelectTrigger>
<SelectContent>
{sessions.length === 0 ? (
<SelectItem value="" disabled>
{formatMessage({ id: 'issues.terminal.session.none' })}
</SelectItem>
) : (
sessions.map((s) => (
<SelectItem key={s.sessionKey} value={s.sessionKey}>
{(s.tool || 'cli') + ' · ' + s.sessionKey}
</SelectItem>
))
)}
</SelectContent>
</Select>
</div>
<div>
<label className="block text-xs font-medium text-muted-foreground mb-1">
{formatMessage({ id: 'issues.terminal.exec.tool' })}
</label>
<Select value={tool} onValueChange={(v) => setTool(v as ToolName)}>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="claude">claude</SelectItem>
<SelectItem value="codex">codex</SelectItem>
<SelectItem value="gemini">gemini</SelectItem>
</SelectContent>
</Select>
</div>
<div>
<label className="block text-xs font-medium text-muted-foreground mb-1">
{formatMessage({ id: 'issues.terminal.exec.mode' })}
</label>
<Select value={mode} onValueChange={(v) => setMode(v as 'analysis' | 'write')}>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="analysis">analysis</SelectItem>
<SelectItem value="write">write</SelectItem>
</SelectContent>
</Select>
</div>
<div>
<label className="block text-xs font-medium text-muted-foreground mb-1">
{formatMessage({ id: 'issues.terminal.exec.resumeStrategy' })}
</label>
<Select value={resumeStrategy} onValueChange={(v) => setResumeStrategy(v as ResumeStrategy)}>
<SelectTrigger>
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="nativeResume">nativeResume</SelectItem>
<SelectItem value="promptConcat">promptConcat</SelectItem>
</SelectContent>
</Select>
</div>
</div>
{error && <div className="text-sm text-destructive">{error}</div>}
{lastExecution && (
<div className="text-xs text-muted-foreground font-mono break-all">
{lastExecution.executionId}
</div>
)}
<div className="flex items-center justify-end">
<Button onClick={handleExecute} disabled={isExecuting || !projectPath} className="gap-2">
{formatMessage({ id: 'issues.terminal.exec.run' })}
</Button>
</div>
</div>
);
}
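For orientation, here is a rough sketch of the prompt that `buildQueueItemPrompt` assembles for a hypothetical queue item. The field values below are invented and the helper is module-local, so this is only a reading aid, not part of the component's API:

```ts
// Sketch only: illustrative queue item (field values are made up).
const sampleItem = {
  item_id: 'q-item-001',
  issue_id: 'issue-123',
  solution_id: 'sol-1',
  task_id: 'task-2',
} as unknown as QueueItem;

// With no matching issue loaded, the result degrades to the header lines
// plus the generic instruction, roughly:
//
//   Queue Item: q-item-001
//   Issue: issue-123
//   Solution: sol-1
//   Task: task-2
//
//   Instruction:
//   Implement the above queue item in this repository. Prefer small,
//   testable changes; run relevant tests; report blockers if any.
const prompt = buildQueueItemPrompt(sampleItem, undefined);
```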

View File

@@ -5,10 +5,12 @@
import { useEffect, useMemo, useState } from 'react';
import { useIntl } from 'react-intl';
-import { X, FileText, CheckCircle, Circle, Loader2, XCircle, Clock, AlertTriangle } from 'lucide-react';
+import { X, FileText, CheckCircle, Circle, Loader2, XCircle, Clock, AlertTriangle, Terminal } from 'lucide-react';
import { Badge } from '@/components/ui/Badge';
import { Button } from '@/components/ui/Button';
import { Tabs, TabsList, TabsTrigger, TabsContent } from '@/components/ui/Tabs';
+import { QueueExecuteInSession } from '@/components/issue/queue/QueueExecuteInSession';
+import { IssueTerminalTab } from '@/components/issue/hub/IssueTerminalTab';
import { useIssueQueue } from '@/hooks';
import { cn } from '@/lib/utils';
import type { QueueItem } from '@/lib/api';
@@ -20,7 +22,7 @@ export interface SolutionDrawerProps {
onClose: () => void;
}
-type TabValue = 'overview' | 'tasks' | 'json';
+type TabValue = 'overview' | 'tasks' | 'terminal' | 'json';
// ========== Status Configuration ==========
const statusConfig: Record<string, { label: string; variant: 'default' | 'secondary' | 'destructive' | 'outline' | 'success' | 'warning' | 'info'; icon: React.ComponentType<{ className?: string }> }> = {
@@ -134,6 +136,10 @@ export function SolutionDrawer({ item, isOpen, onClose }: SolutionDrawerProps) {
<CheckCircle className="h-4 w-4 mr-2" />
{formatMessage({ id: 'issues.solution.tabs.tasks' })}
</TabsTrigger>
+<TabsTrigger value="terminal" className="flex-1">
+<Terminal className="h-4 w-4 mr-2" />
+{formatMessage({ id: 'issues.solution.tabs.terminal' })}
+</TabsTrigger>
<TabsTrigger value="json" className="flex-1">
<FileText className="h-4 w-4 mr-2" />
{formatMessage({ id: 'issues.solution.tabs.json' })}
@@ -170,6 +176,9 @@ export function SolutionDrawer({ item, isOpen, onClose }: SolutionDrawerProps) {
</div>
</div>
+{/* Execute in Session */}
+<QueueExecuteInSession item={item} />
{/* Dependencies */}
{item.depends_on && item.depends_on.length > 0 && (
<div>
@@ -244,6 +253,11 @@ export function SolutionDrawer({ item, isOpen, onClose }: SolutionDrawerProps) {
)}
</TabsContent>
+{/* Terminal Tab */}
+<TabsContent value="terminal" className="mt-4 pb-6 focus-visible:outline-none">
+<IssueTerminalTab issueId={issueId} />
+</TabsContent>
{/* JSON Tab */}
<TabsContent value="json" className="mt-4 pb-6 focus-visible:outline-none">
<pre className="p-4 bg-muted rounded-md overflow-x-auto text-xs">

View File

@@ -69,6 +69,7 @@ export type {
export {
useIssues,
useIssueQueue,
+useQueueHistory,
useCreateIssue,
useUpdateIssue,
useDeleteIssue,

View File

@@ -8,6 +8,7 @@ import {
fetchIssues,
fetchIssueHistory,
fetchIssueQueue,
+fetchQueueHistory,
createIssue,
updateIssue,
deleteIssue,
@@ -16,12 +17,15 @@ import {
deleteQueue as deleteQueueApi,
mergeQueues as mergeQueuesApi,
splitQueue as splitQueueApi,
+reorderQueueGroup as reorderQueueGroupApi,
+moveQueueItem as moveQueueItemApi,
fetchDiscoveries,
fetchDiscoveryFindings,
exportDiscoveryFindingsAsIssues,
type Issue,
type IssueQueue,
type IssuesResponse,
+type QueueHistoryIndex,
type DiscoverySession,
type Finding,
} from '../lib/api';
@@ -309,14 +313,31 @@ export interface UseQueueMutationsReturn {
deleteQueue: (queueId: string) => Promise<void>;
mergeQueues: (sourceId: string, targetId: string) => Promise<void>;
splitQueue: (sourceQueueId: string, itemIds: string[]) => Promise<void>;
+reorderQueueGroup: (groupId: string, newOrder: string[]) => Promise<void>;
+moveQueueItem: (itemId: string, toGroupId: string, toIndex?: number) => Promise<void>;
isActivating: boolean;
isDeactivating: boolean;
isDeleting: boolean;
isMerging: boolean;
isSplitting: boolean;
+isReordering: boolean;
+isMoving: boolean;
isMutating: boolean;
}
export function useQueueHistory(options?: { enabled?: boolean; refetchInterval?: number }): UseQueryResult<QueueHistoryIndex> {
const { enabled = true, refetchInterval = 0 } = options ?? {};
const projectPath = useWorkflowStore(selectProjectPath);
return useQuery({
queryKey: workspaceQueryKeys.issueQueueHistory(projectPath),
queryFn: () => fetchQueueHistory(projectPath),
staleTime: STALE_TIME,
enabled: enabled && !!projectPath,
refetchInterval: refetchInterval > 0 ? refetchInterval : false,
retry: 2,
});
}
export function useQueueMutations(): UseQueueMutationsReturn {
const queryClient = useQueryClient();
const projectPath = useWorkflowStore(selectProjectPath);
@@ -325,6 +346,7 @@ export function useQueueMutations(): UseQueueMutationsReturn {
mutationFn: (queueId: string) => activateQueue(queueId, projectPath),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueue(projectPath) });
+queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueueHistory(projectPath) });
},
});
@@ -332,6 +354,7 @@ export function useQueueMutations(): UseQueueMutationsReturn {
mutationFn: () => deactivateQueue(projectPath),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueue(projectPath) });
+queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueueHistory(projectPath) });
},
});
@@ -339,6 +362,7 @@ export function useQueueMutations(): UseQueueMutationsReturn {
mutationFn: (queueId: string) => deleteQueueApi(queueId, projectPath),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueue(projectPath) });
+queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueueHistory(projectPath) });
},
});
@@ -347,12 +371,30 @@ export function useQueueMutations(): UseQueueMutationsReturn {
mergeQueuesApi(sourceId, targetId, projectPath),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueue(projectPath) });
+queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueueHistory(projectPath) });
},
});
const splitMutation = useMutation({
mutationFn: ({ sourceQueueId, itemIds }: { sourceQueueId: string; itemIds: string[] }) =>
splitQueueApi(sourceQueueId, itemIds, projectPath),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueue(projectPath) });
queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueueHistory(projectPath) });
},
});
const reorderMutation = useMutation({
mutationFn: ({ groupId, newOrder }: { groupId: string; newOrder: string[] }) =>
reorderQueueGroupApi(projectPath, { groupId, newOrder }),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueue(projectPath) });
},
});
const moveMutation = useMutation({
mutationFn: ({ itemId, toGroupId, toIndex }: { itemId: string; toGroupId: string; toIndex?: number }) =>
moveQueueItemApi(projectPath, { itemId, toGroupId, toIndex }),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: workspaceQueryKeys.issueQueue(projectPath) });
},
@@ -364,12 +406,23 @@ export function useQueueMutations(): UseQueueMutationsReturn {
deleteQueue: deleteMutation.mutateAsync,
mergeQueues: (sourceId, targetId) => mergeMutation.mutateAsync({ sourceId, targetId }),
splitQueue: (sourceQueueId, itemIds) => splitMutation.mutateAsync({ sourceQueueId, itemIds }),
+reorderQueueGroup: (groupId, newOrder) => reorderMutation.mutateAsync({ groupId, newOrder }).then(() => {}),
+moveQueueItem: (itemId, toGroupId, toIndex) => moveMutation.mutateAsync({ itemId, toGroupId, toIndex }).then(() => {}),
isActivating: activateMutation.isPending,
isDeactivating: deactivateMutation.isPending,
isDeleting: deleteMutation.isPending,
isMerging: mergeMutation.isPending,
isSplitting: splitMutation.isPending,
-isMutating: activateMutation.isPending || deactivateMutation.isPending || deleteMutation.isPending || mergeMutation.isPending || splitMutation.isPending,
+isReordering: reorderMutation.isPending,
+isMoving: moveMutation.isPending,
+isMutating:
+activateMutation.isPending ||
+deactivateMutation.isPending ||
+deleteMutation.isPending ||
+mergeMutation.isPending ||
+splitMutation.isPending ||
+reorderMutation.isPending ||
+moveMutation.isPending,
};
}

View File

@@ -8,6 +8,7 @@ import { useNotificationStore } from '@/stores';
import { useExecutionStore } from '@/stores/executionStore';
import { useFlowStore } from '@/stores';
import { useCliStreamStore } from '@/stores/cliStreamStore';
+import { useCliSessionStore } from '@/stores/cliSessionStore';
import {
OrchestratorMessageSchema,
type OrchestratorWebSocketMessage,
@@ -28,6 +29,7 @@ function getStoreState() {
const execution = useExecutionStore.getState();
const flow = useFlowStore.getState();
const cliStream = useCliStreamStore.getState();
+const cliSessions = useCliSessionStore.getState();
return {
// Notification store
setWsStatus: notification.setWsStatus,
@@ -56,6 +58,11 @@ function getStoreState() {
updateNode: flow.updateNode,
// CLI stream store
addOutput: cliStream.addOutput,
+// CLI session store (PTY-backed terminal)
+upsertCliSession: cliSessions.upsertSession,
+removeCliSession: cliSessions.removeSession,
+appendCliSessionOutput: cliSessions.appendOutput,
};
}
@@ -163,6 +170,31 @@ export function useWebSocket(options: UseWebSocketOptions = {}): UseWebSocketRet
break;
}
// ========== PTY CLI Sessions ==========
case 'CLI_SESSION_CREATED': {
const session = data.payload?.session;
if (session?.sessionKey) {
stores.upsertCliSession(session);
}
break;
}
case 'CLI_SESSION_OUTPUT': {
const { sessionKey, data: chunk } = data.payload ?? {};
if (typeof sessionKey === 'string' && typeof chunk === 'string') {
stores.appendCliSessionOutput(sessionKey, chunk);
}
break;
}
case 'CLI_SESSION_CLOSED': {
const { sessionKey } = data.payload ?? {};
if (typeof sessionKey === 'string') {
stores.removeCliSession(sessionKey);
}
break;
}
case 'CLI_OUTPUT': {
const { executionId, chunkType, data: outputData, unit } = data.payload;
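The three new `CLI_SESSION_*` cases above consume payloads of roughly the following shapes. This is a sketch inferred from the handlers, not the actual `OrchestratorMessageSchema` definitions, which may carry additional fields:

```ts
// Sketch: payload shapes as read by the handlers above.
interface CliSessionCreatedPayload {
  session: CliSession; // upserted into useCliSessionStore
}

interface CliSessionOutputPayload {
  sessionKey: string; // which PTY session produced the chunk
  data: string;       // raw terminal output appended to that session's buffer
}

interface CliSessionClosedPayload {
  sessionKey: string; // session removed from the store
}
```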

View File

@@ -702,8 +702,9 @@ export interface QueueItem {
}
export interface IssueQueue {
-tasks: string[];
-solutions: string[];
+id?: string;
+tasks?: QueueItem[];
+solutions?: QueueItem[];
conflicts: string[];
execution_groups: string[];
grouped_items: Record<string, QueueItem[]>;
@@ -816,12 +817,38 @@ export async function pullIssuesFromGitHub(options: GitHubPullOptions = {}): Pro
}); });
} }
// ========== Queue History (Multi-Queue) ==========
export interface QueueHistoryEntry {
id: string;
created_at?: string;
updated_at?: string;
status?: string;
issue_ids?: string[];
total_tasks?: number;
completed_tasks?: number;
total_solutions?: number;
completed_solutions?: number;
[key: string]: unknown;
}
export interface QueueHistoryIndex {
queues: QueueHistoryEntry[];
active_queue_id: string | null;
active_queue_ids: string[];
}
export async function fetchQueueHistory(projectPath: string): Promise<QueueHistoryIndex> {
return fetchApi<QueueHistoryIndex>(`/api/queue/history?path=${encodeURIComponent(projectPath)}`);
}
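As a usage sketch (the project path and fallback logic are illustrative), a caller can resolve the active queue id from the index like this:

```ts
// Sketch: prefer the explicitly active queue, fall back to the newest entry.
const history = await fetchQueueHistory('/path/to/project');
const activeId =
  history.active_queue_id ??
  history.queues[history.queues.length - 1]?.id ??
  null;
```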
/**
* Activate a queue
*/
export async function activateQueue(queueId: string, projectPath: string): Promise<void> {
-return fetchApi<void>(`/api/queue/${encodeURIComponent(queueId)}/activate?path=${encodeURIComponent(projectPath)}`, {
+return fetchApi<void>(`/api/queue/activate?path=${encodeURIComponent(projectPath)}`, {
method: 'POST',
+body: JSON.stringify({ queueId }),
});
}
@@ -834,6 +861,32 @@ export async function deactivateQueue(projectPath: string): Promise<void> {
});
}
/**
* Reorder items within a single execution group
*/
export async function reorderQueueGroup(
projectPath: string,
input: { groupId: string; newOrder: string[] }
): Promise<{ success: boolean; groupId: string; reordered: number }> {
return fetchApi<{ success: boolean; groupId: string; reordered: number }>(
`/api/queue/reorder?path=${encodeURIComponent(projectPath)}`,
{ method: 'POST', body: JSON.stringify(input) }
);
}
/**
* Move an item across execution groups (and optionally insert at index)
*/
export async function moveQueueItem(
projectPath: string,
input: { itemId: string; toGroupId: string; toIndex?: number }
): Promise<{ success: boolean; itemId: string; fromGroupId: string; toGroupId: string }> {
return fetchApi<{ success: boolean; itemId: string; fromGroupId: string; toGroupId: string }>(
`/api/queue/move?path=${encodeURIComponent(projectPath)}`,
{ method: 'POST', body: JSON.stringify(input) }
);
}
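Together these two endpoints back the QueueBoard drag-and-drop handler shown earlier; a typical call sequence looks roughly like this (ids, path, and indices are placeholders):

```ts
// Sketch: persist a same-group reorder, then a cross-group move.
await reorderQueueGroup('/path/to/project', {
  groupId: 'group-1',
  newOrder: ['item-3', 'item-1', 'item-2'], // full new ordering of the group
});

await moveQueueItem('/path/to/project', {
  itemId: 'item-3',
  toGroupId: 'group-2',
  toIndex: 0, // optional insertion position in the target group
});
```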
/** /**
* Delete a queue * Delete a queue
*/ */
@@ -849,7 +902,7 @@ export async function deleteQueue(queueId: string, projectPath: string): Promise
export async function mergeQueues(sourceId: string, targetId: string, projectPath: string): Promise<void> {
return fetchApi<void>(`/api/queue/merge?path=${encodeURIComponent(projectPath)}`, {
method: 'POST',
-body: JSON.stringify({ sourceId, targetId }),
+body: JSON.stringify({ sourceQueueId: sourceId, targetQueueId: targetId }),
});
}
@@ -5630,3 +5683,91 @@ export async function fetchTeamStatus(
): Promise<{ members: Array<{ member: string; lastSeen: string; lastAction: string; messageCount: number }>; total_messages: number }> {
return fetchApi(`/api/teams/${encodeURIComponent(teamName)}/status`);
}
// ========== CLI Sessions (PTY) API ==========
export interface CliSession {
sessionKey: string;
shellKind: string;
workingDir: string;
tool?: string;
model?: string;
resumeKey?: string;
createdAt: string;
updatedAt: string;
}
export interface CreateCliSessionInput {
workingDir?: string;
cols?: number;
rows?: number;
preferredShell?: 'bash' | 'pwsh';
tool?: string;
model?: string;
resumeKey?: string;
}
export async function fetchCliSessions(): Promise<{ sessions: CliSession[] }> {
return fetchApi<{ sessions: CliSession[] }>('/api/cli-sessions');
}
export async function createCliSession(input: CreateCliSessionInput): Promise<{ success: boolean; session: CliSession }> {
return fetchApi<{ success: boolean; session: CliSession }>('/api/cli-sessions', {
method: 'POST',
body: JSON.stringify(input),
});
}
export async function fetchCliSessionBuffer(sessionKey: string): Promise<{ session: CliSession; buffer: string }> {
return fetchApi<{ session: CliSession; buffer: string }>(
`/api/cli-sessions/${encodeURIComponent(sessionKey)}/buffer`
);
}
export async function sendCliSessionText(
sessionKey: string,
input: { text: string; appendNewline?: boolean }
): Promise<{ success: boolean }> {
return fetchApi<{ success: boolean }>(`/api/cli-sessions/${encodeURIComponent(sessionKey)}/send`, {
method: 'POST',
body: JSON.stringify(input),
});
}
export interface ExecuteInCliSessionInput {
tool: string;
prompt: string;
mode?: 'analysis' | 'write' | 'auto';
model?: string;
workingDir?: string;
category?: 'user' | 'internal' | 'insight';
resumeKey?: string;
resumeStrategy?: 'nativeResume' | 'promptConcat';
}
export async function executeInCliSession(
sessionKey: string,
input: ExecuteInCliSessionInput
): Promise<{ success: boolean; executionId: string; command: string }> {
return fetchApi<{ success: boolean; executionId: string; command: string }>(
`/api/cli-sessions/${encodeURIComponent(sessionKey)}/execute`,
{ method: 'POST', body: JSON.stringify(input) }
);
}
export async function resizeCliSession(
sessionKey: string,
input: { cols: number; rows: number }
): Promise<{ success: boolean }> {
return fetchApi<{ success: boolean }>(`/api/cli-sessions/${encodeURIComponent(sessionKey)}/resize`, {
method: 'POST',
body: JSON.stringify(input),
});
}
export async function closeCliSession(sessionKey: string): Promise<{ success: boolean }> {
return fetchApi<{ success: boolean }>(`/api/cli-sessions/${encodeURIComponent(sessionKey)}/close`, {
method: 'POST',
body: JSON.stringify({}),
});
}
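Taken together, the endpoints above describe the whole PTY session lifecycle; a minimal end-to-end sketch (the working directory, prompt text, and resume key are placeholders) looks like this:

```ts
// Sketch: create a session, run a prompt in it, read the buffer back, close it.
const { session } = await createCliSession({
  workingDir: '/path/to/project',
  preferredShell: 'bash',
  cols: 120,
  rows: 30,
});

await executeInCliSession(session.sessionKey, {
  tool: 'claude',
  prompt: 'Summarize the failing tests in this repo.',
  mode: 'analysis',
  resumeKey: 'issue-123',         // any stable key used for resuming context
  resumeStrategy: 'nativeResume',
});

// Live output also arrives via the CLI_SESSION_OUTPUT WebSocket event;
// the buffer endpoint returns whatever has accumulated so far.
const { buffer } = await fetchCliSessionBuffer(session.sessionKey);
console.log(buffer);

await closeCliSession(session.sessionKey);
```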

View File

@@ -35,6 +35,7 @@ export const workspaceQueryKeys = {
issuesList: (projectPath: string) => [...workspaceQueryKeys.issues(projectPath), 'list'] as const,
issuesHistory: (projectPath: string) => [...workspaceQueryKeys.issues(projectPath), 'history'] as const,
issueQueue: (projectPath: string) => [...workspaceQueryKeys.issues(projectPath), 'queue'] as const,
+issueQueueHistory: (projectPath: string) => [...workspaceQueryKeys.issues(projectPath), 'queueHistory'] as const,
// ========== Discoveries ==========
discoveries: (projectPath: string) => ['workspace', projectPath, 'discoveries'] as const,

View File

@@ -89,6 +89,7 @@
"overview": "Overview", "overview": "Overview",
"solutions": "Solutions", "solutions": "Solutions",
"history": "History", "history": "History",
"terminal": "Terminal",
"json": "JSON" "json": "JSON"
}, },
"overview": { "overview": {
@@ -112,10 +113,40 @@
"empty": "No history yet" "empty": "No history yet"
} }
}, },
"terminal": {
"session": {
"select": "Select session",
"none": "No sessions",
"refresh": "Refresh",
"new": "New Session",
"close": "Close"
},
"exec": {
"tool": "Tool",
"mode": "Mode",
"resumeKey": "resumeKey",
"resumeStrategy": "resumeStrategy",
"prompt": {
"label": "Prompt",
"placeholder": "Type a prompt to execute in this session..."
},
"run": "Execute"
}
},
"queue": { "queue": {
"title": "Queue", "title": "Queue",
"pageTitle": "Issue Queue", "pageTitle": "Issue Queue",
"description": "Manage issue execution queue with execution groups", "description": "Manage issue execution queue with execution groups",
"history": {
"title": "Queue History",
"active": "Active",
"select": "Select queue",
"activate": "Activate",
"empty": "No queues"
},
"exec": {
"title": "Execute in Session"
},
"status": { "status": {
"pending": "Pending", "pending": "Pending",
"ready": "Ready", "ready": "Ready",
@@ -192,6 +223,7 @@
"tabs": { "tabs": {
"overview": "Overview", "overview": "Overview",
"tasks": "Tasks", "tasks": "Tasks",
"terminal": "Terminal",
"json": "JSON" "json": "JSON"
}, },
"overview": { "overview": {
@@ -313,8 +345,16 @@
"description": "Unified management for issues, queues, and discoveries", "description": "Unified management for issues, queues, and discoveries",
"tabs": { "tabs": {
"issues": "Issues", "issues": "Issues",
"board": "Board",
"queue": "Queue", "queue": "Queue",
"discovery": "Discovery" "discovery": "Discovery"
} }
},
"board": {
"pageTitle": "Issue Board",
"description": "Visualize and manage issues in a kanban board",
"autoStart": {
"label": "Auto-run when moved to In Progress"
}
}
}

View File

@@ -238,6 +238,12 @@
"condition": "Condition", "condition": "Condition",
"conditionPlaceholder": "e.g., {{prev.success}} === true", "conditionPlaceholder": "e.g., {{prev.success}} === true",
"artifacts": "Artifacts", "artifacts": "Artifacts",
"delivery": "Delivery",
"targetSessionKey": "Target Session",
"targetSessionKeyPlaceholder": "e.g., cli-session-... (from Issue Terminal tab)",
"resumeKey": "resumeKey",
"resumeKeyPlaceholder": "e.g., issue-123 or any stable key",
"resumeStrategy": "resumeStrategy",
"available": "Available:", "available": "Available:",
"variables": "Variables:", "variables": "Variables:",
"artifactsLabel": "Artifacts:", "artifactsLabel": "Artifacts:",
@@ -282,6 +288,11 @@
"modeWrite": "Write (modify files)", "modeWrite": "Write (modify files)",
"modeMainprocess": "Main Process (blocking)", "modeMainprocess": "Main Process (blocking)",
"modeAsync": "Async (non-blocking)" "modeAsync": "Async (non-blocking)"
,
"deliveryNewExecution": "New execution",
"deliverySendToSession": "Send to session",
"resumeStrategyNative": "nativeResume",
"resumeStrategyPromptConcat": "promptConcat"
}
}
}

View File

@@ -89,6 +89,7 @@
"overview": "概览", "overview": "概览",
"solutions": "解决方案", "solutions": "解决方案",
"history": "历史", "history": "历史",
"terminal": "终端",
"json": "JSON" "json": "JSON"
}, },
"overview": { "overview": {
@@ -112,10 +113,40 @@
"empty": "暂无历史记录" "empty": "暂无历史记录"
} }
}, },
"terminal": {
"session": {
"select": "选择会话",
"none": "暂无会话",
"refresh": "刷新",
"new": "新建会话",
"close": "关闭"
},
"exec": {
"tool": "工具",
"mode": "模式",
"resumeKey": "resumeKey",
"resumeStrategy": "resumeStrategy",
"prompt": {
"label": "提示词",
"placeholder": "输入要在该会话中执行的提示词..."
},
"run": "执行"
}
},
"queue": { "queue": {
"title": "队列", "title": "队列",
"pageTitle": "问题队列", "pageTitle": "问题队列",
"description": "管理问题执行队列和执行组", "description": "管理问题执行队列和执行组",
"history": {
"title": "队列历史",
"active": "当前",
"select": "选择队列",
"activate": "激活",
"empty": "暂无队列"
},
"exec": {
"title": "在会话中执行"
},
"status": { "status": {
"pending": "待处理", "pending": "待处理",
"ready": "就绪", "ready": "就绪",
@@ -192,6 +223,7 @@
"tabs": { "tabs": {
"overview": "概览", "overview": "概览",
"tasks": "任务", "tasks": "任务",
"terminal": "终端",
"json": "JSON" "json": "JSON"
}, },
"overview": { "overview": {
@@ -313,8 +345,16 @@
"description": "统一管理问题、队列和发现", "description": "统一管理问题、队列和发现",
"tabs": { "tabs": {
"issues": "问题列表", "issues": "问题列表",
"board": "看板",
"queue": "执行队列", "queue": "执行队列",
"discovery": "问题发现" "discovery": "问题发现"
} }
},
"board": {
"pageTitle": "问题看板",
"description": "以看板方式可视化管理问题",
"autoStart": {
"label": "拖到进行中自动执行"
}
}
}

View File

@@ -238,6 +238,12 @@
"condition": "条件", "condition": "条件",
"conditionPlaceholder": "例如: {{prev.success}} === true", "conditionPlaceholder": "例如: {{prev.success}} === true",
"artifacts": "产物", "artifacts": "产物",
"delivery": "投递方式",
"targetSessionKey": "目标会话",
"targetSessionKeyPlaceholder": "例如cli-session-...(从 Issue 终端页复制)",
"resumeKey": "resumeKey",
"resumeKeyPlaceholder": "例如issue-123 或任意稳定 key",
"resumeStrategy": "resumeStrategy",
"available": "可用:", "available": "可用:",
"variables": "变量:", "variables": "变量:",
"artifactsLabel": "产物:", "artifactsLabel": "产物:",
@@ -281,7 +287,11 @@
"modeAnalysis": "分析 (只读)", "modeAnalysis": "分析 (只读)",
"modeWrite": "写入 (修改文件)", "modeWrite": "写入 (修改文件)",
"modeMainprocess": "主进程 (阻塞)", "modeMainprocess": "主进程 (阻塞)",
"modeAsync": "异步 (非阻塞)" "modeAsync": "异步 (非阻塞)",
"deliveryNewExecution": "新执行",
"deliverySendToSession": "发送到会话",
"resumeStrategyNative": "nativeResume",
"resumeStrategyPromptConcat": "promptConcat"
} }
} }
} }

View File

@@ -4,6 +4,7 @@ import App from './App.tsx'
import './index.css' import './index.css'
import 'react-grid-layout/css/styles.css' import 'react-grid-layout/css/styles.css'
import 'react-resizable/css/styles.css' import 'react-resizable/css/styles.css'
import 'xterm/css/xterm.css'
import { initMessages, getInitialLocale, getMessages, type Locale } from './lib/i18n' import { initMessages, getInitialLocale, getMessages, type Locale } from './lib/i18n'
import { logWebVitals } from './lib/webVitals' import { logWebVitals } from './lib/webVitals'

View File

@@ -15,6 +15,7 @@ import {
import { IssueHubHeader } from '@/components/issue/hub/IssueHubHeader'; import { IssueHubHeader } from '@/components/issue/hub/IssueHubHeader';
import { IssueHubTabs, type IssueTab } from '@/components/issue/hub/IssueHubTabs'; import { IssueHubTabs, type IssueTab } from '@/components/issue/hub/IssueHubTabs';
import { IssuesPanel } from '@/components/issue/hub/IssuesPanel'; import { IssuesPanel } from '@/components/issue/hub/IssuesPanel';
import { IssueBoardPanel } from '@/components/issue/hub/IssueBoardPanel';
import { QueuePanel } from '@/components/issue/hub/QueuePanel'; import { QueuePanel } from '@/components/issue/hub/QueuePanel';
import { DiscoveryPanel } from '@/components/issue/hub/DiscoveryPanel'; import { DiscoveryPanel } from '@/components/issue/hub/DiscoveryPanel';
import { Button } from '@/components/ui/Button'; import { Button } from '@/components/ui/Button';
@@ -161,6 +162,7 @@ export function IssueHubPage() {
const renderActionButtons = () => { const renderActionButtons = () => {
switch (currentTab) { switch (currentTab) {
case 'issues': case 'issues':
case 'board':
return ( return (
<> <>
<Button variant="outline" onClick={handleIssuesRefresh} disabled={isFetchingIssues}> <Button variant="outline" onClick={handleIssuesRefresh} disabled={isFetchingIssues}>
@@ -212,6 +214,7 @@ export function IssueHubPage() {
<IssueHubTabs currentTab={currentTab} onTabChange={setCurrentTab} /> <IssueHubTabs currentTab={currentTab} onTabChange={setCurrentTab} />
{currentTab === 'issues' && <IssuesPanel onCreateIssue={() => setIsNewIssueOpen(true)} />} {currentTab === 'issues' && <IssuesPanel onCreateIssue={() => setIsNewIssueOpen(true)} />}
{currentTab === 'board' && <IssueBoardPanel />}
{currentTab === 'queue' && <QueuePanel />} {currentTab === 'queue' && <QueuePanel />}
{currentTab === 'discovery' && <DiscoveryPanel />} {currentTab === 'discovery' && <DiscoveryPanel />}

View File

@@ -1122,6 +1122,84 @@ function PromptTemplateProperties({ data, onChange }: PromptTemplatePropertiesPr
onChange={(artifacts) => onChange({ artifacts })} onChange={(artifacts) => onChange({ artifacts })}
/> />
</div> </div>
{/* CLI Session Routing (tmux-like) */}
{!isSlashCommandMode && (
<>
<div>
<label className="block text-sm font-medium text-foreground mb-1">
{formatMessage({ id: 'orchestrator.propertyPanel.delivery' })}
</label>
<select
value={(data.delivery as string) || 'newExecution'}
onChange={(e) => {
const next = e.target.value as 'newExecution' | 'sendToSession';
const updates: Partial<PromptTemplateNodeData> = { delivery: next };
if (next !== 'sendToSession') {
updates.targetSessionKey = undefined;
updates.resumeKey = undefined;
updates.resumeStrategy = undefined;
}
onChange(updates);
}}
className="w-full h-10 px-3 rounded-md border border-border bg-background text-foreground text-sm"
>
<option value="newExecution">
{formatMessage({ id: 'orchestrator.propertyPanel.options.deliveryNewExecution' })}
</option>
<option value="sendToSession">
{formatMessage({ id: 'orchestrator.propertyPanel.options.deliverySendToSession' })}
</option>
</select>
</div>
{((data.delivery as string) || 'newExecution') === 'sendToSession' && (
<>
<div>
<label className="block text-sm font-medium text-foreground mb-1">
{formatMessage({ id: 'orchestrator.propertyPanel.targetSessionKey' })}
</label>
<Input
value={(data.targetSessionKey as string) || ''}
onChange={(e) => onChange({ targetSessionKey: e.target.value || undefined })}
placeholder={formatMessage({ id: 'orchestrator.propertyPanel.targetSessionKeyPlaceholder' })}
className="font-mono text-sm"
/>
</div>
<div>
<label className="block text-sm font-medium text-foreground mb-1">
{formatMessage({ id: 'orchestrator.propertyPanel.resumeKey' })}
</label>
<Input
value={(data.resumeKey as string) || ''}
onChange={(e) => onChange({ resumeKey: e.target.value || undefined })}
placeholder={formatMessage({ id: 'orchestrator.propertyPanel.resumeKeyPlaceholder' })}
className="font-mono text-sm"
/>
</div>
<div>
<label className="block text-sm font-medium text-foreground mb-1">
{formatMessage({ id: 'orchestrator.propertyPanel.resumeStrategy' })}
</label>
<select
value={(data.resumeStrategy as string) || 'nativeResume'}
onChange={(e) => onChange({ resumeStrategy: e.target.value as any })}
className="w-full h-10 px-3 rounded-md border border-border bg-background text-foreground text-sm"
>
<option value="nativeResume">
{formatMessage({ id: 'orchestrator.propertyPanel.options.resumeStrategyNative' })}
</option>
<option value="promptConcat">
{formatMessage({ id: 'orchestrator.propertyPanel.options.resumeStrategyPromptConcat' })}
</option>
</select>
</div>
</>
)}
</>
)}
</CollapsibleSection> </CollapsibleSection>
</div> </div>
); );

View File

@@ -0,0 +1,132 @@
// ========================================
// CLI Session Store (PTY-backed terminals)
// ========================================
// Zustand store for managing PTY session metadata and output chunks.
import { create } from 'zustand';
import { devtools } from 'zustand/middleware';
export interface CliSessionMeta {
sessionKey: string;
shellKind: string;
workingDir: string;
tool?: string;
model?: string;
resumeKey?: string;
createdAt: string;
updatedAt: string;
}
export interface CliSessionOutputChunk {
data: string;
timestamp: number;
}
interface CliSessionState {
sessions: Record<string, CliSessionMeta>;
outputChunks: Record<string, CliSessionOutputChunk[]>;
outputBytes: Record<string, number>;
setSessions: (sessions: CliSessionMeta[]) => void;
upsertSession: (session: CliSessionMeta) => void;
removeSession: (sessionKey: string) => void;
setBuffer: (sessionKey: string, buffer: string) => void;
appendOutput: (sessionKey: string, data: string, timestamp?: number) => void;
clearOutput: (sessionKey: string) => void;
}
const MAX_OUTPUT_BYTES_PER_SESSION = 2 * 1024 * 1024; // 2MB
const utf8Encoder = new TextEncoder();
function utf8ByteLength(value: string): number {
// Browser-safe alternative to Buffer.byteLength
return utf8Encoder.encode(value).length;
}
export const useCliSessionStore = create<CliSessionState>()(
devtools(
(set, get) => ({
sessions: {},
outputChunks: {},
outputBytes: {},
setSessions: (sessions) =>
set((state) => {
const nextSessions: Record<string, CliSessionMeta> = {};
for (const session of sessions) {
nextSessions[session.sessionKey] = session;
}
const keepKeys = new Set(Object.keys(nextSessions));
const nextChunks = { ...state.outputChunks };
const nextBytes = { ...state.outputBytes };
for (const key of Object.keys(nextChunks)) {
if (!keepKeys.has(key)) delete nextChunks[key];
}
for (const key of Object.keys(nextBytes)) {
if (!keepKeys.has(key)) delete nextBytes[key];
}
return { sessions: nextSessions, outputChunks: nextChunks, outputBytes: nextBytes };
}),
upsertSession: (session) =>
set((state) => ({
sessions: { ...state.sessions, [session.sessionKey]: session },
})),
removeSession: (sessionKey) =>
set((state) => {
const nextSessions = { ...state.sessions };
const nextChunks = { ...state.outputChunks };
const nextBytes = { ...state.outputBytes };
delete nextSessions[sessionKey];
delete nextChunks[sessionKey];
delete nextBytes[sessionKey];
return { sessions: nextSessions, outputChunks: nextChunks, outputBytes: nextBytes };
}),
setBuffer: (sessionKey, buffer) =>
set((state) => ({
outputChunks: {
...state.outputChunks,
[sessionKey]: buffer ? [{ data: buffer, timestamp: Date.now() }] : [],
},
outputBytes: {
...state.outputBytes,
[sessionKey]: buffer ? utf8ByteLength(buffer) : 0,
},
})),
appendOutput: (sessionKey, data, timestamp = Date.now()) => {
if (!data) return;
const chunkBytes = utf8ByteLength(data);
const { outputChunks, outputBytes } = get();
const existingChunks = outputChunks[sessionKey] ?? [];
const existingBytes = outputBytes[sessionKey] ?? 0;
const nextChunks = [...existingChunks, { data, timestamp }];
let nextBytes = existingBytes + chunkBytes;
// Ring-buffer by bytes
while (nextBytes > MAX_OUTPUT_BYTES_PER_SESSION && nextChunks.length > 0) {
const removed = nextChunks.shift();
if (removed) nextBytes -= utf8ByteLength(removed.data);
}
set((state) => ({
outputChunks: { ...state.outputChunks, [sessionKey]: nextChunks },
outputBytes: { ...state.outputBytes, [sessionKey]: Math.max(0, nextBytes) },
}));
},
clearOutput: (sessionKey) =>
set((state) => ({
outputChunks: { ...state.outputChunks, [sessionKey]: [] },
outputBytes: { ...state.outputBytes, [sessionKey]: 0 },
})),
}),
{ name: 'cliSessionStore' }
)
);
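
A minimal consumption sketch (not part of this commit), assuming the store is exported from `@/stores/cliSessionStore` and that `CLI_SESSION_OUTPUT` WebSocket messages carry `{ sessionKey, data }` in their payload:

```ts
// Sketch only: how a terminal view might read from and feed the store.
// The import path and message shape are assumptions, not part of the diff.
import { useCliSessionStore } from '@/stores/cliSessionStore';

// Selector hook: join buffered chunks back into the raw PTY stream.
export function useSessionOutput(sessionKey: string): string {
  const chunks = useCliSessionStore((s) => s.outputChunks[sessionKey]) ?? [];
  return chunks.map((c) => c.data).join('');
}

// Feed incoming WebSocket output into the store; old chunks are evicted
// once the 2MB per-session cap is exceeded.
export function wireSessionSocket(ws: WebSocket): () => void {
  const onMessage = (event: MessageEvent) => {
    const msg = JSON.parse(String(event.data));
    if (msg.type === 'CLI_SESSION_OUTPUT') {
      useCliSessionStore.getState().appendOutput(msg.payload.sessionKey, msg.payload.data);
    }
  };
  ws.addEventListener('message', onMessage);
  return () => ws.removeEventListener('message', onMessage);
}
```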

View File

@@ -89,6 +89,28 @@ export interface PromptTemplateNodeData {
*/ */
mode?: ExecutionMode; mode?: ExecutionMode;
/**
* Delivery target for CLI-mode execution.
* - newExecution: spawn a fresh CLI execution (default)
* - sendToSession: route to a PTY session (tmux-like send)
*/
delivery?: 'newExecution' | 'sendToSession';
/**
* When delivery=sendToSession, route execution to this PTY session key.
*/
targetSessionKey?: string;
/**
* Optional logical resume key for chaining executions.
*/
resumeKey?: string;
/**
* Optional resume mapping strategy.
*/
resumeStrategy?: 'nativeResume' | 'promptConcat';
/** /**
* References to outputs from previous steps * References to outputs from previous steps
* Use the outputName values from earlier nodes * Use the outputName values from earlier nodes

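
For reference, a node configured for session routing might look like the sketch below. The session key value is illustrative (it only mirrors the `cli-session-<timestamp>-<hex>` format produced by the session manager), and the resume key is a made-up example:

```ts
// Illustrative only: a prompt-template node that routes its CLI execution
// into an existing PTY session instead of spawning a new execution.
const nodeData: Partial<PromptTemplateNodeData> = {
  delivery: 'sendToSession',
  targetSessionKey: 'cli-session-1700000000000-abcd1234', // copied from the Issue Terminal tab
  resumeKey: 'issue-123',         // stable key so later nodes chain onto the same CLI conversation
  resumeStrategy: 'nativeResume', // or 'promptConcat' to force --no-native
};
```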
View File

@@ -0,0 +1,153 @@
/**
* CLI Sessions (PTY) Routes Module
* Independent from existing /api/cli/* execution endpoints.
*
* Endpoints:
* - GET /api/cli-sessions
* - POST /api/cli-sessions
* - GET /api/cli-sessions/:sessionKey/buffer
* - POST /api/cli-sessions/:sessionKey/send
* - POST /api/cli-sessions/:sessionKey/execute
* - POST /api/cli-sessions/:sessionKey/resize
* - POST /api/cli-sessions/:sessionKey/close
*/
import type { RouteContext } from './types.js';
import { getCliSessionManager } from '../services/cli-session-manager.js';
export async function handleCliSessionsRoutes(ctx: RouteContext): Promise<boolean> {
const { pathname, req, res, handlePostRequest, initialPath } = ctx;
const manager = getCliSessionManager(process.cwd());
// GET /api/cli-sessions
if (pathname === '/api/cli-sessions' && req.method === 'GET') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ sessions: manager.listSessions() }));
return true;
}
// POST /api/cli-sessions
if (pathname === '/api/cli-sessions' && req.method === 'POST') {
handlePostRequest(req, res, async (body: unknown) => {
const {
workingDir,
cols,
rows,
preferredShell,
tool,
model,
resumeKey
} = (body || {}) as any;
const session = manager.createSession({
workingDir: workingDir || initialPath,
cols: typeof cols === 'number' ? cols : undefined,
rows: typeof rows === 'number' ? rows : undefined,
preferredShell: preferredShell === 'pwsh' ? 'pwsh' : 'bash',
tool,
model,
resumeKey
});
return { success: true, session };
});
return true;
}
// GET /api/cli-sessions/:sessionKey/buffer
const bufferMatch = pathname.match(/^\/api\/cli-sessions\/([^/]+)\/buffer$/);
if (bufferMatch && req.method === 'GET') {
const sessionKey = decodeURIComponent(bufferMatch[1]);
const session = manager.getSession(sessionKey);
if (!session) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Session not found' }));
return true;
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ session, buffer: manager.getBuffer(sessionKey) }));
return true;
}
// POST /api/cli-sessions/:sessionKey/send
const sendMatch = pathname.match(/^\/api\/cli-sessions\/([^/]+)\/send$/);
if (sendMatch && req.method === 'POST') {
const sessionKey = decodeURIComponent(sendMatch[1]);
handlePostRequest(req, res, async (body: unknown) => {
const { text, appendNewline } = (body || {}) as any;
if (typeof text !== 'string') {
return { error: 'text is required', status: 400 };
}
manager.sendText(sessionKey, text, appendNewline !== false);
return { success: true };
});
return true;
}
// POST /api/cli-sessions/:sessionKey/execute
const executeMatch = pathname.match(/^\/api\/cli-sessions\/([^/]+)\/execute$/);
if (executeMatch && req.method === 'POST') {
const sessionKey = decodeURIComponent(executeMatch[1]);
handlePostRequest(req, res, async (body: unknown) => {
const {
tool,
prompt,
mode,
model,
workingDir,
category,
resumeKey,
resumeStrategy
} = (body || {}) as any;
if (!tool || typeof tool !== 'string') {
return { error: 'tool is required', status: 400 };
}
if (!prompt || typeof prompt !== 'string') {
return { error: 'prompt is required', status: 400 };
}
const result = manager.execute(sessionKey, {
tool,
prompt,
mode,
model,
workingDir,
category,
resumeKey,
resumeStrategy: resumeStrategy === 'promptConcat' ? 'promptConcat' : 'nativeResume'
});
return { success: true, ...result };
});
return true;
}
// POST /api/cli-sessions/:sessionKey/resize
const resizeMatch = pathname.match(/^\/api\/cli-sessions\/([^/]+)\/resize$/);
if (resizeMatch && req.method === 'POST') {
const sessionKey = decodeURIComponent(resizeMatch[1]);
handlePostRequest(req, res, async (body: unknown) => {
const { cols, rows } = (body || {}) as any;
if (typeof cols !== 'number' || typeof rows !== 'number') {
return { error: 'cols and rows are required', status: 400 };
}
manager.resize(sessionKey, cols, rows);
return { success: true };
});
return true;
}
// POST /api/cli-sessions/:sessionKey/close
const closeMatch = pathname.match(/^\/api\/cli-sessions\/([^/]+)\/close$/);
if (closeMatch && req.method === 'POST') {
const sessionKey = decodeURIComponent(closeMatch[1]);
manager.close(sessionKey);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ success: true }));
return true;
}
return false;
}
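
A sketch of driving these endpoints from the dashboard side (base URL, tool, and prompt values are illustrative):

```ts
// Sketch: create a PTY session, execute a CLI tool in it, then read the buffer.
async function demoCliSession(baseUrl: string): Promise<void> {
  // 1) Create a PTY-backed session (bash preferred, falls back per platform).
  const createRes = await fetch(`${baseUrl}/api/cli-sessions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ preferredShell: 'bash', cols: 120, rows: 30 }),
  });
  const { session } = await createRes.json();

  // 2) Execute a tool inside that session, chaining via a resume key.
  await fetch(`${baseUrl}/api/cli-sessions/${encodeURIComponent(session.sessionKey)}/execute`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ tool: 'codex', prompt: 'summarize recent changes', mode: 'analysis', resumeKey: 'issue-123' }),
  });

  // 3) Fetch the accumulated terminal buffer (live output also arrives over WebSocket).
  const bufferRes = await fetch(`${baseUrl}/api/cli-sessions/${encodeURIComponent(session.sessionKey)}/buffer`);
  const { buffer } = await bufferRes.json();
  console.log(buffer.slice(-500));
}
```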

View File

@@ -390,7 +390,7 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
// GET /api/queue/:id or /api/issues/queue/:id - Get specific queue by ID // GET /api/queue/:id or /api/issues/queue/:id - Get specific queue by ID
const queueDetailMatch = normalizedPath?.match(/^\/api\/queue\/([^/]+)$/); const queueDetailMatch = normalizedPath?.match(/^\/api\/queue\/([^/]+)$/);
const reservedQueuePaths = ['history', 'reorder', 'switch', 'deactivate', 'merge', 'activate']; const reservedQueuePaths = ['history', 'reorder', 'move', 'switch', 'deactivate', 'merge', 'activate'];
if (queueDetailMatch && req.method === 'GET' && !reservedQueuePaths.includes(queueDetailMatch[1])) { if (queueDetailMatch && req.method === 'GET' && !reservedQueuePaths.includes(queueDetailMatch[1])) {
const queueId = queueDetailMatch[1]; const queueId = queueDetailMatch[1];
const queuesDir = join(issuesDir, 'queues'); const queuesDir = join(issuesDir, 'queues');
@@ -592,6 +592,89 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
return true; return true;
} }
// POST /api/queue/move - Move an item to a different execution_group (and optionally insert at index)
if (normalizedPath === '/api/queue/move' && req.method === 'POST') {
handlePostRequest(req, res, async (body: any) => {
const { itemId, toGroupId, toIndex } = body;
if (!itemId || !toGroupId) {
return { error: 'itemId and toGroupId required' };
}
const queue = readQueue(issuesDir);
const items = getQueueItems(queue);
const isSolutionBased = isSolutionBasedQueue(queue);
const itemIndex = items.findIndex((i: any) => i.item_id === itemId);
if (itemIndex === -1) return { error: `Item ${itemId} not found` };
const moved = { ...items[itemIndex] };
const fromGroupId = moved.execution_group || 'ungrouped';
// Build per-group ordered lists based on current execution_order
const groupToIds = new Map<string, string[]>();
const sorted = [...items].sort((a: any, b: any) => (a.execution_order || 0) - (b.execution_order || 0));
for (const it of sorted) {
const gid = it.execution_group || 'ungrouped';
if (!groupToIds.has(gid)) groupToIds.set(gid, []);
groupToIds.get(gid)!.push(it.item_id);
}
// Remove from old group
const fromList = groupToIds.get(fromGroupId) || [];
groupToIds.set(fromGroupId, fromList.filter((id) => id !== itemId));
// Insert into target group
const targetList = groupToIds.get(toGroupId) || [];
const insertAt = typeof toIndex === 'number' ? Math.max(0, Math.min(targetList.length, toIndex)) : targetList.length;
const nextTarget = [...targetList];
nextTarget.splice(insertAt, 0, itemId);
groupToIds.set(toGroupId, nextTarget);
moved.execution_group = toGroupId;
const itemMap = new Map(items.map((i: any) => [i.item_id, i]));
itemMap.set(itemId, moved);
const groupIds = Array.from(groupToIds.keys());
groupIds.sort((a, b) => {
        const aGroup = parseInt(a.match(/\d+/)?.[0] || '999', 10);
        const bGroup = parseInt(b.match(/\d+/)?.[0] || '999', 10);
if (aGroup !== bGroup) return aGroup - bGroup;
return a.localeCompare(b);
});
const nextItems: any[] = [];
const seen = new Set<string>();
for (const gid of groupIds) {
const ids = groupToIds.get(gid) || [];
for (const id of ids) {
const it = itemMap.get(id);
if (!it) continue;
if (seen.has(id)) continue;
seen.add(id);
nextItems.push(it);
}
}
// Fallback: append any missing items
for (const it of items) {
if (!seen.has(it.item_id)) nextItems.push(it);
}
nextItems.forEach((it, idx) => { it.execution_order = idx + 1; });
if (isSolutionBased) {
queue.solutions = nextItems;
} else {
queue.tasks = nextItems;
}
writeQueue(issuesDir, queue);
return { success: true, itemId, fromGroupId, toGroupId };
});
return true;
}
// DELETE /api/queue/:queueId/item/:itemId or /api/issues/queue/:queueId/item/:itemId // DELETE /api/queue/:queueId/item/:itemId or /api/issues/queue/:queueId/item/:itemId
const queueItemDeleteMatch = normalizedPath?.match(/^\/api\/queue\/([^/]+)\/item\/([^/]+)$/); const queueItemDeleteMatch = normalizedPath?.match(/^\/api\/queue\/([^/]+)\/item\/([^/]+)$/);
if (queueItemDeleteMatch && req.method === 'DELETE') { if (queueItemDeleteMatch && req.method === 'DELETE') {

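
An illustrative request against the new move endpoint (item and group ids are hypothetical); the handler rebuilds per-group ordering and renumbers `execution_order` from 1 across all groups:

```ts
// Sketch only: move queue item "task-7" into "group-2" at position 0.
async function moveQueueItem(): Promise<void> {
  await fetch('/api/queue/move', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ itemId: 'task-7', toGroupId: 'group-2', toIndex: 0 }),
  });
}
```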
View File

@@ -104,6 +104,28 @@ export interface PromptTemplateNodeData {
*/ */
mode?: ExecutionMode; mode?: ExecutionMode;
/**
* Delivery target for CLI-mode execution.
* - newExecution: spawn a fresh CLI execution (default)
* - sendToSession: route to a PTY session (tmux-like send)
*/
delivery?: 'newExecution' | 'sendToSession';
/**
* When delivery=sendToSession, route execution to this PTY session key.
*/
targetSessionKey?: string;
/**
* Optional logical resume key for chaining executions.
*/
resumeKey?: string;
/**
* Optional resume mapping strategy.
*/
resumeStrategy?: 'nativeResume' | 'promptConcat';
/** /**
* References to outputs from previous steps * References to outputs from previous steps
* Use the outputName values from earlier nodes * Use the outputName values from earlier nodes

View File

@@ -8,6 +8,7 @@ import { resolvePath, getRecentPaths, normalizePathForDisplay } from '../utils/p
import { handleStatusRoutes } from './routes/status-routes.js'; import { handleStatusRoutes } from './routes/status-routes.js';
import { handleCliRoutes, cleanupStaleExecutions } from './routes/cli-routes.js'; import { handleCliRoutes, cleanupStaleExecutions } from './routes/cli-routes.js';
import { handleCliSettingsRoutes } from './routes/cli-settings-routes.js'; import { handleCliSettingsRoutes } from './routes/cli-settings-routes.js';
import { handleCliSessionsRoutes } from './routes/cli-sessions-routes.js';
import { handleProviderRoutes } from './routes/provider-routes.js'; import { handleProviderRoutes } from './routes/provider-routes.js';
import { handleMemoryRoutes } from './routes/memory-routes.js'; import { handleMemoryRoutes } from './routes/memory-routes.js';
import { handleCoreMemoryRoutes } from './routes/core-memory-routes.js'; import { handleCoreMemoryRoutes } from './routes/core-memory-routes.js';
@@ -591,6 +592,11 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Ser
if (await handleDashboardRoutes(routeContext)) return; if (await handleDashboardRoutes(routeContext)) return;
} }
// CLI sessions (PTY) routes (/api/cli-sessions/*) - independent from /api/cli/*
if (pathname.startsWith('/api/cli-sessions')) {
if (await handleCliSessionsRoutes(routeContext)) return;
}
// CLI routes (/api/cli/*) // CLI routes (/api/cli/*)
if (pathname.startsWith('/api/cli/')) { if (pathname.startsWith('/api/cli/')) {
// CLI Settings routes first (more specific path /api/cli/settings/*) // CLI Settings routes first (more specific path /api/cli/settings/*)

View File

@@ -0,0 +1,110 @@
import path from 'path';
export type CliSessionShellKind = 'wsl-bash' | 'git-bash' | 'pwsh';
export type CliSessionResumeStrategy = 'nativeResume' | 'promptConcat';
export interface CliSessionExecuteCommandInput {
projectRoot: string;
shellKind: CliSessionShellKind;
tool: string;
prompt: string;
mode?: 'analysis' | 'write' | 'auto';
model?: string;
workingDir?: string;
category?: 'user' | 'internal' | 'insight';
resumeStrategy?: CliSessionResumeStrategy;
prevExecutionId?: string;
executionId: string;
}
export interface CliSessionExecuteCommandOutput {
command: string;
}
function toPosixPath(p: string): string {
return p.replace(/\\/g, '/');
}
function toWslPath(winPath: string): string {
const normalized = winPath.replace(/\\/g, '/').replace(/\/+/g, '/');
const driveMatch = normalized.match(/^([a-zA-Z]):\/(.*)$/);
if (!driveMatch) return normalized;
return `/mnt/${driveMatch[1].toLowerCase()}/${driveMatch[2]}`;
}
function escapeArg(value: string): string {
// Minimal quoting that works in pwsh + bash.
// We intentionally avoid escaping with platform-specific rules; values are expected to be simple (paths/tool/model).
if (!value) return '""';
if (/[\s"]/g.test(value)) {
return `"${value.replaceAll('"', '\\"')}"`;
}
return value;
}
export function buildCliSessionExecuteCommand(input: CliSessionExecuteCommandInput): CliSessionExecuteCommandOutput {
const {
projectRoot,
shellKind,
tool,
prompt,
mode = 'analysis',
model,
workingDir,
category = 'user',
resumeStrategy = 'nativeResume',
prevExecutionId,
executionId
} = input;
const nodeExe = shellKind === 'wsl-bash' ? 'node.exe' : 'node';
const ccwScriptWin = path.join(projectRoot, 'ccw', 'bin', 'ccw.js');
const ccwScriptPosix = toPosixPath(ccwScriptWin);
const ccwScriptWsl = toWslPath(ccwScriptPosix);
// In WSL we prefer running the Windows Node (`node.exe`) for compatibility
// (no dependency on Node being installed inside the Linux distro). However,
// Windows executables do not reliably understand `/mnt/*` paths, so we convert
// to Windows paths at runtime via `wslpath -w`.
const wslPreambleParts: string[] = [];
if (shellKind === 'wsl-bash') {
wslPreambleParts.push(`CCW_WIN=$(wslpath -w ${escapeArg(ccwScriptWsl)})`);
if (workingDir) {
const wdWsl = toWslPath(toPosixPath(workingDir));
wslPreambleParts.push(`WD_WIN=$(wslpath -w ${escapeArg(wdWsl)})`);
}
}
const wslPreamble = wslPreambleParts.length > 0 ? `${wslPreambleParts.join('; ')}; ` : '';
const cdArg =
workingDir
? shellKind === 'wsl-bash'
? ' --cd "$WD_WIN"'
: ` --cd ${escapeArg(toPosixPath(workingDir))}`
: '';
const modelArg = model ? ` --model ${escapeArg(model)}` : '';
const resumeArg = prevExecutionId ? ` --resume ${escapeArg(prevExecutionId)}` : '';
const noNativeArg = resumeStrategy === 'promptConcat' ? ' --no-native' : '';
// Pipe prompt through stdin so multi-line works without shell-dependent quoting.
// Base64 avoids escaping issues; decode is performed by node itself.
const promptB64 = Buffer.from(prompt, 'utf8').toString('base64');
const decodeCmd = `${nodeExe} -e "process.stdout.write(Buffer.from('${promptB64}','base64'))"`;
const ccwTarget = shellKind === 'wsl-bash' ? '"$CCW_WIN"' : escapeArg(ccwScriptPosix);
const ccwCmd =
`${nodeExe} ${ccwTarget} cli` +
` --tool ${escapeArg(tool)}` +
` --mode ${escapeArg(mode)}` +
`${modelArg}` +
`${cdArg}` +
` --category ${escapeArg(category)}` +
` --stream` +
` --id ${escapeArg(executionId)}` +
`${resumeArg}` +
`${noNativeArg}`;
return { command: `${wslPreamble}${decodeCmd} | ${ccwCmd}` };
}
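
A quick sketch of what the builder emits for a plain pwsh session (paths and ids illustrative; the WSL variant is exercised by the unit tests further below):

```ts
// Sketch: build a command for a pwsh session and inspect the result.
import { buildCliSessionExecuteCommand } from './cli-session-command-builder.js';

const { command } = buildCliSessionExecuteCommand({
  projectRoot: 'D:\\proj',
  shellKind: 'pwsh',
  tool: 'codex',
  prompt: 'explain this repo',
  mode: 'analysis',
  executionId: 'exec-1',
});
// Roughly: node -e "process.stdout.write(Buffer.from('<base64>','base64'))" |
//          node D:/proj/ccw/bin/ccw.js cli --tool codex --mode analysis --category user --stream --id exec-1
console.log(command);
```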

View File

@@ -0,0 +1,380 @@
import { existsSync } from 'fs';
import os from 'os';
import path from 'path';
import { randomBytes } from 'crypto';
import { spawnSync } from 'child_process';
import * as nodePty from 'node-pty';
import { EventEmitter } from 'events';
import { broadcastToClients } from '../websocket.js';
import {
buildCliSessionExecuteCommand,
type CliSessionShellKind,
type CliSessionResumeStrategy
} from './cli-session-command-builder.js';
import { getCliSessionPolicy } from './cli-session-policy.js';
export interface CliSession {
sessionKey: string;
shellKind: CliSessionShellKind;
workingDir: string;
tool?: string;
model?: string;
resumeKey?: string;
createdAt: string;
updatedAt: string;
}
export interface CreateCliSessionOptions {
workingDir: string;
cols?: number;
rows?: number;
preferredShell?: 'bash' | 'pwsh';
tool?: string;
model?: string;
resumeKey?: string;
}
export interface ExecuteInCliSessionOptions {
tool: string;
prompt: string;
mode?: 'analysis' | 'write' | 'auto';
model?: string;
workingDir?: string;
category?: 'user' | 'internal' | 'insight';
resumeKey?: string;
resumeStrategy?: CliSessionResumeStrategy;
}
export interface CliSessionOutputEvent {
sessionKey: string;
data: string;
timestamp: string;
}
interface CliSessionInternal extends CliSession {
pty: nodePty.IPty;
buffer: string[];
bufferBytes: number;
lastActivityAt: number;
}
function nowIso(): string {
return new Date().toISOString();
}
function createSessionKey(): string {
const suffix = randomBytes(4).toString('hex');
return `cli-session-${Date.now()}-${suffix}`;
}
function normalizeWorkingDir(workingDir: string): string {
return path.resolve(workingDir);
}
function findGitBashExe(): string | null {
const candidates = [
'C:\\\\Program Files\\\\Git\\\\bin\\\\bash.exe',
'C:\\\\Program Files\\\\Git\\\\usr\\\\bin\\\\bash.exe',
'C:\\\\Program Files (x86)\\\\Git\\\\bin\\\\bash.exe',
'C:\\\\Program Files (x86)\\\\Git\\\\usr\\\\bin\\\\bash.exe'
];
for (const candidate of candidates) {
if (existsSync(candidate)) return candidate;
}
try {
const where = spawnSync('where', ['bash'], { encoding: 'utf8', windowsHide: true });
if (where.status === 0) {
const lines = (where.stdout || '').split(/\r?\n/).map(l => l.trim()).filter(Boolean);
const gitBash = lines.find(l => /\\Git\\.*\\bash\.exe$/i.test(l));
return gitBash || (lines[0] || null);
}
} catch {
// ignore
}
return null;
}
function isWslAvailable(): boolean {
try {
const probe = spawnSync('wsl.exe', ['-e', 'bash', '-lc', 'echo ok'], {
encoding: 'utf8',
windowsHide: true,
timeout: 1500
});
return probe.status === 0;
} catch {
return false;
}
}
function pickShell(preferred: 'bash' | 'pwsh'): { shellKind: CliSessionShellKind; file: string; args: string[] } {
if (os.platform() === 'win32') {
if (preferred === 'bash') {
if (isWslAvailable()) {
return { shellKind: 'wsl-bash', file: 'wsl.exe', args: ['-e', 'bash', '-l', '-i'] };
}
const gitBash = findGitBashExe();
if (gitBash) {
return { shellKind: 'git-bash', file: gitBash, args: ['-l', '-i'] };
}
}
    // Fallback: PowerShell (pwsh preferred, Windows PowerShell as final)
const pwsh = spawnSync('where', ['pwsh'], { encoding: 'utf8', windowsHide: true });
if (pwsh.status === 0) {
return { shellKind: 'pwsh', file: 'pwsh', args: ['-NoLogo'] };
}
return { shellKind: 'pwsh', file: 'powershell', args: ['-NoLogo'] };
}
// Non-Windows: keep it simple (bash-first)
if (preferred === 'pwsh') {
return { shellKind: 'pwsh', file: 'pwsh', args: ['-NoLogo'] };
}
return { shellKind: 'git-bash', file: 'bash', args: ['-l', '-i'] };
}
function toWslPath(winPath: string): string {
const normalized = winPath.replace(/\\/g, '/');
const driveMatch = normalized.match(/^([a-zA-Z]):\/(.*)$/);
if (!driveMatch) return normalized;
return `/mnt/${driveMatch[1].toLowerCase()}/${driveMatch[2]}`;
}
export class CliSessionManager {
private sessions = new Map<string, CliSessionInternal>();
private resumeKeyLastExecution = new Map<string, string>();
private projectRoot: string;
private emitter = new EventEmitter();
private maxBufferBytes: number;
constructor(projectRoot: string) {
this.projectRoot = projectRoot;
this.maxBufferBytes = getCliSessionPolicy().maxBufferBytes;
}
listSessions(): CliSession[] {
return Array.from(this.sessions.values()).map(({ pty: _pty, buffer: _buffer, bufferBytes: _bytes, ...rest }) => rest);
}
getSession(sessionKey: string): CliSession | null {
const session = this.sessions.get(sessionKey);
if (!session) return null;
const { pty: _pty, buffer: _buffer, bufferBytes: _bytes, ...rest } = session;
return rest;
}
getBuffer(sessionKey: string): string {
const session = this.sessions.get(sessionKey);
if (!session) return '';
return session.buffer.join('');
}
createSession(options: CreateCliSessionOptions): CliSession {
const workingDir = normalizeWorkingDir(options.workingDir);
const preferredShell = options.preferredShell ?? 'bash';
const { shellKind, file, args } = pickShell(preferredShell);
const sessionKey = createSessionKey();
const createdAt = nowIso();
const pty = nodePty.spawn(file, args, {
name: 'xterm-256color',
cols: options.cols ?? 120,
rows: options.rows ?? 30,
cwd: workingDir,
env: process.env as Record<string, string>
});
const session: CliSessionInternal = {
sessionKey,
shellKind,
workingDir,
tool: options.tool,
model: options.model,
resumeKey: options.resumeKey,
createdAt,
updatedAt: createdAt,
pty,
buffer: [],
bufferBytes: 0,
lastActivityAt: Date.now(),
};
pty.onData((data) => {
this.appendToBuffer(sessionKey, data);
const now = Date.now();
const s = this.sessions.get(sessionKey);
if (s) {
s.updatedAt = nowIso();
s.lastActivityAt = now;
}
this.emitter.emit('output', {
sessionKey,
data,
timestamp: nowIso(),
} satisfies CliSessionOutputEvent);
broadcastToClients({
type: 'CLI_SESSION_OUTPUT',
payload: {
sessionKey,
data,
timestamp: nowIso()
} satisfies CliSessionOutputEvent
});
});
pty.onExit(({ exitCode, signal }) => {
this.sessions.delete(sessionKey);
broadcastToClients({
type: 'CLI_SESSION_CLOSED',
payload: {
sessionKey,
exitCode,
signal,
timestamp: nowIso()
}
});
});
this.sessions.set(sessionKey, session);
// WSL often ignores Windows cwd; best-effort cd to mounted path.
if (shellKind === 'wsl-bash') {
const wslCwd = toWslPath(workingDir.replace(/\\/g, '/'));
this.sendText(sessionKey, `cd ${wslCwd}`, true);
}
broadcastToClients({
type: 'CLI_SESSION_CREATED',
payload: { session: this.getSession(sessionKey), timestamp: nowIso() }
});
return this.getSession(sessionKey)!;
}
sendText(sessionKey: string, text: string, appendNewline: boolean): void {
const session = this.sessions.get(sessionKey);
if (!session) {
throw new Error(`Session not found: ${sessionKey}`);
}
session.updatedAt = nowIso();
session.lastActivityAt = Date.now();
session.pty.write(text);
if (appendNewline) {
session.pty.write('\r');
}
}
resize(sessionKey: string, cols: number, rows: number): void {
const session = this.sessions.get(sessionKey);
if (!session) {
throw new Error(`Session not found: ${sessionKey}`);
}
session.updatedAt = nowIso();
session.lastActivityAt = Date.now();
session.pty.resize(cols, rows);
}
close(sessionKey: string): void {
const session = this.sessions.get(sessionKey);
if (!session) return;
session.updatedAt = nowIso();
session.lastActivityAt = Date.now();
try {
session.pty.kill();
} finally {
this.sessions.delete(sessionKey);
broadcastToClients({ type: 'CLI_SESSION_CLOSED', payload: { sessionKey, timestamp: nowIso() } });
}
}
execute(sessionKey: string, options: ExecuteInCliSessionOptions): { executionId: string; command: string } {
const session = this.sessions.get(sessionKey);
if (!session) {
throw new Error(`Session not found: ${sessionKey}`);
}
session.updatedAt = nowIso();
session.lastActivityAt = Date.now();
const resumeKey = options.resumeKey ?? session.resumeKey;
const resumeMapKey = resumeKey ? `${options.tool}:${resumeKey}` : null;
const prevExecutionId = resumeMapKey ? this.resumeKeyLastExecution.get(resumeMapKey) : undefined;
const executionId = resumeKey
? `${resumeKey}-${Date.now()}`
: `exec-${Date.now()}-${randomBytes(3).toString('hex')}`;
const { command } = buildCliSessionExecuteCommand({
projectRoot: this.projectRoot,
shellKind: session.shellKind,
tool: options.tool,
prompt: options.prompt,
mode: options.mode,
model: options.model,
workingDir: options.workingDir ?? session.workingDir,
category: options.category,
resumeStrategy: options.resumeStrategy,
prevExecutionId,
executionId
});
// Best-effort: preemptively update mapping so subsequent queue items can chain.
if (resumeMapKey) {
this.resumeKeyLastExecution.set(resumeMapKey, executionId);
}
this.sendText(sessionKey, command, true);
broadcastToClients({
type: 'CLI_SESSION_EXECUTE',
payload: { sessionKey, executionId, command, timestamp: nowIso() }
});
return { executionId, command };
}
private appendToBuffer(sessionKey: string, chunk: string): void {
const session = this.sessions.get(sessionKey);
if (!session) return;
session.buffer.push(chunk);
session.bufferBytes += Buffer.byteLength(chunk, 'utf8');
while (session.bufferBytes > this.maxBufferBytes && session.buffer.length > 0) {
const removed = session.buffer.shift();
if (removed) session.bufferBytes -= Buffer.byteLength(removed, 'utf8');
}
}
onOutput(listener: (event: CliSessionOutputEvent) => void): () => void {
const handler = (event: CliSessionOutputEvent) => listener(event);
this.emitter.on('output', handler);
return () => this.emitter.off('output', handler);
}
closeIdleSessions(idleTimeoutMs: number): number {
if (idleTimeoutMs <= 0) return 0;
const now = Date.now();
let closed = 0;
for (const s of this.sessions.values()) {
if (now - s.lastActivityAt >= idleTimeoutMs) {
this.close(s.sessionKey);
closed += 1;
}
}
return closed;
}
}
const managersByRoot = new Map<string, CliSessionManager>();
export function getCliSessionManager(projectRoot: string = process.cwd()): CliSessionManager {
const resolved = path.resolve(projectRoot);
const existing = managersByRoot.get(resolved);
if (existing) return existing;
const created = new CliSessionManager(resolved);
managersByRoot.set(resolved, created);
return created;
}

View File

@@ -0,0 +1,55 @@
import os from 'os';
export interface CliSessionPolicy {
allowedTools: string[];
maxSessions: number;
idleTimeoutMs: number;
allowWorkingDirOutsideProject: boolean;
maxBufferBytes: number;
rateLimit: {
createPerMinute: number;
executePerMinute: number;
sendBytesPerMinute: number;
resizePerMinute: number;
};
}
function parseIntEnv(name: string, fallback: number): number {
const raw = (process.env[name] ?? '').trim();
if (!raw) return fallback;
const n = Number.parseInt(raw, 10);
return Number.isFinite(n) ? n : fallback;
}
function parseBoolEnv(name: string, fallback: boolean): boolean {
const raw = (process.env[name] ?? '').trim().toLowerCase();
if (!raw) return fallback;
if (raw === '1' || raw === 'true' || raw === 'yes' || raw === 'y') return true;
if (raw === '0' || raw === 'false' || raw === 'no' || raw === 'n') return false;
return fallback;
}
export function getCliSessionPolicy(): CliSessionPolicy {
const defaultAllowedTools = ['claude', 'codex', 'gemini', 'qwen', 'opencode'];
const allowedToolsRaw = (process.env.CCW_CLI_SESSIONS_ALLOWED_TOOLS ?? '').trim();
const allowedTools = allowedToolsRaw
? allowedToolsRaw.split(',').map((t) => t.trim()).filter(Boolean)
: defaultAllowedTools;
const maxSessionsDefault = os.platform() === 'win32' ? 6 : 8;
return {
allowedTools,
maxSessions: parseIntEnv('CCW_CLI_SESSIONS_MAX', maxSessionsDefault),
idleTimeoutMs: parseIntEnv('CCW_CLI_SESSIONS_IDLE_TIMEOUT_MS', 30 * 60_000),
allowWorkingDirOutsideProject: parseBoolEnv('CCW_CLI_SESSIONS_ALLOW_OUTSIDE_PROJECT', false),
maxBufferBytes: parseIntEnv('CCW_CLI_SESSIONS_MAX_BUFFER_BYTES', 2 * 1024 * 1024),
rateLimit: {
createPerMinute: parseIntEnv('CCW_CLI_SESSIONS_RL_CREATE_PER_MIN', 12),
executePerMinute: parseIntEnv('CCW_CLI_SESSIONS_RL_EXECUTE_PER_MIN', 60),
sendBytesPerMinute: parseIntEnv('CCW_CLI_SESSIONS_RL_SEND_BYTES_PER_MIN', 256 * 1024),
resizePerMinute: parseIntEnv('CCW_CLI_SESSIONS_RL_RESIZE_PER_MIN', 120),
},
};
}
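
The policy reads the environment on every call, so overrides can be set before the server starts. A sketch with example values (not shipped defaults):

```ts
// Illustrative: tightening the policy with environment variables.
import { getCliSessionPolicy } from './cli-session-policy.js';

process.env.CCW_CLI_SESSIONS_ALLOWED_TOOLS = 'claude,codex';
process.env.CCW_CLI_SESSIONS_MAX = '4';
process.env.CCW_CLI_SESSIONS_IDLE_TIMEOUT_MS = String(10 * 60_000);

const policy = getCliSessionPolicy(); // env is read at call time, so overrides apply
console.log(policy.allowedTools);  // ['claude', 'codex']
console.log(policy.maxSessions);   // 4
console.log(policy.idleTimeoutMs); // 600000
```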

View File

@@ -19,6 +19,7 @@ import { existsSync } from 'fs';
import { join } from 'path'; import { join } from 'path';
import { broadcastToClients } from '../websocket.js'; import { broadcastToClients } from '../websocket.js';
import { executeCliTool } from '../../tools/cli-executor-core.js'; import { executeCliTool } from '../../tools/cli-executor-core.js';
import { getCliSessionManager } from './cli-session-manager.js';
import type { import type {
Flow, Flow,
FlowNode, FlowNode,
@@ -244,6 +245,44 @@ export class NodeRunner {
const mode = this.determineCliMode(data.mode); const mode = this.determineCliMode(data.mode);
try { try {
// Optional: route execution to a PTY session (tmux-like send)
if (data.delivery === 'sendToSession') {
const targetSessionKey = data.targetSessionKey;
if (!targetSessionKey) {
return {
success: false,
error: 'delivery=sendToSession requires targetSessionKey'
};
}
const manager = getCliSessionManager(process.cwd());
const routed = manager.execute(targetSessionKey, {
tool,
prompt: instruction,
mode,
workingDir: this.context.workingDir,
resumeKey: data.resumeKey,
resumeStrategy: data.resumeStrategy === 'promptConcat' ? 'promptConcat' : 'nativeResume'
});
const outputKey = data.outputName || `${node.id}_output`;
this.context.variables[outputKey] = {
delivery: 'sendToSession',
sessionKey: targetSessionKey,
executionId: routed.executionId,
command: routed.command
};
this.context.variables[`${node.id}_executionId`] = routed.executionId;
this.context.variables[`${node.id}_command`] = routed.command;
this.context.variables[`${node.id}_success`] = true;
return {
success: true,
output: routed.command,
exitCode: 0
};
}
// Execute via CLI tool // Execute via CLI tool
const result = await executeCliTool({ const result = await executeCliTool({
tool, tool,

View File

@@ -0,0 +1,49 @@
export interface RateLimitResult {
ok: boolean;
remaining: number;
resetAt: number;
}
interface BucketState {
tokens: number;
resetAt: number;
}
/**
* Simple fixed-window token bucket (in-memory).
* Good enough for local dashboard usage; not suitable for multi-process deployments.
*/
export class RateLimiter {
private buckets = new Map<string, BucketState>();
private limit: number;
private windowMs: number;
constructor(opts: { limit: number; windowMs: number }) {
this.limit = Math.max(0, opts.limit);
this.windowMs = Math.max(1, opts.windowMs);
}
consume(key: string, cost: number = 1): RateLimitResult {
const now = Date.now();
const safeCost = Math.max(0, Math.floor(cost));
const existing = this.buckets.get(key);
if (!existing || now >= existing.resetAt) {
const resetAt = now + this.windowMs;
const nextTokens = this.limit - safeCost;
const ok = nextTokens >= 0;
const tokens = ok ? nextTokens : this.limit;
this.buckets.set(key, { tokens, resetAt });
return { ok, remaining: Math.max(0, ok ? tokens : 0), resetAt };
}
const nextTokens = existing.tokens - safeCost;
if (nextTokens < 0) {
return { ok: false, remaining: Math.max(0, existing.tokens), resetAt: existing.resetAt };
}
existing.tokens = nextTokens;
return { ok: true, remaining: nextTokens, resetAt: existing.resetAt };
}
}
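
A sketch of guarding the execute endpoint with the limiter; the wiring and the `./rate-limiter.js` path are assumptions, not part of this commit:

```ts
// Sketch: per-session fixed-window limit of 60 executes per minute.
import { RateLimiter } from './rate-limiter.js'; // filename assumed

const executeLimiter = new RateLimiter({ limit: 60, windowMs: 60_000 });

function canExecute(sessionKey: string): boolean {
  const result = executeLimiter.consume(`execute:${sessionKey}`);
  if (!result.ok) {
    console.warn(`rate limited until ${new Date(result.resetAt).toISOString()}`);
  }
  return result.ok;
}
```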

View File

@@ -0,0 +1,62 @@
/**
* Unit tests for PTY session execute command builder
*/
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
const builderUrl = new URL('../dist/core/services/cli-session-command-builder.js', import.meta.url).href;
// eslint-disable-next-line @typescript-eslint/no-explicit-any
let mod;
describe('buildCliSessionExecuteCommand', async () => {
mod = await import(builderUrl);
it('builds a node-piped command with resume + promptConcat', () => {
const { command } = mod.buildCliSessionExecuteCommand({
projectRoot: 'D:\\\\Claude_dms3',
shellKind: 'pwsh',
tool: 'codex',
prompt: 'line1\nline2',
mode: 'write',
workingDir: 'D:\\\\Claude_dms3',
resumeStrategy: 'promptConcat',
prevExecutionId: 'prev-123',
executionId: 'rk-1'
});
assert.match(command, /^node -e "process\.stdout\.write\(Buffer\.from\('/);
assert.match(command, /\| node /);
assert.match(command, / cli\b/);
assert.match(command, / --tool codex\b/);
assert.match(command, / --mode write\b/);
assert.match(command, / --stream\b/);
assert.match(command, / --id rk-1\b/);
assert.match(command, / --resume prev-123\b/);
assert.match(command, / --no-native\b/);
});
it('uses wslpath to pass Windows paths to node.exe in wsl-bash', () => {
const { command } = mod.buildCliSessionExecuteCommand({
projectRoot: 'D:\\\\Claude_dms3',
shellKind: 'wsl-bash',
tool: 'claude',
prompt: 'hello',
mode: 'analysis',
workingDir: 'D:\\\\Claude_dms3',
resumeStrategy: 'nativeResume',
executionId: 'exec-1'
});
assert.match(command, /^CCW_WIN=\$\(\s*wslpath -w /);
assert.match(command, /WD_WIN=\$\(\s*wslpath -w /);
assert.match(command, /node\.exe -e /);
assert.match(command, /\| node\.exe "\$CCW_WIN" cli/);
assert.match(command, / --cd "\$WD_WIN"/);
assert.match(command, / --tool claude\b/);
assert.match(command, / --mode analysis\b/);
assert.match(command, / --id exec-1\b/);
assert.ok(!command.includes('--no-native'));
});
});

View File

@@ -249,6 +249,12 @@ def main() -> None:
parser.add_argument("--k", type=int, default=10, help="Final result count (default 10)") parser.add_argument("--k", type=int, default=10, help="Final result count (default 10)")
parser.add_argument("--coarse-k", type=int, default=100, help="Coarse candidates (default 100)") parser.add_argument("--coarse-k", type=int, default=100, help="Coarse candidates (default 100)")
parser.add_argument("--warmup", type=int, default=1, help="Warmup runs per strategy (default 1)") parser.add_argument("--warmup", type=int, default=1, help="Warmup runs per strategy (default 1)")
parser.add_argument(
"--staged-cluster-strategy",
type=str,
default=None,
help="Override Config.staged_clustering_strategy for staged pipeline (e.g. auto, dir_rr, score, path)",
)
parser.add_argument( parser.add_argument(
"--output", "--output",
type=Path, type=Path,
@@ -271,6 +277,8 @@ def main() -> None:
config.cascade_strategy = "staged" config.cascade_strategy = "staged"
config.staged_stage2_mode = "realtime" config.staged_stage2_mode = "realtime"
config.enable_staged_rerank = True config.enable_staged_rerank = True
if args.staged_cluster_strategy:
config.staged_clustering_strategy = str(args.staged_cluster_strategy)
# Stability: on some Windows setups, fastembed + DirectML can crash under load. # Stability: on some Windows setups, fastembed + DirectML can crash under load.
# Dense_rerank uses the embedding backend that matches the index; force CPU here. # Dense_rerank uses the embedding backend that matches the index; force CPU here.
config.embedding_use_gpu = False config.embedding_use_gpu = False

View File

@@ -0,0 +1,356 @@
{
"summary": {
"timestamp": "2026-02-09 20:37:28",
"source": "src",
"k": 10,
"coarse_k": 100,
"query_count": 7,
"avg_jaccard_topk": 0.12095811211246858,
"avg_rbo_topk": 0.09594444061244897,
"staged": {
"success": 7,
"avg_latency_ms": 2471.239057132176
},
"dense_rerank": {
"success": 7,
"avg_latency_ms": 3087.217985710927
}
},
"comparisons": [
{
"query": "class Config",
"staged": {
"strategy": "staged",
"query": "class Config",
"latency_ms": 312.2674999535084,
"num_results": 37,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\path_mapper.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\semantic.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\references.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\parsers\\factory.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\server.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\reranker\\litellm_reranker.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "class Config",
"latency_ms": 2672.6916999816895,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migration_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.05263157894736842,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 10,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 5,
"dense_unique_dirs_topk": 4
},
{
"query": "def search",
"staged": {
"strategy": "staged",
"query": "def search",
"latency_ms": 15344.861499994993,
"num_results": 3,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\entities.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 81.70747756958008,
"stage2_expand_ms": 12762.907266616821,
"stage3_cluster_ms": 0.0021457672119140625,
"stage4_rerank_ms": 2422.7287769317627
},
"stage_counts": {
"stage1_candidates": 3,
"stage2_expanded": 4,
"stage2_unique_paths": 3,
"stage2_duplicate_paths": 1,
"stage3_clustered": 4,
"stage3_strategy": "dir_rr",
"stage4_reranked": 4
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "def search",
"latency_ms": 2908.5530000030994,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.09090909090909091,
"rbo_topk": 0.23541639942571424,
"staged_unique_files_topk": 2,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
},
{
"query": "LspBridge",
"staged": {
"strategy": "staged",
"query": "LspBridge",
"latency_ms": 328.4989999830723,
"num_results": 5,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_graph_builder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "LspBridge",
"latency_ms": 3426.8526000082493,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\vector_meta_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\graph_expander.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.07142857142857142,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 5,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
},
{
"query": "graph expansion",
"staged": {
"strategy": "staged",
"query": "graph expansion",
"latency_ms": 359.32230001688004,
"num_results": 11,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_graph_builder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\ann_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migrations\\migration_007_add_graph_neighbors.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\config.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\enrichment.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\graph_expander.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\hybrid_search\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "graph expansion",
"latency_ms": 3472.025099992752,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migration_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\global_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.17647058823529413,
"rbo_topk": 0.06801300374142856,
"staged_unique_files_topk": 10,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 7,
"dense_unique_dirs_topk": 4
},
{
"query": "clustering strategy",
"staged": {
"strategy": "staged",
"query": "clustering strategy",
"latency_ms": 289.3139999806881,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\config.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\factory.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\noop_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\hdbscan_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\dbscan_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\base.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\frequency_strategy.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "clustering strategy",
"latency_ms": 2859.5299999713898,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\gpu_support.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\enrichment.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.1111111111111111,
"rbo_topk": 0.04670528456571428,
"staged_unique_files_topk": 10,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 3,
"dense_unique_dirs_topk": 4
},
{
"query": "error handling",
"staged": {
"strategy": "staged",
"query": "error handling",
"latency_ms": 305.66699999570847,
"num_results": 5,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\gpu_support.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\rotational_embedder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\watcher\\manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "error handling",
"latency_ms": 3101.3711999952793,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.07142857142857142,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 5,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 4,
"dense_unique_dirs_topk": 4
},
{
"query": "how to parse json",
"staged": {
"strategy": "staged",
"query": "how to parse json",
"latency_ms": 358.74210000038147,
"num_results": 4,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "how to parse json",
"latency_ms": 3169.5023000240326,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\ranking.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\ann_index.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.2727272727272727,
"rbo_topk": 0.18590219827714285,
"staged_unique_files_topk": 4,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 3,
"dense_unique_dirs_topk": 4
}
]
}

View File

@@ -0,0 +1,171 @@
{
"summary": {
"timestamp": "2026-02-09 19:16:45",
"source": "src",
"k": 10,
"coarse_k": 100,
"query_count": 3,
"avg_jaccard_topk": 0.07165641376167692,
"avg_rbo_topk": 0.10859973275904759,
"staged": {
"success": 3,
"avg_latency_ms": 7919.317766676347
},
"dense_rerank": {
"success": 3,
"avg_latency_ms": 2812.574933330218
}
},
"comparisons": [
{
"query": "class Config",
"staged": {
"strategy": "staged",
"query": "class Config",
"latency_ms": 6351.961700022221,
"num_results": 37,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\path_mapper.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\semantic.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\references.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\parsers\\factory.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\server.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\reranker\\litellm_reranker.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "class Config",
"latency_ms": 4424.698300004005,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migration_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.05263157894736842,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 10,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 5,
"dense_unique_dirs_topk": 4
},
{
"query": "def search",
"staged": {
"strategy": "staged",
"query": "def search",
"latency_ms": 17239.81479999423,
"num_results": 3,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\entities.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 18.40996742248535,
"stage2_expand_ms": 16024.681329727173,
"stage3_cluster_ms": 0.00095367431640625,
"stage4_rerank_ms": 1160.1319313049316
},
"stage_counts": {
"stage1_candidates": 3,
"stage2_expanded": 4,
"stage2_unique_paths": 3,
"stage2_duplicate_paths": 1,
"stage3_clustered": 4,
"stage3_strategy": "score",
"stage4_reranked": 4
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "def search",
"latency_ms": 2086.8772999942303,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.09090909090909091,
"rbo_topk": 0.23541639942571424,
"staged_unique_files_topk": 2,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
},
{
"query": "LspBridge",
"staged": {
"strategy": "staged",
"query": "LspBridge",
"latency_ms": 166.1768000125885,
"num_results": 5,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_graph_builder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "LspBridge",
"latency_ms": 1926.1491999924183,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\vector_meta_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\graph_expander.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.07142857142857142,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 5,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
}
]
}

View File

@@ -0,0 +1,171 @@
{
"summary": {
"timestamp": "2026-02-09 19:19:13",
"source": "src",
"k": 10,
"coarse_k": 100,
"query_count": 3,
"avg_jaccard_topk": 0.07165641376167692,
"avg_rbo_topk": 0.10859973275904759,
"staged": {
"success": 3,
"avg_latency_ms": 8272.264699995518
},
"dense_rerank": {
"success": 3,
"avg_latency_ms": 2753.5123999913535
}
},
"comparisons": [
{
"query": "class Config",
"staged": {
"strategy": "staged",
"query": "class Config",
"latency_ms": 6453.665100008249,
"num_results": 37,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\path_mapper.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\semantic.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\references.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\parsers\\factory.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\server.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\reranker\\litellm_reranker.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "class Config",
"latency_ms": 4530.146999955177,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migration_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.05263157894736842,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 10,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 5,
"dense_unique_dirs_topk": 4
},
{
"query": "def search",
"staged": {
"strategy": "staged",
"query": "def search",
"latency_ms": 18202.905599981546,
"num_results": 3,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\entities.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 15.580177307128906,
"stage2_expand_ms": 16622.225522994995,
"stage3_cluster_ms": 0.00095367431640625,
"stage4_rerank_ms": 1516.9692039489746
},
"stage_counts": {
"stage1_candidates": 3,
"stage2_expanded": 4,
"stage2_unique_paths": 3,
"stage2_duplicate_paths": 1,
"stage3_clustered": 4,
"stage3_strategy": "score",
"stage4_reranked": 4
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "def search",
"latency_ms": 1746.9925000071526,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.09090909090909091,
"rbo_topk": 0.23541639942571424,
"staged_unique_files_topk": 2,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
},
{
"query": "LspBridge",
"staged": {
"strategy": "staged",
"query": "LspBridge",
"latency_ms": 160.2233999967575,
"num_results": 5,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_graph_builder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "LspBridge",
"latency_ms": 1983.3977000117302,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\vector_meta_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\graph_expander.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.07142857142857142,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 5,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
}
]
}

View File

@@ -0,0 +1,208 @@
{
"summary": {
"timestamp": "2026-02-09 17:27:26",
"source": "src",
"k": 10,
"coarse_k": 100,
"query_count": 3,
"avg_jaccard_topk": 0.5809523809523809,
"avg_rbo_topk": 0.31359567182809517,
"staged": {
"success": 3,
"avg_latency_ms": 22826.711433331173
},
"dense_rerank": {
"success": 3,
"avg_latency_ms": 2239.804533312718
}
},
"comparisons": [
{
"query": "class Config",
"staged": {
"strategy": "staged",
"query": "class Config",
"latency_ms": 26690.878500014544,
"num_results": 6,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\references.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\config.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\semantic.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 8534.121036529541,
"stage2_expand_ms": 13298.827648162842,
"stage3_cluster_ms": 0.026226043701171875,
"stage4_rerank_ms": 4805.774688720703
},
"stage_counts": {
"stage1_candidates": 100,
"stage2_expanded": 149,
"stage2_unique_paths": 43,
"stage2_duplicate_paths": 106,
"stage3_clustered": 20,
"stage3_strategy": "score",
"stage4_reranked": 20
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "class Config",
"latency_ms": 2416.653799980879,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migration_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.14285714285714285,
"rbo_topk": 0.25764429885142853,
"staged_unique_files_topk": 6,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 3,
"dense_unique_dirs_topk": 4
},
{
"query": "def search",
"staged": {
"strategy": "staged",
"query": "def search",
"latency_ms": 26188.838399976492,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 525.7587432861328,
"stage2_expand_ms": 23659.400939941406,
"stage3_cluster_ms": 0.021696090698242188,
"stage4_rerank_ms": 1928.950309753418
},
"stage_counts": {
"stage1_candidates": 100,
"stage2_expanded": 101,
"stage2_unique_paths": 23,
"stage2_duplicate_paths": 78,
"stage3_clustered": 20,
"stage3_strategy": "score",
"stage4_reranked": 20
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "def search",
"latency_ms": 1953.0992999970913,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.9,
"rbo_topk": 0.39374892065285705,
"staged_unique_files_topk": 9,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 4,
"dense_unique_dirs_topk": 4
},
{
"query": "LspBridge",
"staged": {
"strategy": "staged",
"query": "LspBridge",
"latency_ms": 15600.41740000248,
"num_results": 7,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\vector_meta_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\graph_expander.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 475.54636001586914,
"stage2_expand_ms": 13318.811893463135,
"stage3_cluster_ms": 0.03218650817871094,
"stage4_rerank_ms": 1755.7547092437744
},
"stage_counts": {
"stage1_candidates": 100,
"stage2_expanded": 100,
"stage2_unique_paths": 21,
"stage2_duplicate_paths": 79,
"stage3_clustered": 20,
"stage3_strategy": "score",
"stage4_reranked": 20
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "LspBridge",
"latency_ms": 2349.660499960184,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\vector_meta_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\graph_expander.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.7,
"rbo_topk": 0.28939379598,
"staged_unique_files_topk": 7,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 3,
"dense_unique_dirs_topk": 4
}
]
}

View File

@@ -0,0 +1,356 @@
{
"summary": {
"timestamp": "2026-02-09 20:36:02",
"source": "src",
"k": 10,
"coarse_k": 100,
"query_count": 7,
"avg_jaccard_topk": 0.12095811211246858,
"avg_rbo_topk": 0.09594444061244897,
"staged": {
"success": 7,
"avg_latency_ms": 2436.7641000066483
},
"dense_rerank": {
"success": 7,
"avg_latency_ms": 2593.7630428629263
}
},
"comparisons": [
{
"query": "class Config",
"staged": {
"strategy": "staged",
"query": "class Config",
"latency_ms": 285.091000020504,
"num_results": 37,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\path_mapper.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\semantic.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\references.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\parsers\\factory.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\server.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\reranker\\litellm_reranker.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "class Config",
"latency_ms": 2412.1290000081062,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migration_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.05263157894736842,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 10,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 5,
"dense_unique_dirs_topk": 4
},
{
"query": "def search",
"staged": {
"strategy": "staged",
"query": "def search",
"latency_ms": 15029.73520001769,
"num_results": 3,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\entities.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 101.95636749267578,
"stage2_expand_ms": 12690.008640289307,
"stage3_cluster_ms": 0.001430511474609375,
"stage4_rerank_ms": 2155.757427215576
},
"stage_counts": {
"stage1_candidates": 3,
"stage2_expanded": 4,
"stage2_unique_paths": 3,
"stage2_duplicate_paths": 1,
"stage3_clustered": 4,
"stage3_strategy": "score",
"stage4_reranked": 4
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "def search",
"latency_ms": 2424.7003000080585,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.09090909090909091,
"rbo_topk": 0.23541639942571424,
"staged_unique_files_topk": 2,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
},
{
"query": "LspBridge",
"staged": {
"strategy": "staged",
"query": "LspBridge",
"latency_ms": 324.4240999817848,
"num_results": 5,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_graph_builder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "LspBridge",
"latency_ms": 2497.174100011587,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\vector_meta_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\graph_expander.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.07142857142857142,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 5,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
},
{
"query": "graph expansion",
"staged": {
"strategy": "staged",
"query": "graph expansion",
"latency_ms": 359.32159999012947,
"num_results": 11,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_graph_builder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\ann_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migrations\\migration_007_add_graph_neighbors.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\config.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\enrichment.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\graph_expander.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\hybrid_search\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "graph expansion",
"latency_ms": 2553.8585999906063,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migration_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\global_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.17647058823529413,
"rbo_topk": 0.06801300374142856,
"staged_unique_files_topk": 10,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 7,
"dense_unique_dirs_topk": 4
},
{
"query": "clustering strategy",
"staged": {
"strategy": "staged",
"query": "clustering strategy",
"latency_ms": 286.38240000605583,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\config.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\factory.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\noop_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\hdbscan_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\dbscan_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\base.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\frequency_strategy.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "clustering strategy",
"latency_ms": 2570.379099994898,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\gpu_support.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\enrichment.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.1111111111111111,
"rbo_topk": 0.04670528456571428,
"staged_unique_files_topk": 10,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 3,
"dense_unique_dirs_topk": 4
},
{
"query": "error handling",
"staged": {
"strategy": "staged",
"query": "error handling",
"latency_ms": 412.58780002593994,
"num_results": 5,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\gpu_support.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\rotational_embedder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\watcher\\manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "error handling",
"latency_ms": 2894.3279000222683,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.07142857142857142,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 5,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 4,
"dense_unique_dirs_topk": 4
},
{
"query": "how to parse json",
"staged": {
"strategy": "staged",
"query": "how to parse json",
"latency_ms": 359.8066000044346,
"num_results": 4,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py"
],
"stage_stats": null,
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "how to parse json",
"latency_ms": 2803.772300004959,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\ranking.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\ann_index.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.2727272727272727,
"rbo_topk": 0.18590219827714285,
"staged_unique_files_topk": 4,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 3,
"dense_unique_dirs_topk": 4
}
]
}

View File

@@ -0,0 +1,462 @@
{
"summary": {
"timestamp": "2026-02-09 20:45:10",
"source": "src",
"k": 10,
"coarse_k": 100,
"query_count": 7,
"avg_jaccard_topk": 0.1283498247783962,
"avg_rbo_topk": 0.09664773770897958,
"staged": {
"success": 7,
"avg_latency_ms": 16394.152085712976
},
"dense_rerank": {
"success": 7,
"avg_latency_ms": 2839.464457145759
}
},
"comparisons": [
{
"query": "class Config",
"staged": {
"strategy": "staged",
"query": "class Config",
"latency_ms": 6233.342700004578,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\path_mapper.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\semantic.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\api\\references.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\config.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\server.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 125.80323219299316,
"stage1_fallback_search_ms": 277.1914005279541,
"stage2_expand_ms": 3032.3121547698975,
"stage3_cluster_ms": 0.02765655517578125,
"stage4_rerank_ms": 2699.3532180786133
},
"stage_counts": {
"stage1_candidates": 37,
"stage1_fallback_used": 1,
"stage2_expanded": 86,
"stage2_unique_paths": 53,
"stage2_duplicate_paths": 33,
"stage3_clustered": 20,
"stage3_strategy": "score",
"stage4_reranked": 20
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "class Config",
"latency_ms": 3036.3474999964237,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migration_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.125,
"rbo_topk": 0.06741929885142856,
"staged_unique_files_topk": 8,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 5,
"dense_unique_dirs_topk": 4
},
{
"query": "def search",
"staged": {
"strategy": "staged",
"query": "def search",
"latency_ms": 12703.503900021315,
"num_results": 3,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\entities.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 83.4202766418457,
"stage2_expand_ms": 9856.60433769226,
"stage3_cluster_ms": 0.0011920928955078125,
"stage4_rerank_ms": 2664.630174636841
},
"stage_counts": {
"stage1_candidates": 3,
"stage2_expanded": 4,
"stage2_unique_paths": 3,
"stage2_duplicate_paths": 1,
"stage3_clustered": 4,
"stage3_strategy": "score",
"stage4_reranked": 4
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "def search",
"latency_ms": 2888.501700013876,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\query_parser.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.09090909090909091,
"rbo_topk": 0.23541639942571424,
"staged_unique_files_topk": 2,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
},
{
"query": "LspBridge",
"staged": {
"strategy": "staged",
"query": "LspBridge",
"latency_ms": 33684.76710000634,
"num_results": 5,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_graph_builder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 78.8118839263916,
"stage1_fallback_search_ms": 174.6652126312256,
"stage2_expand_ms": 31018.909692764282,
"stage3_cluster_ms": 0.0016689300537109375,
"stage4_rerank_ms": 2316.9021606445312
},
"stage_counts": {
"stage1_candidates": 5,
"stage1_fallback_used": 1,
"stage2_expanded": 5,
"stage2_unique_paths": 5,
"stage2_duplicate_paths": 0,
"stage3_clustered": 5,
"stage3_strategy": "score",
"stage4_reranked": 5
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "LspBridge",
"latency_ms": 2824.729699999094,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\vector_meta_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\graph_expander.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.07142857142857142,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 5,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 2,
"dense_unique_dirs_topk": 4
},
{
"query": "graph expansion",
"staged": {
"strategy": "staged",
"query": "graph expansion",
"latency_ms": 16910.090099990368,
"num_results": 8,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migrations\\migration_007_add_graph_neighbors.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_graph_builder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\ann_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\config.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 99.6243953704834,
"stage1_fallback_search_ms": 207.89742469787598,
"stage2_expand_ms": 13929.257154464722,
"stage3_cluster_ms": 0.016927719116210938,
"stage4_rerank_ms": 2586.843729019165
},
"stage_counts": {
"stage1_candidates": 11,
"stage1_fallback_used": 1,
"stage2_expanded": 29,
"stage2_unique_paths": 14,
"stage2_duplicate_paths": 15,
"stage3_clustered": 20,
"stage3_strategy": "score",
"stage4_reranked": 20
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "graph expansion",
"latency_ms": 2765.958099991083,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\migration_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\global_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.21428571428571427,
"rbo_topk": 0.06893318399142857,
"staged_unique_files_topk": 7,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 6,
"dense_unique_dirs_topk": 4
},
{
"query": "clustering strategy",
"staged": {
"strategy": "staged",
"query": "clustering strategy",
"latency_ms": 8380.20839998126,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\config.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\noop_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\dbscan_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\base.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\hdbscan_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\frequency_strategy.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\factory.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\clustering\\__init__.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 95.42632102966309,
"stage1_fallback_search_ms": 187.4692440032959,
"stage2_expand_ms": 5561.658143997192,
"stage3_cluster_ms": 0.0007152557373046875,
"stage4_rerank_ms": 2441.287040710449
},
"stage_counts": {
"stage1_candidates": 10,
"stage1_fallback_used": 1,
"stage2_expanded": 10,
"stage2_unique_paths": 10,
"stage2_duplicate_paths": 0,
"stage3_clustered": 10,
"stage3_strategy": "score",
"stage4_reranked": 10
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "clustering strategy",
"latency_ms": 2788.0665000081062,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\vector_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\gpu_support.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\enrichment.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.1111111111111111,
"rbo_topk": 0.04670528456571428,
"staged_unique_files_topk": 10,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 3,
"dense_unique_dirs_topk": 4
},
{
"query": "error handling",
"staged": {
"strategy": "staged",
"query": "error handling",
"latency_ms": 19897.71709999442,
"num_results": 6,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\lsp_bridge.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\gpu_support.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\rotational_embedder.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\watcher\\manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 114.1653060913086,
"stage1_fallback_search_ms": 235.73827743530273,
"stage2_expand_ms": 16702.077865600586,
"stage3_cluster_ms": 0.00095367431640625,
"stage4_rerank_ms": 2757.4093341827393
},
"stage_counts": {
"stage1_candidates": 5,
"stage1_fallback_used": 1,
"stage2_expanded": 13,
"stage2_unique_paths": 6,
"stage2_duplicate_paths": 7,
"stage3_clustered": 13,
"stage3_strategy": "score",
"stage4_reranked": 13
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "error handling",
"latency_ms": 2874.178600013256,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\__init__.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\registry.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\embedding_manager.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.07142857142857142,
"rbo_topk": 0.045191399425714276,
"staged_unique_files_topk": 5,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 4,
"dense_unique_dirs_topk": 4
},
{
"query": "how to parse json",
"staged": {
"strategy": "staged",
"query": "how to parse json",
"latency_ms": 16949.43529999256,
"num_results": 7,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\lsp\\standalone_manager.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\parsers\\factory.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\indexing\\symbol_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\parsers\\treesitter_parser.py"
],
"stage_stats": {
"stage_times": {
"stage1_binary_ms": 104.50935363769531,
"stage1_fallback_search_ms": 190.6723976135254,
"stage2_expand_ms": 14165.841102600098,
"stage3_cluster_ms": 0.0011920928955078125,
"stage4_rerank_ms": 2399.226188659668
},
"stage_counts": {
"stage1_candidates": 4,
"stage1_fallback_used": 1,
"stage2_expanded": 11,
"stage2_unique_paths": 7,
"stage2_duplicate_paths": 4,
"stage3_clustered": 11,
"stage3_strategy": "score",
"stage4_reranked": 11
}
},
"error": null
},
"dense_rerank": {
"strategy": "dense_rerank",
"query": "how to parse json",
"latency_ms": 2698.469099998474,
"num_results": 10,
"topk_paths": [
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\cli\\commands.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\chain_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\index_tree.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\code_extractor.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\dir_index.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\hybrid_search.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\search\\ranking.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\chunker.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\storage\\sqlite_store.py",
"d:\\claude_dms3\\codex-lens\\src\\codexlens\\semantic\\ann_index.py"
],
"stage_stats": null,
"error": null
},
"jaccard_topk": 0.21428571428571427,
"rbo_topk": 0.16767719827714284,
"staged_unique_files_topk": 7,
"dense_unique_files_topk": 10,
"staged_unique_dirs_topk": 5,
"dense_unique_dirs_topk": 4
}
]
}

View File

@@ -3486,6 +3486,81 @@ def index_binary(
console.print(f" [dim]... and {len(errors_list) - 3} more[/dim]") console.print(f" [dim]... and {len(errors_list) - 3} more[/dim]")
@index_app.command("binary-mmap")
def index_binary_mmap(
path: Annotated[Path, typer.Argument(help="Project directory (indexed) or _index.db file")],
force: Annotated[bool, typer.Option("--force", "-f", help="Force rebuild binary mmap + metadata")] = False,
embedding_dim: Annotated[Optional[int], typer.Option("--embedding-dim", help="Only use embeddings with this dimension (e.g. 768)")] = None,
json_mode: Annotated[bool, typer.Option("--json", help="Output JSON response")] = False,
verbose: Annotated[bool, typer.Option("--verbose", "-v", help="Enable verbose logging")] = False,
) -> None:
"""Build centralized `_binary_vectors.mmap` from existing embeddings (no model calls).
This command enables the staged binary coarse search without regenerating
embeddings and without triggering global model locks. It:
- scans distributed semantic_chunks.embedding blobs under the index root
- assigns global chunk_ids
- writes `<index_root>/_binary_vectors.mmap` (+ `.meta.json`)
- writes `<index_root>/_vectors_meta.db` (chunk_metadata + binary_vectors)
"""
_configure_logging(verbose, json_mode)
from codexlens.cli.embedding_manager import build_centralized_binary_vectors_from_existing
target_path = path.expanduser().resolve()
# Resolve index_root similar to other index commands.
if target_path.is_file() and target_path.name == "_index.db":
index_root = target_path.parent
else:
registry = RegistryStore()
try:
registry.initialize()
mapper = PathMapper()
index_db = mapper.source_to_index_db(target_path)
if not index_db.exists():
msg = f"No index found for {target_path}"
if json_mode:
print_json(success=False, error=msg)
else:
console.print(f"[red]Error:[/red] {msg}")
console.print("Run `codexlens index init` first to create an index.")
raise typer.Exit(code=1)
index_root = index_db.parent
finally:
registry.close()
def progress_update(message: str) -> None:
if json_mode:
return
console.print(f"[dim]{message}[/dim]")
result = build_centralized_binary_vectors_from_existing(
index_root,
force=force,
embedding_dim=embedding_dim,
progress_callback=progress_update,
)
if json_mode:
print_json(**result)
return
if not result.get("success"):
console.print(f"[red]Error:[/red] {result.get('error', 'Unknown error')}")
hint = result.get("hint")
if hint:
console.print(f"[dim]{hint}[/dim]")
raise typer.Exit(code=1)
data = result.get("result", {})
console.print("\n[green]Binary mmap build complete[/green]")
console.print(f" Index root: {data.get('index_root')}")
console.print(f" Chunks written: {data.get('chunks_written'):,}")
console.print(f" Binary mmap: {data.get('binary_mmap')}")
console.print(f" Meta DB: {data.get('vectors_meta_db')}")
# ==================== Index Status Command ====================
@index_app.command("status")
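
The `binary-mmap` command above only builds the centralized artifacts; the Stage 1 coarse search that consumes them is implemented elsewhere and not shown in this diff. As a hedged sketch, the `_binary_vectors.mmap` and its `.meta.json` sidecar can be queried with a Hamming-distance scan like the following; the file names and sign-binarization match the builder in the next file, while the function name and query-encoding step are illustrative assumptions.

```python
# Hedged sketch: Hamming-distance coarse search over the mmap built by
# `codexlens index binary-mmap`. Not the actual Stage 1 implementation.
import json
from pathlib import Path
import numpy as np

def binary_coarse_search(index_root: Path, query_embedding, coarse_k: int = 100):
    meta = json.loads((index_root / "_binary_vectors.meta.json").read_text())
    rows, bytes_per_vec = meta["shape"]
    vectors = np.memmap(index_root / "_binary_vectors.mmap", dtype=np.uint8,
                        mode="r", shape=(rows, bytes_per_vec))
    # Same sign-binarization the builder applies to stored embeddings.
    query = np.asarray(query_embedding, dtype=np.float32)
    query_bits = np.packbits((query > 0).astype(np.uint8))
    # Hamming distance = popcount of the XOR between packed bit vectors.
    distances = np.unpackbits(np.bitwise_xor(vectors, query_bits), axis=1).sum(axis=1)
    order = np.argsort(distances)[:coarse_k]
    chunk_ids = np.asarray(meta["chunk_ids"])[order]
    return list(zip(chunk_ids.tolist(), distances[order].tolist()))
```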

View File

@@ -860,6 +860,294 @@ def _discover_index_dbs_internal(index_root: Path) -> List[Path]:
return sorted(index_root.rglob("_index.db"))
def build_centralized_binary_vectors_from_existing(
index_root: Path,
*,
force: bool = False,
embedding_dim: Optional[int] = None,
progress_callback: Optional[callable] = None,
) -> Dict[str, Any]:
"""Build centralized binary vectors + metadata from existing semantic_chunks embeddings.
This is a fast-path for enabling the staged binary coarse search without
regenerating embeddings (and without triggering global model locks).
It scans all distributed `_index.db` files under `index_root`, reads
existing `semantic_chunks.embedding` blobs, assigns new global chunk_ids,
and writes:
- `<index_root>/_binary_vectors.mmap` (+ `.meta.json`)
- `<index_root>/_vectors_meta.db` (chunk_metadata + binary_vectors)
"""
from codexlens.config import BINARY_VECTORS_MMAP_NAME, VECTORS_META_DB_NAME
from codexlens.storage.vector_meta_store import VectorMetadataStore
index_root = Path(index_root).resolve()
vectors_meta_path = index_root / VECTORS_META_DB_NAME
mmap_path = index_root / BINARY_VECTORS_MMAP_NAME
meta_path = mmap_path.with_suffix(".meta.json")
index_files = _discover_index_dbs_internal(index_root)
if not index_files:
return {"success": False, "error": f"No _index.db files found under {index_root}"}
if progress_callback:
progress_callback(f"Scanning {len(index_files)} index databases for existing embeddings...")
# First pass: detect embedding dims present.
dims_seen: Dict[int, int] = {}
selected_config: Optional[Dict[str, Any]] = None
for index_path in index_files:
try:
with sqlite3.connect(index_path) as conn:
conn.row_factory = sqlite3.Row
has_table = conn.execute(
"SELECT 1 FROM sqlite_master WHERE type='table' AND name='semantic_chunks'"
).fetchone()
if not has_table:
continue
dim_row = conn.execute(
"SELECT backend, model_profile, model_name, embedding_dim FROM embeddings_config WHERE id=1"
).fetchone()
if dim_row and dim_row[3]:
dim_val = int(dim_row[3])
dims_seen[dim_val] = dims_seen.get(dim_val, 0) + 1
if selected_config is None:
selected_config = {
"backend": dim_row[0],
"model_profile": dim_row[1],
"model_name": dim_row[2],
"embedding_dim": dim_val,
}
# We count per-dim later after selecting a target dim.
except Exception:
continue
if not dims_seen:
return {"success": False, "error": "No embeddings_config found under index_root"}
if embedding_dim is None:
# Default: pick the most common embedding dim across indexes.
embedding_dim = max(dims_seen.items(), key=lambda kv: kv[1])[0]
embedding_dim = int(embedding_dim)
if progress_callback and len(dims_seen) > 1:
progress_callback(f"Mixed embedding dims detected, selecting dim={embedding_dim} (seen={dims_seen})")
# Re-detect the selected model config for this dim (do not reuse an arbitrary first-seen config).
selected_config = None
# Second pass: count only chunks matching selected dim.
total_chunks = 0
for index_path in index_files:
try:
with sqlite3.connect(index_path) as conn:
conn.row_factory = sqlite3.Row
has_table = conn.execute(
"SELECT 1 FROM sqlite_master WHERE type='table' AND name='semantic_chunks'"
).fetchone()
if not has_table:
continue
dim_row = conn.execute(
"SELECT backend, model_profile, model_name, embedding_dim FROM embeddings_config WHERE id=1"
).fetchone()
dim_val = int(dim_row[3]) if dim_row and dim_row[3] else None
if dim_val != embedding_dim:
continue
if selected_config is None:
selected_config = {
"backend": dim_row[0],
"model_profile": dim_row[1],
"model_name": dim_row[2],
"embedding_dim": dim_val,
}
row = conn.execute(
"SELECT COUNT(*) FROM semantic_chunks WHERE embedding IS NOT NULL AND length(embedding) > 0"
).fetchone()
total_chunks += int(row[0] if row else 0)
except Exception:
continue
if not total_chunks:
return {
"success": False,
"error": f"No existing embeddings found for embedding_dim={embedding_dim}",
"dims_seen": dims_seen,
}
if progress_callback:
progress_callback(f"Found {total_chunks} embedded chunks (dim={embedding_dim}). Building binary vectors...")
# Prepare output files / DB.
try:
import numpy as np
except Exception as exc:
return {"success": False, "error": f"numpy required to build binary vectors: {exc}"}
store = VectorMetadataStore(vectors_meta_path)
store._ensure_schema()
if force:
try:
store.clear()
except Exception:
pass
try:
store.clear_binary_vectors()
except Exception:
pass
try:
if mmap_path.exists():
mmap_path.unlink()
except Exception:
pass
try:
if meta_path.exists():
meta_path.unlink()
except Exception:
pass
bytes_per_vec = (int(embedding_dim) + 7) // 8
mmap = np.memmap(
str(mmap_path),
dtype=np.uint8,
mode="w+",
shape=(int(total_chunks), int(bytes_per_vec)),
)
chunk_ids: List[int] = []
chunks_batch: List[Dict[str, Any]] = []
bin_ids_batch: List[int] = []
bin_vecs_batch: List[bytes] = []
batch_limit = 500
global_id = 1
write_idx = 0
skipped_indexes: Dict[str, int] = {}
for index_path in index_files:
try:
with sqlite3.connect(index_path) as conn:
conn.row_factory = sqlite3.Row
has_table = conn.execute(
"SELECT 1 FROM sqlite_master WHERE type='table' AND name='semantic_chunks'"
).fetchone()
if not has_table:
continue
dim_row = conn.execute(
"SELECT embedding_dim FROM embeddings_config WHERE id=1"
).fetchone()
dim_val = int(dim_row[0]) if dim_row and dim_row[0] else None
if dim_val != embedding_dim:
skipped_indexes[str(index_path)] = dim_val or -1
continue
rows = conn.execute(
"SELECT file_path, content, embedding, metadata, category FROM semantic_chunks "
"WHERE embedding IS NOT NULL AND length(embedding) > 0"
).fetchall()
for row in rows:
emb = np.frombuffer(row["embedding"], dtype=np.float32)
if emb.size != int(embedding_dim):
continue
packed = np.packbits((emb > 0).astype(np.uint8))
if packed.size != bytes_per_vec:
continue
mmap[write_idx] = packed
write_idx += 1
cid = global_id
global_id += 1
chunk_ids.append(cid)
meta_raw = row["metadata"]
meta_dict: Dict[str, Any] = {}
if meta_raw:
try:
meta_dict = json.loads(meta_raw) if isinstance(meta_raw, str) else dict(meta_raw)
except Exception:
meta_dict = {}
chunks_batch.append(
{
"chunk_id": cid,
"file_path": row["file_path"],
"content": row["content"],
"start_line": meta_dict.get("start_line"),
"end_line": meta_dict.get("end_line"),
"category": row["category"],
"metadata": meta_dict,
"source_index_db": str(index_path),
}
)
bin_ids_batch.append(cid)
bin_vecs_batch.append(packed.tobytes())
if len(chunks_batch) >= batch_limit:
store.add_chunks(chunks_batch)
store.add_binary_vectors(bin_ids_batch, bin_vecs_batch)
chunks_batch = []
bin_ids_batch = []
bin_vecs_batch = []
except Exception:
continue
if chunks_batch:
store.add_chunks(chunks_batch)
store.add_binary_vectors(bin_ids_batch, bin_vecs_batch)
mmap.flush()
del mmap
# If we skipped inconsistent vectors, truncate metadata to actual write count.
chunk_ids = chunk_ids[:write_idx]
# Write sidecar metadata.
with open(meta_path, "w", encoding="utf-8") as f:
json.dump(
{
"shape": [int(write_idx), int(bytes_per_vec)],
"chunk_ids": chunk_ids,
"embedding_dim": int(embedding_dim),
"backend": (selected_config or {}).get("backend"),
"model_profile": (selected_config or {}).get("model_profile"),
"model_name": (selected_config or {}).get("model_name"),
},
f,
)
if progress_callback:
progress_callback(f"Binary vectors ready: {mmap_path} (rows={write_idx})")
return {
"success": True,
"result": {
"index_root": str(index_root),
"index_files_scanned": len(index_files),
"chunks_total": int(total_chunks),
"chunks_written": int(write_idx),
"embedding_dim": int(embedding_dim),
"bytes_per_vector": int(bytes_per_vec),
"skipped_indexes": len(skipped_indexes),
"vectors_meta_db": str(vectors_meta_path),
"binary_mmap": str(mmap_path),
"binary_meta_json": str(meta_path),
},
}
def discover_all_index_dbs(index_root: Path) -> List[Path]:
"""Recursively find all _index.db files in an index tree.
@@ -1804,4 +2092,4 @@ def check_global_model_lock(
"has_conflict": has_conflict,
"locked_config": locked_config,
"target_config": {"backend": target_backend, "model": target_model},
}
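For orientation, the Stage 1 artifacts written by the build function above, a uint8 memmap of packed sign bits plus a JSON sidecar carrying `shape` and `chunk_ids`, support a brute-force Hamming-distance scan. The sketch below is illustrative only, not the library's actual `BinarySearcher`; the mmap name `_binary_vectors.mmap` appears elsewhere in this commit, while the sidecar filename used here is a stand-in.

```python
# Illustrative sketch: Hamming-distance coarse search over the packed binary vectors.
# "_binary_vectors.meta.json" is an assumed sidecar name; the sidecar keys match the
# JSON written by the build function above.
import json
from pathlib import Path

import numpy as np


def hamming_topk(index_root: Path, query_vec: np.ndarray, top_k: int = 50):
    meta = json.loads((index_root / "_binary_vectors.meta.json").read_text())
    rows, bytes_per_vec = meta["shape"]
    matrix = np.memmap(
        index_root / "_binary_vectors.mmap",
        dtype=np.uint8,
        mode="r",
        shape=(rows, bytes_per_vec),
    )
    # Binarize the query the same way the builder does: keep the sign of each dimension.
    packed_query = np.packbits((query_vec > 0).astype(np.uint8))
    # Hamming distance = popcount of XOR between the packed query and each stored row.
    distances = np.unpackbits(np.bitwise_xor(matrix, packed_query), axis=1).sum(axis=1)
    order = np.argsort(distances)[:top_k]
    chunk_ids = np.asarray(meta["chunk_ids"])
    return list(zip(chunk_ids[order].tolist(), distances[order].tolist()))
```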

View File

@@ -153,7 +153,7 @@ class Config:
staged_realtime_lsp_max_concurrent: int = 2 # Max concurrent LSP requests during graph expansion
staged_realtime_lsp_warmup_s: float = 3.0 # Wait for server analysis after opening seed docs
staged_realtime_lsp_resolve_symbols: bool = False # If True, resolves symbol names via documentSymbol (slower)
-    staged_clustering_strategy: str = "auto" # "auto", "hdbscan", "dbscan", "frequency", "noop"
+    staged_clustering_strategy: str = "auto" # "auto", "hdbscan", "dbscan", "frequency", "noop", "score", "dir_rr", "path"
staged_clustering_min_size: int = 3 # Minimum cluster size for Stage 3 grouping
enable_staged_rerank: bool = True # Enable optional cross-encoder reranking in Stage 4
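Selecting one of the new Stage 3 strategies follows the same pattern the unit tests added later in this commit use: load a Config, override `staged_clustering_strategy`, and pass it to the search engine. A minimal sketch; the `registry` and `mapper` objects are assumed to come from the application's normal setup.

```python
# Minimal sketch: pick the "dir_rr" Stage 3 strategy.
# registry and mapper are placeholders for the objects constructed elsewhere.
from codexlens.config import Config
from codexlens.search.chain_search import ChainSearchEngine

cfg = Config.load()
cfg.staged_clustering_strategy = "dir_rr"  # or "score", "path", "noop", "auto"
engine = ChainSearchEngine(registry=registry, mapper=mapper, config=cfg)
```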

View File

@@ -0,0 +1,135 @@
"""Keep-alive wrapper for Standalone LSP servers in synchronous workflows.
The staged realtime pipeline calls into LSP from synchronous code paths.
Creating a fresh asyncio loop per query (via asyncio.run) forces language
servers to start/stop every time, which is slow and can trigger shutdown
timeouts on Windows.
This module runs an asyncio event loop in a background thread and keeps a
single LspBridge (and its StandaloneLspManager + subprocesses) alive across
multiple queries. Callers submit coroutines that operate on the shared bridge.
"""
from __future__ import annotations
import atexit
import asyncio
import threading
from dataclasses import dataclass
from typing import Awaitable, Callable, Optional, TypeVar
from codexlens.lsp.lsp_bridge import LspBridge
T = TypeVar("T")
@dataclass(frozen=True)
class KeepAliveKey:
workspace_root: str
config_file: Optional[str]
timeout: float
class KeepAliveLspBridge:
"""Runs a shared LspBridge on a dedicated event loop thread."""
def __init__(self, *, workspace_root: str, config_file: Optional[str], timeout: float) -> None:
self._key = KeepAliveKey(workspace_root=workspace_root, config_file=config_file, timeout=float(timeout))
self._lock = threading.RLock()
self._call_lock = threading.RLock()
self._ready = threading.Event()
self._thread: Optional[threading.Thread] = None
self._loop: Optional[asyncio.AbstractEventLoop] = None
self._bridge: Optional[LspBridge] = None
self._stopped = False
atexit.register(self.stop)
@property
def key(self) -> KeepAliveKey:
return self._key
def start(self) -> None:
with self._lock:
if self._stopped:
raise RuntimeError("KeepAliveLspBridge is stopped")
if self._thread is not None and self._thread.is_alive():
return
self._ready.clear()
thread = threading.Thread(target=self._run, name="codexlens-lsp-keepalive", daemon=True)
self._thread = thread
thread.start()
if not self._ready.wait(timeout=10.0):
raise RuntimeError("Timed out starting LSP keep-alive loop")
def stop(self) -> None:
with self._lock:
if self._stopped:
return
self._stopped = True
loop = self._loop
bridge = self._bridge
thread = self._thread
if loop is not None and bridge is not None:
try:
fut = asyncio.run_coroutine_threadsafe(bridge.close(), loop)
fut.result(timeout=5.0)
except Exception:
pass
try:
loop.call_soon_threadsafe(loop.stop)
except Exception:
pass
if thread is not None:
try:
thread.join(timeout=5.0)
except Exception:
pass
def run(self, fn: Callable[[LspBridge], Awaitable[T]], *, timeout: Optional[float] = None) -> T:
"""Run an async function against the shared LspBridge and return its result."""
self.start()
loop = self._loop
bridge = self._bridge
if loop is None or bridge is None:
raise RuntimeError("Keep-alive loop not initialized")
async def _call() -> T:
return await fn(bridge)
# Serialize bridge usage to avoid overlapping LSP request storms.
with self._call_lock:
fut = asyncio.run_coroutine_threadsafe(_call(), loop)
return fut.result(timeout=float(timeout or self._key.timeout) + 1.0)
def _run(self) -> None:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
bridge = LspBridge(
workspace_root=self._key.workspace_root,
config_file=self._key.config_file,
timeout=self._key.timeout,
)
with self._lock:
self._loop = loop
self._bridge = bridge
self._ready.set()
try:
loop.run_forever()
finally:
try:
if self._bridge is not None:
loop.run_until_complete(self._bridge.close())
except Exception:
pass
try:
loop.close()
except Exception:
pass
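A minimal usage sketch for the keep-alive wrapper defined above: the caller hands `run()` an async function that receives the shared `LspBridge`, and successive calls reuse the same background event loop and language-server processes. The workspace path and the target file are placeholders; `get_document_symbols` is the same bridge method the staged pipeline uses elsewhere in this commit.

```python
# Sketch: reuse one LspBridge across several synchronous queries.
from codexlens.lsp.keepalive_bridge import KeepAliveLspBridge

keepalive = KeepAliveLspBridge(
    workspace_root="/path/to/workspace",  # placeholder
    config_file=None,
    timeout=30.0,
)

async def list_symbols(bridge):
    # Any coroutine that takes the shared bridge works here.
    return await bridge.get_document_symbols("/path/to/workspace/example.py")

symbols = keepalive.run(list_symbols, timeout=30.0)
keepalive.stop()  # normally left to the atexit hook; explicit here for clarity
```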

View File

@@ -103,6 +103,7 @@ class StandaloneLspManager:
self._configs: Dict[str, ServerConfig] = {} # language_id -> ServerConfig
self._read_tasks: Dict[str, asyncio.Task] = {} # language_id -> read task
self._stderr_tasks: Dict[str, asyncio.Task] = {} # language_id -> stderr read task
self._processor_tasks: Dict[str, asyncio.Task] = {} # language_id -> message processor task
self._lock = asyncio.Lock()
def _find_config_file(self) -> Optional[Path]:
@@ -269,7 +270,7 @@ class StandaloneLspManager:
)
# Start the message processor task to handle queued messages
-    asyncio.create_task(self._process_messages(language_id))
+    self._processor_tasks[language_id] = asyncio.create_task(self._process_messages(language_id))
# Initialize the server - now uses queue for reading responses
await self._initialize_server(state)
@@ -311,6 +312,15 @@ class StandaloneLspManager:
except asyncio.CancelledError:
pass
# Cancel message processor task
processor_task = self._processor_tasks.pop(language_id, None)
if processor_task:
processor_task.cancel()
try:
await processor_task
except asyncio.CancelledError:
pass
# Send shutdown request
try:
await self._send_request(state, "shutdown", None, timeout=5.0)

View File

@@ -66,6 +66,10 @@ class BinarySearcher:
self._binary_matrix: Optional[np.ndarray] = None
self._is_memmap = False
self._loaded = False
self._embedding_dim: Optional[int] = None
self._backend: Optional[str] = None
self._model: Optional[str] = None
self._model_profile: Optional[str] = None
def load(self) -> bool:
"""Load binary vectors using memory-mapped file or database fallback.
@@ -90,6 +94,10 @@
shape = tuple(meta['shape'])
self._chunk_ids = np.array(meta['chunk_ids'], dtype=np.int64)
self._embedding_dim = meta.get("embedding_dim")
self._backend = meta.get("backend")
self._model = meta.get("model") or meta.get("model_name")
self._model_profile = meta.get("model_profile")
# Memory-map the binary matrix (read-only)
self._binary_matrix = np.memmap(
@@ -141,6 +149,10 @@
self._binary_matrix = np.vstack(binary_arrays)
self._is_memmap = False
self._loaded = True
self._embedding_dim = None
self._backend = None
self._model = None
self._model_profile = None
logger.info(
"Loaded %d binary vectors from DB (%d bytes each)",
@@ -261,6 +273,26 @@
"""Get number of loaded binary vectors."""
return len(self._chunk_ids) if self._chunk_ids is not None else 0
@property
def embedding_dim(self) -> Optional[int]:
"""Embedding dimension used to build these binary vectors (if known)."""
return int(self._embedding_dim) if self._embedding_dim is not None else None
@property
def backend(self) -> Optional[str]:
"""Embedding backend used to build these vectors (if known)."""
return self._backend
@property
def model(self) -> Optional[str]:
"""Embedding model name used to build these vectors (if known)."""
return self._model
@property
def model_profile(self) -> Optional[str]:
"""Embedding profile name (fastembed) used to build these vectors (if known)."""
return self._model_profile
@property
def is_memmap(self) -> bool:
"""Check if using memory-mapped file (vs in-memory array)."""

View File

@@ -13,6 +13,7 @@ from typing import List, Optional, Dict, Any, Literal, Tuple, TYPE_CHECKING
import json
import logging
import os
import threading
import time
from codexlens.entities import SearchResult, Symbol
@@ -32,7 +33,7 @@ from codexlens.storage.global_index import GlobalSymbolIndex
from codexlens.storage.path_mapper import PathMapper
from codexlens.storage.sqlite_store import SQLiteStore
from codexlens.storage.vector_meta_store import VectorMetadataStore
-    from codexlens.config import VECTORS_META_DB_NAME
+    from codexlens.config import BINARY_VECTORS_MMAP_NAME, VECTORS_META_DB_NAME
from codexlens.search.hybrid_search import HybridSearchEngine
@@ -165,6 +166,9 @@ class ChainSearchEngine:
self._max_workers = max_workers
self._executor: Optional[ThreadPoolExecutor] = None
self._config = config
self._realtime_lsp_keepalive_lock = threading.RLock()
self._realtime_lsp_keepalive = None
self._realtime_lsp_keepalive_key = None
def _get_executor(self, max_workers: Optional[int] = None) -> ThreadPoolExecutor:
"""Get or create the shared thread pool executor.
@@ -187,6 +191,15 @@ class ChainSearchEngine:
if self._executor is not None:
self._executor.shutdown(wait=True)
self._executor = None
with self._realtime_lsp_keepalive_lock:
keepalive = self._realtime_lsp_keepalive
self._realtime_lsp_keepalive = None
self._realtime_lsp_keepalive_key = None
if keepalive is not None:
try:
keepalive.stop()
except Exception:
pass
def __enter__(self) -> "ChainSearchEngine":
"""Context manager entry."""
@@ -838,7 +851,11 @@ class ChainSearchEngine:
# ========== Stage 1: Binary Coarse Search ==========
stage1_start = time.time()
coarse_results, index_root = self._stage1_binary_search(
-    query, index_paths, coarse_k, stats
+    query,
+    index_paths,
+    coarse_k,
+    stats,
+    index_root=start_index.parent,
)
stage_times["stage1_binary_ms"] = (time.time() - stage1_start) * 1000
stage_counts["stage1_candidates"] = len(coarse_results)
@@ -849,14 +866,47 @@ class ChainSearchEngine:
)
if not coarse_results:
-    self.logger.debug("No binary candidates found, falling back to standard search")
-    return self.search(query, source_path, options=options)
+    # Keep the staged pipeline running even when Stage 1 yields no candidates.
+    # This makes "realtime LSP graph → clustering → rerank" comparable across queries.
self.logger.debug(
"No Stage 1 candidates found; seeding staged pipeline with FTS results"
)
stage1_fallback_start = time.time()
try:
seed_opts = SearchOptions(
depth=options.depth,
max_workers=options.max_workers,
limit_per_dir=max(10, int(coarse_k)),
total_limit=int(coarse_k),
include_symbols=True,
enable_vector=False,
hybrid_mode=False,
enable_cascade=False,
)
seed = self.search(query, source_path, options=seed_opts)
coarse_results = list(seed.results or [])[: int(coarse_k)]
stage_counts["stage1_fallback_used"] = 1
except Exception as exc:
self.logger.debug("Stage 1 fallback seeding failed: %r", exc)
coarse_results = []
stage_times["stage1_fallback_search_ms"] = (time.time() - stage1_fallback_start) * 1000
stage_counts["stage1_candidates"] = len(coarse_results)
if not coarse_results:
return ChainSearchResult(query=query, results=[], symbols=[], stats=stats)
# ========== Stage 2: LSP Graph Expansion ==========
stage2_start = time.time()
expanded_results = self._stage2_lsp_expand(coarse_results, index_root, query=query)
stage_times["stage2_expand_ms"] = (time.time() - stage2_start) * 1000
stage_counts["stage2_expanded"] = len(expanded_results)
try:
stage2_unique_paths = len({(r.path or "").lower() for r in expanded_results if getattr(r, "path", None)})
except Exception:
stage2_unique_paths = 0
stage_counts["stage2_unique_paths"] = stage2_unique_paths
stage_counts["stage2_duplicate_paths"] = max(0, len(expanded_results) - stage2_unique_paths)
self.logger.debug(
"Staged Stage 2: LSP expansion %d -> %d results in %.2fms",
@@ -868,6 +918,11 @@ class ChainSearchEngine:
clustered_results = self._stage3_cluster_prune(expanded_results, k * 2)
stage_times["stage3_cluster_ms"] = (time.time() - stage3_start) * 1000
stage_counts["stage3_clustered"] = len(clustered_results)
if self._config is not None:
try:
stage_counts["stage3_strategy"] = str(getattr(self._config, "staged_clustering_strategy", "auto") or "auto")
except Exception:
pass
self.logger.debug(
"Staged Stage 3: Clustering %d -> %d representatives in %.2fms",
@@ -944,6 +999,8 @@ class ChainSearchEngine:
index_paths: List[Path],
coarse_k: int,
stats: SearchStats,
*,
index_root: Optional[Path] = None,
) -> Tuple[List[SearchResult], Optional[Path]]:
"""Stage 1: Binary vector coarse search using Hamming distance.
@@ -967,8 +1024,12 @@ class ChainSearchEngine:
)
return [], None
-    # Try centralized BinarySearcher first (preferred for mmap indexes)
-    index_root = index_paths[0].parent if index_paths else None
+    # Try centralized BinarySearcher first (preferred for mmap indexes).
+    # Centralized binary vectors live at a project index root (where `index binary-mmap`
# was run), which may be an ancestor of the nearest `_index.db` directory.
index_root = Path(index_root).resolve() if index_root is not None else (index_paths[0].parent if index_paths else None)
if index_root is not None:
index_root = self._find_nearest_binary_mmap_root(index_root)
coarse_candidates: List[Tuple[int, float, Path]] = [] # (chunk_id, distance, index_path)
used_centralized = False
using_dense_fallback = False
@@ -977,9 +1038,26 @@ class ChainSearchEngine:
binary_searcher = self._get_centralized_binary_searcher(index_root)
if binary_searcher is not None:
try:
-    from codexlens.semantic.embedder import Embedder
-    embedder = Embedder()
-    query_dense = embedder.embed_to_numpy([query])[0]
+    use_gpu = True
+    if self._config is not None:
+        use_gpu = getattr(self._config, "embedding_use_gpu", True)
query_dense = None
backend = getattr(binary_searcher, "backend", None)
model = getattr(binary_searcher, "model", None)
profile = getattr(binary_searcher, "model_profile", None) or "code"
if backend == "litellm":
try:
from codexlens.semantic.factory import get_embedder as get_factory_embedder
embedder = get_factory_embedder(backend="litellm", model=model or "code")
query_dense = embedder.embed_to_numpy([query])[0]
except Exception:
query_dense = None
if query_dense is None:
from codexlens.semantic.embedder import get_embedder
embedder = get_embedder(profile=str(profile), use_gpu=use_gpu)
query_dense = embedder.embed_to_numpy([query])[0]
results = binary_searcher.search(query_dense, top_k=coarse_k)
for chunk_id, distance in results:
@@ -1531,34 +1609,26 @@ class ChainSearchEngine:
if not seed_nodes:
return coarse_results
-    async def expand_graph():
-        async with LspBridge(
-            workspace_root=str(workspace_root),
-            config_file=str(lsp_config_file) if lsp_config_file else None,
-            timeout=timeout_s,
-        ) as bridge:
-            # Warm up analysis: open seed docs and wait a bit so references/call hierarchy are populated.
-            if warmup_s > 0:
-                for seed in seed_nodes[:3]:
-                    try:
-                        await bridge.get_document_symbols(seed.file_path)
-                    except Exception:
-                        continue
-                try:
-                    warmup_budget = min(warmup_s, max(0.0, timeout_s * 0.1))
-                    await asyncio.sleep(min(warmup_budget, max(0.0, timeout_s - 0.5)))
-                except Exception:
-                    pass
-            builder = LspGraphBuilder(
-                max_depth=max_depth,
-                max_nodes=max_nodes,
-                max_concurrent=max(1, max_concurrent),
-                resolve_symbols=resolve_symbols,
-            )
-            return await builder.build_from_seeds(seed_nodes, bridge)
-    def run_coro_blocking():
-        return asyncio.run(asyncio.wait_for(expand_graph(), timeout=timeout_s))
+    async def expand_graph(bridge: LspBridge):
+        # Warm up analysis: open seed docs and wait a bit so references/call hierarchy are populated.
+        if warmup_s > 0:
+            for seed in seed_nodes[:3]:
+                try:
+                    await bridge.get_document_symbols(seed.file_path)
+                except Exception:
+                    continue
+            try:
+                warmup_budget = min(warmup_s, max(0.0, timeout_s * 0.1))
+                await asyncio.sleep(min(warmup_budget, max(0.0, timeout_s - 0.5)))
+            except Exception:
+                pass
+        builder = LspGraphBuilder(
+            max_depth=max_depth,
+            max_nodes=max_nodes,
+            max_concurrent=max(1, max_concurrent),
+            resolve_symbols=resolve_symbols,
+        )
+        return await builder.build_from_seeds(seed_nodes, bridge)
try:
try:
@@ -1569,9 +1639,43 @@ class ChainSearchEngine:
if has_running_loop:
with ThreadPoolExecutor(max_workers=1) as executor:
-    graph = executor.submit(run_coro_blocking).result(timeout=timeout_s + 1.0)
+    async def _expand_once():
async with LspBridge(
workspace_root=str(workspace_root),
config_file=str(lsp_config_file) if lsp_config_file else None,
timeout=timeout_s,
) as bridge:
return await expand_graph(bridge)
def _run():
return asyncio.run(asyncio.wait_for(_expand_once(), timeout=timeout_s))
graph = executor.submit(_run).result(timeout=timeout_s + 1.0)
else:
-    graph = run_coro_blocking()
+    from codexlens.lsp.keepalive_bridge import KeepAliveKey, KeepAliveLspBridge
key = KeepAliveKey(
workspace_root=str(workspace_root),
config_file=str(lsp_config_file) if lsp_config_file else None,
timeout=float(timeout_s),
)
with self._realtime_lsp_keepalive_lock:
keepalive = self._realtime_lsp_keepalive
if keepalive is None or self._realtime_lsp_keepalive_key != key:
if keepalive is not None:
try:
keepalive.stop()
except Exception:
pass
keepalive = KeepAliveLspBridge(
workspace_root=key.workspace_root,
config_file=key.config_file,
timeout=key.timeout,
)
self._realtime_lsp_keepalive = keepalive
self._realtime_lsp_keepalive_key = key
graph = keepalive.run(expand_graph, timeout=timeout_s)
except Exception as exc:
self.logger.debug("Stage 2 (realtime) expansion failed: %r", exc)
return coarse_results
@@ -1705,6 +1809,57 @@ class ChainSearchEngine:
if len(expanded_results) <= target_count:
return expanded_results
strategy_name = "auto"
if self._config is not None:
strategy_name = getattr(self._config, "staged_clustering_strategy", "auto") or "auto"
strategy_name = str(strategy_name).strip().lower()
if strategy_name in {"noop", "none", "off"}:
return sorted(expanded_results, key=lambda r: r.score, reverse=True)[:target_count]
if strategy_name in {"score", "top", "rank"}:
return sorted(expanded_results, key=lambda r: r.score, reverse=True)[:target_count]
if strategy_name in {"path", "file"}:
best_by_path: Dict[str, SearchResult] = {}
for r in expanded_results:
if not r.path:
continue
key = str(r.path).lower()
if key not in best_by_path or r.score > best_by_path[key].score:
best_by_path[key] = r
candidates = list(best_by_path.values()) or expanded_results
candidates.sort(key=lambda r: r.score, reverse=True)
return candidates[:target_count]
if strategy_name in {"dir_rr", "rr_dir", "round_robin_dir"}:
results_sorted = sorted(expanded_results, key=lambda r: r.score, reverse=True)
buckets: Dict[str, List[SearchResult]] = {}
dir_order: List[str] = []
for r in results_sorted:
try:
d = str(Path(r.path).parent).lower()
except Exception:
d = ""
if d not in buckets:
buckets[d] = []
dir_order.append(d)
buckets[d].append(r)
out: List[SearchResult] = []
while len(out) < target_count:
progressed = False
for d in dir_order:
if not buckets.get(d):
continue
out.append(buckets[d].pop(0))
progressed = True
if len(out) >= target_count:
break
if not progressed:
break
return out
try:
from codexlens.search.clustering import (
ClusteringConfig,
@@ -2550,6 +2705,31 @@ class ChainSearchEngine:
self.logger.debug("Failed to load centralized binary searcher: %s", exc) self.logger.debug("Failed to load centralized binary searcher: %s", exc)
return None return None
def _find_nearest_binary_mmap_root(self, index_root: Path, *, max_levels: int = 10) -> Path:
"""Walk up index_root parents to find the nearest centralized binary mmap.
Centralized staged-binary artifacts are stored at a project index root
(e.g. `.../project/src/_binary_vectors.mmap`), but staged search often starts
from the nearest ancestor `_index.db` path, which can be nested deeper.
This helper makes Stage 1 robust by locating the nearest ancestor directory
that contains the centralized `_binary_vectors.mmap`.
"""
current_dir = Path(index_root).resolve()
for _ in range(max(0, int(max_levels)) + 1):
try:
if (current_dir / BINARY_VECTORS_MMAP_NAME).exists():
return current_dir
except Exception:
return Path(index_root).resolve()
parent = current_dir.parent
if parent == current_dir:
break
current_dir = parent
return Path(index_root).resolve()
def _compute_cosine_similarity(
self,
query_vec: "np.ndarray",

View File

@@ -0,0 +1,56 @@
from __future__ import annotations
from unittest.mock import MagicMock
import pytest
from codexlens.config import Config
from codexlens.entities import SearchResult
from codexlens.search.chain_search import ChainSearchEngine
def _engine_with_strategy(name: str) -> ChainSearchEngine:
cfg = Config.load()
cfg.staged_clustering_strategy = name
return ChainSearchEngine(registry=MagicMock(), mapper=MagicMock(), config=cfg)
def test_stage3_strategy_score_skips_embedding(monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.setattr(
"codexlens.semantic.factory.get_embedder",
lambda *a, **k: (_ for _ in ()).throw(RuntimeError("should not embed")),
)
engine = _engine_with_strategy("score")
expanded = [
SearchResult(path="D:/p/a.py", score=0.9),
SearchResult(path="D:/p/a.py", score=0.1),
SearchResult(path="D:/p/b.py", score=0.8),
SearchResult(path="D:/p/c.py", score=0.7),
]
reps = engine._stage3_cluster_prune(expanded, target_count=3)
assert [r.path for r in reps] == ["D:/p/a.py", "D:/p/b.py", "D:/p/c.py"]
def test_stage3_strategy_dir_rr_round_robins_dirs(monkeypatch: pytest.MonkeyPatch) -> None:
monkeypatch.setattr(
"codexlens.semantic.factory.get_embedder",
lambda *a, **k: (_ for _ in ()).throw(RuntimeError("should not embed")),
)
engine = _engine_with_strategy("dir_rr")
expanded = [
SearchResult(path="D:/p1/a.py", score=0.99),
SearchResult(path="D:/p1/b.py", score=0.98),
SearchResult(path="D:/p2/c.py", score=0.97),
SearchResult(path="D:/p2/d.py", score=0.96),
SearchResult(path="D:/p3/e.py", score=0.95),
]
reps = engine._stage3_cluster_prune(expanded, target_count=4)
assert len(reps) == 4
assert reps[0].path.endswith("p1/a.py")
assert reps[1].path.endswith("p2/c.py")
assert reps[2].path.endswith("p3/e.py")

package-lock.json generated
View File

@@ -7,6 +7,7 @@
"": { "": {
"name": "claude-code-workflow", "name": "claude-code-workflow",
"version": "6.3.54", "version": "6.3.54",
"hasInstallScript": true,
"license": "MIT", "license": "MIT",
"workspaces": [ "workspaces": [
"ccw/frontend", "ccw/frontend",
@@ -23,6 +24,7 @@
"gradient-string": "^2.0.2", "gradient-string": "^2.0.2",
"inquirer": "^9.2.0", "inquirer": "^9.2.0",
"jsonwebtoken": "^9.0.3", "jsonwebtoken": "^9.0.3",
"node-pty": "^1.1.0-beta21",
"open": "^9.1.0", "open": "^9.1.0",
"ora": "^7.0.0", "ora": "^7.0.0",
"zod": "^4.1.13" "zod": "^4.1.13"
@@ -113,6 +115,8 @@
"sonner": "^2.0.7", "sonner": "^2.0.7",
"tailwind-merge": "^2.5.0", "tailwind-merge": "^2.5.0",
"web-vitals": "^5.1.0", "web-vitals": "^5.1.0",
"xterm": "^5.3.0",
"xterm-addon-fit": "^0.8.0",
"zod": "^4.1.13", "zod": "^4.1.13",
"zustand": "^5.0.0" "zustand": "^5.0.0"
}, },
@@ -19769,6 +19773,12 @@
"node": ">=10" "node": ">=10"
} }
}, },
"node_modules/node-addon-api": {
"version": "7.1.1",
"resolved": "https://registry.npmjs.org/node-addon-api/-/node-addon-api-7.1.1.tgz",
"integrity": "sha512-5m3bsyrjFWE1xf7nz7YXdN4udnVtXK6/Yfgn5qnahL6bCkf2yKt4k3nuTKAtT4r3IG8JNR2ncsIMdZuAzJjHQQ==",
"license": "MIT"
},
"node_modules/node-emoji": { "node_modules/node-emoji": {
"version": "2.2.0", "version": "2.2.0",
"resolved": "https://registry.npmjs.org/node-emoji/-/node-emoji-2.2.0.tgz", "resolved": "https://registry.npmjs.org/node-emoji/-/node-emoji-2.2.0.tgz",
@@ -19784,6 +19794,16 @@
"node": ">=18" "node": ">=18"
} }
}, },
"node_modules/node-pty": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/node-pty/-/node-pty-1.1.0.tgz",
"integrity": "sha512-20JqtutY6JPXTUnL0ij1uad7Qe1baT46lyolh2sSENDd4sTzKZ4nmAFkeAARDKwmlLjPx6XKRlwRUxwjOy+lUg==",
"hasInstallScript": true,
"license": "MIT",
"dependencies": {
"node-addon-api": "^7.1.0"
}
},
"node_modules/node-releases": { "node_modules/node-releases": {
"version": "2.0.27", "version": "2.0.27",
"resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.27.tgz", "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.27.tgz",
@@ -29523,6 +29543,23 @@
"dev": true, "dev": true,
"license": "MIT" "license": "MIT"
}, },
"node_modules/xterm": {
"version": "5.3.0",
"resolved": "https://registry.npmjs.org/xterm/-/xterm-5.3.0.tgz",
"integrity": "sha512-8QqjlekLUFTrU6x7xck1MsPzPA571K5zNqWm0M0oroYEWVOptZ0+ubQSkQ3uxIEhcIHRujJy6emDWX4A7qyFzg==",
"deprecated": "This package is now deprecated. Move to @xterm/xterm instead.",
"license": "MIT"
},
"node_modules/xterm-addon-fit": {
"version": "0.8.0",
"resolved": "https://registry.npmjs.org/xterm-addon-fit/-/xterm-addon-fit-0.8.0.tgz",
"integrity": "sha512-yj3Np7XlvxxhYF/EJ7p3KHaMt6OdwQ+HDu573Vx1lRXsVxOcnVJs51RgjZOouIZOczTsskaS+CpXspK81/DLqw==",
"deprecated": "This package is now deprecated. Move to @xterm/addon-fit instead.",
"license": "MIT",
"peerDependencies": {
"xterm": "^5.0.0"
}
},
"node_modules/y18n": { "node_modules/y18n": {
"version": "5.0.8", "version": "5.0.8",
"resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz", "resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz",

View File

@@ -57,6 +57,7 @@
"gradient-string": "^2.0.2", "gradient-string": "^2.0.2",
"inquirer": "^9.2.0", "inquirer": "^9.2.0",
"jsonwebtoken": "^9.0.3", "jsonwebtoken": "^9.0.3",
"node-pty": "^1.1.0-beta21",
"open": "^9.1.0", "open": "^9.1.0",
"ora": "^7.0.0", "ora": "^7.0.0",
"zod": "^4.1.13" "zod": "^4.1.13"