mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-05 01:50:27 +08:00
feat: Add CodexLens Manager to dashboard and enhance GPU management
- Introduced a new CodexLens Manager item in the dashboard for easier access.
- Implemented GPU management commands in the CLI, including listing available GPUs, selecting a specific GPU, and resetting to automatic detection.
- Enhanced the embedding generation process to utilize GPU resources more effectively, including batch-size optimization for better performance.
- Updated the embedder to support device ID options for GPU selection, ensuring compatibility with DirectML and CUDA.
- Added detailed logging and error handling for GPU detection and selection processes.
- Updated package version to 6.2.9 and added comprehensive documentation for the Codex Agent Execution Protocol.
@@ -1,9 +1,7 @@
# Analysis Mode Protocol

## Mode Definition

**Mode**: `analysis` (READ-ONLY)
**Tools**: Gemini, Qwen (default mode)

## Operation Boundaries

@@ -27,8 +25,8 @@
2. **Read** and analyze CONTEXT files thoroughly
3. **Identify** patterns, issues, and dependencies
4. **Generate** insights and recommendations
5. **Validate** EXPECTED deliverables met
6. **Output** structured analysis (text response only)

## Core Requirements

@@ -1,10 +1,5 @@
# Write Mode Protocol

## Mode Definition

**Mode**: `write` (FILE OPERATIONS) / `auto` (FULL OPERATIONS)
**Tools**: Codex (auto), Gemini/Qwen (write)

## Operation Boundaries

### MODE: write
@@ -15,12 +10,6 @@

**Restrictions**: Follow project conventions, cannot break existing functionality

### MODE: auto (Codex only)
- All `write` mode operations
- Run tests and builds
- Commit code incrementally
- Full autonomous development

**Constraint**: Must test every change

## Execution Flow
@@ -33,16 +22,6 @@
5. **Validate** changes
6. **Report** file changes

### MODE: auto
1. **Parse** all 6 fields
2. **Analyze** CONTEXT files - find 3+ similar patterns
3. **Plan** implementation following RULES
4. **Generate** code with tests
5. **Run** tests continuously
6. **Commit** working code incrementally
7. **Validate** EXPECTED deliverables
8. **Report** results

## Core Requirements

**ALWAYS**:

331 AGENTS.md Normal file
@@ -0,0 +1,331 @@
# Codex Agent Execution Protocol

## Overview

**Role**: Autonomous development, implementation, and testing specialist

## Prompt Structure

All prompts follow this 6-field format:

```
PURPOSE: [development goal]
TASK: [specific implementation task]
MODE: [auto|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [templates | additional constraints]
```
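A filled-in example may help; the values below are illustrative (loosely drawn from this commit's GPU-management work), not a canonical prompt from the repository:

```
PURPOSE: Expose GPU management in the CodexLens dashboard
TASK: Add routes for listing, selecting, and resetting GPU devices
MODE: write
CONTEXT: src/dashboard/routes/codexlens-*.ts
EXPECTED: Three JSON API routes with error handling
RULES: route-handler template | return status 400 when device_id is missing
```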

**Subtask indicator**: `Subtask N of M: [title]` or `CONTINUE TO NEXT SUBTASK`

## MODE Definitions

### MODE: auto (default)

**Permissions**:
- Full file operations (create/modify/delete)
- Run tests and builds
- Commit code incrementally

**Execute**:
1. Parse PURPOSE and TASK
2. Analyze CONTEXT files - find 3+ similar patterns
3. Plan implementation following RULES
4. Generate code with tests
5. Run tests continuously
6. Commit working code incrementally
7. Validate EXPECTED deliverables
8. Report results (with context for next subtask if multi-task)

**Constraint**: Must test every change

### MODE: write

**Permissions**:
- Focused file operations
- Create/modify specific files
- Run tests for validation

**Execute**:
1. Analyze CONTEXT files
2. Make targeted changes
3. Validate tests pass
4. Report file changes

## Execution Protocol

### Core Requirements

**ALWAYS**:
- Parse all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
- Study CONTEXT files - find 3+ similar patterns before implementing
- Apply RULES (templates + constraints) exactly
- Test continuously after every change
- Commit incrementally with working code
- Match project style and patterns exactly
- List all created/modified files at output beginning
- Use direct binary calls (avoid shell wrappers)
- Prefer apply_patch for text edits
- Configure Windows UTF-8 encoding for Chinese support

**NEVER**:
- Make assumptions without code verification
- Ignore existing patterns
- Skip tests
- Use clever tricks over boring solutions
- Over-engineer solutions
- Break existing code or backward compatibility
- Exceed 3 failed attempts without stopping

### RULES Processing

- Parse RULES field to extract template content and constraints
- Recognize `|` as separator: `template content | additional constraints`
- Apply ALL template guidelines as mandatory
- Apply ALL additional constraints as mandatory
- Treat rule violations as task failures

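The separator convention above can be sketched as a small parser. This is a minimal illustration only; the `parseRules` helper is hypothetical, not part of the repository:

```javascript
// Split a RULES field on `|`: the first segment is template content,
// the remaining segments are additional constraints. All are mandatory.
function parseRules(rules) {
  const parts = rules.split('|').map((p) => p.trim()).filter(Boolean);
  return {
    template: parts[0] || '',
    constraints: parts.slice(1),
  };
}
```

For example, `parseRules('api-route template | no breaking changes | test every change')` yields one template segment and two constraints.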
### Multi-Task Execution (Resume Pattern)

**First subtask**: Standard execution flow above
**Subsequent subtasks** (via `resume --last`):
- Recall context from previous subtasks
- Build on previous work (don't repeat)
- Maintain consistency with established patterns
- Focus on current subtask scope only
- Test integration with previous work
- Report context for next subtask

## System Optimization

**Direct Binary Calls**: Always call binaries directly in `functions.shell`, set `workdir`, avoid shell wrappers (`bash -lc`, `cmd /c`, etc.)

**Text Editing Priority**:
1. Use `apply_patch` tool for all routine text edits
2. Fall back to `sed` for single-line substitutions if unavailable
3. Avoid Python editing scripts unless both fail

**apply_patch invocation**:
```json
{
  "command": ["apply_patch", "*** Begin Patch\n*** Update File: path/to/file\n@@\n- old\n+ new\n*** End Patch\n"],
  "workdir": "<workdir>",
  "justification": "Brief reason"
}
```

**Windows UTF-8 Encoding** (before commands):
```powershell
[Console]::InputEncoding = [Text.UTF8Encoding]::new($false)
[Console]::OutputEncoding = [Text.UTF8Encoding]::new($false)
chcp 65001 > $null
```

## Output Standards

### Format Priority

**If template defines output format** → Follow template format EXACTLY (all sections mandatory)

**If template has no format** → Use default format below based on task type

### Default Output Formats

#### Single Task Implementation

```markdown
# Implementation: [TASK Title]

## Changes
- Created: `path/to/file1.ext` (X lines)
- Modified: `path/to/file2.ext` (+Y/-Z lines)
- Deleted: `path/to/file3.ext`

## Summary
[2-3 sentence overview of what was implemented]

## Key Decisions
1. [Decision] - Rationale and reference to similar pattern
2. [Decision] - path/to/reference:line

## Implementation Details
[Evidence-based description with code references]

## Testing
- Tests written: X new tests
- Tests passing: Y/Z tests
- Coverage: N%

## Validation
✅ Tests: X passing
✅ Coverage: Y%
✅ Build: Success

## Next Steps
[Recommendations or future improvements]
```

#### Multi-Task Execution (with Resume)

**First Subtask**:
```markdown
# Subtask 1/N: [TASK Title]

## Changes
[List of file changes]

## Implementation
[Details with code references]

## Testing
✅ Tests: X passing
✅ Integration: Compatible with existing code

## Context for Next Subtask
- Key decisions: [established patterns]
- Files created: [paths and purposes]
- Integration points: [where next subtask should connect]
```

**Subsequent Subtasks**:
```markdown
# Subtask N/M: [TASK Title]

## Changes
[List of file changes]

## Integration Notes
✅ Compatible with subtask N-1
✅ Maintains established patterns
✅ Tests pass with previous work

## Implementation
[Details with code references]

## Testing
✅ Tests: X passing
✅ Total coverage: Y%

## Context for Next Subtask
[If not final subtask, provide context for continuation]
```

#### Partial Completion

```markdown
# Task Status: Partially Completed

## Completed
- [What worked successfully]
- Files: `path/to/completed.ext`

## Blocked
- **Issue**: [What failed]
- **Root Cause**: [Analysis of failure]
- **Attempted**: [Solutions tried - attempt X of 3]

## Required
[What's needed to proceed]

## Recommendation
[Suggested next steps or alternative approaches]
```

### Code References

**Format**: `path/to/file:line_number`

**Example**: `src/auth/jwt.ts:45` - Implemented token validation following pattern from `src/auth/session.ts:78`

### Related Files Section

**Always include at output beginning** - List ALL files analyzed, created, or modified:

```markdown
## Related Files
- `path/to/file1.ext` - [Role in implementation]
- `path/to/file2.ext` - [Reference pattern used]
- `path/to/file3.ext` - [Modified for X reason]
```

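The `path:line` reference format can be checked mechanically. A minimal hypothetical helper (not actual repository code) might look like this:

```javascript
// Parse a `path/to/file:line_number` code reference into its parts.
// Returns null when the string does not match the format.
function parseCodeReference(ref) {
  const match = /^(.+):(\d+)$/.exec(ref);
  if (!match) return null;
  return { path: match[1], line: Number(match[2]) };
}
```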
## Error Handling

### Three-Attempt Rule

**On 3rd failed attempt**:
1. Stop execution
2. Report what was attempted, what failed, and the root cause
3. Request guidance or suggest alternatives

### Recovery Strategies

| Error Type | Response |
|------------|----------|
| **Syntax/Type** | Review errors → Fix → Re-run tests → Validate build |
| **Runtime** | Analyze stack trace → Add error handling → Test error cases |
| **Test Failure** | Debug in isolation → Review setup → Fix implementation/test |
| **Build Failure** | Check messages → Fix incrementally → Validate each fix |

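The three-attempt rule above can be sketched as a small wrapper. This is an illustrative sketch, not repository code:

```javascript
// Run a task up to maxAttempts times; on exhaustion, return a structured
// failure report instead of retrying further (the three-attempt rule).
function runWithAttemptLimit(task, maxAttempts = 3) {
  const failures = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { ok: true, value: task(attempt), attempts: attempt };
    } catch (err) {
      failures.push(`attempt ${attempt}: ${err.message}`);
    }
  }
  return { ok: false, attempts: maxAttempts, report: failures.join('; ') };
}
```

The structured `report` is what the agent would surface when requesting guidance.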
## Quality Standards

### Code Quality
- Follow project's existing patterns
- Match import style and naming conventions
- Single responsibility per function/class
- DRY (Don't Repeat Yourself)
- YAGNI (You Aren't Gonna Need It)

### Testing
- Test all public functions
- Test edge cases and error conditions
- Mock external dependencies
- Target 80%+ coverage

### Error Handling
- Proper try-catch blocks
- Clear error messages
- Graceful degradation
- Don't expose sensitive info

## Core Principles

**Incremental Progress**:
- Small, testable changes
- Commit working code frequently
- Build on previous work (subtasks)

**Evidence-Based**:
- Study 3+ similar patterns before implementing
- Match project style exactly
- Verify with existing code

**Pragmatic**:
- Boring solutions over clever code
- Simple over complex
- Adapt to project reality

**Context Continuity** (Multi-Task):
- Leverage resume for consistency
- Maintain established patterns
- Test integration between subtasks

## Execution Checklist

**Before**:
- [ ] Understand PURPOSE and TASK clearly
- [ ] Review CONTEXT files, find 3+ patterns
- [ ] Check RULES templates and constraints

**During**:
- [ ] Follow existing patterns exactly
- [ ] Write tests alongside code
- [ ] Run tests after every change
- [ ] Commit working code incrementally

**After**:
- [ ] All tests pass
- [ ] Coverage meets target
- [ ] Build succeeds
- [ ] All EXPECTED deliverables met

@@ -80,6 +80,13 @@ export async function handleCodexLensRoutes(ctx: RouteContext): Promise<boolean>
  // API: CodexLens Index List - Get all indexed projects with details
  if (pathname === '/api/codexlens/indexes') {
    try {
      // Check if CodexLens is installed first (without auto-installing)
      const venvStatus = await checkVenvStatus();
      if (!venvStatus.ready) {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ success: true, indexes: [], totalSize: 0, totalSizeFormatted: '0 B' }));
        return true;
      }
      // Get config for index directory path
      const configResult = await executeCodexLens(['config', '--json']);
      let indexDir = '';
@@ -290,14 +297,24 @@ export async function handleCodexLensRoutes(ctx: RouteContext): Promise<boolean>
  // API: CodexLens Config - GET (Get current configuration with index count)
  if (pathname === '/api/codexlens/config' && req.method === 'GET') {
    try {
      // Check if CodexLens is installed first (without auto-installing)
      const venvStatus = await checkVenvStatus();

      let responseData = { index_dir: '~/.codexlens/indexes', index_count: 0 };

      // If not installed, return default config without executing CodexLens
      if (!venvStatus.ready) {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(responseData));
        return true;
      }

      // Fetch both config and status to merge index_count
      const [configResult, statusResult] = await Promise.all([
        executeCodexLens(['config', '--json']),
        executeCodexLens(['status', '--json'])
      ]);

      // Parse config (extract JSON from output that may contain log messages)
      if (configResult.success) {
        try {
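The `extractJSON` helper used throughout these routes is not shown in this diff. One plausible minimal implementation — an assumption, the real helper may be more robust — slices from the first `{` to the last `}` to skip surrounding log lines:

```javascript
// Extract a JSON object embedded in CLI output that may be wrapped
// in log messages before and after the payload.
function extractJSON(output) {
  const first = output.indexOf('{');
  const last = output.lastIndexOf('}');
  if (first === -1 || last <= first) {
    throw new Error('No JSON object found in output');
  }
  return JSON.parse(output.slice(first, last + 1));
}
```

Throwing on failure matches how the routes use it: every call site wraps `extractJSON` in a `try`/`catch` and falls back to returning the raw output.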
@@ -682,6 +699,87 @@ export async function handleCodexLensRoutes(ctx: RouteContext): Promise<boolean>
    return true;
  }

  // API: List available GPU devices for selection
  if (pathname === '/api/codexlens/gpu/list' && req.method === 'GET') {
    try {
      // Check if CodexLens is installed first (without auto-installing)
      const venvStatus = await checkVenvStatus();
      if (!venvStatus.ready) {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ success: true, devices: [], selected_device_id: null }));
        return true;
      }
      const result = await executeCodexLens(['gpu-list', '--json']);
      if (result.success) {
        try {
          const parsed = extractJSON(result.output);
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify(parsed));
        } catch {
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify({ success: true, devices: [], output: result.output }));
        }
      } else {
        res.writeHead(500, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ success: false, error: result.error }));
      }
    } catch (err) {
      res.writeHead(500, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ success: false, error: err.message }));
    }
    return true;
  }

  // API: Select GPU device for embedding
  if (pathname === '/api/codexlens/gpu/select' && req.method === 'POST') {
    handlePostRequest(req, res, async (body) => {
      const { device_id } = body;

      if (device_id === undefined || device_id === null) {
        return { success: false, error: 'device_id is required', status: 400 };
      }

      try {
        const result = await executeCodexLens(['gpu-select', String(device_id), '--json']);
        if (result.success) {
          try {
            const parsed = extractJSON(result.output);
            return parsed;
          } catch {
            return { success: true, message: 'GPU selected', output: result.output };
          }
        } else {
          return { success: false, error: result.error, status: 500 };
        }
      } catch (err) {
        return { success: false, error: err.message, status: 500 };
      }
    });
    return true;
  }

  // API: Reset GPU selection to auto-detection
  if (pathname === '/api/codexlens/gpu/reset' && req.method === 'POST') {
    handlePostRequest(req, res, async () => {
      try {
        const result = await executeCodexLens(['gpu-reset', '--json']);
        if (result.success) {
          try {
            const parsed = extractJSON(result.output);
            return parsed;
          } catch {
            return { success: true, message: 'GPU selection reset', output: result.output };
          }
        } else {
          return { success: false, error: result.error, status: 500 };
        }
      } catch (err) {
        return { success: false, error: err.message, status: 500 };
      }
    });
    return true;
  }
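From the dashboard side, the GPU endpoints might be exercised like this. The wrapper functions are illustrative, not the actual client code; the endpoint paths and the `device_id` field match the routes above:

```javascript
// Select a specific GPU device for embedding generation.
async function selectGpu(deviceId) {
  const res = await fetch('/api/codexlens/gpu/select', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ device_id: deviceId }),
  });
  return res.json();
}

// Return GPU selection to automatic detection.
async function resetGpu() {
  const res = await fetch('/api/codexlens/gpu/reset', { method: 'POST' });
  return res.json();
}
```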
  // API: CodexLens Semantic Search Install (with GPU mode support)
  if (pathname === '/api/codexlens/semantic/install' && req.method === 'POST') {
    handlePostRequest(req, res, async (body) => {
@@ -721,6 +819,13 @@ export async function handleCodexLensRoutes(ctx: RouteContext): Promise<boolean>
  // API: CodexLens Model List (list available embedding models)
  if (pathname === '/api/codexlens/models' && req.method === 'GET') {
    try {
      // Check if CodexLens is installed first (without auto-installing)
      const venvStatus = await checkVenvStatus();
      if (!venvStatus.ready) {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ success: false, error: 'CodexLens not installed' }));
        return true;
      }
      const result = await executeCodexLens(['model-list', '--json']);
      if (result.success) {
        try {
@@ -4,9 +4,56 @@
 * Aggregated status endpoint for faster dashboard loading
 */
import type { IncomingMessage, ServerResponse } from 'http';
import { existsSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';
import { getCliToolsStatus } from '../../tools/cli-executor.js';
import { checkVenvStatus, checkSemanticStatus } from '../../tools/codex-lens.js';

/**
 * Check CCW installation status
 * Verifies that required workflow files are installed in user's home directory
 */
function checkCcwInstallStatus(): {
  installed: boolean;
  workflowsInstalled: boolean;
  missingFiles: string[];
  installPath: string;
} {
  const claudeDir = join(homedir(), '.claude');
  const workflowsDir = join(claudeDir, 'workflows');

  // Required workflow files for full functionality
  const requiredFiles = [
    'chinese-response.md',
    'windows-platform.md',
    'cli-tools-usage.md',
    'coding-philosophy.md',
    'context-tools.md',
    'file-modification.md'
  ];

  const missingFiles: string[] = [];

  // Check each required file
  for (const file of requiredFiles) {
    const filePath = join(workflowsDir, file);
    if (!existsSync(filePath)) {
      missingFiles.push(file);
    }
  }

  const workflowsInstalled = existsSync(workflowsDir) && missingFiles.length === 0;
  const installed = existsSync(claudeDir) && workflowsInstalled;

  return {
    installed,
    workflowsInstalled,
    missingFiles,
    installPath: claudeDir
  };
}

export interface RouteContext {
  pathname: string;
  url: URL;
@@ -27,6 +74,9 @@ export async function handleStatusRoutes(ctx: RouteContext): Promise<boolean> {
  // API: Aggregated Status (all statuses in one call)
  if (pathname === '/api/status/all') {
    try {
      // Check CCW installation status (sync, fast)
      const ccwInstallStatus = checkCcwInstallStatus();

      // Execute all status checks in parallel
      const [cliStatus, codexLensStatus, semanticStatus] = await Promise.all([
        getCliToolsStatus(),
@@ -39,6 +89,7 @@ export async function handleStatusRoutes(ctx: RouteContext): Promise<boolean> {
        cli: cliStatus,
        codexLens: codexLensStatus,
        semantic: semanticStatus,
        ccwInstall: ccwInstallStatus,
        timestamp: new Date().toISOString()
      };

@@ -5,6 +5,7 @@
let cliToolStatus = { gemini: {}, qwen: {}, codex: {}, claude: {} };
let codexLensStatus = { ready: false };
let semanticStatus = { available: false };
let ccwInstallStatus = { installed: true, workflowsInstalled: true, missingFiles: [], installPath: '' };
let defaultCliTool = 'gemini';
let promptConcatFormat = localStorage.getItem('ccw-prompt-format') || 'plain'; // plain, yaml, json
@@ -38,10 +39,12 @@ async function loadAllStatuses() {
    cliToolStatus = data.cli || { gemini: {}, qwen: {}, codex: {}, claude: {} };
    codexLensStatus = data.codexLens || { ready: false };
    semanticStatus = data.semantic || { available: false };
    ccwInstallStatus = data.ccwInstall || { installed: true, workflowsInstalled: true, missingFiles: [], installPath: '' };

    // Update badges
    updateCliBadge();
    updateCodexLensBadge();
    updateCcwInstallBadge();

    return data;
  } catch (err) {
@@ -187,6 +190,25 @@ function updateCodexLensBadge() {
  }
}

function updateCcwInstallBadge() {
  const badge = document.getElementById('badgeCcwInstall');
  if (badge) {
    if (ccwInstallStatus.installed) {
      badge.textContent = t('status.installed');
      badge.classList.add('text-success');
      badge.classList.remove('text-warning', 'text-destructive');
    } else if (ccwInstallStatus.workflowsInstalled === false) {
      badge.textContent = t('status.incomplete');
      badge.classList.add('text-warning');
      badge.classList.remove('text-success', 'text-destructive');
    } else {
      badge.textContent = t('status.notInstalled');
      badge.classList.add('text-destructive');
      badge.classList.remove('text-success', 'text-warning');
    }
  }
}

// ========== Rendering ==========
function renderCliStatus() {
  const container = document.getElementById('cli-status-panel');
@@ -310,6 +332,39 @@ function renderCliStatus() {
    </div>
  ` : '';

  // CCW Installation Status card (show warning if not fully installed)
  const ccwInstallHtml = !ccwInstallStatus.installed ? `
    <div class="cli-tool-card tool-ccw-install unavailable" style="border: 1px solid var(--warning); background: rgba(var(--warning-rgb), 0.05);">
      <div class="cli-tool-header">
        <span class="cli-tool-status status-unavailable" style="background: var(--warning);"></span>
        <span class="cli-tool-name">${t('status.ccwInstall')}</span>
        <span class="badge px-1.5 py-0.5 text-xs rounded bg-warning/20 text-warning">${t('status.required')}</span>
      </div>
      <div class="cli-tool-desc text-xs text-muted-foreground mt-1">
        ${t('status.ccwInstallDesc')}
      </div>
      <div class="cli-tool-info mt-2">
        <span class="text-warning flex items-center gap-1">
          <i data-lucide="alert-triangle" class="w-3 h-3"></i>
          ${ccwInstallStatus.missingFiles.length} ${t('status.filesMissing')}
        </span>
      </div>
      <div class="cli-tool-actions flex flex-col gap-2 mt-3">
        <div class="text-xs text-muted-foreground">
          <p class="mb-1">${t('status.missingFiles')}:</p>
          <ul class="list-disc list-inside text-xs opacity-70">
            ${ccwInstallStatus.missingFiles.slice(0, 3).map(f => `<li>${f}</li>`).join('')}
            ${ccwInstallStatus.missingFiles.length > 3 ? `<li>+${ccwInstallStatus.missingFiles.length - 3} more...</li>` : ''}
          </ul>
        </div>
        <div class="bg-muted/50 rounded p-2 mt-2">
          <p class="text-xs font-medium mb-1">${t('status.runToFix')}:</p>
          <code class="text-xs bg-background px-2 py-1 rounded block">ccw install</code>
        </div>
      </div>
    </div>
  ` : '';

  // CLI Settings section
  const settingsHtml = `
    <div class="cli-settings-section">
@@ -392,6 +447,7 @@ function renderCliStatus() {
        <i data-lucide="refresh-cw" class="w-4 h-4"></i>
      </button>
    </div>
    ${ccwInstallHtml}
    <div class="cli-tools-grid">
      ${toolsHtml}
      ${codexLensHtml}

@@ -143,6 +143,12 @@ function initNavigation() {
      } else {
        console.error('renderCoreMemoryView not defined - please refresh the page');
      }
    } else if (currentView === 'codexlens-manager') {
      if (typeof renderCodexLensManager === 'function') {
        renderCodexLensManager();
      } else {
        console.error('renderCodexLensManager not defined - please refresh the page');
      }
    }
  });
});
@@ -183,6 +189,8 @@ function updateContentTitle() {
    titleEl.textContent = t('title.helpGuide');
  } else if (currentView === 'core-memory') {
    titleEl.textContent = t('title.coreMemory');
  } else if (currentView === 'codexlens-manager') {
    titleEl.textContent = t('title.codexLensManager');
  } else if (currentView === 'liteTasks') {
    const names = { 'lite-plan': t('title.litePlanSessions'), 'lite-fix': t('title.liteFixSessions') };
    titleEl.textContent = names[currentLiteType] || t('title.liteTasks');

@@ -41,6 +41,7 @@ const i18n = {
|
||||
'nav.explorer': 'Explorer',
|
||||
'nav.status': 'Status',
|
||||
'nav.history': 'History',
|
||||
'nav.codexLensManager': 'CodexLens',
|
||||
'nav.memory': 'Memory',
|
||||
'nav.contextMemory': 'Context',
|
||||
'nav.coreMemory': 'Core Memory',
|
||||
@@ -98,6 +99,7 @@ const i18n = {
|
||||
'title.hookManager': 'Hook Manager',
|
||||
'title.memoryModule': 'Memory Module',
|
||||
'title.promptHistory': 'Prompt History',
|
||||
'title.codexLensManager': 'CodexLens Manager',
|
||||
|
||||
// Search
|
||||
'search.placeholder': 'Search...',
|
||||
@@ -215,6 +217,7 @@ const i18n = {
|
||||
'cli.default': 'Default',
|
||||
'cli.install': 'Install',
|
||||
'cli.uninstall': 'Uninstall',
|
||||
'cli.openManager': 'Manager',
|
||||
'cli.initIndex': 'Init Index',
|
||||
'cli.geminiDesc': 'Google AI for code analysis',
|
||||
'cli.qwenDesc': 'Alibaba AI assistant',
|
||||
@@ -226,9 +229,11 @@ const i18n = {
|
||||
|
||||
// CodexLens Configuration
|
||||
'codexlens.config': 'CodexLens Configuration',
|
||||
'codexlens.configDesc': 'Manage code indexing, semantic search, and embedding models',
|
||||
'codexlens.status': 'Status',
|
||||
'codexlens.installed': 'Installed',
|
||||
'codexlens.notInstalled': 'Not Installed',
|
||||
'codexlens.installFirst': 'Install CodexLens to access semantic search and model management features',
|
||||
'codexlens.indexes': 'Indexes',
|
||||
'codexlens.currentWorkspace': 'Current Workspace',
|
||||
'codexlens.indexStoragePath': 'Index Storage Path',
|
||||
@@ -237,6 +242,8 @@ const i18n = {
|
||||
'codexlens.newStoragePath': 'New Storage Path',
|
||||
'codexlens.pathPlaceholder': 'e.g., /path/to/indexes or ~/.codexlens/indexes',
|
||||
'codexlens.pathInfo': 'Supports ~ for home directory. Changes take effect immediately.',
|
||||
'codexlens.pathUnchanged': 'Path unchanged',
|
||||
'codexlens.pathEmpty': 'Path cannot be empty',
|
||||
'codexlens.migrationRequired': 'Migration Required',
|
||||
'codexlens.migrationWarning': 'After changing the path, existing indexes will need to be re-initialized for each workspace.',
|
||||
'codexlens.actions': 'Actions',
|
||||
@@ -244,6 +251,17 @@ const i18n = {
|
||||
'codexlens.cleanCurrentWorkspace': 'Clean Current Workspace',
|
||||
'codexlens.cleanAllIndexes': 'Clean All Indexes',
|
||||
'codexlens.installCodexLens': 'Install CodexLens',
|
||||
'codexlens.createIndex': 'Create Index',
|
||||
'codexlens.embeddingModel': 'Embedding Model',
|
||||
'codexlens.modelHint': 'Select embedding model for vector search (models with ✓ are installed)',
|
||||
'codexlens.fullIndex': 'Full',
|
||||
'codexlens.vectorIndex': 'Vector',
|
||||
'codexlens.ftsIndex': 'FTS',
|
||||
'codexlens.fullIndexDesc': 'FTS + Semantic search (recommended)',
|
||||
'codexlens.vectorIndexDesc': 'Semantic search with embeddings only',
|
||||
'codexlens.ftsIndexDesc': 'Fast full-text search only',
|
||||
'codexlens.indexTypeHint': 'Full index includes FTS + semantic search. FTS only is faster but without AI-powered search.',
'codexlens.maintenance': 'Maintenance',
'codexlens.testSearch': 'Test Search',
'codexlens.testFunctionality': 'Test CodexLens functionality',
'codexlens.textSearch': 'Text Search',
@@ -291,6 +309,17 @@ const i18n = {
'codexlens.cudaModeDesc': 'NVIDIA GPU (requires CUDA Toolkit)',
'common.recommended': 'Recommended',
'common.unavailable': 'Unavailable',
'common.auto': 'Auto',

// GPU Device Selection
'codexlens.selectGpuDevice': 'Select GPU Device',
'codexlens.discrete': 'Discrete',
'codexlens.integrated': 'Integrated',
'codexlens.selectingGpu': 'Selecting GPU...',
'codexlens.gpuSelected': 'GPU selected',
'codexlens.resettingGpu': 'Resetting GPU selection...',
'codexlens.gpuReset': 'GPU selection reset to auto',
'codexlens.resetToAuto': 'Reset to Auto',
'codexlens.modelManagement': 'Model Management',
'codexlens.loadingModels': 'Loading models...',
'codexlens.downloadModel': 'Download',
@@ -444,6 +473,19 @@ const i18n = {
'lang.windowsDisableSuccess': 'Windows platform guidelines disabled',
'lang.windowsEnableFailed': 'Failed to enable Windows platform guidelines',
'lang.windowsDisableFailed': 'Failed to disable Windows platform guidelines',
'lang.installRequired': 'Run "ccw install" to enable this feature',

// CCW Installation Status
'status.installed': 'Installed',
'status.incomplete': 'Incomplete',
'status.notInstalled': 'Not Installed',
'status.ccwInstall': 'CCW Workflows',
'status.ccwInstallDesc': 'Required workflow files for full functionality',
'status.required': 'Required',
'status.filesMissing': 'files missing',
'status.missingFiles': 'Missing files',
'status.runToFix': 'Run to fix',

'cli.promptFormat': 'Prompt Format',
'cli.promptFormatDesc': 'Format for multi-turn conversation concatenation',
'cli.storageBackend': 'Storage Backend',
@@ -1457,6 +1499,7 @@ const i18n = {
'nav.explorer': '文件浏览器',
'nav.status': '状态',
'nav.history': '历史',
'nav.codexLensManager': 'CodexLens',
'nav.memory': '记忆',
'nav.contextMemory': '活动',
'nav.coreMemory': '核心记忆',
@@ -1514,6 +1557,7 @@ const i18n = {
'title.hookManager': '钩子管理',
'title.memoryModule': '记忆模块',
'title.promptHistory': '提示历史',
'title.codexLensManager': 'CodexLens 管理',

// Search
'search.placeholder': '搜索...',
@@ -1631,6 +1675,7 @@ const i18n = {
'cli.default': '默认',
'cli.install': '安装',
'cli.uninstall': '卸载',
'cli.openManager': '管理',
'cli.initIndex': '初始化索引',
'cli.geminiDesc': 'Google AI 代码分析',
'cli.qwenDesc': '阿里通义 AI 助手',
@@ -1642,9 +1687,11 @@ const i18n = {

// CodexLens 配置
'codexlens.config': 'CodexLens 配置',
'codexlens.configDesc': '管理代码索引、语义搜索和嵌入模型',
'codexlens.status': '状态',
'codexlens.installed': '已安装',
'codexlens.notInstalled': '未安装',
'codexlens.installFirst': '安装 CodexLens 以访问语义搜索和模型管理功能',
'codexlens.indexes': '索引',
'codexlens.currentWorkspace': '当前工作区',
'codexlens.indexStoragePath': '索引存储路径',
@@ -1653,6 +1700,8 @@ const i18n = {
'codexlens.newStoragePath': '新存储路径',
'codexlens.pathPlaceholder': '例如:/path/to/indexes 或 ~/.codexlens/indexes',
'codexlens.pathInfo': '支持 ~ 表示用户目录。更改立即生效。',
'codexlens.pathUnchanged': '路径未变更',
'codexlens.pathEmpty': '路径不能为空',
'codexlens.migrationRequired': '需要迁移',
'codexlens.migrationWarning': '更改路径后,需要为每个工作区重新初始化索引。',
'codexlens.actions': '操作',
@@ -1660,6 +1709,17 @@ const i18n = {
'codexlens.cleanCurrentWorkspace': '清理当前工作空间',
'codexlens.cleanAllIndexes': '清理所有索引',
'codexlens.installCodexLens': '安装 CodexLens',
'codexlens.createIndex': '创建索引',
'codexlens.embeddingModel': '嵌入模型',
'codexlens.modelHint': '选择向量搜索的嵌入模型(带 ✓ 的已安装)',
'codexlens.fullIndex': '全部',
'codexlens.vectorIndex': '向量',
'codexlens.ftsIndex': 'FTS',
'codexlens.fullIndexDesc': 'FTS + 语义搜索(推荐)',
'codexlens.vectorIndexDesc': '仅语义嵌入搜索',
'codexlens.ftsIndexDesc': '仅快速全文搜索',
'codexlens.indexTypeHint': '完整索引包含 FTS + 语义搜索。仅 FTS 更快但无 AI 搜索功能。',
'codexlens.maintenance': '维护',
'codexlens.testSearch': '测试搜索',
'codexlens.testFunctionality': '测试 CodexLens 功能',
'codexlens.textSearch': '文本搜索',
@@ -1707,6 +1767,18 @@ const i18n = {
'codexlens.cudaModeDesc': 'NVIDIA GPU(需要 CUDA Toolkit)',
'common.recommended': '推荐',
'common.unavailable': '不可用',
'common.auto': '自动',

// GPU 设备选择
'codexlens.selectGpuDevice': '选择 GPU 设备',
'codexlens.discrete': '独立显卡',
'codexlens.integrated': '集成显卡',
'codexlens.selectingGpu': '选择 GPU 中...',
'codexlens.gpuSelected': 'GPU 已选择',
'codexlens.resettingGpu': '重置 GPU 选择中...',
'codexlens.gpuReset': 'GPU 选择已重置为自动',
'codexlens.resetToAuto': '重置为自动',

'codexlens.modelManagement': '模型管理',
'codexlens.loadingModels': '加载模型中...',
'codexlens.downloadModel': '下载',
@@ -1860,6 +1932,19 @@ const i18n = {
'lang.windowsDisableSuccess': 'Windows 平台规范已禁用',
'lang.windowsEnableFailed': '启用 Windows 平台规范失败',
'lang.windowsDisableFailed': '禁用 Windows 平台规范失败',
'lang.installRequired': '请运行 "ccw install" 以启用此功能',

// CCW 安装状态
'status.installed': '已安装',
'status.incomplete': '不完整',
'status.notInstalled': '未安装',
'status.ccwInstall': 'CCW 工作流',
'status.ccwInstallDesc': '完整功能所需的工作流文件',
'status.required': '必需',
'status.filesMissing': '个文件缺失',
'status.missingFiles': '缺失文件',
'status.runToFix': '修复命令',

'cli.promptFormat': '提示词格式',
'cli.promptFormatDesc': '多轮对话拼接格式',
'cli.storageBackend': '存储后端',

@@ -9,6 +9,26 @@ var ccwEndpointTools = [];
var cliToolConfig = null; // Store loaded CLI config
var predefinedModels = {}; // Store predefined models per tool

// ========== Navigation Helpers ==========

/**
 * Navigate to CodexLens Manager page
 */
function navigateToCodexLensManager() {
var navItem = document.querySelector('.nav-item[data-view="codexlens-manager"]');
if (navItem) {
navItem.click();
} else {
// Fallback: try to render directly
if (typeof renderCodexLensManager === 'function') {
currentView = 'codexlens-manager';
renderCodexLensManager();
} else {
showRefreshToast(t('common.error') + ': CodexLens Manager not available', 'error');
}
}
}

// ========== CCW Installations ==========
async function loadCcwInstallations() {
try {
@@ -314,8 +334,7 @@ async function renderCliManager() {
'<div class="cli-settings-section" id="cli-settings-section" style="margin-top: 1.5rem;"></div>' +
'<div class="cli-section" id="ccw-endpoint-tools-section" style="margin-top: 1.5rem;"></div>' +
'</div>' +
'<section id="storageCard" class="mb-6"></section>' +
'<section id="indexCard" class="mb-6"></section>';
'<section id="storageCard" class="mb-6"></section>';

// Render sub-panels
renderToolsSection();
@@ -329,11 +348,6 @@ async function renderCliManager() {
initStorageManager();
}

// Initialize index manager card
if (typeof initIndexManager === 'function') {
initIndexManager();
}

// Initialize Lucide icons
if (window.lucide) lucide.createIcons();
}
@@ -434,28 +448,22 @@ function renderToolsSection() {
'</div>';
}).join('');

// CodexLens item
var codexLensHtml = '<div class="tool-item clickable ' + (codexLensStatus.ready ? 'available' : 'unavailable') + '" onclick="showCodexLensConfigModal()">' +
// CodexLens item - simplified view with link to manager page
var codexLensHtml = '<div class="tool-item clickable ' + (codexLensStatus.ready ? 'available' : 'unavailable') + '" onclick="navigateToCodexLensManager()">' +
'<div class="tool-item-left">' +
'<span class="tool-status-dot ' + (codexLensStatus.ready ? 'status-available' : 'status-unavailable') + '"></span>' +
'<div class="tool-item-info">' +
'<div class="tool-item-name">CodexLens <span class="tool-type-badge">Index</span>' +
'<i data-lucide="settings" class="w-3 h-3 tool-config-icon"></i></div>' +
'<i data-lucide="external-link" class="w-3 h-3 tool-config-icon"></i></div>' +
'<div class="tool-item-desc">' + (codexLensStatus.ready ? t('cli.codexLensDesc') : t('cli.codexLensDescFull')) + '</div>' +
'</div>' +
'</div>' +
'<div class="tool-item-right">' +
(codexLensStatus.ready
? '<span class="tool-status-text success"><i data-lucide="check-circle" class="w-3.5 h-3.5"></i> v' + (codexLensStatus.version || 'installed') + '</span>' +
'<select id="codexlensModelSelect" class="btn-sm bg-muted border border-border rounded text-xs" onclick="event.stopPropagation()" title="' + (t('index.selectModel') || 'Select embedding model') + '">' +
buildModelSelectOptions() +
'</select>' +
'<button class="btn-sm btn-primary" onclick="event.stopPropagation(); initCodexLensIndex(\'full\', getSelectedModel())" title="' + (t('index.fullDesc') || 'FTS + Semantic search (recommended)') + '"><i data-lucide="layers" class="w-3 h-3"></i> ' + (t('index.fullIndex') || '全部索引') + '</button>' +
'<button class="btn-sm btn-outline" onclick="event.stopPropagation(); initCodexLensIndex(\'vector\', getSelectedModel())" title="' + (t('index.vectorDesc') || 'Semantic search with embeddings') + '"><i data-lucide="sparkles" class="w-3 h-3"></i> ' + (t('index.vectorIndex') || '向量索引') + '</button>' +
'<button class="btn-sm btn-outline" onclick="event.stopPropagation(); initCodexLensIndex(\'normal\')" title="' + (t('index.normalDesc') || 'Fast full-text search only') + '"><i data-lucide="file-text" class="w-3 h-3"></i> ' + (t('index.normalIndex') || 'FTS索引') + '</button>' +
'<button class="btn-sm btn-outline btn-danger" onclick="event.stopPropagation(); uninstallCodexLens()"><i data-lucide="trash-2" class="w-3 h-3"></i> ' + t('cli.uninstall') + '</button>'
'<button class="btn-sm btn-primary" onclick="event.stopPropagation(); navigateToCodexLensManager()"><i data-lucide="settings" class="w-3 h-3"></i> ' + t('cli.openManager') + '</button>'
: '<span class="tool-status-text muted"><i data-lucide="circle-dashed" class="w-3.5 h-3.5"></i> ' + t('cli.notInstalled') + '</span>' +
'<button class="btn-sm btn-primary" onclick="event.stopPropagation(); installCodexLens()"><i data-lucide="download" class="w-3 h-3"></i> ' + t('cli.install') + '</button>') +
'<button class="btn-sm btn-primary" onclick="event.stopPropagation(); navigateToCodexLensManager()"><i data-lucide="settings" class="w-3 h-3"></i> ' + t('cli.openManager') + '</button>') +
'</div>' +
'</div>';

@@ -606,6 +614,16 @@ async function loadWindowsPlatformSettings() {

async function toggleChineseResponse(enabled) {
if (chineseResponseLoading) return;

// Pre-check: verify CCW workflows are installed (only when enabling)
if (enabled && typeof ccwInstallStatus !== 'undefined' && !ccwInstallStatus.installed) {
var missingFile = ccwInstallStatus.missingFiles.find(function(f) { return f === 'chinese-response.md'; });
if (missingFile) {
showRefreshToast(t('lang.installRequired'), 'warning');
return;
}
}

chineseResponseLoading = true;

try {
@@ -617,7 +635,14 @@ async function toggleChineseResponse(enabled) {

if (!response.ok) {
var errData = await response.json();
throw new Error(errData.error || 'Failed to update setting');
// Show specific error message from backend
var errorMsg = errData.error || 'Failed to update setting';
if (errorMsg.includes('not found')) {
showRefreshToast(t('lang.installRequired'), 'warning');
} else {
showRefreshToast((enabled ? t('lang.enableFailed') : t('lang.disableFailed')) + ': ' + errorMsg, 'error');
}
throw new Error(errorMsg);
}

var data = await response.json();
@@ -630,7 +655,7 @@ async function toggleChineseResponse(enabled) {
showRefreshToast(enabled ? t('lang.enableSuccess') : t('lang.disableSuccess'), 'success');
} catch (err) {
console.error('Failed to toggle Chinese response:', err);
showRefreshToast(enabled ? t('lang.enableFailed') : t('lang.disableFailed'), 'error');
// Error already shown in the !response.ok block
} finally {
chineseResponseLoading = false;
}
@@ -638,6 +663,16 @@ async function toggleChineseResponse(enabled) {

async function toggleWindowsPlatform(enabled) {
if (windowsPlatformLoading) return;

// Pre-check: verify CCW workflows are installed (only when enabling)
if (enabled && typeof ccwInstallStatus !== 'undefined' && !ccwInstallStatus.installed) {
var missingFile = ccwInstallStatus.missingFiles.find(function(f) { return f === 'windows-platform.md'; });
if (missingFile) {
showRefreshToast(t('lang.installRequired'), 'warning');
return;
}
}

windowsPlatformLoading = true;

try {
@@ -649,7 +684,14 @@ async function toggleWindowsPlatform(enabled) {

if (!response.ok) {
var errData = await response.json();
throw new Error(errData.error || 'Failed to update setting');
// Show specific error message from backend
var errorMsg = errData.error || 'Failed to update setting';
if (errorMsg.includes('not found')) {
showRefreshToast(t('lang.installRequired'), 'warning');
} else {
showRefreshToast((enabled ? t('lang.windowsEnableFailed') : t('lang.windowsDisableFailed')) + ': ' + errorMsg, 'error');
}
throw new Error(errorMsg);
}

var data = await response.json();
@@ -662,7 +704,7 @@ async function toggleWindowsPlatform(enabled) {
showRefreshToast(enabled ? t('lang.windowsEnableSuccess') : t('lang.windowsDisableSuccess'), 'success');
} catch (err) {
console.error('Failed to toggle Windows platform:', err);
showRefreshToast(enabled ? t('lang.windowsEnableFailed') : t('lang.windowsDisableFailed'), 'error');
// Error already shown in the !response.ok block
} finally {
windowsPlatformLoading = false;
}

@@ -337,6 +337,8 @@ function initCodexLensConfigEvents(currentConfig) {

// Store detected GPU info
var detectedGpuInfo = null;
// Store available GPU devices
var availableGpuDevices = null;

/**
 * Detect GPU support
@@ -363,11 +365,13 @@ async function loadSemanticDepsStatus() {
if (!container) return;

try {
// Detect GPU support in parallel
// Detect GPU support and load GPU devices in parallel
var gpuPromise = detectGpuSupport();
var gpuDevicesPromise = loadGpuDevices();
var response = await fetch('/api/codexlens/semantic/status');
var result = await response.json();
var gpuInfo = await gpuPromise;
var gpuDevices = await gpuDevicesPromise;

if (result.available) {
// Build accelerator badge
@@ -386,6 +390,9 @@ async function loadSemanticDepsStatus() {
acceleratorClass = 'bg-red-500/20 text-red-600';
}

// Build GPU device selector if multiple GPUs available
var gpuDeviceSelector = buildGpuDeviceSelector(gpuDevices);

container.innerHTML =
'<div class="space-y-2">' +
'<div class="flex items-center gap-2 text-sm">' +
@@ -402,6 +409,7 @@ async function loadSemanticDepsStatus() {
? '<span class="text-xs text-muted-foreground">' + result.providers.join(', ') + '</span>'
: '') +
'</div>' +
gpuDeviceSelector +
'</div>';
} else {
// Build GPU mode options
@@ -506,6 +514,134 @@ function getSelectedGpuMode() {
return selected ? selected.value : 'cpu';
}

/**
 * Load available GPU devices
 */
async function loadGpuDevices() {
try {
var response = await fetch('/api/codexlens/gpu/list');
var result = await response.json();
if (result.success && result.result) {
availableGpuDevices = result.result;
return result.result;
}
} catch (err) {
console.error('GPU devices load failed:', err);
}
return { devices: [], selected_device_id: null };
}

/**
 * Build GPU device selector HTML
 */
function buildGpuDeviceSelector(gpuDevices) {
if (!gpuDevices || !gpuDevices.devices || gpuDevices.devices.length === 0) {
return '';
}

// Only show selector if there are multiple GPUs
if (gpuDevices.devices.length < 2) {
return '';
}

var html =
'<div class="mt-3 p-3 bg-muted/30 rounded-lg border border-border">' +
'<div class="text-xs font-medium text-muted-foreground flex items-center gap-1 mb-2">' +
'<i data-lucide="cpu" class="w-3 h-3"></i>' +
(t('codexlens.selectGpuDevice') || 'Select GPU Device') +
'</div>' +
'<div class="space-y-1">';

gpuDevices.devices.forEach(function(device) {
var isSelected = device.is_selected;
var vendorIcon = device.vendor === 'nvidia' ? 'zap' : (device.vendor === 'amd' ? 'flame' : 'cpu');
var vendorColor = device.vendor === 'nvidia' ? 'text-green-500' : (device.vendor === 'amd' ? 'text-red-500' : 'text-blue-500');
var typeLabel = device.is_discrete ? (t('codexlens.discrete') || 'Discrete') : (t('codexlens.integrated') || 'Integrated');

html +=
'<label class="flex items-center gap-3 p-2 rounded border cursor-pointer hover:bg-muted/50 transition-colors ' +
(isSelected ? 'border-primary bg-primary/5' : 'border-transparent') + '">' +
'<input type="radio" name="gpuDevice" value="' + device.device_id + '" ' +
(isSelected ? 'checked' : '') +
' class="accent-primary" onchange="selectGpuDevice(' + device.device_id + ')">' +
'<div class="flex-1">' +
'<div class="flex items-center gap-2">' +
'<i data-lucide="' + vendorIcon + '" class="w-4 h-4 ' + vendorColor + '"></i>' +
'<span class="font-medium text-sm">' + device.name + '</span>' +
'</div>' +
'<div class="flex items-center gap-2 mt-0.5">' +
'<span class="text-xs text-muted-foreground">' + device.vendor.toUpperCase() + '</span>' +
'<span class="text-xs px-1.5 py-0.5 rounded ' +
(device.is_discrete ? 'bg-green-500/20 text-green-600' : 'bg-muted text-muted-foreground') + '">' +
typeLabel +
'</span>' +
(device.is_preferred ? '<span class="text-xs bg-primary/20 text-primary px-1.5 py-0.5 rounded">' + (t('common.auto') || 'Auto') + '</span>' : '') +
'</div>' +
'</div>' +
'</label>';
});

html +=
'</div>' +
'<button class="btn-xs text-muted-foreground hover:text-foreground mt-2" onclick="resetGpuDevice()">' +
'<i data-lucide="rotate-ccw" class="w-3 h-3"></i> ' + (t('codexlens.resetToAuto') || 'Reset to Auto') +
'</button>' +
'</div>';

return html;
}

/**
 * Select a GPU device
 */
async function selectGpuDevice(deviceId) {
try {
showRefreshToast(t('codexlens.selectingGpu') || 'Selecting GPU...', 'info');

var response = await fetch('/api/codexlens/gpu/select', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ device_id: deviceId })
});

var result = await response.json();
if (result.success) {
showRefreshToast(t('codexlens.gpuSelected') || 'GPU selected', 'success');
// Reload semantic status to reflect change
loadSemanticDepsStatus();
} else {
showRefreshToast(result.error || 'Failed to select GPU', 'error');
}
} catch (err) {
showRefreshToast(err.message, 'error');
}
}

/**
 * Reset GPU device selection to auto
 */
async function resetGpuDevice() {
try {
showRefreshToast(t('codexlens.resettingGpu') || 'Resetting GPU selection...', 'info');

var response = await fetch('/api/codexlens/gpu/reset', {
method: 'POST',
headers: { 'Content-Type': 'application/json' }
});

var result = await response.json();
if (result.success) {
showRefreshToast(t('codexlens.gpuReset') || 'GPU selection reset to auto', 'success');
// Reload semantic status to reflect change
loadSemanticDepsStatus();
} else {
showRefreshToast(result.error || 'Failed to reset GPU', 'error');
}
} catch (err) {
showRefreshToast(err.message, 'error');
}
}

/**
 * Install semantic dependencies with GPU mode
 */
@@ -570,9 +706,7 @@ async function installSemanticDeps() {
function buildManualDownloadGuide() {
var modelData = [
{ profile: 'code', name: 'jinaai/jina-embeddings-v2-base-code', size: '~150 MB' },
{ profile: 'fast', name: 'BAAI/bge-small-en-v1.5', size: '~80 MB' },
{ profile: 'balanced', name: 'mixedbread-ai/mxbai-embed-large-v1', size: '~600 MB' },
{ profile: 'multilingual', name: 'intfloat/multilingual-e5-large', size: '~1 GB' }
{ profile: 'fast', name: 'BAAI/bge-small-en-v1.5', size: '~80 MB' }
];

var html =
@@ -807,9 +941,7 @@ async function downloadModel(profile) {
// Get model info for size estimation
var modelSizes = {
'fast': { size: 80, time: '1-2' },
'code': { size: 150, time: '2-5' },
'multilingual': { size: 1000, time: '5-15' },
'balanced': { size: 600, time: '3-10' }
'code': { size: 150, time: '2-5' }
};

var modelInfo = modelSizes[profile] || { size: 100, time: '2-5' };
@@ -933,9 +1065,7 @@ async function downloadModel(profile) {
function showModelDownloadError(modelCard, profile, error, originalHTML) {
var modelNames = {
'fast': 'BAAI/bge-small-en-v1.5',
'code': 'jinaai/jina-embeddings-v2-base-code',
'multilingual': 'intfloat/multilingual-e5-large',
'balanced': 'mixedbread-ai/mxbai-embed-large-v1'
'code': 'jinaai/jina-embeddings-v2-base-code'
};

var modelName = modelNames[profile] || profile;
@@ -1035,7 +1165,7 @@ async function deleteModel(profile) {
/**
 * Initialize CodexLens index with bottom floating progress bar
 * @param {string} indexType - 'vector' (with embeddings), 'normal' (FTS only), or 'full' (FTS + Vector)
 * @param {string} embeddingModel - Model profile: 'code', 'fast', 'multilingual', 'balanced'
 * @param {string} embeddingModel - Model profile: 'code', 'fast'
 */
async function initCodexLensIndex(indexType, embeddingModel) {
indexType = indexType || 'vector';
@@ -1104,7 +1234,7 @@ async function initCodexLensIndex(indexType, embeddingModel) {
// Add model info for vector indexes
var modelLabel = '';
if (indexType !== 'normal') {
var modelNames = { code: 'Code', fast: 'Fast', multilingual: 'Multi', balanced: 'Balanced' };
var modelNames = { code: 'Code', fast: 'Fast' };
modelLabel = ' [' + (modelNames[embeddingModel] || embeddingModel) + ']';
}

@@ -1148,7 +1278,7 @@ async function initCodexLensIndex(indexType, embeddingModel) {
/**
 * Start the indexing process
 * @param {string} indexType - 'vector' or 'normal'
 * @param {string} embeddingModel - Model profile: 'code', 'fast', 'multilingual', 'balanced'
 * @param {string} embeddingModel - Model profile: 'code', 'fast'
 */
async function startCodexLensIndexing(indexType, embeddingModel) {
indexType = indexType || 'vector';
@@ -1727,3 +1857,546 @@ async function cleanCodexLensIndexes() {
showRefreshToast(t('common.error') + ': ' + err.message, 'error');
}
}

// ============================================================
// CODEXLENS MANAGER PAGE (Independent View)
// ============================================================

/**
 * Render CodexLens Manager as an independent page view
 */
async function renderCodexLensManager() {
var container = document.getElementById('mainContent');
if (!container) return;

// Hide stats grid and search
var statsGrid = document.getElementById('statsGrid');
var searchContainer = document.querySelector('.search-container');
if (statsGrid) statsGrid.style.display = 'none';
if (searchContainer) searchContainer.style.display = 'none';

container.innerHTML = '<div class="flex items-center justify-center py-12"><div class="animate-spin w-6 h-6 border-2 border-primary border-t-transparent rounded-full"></div><span class="ml-3">' + t('common.loading') + '</span></div>';

try {
// Load CodexLens status first to populate window.cliToolsStatus.codexlens
if (typeof loadCodexLensStatus === 'function') {
await loadCodexLensStatus();
}

var response = await fetch('/api/codexlens/config');
var config = await response.json();

container.innerHTML = buildCodexLensManagerPage(config);
if (window.lucide) lucide.createIcons();
initCodexLensManagerPageEvents(config);
loadSemanticDepsStatus();
loadModelList();
// Load index stats for the Index Manager section
if (window.cliToolsStatus?.codexlens?.installed) {
loadIndexStatsForPage();
}
} catch (err) {
container.innerHTML = '<div class="text-center py-12 text-destructive"><i data-lucide="alert-circle" class="w-8 h-8 mx-auto mb-2"></i><p>' + t('common.error') + ': ' + err.message + '</p></div>';
if (window.lucide) lucide.createIcons();
}
}

/**
 * Build CodexLens Manager page content
 */
function buildCodexLensManagerPage(config) {
var indexDir = config.index_dir || '~/.codexlens/indexes';
var indexCount = config.index_count || 0;
var isInstalled = window.cliToolsStatus?.codexlens?.installed || false;

// Build model options for vector indexing
var modelOptions = buildModelSelectOptionsForPage();

return '<div class="codexlens-manager-page space-y-6">' +
// Header with status
'<div class="bg-card border border-border rounded-lg p-6">' +
'<div class="flex items-center justify-between flex-wrap gap-4">' +
'<div class="flex items-center gap-4">' +
'<div class="w-12 h-12 rounded-full bg-primary/10 flex items-center justify-center">' +
'<i data-lucide="database" class="w-6 h-6 text-primary"></i>' +
'</div>' +
'<div>' +
'<h2 class="text-xl font-bold">' + t('codexlens.config') + '</h2>' +
'<p class="text-sm text-muted-foreground">' + t('codexlens.configDesc') + '</p>' +
'</div>' +
'</div>' +
'<div class="flex items-center gap-4">' +
(isInstalled
? '<span class="inline-flex items-center gap-1.5 px-3 py-1.5 rounded-full text-sm font-medium bg-success/10 text-success border border-success/20"><i data-lucide="check-circle" class="w-4 h-4"></i> ' + t('codexlens.installed') + '</span>'
: '<span class="inline-flex items-center gap-1.5 px-3 py-1.5 rounded-full text-sm font-medium bg-muted text-muted-foreground border border-border"><i data-lucide="circle" class="w-4 h-4"></i> ' + t('codexlens.notInstalled') + '</span>') +
'<div class="flex items-center gap-2 px-3 py-1.5 rounded-lg bg-primary/5 border border-primary/20">' +
'<span class="text-sm text-muted-foreground">' + t('codexlens.indexes') + ':</span>' +
'<span class="text-lg font-bold text-primary">' + indexCount + '</span>' +
'</div>' +
'</div>' +
'</div>' +
'</div>' +

(isInstalled
? // Installed: Show full management UI
'<div class="grid grid-cols-1 lg:grid-cols-2 gap-6">' +
// Left Column
'<div class="space-y-6">' +
// Create Index Section
'<div class="bg-card border border-border rounded-lg p-5">' +
'<h4 class="text-lg font-semibold mb-4 flex items-center gap-2"><i data-lucide="layers" class="w-5 h-5 text-primary"></i> ' + t('codexlens.createIndex') + '</h4>' +
'<div class="space-y-4">' +
// Model selector
'<div>' +
'<label class="block text-sm font-medium mb-1.5">' + t('codexlens.embeddingModel') + '</label>' +
'<select id="pageModelSelect" class="w-full px-3 py-2 border border-border rounded-lg bg-background text-sm">' +
modelOptions +
'</select>' +
'<p class="text-xs text-muted-foreground mt-1">' + t('codexlens.modelHint') + '</p>' +
'</div>' +
// Index buttons - two modes: full (FTS + Vector) or FTS only
'<div class="grid grid-cols-2 gap-3">' +
'<button class="btn btn-primary flex items-center justify-center gap-2 py-3" onclick="initCodexLensIndexFromPage(\'full\')" title="' + t('codexlens.fullIndexDesc') + '">' +
'<i data-lucide="layers" class="w-4 h-4"></i>' +
'<span>' + t('codexlens.fullIndex') + '</span>' +
'</button>' +
'<button class="btn btn-outline flex items-center justify-center gap-2 py-3" onclick="initCodexLensIndexFromPage(\'normal\')" title="' + t('codexlens.ftsIndexDesc') + '">' +
'<i data-lucide="file-text" class="w-4 h-4"></i>' +
'<span>' + t('codexlens.ftsIndex') + '</span>' +
'</button>' +
'</div>' +
'<p class="text-xs text-muted-foreground">' + t('codexlens.indexTypeHint') + '</p>' +
'</div>' +
'</div>' +
// Storage Path Section
'<div class="bg-card border border-border rounded-lg p-5">' +
'<h4 class="text-lg font-semibold mb-4 flex items-center gap-2"><i data-lucide="folder" class="w-5 h-5 text-primary"></i> ' + t('codexlens.indexStoragePath') + '</h4>' +
'<div class="space-y-3">' +
'<div>' +
'<label class="block text-sm font-medium mb-1.5">' + t('codexlens.currentPath') + '</label>' +
'<div class="text-sm text-muted-foreground bg-muted/50 rounded-lg px-3 py-2 font-mono border border-border truncate" title="' + indexDir + '">' + indexDir + '</div>' +
'</div>' +
'<div>' +
'<label class="block text-sm font-medium mb-1.5">' + t('codexlens.newStoragePath') + '</label>' +
'<div class="flex gap-2">' +
'<input type="text" id="indexDirInput" value="' + indexDir + '" class="flex-1 px-3 py-2 border border-border rounded-lg bg-background text-foreground text-sm" />' +
'<button class="btn-sm btn-primary" id="saveIndexPathBtn"><i data-lucide="save" class="w-3.5 h-3.5"></i></button>' +
'</div>' +
'<p class="text-xs text-muted-foreground mt-1">' + t('codexlens.pathInfo') + '</p>' +
'</div>' +
'</div>' +
'</div>' +
// Maintenance Section
'<div class="bg-card border border-border rounded-lg p-5">' +
'<h4 class="text-lg font-semibold mb-4 flex items-center gap-2"><i data-lucide="settings" class="w-5 h-5 text-primary"></i> ' + t('codexlens.maintenance') + '</h4>' +
'<div class="flex flex-wrap gap-2">' +
'<button class="btn-sm btn-outline" onclick="cleanCurrentWorkspaceIndex()"><i data-lucide="folder-x" class="w-3.5 h-3.5"></i> ' + t('codexlens.cleanCurrentWorkspace') + '</button>' +
'<button class="btn-sm btn-outline" onclick="cleanCodexLensIndexes()"><i data-lucide="trash" class="w-3.5 h-3.5"></i> ' + t('codexlens.cleanAllIndexes') + '</button>' +
'<button class="btn-sm btn-destructive" onclick="uninstallCodexLensFromManager()"><i data-lucide="trash-2" class="w-3.5 h-3.5"></i> ' + t('cli.uninstall') + '</button>' +
'</div>' +
'</div>' +
'</div>' +
// Right Column
'<div class="space-y-6">' +
// Semantic Dependencies
'<div class="bg-card border border-border rounded-lg p-5">' +
'<h4 class="text-lg font-semibold mb-4 flex items-center gap-2"><i data-lucide="cpu" class="w-5 h-5 text-primary"></i> ' + t('codexlens.semanticDeps') + '</h4>' +
'<div id="semanticDepsStatus" class="space-y-3">' +
'<div class="flex items-center gap-2 text-sm text-muted-foreground">' +
'<div class="animate-spin w-4 h-4 border-2 border-primary border-t-transparent rounded-full"></div> ' + t('codexlens.checkingDeps') +
'</div>' +
'</div>' +
'</div>' +
// Model Management
'<div class="bg-card border border-border rounded-lg p-5">' +
'<h4 class="text-lg font-semibold mb-4 flex items-center gap-2"><i data-lucide="box" class="w-5 h-5 text-primary"></i> ' + t('codexlens.modelManagement') + '</h4>' +
'<div id="modelListContainer" class="space-y-3">' +
'<div class="flex items-center gap-2 text-sm text-muted-foreground">' +
'<div class="animate-spin w-4 h-4 border-2 border-primary border-t-transparent rounded-full"></div> ' + t('codexlens.loadingModels') +
'</div>' +
'</div>' +
'</div>' +
'</div>' +
'</div>' +
// Index Manager Section
'<div class="bg-card border border-border rounded-lg overflow-hidden" id="indexManagerSection">' +
'<div class="bg-muted/30 border-b border-border px-4 py-3 flex items-center justify-between">' +
'<div class="flex items-center gap-2">' +
'<i data-lucide="database" class="w-4 h-4 text-primary"></i>' +
'<span class="font-medium text-foreground">' + t('index.manager') + '</span>' +
'<span class="text-xs px-2 py-0.5 bg-muted rounded-full text-muted-foreground" id="indexTotalSize">-</span>' +
'</div>' +
'<div class="flex items-center gap-2">' +
'<button onclick="loadIndexStatsForPage()" class="text-xs px-2 py-1 text-muted-foreground hover:text-foreground hover:bg-muted rounded transition-colors" title="' + t('common.refresh') + '">' +
'<i data-lucide="refresh-cw" class="w-3.5 h-3.5"></i>' +
'</button>' +
'</div>' +
'</div>' +
'<div class="p-4">' +
'<div class="flex items-center gap-2 mb-3 text-xs text-muted-foreground">' +
'<i data-lucide="folder" class="w-3.5 h-3.5"></i>' +
'<span class="font-mono truncate" id="indexDirDisplay" title="' + indexDir + '">' + indexDir + '</span>' +
'</div>' +
'<div class="grid grid-cols-4 gap-3 mb-4">' +
'<div class="bg-muted/30 rounded-lg p-3 text-center">' +
'<div class="text-lg font-semibold text-foreground" id="indexProjectCount">-</div>' +
'<div class="text-xs text-muted-foreground">' + t('index.projects') + '</div>' +
'</div>' +
'<div class="bg-muted/30 rounded-lg p-3 text-center">' +
|
||||
'<div class="text-lg font-semibold text-foreground" id="indexTotalSizeVal">-</div>' +
|
||||
'<div class="text-xs text-muted-foreground">' + t('index.totalSize') + '</div>' +
|
||||
'</div>' +
|
||||
'<div class="bg-muted/30 rounded-lg p-3 text-center">' +
|
||||
'<div class="text-lg font-semibold text-foreground" id="indexVectorCount">-</div>' +
|
||||
'<div class="text-xs text-muted-foreground">' + t('index.vectorIndexes') + '</div>' +
|
||||
'</div>' +
|
||||
'<div class="bg-muted/30 rounded-lg p-3 text-center">' +
|
||||
'<div class="text-lg font-semibold text-foreground" id="indexFtsCount">-</div>' +
|
||||
'<div class="text-xs text-muted-foreground">' + t('index.ftsIndexes') + '</div>' +
|
||||
'</div>' +
|
||||
'</div>' +
|
||||
'<div class="border border-border rounded-lg overflow-hidden">' +
|
||||
'<table class="w-full text-sm">' +
|
||||
'<thead class="bg-muted/50">' +
|
||||
'<tr class="text-xs text-muted-foreground">' +
|
||||
'<th class="py-2 px-2 text-left font-medium">' + t('index.projectId') + '</th>' +
|
||||
'<th class="py-2 px-2 text-right font-medium">' + t('index.size') + '</th>' +
|
||||
'<th class="py-2 px-2 text-center font-medium">' + t('index.type') + '</th>' +
|
||||
'<th class="py-2 px-2 text-right font-medium">' + t('index.lastModified') + '</th>' +
|
||||
'<th class="py-2 px-1 w-8"></th>' +
|
||||
'</tr>' +
|
||||
'</thead>' +
|
||||
'<tbody id="indexTableBody">' +
|
||||
'<tr><td colspan="5" class="py-4 text-center text-muted-foreground text-sm">' + t('common.loading') + '</td></tr>' +
|
||||
'</tbody>' +
|
||||
'</table>' +
|
||||
'</div>' +
|
||||
'<div class="mt-4 flex justify-end">' +
|
||||
'<button onclick="cleanAllIndexesFromPage()" class="text-xs px-3 py-1.5 bg-destructive/10 text-destructive hover:bg-destructive/20 rounded transition-colors flex items-center gap-1.5">' +
|
||||
'<i data-lucide="trash" class="w-3.5 h-3.5"></i>' +
|
||||
t('index.cleanAll') +
|
||||
'</button>' +
|
||||
'</div>' +
|
||||
'</div>' +
|
||||
'</div>' +
|
||||
// Test Search Section
|
||||
'<div class="bg-card border border-border rounded-lg p-5">' +
|
||||
'<h4 class="text-lg font-semibold mb-4 flex items-center gap-2"><i data-lucide="search" class="w-5 h-5 text-primary"></i> ' + t('codexlens.testSearch') + '</h4>' +
|
||||
'<div class="space-y-4">' +
|
||||
'<div class="flex gap-3">' +
|
||||
'<select id="searchTypeSelect" class="flex-1 px-3 py-2 border border-border rounded-lg bg-background text-sm">' +
|
||||
'<option value="search">' + t('codexlens.textSearch') + '</option>' +
|
||||
'<option value="search_files">' + t('codexlens.fileSearch') + '</option>' +
|
||||
'<option value="symbol">' + t('codexlens.symbolSearch') + '</option>' +
|
||||
'</select>' +
|
||||
'<select id="searchModeSelect" class="flex-1 px-3 py-2 border border-border rounded-lg bg-background text-sm">' +
|
||||
'<option value="exact">' + t('codexlens.exactMode') + '</option>' +
|
||||
'<option value="fuzzy">' + t('codexlens.fuzzyMode') + '</option>' +
|
||||
'<option value="hybrid">' + t('codexlens.hybridMode') + '</option>' +
|
||||
'<option value="vector">' + t('codexlens.vectorMode') + '</option>' +
|
||||
'</select>' +
|
||||
'</div>' +
|
||||
'<div class="flex gap-3">' +
|
||||
'<input type="text" id="searchQueryInput" class="flex-1 px-3 py-2 border border-border rounded-lg bg-background text-sm" placeholder="' + t('codexlens.searchPlaceholder') + '" />' +
|
||||
'<button class="btn-sm btn-primary" id="runSearchBtn"><i data-lucide="search" class="w-3.5 h-3.5"></i> ' + t('codexlens.runSearch') + '</button>' +
|
||||
'</div>' +
|
||||
'<div id="searchResults" class="hidden">' +
|
||||
'<div class="flex items-center justify-between mb-2">' +
|
||||
'<span class="text-sm font-medium">' + t('codexlens.results') + ':</span>' +
|
||||
'<span id="searchResultCount" class="text-xs text-muted-foreground"></span>' +
|
||||
'</div>' +
|
||||
'<pre id="searchResultContent" class="bg-muted/50 border border-border p-3 rounded-lg text-xs overflow-auto max-h-64 font-mono"></pre>' +
|
||||
'</div>' +
|
||||
'</div>' +
|
||||
'</div>'
|
||||
|
||||
: // Not installed: Show install prompt
|
||||
'<div class="bg-card border border-border rounded-lg p-8">' +
|
||||
'<div class="text-center max-w-md mx-auto">' +
|
||||
'<div class="w-16 h-16 rounded-full bg-primary/10 flex items-center justify-center mx-auto mb-4">' +
|
||||
'<i data-lucide="database" class="w-8 h-8 text-primary"></i>' +
|
||||
'</div>' +
|
||||
'<h3 class="text-lg font-semibold mb-2">' + t('codexlens.installCodexLens') + '</h3>' +
|
||||
'<p class="text-sm text-muted-foreground mb-6">' + t('codexlens.installFirst') + '</p>' +
|
||||
'<button class="btn btn-primary" onclick="installCodexLensFromManager()">' +
|
||||
'<i data-lucide="download" class="w-4 h-4"></i> ' + t('codexlens.installCodexLens') +
|
||||
'</button>' +
|
||||
'</div>' +
|
||||
'</div>'
|
||||
) +
|
||||
'</div>';
|
||||
}
|
||||

/**
 * Build model select options for the page
 */
function buildModelSelectOptionsForPage() {
  var installedModels = window.cliToolsStatus?.codexlens?.installedModels || [];
  var allModels = window.cliToolsStatus?.codexlens?.allModels || [];

  if (allModels.length === 0) {
    // Fallback to default models if not loaded
    return '<option value="code">code (default)</option>' +
      '<option value="fast">fast</option>';
  }

  var options = '';
  allModels.forEach(function(model) {
    var isInstalled = model.installed || installedModels.includes(model.profile);
    var label = model.profile + (isInstalled ? ' ✓' : '');
    var selected = model.profile === 'code' ? ' selected' : '';
    options += '<option value="' + model.profile + '"' + selected + '>' + label + '</option>';
  });
  return options;
}

/**
 * Initialize index from page with selected model
 */
function initCodexLensIndexFromPage(indexType) {
  var modelSelect = document.getElementById('pageModelSelect');
  var selectedModel = modelSelect ? modelSelect.value : 'code';

  // For FTS-only index, model is not needed
  if (indexType === 'normal') {
    initCodexLensIndex(indexType);
  } else {
    initCodexLensIndex(indexType, selectedModel);
  }
}

/**
 * Initialize CodexLens Manager page event handlers
 */
function initCodexLensManagerPageEvents(currentConfig) {
  var saveBtn = document.getElementById('saveIndexPathBtn');
  if (saveBtn) {
    saveBtn.onclick = async function() {
      var indexDirInput = document.getElementById('indexDirInput');
      var newIndexDir = indexDirInput ? indexDirInput.value.trim() : '';
      if (!newIndexDir) { showRefreshToast(t('codexlens.pathEmpty'), 'error'); return; }
      if (newIndexDir === currentConfig.index_dir) { showRefreshToast(t('codexlens.pathUnchanged'), 'info'); return; }
      saveBtn.disabled = true;
      saveBtn.innerHTML = '<span class="animate-pulse">' + t('common.saving') + '</span>';
      try {
        var response = await fetch('/api/codexlens/config', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ index_dir: newIndexDir }) });
        var result = await response.json();
        if (result.success) { showRefreshToast(t('codexlens.configSaved'), 'success'); renderCodexLensManager(); }
        else { showRefreshToast(t('common.saveFailed') + ': ' + result.error, 'error'); }
      } catch (err) { showRefreshToast(t('common.error') + ': ' + err.message, 'error'); }
      saveBtn.disabled = false;
      saveBtn.innerHTML = '<i data-lucide="save" class="w-3.5 h-3.5"></i> ' + t('codexlens.saveConfig');
      if (window.lucide) lucide.createIcons();
    };
  }

  var runSearchBtn = document.getElementById('runSearchBtn');
  if (runSearchBtn) {
    runSearchBtn.onclick = async function() {
      var searchType = document.getElementById('searchTypeSelect').value;
      var searchMode = document.getElementById('searchModeSelect').value;
      var query = document.getElementById('searchQueryInput').value.trim();
      var resultsDiv = document.getElementById('searchResults');
      var resultCount = document.getElementById('searchResultCount');
      var resultContent = document.getElementById('searchResultContent');
      if (!query) { showRefreshToast(t('codexlens.enterQuery'), 'warning'); return; }
      runSearchBtn.disabled = true;
      runSearchBtn.innerHTML = '<span class="animate-pulse">' + t('codexlens.searching') + '</span>';
      resultsDiv.classList.add('hidden');
      try {
        var endpoint = '/api/codexlens/' + searchType;
        var params = new URLSearchParams({ query: query, limit: '20' });
        if (searchType === 'search' || searchType === 'search_files') { params.append('mode', searchMode); }
        var response = await fetch(endpoint + '?' + params.toString());
        var result = await response.json();
        if (result.success) {
          var results = result.results || result.files || [];
          resultCount.textContent = results.length + ' ' + t('codexlens.resultsCount');
          resultContent.textContent = JSON.stringify(results, null, 2);
          resultsDiv.classList.remove('hidden');
        } else {
          resultContent.textContent = t('common.error') + ': ' + (result.error || t('common.unknownError'));
          resultsDiv.classList.remove('hidden');
        }
      } catch (err) {
        resultContent.textContent = t('common.exception') + ': ' + err.message;
        resultsDiv.classList.remove('hidden');
      }
      runSearchBtn.disabled = false;
      runSearchBtn.innerHTML = '<i data-lucide="search" class="w-3.5 h-3.5"></i> ' + t('codexlens.runSearch');
      if (window.lucide) lucide.createIcons();
    };
  }

  var searchInput = document.getElementById('searchQueryInput');
  if (searchInput) { searchInput.onkeypress = function(e) { if (e.key === 'Enter' && runSearchBtn) { runSearchBtn.click(); } }; }
}

/**
 * Show index initialization modal
 */
function showIndexInitModal() {
  // Use initCodexLensIndex with default settings
  initCodexLensIndex('vector', 'code');
}

/**
 * Load index stats for the CodexLens Manager page
 */
async function loadIndexStatsForPage() {
  try {
    var response = await fetch('/api/codexlens/indexes');
    if (!response.ok) throw new Error('Failed to load index stats');
    var data = await response.json();
    renderIndexStatsForPage(data);
  } catch (err) {
    console.error('[CodexLens] Failed to load index stats:', err);
    var tbody = document.getElementById('indexTableBody');
    if (tbody) {
      tbody.innerHTML = '<tr><td colspan="5" class="py-4 text-center text-destructive text-sm">' + err.message + '</td></tr>';
    }
  }
}

/**
 * Render index stats in the CodexLens Manager page
 */
function renderIndexStatsForPage(data) {
  var summary = data.summary || {};
  var indexes = data.indexes || [];
  var indexDir = data.indexDir || '';

  // Update summary stats
  var totalSizeEl = document.getElementById('indexTotalSize');
  var projectCountEl = document.getElementById('indexProjectCount');
  var totalSizeValEl = document.getElementById('indexTotalSizeVal');
  var vectorCountEl = document.getElementById('indexVectorCount');
  var ftsCountEl = document.getElementById('indexFtsCount');
  var indexDirEl = document.getElementById('indexDirDisplay');

  if (totalSizeEl) totalSizeEl.textContent = summary.totalSizeFormatted || '0 B';
  if (projectCountEl) projectCountEl.textContent = summary.totalProjects || 0;
  if (totalSizeValEl) totalSizeValEl.textContent = summary.totalSizeFormatted || '0 B';
  if (vectorCountEl) vectorCountEl.textContent = summary.vectorIndexCount || 0;
  if (ftsCountEl) ftsCountEl.textContent = summary.normalIndexCount || 0;
  if (indexDirEl && indexDir) {
    indexDirEl.textContent = indexDir;
    indexDirEl.title = indexDir;
  }

  // Render table rows
  var tbody = document.getElementById('indexTableBody');
  if (!tbody) return;

  if (indexes.length === 0) {
    tbody.innerHTML = '<tr><td colspan="5" class="py-4 text-center text-muted-foreground text-sm">' + (t('index.noIndexes') || 'No indexes yet') + '</td></tr>';
    return;
  }

  var rows = '';
  indexes.forEach(function(idx) {
    var vectorBadge = idx.hasVectorIndex
      ? '<span class="text-xs px-1.5 py-0.5 bg-primary/10 text-primary rounded">' + (t('index.vector') || 'Vector') + '</span>'
      : '';
    var normalBadge = idx.hasNormalIndex
      ? '<span class="text-xs px-1.5 py-0.5 bg-muted text-muted-foreground rounded">' + (t('index.fts') || 'FTS') + '</span>'
      : '';

    rows += '<tr class="border-t border-border hover:bg-muted/30 transition-colors">' +
      '<td class="py-2 px-2 text-foreground">' +
      '<span class="font-mono text-xs truncate max-w-[250px] inline-block" title="' + escapeHtml(idx.id) + '">' + escapeHtml(idx.id) + '</span>' +
      '</td>' +
      '<td class="py-2 px-2 text-right text-muted-foreground">' + (idx.sizeFormatted || '-') + '</td>' +
      '<td class="py-2 px-2 text-center"><div class="flex items-center justify-center gap-1">' + vectorBadge + normalBadge + '</div></td>' +
      '<td class="py-2 px-2 text-right text-muted-foreground">' + formatTimeAgoSimple(idx.lastModified) + '</td>' +
      '<td class="py-2 px-1 text-center">' +
      '<button onclick="cleanIndexProjectFromPage(\'' + escapeHtml(idx.id) + '\')" ' +
      'class="text-destructive/70 hover:text-destructive p-1 rounded hover:bg-destructive/10 transition-colors" ' +
      'title="' + (t('index.cleanProject') || 'Clean Index') + '">' +
      '<i data-lucide="trash-2" class="w-3.5 h-3.5"></i>' +
      '</button>' +
      '</td>' +
      '</tr>';
  });

  tbody.innerHTML = rows;
  if (window.lucide) lucide.createIcons();
}

/**
 * Simple time ago formatter
 */
function formatTimeAgoSimple(isoString) {
  if (!isoString) return t('common.never') || 'Never';
  var date = new Date(isoString);
  var now = new Date();
  var diffMs = now - date;
  var diffMins = Math.floor(diffMs / 60000);
  var diffHours = Math.floor(diffMins / 60);
  var diffDays = Math.floor(diffHours / 24);
  if (diffMins < 1) return t('common.justNow') || 'Just now';
  if (diffMins < 60) return diffMins + 'm ' + (t('common.ago') || 'ago');
  if (diffHours < 24) return diffHours + 'h ' + (t('common.ago') || 'ago');
  if (diffDays < 30) return diffDays + 'd ' + (t('common.ago') || 'ago');
  return date.toLocaleDateString();
}

/**
 * Clean a specific project's index from the page
 */
async function cleanIndexProjectFromPage(projectId) {
  if (!confirm((t('index.cleanProjectConfirm') || 'Clean index for') + ' ' + projectId + '?')) {
    return;
  }

  try {
    showRefreshToast(t('index.cleaning') || 'Cleaning index...', 'info');

    var response = await fetch('/api/codexlens/clean', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ projectId: projectId })
    });

    var result = await response.json();

    if (result.success) {
      showRefreshToast(t('index.cleanSuccess') || 'Index cleaned successfully', 'success');
      await loadIndexStatsForPage();
    } else {
      showRefreshToast((t('index.cleanFailed') || 'Clean failed') + ': ' + result.error, 'error');
    }
  } catch (err) {
    showRefreshToast((t('common.error') || 'Error') + ': ' + err.message, 'error');
  }
}

/**
 * Clean all indexes from the page
 */
async function cleanAllIndexesFromPage() {
  if (!confirm(t('index.cleanAllConfirm') || 'Are you sure you want to clean ALL indexes? This cannot be undone.')) {
    return;
  }

  try {
    showRefreshToast(t('index.cleaning') || 'Cleaning indexes...', 'info');

    var response = await fetch('/api/codexlens/clean', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ all: true })
    });

    var result = await response.json();

    if (result.success) {
      showRefreshToast(t('index.cleanAllSuccess') || 'All indexes cleaned', 'success');
      await loadIndexStatsForPage();
    } else {
      showRefreshToast((t('index.cleanFailed') || 'Clean failed') + ': ' + result.error, 'error');
    }
  } catch (err) {
    showRefreshToast((t('common.error') || 'Error') + ': ' + err.message, 'error');
  }
}

@@ -331,6 +331,11 @@
          <i data-lucide="history" class="nav-icon"></i>
          <span class="nav-text flex-1" data-i18n="nav.history">History</span>
        </li>
        <li class="nav-item flex items-center gap-2 px-3 py-2.5 text-sm text-muted-foreground hover:bg-hover hover:text-foreground rounded cursor-pointer transition-colors" data-view="codexlens-manager" data-tooltip="CodexLens Manager">
          <i data-lucide="search-code" class="nav-icon"></i>
          <span class="nav-text flex-1" data-i18n="nav.codexLensManager">CodexLens</span>
          <span class="badge px-2 py-0.5 text-xs font-semibold rounded-full bg-hover text-muted-foreground" id="badgeCodexLens">-</span>
        </li>
        <!-- Hidden: Code Graph Explorer (feature disabled)
        <li class="nav-item flex items-center gap-2 px-3 py-2.5 text-sm text-muted-foreground hover:bg-hover hover:text-foreground rounded cursor-pointer transition-colors" data-view="graph-explorer" data-tooltip="Code Graph Explorer">
          <i data-lucide="git-branch" class="nav-icon"></i>

@@ -421,6 +421,17 @@ async function installSemantic(gpuMode: GpuMode = 'cpu'): Promise<BootstrapResul

    child.on('close', (code) => {
      if (code === 0) {
        // IMPORTANT: fastembed installs onnxruntime (CPU) as dependency, which conflicts
        // with onnxruntime-directml/gpu. Reinstall the GPU version to ensure it takes precedence.
        if (gpuMode !== 'cpu') {
          try {
            console.log(`[CodexLens] Reinstalling ${onnxPackage} to ensure GPU provider works...`);
            execSync(`"${pipPath}" install --force-reinstall ${onnxPackage}`, { stdio: 'pipe', timeout: 300000 });
            console.log(`[CodexLens] ${onnxPackage} reinstalled successfully`);
          } catch (e) {
            console.warn(`[CodexLens] Warning: Failed to reinstall ${onnxPackage}: ${(e as Error).message}`);
          }
        }
        console.log(`[CodexLens] Semantic dependencies installed successfully (${gpuMode} mode)`);
        resolve({ success: true, message: `Installed with ${modeDescription}` });
      } else {

@@ -1955,3 +1955,178 @@ def embeddings_generate(

    console.print("\n[dim]Use vector search with:[/dim]")
    console.print("  [cyan]codexlens search 'your query' --mode pure-vector[/cyan]")


# ==================== GPU Management Commands ====================


@app.command(name="gpu-list")
def gpu_list(
    json_mode: bool = typer.Option(False, "--json", help="Output JSON response."),
) -> None:
    """List available GPU devices for embedding acceleration.

    Shows all detected GPU devices with their capabilities and selection status.
    Discrete GPUs (NVIDIA, AMD) are automatically preferred over integrated GPUs.

    Examples:
        codexlens gpu-list          # List all GPUs
        codexlens gpu-list --json   # JSON output for scripting
    """
    from codexlens.semantic.gpu_support import get_gpu_devices, detect_gpu, get_selected_device_id

    gpu_info = detect_gpu()
    devices = get_gpu_devices()
    selected_id = get_selected_device_id()

    if json_mode:
        print_json(
            success=True,
            result={
                "devices": devices,
                "selected_device_id": selected_id,
                "gpu_available": gpu_info.gpu_available,
                "providers": gpu_info.onnx_providers,
            }
        )
    else:
        if not devices:
            console.print("[yellow]No GPU devices detected[/yellow]")
            console.print(f"ONNX Providers: [dim]{', '.join(gpu_info.onnx_providers)}[/dim]")
            return

        console.print("[bold]Available GPU Devices[/bold]\n")

        table = Table(show_header=True, header_style="bold")
        table.add_column("ID", justify="center")
        table.add_column("Name")
        table.add_column("Vendor", justify="center")
        table.add_column("Type", justify="center")
        table.add_column("Status", justify="center")

        for dev in devices:
            type_str = "[green]Discrete[/green]" if dev["is_discrete"] else "[dim]Integrated[/dim]"
            vendor_color = {
                "nvidia": "green",
                "amd": "red",
                "intel": "blue"
            }.get(dev["vendor"], "white")
            vendor_str = f"[{vendor_color}]{dev['vendor'].upper()}[/{vendor_color}]"

            status_parts = []
            if dev["is_preferred"]:
                status_parts.append("[cyan]Auto[/cyan]")
            if dev["is_selected"]:
                status_parts.append("[green]✓ Selected[/green]")

            status_str = " ".join(status_parts) if status_parts else "[dim]—[/dim]"

            table.add_row(
                str(dev["device_id"]),
                dev["name"],
                vendor_str,
                type_str,
                status_str,
            )

        console.print(table)
        console.print(f"\nONNX Providers: [dim]{', '.join(gpu_info.onnx_providers)}[/dim]")
        console.print("\n[dim]Select GPU with:[/dim]")
        console.print("  [cyan]codexlens gpu-select <device_id>[/cyan]")


@app.command(name="gpu-select")
def gpu_select(
    device_id: int = typer.Argument(
        ...,
        help="GPU device ID to use for embeddings. Use 'codexlens gpu-list' to see available IDs.",
    ),
    json_mode: bool = typer.Option(False, "--json", help="Output JSON response."),
) -> None:
    """Select a specific GPU device for embedding generation.

    By default, CodexLens automatically selects the most powerful GPU (discrete over integrated).
    Use this command to override the selection.

    Examples:
        codexlens gpu-select 1        # Use GPU device 1
        codexlens gpu-select 0 --json # Select GPU 0 with JSON output
    """
    from codexlens.semantic.gpu_support import set_selected_device_id, get_gpu_devices
    from codexlens.semantic.embedder import clear_embedder_cache

    devices = get_gpu_devices()
    valid_ids = [dev["device_id"] for dev in devices]

    if device_id not in valid_ids:
        if json_mode:
            print_json(success=False, error=f"Invalid device_id {device_id}. Valid IDs: {valid_ids}")
        else:
            console.print(f"[red]Error:[/red] Invalid device_id {device_id}")
            console.print(f"Valid IDs: {valid_ids}")
            console.print("\n[dim]Use 'codexlens gpu-list' to see available devices[/dim]")
        raise typer.Exit(code=1)

    success = set_selected_device_id(device_id)

    if success:
        # Clear embedder cache to force reload with new GPU
        clear_embedder_cache()

        device_name = next((dev["name"] for dev in devices if dev["device_id"] == device_id), "Unknown")

        if json_mode:
            print_json(
                success=True,
                result={
                    "device_id": device_id,
                    "device_name": device_name,
                    "message": f"GPU selection set to device {device_id}: {device_name}",
                }
            )
        else:
            console.print(f"[green]✓[/green] GPU selection updated")
            console.print(f"  Device ID: {device_id}")
            console.print(f"  Device: [cyan]{device_name}[/cyan]")
            console.print("\n[dim]New embeddings will use this GPU[/dim]")
    else:
        if json_mode:
            print_json(success=False, error="Failed to set GPU selection")
        else:
            console.print("[red]Error:[/red] Failed to set GPU selection")
        raise typer.Exit(code=1)


@app.command(name="gpu-reset")
def gpu_reset(
    json_mode: bool = typer.Option(False, "--json", help="Output JSON response."),
) -> None:
    """Reset GPU selection to automatic detection.

    Clears any manual GPU selection and returns to automatic selection
    (discrete GPU preferred over integrated).

    Examples:
        codexlens gpu-reset   # Reset to auto-detection
    """
    from codexlens.semantic.gpu_support import set_selected_device_id, detect_gpu
    from codexlens.semantic.embedder import clear_embedder_cache

    set_selected_device_id(None)
    clear_embedder_cache()

    gpu_info = detect_gpu(force_refresh=True)

    if json_mode:
        print_json(
            success=True,
            result={
                "message": "GPU selection reset to auto-detection",
                "preferred_device_id": gpu_info.preferred_device_id,
                "preferred_device_name": gpu_info.gpu_name,
            }
        )
    else:
        console.print("[green]✓[/green] GPU selection reset to auto-detection")
        if gpu_info.preferred_device_id is not None:
            console.print(f"  Auto-selected device: {gpu_info.preferred_device_id}")
            console.print(f"  Device: [cyan]{gpu_info.gpu_name}[/cyan]")

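The selection rule the `gpu-list` and `gpu-reset` docstrings describe (an explicit selection wins, otherwise the first discrete GPU, otherwise the first device) can be sketched as a standalone helper. This is an illustrative assumption, not the actual `gpu_support` implementation; the function name `pick_preferred_device` is hypothetical:

```python
def pick_preferred_device(devices, selected_id=None):
    """Sketch of the documented preference order: manual selection first,
    then the first discrete GPU, then any device, else None."""
    # An explicitly selected device wins, if it is still present.
    if selected_id is not None and any(d["device_id"] == selected_id for d in devices):
        return selected_id
    # Otherwise prefer discrete GPUs over integrated ones.
    discrete = [d for d in devices if d["is_discrete"]]
    candidates = discrete or devices
    return candidates[0]["device_id"] if candidates else None
```

With an integrated device 0 and a discrete device 1, auto-detection would land on device 1, while `gpu-select 0` would pin device 0 until `gpu-reset` clears it.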
@@ -21,7 +21,7 @@ logger = logging.getLogger(__name__)

# Embedding batch size - larger values improve throughput on modern hardware
# Benchmark: 256 gives ~2.35x speedup over 64 with DirectML GPU acceleration
EMBEDDING_BATCH_SIZE = 256  # Optimized from 64 based on batch size benchmarks
EMBEDDING_BATCH_SIZE = 256


def _generate_chunks_from_cursor(
@@ -337,7 +337,8 @@ def generate_embeddings(
        # Generate embeddings directly to numpy (no tolist() conversion)
        try:
            batch_contents = [chunk.content for chunk, _ in chunk_batch]
            embeddings_numpy = embedder.embed_to_numpy(batch_contents)
            # Pass batch_size to fastembed for optimal GPU utilization
            embeddings_numpy = embedder.embed_to_numpy(batch_contents, batch_size=EMBEDDING_BATCH_SIZE)

            # Use add_chunks_batch_numpy to avoid numpy->list->numpy roundtrip
            vector_store.add_chunks_batch_numpy(chunk_batch, embeddings_numpy)

@@ -14,7 +14,7 @@ from typing import Dict, Iterable, List, Optional
import numpy as np

from . import SEMANTIC_AVAILABLE
from .gpu_support import get_optimal_providers, is_gpu_available, get_gpu_summary
from .gpu_support import get_optimal_providers, is_gpu_available, get_gpu_summary, get_selected_device_id

logger = logging.getLogger(__name__)

@@ -144,11 +144,12 @@ class Embedder:
        else:
            self.model_name = self.DEFAULT_MODEL

        # Configure ONNX execution providers
        # Configure ONNX execution providers with device_id options for GPU selection
        # Using with_device_options=True ensures DirectML/CUDA device_id is passed correctly
        if providers is not None:
            self._providers = providers
        else:
            self._providers = get_optimal_providers(use_gpu=use_gpu)
            self._providers = get_optimal_providers(use_gpu=use_gpu, with_device_options=True)

        self._use_gpu = use_gpu
        self._model = None
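The `with_device_options=True` change implies the provider list mixes plain strings with ONNX Runtime's `(name, options)` tuple form, where the options dict carries the `device_id`. A hypothetical analogue of what `get_optimal_providers` might build (the function below and its defaults are assumptions for illustration, not the real `gpu_support` code):

```python
def build_provider_list(device_id, use_gpu=True, backend="DmlExecutionProvider"):
    """Sketch: GPU providers become (name, {"device_id": ...}) tuples so
    ONNX Runtime targets a specific adapter, with a CPU fallback appended."""
    if not use_gpu or device_id is None:
        return ["CPUExecutionProvider"]
    # Tuple form lets ORT receive per-provider session options.
    return [(backend, {"device_id": device_id}), "CPUExecutionProvider"]
```

A list shaped like this is what makes the tuple-aware `is_gpu_enabled` check in the next hunk necessary: a plain `p in gpu_providers` membership test no longer matches the tuple entries.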
@@ -168,7 +169,12 @@ class Embedder:
|
||||
"""Check if GPU acceleration is enabled for this embedder."""
|
||||
gpu_providers = {"CUDAExecutionProvider", "TensorrtExecutionProvider",
|
||||
"DmlExecutionProvider", "ROCMExecutionProvider", "CoreMLExecutionProvider"}
|
||||
return any(p in gpu_providers for p in self._providers)
|
||||
# Handle both string providers and tuple providers (name, options)
|
||||
for p in self._providers:
|
||||
provider_name = p[0] if isinstance(p, tuple) else p
|
||||
if provider_name in gpu_providers:
|
||||
return True
|
||||
return False
|
||||
|
||||
def _load_model(self) -> None:
|
||||
"""Lazy load the embedding model with configured providers."""
|
||||
@@ -177,7 +183,9 @@ class Embedder:
|
||||
|
||||
from fastembed import TextEmbedding
|
||||
|
||||
# fastembed supports 'providers' parameter for ONNX execution providers
|
||||
# providers already include device_id options via get_optimal_providers(with_device_options=True)
|
||||
# DO NOT pass device_ids separately - fastembed ignores it when providers is specified
|
||||
# See: fastembed/text/onnx_embedding.py - device_ids is only used with cuda=True
|
||||
try:
|
||||
self._model = TextEmbedding(
|
||||
model_name=self.model_name,
|
||||
@@ -215,7 +223,7 @@ class Embedder:
        embeddings = list(self._model.embed(texts))
        return [emb.tolist() for emb in embeddings]

    def embed_to_numpy(self, texts: str | Iterable[str]) -> np.ndarray:
    def embed_to_numpy(self, texts: str | Iterable[str], batch_size: Optional[int] = None) -> np.ndarray:
        """Generate embeddings for one or more texts (returns numpy arrays).

        This method is more memory-efficient than embed() as it avoids converting
@@ -224,6 +232,8 @@ class Embedder:

        Args:
            texts: Single text or iterable of texts to embed.
            batch_size: Optional batch size for fastembed processing.
                Larger values improve GPU utilization but use more memory.

        Returns:
            numpy.ndarray of shape (n_texts, embedding_dim) containing embeddings.
@@ -235,7 +245,11 @@ class Embedder:
        else:
            texts = list(texts)

        # Return embeddings as numpy array directly (no .tolist() conversion)
        # Pass batch_size to fastembed for optimal GPU utilization
        # Default batch_size in fastembed is 256, but larger values can improve throughput
        if batch_size is not None:
            embeddings = list(self._model.embed(texts, batch_size=batch_size))
        else:
            embeddings = list(self._model.embed(texts))
        return np.array(embeddings)
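fastembed handles the actual chunking internally; conceptually, the `batch_size` argument threaded through above just controls how the input is sliced before being fed to the ONNX session. A minimal illustrative sketch (not the embedder's real internals):

```python
def iter_batches(items, batch_size=256):
    """Yield successive slices of at most batch_size items.

    256 mirrors fastembed's default; larger batches trade memory for
    better GPU utilization, as the comment in the diff above notes.
    """
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]
```

This is why a larger `batch_size` helps on GPU: fewer, bigger inference calls amortize the per-call transfer and kernel-launch overhead.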
@@ -13,6 +13,15 @@ from typing import List, Optional
logger = logging.getLogger(__name__)


@dataclass
class GPUDevice:
    """Individual GPU device info."""
    device_id: int
    name: str
    is_discrete: bool  # True for discrete GPU (NVIDIA, AMD), False for integrated (Intel UHD)
    vendor: str  # "nvidia", "amd", "intel", "unknown"


@dataclass
class GPUInfo:
    """GPU availability and configuration info."""
@@ -22,15 +31,117 @@ class GPUInfo:
    gpu_count: int = 0
    gpu_name: Optional[str] = None
    onnx_providers: List[str] = None
    devices: List[GPUDevice] = None  # List of detected GPU devices
    preferred_device_id: Optional[int] = None  # Preferred GPU for embedding

    def __post_init__(self):
        if self.onnx_providers is None:
            self.onnx_providers = ["CPUExecutionProvider"]
        if self.devices is None:
            self.devices = []


_gpu_info_cache: Optional[GPUInfo] = None
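The `= None` defaults plus `__post_init__` fixups above work, but the standard-library idiom for mutable dataclass defaults is `field(default_factory=...)`, which avoids both the sentinel and the shared-mutable-default pitfall. A sketch of the equivalent shape (`GPUInfoAlt` is an illustrative name, not proposed as a rename):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GPUInfoAlt:
    """Same fields as GPUInfo, with factory defaults instead of __post_init__."""
    gpu_count: int = 0
    gpu_name: Optional[str] = None
    # Each instance gets its own fresh list; no None sentinel needed.
    onnx_providers: list = field(default_factory=lambda: ["CPUExecutionProvider"])
    devices: list = field(default_factory=list)
    preferred_device_id: Optional[int] = None
```

With `default_factory`, two instances never share the same list object, which is the bug a plain `= []` default would introduce.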
def _enumerate_gpus() -> List[GPUDevice]:
    """Enumerate available GPU devices using WMI on Windows.

    Returns:
        List of GPUDevice with device info, ordered by device_id.
    """
    devices = []

    try:
        import subprocess
        import sys

        if sys.platform == "win32":
            # Use PowerShell to query GPU information via WMI
            cmd = [
                "powershell", "-NoProfile", "-Command",
                "Get-WmiObject Win32_VideoController | Select-Object DeviceID, Name, AdapterCompatibility | ConvertTo-Json"
            ]
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)

            if result.returncode == 0 and result.stdout.strip():
                import json
                gpu_data = json.loads(result.stdout)

                # Handle single GPU case (returns dict instead of list)
                if isinstance(gpu_data, dict):
                    gpu_data = [gpu_data]

                for idx, gpu in enumerate(gpu_data):
                    name = gpu.get("Name", "Unknown GPU")
                    compat = gpu.get("AdapterCompatibility", "").lower()

                    # Determine vendor
                    name_lower = name.lower()
                    if "nvidia" in name_lower or "nvidia" in compat:
                        vendor = "nvidia"
                        is_discrete = True
                    elif "amd" in name_lower or "radeon" in name_lower or "amd" in compat:
                        vendor = "amd"
                        is_discrete = True
                    elif "intel" in name_lower or "intel" in compat:
                        vendor = "intel"
                        # Intel UHD/Iris are integrated, Intel Arc is discrete
                        is_discrete = "arc" in name_lower
                    else:
                        vendor = "unknown"
                        is_discrete = False

                    devices.append(GPUDevice(
                        device_id=idx,
                        name=name,
                        is_discrete=is_discrete,
                        vendor=vendor
                    ))
                    logger.debug(f"Detected GPU {idx}: {name} (vendor={vendor}, discrete={is_discrete})")

    except Exception as e:
        logger.debug(f"GPU enumeration failed: {e}")

    return devices
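The "single GPU case" branch above deals with a real `ConvertTo-Json` quirk: one result serializes as a bare object, several as an array. The parsing can be exercised without PowerShell by feeding sample payloads to a small helper (`parse_wmi_gpus` is a hypothetical name for illustration):

```python
import json

def parse_wmi_gpus(stdout: str):
    """Normalize ConvertTo-Json output into (name, vendor_hint) pairs.

    PowerShell serializes a single WMI result as a JSON object and
    multiple results as a JSON array, so both shapes must be accepted.
    """
    data = json.loads(stdout)
    if isinstance(data, dict):
        data = [data]
    return [(gpu.get("Name", "Unknown GPU"),
             gpu.get("AdapterCompatibility", "").lower())
            for gpu in data]
```

Testing against canned JSON like this keeps the vendor-classification logic coverable on non-Windows CI, where the `subprocess` path never runs.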
def _get_preferred_device_id(devices: List[GPUDevice]) -> Optional[int]:
    """Determine the preferred GPU device_id for embedding.

    Preference order:
    1. NVIDIA discrete GPU (best DirectML/CUDA support)
    2. AMD discrete GPU
    3. Intel Arc (discrete)
    4. Intel integrated (fallback)

    Returns:
        device_id of preferred GPU, or None to use default.
    """
    if not devices:
        return None

    # Priority: NVIDIA > AMD > Intel Arc > Intel integrated
    priority_order = [
        ("nvidia", True),   # NVIDIA discrete
        ("amd", True),      # AMD discrete
        ("intel", True),    # Intel Arc (discrete)
        ("intel", False),   # Intel integrated (fallback)
    ]

    for target_vendor, target_discrete in priority_order:
        for device in devices:
            if device.vendor == target_vendor and device.is_discrete == target_discrete:
                logger.info(f"Preferred GPU: {device.name} (device_id={device.device_id})")
                return device.device_id

    # If no match, use first device
    if devices:
        return devices[0].device_id

    return None
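The selection above is an ordered first-match scan over (vendor, discrete) pairs. Stripped of the dataclass and logging, the same logic fits in a few lines (`pick_preferred` is an illustrative stand-in, operating on plain tuples):

```python
def pick_preferred(devices):
    """devices: list of (device_id, vendor, is_discrete) tuples.

    Mirrors the NVIDIA > AMD > Intel Arc > Intel integrated ordering,
    falling back to the first device when nothing matches.
    """
    priority = [("nvidia", True), ("amd", True), ("intel", True), ("intel", False)]
    for vendor, discrete in priority:
        for dev_id, v, d in devices:
            if v == vendor and d == discrete:
                return dev_id
    return devices[0][0] if devices else None
```

Note the outer loop is over priorities, not devices: this guarantees a later-enumerated NVIDIA card still beats an earlier-enumerated integrated Intel GPU.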
def detect_gpu(force_refresh: bool = False) -> GPUInfo:
    """Detect available GPU resources for embedding acceleration.

@@ -47,6 +158,18 @@ def detect_gpu(force_refresh: bool = False) -> GPUInfo:

    info = GPUInfo()

    # Enumerate GPU devices first
    info.devices = _enumerate_gpus()
    info.gpu_count = len(info.devices)
    if info.devices:
        # Set preferred device (discrete GPU preferred over integrated)
        info.preferred_device_id = _get_preferred_device_id(info.devices)
        # Set gpu_name to preferred device name
        for dev in info.devices:
            if dev.device_id == info.preferred_device_id:
                info.gpu_name = dev.name
                break

    # Check PyTorch CUDA availability (most reliable detection)
    try:
        import torch
@@ -143,22 +266,49 @@ def detect_gpu(force_refresh: bool = False) -> GPUInfo:
    return info
def get_optimal_providers(use_gpu: bool = True) -> List[str]:
def get_optimal_providers(use_gpu: bool = True, with_device_options: bool = False) -> list:
    """Get optimal ONNX execution providers based on availability.

    Args:
        use_gpu: If True, include GPU providers when available.
            If False, force CPU-only execution.
        with_device_options: If True, return providers as tuples with device_id options
            for proper GPU device selection (required for DirectML).

    Returns:
        List of provider names in priority order.
        List of provider names or tuples (provider_name, options_dict) in priority order.
    """
    if not use_gpu:
        return ["CPUExecutionProvider"]

    gpu_info = detect_gpu()

    if not with_device_options:
        return gpu_info.onnx_providers

    # Build providers with device_id options for GPU providers
    device_id = get_selected_device_id()
    providers = []

    for provider in gpu_info.onnx_providers:
        if provider == "DmlExecutionProvider" and device_id is not None:
            # DirectML requires device_id in provider_options tuple
            providers.append(("DmlExecutionProvider", {"device_id": device_id}))
            logger.debug(f"DmlExecutionProvider configured with device_id={device_id}")
        elif provider == "CUDAExecutionProvider" and device_id is not None:
            # CUDA also supports device_id in provider_options
            providers.append(("CUDAExecutionProvider", {"device_id": device_id}))
            logger.debug(f"CUDAExecutionProvider configured with device_id={device_id}")
        elif provider == "ROCMExecutionProvider" and device_id is not None:
            # ROCm supports device_id
            providers.append(("ROCMExecutionProvider", {"device_id": device_id}))
            logger.debug(f"ROCMExecutionProvider configured with device_id={device_id}")
        else:
            # CPU and other providers don't need device_id
            providers.append(provider)

    return providers
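The three near-identical branches above all attach the same `{"device_id": ...}` options dict; only the set of providers that accept it matters. A table-driven sketch of the same transformation (`with_device_options` here is an illustrative free function, distinct from the keyword argument of the same name):

```python
def with_device_options(providers, device_id):
    """Convert provider names to (name, options) tuples where a device_id applies.

    DirectML, CUDA, and ROCm accept a device_id option; CPU and other
    providers are passed through unchanged.
    """
    accepts_device = {"DmlExecutionProvider", "CUDAExecutionProvider",
                      "ROCMExecutionProvider"}
    out = []
    for p in providers:
        if p in accepts_device and device_id is not None:
            out.append((p, {"device_id": device_id}))
        else:
            out.append(p)
    return out
```

Collapsing the branches this way also makes it one-line work to add another device-aware provider later.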
def is_gpu_available() -> bool:
    """Check if any GPU acceleration is available."""
@@ -190,3 +340,75 @@ def clear_gpu_cache() -> None:
    """Clear cached GPU detection info."""
    global _gpu_info_cache
    _gpu_info_cache = None


# User-selected device ID (overrides auto-detection)
_selected_device_id: Optional[int] = None
def get_gpu_devices() -> List[dict]:
    """Get list of available GPU devices for frontend selection.

    Returns:
        List of dicts with device info for each GPU.
    """
    info = detect_gpu()
    devices = []

    for dev in info.devices:
        devices.append({
            "device_id": dev.device_id,
            "name": dev.name,
            "vendor": dev.vendor,
            "is_discrete": dev.is_discrete,
            "is_preferred": dev.device_id == info.preferred_device_id,
            "is_selected": dev.device_id == get_selected_device_id(),
        })

    return devices
def get_selected_device_id() -> Optional[int]:
    """Get the user-selected GPU device_id.

    Returns:
        User-selected device_id, or auto-detected preferred device_id if not set.
    """
    global _selected_device_id

    if _selected_device_id is not None:
        return _selected_device_id

    # Fall back to auto-detected preferred device
    info = detect_gpu()
    return info.preferred_device_id
def set_selected_device_id(device_id: Optional[int]) -> bool:
    """Set the GPU device_id to use for embeddings.

    Args:
        device_id: GPU device_id to use, or None to use auto-detection.

    Returns:
        True if device_id is valid, False otherwise.
    """
    global _selected_device_id

    if device_id is None:
        _selected_device_id = None
        logger.info("GPU selection reset to auto-detection")
        return True

    # Validate device_id exists
    info = detect_gpu()
    valid_ids = [dev.device_id for dev in info.devices]

    if device_id in valid_ids:
        _selected_device_id = device_id
        device_name = next((dev.name for dev in info.devices if dev.device_id == device_id), "Unknown")
        logger.info(f"GPU selection set to device {device_id}: {device_name}")
        return True
    else:
        logger.warning(f"Invalid device_id {device_id}. Valid IDs: {valid_ids}")
        return False
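Together, `get_selected_device_id` and `set_selected_device_id` implement an override-then-fallback pattern around the module-level `_selected_device_id`. The behavior can be captured without the module globals or WMI calls in a small class (a sketch under the assumption that detection has already produced a preferred device and a set of valid IDs):

```python
class DeviceSelection:
    """Minimal model of the user-override / auto-fallback selection logic."""

    def __init__(self, detected_preferred, valid_ids):
        self._selected = None              # user override, None = auto
        self._preferred = detected_preferred
        self._valid = set(valid_ids)

    def select(self, device_id):
        """Set or clear the override; reject IDs not seen during detection."""
        if device_id is None:
            self._selected = None
            return True
        if device_id in self._valid:
            self._selected = device_id
            return True
        return False                       # invalid ID leaves state unchanged

    def current(self):
        """Override wins; otherwise fall back to the detected preference."""
        return self._selected if self._selected is not None else self._preferred
```

One consequence worth noting, visible in both the real code and this sketch: a rejected `select()` call leaves the previous selection in place rather than resetting to auto.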
package-lock.json (4 changes, generated)
@@ -1,12 +1,12 @@
{
    "name": "claude-code-workflow",
    "version": "6.2.8",
    "version": "6.2.9",
    "lockfileVersion": 3,
    "requires": true,
    "packages": {
        "": {
            "name": "claude-code-workflow",
            "version": "6.2.8",
            "version": "6.2.9",
            "license": "MIT",
            "dependencies": {
                "@modelcontextprotocol/sdk": "^1.0.4",