Compare commits


13 Commits

Author SHA1 Message Date
catlog22
464f3343f3 chore: bump version to 6.3.33 2026-01-16 15:50:32 +08:00
catlog22
bb6cf42df6 fix: update issue execution docs to clarify queue ID requirement and user interaction flow 2026-01-16 15:49:26 +08:00
catlog22
0f0cb7e08e refactor: streamline brainstorm context overflow protection docs
- conceptual-planning-agent.md: 34 lines → 10 lines (-71%)
- auto-parallel.md: 42 lines → 9 lines (-79%)
- Remove duplicate definitions; workflow now references the agent's limits
- Drop redundant strategy lists, self-check checklists, and code examples
- Keep the essentials: limit figures, brief strategies, recovery methods
2026-01-16 15:36:59 +08:00
catlog22
39d070eab6 fix: resolve GitHub issues (#50, #54)
- #54: Add API endpoint configuration documentation to DASHBOARD_GUIDE.md
- #50: Add brainstorm context overflow protection with output size limits

Note: #72 and #53 not changed per user feedback - existing behavior is sufficient
(users can configure envFile themselves; default Python version is appropriate)
2026-01-16 15:09:31 +08:00
catlog22
9ccaa7e2fd fix: update CLI tool configuration cache invalidation logic 2026-01-16 14:28:10 +08:00
catlog22
eeb90949ce chore: bump version to 6.3.32
- Fix: Dashboard project overview display issue (#80)
- Refactor: Update project structure to use project-tech.json
2026-01-16 14:09:09 +08:00
catlog22
7b677b20fb fix: update project docs to correct descriptions of project context and the learning solidify flow 2026-01-16 14:01:27 +08:00
catlog22
e2d56bc08a refactor: update project structure, replacing project.json with project-tech.json and adding new architecture and technology analysis 2026-01-16 13:33:38 +08:00
catlog22
d515090097 feat: add --mode review support for codex CLI
- Add 'review' to mode enum in ParamsSchema and schema
- Implement codex review subcommand in buildCommand (uses --uncommitted by default)
- Other tools (gemini/qwen/claude) accept review mode but no operation change
- Update cli-tools-usage.md with review mode documentation
2026-01-16 13:01:02 +08:00
catlog22
d81dfaf143 fix: add cross-platform support for hook installation (#82)
- Add PlatformUtils module for platform detection (Windows/macOS/Linux)
- Add escapeForShell() for platform-specific shell escaping
- Add checkCompatibility() to warn about incompatible hooks before install
- Add getVariant() to support platform-specific template variants
- Fix node -e commands: use double quotes on Windows, single quotes on Unix
2026-01-16 12:54:56 +08:00
catlog22
d7e5ee44cc fix: adapt help-routes.ts to new command.json structure (fixes #81)
- Replace getIndexDir() with getCommandFilePath() to find command.json
- Update file watcher to monitor command.json instead of index/ directory
- Modify API routes to read from unified command.json structure
- Add buildWorkflowRelationships() to dynamically build workflow data from flow fields
- Add /api/help/agents endpoint for agents list
- Add category merge logic for frontend compatibility (cli includes general)
- Add cli-init command to command.json
2026-01-16 12:46:50 +08:00
catlog22
dde39fc6f5 fix: update post-CLI-call guidance, removing the unnecessary polling suggestion 2026-01-16 09:40:21 +08:00
catlog22
9b4fdc1868 Refactor code structure for improved readability and maintainability 2026-01-15 22:43:44 +08:00
27 changed files with 3745 additions and 200 deletions

View File

@@ -29,9 +29,8 @@ Available CLI endpoints are dynamically defined by the config file:
```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```
--- **After CLI call**: Stop immediately - let CLI execute in background, do NOT poll with TaskOutput
+- **After CLI call**: Stop immediately - let CLI execute in background
### CLI Analysis Calls
- **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
- **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:

View File

@@ -308,3 +308,14 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations
## Output Size Limits
**Per-role limits** (prevent context overflow):
- `analysis.md`: < 3000 words
- `analysis-*.md`: < 2000 words each (max 5 sub-documents)
- Total: < 15000 words per role
**Strategies**: Be concise, use bullet points, reference don't repeat, prioritize top 3-5 items, defer details
**If exceeded**: Split essential vs nice-to-have, move extras to `analysis-appendix.md` (counts toward limit), use executive summary style

View File

@@ -424,6 +424,17 @@ CONTEXT_VARS:
- **Agent execution failure**: Agent-specific retry with minimal dependencies
- **Template loading issues**: Agent handles graceful degradation
- **Synthesis conflicts**: Synthesis highlights disagreements without resolution
- **Context overflow protection**: See below for automatic context management
## Context Overflow Protection
**Per-role limits**: See `conceptual-planning-agent.md` (< 3000 words main, < 2000 words sub-docs, max 5 sub-docs)
**Synthesis protection**: If total analysis > 100KB, synthesis reads only `analysis.md` files (not sub-documents)
**Recovery**: Check logs → reduce scope (--count 2) → use --summary-only → manual synthesis
**Prevention**: Start with --count 3, use structured topic format, review output sizes before synthesis
## Reference Information

View File

@@ -132,7 +132,7 @@ Scan and analyze workflow session directories:
**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
-- Archives: >30 days old + no feature references in project.json
+- Archives: >30 days old + no feature references in project-tech.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits
@@ -443,8 +443,8 @@ if (selectedCategories.includes('Sessions')) {
  }
}
-// Update project.json if features referenced deleted sessions
+// Update project-tech.json if features referenced deleted sessions
-const projectPath = '.workflow/project.json'
+const projectPath = '.workflow/project-tech.json'
if (fileExists(projectPath)) {
  const project = JSON.parse(Read(projectPath))
  const deletedPaths = new Set(results.deleted)
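The staleness rules in this hunk reduce to a small predicate. A sketch under assumed field names (`kind`, `lastModified`, and the boolean flags are illustrative, not the real session schema):

```javascript
// Sketch: staleness predicate for the four session categories.
const DAY_MS = 86400000;

function isStale(session, now = Date.now()) {
  const ageDays = (now - session.lastModified) / DAY_MS;
  switch (session.kind) {
    case 'active':    // no modification >7 days + no related git commits
      return ageDays > 7 && !session.hasRelatedCommits;
    case 'archive':   // >30 days + no feature references in project-tech.json
      return ageDays > 30 && !session.referencedInProjectTech;
    case 'lite-plan': // >7 days + plan.json not executed
      return ageDays > 7 && !session.planExecuted;
    case 'debug':     // >3 days + issue not in recent commits
      return ageDays > 3 && !session.issueInRecentCommits;
    default:
      return false;
  }
}
```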

View File

@@ -108,11 +108,24 @@ Analyze project for workflow initialization and generate .workflow/project-tech.
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
## Task
-Generate complete project-tech.json with:
+Generate complete project-tech.json following the schema structure:
-- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
-- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
-- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
-- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
+- project_name: "${projectName}"
+- initialized_at: ISO 8601 timestamp
+- overview: {
+    description: "Brief project description",
+    technology_stack: {
+      languages: [{name, file_count, primary}],
+      frameworks: ["string"],
+      build_tools: ["string"],
+      test_frameworks: ["string"]
+    },
+    architecture: {style, layers: [], patterns: []},
+    key_components: [{name, path, description, importance}]
+  }
+- features: []
+- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
+- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
+- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}
## Analysis Requirements
@@ -132,7 +145,7 @@ Generate complete project-tech.json with:
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
-4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
+4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary
@@ -181,16 +194,16 @@ console.log(`
✓ Project initialized successfully
## Project Overview
-Name: ${projectTech.project_metadata.name}
+Name: ${projectTech.project_name}
-Description: ${projectTech.technology_analysis.description}
+Description: ${projectTech.overview.description}
### Technology Stack
-Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
+Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
-Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}
+Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
### Architecture
-Style: ${projectTech.technology_analysis.architecture.style}
+Style: ${projectTech.overview.architecture.style}
-Components: ${projectTech.technology_analysis.key_components.length} core modules
+Components: ${projectTech.overview.key_components.length} core modules
---
Files created:
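For illustration, a project-tech.json generated under the new structure in this diff would look roughly like this (all values hypothetical):

```json
{
  "project_name": "example-project",
  "initialized_at": "2026-01-16T14:00:00+08:00",
  "overview": {
    "description": "Brief project description",
    "technology_stack": {
      "languages": [{ "name": "TypeScript", "file_count": 120, "primary": true }],
      "frameworks": ["Express"],
      "build_tools": ["esbuild"],
      "test_frameworks": ["vitest"]
    },
    "architecture": { "style": "modular monolith", "layers": ["api", "core"], "patterns": ["repository"] },
    "key_components": [{ "name": "cli", "path": "src/cli", "description": "Command entry points", "importance": "high" }]
  },
  "features": [],
  "development_index": { "feature": [], "enhancement": [], "bugfix": [], "refactor": [], "docs": [] },
  "statistics": { "total_features": 0, "total_sessions": 0, "last_updated": "2026-01-16T14:00:00+08:00" },
  "_metadata": { "initialized_by": "cli-explore-agent", "analysis_timestamp": "2026-01-16T14:00:00+08:00", "analysis_mode": "deep-scan" }
}
```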

View File

@@ -531,11 +531,11 @@ if (hasUnresolvedIssues(reviewResult)) {
**Trigger**: After all executions complete (regardless of code review)
-**Skip Condition**: Skip if `.workflow/project.json` does not exist
+**Skip Condition**: Skip if `.workflow/project-tech.json` does not exist
**Operations**:
```javascript
-const projectJsonPath = '.workflow/project.json'
+const projectJsonPath = '.workflow/project-tech.json'
if (!fileExists(projectJsonPath)) return // Silent skip
const projectJson = JSON.parse(Read(projectJsonPath))

View File

@@ -107,13 +107,13 @@ rm -f .workflow/archives/$SESSION_ID/.archiving
Manifest: Updated with N total sessions
```
-### Phase 4: Update project.json (Optional)
+### Phase 4: Update project-tech.json (Optional)
-**Skip if**: `.workflow/project.json` doesn't exist
+**Skip if**: `.workflow/project-tech.json` doesn't exist
```bash
# Check
-test -f .workflow/project.json || echo "SKIP"
+test -f .workflow/project-tech.json || echo "SKIP"
```
**If exists**, add feature entry:
@@ -134,6 +134,32 @@ test -f .workflow/project.json || echo "SKIP"
✓ Feature added to project registry
```
### Phase 5: Ask About Solidify (Always)
After successful archival, prompt user to capture learnings:
```javascript
AskUserQuestion({
questions: [{
question: "Would you like to solidify learnings from this session into project guidelines?",
header: "Solidify",
options: [
{ label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
{ label: "Skip", description: "Archive complete, no learnings to capture" }
],
multiSelect: false
}]
})
```
**If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.
**Output**:
```
Session archived successfully.
→ Run /workflow:session:solidify to capture learnings (recommended)
```
## Error Recovery
| Phase | Symptom | Recovery |
@@ -149,5 +175,6 @@ test -f .workflow/project.json || echo "SKIP"
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
-Phase 4: update project.json features array (optional)
+Phase 4: update project-tech.json features array (optional)
Phase 5: ask user → solidify learnings (optional)
```

View File

@@ -16,7 +16,7 @@ examples:
Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.
**Dual Responsibility**:
-1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
+1. **Project-level initialization** (first-time only): Creates `.workflow/project-tech.json` for feature registry
2. **Session-level initialization** (always): Creates session directory structure
## Session Types

View File

@@ -237,7 +237,7 @@ Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
-- Read and parse `.workflow/project-tech.json`. Use its `technology_analysis` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
+- Read and parse `.workflow/project-tech.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
- Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
- If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
@@ -255,7 +255,7 @@ Execute all discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
-3. **Populate `project_context`**: Directly use the `technology_analysis` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
+3. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
6. Perform conflict detection with risk assessment

View File

@@ -1,7 +1,7 @@
{
"_metadata": {
"version": "2.0.0",
-"total_commands": 88,
+"total_commands": 45,
"total_agents": 16,
"description": "Unified CCW-Help command index"
},
@@ -485,6 +485,15 @@
"category": "general", "category": "general",
"difficulty": "Intermediate", "difficulty": "Intermediate",
"source": "../../../commands/enhance-prompt.md" "source": "../../../commands/enhance-prompt.md"
},
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Initialize CLI tool configurations (.gemini/, .qwen/) with technology-aware ignore rules",
"arguments": "[--tool gemini|qwen|all] [--preview] [--output path]",
"category": "cli",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
}
],

View File

@@ -4,7 +4,7 @@
- All replies in Simplified Chinese
- Keep technical terms in English; a Chinese explanation may be added on first occurrence
-- Code content (variable names, comments) stays in English
+- Code variable names stay in English; comments in Chinese
## Formatting Rules

View File

@@ -0,0 +1,141 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Project Guidelines Schema",
"description": "Schema for project-guidelines.json - user-maintained rules and constraints",
"type": "object",
"required": ["conventions", "constraints", "_metadata"],
"properties": {
"conventions": {
"type": "object",
"description": "Coding conventions and standards",
"required": ["coding_style", "naming_patterns", "file_structure", "documentation"],
"properties": {
"coding_style": {
"type": "array",
"items": { "type": "string" },
"description": "Coding style rules (e.g., 'Use strict TypeScript mode', 'Prefer const over let')"
},
"naming_patterns": {
"type": "array",
"items": { "type": "string" },
"description": "Naming conventions (e.g., 'Use camelCase for variables', 'Use PascalCase for components')"
},
"file_structure": {
"type": "array",
"items": { "type": "string" },
"description": "File organization rules (e.g., 'One component per file', 'Tests alongside source files')"
},
"documentation": {
"type": "array",
"items": { "type": "string" },
"description": "Documentation requirements (e.g., 'JSDoc for public APIs', 'README for each module')"
}
}
},
"constraints": {
"type": "object",
"description": "Technical constraints and boundaries",
"required": ["architecture", "tech_stack", "performance", "security"],
"properties": {
"architecture": {
"type": "array",
"items": { "type": "string" },
"description": "Architecture constraints (e.g., 'No circular dependencies', 'Services must be stateless')"
},
"tech_stack": {
"type": "array",
"items": { "type": "string" },
"description": "Technology constraints (e.g., 'No new dependencies without review', 'Use native fetch over axios')"
},
"performance": {
"type": "array",
"items": { "type": "string" },
"description": "Performance requirements (e.g., 'API response < 200ms', 'Bundle size < 500KB')"
},
"security": {
"type": "array",
"items": { "type": "string" },
"description": "Security requirements (e.g., 'Sanitize all user input', 'No secrets in code')"
}
}
},
"quality_rules": {
"type": "array",
"description": "Enforceable quality rules",
"items": {
"type": "object",
"required": ["rule", "scope"],
"properties": {
"rule": {
"type": "string",
"description": "The quality rule statement"
},
"scope": {
"type": "string",
"description": "Where the rule applies (e.g., 'all', 'src/**', 'tests/**')"
},
"enforced_by": {
"type": "string",
"description": "How the rule is enforced (e.g., 'eslint', 'pre-commit', 'code-review')"
}
}
}
},
"learnings": {
"type": "array",
"description": "Project learnings captured from workflow sessions",
"items": {
"type": "object",
"required": ["date", "insight"],
"properties": {
"date": {
"type": "string",
"format": "date",
"description": "Date the learning was captured (YYYY-MM-DD)"
},
"session_id": {
"type": "string",
"description": "WFS session ID where the learning originated"
},
"insight": {
"type": "string",
"description": "The learning or insight captured"
},
"context": {
"type": "string",
"description": "Additional context about when/why this learning applies"
},
"category": {
"type": "string",
"enum": ["architecture", "performance", "security", "testing", "workflow", "other"],
"description": "Category of the learning"
}
}
}
},
"_metadata": {
"type": "object",
"required": ["created_at", "version"],
"properties": {
"created_at": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of creation"
},
"version": {
"type": "string",
"description": "Schema version (e.g., '1.0.0')"
},
"last_updated": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of last update"
},
"updated_by": {
"type": "string",
"description": "Who/what last updated the file (e.g., 'user', 'workflow:session:solidify')"
}
}
}
}
}
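A minimal instance satisfying this schema's `required` fields might look like the following (all values illustrative):

```json
{
  "conventions": {
    "coding_style": ["Use strict TypeScript mode"],
    "naming_patterns": ["Use camelCase for variables"],
    "file_structure": ["One component per file"],
    "documentation": ["JSDoc for public APIs"]
  },
  "constraints": {
    "architecture": ["No circular dependencies"],
    "tech_stack": ["No new dependencies without review"],
    "performance": ["API response < 200ms"],
    "security": ["No secrets in code"]
  },
  "learnings": [
    {
      "date": "2026-01-16",
      "insight": "Keep role outputs under the documented word limits to avoid synthesis overflow",
      "category": "workflow"
    }
  ],
  "_metadata": {
    "created_at": "2026-01-16T12:00:00+08:00",
    "version": "1.0.0",
    "updated_by": "user"
  }
}
```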

View File

@@ -1,7 +1,7 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
-"title": "Project Metadata Schema",
+"title": "Project Tech Schema",
-"description": "Workflow initialization metadata for project-level context",
+"description": "Schema for project-tech.json - auto-generated technical analysis (stack, architecture, components)",
"type": "object",
"required": [
"project_name",

View File

@@ -85,11 +85,14 @@ Tools are selected based on **tags** defined in the configuration. Use tags to m
```bash
# Explicit tool selection
-ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write>
+ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write|review>
# Model override
ccw cli -p "<PROMPT>" --tool <tool-id> --model <model-id> --mode <analysis|write>
# Code review (codex only)
ccw cli -p "<PROMPT>" --tool codex --mode review
# Tag-based auto-selection (future)
ccw cli -p "<PROMPT>" --tags <tag1,tag2> --mode <analysis|write>
```
@@ -330,6 +333,14 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
- Use For: Feature implementation, bug fixes, documentation, code creation, file modifications
- Specification: Requires explicit `--mode write`
- **`review`**
- Permission: Read-only (code review output)
- Use For: Git-aware code review of uncommitted changes, branch diffs, specific commits
- Specification: **codex only** - uses `codex review` subcommand with `--uncommitted` by default
- Tool Behavior:
- `codex`: Executes `codex review --uncommitted [prompt]` for structured code review
- Other tools (gemini/qwen/claude): Accept mode but no operation change (treated as analysis)
### Command Options
- **`--tool <tool>`**
@@ -337,8 +348,9 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
- Default: First enabled tool in config
- **`--mode <mode>`**
-- Description: **REQUIRED**: analysis, write
+- Description: **REQUIRED**: analysis, write, review
- Default: **NONE** (must specify)
- Note: `review` mode triggers `codex review` subcommand for codex tool only
- **`--model <model>`**
- Description: Model override
@@ -463,6 +475,17 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
" --tool <tool-id> --mode write " --tool <tool-id> --mode write
``` ```
**Code Review Task** (codex review mode):
```bash
# Review uncommitted changes (default)
ccw cli -p "Focus on security vulnerabilities and error handling" --tool codex --mode review
# Review with custom instructions
ccw cli -p "Check for breaking changes in API contracts and backward compatibility" --tool codex --mode review
```
> **Note**: `--mode review` only triggers special behavior for `codex` tool (uses `codex review --uncommitted`). Other tools accept the mode but execute as standard analysis.
---
### Permission Framework
@@ -472,6 +495,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
**Mode Hierarchy**:
- `analysis`: Read-only, safe for auto-execution
- `write`: Create/Modify/Delete files, full operations - requires explicit `--mode write`
- `review`: Git-aware code review (codex only), read-only output - requires explicit `--mode review`
- **Exception**: User provides clear instructions like "modify", "create", "implement"
---
@@ -502,7 +526,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
### Planning Checklist
- [ ] **Purpose defined** - Clear goal and intent
-- [ ] **Mode selected** - `--mode analysis|write`
+- [ ] **Mode selected** - `--mode analysis|write|review`
- [ ] **Context gathered** - File references + memory (default `@**/*`)
- [ ] **Directory navigation** - `--cd` and/or `--includeDirs`
- [ ] **Tool selected** - Explicit `--tool` or tag-based auto-selection
@@ -514,5 +538,5 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
1. **Load configuration** - Read `cli-tools.json` for available tools
2. **Match by tags** - Select tool based on task requirements
3. **Validate enabled** - Ensure selected tool is enabled
-4. **Execute with mode** - Always specify `--mode analysis|write`
+4. **Execute with mode** - Always specify `--mode analysis|write|review`
5. **Fallback gracefully** - Use secondary model or next matching tool on failure

View File

@@ -11,46 +11,21 @@ argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
## Queue ID Requirement (MANDATORY)
-**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
+**`--queue <queue-id>` parameter is REQUIRED**
-### If Queue ID Not Provided
+### When Queue ID Not Provided
-When `--queue` parameter is missing, you MUST:
-1. **List available queues** by running:
-```javascript
-const result = shell_command({ command: "ccw issue queue list --brief --json" })
-```
-2. **Parse and display queues** to user:
-```
-Available Queues:
-ID                 Status     Progress   Issues
------------------------------------------------------------
-→ QUE-20251215-001 active     3/10       ISS-001, ISS-002
-  QUE-20251210-002 active     0/5        ISS-003
-  QUE-20251205-003 completed  8/8        ISS-004
-```
-3. **Stop and ask user** to specify which queue to execute:
-```javascript
-AskUserQuestion({
-  questions: [{
-    question: "Which queue would you like to execute?",
-    header: "Queue",
-    multiSelect: false,
-    options: [
-      // Generate from parsed queue list - only show active/pending queues
-      { label: "QUE-20251215-001", description: "active, 3/10 completed, Issues: ISS-001, ISS-002" },
-      { label: "QUE-20251210-002", description: "active, 0/5 completed, Issues: ISS-003" }
-    ]
-  }]
-})
-```
-4. **After user selection**, continue execution with the selected queue ID.
-**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of wrong queue.
+```
+List queues → Output options → Stop and wait for user
+```
+**Actions**:
+1. `ccw issue queue list --brief --json` - Fetch queue list
+2. Filter active/pending status, output formatted list
+3. **Stop execution**, prompt user to rerun with `codex -p "@.codex/prompts/issue-execute.md --queue QUE-xxx"`
+**No auto-selection** - User MUST explicitly specify queue-id
## Worktree Mode (Recommended for Parallel Execution)
@@ -147,33 +122,19 @@ codex -p "@.codex/prompts/issue-execute.md --worktree /path/to/existing/worktree
**Completion - User Choice:**
-When all solutions are complete, ask user what to do with the worktree branch:
+When all solutions are complete, output options and wait for user to specify:
-```javascript
-AskUserQuestion({
-  questions: [{
-    question: "All solutions completed in worktree. What would you like to do with the changes?",
-    header: "Merge",
-    multiSelect: false,
-    options: [
-      {
-        label: "Merge to main",
-        description: "Merge worktree branch into main branch and cleanup"
-      },
-      {
-        label: "Create PR",
-        description: "Push branch and create a pull request for review"
-      },
-      {
-        label: "Keep branch",
-        description: "Keep the branch for manual handling, cleanup worktree only"
-      }
-    ]
-  }]
-})
-```
+```
+All solutions completed in worktree. Choose next action:
+1. Merge to main - Merge worktree branch into main and cleanup
+2. Create PR - Push branch and create pull request (Recommended for parallel execution)
+3. Keep branch - Keep branch for manual handling, cleanup worktree only
+Please respond with: 1, 2, or 3
+```
-**Based on user selection:**
+**Based on user response:**
```bash
# Disable cleanup trap before intentional cleanup
@@ -327,9 +288,154 @@ Expected solution structure:
}
```
## Step 2.1: Determine Execution Strategy
After parsing the solution, analyze the issue type and task actions to determine the appropriate execution strategy. The strategy defines additional verification steps and quality gates beyond the basic implement-test-verify cycle.
### Strategy Auto-Matching
**Matching Priority**:
1. Explicit `solution.strategy_type` if provided
2. Infer from `task.action` keywords (Debug, Fix, Feature, Refactor, Test, etc.)
3. Infer from `solution.description` and `task.title` content
4. Default to "standard" if no clear match
**Strategy Types and Matching Keywords**:
| Strategy Type | Match Keywords | Description |
|---------------|----------------|-------------|
| `debug` | Debug, Diagnose, Trace, Investigate | Bug diagnosis with logging and debugging |
| `bugfix` | Fix, Patch, Resolve, Correct | Bug fixing with root cause analysis |
| `feature` | Feature, Add, Implement, Create, Build | New feature development with full testing |
| `refactor` | Refactor, Restructure, Optimize, Cleanup | Code restructuring with behavior preservation |
| `test` | Test, Coverage, E2E, Integration | Test implementation with coverage checks |
| `performance` | Performance, Optimize, Speed, Memory | Performance optimization with benchmarking |
| `security` | Security, Vulnerability, CVE, Audit | Security fixes with vulnerability checks |
| `hotfix` | Hotfix, Urgent, Critical, Emergency | Urgent fixes with minimal changes |
| `documentation` | Documentation, Docs, Comment, README | Documentation updates with example validation |
| `chore` | Chore, Dependency, Config, Maintenance | Maintenance tasks with compatibility checks |
| `standard` | (default) | Standard implementation without extra steps |
### Strategy-Specific Execution Phases
Each strategy extends the basic cycle with additional quality gates:
#### 1. Debug
```
REPRODUCE → INSTRUMENT → DIAGNOSE → IMPLEMENT → TEST → VERIFY → CLEANUP
```
#### 2. Bugfix
```
ROOT_CAUSE → IMPLEMENT → TEST → EDGE_CASES → REGRESSION → VERIFY
```
#### 3. Feature
```
DESIGN_REVIEW → UNIT_TESTS → IMPLEMENT → INTEGRATION_TESTS → TEST → CODE_REVIEW → DOCS → VERIFY
```
#### 4. Refactor
```
BASELINE_TESTS → IMPLEMENT → TEST → BEHAVIOR_PRESERVATION → PERFORMANCE_CMP → VERIFY
```
#### 5. Test
```
COVERAGE_BASELINE → TEST_DESIGN → IMPLEMENT → COVERAGE_CHECK → VERIFY
```
#### 6. Performance
```
PROFILING → BOTTLENECK → IMPLEMENT → BENCHMARK → TEST → VERIFY
```
#### 7. Security
```
VULNERABILITY_SCAN → IMPLEMENT → SECURITY_TEST → PENETRATION_TEST → VERIFY
```
#### 8. Hotfix
```
IMPACT_ASSESSMENT → IMPLEMENT → TEST → QUICK_VERIFY → VERIFY
```
#### 9. Documentation
```
IMPLEMENT → EXAMPLE_VALIDATION → FORMAT_CHECK → LINK_VALIDATION → VERIFY
```
#### 10. Chore
```
IMPLEMENT → COMPATIBILITY_CHECK → TEST → CHANGELOG → VERIFY
```
#### 11. Standard
```
IMPLEMENT → TEST → VERIFY
```
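The phase chains above can be sketched as a simple lookup table. This is an illustrative sketch only: `STRATEGY_PHASES` and `phasesFor` are assumed names for this example, not part of the actual workflow implementation.

```javascript
// Illustrative mapping from strategy type to its phase chain
// (a subset of the strategies listed above).
const STRATEGY_PHASES = {
  debug: ['REPRODUCE', 'INSTRUMENT', 'DIAGNOSE', 'IMPLEMENT', 'TEST', 'VERIFY', 'CLEANUP'],
  bugfix: ['ROOT_CAUSE', 'IMPLEMENT', 'TEST', 'EDGE_CASES', 'REGRESSION', 'VERIFY'],
  hotfix: ['IMPACT_ASSESSMENT', 'IMPLEMENT', 'TEST', 'QUICK_VERIFY', 'VERIFY'],
  standard: ['IMPLEMENT', 'TEST', 'VERIFY']
};

function phasesFor(strategy) {
  // Unknown strategies fall back to the standard cycle
  return STRATEGY_PHASES[strategy] || STRATEGY_PHASES.standard;
}

console.log(phasesFor('bugfix').join(' → '));
// ROOT_CAUSE → IMPLEMENT → TEST → EDGE_CASES → REGRESSION → VERIFY
```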
### Strategy Selection Implementation
**Pseudo-code for strategy matching**:
```javascript
function determineStrategy(solution) {
// Priority 1: Explicit strategy type
if (solution.strategy_type) {
return solution.strategy_type
}
// Priority 2: Infer from task actions
const actions = solution.tasks.map(t => t.action.toLowerCase())
const titles = solution.tasks.map(t => t.title.toLowerCase())
const description = solution.description.toLowerCase()
const allText = [...actions, ...titles, description].join(' ')
// Match keywords (order matters - more specific first)
if (/hotfix|urgent|critical|emergency/.test(allText)) return 'hotfix'
if (/debug|diagnose|trace|investigate/.test(allText)) return 'debug'
if (/security|vulnerability|cve|audit/.test(allText)) return 'security'
if (/performance|optimize|speed|memory|benchmark/.test(allText)) return 'performance'
if (/refactor|restructure|cleanup/.test(allText)) return 'refactor'
if (/test|coverage|e2e|integration/.test(allText)) return 'test'
if (/documentation|docs|comment|readme/.test(allText)) return 'documentation'
if (/chore|dependency|config|maintenance/.test(allText)) return 'chore'
if (/fix|patch|resolve|correct/.test(allText)) return 'bugfix'
if (/feature|add|implement|create|build/.test(allText)) return 'feature'
// Default
return 'standard'
}
```
**Usage in execution flow**:
```javascript
// After parsing solution (Step 2)
const strategy = determineStrategy(solution)
console.log(`Strategy selected: ${strategy}`)
// During task execution (Step 3), follow strategy-specific phases
for (const task of solution.tasks) {
executeTaskWithStrategy(task, strategy)
}
```
## Step 2.5: Initialize Task Tracking
After parsing solution and determining strategy, use `update_plan` to track each task:
```javascript
// Initialize plan with all tasks from solution
@@ -503,18 +609,19 @@ EOF
## Solution Committed: [solution_id]
**Commit**: [commit hash]
**Type**: [commit_type]([scope])
**Changes**:
- [Feature/Fix/Improvement]: [What functionality was added/fixed/improved]
- [Specific change 1]
- [Specific change 2]
**Files Modified**:
- path/to/file1.ts - [Brief description of changes]
- path/to/file2.ts - [Brief description of changes]
- path/to/file3.ts - [Brief description of changes]
**Solution**: [solution_id] ([N] tasks completed)
```
## Step 4: Report Completion
@@ -629,9 +736,8 @@ When `ccw issue next` returns `{ "status": "empty" }`:
If `--queue` was NOT provided in the command arguments:
1. Run `ccw issue queue list --brief --json`
2. Filter and display active/pending queues to user
3. **Stop execution**, prompt user to rerun with `--queue QUE-xxx`
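The queue-resolution rule can be sketched as follows. This is a hypothetical illustration: `resolveQueue` and the queue object shape are assumptions for this example, not the real CLI code.

```javascript
// Illustrative sketch: resolve the queue ID, or stop and ask the user to rerun.
function resolveQueue(args, queues) {
  if (args.queue) return args.queue;
  // Show only active/pending queues to the user
  const candidates = queues.filter(q => q.status === 'active' || q.status === 'pending');
  for (const q of candidates) console.log(`${q.id} (${q.status})`);
  // Stop execution: an explicit queue ID is required
  throw new Error('No --queue provided; rerun with --queue QUE-xxx');
}
```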
**Step 1: Fetch First Solution**
@@ -148,6 +148,36 @@ The CCW Dashboard is a single-page application (SPA); its interface consists of four core parts
- **Model configuration**: Configure each tool's primary and secondary models
- **Install/Uninstall**: Install or uninstall tools through the wizard
#### API Endpoint Configuration (no CLI installation required)
If you do not have the Gemini/Qwen CLI installed but have API access (e.g. via a reverse-proxy service), you can configure a tool of type `api-endpoint` in `~/.claude/cli-tools.json`:
```json
{
"version": "3.2.0",
"tools": {
"gemini-api": {
"enabled": true,
"type": "api-endpoint",
"id": "your-api-id",
"primaryModel": "gemini-2.5-pro",
"secondaryModel": "gemini-2.5-flash",
"tags": ["analysis"]
}
}
}
```
**Configuration notes**:
- `type: "api-endpoint"`: Use API calls instead of a CLI
- `id`: API endpoint identifier, used to route requests
- API endpoint tools only support **analysis mode** (read-only); file write operations are not supported
**Usage example**:
```bash
ccw cli -p "Analyze the code structure" --tool gemini-api --mode analysis
```
#### CodexLens Management
- **Index path**: View and modify the index storage location
- **Index operations**:
@@ -1216,7 +1216,7 @@ export async function cliCommand(
console.log(chalk.gray(' -f, --file <file> Read prompt from file (recommended for multi-line prompts)'));
console.log(chalk.gray(' -p, --prompt <text> Prompt text (single-line)'));
console.log(chalk.gray(' --tool <tool> Tool: gemini, qwen, codex (default: gemini)'));
console.log(chalk.gray(' --mode <mode> Mode: analysis, write, auto, review (default: analysis)'));
console.log(chalk.gray(' -d, --debug Enable debug logging for troubleshooting'));
console.log(chalk.gray(' --model <model> Model override'));
console.log(chalk.gray(' --cd <path> Working directory'));
@@ -140,12 +140,25 @@ interface ProjectGuidelines {
};
}
interface Language {
name: string;
file_count: number;
primary: boolean;
}
interface KeyComponent {
name: string;
path: string;
description: string;
importance: 'high' | 'medium' | 'low';
}
interface ProjectOverview {
projectName: string;
description: string;
initializedAt: string | null;
technologyStack: {
languages: Language[];
frameworks: string[];
build_tools: string[];
test_frameworks: string[];
@@ -155,7 +168,7 @@ interface ProjectOverview {
layers: string[];
patterns: string[];
};
keyComponents: KeyComponent[];
features: unknown[];
developmentIndex: {
feature: unknown[];
@@ -187,13 +200,12 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
// Initialize cache manager
const cache = createDashboardCache(workflowDir);
// Prepare paths to watch for changes
const watchPaths = [
join(workflowDir, 'active'),
join(workflowDir, 'archives'),
join(workflowDir, 'project-tech.json'),
join(workflowDir, 'project-guidelines.json'),
...sessions.active.map(s => s.path),
...sessions.archived.map(s => s.path)
];
@@ -266,7 +278,7 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
console.error('Error scanning lite tasks:', (err as Error).message);
}
// Load project overview from project-tech.json
try {
data.projectOverview = loadProjectOverview(workflowDir);
} catch (err) {
@@ -553,31 +565,25 @@ function sortTaskIds(a: string, b: string): number {
/**
* Load project overview from project-tech.json and project-guidelines.json
* @param workflowDir - Path to .workflow directory
* @returns Project overview data or null if not found
*/
function loadProjectOverview(workflowDir: string): ProjectOverview | null {
const techFile = join(workflowDir, 'project-tech.json');
const guidelinesFile = join(workflowDir, 'project-guidelines.json');
if (!existsSync(techFile)) {
console.log(`Project file not found at: ${techFile}`);
return null;
}
try {
const fileContent = readFileSync(techFile, 'utf8');
const projectData = JSON.parse(fileContent) as Record<string, unknown>;
console.log(`Successfully loaded project overview: ${projectData.project_name || 'Unknown'}`);
// Parse tech data from project-tech.json structure
const overview = projectData.overview as Record<string, unknown> | undefined;
const technologyAnalysis = projectData.technology_analysis as Record<string, unknown> | undefined;
const developmentStatus = projectData.development_status as Record<string, unknown> | undefined;
@@ -645,7 +651,7 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
description: (overview?.description as string) || '',
initializedAt: (projectData.initialized_at as string) || null,
technologyStack: {
languages: (technologyStack?.languages as Language[]) || [],
frameworks: extractStringArray(technologyStack?.frameworks),
build_tools: extractStringArray(technologyStack?.build_tools),
test_frameworks: extractStringArray(technologyStack?.test_frameworks)
@@ -655,7 +661,7 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
layers: extractStringArray(architecture?.layers as unknown[] | undefined),
patterns: extractStringArray(architecture?.patterns as unknown[] | undefined)
},
keyComponents: (overview?.key_components as KeyComponent[]) || [],
features: (projectData.features as unknown[]) || [],
developmentIndex: {
feature: (developmentIndex?.feature as unknown[]) || [],
@@ -677,7 +683,7 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
guidelines
};
} catch (err) {
console.error(`Failed to parse project file at ${techFile}:`, (err as Error).message);
console.error('Error stack:', (err as Error).stack);
return null;
}
@@ -8,23 +8,23 @@ import { homedir } from 'os';
import type { RouteContext } from './types.js';
/**
* Get the ccw-help command.json file path (pure function)
* Priority: project path (.claude/skills/ccw-help/command.json) > user path (~/.claude/skills/ccw-help/command.json)
* @param projectPath - The project path to check first
*/
function getCommandFilePath(projectPath: string | null): string | null {
// Try project path first
if (projectPath) {
const projectFilePath = join(projectPath, '.claude', 'skills', 'ccw-help', 'command.json');
if (existsSync(projectFilePath)) {
return projectFilePath;
}
}
// Fall back to user path
const userFilePath = join(homedir(), '.claude', 'skills', 'ccw-help', 'command.json');
if (existsSync(userFilePath)) {
return userFilePath;
}
return null;
@@ -83,46 +83,48 @@ function invalidateCache(key: string): void {
let watchersInitialized = false;
/**
* Initialize file watcher for command.json
* @param projectPath - The project path to resolve command file
*/
function initializeFileWatchers(projectPath: string | null): void {
if (watchersInitialized) return;
const commandFilePath = getCommandFilePath(projectPath);
if (!commandFilePath) {
console.warn(`ccw-help command.json not found in project or user paths`);
return;
}
try {
// Watch the command.json file
const watcher = watch(commandFilePath, (eventType) => {
console.log(`File change detected: command.json (${eventType})`);
// Invalidate all cache entries when command.json changes
invalidateCache('command-data');
});
watchersInitialized = true;
(watcher as any).unref?.();
console.log(`File watcher initialized for: ${commandFilePath}`);
} catch (error) {
console.error('Failed to initialize file watcher:', error);
}
}
// ========== Helper Functions ==========
/**
* Get command data from command.json (with caching)
*/
function getCommandData(projectPath: string | null): any {
const filePath = getCommandFilePath(projectPath);
if (!filePath) return null;
return getCachedData('command-data', filePath);
}
/**
* Filter commands by search query
*/
@@ -138,6 +140,15 @@ function filterCommands(commands: any[], query: string): any[] {
);
}
/**
* Category merge mapping for frontend compatibility
* Merges additional categories into target category for display
* Format: { targetCategory: [additionalCategoriesToMerge] }
*/
const CATEGORY_MERGES: Record<string, string[]> = {
'cli': ['general'], // CLI tab shows both 'cli' and 'general' commands
};
/**
* Group commands by category with subcategories
*/
@@ -166,9 +177,104 @@ function groupCommandsByCategory(commands: any[]): any {
}
}
// Apply category merges for frontend compatibility
for (const [target, sources] of Object.entries(CATEGORY_MERGES)) {
// Initialize target category if not exists
if (!grouped[target]) {
grouped[target] = {
name: target,
commands: [],
subcategories: {}
};
}
// Merge commands from source categories into target
for (const source of sources) {
if (grouped[source]) {
// Merge direct commands
grouped[target].commands = [
...grouped[target].commands,
...grouped[source].commands
];
// Merge subcategories
for (const [subcat, cmds] of Object.entries(grouped[source].subcategories)) {
if (!grouped[target].subcategories[subcat]) {
grouped[target].subcategories[subcat] = [];
}
grouped[target].subcategories[subcat] = [
...grouped[target].subcategories[subcat],
...(cmds as any[])
];
}
}
}
}
return grouped;
}
/**
* Build workflow relationships from command flow data
*/
function buildWorkflowRelationships(commands: any[]): any {
const relationships: any = {
workflows: [],
dependencies: {},
alternatives: {}
};
for (const cmd of commands) {
if (!cmd.flow) continue;
const cmdName = cmd.command;
// Build next_steps relationships
if (cmd.flow.next_steps) {
if (!relationships.dependencies[cmdName]) {
relationships.dependencies[cmdName] = { next: [], prev: [] };
}
relationships.dependencies[cmdName].next = cmd.flow.next_steps;
// Add reverse relationship
for (const nextCmd of cmd.flow.next_steps) {
if (!relationships.dependencies[nextCmd]) {
relationships.dependencies[nextCmd] = { next: [], prev: [] };
}
if (!relationships.dependencies[nextCmd].prev.includes(cmdName)) {
relationships.dependencies[nextCmd].prev.push(cmdName);
}
}
}
// Build prerequisites relationships
if (cmd.flow.prerequisites) {
if (!relationships.dependencies[cmdName]) {
relationships.dependencies[cmdName] = { next: [], prev: [] };
}
relationships.dependencies[cmdName].prev = [
...new Set([...relationships.dependencies[cmdName].prev, ...cmd.flow.prerequisites])
];
}
// Build alternatives
if (cmd.flow.alternatives) {
relationships.alternatives[cmdName] = cmd.flow.alternatives;
}
// Add to workflows list
if (cmd.category === 'workflow') {
relationships.workflows.push({
name: cmd.name,
command: cmd.command,
description: cmd.description,
flow: cmd.flow
});
}
}
return relationships;
}
// ========== API Routes ==========
/**
@@ -181,25 +287,17 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
// Initialize file watchers on first request
initializeFileWatchers(initialPath);
// API: Get all commands with optional search
if (pathname === '/api/help/commands') {
const commandData = getCommandData(initialPath);
if (!commandData) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
return true;
}
const searchQuery = url.searchParams.get('q') || '';
let commands = commandData.commands || [];
// Filter by search query if provided
if (searchQuery) {
@@ -213,26 +311,24 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
res.end(JSON.stringify({
commands: commands,
grouped: grouped,
total: commands.length,
essential: commandData.essential_commands || [],
metadata: commandData._metadata
}));
return true;
}
// API: Get workflow command relationships
if (pathname === '/api/help/workflows') {
const commandData = getCommandData(initialPath);
if (!commandData) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
return true;
}
const commands = commandData.commands || [];
const relationships = buildWorkflowRelationships(commands);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(relationships));
@@ -241,22 +337,38 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
// API: Get commands by category
if (pathname === '/api/help/commands/by-category') {
const commandData = getCommandData(initialPath);
if (!commandData) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
return true;
}
const commands = commandData.commands || [];
const byCategory = groupCommandsByCategory(commands);
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
categories: commandData.categories || [],
grouped: byCategory
}));
return true;
}
// API: Get agents list
if (pathname === '/api/help/agents') {
const commandData = getCommandData(initialPath);
if (!commandData) {
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
return true;
}
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
agents: commandData.agents || [],
total: (commandData.agents || []).length
}));
return true;
}
@@ -1,6 +1,103 @@
// Hook Manager Component
// Manages Claude Code hooks configuration from settings.json
// ========== Platform Detection ==========
const PlatformUtils = {
// Detect current platform
detect() {
if (typeof navigator !== 'undefined') {
const platform = navigator.platform.toLowerCase();
if (platform.includes('win')) return 'windows';
if (platform.includes('mac')) return 'macos';
return 'linux';
}
if (typeof process !== 'undefined') {
if (process.platform === 'win32') return 'windows';
if (process.platform === 'darwin') return 'macos';
return 'linux';
}
return 'unknown';
},
isWindows() {
return this.detect() === 'windows';
},
isUnix() {
const platform = this.detect();
return platform === 'macos' || platform === 'linux';
},
// Get default shell for platform
getShell() {
return this.isWindows() ? 'cmd' : 'bash';
},
// Check if template is compatible with current platform
checkCompatibility(template) {
const platform = this.detect();
const issues = [];
// bash commands require Unix or Git Bash on Windows
if (template.command === 'bash' && platform === 'windows') {
issues.push({
level: 'warning',
message: 'bash command may not work on Windows without Git Bash or WSL'
});
}
// Check for Unix-specific shell features in args
if (template.args && Array.isArray(template.args)) {
const argStr = template.args.join(' ');
if (platform === 'windows') {
// Unix shell features that won't work in cmd
if (argStr.includes('$HOME') || argStr.includes('${HOME}')) {
issues.push({ level: 'warning', message: 'Uses $HOME - use %USERPROFILE% on Windows' });
}
if (argStr.includes('$(') || argStr.includes('`')) {
issues.push({ level: 'warning', message: 'Uses command substitution - not supported in cmd' });
}
if (argStr.includes(' | ')) {
issues.push({ level: 'info', message: 'Uses pipes - works in cmd but syntax may differ' });
}
}
}
return {
compatible: issues.filter(i => i.level === 'error').length === 0,
issues
};
},
// Get platform-specific command variant if available
getVariant(template) {
const platform = this.detect();
// Check if template has platform-specific variants
if (template.variants && template.variants[platform]) {
return { ...template, ...template.variants[platform] };
}
return template;
},
// Escape script for specific shell type
escapeForShell(script, shell) {
if (shell === 'bash' || shell === 'sh') {
// Unix: use single quotes, escape internal single quotes
return script.replace(/'/g, "'\\''");
} else if (shell === 'cmd') {
// Windows cmd: escape double quotes and special chars
return script.replace(/"/g, '\\"').replace(/%/g, '%%');
} else if (shell === 'powershell') {
// PowerShell: escape single quotes by doubling
return script.replace(/'/g, "''");
}
return script;
}
};
// ========== Hook State ==========
let hookConfig = {
global: { hooks: {} },
@@ -394,6 +491,29 @@ function convertToClaudeCodeFormat(hookData) {
});
commandStr += ' ' + additionalArgs.join(' ');
}
} else if (commandStr === 'node' && hookData.args.length >= 2 && hookData.args[0] === '-e') {
// Special handling for node -e commands using PlatformUtils
const script = hookData.args[1];
if (PlatformUtils.isWindows()) {
// Windows: use double quotes, escape internal quotes
const escapedScript = PlatformUtils.escapeForShell(script, 'cmd');
commandStr = `node -e "${escapedScript}"`;
} else {
// Unix: use single quotes to prevent shell interpretation
const escapedScript = PlatformUtils.escapeForShell(script, 'bash');
commandStr = `node -e '${escapedScript}'`;
}
// Handle any additional args after the script
if (hookData.args.length > 2) {
const additionalArgs = hookData.args.slice(2).map(arg => {
if (arg.includes(' ') && !arg.startsWith('"') && !arg.startsWith("'")) {
return `"${arg.replace(/"/g, '\\"')}"`;
}
return arg;
});
commandStr += ' ' + additionalArgs.join(' ');
}
} else {
// Default handling for other commands
const quotedArgs = hookData.args.map(arg => {
@@ -398,6 +398,7 @@ async function updateCliToolConfig(tool, updates) {
// Invalidate cache to ensure fresh data on page refresh
if (window.cacheManager) {
window.cacheManager.invalidate('cli-config');
window.cacheManager.invalidate('cli-tools-config');
}
}
return data;
@@ -524,16 +524,32 @@ async function installHookTemplate(templateId, scope) {
return;
}
// Platform compatibility check
const compatibility = PlatformUtils.checkCompatibility(template);
if (compatibility.issues.length > 0) {
const warnings = compatibility.issues.filter(i => i.level === 'warning');
if (warnings.length > 0) {
const platform = PlatformUtils.detect();
const warningMsg = warnings.map(w => w.message).join('; ');
console.warn(`[Hook Install] Platform: ${platform}, Warnings: ${warningMsg}`);
// Show warning but continue installation
showRefreshToast(`Warning: ${warningMsg}`, 'warning', 5000);
}
}
// Get platform-specific variant if available
const adaptedTemplate = PlatformUtils.getVariant(template);
const hookData = {
command: adaptedTemplate.command,
args: adaptedTemplate.args
};
if (adaptedTemplate.matcher) {
hookData.matcher = adaptedTemplate.matcher;
}
await saveHook(scope, adaptedTemplate.event, hookData);
}
async function uninstallHookTemplate(templateId) {


@@ -160,7 +160,7 @@ interface ClaudeWithSettingsParams {
prompt: string;
settingsPath: string;
endpointId: string;
- mode: 'analysis' | 'write' | 'auto';
+ mode: 'analysis' | 'write' | 'auto' | 'review';
workingDir: string;
cd?: string;
includeDirs?: string[];
@@ -351,7 +351,7 @@ type BuiltinCliTool = typeof BUILTIN_CLI_TOOLS[number];
const ParamsSchema = z.object({
tool: z.string().min(1, 'Tool is required'), // Accept any tool ID (built-in or custom endpoint)
prompt: z.string().min(1, 'Prompt is required'),
- mode: z.enum(['analysis', 'write', 'auto']).default('analysis'),
+ mode: z.enum(['analysis', 'write', 'auto', 'review']).default('analysis'),
format: z.enum(['plain', 'yaml', 'json']).default('plain'), // Multi-turn prompt concatenation format
model: z.string().optional(),
cd: z.string().optional(),
@@ -1176,7 +1176,8 @@ export const schema: ToolSchema = {
Modes:
- analysis: Read-only operations (default)
- write: File modifications allowed
- - auto: Full autonomous operations (codex only)`,
+ - auto: Full autonomous operations (codex only)
+ - review: Code review mode (codex uses 'codex review' subcommand, others accept but no operation change)`,
inputSchema: {
type: 'object',
properties: {
@@ -1191,8 +1192,8 @@ Modes:
},
mode: {
type: 'string',
- enum: ['analysis', 'write', 'auto'],
+ enum: ['analysis', 'write', 'auto', 'review'],
- description: 'Execution mode (default: analysis)',
+ description: 'Execution mode (default: analysis). review mode uses codex review subcommand for codex tool.',
default: 'analysis'
},
model: {


@@ -223,7 +223,21 @@ export function buildCommand(params: {
case 'codex':
useStdin = true;
- if (nativeResume?.enabled) {
+ if (mode === 'review') {
+   // codex review mode: non-interactive code review
+   // Format: codex review [OPTIONS] [PROMPT]
+   args.push('review');
+   // Default to --uncommitted if no specific review target in prompt
+   args.push('--uncommitted');
+   if (model) {
+     args.push('-m', model);
+   }
+   // codex review uses positional prompt argument, not stdin
+   useStdin = false;
+   if (prompt) {
+     args.push(prompt);
+   }
+ } else if (nativeResume?.enabled) {
args.push('resume');
if (nativeResume.isLatest) {
args.push('--last');


@@ -0,0 +1,316 @@
# codex-lens LSP Integration Execution Checklist
> Generated: 2026-01-15
> Based on: Gemini multi-round deep analysis
> Status: Ready for implementation
---
## Phase 1: LSP Server Foundation (Priority: HIGH)
### 1.1 Create LSP Server Entry Point
- [ ] **Install pygls dependency**
```bash
pip install pygls
```
- [ ] **Create `src/codexlens/lsp/__init__.py`**
- Export: `CodexLensServer`, `start_server`
- [ ] **Create `src/codexlens/lsp/server.py`**
- Class: `CodexLensServer(LanguageServer)`
- Initialize: `ChainSearchEngine`, `GlobalSymbolIndex`, `WatcherManager`
- Lifecycle: Start `WatcherManager` on `initialize` request
### 1.2 Implement Core LSP Handlers
- [ ] **`textDocument/definition`** handler
- Source: `GlobalSymbolIndex.search()` exact match
- Reference: `storage/global_index.py:173`
- Return: `Location(uri, Range)`
- [ ] **`textDocument/completion`** handler
- Source: `GlobalSymbolIndex.search(prefix_mode=True)`
- Reference: `storage/global_index.py:173`
- Return: `CompletionItem[]`
- [ ] **`workspace/symbol`** handler
- Source: `ChainSearchEngine.search_symbols()`
- Reference: `search/chain_search.py:618`
- Return: `SymbolInformation[]`
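The three handlers above all reduce to index lookups. A minimal sketch of that data flow, using a hypothetical in-memory index in place of `GlobalSymbolIndex` and plain dicts in place of lsprotocol's `Location`/`Position` types (all names here are illustrative, not the actual codex-lens API):

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class SymbolEntry:
    name: str
    uri: str
    line: int       # 0-based start line
    character: int  # 0-based start column

class InMemorySymbolIndex:
    """Stand-in for GlobalSymbolIndex: exact match backs definition,
    prefix match backs completion."""

    def __init__(self) -> None:
        self._symbols: Dict[str, SymbolEntry] = {}

    def add(self, entry: SymbolEntry) -> None:
        self._symbols[entry.name] = entry

    def search(self, query: str, prefix_mode: bool = False) -> List[SymbolEntry]:
        if prefix_mode:
            # Sorted prefix scan feeds CompletionItem[] in a stable order
            return [s for n, s in sorted(self._symbols.items()) if n.startswith(query)]
        hit = self._symbols.get(query)
        return [hit] if hit else []

def definition_location(index: InMemorySymbolIndex, name: str) -> Optional[dict]:
    """Shape of a textDocument/definition response (an LSP Location)."""
    hits = index.search(name)
    if not hits:
        return None
    sym = hits[0]
    pos = {"line": sym.line, "character": sym.character}
    return {"uri": sym.uri, "range": {"start": pos, "end": pos}}

index = InMemorySymbolIndex()
index.add(SymbolEntry("search_references", "file:///src/chain_search.py", 617, 4))
index.add(SymbolEntry("search_symbols", "file:///src/chain_search.py", 400, 4))

loc = definition_location(index, "search_references")
completions = [s.name for s in index.search("search_", prefix_mode=True)]
```

The real handlers would wrap this lookup in `@server.feature(...)` registrations and convert the dicts to lsprotocol types.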
### 1.3 Wire File Watcher to LSP Events
- [ ] **`workspace/didChangeWatchedFiles`** handler
- Delegate to: `WatcherManager.process_changes()`
- Reference: `watcher/manager.py:53`
- [ ] **`textDocument/didSave`** handler
- Trigger: `IncrementalIndexer` for single file
- Reference: `watcher/incremental_indexer.py`
### 1.4 Deliverables
- [ ] Unit tests for LSP handlers
- [ ] Integration test: definition lookup
- [ ] Integration test: completion prefix search
- [ ] Benchmark: query latency < 50ms
---
## Phase 2: Find References Implementation (Priority: MEDIUM)
### 2.1 Create `search_references` Method
- [ ] **Add to `src/codexlens/search/chain_search.py`**
```python
def search_references(
    self,
    symbol_name: str,
    source_path: Path,
    depth: int = -1,
) -> List[ReferenceResult]:
    """Find all references to a symbol across the project."""
```
### 2.2 Implement Parallel Query Orchestration
- [ ] **Collect index paths**
- Use: `_collect_index_paths()` existing method
- [ ] **Parallel query execution**
- ThreadPoolExecutor across all `_index.db`
- SQL: `SELECT * FROM code_relationships WHERE target_qualified_name = ?`
- Reference: `storage/sqlite_store.py:348`
- [ ] **Result aggregation**
- Deduplicate by file:line
- Sort by file path, then line number
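The fan-out/merge steps above can be sketched as follows, with each index represented by a plain dict standing in for one `_index.db` (the dict lookup is where the real engine would run the `code_relationships` SQL query; names and shapes are assumptions for illustration):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List, Tuple

Ref = Tuple[str, int]  # (file_path, line) -- also the deduplication key

def find_references(indexes: List[Dict[str, List[Ref]]], target: str) -> List[Ref]:
    """Query every index in parallel, then deduplicate and sort results."""
    def query(idx: Dict[str, List[Ref]]) -> List[Ref]:
        # Real engine: SELECT ... FROM code_relationships
        #              WHERE target_qualified_name = ?
        return idx.get(target, [])

    with ThreadPoolExecutor(max_workers=min(8, len(indexes) or 1)) as pool:
        results = list(pool.map(query, indexes))

    seen = set()
    merged = []
    for refs in results:
        for ref in refs:
            if ref not in seen:  # dedupe by file:line
                seen.add(ref)
                merged.append(ref)
    return sorted(merged)  # tuple order sorts by file path, then line

idx_a = {"pkg.mod.func": [("src/b.py", 10), ("src/a.py", 3)]}
idx_b = {"pkg.mod.func": [("src/a.py", 3), ("src/c.py", 7)]}
refs = find_references([idx_a, idx_b], "pkg.mod.func")
```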
### 2.3 LSP Handler
- [ ] **`textDocument/references`** handler
- Call: `ChainSearchEngine.search_references()`
- Return: `Location[]`
### 2.4 Deliverables
- [ ] Unit test: single-index reference lookup
- [ ] Integration test: cross-directory references
- [ ] Benchmark: < 200ms for 10+ index files
---
## Phase 3: Enhanced Hover Information (Priority: MEDIUM)
### 3.1 Implement Hover Data Extraction
- [ ] **Create `src/codexlens/lsp/hover_provider.py`**
```python
class HoverProvider:
    def get_hover_info(self, symbol: Symbol) -> HoverInfo:
        """Extract hover information for a symbol."""
```
### 3.2 Data Sources
- [ ] **Symbol metadata**
- Source: `GlobalSymbolIndex.search()`
- Fields: `kind`, `name`, `file_path`, `range`
- [ ] **Source code extraction**
- Source: `SQLiteStore.files` table
- Reference: `storage/sqlite_store.py:284`
- Extract: Lines from `range[0]` to `range[1]`
### 3.3 LSP Handler
- [ ] **`textDocument/hover`** handler
- Return: `Hover(contents=MarkupContent)`
- Format: Markdown with code fence
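Combining the symbol metadata with the extracted source lines, the Markdown formatting step might look like this (dicts stand in for lsprotocol's `Hover`/`MarkupContent`; the function name and signature are illustrative):

```python
from typing import Dict, List, Tuple

def format_hover(kind: str, name: str, file_path: str,
                 source_lines: List[str], rng: Tuple[int, int]) -> Dict:
    """Build a Hover-shaped payload: Markdown contents wrapping the
    symbol's source in a fenced code block."""
    start, end = rng  # 0-based inclusive line range of the symbol
    snippet = "\n".join(source_lines[start:end + 1])
    fence = "`" * 3  # built at runtime to keep this example self-contained
    value = (f"**{kind}** `{name}`  \n_{file_path}_\n\n"
             f"{fence}python\n{snippet}\n{fence}")
    return {"contents": {"kind": "markdown", "value": value}}

lines = ["def add(a, b):", "    return a + b"]
hover = format_hover("function", "add", "src/util.py", lines, (0, 1))
```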
### 3.4 Deliverables
- [ ] Unit test: hover for function/class/variable
- [ ] Integration test: multi-line function signature
---
## Phase 4: MCP Bridge for Claude Code (Priority: HIGH VALUE)
### 4.1 Define MCP Schema
- [ ] **Create `src/codexlens/mcp/__init__.py`**
- [ ] **Create `src/codexlens/mcp/schema.py`**
```python
@dataclass
class MCPContext:
    context_type: str
    symbol: Optional[SymbolInfo] = None
    definition: Optional[str] = None
    references: List[ReferenceInfo] = field(default_factory=list)
    related_symbols: List[SymbolInfo] = field(default_factory=list)
    version: str = "1.0"  # defaulted fields must follow non-defaulted ones
```
### 4.2 Create MCP Provider
- [ ] **Create `src/codexlens/mcp/provider.py`**
```python
class MCPProvider:
    def build_context(
        self,
        symbol_name: str,
        context_type: str = "symbol_explanation",
    ) -> MCPContext:
        """Build structured context for LLM consumption."""
```
### 4.3 Context Building Logic
- [ ] **Symbol lookup**
- Use: `GlobalSymbolIndex.search()`
- [ ] **Definition extraction**
- Use: `SQLiteStore` file content
- [ ] **References collection**
- Use: `ChainSearchEngine.search_references()`
- [ ] **Related symbols**
- Use: `code_relationships` for imports/calls
### 4.4 Hook Integration Points
- [ ] **Document `pre-tool` hook interface**
```python
def pre_tool_hook(action: str, params: dict) -> MCPContext:
    """Called before LLM action to gather context."""
```
- [ ] **Document `post-tool` hook interface**
```python
def post_tool_hook(action: str, result: Any) -> None:
    """Called after LSP action for proactive caching."""
```
### 4.5 Deliverables
- [ ] MCP schema JSON documentation
- [ ] Unit test: context building
- [ ] Integration test: hook → MCP → JSON output
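The hook → MCP → JSON path from the deliverables above can be sketched end to end. This is a self-contained toy (field names follow the schema sketch in 4.1; the hook body, sample symbol, and dict shapes are all assumptions, not the real provider):

```python
import json
from dataclasses import asdict, dataclass, field
from typing import List, Optional

@dataclass
class MCPContext:
    context_type: str
    symbol: Optional[dict] = None
    definition: Optional[str] = None
    references: List[dict] = field(default_factory=list)
    version: str = "1.0"

def pre_tool_hook(action: str, params: dict) -> str:
    """Toy hook: build a context for the requested symbol and emit it
    as JSON for LLM consumption."""
    ctx = MCPContext(
        context_type="symbol_explanation",
        symbol={"name": params["symbol"], "kind": "function"},
        definition="def add(a, b):\n    return a + b",
        references=[{"file": "src/a.py", "line": 3}],
    )
    return json.dumps(asdict(ctx), indent=2)

payload = json.loads(pre_tool_hook("explain", {"symbol": "add"}))
```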
---
## Phase 5: Advanced Features (Priority: LOW)
### 5.1 Custom LSP Commands
- [ ] **`codexlens/hybridSearch`**
- Expose: `HybridSearchEngine.search()`
- Reference: `search/hybrid_search.py`
- [ ] **`codexlens/symbolGraph`**
- Return: Symbol relationship graph
- Source: `code_relationships` table
### 5.2 Proactive Context Caching
- [ ] **Implement `post-tool` hook caching**
- After `go-to-definition`: pre-fetch references
- Cache TTL: 5 minutes
- Storage: In-memory LRU
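A minimal sketch of the proposed in-memory LRU with TTL (the 300-second default mirrors the 5-minute TTL above; the class name and the injectable `now` parameter are illustrative choices, the latter making expiry testable without sleeping):

```python
import time
from collections import OrderedDict
from typing import Any, Hashable, Optional

class TTLLRUCache:
    """In-memory LRU cache whose entries expire after a fixed TTL."""

    def __init__(self, max_size: int = 128, ttl: float = 300.0) -> None:
        self.max_size = max_size
        self.ttl = ttl
        # key -> (value, expiry); insertion order tracks recency
        self._data: "OrderedDict[Hashable, tuple]" = OrderedDict()

    def put(self, key: Hashable, value: Any, now: Optional[float] = None) -> None:
        now = time.monotonic() if now is None else now
        self._data.pop(key, None)
        self._data[key] = (value, now + self.ttl)
        while len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict least recently used

    def get(self, key: Hashable, now: Optional[float] = None) -> Optional[Any]:
        now = time.monotonic() if now is None else now
        item = self._data.get(key)
        if item is None:
            return None
        value, expires = item
        if now >= expires:
            del self._data[key]  # lazy expiry on read
            return None
        self._data.move_to_end(key)  # mark as recently used
        return value

cache = TTLLRUCache(max_size=2, ttl=300.0)
cache.put("refs:add", ["src/a.py:3"], now=0.0)
```

After `go-to-definition`, the references pre-fetch would `put` under a key like the symbol's qualified name, so a subsequent `find-references` hits the cache instead of re-querying every index.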
### 5.3 Performance Optimizations
- [ ] **Connection pooling**
- Reference: `storage/sqlite_store.py` thread-local
- [ ] **Result caching**
- LRU cache for frequent queries
- Invalidate on file change
---
## File Structure After Implementation
```
src/codexlens/
├── lsp/ # NEW
│ ├── __init__.py
│ ├── server.py # Main LSP server
│ ├── handlers.py # LSP request handlers
│ ├── hover_provider.py # Hover information
│ └── utils.py # LSP utilities
├── mcp/ # NEW
│ ├── __init__.py
│ ├── schema.py # MCP data models
│ ├── provider.py # Context builder
│ └── hooks.py # Hook interfaces
├── search/
│ ├── chain_search.py # MODIFY: add search_references()
│ └── ...
└── ...
```
---
## Dependencies to Add
```toml
# pyproject.toml
[project.optional-dependencies]
lsp = [
"pygls>=1.3.0",
]
```
---
## Testing Strategy
### Unit Tests
```
tests/
├── lsp/
│ ├── test_definition.py
│ ├── test_completion.py
│ ├── test_references.py
│ └── test_hover.py
└── mcp/
├── test_schema.py
└── test_provider.py
```
### Integration Tests
- [ ] Full LSP handshake test
- [ ] Multi-file project navigation
- [ ] Incremental index update via didSave
### Performance Benchmarks
| Operation | Target | Acceptable |
|-----------|--------|------------|
| Definition lookup | < 30ms | < 50ms |
| Completion (100 items) | < 50ms | < 100ms |
| Find references (10 files) | < 150ms | < 200ms |
| Initial indexing (1000 files) | < 60s | < 120s |
---
## Execution Order
```
Week 1: Phase 1.1 → 1.2 → 1.3 → 1.4
Week 2: Phase 2.1 → 2.2 → 2.3 → 2.4
Week 3: Phase 3 + Phase 4.1 → 4.2
Week 4: Phase 4.3 → 4.4 → 4.5
Week 5: Phase 5 (optional) + Polish
```
---
## Quick Start Commands
```bash
# Install LSP dependencies
pip install pygls
# Run LSP server (after implementation)
python -m codexlens.lsp --stdio
# Test LSP connection
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}' | python -m codexlens.lsp --stdio
```
---
## Reference Links
- pygls Documentation: https://pygls.readthedocs.io/
- LSP Specification: https://microsoft.github.io/language-server-protocol/
- codex-lens GlobalSymbolIndex: `storage/global_index.py:173`
- codex-lens ChainSearchEngine: `search/chain_search.py:618`
- codex-lens WatcherManager: `watcher/manager.py:53`

File diff suppressed because it is too large


@@ -1,6 +1,6 @@
{
"name": "claude-code-workflow",
- "version": "6.3.31",
+ "version": "6.3.33",
"description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
"type": "module",
"main": "ccw/src/index.js",