mirror of https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-06 01:54:11 +08:00

Compare commits

13 Commits:

- 464f3343f3
- bb6cf42df6
- 0f0cb7e08e
- 39d070eab6
- 9ccaa7e2fd
- eeb90949ce
- 7b677b20fb
- e2d56bc08a
- d515090097
- d81dfaf143
- d7e5ee44cc
- dde39fc6f5
- 9b4fdc1868
@@ -29,9 +29,8 @@ Available CLI endpoints are dynamically defined by the config file:
 ```
 Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
 ```
-- **After CLI call**: Stop immediately - let CLI execute in background, do NOT
-  poll with TaskOutput
+- **After CLI call**: Stop immediately - let CLI execute in background
 
 ### CLI Analysis Calls
 - **Wait for results**: MUST wait for CLI analysis to complete before taking any write action. Do NOT proceed with fixes while analysis is running
 - **Value every call**: Each CLI invocation is valuable and costly. NEVER waste analysis results:
@@ -308,3 +308,14 @@ When analysis is complete, ensure:
 - **Relevance**: Directly addresses user's specified requirements
 - **Actionability**: Provides concrete next steps and recommendations
 
+## Output Size Limits
+
+**Per-role limits** (prevent context overflow):
+- `analysis.md`: < 3000 words
+- `analysis-*.md`: < 2000 words each (max 5 sub-documents)
+- Total: < 15000 words per role
+
+**Strategies**: Be concise, use bullet points, reference rather than repeat, prioritize top 3-5 items, defer details
+
+**If exceeded**: Split essential vs nice-to-have, move extras to `analysis-appendix.md` (counts toward limit), use executive summary style
+
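The per-role word budgets above can be enforced mechanically before synthesis. A minimal sketch in Node (the function names and the limit table are illustrative, not part of the workflow tooling):

```javascript
// Count whitespace-separated words in a document body.
function wordCount(text) {
  return text.split(/\s+/).filter(Boolean).length;
}

// Hypothetical per-file budgets mirroring the documented limits:
// analysis.md gets 3000 words, every other analysis file gets 2000.
function wordLimitFor(filename) {
  return filename === 'analysis.md' ? 3000 : 2000;
}

// True when the file fits its budget.
function withinLimit(filename, text) {
  return wordCount(text) <= wordLimitFor(filename);
}
```

A role's output could be scanned with this check before synthesis and over-budget files flagged for the `analysis-appendix.md` split described above.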
@@ -424,6 +424,17 @@ CONTEXT_VARS:
 - **Agent execution failure**: Agent-specific retry with minimal dependencies
 - **Template loading issues**: Agent handles graceful degradation
 - **Synthesis conflicts**: Synthesis highlights disagreements without resolution
+- **Context overflow protection**: See below for automatic context management
+
+## Context Overflow Protection
+
+**Per-role limits**: See `conceptual-planning-agent.md` (< 3000 words main, < 2000 words sub-docs, max 5 sub-docs)
+
+**Synthesis protection**: If total analysis > 100KB, synthesis reads only `analysis.md` files (not sub-documents)
+
+**Recovery**: Check logs → reduce scope (--count 2) → use --summary-only → manual synthesis
+
+**Prevention**: Start with --count 3, use structured topic format, review output sizes before synthesis
 
 ## Reference Information
@@ -132,7 +132,7 @@ Scan and analyze workflow session directories:
 
 **Staleness criteria**:
 - Active sessions: No modification >7 days + no related git commits
-- Archives: >30 days old + no feature references in project.json
+- Archives: >30 days old + no feature references in project-tech.json
 - Lite-plan: >7 days old + plan.json not executed
 - Debug: >3 days old + issue not in recent commits
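The staleness thresholds above reduce to a simple predicate over a session's last-modified time. A sketch (day counts are taken from the criteria; the session-type keys are assumptions, and the git-commit and plan-execution checks are omitted):

```javascript
const DAY_MS = 86_400_000; // milliseconds per day

// Days elapsed since a given mtime (epoch milliseconds).
function daysSince(mtimeMs, nowMs = Date.now()) {
  return (nowMs - mtimeMs) / DAY_MS;
}

// Per-type thresholds mirroring the staleness criteria list.
const STALE_AFTER_DAYS = { active: 7, archive: 30, 'lite-plan': 7, debug: 3 };

// True when a session of the given type has exceeded its threshold.
function isStale(type, mtimeMs, nowMs = Date.now()) {
  const limit = STALE_AFTER_DAYS[type];
  return limit !== undefined && daysSince(mtimeMs, nowMs) > limit;
}
```

In a real scan this would be one of two conjoined conditions; the secondary checks (no related git commits, no feature references, plan not executed) still apply.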
@@ -443,8 +443,8 @@ if (selectedCategories.includes('Sessions')) {
   }
 }
 
-// Update project.json if features referenced deleted sessions
-const projectPath = '.workflow/project.json'
+// Update project-tech.json if features referenced deleted sessions
+const projectPath = '.workflow/project-tech.json'
 if (fileExists(projectPath)) {
   const project = JSON.parse(Read(projectPath))
   const deletedPaths = new Set(results.deleted)
@@ -108,11 +108,24 @@ Analyze project for workflow initialization and generate .workflow/project-tech.
 2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
 
 ## Task
-Generate complete project-tech.json with:
-- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
-- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
-- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
-- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
+Generate complete project-tech.json following the schema structure:
+- project_name: "${projectName}"
+- initialized_at: ISO 8601 timestamp
+- overview: {
+    description: "Brief project description",
+    technology_stack: {
+      languages: [{name, file_count, primary}],
+      frameworks: ["string"],
+      build_tools: ["string"],
+      test_frameworks: ["string"]
+    },
+    architecture: {style, layers: [], patterns: []},
+    key_components: [{name, path, description, importance}]
+  }
+- features: []
+- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
+- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
+- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}
 
 ## Analysis Requirements
@@ -132,7 +145,7 @@ Generate complete project-tech.json with:
 1. Structural scan: get_modules_by_depth.sh, find, wc -l
 2. Semantic analysis: Gemini for patterns/architecture
 3. Synthesis: Merge findings
-4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
+4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
 5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
 6. Report: Return brief completion summary
@@ -181,16 +194,16 @@ console.log(`
 ✓ Project initialized successfully
 
 ## Project Overview
-Name: ${projectTech.project_metadata.name}
-Description: ${projectTech.technology_analysis.description}
+Name: ${projectTech.project_name}
+Description: ${projectTech.overview.description}
 
 ### Technology Stack
-Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
-Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}
+Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
+Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
 
 ### Architecture
-Style: ${projectTech.technology_analysis.architecture.style}
-Components: ${projectTech.technology_analysis.key_components.length} core modules
+Style: ${projectTech.overview.architecture.style}
+Components: ${projectTech.overview.key_components.length} core modules
 
 ---
 Files created:
@@ -531,11 +531,11 @@ if (hasUnresolvedIssues(reviewResult)) {
 
 **Trigger**: After all executions complete (regardless of code review)
 
-**Skip Condition**: Skip if `.workflow/project.json` does not exist
+**Skip Condition**: Skip if `.workflow/project-tech.json` does not exist
 
 **Operations**:
 ```javascript
-const projectJsonPath = '.workflow/project.json'
+const projectJsonPath = '.workflow/project-tech.json'
 if (!fileExists(projectJsonPath)) return // Silent skip
 
 const projectJson = JSON.parse(Read(projectJsonPath))
@@ -107,13 +107,13 @@ rm -f .workflow/archives/$SESSION_ID/.archiving
 Manifest: Updated with N total sessions
 ```
 
-### Phase 4: Update project.json (Optional)
+### Phase 4: Update project-tech.json (Optional)
 
-**Skip if**: `.workflow/project.json` doesn't exist
+**Skip if**: `.workflow/project-tech.json` doesn't exist
 
 ```bash
 # Check
-test -f .workflow/project.json || echo "SKIP"
+test -f .workflow/project-tech.json || echo "SKIP"
 ```
 
 **If exists**, add feature entry:
@@ -134,6 +134,32 @@ test -f .workflow/project.json || echo "SKIP"
 ✓ Feature added to project registry
 ```
 
+### Phase 5: Ask About Solidify (Always)
+
+After successful archival, prompt user to capture learnings:
+
+```javascript
+AskUserQuestion({
+  questions: [{
+    question: "Would you like to solidify learnings from this session into project guidelines?",
+    header: "Solidify",
+    options: [
+      { label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
+      { label: "Skip", description: "Archive complete, no learnings to capture" }
+    ],
+    multiSelect: false
+  }]
+})
+```
+
+**If "Yes, solidify now"**: Execute `/workflow:session:solidify` with the archived session ID.
+
+**Output**:
+```
+Session archived successfully.
+→ Run /workflow:session:solidify to capture learnings (recommended)
+```
+
 ## Error Recovery
 
 | Phase | Symptom | Recovery |
@@ -149,5 +175,6 @@ test -f .workflow/project.json || echo "SKIP"
 Phase 1: find session → create .archiving marker
 Phase 2: read key files → build manifest entry (no writes)
 Phase 3: mkdir → mv → update manifest.json → rm marker
-Phase 4: update project.json features array (optional)
+Phase 4: update project-tech.json features array (optional)
+Phase 5: ask user → solidify learnings (optional)
 ```
@@ -16,7 +16,7 @@ examples:
 Manages workflow sessions with three operation modes: discovery (manual), auto (intelligent), and force-new.
 
 **Dual Responsibility**:
-1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
+1. **Project-level initialization** (first-time only): Creates `.workflow/project-tech.json` for feature registry
 2. **Session-level initialization** (always): Creates session directory structure
 
 ## Session Types
@@ -237,7 +237,7 @@ Execute complete context-search-agent workflow for implementation planning:
 
 ### Phase 1: Initialization & Pre-Analysis
 1. **Project State Loading**:
-   - Read and parse `.workflow/project-tech.json`. Use its `technology_analysis` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
+   - Read and parse `.workflow/project-tech.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
    - Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
    - If files don't exist, proceed with fresh analysis.
 2. **Detection**: Check for existing context-package (early exit if valid)
@@ -255,7 +255,7 @@ Execute all discovery tracks:
 
 ### Phase 3: Synthesis, Assessment & Packaging
 1. Apply relevance scoring and build dependency graph
 2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
-3. **Populate `project_context`**: Directly use the `technology_analysis` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
+3. **Populate `project_context`**: Directly use the `overview` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
 4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
 5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
 6. Perform conflict detection with risk assessment
@@ -1,7 +1,7 @@
 {
   "_metadata": {
     "version": "2.0.0",
-    "total_commands": 88,
+    "total_commands": 45,
     "total_agents": 16,
     "description": "Unified CCW-Help command index"
   },
@@ -485,6 +485,15 @@
     "category": "general",
     "difficulty": "Intermediate",
     "source": "../../../commands/enhance-prompt.md"
   },
+  {
+    "name": "cli-init",
+    "command": "/cli:cli-init",
+    "description": "Initialize CLI tool configurations (.gemini/, .qwen/) with technology-aware ignore rules",
+    "arguments": "[--tool gemini|qwen|all] [--preview] [--output path]",
+    "category": "cli",
+    "difficulty": "Intermediate",
+    "source": "../../../commands/cli/cli-init.md"
+  }
 ],
@@ -4,7 +4,7 @@
 
 - All replies use Simplified Chinese
 - Technical terms stay in English; a Chinese explanation may be added on first occurrence
-- Code content (variable names, comments) stays in English
+- Code variable names stay in English; comments use Chinese
 
 ## Formatting Rules
@@ -0,0 +1,141 @@
+{
+  "$schema": "http://json-schema.org/draft-07/schema#",
+  "title": "Project Guidelines Schema",
+  "description": "Schema for project-guidelines.json - user-maintained rules and constraints",
+  "type": "object",
+  "required": ["conventions", "constraints", "_metadata"],
+  "properties": {
+    "conventions": {
+      "type": "object",
+      "description": "Coding conventions and standards",
+      "required": ["coding_style", "naming_patterns", "file_structure", "documentation"],
+      "properties": {
+        "coding_style": {
+          "type": "array",
+          "items": { "type": "string" },
+          "description": "Coding style rules (e.g., 'Use strict TypeScript mode', 'Prefer const over let')"
+        },
+        "naming_patterns": {
+          "type": "array",
+          "items": { "type": "string" },
+          "description": "Naming conventions (e.g., 'Use camelCase for variables', 'Use PascalCase for components')"
+        },
+        "file_structure": {
+          "type": "array",
+          "items": { "type": "string" },
+          "description": "File organization rules (e.g., 'One component per file', 'Tests alongside source files')"
+        },
+        "documentation": {
+          "type": "array",
+          "items": { "type": "string" },
+          "description": "Documentation requirements (e.g., 'JSDoc for public APIs', 'README for each module')"
+        }
+      }
+    },
+    "constraints": {
+      "type": "object",
+      "description": "Technical constraints and boundaries",
+      "required": ["architecture", "tech_stack", "performance", "security"],
+      "properties": {
+        "architecture": {
+          "type": "array",
+          "items": { "type": "string" },
+          "description": "Architecture constraints (e.g., 'No circular dependencies', 'Services must be stateless')"
+        },
+        "tech_stack": {
+          "type": "array",
+          "items": { "type": "string" },
+          "description": "Technology constraints (e.g., 'No new dependencies without review', 'Use native fetch over axios')"
+        },
+        "performance": {
+          "type": "array",
+          "items": { "type": "string" },
+          "description": "Performance requirements (e.g., 'API response < 200ms', 'Bundle size < 500KB')"
+        },
+        "security": {
+          "type": "array",
+          "items": { "type": "string" },
+          "description": "Security requirements (e.g., 'Sanitize all user input', 'No secrets in code')"
+        }
+      }
+    },
+    "quality_rules": {
+      "type": "array",
+      "description": "Enforceable quality rules",
+      "items": {
+        "type": "object",
+        "required": ["rule", "scope"],
+        "properties": {
+          "rule": {
+            "type": "string",
+            "description": "The quality rule statement"
+          },
+          "scope": {
+            "type": "string",
+            "description": "Where the rule applies (e.g., 'all', 'src/**', 'tests/**')"
+          },
+          "enforced_by": {
+            "type": "string",
+            "description": "How the rule is enforced (e.g., 'eslint', 'pre-commit', 'code-review')"
+          }
+        }
+      }
+    },
+    "learnings": {
+      "type": "array",
+      "description": "Project learnings captured from workflow sessions",
+      "items": {
+        "type": "object",
+        "required": ["date", "insight"],
+        "properties": {
+          "date": {
+            "type": "string",
+            "format": "date",
+            "description": "Date the learning was captured (YYYY-MM-DD)"
+          },
+          "session_id": {
+            "type": "string",
+            "description": "WFS session ID where the learning originated"
+          },
+          "insight": {
+            "type": "string",
+            "description": "The learning or insight captured"
+          },
+          "context": {
+            "type": "string",
+            "description": "Additional context about when/why this learning applies"
+          },
+          "category": {
+            "type": "string",
+            "enum": ["architecture", "performance", "security", "testing", "workflow", "other"],
+            "description": "Category of the learning"
+          }
+        }
+      }
+    },
+    "_metadata": {
+      "type": "object",
+      "required": ["created_at", "version"],
+      "properties": {
+        "created_at": {
+          "type": "string",
+          "format": "date-time",
+          "description": "ISO 8601 timestamp of creation"
+        },
+        "version": {
+          "type": "string",
+          "description": "Schema version (e.g., '1.0.0')"
+        },
+        "last_updated": {
+          "type": "string",
+          "format": "date-time",
+          "description": "ISO 8601 timestamp of last update"
+        },
+        "updated_by": {
+          "type": "string",
+          "description": "Who/what last updated the file (e.g., 'user', 'workflow:session:solidify')"
+        }
+      }
+    }
+  }
+}
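A minimal document satisfying the schema's required fields might look like the object below. The values are illustrative; only the key structure follows the schema, and the trailing check is a shallow stand-in for real JSON Schema validation:

```javascript
// Minimal project-guidelines.json content: only schema-required fields populated.
const guidelines = {
  conventions: {
    coding_style: ['Use strict TypeScript mode'],
    naming_patterns: ['Use camelCase for variables'],
    file_structure: ['One component per file'],
    documentation: ['JSDoc for public APIs'],
  },
  constraints: {
    architecture: ['No circular dependencies'],
    tech_stack: ['No new dependencies without review'],
    performance: ['API response < 200ms'],
    security: ['No secrets in code'],
  },
  _metadata: { created_at: '2025-01-01T00:00:00Z', version: '1.0.0' },
};

// Shallow check of the schema's top-level `required` list.
const topLevelOk = ['conventions', 'constraints', '_metadata']
  .every((key) => key in guidelines);
```

In practice a draft-07 validator (e.g. a library like ajv) would enforce the nested `required` lists and types as well.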
@@ -1,7 +1,7 @@
 {
   "$schema": "http://json-schema.org/draft-07/schema#",
-  "title": "Project Metadata Schema",
-  "description": "Workflow initialization metadata for project-level context",
+  "title": "Project Tech Schema",
+  "description": "Schema for project-tech.json - auto-generated technical analysis (stack, architecture, components)",
   "type": "object",
   "required": [
     "project_name",
@@ -85,11 +85,14 @@ Tools are selected based on **tags** defined in the configuration. Use tags to m
 
 ```bash
 # Explicit tool selection
-ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write>
+ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write|review>
 
 # Model override
 ccw cli -p "<PROMPT>" --tool <tool-id> --model <model-id> --mode <analysis|write>
 
+# Code review (codex only)
+ccw cli -p "<PROMPT>" --tool codex --mode review
+
 # Tag-based auto-selection (future)
 ccw cli -p "<PROMPT>" --tags <tag1,tag2> --mode <analysis|write>
 ```
@@ -330,6 +333,14 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
   - Use For: Feature implementation, bug fixes, documentation, code creation, file modifications
   - Specification: Requires explicit `--mode write`
 
+- **`review`**
+  - Permission: Read-only (code review output)
+  - Use For: Git-aware code review of uncommitted changes, branch diffs, specific commits
+  - Specification: **codex only** - uses `codex review` subcommand with `--uncommitted` by default
+  - Tool Behavior:
+    - `codex`: Executes `codex review --uncommitted [prompt]` for structured code review
+    - Other tools (gemini/qwen/claude): Accept mode but no operation change (treated as analysis)
+
 ### Command Options
 
 - **`--tool <tool>`**
@@ -337,8 +348,9 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
   - Default: First enabled tool in config
 
 - **`--mode <mode>`**
-  - Description: **REQUIRED**: analysis, write
+  - Description: **REQUIRED**: analysis, write, review
   - Default: **NONE** (must specify)
+  - Note: `review` mode triggers `codex review` subcommand for codex tool only
 
 - **`--model <model>`**
   - Description: Model override
@@ -463,6 +475,17 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
 " --tool <tool-id> --mode write
 ```
 
+**Code Review Task** (codex review mode):
+```bash
+# Review uncommitted changes (default)
+ccw cli -p "Focus on security vulnerabilities and error handling" --tool codex --mode review
+
+# Review with custom instructions
+ccw cli -p "Check for breaking changes in API contracts and backward compatibility" --tool codex --mode review
+```
+
+> **Note**: `--mode review` only triggers special behavior for `codex` tool (uses `codex review --uncommitted`). Other tools accept the mode but execute as standard analysis.
+
 ---
 
 ### Permission Framework
@@ -472,6 +495,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
 **Mode Hierarchy**:
 - `analysis`: Read-only, safe for auto-execution
 - `write`: Create/Modify/Delete files, full operations - requires explicit `--mode write`
+- `review`: Git-aware code review (codex only), read-only output - requires explicit `--mode review`
 - **Exception**: User provides clear instructions like "modify", "create", "implement"
 
 ---
@@ -502,7 +526,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
 ### Planning Checklist
 
 - [ ] **Purpose defined** - Clear goal and intent
-- [ ] **Mode selected** - `--mode analysis|write`
+- [ ] **Mode selected** - `--mode analysis|write|review`
 - [ ] **Context gathered** - File references + memory (default `@**/*`)
 - [ ] **Directory navigation** - `--cd` and/or `--includeDirs`
 - [ ] **Tool selected** - Explicit `--tool` or tag-based auto-selection
@@ -514,5 +538,5 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
 1. **Load configuration** - Read `cli-tools.json` for available tools
 2. **Match by tags** - Select tool based on task requirements
 3. **Validate enabled** - Ensure selected tool is enabled
-4. **Execute with mode** - Always specify `--mode analysis|write`
+4. **Execute with mode** - Always specify `--mode analysis|write|review`
 5. **Fallback gracefully** - Use secondary model or next matching tool on failure
@@ -11,46 +11,21 @@ argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
 
 ## Queue ID Requirement (MANDATORY)
 
-**Queue ID is REQUIRED.** You MUST specify which queue to execute via `--queue <queue-id>`.
+**`--queue <queue-id>` parameter is REQUIRED**
 
-### If Queue ID Not Provided
+### When Queue ID Not Provided
 
-When `--queue` parameter is missing, you MUST:
-
-1. **List available queues** by running:
-   ```javascript
-   const result = shell_command({ command: "ccw issue queue list --brief --json" })
-   ```
-
-2. **Parse and display queues** to user:
-   ```
-   Available Queues:
-   ID                 Status     Progress   Issues
-   -----------------------------------------------------------
-   → QUE-20251215-001 active     3/10       ISS-001, ISS-002
-     QUE-20251210-002 active     0/5        ISS-003
-     QUE-20251205-003 completed  8/8        ISS-004
-   ```
-
-3. **Stop and ask user** to specify which queue to execute:
-   ```javascript
-   AskUserQuestion({
-     questions: [{
-       question: "Which queue would you like to execute?",
-       header: "Queue",
-       multiSelect: false,
-       options: [
-         // Generate from parsed queue list - only show active/pending queues
-         { label: "QUE-20251215-001", description: "active, 3/10 completed, Issues: ISS-001, ISS-002" },
-         { label: "QUE-20251210-002", description: "active, 0/5 completed, Issues: ISS-003" }
-       ]
-     }]
-   })
-   ```
-
-4. **After user selection**, continue execution with the selected queue ID.
-
-**DO NOT auto-select queues.** Explicit user confirmation is required to prevent accidental execution of wrong queue.
+```
+List queues → Output options → Stop and wait for user
+```
+
+**Actions**:
+
+1. `ccw issue queue list --brief --json` - Fetch queue list
+2. Filter active/pending status, output formatted list
+3. **Stop execution**, prompt user to rerun with `codex -p "@.codex/prompts/issue-execute.md --queue QUE-xxx"`
+
+**No auto-selection** - User MUST explicitly specify queue-id
 
 ## Worktree Mode (Recommended for Parallel Execution)
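The list-and-filter step can be sketched as a pure function over the JSON that `ccw issue queue list --brief --json` is assumed to emit. The field names (`id`, `status`, `completed`, `total`) are assumptions for illustration, not a documented output contract:

```javascript
// Keep only queues a user may select (active/pending) and
// format one display line per queue.
function formatSelectableQueues(queues) {
  return queues
    .filter((q) => q.status === 'active' || q.status === 'pending')
    .map((q) => `${q.id}  ${q.status}  ${q.completed}/${q.total}`);
}
```

Completed queues are dropped before display, which matches the rule that only active/pending queues are offered and none is auto-selected.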
@@ -147,33 +122,19 @@ codex -p "@.codex/prompts/issue-execute.md --worktree /path/to/existing/worktree
 
 **Completion - User Choice:**
 
-When all solutions are complete, ask user what to do with the worktree branch:
+When all solutions are complete, output options and wait for user to specify:
 
-```javascript
-AskUserQuestion({
-  questions: [{
-    question: "All solutions completed in worktree. What would you like to do with the changes?",
-    header: "Merge",
-    multiSelect: false,
-    options: [
-      {
-        label: "Merge to main",
-        description: "Merge worktree branch into main branch and cleanup"
-      },
-      {
-        label: "Create PR",
-        description: "Push branch and create a pull request for review"
-      },
-      {
-        label: "Keep branch",
-        description: "Keep the branch for manual handling, cleanup worktree only"
-      }
-    ]
-  }]
-})
-```
+```
+All solutions completed in worktree. Choose next action:
+
+1. Merge to main - Merge worktree branch into main and cleanup
+2. Create PR - Push branch and create pull request (Recommended for parallel execution)
+3. Keep branch - Keep branch for manual handling, cleanup worktree only
+
+Please respond with: 1, 2, or 3
+```
 
-**Based on user selection:**
+**Based on user response:**
 
 ```bash
 # Disable cleanup trap before intentional cleanup
@@ -327,9 +288,154 @@ Expected solution structure:
|
||||
}
|
||||
```
|
||||
|
||||
## Step 2.1: Determine Execution Strategy
|
||||
|
||||
After parsing the solution, analyze the issue type and task actions to determine the appropriate execution strategy. The strategy defines additional verification steps and quality gates beyond the basic implement-test-verify cycle.
|
||||
|
||||
### Strategy Auto-Matching
|
||||
|
||||
**Matching Priority**:
|
||||
1. Explicit `solution.strategy_type` if provided
|
||||
2. Infer from `task.action` keywords (Debug, Fix, Feature, Refactor, Test, etc.)
|
||||
3. Infer from `solution.description` and `task.title` content
|
||||
4. Default to "standard" if no clear match
|
||||
|
||||
**Strategy Types and Matching Keywords**:
|
||||
|
||||
| Strategy Type | Match Keywords | Description |
|
||||
|---------------|----------------|-------------|
|
||||
| `debug` | Debug, Diagnose, Trace, Investigate | Bug diagnosis with logging and debugging |
|
||||
| `bugfix` | Fix, Patch, Resolve, Correct | Bug fixing with root cause analysis |
|
||||
| `feature` | Feature, Add, Implement, Create, Build | New feature development with full testing |
|
||||
| `refactor` | Refactor, Restructure, Optimize, Cleanup | Code restructuring with behavior preservation |
|
||||
| `test` | Test, Coverage, E2E, Integration | Test implementation with coverage checks |
|
||||
| `performance` | Performance, Optimize, Speed, Memory | Performance optimization with benchmarking |
|
||||
| `security` | Security, Vulnerability, CVE, Audit | Security fixes with vulnerability checks |
|
||||
| `hotfix` | Hotfix, Urgent, Critical, Emergency | Urgent fixes with minimal changes |
|
||||
| `documentation` | Documentation, Docs, Comment, README | Documentation updates with example validation |
|
||||
| `chore` | Chore, Dependency, Config, Maintenance | Maintenance tasks with compatibility checks |
|
||||
| `standard` | (default) | Standard implementation without extra steps |
|
||||
|
||||
### Strategy-Specific Execution Phases
|
||||
|
||||
Each strategy extends the basic cycle with additional quality gates:
|
||||
|
||||
#### 1. Debug → Reproduce → Instrument → Diagnose → Implement → Test → Verify → Cleanup
|
||||
|
||||
```
|
||||
REPRODUCE → INSTRUMENT → DIAGNOSE → IMPLEMENT → TEST → VERIFY → CLEANUP
|
||||
```
|
||||
|
||||
#### 2. Bugfix → Root Cause → Implement → Test → Edge Cases → Regression → Verify
|
||||
|
||||
```
|
||||
ROOT_CAUSE → IMPLEMENT → TEST → EDGE_CASES → REGRESSION → VERIFY
|
||||
```
|
||||
|
||||
#### 3. Feature → Design Review → Unit Tests → Implement → Integration Tests → Code Review → Docs → Verify
|
||||
|
||||
```
|
||||
DESIGN_REVIEW → UNIT_TESTS → IMPLEMENT → INTEGRATION_TESTS → TEST → CODE_REVIEW → DOCS → VERIFY
|
||||
```
|
||||
|
||||
#### 4. Refactor → Baseline Tests → Implement → Test → Behavior Check → Performance Compare → Verify
|
||||
|
||||
```
|
||||
BASELINE_TESTS → IMPLEMENT → TEST → BEHAVIOR_PRESERVATION → PERFORMANCE_CMP → VERIFY
|
||||
```
|
||||
|
||||
#### 5. Test → Coverage Baseline → Test Design → Implement → Coverage Check → Verify
|
||||
|
||||
```
|
||||
COVERAGE_BASELINE → TEST_DESIGN → IMPLEMENT → COVERAGE_CHECK → VERIFY
|
||||
```
|
||||
|
||||
#### 6. Performance → Profiling → Bottleneck → Implement → Benchmark → Test → Verify
|
||||
|
||||
```
|
||||
PROFILING → BOTTLENECK → IMPLEMENT → BENCHMARK → TEST → VERIFY
|
||||
```
|
||||
|
||||
#### 7. Security → Vulnerability Scan → Implement → Security Test → Penetration Test → Verify
|
||||
|
||||
```
|
||||
VULNERABILITY_SCAN → IMPLEMENT → SECURITY_TEST → PENETRATION_TEST → VERIFY
|
||||
```
|
||||
|
||||
#### 8. Hotfix → Impact Assessment → Implement → Test → Quick Verify → Verify
|
||||
|
||||
```
|
||||
IMPACT_ASSESSMENT → IMPLEMENT → TEST → QUICK_VERIFY → VERIFY
|
||||
```
|
||||
|
||||
#### 9. Documentation → Implement → Example Validation → Format Check → Link Validation → Verify
|
||||
|
||||
```
|
||||
IMPLEMENT → EXAMPLE_VALIDATION → FORMAT_CHECK → LINK_VALIDATION → VERIFY
|
||||
```
|
||||
|
||||
#### 10. Chore → Implement → Compatibility Check → Test → Changelog → Verify
|
||||
|
||||
```
|
||||
IMPLEMENT → COMPATIBILITY_CHECK → TEST → CHANGELOG → VERIFY
|
||||
```
|
||||
|
||||
#### 11. Standard → Implement → Test → Verify
|
||||
|
||||
```
|
||||
IMPLEMENT → TEST → VERIFY
|
||||
```
|
||||
|
||||
### Strategy Selection Implementation

**Pseudo-code for strategy matching**:

```javascript
function determineStrategy(solution) {
  // Priority 1: Explicit strategy type
  if (solution.strategy_type) {
    return solution.strategy_type
  }

  // Priority 2: Infer from task actions
  const actions = solution.tasks.map(t => t.action.toLowerCase())
  const titles = solution.tasks.map(t => t.title.toLowerCase())
  const description = solution.description.toLowerCase()
  const allText = [...actions, ...titles, description].join(' ')

  // Match keywords (order matters - more specific first)
  if (/hotfix|urgent|critical|emergency/.test(allText)) return 'hotfix'
  if (/debug|diagnose|trace|investigate/.test(allText)) return 'debug'
  if (/security|vulnerability|cve|audit/.test(allText)) return 'security'
  if (/performance|optimize|speed|memory|benchmark/.test(allText)) return 'performance'
  if (/refactor|restructure|cleanup/.test(allText)) return 'refactor'
  if (/test|coverage|e2e|integration/.test(allText)) return 'test'
  if (/documentation|docs|comment|readme/.test(allText)) return 'documentation'
  if (/chore|dependency|config|maintenance/.test(allText)) return 'chore'
  if (/fix|patch|resolve|correct/.test(allText)) return 'bugfix'
  if (/feature|add|implement|create|build/.test(allText)) return 'feature'

  // Default
  return 'standard'
}
```

**Usage in execution flow**:

```javascript
// After parsing solution (Step 2)
const strategy = determineStrategy(solution)
console.log(`Strategy selected: ${strategy}`)

// During task execution (Step 3), follow strategy-specific phases
for (const task of solution.tasks) {
  executeTaskWithStrategy(task, strategy)
}
```
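As a quick sanity check, the keyword matcher can be exercised standalone. The sketch below trims the rule table to two patterns and fabricates a `solution` fixture; both the fixture and the trimmed rules are illustrative, not taken from a real queue:

```javascript
// Minimal sketch of determineStrategy with only two keyword rules kept.
function determineStrategy(solution) {
  // Priority 1: explicit strategy type wins
  if (solution.strategy_type) return solution.strategy_type;
  // Priority 2: infer from actions, titles and description
  const actions = solution.tasks.map(t => t.action.toLowerCase());
  const titles = solution.tasks.map(t => t.title.toLowerCase());
  const description = solution.description.toLowerCase();
  const allText = [...actions, ...titles, description].join(' ');
  if (/hotfix|urgent|critical|emergency/.test(allText)) return 'hotfix';
  if (/fix|patch|resolve|correct/.test(allText)) return 'bugfix';
  return 'standard';
}

const solution = {
  description: 'Resolve crash on startup',
  tasks: [{ action: 'Patch null check', title: 'Fix NPE in loader' }]
};
console.log(determineStrategy(solution)); // 'bugfix'
```

Because the rules are tested in order, a task mentioning both "urgent" and "fix" would classify as `hotfix`, which is why the more specific patterns come first.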
## Step 2.5: Initialize Task Tracking

After parsing solution and determining strategy, use `update_plan` to track each task:

```javascript
// Initialize plan with all tasks from solution
```
@@ -503,18 +609,19 @@ EOF

```
## Solution Committed: [solution_id]

**Commit**: [commit hash]
**Type**: [commit_type]([scope])

**Changes**:
- [Feature/Fix/Improvement]: [What functionality was added/fixed/improved]
- [Specific change 1]
- [Specific change 2]

**Files Modified**:
- path/to/file1.ts - [Brief description of changes]
- path/to/file2.ts - [Brief description of changes]
- path/to/file3.ts - [Brief description of changes]

**Solution**: [solution_id] ([N] tasks completed)
```
## Step 4: Report Completion

@@ -629,9 +736,8 @@ When `ccw issue next` returns `{ "status": "empty" }`:

If `--queue` was NOT provided in the command arguments:
1. Run `ccw issue queue list --brief --json`
2. Filter and display active/pending queues to user
3. **Stop execution**, prompt user to rerun with `--queue QUE-xxx`

**Step 1: Fetch First Solution**
@@ -148,6 +148,36 @@ The CCW Dashboard is a single-page application (SPA) whose interface consists of four core parts
- **Model Configuration**: Configure the primary and secondary models for each tool
- **Install/Uninstall**: Install or uninstall tools through the wizard

#### API Endpoint Configuration (No CLI Installation Required)

If you do not have the Gemini/Qwen CLI installed but do have API access (for example, a reverse-proxy service), you can configure an `api-endpoint` type tool in `~/.claude/cli-tools.json`:

```json
{
  "version": "3.2.0",
  "tools": {
    "gemini-api": {
      "enabled": true,
      "type": "api-endpoint",
      "id": "your-api-id",
      "primaryModel": "gemini-2.5-pro",
      "secondaryModel": "gemini-2.5-flash",
      "tags": ["analysis"]
    }
  }
}
```

**Configuration notes**:
- `type: "api-endpoint"`: use API calls instead of a CLI
- `id`: API endpoint identifier, used to route requests
- API Endpoint tools support **analysis mode** (read-only) only; file write operations are not supported

**Usage example**:
```bash
ccw cli -p "Analyze the code structure" --tool gemini-api --mode analysis
```

#### CodexLens Management
- **Index Path**: View and modify the index storage location
- **Index Operations**:
@@ -1216,7 +1216,7 @@ export async function cliCommand(
    console.log(chalk.gray('  -f, --file <file>    Read prompt from file (recommended for multi-line prompts)'));
    console.log(chalk.gray('  -p, --prompt <text>  Prompt text (single-line)'));
    console.log(chalk.gray('  --tool <tool>        Tool: gemini, qwen, codex (default: gemini)'));
    console.log(chalk.gray('  --mode <mode>        Mode: analysis, write, auto, review (default: analysis)'));
    console.log(chalk.gray('  -d, --debug          Enable debug logging for troubleshooting'));
    console.log(chalk.gray('  --model <model>      Model override'));
    console.log(chalk.gray('  --cd <path>          Working directory'));
@@ -140,12 +140,25 @@ interface ProjectGuidelines {
  };
}

interface Language {
  name: string;
  file_count: number;
  primary: boolean;
}

interface KeyComponent {
  name: string;
  path: string;
  description: string;
  importance: 'high' | 'medium' | 'low';
}

interface ProjectOverview {
  projectName: string;
  description: string;
  initializedAt: string | null;
  technologyStack: {
    languages: Language[];
    frameworks: string[];
    build_tools: string[];
    test_frameworks: string[];
@@ -155,7 +168,7 @@ interface ProjectOverview {
    layers: string[];
    patterns: string[];
  };
  keyComponents: KeyComponent[];
  features: unknown[];
  developmentIndex: {
    feature: unknown[];
@@ -187,13 +200,12 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
  // Initialize cache manager
  const cache = createDashboardCache(workflowDir);

  // Prepare paths to watch for changes
  const watchPaths = [
    join(workflowDir, 'active'),
    join(workflowDir, 'archives'),
    join(workflowDir, 'project-tech.json'),
    join(workflowDir, 'project-guidelines.json'),
    ...sessions.active.map(s => s.path),
    ...sessions.archived.map(s => s.path)
  ];
@@ -266,7 +278,7 @@ export async function aggregateData(sessions: ScanSessionsResult, workflowDir: s
    console.error('Error scanning lite tasks:', (err as Error).message);
  }

  // Load project overview from project-tech.json
  try {
    data.projectOverview = loadProjectOverview(workflowDir);
  } catch (err) {
@@ -553,31 +565,25 @@ function sortTaskIds(a: string, b: string): number {

/**
 * Load project overview from project-tech.json and project-guidelines.json
 * @param workflowDir - Path to .workflow directory
 * @returns Project overview data or null if not found
 */
function loadProjectOverview(workflowDir: string): ProjectOverview | null {
  const techFile = join(workflowDir, 'project-tech.json');
  const guidelinesFile = join(workflowDir, 'project-guidelines.json');

  if (!existsSync(techFile)) {
    console.log(`Project file not found at: ${techFile}`);
    return null;
  }

  try {
    const fileContent = readFileSync(techFile, 'utf8');
    const projectData = JSON.parse(fileContent) as Record<string, unknown>;

    console.log(`Successfully loaded project overview: ${projectData.project_name || 'Unknown'}`);

    // Parse tech data from project-tech.json structure
    const overview = projectData.overview as Record<string, unknown> | undefined;
    const technologyAnalysis = projectData.technology_analysis as Record<string, unknown> | undefined;
    const developmentStatus = projectData.development_status as Record<string, unknown> | undefined;
@@ -645,7 +651,7 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
      description: (overview?.description as string) || '',
      initializedAt: (projectData.initialized_at as string) || null,
      technologyStack: {
        languages: (technologyStack?.languages as Language[]) || [],
        frameworks: extractStringArray(technologyStack?.frameworks),
        build_tools: extractStringArray(technologyStack?.build_tools),
        test_frameworks: extractStringArray(technologyStack?.test_frameworks)
@@ -655,7 +661,7 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
        layers: extractStringArray(architecture?.layers as unknown[] | undefined),
        patterns: extractStringArray(architecture?.patterns as unknown[] | undefined)
      },
      keyComponents: (overview?.key_components as KeyComponent[]) || [],
      features: (projectData.features as unknown[]) || [],
      developmentIndex: {
        feature: (developmentIndex?.feature as unknown[]) || [],
@@ -677,7 +683,7 @@ function loadProjectOverview(workflowDir: string): ProjectOverview | null {
      guidelines
    };
  } catch (err) {
    console.error(`Failed to parse project file at ${techFile}:`, (err as Error).message);
    console.error('Error stack:', (err as Error).stack);
    return null;
  }
@@ -8,23 +8,23 @@ import { homedir } from 'os';
import type { RouteContext } from './types.js';

/**
 * Get the ccw-help command.json file path (pure function)
 * Priority: project path (.claude/skills/ccw-help/command.json) > user path (~/.claude/skills/ccw-help/command.json)
 * @param projectPath - The project path to check first
 */
function getCommandFilePath(projectPath: string | null): string | null {
  // Try project path first
  if (projectPath) {
    const projectFilePath = join(projectPath, '.claude', 'skills', 'ccw-help', 'command.json');
    if (existsSync(projectFilePath)) {
      return projectFilePath;
    }
  }

  // Fall back to user path
  const userFilePath = join(homedir(), '.claude', 'skills', 'ccw-help', 'command.json');
  if (existsSync(userFilePath)) {
    return userFilePath;
  }

  return null;
@@ -83,46 +83,48 @@ function invalidateCache(key: string): void {
let watchersInitialized = false;

/**
 * Initialize file watcher for command.json
 * @param projectPath - The project path to resolve command file
 */
function initializeFileWatchers(projectPath: string | null): void {
  if (watchersInitialized) return;

  const commandFilePath = getCommandFilePath(projectPath);

  if (!commandFilePath) {
    console.warn(`ccw-help command.json not found in project or user paths`);
    return;
  }

  try {
    // Watch the command.json file
    const watcher = watch(commandFilePath, (eventType) => {
      console.log(`File change detected: command.json (${eventType})`);

      // Invalidate all cache entries when command.json changes
      invalidateCache('command-data');
    });

    watchersInitialized = true;
    (watcher as any).unref?.();
    console.log(`File watcher initialized for: ${commandFilePath}`);
  } catch (error) {
    console.error('Failed to initialize file watcher:', error);
  }
}
// ========== Helper Functions ==========

/**
 * Get command data from command.json (with caching)
 */
function getCommandData(projectPath: string | null): any {
  const filePath = getCommandFilePath(projectPath);
  if (!filePath) return null;

  return getCachedData('command-data', filePath);
}

/**
 * Filter commands by search query
 */
@@ -138,6 +140,15 @@ function filterCommands(commands: any[], query: string): any[] {
  );
}

/**
 * Category merge mapping for frontend compatibility
 * Merges additional categories into target category for display
 * Format: { targetCategory: [additionalCategoriesToMerge] }
 */
const CATEGORY_MERGES: Record<string, string[]> = {
  'cli': ['general'], // CLI tab shows both 'cli' and 'general' commands
};

/**
 * Group commands by category with subcategories
 */
@@ -166,9 +177,104 @@ function groupCommandsByCategory(commands: any[]): any {
    }
  }

  // Apply category merges for frontend compatibility
  for (const [target, sources] of Object.entries(CATEGORY_MERGES)) {
    // Initialize target category if not exists
    if (!grouped[target]) {
      grouped[target] = {
        name: target,
        commands: [],
        subcategories: {}
      };
    }

    // Merge commands from source categories into target
    for (const source of sources) {
      if (grouped[source]) {
        // Merge direct commands
        grouped[target].commands = [
          ...grouped[target].commands,
          ...grouped[source].commands
        ];
        // Merge subcategories
        for (const [subcat, cmds] of Object.entries(grouped[source].subcategories)) {
          if (!grouped[target].subcategories[subcat]) {
            grouped[target].subcategories[subcat] = [];
          }
          grouped[target].subcategories[subcat] = [
            ...grouped[target].subcategories[subcat],
            ...(cmds as any[])
          ];
        }
      }
    }
  }

  return grouped;
}
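The merge pass can be checked against a toy `grouped` object; the category names and commands below are illustrative fixtures:

```javascript
// Sketch of the CATEGORY_MERGES pass over an already-grouped object.
const CATEGORY_MERGES = { cli: ['general'] };

function applyMerges(grouped) {
  for (const [target, sources] of Object.entries(CATEGORY_MERGES)) {
    // Create the target bucket if grouping produced none
    if (!grouped[target]) grouped[target] = { name: target, commands: [], subcategories: {} };
    for (const source of sources) {
      if (!grouped[source]) continue;
      // Fold source commands and subcategories into the target
      grouped[target].commands = [...grouped[target].commands, ...grouped[source].commands];
      for (const [subcat, cmds] of Object.entries(grouped[source].subcategories)) {
        grouped[target].subcategories[subcat] = [
          ...(grouped[target].subcategories[subcat] || []),
          ...cmds
        ];
      }
    }
  }
  return grouped;
}

const grouped = applyMerges({
  general: { name: 'general', commands: ['ccw status'], subcategories: { misc: ['ccw doctor'] } }
});
console.log(grouped.cli.commands);           // ['ccw status']
console.log(grouped.cli.subcategories.misc); // ['ccw doctor']
```

Note the source bucket is left in place, so `general` commands appear under both tabs; removing the source after merging would be a different (deduplicating) design.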
/**
 * Build workflow relationships from command flow data
 */
function buildWorkflowRelationships(commands: any[]): any {
  const relationships: any = {
    workflows: [],
    dependencies: {},
    alternatives: {}
  };

  for (const cmd of commands) {
    if (!cmd.flow) continue;

    const cmdName = cmd.command;

    // Build next_steps relationships
    if (cmd.flow.next_steps) {
      if (!relationships.dependencies[cmdName]) {
        relationships.dependencies[cmdName] = { next: [], prev: [] };
      }
      relationships.dependencies[cmdName].next = cmd.flow.next_steps;

      // Add reverse relationship
      for (const nextCmd of cmd.flow.next_steps) {
        if (!relationships.dependencies[nextCmd]) {
          relationships.dependencies[nextCmd] = { next: [], prev: [] };
        }
        if (!relationships.dependencies[nextCmd].prev.includes(cmdName)) {
          relationships.dependencies[nextCmd].prev.push(cmdName);
        }
      }
    }

    // Build prerequisites relationships
    if (cmd.flow.prerequisites) {
      if (!relationships.dependencies[cmdName]) {
        relationships.dependencies[cmdName] = { next: [], prev: [] };
      }
      relationships.dependencies[cmdName].prev = [
        ...new Set([...relationships.dependencies[cmdName].prev, ...cmd.flow.prerequisites])
      ];
    }

    // Build alternatives
    if (cmd.flow.alternatives) {
      relationships.alternatives[cmdName] = cmd.flow.alternatives;
    }

    // Add to workflows list
    if (cmd.category === 'workflow') {
      relationships.workflows.push({
        name: cmd.name,
        command: cmd.command,
        description: cmd.description,
        flow: cmd.flow
      });
    }
  }

  return relationships;
}
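Feeding one hypothetical command through the dependency-building part shows how each `next_steps` entry also produces a reverse `prev` link; the command names are made up for the demo:

```javascript
// Sketch: next_steps produce forward links plus reverse `prev` entries.
function buildDeps(commands) {
  const deps = {};
  const ensure = name => (deps[name] ||= { next: [], prev: [] });
  for (const cmd of commands) {
    if (!cmd.flow?.next_steps) continue;
    ensure(cmd.command).next = cmd.flow.next_steps;
    for (const nextCmd of cmd.flow.next_steps) {
      const entry = ensure(nextCmd);
      if (!entry.prev.includes(cmd.command)) entry.prev.push(cmd.command);
    }
  }
  return deps;
}

const deps = buildDeps([{ command: '/plan', flow: { next_steps: ['/execute'] } }]);
console.log(deps['/plan'].next);    // ['/execute']
console.log(deps['/execute'].prev); // ['/plan']
```

Storing the reverse edge eagerly means the frontend can render "comes after" without scanning every command's `next_steps` at request time.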
// ========== API Routes ==========

/**
@@ -181,25 +287,17 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
  // Initialize file watchers on first request
  initializeFileWatchers(initialPath);

  // API: Get all commands with optional search
  if (pathname === '/api/help/commands') {
    const commandData = getCommandData(initialPath);
    if (!commandData) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
      return true;
    }

    const searchQuery = url.searchParams.get('q') || '';
    let commands = commandData.commands || [];

    // Filter by search query if provided
    if (searchQuery) {
@@ -213,26 +311,24 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
    res.end(JSON.stringify({
      commands: commands,
      grouped: grouped,
      total: commands.length,
      essential: commandData.essential_commands || [],
      metadata: commandData._metadata
    }));
    return true;
  }

  // API: Get workflow command relationships
  if (pathname === '/api/help/workflows') {
    const commandData = getCommandData(initialPath);
    if (!commandData) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
      return true;
    }
    const commands = commandData.commands || [];
    const relationships = buildWorkflowRelationships(commands);

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(relationships));
@@ -241,22 +337,38 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {

  // API: Get commands by category
  if (pathname === '/api/help/commands/by-category') {
    const commandData = getCommandData(initialPath);
    if (!commandData) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
      return true;
    }
    const commands = commandData.commands || [];
    const byCategory = groupCommandsByCategory(commands);

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({
      categories: commandData.categories || [],
      grouped: byCategory
    }));
    return true;
  }

  // API: Get agents list
  if (pathname === '/api/help/agents') {
    const commandData = getCommandData(initialPath);
    if (!commandData) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'ccw-help command.json not found' }));
      return true;
    }

    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({
      agents: commandData.agents || [],
      total: (commandData.agents || []).length
    }));
    return true;
  }
@@ -1,6 +1,103 @@
// Hook Manager Component
// Manages Claude Code hooks configuration from settings.json

// ========== Platform Detection ==========
const PlatformUtils = {
  // Detect current platform
  detect() {
    if (typeof navigator !== 'undefined') {
      const platform = navigator.platform.toLowerCase();
      if (platform.includes('win')) return 'windows';
      if (platform.includes('mac')) return 'macos';
      return 'linux';
    }
    if (typeof process !== 'undefined') {
      if (process.platform === 'win32') return 'windows';
      if (process.platform === 'darwin') return 'macos';
      return 'linux';
    }
    return 'unknown';
  },

  isWindows() {
    return this.detect() === 'windows';
  },

  isUnix() {
    const platform = this.detect();
    return platform === 'macos' || platform === 'linux';
  },

  // Get default shell for platform
  getShell() {
    return this.isWindows() ? 'cmd' : 'bash';
  },

  // Check if template is compatible with current platform
  checkCompatibility(template) {
    const platform = this.detect();
    const issues = [];

    // bash commands require Unix or Git Bash on Windows
    if (template.command === 'bash' && platform === 'windows') {
      issues.push({
        level: 'warning',
        message: 'bash command may not work on Windows without Git Bash or WSL'
      });
    }

    // Check for Unix-specific shell features in args
    if (template.args && Array.isArray(template.args)) {
      const argStr = template.args.join(' ');

      if (platform === 'windows') {
        // Unix shell features that won't work in cmd
        if (argStr.includes('$HOME') || argStr.includes('${HOME}')) {
          issues.push({ level: 'warning', message: 'Uses $HOME - use %USERPROFILE% on Windows' });
        }
        if (argStr.includes('$(') || argStr.includes('`')) {
          issues.push({ level: 'warning', message: 'Uses command substitution - not supported in cmd' });
        }
        if (argStr.includes(' | ')) {
          issues.push({ level: 'info', message: 'Uses pipes - works in cmd but syntax may differ' });
        }
      }
    }

    return {
      compatible: issues.filter(i => i.level === 'error').length === 0,
      issues
    };
  },

  // Get platform-specific command variant if available
  getVariant(template) {
    const platform = this.detect();

    // Check if template has platform-specific variants
    if (template.variants && template.variants[platform]) {
      return { ...template, ...template.variants[platform] };
    }

    return template;
  },

  // Escape script for specific shell type
  escapeForShell(script, shell) {
    if (shell === 'bash' || shell === 'sh') {
      // Unix: use single quotes, escape internal single quotes
      return script.replace(/'/g, "'\\''");
    } else if (shell === 'cmd') {
      // Windows cmd: escape double quotes and special chars
      return script.replace(/"/g, '\\"').replace(/%/g, '%%');
    } else if (shell === 'powershell') {
      // PowerShell: escape single quotes by doubling
      return script.replace(/'/g, "''");
    }
    return script;
  }
};
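The three escaping branches produce quite different byte sequences, which is easy to get wrong by eye; this standalone copy of `escapeForShell` shows the exact output for each shell:

```javascript
// Standalone copy of the escapeForShell branches for inspection.
function escapeForShell(script, shell) {
  if (shell === 'bash' || shell === 'sh') {
    // close quote, escaped literal quote, reopen quote
    return script.replace(/'/g, "'\\''");
  } else if (shell === 'cmd') {
    // escape double quotes, double percent signs
    return script.replace(/"/g, '\\"').replace(/%/g, '%%');
  } else if (shell === 'powershell') {
    // single quotes are escaped by doubling
    return script.replace(/'/g, "''");
  }
  return script;
}

console.log(escapeForShell("it's", 'bash'));         // it'\''s
console.log(escapeForShell('say "hi" 100%', 'cmd')); // say \"hi\" 100%%
console.log(escapeForShell("it's", 'powershell'));   // it''s
```

The bash form only works when the caller then wraps the whole script in single quotes, as `convertToClaudeCodeFormat` does for `node -e '...'`.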
// ========== Hook State ==========
let hookConfig = {
  global: { hooks: {} },
@@ -394,6 +491,29 @@ function convertToClaudeCodeFormat(hookData) {
      });
      commandStr += ' ' + additionalArgs.join(' ');
    }
  } else if (commandStr === 'node' && hookData.args.length >= 2 && hookData.args[0] === '-e') {
    // Special handling for node -e commands using PlatformUtils
    const script = hookData.args[1];

    if (PlatformUtils.isWindows()) {
      // Windows: use double quotes, escape internal quotes
      const escapedScript = PlatformUtils.escapeForShell(script, 'cmd');
      commandStr = `node -e "${escapedScript}"`;
    } else {
      // Unix: use single quotes to prevent shell interpretation
      const escapedScript = PlatformUtils.escapeForShell(script, 'bash');
      commandStr = `node -e '${escapedScript}'`;
    }
    // Handle any additional args after the script
    if (hookData.args.length > 2) {
      const additionalArgs = hookData.args.slice(2).map(arg => {
        if (arg.includes(' ') && !arg.startsWith('"') && !arg.startsWith("'")) {
          return `"${arg.replace(/"/g, '\\"')}"`;
        }
        return arg;
      });
      commandStr += ' ' + additionalArgs.join(' ');
    }
  } else {
    // Default handling for other commands
    const quotedArgs = hookData.args.map(arg => {
@@ -398,6 +398,7 @@ async function updateCliToolConfig(tool, updates) {
    // Invalidate cache to ensure fresh data on page refresh
    if (window.cacheManager) {
      window.cacheManager.invalidate('cli-config');
      window.cacheManager.invalidate('cli-tools-config');
    }
  }
  return data;
@@ -524,16 +524,32 @@ async function installHookTemplate(templateId, scope) {
    return;
  }

  // Platform compatibility check
  const compatibility = PlatformUtils.checkCompatibility(template);
  if (compatibility.issues.length > 0) {
    const warnings = compatibility.issues.filter(i => i.level === 'warning');
    if (warnings.length > 0) {
      const platform = PlatformUtils.detect();
      const warningMsg = warnings.map(w => w.message).join('; ');
      console.warn(`[Hook Install] Platform: ${platform}, Warnings: ${warningMsg}`);
      // Show warning but continue installation
      showRefreshToast(`Warning: ${warningMsg}`, 'warning', 5000);
    }
  }

  // Get platform-specific variant if available
  const adaptedTemplate = PlatformUtils.getVariant(template);

  const hookData = {
    command: adaptedTemplate.command,
    args: adaptedTemplate.args
  };

  if (adaptedTemplate.matcher) {
    hookData.matcher = adaptedTemplate.matcher;
  }

  await saveHook(scope, adaptedTemplate.event, hookData);
}

async function uninstallHookTemplate(templateId) {
@@ -160,7 +160,7 @@ interface ClaudeWithSettingsParams {
  prompt: string;
  settingsPath: string;
  endpointId: string;
  mode: 'analysis' | 'write' | 'auto' | 'review';
  workingDir: string;
  cd?: string;
  includeDirs?: string[];
@@ -351,7 +351,7 @@ type BuiltinCliTool = typeof BUILTIN_CLI_TOOLS[number];
const ParamsSchema = z.object({
  tool: z.string().min(1, 'Tool is required'), // Accept any tool ID (built-in or custom endpoint)
  prompt: z.string().min(1, 'Prompt is required'),
  mode: z.enum(['analysis', 'write', 'auto', 'review']).default('analysis'),
  format: z.enum(['plain', 'yaml', 'json']).default('plain'), // Multi-turn prompt concatenation format
  model: z.string().optional(),
  cd: z.string().optional(),
@@ -1176,7 +1176,8 @@ export const schema: ToolSchema = {
Modes:
- analysis: Read-only operations (default)
- write: File modifications allowed
- auto: Full autonomous operations (codex only)
- review: Code review mode (codex uses 'codex review' subcommand, others accept but no operation change)`,
  inputSchema: {
    type: 'object',
    properties: {
@@ -1191,8 +1192,8 @@ Modes:
      },
      mode: {
        type: 'string',
        enum: ['analysis', 'write', 'auto', 'review'],
        description: 'Execution mode (default: analysis). review mode uses codex review subcommand for codex tool.',
        default: 'analysis'
      },
      model: {
@@ -223,7 +223,21 @@ export function buildCommand(params: {

    case 'codex':
      useStdin = true;
      if (mode === 'review') {
        // codex review mode: non-interactive code review
        // Format: codex review [OPTIONS] [PROMPT]
        args.push('review');
        // Default to --uncommitted if no specific review target in prompt
        args.push('--uncommitted');
        if (model) {
          args.push('-m', model);
        }
        // codex review uses positional prompt argument, not stdin
        useStdin = false;
        if (prompt) {
          args.push(prompt);
        }
      } else if (nativeResume?.enabled) {
        args.push('resume');
        if (nativeResume.isLatest) {
          args.push('--last');
codex-lens/docs/LSP_INTEGRATION_CHECKLIST.md (new file, 316 lines)
@@ -0,0 +1,316 @@
# codex-lens LSP Integration Execution Checklist

> Generated: 2026-01-15
> Based on: Gemini multi-round deep analysis
> Status: Ready for implementation

---

## Phase 1: LSP Server Foundation (Priority: HIGH)

### 1.1 Create LSP Server Entry Point
- [ ] **Install pygls dependency**
  ```bash
  pip install pygls
  ```
- [ ] **Create `src/codexlens/lsp/__init__.py`**
  - Export: `CodexLensServer`, `start_server`
- [ ] **Create `src/codexlens/lsp/server.py`**
  - Class: `CodexLensServer(LanguageServer)`
  - Initialize: `ChainSearchEngine`, `GlobalSymbolIndex`, `WatcherManager`
  - Lifecycle: Start `WatcherManager` on `initialize` request

### 1.2 Implement Core LSP Handlers
- [ ] **`textDocument/definition`** handler
  - Source: `GlobalSymbolIndex.search()` exact match
  - Reference: `storage/global_index.py:173`
  - Return: `Location(uri, Range)`

- [ ] **`textDocument/completion`** handler
  - Source: `GlobalSymbolIndex.search(prefix_mode=True)`
  - Reference: `storage/global_index.py:173`
  - Return: `CompletionItem[]`

- [ ] **`workspace/symbol`** handler
  - Source: `ChainSearchEngine.search_symbols()`
  - Reference: `search/chain_search.py:618`
  - Return: `SymbolInformation[]`
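The 1.2 handlers share one shape: resolve a name through the index, then wrap the hit in an LSP structure. A minimal sketch of the definition flow, where `StubSymbolIndex` and the dict-shaped `Location` are illustrative stand-ins (assumptions), not the codex-lens or pygls API:

```python
# Sketch of the textDocument/definition lookup described in 1.2.
# StubSymbolIndex stands in for GlobalSymbolIndex (storage/global_index.py);
# the returned dict mirrors the LSP Location shape (uri + zero-based range).
from typing import Optional

class StubSymbolIndex:
    """Stand-in index: symbol name -> (path, line, col)."""
    def __init__(self, entries):
        self._entries = entries

    def search(self, name: str, prefix_mode: bool = False):
        if prefix_mode:  # completion-style prefix match
            return [v for k, v in self._entries.items() if k.startswith(name)]
        hit = self._entries.get(name)  # definition-style exact match
        return [hit] if hit else []

def definition_handler(index, symbol_name: str) -> Optional[dict]:
    """Resolve a symbol to an LSP-style Location, or None if unknown."""
    matches = index.search(symbol_name)
    if not matches:
        return None
    path, line, col = matches[0]
    return {
        "uri": f"file://{path}",
        "range": {
            "start": {"line": line, "character": col},
            "end": {"line": line, "character": col + len(symbol_name)},
        },
    }

index = StubSymbolIndex(
    {"search_references": ("/repo/src/codexlens/search/chain_search.py", 617, 4)}
)
loc = definition_handler(index, "search_references")
```

In the real server this logic would sit behind a pygls feature registration and return `lsprotocol` types rather than plain dicts.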
### 1.3 Wire File Watcher to LSP Events
- [ ] **`workspace/didChangeWatchedFiles`** handler
  - Delegate to: `WatcherManager.process_changes()`
  - Reference: `watcher/manager.py:53`

- [ ] **`textDocument/didSave`** handler
  - Trigger: `IncrementalIndexer` for single file
  - Reference: `watcher/incremental_indexer.py`

### 1.4 Deliverables
- [ ] Unit tests for LSP handlers
- [ ] Integration test: definition lookup
- [ ] Integration test: completion prefix search
- [ ] Benchmark: query latency < 50ms

---
## Phase 2: Find References Implementation (Priority: MEDIUM)

### 2.1 Create `search_references` Method
- [ ] **Add to `src/codexlens/search/chain_search.py`**
  ```python
  def search_references(
      self,
      symbol_name: str,
      source_path: Path,
      depth: int = -1
  ) -> List[ReferenceResult]:
      """Find all references to a symbol across the project."""
  ```

### 2.2 Implement Parallel Query Orchestration
- [ ] **Collect index paths**
  - Use: `_collect_index_paths()` existing method

- [ ] **Parallel query execution**
  - ThreadPoolExecutor across all `_index.db`
  - SQL: `SELECT * FROM code_relationships WHERE target_qualified_name = ?`
  - Reference: `storage/sqlite_store.py:348`

- [ ] **Result aggregation**
  - Deduplicate by file:line
  - Sort by file path, then line number
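The fan-out in 2.2 can be sketched end to end with throwaway SQLite files. The column names below are assumptions for illustration; the checklist only fixes the table name and the `target_qualified_name` filter:

```python
# Sketch of 2.2: query every _index.db in parallel, then deduplicate and sort.
import sqlite3
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def make_index(dir_path: Path, rows):
    """Create a toy _index.db with an assumed code_relationships layout."""
    db = dir_path / "_index.db"
    con = sqlite3.connect(db)
    con.execute(
        "CREATE TABLE code_relationships "
        "(source_path TEXT, source_line INTEGER, target_qualified_name TEXT)"
    )
    con.executemany("INSERT INTO code_relationships VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()
    return db

def query_one(db_path: Path, symbol: str):
    # One connection per worker thread; sqlite3 connections are not shared.
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(
            "SELECT source_path, source_line FROM code_relationships "
            "WHERE target_qualified_name = ?",
            (symbol,),
        )
        return cur.fetchall()
    finally:
        con.close()

def search_references(db_paths, symbol):
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = pool.map(lambda p: query_one(p, symbol), db_paths)
    seen, merged = set(), []
    for rows in results:
        for row in rows:
            if row not in seen:  # deduplicate by file:line
                seen.add(row)
                merged.append(row)
    return sorted(merged)  # sort by file path, then line number

tmp = Path(tempfile.mkdtemp())
(tmp / "a").mkdir()
(tmp / "b").mkdir()
db_a = make_index(tmp / "a", [("a/app.py", 10, "pkg.func"), ("a/app.py", 10, "pkg.func")])
db_b = make_index(tmp / "b", [("b/cli.py", 3, "pkg.func"), ("b/cli.py", 9, "pkg.other")])
refs = search_references([db_a, db_b], "pkg.func")
```

The connect-per-query here is deliberately naive; the connection pooling item in 5.3 would replace it.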
### 2.3 LSP Handler
- [ ] **`textDocument/references`** handler
  - Call: `ChainSearchEngine.search_references()`
  - Return: `Location[]`

### 2.4 Deliverables
- [ ] Unit test: single-index reference lookup
- [ ] Integration test: cross-directory references
- [ ] Benchmark: < 200ms for 10+ index files

---
## Phase 3: Enhanced Hover Information (Priority: MEDIUM)

### 3.1 Implement Hover Data Extraction
- [ ] **Create `src/codexlens/lsp/hover_provider.py`**
  ```python
  class HoverProvider:
      def get_hover_info(self, symbol: Symbol) -> HoverInfo:
          """Extract hover information for a symbol."""
  ```

### 3.2 Data Sources
- [ ] **Symbol metadata**
  - Source: `GlobalSymbolIndex.search()`
  - Fields: `kind`, `name`, `file_path`, `range`

- [ ] **Source code extraction**
  - Source: `SQLiteStore.files` table
  - Reference: `storage/sqlite_store.py:284`
  - Extract: Lines from `range[0]` to `range[1]`

### 3.3 LSP Handler
- [ ] **`textDocument/hover`** handler
  - Return: `Hover(contents=MarkupContent)`
  - Format: Markdown with code fence

### 3.4 Deliverables
- [ ] Unit test: hover for function/class/variable
- [ ] Integration test: multi-line function signature
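Section 3.3 only fixes the output shape (Markdown with a code fence). A sketch of the formatting step; the `(symbol_name, kind, source_lines)` inputs are assumed names standing in for whatever `HoverProvider` extracts in 3.1/3.2:

```python
# Render hover contents in the Markdown shape 3.3 asks for:
# a bold header line plus a fenced code block with the extracted source.
def format_hover(symbol_name: str, kind: str, source_lines: list,
                 language: str = "python") -> str:
    header = f"**{kind}** `{symbol_name}`"
    snippet = "\n".join(source_lines)
    return f"{header}\n\n```{language}\n{snippet}\n```"

hover_md = format_hover(
    "get_hover_info",
    "function",
    ["def get_hover_info(self, symbol):", "    ..."],
)
```

The resulting string would be wrapped in `MarkupContent(kind=MarkupKind.Markdown, ...)` before being returned from the handler.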
---
## Phase 4: MCP Bridge for Claude Code (Priority: HIGH VALUE)

### 4.1 Define MCP Schema
- [ ] **Create `src/codexlens/mcp/__init__.py`**
- [ ] **Create `src/codexlens/mcp/schema.py`**
  ```python
  @dataclass
  class MCPContext:
      context_type: str
      symbol: Optional[SymbolInfo]
      definition: Optional[str]
      references: List[ReferenceInfo]
      related_symbols: List[SymbolInfo]
      version: str = "1.0"
  ```

### 4.2 Create MCP Provider
- [ ] **Create `src/codexlens/mcp/provider.py`**
  ```python
  class MCPProvider:
      def build_context(
          self,
          symbol_name: str,
          context_type: str = "symbol_explanation"
      ) -> MCPContext:
          """Build structured context for LLM consumption."""
  ```
### 4.3 Context Building Logic
- [ ] **Symbol lookup**
  - Use: `GlobalSymbolIndex.search()`

- [ ] **Definition extraction**
  - Use: `SQLiteStore` file content

- [ ] **References collection**
  - Use: `ChainSearchEngine.search_references()`

- [ ] **Related symbols**
  - Use: `code_relationships` for imports/calls

### 4.4 Hook Integration Points
- [ ] **Document `pre-tool` hook interface**
  ```python
  def pre_tool_hook(action: str, params: dict) -> MCPContext:
      """Called before LLM action to gather context."""
  ```

- [ ] **Document `post-tool` hook interface**
  ```python
  def post_tool_hook(action: str, result: Any) -> None:
      """Called after LSP action for proactive caching."""
  ```

### 4.5 Deliverables
- [ ] MCP schema JSON documentation
- [ ] Unit test: context building
- [ ] Integration test: hook → MCP → JSON output
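Phase 4's contract is "hook in, JSON out". A runnable sketch of that round trip, with simplified stand-ins for the `schema.py` models and a hard-coded lookup where the real hook would consult `GlobalSymbolIndex` and `ChainSearchEngine`:

```python
# Sketch of the hook -> MCP context -> JSON pipeline from 4.4/4.5.
# SymbolInfo is a simplified stand-in for the schema.py model.
import json
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class SymbolInfo:
    name: str
    kind: str
    file_path: str

@dataclass
class MCPContext:
    context_type: str
    symbol: Optional[SymbolInfo] = None
    definition: Optional[str] = None
    references: List[dict] = field(default_factory=list)
    related_symbols: List[SymbolInfo] = field(default_factory=list)
    version: str = "1.0"

def pre_tool_hook(action: str, params: dict) -> MCPContext:
    # Real implementation would resolve the symbol via the index;
    # this lookup is hard-coded for illustration.
    sym = SymbolInfo(params["symbol"], "function",
                     "src/codexlens/search/chain_search.py")
    return MCPContext(context_type="symbol_explanation", symbol=sym)

ctx = pre_tool_hook("explain", {"symbol": "search_references"})
payload = json.dumps(asdict(ctx))  # structured context for the LLM
```

`dataclasses.asdict` recurses into nested dataclasses, so the whole context serializes in one call.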
---
## Phase 5: Advanced Features (Priority: LOW)

### 5.1 Custom LSP Commands
- [ ] **`codexlens/hybridSearch`**
  - Expose: `HybridSearchEngine.search()`
  - Reference: `search/hybrid_search.py`

- [ ] **`codexlens/symbolGraph`**
  - Return: Symbol relationship graph
  - Source: `code_relationships` table

### 5.2 Proactive Context Caching
- [ ] **Implement `post-tool` hook caching**
  - After `go-to-definition`: pre-fetch references
  - Cache TTL: 5 minutes
  - Storage: In-memory LRU

### 5.3 Performance Optimizations
- [ ] **Connection pooling**
  - Reference: `storage/sqlite_store.py` thread-local

- [ ] **Result caching**
  - LRU cache for frequent queries
  - Invalidate on file change
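The 5.2 cache (in-memory LRU, 5-minute TTL, invalidated on file change) fits in a few lines; the tuple cache-key layout below is an assumption:

```python
# Sketch of the 5.2 cache: OrderedDict gives LRU ordering, entries carry
# an expiry timestamp, and invalidate_file drops stale queries by path.
import time
from collections import OrderedDict

class TTLCache:
    def __init__(self, max_entries: int = 256, ttl_seconds: float = 300.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._data: OrderedDict = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        expires_at, value = item
        if time.monotonic() >= expires_at:
            del self._data[key]  # expired
            return None
        self._data.move_to_end(key)  # mark as recently used
        return value

    def put(self, key, value):
        self._data[key] = (time.monotonic() + self.ttl, value)
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least recently used

    def invalidate_file(self, path: str):
        # Drop every cached query whose key mentions the changed file.
        stale = [k for k in self._data if path in k]
        for k in stale:
            del self._data[k]

cache = TTLCache(max_entries=2)
cache.put(("refs", "a.py", "foo"), [1, 2])
cache.put(("refs", "b.py", "bar"), [3])
cache.invalidate_file("a.py")
```

`time.monotonic()` is used instead of wall-clock time so TTL checks survive system clock changes.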
---
## File Structure After Implementation

```
src/codexlens/
├── lsp/                      # NEW
│   ├── __init__.py
│   ├── server.py             # Main LSP server
│   ├── handlers.py           # LSP request handlers
│   ├── hover_provider.py     # Hover information
│   └── utils.py              # LSP utilities
│
├── mcp/                      # NEW
│   ├── __init__.py
│   ├── schema.py             # MCP data models
│   ├── provider.py           # Context builder
│   └── hooks.py              # Hook interfaces
│
├── search/
│   ├── chain_search.py       # MODIFY: add search_references()
│   └── ...
│
└── ...
```

---
## Dependencies to Add

```toml
# pyproject.toml
[project.optional-dependencies]
lsp = [
    "pygls>=1.3.0",
]
```

---
## Testing Strategy

### Unit Tests
```
tests/
├── lsp/
│   ├── test_definition.py
│   ├── test_completion.py
│   ├── test_references.py
│   └── test_hover.py
│
└── mcp/
    ├── test_schema.py
    └── test_provider.py
```

### Integration Tests
- [ ] Full LSP handshake test
- [ ] Multi-file project navigation
- [ ] Incremental index update via didSave

### Performance Benchmarks

| Operation | Target | Acceptable |
|-----------|--------|------------|
| Definition lookup | < 30ms | < 50ms |
| Completion (100 items) | < 50ms | < 100ms |
| Find references (10 files) | < 150ms | < 200ms |
| Initial indexing (1000 files) | < 60s | < 120s |

---
## Execution Order

```
Week 1: Phase 1.1 → 1.2 → 1.3 → 1.4
Week 2: Phase 2.1 → 2.2 → 2.3 → 2.4
Week 3: Phase 3 + Phase 4.1 → 4.2
Week 4: Phase 4.3 → 4.4 → 4.5
Week 5: Phase 5 (optional) + Polish
```

---
## Quick Start Commands

```bash
# Install LSP dependencies
pip install pygls

# Run LSP server (after implementation)
python -m codexlens.lsp --stdio

# Test LSP connection
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}' | python -m codexlens.lsp --stdio
```

---
## Reference Links

- pygls Documentation: https://pygls.readthedocs.io/
- LSP Specification: https://microsoft.github.io/language-server-protocol/
- codex-lens GlobalSymbolIndex: `storage/global_index.py:173`
- codex-lens ChainSearchEngine: `search/chain_search.py:618`
- codex-lens WatcherManager: `watcher/manager.py:53`
codex-lens/docs/LSP_INTEGRATION_PLAN.md (new file, 2588 lines; diff suppressed because it is too large)
```diff
@@ -1,6 +1,6 @@
 {
   "name": "claude-code-workflow",
-  "version": "6.3.31",
+  "version": "6.3.33",
   "description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
   "type": "module",
   "main": "ccw/src/index.js",
```