Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-03-28 20:01:17 +08:00)
feat(cli): add --rule option with template auto-discovery

Refactor the ccw cli template system:
- New template-discovery.ts module supporting flat template auto-discovery
- Add --rule <template> option that auto-loads the protocol and template
- Migrate the template directory from a nested layout (prompts/category/file.txt) to a flat layout (prompts/category-function.txt)
- Update all agent/command files to use the $PROTO and $TMPL environment variables instead of the $(cat ...) pattern
- Support fuzzy matching: --rule 02-review-architecture matches analysis-review-architecture.txt

Other updates:
- Dashboard: add Claude Manager and Issue Manager pages
- Codex-lens: enhance chain_search and clustering modules

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
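The fuzzy matching the commit message describes can be sketched as follows. This is an illustrative sketch only, not the actual template-discovery.ts implementation: the function name `resolveTemplate` and the sample template list are hypothetical, and the real module may use a different matching strategy.

```typescript
// Hypothetical sketch of flat-layout template resolution for --rule.
// Templates live in a flat directory: prompts/<category>-<function>.txt
function resolveTemplate(rule: string, templates: string[]): string | undefined {
  // 1. Exact match: --rule analysis-code-patterns → analysis-code-patterns.txt
  const exact = templates.find((t) => t === `${rule}.txt`);
  if (exact) return exact;
  // 2. Fuzzy match: strip a numeric prefix like "02-", then suffix-match,
  //    so --rule 02-review-architecture matches analysis-review-architecture.txt
  const needle = rule.replace(/^\d+-/, "");
  return templates.find((t) => t.replace(/\.txt$/, "").endsWith(needle));
}

// Example (template list is illustrative):
const available = [
  "analysis-code-patterns.txt",
  "analysis-review-architecture.txt",
  "development-feature.txt",
];
console.log(resolveTemplate("02-review-architecture", available));
// → analysis-review-architecture.txt
```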
@@ -215,7 +215,7 @@ CONTEXT: @**/* | Memory: {ace_context_summary}
 
 EXPECTED: JSON with feasibility_score, findings, implementation_approaches, technical_concerns, code_locations
 
-RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) |
+RULES: $PROTO |
 - Specific file:line references
 - Quantify effort estimates
 - Concrete pros/cons
@@ -115,8 +115,9 @@ bug-fix → development/bug-diagnosis.txt
 ```
 
 **3. RULES Field**:
-- Use `$(cat ~/.claude/workflows/cli-templates/prompts/{path}.txt)` directly
-- NEVER escape: `\$`, `\"`, `\'` breaks command substitution
+- Use `--rule <template>` option to auto-load protocol + template as `$PROTO` and `$TMPL`
+- Template names: `category-function` format (e.g., `analysis-code-patterns`, `development-feature`)
+- NEVER escape: `\$`, `\"`, `\'` breaks variable expansion
 
 **4. Structured Prompt**:
 ```bash
@@ -125,7 +126,7 @@ TASK: {specific_task_with_details}
 MODE: {analysis|write|auto}
 CONTEXT: {structured_file_references}
 EXPECTED: {clear_output_expectations}
-RULES: $(cat {selected_template}) | {constraints}
+RULES: $PROTO $TMPL | {constraints}
 ```
 
 ---
@@ -156,8 +157,8 @@ TASK: {task}
 MODE: analysis
 CONTEXT: @**/*
 EXPECTED: {output}
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
-" --tool gemini --mode analysis --cd {dir}
+RULES: $PROTO $TMPL
+" --tool gemini --mode analysis --rule analysis-code-patterns --cd {dir}
 
 # Qwen fallback: Replace '--tool gemini' with '--tool qwen'
 ```
@@ -106,7 +106,7 @@ EXPECTED:
 ## Time Estimate
 **Total**: [time]
 
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
+RULES: $PROTO $TMPL |
 - Follow schema structure from {schema_path}
 - Acceptance/verification must be quantified
 - Dependencies use task IDs
@@ -127,14 +127,14 @@ EXPECTED: Structured fix strategy with:
 - Fix approach ensuring business logic correctness (not just test passage)
 - Expected outcome and verification steps
 - Impact assessment: Will this fix potentially mask other issues?
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
+RULES: $PROTO $TMPL |
 - For {test_type} tests: {layer_specific_guidance}
 - Avoid 'surgical fixes' that mask underlying issues
 - Provide specific line numbers for modifications
 - Consider previous iteration failures
 - Validate fix doesn't introduce new vulnerabilities
 - analysis=READ-ONLY
-" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
+" --tool {cli_tool} --mode analysis --rule {template} --cd {project_root} --timeout {timeout_value}
 ```
 
 **Layer-Specific Guidance Injection**:
@@ -105,7 +105,7 @@ TASK: • Analyze error pattern • Identify potential root causes • Suggest t
 MODE: analysis
 CONTEXT: @{affected_files}
 EXPECTED: Structured hypothesis list with priority ranking
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Focus on testable conditions
+RULES: $PROTO $TMPL | Focus on testable conditions
 " --tool gemini --mode analysis --cd {project_root}
 ```
 
@@ -213,7 +213,7 @@ EXPECTED:
 - Evidence summary
 - Root cause identification (if confirmed)
 - Next steps (if inconclusive)
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Evidence-based reasoning only
+RULES: $PROTO $TMPL | Evidence-based reasoning only
 " --tool gemini --mode analysis
 ```
 
@@ -271,7 +271,7 @@ TASK:
 MODE: write
 CONTEXT: @{affected_files}
 EXPECTED: Working fix that addresses root cause
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/development/02-implement-feature.txt) | Minimal changes only
+RULES: $PROTO $TMPL | Minimal changes only
 " --tool codex --mode write --cd {project_root}
 ```
 
@@ -70,8 +70,8 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
 CONTEXT: @**/* ./src/modules/auth|code|code:5|dirs:2
 ./src/modules/api|code|code:3|dirs:0
 EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
-" --tool gemini --mode write --cd src/modules
+RULES: $PROTO $TMPL | Mirror source structure
+" --tool gemini --mode write --rule documentation-module --cd src/modules
 ```
 
 4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
 {
 "step": "analyze_module_structure",
 "action": "Deep analysis of module structure and API",
-"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
+"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $PROTO $TMPL\" --tool gemini --mode analysis --rule documentation-module --cd src/auth",
 "output_to": "module_analysis",
 "on_error": "fail"
 }
@@ -87,7 +87,7 @@ TASK: • Detect file conflicts (same file modified by multiple solutions)
 MODE: analysis
 CONTEXT: @.workflow/issues/solutions/**/*.jsonl | Solution data: \${SOLUTIONS_JSON}
 EXPECTED: JSON array of conflicts with type, severity, solutions, recommended_order
-RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Severity: high (API/data) > medium (file/dependency) > low (architecture)
+RULES: $PROTO | Severity: high (API/data) > medium (file/dependency) > low (architecture)
 " --tool gemini --mode analysis --cd .workflow/issues
 ```
 
311 .claude/commands/cli/codex-review.md Normal file
@@ -0,0 +1,311 @@
+---
+name: codex-review
+description: Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions
+argument-hint: "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]"
+allowed-tools: Bash(*), AskUserQuestion(*), Read(*)
+---
+
+# Codex Review Command (/cli:codex-review)
+
+## Overview
+Interactive code review command that invokes `codex review` via ccw cli endpoint with guided parameter selection.
+
+**Codex Review Parameters** (from `codex review --help`):
+| Parameter | Description |
+|-----------|-------------|
+| `[PROMPT]` | Custom review instructions (positional) |
+| `-c model=<model>` | Override model via config |
+| `--uncommitted` | Review staged, unstaged, and untracked changes |
+| `--base <BRANCH>` | Review changes against base branch |
+| `--commit <SHA>` | Review changes introduced by a commit |
+| `--title <TITLE>` | Optional commit title for review summary |
+
+## Prompt Template Format
+
+Follow the standard ccw cli prompt template:
+
+```
+PURPOSE: [what] + [why] + [success criteria] + [constraints/scope]
+TASK: • [step 1] • [step 2] • [step 3]
+MODE: review
+CONTEXT: [review target description] | Memory: [relevant context]
+EXPECTED: [deliverable format] + [quality criteria]
+RULES: $PROTO $TMPL | [focus constraints]
+```
+
+## EXECUTION INSTRUCTIONS - START HERE
+
+**When this command is triggered, follow these exact steps:**
+
+### Step 1: Parse Arguments
+
+Check if user provided arguments directly:
+- `--uncommitted` → Record target = uncommitted
+- `--base <branch>` → Record target = base, branch name
+- `--commit <sha>` → Record target = commit, sha value
+- `--model <model>` → Record model selection
+- `--title <title>` → Record title
+- Remaining text → Use as custom focus/prompt
+
+If no target specified → Continue to Step 2 for interactive selection.
+
+### Step 2: Interactive Parameter Selection
+
+**2.1 Review Target Selection**
+
+```javascript
+AskUserQuestion({
+  questions: [{
+    question: "What do you want to review?",
+    header: "Review Target",
+    options: [
+      { label: "Uncommitted changes (Recommended)", description: "Review staged, unstaged, and untracked changes" },
+      { label: "Compare to branch", description: "Review changes against a base branch (e.g., main)" },
+      { label: "Specific commit", description: "Review changes introduced by a specific commit" }
+    ],
+    multiSelect: false
+  }]
+})
+```
+
+**2.2 Branch/Commit Input (if needed)**
+
+If "Compare to branch" selected:
+```javascript
+AskUserQuestion({
+  questions: [{
+    question: "Which base branch to compare against?",
+    header: "Base Branch",
+    options: [
+      { label: "main", description: "Compare against main branch" },
+      { label: "master", description: "Compare against master branch" },
+      { label: "develop", description: "Compare against develop branch" }
+    ],
+    multiSelect: false
+  }]
+})
+```
+
+If "Specific commit" selected:
+- Run `git log --oneline -10` to show recent commits
+- Ask user to provide commit SHA or select from list
+
+**2.3 Model Selection (Optional)**
+
+```javascript
+AskUserQuestion({
+  questions: [{
+    question: "Which model to use for review?",
+    header: "Model",
+    options: [
+      { label: "Default", description: "Use codex default model (gpt-5.2)" },
+      { label: "o3", description: "OpenAI o3 reasoning model" },
+      { label: "gpt-4.1", description: "GPT-4.1 model" },
+      { label: "o4-mini", description: "OpenAI o4-mini (faster)" }
+    ],
+    multiSelect: false
+  }]
+})
+```
+
+**2.4 Review Focus Selection**
+
+```javascript
+AskUserQuestion({
+  questions: [{
+    question: "What should the review focus on?",
+    header: "Focus Area",
+    options: [
+      { label: "General review (Recommended)", description: "Comprehensive review: correctness, style, bugs, docs" },
+      { label: "Security focus", description: "Security vulnerabilities, input validation, auth issues" },
+      { label: "Performance focus", description: "Performance bottlenecks, complexity, resource usage" },
+      { label: "Code quality", description: "Readability, maintainability, SOLID principles" }
+    ],
+    multiSelect: false
+  }]
+})
+```
+
+### Step 3: Build Prompt and Command
+
+**3.1 Construct Prompt Based on Focus**
+
+**General Review Prompt:**
+```
+PURPOSE: Comprehensive code review to identify issues, improve quality, and ensure best practices; success = actionable feedback with clear priorities
+TASK: • Review code correctness and logic errors • Check coding standards and consistency • Identify potential bugs and edge cases • Evaluate documentation completeness
+MODE: review
+CONTEXT: {target_description} | Memory: Project conventions from CLAUDE.md
+EXPECTED: Structured review report with: severity levels (Critical/High/Medium/Low), file:line references, specific improvement suggestions, priority ranking
+RULES: $PROTO $TMPL | Focus on actionable feedback
+```
+
+**Security Focus Prompt:**
+```
+PURPOSE: Security-focused code review to identify vulnerabilities and security risks; success = all security issues documented with remediation
+TASK: • Scan for injection vulnerabilities (SQL, XSS, command) • Check authentication and authorization logic • Evaluate input validation and sanitization • Identify sensitive data exposure risks
+MODE: review
+CONTEXT: {target_description} | Memory: Security best practices, OWASP Top 10
+EXPECTED: Security report with: vulnerability classification, CVE references where applicable, remediation code snippets, risk severity matrix
+RULES: $PROTO $TMPL | Security-first analysis | Flag all potential vulnerabilities
+```
+
+**Performance Focus Prompt:**
+```
+PURPOSE: Performance-focused code review to identify bottlenecks and optimization opportunities; success = measurable improvement recommendations
+TASK: • Analyze algorithmic complexity (Big-O) • Identify memory allocation issues • Check for N+1 queries and blocking operations • Evaluate caching opportunities
+MODE: review
+CONTEXT: {target_description} | Memory: Performance patterns and anti-patterns
+EXPECTED: Performance report with: complexity analysis, bottleneck identification, optimization suggestions with expected impact, benchmark recommendations
+RULES: $PROTO $TMPL | Performance optimization focus
+```
+
+**Code Quality Focus Prompt:**
+```
+PURPOSE: Code quality review to improve maintainability and readability; success = cleaner, more maintainable code
+TASK: • Assess SOLID principles adherence • Identify code duplication and abstraction opportunities • Review naming conventions and clarity • Evaluate test coverage implications
+MODE: review
+CONTEXT: {target_description} | Memory: Project coding standards
+EXPECTED: Quality report with: principle violations, refactoring suggestions, naming improvements, maintainability score
+RULES: $PROTO $TMPL | Code quality and maintainability focus
+```
+
+**3.2 Build Target Description**
+
+Based on selection, set `{target_description}`:
+- Uncommitted: `Reviewing uncommitted changes (staged + unstaged + untracked)`
+- Base branch: `Reviewing changes against {branch} branch`
+- Commit: `Reviewing changes introduced by commit {sha}`
+
+### Step 4: Execute via CCW CLI
+
+Build and execute the ccw cli command:
+
+```bash
+# Base structure
+ccw cli -p "<PROMPT>" --tool codex --mode review [OPTIONS]
+```
+
+**Command Construction:**
+
+```bash
+# Variables from user selection
+TARGET_FLAG=""  # --uncommitted | --base <branch> | --commit <sha>
+MODEL_FLAG=""   # --model <model> (if not default)
+TITLE_FLAG=""   # --title "<title>" (if provided)
+
+# Build target flag
+if [ "$target" = "uncommitted" ]; then
+  TARGET_FLAG="--uncommitted"
+elif [ "$target" = "base" ]; then
+  TARGET_FLAG="--base $branch"
+elif [ "$target" = "commit" ]; then
+  TARGET_FLAG="--commit $sha"
+fi
+
+# Build model flag (only if not default)
+if [ "$model" != "default" ] && [ -n "$model" ]; then
+  MODEL_FLAG="--model $model"
+fi
+
+# Build title flag (if provided)
+if [ -n "$title" ]; then
+  TITLE_FLAG="--title \"$title\""
+fi
+
+# Execute
+ccw cli -p "$PROMPT" --tool codex --mode review $TARGET_FLAG $MODEL_FLAG $TITLE_FLAG
+```
+
+**Full Example Command:**
+
+```bash
+ccw cli -p "
+PURPOSE: Comprehensive code review to identify issues and improve quality; success = actionable feedback with priorities
+TASK: • Review correctness and logic • Check standards compliance • Identify bugs and edge cases • Evaluate documentation
+MODE: review
+CONTEXT: Reviewing uncommitted changes | Memory: Project conventions
+EXPECTED: Structured report with severity levels, file:line refs, improvement suggestions
+RULES: $PROTO $TMPL | Actionable feedback
+" --tool codex --mode review --uncommitted --rule analysis-review-code-quality
+```
+
+### Step 5: Execute and Display Results
+
+```bash
+Bash({
+  command: "ccw cli -p \"$PROMPT\" --tool codex --mode review $FLAGS",
+  run_in_background: true
+})
+```
+
+Wait for completion and display formatted results.
+
+## Quick Usage Examples
+
+### Direct Execution (No Interaction)
+
+```bash
+# Review uncommitted changes with default settings
+/cli:codex-review --uncommitted
+
+# Review against main branch
+/cli:codex-review --base main
+
+# Review specific commit
+/cli:codex-review --commit abc123
+
+# Review with custom model
+/cli:codex-review --uncommitted --model o3
+
+# Review with security focus
+/cli:codex-review --uncommitted security
+
+# Full options
+/cli:codex-review --base main --model o3 --title "Auth Feature" security
+```
+
+### Interactive Mode
+
+```bash
+# Start interactive selection (guided flow)
+/cli:codex-review
+```
+
+## Focus Area Mapping
+
+| User Selection | Prompt Focus | Key Checks |
+|----------------|--------------|------------|
+| General review | Comprehensive | Correctness, style, bugs, docs |
+| Security focus | Security-first | Injection, auth, validation, exposure |
+| Performance focus | Optimization | Complexity, memory, queries, caching |
+| Code quality | Maintainability | SOLID, duplication, naming, tests |
+
+## Error Handling
+
+### No Changes to Review
+```
+No changes found for review target. Suggestions:
+- For --uncommitted: Make some code changes first
+- For --base: Ensure branch exists and has diverged
+- For --commit: Verify commit SHA exists
+```
+
+### Invalid Branch
+```bash
+# Show available branches
+git branch -a --list | head -20
+```
+
+### Invalid Commit
+```bash
+# Show recent commits
+git log --oneline -10
+```
+
+## Integration Notes
+
+- Uses `ccw cli --tool codex --mode review` endpoint
+- Model passed via prompt (codex uses `-c model=` internally)
+- Target flags (`--uncommitted`, `--base`, `--commit`) passed through to codex
+- Prompt follows standard ccw cli template format for consistency
@@ -267,7 +267,7 @@ EXPECTED: JSON exploration plan following exploration-plan-schema.json:
 "estimated_iterations": N,
 "termination_conditions": [...]
 }
-RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Use ACE context to inform targets | Focus on actionable plan
+RULES: $PROTO | Use ACE context to inform targets | Focus on actionable plan
 `;
 
 // Step 3: Execute Gemini planning
@@ -131,7 +131,7 @@ TASK: • Analyze issue titles/tags semantically • Identify functional/archite
 MODE: analysis
 CONTEXT: Issue metadata only
 EXPECTED: JSON with groups array, each containing max 4 issue_ids, theme, rationale
-RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Each issue in exactly one group | Max 4 issues per group | Balance group sizes
+RULES: $PROTO | Each issue in exactly one group | Max 4 issues per group | Balance group sizes
 
 INPUT:
 ${JSON.stringify(issueSummaries, null, 2)}
@@ -223,8 +223,8 @@ TASK:
 MODE: analysis
 CONTEXT: @src/**/*.controller.ts @src/**/*.routes.ts @src/**/*.dto.ts @src/**/middleware/**/*
 EXPECTED: JSON format API structure analysis report with modules, endpoints, security schemes, and error codes
-RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
-" --tool gemini --mode analysis --cd {project_root}
+RULES: $PROTO | Strict RESTful standards | Identify all public endpoints | Document output language: {lang}
+" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
 ```
 
 **Update swagger-planning-data.json** with analysis results:
@@ -387,7 +387,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
 "step": 1,
 "title": "Generate OpenAPI spec file",
 "description": "Create complete swagger.yaml specification file",
-"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | Use {lang} for all descriptions | Strict RESTful standards",
+"cli_prompt": "PURPOSE: Generate OpenAPI 3.0.3 specification file from analyzed API structure\nTASK:\n• Define openapi version: 3.0.3\n• Define info: title, description, version, contact, license\n• Define servers: development, staging, production environments\n• Define tags: organized by business modules\n• Define paths: all API endpoints with complete specifications\n• Define components: schemas, securitySchemes, parameters, responses\nMODE: write\nCONTEXT: @[api_analysis]\nEXPECTED: Complete swagger.yaml file following OpenAPI 3.0.3 specification\nRULES: $PROTO $TMPL | Use {lang} for all descriptions | Strict RESTful standards\n--rule documentation-swagger-api",
 "output": "swagger.yaml"
 }
 ],
@@ -429,7 +429,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
 {
 "step": 1,
 "title": "Generate authentication documentation",
-"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include code examples | Clear step-by-step instructions",
+"cli_prompt": "PURPOSE: Generate comprehensive authentication documentation for API security\nTASK:\n• Document authentication mechanism: JWT Bearer Token\n• Explain header format: Authorization: Bearer <token>\n• Describe token lifecycle: acquisition, refresh, expiration handling\n• Define permission levels: public, user, admin, super_admin\n• Document authentication failure responses: 401/403 error handling\nMODE: write\nCONTEXT: @[auth_patterns] @src/**/auth/**/* @src/**/guard/**/*\nEXPECTED: Complete authentication guide in {lang}\nRULES: $PROTO | Include code examples | Clear step-by-step instructions\n--rule development-feature",
 "output": "{auth_doc_name}"
 }
 ],
@@ -464,7 +464,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
 {
 "step": 1,
 "title": "Generate error code specification document",
-"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include response examples | Clear categorization",
+"cli_prompt": "PURPOSE: Generate comprehensive error code specification for consistent API error handling\nTASK:\n• Define error response format: {code, message, details, timestamp}\n• Document authentication errors (AUTH_xxx): 401/403 series\n• Document parameter errors (PARAM_xxx): 400 series\n• Document business errors (BIZ_xxx): business logic errors\n• Document system errors (SYS_xxx): 500 series\n• For each error code: HTTP status, error message, possible causes, resolution suggestions\nMODE: write\nCONTEXT: @src/**/*.exception.ts @src/**/*.filter.ts\nEXPECTED: Complete error code specification in {lang} with tables and examples\nRULES: $PROTO | Include response examples | Clear categorization\n--rule development-feature",
 "output": "{error_doc_name}"
 }
 ],
@@ -523,7 +523,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
 "step": 1,
 "title": "Generate module API documentation",
 "description": "Generate complete API documentation for ${module_name}",
-"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/documentation/swagger-api.txt) | RESTful standards | Include all response codes",
+"cli_prompt": "PURPOSE: Generate complete RESTful API documentation for ${module_name} module\nTASK:\n• Create module overview: purpose, use cases, prerequisites\n• Generate endpoint index: grouped by functionality\n• For each endpoint document:\n - Functional description: purpose and business context\n - Request method: GET/POST/PUT/DELETE\n - URL path: complete API path\n - Request headers: Authorization and other required headers\n - Path parameters: {id} and other path variables\n - Query parameters: pagination, filters, etc.\n - Request body: JSON Schema format\n - Response body: success and error responses\n - Field description table: type, required, example, description\n• Add usage examples: cURL, JavaScript, Python\n• Add version info: v1.0.0, last updated date\nMODE: write\nCONTEXT: @[module_endpoints] @[source_code]\nEXPECTED: Complete module API documentation in {lang} with all endpoints fully documented\nRULES: $PROTO $TMPL | RESTful standards | Include all response codes\n--rule documentation-swagger-api",
 "output": "${module_doc_name}"
 }
 ],
@@ -559,7 +559,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
 {
 "step": 1,
 "title": "Generate API overview",
-"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Clear structure | Quick start focus",
+"cli_prompt": "PURPOSE: Generate API overview document with navigation and quick start guide\nTASK:\n• Create introduction: system features, tech stack, version\n• Write quick start guide: authentication, first request example\n• Build module navigation: categorized links to all modules\n• Document environment configuration: development, staging, production\n• List SDKs and tools: client libraries, Postman collection\nMODE: write\nCONTEXT: @[all_module_docs] @.workflow/docs/${project_name}/api/swagger.yaml\nEXPECTED: Complete API overview in {lang} with navigation links\nRULES: $PROTO | Clear structure | Quick start focus\n--rule development-feature",
 "output": "README.md"
 }
 ],
@@ -602,7 +602,7 @@ bash(cat ${session_dir}/.process/swagger-planning-data.json | jq -r '.api_struct
 {
 "step": 1,
 "title": "Generate test report",
-"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nRULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) | Include test cases | Clear pass/fail status",
+"cli_prompt": "PURPOSE: Generate comprehensive API test validation report\nTASK:\n• Document test environment configuration\n• Calculate endpoint coverage statistics\n• Report test results: pass/fail counts\n• Document boundary tests: parameter limits, null values, special characters\n• Document exception tests: auth failures, permission denied, resource not found\n• List issues found with recommendations\nMODE: write\nCONTEXT: @[swagger_spec]\nEXPECTED: Complete test report in {lang} with detailed results\nRULES: $PROTO | Include test cases | Clear pass/fail status\n--rule development-tests",
 "output": "{test_doc_name}"
 }
 ],
@@ -147,8 +147,8 @@ You are generating path-conditional rules for Claude Code.

 ## Instructions

-Read the agent prompt template for detailed instructions:
-$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)
+Read the agent prompt template for detailed instructions.
+Use --rule rules-tech-rules-agent-prompt to load the template automatically.

 ## Execution Steps

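The `--rule` lookup above depends on the new flat template layout (`prompts/<category>-<function>.txt`) and the fuzzy matching described in the commit message, where `--rule 02-review-architecture` resolves to `analysis-review-architecture.txt`. A minimal bash sketch of that resolution follows; the real logic lives in `template-discovery.ts`, so `resolve_rule` and the exact matching order here are illustrative assumptions, not the CLI's code:

```shell
#!/usr/bin/env bash
# Sketch of flat-template fuzzy lookup (assumed behavior). A --rule argument
# matches a template when the file name equals "<arg>.txt", equals the
# argument with its numeric prefix stripped, or ends with "-<stripped>.txt".
resolve_rule() {
  local arg="$1" dir="$2" stripped f base
  stripped="${arg#[0-9][0-9]-}"   # "02-review-architecture" -> "review-architecture"
  for f in "$dir"/*.txt; do
    base="$(basename "$f" .txt)"
    if [ "$base" = "$arg" ] || [ "$base" = "$stripped" ] || \
       [[ "$base" == *"-$stripped" ]]; then
      printf '%s\n' "$f"
      return 0
    fi
  done
  return 1
}
```

With this sketch, both the exact name `development-feature` and the numeric shorthand `02-review-architecture` resolve to a single flat file, which is the migration the commit performs.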
@@ -81,6 +81,7 @@ AskUserQuestion({
 options: [
 { label: "Skip", description: "No review" },
 { label: "Gemini Review", description: "Gemini CLI tool" },
+{ label: "Codex Review", description: "codex review --uncommitted" },
 { label: "Agent Review", description: "Current agent review" }
 ]
 }
@@ -485,7 +486,7 @@ TASK: • Verify plan acceptance criteria fulfillment • Analyze code quality
 MODE: analysis
 CONTEXT: @**/* @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against plan requirements
 EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from plan.json tasks.
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on plan acceptance criteria and plan adherence | analysis=READ-ONLY
+RULES: $PROTO $TMPL | Focus on plan acceptance criteria and plan adherence | analysis=READ-ONLY
 ```

 **Tool-Specific Execution** (Apply shared prompt template above):
@@ -504,8 +505,9 @@ ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analys
 ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
 # Same prompt as Gemini, different execution engine

-# Method 4: Codex Review (autonomous)
-ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
+# Method 4: Codex Review (git-aware)
+ccw cli -p "[Shared Prompt Template with artifacts]" --tool codex --mode review --uncommitted
+# Reviews uncommitted changes against plan acceptance criteria
 ```

 **Multi-Round Review with Fixed IDs**:
@@ -167,7 +167,7 @@ TASK: ${tasks.map(t => `• ${t}`).join(' ')}
 MODE: analysis
 CONTEXT: @**/*
 EXPECTED: ${expected}
-RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | ${rules}
+RULES: $PROTO | ${rules}
 `
 }

@@ -373,7 +373,7 @@ TASK: ${extractedTasks.join(' • ')}
 MODE: write
 CONTEXT: @${affectedFiles.join(' @')}
 EXPECTED: Working implementation with all changes applied
-RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md)
+RULES: $PROTO
 " --tool ${executionTool.name} --mode write`,
 run_in_background: false
 })
@@ -497,6 +497,7 @@ ${plan.tasks.map((t, i) => `${i+1}. ${t.title} (${t.file})`).join('\n')}

 **Step 4.2: Collect Confirmation**
 ```javascript
+// Note: Execution "Other" option allows specifying CLI tools from ~/.claude/cli-tools.json
 AskUserQuestion({
 questions: [
 {
@@ -524,8 +525,9 @@ AskUserQuestion({
 header: "Review",
 multiSelect: false,
 options: [
-{ label: "Gemini Review", description: "Gemini CLI" },
-{ label: "Agent Review", description: "@code-reviewer" },
+{ label: "Gemini Review", description: "Gemini CLI review" },
+{ label: "Codex Review", description: "codex review --uncommitted" },
+{ label: "Agent Review", description: "@code-reviewer agent" },
 { label: "Skip", description: "No review" }
 ]
 }
@@ -154,8 +154,8 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
 - Validation of exploration conflict_indicators
 - ModuleOverlap conflicts with overlap_analysis
 - Targeted clarification questions
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
-" --tool gemini --mode analysis --cd {project_root}
+RULES: $PROTO $TMPL | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
+" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}

 Fallback: Qwen (same prompt) → Claude (manual analysis)

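The `$PROTO` and `$TMPL` variables referenced in the RULES line are assumed to be populated by `--rule` before the prompt is expanded. A rough sketch of that wiring follows; the paths and the category-to-protocol mapping are assumptions for illustration, not the CLI's verbatim logic:

```shell
#!/usr/bin/env bash
# Hypothetical loader: --rule analysis-code-patterns would read the flat
# template prompts/analysis-code-patterns.txt, plus the protocol derived
# from its category prefix (analysis -> protocols/analysis-protocol.md),
# and export both so $TMPL and $PROTO expand inside the prompt.
load_rule() {
  local name="$1" base="${2:-$HOME/.claude/workflows/cli-templates}"
  export TMPL="$(cat "$base/prompts/$name.txt")"
  export PROTO="$(cat "$base/protocols/${name%%-*}-protocol.md")"
}
```

Under this sketch, the old `$(cat ...)` command substitutions become simple variable expansions, which is why the updated files warn against escaping `$` in the prompt string.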
@@ -90,7 +90,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t

 ## EXECUTION STEPS
 1. Execute Gemini analysis:
-ccw cli -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process
+ccw cli -p "..." --tool gemini --mode write --rule test-test-concept-analysis --cd .workflow/active/{test_session_id}/.process

 2. Generate TEST_ANALYSIS_RESULTS.md:
 Synthesize gemini-test-analysis.md into standardized format for task generation
@@ -8,6 +8,44 @@ allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), G

 Stateless workflow coordinator that automatically selects the optimal workflow based on task intent.

+## Workflow System Overview
+
+CCW provides two workflow systems, **Main Workflow** and **Issue Workflow**, which together cover the full software development lifecycle.
+
+```
+┌───────────────────────────────────────────────────────────────────────────┐
+│                               Main Workflow                               │
+│                                                                           │
+│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐   ┌─────────────┐    │
+│  │   Level 1   │ → │   Level 2   │ → │   Level 3   │ → │   Level 4   │    │
+│  │    Rapid    │   │ Lightweight │   │  Standard   │   │ Brainstorm  │    │
+│  │             │   │             │   │             │   │             │    │
+│  │ lite-lite-  │   │ lite-plan   │   │ plan        │   │ brainstorm  │    │
+│  │ lite        │   │ lite-fix    │   │ tdd-plan    │   │ :auto-      │    │
+│  │             │   │ multi-cli-  │   │ test-fix-   │   │ parallel    │    │
+│  │             │   │ plan        │   │ gen         │   │   ↓         │    │
+│  │             │   │             │   │             │   │ plan        │    │
+│  └─────────────┘   └─────────────┘   └─────────────┘   └─────────────┘    │
+│                                                                           │
+│  Complexity: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶   │
+│  Low                                                               High   │
+└───────────────────────────────────────────────────────────────────────────┘
+                                      │
+                                      │ After development
+                                      ▼
+┌───────────────────────────────────────────────────────────────────────────┐
+│                              Issue Workflow                               │
+│                                                                           │
+│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐                   │
+│  │  Accumulate  │ → │     Plan     │ → │   Execute    │                   │
+│  │  Discover &  │   │    Batch     │   │   Parallel   │                   │
+│  │   Collect    │   │   Planning   │   │  Execution   │                   │
+│  └──────────────┘   └──────────────┘   └──────────────┘                   │
+│                                                                           │
+│  Supplementary role: Maintain main branch stability, worktree isolation   │
+└───────────────────────────────────────────────────────────────────────────┘
+```
+
 ## Architecture

 ```
@@ -17,7 +55,7 @@ allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), G
 │ Phase 1    │ Input Analysis (rule-based, fast path)       │
 │ Phase 1.5  │ CLI Classification (semantic, smart path)    │
 │ Phase 1.75 │ Requirement Clarification (clarity < 2)      │
-│ Phase 2    │ Chain Selection (intent → workflow)          │
+│ Phase 2    │ Level Selection (intent → level → workflow)  │
 │ Phase 2.5  │ CLI Action Planning (high complexity)        │
 │ Phase 3    │ User Confirmation (optional)                 │
 │ Phase 4    │ TODO Tracking Setup                          │
@@ -25,23 +63,79 @@ allowed-tools: Task(*), SlashCommand(*), AskUserQuestion(*), Read(*), Bash(*), G
 └─────────────────────────────────────────────────────────────────┘
 ```

+## Level Quick Reference
+
+| Level | Name | Workflows | Artifacts | Execution |
+|-------|------|-----------|-----------|-----------|
+| **1** | Rapid | `lite-lite-lite` | None | Direct execute |
+| **2** | Lightweight | `lite-plan`, `lite-fix`, `multi-cli-plan` | Memory/Lightweight files | → `lite-execute` |
+| **3** | Standard | `plan`, `tdd-plan`, `test-fix-gen` | Session persistence | → `execute` / `test-cycle-execute` |
+| **4** | Brainstorm | `brainstorm:auto-parallel` → `plan` | Multi-role analysis + Session | → `execute` |
+| **-** | Issue | `discover` → `plan` → `queue` → `execute` | Issue records | Worktree isolation (optional) |
+
+## Workflow Selection Decision Tree
+
+```
+Start
+│
+├─ Is it post-development maintenance?
+│   ├─ Yes → Issue Workflow
+│   └─ No ↓
+│
+├─ Are requirements clear?
+│   ├─ Uncertain → Level 4 (brainstorm:auto-parallel)
+│   └─ Clear ↓
+│
+├─ Need persistent Session?
+│   ├─ Yes → Level 3 (plan / tdd-plan / test-fix-gen)
+│   └─ No ↓
+│
+├─ Need multi-perspective / solution comparison?
+│   ├─ Yes → Level 2 (multi-cli-plan)
+│   └─ No ↓
+│
+├─ Is it a bug fix?
+│   ├─ Yes → Level 2 (lite-fix)
+│   └─ No ↓
+│
+├─ Need planning?
+│   ├─ Yes → Level 2 (lite-plan)
+│   └─ No → Level 1 (lite-lite-lite)
+```
+
 ## Intent Classification

-### Priority Order
+### Priority Order (with Level Mapping)

-| Priority | Intent | Patterns | Flow |
-|----------|--------|----------|------|
-| 1 | bugfix/hotfix | `urgent,production,critical` + bug | `bugfix.hotfix` |
-| 1 | bugfix | `fix,bug,error,crash,fail` | `bugfix.standard` |
-| 2 | issue batch | `issues,batch` + `fix,resolve` | `issue` |
-| 3 | exploration | `不确定,explore,研究,what if` | `full` |
-| 3 | multi-perspective | `多视角,权衡,比较方案,cross-verify` | `multi-cli-plan` |
-| 4 | quick-task | `快速,简单,small,quick` + feature | `lite-lite-lite` |
-| 5 | ui design | `ui,design,component,style` | `ui` |
-| 6 | tdd | `tdd,test-driven,先写测试` | `tdd` |
-| 7 | review | `review,审查,code review` | `review-fix` |
-| 8 | documentation | `文档,docs,readme` | `docs` |
-| 99 | feature | complexity-based | `rapid`/`coupled` |
+| Priority | Intent | Patterns | Level | Flow |
+|----------|--------|----------|-------|------|
+| 1 | bugfix/hotfix | `urgent,production,critical` + bug | L2 | `bugfix.hotfix` |
+| 1 | bugfix | `fix,bug,error,crash,fail` | L2 | `bugfix.standard` |
+| 2 | issue batch | `issues,batch` + `fix,resolve` | Issue | `issue` |
+| 3 | exploration | `不确定,explore,研究,what if` | L4 | `full` |
+| 3 | multi-perspective | `多视角,权衡,比较方案,cross-verify` | L2 | `multi-cli-plan` |
+| 4 | quick-task | `快速,简单,small,quick` + feature | L1 | `lite-lite-lite` |
+| 5 | ui design | `ui,design,component,style` | L3/L4 | `ui` |
+| 6 | tdd | `tdd,test-driven,先写测试` | L3 | `tdd` |
+| 7 | test-fix | `测试失败,test fail,fix test` | L3 | `test-fix-gen` |
+| 8 | review | `review,审查,code review` | L3 | `review-fix` |
+| 9 | documentation | `文档,docs,readme` | L2 | `docs` |
+| 99 | feature | complexity-based | L2/L3 | `rapid`/`coupled` |
+
+### Quick Selection Guide
+
+| Scenario | Recommended Workflow | Level |
+|----------|---------------------|-------|
+| Quick fixes, config adjustments | `lite-lite-lite` | 1 |
+| Clear single-module features | `lite-plan → lite-execute` | 2 |
+| Bug diagnosis and fix | `lite-fix` | 2 |
+| Production emergencies | `lite-fix --hotfix` | 2 |
+| Technology selection, solution comparison | `multi-cli-plan → lite-execute` | 2 |
+| Multi-module changes, refactoring | `plan → verify → execute` | 3 |
+| Test-driven development | `tdd-plan → execute → tdd-verify` | 3 |
+| Test failure fixes | `test-fix-gen → test-cycle-execute` | 3 |
+| New features, architecture design | `brainstorm:auto-parallel → plan → execute` | 4 |
+| Post-development issue fixes | Issue Workflow | - |
+
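The priority-ordered intent table can be approximated as first-match-wins keyword matching. A deliberately simplified bash sketch follows (a few keywords per intent only; the coordinator's actual classifier also weighs complexity and uses CLI-based semantic classification):

```shell
#!/usr/bin/env bash
# Illustrative intent classifier: the case branches mirror the priority
# order (hotfix > bugfix > exploration > tdd > quick-task > feature),
# so the first matching pattern decides the level and flow.
classify_intent() {
  local text="$1"
  case "$text" in
    *urgent*|*production*|*critical*) echo "L2 bugfix.hotfix" ;;
    *fix*|*bug*|*error*|*crash*)      echo "L2 bugfix.standard" ;;
    *explore*|*"what if"*)            echo "L4 full" ;;
    *tdd*|*test-driven*)              echo "L3 tdd" ;;
    *quick*|*simple*)                 echo "L1 lite-lite-lite" ;;
    *)                                echo "L2/L3 feature" ;;
  esac
}
```

Because earlier branches shadow later ones, ordering the patterns by priority is what makes "urgent production crash" classify as a hotfix rather than a plain bugfix.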
 ### Complexity Assessment

@@ -214,24 +308,100 @@ CLI 可返回建议:`use_default` | `modify` (调整步骤) | `upgrade` (升
 ## Workflow Flow Details

-### Issue Workflow (Two-Phase Lifecycle)
-
-The Issue workflow is designed as a two-phase lifecycle: issues accumulate during project iteration and are resolved in concentrated batches.
-
-**Phase 1: Accumulation**
-- Triggers: post-task review, code-review findings, test failures
-- Activities: requirement expansion, bug analysis, test coverage, security review
-- Commands: `/issue:discover`, `/issue:discover-by-prompt`, `/issue:new`
-
-**Phase 2: Batch Resolution**
-- Trigger: concentrated processing once enough issues accumulate
-- Flow: plan → queue → execute
-- Commands: `/issue:plan --all-pending` → `/issue:queue` → `/issue:execute`
+### Issue Workflow (Supplement to the Main Workflow)
+
+The Issue Workflow is a **supplementary mechanism** to the Main Workflow, focused on continuous post-development maintenance.
+
+#### Design Rationale
+
+| Aspect | Main Workflow | Issue Workflow |
+|--------|---------------|----------------|
+| **Purpose** | Primary development cycle | Post-development maintenance |
+| **Timing** | Feature development phase | After the main workflow completes |
+| **Scope** | Full feature implementation | Targeted fixes/enhancements |
+| **Parallelism** | Dependency analysis → parallel agents | Worktree isolation (optional) |
+| **Branching model** | Work on the current branch | Can use isolated worktrees |
+
+#### Why Doesn't the Main Workflow Use Worktrees Automatically?
+
+**Dependency analysis already solves the parallelism problem**:
+1. The planning phase (`/workflow:plan`) performs dependency analysis
+2. Task dependencies and the critical path are identified automatically
+3. Tasks are partitioned into **parallel groups** (independent tasks) and **serial chains** (dependent tasks)
+4. Agents execute independent tasks in parallel, with no need for filesystem isolation
+
+#### Two-Phase Lifecycle
+
 ```
-task done → discover → accumulate issues → ... → plan all → queue → parallel execute
-     ↑                                                          ↓
-     └───────────────────────── iteration loop ─────────────────┘
+┌─────────────────────────────────────────────────────────────────────┐
+│                        Phase 1: Accumulation                        │
+│                                                                     │
+│  Triggers: post-task review, code-review findings, test failures    │
+│                                                                     │
+│  ┌────────────┐   ┌────────────┐   ┌────────────┐                   │
+│  │  discover  │   │  discover- │   │    new     │                   │
+│  │ Auto-find  │   │  by-prompt │   │   Manual   │                   │
+│  └────────────┘   └────────────┘   └────────────┘                   │
+│                                                                     │
+│  Continuously accumulate issues into the pending queue              │
+└─────────────────────────────────────────────────────────────────────┘
+                                  │
+                                  │ once enough accumulate
+                                  ▼
+┌─────────────────────────────────────────────────────────────────────┐
+│                      Phase 2: Batch Resolution                      │
+│                                                                     │
+│  ┌────────────┐      ┌────────────┐      ┌────────────┐             │
+│  │    plan    │ ──→  │   queue    │ ──→  │  execute   │             │
+│  │   --all-   │      │  Optimize  │      │  Parallel  │             │
+│  │  pending   │      │   order    │      │ execution  │             │
+│  └────────────┘      └────────────┘      └────────────┘             │
+│                                                                     │
+│  Supports worktree isolation to keep the main branch stable         │
+└─────────────────────────────────────────────────────────────────────┘
+```
+
+#### Collaboration with the Main Workflow
+
+```
+Development iteration loop
+┌─────────────────────────────────────────────────────────────────────┐
+│                                                                     │
+│  ┌─────────┐                                       ┌─────────┐      │
+│  │ Feature │ ──→ Main Workflow ──→ Done ─────────→ │ Review  │      │
+│  │ Request │     (Level 1-4)                       └────┬────┘      │
+│  └─────────┘                                            │           │
+│       ▲                                                 │ issues    │
+│       │                                                 ▼ found     │
+│       │                                            ┌─────────┐      │
+│  continue                                          │  Issue  │      │
+│  new feature                                       │ Workflow│      │
+│       │                                            └────┬────┘      │
+│       │              ┌──────────────────────────────────┘           │
+│       │              │ fixes done                                   │
+│       │              ▼                                              │
+│  ┌────┴────┐◀────────                                               │
+│  │  Main   │  Merge                                                 │
+│  │ Branch  │  back                                                  │
+│  └─────────┘                                                        │
+│                                                                     │
+└─────────────────────────────────────────────────────────────────────┘
+```
+
+#### Command List
+
+**Accumulation phase:**
+```bash
+/issue:discover            # Multi-perspective auto-discovery
+/issue:discover-by-prompt  # Prompt-based discovery
+/issue:new                 # Manual creation
+```
+
+**Batch resolution phase:**
+```bash
+/issue:plan --all-pending  # Batch-plan all pending issues
+/issue:queue               # Generate optimized execution queue
+/issue:execute             # Parallel execution
 ```

 ### lite-lite-lite vs multi-cli-plan
@@ -73,10 +73,37 @@
 },

 "flows": {
+"_level_guide": {
+"L1": "Rapid - No artifacts, direct execution",
+"L2": "Lightweight - Memory/lightweight files, → lite-execute",
+"L3": "Standard - Session persistence, → execute/test-cycle-execute",
+"L4": "Brainstorm - Multi-role analysis + Session, → execute"
+},
+"lite-lite-lite": {
+"name": "Ultra-Rapid Execution",
+"level": "L1",
+"description": "零文件 + 自动CLI选择 + 语义描述 + 直接执行",
+"complexity": ["low"],
+"artifacts": "none",
+"steps": [
+{ "phase": "clarify", "description": "需求澄清 (AskUser if needed)" },
+{ "phase": "auto-select", "description": "任务分析 → 自动选择CLI组合" },
+{ "phase": "multi-cli", "description": "并行多CLI分析" },
+{ "phase": "decision", "description": "展示结果 → AskUser决策" },
+{ "phase": "execute", "description": "直接执行 (无中间文件)" }
+],
+"cli_hints": {
+"analysis": { "tool": "auto", "mode": "analysis", "parallel": true },
+"execution": { "tool": "auto", "mode": "write" }
+},
+"estimated_time": "10-30 min"
+},
 "rapid": {
 "name": "Rapid Iteration",
-"description": "多模型协作分析 + 直接执行",
+"level": "L2",
+"description": "内存规划 + 直接执行",
 "complexity": ["low", "medium"],
+"artifacts": "memory://plan",
 "steps": [
 { "command": "/workflow:lite-plan", "optional": false, "auto_continue": true },
 { "command": "/workflow:lite-execute", "optional": false }
@@ -87,107 +114,12 @@
 },
 "estimated_time": "15-45 min"
 },
-"full": {
-"name": "Full Exploration",
-"description": "头脑风暴 + 规划 + 执行",
-"complexity": ["medium", "high"],
-"steps": [
-{ "command": "/workflow:brainstorm:auto-parallel", "optional": false, "confirm_before": true },
-{ "command": "/workflow:plan", "optional": false },
-{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
-{ "command": "/workflow:execute", "optional": false }
-],
-"cli_hints": {
-"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
-"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
-},
-"estimated_time": "1-3 hours"
-},
-"coupled": {
-"name": "Coupled Planning",
-"description": "完整规划 + 验证 + 执行",
-"complexity": ["high"],
-"steps": [
-{ "command": "/workflow:plan", "optional": false },
-{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
-{ "command": "/workflow:execute", "optional": false },
-{ "command": "/workflow:review", "optional": true }
-],
-"cli_hints": {
-"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
-"execution": { "tool": "codex", "mode": "write", "trigger": "always" }
-},
-"estimated_time": "2-4 hours"
-},
-"bugfix": {
-"name": "Bug Fix",
-"description": "智能诊断 + 修复",
-"complexity": ["low", "medium"],
-"variants": {
-"standard": [{ "command": "/workflow:lite-fix", "optional": false }],
-"hotfix": [{ "command": "/workflow:lite-fix --hotfix", "optional": false }]
-},
-"cli_hints": {
-"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
-"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
-},
-"estimated_time": "10-30 min"
-},
-"issue": {
-"name": "Issue Lifecycle",
-"description": "发现积累 → 批量规划 → 队列优化 → 并行执行",
-"complexity": ["medium", "high"],
-"phases": {
-"accumulation": {
-"description": "项目迭代中持续发现和积累issue",
-"commands": ["/issue:discover", "/issue:new"],
-"trigger": "post-task, code-review, test-failure"
-},
-"resolution": {
-"description": "集中规划和执行积累的issue",
-"steps": [
-{ "command": "/issue:plan --all-pending", "optional": false },
-{ "command": "/issue:queue", "optional": false },
-{ "command": "/issue:execute", "optional": false }
-]
-}
-},
-"cli_hints": {
-"discovery": { "tool": "gemini", "mode": "analysis", "trigger": "perspective_analysis", "parallel": true },
-"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
-"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always" }
-},
-"estimated_time": "1-4 hours"
-},
-"lite-lite-lite": {
-"name": "Ultra-Lite Multi-CLI",
-"description": "零文件 + 自动CLI选择 + 语义描述 + 直接执行",
-"complexity": ["low", "medium"],
-"steps": [
-{ "phase": "clarify", "description": "需求澄清 (AskUser if needed)" },
-{ "phase": "auto-select", "description": "任务分析 → 自动选择CLI组合" },
-{ "phase": "multi-cli", "description": "并行多CLI分析" },
-{ "phase": "decision", "description": "展示结果 → AskUser决策" },
-{ "phase": "execute", "description": "直接执行 (无中间文件)" }
-],
-"vs_multi_cli_plan": {
-"artifacts": "None vs IMPL_PLAN.md + plan.json + synthesis.json",
-"session": "Stateless vs Persistent",
-"cli_selection": "Auto-select based on task analysis vs Config-driven",
-"iteration": "Via AskUser vs Via rounds/synthesis",
-"execution": "Direct vs Via lite-execute",
-"best_for": "Quick fixes, simple features vs Complex multi-step implementations"
-},
-"cli_hints": {
-"analysis": { "tool": "auto", "mode": "analysis", "parallel": true },
-"execution": { "tool": "auto", "mode": "write" }
-},
-"estimated_time": "10-30 min"
-},
 "multi-cli-plan": {
 "name": "Multi-CLI Collaborative Planning",
+"level": "L2",
 "description": "ACE上下文 + 多CLI协作分析 + 迭代收敛 + 计划生成",
 "complexity": ["medium", "high"],
+"artifacts": ".workflow/.multi-cli-plan/{session}/",
 "steps": [
 { "command": "/workflow:multi-cli-plan", "optional": false, "phases": [
 "context_gathering: ACE语义搜索",
@@ -210,28 +142,154 @@
"discussion": { "tools": ["gemini", "codex", "claude"], "mode": "analysis", "parallel": true },
"planning": { "tool": "gemini", "mode": "analysis" }
},
"estimated_time": "30-90 min"
},
"coupled": {
"name": "Standard Planning",
"level": "L3",
"description": "Full planning + verification + execution",
"complexity": ["medium", "high"],
"artifacts": ".workflow/active/{session}/",
"steps": [
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": false, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false },
{ "command": "/workflow:review", "optional": true }
],
"cli_hints": {
"pre_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"execution": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "2-4 hours"
},
"full": {
"name": "Full Exploration (Brainstorm)",
"level": "L4",
"description": "Brainstorming + planning + execution",
"complexity": ["high"],
"artifacts": ".workflow/active/{session}/.brainstorming/",
"steps": [
{ "command": "/workflow:brainstorm:auto-parallel", "optional": false, "confirm_before": true },
{ "command": "/workflow:plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false }
],
"cli_hints": {
"role_analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"execution": { "tool": "codex", "mode": "write", "trigger": "task_count >= 3" }
},
"estimated_time": "1-3 hours"
},
"bugfix": {
"name": "Bug Fix",
"level": "L2",
"description": "Intelligent diagnosis + fix (5 phases)",
"complexity": ["low", "medium"],
"artifacts": ".workflow/.lite-fix/{bug-slug}-{date}/",
"variants": {
"standard": [{ "command": "/workflow:lite-fix", "optional": false }],
"hotfix": [{ "command": "/workflow:lite-fix --hotfix", "optional": false }]
},
"phases": [
"Phase 1: Bug Analysis & Diagnosis (severity pre-assessment)",
"Phase 2: Clarification (optional, AskUserQuestion)",
"Phase 3: Fix Planning (Low/Medium → Claude, High/Critical → cli-lite-planning-agent)",
"Phase 4: Confirmation & Selection",
"Phase 5: Execute (→ lite-execute --mode bugfix)"
],
"cli_hints": {
"diagnosis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"fix": { "tool": "codex", "mode": "write", "trigger": "severity >= medium" }
},
"estimated_time": "10-30 min"
},
"issue": {
"name": "Issue Lifecycle",
"level": "Supplementary",
"description": "Discovery & accumulation → batch planning → queue optimization → parallel execution (supplement to the main workflow)",
"complexity": ["medium", "high"],
"artifacts": ".workflow/.issues/",
"purpose": "Post-development continuous maintenance, maintain main branch stability",
"phases": {
"accumulation": {
"description": "Continuously discover and accumulate issues during project iteration",
"commands": ["/issue:discover", "/issue:discover-by-prompt", "/issue:new"],
"trigger": "post-task, code-review, test-failure"
},
"resolution": {
"description": "Centrally plan and execute accumulated issues",
"steps": [
{ "command": "/issue:plan --all-pending", "optional": false },
{ "command": "/issue:queue", "optional": false },
{ "command": "/issue:execute", "optional": false }
]
}
},
"worktree_support": {
"description": "Optional worktree isolation to keep the main branch stable",
"use_case": "Issue fixes after main development is complete"
},
"cli_hints": {
"discovery": { "tool": "gemini", "mode": "analysis", "trigger": "perspective_analysis", "parallel": true },
"solution_generation": { "tool": "gemini", "mode": "analysis", "trigger": "always", "parallel": true },
"batch_execution": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "1-4 hours"
},
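Each flow entry above shares the same shape (name, level, steps, cli_hints, estimated_time). A small type sketch makes that contract explicit; the interface names and fields below mirror the JSON but are illustrative, not the actual ccw schema:

```typescript
// Illustrative types for the flow registry entries above; not the actual
// ccw schema definitions.
interface FlowStep { command: string; optional: boolean; auto_continue?: boolean; }
interface Flow {
  name: string;
  level: string;
  description: string;
  complexity: string[];
  steps?: FlowStep[];
  estimated_time: string;
}

// Collect the commands that always run for a flow (optional steps skipped).
function requiredCommands(flow: Flow): string[] {
  return (flow.steps ?? []).filter(s => !s.optional).map(s => s.command);
}

const coupled: Flow = {
  name: "Standard Planning",
  level: "L3",
  description: "Full planning + verification + execution",
  complexity: ["medium", "high"],
  steps: [
    { command: "/workflow:plan", optional: false },
    { command: "/workflow:action-plan-verify", optional: false, auto_continue: true },
    { command: "/workflow:execute", optional: false },
    { command: "/workflow:review", optional: true },
  ],
  estimated_time: "2-4 hours",
};

console.log(requiredCommands(coupled)); // → the three non-optional commands
```

A dispatcher can use `requiredCommands` to decide which steps need confirmation versus auto-continue.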
"tdd": {
"name": "Test-Driven Development",
"level": "L3",
"description": "TDD planning + execution + verification (6 phases)",
"complexity": ["medium", "high"],
"artifacts": ".workflow/active/{session}/",
"steps": [
{ "command": "/workflow:tdd-plan", "optional": false },
{ "command": "/workflow:action-plan-verify", "optional": true, "auto_continue": true },
{ "command": "/workflow:execute", "optional": false },
{ "command": "/workflow:tdd-verify", "optional": false }
],
"tdd_structure": {
"description": "Each IMPL task contains complete internal Red-Green-Refactor cycle",
"meta": "tdd_workflow: true",
"flow_control": "implementation_approach contains 3 steps (red/green/refactor)"
},
"cli_hints": {
"test_strategy": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"red_green_refactor": { "tool": "codex", "mode": "write", "trigger": "always" }
},
"estimated_time": "1-3 hours"
},
"test-fix": {
"name": "Test Fix Generation",
"level": "L3",
"description": "Test fix generation + execution loop (5 phases)",
"complexity": ["medium", "high"],
"artifacts": ".workflow/active/WFS-test-{session}/",
"dual_mode": {
"session_mode": { "input": "WFS-xxx", "context_source": "Source session summaries" },
"prompt_mode": { "input": "Text/file path", "context_source": "Direct codebase analysis" }
},
"steps": [
{ "command": "/workflow:test-fix-gen", "optional": false },
{ "command": "/workflow:test-cycle-execute", "optional": false }
],
"task_structure": [
"IMPL-001.json (test understanding & generation)",
"IMPL-001.5-review.json (quality gate)",
"IMPL-002.json (test execution & fix cycle)"
],
"cli_hints": {
"analysis": { "tool": "gemini", "mode": "analysis", "trigger": "always" },
"fix_cycle": { "tool": "codex", "mode": "write", "trigger": "pass_rate < 0.95" }
},
"estimated_time": "1-2 hours"
},
"ui": {
"name": "UI-First Development",
"level": "L3/L4",
"description": "UI design + planning + execution",
"complexity": ["medium", "high"],
"artifacts": ".workflow/active/{session}/",
"variants": {
"explore": [
{ "command": "/workflow:ui-design:explore-auto", "optional": false },
@@ -250,8 +308,10 @@
},
"review-fix": {
"name": "Review and Fix",
"level": "L3",
"description": "Multi-dimensional review + automatic fixes",
"complexity": ["medium"],
"artifacts": ".workflow/active/{session}/review_report.md",
"steps": [
{ "command": "/workflow:review-session-cycle", "optional": false },
{ "command": "/workflow:review-fix", "optional": true }
@@ -264,6 +324,7 @@
},
"docs": {
"name": "Documentation",
"level": "L2",
"description": "Batch documentation generation",
"complexity": ["low", "medium"],
"variants": {
@@ -278,8 +339,17 @@
},
"intent_rules": {
"_level_mapping": {
"description": "Intent → Level → Flow mapping guide",
"L1": ["lite-lite-lite"],
"L2": ["rapid", "bugfix", "multi-cli-plan", "docs"],
"L3": ["coupled", "tdd", "test-fix", "review-fix", "ui"],
"L4": ["full"],
"Supplementary": ["issue"]
},
"bugfix": {
"priority": 1,
"level": "L2",
"variants": {
"hotfix": {
"patterns": ["hotfix", "urgent", "production", "critical", "emergency", "紧急", "生产环境", "线上"],
@@ -293,6 +363,7 @@
},
"issue_batch": {
"priority": 2,
"level": "Supplementary",
"patterns": {
"batch": ["issues", "batch", "queue", "多个", "批量"],
"action": ["fix", "resolve", "处理", "解决"]
@@ -302,11 +373,25 @@
},
"exploration": {
"priority": 3,
"level": "L4",
"patterns": ["不确定", "不知道", "explore", "研究", "分析一下", "怎么做", "what if", "探索"],
"flow": "full"
},
"multi_perspective": {
"priority": 3,
"level": "L2",
"patterns": ["多视角", "权衡", "比较方案", "cross-verify", "多CLI", "协作分析"],
"flow": "multi-cli-plan"
},
"quick_task": {
"priority": 4,
"level": "L1",
"patterns": ["快速", "简单", "small", "quick", "simple", "trivial", "小改动"],
"flow": "lite-lite-lite"
},
"ui_design": {
"priority": 5,
"level": "L3/L4",
"patterns": ["ui", "界面", "design", "设计", "component", "组件", "style", "样式", "layout", "布局"],
"variants": {
"imitate": { "triggers": ["参考", "模仿", "像", "类似"], "flow": "ui.imitate" },
@@ -314,17 +399,26 @@
}
},
"tdd": {
"priority": 6,
"level": "L3",
"patterns": ["tdd", "test-driven", "测试驱动", "先写测试", "test first"],
"flow": "tdd"
},
"test_fix": {
"priority": 7,
"level": "L3",
"patterns": ["测试失败", "test fail", "fix test", "test error", "pass rate", "coverage gap"],
"flow": "test-fix"
},
"review": {
"priority": 8,
"level": "L3",
"patterns": ["review", "审查", "检查代码", "code review", "质量检查"],
"flow": "review-fix"
},
"documentation": {
"priority": 9,
"level": "L2",
"patterns": ["文档", "documentation", "docs", "readme"],
"variants": {
"incremental": { "triggers": ["更新", "增量"], "flow": "docs.incremental" },
@@ -334,9 +428,9 @@
"feature": {
"priority": 99,
"complexity_map": {
"high": { "level": "L3", "flow": "coupled" },
"medium": { "level": "L2", "flow": "rapid" },
"low": { "level": "L1", "flow": "lite-lite-lite" }
}
}
},
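The intent rules are resolved by a priority scan: lower `priority` numbers are checked first, and the first rule whose patterns match the request selects the flow. A minimal sketch of that matching, assuming rules shaped like the `intent_rules` block (the `matchFlow` helper is illustrative, not the actual ccw implementation):

```typescript
// Minimal intent → flow resolver over rules shaped like intent_rules above.
// Hypothetical helper; the real ccw matcher may differ.
interface IntentRule {
  priority: number;
  patterns?: string[];
  flow?: string;
}

function matchFlow(request: string, rules: Record<string, IntentRule>): string | null {
  const candidates = Object.values(rules)
    .filter(r => r.patterns && r.flow)
    .sort((a, b) => a.priority - b.priority); // lower priority number wins first
  for (const rule of candidates) {
    if (rule.patterns!.some(p => request.toLowerCase().includes(p.toLowerCase()))) {
      return rule.flow!;
    }
  }
  return null; // fall through to the "feature" complexity_map
}

const rules: Record<string, IntentRule> = {
  exploration: { priority: 3, patterns: ["explore", "what if"], flow: "full" },
  tdd: { priority: 6, patterns: ["tdd", "test first"], flow: "tdd" },
  review: { priority: 8, patterns: ["review", "code review"], flow: "review-fix" },
};

console.log(matchFlow("please review this module", rules)); // → review-fix
```

A `null` result means no intent pattern fired, so the dispatcher falls back to `feature.complexity_map`.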
@@ -304,17 +304,15 @@ async function runWithTool(tool, context) {
### Referencing Protocol Templates

```bash
# Analysis mode - use --rule to auto-load the protocol and template
ccw cli -p "
RULES: $PROTO $TMPL | ...
..." --tool gemini --mode analysis --rule analysis-code-patterns

# Write mode - use --rule to auto-load the protocol and template
ccw cli -p "
RULES: $PROTO $TMPL | ...
..." --tool codex --mode write --rule development-feature
```

### Dynamic Template Construction

@@ -333,8 +331,8 @@ TASK: ${task.map(t => `• ${t}`).join('\n')}
MODE: ${mode}
CONTEXT: ${context}
EXPECTED: ${expected}
RULES: $PROTO $TMPL
`; // Use --rule option to specify template
}
```

@@ -438,8 +436,8 @@ CLI invocation (Bash + ccw cli):
### 4. Prompt Conventions

- Always use the PURPOSE/TASK/MODE/CONTEXT/EXPECTED/RULES structure
- Use `--rule <template>` to auto-load the protocol and template (`$PROTO` and `$TMPL`)
- Template name format: `category-function` (e.g. `analysis-code-patterns`)

### 5. Result Handling

@@ -76,8 +76,8 @@ TASK: • Extract conflicts from IMPL_PLAN and lessons • Group by type (archit
MODE: analysis
CONTEXT: @.workflow/.archives/*/IMPL_PLAN.md @.workflow/.archives/manifest.json
EXPECTED: Conflict patterns with frequency and resolution
RULES: $PROTO $TMPL | analysis=READ-ONLY
" --tool gemini --mode analysis --rule workflow-skill-aggregation --cd .workflow/.archives
```

**Pattern Grouping**:
@@ -72,8 +72,8 @@ TASK: • Group successes by functional domain • Categorize challenges by seve
MODE: analysis
CONTEXT: @.workflow/.archives/manifest.json
EXPECTED: Aggregated lessons with frequency counts
RULES: $PROTO $TMPL | analysis=READ-ONLY
" --tool gemini --mode analysis --rule workflow-skill-aggregation --cd .workflow/.archives
```

**Severity Classification**:
@@ -110,13 +110,16 @@ When primary tool fails or is unavailable:

### Universal Prompt Template

```bash
# Use --rule to auto-load protocol and template as $PROTO and $TMPL
ccw cli -p "
PURPOSE: [what] + [why] + [success criteria] + [constraints/scope]
TASK: • [step 1: specific action] • [step 2: specific action] • [step 3: specific action]
MODE: [analysis|write]
CONTEXT: @[file patterns] | Memory: [session/tech/module context]
EXPECTED: [deliverable format] + [quality criteria] + [structure requirements]
RULES: $PROTO $TMPL | [domain constraints]
" --tool <tool-id> --mode <analysis|write> --rule <category-template>
```

### Intent Capture Checklist (Before CLI Execution)
@@ -167,9 +170,9 @@ Every command MUST include these fields:

- **RULES**
  - Purpose: Protocol + template + constraints
  - Components: $PROTO + $TMPL + domain rules (variables loaded beforehand)
  - Bad Example: (missing)
  - Good Example: "$PROTO $TMPL | Focus on authentication | Ignore test files" (where PROTO and TMPL are pre-loaded variables)

### CONTEXT Configuration

@@ -216,106 +219,74 @@ ccw cli -p "..." --tool <tool-id> --mode analysis --cd src

### RULES Configuration

**Use the `--rule` option to auto-load templates**:

```bash
ccw cli -p "... RULES: \$PROTO \$TMPL | constraints" --tool gemini --mode analysis --rule analysis-review-architecture
```

**How `--rule` works**:
1. Discovers templates automatically under `~/.claude/workflows/cli-templates/prompts/`
2. Loads the matching protocol (analysis-protocol.md or write-protocol.md) based on `--mode`
3. Exports the environment variables `$PROTO` (protocol) and `$TMPL` (template) to the subprocess
4. Reference them in the prompt as `$PROTO` and `$TMPL`

**Template selection**: pick from the Task-Template Matrix or fall back to a universal template:
- `universal-rigorous-style` - precision-critical tasks
- `universal-creative-style` - exploratory tasks

### Mode Protocol References

**`--rule` resolves the protocol automatically**:
- `--mode analysis` → `$PROTO` = analysis-protocol.md
- `--mode write` → `$PROTO` = write-protocol.md

**Protocol Mapping**:

- **`analysis`** mode
  - Permission: read-only operations
  - Constraint: no file creation/modification/deletion

- **`write`** mode
  - Permission: create/modify/delete files
  - Constraint: full workflow execution capability

### Template System

**Base Path**: `~/.claude/workflows/cli-templates/prompts/`

**Naming Convention**: `category-function.txt`
- The first segment is the category (analysis, development, planning, ...)
- The second segment describes the function

**Universal Templates**:
- `universal-rigorous-style` - precision-critical tasks
- `universal-creative-style` - exploratory tasks

**Task-Template Matrix**:

**Analysis**:
- Execution Tracing: `analysis-trace-code-execution`
- Bug Diagnosis: `analysis-diagnose-bug-root-cause`
- Code Patterns: `analysis-analyze-code-patterns`
- Document Analysis: `analysis-analyze-technical-document`
- Architecture Review: `analysis-review-architecture`
- Code Review: `analysis-review-code-quality`
- Performance: `analysis-analyze-performance`
- Security: `analysis-assess-security-risks`

**Planning**:
- Architecture: `planning-plan-architecture-design`
- Task Breakdown: `planning-breakdown-task-steps`
- Component Design: `planning-design-component-spec`
- Migration: `planning-plan-migration-strategy`

**Development**:
- Feature: `development-implement-feature`
- Refactoring: `development-refactor-codebase`
- Tests: `development-generate-tests`
- UI Component: `development-implement-component-ui`
- Debugging: `development-debug-runtime-issues`

---
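The flat `category-function.txt` naming makes discovery a plain directory scan, and the commit notes fuzzy matching (e.g. `--rule 02-review-architecture` resolving to `analysis-review-architecture.txt`). A sketch of that resolution under those assumptions; `resolveTemplate` is illustrative, not the actual template-discovery.ts API:

```typescript
// Sketch of flat-template fuzzy resolution; the function name and exact
// matching rules are assumptions, not the real template-discovery.ts code.
function resolveTemplate(rule: string, available: string[]): string | null {
  const base = (f: string) => f.replace(/\.txt$/, "");
  // 1. Exact match on the full template name
  const exact = available.find(f => base(f) === rule);
  if (exact) return exact;
  // 2. Fuzzy match: strip a legacy numeric prefix like "02-" and match by suffix
  const needle = rule.replace(/^\d+-/, "");
  const fuzzy = available.filter(f => base(f).endsWith(needle));
  return fuzzy.length === 1 ? fuzzy[0] : null; // ambiguous → no match
}

const templates = [
  "analysis-review-architecture.txt",
  "analysis-review-code-quality.txt",
  "development-implement-feature.txt",
];

console.log(resolveTemplate("02-review-architecture", templates));
// → analysis-review-architecture.txt
```

Returning `null` on ambiguity keeps the CLI from silently loading the wrong template; a real implementation might instead list the candidates.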
@@ -368,6 +339,11 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
- Description: Resume previous session
- Default: -

- **`--rule <template>`**
  - Description: Template name; auto-loads the protocol + template into the `$PROTO` and `$TMPL` environment variables
  - Default: none
  - The protocol is selected automatically based on `--mode`

### Directory Configuration

#### Working Directory (`--cd`)
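Internally, `--rule` amounts to reading the two files and exporting them to the child process's environment. A minimal sketch of that wiring; the function name, paths, and the injected reader are assumptions about how ccw implements it:

```typescript
// Sketch of what --rule does: load the protocol (picked by --mode) and the
// template (picked by --rule), then expose both as $PROTO / $TMPL for the
// spawned CLI tool. Illustrative only, not the actual ccw internals.
function buildRuleEnv(
  mode: string,
  rule: string,
  read: (path: string) => string, // injected reader so the sketch needs no real files
): Record<string, string | undefined> {
  const base = `${process.env.HOME}/.claude/workflows/cli-templates`;
  return {
    ...process.env,
    PROTO: read(`${base}/protocols/${mode}-protocol.md`), // analysis- or write-protocol.md
    TMPL: read(`${base}/prompts/${rule}.txt`),            // flat category-function.txt
  };
}

// The child tool sees $PROTO / $TMPL at expansion time, which is why the
// shell examples keep the placeholders literal as \$PROTO \$TMPL.
const env = buildRuleEnv("analysis", "analysis-review-architecture", p =>
  p.includes("/protocols/") ? "PROTOCOL BODY" : "TEMPLATE BODY",
);
console.log(env.PROTO); // → PROTOCOL BODY
```

Injecting the reader keeps the sketch testable; the real implementation would read from disk and spawn the tool with this environment.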
@@ -435,8 +411,8 @@ TASK: • Scan for injection flaws (SQL, command, LDAP) • Check authentication
MODE: analysis
CONTEXT: @src/auth/**/* @src/middleware/auth.ts | Memory: Using bcrypt for passwords, JWT for sessions
EXPECTED: Security report with: severity matrix, file:line references, CVE mappings where applicable, remediation code snippets prioritized by risk
RULES: \$PROTO \$TMPL | Focus on authentication | Ignore test files
" --tool gemini --mode analysis --rule analysis-assess-security-risks --cd src/auth
```

**Implementation Task** (New Feature):
@@ -447,8 +423,8 @@ TASK: • Create rate limiter middleware with sliding window • Implement per-r
MODE: write
CONTEXT: @src/middleware/**/* @src/config/**/* | Memory: Using Express.js, Redis already configured, existing middleware pattern in auth.ts
EXPECTED: Production-ready code with: TypeScript types, unit tests, integration test, configuration example, migration guide
RULES: \$PROTO \$TMPL | Follow existing middleware patterns | No breaking changes
" --tool gemini --mode write --rule development-implement-feature
```

**Bug Fix Task**:
@@ -459,8 +435,8 @@ TASK: • Trace connection lifecycle from open to close • Identify event liste
MODE: analysis
CONTEXT: @src/websocket/**/* @src/services/connection-manager.ts | Memory: Using ws library, ~5000 concurrent connections in production
EXPECTED: Root cause analysis with: memory profile, leak source (file:line), fix recommendation with code, verification steps
RULES: \$PROTO \$TMPL | Focus on resource cleanup
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause --cd src
```

**Refactoring Task**:
@@ -471,8 +447,8 @@ TASK: • Extract gateway interface from current implementation • Create strat
MODE: write
CONTEXT: @src/payments/**/* @src/types/payment.ts | Memory: Currently only Stripe, adding PayPal next sprint, must support future gateways
EXPECTED: Refactored code with: strategy interface, concrete implementations, factory class, updated tests, migration checklist
RULES: \$PROTO \$TMPL | Preserve all existing behavior | Tests must pass
" --tool gemini --mode write --rule development-refactor-codebase
```

**Code Review Task** (codex review mode):
@@ -509,14 +485,13 @@ ccw cli -p "Check for breaking changes in API contracts and backward compatibili
|
|||||||
- **Use tools early and often** - Tools are faster and more thorough
|
- **Use tools early and often** - Tools are faster and more thorough
|
||||||
- **Unified CLI** - Always use `ccw cli -p` for consistent parameter handling
|
- **Unified CLI** - Always use `ccw cli -p` for consistent parameter handling
|
||||||
- **Default mode is analysis** - Omit `--mode` for read-only operations, explicitly use `--mode write` for file modifications
|
- **Default mode is analysis** - Omit `--mode` for read-only operations, explicitly use `--mode write` for file modifications
|
||||||
- **One template required** - ALWAYS reference exactly ONE template in RULES (use universal fallback if no specific match)
|
- **Use `--rule` for templates** - 自动加载 protocol + template 为 `$PROTO` 和 `$TMPL` 环境变量
|
||||||
- **Write protection** - Require EXPLICIT `--mode write` for file operations
|
- **Write protection** - Require EXPLICIT `--mode write` for file operations
|
||||||
- **Use double quotes for shell expansion** - Always wrap prompts in double quotes `"..."` to enable `$(cat ...)` command substitution; NEVER use single quotes or escape characters (`\$`, `\"`, `\'`)
|
|
||||||
|
|
||||||
### Workflow Principles
|
### Workflow Principles
|
||||||
|
|
||||||
- **Use CCW unified interface** for all executions
|
- **Use CCW unified interface** for all executions
|
||||||
- **Always include template** - Use Task-Template Matrix or universal fallback
|
- **Always include template** - 使用 `--rule <template-name>` 加载模板
|
||||||
- **Be specific** - Clear PURPOSE, TASK, EXPECTED fields
|
- **Be specific** - Clear PURPOSE, TASK, EXPECTED fields
|
||||||
- **Include constraints** - File patterns, scope in RULES
|
- **Include constraints** - File patterns, scope in RULES
|
||||||
- **Leverage memory context** when building on previous work
|
- **Leverage memory context** when building on previous work
|
||||||
@@ -530,7 +505,7 @@ ccw cli -p "Check for breaking changes in API contracts and backward compatibili
|
|||||||
- [ ] **Context gathered** - File references + memory (default `@**/*`)
|
- [ ] **Context gathered** - File references + memory (default `@**/*`)
|
||||||
- [ ] **Directory navigation** - `--cd` and/or `--includeDirs`
|
- [ ] **Directory navigation** - `--cd` and/or `--includeDirs`
|
||||||
- [ ] **Tool selected** - Explicit `--tool` or tag-based auto-selection
|
- [ ] **Tool selected** - Explicit `--tool` or tag-based auto-selection
|
||||||
- [ ] **Template applied (REQUIRED)** - Use specific or universal fallback template
|
- [ ] **Rule template** - `--rule <template-name>` 自动加载 protocol + template
|
||||||
- [ ] **Constraints specified** - Scope, requirements
|
- [ ] **Constraints specified** - Scope, requirements
|
||||||
|
|
||||||
### Execution Workflow
|
### Execution Workflow
|
||||||
|
|||||||
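The `$PROTO`/`$TMPL` mechanism above can be sketched in plain shell — a minimal illustration, assuming `--rule` resolves the template files and exports their contents into the invoked tool's environment (the stand-in strings below replace the real template file contents):

```shell
# Stand-ins for the resolved files under ~/.claude/workflows/cli-templates/
# (assumption: the CLI reads and exports them before spawning the tool).
PROTO="analysis protocol text"
TMPL="bug diagnosis template"
export PROTO TMPL
# An unescaped $PROTO/$TMPL inside a double-quoted prompt expands normally:
printf 'RULES: %s %s | Focus on resource cleanup\n' "$PROTO" "$TMPL"
```

This is why the docs warn against escaping as `\$PROTO` inside single quotes — the variable would reach the tool literally instead of expanding.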
@@ -187,6 +187,13 @@ export function run(argv: string[]): void {
     .option('--no-native', 'Force prompt concatenation instead of native resume')
     .option('--cache [items]', 'Cache: comma-separated @patterns and text content')
     .option('--inject-mode <mode>', 'Inject mode: none, full, progressive (default: codex=full, others=none)')
+    // Template/Rules options
+    .option('--rule <template>', 'Template name for auto-discovery (defines $PROTO and $TMPL env vars)')
+    // Codex review options
+    .option('--uncommitted', 'Review uncommitted changes (codex review)')
+    .option('--base <branch>', 'Review changes against base branch (codex review)')
+    .option('--commit <sha>', 'Review changes from specific commit (codex review)')
+    .option('--title <title>', 'Optional commit title for review summary (codex review)')
     // Storage options
     .option('--project <path>', 'Project path for storage operations')
     .option('--force', 'Confirm destructive operations')
@@ -124,6 +124,13 @@ interface CliExecOptions {
   cache?: string | boolean; // Cache: true = auto from CONTEXT, string = comma-separated patterns/content
   injectMode?: 'none' | 'full' | 'progressive'; // Inject mode for cached content
   debug?: boolean; // Enable debug logging
+  // Codex review options
+  uncommitted?: boolean; // Review uncommitted changes (default for review mode)
+  base?: string; // Review changes against base branch
+  commit?: string; // Review changes from specific commit
+  title?: string; // Optional title for review summary
+  // Template/Rules options
+  rule?: string; // Template name for auto-discovery (defines $PROTO and $TMPL env vars)
 }

 /** Cache configuration parsed from --cache */
@@ -535,7 +542,7 @@ async function statusAction(debug?: boolean): Promise<void> {
  * @param {Object} options - CLI options
  */
 async function execAction(positionalPrompt: string | undefined, options: CliExecOptions): Promise<void> {
-  const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, stream, resume, id, noNative, cache, injectMode, debug } = options;
+  const { prompt: optionPrompt, file, tool = 'gemini', mode = 'analysis', model, cd, includeDirs, stream, resume, id, noNative, cache, injectMode, debug, uncommitted, base, commit, title, rule } = options;

   // Enable debug mode if --debug flag is set
   if (debug) {
@@ -579,6 +586,25 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec

   const prompt_to_use = finalPrompt || '';

+  // Load rules templates if --rule is specified (will be passed as env vars)
+  let rulesEnv: { PROTO?: string; TMPL?: string } = {};
+  if (rule) {
+    try {
+      const { loadProtocol, loadTemplate } = await import('../tools/template-discovery.js');
+      const proto = loadProtocol(mode);
+      const tmpl = loadTemplate(rule);
+      if (proto) rulesEnv.PROTO = proto;
+      if (tmpl) rulesEnv.TMPL = tmpl;
+      if (debug) {
+        console.log(chalk.gray(`  Rule loaded: PROTO(${proto ? proto.length : 0} chars) + TMPL(${tmpl ? tmpl.length : 0} chars)`));
+        console.log(chalk.gray(`  Use $PROTO and $TMPL in your prompt to reference them`));
+      }
+    } catch (error) {
+      console.error(chalk.red(`Error loading rule template: ${error instanceof Error ? error.message : error}`));
+      process.exit(1);
+    }
+  }
+
   // Handle cache option: pack @patterns and/or content
   let cacheSessionId: string | undefined;
   let actualPrompt = prompt_to_use;
@@ -847,7 +873,14 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
       id, // custom execution ID
       noNative,
       stream: !!stream, // stream=true → streaming enabled (no cache), stream=false → cache output (default)
-      outputFormat // Enable JSONL parsing for tools that support it
+      outputFormat, // Enable JSONL parsing for tools that support it
+      // Codex review options
+      uncommitted,
+      base,
+      commit,
+      title,
+      // Rules env vars (PROTO, TMPL)
+      rulesEnv: Object.keys(rulesEnv).length > 0 ? rulesEnv : undefined
     }, onOutput); // Always pass onOutput for real-time dashboard streaming

     if (elapsedInterval) clearInterval(elapsedInterval);
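The `loadTemplate(rule)` call above resolves a name against the flat `prompts/` layout of `category-function.txt` files. A hypothetical sketch of that lookup — the function name, file layout, and suffix-matching rule are assumptions for illustration, not the actual `template-discovery` implementation:

```typescript
import { readdirSync, readFileSync } from 'fs';
import { join } from 'path';

// Resolve a --rule name against a flat prompts/ dir of category-function.txt files.
export function resolveTemplate(promptsDir: string, rule: string): string | null {
  const names = readdirSync(promptsDir).filter(f => f.endsWith('.txt'));
  // Exact match first: "analysis-review-architecture" -> analysis-review-architecture.txt
  let hit = names.find(f => f === `${rule}.txt`);
  // Fallback fuzzy match: a name that ends with the given rule still resolves,
  // so "review-architecture" finds analysis-review-architecture.txt.
  if (!hit) hit = names.find(f => f.slice(0, -4).endsWith(rule));
  return hit ? readFileSync(join(promptsDir, hit), 'utf8') : null;
}
```

The flat naming is what makes this cheap: one `readdirSync` replaces a recursive walk over the old nested `prompts/category/file.txt` structure.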
@@ -811,6 +811,56 @@ export async function handleClaudeRoutes(ctx: RouteContext): Promise<boolean> {
     return true;
   }

+  // API: Batch delete files
+  if (pathname === '/api/memory/claude/batch-delete' && req.method === 'POST') {
+    handlePostRequest(req, res, async (body: any) => {
+      const { paths, confirm } = body;
+
+      if (!paths || !Array.isArray(paths) || paths.length === 0) {
+        return { error: 'paths array is required', status: 400 };
+      }
+
+      if (confirm !== true) {
+        return { error: 'Confirmation required', status: 400 };
+      }
+
+      const results = {
+        success: true,
+        total: paths.length,
+        deleted: 0,
+        errors: [] as Array<{ path: string; error: string }>
+      };
+
+      // Delete each file
+      for (const filePath of paths) {
+        const result = deleteClaudeFile(filePath);
+        if (result.success) {
+          results.deleted++;
+          // Broadcast individual file deletion
+          broadcastToClients({
+            type: 'CLAUDE_FILE_DELETED',
+            data: { path: filePath }
+          });
+        } else {
+          results.errors.push({ path: filePath, error: result.error || 'Unknown error' });
+        }
+      }
+
+      // Broadcast batch deletion completion
+      broadcastToClients({
+        type: 'CLAUDE_BATCH_DELETED',
+        data: {
+          total: results.total,
+          deleted: results.deleted,
+          failed: results.errors.length
+        }
+      });
+
+      return results;
+    });
+    return true;
+  }
+
   // API: Create file
   if (pathname === '/api/memory/claude/create' && req.method === 'POST') {
     handlePostRequest(req, res, async (body: any) => {
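The batch-delete handler records per-file failures instead of aborting the whole batch. Its aggregation contract can be distilled to a small pure function (a sketch of the same logic, decoupled from the route for clarity):

```typescript
// Shape returned by the batch-delete endpoint (matching the handler above).
type BatchResult = {
  success: boolean;
  total: number;
  deleted: number;
  errors: Array<{ path: string; error: string }>;
};

// Fold per-file outcomes into one result; a failure is recorded, not thrown.
function aggregate(outcomes: Array<{ path: string; ok: boolean; error?: string }>): BatchResult {
  const errors = outcomes
    .filter(o => !o.ok)
    .map(o => ({ path: o.path, error: o.error ?? 'Unknown error' }));
  return { success: true, total: outcomes.length, deleted: outcomes.length - errors.length, errors };
}
```

Note that `success: true` reflects that the batch itself ran; callers must inspect `errors` to detect partial failure.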
@@ -79,6 +79,12 @@ function writeSolutionsJsonl(issuesDir: string, issueId: string, solutions: any[
   writeFileSync(join(solutionsDir, `${issueId}.jsonl`), solutions.map(s => JSON.stringify(s)).join('\n'));
 }

+function generateQueueFileId(): string {
+  const now = new Date();
+  const ts = now.toISOString().replace(/[-:T]/g, '').slice(0, 14);
+  return `QUE-${ts}`;
+}
+
 function readQueue(issuesDir: string) {
   // Try new multi-queue structure first
   const queuesDir = join(issuesDir, 'queues');
@@ -718,6 +724,183 @@ export async function handleIssueRoutes(ctx: RouteContext): Promise<boolean> {
     return true;
   }

+  // POST /api/queue/split - Split items from source queue into a new queue
+  if (pathname === '/api/queue/split' && req.method === 'POST') {
+    handlePostRequest(req, res, async (body: any) => {
+      const { sourceQueueId, itemIds } = body;
+      if (!sourceQueueId || !itemIds || !Array.isArray(itemIds) || itemIds.length === 0) {
+        return { error: 'sourceQueueId and itemIds (non-empty array) required' };
+      }
+
+      const queuesDir = join(issuesDir, 'queues');
+      const sourcePath = join(queuesDir, `${sourceQueueId}.json`);
+
+      if (!existsSync(sourcePath)) {
+        return { error: `Source queue ${sourceQueueId} not found` };
+      }
+
+      try {
+        const sourceQueue = JSON.parse(readFileSync(sourcePath, 'utf8'));
+        const sourceItems = sourceQueue.solutions || sourceQueue.tasks || [];
+        const isSolutionBased = !!sourceQueue.solutions;
+
+        // Find items to split
+        const itemsToSplit = sourceItems.filter((item: any) =>
+          itemIds.includes(item.item_id) ||
+          itemIds.includes(item.solution_id) ||
+          itemIds.includes(item.task_id)
+        );
+
+        if (itemsToSplit.length === 0) {
+          return { error: 'No matching items found to split' };
+        }
+
+        if (itemsToSplit.length === sourceItems.length) {
+          return { error: 'Cannot split all items - at least one item must remain in source queue' };
+        }
+
+        // Find remaining items
+        const remainingItems = sourceItems.filter((item: any) =>
+          !itemIds.includes(item.item_id) &&
+          !itemIds.includes(item.solution_id) &&
+          !itemIds.includes(item.task_id)
+        );
+
+        // Create new queue with split items
+        const newQueueId = generateQueueFileId();
+        const newQueuePath = join(queuesDir, `${newQueueId}.json`);
+
+        // Re-index split items
+        const reindexedSplitItems = itemsToSplit.map((item: any, idx: number) => ({
+          ...item,
+          execution_order: idx + 1
+        }));
+
+        // Extract issue IDs from split items
+        const splitIssueIds = [...new Set(itemsToSplit.map((item: any) => item.issue_id).filter(Boolean))];
+
+        // Remaining issue IDs
+        const remainingIssueIds = [...new Set(remainingItems.map((item: any) => item.issue_id).filter(Boolean))];
+
+        // Create new queue
+        const newQueue: any = {
+          id: newQueueId,
+          status: 'active',
+          issue_ids: splitIssueIds,
+          conflicts: [],
+          _metadata: {
+            version: '2.1',
+            updated_at: new Date().toISOString(),
+            split_from: sourceQueueId,
+            split_at: new Date().toISOString(),
+            ...(isSolutionBased
+              ? {
+                  total_solutions: reindexedSplitItems.length,
+                  completed_solutions: reindexedSplitItems.filter((i: any) => i.status === 'completed').length
+                }
+              : {
+                  total_tasks: reindexedSplitItems.length,
+                  completed_tasks: reindexedSplitItems.filter((i: any) => i.status === 'completed').length
+                })
+          }
+        };
+
+        if (isSolutionBased) {
+          newQueue.solutions = reindexedSplitItems;
+        } else {
+          newQueue.tasks = reindexedSplitItems;
+        }
+
+        // Update source queue with remaining items
+        const reindexedRemainingItems = remainingItems.map((item: any, idx: number) => ({
+          ...item,
+          execution_order: idx + 1
+        }));
+
+        if (isSolutionBased) {
+          sourceQueue.solutions = reindexedRemainingItems;
+        } else {
+          sourceQueue.tasks = reindexedRemainingItems;
+        }
+
+        sourceQueue.issue_ids = remainingIssueIds;
+        sourceQueue._metadata = {
+          ...sourceQueue._metadata,
+          updated_at: new Date().toISOString(),
+          ...(isSolutionBased
+            ? {
+                total_solutions: reindexedRemainingItems.length,
+                completed_solutions: reindexedRemainingItems.filter((i: any) => i.status === 'completed').length
+              }
+            : {
+                total_tasks: reindexedRemainingItems.length,
+                completed_tasks: reindexedRemainingItems.filter((i: any) => i.status === 'completed').length
+              })
+        };
+
+        // Write both queues
+        writeFileSync(newQueuePath, JSON.stringify(newQueue, null, 2));
+        writeFileSync(sourcePath, JSON.stringify(sourceQueue, null, 2));
+
+        // Update index
+        const indexPath = join(queuesDir, 'index.json');
+        if (existsSync(indexPath)) {
+          try {
+            const index = JSON.parse(readFileSync(indexPath, 'utf8'));
+
+            // Add new queue to index
+            const newQueueEntry: any = {
+              id: newQueueId,
+              status: 'active',
+              issue_ids: splitIssueIds,
+              created_at: new Date().toISOString(),
+              ...(isSolutionBased
+                ? {
+                    total_solutions: reindexedSplitItems.length,
+                    completed_solutions: reindexedSplitItems.filter((i: any) => i.status === 'completed').length
+                  }
+                : {
+                    total_tasks: reindexedSplitItems.length,
+                    completed_tasks: reindexedSplitItems.filter((i: any) => i.status === 'completed').length
+                  })
+            };
+
+            index.queues = index.queues || [];
+            index.queues.push(newQueueEntry);
+
+            // Update source queue in index
+            const sourceEntry = index.queues.find((q: any) => q.id === sourceQueueId);
+            if (sourceEntry) {
+              sourceEntry.issue_ids = remainingIssueIds;
+              if (isSolutionBased) {
+                sourceEntry.total_solutions = reindexedRemainingItems.length;
+                sourceEntry.completed_solutions = reindexedRemainingItems.filter((i: any) => i.status === 'completed').length;
+              } else {
+                sourceEntry.total_tasks = reindexedRemainingItems.length;
+                sourceEntry.completed_tasks = reindexedRemainingItems.filter((i: any) => i.status === 'completed').length;
+              }
+            }
+
+            writeFileSync(indexPath, JSON.stringify(index, null, 2));
+          } catch {
+            // Ignore index update errors
+          }
+        }
+
+        return {
+          success: true,
+          sourceQueueId,
+          newQueueId,
+          splitItemCount: itemsToSplit.length,
+          remainingItemCount: remainingItems.length
+        };
+      } catch (err) {
+        return { error: 'Failed to split queue' };
+      }
+    });
+    return true;
+  }
+
   // Legacy: GET /api/issues/queue (backward compat)
   if (pathname === '/api/issues/queue' && req.method === 'GET') {
     const queue = groupQueueByExecutionGroup(readQueue(issuesDir));
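The split handler's core bookkeeping — partitioning the queue and renumbering `execution_order` from 1 on both sides — can be distilled to a simplified sketch (queue metadata and index updates omitted):

```typescript
// Partition queue items by a predicate and renumber execution_order on both
// sides, mirroring what /api/queue/split does to solutions/tasks.
function splitItems<T extends { execution_order?: number }>(
  items: T[],
  shouldMove: (item: T) => boolean
): { moved: T[]; kept: T[] } {
  const renumber = (xs: T[]) => xs.map((it, idx) => ({ ...it, execution_order: idx + 1 }));
  return {
    moved: renumber(items.filter(shouldMove)),
    kept: renumber(items.filter(i => !shouldMove(i)))
  };
}
```

The "cannot split all items" guard in the handler ensures `kept` is never empty, so the source queue always remains valid after a split.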
@@ -75,10 +75,13 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
     }
   }

-  // Load summaries from .summaries/
+  // Load summaries from .summaries/ and fallback to plan.json
   if (dataType === 'summary' || dataType === 'all') {
     const summariesDir = join(normalizedPath, '.summaries');
     result.summaries = [];
+    result.summary = null; // Single summary text from plan.json
+
+    // 1. Try to load from .summaries/ directory
     if (await fileExists(summariesDir)) {
       const files = (await readdir(summariesDir)).filter(f => f.endsWith('.md'));
       for (const file of files) {
@@ -90,6 +93,26 @@ async function getSessionDetailData(sessionPath: string, dataType: string): Prom
         }
       }
     }

+    // 2. Fallback: Try to get summary from plan.json (for lite-fix-plan sessions)
+    if (result.summaries.length === 0) {
+      const planFile = join(normalizedPath, 'plan.json');
+      if (await fileExists(planFile)) {
+        try {
+          const planData = JSON.parse(await readFile(planFile, 'utf8'));
+          // Check plan.summary
+          if (planData.summary) {
+            result.summary = planData.summary;
+          }
+          // Check synthesis.convergence.summary
+          if (!result.summary && planData.synthesis?.convergence?.summary) {
+            result.summary = planData.synthesis.convergence.summary;
+          }
+        } catch (e) {
+          console.warn('Failed to parse plan file for summary:', planFile, (e as Error).message);
+        }
+      }
+    }
   }

   // Load plan.json (for lite tasks)
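For the fallback path above, a lite-fix-plan session's `plan.json` might carry its summary in either of the two locations the lookup checks (field names match the code; the surrounding values are illustrative only):

```json
{
  "summary": "Fixed WebSocket listener leak in connection-manager",
  "synthesis": {
    "convergence": {
      "summary": "Used only when the top-level summary is absent"
    }
  }
}
```

The top-level `summary` wins; `synthesis.convergence.summary` is consulted only when it is missing.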
@@ -906,3 +906,182 @@
     max-height: 300px;
   }
 }

+/* ========================================
+ * Batch Delete Modal
+ * ======================================== */
+.batch-delete-modal {
+  display: flex;
+  flex-direction: column;
+  gap: 1.25rem;
+}
+
+.warning-banner {
+  display: flex;
+  align-items: center;
+  gap: 0.75rem;
+  padding: 1rem;
+  background: hsl(var(--destructive) / 0.1);
+  border: 1px solid hsl(var(--destructive) / 0.3);
+  border-radius: 0.5rem;
+  color: hsl(var(--destructive));
+  font-weight: 500;
+}
+
+.warning-banner i {
+  flex-shrink: 0;
+}
+
+.delete-summary {
+  display: grid;
+  grid-template-columns: repeat(2, 1fr);
+  gap: 1rem;
+  padding: 1rem;
+  background: hsl(var(--muted) / 0.3);
+  border-radius: 0.5rem;
+}
+
+.summary-item {
+  display: flex;
+  flex-direction: column;
+  gap: 0.25rem;
+}
+
+.summary-label {
+  font-size: 0.75rem;
+  color: hsl(var(--muted-foreground));
+  text-transform: uppercase;
+  letter-spacing: 0.025em;
+  font-weight: 500;
+}
+
+.summary-value {
+  font-size: 1.5rem;
+  font-weight: 600;
+  color: hsl(var(--foreground));
+}
+
+.file-list-container h4 {
+  font-size: 0.875rem;
+  font-weight: 600;
+  color: hsl(var(--foreground));
+  margin-bottom: 0.75rem;
+}
+
+.file-list {
+  max-height: 300px;
+  overflow-y: auto;
+  border: 1px solid hsl(var(--border));
+  border-radius: 0.5rem;
+  padding: 0.5rem;
+  background: hsl(var(--muted) / 0.2);
+}
+
+.delete-file-item {
+  display: flex;
+  align-items: center;
+  gap: 0.75rem;
+  padding: 0.75rem;
+  background: hsl(var(--card));
+  border: 1px solid hsl(var(--border));
+  border-radius: 0.375rem;
+  margin-bottom: 0.5rem;
+  transition: all 0.15s ease;
+}
+
+.delete-file-item:last-child {
+  margin-bottom: 0;
+}
+
+.delete-file-item:hover {
+  background: hsl(var(--hover));
+}
+
+.delete-file-item i {
+  flex-shrink: 0;
+  color: hsl(var(--muted-foreground));
+}
+
+.file-info {
+  flex: 1;
+  display: flex;
+  flex-direction: column;
+  gap: 0.25rem;
+  min-width: 0;
+}
+
+.file-info .file-name {
+  font-weight: 500;
+  color: hsl(var(--foreground));
+  font-size: 0.875rem;
+}
+
+.file-info .file-path {
+  font-size: 0.75rem;
+  color: hsl(var(--muted-foreground));
+  font-family: 'Courier New', monospace;
+  white-space: nowrap;
+  overflow: hidden;
+  text-overflow: ellipsis;
+}
+
+.level-badge {
+  display: inline-flex;
+  align-items: center;
+  padding: 0.25rem 0.5rem;
+  border-radius: 0.25rem;
+  font-size: 0.75rem;
+  font-weight: 500;
+  white-space: nowrap;
+  flex-shrink: 0;
+}
+
+.level-badge.project {
+  background: hsl(142, 76%, 36%, 0.15);
+  color: hsl(142, 76%, 36%);
+  border: 1px solid hsl(142, 76%, 36%, 0.3);
+}
+
+.level-badge.module {
+  background: hsl(221, 83%, 53%, 0.15);
+  color: hsl(221, 83%, 53%);
+  border: 1px solid hsl(221, 83%, 53%, 0.3);
+}
+
+.confirmation-actions {
+  display: flex;
+  justify-content: flex-end;
+  gap: 0.75rem;
+  padding-top: 0.75rem;
+  border-top: 1px solid hsl(var(--border));
+}
+
+/* Remove file button in batch delete list */
+.remove-file-btn {
+  padding: 0.25rem;
+  border-radius: 0.25rem;
+  color: hsl(var(--muted-foreground));
+  background: transparent;
+  border: none;
+  cursor: pointer;
+  opacity: 0;
+  transition: all 0.15s ease;
+  flex-shrink: 0;
+}
+
+.delete-file-item:hover .remove-file-btn {
+  opacity: 1;
+}
+
+.remove-file-btn:hover {
+  background: hsl(var(--destructive) / 0.1);
+  color: hsl(var(--destructive));
+}
+
+/* Empty list message */
+.empty-list-message {
+  padding: 2rem;
+  text-align: center;
+  color: hsl(var(--muted-foreground));
+  font-style: italic;
+}
@@ -3300,3 +3300,93 @@
   text-transform: uppercase;
   letter-spacing: 0.025em;
 }

+/* ==========================================
+   SPLIT QUEUE MODAL STYLES
+   ========================================== */
+
+.split-queue-modal-content {
+  max-width: 600px;
+  width: 90%;
+}
+
+.split-queue-controls {
+  display: flex;
+  gap: 0.5rem;
+  align-items: center;
+}
+
+.split-queue-issues {
+  display: flex;
+  flex-direction: column;
+  gap: 1rem;
+}
+
+.split-queue-issue-group {
+  border: 1px solid hsl(var(--border));
+  border-radius: 0.5rem;
+  padding: 0.75rem;
+  background: hsl(var(--muted) / 0.3);
+  transition: all 0.15s ease;
+}
+
+.split-queue-issue-group:hover {
+  background: hsl(var(--muted) / 0.5);
+  border-color: hsl(var(--primary) / 0.3);
+}
+
+.split-queue-issue-header {
+  margin-bottom: 0.5rem;
+  padding-bottom: 0.5rem;
+  border-bottom: 1px solid hsl(var(--border) / 0.5);
+}
+
+.split-queue-issue-header label {
+  cursor: pointer;
+  user-select: none;
+}
+
+.split-queue-issue-header input[type="checkbox"] {
+  cursor: pointer;
+}
+
+.split-queue-solutions {
+  display: flex;
+  flex-direction: column;
+  gap: 0.25rem;
+}
+
+.split-queue-solutions label {
+  cursor: pointer;
+  user-select: none;
+  padding: 0.25rem;
+  border-radius: 0.25rem;
+  transition: background-color 0.15s ease;
+}
+
+.split-queue-solutions label:hover {
+  background: hsl(var(--muted) / 0.5);
+}
+
+.split-queue-solutions input[type="checkbox"] {
+  cursor: pointer;
+}
+
+/* Checkbox styles */
+.split-queue-modal-content input[type="checkbox"] {
+  width: 1rem;
+  height: 1rem;
+  border: 1px solid hsl(var(--border));
+  border-radius: 0.25rem;
+  cursor: pointer;
+  transition: all 0.15s ease;
+}
+
+.split-queue-modal-content input[type="checkbox"]:hover {
+  border-color: hsl(var(--primary));
+}
+
+.split-queue-modal-content input[type="checkbox"]:checked {
+  background-color: hsl(var(--primary));
+  border-color: hsl(var(--primary));
+}
@@ -1704,6 +1704,19 @@ const i18n = {
   'claude.deleteFile': 'Delete File',
   'claude.deleteConfirm': 'Are you sure you want to delete {file}?',
   'claude.deleteWarning': 'This action cannot be undone.',
+  'claude.batchDeleteProject': 'Delete Project Files',
+  'claude.batchDeleteTitle': 'Delete Project Workspace Files',
+  'claude.batchDeleteWarning': 'This will delete all CLAUDE.md files in the project workspace (excluding user-level files)',
+  'claude.noProjectFiles': 'No project workspace files to delete',
+  'claude.filesToDelete': 'Files to delete:',
+  'claude.totalSize': 'Total size:',
+  'claude.fileList': 'File List',
+  'claude.confirmDelete': 'Confirm Delete',
+  'claude.deletingFiles': 'Deleting {count} files...',
+  'claude.batchDeleteSuccess': 'Successfully deleted {deleted} of {total} files',
+  'claude.batchDeleteError': 'Failed to delete files',
+  'claude.removeFromList': 'Remove from list',
+  'claude.noFilesInList': 'No files in the list',
   'claude.copyContent': 'Copy Content',
   'claude.contentCopied': 'Content copied to clipboard',
   'claude.copyError': 'Failed to copy content',
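Placeholders such as `{file}`, `{count}`, `{deleted}`, and `{total}` in these strings are filled with chained `String.replace` calls, as the dashboard code elsewhere in this commit does. A minimal sketch (the variable names here are illustrative, not part of the commit):

```javascript
// Interpolate i18n placeholders the same way the dashboard does:
// chained String.replace calls on the translated template string.
var template = 'Successfully deleted {deleted} of {total} files';
var message = template
  .replace('{deleted}', 3)   // non-string values are coerced to strings
  .replace('{total}', 5);
```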
@@ -4013,6 +4026,19 @@ const i18n = {
   'claude.deleteFile': '删除文件',
   'claude.deleteConfirm': '确定要删除 {file} 吗?',
   'claude.deleteWarning': '此操作无法撤销。',
+  'claude.batchDeleteProject': '删除项目文件',
+  'claude.batchDeleteTitle': '删除项目工作空间文件',
+  'claude.batchDeleteWarning': '此操作将删除项目工作空间内的所有 CLAUDE.md 文件(不包括用户级文件)',
+  'claude.noProjectFiles': '没有可删除的项目工作空间文件',
+  'claude.filesToDelete': '待删除文件数:',
+  'claude.totalSize': '总大小:',
+  'claude.fileList': '文件清单',
+  'claude.confirmDelete': '确认删除',
+  'claude.deletingFiles': '正在删除 {count} 个文件...',
+  'claude.batchDeleteSuccess': '成功删除 {deleted}/{total} 个文件',
+  'claude.batchDeleteError': '删除文件失败',
+  'claude.removeFromList': '从清单中移除',
+  'claude.noFilesInList': '清单中没有文件',
   'claude.copyContent': '复制内容',
   'claude.contentCopied': '内容已复制到剪贴板',
   'claude.copyError': '复制内容失败',
@@ -24,6 +24,7 @@ var searchQuery = '';
 var freshnessData = {}; // { [filePath]: FreshnessResult }
 var freshnessSummary = null;
 var searchKeyboardHandlerAdded = false;
+var pendingDeleteFiles = []; // Files pending for batch delete
 
 // ========== Main Render Function ==========
 async function renderClaudeManager() {
@@ -64,6 +65,9 @@ async function renderClaudeManager() {
   '<button class="btn btn-sm btn-secondary" onclick="refreshClaudeFiles()">' +
   '<i data-lucide="refresh-cw" class="w-4 h-4"></i> ' + t('common.refresh') +
   '</button>' +
+  '<button class="btn btn-sm btn-danger" onclick="showBatchDeleteDialog()">' +
+  '<i data-lucide="trash-2" class="w-4 h-4"></i> ' + t('claude.batchDeleteProject') +
+  '</button>' +
   '</div>' +
   '</div>' +
   '<div class="claude-manager-columns">' +
@@ -959,3 +963,167 @@ window.initClaudeManager = function() {
 
 // Make destroyClaudeManager accessible globally as well
 window.destroyClaudeManager = destroyClaudeManager;
+
+// ========== Batch Delete Functions ==========
+/**
+ * Show batch delete confirmation dialog for project workspace files
+ */
+function showBatchDeleteDialog() {
+  // Get project workspace files (project + modules, exclude user)
+  var projectFiles = [];
+
+  if (claudeFilesData.project.main) {
+    projectFiles.push(claudeFilesData.project.main);
+  }
+
+  projectFiles.push(...claudeFilesData.modules);
+
+  if (projectFiles.length === 0) {
+    showRefreshToast(t('claude.noProjectFiles') || 'No project workspace files to delete', 'info');
+    return;
+  }
+
+  // Initialize pending delete files list
+  pendingDeleteFiles = [...projectFiles];
+
+  // Render the modal with current pending files
+  renderBatchDeleteModal();
+}
+
+/**
+ * Render or re-render the batch delete modal content
+ */
+function renderBatchDeleteModal() {
+  // Build file list HTML with remove buttons
+  var fileListHTML = pendingDeleteFiles.map(function(file, index) {
+    var levelBadge = file.level === 'project'
+      ? '<span class="level-badge project">' + t('claudeManager.projectLevel') + '</span>'
+      : '<span class="level-badge module">' + t('claudeManager.moduleLevel') + '</span>';
+
+    return '<div class="delete-file-item" data-file-index="' + index + '">' +
+      '<i data-lucide="file-text" class="w-4 h-4"></i>' +
+      '<div class="file-info">' +
+      '<span class="file-name">' + escapeHtml(file.name) + '</span>' +
+      '<span class="file-path">' + escapeHtml(file.relativePath) + '</span>' +
+      '</div>' +
+      levelBadge +
+      '<button class="btn btn-sm btn-ghost remove-file-btn" onclick="removeFromDeleteList(' + index + ')" title="' + (t('claude.removeFromList') || 'Remove from list') + '">' +
+      '<i data-lucide="x" class="w-4 h-4"></i>' +
+      '</button>' +
+      '</div>';
+  }).join('');
+
+  var totalSize = pendingDeleteFiles.reduce(function(sum, f) { return sum + f.size; }, 0);
+
+  var modalContent = '<div class="batch-delete-modal">' +
+    '<div class="warning-banner">' +
+    '<i data-lucide="alert-triangle" class="w-5 h-5"></i>' +
+    '<span>' + (t('claude.batchDeleteWarning') || 'This will delete all CLAUDE.md files in the project workspace') + '</span>' +
+    '</div>' +
+    '<div class="delete-summary" id="delete-summary">' +
+    '<div class="summary-item">' +
+    '<span class="summary-label">' + t('claude.filesToDelete') + '</span>' +
+    '<span class="summary-value" id="files-to-delete-count">' + pendingDeleteFiles.length + '</span>' +
+    '</div>' +
+    '<div class="summary-item">' +
+    '<span class="summary-label">' + t('claude.totalSize') + '</span>' +
+    '<span class="summary-value" id="total-size-value">' + formatFileSize(totalSize) + '</span>' +
+    '</div>' +
+    '</div>' +
+    '<div class="file-list-container">' +
+    '<h4>' + t('claude.fileList') + '</h4>' +
+    '<div class="file-list" id="pending-file-list">' + (fileListHTML || '<div class="empty-list-message">' + (t('claude.noFilesInList') || 'No files in the list') + '</div>') + '</div>' +
+    '</div>' +
+    '<div class="confirmation-actions">' +
+    '<button class="btn btn-secondary" onclick="closeModal()">' +
+    '<i data-lucide="x" class="w-4 h-4"></i> ' + t('common.cancel') +
+    '</button>' +
+    '<button class="btn btn-danger" onclick="confirmBatchDeleteProject()"' + (pendingDeleteFiles.length === 0 ? ' disabled' : '') + '>' +
+    '<i data-lucide="trash-2" class="w-4 h-4"></i> ' + t('claude.confirmDelete') +
+    '</button>' +
+    '</div>' +
+    '</div>';
+
+  showModal(
+    t('claude.batchDeleteTitle') || 'Delete Project Workspace Files',
+    modalContent,
+    { size: 'large' }
+  );
+
+  if (window.lucide) lucide.createIcons();
+}
+
+/**
+ * Remove a file from the pending delete list
+ */
+function removeFromDeleteList(index) {
+  if (index >= 0 && index < pendingDeleteFiles.length) {
+    pendingDeleteFiles.splice(index, 1);
+    renderBatchDeleteModal();
+  }
+}
+
+/**
+ * Execute batch delete for project workspace files
+ */
+async function confirmBatchDeleteProject() {
+  // Collect file paths from pending delete list
+  var filePaths = pendingDeleteFiles.map(function(file) {
+    return file.path;
+  });
+
+  if (filePaths.length === 0) return;
+
+  closeModal();
+
+  // Show progress
+  showRefreshToast(
+    (t('claude.deletingFiles') || 'Deleting {count} files...').replace('{count}', filePaths.length),
+    'info'
+  );
+
+  try {
+    var res = await fetch('/api/memory/claude/batch-delete', {
+      method: 'POST',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify({
+        paths: filePaths,
+        confirm: true
+      })
+    });
+
+    if (!res.ok) throw new Error('Batch delete failed');
+
+    var result = await res.json();
+
+    if (result.success) {
+      var message = (t('claude.batchDeleteSuccess') || 'Successfully deleted {deleted} of {total} files')
+        .replace('{deleted}', result.deleted)
+        .replace('{total}', result.total);
+
+      showRefreshToast(message, 'success');
+      addGlobalNotification('success', message, null, 'CLAUDE.md');
+
+      if (result.errors && result.errors.length > 0) {
+        console.warn('Some files failed to delete:', result.errors);
+      }
+
+      // Clear selection if deleted file was selected
+      if (selectedFile && filePaths.includes(selectedFile.path)) {
+        selectedFile = null;
+      }
+
+      // Refresh file tree
+      await refreshClaudeFiles();
+    } else {
+      throw new Error(result.error || 'Unknown error');
+    }
+  } catch (error) {
+    console.error('Error in batch delete:', error);
+    showRefreshToast(
+      t('claude.batchDeleteError') || 'Failed to delete files',
+      'error'
+    );
+    addGlobalNotification('error', t('claude.batchDeleteError') || 'Failed to delete files', null, 'CLAUDE.md');
+  }
+}
@@ -562,6 +562,11 @@ function renderQueueCard(queue, isActive) {
   <i data-lucide="git-merge" class="w-3 h-3"></i>
   </button>
 ` : ''}
+${queue.status !== 'merged' && issueCount > 1 ? `
+  <button class="btn-sm" onclick="showSplitQueueModal('${safeQueueId}')" title="Split queue into multiple queues">
+    <i data-lucide="git-branch" class="w-3 h-3"></i>
+  </button>
+` : ''}
 <button class="btn-sm btn-danger" onclick="confirmDeleteQueue('${safeQueueId}')" title="${t('issues.deleteQueue') || 'Delete queue'}">
   <i data-lucide="trash-2" class="w-3 h-3"></i>
 </button>
@@ -989,6 +994,188 @@ async function executeQueueMerge(sourceQueueId) {
 }
 }
 
+// ========== Queue Split Modal ==========
+async function showSplitQueueModal(queueId) {
+  let modal = document.getElementById('splitQueueModal');
+  if (!modal) {
+    modal = document.createElement('div');
+    modal.id = 'splitQueueModal';
+    modal.className = 'issue-modal';
+    document.body.appendChild(modal);
+  }
+
+  // Fetch queue details
+  let queue;
+  try {
+    const response = await fetch('/api/queue/' + encodeURIComponent(queueId) + '?path=' + encodeURIComponent(projectPath));
+    queue = await response.json();
+    if (queue.error) throw new Error(queue.error);
+  } catch (err) {
+    showNotification('Failed to load queue details', 'error');
+    return;
+  }
+
+  const safeQueueId = escapeHtml(queueId || '');
+  const items = queue.solutions || queue.tasks || [];
+  const isSolutionLevel = !!queue.solutions;
+
+  // Group items by issue
+  const issueGroups = {};
+  items.forEach(item => {
+    const issueId = item.issue_id || 'unknown';
+    if (!issueGroups[issueId]) {
+      issueGroups[issueId] = [];
+    }
+    issueGroups[issueId].push(item);
+  });
+
+  const issueIds = Object.keys(issueGroups);
+
+  modal.innerHTML = `
+    <div class="issue-modal-content split-queue-modal-content">
+      <div class="issue-modal-header">
+        <h3><i data-lucide="git-branch" class="w-5 h-5"></i> Split Queue: ${safeQueueId}</h3>
+        <button class="issue-modal-close" onclick="hideSplitQueueModal()">
+          <i data-lucide="x" class="w-5 h-5"></i>
+        </button>
+      </div>
+
+      <div class="issue-modal-body">
+        <p class="text-sm text-muted-foreground mb-4">
+          Select issues and their solutions to split into a new queue. The remaining items will stay in the current queue.
+        </p>
+
+        ${issueIds.length === 0 ? `
+          <p class="text-center text-muted-foreground py-4">No items to split</p>
+        ` : `
+          <div class="split-queue-controls mb-3">
+            <button class="btn-sm btn-secondary" onclick="selectAllIssues()">
+              <i data-lucide="check-square" class="w-3 h-3"></i> Select All
+            </button>
+            <button class="btn-sm btn-secondary" onclick="deselectAllIssues()">
+              <i data-lucide="square" class="w-3 h-3"></i> Deselect All
+            </button>
+          </div>
+
+          <div class="split-queue-issues">
+            ${issueIds.map(issueId => {
+              const issueItems = issueGroups[issueId];
+              const safeIssueId = escapeHtml(issueId);
+              return `
+                <div class="split-queue-issue-group" data-issue-id="${safeIssueId}">
+                  <div class="split-queue-issue-header">
+                    <label class="flex items-center gap-2">
+                      <input type="checkbox"
+                             class="issue-checkbox"
+                             data-issue-id="${safeIssueId}"
+                             onchange="toggleIssueSelection('${safeIssueId}')">
+                      <span class="font-medium">${safeIssueId}</span>
+                      <span class="text-xs text-muted-foreground">(${issueItems.length} ${isSolutionLevel ? 'solution' : 'task'}${issueItems.length > 1 ? 's' : ''})</span>
+                    </label>
+                  </div>
+                  <div class="split-queue-solutions ml-6">
+                    ${issueItems.map(item => {
+                      const itemId = item.item_id || item.solution_id || item.task_id || '';
+                      const safeItemId = escapeHtml(itemId);
+                      const displayName = isSolutionLevel
+                        ? (item.solution_id || itemId)
+                        : (item.task_id || itemId);
+                      return `
+                        <label class="flex items-center gap-2 py-1">
+                          <input type="checkbox"
+                                 class="solution-checkbox"
+                                 data-issue-id="${safeIssueId}"
+                                 data-item-id="${safeItemId}"
+                                 value="${safeItemId}">
+                          <span class="text-sm font-mono">${escapeHtml(displayName)}</span>
+                          ${item.task_count ? `<span class="text-xs text-muted-foreground">(${item.task_count} tasks)</span>` : ''}
+                        </label>
+                      `;
+                    }).join('')}
+                  </div>
+                </div>
+              `;
+            }).join('')}
+          </div>
+        `}
+      </div>
+
+      <div class="issue-modal-footer">
+        <button class="btn-secondary" onclick="hideSplitQueueModal()">Cancel</button>
+        ${issueIds.length > 0 ? `
+          <button class="btn-primary" onclick="executeQueueSplit('${safeQueueId}')">
+            <i data-lucide="git-branch" class="w-4 h-4"></i>
+            <span>Split Queue</span>
+          </button>
+        ` : ''}
+      </div>
+    </div>
+  `;
+
+  modal.classList.remove('hidden');
+  lucide.createIcons();
+}
+
+function hideSplitQueueModal() {
+  const modal = document.getElementById('splitQueueModal');
+  if (modal) {
+    modal.classList.add('hidden');
+  }
+}
+
+function toggleIssueSelection(issueId) {
+  const issueCheckbox = document.querySelector(`.issue-checkbox[data-issue-id="${issueId}"]`);
+  const solutionCheckboxes = document.querySelectorAll(`.solution-checkbox[data-issue-id="${issueId}"]`);
+
+  if (issueCheckbox && solutionCheckboxes) {
+    solutionCheckboxes.forEach(cb => {
+      cb.checked = issueCheckbox.checked;
+    });
+  }
+}
+
+function selectAllIssues() {
+  const allCheckboxes = document.querySelectorAll('.split-queue-modal-content input[type="checkbox"]');
+  allCheckboxes.forEach(cb => cb.checked = true);
+}
+
+function deselectAllIssues() {
+  const allCheckboxes = document.querySelectorAll('.split-queue-modal-content input[type="checkbox"]');
+  allCheckboxes.forEach(cb => cb.checked = false);
+}
+
+async function executeQueueSplit(sourceQueueId) {
+  const selectedCheckboxes = document.querySelectorAll('.solution-checkbox:checked');
+  const selectedItemIds = Array.from(selectedCheckboxes).map(cb => cb.value);
+
+  if (selectedItemIds.length === 0) {
+    showNotification('Please select at least one item to split', 'warning');
+    return;
+  }
+
+  try {
+    const response = await fetch('/api/queue/split?path=' + encodeURIComponent(projectPath), {
+      method: 'POST',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify({ sourceQueueId, itemIds: selectedItemIds })
+    });
+    const result = await response.json();
+
+    if (result.success) {
+      showNotification(`Split ${result.splitItemCount} items into new queue ${result.newQueueId}`, 'success');
+      hideSplitQueueModal();
+      queueData.expandedQueueId = null;
+      await Promise.all([loadQueueData(), loadAllQueues()]);
+      renderIssueView();
+    } else {
+      showNotification(result.error || 'Failed to split queue', 'error');
+    }
+  } catch (err) {
+    console.error('Failed to split queue:', err);
+    showNotification('Failed to split queue', 'error');
+  }
+}
+
 // ========== Legacy Queue Render (for backward compatibility) ==========
 function renderLegacyQueueSection() {
   const queue = issueData.queue;
@@ -1024,7 +1024,9 @@ async function loadAndRenderMultiCliSummaryTab(session, contentArea) {
   const response = await fetch(`/api/session-detail?path=${encodeURIComponent(session.path)}&type=summary`);
   if (response.ok) {
     const data = await response.json();
-    contentArea.innerHTML = renderMultiCliSummaryContent(data.summary, session);
+    // Support both summaries (from .summaries/) and summary (from plan.json)
+    const summaryText = data.summary || (data.summaries?.length ? data.summaries[0].content : null);
+    contentArea.innerHTML = renderMultiCliSummaryContent(summaryText, session);
     initCollapsibleSections(contentArea);
     if (typeof lucide !== 'undefined') lucide.createIcons();
     return;
@@ -3135,16 +3137,38 @@ async function loadAndRenderLiteSummaryTab(session, contentArea) {
   const response = await fetch(`/api/session-detail?path=${encodeURIComponent(session.path)}&type=summary`);
   if (response.ok) {
     const data = await response.json();
-    contentArea.innerHTML = renderSummaryContent(data.summaries);
-    return;
+    // Prioritize .summaries/ directory content
+    if (data.summaries?.length) {
+      contentArea.innerHTML = renderSummaryContent(data.summaries);
+      if (typeof lucide !== 'undefined') lucide.createIcons();
+      return;
+    }
+    // Fallback to plan.json summary field
+    if (data.summary) {
+      contentArea.innerHTML = renderSummaryContent([{ name: 'Summary', content: data.summary }]);
+      if (typeof lucide !== 'undefined') lucide.createIcons();
+      return;
+    }
   }
 }
-// Fallback
+// Fallback: try to get summary from session object (plan.summary or synthesis.convergence.summary)
+const plan = session.plan || {};
+const synthesis = session.latestSynthesis || session.discussionTopic || {};
+const summaryText = plan.summary || synthesis.convergence?.summary;
+
+if (summaryText) {
+  contentArea.innerHTML = renderSummaryContent([{ name: 'Summary', content: summaryText }]);
+  if (typeof lucide !== 'undefined') lucide.createIcons();
+  return;
+}
+
+// No summary available
 contentArea.innerHTML = `
   <div class="tab-empty-state">
     <div class="empty-icon"><i data-lucide="file-text" class="w-12 h-12"></i></div>
-    <div class="empty-title">No Summaries</div>
-    <div class="empty-text">No summaries found in .summaries/</div>
+    <div class="empty-title">${t('empty.noSummary') || 'No Summary'}</div>
+    <div class="empty-text">${t('empty.noSummaryText') || 'No summary available for this session.'}</div>
   </div>
 `;
 if (typeof lucide !== 'undefined') lucide.createIcons();
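The resolution order in the hunk above can be condensed into a small standalone helper. This is an illustrative sketch only; `resolveSummary` is not a function in this commit:

```javascript
// Illustrative sketch of the summary resolution order used above:
// 1) data.summaries (.summaries/ files)  2) data.summary (plan.json field)
// 3) session.plan.summary  4) synthesis convergence summary
function resolveSummary(data, session) {
  if (data && data.summaries && data.summaries.length) return data.summaries;
  if (data && data.summary) return [{ name: 'Summary', content: data.summary }];
  var plan = (session && session.plan) || {};
  var synthesis = (session && (session.latestSynthesis || session.discussionTopic)) || {};
  var text = plan.summary || (synthesis.convergence && synthesis.convergence.summary);
  return text ? [{ name: 'Summary', content: text }] : null;
}
```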
@@ -364,6 +364,16 @@ const ParamsSchema = z.object({
   parentExecutionId: z.string().optional(), // Parent execution ID for fork/retry scenarios
   stream: z.boolean().default(false), // false = cache full output (default), true = stream output via callback
   outputFormat: z.enum(['text', 'json-lines']).optional().default('json-lines'), // Output parsing format (default: json-lines for type badges)
+  // Codex review options
+  uncommitted: z.boolean().optional(), // Review uncommitted changes (default for review mode)
+  base: z.string().optional(), // Review changes against base branch
+  commit: z.string().optional(), // Review changes from specific commit
+  title: z.string().optional(), // Optional title for review summary
+  // Rules env vars (PROTO, TMPL) - will be passed to subprocess environment
+  rulesEnv: z.object({
+    PROTO: z.string().optional(),
+    TMPL: z.string().optional(),
+  }).optional(),
 });
 
 type Params = z.infer<typeof ParamsSchema>;
@@ -388,7 +398,7 @@ async function executeCliTool(
     throw new Error(`Invalid params: ${parsed.error.message}`);
   }
 
-  const { tool, prompt, mode, format, model, cd, includeDirs, resume, id: customId, noNative, category, parentExecutionId, outputFormat } = parsed.data;
+  const { tool, prompt, mode, format, model, cd, includeDirs, resume, id: customId, noNative, category, parentExecutionId, outputFormat, uncommitted, base, commit, title, rulesEnv } = parsed.data;
 
   // Validate and determine working directory early (needed for conversation lookup)
   let workingDir: string;
@@ -786,7 +796,8 @@ async function executeCliTool(
     model: effectiveModel,
     dir: cd,
     include: includeDirs,
-    nativeResume: nativeResumeConfig
+    nativeResume: nativeResumeConfig,
+    reviewOptions: mode === 'review' ? { uncommitted, base, commit, title } : undefined
   });
 
   // Create output parser and IR storage
@@ -823,9 +834,11 @@ async function executeCliTool(
   }
 
   // Merge custom env with process.env (custom env takes precedence)
+  // Also include rulesEnv for $PROTO and $TMPL template variables
   const spawnEnv = {
     ...process.env,
-    ...customEnv
+    ...customEnv,
+    ...(rulesEnv || {})
   };
 
   debugLog('SPAWN', `Spawning process`, {
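Because later object spreads win, `rulesEnv` overrides `customEnv`, which overrides `process.env`, so `$PROTO` and `$TMPL` always reach the subprocess when `--rule` is used. A minimal sketch of that precedence (the sample values here are illustrative):

```javascript
// Later spreads win: rulesEnv > customEnv > process.env.
var processEnv = { PATH: '/usr/bin', PROTO: 'stale-value' };
var customEnv = { PROTO: 'custom-value', DEBUG: '1' };
var rulesEnv = { PROTO: 'protocol text', TMPL: 'template text' };

var spawnEnv = { ...processEnv, ...customEnv, ...(rulesEnv || {}) };
```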
@@ -159,8 +159,15 @@ export function buildCommand(params: {
   nativeResume?: NativeResumeConfig;
   /** Claude CLI settings file path (for --settings parameter) */
   settingsFile?: string;
+  /** Codex review options */
+  reviewOptions?: {
+    uncommitted?: boolean;
+    base?: string;
+    commit?: string;
+    title?: string;
+  };
 }): { command: string; args: string[]; useStdin: boolean } {
-  const { tool, prompt, mode = 'analysis', model, dir, include, nativeResume, settingsFile } = params;
+  const { tool, prompt, mode = 'analysis', model, dir, include, nativeResume, settingsFile, reviewOptions } = params;
 
   debugLog('BUILD_CMD', `Building command for tool: ${tool}`, {
     mode,
@@ -227,10 +234,25 @@ export function buildCommand(params: {
     // codex review mode: non-interactive code review
     // Format: codex review [OPTIONS] [PROMPT]
     args.push('review');
-    // Default to --uncommitted if no specific review target in prompt
-    args.push('--uncommitted');
+    // Review target: --uncommitted (default), --base <branch>, or --commit <sha>
+    if (reviewOptions?.base) {
+      args.push('--base', reviewOptions.base);
+    } else if (reviewOptions?.commit) {
+      args.push('--commit', reviewOptions.commit);
+    } else {
+      // Default to --uncommitted if no specific target
+      args.push('--uncommitted');
+    }
+
+    // Optional title for review summary
+    if (reviewOptions?.title) {
+      args.push('--title', reviewOptions.title);
+    }
+
     if (model) {
-      args.push('-m', model);
+      // codex review uses -c key=value for config override, not -m
+      args.push('-c', `model=${model}`);
     }
     // codex review uses positional prompt argument, not stdin
     useStdin = false;
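The review-target precedence above (`--base` wins over `--commit`, which wins over the `--uncommitted` default) can be sketched as a standalone function. `buildReviewArgs` is a hypothetical helper for illustration, not part of the commit:

```javascript
// Sketch of the review-target precedence implemented in buildCommand:
// base > commit > uncommitted default; title and model overrides appended after.
function buildReviewArgs(reviewOptions, model) {
  var args = ['review'];
  if (reviewOptions && reviewOptions.base) {
    args.push('--base', reviewOptions.base);
  } else if (reviewOptions && reviewOptions.commit) {
    args.push('--commit', reviewOptions.commit);
  } else {
    args.push('--uncommitted');
  }
  if (reviewOptions && reviewOptions.title) {
    args.push('--title', reviewOptions.title);
  }
  if (model) {
    // review mode takes the model via config override, not -m
    args.push('-c', 'model=' + model);
  }
  return args;
}
```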
ccw/src/tools/template-discovery.ts (new file, 303 lines)
@@ -0,0 +1,303 @@
+/**
+ * Template Discovery Module
+ *
+ * Provides auto-discovery and loading of CLI templates from
+ * ~/.claude/workflows/cli-templates/
+ *
+ * Features:
+ * - Scan prompts/ directory (flat structure with category-function.txt naming)
+ * - Match template names (e.g., "analysis-review-architecture" or just "review-architecture")
+ * - Load protocol files based on mode (analysis/write)
+ * - Cache template content for performance
+ */
+
+import { readdirSync, readFileSync, existsSync } from 'fs';
+import { join, basename, extname } from 'path';
+import { homedir } from 'os';
+
+// ============================================================================
+// Types
+// ============================================================================
+
+export interface TemplateMeta {
+  name: string;      // Full filename without extension (e.g., "analysis-review-architecture")
+  path: string;      // Full absolute path
+  category: string;  // Category from filename (e.g., "analysis")
+  shortName: string; // Name without category prefix (e.g., "review-architecture")
+}
+
+export interface TemplateIndex {
+  templates: Map<string, TemplateMeta>;   // name -> meta (full name match)
+  byShortName: Map<string, TemplateMeta>; // shortName -> meta (for fuzzy match)
+  categories: Map<string, string[]>;      // category -> template names
+  lastScan: number;
+}
+
+// ============================================================================
+// Constants
+// ============================================================================
+
+const TEMPLATES_BASE_DIR = join(homedir(), '.claude', 'workflows', 'cli-templates');
+const PROMPTS_DIR = join(TEMPLATES_BASE_DIR, 'prompts');
+const PROTOCOLS_DIR = join(TEMPLATES_BASE_DIR, 'protocols');
+
+const PROTOCOL_FILES: Record<string, string> = {
+  analysis: 'analysis-protocol.md',
+  write: 'write-protocol.md',
+};
+
+// Cache
+let templateIndex: TemplateIndex | null = null;
+const contentCache: Map<string, string> = new Map();
+
+// ============================================================================
+// Public API
+// ============================================================================
+
+/**
+ * Get the base templates directory path
+ */
+export function getTemplatesDir(): string {
+  return TEMPLATES_BASE_DIR;
+}
+
+/**
+ * Get the prompts directory path
+ */
+export function getPromptsDir(): string {
+  return PROMPTS_DIR;
+}
+
+/**
+ * Get the protocols directory path
+ */
+export function getProtocolsDir(): string {
+  return PROTOCOLS_DIR;
+}
+
+/**
+ * Scan templates directory and build index
+ * Flat structure: prompts/category-function.txt
+ * Results are cached for performance
+ */
+export function scanTemplates(forceRescan = false): TemplateIndex {
+  if (templateIndex && !forceRescan) {
+    return templateIndex;
+  }
+
+  const templates = new Map<string, TemplateMeta>();
+  const byShortName = new Map<string, TemplateMeta>();
+  const categories = new Map<string, string[]>();
+
+  if (!existsSync(PROMPTS_DIR)) {
+    console.warn(`[template-discovery] Prompts directory not found: ${PROMPTS_DIR}`);
+    templateIndex = { templates, byShortName, categories, lastScan: Date.now() };
+    return templateIndex;
+  }
+
+  // Scan all files directly in prompts/ (flat structure)
+  const files = readdirSync(PROMPTS_DIR).filter(file => {
+    const ext = extname(file).toLowerCase();
+    return ext === '.txt' || ext === '.md';
+  });
+
+  for (const file of files) {
+    const name = basename(file, extname(file)); // e.g., "analysis-review-architecture"
+    const fullPath = join(PROMPTS_DIR, file);
+
+    // Extract category from filename (first segment before -)
+    const dashIndex = name.indexOf('-');
+    const category = dashIndex > 0 ? name.substring(0, dashIndex) : 'other';
+    const shortName = dashIndex > 0 ? name.substring(dashIndex + 1) : name;
+
+    const meta: TemplateMeta = {
+      name,
+      path: fullPath,
+      category,
+      shortName,
+    };
+
+    // Index by full name
+    templates.set(name, meta);
+
+    // Index by short name (for fuzzy match)
+    // If duplicate shortName exists, prefer keeping first one
+    if (!byShortName.has(shortName)) {
+      byShortName.set(shortName, meta);
+    }
|
||||||
|
|
||||||
|
// Group by category
|
||||||
|
if (!categories.has(category)) {
|
||||||
|
categories.set(category, []);
|
||||||
|
}
|
||||||
|
categories.get(category)!.push(name);
|
||||||
|
}
|
||||||
|
|
||||||
|
templateIndex = { templates, byShortName, categories, lastScan: Date.now() };
|
||||||
|
return templateIndex;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Find a template by name
|
||||||
|
*
|
||||||
|
* @param nameOrShort - Full template name (e.g., "analysis-review-architecture")
|
||||||
|
* or short name (e.g., "review-architecture")
|
||||||
|
* @returns Full path to template file, or null if not found
|
||||||
|
*/
|
||||||
|
export function findTemplate(nameOrShort: string): string | null {
|
||||||
|
const index = scanTemplates();
|
||||||
|
|
||||||
|
// Try exact full name match first
|
||||||
|
if (index.templates.has(nameOrShort)) {
|
||||||
|
return index.templates.get(nameOrShort)!.path;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Try with .txt extension removed
|
||||||
|
const nameWithoutExt = nameOrShort.replace(/\.(txt|md)$/i, '');
|
||||||
|
if (index.templates.has(nameWithoutExt)) {
|
||||||
|
return index.templates.get(nameWithoutExt)!.path;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Try short name match (without category prefix)
|
||||||
|
if (index.byShortName.has(nameOrShort)) {
|
||||||
|
return index.byShortName.get(nameOrShort)!.path;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Try short name without extension
|
||||||
|
if (index.byShortName.has(nameWithoutExt)) {
|
||||||
|
return index.byShortName.get(nameWithoutExt)!.path;
|
||||||
|
}
|
||||||
|
|
||||||
|
return null;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Load protocol content based on mode
|
||||||
|
*
|
||||||
|
* @param mode - Execution mode: "analysis" or "write"
|
||||||
|
* @returns Protocol file content, or empty string if not found
|
||||||
|
*/
|
||||||
|
export function loadProtocol(mode: string): string {
|
||||||
|
const protocolFile = PROTOCOL_FILES[mode];
|
||||||
|
if (!protocolFile) {
|
||||||
|
console.warn(`[template-discovery] No protocol defined for mode: ${mode}`);
|
||||||
|
return '';
|
||||||
|
}
|
||||||
|
|
||||||
|
const protocolPath = join(PROTOCOLS_DIR, protocolFile);
|
||||||
|
|
||||||
|
// Check cache
|
||||||
|
if (contentCache.has(protocolPath)) {
|
||||||
|
return contentCache.get(protocolPath)!;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!existsSync(protocolPath)) {
|
||||||
|
console.warn(`[template-discovery] Protocol file not found: ${protocolPath}`);
|
||||||
|
return '';
|
||||||
|
}
|
||||||
|
|
||||||
|
try {
|
||||||
|
const content = readFileSync(protocolPath, 'utf8');
|
||||||
|
contentCache.set(protocolPath, content);
|
||||||
|
return content;
|
||||||
|
} catch (error) {
|
||||||
|
console.error(`[template-discovery] Failed to read protocol: ${protocolPath}`, error);
|
||||||
|
return '';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Load template content by name or path
|
||||||
|
*
|
||||||
|
* @param nameOrPath - Template name or relative path
|
||||||
|
* @returns Template file content
|
||||||
|
* @throws Error if template not found
|
||||||
|
*/
|
||||||
|
export function loadTemplate(nameOrPath: string): string {
|
||||||
|
const templatePath = findTemplate(nameOrPath);
|
||||||
|
|
||||||
|
if (!templatePath) {
|
||||||
|
// List available templates for helpful error message
|
||||||
|
const index = scanTemplates();
|
||||||
|
const available = Array.from(index.templates.keys()).slice(0, 10).join(', ');
|
||||||
|
throw new Error(
|
||||||
|
`Template not found: "${nameOrPath}"\n` +
|
||||||
|
`Available templates (first 10): ${available}...\n` +
|
||||||
|
`Use 'ccw cli templates' to list all available templates.`
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check cache
|
||||||
|
if (contentCache.has(templatePath)) {
|
||||||
|
return contentCache.get(templatePath)!;
|
||||||
|
}
|
||||||
|
|
||||||
|
try {
|
||||||
|
const content = readFileSync(templatePath, 'utf8');
|
||||||
|
contentCache.set(templatePath, content);
|
||||||
|
return content;
|
||||||
|
} catch (error) {
|
||||||
|
throw new Error(`Failed to read template: ${templatePath}: ${error}`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Build rules content from protocol and template
|
||||||
|
*
|
||||||
|
* @param mode - Execution mode for protocol selection
|
||||||
|
* @param templateName - Template name or path (optional)
|
||||||
|
* @param includeProtocol - Whether to include protocol (default: true)
|
||||||
|
* @returns Combined rules content
|
||||||
|
*/
|
||||||
|
export function buildRulesContent(
|
||||||
|
mode: string,
|
||||||
|
templateName?: string,
|
||||||
|
includeProtocol = true
|
||||||
|
): string {
|
||||||
|
const parts: string[] = [];
|
||||||
|
|
||||||
|
// Load protocol if requested
|
||||||
|
if (includeProtocol) {
|
||||||
|
const protocol = loadProtocol(mode);
|
||||||
|
if (protocol) {
|
||||||
|
parts.push(protocol);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Load template if specified
|
||||||
|
if (templateName) {
|
||||||
|
const template = loadTemplate(templateName);
|
||||||
|
if (template) {
|
||||||
|
parts.push(template);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return parts.join('\n\n');
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* List all available templates
|
||||||
|
*
|
||||||
|
* @returns Object with categories and their templates
|
||||||
|
*/
|
||||||
|
export function listTemplates(): Record<string, TemplateMeta[]> {
|
||||||
|
const index = scanTemplates();
|
||||||
|
const result: Record<string, TemplateMeta[]> = {};
|
||||||
|
|
||||||
|
for (const [category, names] of index.categories) {
|
||||||
|
result[category] = names.map(name => {
|
||||||
|
const meta = index.templates.get(name);
|
||||||
|
return meta || { name, path: '', category, shortName: '' };
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
return result;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Clear template cache (useful for testing or after template updates)
|
||||||
|
*/
|
||||||
|
export function clearCache(): void {
|
||||||
|
templateIndex = null;
|
||||||
|
contentCache.clear();
|
||||||
|
}
|
||||||
codex-lens/docs/CODEXLENS_LSP_API_SPEC.md (new file, 676 lines)
@@ -0,0 +1,676 @@
# Codexlens LSP API Specification

**Version**: 1.1
**Status**: ✅ APPROVED (Gemini review)
**Architecture**: codexlens provides a Python API; CCW implements the MCP endpoints
**Analysis sources**: Gemini (architecture review) + Codex (implementation review)
**Last updated**: 2025-01-17

---

## 1. Overview

### 1.1 Background

Based on an analysis of the cclsp MCP server implementation, this document designs the LSP-style search interfaces of codexlens, providing code-intelligence capabilities to AI agents.

### 1.2 Architecture decision

**MCP endpoints are implemented by CCW; codexlens only provides a Python API.**

```
┌─────────────────────────────────────────────────────────────┐
│                        Claude Code                          │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                     MCP Client                        │  │
│  └───────────────────────────────────────────────────────┘  │
│                            │                                │
│                            ▼                                │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                   CCW MCP Server                      │  │
│  │  ┌─────────────────────────────────────────────────┐  │  │
│  │  │              MCP Tool Handlers                  │  │  │
│  │  │  • codexlens_file_context                       │  │  │
│  │  │  • codexlens_find_definition                    │  │  │
│  │  │  • codexlens_find_references                    │  │  │
│  │  │  • codexlens_semantic_search                    │  │  │
│  │  └──────────────────────┬──────────────────────────┘  │  │
│  └─────────────────────────┼─────────────────────────────┘  │
└────────────────────────────┼────────────────────────────────┘
                             │ Python API calls
                             ▼
┌─────────────────────────────────────────────────────────────┐
│                         codexlens                           │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                  Public API Layer                     │  │
│  │  codexlens.api.file_context()                         │  │
│  │  codexlens.api.find_definition()                      │  │
│  │  codexlens.api.find_references()                      │  │
│  │  codexlens.api.semantic_search()                      │  │
│  └──────────────────────┬────────────────────────────────┘  │
│                         │                                   │
│  ┌──────────────────────▼────────────────────────────────┐  │
│  │                 Core Components                       │  │
│  │  GlobalSymbolIndex | ChainSearchEngine | HoverProvider│  │
│  └──────────────────────┬────────────────────────────────┘  │
│                         │                                   │
│  ┌──────────────────────▼────────────────────────────────┐  │
│  │              SQLite Index Databases                   │  │
│  │  global_symbols.db | *.index.db (per-directory)       │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
```

### 1.3 Separation of responsibilities

| Component | Responsibilities |
|-----------|------------------|
| **codexlens** | Python API, index queries, search algorithms, result aggregation, graceful degradation |
| **CCW** | MCP protocol, parameter validation, result serialization, error handling, project_root inference |

### 1.4 codexlens vs cclsp

| Feature | cclsp | codexlens |
|---------|-------|-----------|
| Data source | Live LSP servers | Prebuilt SQLite indexes |
| Startup time | 200-3000 ms | <50 ms |
| Response time | 50-500 ms | <5 ms |
| Cross-language | One LSP server per language | Unified Python/TS/JS/Go index |
| Dependencies | Requires language servers | No external dependencies |
| Accuracy | 100% (compiler-grade) | 95%+ (tree-sitter) |
| Rename support | Yes | No (read-only index) |
| Live diagnostics | Yes | Via IDE MCP |

**Recommendation**: use codexlens for fast search, cclsp for precise refactoring.

---

## 2. cclsp Design Patterns (Reference)

### 2.1 MCP tool interface design

| Pattern | Description | Code location |
|---------|-------------|---------------|
| **Name-based** | Accepts `symbol_name` instead of file coordinates | `index.ts:70` |
| **Safe disambiguation** | Two-step `rename_symbol` → `rename_symbol_strict` | `index.ts:133, 172` |
| **Complexity abstraction** | Hides LSP protocol details | `index.ts:211` |
| **Graceful failure** | Returns helpful text responses | Global |

### 2.2 Symbol resolution algorithm

```
1. getDocumentSymbols (lsp-client.ts:1406)
   └─ Fetch all symbols in the file

2. Handle both formats:
   ├─ DocumentSymbol[] → flatten
   └─ SymbolInformation[] → secondary lookup

3. Filter: symbol.name === symbolName && symbol.kind

4. Fallback: retry without the kind constraint when there are no results

5. Aggregate: iterate over all matches and aggregate definition locations
```

---
## 3. Requirements

### Requirement 1: File context query (`file_context`)

**Purpose**: read a source file and return a summary of the call relationships of every method in it.

**Example output**:
```markdown
## src/auth/login.py (3 methods)

### login_user (line 15-45)
- Calls: validate_password (auth/utils.py:23), create_session (session/manager.py:89)
- Called by: handle_login_request (api/routes.py:156), test_login (tests/test_auth.py:34)

### validate_token (line 47-62)
- Calls: decode_jwt (auth/jwt.py:12)
- Called by: auth_middleware (middleware/auth.py:28)
```

### Requirement 2: General LSP search (cclsp-compatible)

| Endpoint | Purpose |
|----------|---------|
| `find_definition` | Find definition locations by symbol name |
| `find_references` | Find all references to a symbol |
| `workspace_symbols` | Workspace-wide symbol search |
| `get_hover` | Get hover information for a symbol |

### Requirement 3: Vector + LSP fusion search

**Purpose**: combine vector-based semantic search with structural LSP search.

**Fusion strategies**:
- **RRF** (preferred): simple, needs no score normalization, robust
- **Cascade**: for specific scenarios; vector first, then LSP
- **Adaptive**: long-term goal; picks a strategy automatically per query type
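The RRF choice above can be sketched in a few lines. This is a minimal illustration of Reciprocal Rank Fusion, not the actual codexlens implementation; the constant `k=60` is a conventional default and the input lists are made up:

```python
from collections import defaultdict

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    """Fuse several ranked result lists (each ordered best-first)."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] += 1.0 / (k + rank)  # reciprocal-rank contribution
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical result lists from the vector and keyword rankers
vector_hits = ["auth.login", "auth.validate", "session.create"]
keyword_hits = ["auth.login", "jwt.decode", "auth.validate"]
fused = rrf_fuse([vector_hits, keyword_hits])
# "auth.login" ranks first because it is top-ranked in both lists
```

Because only ranks are used, the vector and keyword scores never need to be on the same scale, which is why RRF is the robust default here.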

---

## 4. API Specification

### 4.1 Module structure

```
src/codexlens/
├─ api/                      [new] public API layer
│  ├─ __init__.py            exports all APIs
│  ├─ file_context.py        file context
│  ├─ definition.py          definition lookup
│  ├─ references.py          reference lookup
│  ├─ symbols.py             symbol search
│  ├─ hover.py               hover info
│  └─ semantic.py            semantic search
│
├─ storage/
│  ├─ global_index.py        [extend] get_file_symbols()
│  └─ relationship_query.py  [new] directed call queries
│
└─ search/
   └─ chain_search.py        [fix] schema compatibility
```

### 4.2 `codexlens.api.file_context()`

```python
from dataclasses import dataclass, field
from typing import List, Optional, Dict, Tuple

@dataclass
class CallInfo:
    """Call relationship info."""
    symbol_name: str
    file_path: Optional[str]  # target file (may be None)
    line: int
    relationship: str  # call | import | inheritance

@dataclass
class MethodContext:
    """Method context."""
    name: str
    kind: str  # function | method | class
    line_range: Tuple[int, int]
    signature: Optional[str]
    calls: List[CallInfo]    # outgoing calls
    callers: List[CallInfo]  # incoming calls

@dataclass
class FileContextResult:
    """File context result."""
    file_path: str
    language: str
    methods: List[MethodContext]
    summary: str  # human-readable summary
    discovery_status: Dict[str, bool] = field(default_factory=lambda: {
        "outgoing_resolved": False,
        "incoming_resolved": True,
        "targets_resolved": False
    })

def file_context(
    project_root: str,
    file_path: str,
    include_calls: bool = True,
    include_callers: bool = True,
    max_depth: int = 1,
    format: str = "brief"  # brief | detailed | tree
) -> FileContextResult:
    """
    Get the method-call context of a source file.

    Args:
        project_root: project root directory (used to locate the index)
        file_path: source file path
        include_calls: include outgoing calls
        include_callers: include incoming calls
        max_depth: call-chain depth (1 = direct calls only)
            ⚠️ V1 limitation: only max_depth=1 is supported;
            deep call-chain analysis will be implemented in V2
        format: output format

    Returns:
        FileContextResult

    Raises:
        IndexNotFoundError: project has not been indexed
        FileNotFoundError: file does not exist

    Note:
        V1 implementation limits:
        - max_depth only supports 1 (direct calls)
        - the target file of an outgoing call may be None (unresolved)
        - deep call-chain analysis is planned as a V2 feature
    """
```

### 4.3 `codexlens.api.find_definition()`

```python
@dataclass
class DefinitionResult:
    """Definition lookup result."""
    name: str
    kind: str
    file_path: str
    line: int
    end_line: int
    signature: Optional[str]
    container: Optional[str]  # enclosing class/module
    score: float

def find_definition(
    project_root: str,
    symbol_name: str,
    symbol_kind: Optional[str] = None,
    file_context: Optional[str] = None,
    limit: int = 10
) -> List[DefinitionResult]:
    """
    Find definition locations by symbol name.

    Fallback strategy:
    1. Exact match + kind filter
    2. Exact match (kind constraint removed)
    3. Prefix match
    """
```
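The three-step fallback in the docstring can be sketched as follows. The `query_exact` and `query_prefix` helpers are hypothetical stand-ins for the real SQLite index lookups; only the fallback order mirrors the spec:

```python
# Hypothetical in-memory lookups standing in for index queries
def query_exact(symbols, name, kind=None):
    return [s for s in symbols if s["name"] == name
            and (kind is None or s["kind"] == kind)]

def query_prefix(symbols, prefix):
    return [s for s in symbols if s["name"].startswith(prefix)]

def resolve_definition(symbols, name, kind=None, limit=10):
    # 1. Exact match with kind filter
    results = query_exact(symbols, name, kind)
    # 2. Fallback: exact match without the kind constraint
    if not results and kind is not None:
        results = query_exact(symbols, name)
    # 3. Fallback: prefix match
    if not results:
        results = query_prefix(symbols, name)
    return results[:limit]

symbols = [
    {"name": "UserService", "kind": "class"},
    {"name": "UserServiceFactory", "kind": "function"},
]
# Wrong kind hint still resolves via fallback step 2
hits = resolve_definition(symbols, "UserService", kind="method")
```

The point of the ordering is that a wrong `symbol_kind` hint degrades the query instead of failing it, matching cclsp's "graceful failure" pattern described in section 2.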

### 4.4 `codexlens.api.find_references()`

```python
@dataclass
class ReferenceResult:
    """Reference result."""
    file_path: str
    line: int
    column: int
    context_line: str
    relationship: str  # call | import | type_annotation | inheritance

@dataclass
class GroupedReferences:
    """References grouped by definition."""
    definition: DefinitionResult
    references: List[ReferenceResult]

def find_references(
    project_root: str,
    symbol_name: str,
    symbol_kind: Optional[str] = None,
    include_definition: bool = True,
    group_by_definition: bool = True,
    limit: int = 100
) -> List[GroupedReferences]:
    """
    Find all reference locations of a symbol.

    When multiple definitions exist, results are returned grouped by
    definition, which avoids mixing references to different definitions.
    """
```

### 4.5 `codexlens.api.workspace_symbols()`

```python
@dataclass
class SymbolInfo:
    """Symbol info."""
    name: str
    kind: str
    file_path: str
    line: int
    container: Optional[str]
    score: float

def workspace_symbols(
    project_root: str,
    query: str,
    kind_filter: Optional[List[str]] = None,
    file_pattern: Optional[str] = None,
    limit: int = 50
) -> List[SymbolInfo]:
    """Search symbols across the whole workspace (prefix match)."""
```

### 4.6 `codexlens.api.get_hover()`

```python
@dataclass
class HoverInfo:
    """Hover info."""
    name: str
    kind: str
    signature: str
    documentation: Optional[str]
    file_path: str
    line_range: Tuple[int, int]
    type_info: Optional[str]

def get_hover(
    project_root: str,
    symbol_name: str,
    file_path: Optional[str] = None
) -> Optional[HoverInfo]:
    """Get detailed hover information for a symbol."""
```

### 4.7 `codexlens.api.semantic_search()`

```python
@dataclass
class SemanticResult:
    """Semantic search result."""
    symbol_name: str
    kind: str
    file_path: str
    line: int
    vector_score: Optional[float]
    structural_score: Optional[float]
    fusion_score: float
    snippet: str
    match_reason: Optional[str]

def semantic_search(
    project_root: str,
    query: str,
    mode: str = "fusion",  # vector | structural | fusion
    vector_weight: float = 0.5,
    structural_weight: float = 0.3,
    keyword_weight: float = 0.2,
    fusion_strategy: str = "rrf",  # rrf | staged | binary | hybrid
    kind_filter: Optional[List[str]] = None,
    limit: int = 20,
    include_match_reason: bool = False
) -> List[SemanticResult]:
    """
    Semantic search combining vector and structural search.

    Args:
        project_root: project root directory
        query: natural-language query
        mode: search mode
            - vector: vector search only
            - structural: structural search only (symbols + relationships)
            - fusion: fused search (default)
        vector_weight: vector search weight, in [0, 1]
        structural_weight: structural search weight, in [0, 1]
        keyword_weight: keyword search weight, in [0, 1]
        fusion_strategy: fusion strategy (mapped onto chain_search.py)
            - rrf: Reciprocal Rank Fusion (recommended, default)
            - staged: staged cascade → staged_cascade_search
            - binary: binary rerank cascade → binary_rerank_cascade_search
            - hybrid: hybrid cascade → hybrid_cascade_search
        kind_filter: symbol-kind filter
        limit: maximum number of results
        include_match_reason: generate a match reason (heuristic, not LLM)

    Returns:
        Result list sorted by fusion_score.

    Degradation behavior:
        - No vector index: vector_score=None; FTS + structural search is used.
        - No relationship data: structural_score=None; vector search only.
    """
```

---

## 5. Known Issues and Solutions

### 5.1 P0 blockers

| Issue | Location | Solution |
|-------|----------|----------|
| **Index schema mismatch** | `chain_search.py:313-324` vs `dir_index.py:304-312` | Support both `full_path` and `path` |
| **Missing file-symbol query** | `global_index.py:214-260` | Add `get_file_symbols()` |
| **Missing outgoing-call query** | `dir_index.py:333-342` | Add `RelationshipQuery` |
| **Inconsistent relationship types** | `entities.py:74-79` | Normalize `calls` → `call` |
### 5.2 Design Flaws (found by Gemini)

| Flaw | Impact | Solution |
|------|--------|----------|
| **Incomplete call graph** | `file_context` lacks outgoing calls | Add a directed-call API |
| **Undefined disambiguation** | Multiple definitions cannot be told apart | Implement `rank_by_proximity()` |
| **AI feature too expensive** | `explanation` would require an LLM | Make it optional, off by default |
| **Inconsistent fusion parameters** | 3 branches but only 2 weights | Add `keyword_weight` |

### 5.3 Disambiguation algorithm

**V1 implementation** (based on file-path proximity):

```python
import os

def rank_by_proximity(
    results: List[DefinitionResult],
    file_context: str
) -> List[DefinitionResult]:
    """Rank by file proximity (V1: path proximity)."""
    def proximity_score(result):
        # 1. Same directory scores highest
        if os.path.dirname(result.file_path) == os.path.dirname(file_context):
            return 100
        # 2. Otherwise: length of the common path prefix
        common = os.path.commonpath([result.file_path, file_context])
        return len(common)

    return sorted(results, key=proximity_score, reverse=True)
```

**V2 enhancement plan** (based on import-graph distance):

```python
def rank_by_import_distance(
    results: List[DefinitionResult],
    file_context: str,
    import_graph: Dict[str, Set[str]]
) -> List[DefinitionResult]:
    """Rank by import-graph distance (V2 sketch; bfs_shortest_path is
    planned, proximity_score is the V1 helper above)."""
    def import_distance(result):
        # BFS over the import graph for the shortest import path
        return bfs_shortest_path(
            import_graph,
            file_context,
            result.file_path
        )

    # Combined: 0.6 * import_distance + 0.4 * path proximity
    return sorted(results, key=lambda r: (
        0.6 * import_distance(r) +
        0.4 * (100 - proximity_score(r))
    ))
```

### 5.4 Reference implementation: `get_file_symbols()`

**Location**: `src/codexlens/storage/global_index.py`

```python
def get_file_symbols(self, file_path: str | Path) -> List[Symbol]:
    """
    Get all symbols defined in the given file.

    Args:
        file_path: file path (relative or absolute)

    Returns:
        Symbols sorted by line number.
    """
    file_path_str = str(Path(file_path).resolve())
    with self._lock:
        conn = self._get_connection()
        rows = conn.execute(
            """
            SELECT symbol_name, symbol_kind, file_path, start_line, end_line
            FROM global_symbols
            WHERE project_id = ? AND file_path = ?
            ORDER BY start_line
            """,
            (self.project_id, file_path_str),
        ).fetchall()

    return [
        Symbol(
            name=row["symbol_name"],
            kind=row["symbol_kind"],
            range=(row["start_line"], row["end_line"]),
            file=row["file_path"],
        )
        for row in rows
    ]
```

---

## 6. Implementation Plan

### Phase 0: Infrastructure (16h)

| Task | Effort | Notes |
|------|--------|-------|
| Fix `search_references` schema | 4h | Support both schemas |
| Add `GlobalSymbolIndex.get_file_symbols()` | 4h | File-symbol query (see 5.4) |
| Add `RelationshipQuery` class | 6h | Directed call queries |
| Relationship-type normalization layer | 2h | `calls` → `call` |

### Phase 1: API layer (48h)

| Task | Effort | Complexity |
|------|--------|------------|
| `find_definition()` | 4h | S |
| `find_references()` | 8h | M |
| `workspace_symbols()` | 4h | S |
| `get_hover()` | 4h | S |
| `file_context()` | 16h | L |
| `semantic_search()` | 12h | M |

### Phase 2: Tests and docs (16h)

| Task | Effort |
|------|--------|
| Unit tests (≥80% coverage) | 8h |
| API documentation | 4h |
| Example code | 4h |

### Critical path

```
Phase 0.1 (schema fix)
    ↓
Phase 0.2 (file symbols) → Phase 1.5 (file_context)
    ↓
Phase 1 (remaining APIs)
    ↓
Phase 2 (tests)
```

---

## 7. Testing Strategy

### 7.1 Unit tests

```python
# test_global_index.py
def test_get_file_symbols():
    index = GlobalSymbolIndex(":memory:")
    index.update_file_symbols(project_id=1, file_path="test.py", symbols=[...])
    results = index.get_file_symbols("test.py")
    assert len(results) == 3

# test_relationship_query.py
def test_outgoing_calls():
    store = DirIndexStore(":memory:")
    calls = store.get_outgoing_calls("src/auth.py", "login")
    assert calls[0].relationship == "call"  # normalized
```

### 7.2 Schema compatibility tests

```python
def test_search_references_both_schemas():
    """Reference search against both schemas:
    - old schema: files(path, ...)
    - new schema: files(full_path, ...)
    """
```
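One way to make a query work against both schemas is to probe the `files` table for its path column before building SQL. A minimal sketch, assuming only the old/new column names mentioned above; the `path_column` helper is illustrative, not the actual chain_search.py fix:

```python
import sqlite3

def path_column(conn: sqlite3.Connection) -> str:
    """Return whichever path column the files table actually has."""
    cols = {row[1] for row in conn.execute("PRAGMA table_info(files)")}
    return "full_path" if "full_path" in cols else "path"

# New-schema database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, full_path TEXT)")
col = path_column(conn)
rows = conn.execute(f"SELECT {col} FROM files").fetchall()
```

Probing once per connection keeps the rest of the query code schema-agnostic without duplicating every statement.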

### 7.3 Degradation tests

```python
def test_semantic_search_without_vectors():
    results = semantic_search(project_root=".", query="auth", mode="fusion")
    assert results[0].vector_score is None
    assert results[0].fusion_score > 0
```

---

## 8. Usage Examples

```python
from codexlens.api import (
    file_context,
    find_definition,
    find_references,
    semantic_search
)

# 1. Get file context
result = file_context(
    project_root="/path/to/project",
    file_path="src/auth/login.py",
    format="brief"
)
print(result.summary)

# 2. Find a definition
definitions = find_definition(
    project_root="/path/to/project",
    symbol_name="UserService",
    symbol_kind="class"
)

# 3. Semantic search
results = semantic_search(
    project_root="/path/to/project",
    query="functions that handle user login validation",
    mode="fusion"
)
```

---

## 9. CCW Integration

| codexlens API | CCW MCP tool |
|---------------|--------------|
| `file_context()` | `codexlens_file_context` |
| `find_definition()` | `codexlens_find_definition` |
| `find_references()` | `codexlens_find_references` |
| `workspace_symbols()` | `codexlens_workspace_symbol` |
| `get_hover()` | `codexlens_get_hover` |
| `semantic_search()` | `codexlens_semantic_search` |

---

## 10. Analysis Sources

| Tool | Session ID | Contribution |
|------|------------|--------------|
| Gemini | `1768618654438-gemini` | Architecture review, design flaws, fusion strategies |
| Codex | `1768618658183-codex` | Component reuse, complexity estimates, task breakdown |
| Gemini | `1768620615744-gemini` | Final review, improvement suggestions, APPROVED |

---

## 11. Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-01-17 | Initial version; merged multiple documents |
| 1.1 | 2025-01-17 | Applied Gemini review improvements: V1 limitation notes, strategy mapping, disambiguation enhancement, reference implementation |
@@ -97,11 +97,25 @@ encoding = [
    "chardet>=5.0",
]

# Clustering for staged hybrid search (HDBSCAN + sklearn)
clustering = [
    "hdbscan>=0.8.1",
    "scikit-learn>=1.3.0",
]

# Full features including tiktoken for accurate token counting
full = [
    "tiktoken>=0.5.0",
]

# Language Server Protocol support
lsp = [
    "pygls>=1.3.0",
]

[project.scripts]
codexlens-lsp = "codexlens.lsp:main"

[project.urls]
Homepage = "https://github.com/openai/codex-lens"
@@ -52,5 +52,10 @@ Requires-Dist: transformers>=4.36; extra == "splade-gpu"
Requires-Dist: optimum[onnxruntime-gpu]>=1.16; extra == "splade-gpu"
Provides-Extra: encoding
Requires-Dist: chardet>=5.0; extra == "encoding"
Provides-Extra: clustering
Requires-Dist: hdbscan>=0.8.1; extra == "clustering"
Requires-Dist: scikit-learn>=1.3.0; extra == "clustering"
Provides-Extra: full
Requires-Dist: tiktoken>=0.5.0; extra == "full"
Provides-Extra: lsp
Requires-Dist: pygls>=1.3.0; extra == "lsp"
@@ -2,6 +2,7 @@ pyproject.toml
src/codex_lens.egg-info/PKG-INFO
src/codex_lens.egg-info/SOURCES.txt
src/codex_lens.egg-info/dependency_links.txt
src/codex_lens.egg-info/entry_points.txt
src/codex_lens.egg-info/requires.txt
src/codex_lens.egg-info/top_level.txt
src/codexlens/__init__.py
@@ -18,6 +19,14 @@ src/codexlens/cli/output.py
src/codexlens/indexing/__init__.py
src/codexlens/indexing/embedding.py
src/codexlens/indexing/symbol_extractor.py
src/codexlens/lsp/__init__.py
src/codexlens/lsp/handlers.py
src/codexlens/lsp/providers.py
src/codexlens/lsp/server.py
src/codexlens/mcp/__init__.py
src/codexlens/mcp/hooks.py
src/codexlens/mcp/provider.py
src/codexlens/mcp/schema.py
src/codexlens/parsers/__init__.py
src/codexlens/parsers/encoding.py
src/codexlens/parsers/factory.py
@@ -31,6 +40,13 @@ src/codexlens/search/graph_expander.py
src/codexlens/search/hybrid_search.py
src/codexlens/search/query_parser.py
src/codexlens/search/ranking.py
src/codexlens/search/clustering/__init__.py
src/codexlens/search/clustering/base.py
src/codexlens/search/clustering/dbscan_strategy.py
src/codexlens/search/clustering/factory.py
src/codexlens/search/clustering/frequency_strategy.py
src/codexlens/search/clustering/hdbscan_strategy.py
src/codexlens/search/clustering/noop_strategy.py
src/codexlens/semantic/__init__.py
src/codexlens/semantic/ann_index.py
src/codexlens/semantic/base.py
@@ -84,6 +100,7 @@ tests/test_api_reranker.py
tests/test_chain_search.py
tests/test_cli_hybrid_search.py
tests/test_cli_output.py
tests/test_clustering_strategies.py
tests/test_code_extractor.py
tests/test_config.py
tests/test_dual_fts.py
@@ -122,6 +139,7 @@ tests/test_search_performance.py
tests/test_semantic.py
tests/test_semantic_search.py
tests/test_sqlite_store.py
tests/test_staged_cascade.py
tests/test_storage.py
tests/test_storage_concurrency.py
tests/test_symbol_extractor.py
2 codex-lens/src/codex_lens.egg-info/entry_points.txt Normal file
@@ -0,0 +1,2 @@
[console_scripts]
codexlens-lsp = codexlens.lsp:main
@@ -8,12 +8,19 @@ tree-sitter-typescript>=0.23
pathspec>=0.11
watchdog>=3.0

[clustering]
hdbscan>=0.8.1
scikit-learn>=1.3.0

[encoding]
chardet>=5.0

[full]
tiktoken>=0.5.0

[lsp]
pygls>=1.3.0

[reranker]
optimum>=1.16
onnxruntime>=1.15
88 codex-lens/src/codexlens/api/__init__.py Normal file
@@ -0,0 +1,88 @@
```python
"""Codexlens Public API Layer.

This module exports all public API functions and dataclasses for the
codexlens LSP-like functionality.

Dataclasses (from models.py):
- CallInfo: Call relationship information
- MethodContext: Method context with call relationships
- FileContextResult: File context result with method summaries
- DefinitionResult: Definition lookup result
- ReferenceResult: Reference lookup result
- GroupedReferences: References grouped by definition
- SymbolInfo: Symbol information for workspace search
- HoverInfo: Hover information for a symbol
- SemanticResult: Semantic search result

Utility functions (from utils.py):
- resolve_project: Resolve and validate project root path
- normalize_relationship_type: Normalize relationship type to canonical form
- rank_by_proximity: Rank results by file path proximity

Example:
    >>> from codexlens.api import (
    ...     DefinitionResult,
    ...     resolve_project,
    ...     normalize_relationship_type
    ... )
    >>> project = resolve_project("/path/to/project")
    >>> rel_type = normalize_relationship_type("calls")
    >>> print(rel_type)
    'call'
"""

from __future__ import annotations

# Dataclasses
from .models import (
    CallInfo,
    MethodContext,
    FileContextResult,
    DefinitionResult,
    ReferenceResult,
    GroupedReferences,
    SymbolInfo,
    HoverInfo,
    SemanticResult,
)

# Utility functions
from .utils import (
    resolve_project,
    normalize_relationship_type,
    rank_by_proximity,
    rank_by_score,
)

# API functions
from .definition import find_definition
from .symbols import workspace_symbols
from .hover import get_hover
from .file_context import file_context
from .references import find_references
from .semantic import semantic_search

__all__ = [
    # Dataclasses
    "CallInfo",
    "MethodContext",
    "FileContextResult",
    "DefinitionResult",
    "ReferenceResult",
    "GroupedReferences",
    "SymbolInfo",
    "HoverInfo",
    "SemanticResult",
    # Utility functions
    "resolve_project",
    "normalize_relationship_type",
    "rank_by_proximity",
    "rank_by_score",
    # API functions
    "find_definition",
    "workspace_symbols",
    "get_hover",
    "file_context",
    "find_references",
    "semantic_search",
]
```
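The `__init__.py` above gates its public surface through an explicit `__all__`. A minimal, self-contained sketch of that mechanism — the `mini_api` module below is made up for illustration and is not part of codexlens:

```python
import sys
import types

# Build a throwaway in-memory module mimicking the codexlens.api pattern:
# public names are listed in __all__, helper names stay unexported.
mod = types.ModuleType("mini_api")
exec(
    "def find_definition():\n"
    "    return 'definition'\n"
    "def _private_helper():\n"
    "    return 'hidden'\n"
    "__all__ = ['find_definition']\n",
    mod.__dict__,
)
sys.modules["mini_api"] = mod

ns = {}
exec("from mini_api import *", ns)  # star-import honors __all__
public = sorted(k for k in ns if not k.startswith("__"))
print(public)  # ['find_definition'] — _private_helper is not exported
```

Without `__all__`, a star-import would still skip underscore-prefixed names, but the explicit list also keeps re-exported submodule names out of consumers' namespaces.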
126 codex-lens/src/codexlens/api/definition.py Normal file
@@ -0,0 +1,126 @@
```python
"""find_definition API implementation.

This module provides the find_definition() function for looking up
symbol definitions with a 3-stage fallback strategy.
"""

from __future__ import annotations

import logging
from pathlib import Path
from typing import List, Optional

from ..entities import Symbol
from ..storage.global_index import GlobalSymbolIndex
from ..storage.registry import RegistryStore
from ..errors import IndexNotFoundError
from .models import DefinitionResult
from .utils import resolve_project, rank_by_proximity

logger = logging.getLogger(__name__)


def find_definition(
    project_root: str,
    symbol_name: str,
    symbol_kind: Optional[str] = None,
    file_context: Optional[str] = None,
    limit: int = 10
) -> List[DefinitionResult]:
    """Find definition locations for a symbol.

    Uses a 3-stage fallback strategy:
    1. Exact match with kind filter
    2. Exact match without kind filter
    3. Prefix match

    Args:
        project_root: Project root directory (for index location)
        symbol_name: Name of the symbol to find
        symbol_kind: Optional symbol kind filter (class, function, etc.)
        file_context: Optional file path for proximity ranking
        limit: Maximum number of results to return

    Returns:
        List of DefinitionResult sorted by proximity if file_context provided

    Raises:
        IndexNotFoundError: If project is not indexed
    """
    project_path = resolve_project(project_root)

    # Get project info from registry
    registry = RegistryStore()
    project_info = registry.get_project_by_source(str(project_path))
    if project_info is None:
        raise IndexNotFoundError(f"Project not indexed: {project_path}")

    # Open global symbol index
    index_db = project_info.index_root / "_global_symbols.db"
    if not index_db.exists():
        raise IndexNotFoundError(f"Global symbol index not found: {index_db}")

    global_index = GlobalSymbolIndex(str(index_db), project_info.id)

    # Stage 1: Exact match with kind filter
    results = _search_with_kind(global_index, symbol_name, symbol_kind, limit)
    if results:
        logger.debug(f"Stage 1 (exact+kind): Found {len(results)} results for {symbol_name}")
        return _rank_and_convert(results, file_context)

    # Stage 2: Exact match without kind (if kind was specified)
    if symbol_kind:
        results = _search_with_kind(global_index, symbol_name, None, limit)
        if results:
            logger.debug(f"Stage 2 (exact): Found {len(results)} results for {symbol_name}")
            return _rank_and_convert(results, file_context)

    # Stage 3: Prefix match
    results = global_index.search(
        name=symbol_name,
        kind=None,
        limit=limit,
        prefix_mode=True
    )
    if results:
        logger.debug(f"Stage 3 (prefix): Found {len(results)} results for {symbol_name}")
        return _rank_and_convert(results, file_context)

    logger.debug(f"No definitions found for {symbol_name}")
    return []


def _search_with_kind(
    global_index: GlobalSymbolIndex,
    symbol_name: str,
    symbol_kind: Optional[str],
    limit: int
) -> List[Symbol]:
    """Search for symbols with optional kind filter."""
    return global_index.search(
        name=symbol_name,
        kind=symbol_kind,
        limit=limit,
        prefix_mode=False
    )


def _rank_and_convert(
    symbols: List[Symbol],
    file_context: Optional[str]
) -> List[DefinitionResult]:
    """Convert symbols to DefinitionResult and rank by proximity."""
    results = [
        DefinitionResult(
            name=sym.name,
            kind=sym.kind,
            file_path=sym.file or "",
            line=sym.range[0] if sym.range else 1,
            end_line=sym.range[1] if sym.range else 1,
            signature=None,  # Could extract from file if needed
            container=None,  # Could extract from parent symbol
            score=1.0
        )
        for sym in symbols
    ]
    return rank_by_proximity(results, file_context)
```
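The 3-stage fallback in `find_definition` (exact name + kind, then exact name only, then prefix) can be illustrated against a plain in-memory table; the symbol names and locations below are made up for the sketch and stand in for `GlobalSymbolIndex`:

```python
from typing import List, Optional, Tuple

# Hypothetical symbol table: (name, kind, location) triples.
SYMBOLS: List[Tuple[str, str, str]] = [
    ("find_definition", "function", "api/definition.py:28"),
    ("find_references", "function", "api/references.py:20"),
    ("FileContextResult", "class", "api/models.py:120"),
]

def staged_lookup(name: str, kind: Optional[str] = None) -> List[str]:
    """Exact match with kind, then exact without kind, then prefix."""
    # Stage 1: exact name AND exact kind
    hits = [loc for n, k, loc in SYMBOLS if n == name and k == kind]
    if hits:
        return hits
    # Stage 2: exact name, kind filter dropped
    hits = [loc for n, _, loc in SYMBOLS if n == name]
    if hits:
        return hits
    # Stage 3: prefix match on the name
    return [loc for n, _, loc in SYMBOLS if n.startswith(name)]

print(staged_lookup("find_definition", "class"))  # wrong kind -> stage 2 rescues it
print(staged_lookup("find_"))                     # stage 3: both find_* symbols
```

The design choice here is graceful degradation: a caller who guesses the kind wrong still gets the exact-name hit, and a partial name still yields candidates, at the cost of the later stages returning broader (less precise) result sets.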
271 codex-lens/src/codexlens/api/file_context.py Normal file
@@ -0,0 +1,271 @@
```python
"""file_context API implementation.

This module provides the file_context() function for retrieving
method call graphs from a source file.
"""

from __future__ import annotations

import logging
import os
from pathlib import Path
from typing import List, Optional, Tuple

from ..entities import Symbol
from ..storage.global_index import GlobalSymbolIndex
from ..storage.dir_index import DirIndexStore
from ..storage.registry import RegistryStore
from ..errors import IndexNotFoundError
from .models import (
    FileContextResult,
    MethodContext,
    CallInfo,
)
from .utils import resolve_project, normalize_relationship_type

logger = logging.getLogger(__name__)


def file_context(
    project_root: str,
    file_path: str,
    include_calls: bool = True,
    include_callers: bool = True,
    max_depth: int = 1,
    format: str = "brief"
) -> FileContextResult:
    """Get method call context for a code file.

    Retrieves all methods/functions in the file along with their
    outgoing calls and incoming callers.

    Args:
        project_root: Project root directory (for index location)
        file_path: Path to the code file to analyze
        include_calls: Whether to include outgoing calls
        include_callers: Whether to include incoming callers
        max_depth: Call chain depth (V1 only supports 1)
        format: Output format (brief | detailed | tree)

    Returns:
        FileContextResult with method contexts and summary

    Raises:
        IndexNotFoundError: If project is not indexed
        FileNotFoundError: If file does not exist
        ValueError: If max_depth > 1 (V1 limitation)
    """
    # V1 limitation: only depth=1 supported
    if max_depth > 1:
        raise ValueError(
            f"max_depth > 1 not supported in V1. "
            f"Requested: {max_depth}, supported: 1"
        )

    project_path = resolve_project(project_root)
    file_path_resolved = Path(file_path).resolve()

    # Validate file exists
    if not file_path_resolved.exists():
        raise FileNotFoundError(f"File not found: {file_path_resolved}")

    # Get project info from registry
    registry = RegistryStore()
    project_info = registry.get_project_by_source(str(project_path))
    if project_info is None:
        raise IndexNotFoundError(f"Project not indexed: {project_path}")

    # Open global symbol index
    index_db = project_info.index_root / "_global_symbols.db"
    if not index_db.exists():
        raise IndexNotFoundError(f"Global symbol index not found: {index_db}")

    global_index = GlobalSymbolIndex(str(index_db), project_info.id)

    # Get all symbols in the file
    symbols = global_index.get_file_symbols(str(file_path_resolved))

    # Filter to functions, methods, and classes
    method_symbols = [
        s for s in symbols
        if s.kind in ("function", "method", "class")
    ]

    logger.debug(f"Found {len(method_symbols)} methods in {file_path}")

    # Try to find dir_index for relationship queries
    dir_index = _find_dir_index(project_info, file_path_resolved)

    # Build method contexts
    methods: List[MethodContext] = []
    outgoing_resolved = True
    incoming_resolved = True
    targets_resolved = True

    for symbol in method_symbols:
        calls: List[CallInfo] = []
        callers: List[CallInfo] = []

        if include_calls and dir_index:
            try:
                outgoing = dir_index.get_outgoing_calls(
                    str(file_path_resolved),
                    symbol.name
                )
                for target_name, rel_type, line, target_file in outgoing:
                    calls.append(CallInfo(
                        symbol_name=target_name,
                        file_path=target_file,
                        line=line,
                        relationship=normalize_relationship_type(rel_type)
                    ))
                    if target_file is None:
                        targets_resolved = False
            except Exception as e:
                logger.debug(f"Failed to get outgoing calls: {e}")
                outgoing_resolved = False

        if include_callers and dir_index:
            try:
                incoming = dir_index.get_incoming_calls(symbol.name)
                for source_name, rel_type, line, source_file in incoming:
                    callers.append(CallInfo(
                        symbol_name=source_name,
                        file_path=source_file,
                        line=line,
                        relationship=normalize_relationship_type(rel_type)
                    ))
            except Exception as e:
                logger.debug(f"Failed to get incoming calls: {e}")
                incoming_resolved = False

        methods.append(MethodContext(
            name=symbol.name,
            kind=symbol.kind,
            line_range=symbol.range if symbol.range else (1, 1),
            signature=None,  # Could extract from source
            calls=calls,
            callers=callers
        ))

    # Detect language from file extension
    language = _detect_language(file_path_resolved)

    # Generate summary
    summary = _generate_summary(file_path_resolved, methods, format)

    return FileContextResult(
        file_path=str(file_path_resolved),
        language=language,
        methods=methods,
        summary=summary,
        discovery_status={
            "outgoing_resolved": outgoing_resolved,
            "incoming_resolved": incoming_resolved,
            "targets_resolved": targets_resolved
        }
    )


def _find_dir_index(project_info, file_path: Path) -> Optional[DirIndexStore]:
    """Find the dir_index that contains the file.

    Args:
        project_info: Project information from registry
        file_path: Path to the file

    Returns:
        DirIndexStore if found, None otherwise
    """
    try:
        # Look for _index.db in file's directory or parent directories
        current = file_path.parent
        while current != current.parent:
            index_db = current / "_index.db"
            if index_db.exists():
                return DirIndexStore(str(index_db))

            # Also check in project's index_root
            relative = current.relative_to(project_info.source_root)
            index_in_cache = project_info.index_root / relative / "_index.db"
            if index_in_cache.exists():
                return DirIndexStore(str(index_in_cache))

            current = current.parent
    except Exception as e:
        logger.debug(f"Failed to find dir_index: {e}")

    return None


def _detect_language(file_path: Path) -> str:
    """Detect programming language from file extension.

    Args:
        file_path: Path to the file

    Returns:
        Language name
    """
    ext_map = {
        ".py": "python",
        ".js": "javascript",
        ".ts": "typescript",
        ".jsx": "javascript",
        ".tsx": "typescript",
        ".go": "go",
        ".rs": "rust",
        ".java": "java",
        ".c": "c",
        ".cpp": "cpp",
        ".h": "c",
        ".hpp": "cpp",
    }
    return ext_map.get(file_path.suffix.lower(), "unknown")


def _generate_summary(
    file_path: Path,
    methods: List[MethodContext],
    format: str
) -> str:
    """Generate human-readable summary of file context.

    Args:
        file_path: Path to the file
        methods: List of method contexts
        format: Output format (brief | detailed | tree)

    Returns:
        Markdown-formatted summary
    """
    lines = [f"## {file_path.name} ({len(methods)} methods)\n"]

    for method in methods:
        start, end = method.line_range
        lines.append(f"### {method.name} (line {start}-{end})")

        if method.calls:
            calls_str = ", ".join(
                f"{c.symbol_name} ({c.file_path or 'unresolved'}:{c.line})"
                if format == "detailed"
                else c.symbol_name
                for c in method.calls
            )
            lines.append(f"- Calls: {calls_str}")

        if method.callers:
            callers_str = ", ".join(
                f"{c.symbol_name} ({c.file_path}:{c.line})"
                if format == "detailed"
                else c.symbol_name
                for c in method.callers
            )
            lines.append(f"- Called by: {callers_str}")

        if not method.calls and not method.callers:
            lines.append("- (no call relationships)")

        lines.append("")

    return "\n".join(lines)
```
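`_find_dir_index` locates the per-directory index by walking upward from the file's directory until a marker database appears. That walk can be sketched in isolation; the marker name matches the code above, while the temporary directory tree is made up for the demo:

```python
import tempfile
from pathlib import Path
from typing import Optional

def find_marker_upward(start: Path, marker: str = "_index.db") -> Optional[Path]:
    """Walk parent directories from `start` until `marker` is found or the
    filesystem root is reached (where current == current.parent)."""
    current = start
    while current != current.parent:
        candidate = current / marker
        if candidate.exists():
            return candidate
        current = current.parent
    return None

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "_index.db").touch()      # marker two levels above the start dir
    nested = root / "a" / "b"
    nested.mkdir(parents=True)
    found = find_marker_upward(nested)
    print(found == root / "_index.db")  # True
```

The `current != current.parent` test is the idiomatic pathlib stop condition: at the root, `Path("/").parent` is `Path("/")` itself, so the loop terminates without special-casing the root.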
148 codex-lens/src/codexlens/api/hover.py Normal file
@@ -0,0 +1,148 @@
```python
"""get_hover API implementation.

This module provides the get_hover() function for retrieving
detailed hover information for symbols.
"""

from __future__ import annotations

import logging
from pathlib import Path
from typing import Optional

from ..entities import Symbol
from ..storage.global_index import GlobalSymbolIndex
from ..storage.registry import RegistryStore
from ..errors import IndexNotFoundError
from .models import HoverInfo
from .utils import resolve_project

logger = logging.getLogger(__name__)


def get_hover(
    project_root: str,
    symbol_name: str,
    file_path: Optional[str] = None
) -> Optional[HoverInfo]:
    """Get detailed hover information for a symbol.

    Args:
        project_root: Project root directory (for index location)
        symbol_name: Name of the symbol to look up
        file_path: Optional file path to disambiguate when symbol
            appears in multiple files

    Returns:
        HoverInfo if symbol found, None otherwise

    Raises:
        IndexNotFoundError: If project is not indexed
    """
    project_path = resolve_project(project_root)

    # Get project info from registry
    registry = RegistryStore()
    project_info = registry.get_project_by_source(str(project_path))
    if project_info is None:
        raise IndexNotFoundError(f"Project not indexed: {project_path}")

    # Open global symbol index
    index_db = project_info.index_root / "_global_symbols.db"
    if not index_db.exists():
        raise IndexNotFoundError(f"Global symbol index not found: {index_db}")

    global_index = GlobalSymbolIndex(str(index_db), project_info.id)

    # Search for the symbol
    results = global_index.search(
        name=symbol_name,
        kind=None,
        limit=50,
        prefix_mode=False
    )

    if not results:
        logger.debug(f"No hover info found for {symbol_name}")
        return None

    # If file_path provided, filter to that file
    if file_path:
        file_path_resolved = str(Path(file_path).resolve())
        matching = [s for s in results if s.file == file_path_resolved]
        if matching:
            results = matching

    # Take the first result
    symbol = results[0]

    # Build hover info
    return HoverInfo(
        name=symbol.name,
        kind=symbol.kind,
        signature=_extract_signature(symbol),
        documentation=_extract_documentation(symbol),
        file_path=symbol.file or "",
        line_range=symbol.range if symbol.range else (1, 1),
        type_info=_extract_type_info(symbol)
    )


def _extract_signature(symbol: Symbol) -> str:
    """Extract signature from symbol.

    For now, generates a basic signature based on kind and name.
    In a full implementation, this would parse the actual source code.

    Args:
        symbol: The symbol to extract signature from

    Returns:
        Signature string
    """
    if symbol.kind == "function":
        return f"def {symbol.name}(...)"
    elif symbol.kind == "method":
        return f"def {symbol.name}(self, ...)"
    elif symbol.kind == "class":
        return f"class {symbol.name}"
    elif symbol.kind == "variable":
        return symbol.name
    elif symbol.kind == "constant":
        return f"{symbol.name} = ..."
    else:
        return f"{symbol.kind} {symbol.name}"


def _extract_documentation(symbol: Symbol) -> Optional[str]:
    """Extract documentation from symbol.

    In a full implementation, this would parse docstrings from source.
    For now, returns None.

    Args:
        symbol: The symbol to extract documentation from

    Returns:
        Documentation string if available, None otherwise
    """
    # Would need to read source file and parse docstring
    # For V1, return None
    return None


def _extract_type_info(symbol: Symbol) -> Optional[str]:
    """Extract type information from symbol.

    In a full implementation, this would parse type annotations.
    For now, returns None.

    Args:
        symbol: The symbol to extract type info from

    Returns:
        Type info string if available, None otherwise
    """
    # Would need to parse type annotations from source
    # For V1, return None
    return None
```
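`_extract_signature` fabricates a display signature purely from the symbol's kind and name, since V1 never parses source. The if/elif chain reduces to a small dispatch table, sketched standalone here (the helper name and example symbols are hypothetical):

```python
def synth_signature(name: str, kind: str) -> str:
    """Kind-based display signature, mirroring hover.py's fallback behavior."""
    templates = {
        "function": f"def {name}(...)",
        "method": f"def {name}(self, ...)",
        "class": f"class {name}",
        "variable": name,
        "constant": f"{name} = ...",
    }
    # Unknown kinds fall through to a generic "<kind> <name>" rendering.
    return templates.get(kind, f"{kind} {name}")

print(synth_signature("get_hover", "function"))  # def get_hover(...)
print(synth_signature("HoverInfo", "class"))     # class HoverInfo
print(synth_signature("Widget", "interface"))    # interface Widget
```

A dict dispatch and the original elif chain are behaviorally equivalent; the dict form just makes the kind-to-template mapping easier to extend when a real signature parser lands.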
281
codex-lens/src/codexlens/api/models.py
Normal file
281
codex-lens/src/codexlens/api/models.py
Normal file
@@ -0,0 +1,281 @@
|
|||||||
|
"""API dataclass definitions for codexlens LSP API.
|
||||||
|
|
||||||
|
This module defines all result dataclasses used by the public API layer,
|
||||||
|
following the patterns established in mcp/schema.py.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
from dataclasses import dataclass, field, asdict
|
||||||
|
from typing import List, Optional, Dict, Tuple
|
||||||
|
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# Section 4.2: file_context dataclasses
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class CallInfo:
|
||||||
|
"""Call relationship information.
|
||||||
|
|
||||||
|
Attributes:
|
||||||
|
symbol_name: Name of the called/calling symbol
|
||||||
|
file_path: Target file path (may be None if unresolved)
|
||||||
|
line: Line number of the call
|
||||||
|
relationship: Type of relationship (call | import | inheritance)
|
||||||
|
"""
|
||||||
|
symbol_name: str
|
||||||
|
file_path: Optional[str]
|
||||||
|
line: int
|
||||||
|
relationship: str # call | import | inheritance
|
||||||
|
|
||||||
|
def to_dict(self) -> dict:
|
||||||
|
"""Convert to dictionary, filtering None values."""
|
||||||
|
return {k: v for k, v in asdict(self).items() if v is not None}
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class MethodContext:
|
||||||
|
"""Method context with call relationships.
|
||||||
|
|
||||||
|
Attributes:
|
||||||
|
name: Method/function name
|
||||||
|
kind: Symbol kind (function | method | class)
|
||||||
|
line_range: Start and end line numbers
|
||||||
|
signature: Function signature (if available)
|
||||||
|
calls: List of outgoing calls
|
||||||
|
callers: List of incoming calls
|
||||||
|
"""
|
||||||
|
name: str
|
||||||
|
kind: str # function | method | class
|
||||||
|
line_range: Tuple[int, int]
|
||||||
|
signature: Optional[str]
|
||||||
|
calls: List[CallInfo] = field(default_factory=list)
|
||||||
|
callers: List[CallInfo] = field(default_factory=list)
|
||||||
|
|
||||||
|
def to_dict(self) -> dict:
|
||||||
|
"""Convert to dictionary, filtering None values."""
|
||||||
|
result = {
|
||||||
|
"name": self.name,
|
||||||
|
"kind": self.kind,
|
||||||
|
"line_range": list(self.line_range),
|
||||||
|
"calls": [c.to_dict() for c in self.calls],
|
||||||
|
"callers": [c.to_dict() for c in self.callers],
|
||||||
|
}
|
||||||
|
if self.signature is not None:
|
||||||
|
result["signature"] = self.signature
|
||||||
|
return result
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class FileContextResult:
|
||||||
|
"""File context result with method summaries.
|
||||||
|
|
||||||
|
Attributes:
|
||||||
|
file_path: Path to the analyzed file
|
||||||
|
        language: Programming language
        methods: List of method contexts
        summary: Human-readable summary
        discovery_status: Status flags for call resolution
    """
    file_path: str
    language: str
    methods: List[MethodContext]
    summary: str
    discovery_status: Dict[str, bool] = field(default_factory=lambda: {
        "outgoing_resolved": False,
        "incoming_resolved": True,
        "targets_resolved": False
    })

    def to_dict(self) -> dict:
        """Convert to dictionary for JSON serialization."""
        return {
            "file_path": self.file_path,
            "language": self.language,
            "methods": [m.to_dict() for m in self.methods],
            "summary": self.summary,
            "discovery_status": self.discovery_status,
        }


# =============================================================================
# Section 4.3: find_definition dataclasses
# =============================================================================

@dataclass
class DefinitionResult:
    """Definition lookup result.

    Attributes:
        name: Symbol name
        kind: Symbol kind (class, function, method, etc.)
        file_path: File where symbol is defined
        line: Start line number
        end_line: End line number
        signature: Symbol signature (if available)
        container: Containing class/module (if any)
        score: Match score for ranking
    """
    name: str
    kind: str
    file_path: str
    line: int
    end_line: int
    signature: Optional[str] = None
    container: Optional[str] = None
    score: float = 1.0

    def to_dict(self) -> dict:
        """Convert to dictionary, filtering None values."""
        return {k: v for k, v in asdict(self).items() if v is not None}

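The None-filtering `to_dict` pattern above keeps JSON payloads compact when optional fields are unset. A minimal, self-contained sketch (re-declaring the dataclass locally for illustration, rather than importing it from codexlens):

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Local re-declaration of the DefinitionResult fields above, for illustration only.
@dataclass
class DefinitionResult:
    name: str
    kind: str
    file_path: str
    line: int
    end_line: int
    signature: Optional[str] = None
    container: Optional[str] = None
    score: float = 1.0

    def to_dict(self) -> dict:
        # Optional fields left as None are dropped from the serialized dict.
        return {k: v for k, v in asdict(self).items() if v is not None}

d = DefinitionResult(name="authenticate", kind="function",
                     file_path="src/auth.py", line=10, end_line=42)
print(d.to_dict())  # no "signature" or "container" keys, since both are None
```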
# =============================================================================
# Section 4.4: find_references dataclasses
# =============================================================================

@dataclass
class ReferenceResult:
    """Reference lookup result.

    Attributes:
        file_path: File containing the reference
        line: Line number
        column: Column number
        context_line: The line of code containing the reference
        relationship: Type of reference (call | import | type_annotation | inheritance)
    """
    file_path: str
    line: int
    column: int
    context_line: str
    relationship: str  # call | import | type_annotation | inheritance

    def to_dict(self) -> dict:
        """Convert to dictionary."""
        return asdict(self)


@dataclass
class GroupedReferences:
    """References grouped by definition.

    Used when a symbol has multiple definitions (e.g., overloads).

    Attributes:
        definition: The definition this group refers to
        references: List of references to this definition
    """
    definition: DefinitionResult
    references: List[ReferenceResult] = field(default_factory=list)

    def to_dict(self) -> dict:
        """Convert to dictionary."""
        return {
            "definition": self.definition.to_dict(),
            "references": [r.to_dict() for r in self.references],
        }

# =============================================================================
# Section 4.5: workspace_symbols dataclasses
# =============================================================================

@dataclass
class SymbolInfo:
    """Symbol information for workspace search.

    Attributes:
        name: Symbol name
        kind: Symbol kind
        file_path: File where symbol is defined
        line: Line number
        container: Containing class/module (if any)
        score: Match score for ranking
    """
    name: str
    kind: str
    file_path: str
    line: int
    container: Optional[str] = None
    score: float = 1.0

    def to_dict(self) -> dict:
        """Convert to dictionary, filtering None values."""
        return {k: v for k, v in asdict(self).items() if v is not None}


# =============================================================================
# Section 4.6: get_hover dataclasses
# =============================================================================

@dataclass
class HoverInfo:
    """Hover information for a symbol.

    Attributes:
        name: Symbol name
        kind: Symbol kind
        signature: Symbol signature
        documentation: Documentation string (if available)
        file_path: File where symbol is defined
        line_range: Start and end line numbers
        type_info: Type information (if available)
    """
    name: str
    kind: str
    signature: str
    documentation: Optional[str]
    file_path: str
    line_range: Tuple[int, int]
    type_info: Optional[str] = None

    def to_dict(self) -> dict:
        """Convert to dictionary, filtering None values."""
        result = {
            "name": self.name,
            "kind": self.kind,
            "signature": self.signature,
            "file_path": self.file_path,
            "line_range": list(self.line_range),
        }
        if self.documentation is not None:
            result["documentation"] = self.documentation
        if self.type_info is not None:
            result["type_info"] = self.type_info
        return result


# =============================================================================
# Section 4.7: semantic_search dataclasses
# =============================================================================

@dataclass
class SemanticResult:
    """Semantic search result.

    Attributes:
        symbol_name: Name of the matched symbol
        kind: Symbol kind
        file_path: File where symbol is defined
        line: Line number
        vector_score: Vector similarity score (None if not available)
        structural_score: Structural match score (None if not available)
        fusion_score: Combined fusion score
        snippet: Code snippet
        match_reason: Explanation of why this matched (optional)
    """
    symbol_name: str
    kind: str
    file_path: str
    line: int
    vector_score: Optional[float]
    structural_score: Optional[float]
    fusion_score: float
    snippet: str
    match_reason: Optional[str] = None

    def to_dict(self) -> dict:
        """Convert to dictionary, filtering None values."""
        return {k: v for k, v in asdict(self).items() if v is not None}
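Semantic search hits carry per-channel scores plus a combined `fusion_score`; callers would typically rank by the fusion score, highest first. A minimal sketch with a local stand-in for the dataclass (only the fields used here; the values are made up):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for the SemanticResult dataclass above, trimmed for illustration.
@dataclass
class SemanticResult:
    symbol_name: str
    fusion_score: float
    vector_score: Optional[float] = None
    structural_score: Optional[float] = None

results = [
    SemanticResult("parse_config", fusion_score=0.41, vector_score=0.40),
    SemanticResult("load_settings", fusion_score=0.87, structural_score=0.90),
]

# Rank by the combined fusion score, best match first.
ranked = sorted(results, key=lambda r: r.fusion_score, reverse=True)
print([r.symbol_name for r in ranked])  # → ['load_settings', 'parse_config']
```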
345
codex-lens/src/codexlens/api/references.py
Normal file
@@ -0,0 +1,345 @@
"""Find references API for codexlens.

This module implements the find_references() function that wraps
ChainSearchEngine.search_references() with grouped result structure
for multi-definition symbols.
"""

from __future__ import annotations

import logging
from pathlib import Path
from typing import List, Optional, Dict

from .models import (
    DefinitionResult,
    ReferenceResult,
    GroupedReferences,
)
from .utils import (
    resolve_project,
    normalize_relationship_type,
)


logger = logging.getLogger(__name__)


def _read_line_from_file(file_path: str, line: int) -> str:
    """Read a specific line from a file.

    Args:
        file_path: Path to the file
        line: Line number (1-based)

    Returns:
        The line content, stripped of trailing whitespace.
        Returns empty string if file cannot be read or line doesn't exist.
    """
    try:
        path = Path(file_path)
        if not path.exists():
            return ""

        with path.open("r", encoding="utf-8", errors="replace") as f:
            for i, content in enumerate(f, 1):
                if i == line:
                    return content.rstrip()
        return ""
    except Exception as exc:
        logger.debug("Failed to read line %d from %s: %s", line, file_path, exc)
        return ""


def _transform_to_reference_result(
    raw_ref: "RawReferenceResult",
) -> ReferenceResult:
    """Transform raw ChainSearchEngine reference to API ReferenceResult.

    Args:
        raw_ref: Raw reference result from ChainSearchEngine

    Returns:
        API ReferenceResult with context_line and normalized relationship
    """
    # Read the actual line from the file
    context_line = _read_line_from_file(raw_ref.file_path, raw_ref.line)

    # Normalize relationship type
    relationship = normalize_relationship_type(raw_ref.relationship_type)

    return ReferenceResult(
        file_path=raw_ref.file_path,
        line=raw_ref.line,
        column=raw_ref.column,
        context_line=context_line,
        relationship=relationship,
    )


def find_references(
    project_root: str,
    symbol_name: str,
    symbol_kind: Optional[str] = None,
    include_definition: bool = True,
    group_by_definition: bool = True,
    limit: int = 100,
) -> List[GroupedReferences]:
    """Find all reference locations for a symbol.

    Multi-definition case returns grouped results to resolve ambiguity.

    This function wraps ChainSearchEngine.search_references() and groups
    the results by definition location. Each GroupedReferences contains
    a definition and all references that point to it.

    Args:
        project_root: Project root directory path
        symbol_name: Name of the symbol to find references for
        symbol_kind: Optional symbol kind filter (e.g., 'function', 'class')
        include_definition: Whether to include the definition location
            in the result (default True)
        group_by_definition: Whether to group references by definition.
            If False, returns a single group with all references.
            (default True)
        limit: Maximum number of references to return (default 100)

    Returns:
        List of GroupedReferences. Each group contains:
        - definition: The DefinitionResult for this symbol definition
        - references: List of ReferenceResult pointing to this definition

    Raises:
        ValueError: If project_root does not exist or is not a directory

    Examples:
        >>> refs = find_references("/path/to/project", "authenticate")
        >>> for group in refs:
        ...     print(f"Definition: {group.definition.file_path}:{group.definition.line}")
        ...     for ref in group.references:
        ...         print(f"  Reference: {ref.file_path}:{ref.line} ({ref.relationship})")

    Note:
        Reference relationship types are normalized:
        - 'calls' -> 'call'
        - 'imports' -> 'import'
        - 'inherits' -> 'inheritance'
    """
    # Validate and resolve project root
    project_path = resolve_project(project_root)

    # Import here to avoid circular imports
    from codexlens.config import Config
    from codexlens.storage.registry import RegistryStore
    from codexlens.storage.path_mapper import PathMapper
    from codexlens.storage.global_index import GlobalSymbolIndex
    from codexlens.search.chain_search import ChainSearchEngine
    from codexlens.search.chain_search import ReferenceResult as RawReferenceResult
    from codexlens.entities import Symbol

    # Initialize infrastructure
    config = Config()
    registry = RegistryStore(config.registry_db_path)
    mapper = PathMapper(config.index_root)

    # Create chain search engine
    engine = ChainSearchEngine(registry, mapper, config=config)

    try:
        # Step 1: Find definitions for the symbol
        definitions: List[DefinitionResult] = []

        if include_definition or group_by_definition:
            # Search for symbol definitions
            symbols = engine.search_symbols(
                name=symbol_name,
                source_path=project_path,
                kind=symbol_kind,
            )

            # Convert Symbol to DefinitionResult
            for sym in symbols:
                # Only include exact name matches for definitions
                if sym.name != symbol_name:
                    continue

                # Optionally filter by kind
                if symbol_kind and sym.kind != symbol_kind:
                    continue

                definitions.append(DefinitionResult(
                    name=sym.name,
                    kind=sym.kind,
                    file_path=sym.file or "",
                    line=sym.range[0] if sym.range else 1,
                    end_line=sym.range[1] if sym.range else 1,
                    signature=None,  # Not available from Symbol
                    container=None,  # Not available from Symbol
                    score=1.0,
                ))

        # Step 2: Get all references using ChainSearchEngine
        raw_references = engine.search_references(
            symbol_name=symbol_name,
            source_path=project_path,
            depth=-1,
            limit=limit,
        )

        # Step 3: Transform raw references to API ReferenceResult
        api_references: List[ReferenceResult] = []
        for raw_ref in raw_references:
            api_ref = _transform_to_reference_result(raw_ref)
            api_references.append(api_ref)

        # Step 4: Group references by definition
        if group_by_definition and definitions:
            return _group_references_by_definition(
                definitions=definitions,
                references=api_references,
                include_definition=include_definition,
            )
        else:
            # Return single group with placeholder definition or first definition
            if definitions:
                definition = definitions[0]
            else:
                # Create placeholder definition when no definition found
                definition = DefinitionResult(
                    name=symbol_name,
                    kind=symbol_kind or "unknown",
                    file_path="",
                    line=0,
                    end_line=0,
                    signature=None,
                    container=None,
                    score=0.0,
                )

            return [GroupedReferences(
                definition=definition,
                references=api_references,
            )]

    finally:
        engine.close()


def _group_references_by_definition(
    definitions: List[DefinitionResult],
    references: List[ReferenceResult],
    include_definition: bool = True,
) -> List[GroupedReferences]:
    """Group references by their likely definition.

    Uses file proximity heuristic to assign references to definitions.
    References in the same file or directory as a definition are
    assigned to that definition.

    Args:
        definitions: List of definition locations
        references: List of reference locations
        include_definition: Whether to include definition in results

    Returns:
        List of GroupedReferences with references assigned to definitions
    """
    if not definitions:
        return []

    if len(definitions) == 1:
        # Single definition - all references belong to it
        return [GroupedReferences(
            definition=definitions[0],
            references=references,
        )]

    # Multiple definitions - group by proximity
    groups: Dict[int, List[ReferenceResult]] = {
        i: [] for i in range(len(definitions))
    }

    for ref in references:
        # Find the closest definition by file proximity
        best_def_idx = 0
        best_score = -1

        for i, defn in enumerate(definitions):
            score = _proximity_score(ref.file_path, defn.file_path)
            if score > best_score:
                best_score = score
                best_def_idx = i

        groups[best_def_idx].append(ref)

    # Build result groups
    result: List[GroupedReferences] = []
    for i, defn in enumerate(definitions):
        # Skip definitions with no references if not including definition itself
        if not include_definition and not groups[i]:
            continue

        result.append(GroupedReferences(
            definition=defn,
            references=groups[i],
        ))

    return result


def _proximity_score(ref_path: str, def_path: str) -> int:
    """Calculate proximity score between two file paths.

    Args:
        ref_path: Reference file path
        def_path: Definition file path

    Returns:
        Proximity score (higher = closer):
        - Same file: 1000
        - Same directory: 100
        - Otherwise: common path prefix length
    """
    import os

    if not ref_path or not def_path:
        return 0

    # Normalize paths
    ref_path = os.path.normpath(ref_path)
    def_path = os.path.normpath(def_path)

    # Same file
    if ref_path == def_path:
        return 1000

    ref_dir = os.path.dirname(ref_path)
    def_dir = os.path.dirname(def_path)

    # Same directory
    if ref_dir == def_dir:
        return 100

    # Common path prefix
    try:
        common = os.path.commonpath([ref_path, def_path])
        return len(common)
    except ValueError:
        # No common path (different drives on Windows)
        return 0


# Type alias for the raw reference from ChainSearchEngine
class RawReferenceResult:
    """Type stub for ChainSearchEngine.ReferenceResult.

    This is only used for type hints and is replaced at runtime
    by the actual import.
    """
    file_path: str
    line: int
    column: int
    context: str
    relationship_type: str
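The proximity heuristic above decides which definition each reference is attached to: same file beats same directory, which beats a mere shared path prefix. A self-contained re-implementation of that scoring logic (mirroring `_proximity_score`, with POSIX-style example paths):

```python
import os

# Same tiering as _proximity_score: 1000 for the same file, 100 for the
# same directory, otherwise the length of the common path prefix.
def proximity_score(ref_path: str, def_path: str) -> int:
    if not ref_path or not def_path:
        return 0
    ref_path = os.path.normpath(ref_path)
    def_path = os.path.normpath(def_path)
    if ref_path == def_path:
        return 1000
    if os.path.dirname(ref_path) == os.path.dirname(def_path):
        return 100
    try:
        return len(os.path.commonpath([ref_path, def_path]))
    except ValueError:
        # No common path, e.g. different drives on Windows.
        return 0

print(proximity_score("src/api/a.py", "src/api/a.py"))    # 1000 (same file)
print(proximity_score("src/api/a.py", "src/api/b.py"))    # 100 (same directory)
print(proximity_score("repo/src/a.py", "repo/tests/b.py"))  # length of shared "repo" prefix
```

Because every tier dominates the one below it, a reference in the definition's own file can never be outscored by a definition that merely shares a directory or prefix.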