mirror of https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-14 02:42:04 +08:00
feat: add issue discovery by prompt command with Gemini planning
- Introduced `/issue:discover-by-prompt` command for user-driven issue discovery.
- Implemented multi-agent exploration with iterative feedback loops.
- Added ACE semantic search for context gathering and cross-module comparison capabilities.
- Enhanced user experience with natural language input and adaptive exploration strategies.

feat: implement memory update queue tool for batching updates

- Created `memory-update-queue.js` for managing CLAUDE.md updates.
- Added functionality for queuing paths, deduplication, and auto-flushing based on thresholds and timeouts.
- Implemented methods for queue status retrieval, flushing, and timeout checks.
- Configured to store queue data persistently in `~/.claude/.memory-queue.json`.
.claude/commands/issue/discover-by-prompt.md · 764 lines · new file
---
name: issue:discover-by-prompt
description: Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).
argument-hint: "<prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*), mcp__ace-tool__search_context(*), mcp__exa__search(*)
---

# Issue Discovery by Prompt

## Quick Start

```bash
# Discover issues based on user description
/issue:discover-by-prompt "Check if frontend API calls match backend implementations"

# Compare specific modules
/issue:discover-by-prompt "Verify auth flow consistency between mobile and web clients" --scope=src/auth/**,src/mobile/**

# Deep exploration with more iterations
/issue:discover-by-prompt "Find all places where error handling is inconsistent" --depth=deep --max-iterations=8

# Focused backend-frontend contract check
/issue:discover-by-prompt "Compare REST API definitions with frontend fetch calls"
```

**Core Difference from `/issue:discover`**:
- `discover`: Pre-defined perspectives (bug, security, etc.), parallel execution
- `discover-by-prompt`: User-driven prompt, Gemini-planned strategy, iterative exploration

## What & Why

### Core Concept

Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command:

1. **Analyzes user intent** via Gemini to understand what to find
2. **Plans exploration strategy** dynamically based on codebase structure
3. **Executes iterative multi-agent exploration** with feedback loops
4. **Performs cross-module comparison** when detecting comparison intent

### Value Proposition

1. **Natural Language Input**: Describe what you want to find, not how to find it
2. **Intelligent Planning**: Gemini designs the optimal exploration strategy
3. **Iterative Refinement**: Each round builds on previous discoveries
4. **Cross-Module Analysis**: Compare frontend/backend, mobile/web, old/new implementations
5. **Adaptive Exploration**: Adjusts direction based on findings

### Use Cases

| Scenario | Example Prompt |
|----------|----------------|
| API Contract | "Check if frontend calls match backend endpoints" |
| Error Handling | "Find inconsistent error handling patterns" |
| Migration Gap | "Compare old auth with new auth implementation" |
| Feature Parity | "Verify mobile has all web features" |
| Schema Drift | "Check if TypeScript types match API responses" |
| Integration | "Find mismatches between service A and service B" |

## How It Works

### Execution Flow

```
Phase 1: Prompt Analysis & Initialization
├─ Parse user prompt and flags
├─ Detect exploration intent (comparison/search/verification/audit)
└─ Initialize discovery session

Phase 1.5: ACE Context Gathering
├─ Use ACE semantic search to understand codebase structure
├─ Identify relevant modules based on prompt keywords
├─ Collect architecture context for Gemini planning
└─ Build initial context package

Phase 2: Gemini Strategy Planning
├─ Feed ACE context + prompt to Gemini CLI
├─ Gemini analyzes and generates exploration strategy
├─ Create exploration dimensions with search targets
├─ Define comparison matrix (if comparison intent)
└─ Set success criteria and iteration limits

Phase 3: Iterative Agent Exploration (with ACE)
├─ Iteration 1: Initial exploration by assigned agents
│  ├─ Agent A: ACE search + explore dimension 1
│  ├─ Agent B: ACE search + explore dimension 2
│  └─ Collect findings, update shared context
├─ Iteration 2-N: Refined exploration
│  ├─ Analyze previous findings
│  ├─ ACE search for related code paths
│  ├─ Execute targeted exploration
│  └─ Update cumulative findings
└─ Termination: Max iterations or convergence

Phase 4: Cross-Analysis & Synthesis
├─ Compare findings across dimensions
├─ Identify discrepancies and issues
├─ Calculate confidence scores
└─ Generate issue candidates

Phase 5: Issue Generation & Summary
├─ Convert findings to issue format
├─ Write discovery outputs
└─ Prompt user for next action
```

### Exploration Dimensions

Dimensions are **dynamically generated by Gemini** based on the user prompt; they are not limited to predefined categories.

**Examples**:

| Prompt | Generated Dimensions |
|--------|---------------------|
| "Check API contracts" | frontend-calls, backend-handlers |
| "Find auth issues" | auth-module (single dimension) |
| "Compare old/new implementations" | legacy-code, new-code |
| "Audit payment flow" | payment-service, validation, logging |
| "Find error handling gaps" | error-handlers, error-types, recovery-logic |

Gemini analyzes the prompt + ACE context to determine:
- How many dimensions are needed (1 to N)
- What each dimension should focus on
- Whether comparison is needed between dimensions

### Iteration Strategy

```
┌─────────────────────────────────────────────────────────────┐
│                       Iteration Loop                        │
├─────────────────────────────────────────────────────────────┤
│ 1. Plan: What to explore this iteration                     │
│    └─ Based on: previous findings + unexplored areas        │
│                                                             │
│ 2. Execute: Launch agents for this iteration                │
│    └─ Each agent: explore → collect → return summary        │
│                                                             │
│ 3. Analyze: Process iteration results                       │
│    └─ New findings? Gaps? Contradictions?                   │
│                                                             │
│ 4. Decide: Continue or terminate                            │
│    └─ Terminate if: max iterations OR convergence OR        │
│       high confidence on all questions                      │
└─────────────────────────────────────────────────────────────┘
```
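The decide step above corresponds to the `checkConvergence` helper used in Phase 3. A minimal sketch of how it could work, assuming findings carry `file`, `line`, and `confidence` fields per the finding schema (the thresholds are illustrative, not the command's actual values):

```javascript
// Minimal convergence check: converge when an iteration yields no new
// findings, or when mean confidence on this iteration's findings is high.
// The 0.8 threshold and file:line identity key are illustrative assumptions.
function checkConvergence(iterationFindings, cumulativeFindings, plan) {
  const current = new Set(iterationFindings);
  const seen = new Set(
    cumulativeFindings
      .filter(f => !current.has(f))
      .map(f => `${f.file}:${f.line}`)
  );
  const newDiscoveries = iterationFindings.filter(
    f => !seen.has(`${f.file}:${f.line}`)
  ).length;

  const confidences = iterationFindings.map(f => f.confidence ?? 0);
  const confidence = confidences.length
    ? confidences.reduce((a, b) => a + b, 0) / confidences.length
    : 0;

  return {
    newDiscoveries,
    confidence,
    converged: newDiscoveries === 0 || confidence > 0.8
  };
}
```

An empty iteration converges immediately (zero new discoveries), which matches the "No new findings in last iteration" termination condition.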

## Core Responsibilities

### Phase 1: Prompt Analysis & Initialization

```javascript
// Step 1: Parse arguments
const { prompt, scope, depth, maxIterations } = parseArgs(args);

// Step 2: Generate discovery ID
const discoveryId = `DBP-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;

// Step 3: Create output directory
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
await mkdir(outputDir, { recursive: true });
await mkdir(`${outputDir}/iterations`, { recursive: true });

// Step 4: Detect intent type from prompt
const intentType = detectIntent(prompt);
// Returns: 'comparison' | 'search' | 'verification' | 'audit'

// Step 5: Initialize discovery state
await writeJson(`${outputDir}/discovery-state.json`, {
  discovery_id: discoveryId,
  type: 'prompt-driven',
  prompt: prompt,
  intent_type: intentType,
  scope: scope || '**/*',
  depth: depth || 'standard',
  max_iterations: maxIterations || 5,
  phase: 'initialization',
  created_at: new Date().toISOString(),
  iterations: [],
  cumulative_findings: [],
  comparison_matrix: null // filled for comparison intent
});
```
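The `detectIntent` helper in Step 4 is not defined in this file. One way to approximate it is keyword matching over the prompt; this is a sketch, and the word lists below are illustrative assumptions rather than the command's actual heuristics:

```javascript
// Illustrative keyword heuristic for detectIntent. Comparison cues are
// checked first since comparison prompts often also contain "check if".
// Falls back to 'search' when nothing matches.
function detectIntent(prompt) {
  const p = prompt.toLowerCase();
  if (/\b(compare|match|versus|vs\.?|consistency between|mismatch)\b/.test(p)) return 'comparison';
  if (/\b(verify|check if|validate|confirm)\b/.test(p)) return 'verification';
  if (/\b(audit|review all|security)\b/.test(p)) return 'audit';
  return 'search';
}
```

With this ordering, "Check if frontend API calls match backend implementations" classifies as `comparison` (because of "match") rather than `verification`.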

### Phase 1.5: ACE Context Gathering

**Purpose**: Use ACE semantic search to gather codebase context before Gemini planning.

```javascript
// Step 1: Extract keywords from prompt for semantic search
const keywords = extractKeywords(prompt);
// e.g., "frontend API calls match backend" → ["frontend", "API", "backend", "endpoints"]

// Step 2: Use ACE to understand codebase structure
const aceQueries = [
  `Project architecture and module structure for ${keywords.join(', ')}`,
  `Where are ${keywords[0]} implementations located?`,
  `How does ${keywords.slice(0, 2).join(' ')} work in this codebase?`
];

const aceResults = [];
for (const query of aceQueries) {
  // MCP tool invocation (shown as pseudocode)
  const result = await mcp__ace-tool__search_context({
    project_root_path: process.cwd(),
    query: query
  });
  aceResults.push({ query, result });
}

// Step 3: Build context package for Gemini (kept in memory)
const aceContext = {
  prompt_keywords: keywords,
  codebase_structure: aceResults[0].result,
  relevant_modules: aceResults.slice(1).map(r => r.result),
  detected_patterns: extractPatterns(aceResults)
};

// Step 4: Update state (no separate file)
await updateDiscoveryState(outputDir, {
  phase: 'context-gathered',
  ace_context: {
    queries_executed: aceQueries.length,
    modules_identified: aceContext.relevant_modules.length
  }
});

// aceContext passed to Phase 2 in memory
```
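The `extractKeywords` helper in Step 1 is likewise left undefined here. A minimal sketch, assuming a stop-word filter over word tokens (the stop-word list and tokenization are assumptions for illustration):

```javascript
// Illustrative extractKeywords: tokenize, drop stop words, dedupe,
// preserve order and original casing (so "API" survives as written).
const STOP_WORDS = new Set([
  'if', 'the', 'a', 'an', 'all', 'and', 'or', 'in', 'of', 'to', 'for',
  'check', 'find', 'verify', 'compare', 'with', 'between', 'is', 'are',
  'where', 'places'
]);

function extractKeywords(prompt) {
  const tokens = prompt.match(/[A-Za-z][A-Za-z0-9_-]*/g) || [];
  const seen = new Set();
  const keywords = [];
  for (const t of tokens) {
    const key = t.toLowerCase();
    if (STOP_WORDS.has(key) || seen.has(key)) continue;
    seen.add(key);
    keywords.push(t);
  }
  return keywords;
}
```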

**ACE Query Strategy by Intent Type**:

| Intent | ACE Queries |
|--------|-------------|
| **comparison** | "frontend API calls", "backend API handlers", "API contract definitions" |
| **search** | "{keyword} implementations", "{keyword} usage patterns" |
| **verification** | "expected behavior for {feature}", "test coverage for {feature}" |
| **audit** | "all {category} patterns", "{category} security concerns" |

### Phase 2: Gemini Strategy Planning

**Purpose**: Gemini analyzes the user prompt + ACE context to design the optimal exploration strategy.

```javascript
// Step 1: Use the ACE context gathered in Phase 1.5
// (held in memory; it is not persisted to a separate file)

// Step 2: Build Gemini planning prompt with ACE context
const planningPrompt = `
PURPOSE: Analyze discovery prompt and create exploration strategy based on codebase context
TASK:
• Parse user intent from prompt: "${prompt}"
• Use codebase context to identify specific modules and files to explore
• Create exploration dimensions with precise search targets
• Define comparison matrix structure (if comparison intent)
• Set success criteria and iteration strategy
MODE: analysis
CONTEXT: @${scope || '**/*'} | Discovery type: ${intentType}

## Codebase Context (from ACE semantic search)
${JSON.stringify(aceContext, null, 2)}

EXPECTED: JSON exploration plan following exploration-plan-schema.json:
{
  "intent_analysis": { "type": "${intentType}", "primary_question": "...", "sub_questions": [...] },
  "dimensions": [{ "name": "...", "description": "...", "search_targets": [...], "focus_areas": [...], "agent_prompt": "..." }],
  "comparison_matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] },
  "success_criteria": [...],
  "estimated_iterations": N,
  "termination_conditions": [...]
}
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) | Use ACE context to inform targets | Focus on actionable plan
`;

// Step 3: Execute Gemini planning
const geminiResult = Bash({
  command: `ccw cli -p "${planningPrompt}" --tool gemini --mode analysis`,
  run_in_background: true,
  timeout: 300000
});

// Step 4: Parse Gemini output (after the background job completes) and
// validate it against the schema
const explorationPlan = await parseGeminiPlanOutput(geminiResult);
validateAgainstSchema(explorationPlan, 'exploration-plan-schema.json');

// Step 5: Enhance plan with ACE-discovered file paths
explorationPlan.dimensions = explorationPlan.dimensions.map(dim => ({
  ...dim,
  ace_suggested_files: aceContext.relevant_modules
    .filter(m => m.relevance_to === dim.name)
    .map(m => m.file_path)
}));

// Step 6: Update state (plan kept in memory, not persisted)
await updateDiscoveryState(outputDir, {
  phase: 'planned',
  exploration_plan: {
    dimensions_count: explorationPlan.dimensions.length,
    has_comparison_matrix: !!explorationPlan.comparison_matrix,
    estimated_iterations: explorationPlan.estimated_iterations
  }
});

// explorationPlan passed to Phase 3 in memory
```

**Gemini Planning Responsibilities**:

| Responsibility | Input | Output |
|----------------|-------|--------|
| Intent Analysis | User prompt | type, primary_question, sub_questions |
| Dimension Design | ACE context + prompt | dimensions with search_targets |
| Comparison Matrix | Intent type + modules | comparison_points (if applicable) |
| Iteration Strategy | Depth setting | estimated_iterations, termination_conditions |

**Gemini Planning Output Schema**:

```json
{
  "intent_analysis": {
    "type": "comparison|search|verification|audit",
    "primary_question": "string",
    "sub_questions": ["string"]
  },
  "dimensions": [
    {
      "name": "frontend",
      "description": "Client-side API calls and error handling",
      "search_targets": ["src/api/**", "src/hooks/**"],
      "focus_areas": ["fetch calls", "error boundaries", "response parsing"],
      "agent_prompt": "Explore frontend API consumption patterns..."
    },
    {
      "name": "backend",
      "description": "Server-side API implementations",
      "search_targets": ["src/server/**", "src/routes/**"],
      "focus_areas": ["endpoint handlers", "response schemas", "error responses"],
      "agent_prompt": "Explore backend API implementations..."
    }
  ],
  "comparison_matrix": {
    "dimension_a": "frontend",
    "dimension_b": "backend",
    "comparison_points": [
      {"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"},
      {"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"},
      {"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"},
      {"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"},
      {"aspect": "errors", "frontend_check": "error handling", "backend_check": "error responses"}
    ]
  },
  "success_criteria": [
    "All API endpoints mapped between frontend and backend",
    "Discrepancies identified with file:line references",
    "Each finding includes remediation suggestion"
  ],
  "estimated_iterations": 3,
  "termination_conditions": [
    "All comparison points verified",
    "No new findings in last iteration",
    "Confidence > 0.8 on primary question"
  ]
}
```

### Phase 3: Iterative Agent Exploration (with ACE)

**Purpose**: Multi-agent iterative exploration using ACE for semantic search within each iteration.

```javascript
let iteration = 0;
let cumulativeFindings = [];
let sharedContext = { aceDiscoveries: [], crossReferences: [] };
let shouldContinue = true;

while (shouldContinue && iteration < maxIterations) {
  iteration++;
  const iterationDir = `${outputDir}/iterations/${iteration}`;
  await mkdir(iterationDir, { recursive: true });

  // Step 1: ACE-assisted iteration planning
  // Use previous findings to guide ACE queries for this iteration
  const iterationAceQueries = iteration === 1
    ? explorationPlan.dimensions.map(d => d.focus_areas[0]) // Initial queries from plan
    : deriveQueriesFromFindings(cumulativeFindings);        // Follow-up queries from findings

  // Execute ACE searches to find related code
  const iterationAceResults = [];
  for (const query of iterationAceQueries) {
    const result = await mcp__ace-tool__search_context({
      project_root_path: process.cwd(),
      query: `${query} in ${scope}`
    });
    iterationAceResults.push({ query, result });
  }

  // Update shared context with ACE discoveries
  sharedContext.aceDiscoveries.push(...iterationAceResults);

  // Step 2: Plan this iteration based on ACE results
  const iterationPlan = planIteration(iteration, explorationPlan, cumulativeFindings, iterationAceResults);

  // Step 3: Launch dimension agents with ACE context
  const agentPromises = iterationPlan.dimensions.map(dimension =>
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Explore ${dimension.name} (iteration ${iteration})`,
      prompt: buildDimensionPromptWithACE(dimension, iteration, cumulativeFindings, iterationAceResults, iterationDir)
    })
  );

  // Wait for iteration agents
  const iterationResults = await Promise.all(agentPromises);

  // Step 4: Collect and analyze iteration findings
  const iterationFindings = await collectIterationFindings(iterationDir, iterationPlan.dimensions);

  // Step 5: Cross-reference findings between dimensions
  if (iterationPlan.dimensions.length > 1) {
    const crossRefs = findCrossReferences(iterationFindings, iterationPlan.dimensions);
    sharedContext.crossReferences.push(...crossRefs);
  }

  cumulativeFindings.push(...iterationFindings);

  // Step 6: Decide whether to continue
  const convergenceCheck = checkConvergence(iterationFindings, cumulativeFindings, explorationPlan);
  shouldContinue = !convergenceCheck.converged;

  // Step 7: Update state (iteration summary embedded in state)
  const state = await readJson(`${outputDir}/discovery-state.json`);
  await updateDiscoveryState(outputDir, {
    iterations: [...state.iterations, {
      number: iteration,
      findings_count: iterationFindings.length,
      ace_queries: iterationAceQueries.length,
      cross_references: sharedContext.crossReferences.length,
      new_discoveries: convergenceCheck.newDiscoveries,
      confidence: convergenceCheck.confidence,
      continued: shouldContinue
    }],
    cumulative_findings: cumulativeFindings
  });
}
```
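The `deriveQueriesFromFindings` helper in Step 1 is not shown in this file. A minimal sketch, assuming findings carry `category` and `file` fields per the finding schema (the query phrasing and cap are illustrative):

```javascript
// Illustrative sketch: turn accumulated findings into follow-up ACE queries,
// one per distinct category, hinting at files already implicated.
function deriveQueriesFromFindings(cumulativeFindings, limit = 5) {
  const byCategory = new Map();
  for (const f of cumulativeFindings) {
    if (!byCategory.has(f.category)) byCategory.set(f.category, new Set());
    if (f.file) byCategory.get(f.category).add(f.file);
  }
  const queries = [];
  for (const [category, files] of byCategory) {
    const fileHint = [...files].slice(0, 3).join(', ');
    queries.push(`code related to ${category}${fileHint ? ` near ${fileHint}` : ''}`);
    if (queries.length >= limit) break;
  }
  return queries;
}
```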

**ACE in Iteration Loop**:

```
Iteration N
│
├─→ ACE Search (based on previous findings)
│    └─ Query: "related code paths for {finding.category}"
│    └─ Result: Additional files to explore
│
├─→ Agent Exploration (with ACE context)
│    └─ Agent receives: dimension targets + ACE suggestions
│    └─ Agent can call ACE for deeper search
│
├─→ Cross-Reference Analysis
│    └─ Compare findings between dimensions
│    └─ Identify discrepancies
│
└─→ Convergence Check
     └─ New findings? Continue
     └─ No new findings? Terminate
```

**Dimension Agent Prompt Template (with ACE)**:

```javascript
function buildDimensionPromptWithACE(dimension, iteration, previousFindings, aceResults, outputDir) {
  // Filter ACE results relevant to this dimension
  const relevantAceResults = aceResults.filter(r =>
    r.query.includes(dimension.name) || dimension.focus_areas.some(fa => r.query.includes(fa))
  );

  return `
## Task Objective
Explore ${dimension.name} dimension for issue discovery (Iteration ${iteration})

## Context
- Dimension: ${dimension.name}
- Description: ${dimension.description}
- Search Targets: ${dimension.search_targets.join(', ')}
- Focus Areas: ${dimension.focus_areas.join(', ')}

## ACE Semantic Search Results (Pre-gathered)
The following files/code sections were identified by ACE as relevant to this dimension:
${JSON.stringify(relevantAceResults.map(r => ({ query: r.query, files: r.result.slice(0, 5) })), null, 2)}

**Use ACE for deeper exploration**: You have access to mcp__ace-tool__search_context.
When you find something interesting, use ACE to find related code:
- mcp__ace-tool__search_context({ project_root_path: ".", query: "related to {finding}" })

${iteration > 1 ? `
## Previous Findings to Build Upon
${summarizePreviousFindings(previousFindings, dimension.name)}

## This Iteration Focus
- Explore areas not yet covered (check ACE results for new files)
- Verify/deepen previous findings
- Follow leads from previous discoveries
- Use ACE to find cross-references between dimensions
` : ''}

## MANDATORY FIRST STEPS
1. Read schema: ~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json
2. Review the dimension context and ACE results above (the exploration plan is provided inline; it is not persisted to disk)
3. Explore files identified by ACE

## Exploration Instructions
${dimension.agent_prompt}

## ACE Usage Guidelines
- Use ACE when you need to find:
  - Where a function/class is used
  - Related implementations in other modules
  - Cross-module dependencies
  - Similar patterns elsewhere in codebase
- Query format: Natural language, be specific
- Example: "Where is UserService.authenticate called from?"

## Output Requirements

**1. Write JSON file**: ${outputDir}/${dimension.name}.json
Follow discovery-finding-schema.json:
- findings: [{id, title, category, description, file, line, snippet, confidence, related_dimension}]
- coverage: {files_explored, areas_covered, areas_remaining}
- leads: [{description, suggested_search}] // for next iteration
- ace_queries_used: [{query, result_count}] // track ACE usage

**2. Return summary**:
- Total findings this iteration
- Key discoveries
- ACE queries that revealed important code
- Recommended next exploration areas

## Success Criteria
- [ ] JSON written to ${outputDir}/${dimension.name}.json
- [ ] Each finding has file:line reference
- [ ] ACE used for cross-references where applicable
- [ ] Coverage report included
- [ ] Leads for next iteration identified
`;
}
```

### Phase 4: Cross-Analysis & Synthesis

```javascript
// For comparison intent, perform cross-analysis
// (declared with let so Phase 5 can read it; stays null for other intents)
let comparisonResults = null;

if (intentType === 'comparison' && explorationPlan.comparison_matrix) {
  comparisonResults = [];

  for (const point of explorationPlan.comparison_matrix.comparison_points) {
    const dimensionAFindings = cumulativeFindings.filter(f =>
      f.related_dimension === explorationPlan.comparison_matrix.dimension_a &&
      f.category.includes(point.aspect)
    );

    const dimensionBFindings = cumulativeFindings.filter(f =>
      f.related_dimension === explorationPlan.comparison_matrix.dimension_b &&
      f.category.includes(point.aspect)
    );

    // Compare and find discrepancies
    const discrepancies = findDiscrepancies(dimensionAFindings, dimensionBFindings, point);

    comparisonResults.push({
      aspect: point.aspect,
      dimension_a_count: dimensionAFindings.length,
      dimension_b_count: dimensionBFindings.length,
      discrepancies: discrepancies,
      match_rate: calculateMatchRate(dimensionAFindings, dimensionBFindings)
    });
  }

  // Write comparison analysis
  await writeJson(`${outputDir}/comparison-analysis.json`, {
    matrix: explorationPlan.comparison_matrix,
    results: comparisonResults,
    summary: {
      total_discrepancies: comparisonResults.reduce((sum, r) => sum + r.discrepancies.length, 0),
      overall_match_rate: average(comparisonResults.map(r => r.match_rate)),
      critical_mismatches: comparisonResults.filter(r => r.match_rate < 0.5)
    }
  });
}

// Prioritize all findings
const prioritizedFindings = prioritizeFindings(cumulativeFindings, explorationPlan);
```
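The `calculateMatchRate` helper above is not defined in this file. One simple realization treats each side's findings as a set under a normalization key and measures overlap; this is a sketch, and keying on the lowercased title is an assumption:

```javascript
// Illustrative match rate: fraction of dimension A findings that have a
// counterpart in dimension B under a normalization key.
function calculateMatchRate(findingsA, findingsB, keyFn = f => f.title.toLowerCase()) {
  if (findingsA.length === 0 && findingsB.length === 0) return 1; // nothing to mismatch
  if (findingsA.length === 0 || findingsB.length === 0) return 0;
  const keysB = new Set(findingsB.map(keyFn));
  const matched = findingsA.filter(f => keysB.has(keyFn(f))).length;
  return matched / findingsA.length;
}
```

A rate below 0.5 then lands the aspect in `critical_mismatches` in the summary above.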

### Phase 5: Issue Generation & Summary

```javascript
// Convert high-confidence findings to issues
const issueWorthy = prioritizedFindings.filter(f =>
  f.confidence >= 0.7 || f.priority === 'critical' || f.priority === 'high'
);

const issues = issueWorthy.map(finding => ({
  id: `ISS-${discoveryId}-${finding.id}`,
  title: finding.title,
  description: finding.description,
  source: {
    discovery_id: discoveryId,
    finding_id: finding.id,
    dimension: finding.related_dimension
  },
  file: finding.file,
  line: finding.line,
  priority: finding.priority,
  category: finding.category,
  suggested_fix: finding.suggested_fix,
  confidence: finding.confidence,
  status: 'discovered',
  created_at: new Date().toISOString()
}));

// Write issues
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);

// Update final state (summary embedded in state, no separate file)
await updateDiscoveryState(outputDir, {
  phase: 'complete',
  updated_at: new Date().toISOString(),
  results: {
    total_iterations: iteration,
    total_findings: cumulativeFindings.length,
    issues_generated: issues.length,
    comparison_match_rate: comparisonResults
      ? average(comparisonResults.map(r => r.match_rate))
      : null
  }
});

// Prompt user for next action
await AskUserQuestion({
  questions: [{
    question: `Discovery complete: ${issues.length} issues from ${cumulativeFindings.length} findings across ${iteration} iterations. What next?`,
    header: "Next Step",
    multiSelect: false,
    options: [
      { label: "Export to Issues (Recommended)", description: `Export ${issues.length} issues for planning` },
      { label: "Review Details", description: "View comparison analysis and iteration details" },
      { label: "Run Deeper", description: "Continue with more iterations" },
      { label: "Skip", description: "Complete without exporting" }
    ]
  }]
});
```

## Output File Structure

```
.workflow/issues/discoveries/
└── {DBP-YYYYMMDD-HHmmss}/
    ├── discovery-state.json          # Session state with iteration tracking
    ├── iterations/
    │   ├── 1/
    │   │   └── {dimension}.json      # Dimension findings
    │   ├── 2/
    │   │   └── {dimension}.json
    │   └── ...
    ├── comparison-analysis.json      # Cross-dimension comparison (if applicable)
    └── discovery-issues.jsonl        # Generated issue candidates
```

**Simplified Design**:
- ACE context and Gemini plan kept in memory, not persisted
- Iteration summaries embedded in state
- No separate summary.md (state.json contains all needed info)

## Schema References

| Schema | Path | Used By |
|--------|------|---------|
| **Discovery State** | `discovery-state-schema.json` | Orchestrator (state tracking) |
| **Discovery Finding** | `discovery-finding-schema.json` | Dimension agents (output) |
| **Exploration Plan** | `exploration-plan-schema.json` | Gemini output validation (memory only) |

## Configuration Options
|
||||||
|
|
||||||
|
| Flag | Default | Description |
|
||||||
|
|------|---------|-------------|
|
||||||
|
| `--scope` | `**/*` | File pattern to explore |
|
||||||
|
| `--depth` | `standard` | `standard` (3 iterations) or `deep` (5+ iterations) |
|
||||||
|
| `--max-iterations` | 5 | Maximum exploration iterations |
|
||||||
|
| `--tool` | `gemini` | Planning tool (gemini/qwen) |
|
||||||
|
| `--plan-only` | `false` | Stop after Phase 2 (Gemini planning), show plan for user review |
|
||||||
|
|
||||||
|
## Examples

### Example 1: Single Module Deep Dive

```bash
/issue:discover-by-prompt "Find all potential issues in the auth module" --scope=src/auth/**
```

**Gemini plans** (single dimension):
- Dimension: auth-module
- Focus: security vulnerabilities, edge cases, error handling, test gaps

**Iterations**: 2-3 (until no new findings)

### Example 2: API Contract Comparison

```bash
/issue:discover-by-prompt "Check if API calls match implementations" --scope=src/**
```

**Gemini plans** (comparison):
- Dimension 1: api-consumers (fetch calls, hooks, services)
- Dimension 2: api-providers (handlers, routes, controllers)
- Comparison matrix: endpoints, methods, payloads, responses

### Example 3: Multi-Module Audit

```bash
/issue:discover-by-prompt "Audit the payment flow for issues" --scope=src/payment/**
```

**Gemini plans** (multi-dimension):
- Dimension 1: payment-logic (calculations, state transitions)
- Dimension 2: validation (input checks, business rules)
- Dimension 3: error-handling (failure modes, recovery)

### Example 4: Plan Only Mode

```bash
/issue:discover-by-prompt "Find inconsistent patterns" --plan-only
```

Stops after Gemini planning and outputs:

```
Gemini Plan:
- Intent: search
- Dimensions: 2 (pattern-definitions, pattern-usages)
- Estimated iterations: 3

Continue with exploration? [Y/n]
```

## Related Commands

```bash
# After discovery, plan solutions
/issue:plan DBP-001-01,DBP-001-02

# View all discoveries
/issue:manage

# Standard perspective-based discovery
/issue:discover src/auth/** --perspectives=security,bug
```

## Best Practices

1. **Be Specific in Prompts**: More specific prompts lead to better Gemini planning
2. **Scope Appropriately**: Narrow the scope for focused comparisons; widen it for audits
3. **Review the Exploration Plan**: Run with `--plan-only` to inspect the plan before committing to a long exploration (the plan is kept in memory, not written to disk)
4. **Use Standard Depth First**: Start with `standard` depth and go `deep` only if needed
5. **Combine with `/issue:discover`**: Use prompt-based discovery for comparisons, perspective-based discovery for audits

@@ -1256,5 +1256,89 @@ RULES: Be concise. Focus on practical understanding. Include function signatures
     return true;
   }
 
+  // API: Memory Queue - Add path to queue
+  if (pathname === '/api/memory/queue/add' && req.method === 'POST') {
+    handlePostRequest(req, res, async (body) => {
+      const { path: modulePath, tool = 'gemini', strategy = 'single-layer' } = body;
+
+      if (!modulePath) {
+        return { error: 'path is required', status: 400 };
+      }
+
+      try {
+        const { memoryQueueTool } = await import('../../tools/memory-update-queue.js');
+        const result = await memoryQueueTool.execute({
+          action: 'add',
+          path: modulePath,
+          tool,
+          strategy
+        }) as { queueSize?: number; willFlush?: boolean; flushed?: boolean };
+
+        // Broadcast queue update event
+        broadcastToClients({
+          type: 'MEMORY_QUEUE_UPDATED',
+          payload: {
+            action: 'add',
+            path: modulePath,
+            queueSize: result.queueSize || 0,
+            willFlush: result.willFlush || false,
+            flushed: result.flushed || false,
+            timestamp: new Date().toISOString()
+          }
+        });
+
+        return { success: true, ...result };
+      } catch (error: unknown) {
+        return { error: (error as Error).message, status: 500 };
+      }
+    });
+    return true;
+  }
+
+  // API: Memory Queue - Get queue status
+  if (pathname === '/api/memory/queue/status' && req.method === 'GET') {
+    try {
+      const { memoryQueueTool } = await import('../../tools/memory-update-queue.js');
+      const result = await memoryQueueTool.execute({ action: 'status' }) as Record<string, unknown>;
+
+      res.writeHead(200, { 'Content-Type': 'application/json' });
+      res.end(JSON.stringify({ success: true, ...result }));
+    } catch (error: unknown) {
+      res.writeHead(500, { 'Content-Type': 'application/json' });
+      res.end(JSON.stringify({ error: (error as Error).message }));
+    }
+    return true;
+  }
+
+  // API: Memory Queue - Flush queue immediately
+  if (pathname === '/api/memory/queue/flush' && req.method === 'POST') {
+    handlePostRequest(req, res, async () => {
+      try {
+        const { memoryQueueTool } = await import('../../tools/memory-update-queue.js');
+        const result = await memoryQueueTool.execute({ action: 'flush' }) as {
+          processed?: number;
+          success?: boolean;
+          errors?: unknown[];
+        };
+
+        // Broadcast queue flushed event
+        broadcastToClients({
+          type: 'MEMORY_QUEUE_FLUSHED',
+          payload: {
+            processed: result.processed || 0,
+            success: result.success || false,
+            errors: result.errors?.length || 0,
+            timestamp: new Date().toISOString()
+          }
+        });
+
+        return { success: true, ...result };
+      } catch (error: unknown) {
+        return { error: (error as Error).message, status: 500 };
+      }
+    });
+    return true;
+  }
+
   return false;
 }
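The queue semantics these endpoints rely on (per the commit message: path deduplication, auto-flush on a size threshold or a timeout) can be sketched as a minimal standalone class. The class name, defaults, and return shapes below are assumptions for illustration, not the actual `memory-update-queue.js` API:

```javascript
// Minimal sketch of a batching update queue: paths are deduplicated via a Set,
// and the queue flushes automatically when it reaches a threshold; a separate
// checkTimeout() covers the time-based flush path.
class UpdateQueue {
  constructor({ threshold = 5, timeoutMs = 5 * 60 * 1000, onFlush } = {}) {
    this.threshold = threshold;
    this.timeoutMs = timeoutMs;
    this.onFlush = onFlush;
    this.paths = new Set();
    this.firstQueuedAt = null;
  }

  // Queue a path; returns the queue size and whether this add triggered a flush.
  add(path) {
    if (this.paths.size === 0) this.firstQueuedAt = Date.now();
    this.paths.add(path); // Set membership gives deduplication for free
    if (this.paths.size >= this.threshold) {
      this.flush();
      return { queueSize: 0, flushed: true };
    }
    return { queueSize: this.paths.size, flushed: false };
  }

  // Flush if the oldest queued path has been waiting longer than the timeout.
  checkTimeout(now = Date.now()) {
    if (this.firstQueuedAt !== null && now - this.firstQueuedAt >= this.timeoutMs) {
      this.flush();
      return true;
    }
    return false;
  }

  flush() {
    const batch = [...this.paths];
    this.paths.clear();
    this.firstQueuedAt = null;
    if (batch.length && this.onFlush) this.onFlush(batch);
  }
}

const flushed = [];
const q = new UpdateQueue({ threshold: 3, onFlush: (batch) => flushed.push(batch) });
q.add('src/auth');
q.add('src/auth');          // duplicate - deduplicated, queue stays at 1
q.add('src/payment');
const r = q.add('src/api'); // third unique path hits the threshold and flushes
```

The key design point mirrored here is that callers (hooks firing on every file edit) stay cheap: each call is a constant-time Set insert, and the expensive CLAUDE.md regeneration only runs once per batch.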
@@ -9,6 +9,15 @@ import { getCliToolsStatus } from '../../tools/cli-executor.js';
 import { checkVenvStatus, checkSemanticStatus } from '../../tools/codex-lens.js';
 import type { RouteContext } from './types.js';
 
+// Performance logging helper
+const PERF_LOG_ENABLED = process.env.CCW_PERF_LOG === '1' || true; // Enable by default for debugging
+function perfLog(label: string, startTime: number, extra?: Record<string, unknown>): void {
+  if (!PERF_LOG_ENABLED) return;
+  const duration = Date.now() - startTime;
+  const extraStr = extra ? ` | ${JSON.stringify(extra)}` : '';
+  console.log(`[PERF][Status] ${label}: ${duration}ms${extraStr}`);
+}
+
 /**
  * Check CCW installation status
  * Verifies that required workflow files are installed in user's home directory
@@ -62,16 +71,39 @@ export async function handleStatusRoutes(ctx: RouteContext): Promise<boolean> {
 
   // API: Aggregated Status (all statuses in one call)
   if (pathname === '/api/status/all') {
+    const totalStart = Date.now();
+    console.log('[PERF][Status] === /api/status/all START ===');
+
     try {
       // Check CCW installation status (sync, fast)
+      const ccwStart = Date.now();
       const ccwInstallStatus = checkCcwInstallStatus();
+      perfLog('checkCcwInstallStatus', ccwStart);
 
-      // Execute all status checks in parallel
+      // Execute all status checks in parallel with individual timing
+      const cliStart = Date.now();
+      const codexStart = Date.now();
+      const semanticStart = Date.now();
+
       const [cliStatus, codexLensStatus, semanticStatus] = await Promise.all([
-        getCliToolsStatus(),
-        checkVenvStatus(),
+        getCliToolsStatus().then(result => {
+          perfLog('getCliToolsStatus', cliStart, { toolCount: Object.keys(result).length });
+          return result;
+        }),
+        checkVenvStatus().then(result => {
+          perfLog('checkVenvStatus', codexStart, { ready: result.ready });
+          return result;
+        }),
         // Always check semantic status (will return available: false if CodexLens not ready)
-        checkSemanticStatus().catch(() => ({ available: false, backend: null }))
+        checkSemanticStatus()
+          .then(result => {
+            perfLog('checkSemanticStatus', semanticStart, { available: result.available });
+            return result;
+          })
+          .catch(() => {
+            perfLog('checkSemanticStatus (error)', semanticStart);
+            return { available: false, backend: null };
+          })
       ]);
 
       const response = {
@@ -82,10 +114,13 @@ export async function handleStatusRoutes(ctx: RouteContext): Promise<boolean> {
         timestamp: new Date().toISOString()
       };
 
+      perfLog('=== /api/status/all TOTAL ===', totalStart);
+
       res.writeHead(200, { 'Content-Type': 'application/json' });
       res.end(JSON.stringify(response));
       return true;
     } catch (error) {
+      perfLog('=== /api/status/all ERROR ===', totalStart);
       console.error('[Status Routes] Error fetching aggregated status:', error);
       res.writeHead(500, { 'Content-Type': 'application/json' });
       res.end(JSON.stringify({ error: (error as Error).message }));
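The per-task timing trick used in the hunk above — giving each promise in a `Promise.all` its own start time and a `.then` logger that passes the resolved value through unchanged — can be isolated into a small helper. The names here are illustrative, not part of the codebase:

```javascript
// Record how long each promise in a Promise.all takes, without altering
// the resolved values the caller receives.
const timings = {};

function timed(label, promise, start = Date.now()) {
  return promise.then((result) => {
    timings[label] = Date.now() - start; // side effect: record duration
    return result;                       // pass the value through untouched
  });
}

async function main() {
  const [a, b] = await Promise.all([
    timed('fast', Promise.resolve(1)),
    timed('slow', new Promise((resolve) => setTimeout(() => resolve(2), 20)))
  ]);
  return [a, b];
}
```

Because `.then` returns a new promise resolving to the same value, the destructured results are identical to the untimed version; only the `timings` map gains entries.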
@@ -42,6 +42,10 @@ import { randomBytes } from 'crypto';
 // Import health check service
 import { getHealthCheckService } from './services/health-check-service.js';
 
+// Import status check functions for warmup
+import { checkSemanticStatus, checkVenvStatus } from '../tools/codex-lens.js';
+import { getCliToolsStatus } from '../tools/cli-executor.js';
+
 import type { ServerConfig } from '../types/config.js';
 import type { PostRequestHandler } from './routes/types.js';
 
@@ -290,6 +294,56 @@ function setCsrfCookie(res: http.ServerResponse, token: string, maxAgeSeconds: n
   appendSetCookie(res, attributes.join('; '));
 }
 
+/**
+ * Warmup function to pre-populate caches on server startup
+ * This runs asynchronously and non-blocking after the server starts
+ */
+async function warmupCaches(initialPath: string): Promise<void> {
+  console.log('[WARMUP] Starting cache warmup...');
+  const startTime = Date.now();
+
+  // Run all warmup tasks in parallel for faster startup
+  const warmupTasks = [
+    // Warmup semantic status cache (Python process startup - can be slow first time)
+    (async () => {
+      const taskStart = Date.now();
+      try {
+        const semanticStatus = await checkSemanticStatus();
+        console.log(`[WARMUP] Semantic status: ${semanticStatus.available ? 'available' : 'not available'} (${Date.now() - taskStart}ms)`);
+      } catch (err) {
+        console.warn(`[WARMUP] Semantic status check failed: ${(err as Error).message}`);
+      }
+    })(),
+
+    // Warmup venv status cache
+    (async () => {
+      const taskStart = Date.now();
+      try {
+        const venvStatus = await checkVenvStatus();
+        console.log(`[WARMUP] Venv status: ${venvStatus.ready ? 'ready' : 'not ready'} (${Date.now() - taskStart}ms)`);
+      } catch (err) {
+        console.warn(`[WARMUP] Venv status check failed: ${(err as Error).message}`);
+      }
+    })(),
+
+    // Warmup CLI tools status cache
+    (async () => {
+      const taskStart = Date.now();
+      try {
+        const cliStatus = await getCliToolsStatus();
+        const availableCount = Object.values(cliStatus).filter(s => s.available).length;
+        const totalCount = Object.keys(cliStatus).length;
+        console.log(`[WARMUP] CLI tools status: ${availableCount}/${totalCount} available (${Date.now() - taskStart}ms)`);
+      } catch (err) {
+        console.warn(`[WARMUP] CLI tools status check failed: ${(err as Error).message}`);
+      }
+    })()
+  ];
+
+  await Promise.allSettled(warmupTasks);
+  console.log(`[WARMUP] Cache warmup complete (${Date.now() - startTime}ms total)`);
+}
+
 /**
  * Generate dashboard HTML with embedded CSS and JS
  */
@@ -650,6 +704,14 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Ser
       console.warn('[Server] Failed to start health check service:', err);
     }
 
+    // Start cache warmup asynchronously (non-blocking)
+    // Uses setImmediate to not delay server startup response
+    setImmediate(() => {
+      warmupCaches(initialPath).catch((err) => {
+        console.warn('[WARMUP] Cache warmup failed:', err);
+      });
+    });
+
     resolve(server);
   });
   server.on('error', reject);
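The startup pattern above can be sketched in isolation: `setImmediate` defers the warmup so server startup completes first, and `Promise.allSettled` guarantees that one failing warmup task cannot reject the whole batch. The task bodies below are placeholders, not the real status checks:

```javascript
// Non-blocking warmup sketch: startup logs first, warmup settles afterwards,
// and a rejected task is recorded rather than propagated.
const log = [];

function startServerSketch() {
  setImmediate(() => {
    const tasks = [
      Promise.resolve('semantic ok'),
      Promise.reject(new Error('venv missing')), // a failure is tolerated
      Promise.resolve('cli ok')
    ];
    Promise.allSettled(tasks).then((results) => {
      for (const r of results) {
        log.push(r.status); // 'fulfilled' or 'rejected', in input order
      }
    });
  });
  log.push('server started'); // runs before any warmup task executes
}

startServerSketch();
```

Had the code used `Promise.all` instead, the single rejected task would reject the combined promise and skip the summary log, which is exactly what `allSettled` avoids here.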
@@ -33,11 +33,14 @@ function initCliStatus() {
  * Load all statuses using aggregated endpoint (single API call)
  */
 async function loadAllStatuses() {
+  const totalStart = performance.now();
+  console.log('[PERF][Frontend] loadAllStatuses START');
+
   // 1. Try the cache first (preloaded data)
   if (window.cacheManager) {
     const cached = window.cacheManager.get('all-status');
     if (cached) {
-      console.log('[CLI Status] Loaded all statuses from cache');
+      console.log(`[PERF][Frontend] Cache hit: ${(performance.now() - totalStart).toFixed(1)}ms`);
       // Apply the cached data
       cliToolStatus = cached.cli || {};
       codexLensStatus = cached.codexLens || { ready: false };
@@ -45,25 +48,32 @@ async function loadAllStatuses() {
       ccwInstallStatus = cached.ccwInstall || { installed: true, workflowsInstalled: true, missingFiles: [], installPath: '' };
 
       // Load CLI tools config, API endpoints, and CLI Settings (these have their own caches)
+      const configStart = performance.now();
       await Promise.all([
         loadCliToolsConfig(),
         loadApiEndpoints(),
         loadCliSettingsEndpoints()
       ]);
+      console.log(`[PERF][Frontend] Config/Endpoints load: ${(performance.now() - configStart).toFixed(1)}ms`);
 
       // Update badges
       updateCliBadge();
       updateCodexLensBadge();
       updateCcwInstallBadge();
+
+      console.log(`[PERF][Frontend] loadAllStatuses TOTAL (cached): ${(performance.now() - totalStart).toFixed(1)}ms`);
       return cached;
     }
   }
 
   // 2. Cache miss - fetch from the server
   try {
+    const fetchStart = performance.now();
+    console.log('[PERF][Frontend] Fetching /api/status/all...');
     const response = await fetch('/api/status/all');
     if (!response.ok) throw new Error('Failed to load status');
     const data = await response.json();
+    console.log(`[PERF][Frontend] /api/status/all fetch: ${(performance.now() - fetchStart).toFixed(1)}ms`);
 
     // Store in the cache
     if (window.cacheManager) {
@@ -77,10 +87,11 @@ async function loadAllStatuses() {
     ccwInstallStatus = data.ccwInstall || { installed: true, workflowsInstalled: true, missingFiles: [], installPath: '' };
 
     // Load CLI tools config, API endpoints, and CLI Settings
-    await Promise.all([
-      loadCliToolsConfig(),
-      loadApiEndpoints(),
-      loadCliSettingsEndpoints()
+    const configStart = performance.now();
+    const [configResult, endpointsResult, settingsResult] = await Promise.all([
+      loadCliToolsConfig().then(r => { console.log(`[PERF][Frontend] loadCliToolsConfig: ${(performance.now() - configStart).toFixed(1)}ms`); return r; }),
+      loadApiEndpoints().then(r => { console.log(`[PERF][Frontend] loadApiEndpoints: ${(performance.now() - configStart).toFixed(1)}ms`); return r; }),
+      loadCliSettingsEndpoints().then(r => { console.log(`[PERF][Frontend] loadCliSettingsEndpoints: ${(performance.now() - configStart).toFixed(1)}ms`); return r; })
     ]);
 
     // Update badges
@@ -88,9 +99,11 @@ async function loadAllStatuses() {
     updateCodexLensBadge();
     updateCcwInstallBadge();
 
+    console.log(`[PERF][Frontend] loadAllStatuses TOTAL: ${(performance.now() - totalStart).toFixed(1)}ms`);
     return data;
   } catch (err) {
     console.error('Failed to load aggregated status:', err);
+    console.log(`[PERF][Frontend] loadAllStatuses ERROR after: ${(performance.now() - totalStart).toFixed(1)}ms`);
     // Fallback to individual calls if aggregated endpoint fails
     return await loadAllStatusesFallback();
   }
@@ -53,39 +53,39 @@ const HOOK_TEMPLATES = {
     event: 'Stop',
     matcher: '',
     command: 'bash',
-    args: ['-c', 'ccw tool exec update_module_claude \'{"strategy":"related","tool":"gemini"}\''],
-    description: 'Update CLAUDE.md for changed modules when session ends',
+    args: ['-c', 'ccw tool exec memory_queue "{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\",\\"tool\\":\\"gemini\\",\\"strategy\\":\\"single-layer\\"}"'],
+    description: 'Queue CLAUDE.md update for changed modules when session ends',
     category: 'memory',
     configurable: true,
     config: {
       tool: { type: 'select', options: ['gemini', 'qwen', 'codex'], default: 'gemini', label: 'CLI Tool' },
-      strategy: { type: 'select', options: ['related', 'single-layer'], default: 'related', label: 'Strategy' }
+      strategy: { type: 'select', options: ['single-layer', 'multi-layer'], default: 'single-layer', label: 'Strategy' }
     }
   },
   'memory-update-periodic': {
     event: 'PostToolUse',
     matcher: 'Write|Edit',
     command: 'bash',
-    args: ['-c', 'INTERVAL=300; LAST_FILE="$HOME/.claude/.last_memory_update"; mkdir -p "$HOME/.claude"; NOW=$(date +%s); LAST=0; [ -f "$LAST_FILE" ] && LAST=$(cat "$LAST_FILE" 2>/dev/null || echo 0); if [ $((NOW - LAST)) -ge $INTERVAL ]; then echo $NOW > "$LAST_FILE"; ccw tool exec update_module_claude \'{"strategy":"related","tool":"gemini"}\' & fi'],
-    description: 'Periodically update CLAUDE.md (default: 5 min interval)',
+    args: ['-c', 'ccw tool exec memory_queue "{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\",\\"tool\\":\\"gemini\\",\\"strategy\\":\\"single-layer\\"}"'],
+    description: 'Queue CLAUDE.md update on file changes (batched with threshold/timeout)',
     category: 'memory',
     configurable: true,
     config: {
       tool: { type: 'select', options: ['gemini', 'qwen', 'codex'], default: 'gemini', label: 'CLI Tool' },
-      interval: { type: 'number', default: 300, min: 60, max: 3600, label: 'Interval (seconds)', step: 60 }
+      strategy: { type: 'select', options: ['single-layer', 'multi-layer'], default: 'single-layer', label: 'Strategy' }
     }
   },
   'memory-update-count-based': {
     event: 'PostToolUse',
     matcher: 'Write|Edit',
     command: 'bash',
-    args: ['-c', 'THRESHOLD=10; COUNT_FILE="$HOME/.claude/.memory_update_count"; mkdir -p "$HOME/.claude"; INPUT=$(cat); FILE_PATH=$(echo "$INPUT" | jq -r ".tool_input.file_path // .tool_input.path // empty" 2>/dev/null); [ -z "$FILE_PATH" ] && exit 0; COUNT=0; [ -f "$COUNT_FILE" ] && COUNT=$(cat "$COUNT_FILE" 2>/dev/null || echo 0); COUNT=$((COUNT + 1)); echo $COUNT > "$COUNT_FILE"; if [ $COUNT -ge $THRESHOLD ]; then echo 0 > "$COUNT_FILE"; ccw tool exec update_module_claude \'{"strategy":"related","tool":"gemini"}\' & fi'],
-    description: 'Update CLAUDE.md when file changes reach threshold (default: 10 files)',
+    args: ['-c', 'ccw tool exec memory_queue "{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\",\\"tool\\":\\"gemini\\",\\"strategy\\":\\"single-layer\\"}"'],
+    description: 'Queue CLAUDE.md update on file changes (auto-flush at 5 paths or 5min timeout)',
    category: 'memory',
     configurable: true,
     config: {
       tool: { type: 'select', options: ['gemini', 'qwen', 'codex'], default: 'gemini', label: 'CLI Tool' },
-      threshold: { type: 'number', default: 10, min: 3, max: 50, label: 'File count threshold', step: 1 }
+      strategy: { type: 'select', options: ['single-layer', 'multi-layer'], default: 'single-layer', label: 'Strategy' }
     }
   },
   // SKILL Context Loader templates
@@ -1154,21 +1154,17 @@ function generateWizardCommand() {
   }
 
   // Handle memory-update wizard (default)
+  // Now uses memory_queue for batched updates
   const tool = wizardConfig.tool || 'gemini';
-  const strategy = wizardConfig.strategy || 'related';
-  const interval = wizardConfig.interval || 300;
-  const threshold = wizardConfig.threshold || 10;
+  const strategy = wizardConfig.strategy || 'single-layer';
 
-  // Build the ccw tool command based on configuration
-  const params = JSON.stringify({ strategy, tool });
+  // Build the ccw tool command using memory_queue
+  // Use double quotes to allow bash $CLAUDE_PROJECT_DIR expansion
+  const params = `"{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\",\\"tool\\":\\"${tool}\\",\\"strategy\\":\\"${strategy}\\"}"`;
 
-  if (triggerType === 'periodic') {
-    return `INTERVAL=${interval}; LAST_FILE="$HOME/.claude/.last_memory_update"; mkdir -p "$HOME/.claude"; NOW=$(date +%s); LAST=0; [ -f "$LAST_FILE" ] && LAST=$(cat "$LAST_FILE" 2>/dev/null || echo 0); if [ $((NOW - LAST)) -ge $INTERVAL ]; then echo $NOW > "$LAST_FILE"; ccw tool exec update_module_claude '${params}' & fi`;
-  } else if (triggerType === 'count-based') {
-    return `THRESHOLD=${threshold}; COUNT_FILE="$HOME/.claude/.memory_update_count"; mkdir -p "$HOME/.claude"; INPUT=$(cat); FILE_PATH=$(echo "$INPUT" | jq -r ".tool_input.file_path // .tool_input.path // empty" 2>/dev/null); [ -z "$FILE_PATH" ] && exit 0; COUNT=0; [ -f "$COUNT_FILE" ] && COUNT=$(cat "$COUNT_FILE" 2>/dev/null || echo 0); COUNT=$((COUNT + 1)); echo $COUNT > "$COUNT_FILE"; if [ $COUNT -ge $THRESHOLD ]; then echo 0 > "$COUNT_FILE"; ccw tool exec update_module_claude '${params}' & fi`;
-  } else {
-    return `ccw tool exec update_module_claude '${params}'`;
-  }
+  // All trigger types now use the same queue-based command
+  // The queue handles batching (threshold: 5 paths, timeout: 5 min)
+  return `ccw tool exec memory_queue ${params}`;
 }
 
 async function submitHookWizard() {
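The escaping in the new `params` string is subtle: the generated bash command wraps the JSON argument in double quotes so `$CLAUDE_PROJECT_DIR` is expanded, which forces every inner JSON quote to be written as `\"`. The sketch below simulates what bash ultimately hands to the tool; the unescape/expansion step is a deliberate simplification of bash's real quoting rules, and the project path is a made-up value:

```javascript
// Build the command argument exactly as the wizard does, then simulate
// bash double-quote processing: strip the outer quotes, expand the
// variable, and turn each \" back into a plain ".
const tool = 'gemini';
const strategy = 'single-layer';
const params = `"{\\"action\\":\\"add\\",\\"path\\":\\"$CLAUDE_PROJECT_DIR\\",\\"tool\\":\\"${tool}\\",\\"strategy\\":\\"${strategy}\\"}"`;

const afterBash = params
  .slice(1, -1)                                       // outer quotes consumed by bash
  .replace(/\$CLAUDE_PROJECT_DIR/g, '/home/user/project') // variable expansion (hypothetical path)
  .replace(/\\"/g, '"');                              // \" -> " inside double quotes

const parsed = JSON.parse(afterBash); // the tool receives valid JSON
```

Single quotes (the old `update_module_claude` form) would have been simpler to escape, but bash does not expand `$CLAUDE_PROJECT_DIR` inside them, which is why the new template pays the `\"` cost.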
@@ -1190,6 +1190,9 @@ export {
  * - api-endpoint: Check LiteLLM endpoint configuration exists
  */
 export async function getCliToolsStatus(): Promise<Record<string, ToolAvailability>> {
+  const funcStart = Date.now();
+  debugLog('PERF', 'getCliToolsStatus START');
+
   // Default built-in tools
   const builtInTools = ['gemini', 'qwen', 'codex', 'claude', 'opencode'];
 
@@ -1202,6 +1205,7 @@ export async function getCliToolsStatus(): Promise<Record<string, ToolAvailabili
   }
   let toolsInfo: ToolInfo[] = builtInTools.map(name => ({ name, type: 'builtin' }));
 
+  const configLoadStart = Date.now();
   try {
     // Dynamic import to avoid circular dependencies
     const { loadClaudeCliTools } = await import('./claude-cli-tools.js');
@@ -1225,11 +1229,15 @@ export async function getCliToolsStatus(): Promise<Record<string, ToolAvailabili
     // Fallback to built-in tools if config load fails
     debugLog('cli-executor', `Using built-in tools (config load failed: ${(e as Error).message})`);
   }
+  debugLog('PERF', `Config load: ${Date.now() - configLoadStart}ms, tools: ${toolsInfo.length}`);
 
   const results: Record<string, ToolAvailability> = {};
+  const toolTimings: Record<string, number> = {};
 
+  const checksStart = Date.now();
   await Promise.all(toolsInfo.map(async (toolInfo) => {
     const { name, type, enabled, id } = toolInfo;
+    const toolStart = Date.now();
 
     // Check availability based on tool type
     if (type === 'cli-wrapper') {
@@ -1271,8 +1279,13 @@ export async function getCliToolsStatus(): Promise<Record<string, ToolAvailabili
       // For builtin: check system PATH availability
       results[name] = await checkToolAvailability(name);
     }
+
+    toolTimings[name] = Date.now() - toolStart;
   }));
 
+  debugLog('PERF', `Tool checks: ${Date.now() - checksStart}ms | Individual: ${JSON.stringify(toolTimings)}`);
+  debugLog('PERF', `getCliToolsStatus TOTAL: ${Date.now() - funcStart}ms`);
+
   return results;
 }
 
@@ -49,6 +49,14 @@ interface VenvStatusCache {
|
|||||||
let venvStatusCache: VenvStatusCache | null = null;
|
let venvStatusCache: VenvStatusCache | null = null;
|
||||||
const VENV_STATUS_TTL = 5 * 60 * 1000; // 5 minutes TTL
|
const VENV_STATUS_TTL = 5 * 60 * 1000; // 5 minutes TTL
|
||||||
|
|
||||||
|
// Semantic status cache with TTL (same as venv cache)
|
||||||
|
interface SemanticStatusCache {
|
||||||
|
status: SemanticStatus;
|
||||||
|
timestamp: number;
|
||||||
|
}
|
||||||
|
let semanticStatusCache: SemanticStatusCache | null = null;
|
||||||
|
const SEMANTIC_STATUS_TTL = 5 * 60 * 1000; // 5 minutes TTL
|
||||||
|
|
||||||
// Track running indexing process for cancellation
|
// Track running indexing process for cancellation
|
||||||
let currentIndexingProcess: ReturnType<typeof spawn> | null = null;
|
let currentIndexingProcess: ReturnType<typeof spawn> | null = null;
|
||||||
let currentIndexingAborted = false;
|
let currentIndexingAborted = false;
|
||||||
```diff
@@ -147,8 +155,12 @@ function clearVenvStatusCache(): void {
  * @returns Ready status
  */
 async function checkVenvStatus(force = false): Promise<ReadyStatus> {
+  const funcStart = Date.now();
+  console.log('[PERF][CodexLens] checkVenvStatus START');
+
   // Use cached result if available and not expired
   if (!force && venvStatusCache && (Date.now() - venvStatusCache.timestamp < VENV_STATUS_TTL)) {
+    console.log(`[PERF][CodexLens] checkVenvStatus CACHE HIT: ${Date.now() - funcStart}ms`);
     return venvStatusCache.status;
   }
 
@@ -156,6 +168,7 @@ async function checkVenvStatus(force = false): Promise<ReadyStatus> {
   if (!existsSync(CODEXLENS_VENV)) {
     const result = { ready: false, error: 'Venv not found' };
     venvStatusCache = { status: result, timestamp: Date.now() };
+    console.log(`[PERF][CodexLens] checkVenvStatus (no venv): ${Date.now() - funcStart}ms`);
     return result;
   }
 
@@ -163,12 +176,16 @@ async function checkVenvStatus(force = false): Promise<ReadyStatus> {
   if (!existsSync(VENV_PYTHON)) {
     const result = { ready: false, error: 'Python executable not found in venv' };
     venvStatusCache = { status: result, timestamp: Date.now() };
+    console.log(`[PERF][CodexLens] checkVenvStatus (no python): ${Date.now() - funcStart}ms`);
     return result;
   }
 
-  // Check codexlens is importable
+  // Check codexlens and core dependencies are importable
+  const spawnStart = Date.now();
+  console.log('[PERF][CodexLens] checkVenvStatus spawning Python...');
+
   return new Promise((resolve) => {
-    const child = spawn(VENV_PYTHON, ['-c', 'import codexlens; print(codexlens.__version__)'], {
+    const child = spawn(VENV_PYTHON, ['-c', 'import codexlens; import watchdog; print(codexlens.__version__)'], {
       stdio: ['ignore', 'pipe', 'pipe'],
       timeout: 10000,
     });
```
```diff
@@ -192,29 +209,54 @@ async function checkVenvStatus(force = false): Promise<ReadyStatus> {
       }
       // Cache the result
       venvStatusCache = { status: result, timestamp: Date.now() };
+      console.log(`[PERF][CodexLens] checkVenvStatus Python spawn: ${Date.now() - spawnStart}ms | TOTAL: ${Date.now() - funcStart}ms | ready: ${result.ready}`);
       resolve(result);
     });
 
     child.on('error', (err) => {
       const result = { ready: false, error: `Failed to check venv: ${err.message}` };
       venvStatusCache = { status: result, timestamp: Date.now() };
+      console.log(`[PERF][CodexLens] checkVenvStatus ERROR: ${Date.now() - funcStart}ms`);
       resolve(result);
     });
   });
 }
 
+/**
+ * Clear semantic status cache (call after install/uninstall operations)
+ */
+function clearSemanticStatusCache(): void {
+  semanticStatusCache = null;
+}
+
 /**
  * Check if semantic search dependencies are installed
+ * @param force - Force refresh cache (default: false)
  * @returns Semantic status
  */
-async function checkSemanticStatus(): Promise<SemanticStatus> {
+async function checkSemanticStatus(force = false): Promise<SemanticStatus> {
+  const funcStart = Date.now();
+  console.log('[PERF][CodexLens] checkSemanticStatus START');
+
+  // Use cached result if available and not expired
+  if (!force && semanticStatusCache && (Date.now() - semanticStatusCache.timestamp < SEMANTIC_STATUS_TTL)) {
+    console.log(`[PERF][CodexLens] checkSemanticStatus CACHE HIT: ${Date.now() - funcStart}ms`);
+    return semanticStatusCache.status;
+  }
+
   // First check if CodexLens is installed
   const venvStatus = await checkVenvStatus();
   if (!venvStatus.ready) {
-    return { available: false, error: 'CodexLens not installed' };
+    const result: SemanticStatus = { available: false, error: 'CodexLens not installed' };
+    semanticStatusCache = { status: result, timestamp: Date.now() };
+    console.log(`[PERF][CodexLens] checkSemanticStatus (no venv): ${Date.now() - funcStart}ms`);
+    return result;
   }
 
   // Check semantic module availability and accelerator info
+  const spawnStart = Date.now();
+  console.log('[PERF][CodexLens] checkSemanticStatus spawning Python...');
+
   return new Promise((resolve) => {
     const checkCode = `
 import sys
@@ -274,21 +316,31 @@ except Exception as e:
       const output = stdout.trim();
       try {
         const result = JSON.parse(output);
-        resolve({
+        console.log(`[PERF][CodexLens] checkSemanticStatus Python spawn: ${Date.now() - spawnStart}ms | TOTAL: ${Date.now() - funcStart}ms | available: ${result.available}`);
+        const status: SemanticStatus = {
           available: result.available || false,
           backend: result.backend,
           accelerator: result.accelerator || 'CPU',
           providers: result.providers || [],
           litellmAvailable: result.litellm_available || false,
           error: result.error
-        });
+        };
+        // Cache the result
+        semanticStatusCache = { status, timestamp: Date.now() };
+        resolve(status);
       } catch {
-        resolve({ available: false, error: output || stderr || 'Unknown error' });
+        console.log(`[PERF][CodexLens] checkSemanticStatus PARSE ERROR: ${Date.now() - funcStart}ms`);
+        const errorStatus: SemanticStatus = { available: false, error: output || stderr || 'Unknown error' };
+        semanticStatusCache = { status: errorStatus, timestamp: Date.now() };
+        resolve(errorStatus);
       }
     });
 
     child.on('error', (err) => {
-      resolve({ available: false, error: `Check failed: ${err.message}` });
+      console.log(`[PERF][CodexLens] checkSemanticStatus ERROR: ${Date.now() - funcStart}ms`);
+      const errorStatus: SemanticStatus = { available: false, error: `Check failed: ${err.message}` };
+      semanticStatusCache = { status: errorStatus, timestamp: Date.now() };
+      resolve(errorStatus);
     });
   });
 }
```
```diff
@@ -583,6 +635,7 @@ async function bootstrapWithUv(gpuMode: GpuMode = 'cpu'): Promise<BootstrapResul
 
   // Clear cache after successful installation
   clearVenvStatusCache();
+  clearSemanticStatusCache();
   console.log(`[CodexLens] Bootstrap with UV complete (${gpuMode} mode)`);
   return { success: true, message: `Installed with UV (${gpuMode} mode)` };
 }
@@ -878,6 +931,7 @@ async function bootstrapVenv(): Promise<BootstrapResult> {
 
     // Clear cache after successful installation
     clearVenvStatusCache();
+    clearSemanticStatusCache();
     return { success: true };
   } catch (err) {
     return { success: false, error: `Failed to install codexlens: ${(err as Error).message}` };
@@ -1631,6 +1685,7 @@ async function uninstallCodexLens(): Promise<BootstrapResult> {
   bootstrapChecked = false;
   bootstrapReady = false;
   clearVenvStatusCache();
+  clearSemanticStatusCache();
 
   console.log('[CodexLens] CodexLens uninstalled successfully');
   return { success: true, message: 'CodexLens uninstalled successfully' };
```
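The semantic-status cache added in the diff above mirrors the existing venv-status cache: a module-level `{ status, timestamp }` slot, a 5-minute TTL, and a `force` parameter to bypass it. A minimal standalone sketch of that pattern (names are illustrative; the real functions cache the results of spawned Python checks, not a counter):

```javascript
// TTL-cache pattern sketch: serve a cached status while fresh,
// recompute when expired or when the caller forces a refresh.
const TTL_MS = 5 * 60 * 1000; // 5 minutes, matching SEMANTIC_STATUS_TTL

let cache = null;      // { status, timestamp } or null
let computeCount = 0;  // counts how often the expensive path runs

function expensiveCheck() {
  computeCount += 1;
  return { available: true, accelerator: 'CPU' };
}

// `now` is injectable here so the expiry logic is testable without waiting
function checkStatus(force = false, now = Date.now()) {
  if (!force && cache && now - cache.timestamp < TTL_MS) {
    return cache.status; // cache hit
  }
  const status = expensiveCheck();
  cache = { status, timestamp: now };
  return status;
}

function clearCache() {
  cache = null; // the real module calls this after install/uninstall
}

const t0 = Date.now();
checkStatus(false, t0);              // miss -> compute (1)
checkStatus(false, t0 + 1000);       // fresh -> cache hit
checkStatus(false, t0 + TTL_MS);     // expired -> compute (2)
checkStatus(true, t0 + TTL_MS + 1);  // forced -> compute (3)
clearCache();
checkStatus(false, t0 + TTL_MS + 2); // cleared -> compute (4)
console.log(computeCount); // 4
```

The injectable `now` argument is only for the sketch; the production code reads `Date.now()` directly.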
```diff
@@ -30,6 +30,7 @@ import type { ProgressInfo } from './codex-lens.js';
 import { uiGeneratePreviewTool } from './ui-generate-preview.js';
 import { uiInstantiatePrototypesTool } from './ui-instantiate-prototypes.js';
 import { updateModuleClaudeTool } from './update-module-claude.js';
+import { memoryQueueTool } from './memory-update-queue.js';
 
 interface LegacyTool {
   name: string;
@@ -366,6 +367,7 @@ registerTool(toLegacyTool(skillContextLoaderMod));
 registerTool(uiGeneratePreviewTool);
 registerTool(uiInstantiatePrototypesTool);
 registerTool(updateModuleClaudeTool);
+registerTool(memoryQueueTool);
 
 // Export for external tool registration
 export { registerTool };
```
ccw/src/tools/memory-update-queue.js (new file, 421 lines):
```javascript
/**
 * Memory Update Queue Tool
 * Queue mechanism for batching CLAUDE.md updates
 *
 * Configuration:
 * - Threshold: 5 paths trigger update
 * - Timeout: 5 minutes auto-trigger
 * - Storage: ~/.claude/.memory-queue.json
 * - Deduplication: Same path only kept once
 */

import { existsSync, readFileSync, writeFileSync, mkdirSync } from 'fs';
import { join, dirname, resolve } from 'path';
import { homedir } from 'os';

// Configuration constants
const QUEUE_THRESHOLD = 5;
const QUEUE_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes
const QUEUE_FILE_PATH = join(homedir(), '.claude', '.memory-queue.json');

// In-memory timeout reference (for cross-call persistence, we track via file timestamp)
let scheduledTimeoutId = null;

/**
 * Ensure parent directory exists
 */
function ensureDir(filePath) {
  const dir = dirname(filePath);
  if (!existsSync(dir)) {
    mkdirSync(dir, { recursive: true });
  }
}

/**
 * Load queue from file
 * @returns {{ items: Array<{path: string, tool: string, strategy: string, addedAt: string}>, createdAt: string | null }}
 */
function loadQueue() {
  try {
    if (existsSync(QUEUE_FILE_PATH)) {
      const content = readFileSync(QUEUE_FILE_PATH, 'utf8');
      const data = JSON.parse(content);
      return {
        items: Array.isArray(data.items) ? data.items : [],
        createdAt: data.createdAt || null
      };
    }
  } catch (e) {
    console.error('[MemoryQueue] Failed to load queue:', e.message);
  }
  return { items: [], createdAt: null };
}

/**
 * Save queue to file
 * @param {{ items: Array<{path: string, tool: string, strategy: string, addedAt: string}>, createdAt: string | null }} data
 */
function saveQueue(data) {
  try {
    ensureDir(QUEUE_FILE_PATH);
    writeFileSync(QUEUE_FILE_PATH, JSON.stringify(data, null, 2), 'utf8');
  } catch (e) {
    console.error('[MemoryQueue] Failed to save queue:', e.message);
    throw e;
  }
}

/**
 * Normalize path for comparison (handle Windows/Unix differences)
 * @param {string} p
 * @returns {string}
 */
function normalizePath(p) {
  return resolve(p).replace(/\\/g, '/').toLowerCase();
}

/**
 * Add path to queue with deduplication
 * @param {string} path - Module path to update
 * @param {{ tool?: string, strategy?: string }} options
 * @returns {{ queued: boolean, queueSize: number, willFlush: boolean, message: string }}
 */
function addToQueue(path, options = {}) {
  const { tool = 'gemini', strategy = 'single-layer' } = options;
  const queue = loadQueue();
  const normalizedPath = normalizePath(path);
  const now = new Date().toISOString();

  // Check for duplicates
  const existingIndex = queue.items.findIndex(
    item => normalizePath(item.path) === normalizedPath
  );

  if (existingIndex !== -1) {
    // Update existing entry timestamp but keep it deduplicated
    queue.items[existingIndex].addedAt = now;
    queue.items[existingIndex].tool = tool;
    queue.items[existingIndex].strategy = strategy;
    saveQueue(queue);

    return {
      queued: false,
      queueSize: queue.items.length,
      willFlush: queue.items.length >= QUEUE_THRESHOLD,
      message: `Path already in queue (updated): ${path}`
    };
  }

  // Add new item
  queue.items.push({
    path,
    tool,
    strategy,
    addedAt: now
  });

  // Set createdAt if this is the first item
  if (!queue.createdAt) {
    queue.createdAt = now;
  }

  saveQueue(queue);

  const willFlush = queue.items.length >= QUEUE_THRESHOLD;

  // Schedule timeout if not already scheduled
  scheduleTimeout();

  return {
    queued: true,
    queueSize: queue.items.length,
    willFlush,
    message: willFlush
      ? `Queue threshold reached (${queue.items.length}/${QUEUE_THRESHOLD}), will flush`
      : `Added to queue (${queue.items.length}/${QUEUE_THRESHOLD})`
  };
}

/**
 * Get current queue status
 * @returns {{ queueSize: number, threshold: number, items: Array, timeoutMs: number | null, createdAt: string | null }}
 */
function getQueueStatus() {
  const queue = loadQueue();
  let timeUntilTimeout = null;

  if (queue.createdAt && queue.items.length > 0) {
    const createdTime = new Date(queue.createdAt).getTime();
    const elapsed = Date.now() - createdTime;
    timeUntilTimeout = Math.max(0, QUEUE_TIMEOUT_MS - elapsed);
  }

  return {
    queueSize: queue.items.length,
    threshold: QUEUE_THRESHOLD,
    items: queue.items,
    timeoutMs: QUEUE_TIMEOUT_MS,
    timeUntilTimeout,
    createdAt: queue.createdAt
  };
}

/**
 * Flush queue - execute batch update
 * @returns {Promise<{ success: boolean, processed: number, results: Array, errors: Array }>}
 */
async function flushQueue() {
  const queue = loadQueue();

  if (queue.items.length === 0) {
    return {
      success: true,
      processed: 0,
      results: [],
      errors: [],
      message: 'Queue is empty'
    };
  }

  // Clear timeout
  clearScheduledTimeout();

  // Import update_module_claude dynamically to avoid circular deps
  const { updateModuleClaudeTool } = await import('./update-module-claude.js');

  const results = [];
  const errors = [];

  // Group by tool and strategy for efficiency
  const groups = new Map();
  for (const item of queue.items) {
    const key = `${item.tool}:${item.strategy}`;
    if (!groups.has(key)) {
      groups.set(key, []);
    }
    groups.get(key).push(item);
  }

  // Process each group
  for (const [key, items] of groups) {
    const [tool, strategy] = key.split(':');
    console.log(`[MemoryQueue] Processing ${items.length} items with ${tool}/${strategy}`);

    for (const item of items) {
      try {
        const result = await updateModuleClaudeTool.execute({
          path: item.path,
          tool: item.tool,
          strategy: item.strategy
        });

        results.push({
          path: item.path,
          success: result.success !== false,
          result
        });
      } catch (e) {
        console.error(`[MemoryQueue] Failed to update ${item.path}:`, e.message);
        errors.push({
          path: item.path,
          error: e.message
        });
      }
    }
  }

  // Clear queue after processing
  saveQueue({ items: [], createdAt: null });

  return {
    success: errors.length === 0,
    processed: queue.items.length,
    results,
    errors,
    message: `Processed ${results.length} items, ${errors.length} errors`
  };
}

/**
 * Schedule timeout for auto-flush
 */
function scheduleTimeout() {
  // We use file-based timeout tracking for persistence across process restarts
  // The actual timeout check happens on next add/status call
  const queue = loadQueue();

  if (!queue.createdAt || queue.items.length === 0) {
    return;
  }

  const createdTime = new Date(queue.createdAt).getTime();
  const elapsed = Date.now() - createdTime;

  if (elapsed >= QUEUE_TIMEOUT_MS) {
    // Timeout already exceeded, should flush
    console.log('[MemoryQueue] Timeout exceeded, auto-flushing');
    // Don't await here to avoid blocking
    flushQueue().catch(e => {
      console.error('[MemoryQueue] Auto-flush failed:', e.message);
    });
  } else if (!scheduledTimeoutId) {
    // Schedule in-memory timeout for current process
    const remaining = QUEUE_TIMEOUT_MS - elapsed;
    scheduledTimeoutId = setTimeout(() => {
      scheduledTimeoutId = null;
      const currentQueue = loadQueue();
      if (currentQueue.items.length > 0) {
        console.log('[MemoryQueue] Timeout reached, auto-flushing');
        flushQueue().catch(e => {
          console.error('[MemoryQueue] Auto-flush failed:', e.message);
        });
      }
    }, remaining);

    // Prevent timeout from keeping process alive
    if (scheduledTimeoutId.unref) {
      scheduledTimeoutId.unref();
    }
  }
}

/**
 * Clear scheduled timeout
 */
function clearScheduledTimeout() {
  if (scheduledTimeoutId) {
    clearTimeout(scheduledTimeoutId);
    scheduledTimeoutId = null;
  }
}

/**
 * Check if timeout has expired and auto-flush if needed
 * @returns {Promise<{ expired: boolean, flushed: boolean, result?: object }>}
 */
async function checkTimeout() {
  const queue = loadQueue();

  if (!queue.createdAt || queue.items.length === 0) {
    return { expired: false, flushed: false };
  }

  const createdTime = new Date(queue.createdAt).getTime();
  const elapsed = Date.now() - createdTime;

  if (elapsed >= QUEUE_TIMEOUT_MS) {
    console.log('[MemoryQueue] Timeout expired, triggering flush');
    const result = await flushQueue();
    return { expired: true, flushed: true, result };
  }

  return { expired: false, flushed: false };
}

/**
 * Main execute function for tool interface
 * @param {Record<string, unknown>} params
 * @returns {Promise<unknown>}
 */
async function execute(params) {
  const { action, path, tool = 'gemini', strategy = 'single-layer' } = params;

  switch (action) {
    case 'add':
      if (!path) {
        throw new Error('Parameter "path" is required for add action');
      }
      // Check timeout first
      const timeoutCheck = await checkTimeout();
      if (timeoutCheck.flushed) {
        // Queue was flushed due to timeout, add to fresh queue
        const result = addToQueue(path, { tool, strategy });
        return {
          ...result,
          timeoutFlushed: true,
          flushResult: timeoutCheck.result
        };
      }

      const addResult = addToQueue(path, { tool, strategy });

      // Auto-flush if threshold reached
      if (addResult.willFlush) {
        const flushResult = await flushQueue();
        return {
          ...addResult,
          flushed: true,
          flushResult
        };
      }

      return addResult;

    case 'status':
      // Check timeout first
      await checkTimeout();
      return getQueueStatus();

    case 'flush':
      return await flushQueue();

    default:
      throw new Error(`Unknown action: ${action}. Valid actions: add, status, flush`);
  }
}

/**
 * Tool Definition
 */
export const memoryQueueTool = {
  name: 'memory_queue',
  description: `Memory update queue management. Batches CLAUDE.md updates for efficiency.

Actions:
- add: Add path to queue (auto-flushes at threshold ${QUEUE_THRESHOLD} or timeout ${QUEUE_TIMEOUT_MS / 1000}s)
- status: Get queue status
- flush: Immediately execute all queued updates`,
  parameters: {
    type: 'object',
    properties: {
      action: {
        type: 'string',
        enum: ['add', 'status', 'flush'],
        description: 'Queue action to perform'
      },
      path: {
        type: 'string',
        description: 'Module directory path (required for add action)'
      },
      tool: {
        type: 'string',
        enum: ['gemini', 'qwen', 'codex'],
        description: 'CLI tool to use (default: gemini)',
        default: 'gemini'
      },
      strategy: {
        type: 'string',
        enum: ['single-layer', 'multi-layer'],
        description: 'Update strategy (default: single-layer)',
        default: 'single-layer'
      }
    },
    required: ['action']
  },
  execute
};

// Export individual functions for direct use
export {
  loadQueue,
  saveQueue,
  addToQueue,
  getQueueStatus,
  flushQueue,
  scheduleTimeout,
  clearScheduledTimeout,
  checkTimeout,
  QUEUE_THRESHOLD,
  QUEUE_TIMEOUT_MS,
  QUEUE_FILE_PATH
};
```
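The queue above dedups on a normalized path and auto-flushes once 5 paths accumulate. A rough standalone simulation of just that bookkeeping (no file persistence and no `update_module_claude` call; the normalization here also skips the `resolve()` step, since that is cwd-dependent, so treat it as an approximation):

```javascript
// Simulates memory_queue's dedup + threshold rules in memory only.
const THRESHOLD = 5; // mirrors QUEUE_THRESHOLD

function makeQueue() {
  return { items: [] };
}

function add(queue, path) {
  // Same normalization idea as normalizePath, minus resolve()
  const key = path.replace(/\\/g, '/').toLowerCase();
  const existing = queue.items.find(i => i.key === key);
  if (existing) {
    existing.addedAt = Date.now(); // refresh timestamp, keep a single entry
    return { queued: false, queueSize: queue.items.length, willFlush: false };
  }
  queue.items.push({ key, path, addedAt: Date.now() });
  const willFlush = queue.items.length >= THRESHOLD;
  return { queued: true, queueSize: queue.items.length, willFlush };
}

const q = makeQueue();
add(q, 'src/auth');
add(q, 'SRC/auth');   // duplicate after normalization -> not queued again
add(q, 'src/api');
add(q, 'src/db');
add(q, 'src/ui');
const last = add(q, 'src/tools'); // fifth distinct path -> threshold reached
console.log(q.items.length, last.willFlush); // 5 true
```

In the real tool, `willFlush` triggers an immediate `flushQueue()` inside the `add` action, and a 5-minute timeout flushes partially filled queues.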