feat: Enhance search functionality with quality tiers and scoped indexing

- Updated `search_code` function to include a `quality` parameter for search quality tiers: "fast", "balanced", "thorough", and "auto".
- Introduced `search_scope` function to limit search results to a specific directory scope.
- Added `index_scope` function for indexing a specific directory without re-indexing the entire project.
- Refactored `SearchPipeline` to support quality-based routing in the `search` method.
- Implemented `Shard` and `ShardManager` classes to manage multiple index shards with LRU eviction and efficient file routing.
- Added debounce functionality in `IncrementalIndexer` to batch file events and reduce redundant processing.
- Enhanced `FileWatcher` to integrate with `IncrementalIndexer` for improved event handling.
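The debounce mentioned above can be sketched as follows. This is a minimal illustration of event batching, not the `IncrementalIndexer` implementation from this commit; the function name and the 200 ms default window are assumptions:

```javascript
// Minimal debounced batcher: collect file events during a quiet period,
// then flush them as one deduplicated batch so rapid saves trigger a
// single re-index instead of one per filesystem event.
function createDebouncedBatcher(flush, delayMs = 200) {
  let pending = new Set();
  let timer = null;
  return function onEvent(filePath) {
    pending.add(filePath); // Set dedupes repeated saves of the same file
    if (timer !== null) clearTimeout(timer); // restart the quiet period
    timer = setTimeout(() => {
      const batch = [...pending];
      pending = new Set();
      timer = null;
      flush(batch);
    }, delayMs);
  };
}
```

A `FileWatcher` would call `onEvent` once per filesystem event; `flush` receives each quiet-period batch.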
Author: catlog22
Date: 2026-03-19 17:47:53 +08:00
Parent: 54071473fc
Commit: 18aff260a0
46 changed files with 1537 additions and 658 deletions


@@ -1,89 +0,0 @@
---
prefix: ANALYZE
inner_loop: false
additional_prefixes: [ANALYZE-fix]
message_types:
success: analysis_ready
error: error
---
# Deep Analyst
Perform deep multi-perspective analysis on exploration results via CLI tools. Generate structured insights, discussion points, and recommendations with confidence levels.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Exploration results | `<session>/explorations/*.json` | Yes |
1. Extract session path, topic, perspective, dimensions from task description
2. Detect direction-fix mode: `type:\s*direction-fix` with `adjusted_focus:\s*(.+)`
3. Load corresponding exploration results:
| Condition | Source |
|-----------|--------|
| Direction fix | Read ALL exploration files, merge context |
| Normal ANALYZE-N | Read exploration matching number N |
| Fallback | Read first available exploration file |
4. Select CLI tool by perspective:
| Perspective | CLI Tool | Rule Template |
|-------------|----------|---------------|
| technical | gemini | analysis-analyze-code-patterns |
| architectural | claude | analysis-review-architecture |
| business | codex | analysis-analyze-code-patterns |
| domain_expert | gemini | analysis-analyze-code-patterns |
| direction-fix (any) | gemini | analysis-diagnose-bug-root-cause |
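The routing table above can be sketched as a lookup with a direction-fix override. This is an illustration of the selection logic only; the fallback to the technical route for unknown perspectives is an assumption, not stated in the spec:

```javascript
// Perspective-to-CLI routing, mirroring the table above.
const PERSPECTIVE_ROUTES = {
  technical:     { tool: "gemini", rule: "analysis-analyze-code-patterns" },
  architectural: { tool: "claude", rule: "analysis-review-architecture" },
  business:      { tool: "codex",  rule: "analysis-analyze-code-patterns" },
  domain_expert: { tool: "gemini", rule: "analysis-analyze-code-patterns" },
};

function selectCliRoute(perspective, isDirectionFix) {
  // Direction-fix mode overrides every perspective.
  if (isDirectionFix) {
    return { tool: "gemini", rule: "analysis-diagnose-bug-root-cause" };
  }
  // Assumed fallback: treat unrecognized perspectives as technical.
  return PERSPECTIVE_ROUTES[perspective] ?? PERSPECTIVE_ROUTES.technical;
}
```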
## Phase 3: Deep Analysis via CLI
Build analysis prompt with exploration context:
```
PURPOSE: <Normal: "Deep analysis of '<topic>' from <perspective> perspective">
<Fix: "Supplementary analysis with adjusted focus on '<adjusted_focus>'">
Success: Actionable insights with confidence levels and evidence references
PRIOR EXPLORATION CONTEXT:
- Key files: <top 5-8 files from exploration>
- Patterns found: <top 3-5 patterns>
- Key findings: <top 3-5 findings>
TASK:
- <perspective-specific analysis tasks>
- Generate structured findings with confidence levels (high/medium/low)
- Identify discussion points requiring user input
- List open questions needing further exploration
MODE: analysis
CONTEXT: @**/* | Topic: <topic>
EXPECTED: Structured analysis with: key_insights, key_findings, discussion_points, open_questions, recommendations
CONSTRAINTS: Focus on <perspective> perspective | <dimensions>
```
Execute: `ccw cli -p "<prompt>" --tool <cli-tool> --mode analysis --rule <rule>`
## Phase 4: Result Aggregation
Write analysis output to `<session>/analyses/analysis-<num>.json`:
```json
{
"perspective": "<perspective>",
"dimensions": ["<dim1>", "<dim2>"],
"is_direction_fix": false,
"key_insights": [{"insight": "...", "confidence": "high", "evidence": "file:line"}],
"key_findings": [{"finding": "...", "file_ref": "...", "impact": "..."}],
"discussion_points": ["..."],
"open_questions": ["..."],
"recommendations": [{"action": "...", "rationale": "...", "priority": "high"}],
"_metadata": {"cli_tool": "...", "cli_rule": "...", "perspective": "...", "timestamp": "..."}
}
```
Update `<session>/wisdom/.msg/meta.json` under `analyst` namespace:
- Read existing -> merge `{ "analyst": { perspective, insight_count, finding_count, is_direction_fix } }` -> write back


@@ -1,106 +0,0 @@
---
prefix: DISCUSS
inner_loop: false
message_types:
success: discussion_processed
error: error
---
# Discussant
Process analysis results and user feedback. Execute direction adjustments, deep-dive explorations, or targeted Q&A based on discussion type. Update discussion timeline.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Analysis results | `<session>/analyses/*.json` | Yes |
| Exploration results | `<session>/explorations/*.json` | No |
1. Extract session path, topic, round, discussion type, user feedback:
| Field | Pattern | Default |
|-------|---------|---------|
| sessionFolder | `session:\s*(.+)` | required |
| topic | `topic:\s*(.+)` | required |
| round | `round:\s*(\d+)` | 1 |
| discussType | `type:\s*(.+)` | "initial" |
| userFeedback | `user_feedback:\s*(.+)` | empty |
2. Read all analysis and exploration results
3. Aggregate current findings, insights, open questions
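The extraction patterns and defaults in the table above can be sketched as a small parser (an illustration, assuming required fields yield `null` when absent):

```javascript
// Parse DISCUSS task fields using the patterns from the table above.
function parseDiscussFields(description) {
  const grab = (re) => {
    const m = description.match(re);
    return m ? m[1].trim() : null;
  };
  const sessionFolder = grab(/session:\s*(.+)/);
  const topic = grab(/topic:\s*(.+)/);
  if (!sessionFolder || !topic) return null; // both are required
  return {
    sessionFolder,
    topic,
    round: parseInt(grab(/round:\s*(\d+)/) ?? "1", 10), // default 1
    discussType: grab(/type:\s*(.+)/) ?? "initial",     // default "initial"
    userFeedback: grab(/user_feedback:\s*(.+)/) ?? "",  // default empty
  };
}
```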
## Phase 3: Discussion Processing
Select strategy by discussion type:
| Type | Mode | Description |
|------|------|-------------|
| initial | inline | Aggregate all analyses: convergent themes, conflicts, top discussion points |
| deepen | cli | Use CLI tool to investigate open questions deeper |
| direction-adjusted | cli | Re-analyze via `ccw cli` from adjusted perspective |
| specific-questions | cli | Targeted exploration answering user questions |
**initial**: Cross-perspective summary -- identify convergent themes, conflicting views, top 5 discussion points and open questions from all analyses.
**deepen**: Use CLI tool for deep investigation:
```javascript
Bash({
command: `ccw cli -p "PURPOSE: Investigate open questions and uncertain insights; success = evidence-based findings
TASK: • Focus on open questions: <questions> • Find supporting evidence • Validate uncertain insights • Document findings
MODE: analysis
CONTEXT: @**/* | Memory: Session <session-folder>, previous analyses
EXPECTED: JSON output with investigation results | Write to <session>/discussions/deepen-<num>.json
CONSTRAINTS: Evidence-based analysis only
" --tool gemini --mode analysis --rule analysis-trace-code-execution`,
run_in_background: false
})
```
**direction-adjusted**: CLI re-analysis from adjusted focus:
```javascript
Bash({
command: `ccw cli -p "Re-analyze '<topic>' with adjusted focus on '<userFeedback>'" --tool gemini --mode analysis`,
run_in_background: false
})
```
**specific-questions**: Use CLI tool for targeted Q&A:
```javascript
Bash({
command: `ccw cli -p "PURPOSE: Answer specific user questions about <topic>; success = clear, evidence-based answers
TASK: • Answer: <userFeedback> • Provide code references • Explain context
MODE: analysis
CONTEXT: @**/* | Memory: Session <session-folder>
EXPECTED: JSON output with answers and evidence | Write to <session>/discussions/questions-<num>.json
CONSTRAINTS: Direct answers with code references
" --tool gemini --mode analysis`,
run_in_background: false
})
```
## Phase 4: Update Discussion Timeline
1. Write round content to `<session>/discussions/discussion-round-<num>.json`:
```json
{
"round": 1, "type": "initial", "user_feedback": "...",
"updated_understanding": { "confirmed": [], "corrected": [], "new_insights": [] },
"new_findings": [], "new_questions": [], "timestamp": "..."
}
```
2. Append round section to `<session>/discussion.md`:
```markdown
### Round <N> - Discussion (<timestamp>)
#### Type: <discussType>
#### User Input: <userFeedback or "(Initial discussion round)">
#### Updated Understanding
**Confirmed**: <list> | **Corrected**: <list> | **New Insights**: <list>
#### New Findings / Open Questions
```
Update `<session>/wisdom/.msg/meta.json` under `discussant` namespace:
- Read existing -> merge `{ "discussant": { round, type, new_insight_count, corrected_count } }` -> write back


@@ -1,73 +0,0 @@
---
prefix: EXPLORE
inner_loop: false
message_types:
success: exploration_ready
error: error
---
# Codebase Explorer
Explore codebase structure through cli-explore-agent, collecting structured context (files, patterns, findings) for downstream analysis. One explorer per analysis perspective.
## Phase 2: Context & Scope Assessment
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
1. Extract session path, topic, perspective, dimensions from task description:
| Field | Pattern | Default |
|-------|---------|---------|
| sessionFolder | `session:\s*(.+)` | required |
| topic | `topic:\s*(.+)` | required |
| perspective | `perspective:\s*(.+)` | "general" |
| dimensions | `dimensions:\s*(.+)` | "general" |
2. Determine exploration number from task subject (EXPLORE-N)
3. Build exploration strategy by perspective:
| Perspective | Focus | Search Depth |
|-------------|-------|-------------|
| general | Overall codebase structure and patterns | broad |
| technical | Implementation details, code patterns, feasibility | medium |
| architectural | System design, module boundaries, interactions | broad |
| business | Business logic, domain models, value flows | medium |
| domain_expert | Domain patterns, standards, best practices | deep |
## Phase 3: Codebase Exploration
Use CLI tool for codebase exploration:
```javascript
Bash({
command: `ccw cli -p "PURPOSE: Explore codebase for <topic> from <perspective> perspective; success = structured findings with relevant files and patterns
TASK: • Run module depth analysis • Search for topic-related patterns • Identify key files and their relationships • Extract architectural insights
MODE: analysis
CONTEXT: @**/* | Memory: Session <session-folder>, perspective <perspective>
EXPECTED: JSON output with: relevant_files (path, relevance, summary), patterns, key_findings, module_map, questions_for_analysis, _metadata (perspective, search_queries, timestamp)
CONSTRAINTS: Focus on <perspective> angle - <strategy.focus> | Write to <session>/explorations/exploration-<num>.json
" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`,
run_in_background: false
})
```
**ACE fallback** (when CLI produces no output):
```javascript
mcp__ace-tool__search_context({ project_root_path: ".", query: "<topic> <perspective>" })
```
## Phase 4: Result Validation
| Check | Method | Action on Failure |
|-------|--------|-------------------|
| Output file exists | Read output path | Create empty result, run ACE fallback |
| Has relevant_files | Array length > 0 | Trigger ACE supplementary search |
| Has key_findings | Array length > 0 | Note partial results, proceed |
Write validated exploration to `<session>/explorations/exploration-<num>.json`.
Update `<session>/wisdom/.msg/meta.json` under `explorer` namespace:
- Read existing -> merge `{ "explorer": { perspective, file_count, finding_count } }` -> write back


@@ -1,77 +0,0 @@
---
prefix: SYNTH
inner_loop: false
message_types:
success: synthesis_ready
error: error
---
# Synthesizer
Integrate all explorations, analyses, and discussions into final conclusions. Cross-perspective theme extraction, conflict resolution, evidence consolidation, and recommendation prioritization. Pure integration role -- no external tools or CLI calls.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| All artifacts | `<session>/explorations/*.json`, `analyses/*.json`, `discussions/*.json` | Yes |
| Decision trail | From wisdom/.msg/meta.json | No |
1. Extract session path and topic from task description
2. Read all exploration, analysis, and discussion round files
3. Load decision trail and current understanding from meta.json
4. Select synthesis strategy:
| Condition | Strategy |
|-----------|----------|
| Single analysis, no discussions | simple (Quick mode summary) |
| Multiple analyses, >2 discussion rounds | deep (track evolution) |
| Default | standard (cross-perspective integration) |
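The strategy selection above reduces to two checks with a default (a sketch reading the thresholds directly from the table):

```javascript
// Select synthesis strategy from artifact counts, per the table above.
function selectSynthesisStrategy(analysisCount, discussionRounds) {
  if (analysisCount === 1 && discussionRounds === 0) return "simple"; // Quick mode summary
  if (analysisCount > 1 && discussionRounds > 2) return "deep";       // track evolution
  return "standard"; // cross-perspective integration
}
```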
## Phase 3: Cross-Perspective Synthesis
Execute synthesis across four dimensions:
**1. Theme Extraction**: Identify convergent themes across all analysis perspectives. Cluster insights by similarity, rank by cross-perspective confirmation count.
**2. Conflict Resolution**: Identify contradictions between perspectives. Present both sides with trade-off analysis when irreconcilable.
**3. Evidence Consolidation**: Deduplicate findings, aggregate by file reference. Map evidence to conclusions with confidence levels:
| Level | Criteria |
|-------|----------|
| High | Multiple sources confirm, strong evidence |
| Medium | Single source or partial evidence |
| Low | Speculative, needs verification |
**4. Recommendation Prioritization**: Sort all recommendations by priority (high > medium > low), deduplicate, cap at 10.
Integrate decision trail from discussion rounds into final narrative.
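The prioritization step can be sketched as one sort-filter-slice pass. This is an illustration; deduplicating by case-insensitive `action` text is an assumption about what counts as a duplicate:

```javascript
// Sort recommendations high > medium > low, drop duplicate actions, cap at 10.
function prioritizeRecommendations(recs, cap = 10) {
  const rank = { high: 0, medium: 1, low: 2 };
  const seen = new Set();
  return recs
    .slice() // leave the input untouched
    .sort((a, b) => (rank[a.priority] ?? 3) - (rank[b.priority] ?? 3))
    .filter((r) => {
      const key = r.action.trim().toLowerCase(); // assumed dedupe key
      if (seen.has(key)) return false;
      seen.add(key);
      return true;
    })
    .slice(0, cap);
}
```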
## Phase 4: Write Conclusions
1. Write `<session>/conclusions.json`:
```json
{
"session_id": "...", "topic": "...", "completed": "ISO-8601",
"summary": "Executive summary...",
"key_conclusions": [{"point": "...", "evidence": "...", "confidence": "high"}],
"recommendations": [{"action": "...", "rationale": "...", "priority": "high"}],
"open_questions": ["..."],
"decision_trail": [{"round": 1, "decision": "...", "context": "..."}],
"cross_perspective_synthesis": { "convergent_themes": [], "conflicts_resolved": [], "unique_contributions": [] },
"_metadata": { "explorations": 3, "analyses": 3, "discussions": 2, "strategy": "standard" }
}
```
2. Append conclusions section to `<session>/discussion.md`:
```markdown
## Conclusions
### Summary / Key Conclusions / Recommendations / Remaining Questions
## Decision Trail / Current Understanding (Final) / Session Statistics
```
Update `<session>/wisdom/.msg/meta.json` under `synthesizer` namespace:
- Read existing -> merge `{ "synthesizer": { conclusion_count, recommendation_count, open_question_count } }` -> write back


@@ -40,18 +40,28 @@ MAX_ROUNDS = pipeline_mode === 'deep' ? 5
Triggered when a worker sends completion message (via SendMessage callback).
1. Parse message to identify role, then resolve completed tasks:
**Role detection** (from message tag at start of body):
| Message starts with | Role | Handler |
|---------------------|------|---------|
| `[explorer]` | explorer | handleCallback |
| `[analyst]` | analyst | handleCallback |
| `[discussant]` | discussant | handleCallback |
| `[synthesizer]` | synthesizer | handleCallback |
| `[supervisor]` | supervisor | Log checkpoint result, verify CHECKPOINT task completed, proceed to handleSpawnNext |
**Task ID resolution** (do NOT parse from message — use TaskList):
- Call `TaskList()` and find tasks matching the detected role's prefix
- Tasks with status `completed` that were not previously tracked = newly completed tasks
- This is reliable even when a worker reports multiple tasks (inner_loop) or when message format varies
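The resolution step above can be sketched as a pure filter over the task list (a sketch: the task shape with `id`, `subject`, and `status` fields is assumed from the surrounding examples):

```javascript
// Map each role to its task-subject prefix, per the role table above.
const ROLE_PREFIX = {
  explorer: "EXPLORE-",
  analyst: "ANALYZE-",
  discussant: "DISCUSS-",
  synthesizer: "SYNTH-",
};

// From the full task list, pick completed tasks for a role that were
// not already tracked: these are the newly completed tasks.
function newlyCompletedTasks(tasks, role, trackedIds) {
  const prefix = ROLE_PREFIX[role];
  return tasks.filter(
    (t) =>
      t.subject.startsWith(prefix) &&
      t.status === "completed" &&
      !trackedIds.has(t.id)
  );
}
```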
2. Verify task completion (worker already marks completed in Phase 5):
```
TaskGet({ taskId: "<task-id>" })
// If still "in_progress" (worker failed to mark) → fallback:
TaskUpdate({ taskId: "<task-id>", status: "completed" })
```
@@ -112,7 +122,7 @@ ELSE:
|----------|--------|
| "Continue deeper" | Create new DISCUSS-`<N+1>` task (pending, no blockedBy). Record decision in discussion.md. Proceed to handleSpawnNext |
| "Adjust direction" | AskUserQuestion for new focus. Create ANALYZE-fix-`<N>` task (pending). Create DISCUSS-`<N+1>` task (pending, blockedBy ANALYZE-fix-`<N>`). Record direction change in discussion.md. Proceed to handleSpawnNext |
| "Done" | Check if SYNTH-001 already exists (from dispatch): if yes, ensure blockedBy is updated to reference last DISCUSS task; if no, create SYNTH-001 (pending, blockedBy last DISCUSS). Record decision in discussion.md. Proceed to handleSpawnNext |
**Dynamic task creation templates**:
@@ -160,8 +170,11 @@ InnerLoop: false"
TaskUpdate({ taskId: "ANALYZE-fix-<N>", owner: "analyst" })
```
SYNTH-001 (created dynamically — check existence first):
```
// Guard: only create if SYNTH-001 doesn't exist yet (dispatch may have pre-created it)
const existingSynth = TaskList().find(t => t.subject === 'SYNTH-001')
if (!existingSynth) {
TaskCreate({
subject: "SYNTH-001",
description: "PURPOSE: Integrate all analysis into final conclusions | Success: Executive summary with recommendations
@@ -179,6 +192,8 @@ CONSTRAINTS: Pure integration, no new exploration
---
InnerLoop: false"
})
}
// Always update blockedBy to reference the last DISCUSS task (whether pre-existing or newly created)
TaskUpdate({ taskId: "SYNTH-001", addBlockedBy: ["<last-DISCUSS-task-id>"], owner: "synthesizer" })
```
@@ -211,10 +226,10 @@ Find and spawn the next ready tasks.
| Task Prefix | Role | Role Spec |
|-------------|------|-----------|
| `EXPLORE-*` | explorer | `<skill_root>/roles/explorer/role.md` |
| `ANALYZE-*` | analyst | `<skill_root>/roles/analyst/role.md` |
| `DISCUSS-*` | discussant | `<skill_root>/roles/discussant/role.md` |
| `SYNTH-*` | synthesizer | `<skill_root>/roles/synthesizer/role.md` |
3. Spawn team-worker for each ready task:
@@ -227,7 +242,7 @@ Agent({
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: ultra-analyze
@@ -298,11 +313,11 @@ Triggered when all pipeline tasks are completed.
| deep | All EXPLORE + ANALYZE + all DISCUSS-N + SYNTH-001 completed |
1. Verify all tasks completed. If any not completed, return to handleSpawnNext
2. If all completed, **inline-execute coordinator Phase 5** (shutdown workers → report → completion action). Do NOT STOP here — continue directly into Phase 5 within the same turn.
## Phase 4: State Persistence
After every handler execution **except handleComplete**:
1. Update session.json with current state:
- `discussion_round`: current round count
@@ -311,6 +326,8 @@ After every handler execution:
2. Verify task list consistency (no orphan tasks, no broken dependencies)
3. **STOP** and wait for next event
> **handleComplete exception**: handleComplete does NOT STOP — it transitions directly to coordinator Phase 5.
## Error Handling
| Scenario | Resolution |


@@ -44,13 +44,21 @@ When coordinator is invoked, detect invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message content starts with `[explorer]`, `[analyst]`, `[discussant]`, or `[synthesizer]` (role tag at beginning of message body) | -> handleCallback (monitor.md) |
| Supervisor callback | Message content starts with `[supervisor]` | -> handleSupervisorReport (log checkpoint result, proceed to handleSpawnNext if tasks unblocked) |
| Idle notification | System notification that a teammate went idle (does NOT start with a role tag — typically says "Agent X is now idle") | -> **IGNORE** (do not handleCallback; idle is normal after every turn) |
| Shutdown response | Message content is a JSON object containing `shutdown_response` (parse as structured data, not string) | -> handleShutdownResponse (see Phase 5) |
| Status check | Arguments contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks have status "completed" | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session exists | -> Phase 0 |
| New session | None of above | -> Phase 1 |
**Message format discrimination**:
- **String messages starting with `[<role>]`**: Worker/supervisor completion reports → route to handleCallback or handleSupervisorReport
- **JSON object messages** (contain `type:` field): Structured protocol messages (shutdown_response) → route by `type` field
- **Other strings without role tags**: System idle notifications → IGNORE
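The discrimination rules above amount to a small router. The sketch below assumes structured messages arrive as parsed objects and everything else as plain strings; it is illustrative, not the coordinator's actual dispatch code:

```javascript
// Route an incoming coordinator message per the discrimination rules above.
function routeMessage(message) {
  // JSON object messages: route by their `type` field.
  if (typeof message === "object" && message !== null && message.type) {
    return message.type === "shutdown_response" ? "handleShutdownResponse" : "unknown";
  }
  const text = String(message);
  if (text.startsWith("[supervisor]")) return "handleSupervisorReport";
  // Worker role tag at the very start of the body.
  if (/^\[(explorer|analyst|discussant|synthesizer)\]/.test(text)) {
    return "handleCallback";
  }
  // Untagged strings (e.g. "Agent X is now idle") are ignored.
  return "ignore";
}
```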
For callback/check/resume/complete: load `@commands/monitor.md` and execute matched handler, then STOP.
### Router Implementation
@@ -167,7 +175,31 @@ All subsequent coordination is handled by `commands/monitor.md` handlers trigger
---
## Phase 5: Shutdown Workers + Report + Completion Action
### Shutdown All Workers
Before reporting, gracefully shut down all active teammates. This is a **multi-turn** process:
1. Read team config: `~/.claude/teams/ultra-analyze/config.json`
2. Build shutdown tracking list: `pending_shutdown = [<all member names except coordinator>]`
3. For each member in pending_shutdown, send shutdown request:
```javascript
SendMessage({
to: "<member-name>",
message: { type: "shutdown_request", reason: "Pipeline complete" }
})
```
4. **STOP** — wait for responses. Each `shutdown_response` triggers a new coordinator turn.
5. On each subsequent turn (shutdown_response received):
- Remove responder from `pending_shutdown`
- If `pending_shutdown` is empty → proceed to **Report** section below
- If not empty → **STOP** again, wait for remaining responses
6. If a member is unresponsive after 2 follow-ups, remove from tracking and proceed
**Note**: Workers that completed Phase 5-F and reached STOP may have already terminated. SendMessage to a terminated agent is silently ignored — this is safe. Only resident agents (e.g., supervisor) require explicit shutdown.
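The `pending_shutdown` bookkeeping across turns can be sketched as a tiny tracker (a hypothetical helper, not part of the coordinator spec):

```javascript
// Track the multi-turn shutdown handshake: remove each responder and
// report whether every member has acknowledged.
function createShutdownTracker(memberNames) {
  const pending = new Set(memberNames);
  return {
    // Returns true once pending_shutdown is empty (proceed to Report).
    acknowledge(name) {
      pending.delete(name);
      return pending.size === 0;
    },
    pending: () => [...pending],
  };
}
```

Each `shutdown_response` turn calls `acknowledge(<responder>)`; unresponsive members can be dropped with the same call after the follow-up limit.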
### Report
1. Load session state -> count completed tasks, calculate duration
2. List deliverables: