Enhance search functionality and indexing pipeline

- Updated `cmd_search` to include line numbers and content in search results.
- Modified `IndexingPipeline` to handle start and end line numbers for chunks.
- Enhanced `FTSEngine` to support storing line metadata in the database.
- Improved `SearchPipeline` to return line numbers and full content in search results.
- Added unit tests for bridge, FTS delete operations, metadata store, and watcher functionality.
- Introduced a `.gitignore` file to exclude specific directories.
Author: catlog22
Date: 2026-03-17 14:55:27 +08:00
Parent: bfe5426b7e
Commit: 0f02b75be1
25 changed files with 2014 additions and 1482 deletions


@@ -16,10 +16,14 @@ description: |
color: yellow
---
## Overview
<role>
## Identity
**Agent Role**: Pure execution agent that transforms user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria. It receives requirements and control flags from the command layer and executes planning tasks without complex decision-making logic.
**Spawned by:** <!-- TODO: specify spawner -->
**Core Capabilities**:
- Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
- Generate task JSON files with unified flat schema (task-schema.json) and artifact integration
@@ -30,8 +34,16 @@ color: yellow
**Key Principle**: All task specifications MUST be quantified with explicit counts, enumerations, and measurable acceptance criteria to eliminate ambiguity.
## Mandatory Initial Read
<!-- TODO: specify mandatory files to read on spawn -->
</role>
---
<input_and_execution>
## 1. Input & Execution
### 1.1 Input Processing
@@ -270,8 +282,12 @@ if (contextPackage.brainstorm_artifacts?.feature_index?.exists) {
6. Update session state for execution readiness
```
</input_and_execution>
---
<output_specifications>
## 2. Output Specifications
### 2.1 Task JSON Schema (Unified)
@@ -926,8 +942,12 @@ Use `analysis_results.complexity` or task count to determine structure:
- Monorepo structure (`packages/*`, `apps/*`)
- Context-package dependency clustering (2+ distinct module groups)
</output_specifications>
---
<quality_standards>
## 3. Quality Standards
### 3.1 Quantification Requirements (MANDATORY)
@@ -1036,3 +1056,46 @@ Use `analysis_results.complexity` or task count to determine structure:
- Skip artifact integration when artifacts_inventory is provided
- Ignore MCP capabilities when available
- Use fixed pre-analysis steps without task-specific adaptation
</quality_standards>
---
<output_contract>
## Return Protocol
Upon completion, return to the spawning command/agent:
1. **Generated artifacts list** with full paths:
- `.task/IMPL-*.json` files (count and IDs)
- `plan.json` path
- `IMPL_PLAN.md` path
- `TODO_LIST.md` path
2. **Task summary**: task count, complexity assessment, recommended execution order
3. **Status**: `SUCCESS` or `PARTIAL` with details on any skipped/failed steps
<!-- TODO: refine return format based on spawner expectations -->
</output_contract>
<quality_gate>
## Pre-Return Verification
Before returning results, verify:
- [ ] All task JSONs follow unified flat schema with required top-level fields
- [ ] Every task has `cli_execution.id` and computed `cli_execution.strategy`
- [ ] All requirements contain explicit counts or enumerated lists (no vague language)
- [ ] All acceptance criteria are measurable with verification commands
- [ ] All modification_points specify exact targets (files/functions/lines)
- [ ] Task count within limits (<=8 for a single module, <=6 per module for multi-module plans)
- [ ] No circular dependencies in `depends_on` chains
- [ ] `plan.json` aggregates all task IDs and shared context
- [ ] `IMPL_PLAN.md` follows template structure with all 8 sections populated
- [ ] `TODO_LIST.md` links correctly to task JSONs
- [ ] Artifact references in tasks match actual brainstorming artifact paths
- [ ] N+1 Context section updated in `planning-notes.md`
</quality_gate>