Mirror of https://github.com/catlog22/Claude-Code-Workflow.git, synced 2026-03-11 17:21:03 +08:00
Compare commits
4 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | 6f9dc836c3 |  |
|  | 663620955c |  |
|  | cbd1813ea7 |  |
|  | b2fc2f60f1 |  |
`.ccw/personal/coding-style.md` (new file, 27 lines)
@@ -0,0 +1,27 @@
---
title: "Personal Coding Style"
dimension: personal
category: general
keywords:
  - style
  - preference
readMode: optional
priority: medium
---

# Personal Coding Style

## Preferences

- Describe your preferred coding style here
- Example: verbose variable names vs terse, functional vs imperative

## Patterns I Prefer

- List patterns you reach for most often
- Example: builder pattern, factory functions, tagged unions

## Things I Avoid

- List anti-patterns or approaches you dislike
- Example: deep inheritance hierarchies, magic strings
`.ccw/personal/tool-preferences.md` (new file, 25 lines)
@@ -0,0 +1,25 @@
---
title: "Tool Preferences"
dimension: personal
category: general
keywords:
  - tool
  - cli
  - editor
readMode: optional
priority: low
---

# Tool Preferences

## Editor

- Preferred editor and key extensions/plugins

## CLI Tools

- Preferred shell, package manager, build tools

## Debugging

- Preferred debugging approach and tools
@@ -1,3 +1,13 @@
---
title: Architecture Constraints
readMode: optional
priority: medium
category: general
scope: project
dimension: specs
keywords: [architecture, constraint, schema, compatibility, portability, design, arch]
---

# Architecture Constraints

## Schema Evolution
@@ -1,3 +1,13 @@
---
title: Coding Conventions
readMode: optional
priority: medium
category: general
scope: project
dimension: specs
keywords: [coding, convention, style, naming, pattern, navigation, schema, error-handling, implementation, validation, clarity, doc]
---

# Coding Conventions

## Navigation & Path Handling
@@ -9,6 +19,7 @@
## Document Generation

- [architecture] For document generation systems, adopt Layer 3→2→1 pattern (components → features → indexes) for efficient incremental updates. (learned: 2026-03-07)
- [tools] When commands need to generate files with deterministic paths and frontmatter, use dedicated ccw tool endpoints (`ccw tool exec`) instead of raw `ccw cli -p` calls. Endpoints control output path, file naming, and structural metadata; CLI tools only generate prose content. (learned: 2026-03-09)

## Implementation Quality
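The Layer 3→2→1 guideline above implies a fixed regeneration order: when a component changes, rebuild its own doc first, then the docs of features that reference it, then the indexes. A minimal sketch of that propagation, assuming illustrative index shapes (the real doc-index.json schema may differ; `techComponentIds` follows the wording used elsewhere in these commands):

```javascript
// Sketch: given a changed component, list docs to regenerate in Layer 3 -> 2 -> 1 order.
// The `index` shape here is illustrative, not the actual doc-index.json schema.
function docsToRegenerate(changedComponentId, index) {
  // Layer 3: the component's own doc
  const layer3 = [`tech-registry/${changedComponentId}.md`]
  // Layer 2: only features that reference the changed component
  const layer2 = index.features
    .filter(f => f.techComponentIds.includes(changedComponentId))
    .map(f => `feature-maps/${f.id}.md`)
  // Layer 1: indexes always come last
  const layer1 = ['tech-registry/_index.md', 'feature-maps/_index.md']
  return [...layer3, ...layer2, ...layer1]
}
```

Regenerating in this order keeps each layer's inputs fresh before the layer above it is rebuilt.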
@@ -48,8 +48,9 @@ doc-index.json → tech-registry/*.md (L3) → feature-maps/*.md (L2) → _index
├── tech-registry/           ← Component documentation (Layer 3)
│   ├── _index.md
│   └── {component-slug}.md
└── planning/                ← Planning sessions (Layer 1)
    ├── _index.md            ← Planning sessions index
    └── {task-slug}-{date}/  ← Individual session folders
```

## Phase 1: Load & Validate
@@ -87,147 +88,82 @@ IF docs already exist AND NOT --force:
Ask user (unless -y → overwrite)
```

## Phase 2: Layer 3 -- Component Documentation

For each component in `technicalComponents[]`, call the generate_ddd_docs endpoint:

```bash
for COMPONENT_ID in "${technicalComponents[@]}"; do
  ccw tool exec generate_ddd_docs '{"strategy":"component","entityId":"'"$COMPONENT_ID"'","tool":"gemini"}'
done
```

The endpoint handles:

- Loading the component entity from doc-index.json
- Building YAML frontmatter (layer: 3, component_id, name, type, features, code_locations, generated_at)
- Constructing the CLI prompt with code context paths
- **Including Change History section**: Pull related entries from `doc-index.json.actions[]` where `affectedComponents` includes this component ID. Display as timeline (date, action type, description)
- Writing output to `.workflow/.doc-index/tech-registry/{slug}.md`
- Tool fallback (gemini -> qwen -> codex) on failure

Output: `.workflow/.doc-index/tech-registry/{component-slug}.md`
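The Change History step the endpoint performs can be sketched as a filter over `doc-index.json.actions[]`. This is a minimal sketch, assuming entries carry `date`, `type`, `description`, and `affectedComponents` fields as the bullet above describes; the real schema and rendering live inside the endpoint:

```javascript
// Sketch: build a Change History timeline for one component from doc-index.json actions.
// Field names follow the endpoint description above; the actual schema may differ.
function changeHistory(componentId, actions) {
  return actions
    .filter(a => (a.affectedComponents || []).includes(componentId))
    .sort((a, b) => a.date.localeCompare(b.date)) // chronological timeline
    .map(a => `- ${a.date} [${a.type}] ${a.description}`)
}
```

The same shape applies to the Layer 2 variant, swapping `affectedComponents` for `affectedFeatures`.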
## Phase 3: Layer 2 -- Feature Documentation

For each feature in `features[]`, call the generate_ddd_docs endpoint:

```bash
for FEATURE_ID in "${features[@]}"; do
  ccw tool exec generate_ddd_docs '{"strategy":"feature","entityId":"'"$FEATURE_ID"'","tool":"gemini"}'
done
```

The endpoint handles:

- Loading the feature entity from doc-index.json
- Building YAML frontmatter (layer: 2, feature_id, name, epic_id, status, requirements, components, tags, generated_at)
- Constructing the CLI prompt referencing Layer 3 component docs
- **Including Change History section**: Pull related entries from `doc-index.json.actions[]` where `affectedFeatures` includes this feature ID. Display as timeline (date, action type, description)
- Writing output to `.workflow/.doc-index/feature-maps/{slug}.md`
- Tool fallback (gemini -> qwen -> codex) on failure

Output: `.workflow/.doc-index/feature-maps/{feature-slug}.md`
## Phase 4: Layer 1 -- Index & Overview Documentation

### 4.1 Index Documents

Generate catalog files for each subdirectory:

```bash
# Feature maps index
ccw tool exec generate_ddd_docs '{"strategy":"index","entityId":"feature-maps","tool":"gemini"}'

# Tech registry index
ccw tool exec generate_ddd_docs '{"strategy":"index","entityId":"tech-registry","tool":"gemini"}'

# Action logs index
ccw tool exec generate_ddd_docs '{"strategy":"index","entityId":"action-logs","tool":"gemini"}'

# Planning sessions index
ccw tool exec generate_ddd_docs '{"strategy":"index","entityId":"planning","tool":"gemini"}'
```

Or generate all indexes at once (omit entityId):

```bash
ccw tool exec generate_ddd_docs '{"strategy":"index","tool":"gemini"}'
```

### 4.2 README.md (unless --skip-overview)

```bash
ccw tool exec generate_ddd_docs '{"strategy":"overview","tool":"gemini"}'
```

### 4.3 ARCHITECTURE.md (unless --skip-overview)

```bash
ccw tool exec generate_ddd_docs '{"strategy":"overview","entityId":"architecture","tool":"gemini"}'
```

## Phase 5: SCHEMA.md (unless --skip-schema)
@@ -235,17 +171,7 @@ generated_at: ISO8601
### 5.1 Generate Schema Documentation

```bash
ccw tool exec generate_ddd_docs '{"strategy":"schema","tool":"gemini"}'
```

### 5.2 Versioning Policy
@@ -284,7 +210,7 @@ Total: {N} documents generated
| `-y, --yes` | Auto-confirm all decisions |
| `--layer <3\|2\|1\|all>` | Generate specific layer only (default: all) |
| `--force` | Overwrite existing documents |
| `--skip-overview` | Skip README.md, ARCHITECTURE.md, planning/_index.md |
| `--skip-schema` | Skip SCHEMA.md generation |

## Integration Points
@@ -293,3 +219,4 @@ Total: {N} documents generated
- **Called by**: `/ddd:scan` (after index assembly), `/ddd:index-build` (after index assembly)
- **Standalone**: Can be run independently on any project with existing doc-index.json
- **Output**: Complete document tree in `.workflow/.doc-index/`
- **Endpoint**: `ccw tool exec generate_ddd_docs` handles prompt construction, frontmatter, tool fallback, and file creation
@@ -163,7 +163,7 @@ ccw cli -p "PURPOSE: Update project overview docs after feature changes
TASK:
• Update README.md feature list
• Update ARCHITECTURE.md if new components added
• Update planning/_index.md with new planning sessions
MODE: write
CONTEXT: @.workflow/.doc-index/feature-maps/*.md @.workflow/.doc-index/doc-index.json
EXPECTED: Updated overview docs with current project state
@@ -37,11 +37,42 @@ After completing a development task, synchronize the document index with actual
- `doc-index.json` must exist
- Git repository with committed or staged changes

## Phase 0: Consistency Validation

Before processing changes, verify that `doc-index.json` entries are consistent with actual code state.

### 0.1 Validate Code Locations

For each `technicalComponents[].codeLocations[]`:

- Verify file exists on disk
- If file was deleted/moved → flag for removal or update
- If file exists → verify listed `symbols[]` still exist (quick grep/AST check)

### 0.2 Validate Symbols

For components with `codeLocations[].symbols[]`:

- Check each symbol still exists in the referenced file
- Detect new exported symbols not yet tracked
- Report: `{N} stale symbols, {N} untracked symbols`

### 0.3 Validation Report

```
Consistency Check:
  Components validated: {N}
  Files verified: {N}
  Stale references: {N} (files missing or symbols removed)
  Untracked symbols: {N} (new exports not in index)
```

If stale references found: warn and auto-fix during Phase 3 updates.
If `--dry-run`: report only, no fixes.
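The aggregation behind the consistency report can be sketched as a pure function. This is a minimal sketch: `fileExists` and `symbolExists` are injected stand-ins for the real disk and grep/AST checks, which are not shown in this command:

```javascript
// Sketch: aggregate the Phase 0 consistency report from per-component checks.
// fileExists(path) and symbolExists(path, symbol) are hypothetical stand-ins
// for the actual disk lookup and grep/AST symbol check.
function consistencyReport(components, fileExists, symbolExists) {
  let files = 0, stale = 0
  for (const c of components) {
    for (const loc of c.codeLocations || []) {
      if (!fileExists(loc.path)) { stale++; continue } // missing file: stale reference
      files++
      for (const sym of loc.symbols || []) {
        if (!symbolExists(loc.path, sym)) stale++     // removed symbol: stale reference
      }
    }
  }
  return { componentsValidated: components.length, filesVerified: files, staleReferences: stale }
}
```

Untracked-symbol counting (new exports not yet in the index) works the same way in the opposite direction.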
## Phase 1: Change Detection

### 1.0.1 Schema Version Check

Before processing changes, verify doc-index.json schema compatibility:

```javascript
const docIndex = JSON.parse(Read('.workflow/.doc-index/doc-index.json'));
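The compatibility test itself is cut off by the hunk above. Under the semver major.minor policy described in Phase 5, a plausible gate looks like the following sketch; the `schemaVersion` field name and the supported version constant are assumptions, not confirmed by this excerpt:

```javascript
// Sketch: semver-style compatibility gate for the doc-index schema.
// Assumes a "major.minor" version string; field/constant names are hypothetical.
function isCompatible(docVersion, supportedVersion) {
  const [docMajor] = docVersion.split('.').map(Number)
  const [supportedMajor] = supportedVersion.split('.').map(Number)
  // Same major: readable (minor bumps are additive and backward compatible).
  // Different major: requires the migration protocol from SCHEMA.md.
  return docMajor === supportedMajor
}
```

A mismatch would route the run into the migration protocol rather than silently processing an incompatible index.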
@@ -201,6 +232,7 @@ For each affected component in `doc-index.json`:
- Update `codeLocations` if file paths or line ranges changed
- Update `symbols` if new exports were added
- Add new `actionIds` entry
- **Auto-update `responsibility`**: If symbols changed (new methods/exports added or removed), re-infer responsibility from current symbols list using Gemini analysis. This prevents stale descriptions (e.g., responsibility still says "login, registration" after adding logout support)

### 3.2 Register New Components
@@ -65,11 +65,14 @@ Analyze context and produce two update payloads. Use LLM reasoning (current agen
```javascript
// ── Guidelines extraction ──
// Scan git diff + session for:
// - Debugging experiences → bug
// - Reusable code patterns → pattern
// - Architecture/design decisions → decision
// - Conventions, constraints, insights → rule
//
// Output: array of { type, tag, text }
// type: 'bug' | 'pattern' | 'decision' | 'rule'
// tag: domain tag (api, routing, schema, security, etc.)
// RULE: Only extract genuinely reusable insights. Skip trivial/obvious items.
// RULE: Deduplicate against existing guidelines before adding.
@@ -118,7 +121,7 @@ console.log(`
── Sync Preview ──

Guidelines (${guidelineUpdates.length} items):
${guidelineUpdates.map(g => `  [${g.type}:${g.tag}] ${g.text}`).join('\n') || '  (none)'}

Tech [${detectCategory(summary)}]:
  ${techEntry.title}
@@ -137,26 +140,102 @@ if (!autoYes) {
## Step 4: Write

```javascript
const matter = require('gray-matter') // YAML frontmatter parser

// ── Frontmatter check & repair helper ──
// Ensures target spec file has valid YAML frontmatter with keywords
// Uses gray-matter for robust parsing (handles malformed frontmatter, missing fields)
function ensureFrontmatter(filePath, tag, type) {
  const titleMap = {
    'coding-conventions': 'Coding Conventions',
    'architecture-constraints': 'Architecture Constraints',
    'learnings': 'Learnings',
    'quality-rules': 'Quality Rules'
  }
  const basename = filePath.split('/').pop().replace('.md', '')
  const title = titleMap[basename] || basename
  const defaultFm = {
    title,
    readMode: 'optional',
    priority: 'medium',
    scope: 'project',
    dimension: 'specs',
    keywords: [tag, type]
  }

  if (!file_exists(filePath)) {
    // Case A: Create new file with frontmatter
    Write(filePath, matter.stringify(`\n# ${title}\n\n`, defaultFm))
    return
  }

  const raw = Read(filePath)
  let parsed
  try {
    parsed = matter(raw)
  } catch {
    parsed = { data: {}, content: raw }
  }

  const hasFrontmatter = raw.trimStart().startsWith('---')

  if (!hasFrontmatter) {
    // Case B: File exists but no frontmatter → prepend
    Write(filePath, matter.stringify(raw, defaultFm))
    return
  }

  // Case C: Frontmatter exists → ensure keywords include current tag
  const existingKeywords = parsed.data.keywords || []
  const newKeywords = [...new Set([...existingKeywords, tag, type])]

  if (newKeywords.length !== existingKeywords.length) {
    parsed.data.keywords = newKeywords
    Write(filePath, matter.stringify(parsed.content, parsed.data))
  }
}

// ── Update specs/*.md ──
// Uses .ccw/specs/ directory - unified [type:tag] entry format
if (guidelineUpdates.length > 0) {
  // Map knowledge types to spec files
  const specFileMap = {
    bug: '.ccw/specs/learnings.md',
    pattern: '.ccw/specs/coding-conventions.md',
    decision: '.ccw/specs/architecture-constraints.md',
    rule: null // determined by content below
  }

  const date = new Date().toISOString().split('T')[0]
  const needsDate = { bug: true, pattern: true, decision: true, rule: false }

  for (const g of guidelineUpdates) {
    // For rule type, route by content and tag
    let targetFile = specFileMap[g.type]
    if (!targetFile) {
      const isQuality = /\b(test|coverage|lint|eslint|质量|测试覆盖|pre-commit|tsc|type.check)\b/i.test(g.text)
        || ['testing', 'quality', 'lint'].includes(g.tag)
      const isConstraint = /\b(禁止|no|never|must not|forbidden|不得|不允许)\b/i.test(g.text)
      if (isQuality) {
        targetFile = '.ccw/specs/quality-rules.md'
      } else if (isConstraint) {
        targetFile = '.ccw/specs/architecture-constraints.md'
      } else {
        targetFile = '.ccw/specs/coding-conventions.md'
      }
    }

    // Ensure frontmatter exists and keywords are up-to-date
    ensureFrontmatter(targetFile, g.tag, g.type)

    const existing = Read(targetFile)
    const entryLine = needsDate[g.type]
      ? `- [${g.type}:${g.tag}] ${g.text} (${date})`
      : `- [${g.type}:${g.tag}] ${g.text}`

    // Deduplicate: skip if text already in file
    if (!existing.includes(g.text)) {
      const newContent = existing.trimEnd() + '\n' + entryLine + '\n'
      Write(targetFile, newContent)
    }
  }
}
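The unified `[type:tag]` entry format that Step 4 appends can be isolated as a small pure helper, shown here as a sketch to make the dated/undated distinction concrete (same `needsDate` rules as above; the date shown in the test is illustrative):

```javascript
// Sketch: the unified spec entry format from Step 4, extracted as a pure helper.
// Entries for bug/pattern/decision carry a date suffix; rule entries do not.
function formatEntry(g, date) {
  const needsDate = { bug: true, pattern: true, decision: true, rule: false }
  const base = `- [${g.type}:${g.tag}] ${g.text}`
  return needsDate[g.type] ? `${base} (${date})` : base
}
```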
@@ -198,4 +277,5 @@ Write(techPath, JSON.stringify(tech, null, 2))
## Related Commands

- `/workflow:spec:setup` - Initialize project with specs scaffold
- `/workflow:spec:add` - Add knowledge entries (bug/pattern/decision/rule) with unified [type:tag] format
- `/workflow:spec:load` - Interactive spec loader with keyword/type/tag filtering
File diff suppressed because it is too large.

`.claude/commands/workflow/spec/load.md` (new file, 392 lines)
@@ -0,0 +1,392 @@
|
|||||||
|
---
|
||||||
|
name: load
|
||||||
|
description: Interactive spec loader - ask what user needs, then load relevant specs by keyword routing
|
||||||
|
argument-hint: "[--all] [--type <bug|pattern|decision|rule>] [--tag <tag>] [\"keyword query\"]"
|
||||||
|
examples:
|
||||||
|
- /workflow:spec:load
|
||||||
|
- /workflow:spec:load "api routing"
|
||||||
|
- /workflow:spec:load --type bug
|
||||||
|
- /workflow:spec:load --all
|
||||||
|
- /workflow:spec:load --tag security
|
||||||
|
---
|
||||||
|
|
||||||
|
# Spec Load Command (/workflow:spec:load)
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
Interactive entry point for loading and browsing project specs. Asks the user what they need, then routes to the appropriate spec content based on keywords, type filters, or tag filters.
|
||||||
|
|
||||||
|
**Design**: Menu-driven → keyword match → load & display. No file modifications.
|
||||||
|
|
||||||
|
**Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow.
|
||||||
|
|
||||||
|
## Usage
|
||||||
|
```bash
|
||||||
|
/workflow:spec:load # Interactive menu
|
||||||
|
/workflow:spec:load "api routing" # Direct keyword search
|
||||||
|
/workflow:spec:load --type bug # Filter by knowledge type
|
||||||
|
/workflow:spec:load --tag security # Filter by domain tag
|
||||||
|
/workflow:spec:load --all # Load all specs
|
||||||
|
```
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
├─ Parse --all flag → loadAll = true | false
|
||||||
|
├─ Parse --type (bug|pattern|decision|rule)
|
||||||
|
├─ Parse --tag (domain tag)
|
||||||
|
└─ Parse keyword query (positional text)
|
||||||
|
|
||||||
|
Decision:
|
||||||
|
├─ --all → Load all specs (Path C)
|
||||||
|
├─ --type or --tag or keyword → Direct filter (Path B)
|
||||||
|
└─ No args → Interactive menu (Path A)
|
||||||
|
|
||||||
|
Path A: Interactive Menu
|
||||||
|
├─ Step A1: Ask user intent
|
||||||
|
├─ Step A2: Route to action
|
||||||
|
└─ Step A3: Display results
|
||||||
|
|
||||||
|
Path B: Direct Filter
|
||||||
|
├─ Step B1: Build filter from args
|
||||||
|
├─ Step B2: Search specs
|
||||||
|
└─ Step B3: Display results
|
||||||
|
|
||||||
|
Path C: Load All
|
||||||
|
└─ Display all spec contents
|
||||||
|
|
||||||
|
Output:
|
||||||
|
└─ Formatted spec entries matching user query
|
||||||
|
```
|
||||||
|
|
||||||
|
## Implementation
|
||||||
|
|
||||||
|
### Step 1: Parse Input
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
const args = $ARGUMENTS
|
||||||
|
const argsLower = args.toLowerCase()
|
||||||
|
|
||||||
|
const loadAll = argsLower.includes('--all')
|
||||||
|
const hasType = argsLower.includes('--type')
|
||||||
|
const hasTag = argsLower.includes('--tag')
|
||||||
|
|
||||||
|
let type = hasType ? args.match(/--type\s+(\w+)/i)?.[1]?.toLowerCase() : null
|
||||||
|
let tag = hasTag ? args.match(/--tag\s+([\w-]+)/i)?.[1]?.toLowerCase() : null
|
||||||
|
|
||||||
|
// Extract keyword query (everything that's not a flag)
|
||||||
|
let keyword = args
|
||||||
|
.replace(/--type\s+\w+/gi, '')
|
||||||
|
.replace(/--tag\s+[\w-]+/gi, '')
|
||||||
|
.replace(/--all/gi, '')
|
||||||
|
.replace(/^["']|["']$/g, '')
|
||||||
|
.trim()
|
||||||
|
|
||||||
|
// Validate type
|
||||||
|
if (type && !['bug', 'pattern', 'decision', 'rule'].includes(type)) {
|
||||||
|
console.log("Invalid type. Use 'bug', 'pattern', 'decision', or 'rule'.")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 2: Determine Mode
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
const useInteractive = !loadAll && !hasType && !hasTag && !keyword
|
||||||
|
```
|
||||||
|
|
||||||
|
### Path A: Interactive Menu

```javascript
if (useInteractive) {
  const answer = AskUserQuestion({
    questions: [{
      question: "What specs would you like to load?",
      header: "Action",
      multiSelect: false,
      options: [
        {
          label: "Browse all specs",
          description: "Load and display all project spec entries"
        },
        {
          label: "Search by keyword",
          description: "Find specs matching a keyword (e.g., api, security, routing)"
        },
        {
          label: "View bug experiences",
          description: "Load all [bug:*] debugging experience entries"
        },
        {
          label: "View code patterns",
          description: "Load all [pattern:*] reusable code pattern entries"
        }
      ]
    }]
  })

  const choice = answer.answers["Action"]

  if (choice === "Browse all specs") {
    loadAll = true
  } else if (choice === "View bug experiences") {
    type = "bug"
  } else if (choice === "View code patterns") {
    type = "pattern"
  } else if (choice === "Search by keyword") {
    // Ask for keyword
    const kwAnswer = AskUserQuestion({
      questions: [{
        question: "Enter keyword(s) to search for:",
        header: "Keyword",
        multiSelect: false,
        options: [
          { label: "api", description: "API endpoints, HTTP, REST, routing" },
          { label: "security", description: "Authentication, authorization, input validation" },
          { label: "arch", description: "Architecture, design patterns, module structure" },
          { label: "perf", description: "Performance, caching, optimization" }
        ]
      }]
    })
    keyword = kwAnswer.answers["Keyword"].toLowerCase()
  } else {
    // "Other" — user typed custom input, use as keyword
    keyword = choice.toLowerCase()
  }
}
```

### Step 3: Load Spec Files

```javascript
// Discover all spec files
const specFiles = [
  '.ccw/specs/coding-conventions.md',
  '.ccw/specs/architecture-constraints.md',
  '.ccw/specs/learnings.md',
  '.ccw/specs/quality-rules.md'
]

// Also check personal specs
const personalFiles = [
  '~/.ccw/personal/conventions.md',
  '~/.ccw/personal/constraints.md',
  '~/.ccw/personal/learnings.md',
  '.ccw/personal/conventions.md',
  '.ccw/personal/constraints.md',
  '.ccw/personal/learnings.md'
]

// Read all existing spec files
const allEntries = []

for (const file of [...specFiles, ...personalFiles]) {
  if (!file_exists(file)) continue
  const content = Read(file)

  // Extract entries using unified format regex
  // Entry line: - [type:tag] summary (date)
  // Extended: - key: value
  const lines = content.split('\n')
  let currentEntry = null

  for (const line of lines) {
    const entryMatch = line.match(/^- \[(\w+):([\w-]+)\] (.*?)(?:\s+\((\d{4}-\d{2}-\d{2})\))?$/)
    if (entryMatch) {
      if (currentEntry) allEntries.push(currentEntry)
      currentEntry = {
        type: entryMatch[1],
        tag: entryMatch[2],
        summary: entryMatch[3],
        date: entryMatch[4] || null,
        extended: {},
        source: file,
        raw: line
      }
    } else if (currentEntry && /^\s{4}- ([\w-]+):\s?(.*)/.test(line)) {
      const fieldMatch = line.match(/^\s{4}- ([\w-]+):\s?(.*)/)
      currentEntry.extended[fieldMatch[1]] = fieldMatch[2]
    } else if (currentEntry && !/^\s{4}/.test(line) && line.trim() !== '') {
      // Non-indented non-empty line = end of current entry
      allEntries.push(currentEntry)
      currentEntry = null
    }

    // Also handle legacy format: - [tag] text (learned: date)
    const legacyMatch = line.match(/^- \[([\w-]+)\] (.+?)(?:\s+\(learned: (\d{4}-\d{2}-\d{2})\))?$/)
    if (!entryMatch && legacyMatch) {
      if (currentEntry) allEntries.push(currentEntry)
      currentEntry = {
        type: 'rule',
        tag: legacyMatch[1],
        summary: legacyMatch[2],
        date: legacyMatch[3] || null,
        extended: {},
        source: file,
        raw: line,
        legacy: true
      }
    }
  }
  if (currentEntry) allEntries.push(currentEntry)
}
```
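A quick standalone check (plain Node) of the two entry regexes above, on hypothetical sample lines:

```javascript
// Same regexes as in Step 3, extracted for a standalone demo.
const entryRe = /^- \[(\w+):([\w-]+)\] (.*?)(?:\s+\((\d{4}-\d{2}-\d{2})\))?$/
const legacyRe = /^- \[([\w-]+)\] (.+?)(?:\s+\(learned: (\d{4}-\d{2}-\d{2})\))?$/

// Hypothetical sample entries in unified and legacy formats.
const unified = '- [bug:api] Retries hid 401s from the auth proxy (2025-01-15)'
const legacy = '- [security] Sanitize all template inputs (learned: 2024-11-02)'

const m1 = unified.match(entryRe)
console.log(m1[1], m1[2], m1[4]) // → bug api 2025-01-15

const m2 = legacy.match(legacyRe)
console.log(m2[1], m2[3]) // → security 2024-11-02

// The legacy regex cannot match unified entries: [\w-]+ excludes the colon.
console.log(legacyRe.test(unified)) // → false
```

Because `[\w-]+` cannot contain `:`, the `!entryMatch && legacyMatch` guard in the loop never double-classifies a unified entry as legacy.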

### Step 4: Filter Entries

```javascript
let filtered = allEntries

// Filter by type
if (type) {
  filtered = filtered.filter(e => e.type === type)
}

// Filter by tag
if (tag) {
  filtered = filtered.filter(e => e.tag === tag)
}

// Filter by keyword (search in tag, summary, and extended fields)
if (keyword) {
  const kw = keyword.toLowerCase()
  const kwTerms = kw.split(/\s+/)

  filtered = filtered.filter(e => {
    const searchText = [
      e.type, e.tag, e.summary,
      ...Object.values(e.extended)
    ].join(' ').toLowerCase()

    return kwTerms.every(term => searchText.includes(term))
  })
}

// If --all, keep everything (no filter)
```
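The keyword filter uses AND semantics: every whitespace-separated term must appear somewhere in the entry. A minimal standalone sketch with hypothetical entries:

```javascript
// Hypothetical sample entries in the shape produced by Step 3.
const entries = [
  { type: 'rule', tag: 'api', summary: 'Version every routing change', extended: {} },
  { type: 'bug', tag: 'perf', summary: 'Cache stampede on cold start', extended: {} }
]

// Same filter logic as Step 4, wrapped as a function for the demo.
function matchKeyword(entries, keyword) {
  const kwTerms = keyword.toLowerCase().split(/\s+/)
  return entries.filter(e => {
    const searchText = [e.type, e.tag, e.summary, ...Object.values(e.extended)]
      .join(' ').toLowerCase()
    return kwTerms.every(term => searchText.includes(term))
  })
}

console.log(matchKeyword(entries, 'api routing').length) // → 1
console.log(matchKeyword(entries, 'api cache').length)   // → 0 (every term must match: AND, not OR)
```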

### Step 5: Display Results

```javascript
if (filtered.length === 0) {
  const filterDesc = []
  if (type) filterDesc.push(`type=${type}`)
  if (tag) filterDesc.push(`tag=${tag}`)
  if (keyword) filterDesc.push(`keyword="${keyword}"`)

  console.log(`
No specs found matching: ${filterDesc.join(', ') || '(all)'}

Available spec files:
${specFiles.filter(f => file_exists(f)).map(f => ` - ${f}`).join('\n') || ' (none)'}

Suggestions:
- Use /workflow:spec:setup to initialize specs
- Use /workflow:spec:add to add new entries
- Use /workflow:spec:load --all to see everything
`)
  return
}

// Group by source file
const grouped = {}
for (const entry of filtered) {
  if (!grouped[entry.source]) grouped[entry.source] = []
  grouped[entry.source].push(entry)
}

// Display
console.log(`
## Specs Loaded (${filtered.length} entries)
${type ? `Type: ${type}` : ''}${tag ? ` Tag: ${tag}` : ''}${keyword ? ` Keyword: "${keyword}"` : ''}
`)

for (const [source, entries] of Object.entries(grouped)) {
  console.log(`### ${source}`)
  console.log('')

  for (const entry of entries) {
    // Render entry
    const datePart = entry.date ? ` (${entry.date})` : ''
    console.log(`- [${entry.type}:${entry.tag}] ${entry.summary}${datePart}`)

    // Render extended fields
    for (const [key, value] of Object.entries(entry.extended)) {
      console.log(`  - ${key}: ${value}`)
    }
  }
  console.log('')
}

// Summary footer
const typeCounts = {}
for (const e of filtered) {
  typeCounts[e.type] = (typeCounts[e.type] || 0) + 1
}
const typeBreakdown = Object.entries(typeCounts)
  .map(([t, c]) => `${t}: ${c}`)
  .join(', ')

console.log(`---`)
console.log(`Total: ${filtered.length} entries (${typeBreakdown})`)
console.log(`Sources: ${Object.keys(grouped).join(', ')}`)
```

## Examples

### Interactive Browse

```bash
/workflow:spec:load
# → Menu: "What specs would you like to load?"
# → User selects "Browse all specs"
# → Displays all entries grouped by file
```

### Keyword Search

```bash
/workflow:spec:load "api routing"
# → Filters entries where tag/summary/extended contains "api" AND "routing"
# → Displays matching entries
```

### Type Filter

```bash
/workflow:spec:load --type bug
# → Shows all [bug:*] entries from learnings.md
```

### Tag Filter

```bash
/workflow:spec:load --tag security
# → Shows all [*:security] entries across all spec files
```

### Combined Filters

```bash
/workflow:spec:load --type rule --tag api
# → Shows all [rule:api] entries
```

### Load All

```bash
/workflow:spec:load --all
# → Displays every entry from every spec file
```

## Error Handling

| Error | Resolution |
|-------|------------|
| No spec files found | Suggest `/workflow:spec:setup` to initialize |
| No matching entries | Show available files and suggest alternatives |
| Invalid type | Exit with valid type list |
| Corrupt entry format | Skip unparseable lines, continue loading |

## Related Commands

- `/workflow:spec:setup` - Initialize project with specs scaffold
- `/workflow:spec:add` - Add knowledge entries (bug/pattern/decision/rule) with unified [type:tag] format
- `/workflow:session:sync` - Quick-sync session work to specs and project-tech
- `ccw spec list` - View spec file index
- `ccw spec load` - CLI-level spec loading (used by hooks)

@@ -471,70 +471,129 @@ For each category of collected answers, append rules to the corresponding spec M
 - Round 5 (quality): `category: execution` (testing phase)
 
 ```javascript
+const matter = require('gray-matter') // YAML frontmatter parser
+
+// ── Frontmatter check & repair helper ──
+// Ensures target spec file has valid YAML frontmatter with keywords
+// Uses gray-matter for robust parsing (handles malformed frontmatter, missing fields)
+function ensureSpecFrontmatter(filePath, extraKeywords = []) {
+  const titleMap = {
+    'coding-conventions': 'Coding Conventions',
+    'architecture-constraints': 'Architecture Constraints',
+    'learnings': 'Learnings',
+    'quality-rules': 'Quality Rules'
+  }
+  const basename = filePath.split('/').pop().replace('.md', '')
+  const title = titleMap[basename] || basename
+  const defaultKw = filePath.includes('conventions') ? 'convention'
+    : filePath.includes('constraints') ? 'constraint' : 'quality'
+  const defaultFm = {
+    title,
+    readMode: 'optional',
+    priority: 'medium',
+    category: 'general',
+    scope: 'project',
+    dimension: 'specs',
+    keywords: [...new Set([defaultKw, ...extraKeywords])]
+  }
+
+  if (!file_exists(filePath)) {
+    // Case A: Create new file with frontmatter
+    const specDir = path.dirname(filePath)
+    if (!fs.existsSync(specDir)) {
+      fs.mkdirSync(specDir, { recursive: true })
+    }
+    Write(filePath, matter.stringify(`\n# ${title}\n\n`, defaultFm))
+    return
+  }
+
+  const raw = Read(filePath)
+  let parsed
+  try {
+    parsed = matter(raw)
+  } catch {
+    parsed = { data: {}, content: raw }
+  }
+
+  const hasFrontmatter = raw.trimStart().startsWith('---')
+
+  if (!hasFrontmatter) {
+    // Case B: File exists but no frontmatter → prepend
+    Write(filePath, matter.stringify(raw, defaultFm))
+    return
+  }
+
+  // Case C: Frontmatter exists → ensure keywords include extras
+  const existingKeywords = parsed.data.keywords || []
+  const newKeywords = [...new Set([...existingKeywords, defaultKw, ...extraKeywords])]
+
+  if (newKeywords.length !== existingKeywords.length) {
+    parsed.data.keywords = newKeywords
+    Write(filePath, matter.stringify(parsed.content, parsed.data))
+  }
+}
+
 // Helper: append rules to a spec MD file with category support
 // Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
 function appendRulesToSpecFile(filePath, rules, defaultCategory = 'general') {
   if (rules.length === 0) return
 
-  // Ensure .ccw/specs/ directory exists
-  const specDir = path.dirname(filePath)
-  if (!fs.existsSync(specDir)) {
-    fs.mkdirSync(specDir, { recursive: true })
-  }
+  // Extract domain tags from rules for keyword accumulation
+  const ruleTags = rules
+    .map(r => r.match(/\[[\w]+:([\w-]+)\]/)?.[1])
+    .filter(Boolean)
 
-  // Check if file exists
-  if (!file_exists(filePath)) {
-    // Create file with frontmatter including category
-    const frontmatter = `---
-title: ${filePath.includes('conventions') ? 'Coding Conventions' : filePath.includes('constraints') ? 'Architecture Constraints' : 'Quality Rules'}
-readMode: optional
-priority: medium
-category: ${defaultCategory}
-scope: project
-dimension: specs
-keywords: [${defaultCategory}, ${filePath.includes('conventions') ? 'convention' : filePath.includes('constraints') ? 'constraint' : 'quality'}]
----
-
-# ${filePath.includes('conventions') ? 'Coding Conventions' : filePath.includes('constraints') ? 'Architecture Constraints' : 'Quality Rules'}
-
-`
-    Write(filePath, frontmatter)
-  }
+  // Ensure frontmatter exists and keywords include rule tags
+  ensureSpecFrontmatter(filePath, [...new Set(ruleTags)])
 
   const existing = Read(filePath)
-  // Append new rules as markdown list items after existing content
-  const newContent = existing.trimEnd() + '\n' + rules.map(r => `- ${r}`).join('\n') + '\n'
+  // Append new rules as markdown list items - rules are already in [type:tag] format from caller
+  const newContent = existing.trimEnd() + '\n' + rules.map(r => {
+    // If rule already has - prefix or [type:tag] format, use as-is
+    if (/^- /.test(r)) return r
+    if (/^\[[\w]+:[\w-]+\]/.test(r)) return `- ${r}`
+    return `- [rule:${defaultCategory}] ${r}`
+  }).join('\n') + '\n'
   Write(filePath, newContent)
 }
 
-// Write conventions (general category) - use .ccw/specs/ (same as frontend/backend)
-appendRulesToSpecFile('.ccw/specs/coding-conventions.md',
-  [...newCodingStyle, ...newNamingPatterns, ...newFileStructure, ...newDocumentation],
-  'general')
+// Helper: infer domain tag from rule content
+function inferTag(text) {
+  const t = text.toLowerCase()
+  if (/\b(api|http|rest|endpoint|routing)\b/.test(t)) return 'api'
+  if (/\b(security|auth|permission|xss|sql|sanitize)\b/.test(t)) return 'security'
+  if (/\b(database|db|sql|postgres|mysql)\b/.test(t)) return 'db'
+  if (/\b(react|component|hook|jsx|tsx)\b/.test(t)) return 'react'
+  if (/\b(performance|cache|lazy|async|slow)\b/.test(t)) return 'perf'
+  if (/\b(test|coverage|mock|jest|vitest)\b/.test(t)) return 'testing'
+  if (/\b(architecture|layer|module|dependency)\b/.test(t)) return 'arch'
+  if (/\b(naming|camel|pascal|prefix|suffix)\b/.test(t)) return 'naming'
+  if (/\b(file|folder|directory|structure)\b/.test(t)) return 'file'
+  if (/\b(doc|comment|jsdoc|readme)\b/.test(t)) return 'doc'
+  if (/\b(build|webpack|vite|compile)\b/.test(t)) return 'build'
+  if (/\b(deploy|ci|cd|docker)\b/.test(t)) return 'deploy'
+  if (/\b(lint|eslint|prettier|format)\b/.test(t)) return 'lint'
+  if (/\b(type|typescript|strict|any)\b/.test(t)) return 'typing'
+  return 'style' // fallback for coding conventions
+}
 
-// Write constraints (planning category)
+// Write conventions - infer domain tags from content
+appendRulesToSpecFile('.ccw/specs/coding-conventions.md',
+  [...newCodingStyle, ...newNamingPatterns, ...newFileStructure, ...newDocumentation]
+    .map(r => /^\[[\w]+:[\w-]+\]/.test(r) ? r : `[rule:${inferTag(r)}] ${r}`),
+  'style')
+
+// Write constraints - infer domain tags from content
 appendRulesToSpecFile('.ccw/specs/architecture-constraints.md',
-  [...newArchitecture, ...newTechStack, ...newPerformance, ...newSecurity],
-  'planning')
+  [...newArchitecture, ...newTechStack, ...newPerformance, ...newSecurity]
+    .map(r => /^\[[\w]+:[\w-]+\]/.test(r) ? r : `[rule:${inferTag(r)}] ${r}`),
+  'arch')
 
 // Write quality rules (execution category)
 if (newQualityRules.length > 0) {
   const qualityPath = '.ccw/specs/quality-rules.md'
-  if (!file_exists(qualityPath)) {
-    Write(qualityPath, `---
-title: Quality Rules
-readMode: required
-priority: high
-category: execution
-scope: project
-dimension: specs
-keywords: [execution, quality, testing, coverage, lint]
----
-
-# Quality Rules
-
-`)
-  }
+  // ensureSpecFrontmatter handles create/repair/keyword-update
+  ensureSpecFrontmatter(qualityPath, ['quality', 'testing', 'coverage', 'lint'])
   appendRulesToSpecFile(qualityPath,
     newQualityRules.map(q => `${q.rule} (scope: ${q.scope}, enforced by: ${q.enforced_by})`),
     'execution')

@@ -644,7 +703,8 @@ Next steps:
 
 ## Related Commands
 
-- `/workflow:spec:add` - Interactive wizard to create individual specs with scope selection
+- `/workflow:spec:add` - Add knowledge entries (bug/pattern/decision/rule) with unified [type:tag] format
+- `/workflow:spec:load` - Interactive spec loader with keyword/type/tag filtering
 - `/workflow:session:sync` - Quick-sync session work to specs and project-tech
 - `workflow-plan` skill - Start planning with initialized project context
 - `/workflow:status --project` - View project state and guidelines
@@ -32,6 +32,18 @@ Universal team coordination skill: analyze task -> generate role-specs -> dispat
 ccw cli --mode write - code generation and modification
 ```
 
+## Shared Constants
+
+| Constant | Value |
+|----------|-------|
+| Session prefix | `TC` |
+| Session path | `.workflow/.team/TC-<slug>-<date>/` |
+| Worker agent | `team-worker` |
+| Message bus | `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)` |
+| CLI analysis | `ccw cli --mode analysis` |
+| CLI write | `ccw cli --mode write` |
+| Max roles | 5 |
+
 ## Role Router
 
 This skill is **coordinator-only**. Workers do NOT invoke this skill -- they are spawned as `team-worker` agents directly.

@@ -85,6 +97,9 @@ User provides task description
 |---------|--------|
 | `check` / `status` | Output execution status graph, no advancement |
 | `resume` / `continue` | Check worker states, advance next step |
+| `revise <TASK-ID> [feedback]` | Revise specific task with optional feedback |
+| `feedback <text>` | Inject feedback into active pipeline |
+| `improve [dimension]` | Auto-improve weakest quality dimension |
 
 ---
 
@@ -150,6 +165,17 @@ AskUserQuestion({
 
 ---
 
+## Specs Reference
+
+| Spec | Purpose |
+|------|---------|
+| [specs/pipelines.md](specs/pipelines.md) | Dynamic pipeline model, task naming, dependency graph |
+| [specs/role-spec-template.md](specs/role-spec-template.md) | Template for dynamic role-spec generation |
+| [specs/quality-gates.md](specs/quality-gates.md) | Quality thresholds and scoring dimensions |
+| [specs/knowledge-transfer.md](specs/knowledge-transfer.md) | Context transfer protocols between roles |
+
+---
+
 ## Session Directory
 
 ```

@@ -16,6 +16,20 @@ Parse user task description -> detect required capabilities -> build dependency
 
 If task context requires codebase knowledge, set `needs_research: true`. Phase 2 will spawn researcher worker.
 
+## When to Use
+
+| Trigger | Condition |
+|---------|-----------|
+| New task | Coordinator Phase 1 receives task description |
+| Re-analysis | User provides revised requirements |
+| Adapt | handleAdapt extends analysis for new capability |
+
+## Strategy
+
+- **Delegation**: Inline execution (coordinator processes directly)
+- **Mode**: Text-level analysis only (no codebase reading)
+- **Output**: `<session>/task-analysis.json`
+
 ## Phase 2: Context Loading
 
 | Input | Source | Required |

@@ -4,6 +4,20 @@
 
 Create task chains from dynamic dependency graphs. Builds pipelines from the task-analysis.json produced by Phase 1. Workers are spawned as team-worker agents with role-spec paths.
 
+## When to Use
+
+| Trigger | Condition |
+|---------|-----------|
+| After analysis | Phase 1 complete, task-analysis.json exists |
+| After adapt | handleAdapt created new roles, needs new tasks |
+| Re-dispatch | Pipeline restructuring (rare) |
+
+## Strategy
+
+- **Delegation**: Inline execution (coordinator processes directly)
+- **Inputs**: task-analysis.json + team-session.json
+- **Output**: TaskCreate calls with dependency chains
+
 ## Phase 2: Context Loading
 
 | Input | Source | Required |

@@ -4,6 +4,22 @@
 
 Event-driven pipeline coordination with Spawn-and-Stop pattern. Role names are read from `team-session.json#roles`. Workers are spawned as `team-worker` agents with role-spec paths. Includes `handleComplete` for pipeline completion action and `handleAdapt` for mid-pipeline capability gap handling.
 
+## When to Use
+
+| Trigger | Condition |
+|---------|-----------|
+| Worker callback | Message contains [role-name] from session roles |
+| User command | "check", "status", "resume", "continue" |
+| Capability gap | Worker reports capability_gap |
+| Pipeline spawn | After dispatch, initial spawn needed |
+| Pipeline complete | All tasks done |
+
+## Strategy
+
+- **Delegation**: Inline execution with handler routing
+- **Beat model**: ONE_STEP_PER_INVOCATION — one handler then STOP
+- **Workers**: Spawned as team-worker via Agent() in background
+
 ## Constants
 
 | Constant | Value | Description |

@@ -1,3 +1,7 @@
+---
+role: coordinator
+---
+
 # Coordinator Role
 
 Orchestrate the team-coordinate workflow: task analysis, dynamic role-spec generation, task dispatching, progress monitoring, session state, and completion action. The sole built-in role -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent.

@@ -33,6 +37,30 @@ Orchestrate the team-coordinate workflow: task analysis, dynamic role-spec gener
 
 ---
 
+## Message Types
+
+| Type | Direction | Trigger |
+|------|-----------|---------|
+| state_update | outbound | Session init, pipeline progress |
+| task_unblocked | outbound | Task ready for execution |
+| fast_advance | inbound | Worker skipped coordinator |
+| capability_gap | inbound | Worker needs new capability |
+| error | inbound | Worker failure |
+| impl_complete | inbound | Worker task done |
+| consensus_blocked | inbound | Discussion verdict conflict |
+
+## Message Bus Protocol
+
+All coordinator state changes MUST be logged to team_msg BEFORE SendMessage:
+
+1. `team_msg(operation="log", ...)` — log the event
+2. `SendMessage(...)` — communicate to worker/user
+3. `TaskUpdate(...)` — update task state
+
+Read state before every handler: `team_msg(operation="get_state", session_id=<session-id>)`
+
+---
+
 ## Command Execution Protocol
 
 When coordinator needs to execute a command (analyze-task, dispatch, monitor):

@@ -52,6 +80,20 @@ Phase 1 needs task analysis
 -> Continue to Phase 2
 ```
 
+## Toolbox
+
+| Tool | Type | Purpose |
+|------|------|---------|
+| commands/analyze-task.md | Command | Task analysis and role design |
+| commands/dispatch.md | Command | Task chain creation |
+| commands/monitor.md | Command | Pipeline monitoring and handlers |
+| team-worker | Subagent | Worker spawning |
+| TeamCreate / TeamDelete | System | Team lifecycle |
+| TaskCreate / TaskList / TaskGet / TaskUpdate | System | Task lifecycle |
+| team_msg | System | Message bus operations |
+| SendMessage | System | Inter-agent communication |
+| AskUserQuestion | System | User interaction |
+
 ---
 
 ## Entry Router
111 .claude/skills/team-coordinate/specs/knowledge-transfer.md Normal file
@@ -0,0 +1,111 @@
# Knowledge Transfer Protocols
|
||||||
|
|
||||||
|
## 1. Transfer Channels
|
||||||
|
|
||||||
|
| Channel | Scope | Mechanism | When to Use |
|
||||||
|
|---------|-------|-----------|-------------|
|
||||||
|
| **Artifacts** | Producer -> Consumer | Write to `<session>/artifacts/<name>.md`, consumer reads in Phase 2 | Structured deliverables (reports, plans, specs) |
|
||||||
|
| **State Updates** | Cross-role | `team_msg(operation="log", type="state_update", data={...})` / `team_msg(operation="get_state", session_id=<session-id>)` | Key findings, decisions, metadata (small, structured data) |
|
||||||
|
| **Wisdom** | Cross-task | Append to `<session>/wisdom/{learnings,decisions,conventions,issues}.md` | Patterns, conventions, risks discovered during execution |
|
||||||
|
| **Context Accumulator** | Intra-role (inner loop) | In-memory array, passed to each subsequent task in same-prefix loop | Prior task summaries within same role's inner loop |
|
||||||
|
| **Exploration Cache** | Cross-role | `<session>/explorations/cache-index.json` + per-angle JSON | Codebase discovery results, prevents duplicate exploration |
|
||||||
|
|
||||||
|
## 2. Context Loading Protocol (Phase 2)
|
||||||
|
|
||||||
|
Every role MUST load context in this order before starting work.
|
||||||
|
|
||||||
|
| Step | Action | Required |
|
||||||
|
|------|--------|----------|
|
||||||
|
| 1 | Extract session path from task description | Yes |
|
||||||
|
| 2 | `team_msg(operation="get_state", session_id=<session-id>)` | Yes |
|
||||||
|
| 3 | Read artifact files from upstream state's `ref` paths | Yes |
|
||||||
|
| 4 | Read `<session>/wisdom/*.md` if exists | Yes |
|
||||||
|
| 5 | Check `<session>/explorations/cache-index.json` before new exploration | If exploring |
|
||||||
|
| 6 | For inner_loop roles: load context_accumulator from prior tasks | If inner_loop |
|
||||||
|
|
||||||
|
**Loading rules**:
|
||||||
|
- Never skip step 2 -- state contains key decisions and findings
|
||||||
|
- If `ref` path in state does not exist, log warning and continue
|
||||||
|
- Wisdom files are append-only -- read all entries, newest last
|
||||||
|
|
||||||
|
## 3. Context Publishing Protocol (Phase 4)

| Step | Action | Required |
|------|--------|----------|
| 1 | Write deliverable to `<session>/artifacts/<task-id>-<name>.md` | Yes |
| 2 | Send `team_msg(type="state_update")` with payload (see schema below) | Yes |
| 3 | Append wisdom entries for learnings, decisions, issues found | If applicable |
## 4. State Update Schema

Sent via `team_msg(type="state_update")` on task completion.

```json
{
  "status": "task_complete",
  "task_id": "<TASK-NNN>",
  "ref": "<session>/artifacts/<filename>",
  "key_findings": [
    "Finding 1",
    "Finding 2"
  ],
  "decisions": [
    "Decision with rationale"
  ],
  "files_modified": [
    "path/to/file.ts"
  ],
  "verification": "self-validated | peer-reviewed | tested"
}
```

**Field rules**:

- `ref`: Always an artifact path, never inline content
- `key_findings`: Max 5 items, each under 100 chars
- `decisions`: Include rationale, not just the choice
- `files_modified`: Only for implementation tasks
- `verification`: One of `self-validated`, `peer-reviewed`, `tested`

**Write state** (namespaced by role):

```
team_msg(operation="log", session_id=<session-id>, from=<role>, type="state_update", data={
  "<role_name>": { "key_findings": [...], "scope": "..." }
})
```

**Read state**:

```
team_msg(operation="get_state", session_id=<session-id>)
// Returns merged state from all state_update messages
```
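The field rules above are mechanical enough to enforce in code. The following is a minimal sketch, assuming a `StateUpdate` type that mirrors the JSON schema; `validateStateUpdate` is a hypothetical helper name, not part of the `team_msg` tool.

```typescript
// Hypothetical validator enforcing the field rules of the state-update payload.
type Verification = 'self-validated' | 'peer-reviewed' | 'tested';

interface StateUpdate {
  status: 'task_complete';
  task_id: string;
  ref: string;              // always an artifact path, never inline content
  key_findings: string[];   // max 5 items, each under 100 chars
  decisions: string[];
  files_modified?: string[]; // implementation tasks only
  verification: Verification;
}

function validateStateUpdate(update: StateUpdate): string[] {
  const problems: string[] = [];
  if (!update.ref.includes('/artifacts/')) {
    problems.push('ref must point into <session>/artifacts/');
  }
  if (update.key_findings.length > 5) {
    problems.push('key_findings: max 5 items');
  }
  for (const finding of update.key_findings) {
    if (finding.length >= 100) {
      problems.push(`finding too long: ${finding.slice(0, 20)}...`);
    }
  }
  return problems;
}
```

Returning a list of problems (rather than throwing on the first one) lets a role log every violation to wisdom in a single pass.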
## 5. Exploration Cache Protocol

Prevents redundant research across tasks and discussion rounds.

| Step | Action |
|------|--------|
| 1 | Read `<session>/explorations/cache-index.json` |
| 2 | If angle already explored, read cached result from `explore-<angle>.json` |
| 3 | If not cached, perform exploration |
| 4 | Write result to `<session>/explorations/explore-<angle>.json` |
| 5 | Update `cache-index.json` with new entry |

**cache-index.json format**:

```json
{
  "entries": [
    {
      "angle": "competitor-analysis",
      "file": "explore-competitor-analysis.json",
      "created_by": "RESEARCH-001",
      "timestamp": "2026-01-15T10:30:00Z"
    }
  ]
}
```

**Rules**:

- Cache key is the exploration `angle` (normalized to kebab-case)
- Cache entries never expire within a session
- Any role can read cached explorations; only the creator updates them
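The check-then-record flow above can be sketched against an in-memory copy of the index. `normalizeAngle` and `lookupOrRecord` are illustrative names under the assumption that the real index is read from and written back to `cache-index.json`.

```typescript
// Mirrors one entry of cache-index.json.
interface CacheEntry {
  angle: string;
  file: string;
  created_by: string;
  timestamp: string;
}

// Cache key is the angle, normalized to kebab-case.
function normalizeAngle(angle: string): string {
  return angle
    .trim()
    .replace(/[\s_]+/g, '-')
    .replace(/[A-Z]/g, (c) => c.toLowerCase());
}

function lookupOrRecord(
  index: { entries: CacheEntry[] },
  angle: string,
  taskId: string,
): { file: string; cached: boolean } {
  const key = normalizeAngle(angle);
  const hit = index.entries.find((e) => e.angle === key);
  if (hit) return { file: hit.file, cached: true }; // steps 1-2: reuse cached result

  // Steps 3-5: caller performs the exploration; we record the new entry.
  const entry: CacheEntry = {
    angle: key,
    file: `explore-${key}.json`,
    created_by: taskId,
    timestamp: new Date().toISOString(),
  };
  index.entries.push(entry);
  return { file: entry.file, cached: false };
}
```

Normalizing before the lookup is what makes "Competitor Analysis" and "competitor_analysis" hit the same entry, so two roles phrasing the same angle differently still share one exploration.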
@@ -81,3 +81,17 @@ message_types:
## Specs Reference

- [role-spec-template.md](role-spec-template.md) — Template for generating dynamic role-specs
- [quality-gates.md](quality-gates.md) — Quality thresholds and scoring dimensions
- [knowledge-transfer.md](knowledge-transfer.md) — Context transfer protocols between roles

## Quality Gate Integration

Dynamic pipelines reference quality thresholds from [specs/quality-gates.md](quality-gates.md).

| Gate Point | Trigger | Criteria Source |
|------------|---------|----------------|
| After artifact production | Producer role Phase 4 | Behavioral Traits in role-spec |
| After validation tasks | Tester/analyst completion | quality-gates.md thresholds |
| Pipeline completion | All tasks done | Aggregate scoring |

Issue classification: Error (blocks) > Warning (proceed with justification) > Info (log for future).
.claude/skills/team-coordinate/specs/quality-gates.md (new file, 112 lines)
@@ -0,0 +1,112 @@
# Quality Gates

## 1. Quality Thresholds

| Result | Score | Action |
|--------|-------|--------|
| Pass | >= 80% | Report completed |
| Review | 60-79% | Report completed with warnings |
| Fail | < 60% | Retry Phase 3 (max 2 retries) |

## 2. Scoring Dimensions

| Dimension | Weight | Criteria |
|-----------|--------|----------|
| Completeness | 25% | All required outputs present with substantive content |
| Consistency | 25% | Terminology, formatting, cross-references are uniform |
| Accuracy | 25% | Outputs are factually correct and verifiable against sources |
| Depth | 25% | Sufficient detail for downstream consumers to act on deliverables |

**Score** = weighted average of all dimensions (0-100 per dimension).
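The scoring rule and threshold bands reduce to two small functions. This is a sketch of the arithmetic only; `computeScore` and `classify` are hypothetical helper names, not part of the skill.

```typescript
// Equal 25% weights across the four dimensions, each scored 0-100.
interface DimensionScores {
  completeness: number;
  consistency: number;
  accuracy: number;
  depth: number;
}

function computeScore(d: DimensionScores): number {
  return 0.25 * d.completeness + 0.25 * d.consistency + 0.25 * d.accuracy + 0.25 * d.depth;
}

// Threshold bands from the table above.
function classify(score: number): 'pass' | 'review' | 'fail' {
  if (score >= 80) return 'pass';   // report completed
  if (score >= 60) return 'review'; // completed with warnings
  return 'fail';                    // retry Phase 3 (max 2 retries)
}
```

For example, dimension scores of 80/90/70/80 average to exactly 80 and land in the Pass band.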
## 3. Dynamic Role Quality Checks

Quality checks vary by `output_type` (from task-analysis.json role metadata).

### output_type: artifact

| Check | Pass Criteria |
|-------|---------------|
| Artifact exists | File written to `<session>/artifacts/` |
| Content non-empty | Substantive content, not just headers |
| Format correct | Expected format (MD, JSON) matches deliverable |
| Cross-references | All references to upstream artifacts resolve |

### output_type: codebase

| Check | Pass Criteria |
|-------|---------------|
| Files modified | Claimed files actually changed (Read to confirm) |
| Syntax valid | No syntax errors in modified files |
| No regressions | Existing functionality preserved |
| Summary artifact | Implementation summary written to artifacts/ |

### output_type: mixed

All checks from both `artifact` and `codebase` apply.

## 4. Verification Protocol

Derived from Behavioral Traits in [role-spec-template.md](role-spec-template.md).

| Step | Action | Required |
|------|--------|----------|
| 1 | Verify all claimed files exist via Read | Yes |
| 2 | Confirm artifact written to `<session>/artifacts/` | Yes |
| 3 | Check verification summary fields present | Yes |
| 4 | Score against quality dimensions | Yes |
| 5 | Apply threshold -> Pass/Review/Fail | Yes |

**On Fail**: Retry Phase 3 (max 2 retries). After 2 retries, report `partial_completion`.

**On Review**: Proceed with warnings logged to `<session>/wisdom/issues.md`.

## 5. Code Review Dimensions

For REVIEW-* or validation tasks during implementation pipelines.

### Quality

| Check | Severity |
|-------|----------|
| Empty catch blocks | Error |
| `as any` type casts | Warning |
| `@ts-ignore` / `@ts-expect-error` | Warning |
| `console.log` in production code | Warning |
| Unused imports/variables | Info |

### Security

| Check | Severity |
|-------|----------|
| Hardcoded secrets/credentials | Error |
| SQL injection vectors | Error |
| `eval()` or `Function()` usage | Error |
| `innerHTML` assignment | Warning |
| Missing input validation | Warning |

### Architecture

| Check | Severity |
|-------|----------|
| Circular dependencies | Error |
| Deep cross-boundary imports (3+ levels) | Warning |
| Files > 500 lines | Warning |
| Functions > 50 lines | Info |

### Requirements Coverage

| Check | Severity |
|-------|----------|
| Core functionality implemented | Error if missing |
| Acceptance criteria covered | Error if missing |
| Edge cases handled | Warning |
| Error states handled | Warning |

## 6. Issue Classification

| Class | Label | Action |
|-------|-------|--------|
| Error | Must fix | Blocks progression, must resolve before proceeding |
| Warning | Should fix | Should resolve, can proceed with justification |
| Info | Nice to have | Optional improvement, log for future |
@@ -46,6 +46,7 @@ message_types:
| `prefix` | Yes | Task prefix to filter (e.g., RESEARCH, DRAFT, IMPL) |
| `inner_loop` | Yes | Whether team-worker loops through same-prefix tasks |
| `CLI tools` | No | Array of CLI tool types this role may call |
| `output_tag` | Yes | Output tag for all messages, e.g., `[researcher]` |
| `message_types` | Yes | Message type mapping for team_msg |
| `message_types.success` | Yes | Type string for successful completion |
| `message_types.error` | Yes | Type string for errors (usually "error") |
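Put together, a minimal frontmatter for a hypothetical researcher role might look like the following. All values here are illustrative assumptions, including the `research_complete` type string; only the field names come from the table above.

```
---
role: researcher
prefix: RESEARCH
inner_loop: true
output_tag: "[researcher]"
message_types:
  success: research_complete
  error: error
---
```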
@@ -63,6 +64,29 @@ message_types:
| `<placeholder>` notation | Use angle brackets for variable substitution |
| Reference CLI tools by name | team-worker resolves invocation from its delegation templates |

## Generated Role-Spec Structure

Every generated role-spec MUST include these blocks:

### Identity Block (mandatory — first section of generated spec)

```
Tag: [<role_name>] | Prefix: <PREFIX>-*
Responsibility: <one-line from task analysis>
```

### Boundaries Block (mandatory — after Identity)

```
### MUST
- <3-5 rules derived from task analysis>

### MUST NOT
- Execute work outside assigned prefix
- Modify artifacts from other roles
- Skip Phase 4 verification
```

## Behavioral Traits

All dynamically generated role-specs MUST embed these traits into Phase 4. Coordinator copies this section verbatim into every generated role-spec as a Phase 4 appendix.
@@ -93,6 +117,11 @@ Phase 4 must produce a verification summary with these fields:
- Still fails → report `partial_completion` with details, NOT `completed`
- Update shared state via `team_msg(operation="log", type="state_update", data={...})` after verification passes

Quality thresholds from [specs/quality-gates.md](quality-gates.md):

- Pass >= 80%: report completed
- Review 60-79%: report completed with warnings
- Fail < 60%: retry Phase 3 (max 2)

### Error Protocol

- Primary approach fails → try alternative (different CLI tool / different tool)
@@ -139,48 +168,25 @@ Coordinator MAY reference these patterns when composing Phase 2-4 content for a

## Knowledge Transfer Protocol

Full protocol: [specs/knowledge-transfer.md](knowledge-transfer.md)

Generated role-specs Phase 2 MUST declare which upstream sources to load.
Generated role-specs Phase 4 MUST include state update and artifact publishing.

---

## Generated Role-Spec Validation

Coordinator verifies before writing each role-spec:

| Check | Criteria |
|-------|----------|
| Frontmatter complete | All required fields present (role, prefix, inner_loop, output_tag, message_types, CLI tools) |
| Identity block | Tag, prefix, responsibility defined |
| Boundaries | MUST and MUST NOT rules present |
| Phase 2 | Context loading sources specified |
| Phase 3 | Execution goal clear, not prescriptive about tools |
| Phase 4 | Behavioral Traits copied verbatim |
| Error Handling | Table with 3+ scenarios |
| Line count | Target ~80 lines (max 120) |
| No built-in overlap | No Phase 1/5, no message bus code, no consensus handling |
@@ -502,40 +502,3 @@ if (plannerAgent) {
| `check` / `status` | Show progress: planned / executing / completed / failed counts |
| `resume` / `continue` | Re-enter loop from Phase 2 |

---

## Coordinator Role Constraints (Main Agent)

**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    - Spawns agents with task assignments
    - Waits for agent callbacks
    - Merges results and coordinates workflow
    - Manages workflow transitions between phases

16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    - Wait patiently for `wait()` calls to complete
    - NOT skip workflow steps due to perceived delays
    - NOT assume agents have failed just because they're taking time
    - Trust the timeout mechanisms defined in the skill

17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
    - Use `send_input()` to ask questions or provide clarification
    - NOT skip the agent or move to next phase prematurely
    - Give agents opportunity to respond before escalating
    - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`

18. **No Workflow Shortcuts**: The coordinator MUST NOT:
    - Skip phases or stages defined in the workflow
    - Bypass required approval or review steps
    - Execute dependent tasks before prerequisites complete
    - Assume task completion without explicit agent callback
    - Make up or fabricate agent results

19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
    - Total execution time may range from 30-90 minutes or longer
    - Each phase may take 10-30 minutes depending on complexity
    - The coordinator must remain active and attentive throughout the entire process
    - Do not terminate or skip steps due to time concerns
ccw/frontend/src/components/codexlens/AdvancedTab.test.tsx (new file, 154 lines)
@@ -0,0 +1,154 @@
```tsx
import { beforeEach, describe, expect, it, vi } from 'vitest';
import { fireEvent } from '@testing-library/react';
import { render, screen, waitFor } from '@/test/i18n';
import { AdvancedTab } from './AdvancedTab';

vi.mock('@/hooks', async (importOriginal) => {
  const actual = await importOriginal<typeof import('@/hooks')>();
  return {
    ...actual,
    useCodexLensEnv: vi.fn(),
    useUpdateCodexLensEnv: vi.fn(),
    useCodexLensIgnorePatterns: vi.fn(),
    useUpdateIgnorePatterns: vi.fn(),
    useNotifications: vi.fn(),
  };
});

import {
  useCodexLensEnv,
  useUpdateCodexLensEnv,
  useCodexLensIgnorePatterns,
  useUpdateIgnorePatterns,
  useNotifications,
} from '@/hooks';

const mockRefetchEnv = vi.fn().mockResolvedValue(undefined);
const mockRefetchPatterns = vi.fn().mockResolvedValue(undefined);
const mockUpdateEnv = vi.fn().mockResolvedValue({ success: true, message: 'Saved' });
const mockUpdatePatterns = vi.fn().mockResolvedValue({
  success: true,
  patterns: ['dist', 'frontend/dist'],
  extensionFilters: ['*.min.js', 'frontend/skip.ts'],
  defaults: {
    patterns: ['dist', 'build'],
    extensionFilters: ['*.min.js'],
  },
});
const mockToastSuccess = vi.fn();
const mockToastError = vi.fn();

function setupDefaultMocks() {
  vi.mocked(useCodexLensEnv).mockReturnValue({
    data: { success: true, env: {}, settings: {}, raw: '', path: '~/.codexlens/.env' },
    raw: '',
    env: {},
    settings: {},
    isLoading: false,
    error: null,
    refetch: mockRefetchEnv,
  });

  vi.mocked(useUpdateCodexLensEnv).mockReturnValue({
    updateEnv: mockUpdateEnv,
    isUpdating: false,
    error: null,
  });

  vi.mocked(useCodexLensIgnorePatterns).mockReturnValue({
    data: {
      success: true,
      patterns: ['dist', 'coverage'],
      extensionFilters: ['*.min.js', '*.map'],
      defaults: {
        patterns: ['dist', 'build', 'coverage'],
        extensionFilters: ['*.min.js', '*.map'],
      },
    },
    patterns: ['dist', 'coverage'],
    extensionFilters: ['*.min.js', '*.map'],
    defaults: {
      patterns: ['dist', 'build', 'coverage'],
      extensionFilters: ['*.min.js', '*.map'],
    },
    isLoading: false,
    error: null,
    refetch: mockRefetchPatterns,
  });

  vi.mocked(useUpdateIgnorePatterns).mockReturnValue({
    updatePatterns: mockUpdatePatterns,
    isUpdating: false,
    error: null,
  });

  vi.mocked(useNotifications).mockReturnValue({
    success: mockToastSuccess,
    error: mockToastError,
  } as ReturnType<typeof useNotifications>);
}

describe('AdvancedTab', () => {
  beforeEach(() => {
    vi.clearAllMocks();
    setupDefaultMocks();
  });

  it('renders existing filter configuration', () => {
    render(<AdvancedTab enabled={true} />);

    expect(screen.getByLabelText(/Ignored directories \/ paths/i)).toHaveValue('dist\ncoverage');
    expect(screen.getByLabelText(/Skipped files \/ globs/i)).toHaveValue('*.min.js\n*.map');
    expect(screen.getByText(/Directory filters: 2/i)).toBeInTheDocument();
    expect(screen.getByText(/File filters: 2/i)).toBeInTheDocument();
  });

  it('saves parsed filter configuration', async () => {
    render(<AdvancedTab enabled={true} />);

    const ignorePatternsInput = screen.getByLabelText(/Ignored directories \/ paths/i);
    const extensionFiltersInput = screen.getByLabelText(/Skipped files \/ globs/i);

    fireEvent.change(ignorePatternsInput, { target: { value: 'dist,\nfrontend/dist' } });
    fireEvent.change(extensionFiltersInput, { target: { value: '*.min.js,\nfrontend/skip.ts' } });
    fireEvent.click(screen.getByRole('button', { name: /Save filters/i }));

    await waitFor(() => {
      expect(mockUpdatePatterns).toHaveBeenCalledWith({
        patterns: ['dist', 'frontend/dist'],
        extensionFilters: ['*.min.js', 'frontend/skip.ts'],
      });
    });
    expect(mockRefetchPatterns).toHaveBeenCalled();
  });

  it('restores default filter values before saving', async () => {
    render(<AdvancedTab enabled={true} />);

    fireEvent.click(screen.getByRole('button', { name: /Restore defaults/i }));

    expect(screen.getByLabelText(/Ignored directories \/ paths/i)).toHaveValue('dist\nbuild\ncoverage');
    expect(screen.getByLabelText(/Skipped files \/ globs/i)).toHaveValue('*.min.js\n*.map');

    fireEvent.click(screen.getByRole('button', { name: /Save filters/i }));

    await waitFor(() => {
      expect(mockUpdatePatterns).toHaveBeenCalledWith({
        patterns: ['dist', 'build', 'coverage'],
        extensionFilters: ['*.min.js', '*.map'],
      });
    });
  });

  it('blocks invalid filter entries before saving', async () => {
    render(<AdvancedTab enabled={true} />);

    fireEvent.change(screen.getByLabelText(/Ignored directories \/ paths/i), {
      target: { value: 'bad pattern!' },
    });
    fireEvent.click(screen.getByRole('button', { name: /Save filters/i }));

    expect(mockUpdatePatterns).not.toHaveBeenCalled();
    expect(screen.getByText(/Invalid ignore patterns/i)).toBeInTheDocument();
  });
});
```
```diff
@@ -5,13 +5,18 @@
 import { useState, useEffect } from 'react';
 import { useIntl } from 'react-intl';
-import { Save, RefreshCw, AlertTriangle, FileCode, AlertCircle } from 'lucide-react';
+import { Save, RefreshCw, AlertTriangle, FileCode, AlertCircle, Filter } from 'lucide-react';
 import { Card } from '@/components/ui/Card';
 import { Textarea } from '@/components/ui/Textarea';
 import { Button } from '@/components/ui/Button';
 import { Label } from '@/components/ui/Label';
 import { Badge } from '@/components/ui/Badge';
-import { useCodexLensEnv, useUpdateCodexLensEnv } from '@/hooks';
+import {
+  useCodexLensEnv,
+  useCodexLensIgnorePatterns,
+  useUpdateCodexLensEnv,
+  useUpdateIgnorePatterns,
+} from '@/hooks';
 import { useNotifications } from '@/hooks';
 import { cn } from '@/lib/utils';
 import { CcwToolsCard } from './CcwToolsCard';
```
```diff
@@ -22,6 +27,28 @@ interface AdvancedTabProps {
 interface FormErrors {
   env?: string;
+  ignorePatterns?: string;
+  extensionFilters?: string;
+}
+
+const FILTER_ENTRY_PATTERN = /^[-\w.*\\/]+$/;
+
+function parseListEntries(text: string): string[] {
+  return text
+    .split(/[\n,]/)
+    .map((entry) => entry.trim())
+    .filter(Boolean);
+}
+
+function normalizeListEntries(entries: string[]): string {
+  return entries
+    .map((entry) => entry.trim())
+    .filter(Boolean)
+    .join('\n');
+}
+
+function normalizeListText(text: string): string {
+  return normalizeListEntries(parseListEntries(text));
 }
 
 export function AdvancedTab({ enabled = true }: AdvancedTabProps) {
```
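The list helpers introduced in this hunk split on both newlines and commas, then trim, drop empties, and rejoin with newlines. Extracted verbatim as plain TypeScript, their behavior is easy to see; this standalone copy is only for illustration, the real definitions live in `AdvancedTab.tsx`.

```typescript
// Verbatim copies of the helpers from the hunk above, for demonstration.
function parseListEntries(text: string): string[] {
  return text
    .split(/[\n,]/)
    .map((entry) => entry.trim())
    .filter(Boolean);
}

function normalizeListEntries(entries: string[]): string {
  return entries
    .map((entry) => entry.trim())
    .filter(Boolean)
    .join('\n');
}

function normalizeListText(text: string): string {
  return normalizeListEntries(parseListEntries(text));
}
```

This is why the component's tests can feed mixed comma-and-newline input such as `'dist,\nfrontend/dist'` and still expect the clean array `['dist', 'frontend/dist']` to reach `updatePatterns`.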
```diff
@@ -37,14 +64,32 @@ export function AdvancedTab({ enabled = true }: AdvancedTabProps) {
     refetch,
   } = useCodexLensEnv({ enabled });
 
+  const {
+    patterns,
+    extensionFilters,
+    defaults,
+    isLoading: isLoadingPatterns,
+    error: patternsError,
+    refetch: refetchPatterns,
+  } = useCodexLensIgnorePatterns({ enabled });
+
   const { updateEnv, isUpdating } = useUpdateCodexLensEnv();
+  const { updatePatterns, isUpdating: isUpdatingPatterns } = useUpdateIgnorePatterns();
 
   // Form state
   const [envInput, setEnvInput] = useState('');
+  const [ignorePatternsInput, setIgnorePatternsInput] = useState('');
+  const [extensionFiltersInput, setExtensionFiltersInput] = useState('');
   const [errors, setErrors] = useState<FormErrors>({});
   const [hasChanges, setHasChanges] = useState(false);
+  const [hasFilterChanges, setHasFilterChanges] = useState(false);
   const [showWarning, setShowWarning] = useState(false);
+
+  const currentIgnorePatterns = patterns ?? [];
+  const currentExtensionFilters = extensionFilters ?? [];
+  const defaultIgnorePatterns = defaults?.patterns ?? [];
+  const defaultExtensionFilters = defaults?.extensionFilters ?? [];
 
   // Initialize form from env - handles both undefined (loading) and empty string (empty file)
   // The hook returns raw directly, so we check if it's been set (not undefined means data loaded)
   useEffect(() => {
```
```diff
@@ -58,6 +103,31 @@ export function AdvancedTab({ enabled = true }: AdvancedTabProps) {
     }
   }, [raw, isLoadingEnv]);
 
+  useEffect(() => {
+    if (!isLoadingPatterns) {
+      const nextIgnorePatterns = patterns ?? [];
+      const nextExtensionFilters = extensionFilters ?? [];
+      setIgnorePatternsInput(nextIgnorePatterns.join('\n'));
+      setExtensionFiltersInput(nextExtensionFilters.join('\n'));
+      setErrors((prev) => ({
+        ...prev,
+        ignorePatterns: undefined,
+        extensionFilters: undefined,
+      }));
+      setHasFilterChanges(false);
+    }
+  }, [extensionFilters, isLoadingPatterns, patterns]);
+
+  const updateFilterChangeState = (nextIgnorePatternsInput: string, nextExtensionFiltersInput: string) => {
+    const normalizedCurrentIgnorePatterns = normalizeListEntries(currentIgnorePatterns);
+    const normalizedCurrentExtensionFilters = normalizeListEntries(currentExtensionFilters);
+
+    setHasFilterChanges(
+      normalizeListText(nextIgnorePatternsInput) !== normalizedCurrentIgnorePatterns
+      || normalizeListText(nextExtensionFiltersInput) !== normalizedCurrentExtensionFilters
+    );
+  };
+
   const handleEnvChange = (value: string) => {
     setEnvInput(value);
     // Check if there are changes - compare with raw value (handle undefined as empty)
```
@@ -69,6 +139,22 @@ export function AdvancedTab({ enabled = true }: AdvancedTabProps) {
|
|||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
|
const handleIgnorePatternsChange = (value: string) => {
|
||||||
|
setIgnorePatternsInput(value);
|
||||||
|
updateFilterChangeState(value, extensionFiltersInput);
|
||||||
|
if (errors.ignorePatterns) {
|
||||||
|
setErrors((prev) => ({ ...prev, ignorePatterns: undefined }));
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const handleExtensionFiltersChange = (value: string) => {
|
||||||
|
setExtensionFiltersInput(value);
|
||||||
|
updateFilterChangeState(ignorePatternsInput, value);
|
||||||
|
if (errors.extensionFilters) {
|
||||||
|
setErrors((prev) => ({ ...prev, extensionFilters: undefined }));
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
const parseEnvVariables = (text: string): Record<string, string> => {
|
const parseEnvVariables = (text: string): Record<string, string> => {
|
||||||
const envObj: Record<string, string> = {};
|
const envObj: Record<string, string> = {};
|
||||||
const lines = text.split('\n');
|
const lines = text.split('\n');
|
||||||
@@ -101,10 +187,50 @@ export function AdvancedTab({ enabled = true }: AdvancedTabProps) {
|
|||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
setErrors(newErrors);
|
setErrors((prev) => ({ ...prev, env: newErrors.env }));
|
||||||
return Object.keys(newErrors).length === 0;
|
return Object.keys(newErrors).length === 0;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
const validateFilterForm = (): boolean => {
|
||||||
|
const nextErrors: Pick<FormErrors, 'ignorePatterns' | 'extensionFilters'> = {};
|
||||||
|
const parsedIgnorePatterns = parseListEntries(ignorePatternsInput);
|
||||||
|
const parsedExtensionFilters = parseListEntries(extensionFiltersInput);
|
||||||
|
|
||||||
|
const invalidIgnorePatterns = parsedIgnorePatterns.filter(
|
||||||
|
(entry) => !FILTER_ENTRY_PATTERN.test(entry)
|
||||||
|
);
|
||||||
|
if (invalidIgnorePatterns.length > 0) {
|
||||||
|
nextErrors.ignorePatterns = formatMessage(
|
||||||
|
{
|
||||||
|
id: 'codexlens.advanced.validation.invalidIgnorePatterns',
|
||||||
|
defaultMessage: 'Invalid ignore patterns: {values}',
|
||||||
|
},
|
||||||
|
{ values: invalidIgnorePatterns.join(', ') }
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
const invalidExtensionFilters = parsedExtensionFilters.filter(
|
||||||
|
(entry) => !FILTER_ENTRY_PATTERN.test(entry)
|
||||||
|
);
|
||||||
|
if (invalidExtensionFilters.length > 0) {
|
||||||
|
nextErrors.extensionFilters = formatMessage(
|
||||||
|
{
|
||||||
|
id: 'codexlens.advanced.validation.invalidExtensionFilters',
|
||||||
|
defaultMessage: 'Invalid file filters: {values}',
|
||||||
|
},
|
||||||
|
{ values: invalidExtensionFilters.join(', ') }
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
setErrors((prev) => ({
|
||||||
|
...prev,
|
||||||
|
ignorePatterns: nextErrors.ignorePatterns,
|
||||||
|
extensionFilters: nextErrors.extensionFilters,
|
||||||
|
}));
|
||||||
|
|
||||||
|
return Object.values(nextErrors).every((value) => !value);
|
||||||
|
};
|
||||||
|
|
||||||
const handleSave = async () => {
|
const handleSave = async () => {
|
||||||
if (!validateForm()) {
|
if (!validateForm()) {
|
||||||
return;
|
return;
|
||||||
@@ -138,12 +264,68 @@ export function AdvancedTab({ enabled = true }: AdvancedTabProps) {
|
|||||||
const handleReset = () => {
|
const handleReset = () => {
|
||||||
// Reset to current raw value (handle undefined as empty)
|
// Reset to current raw value (handle undefined as empty)
|
||||||
setEnvInput(raw ?? '');
|
setEnvInput(raw ?? '');
|
||||||
setErrors({});
|
setErrors((prev) => ({ ...prev, env: undefined }));
|
||||||
setHasChanges(false);
|
setHasChanges(false);
|
||||||
setShowWarning(false);
|
setShowWarning(false);
|
||||||
};
|
};
|
||||||
|
|
||||||
|
const handleSaveFilters = async () => {
|
||||||
|
if (!validateFilterForm()) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
const parsedIgnorePatterns = parseListEntries(ignorePatternsInput);
|
||||||
|
const parsedExtensionFilters = parseListEntries(extensionFiltersInput);
|
||||||
|
|
||||||
|
try {
|
||||||
|
const result = await updatePatterns({
|
||||||
|
patterns: parsedIgnorePatterns,
|
||||||
|
extensionFilters: parsedExtensionFilters,
|
||||||
|
});
|
||||||
|
|
||||||
|
if (result.success) {
|
||||||
|
setIgnorePatternsInput((result.patterns ?? parsedIgnorePatterns).join('\n'));
|
||||||
|
setExtensionFiltersInput((result.extensionFilters ?? parsedExtensionFilters).join('\n'));
|
||||||
|
setHasFilterChanges(false);
|
||||||
|
setErrors((prev) => ({
|
||||||
|
...prev,
|
||||||
|
ignorePatterns: undefined,
|
||||||
|
extensionFilters: undefined,
|
||||||
|
}));
|
||||||
|
await refetchPatterns();
|
||||||
|
}
|
||||||
|
} catch {
|
||||||
|
// Hook-level mutation already reports the error.
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const handleResetFilters = () => {
|
||||||
|
setIgnorePatternsInput(currentIgnorePatterns.join('\n'));
|
||||||
|
setExtensionFiltersInput(currentExtensionFilters.join('\n'));
|
||||||
|
setErrors((prev) => ({
|
||||||
|
...prev,
|
||||||
|
ignorePatterns: undefined,
|
||||||
|
extensionFilters: undefined,
|
||||||
|
}));
|
||||||
|
setHasFilterChanges(false);
|
||||||
|
};
|
||||||
|
|
||||||
|
const handleRestoreDefaultFilters = () => {
|
||||||
|
const defaultIgnoreText = defaultIgnorePatterns.join('\n');
|
||||||
|
const defaultExtensionText = defaultExtensionFilters.join('\n');
|
||||||
|
|
||||||
|
setIgnorePatternsInput(defaultIgnoreText);
|
||||||
|
setExtensionFiltersInput(defaultExtensionText);
|
||||||
|
setErrors((prev) => ({
|
||||||
|
...prev,
|
||||||
|
ignorePatterns: undefined,
|
||||||
|
extensionFilters: undefined,
|
||||||
|
}));
|
||||||
|
updateFilterChangeState(defaultIgnoreText, defaultExtensionText);
|
||||||
|
};
|
||||||
|
|
||||||
const isLoading = isLoadingEnv;
|
const isLoading = isLoadingEnv;
|
||||||
|
const isLoadingFilters = isLoadingPatterns;
|
||||||
|
|
||||||
// Get current env variables as array for display
|
// Get current env variables as array for display
|
||||||
const currentEnvVars = env
|
const currentEnvVars = env
|
||||||
@@ -242,6 +424,204 @@ export function AdvancedTab({ enabled = true }: AdvancedTabProps) {
|
|||||||
{/* CCW Tools Card */}
|
{/* CCW Tools Card */}
|
||||||
<CcwToolsCard />
|
<CcwToolsCard />
|
||||||
|
|
||||||
|
{/* Index Filters */}
|
||||||
|
<Card className="p-6">
|
||||||
|
<div className="flex flex-col gap-3 mb-4 lg:flex-row lg:items-center lg:justify-between">
|
||||||
|
<div className="flex items-center gap-2">
|
||||||
|
<Filter className="w-5 h-5 text-muted-foreground" />
|
||||||
|
<h3 className="text-lg font-semibold text-foreground">
|
||||||
|
{formatMessage({
|
||||||
|
id: 'codexlens.advanced.indexFilters',
|
||||||
|
defaultMessage: 'Index Filters',
|
||||||
|
})}
|
||||||
|
</h3>
|
||||||
|
</div>
|
||||||
|
<div className="flex flex-wrap items-center gap-2">
|
||||||
|
<Badge variant="outline" className="text-xs">
|
||||||
|
{formatMessage(
|
||||||
|
{
|
||||||
|
id: 'codexlens.advanced.ignorePatternCount',
|
||||||
|
defaultMessage: 'Directory filters: {count}',
|
||||||
|
},
|
||||||
|
{ count: currentIgnorePatterns.length }
|
||||||
|
)}
|
||||||
|
</Badge>
|
||||||
|
<Badge variant="outline" className="text-xs">
|
||||||
|
{formatMessage(
|
||||||
|
{
|
||||||
|
id: 'codexlens.advanced.extensionFilterCount',
|
||||||
|
defaultMessage: 'File filters: {count}',
|
||||||
|
},
|
||||||
|
{ count: currentExtensionFilters.length }
|
||||||
|
)}
|
||||||
|
</Badge>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div className="space-y-4">
|
||||||
|
{patternsError && (
|
||||||
|
<div className="rounded-md border border-destructive/20 bg-destructive/5 p-3">
|
||||||
|
<div className="flex items-start gap-2">
|
||||||
|
<AlertCircle className="w-4 h-4 mt-0.5 text-destructive" />
|
||||||
|
<div>
|
||||||
|
<p className="text-sm font-medium text-destructive">
|
||||||
|
{formatMessage({
|
||||||
|
id: 'codexlens.advanced.filtersLoadError',
|
||||||
|
defaultMessage: 'Unable to load current filter settings',
|
||||||
|
})}
|
||||||
|
</p>
|
||||||
|
<p className="text-xs text-destructive/80 mt-1">{patternsError.message}</p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
<div className="grid gap-4 xl:grid-cols-2">
|
||||||
|
<div className="space-y-2">
|
||||||
|
<Label htmlFor="ignore-patterns-input">
|
||||||
|
{formatMessage({
|
||||||
|
id: 'codexlens.advanced.ignorePatterns',
|
||||||
|
defaultMessage: 'Ignored directories / paths',
|
||||||
|
})}
|
||||||
|
</Label>
|
||||||
|
<Textarea
|
||||||
|
id="ignore-patterns-input"
|
||||||
|
value={ignorePatternsInput}
|
||||||
|
onChange={(event) => handleIgnorePatternsChange(event.target.value)}
|
||||||
|
placeholder={formatMessage({
|
||||||
|
id: 'codexlens.advanced.ignorePatternsPlaceholder',
|
||||||
|
defaultMessage: 'dist\nfrontend/dist\ncoverage',
|
||||||
|
})}
|
||||||
|
className={cn(
|
||||||
|
'min-h-[220px] font-mono text-sm',
|
||||||
|
errors.ignorePatterns && 'border-destructive focus-visible:ring-destructive'
|
||||||
|
)}
|
||||||
|
disabled={isLoadingFilters || isUpdatingPatterns}
|
||||||
|
/>
|
||||||
|
{errors.ignorePatterns && (
|
||||||
|
<p className="text-sm text-destructive">{errors.ignorePatterns}</p>
|
||||||
|
)}
|
||||||
|
<p className="text-xs text-muted-foreground">
|
||||||
|
{formatMessage({
|
||||||
|
id: 'codexlens.advanced.ignorePatternsHint',
|
||||||
|
defaultMessage: 'One entry per line. Supports exact names, relative paths, and glob patterns.',
|
||||||
|
})}
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div className="space-y-2">
|
||||||
|
<Label htmlFor="extension-filters-input">
|
||||||
|
{formatMessage({
|
||||||
|
id: 'codexlens.advanced.extensionFilters',
|
||||||
|
defaultMessage: 'Skipped files / globs',
|
||||||
|
})}
|
||||||
|
</Label>
|
||||||
|
<Textarea
|
||||||
|
id="extension-filters-input"
|
||||||
|
value={extensionFiltersInput}
|
||||||
|
onChange={(event) => handleExtensionFiltersChange(event.target.value)}
|
||||||
|
placeholder={formatMessage({
|
||||||
|
id: 'codexlens.advanced.extensionFiltersPlaceholder',
|
||||||
|
defaultMessage: '*.min.js\n*.map\npackage-lock.json',
|
||||||
|
})}
|
||||||
|
className={cn(
|
||||||
|
'min-h-[220px] font-mono text-sm',
|
||||||
|
errors.extensionFilters && 'border-destructive focus-visible:ring-destructive'
|
||||||
|
)}
|
||||||
|
disabled={isLoadingFilters || isUpdatingPatterns}
|
||||||
|
/>
|
||||||
|
{errors.extensionFilters && (
|
||||||
|
<p className="text-sm text-destructive">{errors.extensionFilters}</p>
|
||||||
|
)}
|
||||||
|
<p className="text-xs text-muted-foreground">
|
||||||
|
{formatMessage({
|
||||||
|
id: 'codexlens.advanced.extensionFiltersHint',
|
||||||
|
defaultMessage: 'Use this for generated or low-value files that should stay out of the index.',
|
||||||
|
})}
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div className="grid gap-3 rounded-md border border-border/60 bg-muted/30 p-3 md:grid-cols-2">
|
||||||
|
<div>
|
||||||
|
<p className="text-xs font-medium text-foreground">
|
||||||
|
{formatMessage({
|
||||||
|
id: 'codexlens.advanced.defaultIgnorePatterns',
|
||||||
|
defaultMessage: 'Default directory filters',
|
||||||
|
})}
|
||||||
|
</p>
|
||||||
|
<div className="mt-2 flex flex-wrap gap-2">
|
||||||
|
{defaultIgnorePatterns.slice(0, 6).map((pattern) => (
|
||||||
|
<Badge key={pattern} variant="secondary" className="font-mono text-xs">
|
||||||
|
{pattern}
|
||||||
|
</Badge>
|
||||||
|
))}
|
||||||
|
{defaultIgnorePatterns.length > 6 && (
|
||||||
|
<Badge variant="outline" className="text-xs">
|
||||||
|
+{defaultIgnorePatterns.length - 6}
|
||||||
|
</Badge>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div>
|
||||||
|
<p className="text-xs font-medium text-foreground">
|
||||||
|
{formatMessage({
|
||||||
|
id: 'codexlens.advanced.defaultExtensionFilters',
|
||||||
|
defaultMessage: 'Default file filters',
|
||||||
|
})}
|
||||||
|
</p>
|
||||||
|
<div className="mt-2 flex flex-wrap gap-2">
|
||||||
|
{defaultExtensionFilters.slice(0, 6).map((pattern) => (
|
||||||
|
<Badge key={pattern} variant="secondary" className="font-mono text-xs">
|
||||||
|
{pattern}
|
||||||
|
</Badge>
|
||||||
|
))}
|
||||||
|
{defaultExtensionFilters.length > 6 && (
|
||||||
|
<Badge variant="outline" className="text-xs">
|
||||||
|
+{defaultExtensionFilters.length - 6}
|
||||||
|
</Badge>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div className="flex flex-wrap items-center gap-2">
|
||||||
|
<Button
|
||||||
|
onClick={handleSaveFilters}
|
||||||
|
disabled={isLoadingFilters || isUpdatingPatterns || !hasFilterChanges}
|
||||||
|
>
|
||||||
|
<Save className={cn('w-4 h-4 mr-2', isUpdatingPatterns && 'animate-spin')} />
|
||||||
|
{isUpdatingPatterns
|
||||||
|
? formatMessage({ id: 'codexlens.advanced.saving', defaultMessage: 'Saving...' })
|
||||||
|
: formatMessage({
|
||||||
|
id: 'codexlens.advanced.saveFilters',
|
||||||
|
defaultMessage: 'Save filters',
|
||||||
|
})
|
||||||
|
}
|
||||||
|
</Button>
|
||||||
|
<Button
|
||||||
|
variant="outline"
|
||||||
|
onClick={handleResetFilters}
|
||||||
|
disabled={isLoadingFilters || !hasFilterChanges}
|
||||||
|
>
|
||||||
|
<RefreshCw className="w-4 h-4 mr-2" />
|
||||||
|
{formatMessage({ id: 'codexlens.advanced.reset', defaultMessage: 'Reset' })}
|
||||||
|
</Button>
|
||||||
|
<Button
|
||||||
|
variant="outline"
|
||||||
|
onClick={handleRestoreDefaultFilters}
|
||||||
|
disabled={isLoadingFilters || isUpdatingPatterns}
|
||||||
|
>
|
||||||
|
{formatMessage({
|
||||||
|
id: 'codexlens.advanced.restoreDefaults',
|
||||||
|
defaultMessage: 'Restore defaults',
|
||||||
|
})}
|
||||||
|
</Button>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</Card>
|
||||||
|
|
||||||
{/* Environment Variables Editor */}
|
{/* Environment Variables Editor */}
|
||||||
<Card className="p-6">
|
<Card className="p-6">
|
||||||
<div className="flex items-center justify-between mb-4">
|
<div className="flex items-center justify-between mb-4">
|
||||||
@@ -212,7 +212,7 @@ export function McpServerDialog({
   const { formatMessage } = useIntl();
   const queryClient = useQueryClient();
   const projectPath = useWorkflowStore(selectProjectPath);
-  const { error: showError } = useNotifications();
+  const { error: showError, success: showSuccess } = useNotifications();
 
   // Fetch templates from backend
   const { templates, isLoading: templatesLoading } = useMcpTemplates();
@@ -241,6 +241,10 @@ export function McpServerDialog({
   const [saveAsTemplate, setSaveAsTemplate] = useState(false);
   const projectConfigType: McpProjectConfigType = configType === 'claude-json' ? 'claude' : 'mcp';
 
+  // JSON import mode state
+  const [inputMode, setInputMode] = useState<'form' | 'json'>('form');
+  const [jsonInput, setJsonInput] = useState('');
+
   // Helper to detect transport type from server data
   const detectTransportType = useCallback((serverData: McpServer | undefined): McpTransportType => {
     if (!serverData) return 'stdio';
@@ -249,6 +253,73 @@ export function McpServerDialog({
     return 'stdio';
   }, []);
 
+  // Parse JSON config and populate form
+  const parseJsonConfig = useCallback(() => {
+    try {
+      const config = JSON.parse(jsonInput);
+
+      // Detect transport type based on config structure
+      if (config.url) {
+        // HTTP transport
+        setTransportType('http');
+
+        // Parse headers
+        const headers: HttpHeader[] = [];
+        if (config.headers && typeof config.headers === 'object') {
+          Object.entries(config.headers).forEach(([name, value], idx) => {
+            headers.push({
+              id: `header-${Date.now()}-${idx}`,
+              name,
+              value: String(value),
+              isEnvVar: false,
+            });
+          });
+        }
+
+        setFormData(prev => ({
+          ...prev,
+          url: config.url || '',
+          headers,
+          bearerTokenEnvVar: config.bearer_token_env_var || config.bearerTokenEnvVar || '',
+        }));
+      } else {
+        // STDIO transport
+        setTransportType('stdio');
+
+        const args = Array.isArray(config.args) ? config.args : [];
+        const env = config.env && typeof config.env === 'object' ? config.env : {};
+
+        setFormData(prev => ({
+          ...prev,
+          command: config.command || '',
+          args,
+          env,
+        }));
+
+        setArgsInput(args.join(', '));
+        setEnvInput(
+          Object.entries(env)
+            .map(([k, v]) => `${k}=${v}`)
+            .join('\n')
+        );
+      }
+
+      // Switch to form mode to show parsed data
+      setInputMode('form');
+      setErrors({});
+      showSuccess(
+        formatMessage({ id: 'mcp.dialog.json.parseSuccess' }),
+        formatMessage({ id: 'mcp.dialog.json.parseSuccessDesc' })
+      );
+    } catch (error) {
+      setErrors({
+        name: formatMessage({ id: 'mcp.dialog.json.parseError' }, {
+          error: error instanceof Error ? error.message : 'Invalid JSON'
+        })
+      });
+    }
+  }, [jsonInput, formatMessage, showSuccess]);
+
   // Initialize form from server prop (edit mode)
   useEffect(() => {
     if (server && mode === 'edit') {
@@ -578,9 +649,96 @@ export function McpServerDialog({
           </DialogTitle>
         </DialogHeader>
 
+        {/* Input Mode Switcher - Only in add mode */}
+        {mode === 'add' && (
+          <div className="flex gap-2 border-b pb-3">
+            <Button
+              type="button"
+              variant={inputMode === 'form' ? 'default' : 'outline'}
+              size="sm"
+              onClick={() => setInputMode('form')}
+              className="flex-1"
+            >
+              {formatMessage({ id: 'mcp.dialog.mode.form' })}
+            </Button>
+            <Button
+              type="button"
+              variant={inputMode === 'json' ? 'default' : 'outline'}
+              size="sm"
+              onClick={() => setInputMode('json')}
+              className="flex-1"
+            >
+              {formatMessage({ id: 'mcp.dialog.mode.json' })}
+            </Button>
+          </div>
+        )}
+
         <div className="space-y-4">
-          {/* Template Selector - Only for STDIO */}
-          {transportType === 'stdio' && (
+          {/* JSON Input Mode */}
+          {inputMode === 'json' ? (
+            <div className="space-y-4">
+              <div className="space-y-2">
+                <label className="text-sm font-medium text-foreground">
+                  {formatMessage({ id: 'mcp.dialog.json.label' })}
+                </label>
+                <textarea
+                  value={jsonInput}
+                  onChange={(e) => setJsonInput(e.target.value)}
+                  placeholder={formatMessage({ id: 'mcp.dialog.json.placeholder' })}
+                  className={cn(
+                    'flex min-h-[300px] w-full rounded-md border border-input bg-background px-3 py-2 text-sm ring-offset-background placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50 font-mono',
+                    errors.name && 'border-destructive focus-visible:ring-destructive'
+                  )}
+                />
+                {errors.name && (
+                  <p className="text-sm text-destructive">{errors.name}</p>
+                )}
+                <p className="text-xs text-muted-foreground">
+                  {formatMessage({ id: 'mcp.dialog.json.hint' })}
+                </p>
+              </div>
+
+              {/* Example JSON */}
+              <div className="space-y-2">
+                <label className="text-sm font-medium text-foreground">
+                  {formatMessage({ id: 'mcp.dialog.json.example' })}
+                </label>
+                <div className="bg-muted p-3 rounded-md">
+                  <p className="text-xs font-medium mb-2">STDIO:</p>
+                  <pre className="text-xs overflow-x-auto">
+{`{
+  "command": "npx",
+  "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path"],
+  "env": {
+    "API_KEY": "your-key"
+  }
+}`}
+                  </pre>
+                  <p className="text-xs font-medium mt-3 mb-2">HTTP:</p>
+                  <pre className="text-xs overflow-x-auto">
+{`{
+  "url": "http://localhost:3000",
+  "headers": {
+    "Authorization": "Bearer token"
+  }
+}`}
+                  </pre>
+                </div>
+              </div>
+
+              <Button
+                type="button"
+                onClick={parseJsonConfig}
+                disabled={!jsonInput.trim()}
+                className="w-full"
+              >
+                {formatMessage({ id: 'mcp.dialog.json.parse' })}
+              </Button>
+            </div>
+          ) : (
+            <>
+              {/* Template Selector - Only for STDIO */}
+              {transportType === 'stdio' && (
                 <div className="space-y-2">
                   <label className="text-sm font-medium text-foreground">
                     {formatMessage({ id: 'mcp.dialog.form.template' })}
@@ -901,6 +1059,8 @@ export function McpServerDialog({
                   </label>
                 </div>
               )}
+            </>
+          )}
         </div>
 
         <DialogFooter>
@@ -1,4 +1,4 @@
-import { describe, expect, it, vi } from 'vitest';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
 import {
   fetchMcpServers,
   toggleMcpServer,
@@ -26,7 +29 @@ function getLastFetchCall(fetchMock: any) {
   return calls[calls.length - 1] as [RequestInfo | URL, RequestInit | undefined];
 }
 
+const TEST_CSRF_TOKEN = 'test-csrf-token';
+
+function mockFetchWithCsrf(
+  handler: (input: RequestInfo | URL, init?: RequestInit) => Response | Promise<Response>
+) {
+  return vi.spyOn(globalThis, 'fetch').mockImplementation(async (input, init) => {
+    if (input === '/api/csrf-token') {
+      return jsonResponse({ csrfToken: TEST_CSRF_TOKEN });
+    }
+
+    return handler(input, init);
+  });
+}
+
 describe('MCP API (frontend ↔ backend contract)', () => {
+  beforeEach(() => {
+    vi.restoreAllMocks();
+    vi.spyOn(console, 'warn').mockImplementation(() => {});
+  });
+
+  afterEach(() => {
+    vi.restoreAllMocks();
+  });
   it('fetchMcpServers derives lists from /api/mcp-config and computes enabled from disabledMcpServers', async () => {
     const fetchMock = vi.spyOn(globalThis, 'fetch').mockResolvedValue(
       jsonResponse({
@@ -58,7 +80,8 @@ describe('MCP API (frontend ↔ backend contract)', () => {
     expect(fetchMock.mock.calls[0]?.[0]).toBe('/api/mcp-config');
 
     expect(result.global.map((s) => s.name).sort()).toEqual(['global1', 'globalDup']);
-    expect(result.project.map((s) => s.name)).toEqual(['projOnly']);
+    expect(result.project.map((s) => s.name)).toEqual(['projOnly', 'globalDup', 'entDup']);
+    expect(result.conflicts.map((c) => c.name)).toEqual(['globalDup']);
 
     const global1 = result.global.find((s) => s.name === 'global1');
     expect(global1?.enabled).toBe(false);
@@ -76,9 +99,7 @@ describe('MCP API (frontend ↔ backend contract)', () => {
   });
 
   it('toggleMcpServer uses /api/mcp-toggle with { projectPath, serverName, enable }', async () => {
-    const fetchMock = vi
-      .spyOn(globalThis, 'fetch')
-      .mockImplementation(async (input, _init) => {
+    const fetchMock = mockFetchWithCsrf(async (input, _init) => {
       if (input === '/api/mcp-toggle') {
         return jsonResponse({ success: true, serverName: 'global1', enabled: false });
       }
@@ -111,7 +132,7 @@ describe('MCP API (frontend ↔ backend contract)', () => {
   });
 
   it('deleteMcpServer calls the correct backend endpoint for project/global scopes', async () => {
-    const fetchMock = vi.spyOn(globalThis, 'fetch').mockImplementation(async (input) => {
+    const fetchMock = mockFetchWithCsrf(async (input) => {
       if (input === '/api/mcp-remove-global-server') {
         return jsonResponse({ success: true });
       }
@@ -129,9 +150,7 @@ describe('MCP API (frontend ↔ backend contract)', () => {
   });
 
   it('createMcpServer (project) uses /api/mcp-copy-server and includes serverName + serverConfig', async () => {
-    const fetchMock = vi
-      .spyOn(globalThis, 'fetch')
-      .mockImplementation(async (input) => {
+    const fetchMock = mockFetchWithCsrf(async (input) => {
       if (input === '/api/mcp-copy-server') {
         return jsonResponse({ success: true });
       }
@@ -181,7 +200,7 @@ describe('MCP API (frontend ↔ backend contract)', () => {
   });
 
   it('updateMcpServer (global) upserts via /api/mcp-add-global-server', async () => {
-    const fetchMock = vi.spyOn(globalThis, 'fetch').mockImplementation(async (input) => {
+    const fetchMock = mockFetchWithCsrf(async (input) => {
       if (input === '/api/mcp-add-global-server') {
         return jsonResponse({ success: true });
       }
@@ -232,7 +251,7 @@ describe('MCP API (frontend ↔ backend contract)', () => {
   });
 
   it('crossCliCopy codex->claude copies via /api/mcp-copy-server per server', async () => {
-    const fetchMock = vi.spyOn(globalThis, 'fetch').mockImplementation(async (input) => {
+    const fetchMock = mockFetchWithCsrf(async (input) => {
       if (input === '/api/codex-mcp-config') {
         return jsonResponse({ servers: { s1: { command: 'node' } }, configPath: 'x', exists: true });
       }
@@ -115,6 +115,20 @@
         "addHeader": "Add Header"
       }
     },
+    "mode": {
+      "form": "Form Mode",
+      "json": "JSON Mode"
+    },
+    "json": {
+      "label": "JSON Configuration",
+      "placeholder": "Paste MCP server JSON configuration...",
+      "hint": "Paste complete MCP server configuration JSON, then click Parse button",
+      "example": "Example Format",
+      "parse": "Parse JSON",
+      "parseSuccess": "JSON Parsed Successfully",
+      "parseSuccessDesc": "Configuration has been populated to the form, please review and save",
+      "parseError": "JSON Parse Error: {error}"
+    },
     "templates": {
       "npx-stdio": "NPX STDIO",
       "python-stdio": "Python STDIO",
@@ -104,6 +104,20 @@
         "addHeader": "添加请求头"
       }
     },
+    "mode": {
+      "form": "表单模式",
+      "json": "JSON 模式"
+    },
+    "json": {
+      "label": "JSON 配置",
+      "placeholder": "粘贴 MCP 服务器 JSON 配置...",
+      "hint": "粘贴完整的 MCP 服务器配置 JSON,然后点击解析按钮",
+      "example": "示例格式",
+      "parse": "解析 JSON",
+      "parseSuccess": "JSON 解析成功",
+      "parseSuccessDesc": "配置已填充到表单中,请检查并保存",
+      "parseError": "JSON 解析失败:{error}"
+    },
     "templates": {
       "npx-stdio": "NPX STDIO",
       "python-stdio": "Python STDIO",
|
|||||||
671
ccw/src/tools/generate-ddd-docs.ts
Normal file
671
ccw/src/tools/generate-ddd-docs.ts
Normal file
@@ -0,0 +1,671 @@
/**
 * Generate DDD Docs Tool
 * Generate DDD documentation from doc-index.json with deterministic output paths.
 * Supports 5 strategies: component (L3), feature (L2), index, overview, schema
 */

import { z } from 'zod';
import type { ToolSchema, ToolResult } from '../types/tool.js';
import { existsSync, readFileSync, mkdirSync, writeFileSync, unlinkSync } from 'fs';
import { join, resolve, dirname } from 'path';
import { execSync } from 'child_process';
import { tmpdir } from 'os';
import { getSecondaryModel } from './cli-config-manager.js';

// Default doc-index path relative to project root
const DEFAULT_DOC_INDEX_PATH = '.workflow/.doc-index/doc-index.json';

// Define Zod schema for validation
const ParamsSchema = z.object({
  strategy: z.enum(['component', 'feature', 'index', 'overview', 'schema']),
  entityId: z.string().optional(),
  docIndexPath: z.string().default(DEFAULT_DOC_INDEX_PATH),
  tool: z.enum(['gemini', 'qwen', 'codex']).default('gemini'),
  model: z.string().optional(),
});

type Params = z.infer<typeof ParamsSchema>;

interface ToolOutput {
  success: boolean;
  strategy: string;
  entity_id?: string;
  output_path: string;
  tool: string;
  model?: string;
  duration_seconds?: number;
  message?: string;
  error?: string;
}

// --- doc-index.json type definitions ---

interface CodeLocation {
  path: string;
  symbols?: string[];
  lineRange?: [number, number];
}

interface TechnicalComponent {
  id: string;
  name: string;
  type: string;
  responsibility?: string;
  adrId?: string | null;
  docPath?: string;
  codeLocations?: CodeLocation[];
  dependsOn?: string[];
  featureIds?: string[];
  actionIds?: string[];
}

interface Feature {
  id: string;
  name: string;
  epicId?: string | null;
  status?: string;
  docPath?: string;
  requirementIds?: string[];
  techComponentIds?: string[];
  tags?: string[];
}

interface DocIndex {
  version?: string;
  schema_version?: string;
  project?: string;
  build_path?: string;
  last_updated?: string;
  features?: Feature[];
  technicalComponents?: TechnicalComponent[];
  requirements?: Array<{ id: string; title?: string; priority?: string }>;
  architectureDecisions?: Array<{ id: string; title?: string; componentIds?: string[] }>;
  actions?: Array<{ id: string; description?: string; type?: string; timestamp?: string; affectedComponents?: string[]; affectedFeatures?: string[] }>;
  glossary?: Array<{ id: string; term: string; definition?: string }>;
  [key: string]: unknown;
}

// --- Core functions ---

/**
 * Load and parse doc-index.json
 */
function loadDocIndex(indexPath: string): DocIndex {
  const absPath = resolve(process.cwd(), indexPath);
  if (!existsSync(absPath)) {
    throw new Error(`doc-index.json not found at: ${absPath}. Run /ddd:scan or /ddd:index-build first.`);
  }
  const raw = readFileSync(absPath, 'utf8');
  return JSON.parse(raw) as DocIndex;
}

/**
 * Calculate deterministic output path based on strategy and entityId.
 * All paths are relative to the doc-index directory.
 */
function calculateDddOutputPath(
  strategy: string,
  entityId: string | undefined,
  docIndexDir: string
): string {
  switch (strategy) {
    case 'component': {
      if (!entityId) throw new Error('entityId is required for component strategy');
      // tech-{slug} -> {slug}.md
      const slug = entityId.replace(/^tech-/, '');
      return join(docIndexDir, 'tech-registry', `${slug}.md`);
    }
    case 'feature': {
      if (!entityId) throw new Error('entityId is required for feature strategy');
      // feat-{slug} -> {slug}.md
      const slug = entityId.replace(/^feat-/, '');
      return join(docIndexDir, 'feature-maps', `${slug}.md`);
    }
    case 'index':
      // Generate _index.md files - entityId determines which subdirectory
      if (entityId) {
        return join(docIndexDir, entityId, '_index.md');
      }
      // Default: generate all index files (return the doc-index dir itself)
      return docIndexDir;
    case 'overview':
      if (entityId === 'architecture') {
        return join(docIndexDir, 'ARCHITECTURE.md');
      }
      return join(docIndexDir, 'README.md');
    case 'schema':
      return join(docIndexDir, 'SCHEMA.md');
    default:
      throw new Error(`Unknown strategy: ${strategy}`);
  }
}

/**
 * Build YAML frontmatter string from entity metadata
 */
function buildFrontmatter(
  strategy: string,
  entity: TechnicalComponent | Feature | null,
  docIndex: DocIndex
): string {
  const now = new Date().toISOString();

  switch (strategy) {
    case 'component': {
      const comp = entity as TechnicalComponent;
      if (!comp) return '';
      const featureIds = comp.featureIds || [];
      const codeLocations = (comp.codeLocations || []).map(loc => {
        const symbolsStr = loc.symbols && loc.symbols.length > 0
          ? `\n    symbols: [${loc.symbols.join(', ')}]`
          : '';
        return `  - path: ${loc.path}${symbolsStr}`;
      }).join('\n');

      return [
        '---',
        'layer: 3',
        `component_id: ${comp.id}`,
        `name: ${comp.name}`,
        `type: ${comp.type || 'unknown'}`,
        `features: [${featureIds.join(', ')}]`,
        codeLocations ? `code_locations:\n${codeLocations}` : 'code_locations: []',
        `generated_at: ${now}`,
        '---',
      ].join('\n');
    }

    case 'feature': {
      const feat = entity as Feature;
      if (!feat) return '';
      const reqIds = feat.requirementIds || [];
      const techIds = feat.techComponentIds || [];
      const tags = feat.tags || [];

      return [
        '---',
        'layer: 2',
        `feature_id: ${feat.id}`,
        `name: ${feat.name}`,
        `epic_id: ${feat.epicId || 'null'}`,
        `status: ${feat.status || 'planned'}`,
        `requirements: [${reqIds.join(', ')}]`,
        `components: [${techIds.join(', ')}]`,
        `depends_on_layer3: [${techIds.join(', ')}]`,
        `tags: [${tags.join(', ')}]`,
        `generated_at: ${now}`,
        '---',
      ].join('\n');
    }

    case 'index':
    case 'overview': {
      const featureIds = (docIndex.features || []).map(f => f.id);
      return [
        '---',
        'layer: 1',
        `depends_on_layer2: [${featureIds.join(', ')}]`,
        `generated_at: ${now}`,
        '---',
      ].join('\n');
    }

    case 'schema':
      return [
        '---',
        `schema_version: ${docIndex.schema_version || docIndex.version || '1.0'}`,
        `generated_at: ${now}`,
        '---',
      ].join('\n');

    default:
      return '';
  }
}

/**
 * Build CLI prompt combining frontmatter, content instructions, and code context
 */
function buildDddPrompt(
  strategy: string,
  entity: TechnicalComponent | Feature | null,
  frontmatter: string,
  docIndex: DocIndex,
  outputPath: string
): string {
  const absOutputPath = resolve(process.cwd(), outputPath);

  switch (strategy) {
    case 'component': {
      const comp = entity as TechnicalComponent;
      const contextPaths = (comp.codeLocations || []).map(loc => `@${loc.path}`).join(' ');
      // Build change history from actions
      const compActions = (docIndex.actions || [])
        .filter(a => (a.affectedComponents || []).includes(comp.id))
        .map(a => `- ${a.timestamp?.split('T')[0] || 'unknown'} | ${a.type || 'change'} | ${a.description || a.id}`)
        .join('\n');
      const changeHistoryBlock = compActions
        ? `\n\nChange History (include as "## Change History" section):\n${compActions}`
        : '';
      return `PURPOSE: Generate component documentation for ${comp.name}
TASK:
- Document component purpose and responsibility
- List exported symbols (classes, functions, types)
- Document dependencies (internal and external)
- Include code examples for key APIs
- Document integration points with other components
- Include change history timeline
MODE: write
CONTEXT: ${contextPaths || '@**/*'}
EXPECTED: Markdown file with: Overview, API Reference, Dependencies, Usage Examples, Change History
CONSTRAINTS: Focus on public API | Include type signatures

OUTPUT FILE: ${absOutputPath}

The file MUST start with this exact frontmatter:

${frontmatter}

Sections to include after frontmatter:
- Responsibility
- Code Locations
- Related Requirements
- Architecture Decisions
- Dependencies (in/out)
- Change History${changeHistoryBlock}`;
    }

    case 'feature': {
      const feat = entity as Feature;
      const techIds = feat.techComponentIds || [];
      const componentDocs = techIds
        .map(id => {
          const slug = id.replace(/^tech-/, '');
          return `@.workflow/.doc-index/tech-registry/${slug}.md`;
        })
        .join(' ');
      // Build change history from actions
      const featActions = (docIndex.actions || [])
        .filter(a => (a.affectedFeatures || []).includes(feat.id))
        .map(a => `- ${a.timestamp?.split('T')[0] || 'unknown'} | ${a.type || 'change'} | ${a.description || a.id}`)
        .join('\n');
      const featChangeHistoryBlock = featActions
        ? `\n\nChange History (include as "## Change History" section):\n${featActions}`
        : '';
      return `PURPOSE: Generate feature documentation for ${feat.name}
TASK:
- Describe feature purpose and business value
- List requirements (from requirementIds)
- Document components involved (from techComponentIds)
- Include architecture decisions (from adrIds)
- Provide integration guide
- Include change history timeline
MODE: write
CONTEXT: ${componentDocs || '@.workflow/.doc-index/tech-registry/*.md'}
EXPECTED: Markdown file with: Overview, Requirements, Components, Architecture, Integration, Change History
CONSTRAINTS: Reference Layer 3 component docs | Business-focused language

OUTPUT FILE: ${absOutputPath}

The file MUST start with this exact frontmatter:

${frontmatter}

Sections to include after frontmatter:
- Overview
- Requirements (with mapping status)
- Technical Components
- Architecture Decisions
- Change History${featChangeHistoryBlock}`;
    }

    case 'index': {
      const docIndexDir = dirname(resolve(process.cwd(), outputPath));
      const parentDir = dirname(docIndexDir);
      return `PURPOSE: Generate index document for ${docIndexDir}
TASK:
- List all entries in this directory with brief descriptions
- Create a navigable catalog with links to each document
- Include status/type columns where applicable
MODE: write
CONTEXT: @${parentDir}/doc-index.json
EXPECTED: Markdown index file with: table of entries, descriptions, links
CONSTRAINTS: Catalog format | Link to sibling documents

OUTPUT FILE: ${absOutputPath}

The file MUST start with this exact frontmatter:

${frontmatter}`;
    }

    case 'overview': {
      const isArchitecture = outputPath.endsWith('ARCHITECTURE.md');
      if (isArchitecture) {
        return `PURPOSE: Generate architecture overview document
TASK:
- System design overview
- Component relationships and dependencies
- Key architecture decisions (from ADRs)
- Technology stack
MODE: write
CONTEXT: @.workflow/.doc-index/doc-index.json @.workflow/.doc-index/tech-registry/*.md
EXPECTED: ARCHITECTURE.md with: System Design, Component Diagram, ADRs, Tech Stack
CONSTRAINTS: Architecture-focused | Reference component docs for details

OUTPUT FILE: ${absOutputPath}

The file MUST start with this exact frontmatter:

${frontmatter}`;
      }
      return `PURPOSE: Generate project README with overview and navigation
TASK:
- Project summary and purpose
- Quick start guide
- Navigation to features, components, and architecture
- Link to doc-index.json
MODE: write
CONTEXT: @.workflow/.doc-index/doc-index.json @.workflow/.doc-index/feature-maps/_index.md
EXPECTED: README.md with: Overview, Quick Start, Navigation, Links
CONSTRAINTS: High-level only | Entry point for new developers

OUTPUT FILE: ${absOutputPath}

The file MUST start with this exact frontmatter:

${frontmatter}`;
    }

    case 'schema': {
      return `PURPOSE: Document doc-index.json schema structure and versioning
TASK:
- Document current schema structure (all fields)
- Define versioning policy (semver: major.minor)
- Document migration protocol for version upgrades
- Provide examples for each schema section
MODE: write
CONTEXT: @.workflow/.doc-index/doc-index.json
EXPECTED: SCHEMA.md with: Schema Structure, Versioning Policy, Migration Protocol, Examples
CONSTRAINTS: Complete field documentation | Clear migration steps

OUTPUT FILE: ${absOutputPath}

The file MUST start with this exact frontmatter:

${frontmatter}`;
    }

    default:
      throw new Error(`Unknown strategy: ${strategy}`);
  }
}

/**
 * Create temporary prompt file and return path
 */
function createPromptFile(prompt: string): string {
  const timestamp = Date.now();
  const randomSuffix = Math.random().toString(36).substring(2, 8);
  const promptFile = join(tmpdir(), `ddd-docs-prompt-${timestamp}-${randomSuffix}.txt`);
  writeFileSync(promptFile, prompt, 'utf8');
  return promptFile;
}

/**
 * Build CLI command using stdin piping
 */
function buildCliCommand(tool: string, promptFile: string, model: string): string {
  const normalizedPath = promptFile.replace(/\\/g, '/');
  const isWindows = process.platform === 'win32';

  const catCmd = isWindows ? `Get-Content -Raw "${normalizedPath}" | ` : `cat "${normalizedPath}" | `;
  const modelFlag = model ? ` -m "${model}"` : '';

  switch (tool) {
    case 'qwen':
      return `${catCmd}qwen${modelFlag} --yolo`;
    case 'codex':
      if (isWindows) {
        return `codex --full-auto exec (Get-Content -Raw "${normalizedPath}")${modelFlag} --skip-git-repo-check -s danger-full-access`;
      }
      return `codex --full-auto exec "$(cat "${normalizedPath}")"${modelFlag} --skip-git-repo-check -s danger-full-access`;
    case 'gemini':
    default:
      return `${catCmd}gemini${modelFlag} --yolo`;
  }
}

/**
 * Resolve entity from doc-index based on strategy and entityId
 */
function resolveEntity(
  strategy: string,
  entityId: string | undefined,
  docIndex: DocIndex
): TechnicalComponent | Feature | null {
  if (strategy === 'component') {
    if (!entityId) throw new Error('entityId is required for component strategy');
    const comp = (docIndex.technicalComponents || []).find(c => c.id === entityId);
    if (!comp) throw new Error(`Component not found in doc-index: ${entityId}`);
    return comp;
  }

  if (strategy === 'feature') {
    if (!entityId) throw new Error('entityId is required for feature strategy');
    const feat = (docIndex.features || []).find(f => f.id === entityId);
    if (!feat) throw new Error(`Feature not found in doc-index: ${entityId}`);
    return feat;
  }

  // index, overview, schema do not require a specific entity
  return null;
}

/**
 * For the index strategy, generate _index.md for multiple directories
 */
function getIndexTargets(entityId: string | undefined): string[] {
  if (entityId) {
    return [entityId];
  }
  // Default: all standard subdirectories
  return ['feature-maps', 'tech-registry', 'action-logs', 'planning'];
}

// Tool schema for MCP
export const schema: ToolSchema = {
  name: 'generate_ddd_docs',
  description: `Generate DDD documentation from doc-index.json with deterministic output paths.

Strategies:
- component: Layer 3 technical component doc (tech-registry/{slug}.md)
- feature: Layer 2 feature map doc (feature-maps/{slug}.md)
- index: Layer 1 _index.md catalog files for subdirectories
- overview: Layer 1 README.md or ARCHITECTURE.md
- schema: SCHEMA.md documenting doc-index.json structure

Requires doc-index.json from /ddd:scan or /ddd:index-build.
Output: .workflow/.doc-index/...`,
  inputSchema: {
    type: 'object',
    properties: {
      strategy: {
        type: 'string',
        enum: ['component', 'feature', 'index', 'overview', 'schema'],
        description: 'Document generation strategy: component (L3), feature (L2), index, overview, schema (L1)'
      },
      entityId: {
        type: 'string',
        description: 'Entity ID from doc-index.json (required for component/feature, optional for index/overview). For overview: "architecture" to generate ARCHITECTURE.md, omit for README.md. For index: subdirectory name or omit for all.'
      },
      docIndexPath: {
        type: 'string',
        description: 'Path to doc-index.json (default: .workflow/.doc-index/doc-index.json)',
        default: '.workflow/.doc-index/doc-index.json'
      },
      tool: {
        type: 'string',
        enum: ['gemini', 'qwen', 'codex'],
        description: 'CLI tool to use (default: gemini)',
        default: 'gemini'
      },
      model: {
        type: 'string',
        description: 'Model name (optional, uses tool defaults)'
      }
    },
    required: ['strategy']
  }
};

// Handler function
export async function handler(params: Record<string, unknown>): Promise<ToolResult<ToolOutput>> {
  const parsed = ParamsSchema.safeParse(params);
  if (!parsed.success) {
    return { success: false, error: `Invalid params: ${parsed.error.message}` };
  }

  const { strategy, entityId, docIndexPath, tool, model } = parsed.data;

  try {
    // Load doc-index.json
    const docIndex = loadDocIndex(docIndexPath);
    const docIndexDir = dirname(resolve(process.cwd(), docIndexPath));

    // Resolve model
    let actualModel = model || '';
    if (!actualModel) {
      try {
        actualModel = getSecondaryModel(process.cwd(), tool);
      } catch {
        actualModel = '';
      }
    }

    // Handle index strategy separately (may generate multiple files)
    if (strategy === 'index') {
      const targets = getIndexTargets(entityId);
      const results: string[] = [];

      for (const target of targets) {
        const outputPath = join(docIndexDir, target, '_index.md');
        const absOutputDir = dirname(resolve(process.cwd(), outputPath));

        // Ensure directory exists
        mkdirSync(absOutputDir, { recursive: true });

        const frontmatter = buildFrontmatter('index', null, docIndex);
        const prompt = buildDddPrompt('index', null, frontmatter, docIndex, outputPath);
        const promptFile = createPromptFile(prompt);
        const command = buildCliCommand(tool, promptFile, actualModel);

        console.log(`[DDD] Generating index: ${target}/_index.md`);

        try {
          const startTime = Date.now();
          execSync(command, {
            cwd: docIndexDir,
            encoding: 'utf8',
            stdio: 'inherit',
            timeout: 600000,
            shell: process.platform === 'win32' ? 'powershell.exe' : '/bin/bash'
          });
          const duration = Math.round((Date.now() - startTime) / 1000);
          results.push(`${target}/_index.md (${duration}s)`);
        } finally {
          try { unlinkSync(promptFile); } catch { /* ignore */ }
        }
      }

      return {
        success: true,
        result: {
          success: true,
          strategy,
          entity_id: entityId,
          output_path: docIndexDir,
          tool,
          model: actualModel,
          message: `Generated index files: ${results.join(', ')}`
        }
      };
    }

    // Single-file strategies: component, feature, overview, schema
    const entity = resolveEntity(strategy, entityId, docIndex);
    const outputPath = calculateDddOutputPath(strategy, entityId, docIndexDir);
    const absOutputDir = dirname(resolve(process.cwd(), outputPath));

    // Ensure output directory exists
    mkdirSync(absOutputDir, { recursive: true });

    // Build frontmatter and prompt
    const frontmatter = buildFrontmatter(strategy, entity, docIndex);
    const prompt = buildDddPrompt(strategy, entity, frontmatter, docIndex, outputPath);

    // Create temp prompt file
    const promptFile = createPromptFile(prompt);

    // Build CLI command
    const command = buildCliCommand(tool, promptFile, actualModel);

    console.log(`[DDD] Generating ${strategy}: ${outputPath}`);
    console.log(`[DDD] Tool: ${tool} | Model: ${actualModel || 'default'}`);

    try {
      const startTime = Date.now();

      execSync(command, {
        cwd: docIndexDir,
        encoding: 'utf8',
        stdio: 'inherit',
        timeout: 600000,
        shell: process.platform === 'win32' ? 'powershell.exe' : '/bin/bash'
      });

      const duration = Math.round((Date.now() - startTime) / 1000);

      // Cleanup
      try { unlinkSync(promptFile); } catch { /* ignore */ }

      console.log(`[DDD] Completed in ${duration}s: ${outputPath}`);

      return {
        success: true,
        result: {
          success: true,
          strategy,
          entity_id: entityId,
          output_path: outputPath,
          tool,
          model: actualModel,
          duration_seconds: duration,
          message: `Documentation generated successfully in ${duration}s`
        }
      };
    } catch (error) {
      // Cleanup on error
      try { unlinkSync(promptFile); } catch { /* ignore */ }

      // Tool fallback: gemini -> qwen -> codex
      const fallbackChain = ['gemini', 'qwen', 'codex'];
      const currentIdx = fallbackChain.indexOf(tool);
      if (currentIdx >= 0 && currentIdx < fallbackChain.length - 1) {
        const nextTool = fallbackChain[currentIdx + 1];
        console.log(`[DDD] ${tool} failed, falling back to ${nextTool}`);
        return handler({ ...params, tool: nextTool });
      }

      return {
        success: false,
        error: `Documentation generation failed: ${(error as Error).message}`
      };
    }
  } catch (error) {
    return {
      success: false,
      error: `Tool execution failed: ${(error as Error).message}`
    };
  }
}
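The file above maps each strategy to a deterministic output path under the doc-index directory. A minimal standalone sketch of that mapping (mirroring `calculateDddOutputPath`; the single-file strategies only, and `dddOutputPath` is an illustrative name, not the exported API):

```typescript
import { join } from 'node:path';

// Strategy -> output path, relative to the doc-index directory.
function dddOutputPath(strategy: string, entityId: string | undefined, dir: string): string {
  switch (strategy) {
    case 'component':
      // tech-{slug} -> tech-registry/{slug}.md
      return join(dir, 'tech-registry', `${(entityId ?? '').replace(/^tech-/, '')}.md`);
    case 'feature':
      // feat-{slug} -> feature-maps/{slug}.md
      return join(dir, 'feature-maps', `${(entityId ?? '').replace(/^feat-/, '')}.md`);
    case 'overview':
      return entityId === 'architecture' ? join(dir, 'ARCHITECTURE.md') : join(dir, 'README.md');
    case 'schema':
      return join(dir, 'SCHEMA.md');
    default:
      throw new Error(`Unknown strategy: ${strategy}`);
  }
}

console.log(dddOutputPath('component', 'tech-cli-executor', '.workflow/.doc-index'));
// -> .workflow/.doc-index/tech-registry/cli-executor.md (on POSIX)
```

Because the path is a pure function of (strategy, entityId), regenerating a doc always overwrites the same file rather than accumulating duplicates.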
@@ -14,6 +14,7 @@ import * as classifyFoldersMod from './classify-folders.js';
import * as detectChangedModulesMod from './detect-changed-modules.js';
import * as discoverDesignFilesMod from './discover-design-files.js';
import * as generateModuleDocsMod from './generate-module-docs.js';
import * as generateDddDocsMod from './generate-ddd-docs.js';
import * as convertTokensToCssMod from './convert-tokens-to-css.js';
import * as sessionManagerMod from './session-manager.js';
import * as cliExecutorMod from './cli-executor.js';
@@ -358,6 +359,7 @@ registerTool(toLegacyTool(classifyFoldersMod));
registerTool(toLegacyTool(detectChangedModulesMod));
registerTool(toLegacyTool(discoverDesignFilesMod));
registerTool(toLegacyTool(generateModuleDocsMod));
registerTool(toLegacyTool(generateDddDocsMod));
registerTool(toLegacyTool(convertTokensToCssMod));
registerTool(toLegacyTool(sessionManagerMod));
registerTool(toLegacyTool(cliExecutorMod));
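The two hunks above follow the project's two-step registration pattern: import the tool module, then append it to the registry. A hypothetical sketch of that pattern (the `ToolModule` type and map-based `registry` are assumptions for illustration; the real `toLegacyTool`/`registerTool` implementations are not shown in this diff):

```typescript
// Each tool module exports a schema (with a unique name) and a handler.
type ToolModule = {
  schema: { name: string };
  handler: (params: Record<string, unknown>) => Promise<unknown>;
};

// Registration is just inserting into a name-keyed map, so lookup by
// tool name is O(1) at dispatch time.
const registry = new Map<string, ToolModule>();

function registerTool(mod: ToolModule): void {
  registry.set(mod.schema.name, mod);
}

registerTool({
  schema: { name: 'generate_ddd_docs' },
  handler: async () => ({ success: true }),
});

console.log(registry.has('generate_ddd_docs')); // true
```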
144	ccw/tests/integration/codexlens-ignore-pattern-routes.test.ts	Normal file
@@ -0,0 +1,144 @@
/**
 * Integration tests for CodexLens ignore-pattern configuration routes.
 *
 * Notes:
 * - Targets runtime implementation shipped in `ccw/dist`.
 * - Calls route handler directly (no HTTP server required).
 * - Uses temporary CODEXLENS_DATA_DIR to isolate ~/.codexlens writes.
 */

import { after, before, describe, it, mock } from 'node:test';
import assert from 'node:assert/strict';
import { existsSync, mkdtempSync, readFileSync, rmSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

const CODEXLENS_HOME = mkdtempSync(join(tmpdir(), 'codexlens-ignore-home-'));
const PROJECT_ROOT = mkdtempSync(join(tmpdir(), 'codexlens-ignore-project-'));

const configRoutesUrl = new URL('../../dist/core/routes/codexlens/config-handlers.js', import.meta.url);
configRoutesUrl.searchParams.set('t', String(Date.now()));

// eslint-disable-next-line @typescript-eslint/no-explicit-any
let mod: any;

const originalEnv = {
  CODEXLENS_DATA_DIR: process.env.CODEXLENS_DATA_DIR,
};

async function callConfigRoute(
  initialPath: string,
  method: string,
  path: string,
  body?: unknown,
): Promise<{ handled: boolean; status: number; json: any }> {
  const url = new URL(path, 'http://localhost');
  let status = 0;
  let text = '';
  let postPromise: Promise<void> | null = null;

  const res = {
    writeHead(code: number) {
      status = code;
    },
    end(chunk?: unknown) {
      text = chunk === undefined ? '' : String(chunk);
    },
  };

  const handlePostRequest = (_req: unknown, _res: unknown, handler: (parsed: any) => Promise<any>) => {
    postPromise = (async () => {
      const result = await handler(body ?? {});
      const errorValue = result && typeof result === 'object' ? result.error : undefined;
      const statusValue = result && typeof result === 'object' ? result.status : undefined;

      if (typeof errorValue === 'string' && errorValue.length > 0) {
        res.writeHead(typeof statusValue === 'number' ? statusValue : 500);
        res.end(JSON.stringify({ error: errorValue }));
        return;
      }

      res.writeHead(200);
      res.end(JSON.stringify(result));
    })();
  };

  const handled = await mod.handleCodexLensConfigRoutes({
    pathname: url.pathname,
    url,
    req: { method },
    res,
    initialPath,
    handlePostRequest,
    broadcastToClients() {},
  });

  if (postPromise) {
    await postPromise;
  }

  return { handled, status, json: text ? JSON.parse(text) : null };
}

describe('codexlens ignore-pattern routes integration', async () => {
  before(async () => {
    process.env.CODEXLENS_DATA_DIR = CODEXLENS_HOME;
    mock.method(console, 'log', () => {});
    mock.method(console, 'warn', () => {});
    mock.method(console, 'error', () => {});
    mod = await import(configRoutesUrl.href);
  });

  after(() => {
    mock.restoreAll();
    process.env.CODEXLENS_DATA_DIR = originalEnv.CODEXLENS_DATA_DIR;
    rmSync(CODEXLENS_HOME, { recursive: true, force: true });
    rmSync(PROJECT_ROOT, { recursive: true, force: true });
  });

  it('GET /api/codexlens/ignore-patterns returns defaults before config exists', async () => {
    const res = await callConfigRoute(PROJECT_ROOT, 'GET', '/api/codexlens/ignore-patterns');

    assert.equal(res.handled, true);
    assert.equal(res.status, 200);
    assert.equal(res.json.success, true);
    assert.equal(Array.isArray(res.json.patterns), true);
    assert.equal(Array.isArray(res.json.extensionFilters), true);
    assert.ok(res.json.patterns.includes('dist'));
    assert.ok(res.json.extensionFilters.includes('*.min.js'));
  });

  it('POST /api/codexlens/ignore-patterns persists custom patterns and filters', async () => {
    const saveRes = await callConfigRoute(PROJECT_ROOT, 'POST', '/api/codexlens/ignore-patterns', {
      patterns: ['dist', 'frontend/dist'],
      extensionFilters: ['*.min.js', 'frontend/skip.ts'],
    });

    assert.equal(saveRes.handled, true);
    assert.equal(saveRes.status, 200);
    assert.equal(saveRes.json.success, true);
    assert.deepEqual(saveRes.json.patterns, ['dist', 'frontend/dist']);
    assert.deepEqual(saveRes.json.extensionFilters, ['*.min.js', 'frontend/skip.ts']);

    const settingsPath = join(CODEXLENS_HOME, 'settings.json');
    assert.equal(existsSync(settingsPath), true);
    const savedSettings = JSON.parse(readFileSync(settingsPath, 'utf8'));
    assert.deepEqual(savedSettings.ignore_patterns, ['dist', 'frontend/dist']);
    assert.deepEqual(savedSettings.extension_filters, ['*.min.js', 'frontend/skip.ts']);

    const getRes = await callConfigRoute(PROJECT_ROOT, 'GET', '/api/codexlens/ignore-patterns');
    assert.equal(getRes.status, 200);
    assert.deepEqual(getRes.json.patterns, ['dist', 'frontend/dist']);
    assert.deepEqual(getRes.json.extensionFilters, ['*.min.js', 'frontend/skip.ts']);
  });

  it('POST /api/codexlens/ignore-patterns rejects invalid entries', async () => {
    const res = await callConfigRoute(PROJECT_ROOT, 'POST', '/api/codexlens/ignore-patterns', {
      patterns: ['bad pattern!'],
|
||||||
|
});
|
||||||
|
|
||||||
|
assert.equal(res.handled, true);
|
||||||
|
assert.equal(res.status, 400);
|
||||||
|
assert.match(String(res.json.error), /Invalid patterns:/);
|
||||||
|
});
|
||||||
|
});
|
||||||
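The last test only tells us that `'bad pattern!'` is rejected with a 400 and an `Invalid patterns:` message, while `dist`, `frontend/dist`, `*.min.js`, and `frontend/skip.ts` are accepted. The server-side rule is not shown in this diff, so the validator below is a hypothetical sketch of plausible semantics (word characters, dots, slashes, stars, and hyphens allowed; whitespace and `!` rejected), not the actual implementation:

```python
import re

# Hypothetical validation rule; the real check lives server-side and
# is not part of this diff. It accepts glob/path characters and
# rejects anything containing whitespace or punctuation like '!'.
_PATTERN_RE = re.compile(r"^[\w./*\-]+$")

def invalid_patterns(patterns):
    """Return the subset of patterns that fail validation."""
    return [p for p in patterns if not _PATTERN_RE.match(p)]

bad = invalid_patterns(["dist", "frontend/dist", "*.min.js", "bad pattern!"])
if bad:
    print(f"Invalid patterns: {', '.join(bad)}")  # Invalid patterns: bad pattern!
```

Any validator with this shape would satisfy the assertions above; the exact accepted character set is an assumption.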
ccw/undefined/settings.json — new file, 10 lines
@@ -0,0 +1,10 @@
{
  "ignore_patterns": [
    "dist",
    "frontend/dist"
  ],
  "extension_filters": [
    "*.min.js",
    "frontend/skip.ts"
  ]
}
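The settings file above is plain JSON. A minimal sketch of reading it the way a consumer might, with missing keys falling back to empty lists (the reader shown here is illustrative, not the project's loader):

```python
import json
import tempfile
from pathlib import Path

# Write the settings shown above to a temporary file, then read them back.
settings_text = """{
  "ignore_patterns": ["dist", "frontend/dist"],
  "extension_filters": ["*.min.js", "frontend/skip.ts"]
}"""

with tempfile.TemporaryDirectory() as home:
    settings_path = Path(home) / "settings.json"
    settings_path.write_text(settings_text, encoding="utf-8")

    data = json.loads(settings_path.read_text(encoding="utf-8"))
    # .get() keeps the loader tolerant of partially-written settings files.
    ignore_patterns = list(data.get("ignore_patterns", []))
    extension_filters = list(data.get("extension_filters", []))

print(ignore_patterns)    # ['dist', 'frontend/dist']
print(extension_filters)  # ['*.min.js', 'frontend/skip.ts']
```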
New one-line test fixture files (vendored under codex-lens/.pytest-temp/), each added with an `@@ -0,0 +1 @@` hunk:

- test_builder_loads_saved_ignor0/frontend/bundle.min.js
- test_builder_loads_saved_ignor0/frontend/dist/compiled.ts
- test_collect_dirs_by_depth_res0/frontend/dist/bundle.ts
- test_collect_dirs_by_depth_ski0/dist/generated.py
- test_iter_source_files_respect0/frontend/bundle.min.js
- test_should_index_dir_ignores_0/package/dist/bundle.py
- additional fixtures (paths not shown)

Each fixture holds a single placeholder line — `print('artifact')`, `print('ok')`, `print('compiled')`, `export const app = 1`, `export const bundle = 1`, `export const skip = 1`, or `export const compiled = 1` — plus two saved-settings fixtures:

{"ignore_patterns": ["frontend/dist"], "extension_filters": ["*.min.js"]}
{"ignore_patterns": ["frontend/dist", "coverage"], "extension_filters": ["*.min.js", "*.map"]}
@@ -123,11 +123,12 @@ class IndexTreeBuilder:
         """
         self.registry = registry
         self.mapper = mapper
-        self.config = config or Config()
+        self.config = config or Config.load()
         self.parser_factory = ParserFactory(self.config)
         self.logger = logging.getLogger(__name__)
         self.incremental = incremental
         self.ignore_patterns = self._resolve_ignore_patterns()
+        self.extension_filters = self._resolve_extension_filters()

     def _resolve_ignore_patterns(self) -> Tuple[str, ...]:
         configured_patterns = getattr(self.config, "ignore_patterns", None)
@@ -139,6 +140,18 @@ class IndexTreeBuilder:
                 cleaned.append(pattern)
         return tuple(dict.fromkeys(cleaned))

+    def _resolve_extension_filters(self) -> Tuple[str, ...]:
+        configured_filters = getattr(self.config, "extension_filters", None)
+        if not configured_filters:
+            return tuple()
+
+        cleaned: List[str] = []
+        for item in configured_filters:
+            pattern = str(item).strip().replace('\\', '/').rstrip('/')
+            if pattern:
+                cleaned.append(pattern)
+        return tuple(dict.fromkeys(cleaned))
+
     def _is_ignored_dir(self, dir_path: Path, source_root: Optional[Path] = None) -> bool:
         name = dir_path.name
         if name.startswith('.'):
@@ -159,6 +172,25 @@ class IndexTreeBuilder:

         return False

+    def _is_filtered_file(self, file_path: Path, source_root: Optional[Path] = None) -> bool:
+        if not self.extension_filters:
+            return False
+
+        rel_path: Optional[str] = None
+        if source_root is not None:
+            try:
+                rel_path = file_path.relative_to(source_root).as_posix()
+            except ValueError:
+                rel_path = None
+
+        for pattern in self.extension_filters:
+            if pattern == file_path.name or fnmatch.fnmatch(file_path.name, pattern):
+                return True
+            if rel_path and (pattern == rel_path or fnmatch.fnmatch(rel_path, pattern)):
+                return True
+
+        return False
+
     def build(
         self,
         source_root: Path,
@@ -259,6 +291,7 @@ class IndexTreeBuilder:
             dirs,
             languages,
             workers,
+            source_root=source_root,
             project_id=project_info.id,
             global_index_db_path=global_index_db_path,
         )
@@ -410,6 +443,7 @@ class IndexTreeBuilder:
         return self._build_single_dir(
             source_path,
             languages=None,
+            source_root=project_root,
             project_id=project_info.id,
             global_index_db_path=global_index_db_path,
         )
@@ -491,7 +525,7 @@ class IndexTreeBuilder:
             return False

         # Check for supported files in this directory
-        source_files = self._iter_source_files(dir_path, languages)
+        source_files = self._iter_source_files(dir_path, languages, source_root=source_root)
         if len(source_files) > 0:
             return True

@@ -519,7 +553,7 @@ class IndexTreeBuilder:
             True if directory tree contains indexable files
         """
         # Check for supported files in this directory
-        source_files = self._iter_source_files(dir_path, languages)
+        source_files = self._iter_source_files(dir_path, languages, source_root=source_root)
         if len(source_files) > 0:
             return True

@@ -543,6 +577,7 @@ class IndexTreeBuilder:
         languages: List[str],
         workers: int,
         *,
+        source_root: Path,
         project_id: int,
         global_index_db_path: Path,
     ) -> List[DirBuildResult]:
@@ -570,6 +605,7 @@ class IndexTreeBuilder:
             result = self._build_single_dir(
                 dirs[0],
                 languages,
+                source_root=source_root,
                 project_id=project_id,
                 global_index_db_path=global_index_db_path,
             )
@@ -585,6 +621,7 @@ class IndexTreeBuilder:
             "static_graph_relationship_types": self.config.static_graph_relationship_types,
             "use_astgrep": getattr(self.config, "use_astgrep", False),
             "ignore_patterns": list(getattr(self.config, "ignore_patterns", [])),
+            "extension_filters": list(getattr(self.config, "extension_filters", [])),
         }

         worker_args = [
@@ -595,6 +632,7 @@ class IndexTreeBuilder:
                 config_dict,
                 int(project_id),
                 str(global_index_db_path),
+                str(source_root),
             )
             for dir_path in dirs
         ]
@@ -631,6 +669,7 @@ class IndexTreeBuilder:
         dir_path: Path,
         languages: List[str] = None,
         *,
+        source_root: Path,
         project_id: int,
         global_index_db_path: Path,
     ) -> DirBuildResult:
@@ -663,7 +702,7 @@ class IndexTreeBuilder:
         store.initialize()

         # Get source files in this directory only
-        source_files = self._iter_source_files(dir_path, languages)
+        source_files = self._iter_source_files(dir_path, languages, source_root=source_root)

         files_count = 0
         symbols_count = 0
@@ -731,7 +770,7 @@ class IndexTreeBuilder:
             d.name
             for d in dir_path.iterdir()
             if d.is_dir()
-            and not self._is_ignored_dir(d)
+            and not self._is_ignored_dir(d, source_root=source_root)
         ]

         store.update_merkle_root()
@@ -826,7 +865,7 @@ class IndexTreeBuilder:
         )

     def _iter_source_files(
-        self, dir_path: Path, languages: List[str] = None
+        self, dir_path: Path, languages: List[str] = None, source_root: Optional[Path] = None
     ) -> List[Path]:
         """Iterate source files in directory (non-recursive).

@@ -852,6 +891,9 @@ class IndexTreeBuilder:
             if item.name.startswith("."):
                 continue

+            if self._is_filtered_file(item, source_root=source_root):
+                continue
+
             # Check language support
             language_id = self.config.language_for_path(item)
             if not language_id:
@@ -1027,19 +1069,37 @@ def _compute_graph_neighbors(
 # === Worker Function for ProcessPoolExecutor ===


-def _matches_ignore_patterns(path: Path, patterns: List[str]) -> bool:
-    name = path.name
-    if name.startswith('.'):
-        return True
+def _matches_path_patterns(path: Path, patterns: List[str], source_root: Optional[Path] = None) -> bool:
+    rel_path: Optional[str] = None
+    if source_root is not None:
+        try:
+            rel_path = path.relative_to(source_root).as_posix()
+        except ValueError:
+            rel_path = None

     for pattern in patterns:
         normalized = str(pattern).strip().replace('\\', '/').rstrip('/')
         if not normalized:
             continue
-        if normalized == name or fnmatch.fnmatch(name, normalized):
+        if normalized == path.name or fnmatch.fnmatch(path.name, normalized):
+            return True
+        if rel_path and (normalized == rel_path or fnmatch.fnmatch(rel_path, normalized)):
             return True
     return False
+
+
+def _matches_ignore_patterns(path: Path, patterns: List[str], source_root: Optional[Path] = None) -> bool:
+    if path.name.startswith('.'):
+        return True
+    return _matches_path_patterns(path, patterns, source_root)
+
+
+def _matches_extension_filters(path: Path, patterns: List[str], source_root: Optional[Path] = None) -> bool:
+    if not patterns:
+        return False
+    return _matches_path_patterns(path, patterns, source_root)


 def _build_dir_worker(args: tuple) -> DirBuildResult:
     """Worker function for parallel directory building.

@@ -1047,12 +1107,12 @@ def _build_dir_worker(args: tuple) -> DirBuildResult:
     Reconstructs necessary objects from serializable arguments.

     Args:
-        args: Tuple of (dir_path, index_db_path, languages, config_dict, project_id, global_index_db_path)
+        args: Tuple of (dir_path, index_db_path, languages, config_dict, project_id, global_index_db_path, source_root)

     Returns:
         DirBuildResult for the directory
     """
-    dir_path, index_db_path, languages, config_dict, project_id, global_index_db_path = args
+    dir_path, index_db_path, languages, config_dict, project_id, global_index_db_path, source_root = args

     # Reconstruct config
     config = Config(
@@ -1064,9 +1124,11 @@ def _build_dir_worker(args: tuple) -> DirBuildResult:
         static_graph_relationship_types=list(config_dict.get("static_graph_relationship_types", ["imports", "inherits"])),
         use_astgrep=bool(config_dict.get("use_astgrep", False)),
         ignore_patterns=list(config_dict.get("ignore_patterns", [])),
+        extension_filters=list(config_dict.get("extension_filters", [])),
     )

     parser_factory = ParserFactory(config)
+    source_root_path = Path(source_root) if source_root else None

     global_index: GlobalSymbolIndex | None = None
     try:
@@ -1092,6 +1154,9 @@ def _build_dir_worker(args: tuple) -> DirBuildResult:
         if item.name.startswith("."):
             continue

+        if _matches_extension_filters(item, config.extension_filters, source_root_path):
+            continue
+
         language_id = config.language_for_path(item)
         if not language_id:
             continue
@@ -1146,7 +1211,7 @@ def _build_dir_worker(args: tuple) -> DirBuildResult:
     subdirs = [
         d.name
         for d in dir_path.iterdir()
-        if d.is_dir() and not _matches_ignore_patterns(d, ignore_patterns)
+        if d.is_dir() and not _matches_ignore_patterns(d, ignore_patterns, source_root_path)
     ]

     store.update_merkle_root()

@@ -19,6 +19,7 @@ def test_load_settings_reads_ignore_patterns_and_extension_filters(tmp_path: Pat
     )

     config = Config(data_dir=tmp_path)
+    config.load_settings()

     assert config.ignore_patterns == ["frontend/dist", "coverage"]
     assert config.extension_filters == ["*.min.js", "*.map"]

@@ -1,5 +1,6 @@
 from __future__ import annotations

+import json
 from pathlib import Path
 from unittest.mock import MagicMock

@@ -84,3 +85,63 @@ def test_collect_dirs_by_depth_respects_relative_ignore_patterns_from_config(tmp

     assert "frontend/src" in discovered_dirs
     assert "frontend/dist" not in discovered_dirs
+
+
+def test_iter_source_files_respects_extension_filters_and_relative_patterns(tmp_path: Path) -> None:
+    frontend_dir = tmp_path / "frontend"
+    frontend_dir.mkdir()
+    (frontend_dir / "app.ts").write_text("export const app = 1\n", encoding="utf-8")
+    (frontend_dir / "bundle.min.js").write_text("export const bundle = 1\n", encoding="utf-8")
+    (frontend_dir / "skip.ts").write_text("export const skip = 1\n", encoding="utf-8")
+
+    builder = IndexTreeBuilder(
+        registry=MagicMock(),
+        mapper=MagicMock(),
+        config=Config(
+            data_dir=tmp_path / "data",
+            extension_filters=["*.min.js", "frontend/skip.ts"],
+        ),
+        incremental=False,
+    )
+
+    source_files = builder._iter_source_files(frontend_dir, source_root=tmp_path)
+
+    assert [path.name for path in source_files] == ["app.ts"]
+    assert builder._should_index_dir(frontend_dir, source_root=tmp_path) is True
+
+
+def test_builder_loads_saved_ignore_and_extension_filters_by_default(tmp_path: Path, monkeypatch) -> None:
+    codexlens_home = tmp_path / "codexlens-home"
+    codexlens_home.mkdir()
+    (codexlens_home / "settings.json").write_text(
+        json.dumps(
+            {
+                "ignore_patterns": ["frontend/dist"],
+                "extension_filters": ["*.min.js"],
+            }
+        ),
+        encoding="utf-8",
+    )
+    monkeypatch.setenv("CODEXLENS_DATA_DIR", str(codexlens_home))
+
+    frontend_dir = tmp_path / "frontend"
+    frontend_dir.mkdir()
+    dist_dir = frontend_dir / "dist"
+    dist_dir.mkdir()
+    (frontend_dir / "app.ts").write_text("export const app = 1\n", encoding="utf-8")
+    (frontend_dir / "bundle.min.js").write_text("export const bundle = 1\n", encoding="utf-8")
+    (dist_dir / "compiled.ts").write_text("export const compiled = 1\n", encoding="utf-8")
+
+    builder = IndexTreeBuilder(
+        registry=MagicMock(),
+        mapper=MagicMock(),
+        config=None,
+        incremental=False,
+    )
+
+    source_files = builder._iter_source_files(frontend_dir, source_root=tmp_path)
+    dirs_by_depth = builder._collect_dirs_by_depth(tmp_path)
+    discovered_dirs = _relative_dirs(tmp_path, dirs_by_depth)
+
+    assert [path.name for path in source_files] == ["app.ts"]
+    assert "frontend/dist" not in discovered_dirs
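The relative-path matching introduced in this diff can be illustrated standalone. The sketch below reimplements the `_matches_path_patterns` helper shown above (same logic, copied for demonstration) to show the two ways a pattern can hit: against the bare file name anywhere in the tree, or against the path relative to the source root:

```python
import fnmatch
from pathlib import Path
from typing import List, Optional

def matches_path_patterns(path: Path, patterns: List[str],
                          source_root: Optional[Path] = None) -> bool:
    """Mirror of the diff's _matches_path_patterns: a pattern matches
    either the bare file name or the root-relative POSIX path."""
    rel_path: Optional[str] = None
    if source_root is not None:
        try:
            rel_path = path.relative_to(source_root).as_posix()
        except ValueError:
            rel_path = None  # path lies outside source_root

    for pattern in patterns:
        normalized = str(pattern).strip().replace('\\', '/').rstrip('/')
        if not normalized:
            continue
        if normalized == path.name or fnmatch.fnmatch(path.name, normalized):
            return True
        if rel_path and (normalized == rel_path or fnmatch.fnmatch(rel_path, normalized)):
            return True
    return False

root = Path("/repo")
# Name-based glob: matches anywhere in the tree.
print(matches_path_patterns(root / "frontend" / "bundle.min.js", ["*.min.js"], root))   # True
# Root-relative pattern: matches only at that exact relative location.
print(matches_path_patterns(root / "frontend" / "dist", ["frontend/dist"], root))       # True
print(matches_path_patterns(root / "backend" / "dist" / "x", ["frontend/dist"], root))  # False
```

This is why `frontend/dist` in the settings prunes only the top-level frontend build output while a bare `dist` would prune every directory of that name.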
|||||||
docs/branding/naming-system.md — new file, 415 lines
@@ -0,0 +1,415 @@
# Maestro Brand Naming System

> **Document version**: 1.0.0
> **Last updated**: 2026-03-09
> **Status**: Finalized

## Overview

This document defines the complete brand naming system for the Maestro project, covering the parent brand, the sub-brands (workflows), package names, CLI commands, domains, and the related naming conventions.

### Brand Concept

**Maestro** (conductor / orchestration master) is an intelligent orchestration platform that coordinates multiple AI CLI tools to give developers a unified workflow experience.

---

## Brand Architecture

```
Maestro (parent platform / orchestration system)
├── Maestro Claude (workflow built on Claude Code)
├── Maestro Codex (workflow built on Codex)
├── Maestro Gemini (workflow built on Gemini)
└── Maestro Qwen (workflow built on Qwen)
```

### Design Principles

1. **Clear and intuitive** - workflow names state directly which CLI tool they use
2. **Unified branding** - every workflow lives under the Maestro brand
3. **Easy to extend** - the naming rules stay consistent when new CLIs are added
4. **Technically transparent** - developers know the underlying tech stack

---

## Complete Naming Conventions

| Workflow | Brand name | NPM package | CLI command | GitHub repo | Domain |
|----------|------------|-------------|-------------|-------------|--------|
| Claude Code workflow | **Maestro Claude** | `@maestro/claude` | `maestro claude` | `maestro-claude` | `claude.maestro.dev` |
| Codex workflow | **Maestro Codex** | `@maestro/codex` | `maestro codex` | `maestro-codex` | `codex.maestro.dev` |
| Gemini workflow | **Maestro Gemini** | `@maestro/gemini` | `maestro gemini` | `maestro-gemini` | `gemini.maestro.dev` |
| Qwen workflow | **Maestro Qwen** | `@maestro/qwen` | `maestro qwen` | `maestro-qwen` | `qwen.maestro.dev` |

### Naming Rules

- **Brand name**: `Maestro <CLI name>`
- **NPM package**: `@maestro/<cli-name>` (lowercase, scoped)
- **CLI command**: `maestro <cli-name>` (lowercase)
- **GitHub repository**: `maestro-<cli-name>` (lowercase, hyphenated)
- **Domain**: `<cli-name>.maestro.dev` (lowercase subdomain)

---
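The naming rules above are mechanical, so they can be captured in a small helper. A minimal sketch (the `names_for_cli` function is illustrative, not part of the project; the strings it derives are just the conventions from the table):

```python
def names_for_cli(cli_name: str) -> dict:
    """Derive every naming-system identifier for one CLI tool,
    following the rules in this document."""
    slug = cli_name.lower()
    return {
        "brand": f"Maestro {cli_name.capitalize()}",
        "npm": f"@maestro/{slug}",
        "command": f"maestro {slug}",
        "repo": f"maestro-{slug}",
        "domain": f"{slug}.maestro.dev",
    }

print(names_for_cli("claude")["npm"])     # @maestro/claude
print(names_for_cli("gemini")["domain"])  # gemini.maestro.dev
```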
## Directory Structure

### Recommended Project Layout

```
maestro/
├── packages/
│   ├── core/                # Maestro core engine
│   │   ├── src/
│   │   └── package.json
│   ├── claude/              # Maestro Claude workflow
│   │   ├── src/
│   │   └── package.json
│   ├── codex/               # Maestro Codex workflow
│   │   ├── src/
│   │   └── package.json
│   ├── gemini/              # Maestro Gemini workflow
│   │   ├── src/
│   │   └── package.json
│   ├── qwen/                # Maestro Qwen workflow
│   │   ├── src/
│   │   └── package.json
│   └── podium/              # Maestro UI (formerly CCW)
│       ├── frontend/
│       ├── backend/
│       └── package.json
├── docs/
│   ├── branding/            # Brand documentation
│   ├── guides/              # User guides
│   └── api/                 # API documentation
├── .codex/                  # Codex configuration and skills
├── .workflow/               # Workflow configuration
├── package.json             # Monorepo root configuration
└── README.md
```

---

## CLI Usage Examples

### Basic Invocation

```bash
# Use the Claude Code workflow
maestro claude --prompt "implement user authentication"

# Use the Codex workflow
maestro codex --analyze "src/**/*.ts"

# Use the Gemini workflow
maestro gemini --task "summarize this document"

# Use the Qwen workflow
maestro qwen --prompt "explain this code"
```

### Invocation with Arguments

```bash
# Claude workflow - code generation
maestro claude generate --file "components/Button.tsx" --prompt "add loading state"

# Codex workflow - code analysis
maestro codex search --pattern "useEffect" --path "src/"

# Gemini workflow - multimodal task
maestro gemini analyze --image "screenshot.png" --prompt "describe this UI"

# Qwen workflow - quick task
maestro qwen translate --from "en" --to "zh" --text "Hello World"
```

### Workflow Selection

```bash
# List available workflows
maestro list

# Show details for a specific workflow
maestro info claude

# Set the default workflow
maestro config set default-workflow claude
```

---
## Configuration File

### maestro.config.json

```json
{
  "version": "1.0.0",
  "workflows": {
    "claude": {
      "name": "Maestro Claude",
      "description": "Claude Code workflow for code generation and refactoring",
      "cli": "claude-code",
      "enabled": true,
      "defaultModel": "claude-sonnet-4",
      "capabilities": ["generate", "refactor", "explain", "chat"]
    },
    "codex": {
      "name": "Maestro Codex",
      "description": "Codex workflow for code analysis and understanding",
      "cli": "codex",
      "enabled": true,
      "defaultModel": "gpt-5.2",
      "capabilities": ["analyze", "search", "visualize", "index"]
    },
    "gemini": {
      "name": "Maestro Gemini",
      "description": "Gemini workflow for general-purpose AI tasks",
      "cli": "gemini",
      "enabled": true,
      "defaultModel": "gemini-2.5-pro",
      "capabilities": ["multimodal", "general", "experimental"]
    },
    "qwen": {
      "name": "Maestro Qwen",
      "description": "Qwen workflow for fast response and experimental tasks",
      "cli": "qwen",
      "enabled": true,
      "defaultModel": "coder-model",
      "capabilities": ["fast", "experimental", "assistant"]
    }
  },
  "branding": {
    "name": "Maestro",
    "tagline": "Orchestrate Your Development Workflow",
    "website": "https://maestro.dev",
    "repository": "https://github.com/maestro-suite/maestro"
  }
}
```

---
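A sketch of how an orchestrator might consume `maestro.config.json`, assuming only the schema shown above (the loader itself is hypothetical): parse the file, then resolve which workflows are enabled and which CLI binary each one maps to.

```python
import json

# Trimmed copy of the maestro.config.json schema shown above.
config_text = """{
  "workflows": {
    "claude": {"name": "Maestro Claude", "cli": "claude-code", "enabled": true},
    "codex":  {"name": "Maestro Codex",  "cli": "codex",       "enabled": false}
  }
}"""

config = json.loads(config_text)
# Keep only workflows explicitly enabled; map workflow key -> CLI binary.
enabled = [
    (key, wf["cli"])
    for key, wf in config["workflows"].items()
    if wf.get("enabled", False)
]
print(enabled)  # [('claude', 'claude-code')]
```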
## 包名规范
|
||||||
|
|
||||||
|
### NPM 包
|
||||||
|
|
||||||
|
#### @maestro/claude
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"name": "@maestro/claude",
|
||||||
|
"version": "1.0.0",
|
||||||
|
"description": "Maestro Claude - Claude Code workflow orchestration",
|
||||||
|
"main": "dist/index.js",
|
||||||
|
"types": "dist/index.d.ts",
|
||||||
|
"bin": {
|
||||||
|
"maestro-claude": "./bin/cli.js"
|
||||||
|
},
|
||||||
|
"keywords": [
|
||||||
|
"maestro",
|
||||||
|
"claude",
|
||||||
|
"claude-code",
|
||||||
|
"workflow",
|
||||||
|
"ai",
|
||||||
|
"code-generation"
|
||||||
|
],
|
||||||
|
"repository": {
|
||||||
|
"type": "git",
|
||||||
|
"url": "https://github.com/maestro-suite/maestro-claude"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### @maestro/codex
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"name": "@maestro/codex",
|
||||||
|
"version": "1.0.0",
|
||||||
|
"description": "Maestro Codex - Codex workflow orchestration",
|
||||||
|
"main": "dist/index.js",
|
||||||
|
"types": "dist/index.d.ts",
|
||||||
|
"bin": {
|
||||||
|
"maestro-codex": "./bin/cli.js"
|
||||||
|
},
|
||||||
|
"keywords": [
|
||||||
|
"maestro",
|
||||||
|
"codex",
|
||||||
|
"workflow",
|
||||||
|
"ai",
|
||||||
|
"code-analysis"
|
||||||
|
],
|
||||||
|
"repository": {
|
||||||
|
"type": "git",
|
||||||
|
"url": "https://github.com/maestro-suite/maestro-codex"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Python 包(如果需要)
|
||||||
|
|
||||||
|
#### maestro-claude
|
||||||
|
|
||||||
|
```toml
|
||||||
|
[project]
|
||||||
|
name = "maestro-claude"
|
||||||
|
version = "1.0.0"
|
||||||
|
description = "Maestro Claude - Claude Code workflow orchestration"
|
||||||
|
readme = "README.md"
|
||||||
|
requires-python = ">=3.8"
|
||||||
|
keywords = ["maestro", "claude", "workflow", "ai"]
|
||||||
|
|
||||||
|
[project.urls]
|
||||||
|
Homepage = "https://claude.maestro.dev"
|
||||||
|
Repository = "https://github.com/maestro-suite/maestro-claude"
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 品牌视觉
|
||||||
|
|
||||||
|
### Logo 设计
|
||||||
|
|
||||||
|
每个工作流使用不同的颜色来区分,但保持统一的设计语言:
|
||||||
|
|
||||||
|
| 工作流 | 主色 | 辅助色 | 图标元素 |
|
||||||
|
|--------|------|--------|----------|
|
||||||
|
| **Maestro Claude** | 橙色 `#FF6B35` | 深橙 `#D94F1C` | Claude 的 C 字母 + 指挥棒 |
|
||||||
|
| **Maestro Codex** | 绿色 `#00D084` | 深绿 `#00A86B` | Codex 的代码符号 + 指挥棒 |
|
||||||
|
| **Maestro Gemini** | 蓝色 `#4285F4` | 深蓝 `#1967D2` | Gemini 的双子星 + 指挥棒 |
|
||||||
|
| **Maestro Qwen** | 紫色 `#9C27B0` | 深紫 `#7B1FA2` | Qwen 的 Q 字母 + 指挥棒 |
|
||||||
|
|
||||||
|
### 总品牌色彩
|
||||||
|
|
||||||
|
- **主色**: 深蓝/午夜蓝 `#192A56` - 专业、稳定
|
||||||
|
- **强调色**: 活力青/薄荷绿 `#48D1CC` - 智能、创新
|
||||||
|
- **中性色**: 浅灰 `#F5F5F5`, 深灰 `#333333`
|
||||||
|
|
||||||
|
### 设计元素
|
||||||
|
|
||||||
|
- **指挥棒**: 所有 Logo 的核心元素,象征编排和指挥
|
||||||
|
- **声波/轨迹**: 动态的线条,表示工作流的流动
|
||||||
|
- **几何化**: 现代、简洁的几何图形
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||

## Advantages of This Approach

### 1. Clear and Intuitive

- Users can tell at a glance which CLI tool they are using
- No extra terminology mapping to learn

### 2. Easy to Understand

- Simple, consistent naming rules
- New users get up to speed quickly

### 3. Flexible to Extend

- When new CLIs are added later, the naming rule stays consistent
- For example: `Maestro GPT`, `Maestro Llama`, and so on

### 4. Unified Branding

- All workflows sit under the Maestro brand
- Reinforces Maestro's positioning as an orchestration platform

### 5. Technical Transparency

- Developers know exactly which tech stack sits underneath
- Easier debugging and troubleshooting
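The naming rule is mechanical enough to sketch in code, which also shows how a new CLI slots in without special cases. A hypothetical helper; `maestroNames` and its output shape are illustrative, not part of any published API:

```typescript
// Hypothetical sketch of the naming rule described above; not a real API.
function maestroNames(cli: string): { brand: string; npm: string; pypi: string } {
  const lower = cli.toLowerCase();
  const cap = lower.charAt(0).toUpperCase() + lower.slice(1);
  return {
    brand: `Maestro ${cap}`,   // sub-brand, e.g. "Maestro Claude"
    npm: `@maestro/${lower}`,  // scoped npm package name
    pypi: `maestro-${lower}`,  // PyPI package name
  };
}

console.log(maestroNames("llama"));
// -> { brand: 'Maestro Llama', npm: '@maestro/llama', pypi: 'maestro-llama' }
```

Adding `Maestro GPT` or `Maestro Llama` later is just another call to the same function; no mapping table needs to change.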
---

## Caveats

### 1. Trademarks

When using names such as "Maestro Claude" and "Maestro Codex", keep in mind:

- ⚠️ Make sure the original CLIs' trademarks are not infringed
- ✅ State clearly in the documentation that these are "workflows built on XXX", not official products
- ✅ Add a disclaimer:

```
Maestro Claude is a workflow orchestration system built on Claude Code.
Claude and Claude Code are trademarks of Anthropic.
This project is not affiliated with Anthropic.
```

### 2. Naming Conflicts

Before release, check:

- [ ] npm package names `@maestro/claude`, `@maestro/codex`, etc. are available
- [ ] PyPI package names `maestro-claude`, `maestro-codex`, etc. are available
- [ ] The GitHub organization name `maestro-suite` is available
- [ ] Domains such as `maestro.dev` and `claude.maestro.dev` are available

### 3. User Awareness

The documentation needs to spell out:

- **Maestro** is the orchestration platform (the master brand)
- **Maestro Claude/Codex/Gemini/Qwen** are workflow systems (sub-brands)
- Under the hood, each calls the corresponding CLI tool (the technical implementation)

Example wording:

```
Maestro is an AI workflow orchestration platform.
Maestro Claude is a workflow system built on Claude Code;
it invokes the Claude Code CLI to perform code generation and refactoring tasks.
```
---

## Next Steps

### Phase 1: Resource Availability Checks

- [ ] Check domain availability
  - [ ] `maestro.dev`
  - [ ] `claude.maestro.dev`
  - [ ] `codex.maestro.dev`
  - [ ] `gemini.maestro.dev`
  - [ ] `qwen.maestro.dev`

- [ ] Check npm package name availability
  - [ ] `@maestro/core`
  - [ ] `@maestro/claude`
  - [ ] `@maestro/codex`
  - [ ] `@maestro/gemini`
  - [ ] `@maestro/qwen`

- [ ] Check GitHub availability
  - [ ] Organization name `maestro-suite`
  - [ ] Repository names `maestro`, `maestro-claude`, `maestro-codex`, etc.
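The npm portion of this checklist can be probed against the public registry, which returns HTTP 404 for names that have never been published. A sketch only; the candidate names come from the checklist above and the helpers here are hypothetical:

```typescript
// Sketch of an npm name-availability probe. The package names are
// candidates from the checklist above, not published packages.
function npmRegistryUrl(name: string): string {
  // Scoped names need the slash percent-encoded in registry URLs:
  // "@maestro/claude" -> "@maestro%2Fclaude"
  return "https://registry.npmjs.org/" + name.replace("/", "%2F");
}

function isAvailable(statusCode: number): boolean {
  // The registry answers 404 for names with no published versions.
  return statusCode === 404;
}

console.log(npmRegistryUrl("@maestro/claude"));
// -> https://registry.npmjs.org/@maestro%2Fclaude
```

Fetching each URL and feeding the status code to `isAvailable` would turn the manual checklist into a one-shot script.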

### Phase 2: Migration Plan

- [ ] Write the migration doc (see `docs/migration/renaming-plan.md`)
- [ ] Rename the root directory: `Claude_dms3` → `maestro`
- [ ] Restructure packages: create a `packages/` directory
- [ ] Update all configuration files
- [ ] Update references in code

### Phase 3: Implementation and Release

- [ ] Run the migration
- [ ] Update docs and README
- [ ] Create logo and brand assets
- [ ] Publish to npm/PyPI
- [ ] Configure domains and website

---

## References

- [Brand Architecture](./brand-architecture.md)
- [Migration Plan](../migration/renaming-plan.md)
- [Visual Identity Guide](./visual-identity.md)

---

## Change History

| Version | Date | Changes | Author |
|---------|------|---------|--------|
| 1.0.0 | 2026-03-09 | Initial version; brand naming system established | - |
@@ -1,6 +1,6 @@
 {
   "name": "claude-code-workflow",
-  "version": "7.2.4",
+  "version": "7.2.5",
   "description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
   "type": "module",
   "main": "ccw/dist/index.js",