mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-11 17:21:03 +08:00
chore: update commands, specs, and ccw tools
Update DDD commands (doc-generate, doc-refresh, sync), workflow commands (session/sync, spec/add, spec/setup, spec/load), ccw specs, personal preferences, and add generate-ddd-docs tool.
@@ -48,8 +48,9 @@ doc-index.json → tech-registry/*.md (L3) → feature-maps/*.md (L2) → _index
├── tech-registry/              ← Component documentation (Layer 3)
│   ├── _index.md
│   └── {component-slug}.md
└── planning/                   ← Planning sessions (Layer 1)
    ├── _index.md               ← Planning sessions index
    └── {task-slug}-{date}/     ← Individual session folders
```

## Phase 1: Load & Validate
@@ -87,147 +88,82 @@ IF docs already exist AND NOT --force:

Ask user (unless -y → overwrite)
```
## Phase 2: Layer 3 — Component Documentation

For each component in `technicalComponents[]`, call the generate_ddd_docs endpoint:

```bash
for COMPONENT_ID in "${technicalComponents[@]}"; do
  ccw tool exec generate_ddd_docs '{"strategy":"component","entityId":"'"$COMPONENT_ID"'","tool":"gemini"}'
done
```

The endpoint handles:
- Loading the component entity from doc-index.json
- Building YAML frontmatter (layer: 3, component_id, name, type, features, code_locations, generated_at)
- Constructing the CLI prompt with code context paths
- **Including Change History section**: Pull related entries from `doc-index.json.actions[]` where `affectedComponents` includes this component ID. Display as timeline (date, action type, description)
- Writing output to `.workflow/.doc-index/tech-registry/{slug}.md`
- Tool fallback (gemini -> qwen -> codex) on failure

Output: `.workflow/.doc-index/tech-registry/{component-slug}.md`

Frontmatter:

```markdown
---
layer: 3
component_id: tech-{slug}
name: ComponentName
type: service|controller|model|...
features: [feat-auth]
code_locations:
  - path: src/services/auth.ts
    symbols: [AuthService, AuthService.login]
generated_at: ISO8601
---
```
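The frontmatter block above can be assembled mechanically from the component entity. A minimal sketch of that assembly, assuming a plain-object entity shaped like the examples in this document (`buildFrontmatter` is an illustrative helper, not part of the ccw endpoint):

```javascript
// Build Layer 3 YAML frontmatter from a component entity (illustrative helper).
function buildFrontmatter(component) {
  const lines = [
    '---',
    'layer: 3',
    `component_id: ${component.id}`,
    `name: ${component.name}`,
    `type: ${component.type}`,
    `features: [${component.features.join(', ')}]`,
    'code_locations:'
  ]
  for (const loc of component.codeLocations) {
    lines.push(`  - path: ${loc.path}`)
    lines.push(`    symbols: [${loc.symbols.join(', ')}]`)
  }
  lines.push(`generated_at: ${new Date().toISOString()}`, '---')
  return lines.join('\n')
}

const fm = buildFrontmatter({
  id: 'tech-auth-service',
  name: 'AuthService',
  type: 'service',
  features: ['feat-auth'],
  codeLocations: [{ path: 'src/services/auth.ts', symbols: ['AuthService', 'AuthService.login'] }]
})
console.log(fm)
```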
## Phase 3: Layer 2 — Feature Documentation

For each feature in `features[]`, call the generate_ddd_docs endpoint:

```bash
for FEATURE_ID in "${features[@]}"; do
  ccw tool exec generate_ddd_docs '{"strategy":"feature","entityId":"'"$FEATURE_ID"'","tool":"gemini"}'
done
```
The endpoint handles:
- Loading the feature entity from doc-index.json
- Building YAML frontmatter (layer: 2, feature_id, name, epic_id, status, requirements, components, tags, generated_at)
- Constructing the CLI prompt referencing Layer 3 component docs
- **Including Change History section**: Pull related entries from `doc-index.json.actions[]` where `affectedFeatures` includes this feature ID. Display as timeline (date, action type, description)
- Writing output to `.workflow/.doc-index/feature-maps/{slug}.md`
- Tool fallback (gemini -> qwen -> codex) on failure

Output: `.workflow/.doc-index/feature-maps/{feature-slug}.md`

Frontmatter:

```markdown
---
layer: 2
feature_id: feat-{slug}
name: Feature Name
epic_id: EPIC-NNN|null
status: implemented|in-progress|planned|partial
requirements: [REQ-001, REQ-002]
components: [tech-auth-service, tech-user-model]
depends_on_layer3: [tech-auth-service, tech-user-model]
tags: [auth, security]
generated_at: ISO8601
---
```

Sections: Overview, Requirements (with mapping status), Technical Components, Architecture Decisions, Change History
## Phase 4: Layer 1 — Index & Overview Documentation

### 4.1 Index Documents

Generate catalog files for each subdirectory:

- **feature-maps/_index.md** — Feature overview table with status
- **tech-registry/_index.md** — Component registry table with types
- **action-logs/_index.md** — Action history table (empty initially for new projects)

```bash
# Feature maps index
ccw tool exec generate_ddd_docs '{"strategy":"index","entityId":"feature-maps","tool":"gemini"}'

# Tech registry index
ccw tool exec generate_ddd_docs '{"strategy":"index","entityId":"tech-registry","tool":"gemini"}'

# Action logs index
ccw tool exec generate_ddd_docs '{"strategy":"index","entityId":"action-logs","tool":"gemini"}'

# Planning sessions index
ccw tool exec generate_ddd_docs '{"strategy":"index","entityId":"planning","tool":"gemini"}'
```

Or generate all indexes at once (omit entityId):

```bash
ccw tool exec generate_ddd_docs '{"strategy":"index","tool":"gemini"}'
```
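Each `_index.md` is essentially a table projected out of doc-index.json entities. A rough illustration of the kind of output the index strategy produces for feature-maps, using a hypothetical in-memory `features` array (the real endpoint reads doc-index.json):

```javascript
// Render a feature overview table like feature-maps/_index.md (illustrative only).
function renderFeatureIndex(features) {
  const rows = features.map(f =>
    `| [${f.name}](${f.slug}.md) | ${f.status} | ${f.components.length} |`)
  return [
    '# Feature Maps Index',
    '',
    '| Feature | Status | Components |',
    '|---------|--------|------------|',
    ...rows
  ].join('\n')
}

const table = renderFeatureIndex([
  { name: 'Authentication', slug: 'feat-auth', status: 'implemented', components: ['tech-auth-service'] },
  { name: 'Orders', slug: 'feat-orders', status: 'in-progress', components: [] }
])
console.log(table)
```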
### 4.2 README.md (unless --skip-overview)

```bash
ccw tool exec generate_ddd_docs '{"strategy":"overview","tool":"gemini"}'
```
### 4.3 ARCHITECTURE.md (unless --skip-overview)

```bash
ccw tool exec generate_ddd_docs '{"strategy":"overview","entityId":"architecture","tool":"gemini"}'
```
### 4.4 sessions/_index.md (unless --skip-overview)

```bash
ccw cli -p "PURPOSE: Generate planning sessions index
TASK:
• List all planning session folders chronologically
• Link to each session's plan.json
• Show session status and task count
MODE: write
CONTEXT: @.workflow/.doc-index/planning/*/plan.json
EXPECTED: sessions/_index.md with: Session List, Links, Status
CONSTRAINTS: Chronological order | Link to session folders
" --tool gemini --mode write --cd .workflow/.doc-index/sessions/
```
Layer 1 frontmatter:

```markdown
---
layer: 1
depends_on_layer2: [feat-auth, feat-orders]
generated_at: ISO8601
---
```
## Phase 5: SCHEMA.md (unless --skip-schema)

@@ -235,17 +171,7 @@ generated_at: ISO8601

### 5.1 Generate Schema Documentation

```bash
ccw tool exec generate_ddd_docs '{"strategy":"schema","tool":"gemini"}'
```
### 5.2 Versioning Policy

@@ -284,7 +210,7 @@ Total: {N} documents generated

| `-y, --yes` | Auto-confirm all decisions |
| `--layer <3\|2\|1\|all>` | Generate specific layer only (default: all) |
| `--force` | Overwrite existing documents |
| `--skip-overview` | Skip README.md, ARCHITECTURE.md, planning/_index.md |
| `--skip-schema` | Skip SCHEMA.md generation |
## Integration Points

@@ -293,3 +219,4 @@ Total: {N} documents generated

- **Called by**: `/ddd:scan` (after index assembly), `/ddd:index-build` (after index assembly)
- **Standalone**: Can be run independently on any project with existing doc-index.json
- **Output**: Complete document tree in `.workflow/.doc-index/`
- **Endpoint**: `ccw tool exec generate_ddd_docs` handles prompt construction, frontmatter, tool fallback, and file creation
@@ -163,7 +163,7 @@ ccw cli -p "PURPOSE: Update project overview docs after feature changes
TASK:
• Update README.md feature list
• Update ARCHITECTURE.md if new components added
• Update planning/_index.md with new planning sessions
MODE: write
CONTEXT: @.workflow/.doc-index/feature-maps/*.md @.workflow/.doc-index/doc-index.json
EXPECTED: Updated overview docs with current project state
@@ -37,11 +37,42 @@ After completing a development task, synchronize the document index with actual

- `doc-index.json` must exist
- Git repository with committed or staged changes

## Phase 0: Consistency Validation

Before processing changes, verify that `doc-index.json` entries are consistent with actual code state.
### 0.1 Validate Code Locations

For each `technicalComponents[].codeLocations[]`:
- Verify file exists on disk
- If file was deleted/moved → flag for removal or update
- If file exists → verify listed `symbols[]` still exist (quick grep/AST check)

### 0.2 Validate Symbols

For components with `codeLocations[].symbols[]`:
- Check each symbol still exists in the referenced file
- Detect new exported symbols not yet tracked
- Report: `{N} stale symbols, {N} untracked symbols`
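The quick check above does not need a full AST pass; a regex over export declarations is enough for a first stale/untracked report. A sketch under that assumption (the export pattern is simplified and will miss some TypeScript forms such as `export { x }` lists):

```javascript
// Compare tracked symbols against exports found in source text (regex approximation).
function checkSymbols(source, tracked) {
  const found = new Set()
  const re = /export\s+(?:default\s+)?(?:async\s+)?(?:class|function|const|let|var|interface|type|enum)\s+([A-Za-z_$][\w$]*)/g
  let m
  while ((m = re.exec(source)) !== null) found.add(m[1])
  return {
    // tracked but no longer exported
    stale: tracked.filter(s => !found.has(s.split('.')[0])),
    // exported but not yet tracked
    untracked: [...found].filter(s => !tracked.some(t => t.split('.')[0] === s))
  }
}

const src = 'export class AuthService {}\nexport function logout() {}'
const report = checkSymbols(src, ['AuthService', 'AuthService.login', 'SessionStore'])
console.log(report)
```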
### 0.3 Validation Report

```
Consistency Check:
  Components validated: {N}
  Files verified: {N}
  Stale references: {N} (files missing or symbols removed)
  Untracked symbols: {N} (new exports not in index)
```

If stale references found: warn and auto-fix during Phase 3 updates.
If `--dry-run`: report only, no fixes.
## Phase 1: Change Detection

### 1.0.1 Schema Version Check

Before processing changes, verify doc-index.json schema compatibility:

```javascript
const docIndex = JSON.parse(Read('.workflow/.doc-index/doc-index.json'));
```

@@ -201,6 +232,7 @@ For each affected component in `doc-index.json`:
- Update `codeLocations` if file paths or line ranges changed
- Update `symbols` if new exports were added
- Add new `actionIds` entry
- **Auto-update `responsibility`**: If symbols changed (new methods/exports added or removed), re-infer responsibility from current symbols list using Gemini analysis. This prevents stale descriptions (e.g., responsibility still says "login, registration" after adding logout support)

### 3.2 Register New Components
@@ -65,11 +65,14 @@ Analyze context and produce two update payloads. Use LLM reasoning (current agen

```javascript
// ── Guidelines extraction ──
// Scan git diff + session for:
// - Debugging experiences → bug
// - Reusable code patterns → pattern
// - Architecture/design decisions → decision
// - Conventions, constraints, insights → rule
//
// Output: array of { type, tag, text }
// type: 'bug' | 'pattern' | 'decision' | 'rule'
// tag: domain tag (api, routing, schema, security, etc.)
// RULE: Only extract genuinely reusable insights. Skip trivial/obvious items.
// RULE: Deduplicate against existing guidelines before adding.
```
@@ -118,7 +121,7 @@ console.log(`
── Sync Preview ──

Guidelines (${guidelineUpdates.length} items):
${guidelineUpdates.map(g => `  [${g.type}:${g.tag}] ${g.text}`).join('\n') || '  (none)'}

Tech [${detectCategory(summary)}]:
${techEntry.title}

@@ -137,26 +140,102 @@ if (!autoYes) {
## Step 4: Write

```javascript
const matter = require('gray-matter') // YAML frontmatter parser

// ── Frontmatter check & repair helper ──
// Ensures target spec file has valid YAML frontmatter with keywords
// Uses gray-matter for robust parsing (handles malformed frontmatter, missing fields)
function ensureFrontmatter(filePath, tag, type) {
  const titleMap = {
    'coding-conventions': 'Coding Conventions',
    'architecture-constraints': 'Architecture Constraints',
    'learnings': 'Learnings',
    'quality-rules': 'Quality Rules'
  }
  const basename = filePath.split('/').pop().replace('.md', '')
  const title = titleMap[basename] || basename
  const defaultFm = {
    title,
    readMode: 'optional',
    priority: 'medium',
    scope: 'project',
    dimension: 'specs',
    keywords: [tag, type]
  }

  if (!file_exists(filePath)) {
    // Case A: Create new file with frontmatter
    Write(filePath, matter.stringify(`\n# ${title}\n\n`, defaultFm))
    return
  }

  const raw = Read(filePath)
  let parsed
  try {
    parsed = matter(raw)
  } catch {
    parsed = { data: {}, content: raw }
  }

  const hasFrontmatter = raw.trimStart().startsWith('---')

  if (!hasFrontmatter) {
    // Case B: File exists but no frontmatter → prepend
    Write(filePath, matter.stringify(raw, defaultFm))
    return
  }

  // Case C: Frontmatter exists → ensure keywords include current tag
  const existingKeywords = parsed.data.keywords || []
  const newKeywords = [...new Set([...existingKeywords, tag, type])]

  if (newKeywords.length !== existingKeywords.length) {
    parsed.data.keywords = newKeywords
    Write(filePath, matter.stringify(parsed.content, parsed.data))
  }
}

// ── Update specs/*.md ──
// Uses .ccw/specs/ directory - unified [type:tag] entry format
if (guidelineUpdates.length > 0) {
  // Map knowledge types to spec files
  const specFileMap = {
    bug: '.ccw/specs/learnings.md',
    pattern: '.ccw/specs/coding-conventions.md',
    decision: '.ccw/specs/architecture-constraints.md',
    rule: null // determined by content below
  }

  const date = new Date().toISOString().split('T')[0]
  const needsDate = { bug: true, pattern: true, decision: true, rule: false }

  for (const g of guidelineUpdates) {
    // For rule type, route by content and tag
    let targetFile = specFileMap[g.type]
    if (!targetFile) {
      const isQuality = /\b(test|coverage|lint|eslint|质量|测试覆盖|pre-commit|tsc|type.check)\b/i.test(g.text)
        || ['testing', 'quality', 'lint'].includes(g.tag)
      const isConstraint = /\b(禁止|no|never|must not|forbidden|不得|不允许)\b/i.test(g.text)
      if (isQuality) {
        targetFile = '.ccw/specs/quality-rules.md'
      } else if (isConstraint) {
        targetFile = '.ccw/specs/architecture-constraints.md'
      } else {
        targetFile = '.ccw/specs/coding-conventions.md'
      }
    }

    // Ensure frontmatter exists and keywords are up-to-date
    ensureFrontmatter(targetFile, g.tag, g.type)

    const existing = Read(targetFile)
    const entryLine = needsDate[g.type]
      ? `- [${g.type}:${g.tag}] ${g.text} (${date})`
      : `- [${g.type}:${g.tag}] ${g.text}`

    // Deduplicate: skip if text already in file
    if (!existing.includes(g.text)) {
      const newContent = existing.trimEnd() + '\n' + entryLine + '\n'
      Write(targetFile, newContent)
    }
  }
}
```
@@ -198,4 +277,5 @@ Write(techPath, JSON.stringify(tech, null, 2))

## Related Commands

- `/workflow:spec:setup` - Initialize project with specs scaffold
- `/workflow:spec:add` - Add knowledge entries (bug/pattern/decision/rule) with unified [type:tag] format
- `/workflow:spec:load` - Interactive spec loader with keyword/type/tag filtering
File diff suppressed because it is too large (Load Diff)

.claude/commands/workflow/spec/load.md (new file, 392 lines)
@@ -0,0 +1,392 @@
---
name: load
description: Interactive spec loader - ask what user needs, then load relevant specs by keyword routing
argument-hint: "[--all] [--type <bug|pattern|decision|rule>] [--tag <tag>] [\"keyword query\"]"
examples:
  - /workflow:spec:load
  - /workflow:spec:load "api routing"
  - /workflow:spec:load --type bug
  - /workflow:spec:load --all
  - /workflow:spec:load --tag security
---
# Spec Load Command (/workflow:spec:load)

## Overview

Interactive entry point for loading and browsing project specs. Asks the user what they need, then routes to the appropriate spec content based on keywords, type filters, or tag filters.

**Design**: Menu-driven → keyword match → load & display. No file modifications.

**Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow.

## Usage

```bash
/workflow:spec:load                   # Interactive menu
/workflow:spec:load "api routing"     # Direct keyword search
/workflow:spec:load --type bug        # Filter by knowledge type
/workflow:spec:load --tag security    # Filter by domain tag
/workflow:spec:load --all             # Load all specs
```
## Execution Process

```
Input Parsing:
├─ Parse --all flag → loadAll = true | false
├─ Parse --type (bug|pattern|decision|rule)
├─ Parse --tag (domain tag)
└─ Parse keyword query (positional text)

Decision:
├─ --all → Load all specs (Path C)
├─ --type or --tag or keyword → Direct filter (Path B)
└─ No args → Interactive menu (Path A)

Path A: Interactive Menu
├─ Step A1: Ask user intent
├─ Step A2: Route to action
└─ Step A3: Display results

Path B: Direct Filter
├─ Step B1: Build filter from args
├─ Step B2: Search specs
└─ Step B3: Display results

Path C: Load All
└─ Display all spec contents

Output:
└─ Formatted spec entries matching user query
```
## Implementation

### Step 1: Parse Input

```javascript
const args = $ARGUMENTS
const argsLower = args.toLowerCase()

let loadAll = argsLower.includes('--all')   // let: Path A's menu may set this later
const hasType = argsLower.includes('--type')
const hasTag = argsLower.includes('--tag')

let type = hasType ? args.match(/--type\s+(\w+)/i)?.[1]?.toLowerCase() : null
let tag = hasTag ? args.match(/--tag\s+([\w-]+)/i)?.[1]?.toLowerCase() : null

// Extract keyword query (everything that's not a flag)
// Trim before stripping quotes so whitespace left by flag removal doesn't block the quote match
let keyword = args
  .replace(/--type\s+\w+/gi, '')
  .replace(/--tag\s+[\w-]+/gi, '')
  .replace(/--all/gi, '')
  .trim()
  .replace(/^["']|["']$/g, '')
  .trim()

// Validate type
if (type && !['bug', 'pattern', 'decision', 'rule'].includes(type)) {
  console.log("Invalid type. Use 'bug', 'pattern', 'decision', or 'rule'.")
  return
}
```
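The flag and keyword regexes can be rehearsed in isolation. A self-contained version with `args` hardcoded in place of `$ARGUMENTS`, and the quote stripping applied after trimming:

```javascript
// Rehearse the Step 1 parsing logic on a sample argument string.
const args = '--type bug --tag security "api routing"'

const type = args.match(/--type\s+(\w+)/i)?.[1]?.toLowerCase() ?? null
const tag = args.match(/--tag\s+([\w-]+)/i)?.[1]?.toLowerCase() ?? null
const keyword = args
  .replace(/--type\s+\w+/gi, '')
  .replace(/--tag\s+[\w-]+/gi, '')
  .replace(/--all/gi, '')
  .trim()
  .replace(/^["']|["']$/g, '')
  .trim()

console.log({ type, tag, keyword })
```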
### Step 2: Determine Mode

```javascript
const useInteractive = !loadAll && !hasType && !hasTag && !keyword
```
### Path A: Interactive Menu

```javascript
if (useInteractive) {
  const answer = AskUserQuestion({
    questions: [{
      question: "What specs would you like to load?",
      header: "Action",
      multiSelect: false,
      options: [
        {
          label: "Browse all specs",
          description: "Load and display all project spec entries"
        },
        {
          label: "Search by keyword",
          description: "Find specs matching a keyword (e.g., api, security, routing)"
        },
        {
          label: "View bug experiences",
          description: "Load all [bug:*] debugging experience entries"
        },
        {
          label: "View code patterns",
          description: "Load all [pattern:*] reusable code pattern entries"
        }
      ]
    }]
  })

  const choice = answer.answers["Action"]

  if (choice === "Browse all specs") {
    loadAll = true
  } else if (choice === "View bug experiences") {
    type = "bug"
  } else if (choice === "View code patterns") {
    type = "pattern"
  } else if (choice === "Search by keyword") {
    // Ask for keyword
    const kwAnswer = AskUserQuestion({
      questions: [{
        question: "Enter keyword(s) to search for:",
        header: "Keyword",
        multiSelect: false,
        options: [
          { label: "api", description: "API endpoints, HTTP, REST, routing" },
          { label: "security", description: "Authentication, authorization, input validation" },
          { label: "arch", description: "Architecture, design patterns, module structure" },
          { label: "perf", description: "Performance, caching, optimization" }
        ]
      }]
    })
    keyword = kwAnswer.answers["Keyword"].toLowerCase()
  } else {
    // "Other" — user typed custom input, use as keyword
    keyword = choice.toLowerCase()
  }
}
```
### Step 3: Load Spec Files

```javascript
// Discover all spec files
const specFiles = [
  '.ccw/specs/coding-conventions.md',
  '.ccw/specs/architecture-constraints.md',
  '.ccw/specs/learnings.md',
  '.ccw/specs/quality-rules.md'
]

// Also check personal specs
const personalFiles = [
  '~/.ccw/personal/conventions.md',
  '~/.ccw/personal/constraints.md',
  '~/.ccw/personal/learnings.md',
  '.ccw/personal/conventions.md',
  '.ccw/personal/constraints.md',
  '.ccw/personal/learnings.md'
]

// Read all existing spec files
const allEntries = []

for (const file of [...specFiles, ...personalFiles]) {
  if (!file_exists(file)) continue
  const content = Read(file)

  // Extract entries using unified format regex
  // Entry line:  - [type:tag] summary (date)
  // Extended:    - key: value
  const lines = content.split('\n')
  let currentEntry = null

  for (const line of lines) {
    const entryMatch = line.match(/^- \[(\w+):([\w-]+)\] (.*?)(?:\s+\((\d{4}-\d{2}-\d{2})\))?$/)
    if (entryMatch) {
      if (currentEntry) allEntries.push(currentEntry)
      currentEntry = {
        type: entryMatch[1],
        tag: entryMatch[2],
        summary: entryMatch[3],
        date: entryMatch[4] || null,
        extended: {},
        source: file,
        raw: line
      }
    } else if (currentEntry && /^\s{4}- ([\w-]+):\s?(.*)/.test(line)) {
      const fieldMatch = line.match(/^\s{4}- ([\w-]+):\s?(.*)/)
      currentEntry.extended[fieldMatch[1]] = fieldMatch[2]
    } else if (currentEntry && !/^\s{4}/.test(line) && line.trim() !== '') {
      // Non-indented non-empty line = end of current entry
      allEntries.push(currentEntry)
      currentEntry = null
    }

    // Also handle legacy format: - [tag] text (learned: date)
    const legacyMatch = line.match(/^- \[([\w-]+)\] (.+?)(?:\s+\(learned: (\d{4}-\d{2}-\d{2})\))?$/)
    if (!entryMatch && legacyMatch) {
      if (currentEntry) allEntries.push(currentEntry)
      currentEntry = {
        type: 'rule',
        tag: legacyMatch[1],
        summary: legacyMatch[2],
        date: legacyMatch[3] || null,
        extended: {},
        source: file,
        raw: line,
        legacy: true
      }
    }
  }
  if (currentEntry) allEntries.push(currentEntry)
}
```
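Both entry formats can be sanity-checked against the regexes directly. A standalone run over one unified-format line and one legacy line (the sample entry texts are invented for illustration):

```javascript
// Exercise the unified and legacy entry regexes on sample spec lines.
const entryRe = /^- \[(\w+):([\w-]+)\] (.*?)(?:\s+\((\d{4}-\d{2}-\d{2})\))?$/
const legacyRe = /^- \[([\w-]+)\] (.+?)(?:\s+\(learned: (\d{4}-\d{2}-\d{2})\))?$/

const unified = '- [bug:api] Retry only requests that carry an idempotency key (2026-03-01)'.match(entryRe)
const legacy = '- [security] Validate all webhook signatures (learned: 2025-11-20)'.match(legacyRe)

console.log(unified[1], unified[2], unified[4]) // type, tag, date
console.log(legacy[1], legacy[3])               // tag, date
```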
### Step 4: Filter Entries

```javascript
let filtered = allEntries

// Filter by type
if (type) {
  filtered = filtered.filter(e => e.type === type)
}

// Filter by tag
if (tag) {
  filtered = filtered.filter(e => e.tag === tag)
}

// Filter by keyword (search in tag, summary, and extended fields)
if (keyword) {
  const kw = keyword.toLowerCase()
  const kwTerms = kw.split(/\s+/)

  filtered = filtered.filter(e => {
    const searchText = [
      e.type, e.tag, e.summary,
      ...Object.values(e.extended)
    ].join(' ').toLowerCase()

    return kwTerms.every(term => searchText.includes(term))
  })
}

// If --all, keep everything (no filter)
```
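Keyword terms are ANDed across the entry's combined searchable text. A small standalone check of that behaviour, with a made-up entry:

```javascript
// AND-match keyword terms against an entry's combined text (mirrors the Step 4 filter).
function matchesKeyword(entry, keyword) {
  const terms = keyword.toLowerCase().split(/\s+/)
  const searchText = [entry.type, entry.tag, entry.summary, ...Object.values(entry.extended)]
    .join(' ').toLowerCase()
  return terms.every(t => searchText.includes(t))
}

const entry = {
  type: 'bug',
  tag: 'api',
  summary: 'Routing table cache went stale',
  extended: { fix: 'invalidate on deploy' }
}
console.log(matchesKeyword(entry, 'api routing')) // true: both terms present
console.log(matchesKeyword(entry, 'api schema'))  // false: "schema" not present
```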
### Step 5: Display Results

```javascript
if (filtered.length === 0) {
  const filterDesc = []
  if (type) filterDesc.push(`type=${type}`)
  if (tag) filterDesc.push(`tag=${tag}`)
  if (keyword) filterDesc.push(`keyword="${keyword}"`)

  console.log(`
No specs found matching: ${filterDesc.join(', ') || '(all)'}

Available spec files:
${specFiles.filter(f => file_exists(f)).map(f => `  - ${f}`).join('\n') || '  (none)'}

Suggestions:
  - Use /workflow:spec:setup to initialize specs
  - Use /workflow:spec:add to add new entries
  - Use /workflow:spec:load --all to see everything
`)
  return
}

// Group by source file
const grouped = {}
for (const entry of filtered) {
  if (!grouped[entry.source]) grouped[entry.source] = []
  grouped[entry.source].push(entry)
}

// Display
console.log(`
## Specs Loaded (${filtered.length} entries)
${type ? `Type: ${type}` : ''}${tag ? ` Tag: ${tag}` : ''}${keyword ? ` Keyword: "${keyword}"` : ''}
`)

for (const [source, entries] of Object.entries(grouped)) {
  console.log(`### ${source}`)
  console.log('')

  for (const entry of entries) {
    // Render entry
    const datePart = entry.date ? ` (${entry.date})` : ''
    console.log(`- [${entry.type}:${entry.tag}] ${entry.summary}${datePart}`)

    // Render extended fields
    for (const [key, value] of Object.entries(entry.extended)) {
      console.log(`  - ${key}: ${value}`)
    }
  }
  console.log('')
}

// Summary footer
const typeCounts = {}
for (const e of filtered) {
  typeCounts[e.type] = (typeCounts[e.type] || 0) + 1
}
const typeBreakdown = Object.entries(typeCounts)
  .map(([t, c]) => `${t}: ${c}`)
  .join(', ')

console.log(`---`)
console.log(`Total: ${filtered.length} entries (${typeBreakdown})`)
console.log(`Sources: ${Object.keys(grouped).join(', ')}`)
```
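The footer's type breakdown is a plain counting pass over the filtered list. Standalone, with a made-up `filtered` array:

```javascript
// Count entries per type and format the footer breakdown (mirrors the Step 5 summary).
const filtered = [{ type: 'bug' }, { type: 'bug' }, { type: 'rule' }]
const typeCounts = {}
for (const e of filtered) typeCounts[e.type] = (typeCounts[e.type] || 0) + 1
const typeBreakdown = Object.entries(typeCounts)
  .map(([t, c]) => `${t}: ${c}`)
  .join(', ')
console.log(`Total: ${filtered.length} entries (${typeBreakdown})`)
```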
## Examples

### Interactive Browse
```bash
/workflow:spec:load
# → Menu: "What specs would you like to load?"
# → User selects "Browse all specs"
# → Displays all entries grouped by file
```

### Keyword Search
```bash
/workflow:spec:load "api routing"
# → Filters entries where tag/summary/extended contains "api" AND "routing"
# → Displays matching entries
```

### Type Filter
```bash
/workflow:spec:load --type bug
# → Shows all [bug:*] entries from learnings.md
```

### Tag Filter
```bash
/workflow:spec:load --tag security
# → Shows all [*:security] entries across all spec files
```

### Combined Filters
```bash
/workflow:spec:load --type rule --tag api
# → Shows all [rule:api] entries
```

### Load All
```bash
/workflow:spec:load --all
# → Displays every entry from every spec file
```
## Error Handling
|
||||
|
||||
| Error | Resolution |
|
||||
|-------|------------|
|
||||
| No spec files found | Suggest `/workflow:spec:setup` to initialize |
|
||||
| No matching entries | Show available files and suggest alternatives |
|
||||
| Invalid type | Exit with valid type list |
|
||||
| Corrupt entry format | Skip unparseable lines, continue loading |
|
||||
|
||||
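The "skip unparseable lines" behavior amounts to a tolerant parser. A minimal sketch, assuming the `- [type:tag] summary` line format:

```javascript
// Lines that don't match the entry format are dropped, not fatal.
const ENTRY_RE = /^- \[(\w+):([\w-]+)\] (.+)$/

function parseEntries(markdown) {
  const entries = []
  for (const line of markdown.split('\n')) {
    const m = line.match(ENTRY_RE)
    if (!m) continue // corrupt or non-entry line → skip, keep loading
    entries.push({ type: m[1], tag: m[2], summary: m[3] })
  }
  return entries
}

const sample = [
  '- [bug:api] Retry on 502',
  '- broken entry without tag',
  '- [rule:security] Sanitize user input'
].join('\n')

console.log(parseEntries(sample).length) // → 2
```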

## Related Commands

- `/workflow:spec:setup` - Initialize project with specs scaffold
- `/workflow:spec:add` - Add knowledge entries (bug/pattern/decision/rule) with unified [type:tag] format
- `/workflow:session:sync` - Quick-sync session work to specs and project-tech
- `ccw spec list` - View spec file index
- `ccw spec load` - CLI-level spec loading (used by hooks)
@@ -471,70 +471,129 @@ For each category of collected answers, append rules to the corresponding spec M
- Round 5 (quality): `category: execution` (testing phase)

```javascript
const matter = require('gray-matter') // YAML frontmatter parser

// ── Frontmatter check & repair helper ──
// Ensures target spec file has valid YAML frontmatter with keywords
// Uses gray-matter for robust parsing (handles malformed frontmatter, missing fields)
function ensureSpecFrontmatter(filePath, extraKeywords = []) {
  const titleMap = {
    'coding-conventions': 'Coding Conventions',
    'architecture-constraints': 'Architecture Constraints',
    'learnings': 'Learnings',
    'quality-rules': 'Quality Rules'
  }
  const basename = filePath.split('/').pop().replace('.md', '')
  const title = titleMap[basename] || basename
  const defaultKw = filePath.includes('conventions') ? 'convention'
    : filePath.includes('constraints') ? 'constraint' : 'quality'
  const defaultFm = {
    title,
    readMode: 'optional',
    priority: 'medium',
    category: 'general',
    scope: 'project',
    dimension: 'specs',
    keywords: [...new Set([defaultKw, ...extraKeywords])]
  }

  if (!file_exists(filePath)) {
    // Case A: Create new file with frontmatter
    const specDir = path.dirname(filePath)
    if (!fs.existsSync(specDir)) {
      fs.mkdirSync(specDir, { recursive: true })
    }
    Write(filePath, matter.stringify(`\n# ${title}\n\n`, defaultFm))
    return
  }

  const raw = Read(filePath)
  let parsed
  try {
    parsed = matter(raw)
  } catch {
    parsed = { data: {}, content: raw }
  }

  const hasFrontmatter = raw.trimStart().startsWith('---')

  if (!hasFrontmatter) {
    // Case B: File exists but no frontmatter → prepend
    Write(filePath, matter.stringify(raw, defaultFm))
    return
  }

  // Case C: Frontmatter exists → ensure keywords include extras
  const existingKeywords = parsed.data.keywords || []
  const newKeywords = [...new Set([...existingKeywords, defaultKw, ...extraKeywords])]

  if (newKeywords.length !== existingKeywords.length) {
    parsed.data.keywords = newKeywords
    Write(filePath, matter.stringify(parsed.content, parsed.data))
  }
}

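// Worked example (hypothetical values) of the Case C keyword merge above:
// the Set-based union dedupes, so re-running the repair is idempotent.
// A second pass produces the same array and skips the rewrite.
const kwBefore = ['convention', 'api']
const kwAfter = [...new Set([...kwBefore, 'convention', 'naming'])]
// kwAfter → ['convention', 'api', 'naming']
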
// Helper: append rules to a spec MD file with category support
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
function appendRulesToSpecFile(filePath, rules, defaultCategory = 'general') {
  if (rules.length === 0) return

  // Ensure .ccw/specs/ directory exists
  const specDir = path.dirname(filePath)
  if (!fs.existsSync(specDir)) {
    fs.mkdirSync(specDir, { recursive: true })
  }

  // Extract domain tags from rules for keyword accumulation
  const ruleTags = rules
    .map(r => r.match(/\[[\w]+:([\w-]+)\]/)?.[1])
    .filter(Boolean)

  // Ensure frontmatter exists and keywords include rule tags
  ensureSpecFrontmatter(filePath, [...new Set(ruleTags)])

  const existing = Read(filePath)
  // Append new rules as markdown list items - rules are already in [type:tag] format from caller
  const newContent = existing.trimEnd() + '\n' + rules.map(r => {
    // If rule already has - prefix or [type:tag] format, use as-is
    if (/^- /.test(r)) return r
    if (/^\[[\w]+:[\w-]+\]/.test(r)) return `- ${r}`
    return `- [rule:${defaultCategory}] ${r}`
  }).join('\n') + '\n'
  Write(filePath, newContent)
}
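
// Worked example (hypothetical rules) of the normalization branch in
// appendRulesToSpecFile: rules without a [type:tag] prefix get one derived
// from defaultCategory, while already-tagged rules pass through.
const demoRules = ['- [bug:api] Retry on 502', '[rule:db] Use migrations', 'Prefer small functions']
const demoCategory = 'style'
const normalized = demoRules.map(r => {
  if (/^- /.test(r)) return r
  if (/^\[[\w]+:[\w-]+\]/.test(r)) return `- ${r}`
  return `- [rule:${demoCategory}] ${r}`
})
// normalized[2] → '- [rule:style] Prefer small functions'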

// Helper: infer domain tag from rule content
function inferTag(text) {
  const t = text.toLowerCase()
  if (/\b(api|http|rest|endpoint|routing)\b/.test(t)) return 'api'
  if (/\b(security|auth|permission|xss|sql|sanitize)\b/.test(t)) return 'security'
  if (/\b(database|db|sql|postgres|mysql)\b/.test(t)) return 'db'
  if (/\b(react|component|hook|jsx|tsx)\b/.test(t)) return 'react'
  if (/\b(performance|cache|lazy|async|slow)\b/.test(t)) return 'perf'
  if (/\b(test|coverage|mock|jest|vitest)\b/.test(t)) return 'testing'
  if (/\b(architecture|layer|module|dependency)\b/.test(t)) return 'arch'
  if (/\b(naming|camel|pascal|prefix|suffix)\b/.test(t)) return 'naming'
  if (/\b(file|folder|directory|structure)\b/.test(t)) return 'file'
  if (/\b(doc|comment|jsdoc|readme)\b/.test(t)) return 'doc'
  if (/\b(build|webpack|vite|compile)\b/.test(t)) return 'build'
  if (/\b(deploy|ci|cd|docker)\b/.test(t)) return 'deploy'
  if (/\b(lint|eslint|prettier|format)\b/.test(t)) return 'lint'
  if (/\b(type|typescript|strict|any)\b/.test(t)) return 'typing'
  return 'style' // fallback for coding conventions
}

// Write conventions - infer domain tags from content
appendRulesToSpecFile('.ccw/specs/coding-conventions.md',
  [...newCodingStyle, ...newNamingPatterns, ...newFileStructure, ...newDocumentation]
    .map(r => /^\[[\w]+:[\w-]+\]/.test(r) ? r : `[rule:${inferTag(r)}] ${r}`),
  'style')

// Write constraints - infer domain tags from content
appendRulesToSpecFile('.ccw/specs/architecture-constraints.md',
  [...newArchitecture, ...newTechStack, ...newPerformance, ...newSecurity]
    .map(r => /^\[[\w]+:[\w-]+\]/.test(r) ? r : `[rule:${inferTag(r)}] ${r}`),
  'arch')

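// Spot-check of the tag-inference idea with a self-contained mini version
// (first matching domain wins, 'style' as fallback, mirroring inferTag);
// the sample rule text is hypothetical.
const spotInfer = (text) => {
  const t = text.toLowerCase()
  if (/\b(api|http|rest|endpoint|routing)\b/.test(t)) return 'api'
  if (/\b(security|auth|permission|xss|sql|sanitize)\b/.test(t)) return 'security'
  return 'style'
}
// spotInfer('Always version REST endpoints') → 'api'
// spotInfer('Keep functions under 40 lines') → 'style'
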
// Write quality rules (execution category)
if (newQualityRules.length > 0) {
  const qualityPath = '.ccw/specs/quality-rules.md'
  // ensureSpecFrontmatter handles create/repair/keyword-update
  ensureSpecFrontmatter(qualityPath, ['quality', 'testing', 'coverage', 'lint'])
  appendRulesToSpecFile(qualityPath,
    newQualityRules.map(q => `${q.rule} (scope: ${q.scope}, enforced by: ${q.enforced_by})`),
    'execution')
}
```
@@ -644,7 +703,8 @@ Next steps:

## Related Commands

- `/workflow:spec:add` - Add knowledge entries (bug/pattern/decision/rule) with unified [type:tag] format
- `/workflow:spec:load` - Interactive spec loader with keyword/type/tag filtering
- `/workflow:session:sync` - Quick-sync session work to specs and project-tech
- `workflow-plan` skill - Start planning with initialized project context
- `/workflow:status --project` - View project state and guidelines