feat: add spec-setup command for project initialization and interactive configuration

- Introduced a new command `spec-setup` to initialize project-level state.
- Generates `.workflow/project-tech.json` and `.ccw/specs/*.md` files.
- Implements a multi-round interactive questionnaire for configuring project guidelines.
- Supports flags for regeneration, skipping specs, and resetting existing content.
- Integrates analysis via `cli-explore-agent` for comprehensive project understanding.
- Provides detailed execution process and error handling for various scenarios.
Author: catlog22
Date: 2026-03-06 16:49:35 +08:00
Parent: f2d4364c69
Commit: a9469a5e3b
25 changed files with 3472 additions and 1819 deletions


@@ -923,8 +923,9 @@ Task: <description>
**Cycle Workflows**: workflow:integration-test-cycle, workflow:refactor-cycle
**Execution**: workflow:unified-execute-with-file
**Design**: workflow:ui-design:*
-**Session Management**: workflow:session:start, workflow:session:resume, workflow:session:complete, workflow:session:solidify, workflow:session:list, workflow:session:sync
-**Utility**: workflow:clean, workflow:init, workflow:init-guidelines, workflow:status
+**Session Management**: workflow:session:start, workflow:session:resume, workflow:session:complete, workflow:session:list, workflow:session:sync
+**Utility**: workflow:clean, workflow:spec:setup, workflow:status
+**Spec Management**: workflow:spec:setup, workflow:spec:add
**Issue Workflow**: issue:discover, issue:discover-by-prompt, issue:plan, issue:queue, issue:execute, issue:convert-to-plan, issue:from-brainstorm, issue:new
### Testing Commands Distinction


@@ -760,8 +760,8 @@ todos = [
|---------|---------|
| `workflow:unified-execute-with-file` | Universal execution engine - consumes plan output from collaborative-plan, roadmap, brainstorm |
| `workflow:clean` | Intelligent code cleanup - mainline detection, stale artifact removal |
-| `workflow:init` | Initialize `.workflow/project-tech.json` with project analysis |
-| `workflow:init-guidelines` | Interactive wizard to fill `specs/*.md` |
+| `workflow:spec:setup` | Initialize `.workflow/project-tech.json` with project analysis and specs scaffold |
+| `workflow:spec:add` | Interactive wizard to add individual specs with scope selection |
| `workflow:status` | Generate on-demand views for project overview and workflow tasks |
---
@@ -817,7 +817,7 @@ todos = [
# Utility commands (invoked directly, not auto-routed)
# /workflow:unified-execute-with-file # 通用执行引擎(消费 plan 输出)
# /workflow:clean # 智能代码清理
-# /workflow:init # 初始化项目状态
-# /workflow:init-guidelines # 交互式填充项目规范
+# /workflow:spec:setup # 初始化项目状态
+# /workflow:spec:add # 交互式填充项目规范
# /workflow:status # 项目概览和工作流状态
```


@@ -93,7 +93,7 @@ async function selectCommandCategory() {
{ label: "Brainstorm", description: "brainstorm-with-file, brainstorm (unified skill)" },
{ label: "Analysis", description: "analyze-with-file" },
{ label: "Issue", description: "discover, plan, queue, execute, from-brainstorm, convert-to-plan" },
-{ label: "Utility", description: "clean, init, replan, status" }
+{ label: "Utility", description: "clean, spec:setup, spec:add, replan, status" }
],
multiSelect: false
}]
@@ -153,7 +153,7 @@ async function selectCommand(category) {
],
'Utility': [
{ label: "/workflow:clean", description: "Intelligent code cleanup" },
-{ label: "/workflow:init", description: "Initialize project-level state" },
+{ label: "/workflow:spec:setup", description: "Initialize project-level state" },
{ label: "/workflow:replan", description: "Interactive workflow replanning" },
{ label: "/workflow:status", description: "Generate workflow status views" }
]


@@ -65,6 +65,14 @@ When `--yes` or `-y`: Auto-confirm exploration decisions, use recommended analys
## Implementation
+### AskUserQuestion Constraints
+All `AskUserQuestion` calls MUST comply:
+- **questions**: 1-4 questions per call
+- **options**: 2-4 per question (system auto-adds "Other" for free-text input)
+- **header**: max 12 characters
+- **label**: 1-5 words per option
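These limits can be checked with a small sketch like the following (a hypothetical helper, not part of the AskUserQuestion tool itself; the payload shape is an assumption based on the calls shown later in this document):

```javascript
// Hypothetical validator for the AskUserQuestion constraints above.
// Assumes payload shape { questions: [{ question, header, options: [{ label }] }] }.
function validateAskUserQuestion(payload) {
  const errors = []
  const qs = payload.questions || []
  if (qs.length < 1 || qs.length > 4) errors.push('questions: must be 1-4 per call')
  for (const q of qs) {
    if (q.options.length < 2 || q.options.length > 4) errors.push(`options: 2-4 required for "${q.header}"`)
    if (q.header.length > 12) errors.push(`header: "${q.header}" exceeds 12 characters`)
    for (const opt of q.options) {
      const words = opt.label.trim().split(/\s+/).length
      if (words < 1 || words > 5) errors.push(`label: "${opt.label}" must be 1-5 words`)
    }
  }
  return errors
}
```

An empty return value means the call is compliant; each violation produces one message.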
### Session Initialization
1. Extract topic/question from `$ARGUMENTS`
@@ -79,10 +87,10 @@ When `--yes` or `-y`: Auto-confirm exploration decisions, use recommended analys
### Phase 1: Topic Understanding
1. **Parse Topic & Identify Dimensions** — Match keywords against Analysis Dimensions table
-2. **Initial Scoping** (if new session + not auto mode):
-- **Focus**: Multi-select from Dimension-Direction Mapping directions
-- **Perspectives**: Multi-select up to 4 (see Analysis Perspectives), default: single comprehensive
-- **Depth**: Quick Overview (10-15min) / Standard (30-60min) / Deep Dive (1-2hr)
+2. **Initial Scoping** (if new session + not auto mode) — use **single AskUserQuestion call with up to 3 questions**:
+- Q1 **Focus** (multiSelect: true, header: "分析方向"): Top 3-4 directions from Dimension-Direction Mapping (options max 4)
+- Q2 **Perspectives** (multiSelect: true, header: "分析视角"): Up to 4 from Analysis Perspectives table (options max 4), default: single comprehensive
+- Q3 **Depth** (multiSelect: false, header: "分析深度"): Quick Overview / Standard / Deep Dive (3 options)
3. **Initialize discussion.md** — Structure includes:
- **Dynamic TOC** (top of file, updated after each round/phase): `## Table of Contents` with links to major sections
- **Current Understanding** (replaceable block, overwritten each round — NOT appended): `## Current Understanding` initialized as "To be populated after exploration"
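The single scoping call described in step 2 might be shaped like this (question wording and option labels are illustrative, not taken from the actual implementation):

```javascript
// Illustrative payload for the single Initial Scoping call: three questions,
// each within the 2-4 option and 12-character header limits.
const initialScoping = {
  questions: [
    { question: 'Which directions should the analysis focus on?',
      header: '分析方向', multiSelect: true,
      options: [
        { label: 'Data flow' }, { label: 'Error handling' },
        { label: 'Performance' }, { label: 'API surface' }
      ] },
    { question: 'Which perspectives should be applied?',
      header: '分析视角', multiSelect: true,
      options: [
        { label: 'Architecture' }, { label: 'Security' },
        { label: 'Maintainability' }, { label: 'Testing' }
      ] },
    { question: 'How deep should the analysis go?',
      header: '分析深度', multiSelect: false,
      options: [
        { label: 'Quick Overview' }, { label: 'Standard' }, { label: 'Deep Dive' }
      ] }
  ]
}
```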
@@ -223,31 +231,26 @@ CONSTRAINTS: Focus on ${dimensions.join(', ')}
2. **Present Findings** from explorations.json
-3. **Gather Feedback** (AskUserQuestion, single-select):
-- **同意,继续深入**: Direction correct, deepen
-- **同意,并建议下一步**: Agree with direction, but user has specific next step in mind
-- **需要调整方向**: Different focus
+3. **Gather Feedback** (AskUserQuestion, single-select, header: "分析反馈"):
+- **继续深入**: Direction correct, deepen automatically or user specifies direction (combines agree+deepen and agree+suggest)
+- **调整方向**: Different focus or specific questions to address
+- **补充信息**: User has additional context, constraints, or corrections to provide
 - **分析完成**: Sufficient → exit to Phase 4
-- **有具体问题**: Specific questions
4. **Process Response** (always record user choice + impact to discussion.md):
-**Agree, Deepen** → Dynamically generate deepen directions from current analysis context:
-- Extract 2-3 context-driven options from: unresolved questions in explorations.json, low-confidence findings, unexplored dimensions, user-highlighted areas
-- Generate 1-2 heuristic options that break current frame: e.g., "compare with best practices in [related domain]", "analyze under extreme load scenarios", "review from security audit perspective", "explore simpler architectural alternatives"
-- Each option specifies: label, description, tool (cli-explore-agent for code-level / Gemini CLI for pattern-level), scope
-- AskUserQuestion with generated options (single-select)
-- Execute selected direction via corresponding tool
-- Merge new code_anchors/call_chains into existing results
-- Record confirmed assumptions + deepen angle
+**继续深入** → Sub-question to choose direction (AskUserQuestion, single-select, header: "深入方向"):
+- Dynamically generate **max 3** context-driven options from: unresolved questions, low-confidence findings, unexplored dimensions, user-highlighted areas
+- Add **1** heuristic option that breaks current frame (e.g., "compare with best practices", "review from security perspective", "explore simpler alternatives")
+- Total: **max 4 options**. Each specifies: label, description, tool (cli-explore-agent for code-level / Gemini CLI for pattern-level), scope
+- **"Other" is auto-provided** by AskUserQuestion — covers user-specified custom direction (no need for separate "suggest next step" option)
+- Execute selected direction → merge new code_anchors/call_chains → record confirmed assumptions + deepen angle
-**Agree, Suggest Next Step** → AskUserQuestion (free text: "请描述您希望下一步深入的方向") → Execute user's specific direction via cli-explore-agent or CLI → Record user-driven exploration rationale
+**调整方向** → AskUserQuestion (header: "新方向", user selects or provides custom via "Other") → new CLI exploration → Record Decision (old vs new direction, reason, impact)
-**Adjust Direction** → AskUserQuestion for new focus → new CLI exploration → Record Decision (old vs new direction, reason, impact)
+**补充信息** → Capture user input, integrate into context, answer questions via CLI/analysis if needed → Record corrections/additions + updated understanding
-**Specific Questions** → Capture, answer via CLI/analysis, document Q&A → Record gaps revealed + new understanding
-**Complete** → Exit loop → Record why concluding
+**分析完成** → Exit loop → Record why concluding
5. **Update discussion.md**:
- **Append** Round N: user input, direction adjustment, Q&A, corrections, new insights
@@ -319,11 +322,11 @@ CONSTRAINTS: Focus on ${dimensions.join(', ')}
```
For each recommendation (ordered by priority high→medium→low):
1. Present: action, rationale, priority, steps[] (numbered sub-steps)
-2. AskUserQuestion (single-select, header: "Rec #N"):
-- **确认**: Accept as-is → review_status = "accepted"
-- **修改**: User adjusts scope/steps → record modification → review_status = "modified"
-- **删除**: Not needed → record reason → review_status = "rejected"
-- **跳过逐条审议**: Accept all remaining as-is → break loop
+2. AskUserQuestion (single-select, header: "建议#N"):
+- **确认** (label: "确认", desc: "Accept as-is") → review_status = "accepted"
+- **修改** (label: "修改", desc: "Adjust scope/steps") → record modification → review_status = "modified"
+- **删除** (label: "删除", desc: "Not needed") → record reason → review_status = "rejected"
+- **跳过审议** (label: "跳过审议", desc: "Accept all remaining") → break loop
3. Record review decision to discussion.md Decision Log
4. Update conclusions.json recommendation.review_status
```
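The per-recommendation review above can be sketched as a loop; in this sketch the `askReview` callback stands in for the AskUserQuestion call, and all names are illustrative:

```javascript
// Illustrative review loop: walks recommendations by priority and applies
// the user's per-item decision; "跳过审议" accepts all remaining items.
function reviewRecommendations(recommendations, askReview) {
  const order = { high: 0, medium: 1, low: 2 }
  const sorted = [...recommendations].sort((a, b) => order[a.priority] - order[b.priority])
  for (let i = 0; i < sorted.length; i++) {
    const choice = askReview(sorted[i]) // one of: 确认 / 修改 / 删除 / 跳过审议
    if (choice === '跳过审议') {
      for (let j = i; j < sorted.length; j++) sorted[j].review_status = 'accepted'
      break
    }
    sorted[i].review_status =
      choice === '确认' ? 'accepted' :
      choice === '修改' ? 'modified' : 'rejected'
  }
  return sorted
}
```

Each decision would also be recorded to the Decision Log and mirrored into conclusions.json, as steps 3-4 describe.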


@@ -587,7 +587,11 @@ Schema (tasks): ~/.ccw/workflows/cli-templates/schemas/task-schema.json
- Execution command
- Conflict status
-6. **Update Todo**
+6. **Sync Session State**
+- Execute: `/workflow:session:sync -y "Plan complete: ${subDomains.length} domains, ${allTasks.length} tasks"`
+- Updates specs/*.md with planning insights and project-tech.json with planning session entry
+7. **Update Todo**
- Set Phase 4 status to `completed`
**plan.md Structure**:


@@ -1,380 +0,0 @@
---
name: init-specs
description: Interactive wizard to create individual specs or personal constraints with scope selection
argument-hint: "[--scope <global|project>] [--dimension <specs|personal>] [--category <general|exploration|planning|execution>]"
examples:
- /workflow:init-specs
- /workflow:init-specs --scope global --dimension personal
- /workflow:init-specs --scope project --dimension specs
---
# Workflow Init Specs Command (/workflow:init-specs)
## Overview
Interactive wizard for creating individual specs or personal constraints with scope selection. This command provides a guided experience for adding new rules to the spec system.
**Key Features**:
- Supports both project specs and personal specs
- Scope selection (global vs project) for personal specs
- Category-based organization for workflow stages
- Interactive mode with smart defaults
## Usage
```bash
/workflow:init-specs # Interactive mode (all prompts)
/workflow:init-specs --scope global # Create global personal spec
/workflow:init-specs --scope project # Create project spec (default)
/workflow:init-specs --dimension specs # Project conventions/constraints
/workflow:init-specs --dimension personal # Personal preferences
/workflow:init-specs --category exploration # Workflow stage category
```
## Parameters
| Parameter | Values | Default | Description |
|-----------|--------|---------|-------------|
| `--scope` | `global`, `project` | `project` | Where to store the spec (only for personal dimension) |
| `--dimension` | `specs`, `personal` | Interactive | Type of spec to create |
| `--category` | `general`, `exploration`, `planning`, `execution` | `general` | Workflow stage category |
## Execution Process
```
Input Parsing:
├─ Parse --scope (global | project)
├─ Parse --dimension (specs | personal)
└─ Parse --category (general | exploration | planning | execution)
Step 1: Gather Requirements (Interactive)
├─ If dimension not specified → Ask dimension
├─ If personal + scope not specified → Ask scope
├─ If category not specified → Ask category
├─ Ask type (convention | constraint | learning)
└─ Ask content (rule text)
Step 2: Determine Target File
├─ specs dimension → .ccw/specs/coding-conventions.md or architecture-constraints.md
└─ personal dimension → ~/.ccw/personal/ or .ccw/personal/
Step 3: Write Spec
├─ Check if file exists, create if needed with proper frontmatter
├─ Append rule to appropriate section
└─ Run ccw spec rebuild
Step 4: Display Confirmation
```
## Implementation
### Step 1: Parse Input and Gather Requirements
```javascript
// Parse arguments
const args = $ARGUMENTS.toLowerCase()
const hasScope = args.includes('--scope')
const hasDimension = args.includes('--dimension')
const hasCategory = args.includes('--category')
// Extract values from arguments
let scope = hasScope ? args.match(/--scope\s+(\w+)/)?.[1] : null
let dimension = hasDimension ? args.match(/--dimension\s+(\w+)/)?.[1] : null
let category = hasCategory ? args.match(/--category\s+(\w+)/)?.[1] : null
// Validate values
if (scope && !['global', 'project'].includes(scope)) {
console.log("Invalid scope. Use 'global' or 'project'.")
return
}
if (dimension && !['specs', 'personal'].includes(dimension)) {
console.log("Invalid dimension. Use 'specs' or 'personal'.")
return
}
if (category && !['general', 'exploration', 'planning', 'execution'].includes(category)) {
console.log("Invalid category. Use 'general', 'exploration', 'planning', or 'execution'.")
return
}
```
### Step 2: Interactive Questions
**If dimension not specified**:
```javascript
if (!dimension) {
const dimensionAnswer = AskUserQuestion({
questions: [{
question: "What type of spec do you want to create?",
header: "Dimension",
multiSelect: false,
options: [
{
label: "Project Spec",
description: "Coding conventions, constraints, quality rules for this project (stored in .ccw/specs/)"
},
{
label: "Personal Spec",
description: "Personal preferences and constraints that follow you across projects (stored in ~/.ccw/personal/ or .ccw/personal/)"
}
]
}]
})
dimension = dimensionAnswer.answers["Dimension"] === "Project Spec" ? "specs" : "personal"
}
```
**If personal dimension and scope not specified**:
```javascript
if (dimension === 'personal' && !scope) {
const scopeAnswer = AskUserQuestion({
questions: [{
question: "Where should this personal spec be stored?",
header: "Scope",
multiSelect: false,
options: [
{
label: "Global (Recommended)",
description: "Apply to ALL projects (~/.ccw/personal/)"
},
{
label: "Project-only",
description: "Apply only to this project (.ccw/personal/)"
}
]
}]
})
scope = scopeAnswer.answers["Scope"].includes("Global") ? "global" : "project"
}
```
**If category not specified**:
```javascript
if (!category) {
const categoryAnswer = AskUserQuestion({
questions: [{
question: "Which workflow stage does this spec apply to?",
header: "Category",
multiSelect: false,
options: [
{
label: "General (Recommended)",
description: "Applies to all stages (default)"
},
{
label: "Exploration",
description: "Code exploration, analysis, debugging"
},
{
label: "Planning",
description: "Task planning, requirements gathering"
},
{
label: "Execution",
description: "Implementation, testing, deployment"
}
]
}]
})
const categoryLabel = categoryAnswer.answers["Category"]
category = categoryLabel.includes("General") ? "general"
: categoryLabel.includes("Exploration") ? "exploration"
: categoryLabel.includes("Planning") ? "planning"
: "execution"
}
```
**Ask type**:
```javascript
const typeAnswer = AskUserQuestion({
questions: [{
question: "What type of rule is this?",
header: "Type",
multiSelect: false,
options: [
{
label: "Convention",
description: "Coding style preference (e.g., use functional components)"
},
{
label: "Constraint",
description: "Hard rule that must not be violated (e.g., no direct DB access)"
},
{
label: "Learning",
description: "Insight or lesson learned (e.g., cache invalidation needs events)"
}
]
}]
})
const type = typeAnswer.answers["Type"]
const isConvention = type.includes("Convention")
const isConstraint = type.includes("Constraint")
const isLearning = type.includes("Learning")
```
**Ask content**:
```javascript
const contentAnswer = AskUserQuestion({
questions: [{
question: "Enter the rule or guideline text:",
header: "Content",
multiSelect: false,
options: []
}]
})
const ruleText = contentAnswer.answers["Content"]
```
### Step 3: Determine Target File
```javascript
const path = require('path')
const os = require('os')
let targetFile
let targetDir
if (dimension === 'specs') {
// Project specs - use .ccw/specs/ (same as frontend/backend spec-index-builder)
targetDir = '.ccw/specs'
if (isConstraint) {
targetFile = path.join(targetDir, 'architecture-constraints.md')
} else {
targetFile = path.join(targetDir, 'coding-conventions.md')
}
} else {
// Personal specs - use .ccw/personal/ (same as backend spec-index-builder)
if (scope === 'global') {
targetDir = path.join(os.homedir(), '.ccw', 'personal')
} else {
targetDir = path.join('.ccw', 'personal')
}
// Create category-based filename
const typePrefix = isConstraint ? 'constraints' : isLearning ? 'learnings' : 'conventions'
targetFile = path.join(targetDir, `${typePrefix}.md`)
}
```
### Step 4: Write Spec
```javascript
const fs = require('fs')
// Ensure directory exists
if (!fs.existsSync(targetDir)) {
fs.mkdirSync(targetDir, { recursive: true })
}
// Check if file exists
const fileExists = fs.existsSync(targetFile)
if (!fileExists) {
// Create new file with frontmatter
const frontmatter = `---
title: ${dimension === 'specs' ? 'Project' : 'Personal'} ${isConstraint ? 'Constraints' : isLearning ? 'Learnings' : 'Conventions'}
readMode: optional
priority: medium
category: ${category}
scope: ${dimension === 'personal' ? scope : 'project'}
dimension: ${dimension}
keywords: [${category}, ${isConstraint ? 'constraint' : isLearning ? 'learning' : 'convention'}]
---
# ${dimension === 'specs' ? 'Project' : 'Personal'} ${isConstraint ? 'Constraints' : isLearning ? 'Learnings' : 'Conventions'}
`
fs.writeFileSync(targetFile, frontmatter, 'utf8')
}
// Read existing content
let content = fs.readFileSync(targetFile, 'utf8')
// Format the new rule
const timestamp = new Date().toISOString().split('T')[0]
const rulePrefix = isLearning ? `- [learning] ` : `- [${category}] `
const ruleSuffix = isLearning ? ` (${timestamp})` : ''
const newRule = `${rulePrefix}${ruleText}${ruleSuffix}`
// Check for duplicate
if (content.includes(ruleText)) {
console.log(`
Rule already exists in ${targetFile}
Text: "${ruleText}"
`)
return
}
// Append the rule
content = content.trimEnd() + '\n' + newRule + '\n'
fs.writeFileSync(targetFile, content, 'utf8')
// Rebuild spec index
Bash('ccw spec rebuild')
```
### Step 5: Display Confirmation
```
Spec created successfully
Dimension: ${dimension}
Scope: ${dimension === 'personal' ? scope : 'project'}
Category: ${category}
Type: ${type}
Rule: "${ruleText}"
Location: ${targetFile}
Use 'ccw spec list' to view all specs
Use 'ccw spec load --category ${category}' to load specs by category
```
## Target File Resolution
### Project Specs (dimension: specs)
```
.ccw/specs/
├── coding-conventions.md ← conventions, learnings
├── architecture-constraints.md ← constraints
└── quality-rules.md ← quality rules
```
### Personal Specs (dimension: personal)
```
# Global (~/.ccw/personal/)
~/.ccw/personal/
├── conventions.md ← personal conventions (all projects)
├── constraints.md ← personal constraints (all projects)
└── learnings.md ← personal learnings (all projects)
# Project-local (.ccw/personal/)
.ccw/personal/
├── conventions.md ← personal conventions (this project only)
├── constraints.md ← personal constraints (this project only)
└── learnings.md ← personal learnings (this project only)
```
## Category Field Usage
The `category` field in frontmatter enables filtered loading:
| Category | Use Case | Example Rules |
|----------|----------|---------------|
| `general` | Applies to all stages | "Use TypeScript strict mode" |
| `exploration` | Code exploration, debugging | "Always trace the call stack before modifying" |
| `planning` | Task planning, requirements | "Break down tasks into 2-hour chunks" |
| `execution` | Implementation, testing | "Run tests after each file modification" |
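The filtered loading described above can be sketched as a simple frontmatter scan (a hypothetical helper; the actual `ccw spec load` implementation is not shown in this document):

```javascript
// Parses the `category:` field from a spec file's YAML frontmatter and
// keeps only files matching the requested category, plus `general` specs,
// which apply to all stages per the table above.
function filterByCategory(files, category) {
  const categoryOf = (content) => {
    const fm = content.match(/^---\n([\s\S]*?)\n---/)
    if (!fm) return 'general'
    const line = fm[1].match(/^category:\s*(\S+)/m)
    return line ? line[1] : 'general'
  }
  return files.filter(f => {
    const c = categoryOf(f.content)
    return c === category || c === 'general'
  })
}
```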
## Error Handling
- **File not writable**: Check permissions, suggest manual creation
- **Duplicate rule**: Warn and skip (don't add duplicates)
- **Invalid path**: Exit with error message
## Related Commands
- `/workflow:init` - Initialize project with specs scaffold
- `/workflow:init-guidelines` - Interactive wizard to fill specs
- `/workflow:session:solidify` - Add rules during/after sessions
- `ccw spec list` - View all specs
- `ccw spec load --category <cat>` - Load filtered specs


@@ -1,291 +0,0 @@
---
name: init
description: Initialize project-level state with intelligent project analysis using cli-explore-agent
argument-hint: "[--regenerate] [--skip-specs]"
examples:
- /workflow:init
- /workflow:init --regenerate
- /workflow:init --skip-specs
---
# Workflow Init Command (/workflow:init)
## Overview
Initialize `.workflow/project-tech.json` and `.ccw/specs/*.md` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
**Dual File System**:
- `project-tech.json`: Auto-generated technical analysis (stack, architecture, components)
- `specs/*.md`: User-maintained rules and constraints (created as scaffold)
**Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow without interrupting the task flow.
## Usage
```bash
/workflow:init # Initialize (skip if exists)
/workflow:init --regenerate # Force regeneration
/workflow:init --skip-specs # Initialize project-tech only, skip spec initialization
```
## Execution Process
```
Input Parsing:
├─ Parse --regenerate flag → regenerate = true | false
└─ Parse --skip-specs flag → skipSpecs = true | false
Decision:
├─ BOTH_EXIST + no --regenerate → Exit: "Already initialized"
├─ EXISTS + --regenerate → Backup existing → Continue analysis
└─ NOT_FOUND → Continue analysis
Analysis Flow:
├─ Get project metadata (name, root)
├─ Invoke cli-explore-agent
│ ├─ Structural scan (get_modules_by_depth.sh, find, wc)
│ ├─ Semantic analysis (Gemini CLI)
│ ├─ Synthesis and merge
│ └─ Write .workflow/project-tech.json
├─ Spec Initialization (if not --skip-specs)
│ ├─ Check if specs/*.md exist
│ ├─ If NOT_FOUND → Run ccw spec init
│ ├─ Run ccw spec rebuild
│ └─ Ask about guidelines configuration
│ ├─ If guidelines empty → Ask user: "Configure now?" or "Skip"
│ │ ├─ Configure now → Skill(skill="workflow:init-guidelines")
│ │ └─ Skip → Show next steps
│ └─ If guidelines populated → Show next steps only
└─ Display summary
Output:
├─ .workflow/project-tech.json (+ .backup if regenerate)
└─ .ccw/specs/*.md (scaffold or configured, unless --skip-specs)
```
## Implementation
### Step 1: Parse Input and Check Existing State
**Parse flags**:
```javascript
const regenerate = $ARGUMENTS.includes('--regenerate')
const skipSpecs = $ARGUMENTS.includes('--skip-specs')
```
**Check existing state**:
```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .ccw/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
```
**If BOTH_EXIST and no --regenerate**: Exit early
```
Project already initialized:
- Tech analysis: .workflow/project-tech.json
- Guidelines: .ccw/specs/*.md
Use /workflow:init --regenerate to rebuild tech analysis
Use /workflow:session:solidify to add guidelines
Use /workflow:status --project to view state
```
### Step 2: Get Project Metadata
```bash
bash(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
bash(mkdir -p .workflow)
```
### Step 3: Invoke cli-explore-agent
**For --regenerate**: Backup and preserve existing data
```bash
bash(cp .workflow/project-tech.json .workflow/project-tech.json.backup)
```
**Delegate analysis to agent**:
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description="Deep project analysis",
prompt=`
Analyze project for workflow initialization and generate .workflow/project-tech.json.
## MANDATORY FIRST STEPS
1. Execute: cat ~/.ccw/workflows/cli-templates/schemas/project-tech-schema.json (get schema reference)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
## Task
Generate complete project-tech.json following the schema structure:
- project_name: "${projectName}"
- initialized_at: ISO 8601 timestamp
- overview: {
description: "Brief project description",
technology_stack: {
languages: [{name, file_count, primary}],
frameworks: ["string"],
build_tools: ["string"],
test_frameworks: ["string"]
},
architecture: {style, layers: [], patterns: []},
key_components: [{name, path, description, importance}]
}
- features: []
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}
## Analysis Requirements
**Technology Stack**:
- Languages: File counts, mark primary
- Frameworks: From package.json, requirements.txt, go.mod, etc.
- Build tools: npm, cargo, maven, webpack, vite
- Test frameworks: jest, pytest, go test, junit
**Architecture**:
- Style: MVC, microservices, layered (from structure & imports)
- Layers: presentation, business-logic, data-access
- Patterns: singleton, factory, repository
- Key components: 5-10 modules {name, path, description, importance}
## Execution
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary
Project root: ${projectRoot}
`
)
```
### Step 3.5: Initialize Spec System (if not --skip-specs)
```javascript
// Skip spec initialization if --skip-specs flag is provided
if (!skipSpecs) {
// Initialize spec system if not already initialized
const specsCheck = Bash('test -f .ccw/specs/coding-conventions.md && echo EXISTS || echo NOT_FOUND')
if (specsCheck.includes('NOT_FOUND')) {
console.log('Initializing spec system...')
Bash('ccw spec init')
Bash('ccw spec rebuild')
}
} else {
console.log('Skipping spec initialization (--skip-specs)')
}
```
### Step 4: Display Summary
```javascript
const projectTech = JSON.parse(Read('.workflow/project-tech.json'));
const specsInitialized = !skipSpecs && file_exists('.ccw/specs/coding-conventions.md');
console.log(`
Project initialized successfully
## Project Overview
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}
### Technology Stack
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
### Architecture
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules
---
Files created:
- Tech analysis: .workflow/project-tech.json
${!skipSpecs ? `- Specs: .ccw/specs/ ${specsInitialized ? '(initialized)' : ''}` : '- Specs: (skipped via --skip-specs)'}
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
`);
```
### Step 5: Ask About Guidelines Configuration (if not --skip-specs)
After displaying the summary, ask the user if they want to configure project guidelines interactively. Skip this step if `--skip-specs` was provided.
```javascript
// Skip guidelines configuration if --skip-specs was provided
if (skipSpecs) {
console.log(`
Next steps:
- Use /workflow:init-specs to create individual specs
- Use /workflow:init-guidelines to configure specs interactively
- Use /workflow-plan to start planning
`);
return;
}
// Check if specs have user content beyond seed documents
const specsList = Bash('ccw spec list --json');
const specsCount = JSON.parse(specsList).total || 0;
// Only ask if specs are just seeds
if (specsCount <= 5) {
const userChoice = AskUserQuestion({
questions: [{
question: "Would you like to configure project specs now? The wizard will ask targeted questions based on your tech stack.",
header: "Specs",
multiSelect: false,
options: [
{
label: "Configure now (Recommended)",
description: "Interactive wizard to set up coding conventions, constraints, and quality rules"
},
{
label: "Skip for now",
description: "You can run /workflow:init-guidelines later or use ccw spec load to import specs"
}
]
}]
});
if (userChoice.answers["Specs"] === "Configure now (Recommended)") {
console.log("\nStarting specs configuration wizard...\n");
Skill(skill="workflow:init-guidelines");
} else {
console.log(`
Next steps:
- Use /workflow:init-specs to create individual specs
- Use /workflow:init-guidelines to configure specs interactively
- Use ccw spec load to import specs from external sources
- Use /workflow-plan to start planning
`);
}
} else {
console.log(`
Specs already configured (${specsCount} spec files).
Next steps:
- Use /workflow:init-specs to create additional specs
- Use /workflow:init-guidelines --reset to reconfigure
- Use /workflow:session:solidify to add individual rules
- Use /workflow-plan to start planning
`);
}
```
## Error Handling
**Agent Failure**: Fall back to basic initialization with placeholder overview
**Missing Tools**: Agent uses Qwen fallback or bash-only
**Empty Project**: Create minimal JSON with all gaps identified
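A minimal fallback payload consistent with the schema fields listed in Step 3 might look like this (illustrative only; the real schema file referenced above is not reproduced here):

```json
{
  "project_name": "my-project",
  "initialized_at": "2026-03-06T00:00:00Z",
  "overview": {
    "description": "Placeholder overview (agent analysis unavailable)",
    "technology_stack": { "languages": [], "frameworks": [], "build_tools": [], "test_frameworks": [] },
    "architecture": { "style": "unknown", "layers": [], "patterns": [] },
    "key_components": []
  },
  "features": [],
  "development_index": { "feature": [], "enhancement": [], "bugfix": [], "refactor": [], "docs": [] },
  "statistics": { "total_features": 0, "total_sessions": 0, "last_updated": "2026-03-06T00:00:00Z" },
  "_metadata": { "initialized_by": "fallback", "analysis_timestamp": "2026-03-06T00:00:00Z", "analysis_mode": "basic" }
}
```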
## Related Commands
- `/workflow:init-specs` - Interactive wizard to create individual specs with scope selection
- `/workflow:init-guidelines` - Interactive wizard to configure project guidelines (called after init)
- `/workflow:session:solidify` - Add individual rules/constraints one at a time
- `workflow-plan` skill - Start planning with initialized project context
- `/workflow:status --project` - View project state and guidelines


@@ -806,6 +806,10 @@ AskUserQuestion({
})
```
+4. **Sync Session State** (automatic)
+- Execute: `/workflow:session:sync -y "Integration test cycle complete: ${passRate}% pass rate, ${iterations} iterations"`
+- Updates specs/*.md with test learnings and project-tech.json with development index entry
---
## Completion Conditions


@@ -1,440 +0,0 @@
---
name: solidify
description: Crystallize session learnings and user-defined constraints into permanent project guidelines, or compress recent memories
argument-hint: "[-y|--yes] [--type <convention|constraint|learning|compress>] [--category <category>] [--limit <N>] \"rule or insight\""
examples:
- /workflow:session:solidify "Use functional components for all React code" --type convention
- /workflow:session:solidify -y "No direct DB access from controllers" --type constraint --category architecture
- /workflow:session:solidify "Cache invalidation requires event sourcing" --type learning --category architecture
- /workflow:session:solidify --interactive
- /workflow:session:solidify --type compress --limit 10
---
## Auto Mode
When `--yes` or `-y`: Auto-categorize and add guideline without confirmation.
# Session Solidify Command (/workflow:session:solidify)
## Overview
Crystallizes ephemeral session context (insights, decisions, constraints) into permanent project guidelines stored in `.ccw/specs/*.md`. This ensures valuable learnings persist across sessions and inform future planning.
## Use Cases
1. **During Session**: Capture important decisions as they're made
2. **After Session**: Reflect on lessons learned before archiving
3. **Proactive**: Add team conventions or architectural rules
## Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `rule` | string | Yes (unless --interactive or --type compress) | The rule, convention, or insight to solidify |
| `--type` | enum | No | Type: `convention`, `constraint`, `learning`, `compress` (default: auto-detect) |
| `--category` | string | No | Category for organization (see categories below) |
| `--interactive` | flag | No | Launch guided wizard for adding rules |
| `--limit` | number | No | Number of recent memories to compress (default: 20, only for --type compress) |
### Type Categories
**convention** → Coding style preferences (goes to `conventions` section)
- Subcategories: `coding_style`, `naming_patterns`, `file_structure`, `documentation`
**constraint** → Hard rules that must not be violated (goes to `constraints` section)
- Subcategories: `architecture`, `tech_stack`, `performance`, `security`
**learning** → Session-specific insights (goes to `learnings` array)
- Subcategories: `architecture`, `performance`, `security`, `testing`, `process`, `other`
**compress** → Compress/deduplicate recent memories into a single consolidated CMEM
- No subcategories (operates on core memories, not project guidelines)
- Fetches recent non-archived memories, LLM-compresses them, creates a new CMEM
- Source memories are archived after successful compression
## Execution Process
```
Input Parsing:
|- Parse: rule text (required unless --interactive or --type compress)
|- Parse: --type (convention|constraint|learning|compress)
|- Parse: --category (subcategory)
|- Parse: --interactive (flag)
+- Parse: --limit (number, default 20, compress only)
IF --type compress:
Step C1: Fetch Recent Memories
+- Call getRecentMemories(limit, excludeArchived=true)
Step C2: Validate Candidates
+- If fewer than 2 memories found -> abort with message
Step C3: LLM Compress
+- Build compression prompt with all memory contents
+- Send to LLM for consolidation
+- Receive compressed text
Step C4: Merge Tags
+- Collect tags from all source memories
+- Deduplicate into a single merged tag array
Step C5: Create Compressed CMEM
+- Generate new CMEM via upsertMemory with:
- content: compressed text from LLM
- summary: auto-generated
- tags: merged deduplicated tags
- metadata: buildCompressionMetadata(sourceIds, originalSize, compressedSize)
Step C6: Archive Source Memories
+- Call archiveMemories(sourceIds)
Step C7: Display Compression Report
+- Show source count, compression ratio, new CMEM ID
ELSE (convention/constraint/learning):
Step 1: Ensure Guidelines File Exists
+- If not exists -> Create with empty structure
Step 2: Auto-detect Type (if not specified)
+- Analyze rule text for keywords
Step 3: Validate and Format Entry
+- Build entry object based on type
Step 4: Update Guidelines File
+- Add entry to appropriate section
Step 5: Display Confirmation
+- Show what was added and where
```
## Implementation
### Step 1: Ensure Guidelines File Exists
**Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)**
```bash
bash(test -f .ccw/specs/coding-conventions.md && echo "EXISTS" || echo "NOT_FOUND")
```
**If NOT_FOUND**, initialize spec system:
```bash
Bash('ccw spec init')
Bash('ccw spec rebuild')
```
### Step 2: Auto-detect Type (if not specified)
```javascript
function detectType(ruleText) {
const text = ruleText.toLowerCase();
// Constraint indicators
if (/\b(no|never|must not|forbidden|prohibited|always must)\b/.test(text)) {
return 'constraint';
}
// Learning indicators
if (/\b(learned|discovered|realized|found that|turns out)\b/.test(text)) {
return 'learning';
}
// Default to convention
return 'convention';
}
function detectCategory(ruleText, type) {
const text = ruleText.toLowerCase();
if (type === 'constraint' || type === 'learning') {
if (/\b(architecture|layer|module|dependency|circular)\b/.test(text)) return 'architecture';
if (/\b(security|auth|permission|sanitize|xss|sql)\b/.test(text)) return 'security';
if (/\b(performance|cache|lazy|async|sync|slow)\b/.test(text)) return 'performance';
if (/\b(test|coverage|mock|stub)\b/.test(text)) return 'testing';
}
if (type === 'convention') {
if (/\b(name|naming|prefix|suffix|camel|pascal)\b/.test(text)) return 'naming_patterns';
if (/\b(file|folder|directory|structure|organize)\b/.test(text)) return 'file_structure';
if (/\b(doc|comment|jsdoc|readme)\b/.test(text)) return 'documentation';
return 'coding_style';
}
return type === 'constraint' ? 'tech_stack' : 'other';
}
```
### Step 3: Build Entry
```javascript
function buildEntry(rule, type, category, sessionId) {
if (type === 'learning') {
return {
date: new Date().toISOString().split('T')[0],
session_id: sessionId || null,
insight: rule,
category: category,
context: null
};
}
// For conventions and constraints, just return the rule string
return rule;
}
```
### Step 4: Update Spec Files
```javascript
// Map type+category to target spec file
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
const specFileMap = {
convention: '.ccw/specs/coding-conventions.md',
constraint: '.ccw/specs/architecture-constraints.md'
}
if (type === 'convention' || type === 'constraint') {
const targetFile = specFileMap[type]
const existing = Read(targetFile)
// Deduplicate: skip if rule text already exists in the file
if (!existing.includes(rule)) {
const ruleText = `- [${category}] ${rule}`
const newContent = existing.trimEnd() + '\n' + ruleText + '\n'
Write(targetFile, newContent)
}
} else if (type === 'learning') {
// Learnings go to coding-conventions.md as a special section
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
const targetFile = '.ccw/specs/coding-conventions.md'
const existing = Read(targetFile)
const entry = buildEntry(rule, type, category, sessionId)
const learningText = `- [learning/${category}] ${entry.insight} (${entry.date})`
if (!existing.includes(entry.insight)) {
const newContent = existing.trimEnd() + '\n' + learningText + '\n'
Write(targetFile, newContent)
}
}
// Rebuild spec index after modification
Bash('ccw spec rebuild')
```
### Step 5: Display Confirmation
```
Guideline solidified
Type: ${type}
Category: ${category}
Rule: "${rule}"
Location: ${targetFile} (appended as `- [${category}] ...`)
Total ${type}s in ${category}: ${count}
```
## Compress Mode (--type compress)
When `--type compress` is specified, the command operates on core memories instead of project guidelines. It fetches recent memories, sends them to an LLM for consolidation, and creates a new compressed CMEM.
### Step C1: Fetch Recent Memories
```javascript
// Uses CoreMemoryStore.getRecentMemories()
const limit = parsedArgs.limit || 20;
const recentMemories = store.getRecentMemories(limit, /* excludeArchived */ true);
if (recentMemories.length < 2) {
console.log("Not enough non-archived memories to compress (need at least 2).");
return;
}
```
### Step C2: Build Compression Prompt
Concatenate all memory contents and send to LLM with the following prompt:
```
Given these ${N} memories, produce a single consolidated memory that:
1. Preserves all key information and insights
2. Removes redundancy and duplicate concepts
3. Organizes content by theme/topic
4. Maintains specific technical details and decisions
Source memories:
---
[Memory CMEM-XXXXXXXX-XXXXXX]:
${memory.content}
---
[Memory CMEM-XXXXXXXX-XXXXXX]:
${memory.content}
---
...
Output: A single comprehensive memory text.
```
### Step C3: Merge Tags from Source Memories
```javascript
// Collect all tags from source memories and deduplicate
const allTags = new Set();
for (const memory of recentMemories) {
if (memory.tags) {
for (const tag of memory.tags) {
allTags.add(tag);
}
}
}
const mergedTags = Array.from(allTags);
```
### Step C4: Create Compressed CMEM
```javascript
const sourceIds = recentMemories.map(m => m.id);
const originalSize = recentMemories.reduce((sum, m) => sum + m.content.length, 0);
const compressedSize = compressedText.length;
const metadata = store.buildCompressionMetadata(sourceIds, originalSize, compressedSize);
const newMemory = store.upsertMemory({
content: compressedText,
summary: `Compressed from ${sourceIds.length} memories`,
tags: mergedTags,
metadata: metadata
});
```
### Step C5: Archive Source Memories
```javascript
// Archive all source memories after successful compression
store.archiveMemories(sourceIds);
```
### Step C6: Display Compression Report
```
Memory compression complete
New CMEM: ${newMemory.id}
Sources compressed: ${sourceIds.length}
Original size: ${originalSize} chars
Compressed size: ${compressedSize} chars
Compression ratio: ${(compressedSize / originalSize * 100).toFixed(1)}%
Tags merged: ${mergedTags.join(', ') || '(none)'}
Source memories archived: ${sourceIds.join(', ')}
```
### Compressed CMEM Metadata Format
The compressed CMEM's `metadata` field contains a JSON string with:
```json
{
"compressed_from": ["CMEM-20260101-120000", "CMEM-20260102-140000", "..."],
"compression_ratio": 0.45,
"compressed_at": "2026-02-23T10:30:00.000Z"
}
```
- `compressed_from`: Array of source memory IDs that were consolidated
- `compression_ratio`: Ratio of compressed size to original size (lower = more compression)
- `compressed_at`: ISO timestamp of when the compression occurred
## Interactive Mode
When `--interactive` flag is provided:
```javascript
AskUserQuestion({
questions: [
{
question: "What type of guideline are you adding?",
header: "Type",
multiSelect: false,
options: [
{ label: "Convention", description: "Coding style preference (e.g., use functional components)" },
{ label: "Constraint", description: "Hard rule that must not be violated (e.g., no direct DB access)" },
{ label: "Learning", description: "Insight from this session (e.g., cache invalidation needs events)" }
]
}
]
});
// Follow-up based on type selection...
```
## Examples
### Add a Convention
```bash
/workflow:session:solidify "Use async/await instead of callbacks" --type convention --category coding_style
```
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [coding_style] Use async/await instead of callbacks
```
### Add an Architectural Constraint
```bash
/workflow:session:solidify "No direct DB access from controllers" --type constraint --category architecture
```
Result in `.ccw/specs/architecture-constraints.md`:
```markdown
- [architecture] No direct DB access from controllers
```
### Capture a Session Learning
```bash
/workflow:session:solidify "Cache invalidation requires event sourcing for consistency" --type learning
```
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [learning/architecture] Cache invalidation requires event sourcing for consistency (2024-12-28)
```
### Compress Recent Memories
```bash
/workflow:session:solidify --type compress --limit 10
```
Result: Creates a new CMEM with consolidated content from the 10 most recent non-archived memories. Source memories are archived. The new CMEM's metadata tracks which memories were compressed:
```json
{
"compressed_from": ["CMEM-20260220-100000", "CMEM-20260221-143000", "..."],
"compression_ratio": 0.42,
"compressed_at": "2026-02-23T10:30:00.000Z"
}
```
## Integration with Planning
The `specs/*.md` files are consumed by:
1. **`workflow-plan` skill (context-gather phase)**: Loads guidelines into context-package.json
2. **`workflow-plan` skill**: Passes guidelines to task generation agent
3. **`task-generate-agent`**: Includes guidelines as "CRITICAL CONSTRAINTS" in system prompt
This ensures all future planning respects solidified rules without users needing to re-state them.
## Error Handling
- **Duplicate Rule**: Warn and skip if exact rule already exists
- **Invalid Category**: Suggest valid categories for the type
- **File Corruption**: Backup existing file before modification
## Related Commands
- `/workflow:session:start` - Start a session (may prompt for solidify at end)
- `/workflow:session:complete` - Complete session (prompts for learnings to solidify)
- `/workflow:init` - Creates specs/*.md scaffold if missing
- `/workflow:init-specs` - Interactive wizard to create individual specs with scope selection

View File

@@ -38,7 +38,7 @@ ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs
## Step 0: Initialize Project State (First-time Only)
**Executed before all modes** - Ensures project-level state files exist by calling `/workflow:init`.
**Executed before all modes** - Ensures project-level state files exist by calling `/workflow:spec:setup`.
### Check and Initialize
```bash
@@ -47,10 +47,10 @@ bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT
bash(ls .ccw/specs/*.md >/dev/null 2>&1 && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```
**If either NOT_FOUND**, delegate to `/workflow:init`:
**If either NOT_FOUND**, delegate to `/workflow:spec:setup`:
```javascript
// Call workflow:init for intelligent project analysis
Skill(skill="workflow:init");
// Call workflow:spec:setup for intelligent project analysis
Skill(skill="workflow:spec:setup");
// Wait for init completion
// project-tech.json and specs/*.md will be created
@@ -58,11 +58,11 @@ Skill(skill="workflow:init");
**Output**:
- If BOTH_EXIST: `PROJECT_STATE: initialized`
- If NOT_FOUND: Calls `/workflow:init` → creates:
- If NOT_FOUND: Calls `/workflow:spec:setup` → creates:
- `.workflow/project-tech.json` with full technical analysis
- `.ccw/specs/*.md` with empty scaffold
**Note**: `/workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.
**Note**: `/workflow:spec:setup` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.
## Mode 1: Discovery Mode (Default)

View File

@@ -190,13 +190,12 @@ Write(techPath, JSON.stringify(tech, null, 2))
| Error | Resolution |
|-------|------------|
| File missing | Create scaffold (same as solidify Step 1) |
| File missing | Create scaffold (same as spec:setup Step 4) |
| No git history | Use user summary or session context only |
| No meaningful updates | Skip guidelines, still add tech entry |
| Duplicate entry | Skip silently (dedup check in Step 4) |
## Related Commands
- `/workflow:init` - Initialize project with specs scaffold
- `/workflow:init-specs` - Interactive wizard to create individual specs with scope selection
- `/workflow:session:solidify` - Add individual rules one at a time
- `/workflow:spec:setup` - Initialize project with specs scaffold
- `/workflow:spec:add` - Interactive wizard to create individual specs with scope selection

View File

@@ -0,0 +1,644 @@
---
name: add
description: Add specs, conventions, constraints, or learnings to project guidelines interactively or automatically
argument-hint: "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] [--dimension <specs|personal>] [--scope <global|project>] [--interactive] \"rule text\""
examples:
- /workflow:spec:add "Use functional components for all React code"
- /workflow:spec:add -y "No direct DB access from controllers" --type constraint
- /workflow:spec:add --scope global --dimension personal
- /workflow:spec:add --interactive
- /workflow:spec:add "Cache invalidation requires event sourcing" --type learning --category architecture
---
## Auto Mode
When `--yes` or `-y`: Auto-categorize and add guideline without confirmation.
# Spec Add Command (/workflow:spec:add)
## Overview
Unified command for adding specs one at a time. Supports both interactive wizard mode and direct CLI mode.
**Key Features**:
- Supports both project specs and personal specs
- Scope selection (global vs project) for personal specs
- Category-based organization for workflow stages
- Interactive wizard mode with smart defaults
- Direct CLI mode with auto-detection of type and category
- Auto-confirm mode (`-y`/`--yes`) for scripted usage
## Use Cases
1. **During Session**: Capture important decisions as they're made
2. **After Session**: Reflect on lessons learned before archiving
3. **Proactive**: Add team conventions or architectural rules
4. **Interactive**: Guided wizard for adding rules with full control over dimension, scope, and category
## Usage
```bash
/workflow:spec:add # Interactive wizard (all prompts)
/workflow:spec:add --interactive # Explicit interactive wizard
/workflow:spec:add "Use async/await instead of callbacks" # Direct mode (auto-detect type)
/workflow:spec:add -y "No direct DB access" --type constraint # Auto-confirm, skip confirmation
/workflow:spec:add --scope global --dimension personal # Create global personal spec (interactive)
/workflow:spec:add --dimension specs --category exploration # Project spec in exploration category (interactive)
```
## Parameters
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `rule` | string | Yes (unless `--interactive`) | - | The rule, convention, or insight to add |
| `--type` | enum | No | auto-detect | Type: `convention`, `constraint`, `learning` |
| `--category` | string | No | auto-detect / `general` | Category for organization (see categories below) |
| `--dimension` | enum | No | Interactive | `specs` (project) or `personal` |
| `--scope` | enum | No | `project` | `global` or `project` (only for personal dimension) |
| `--interactive` | flag | No | - | Launch full guided wizard for adding rules |
| `-y` / `--yes` | flag | No | - | Auto-categorize and add without confirmation |
### Type Categories
**convention** - Coding style preferences (goes to `conventions` section)
- Subcategories: `coding_style`, `naming_patterns`, `file_structure`, `documentation`
**constraint** - Hard rules that must not be violated (goes to `constraints` section)
- Subcategories: `architecture`, `tech_stack`, `performance`, `security`
**learning** - Session-specific insights (goes to `learnings` array)
- Subcategories: `architecture`, `performance`, `security`, `testing`, `process`, `other`
### Workflow Stage Categories (for `--category`)
| Category | Use Case | Example Rules |
|----------|----------|---------------|
| `general` | Applies to all stages | "Use TypeScript strict mode" |
| `exploration` | Code exploration, debugging | "Always trace the call stack before modifying" |
| `planning` | Task planning, requirements | "Break down tasks into 2-hour chunks" |
| `execution` | Implementation, testing | "Run tests after each file modification" |
## Execution Process
```
Input Parsing:
|- Parse: rule text (positional argument, optional if --interactive)
|- Parse: --type (convention|constraint|learning)
|- Parse: --category (subcategory)
|- Parse: --dimension (specs|personal)
|- Parse: --scope (global|project)
|- Parse: --interactive (flag)
+- Parse: -y / --yes (flag)
Step 1: Parse Input
Step 2: Determine Mode
|- If --interactive OR no rule text → Full Interactive Wizard (Path A)
+- If rule text provided → Direct Mode (Path B)
Path A: Interactive Wizard
|- Step A1: Ask dimension (if not specified)
|- Step A2: Ask scope (if personal + scope not specified)
|- Step A3: Ask category (if not specified)
|- Step A4: Ask type (convention|constraint|learning)
|- Step A5: Ask content (rule text)
+- Continue to Step 3
Path B: Direct Mode
|- Step B1: Auto-detect type (if not specified) using detectType()
|- Step B2: Auto-detect category (if not specified) using detectCategory()
|- Step B3: Default dimension to 'specs' if not specified
+- Continue to Step 3
Step 3: Determine Target File
|- specs dimension → .ccw/specs/coding-conventions.md or architecture-constraints.md
+- personal dimension → ~/.ccw/personal/ or .ccw/personal/
Step 4: Validate and Write Spec
|- Ensure target directory and file exist
|- Check for duplicates
|- Append rule to appropriate section
+- Run ccw spec rebuild
Step 5: Display Confirmation
+- If -y/--yes: Minimal output
+- Otherwise: Full confirmation with location details
```
## Implementation
### Step 1: Parse Input
```javascript
// Parse arguments
const args = $ARGUMENTS
const argsLower = args.toLowerCase()
// Extract flags
const autoConfirm = argsLower.includes('--yes') || argsLower.includes('-y')
const isInteractive = argsLower.includes('--interactive')
// Extract named parameters
const hasType = argsLower.includes('--type')
const hasCategory = argsLower.includes('--category')
const hasDimension = argsLower.includes('--dimension')
const hasScope = argsLower.includes('--scope')
let type = hasType ? args.match(/--type\s+(\w+)/i)?.[1]?.toLowerCase() : null
let category = hasCategory ? args.match(/--category\s+(\w+)/i)?.[1]?.toLowerCase() : null
let dimension = hasDimension ? args.match(/--dimension\s+(\w+)/i)?.[1]?.toLowerCase() : null
let scope = hasScope ? args.match(/--scope\s+(\w+)/i)?.[1]?.toLowerCase() : null
// Extract rule text (everything before flags, or quoted string)
let ruleText = args
.replace(/--type\s+\w+/gi, '')
.replace(/--category\s+\w+/gi, '')
.replace(/--dimension\s+\w+/gi, '')
.replace(/--scope\s+\w+/gi, '')
.replace(/--interactive/gi, '')
.replace(/--yes/gi, '')
.replace(/-y\b/gi, '')
.replace(/^["']|["']$/g, '')
.trim()
// Validate values
if (scope && !['global', 'project'].includes(scope)) {
console.log("Invalid scope. Use 'global' or 'project'.")
return
}
if (dimension && !['specs', 'personal'].includes(dimension)) {
console.log("Invalid dimension. Use 'specs' or 'personal'.")
return
}
if (type && !['convention', 'constraint', 'learning'].includes(type)) {
console.log("Invalid type. Use 'convention', 'constraint', or 'learning'.")
return
}
if (category) {
const validCategories = [
'general', 'exploration', 'planning', 'execution',
'coding_style', 'naming_patterns', 'file_structure', 'documentation',
'architecture', 'tech_stack', 'performance', 'security',
'testing', 'process', 'other'
]
if (!validCategories.includes(category)) {
console.log(`Invalid category. Valid categories: ${validCategories.join(', ')}`)
return
}
}
```
### Step 2: Determine Mode
```javascript
const useInteractiveWizard = isInteractive || !ruleText
```
### Path A: Interactive Wizard
**If dimension not specified**:
```javascript
if (!dimension) {
const dimensionAnswer = AskUserQuestion({
questions: [{
question: "What type of spec do you want to create?",
header: "Dimension",
multiSelect: false,
options: [
{
label: "Project Spec",
description: "Coding conventions, constraints, quality rules for this project (stored in .ccw/specs/)"
},
{
label: "Personal Spec",
description: "Personal preferences and constraints that follow you across projects (stored in ~/.ccw/personal/ or .ccw/personal/)"
}
]
}]
})
dimension = dimensionAnswer.answers["Dimension"] === "Project Spec" ? "specs" : "personal"
}
```
**If personal dimension and scope not specified**:
```javascript
if (dimension === 'personal' && !scope) {
const scopeAnswer = AskUserQuestion({
questions: [{
question: "Where should this personal spec be stored?",
header: "Scope",
multiSelect: false,
options: [
{
label: "Global (Recommended)",
description: "Apply to ALL projects (~/.ccw/personal/)"
},
{
label: "Project-only",
description: "Apply only to this project (.ccw/personal/)"
}
]
}]
})
scope = scopeAnswer.answers["Scope"].includes("Global") ? "global" : "project"
}
```
**If category not specified**:
```javascript
if (!category) {
const categoryAnswer = AskUserQuestion({
questions: [{
question: "Which workflow stage does this spec apply to?",
header: "Category",
multiSelect: false,
options: [
{
label: "General (Recommended)",
description: "Applies to all stages (default)"
},
{
label: "Exploration",
description: "Code exploration, analysis, debugging"
},
{
label: "Planning",
description: "Task planning, requirements gathering"
},
{
label: "Execution",
description: "Implementation, testing, deployment"
}
]
}]
})
const categoryLabel = categoryAnswer.answers["Category"]
category = categoryLabel.includes("General") ? "general"
: categoryLabel.includes("Exploration") ? "exploration"
: categoryLabel.includes("Planning") ? "planning"
: "execution"
}
```
**Ask type (if not specified)**:
```javascript
if (!type) {
const typeAnswer = AskUserQuestion({
questions: [{
question: "What type of rule is this?",
header: "Type",
multiSelect: false,
options: [
{
label: "Convention",
description: "Coding style preference (e.g., use functional components)"
},
{
label: "Constraint",
description: "Hard rule that must not be violated (e.g., no direct DB access)"
},
{
label: "Learning",
description: "Insight or lesson learned (e.g., cache invalidation needs events)"
}
]
}]
})
const typeLabel = typeAnswer.answers["Type"]
type = typeLabel.includes("Convention") ? "convention"
: typeLabel.includes("Constraint") ? "constraint"
: "learning"
}
```
**Ask content (rule text)**:
```javascript
if (!ruleText) {
const contentAnswer = AskUserQuestion({
questions: [{
question: "Enter the rule or guideline text:",
header: "Content",
multiSelect: false,
options: [
{ label: "Custom rule", description: "Type your own rule using the 'Other' option below" },
{ label: "Skip", description: "Cancel adding a spec" }
]
}]
})
if (contentAnswer.answers["Content"] === "Skip") return
// Free-text entered via the "Other" option is returned as the answer; use it as the rule text
ruleText = contentAnswer.answers["Content"]
}
```
### Path B: Direct Mode
**Auto-detect type if not specified**:
```javascript
function detectType(ruleText) {
const text = ruleText.toLowerCase();
// Constraint indicators
if (/\b(no|never|must not|forbidden|prohibited|always must)\b/.test(text)) {
return 'constraint';
}
// Learning indicators
if (/\b(learned|discovered|realized|found that|turns out)\b/.test(text)) {
return 'learning';
}
// Default to convention
return 'convention';
}
function detectCategory(ruleText, type) {
const text = ruleText.toLowerCase();
if (type === 'constraint' || type === 'learning') {
if (/\b(architecture|layer|module|dependency|circular)\b/.test(text)) return 'architecture';
if (/\b(security|auth|permission|sanitize|xss|sql)\b/.test(text)) return 'security';
if (/\b(performance|cache|lazy|async|sync|slow)\b/.test(text)) return 'performance';
if (/\b(test|coverage|mock|stub)\b/.test(text)) return 'testing';
}
if (type === 'convention') {
if (/\b(name|naming|prefix|suffix|camel|pascal)\b/.test(text)) return 'naming_patterns';
if (/\b(file|folder|directory|structure|organize)\b/.test(text)) return 'file_structure';
if (/\b(doc|comment|jsdoc|readme)\b/.test(text)) return 'documentation';
return 'coding_style';
}
return type === 'constraint' ? 'tech_stack' : 'other';
}
if (!type) {
type = detectType(ruleText)
}
if (!category) {
category = detectCategory(ruleText, type)
}
if (!dimension) {
dimension = 'specs' // Default to project specs in direct mode
}
```
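As a quick sanity check, the two detectors can be exercised on a couple of hypothetical rules. The snippet below repeats `detectType` and `detectCategory` verbatim so it runs standalone; the sample rule strings are illustrative only.

```javascript
// The two Path B detectors, repeated verbatim so this snippet is self-contained.
function detectType(ruleText) {
  const text = ruleText.toLowerCase();
  if (/\b(no|never|must not|forbidden|prohibited|always must)\b/.test(text)) return 'constraint';
  if (/\b(learned|discovered|realized|found that|turns out)\b/.test(text)) return 'learning';
  return 'convention';
}
function detectCategory(ruleText, type) {
  const text = ruleText.toLowerCase();
  if (type === 'constraint' || type === 'learning') {
    if (/\b(architecture|layer|module|dependency|circular)\b/.test(text)) return 'architecture';
    if (/\b(security|auth|permission|sanitize|xss|sql)\b/.test(text)) return 'security';
    if (/\b(performance|cache|lazy|async|sync|slow)\b/.test(text)) return 'performance';
    if (/\b(test|coverage|mock|stub)\b/.test(text)) return 'testing';
  }
  if (type === 'convention') {
    if (/\b(name|naming|prefix|suffix|camel|pascal)\b/.test(text)) return 'naming_patterns';
    if (/\b(file|folder|directory|structure|organize)\b/.test(text)) return 'file_structure';
    if (/\b(doc|comment|jsdoc|readme)\b/.test(text)) return 'documentation';
    return 'coding_style';
  }
  return type === 'constraint' ? 'tech_stack' : 'other';
}

// Hypothetical sample rules
const style = "Use async/await instead of callbacks";
console.log(detectType(style), detectCategory(style, detectType(style)));
// convention coding_style

const sec = "Never expose raw sql in error messages";
console.log(detectType(sec), detectCategory(sec, detectType(sec)));
// constraint security ("never" triggers constraint, "sql" triggers security)
```

Note that the keyword regexes use `\b` word boundaries, so only whole-word matches count: a rule mentioning "names" does not match `\bname\b`, and falls through to the type's default category.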
### Step 3: Ensure Guidelines File Exists
**Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)**
```bash
bash(test -f .ccw/specs/coding-conventions.md && echo "EXISTS" || echo "NOT_FOUND")
```
**If NOT_FOUND**, initialize spec system:
```bash
Bash('ccw spec init')
Bash('ccw spec rebuild')
```
### Step 4: Determine Target File
```javascript
const path = require('path')
const os = require('os')
const isConvention = type === 'convention'
const isConstraint = type === 'constraint'
const isLearning = type === 'learning'
let targetFile
let targetDir
if (dimension === 'specs') {
// Project specs - use .ccw/specs/ (same as frontend/backend spec-index-builder)
targetDir = '.ccw/specs'
if (isConstraint) {
targetFile = path.join(targetDir, 'architecture-constraints.md')
} else {
targetFile = path.join(targetDir, 'coding-conventions.md')
}
} else {
// Personal specs - use .ccw/personal/ (same as backend spec-index-builder)
if (scope === 'global') {
targetDir = path.join(os.homedir(), '.ccw', 'personal')
} else {
targetDir = path.join('.ccw', 'personal')
}
// Create type-based filename
const typePrefix = isConstraint ? 'constraints' : isLearning ? 'learnings' : 'conventions'
targetFile = path.join(targetDir, `${typePrefix}.md`)
}
```
### Step 5: Build Entry
```javascript
function buildEntry(rule, type, category, sessionId) {
if (type === 'learning') {
return {
date: new Date().toISOString().split('T')[0],
session_id: sessionId || null,
insight: rule,
category: category,
context: null
};
}
// For conventions and constraints, just return the rule string
return rule;
}
```
### Step 6: Write Spec
```javascript
const fs = require('fs')
// Ensure directory exists
if (!fs.existsSync(targetDir)) {
fs.mkdirSync(targetDir, { recursive: true })
}
// Check if file exists
const fileExists = fs.existsSync(targetFile)
if (!fileExists) {
// Create new file with frontmatter
const frontmatter = `---
title: ${dimension === 'specs' ? 'Project' : 'Personal'} ${isConstraint ? 'Constraints' : isLearning ? 'Learnings' : 'Conventions'}
readMode: optional
priority: medium
category: ${category}
scope: ${dimension === 'personal' ? scope : 'project'}
dimension: ${dimension}
keywords: [${category}, ${isConstraint ? 'constraint' : isLearning ? 'learning' : 'convention'}]
---
# ${dimension === 'specs' ? 'Project' : 'Personal'} ${isConstraint ? 'Constraints' : isLearning ? 'Learnings' : 'Conventions'}
`
fs.writeFileSync(targetFile, frontmatter, 'utf8')
}
// Read existing content
let content = fs.readFileSync(targetFile, 'utf8')
// Deduplicate: skip if rule text already exists in the file
if (content.includes(ruleText)) {
console.log(`
Rule already exists in ${targetFile}
Text: "${ruleText}"
`)
return
}
// Format the new rule based on type
let newRule
if (isLearning) {
const entry = buildEntry(ruleText, type, category)
newRule = `- [learning/${category}] ${entry.insight} (${entry.date})`
} else {
newRule = `- [${category}] ${ruleText}`
}
// Append the rule
content = content.trimEnd() + '\n' + newRule + '\n'
fs.writeFileSync(targetFile, content, 'utf8')
// Rebuild spec index
Bash('ccw spec rebuild')
```
### Step 7: Display Confirmation
**If `-y`/`--yes` (auto mode)**:
```
Spec added: [${type}/${category}] "${ruleText}" -> ${targetFile}
```
**Otherwise (full confirmation)**:
```
Spec created successfully
Dimension: ${dimension}
Scope: ${dimension === 'personal' ? scope : 'project'}
Category: ${category}
Type: ${type}
Rule: "${ruleText}"
Location: ${targetFile}
Use 'ccw spec list' to view all specs
Use 'ccw spec load --category ${category}' to load specs by category
```
## Target File Resolution
### Project Specs (dimension: specs)
```
.ccw/specs/
|- coding-conventions.md <- conventions, learnings
|- architecture-constraints.md <- constraints
+- quality-rules.md <- quality rules
```
### Personal Specs (dimension: personal)
```
# Global (~/.ccw/personal/)
~/.ccw/personal/
|- conventions.md <- personal conventions (all projects)
|- constraints.md <- personal constraints (all projects)
+- learnings.md <- personal learnings (all projects)
# Project-local (.ccw/personal/)
.ccw/personal/
|- conventions.md <- personal conventions (this project only)
|- constraints.md <- personal constraints (this project only)
+- learnings.md <- personal learnings (this project only)
```
## Examples
### Interactive Wizard
```bash
/workflow:spec:add --interactive
# Prompts for: dimension -> scope (if personal) -> category -> type -> content
```
### Add a Convention (Direct)
```bash
/workflow:spec:add "Use async/await instead of callbacks" --type convention --category coding_style
```
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [coding_style] Use async/await instead of callbacks
```
### Add an Architectural Constraint (Direct)
```bash
/workflow:spec:add "No direct DB access from controllers" --type constraint --category architecture
```
Result in `.ccw/specs/architecture-constraints.md`:
```markdown
- [architecture] No direct DB access from controllers
```
### Capture a Learning (Direct, Auto-detect)
```bash
/workflow:spec:add "Cache invalidation requires event sourcing for consistency" --type learning
```
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [learning/architecture] Cache invalidation requires event sourcing for consistency (2026-03-06)
```
### Auto-confirm Mode
```bash
/workflow:spec:add -y "No direct DB access from controllers" --type constraint
# Auto-detects category as 'architecture', writes without confirmation prompt
```
### Personal Spec (Global)
```bash
/workflow:spec:add --scope global --dimension personal --type convention "Prefer descriptive variable names"
```
Result in `~/.ccw/personal/conventions.md`:
```markdown
- [general] Prefer descriptive variable names
```
### Personal Spec (Project)
```bash
/workflow:spec:add --scope project --dimension personal --type constraint "No ORM in this project"
```
Result in `.ccw/personal/constraints.md`:
```markdown
- [general] No ORM in this project
```
## Error Handling
- **Duplicate Rule**: Warn and skip if exact rule text already exists in target file
- **Invalid Category**: Suggest valid categories for the type
- **Invalid Scope**: Exit with error - must be 'global' or 'project'
- **Invalid Dimension**: Exit with error - must be 'specs' or 'personal'
- **Invalid Type**: Exit with error - must be 'convention', 'constraint', or 'learning'
- **File not writable**: Check permissions, suggest manual creation
- **Invalid path**: Exit with error message
- **File Corruption**: Backup existing file before modification
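The duplicate-rule check can be sketched as an exact-match scan before appending. This is an assumption-laden sketch (the real command's matching may differ); `appendRuleIfNew` and `existingContent` are illustrative names standing in for the Read/Write tool calls:

```javascript
// Sketch: skip appending when the exact rule entry already exists in the target file.
// `existingContent` stands in for Read(targetFile); names are hypothetical.
function appendRuleIfNew(existingContent, category, ruleText) {
  const entry = `- [${category}] ${ruleText}`;
  // Exact-match duplicate detection, per the error-handling rule above
  const alreadyPresent = existingContent
    .split('\n')
    .some(line => line.trim() === entry);
  if (alreadyPresent) return { written: false, content: existingContent };
  // Ensure a trailing newline before appending the new entry
  return { written: true, content: existingContent.replace(/\n?$/, '\n') + entry + '\n' };
}
```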
## Related Commands
- `/workflow:spec:setup` - Initialize project with specs scaffold
- `/workflow:session:sync` - Quick-sync session work to specs and project-tech
- `/workflow:session:start` - Start a session
- `/workflow:session:complete` - Complete session (prompts for learnings)
- `ccw spec list` - View all specs
- `ccw spec load --category <cat>` - Load filtered specs
- `ccw spec rebuild` - Rebuild spec index

View File

@@ -1,74 +1,208 @@
---
name: setup
description: Initialize project-level state and configure specs via interactive questionnaire using cli-explore-agent
argument-hint: "[--regenerate] [--skip-specs] [--reset]"
examples:
- /workflow:spec:setup
- /workflow:spec:setup --regenerate
- /workflow:spec:setup --skip-specs
- /workflow:spec:setup --reset
---
# Workflow Spec Setup Command (/workflow:spec:setup)
## Overview
Initialize `.workflow/project-tech.json` and `.ccw/specs/*.md` with comprehensive project understanding by delegating analysis to **cli-explore-agent**, then interactively configure project guidelines through a multi-round questionnaire.
**Dual File System**:
- `project-tech.json`: Auto-generated technical analysis (stack, architecture, components)
- `specs/*.md`: User-maintained rules and constraints (created and populated interactively)
**Design Principle**: Questions are dynamically generated based on the project's tech stack, architecture, and patterns — not generic boilerplate.
**Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow without interrupting the task flow.
## Usage
```bash
/workflow:spec:setup # Initialize (skip if exists)
/workflow:spec:setup --regenerate # Force regeneration of project-tech.json
/workflow:spec:setup --skip-specs # Initialize project-tech only, skip spec initialization and questionnaire
/workflow:spec:setup --reset # Reset specs content before questionnaire
```
## Execution Process
```
Input Parsing:
├─ Parse --regenerate flag → regenerate = true | false
├─ Parse --skip-specs flag → skipSpecs = true | false
└─ Parse --reset flag → reset = true | false
Decision:
├─ BOTH_EXIST + no --regenerate + no --reset → Exit: "Already initialized"
├─ EXISTS + --regenerate → Backup existing → Continue analysis
├─ EXISTS + --reset → Reset specs, keep project-tech → Skip to questionnaire
└─ NOT_FOUND → Continue full flow
Full Flow:
├─ Step 1: Parse input and check existing state
├─ Step 2: Get project metadata (name, root)
├─ Step 3: Invoke cli-explore-agent
│  ├─ Structural scan (get_modules_by_depth.sh, find, wc)
│  ├─ Semantic analysis (Gemini CLI)
│  ├─ Synthesis and merge
│  └─ Write .workflow/project-tech.json
├─ Step 4: Initialize Spec System (if not --skip-specs)
│  ├─ Check if specs/*.md exist
│  ├─ If NOT_FOUND → Run ccw spec init
│  └─ Run ccw spec rebuild
├─ Step 5: Multi-Round Interactive Questionnaire (if not --skip-specs)
│  ├─ Check if guidelines already populated → Ask: "Append / Reset / Cancel"
│  ├─ Load project context from project-tech.json
│  ├─ Round 1: Coding Conventions (coding_style, naming_patterns)
│  ├─ Round 2: File & Documentation Conventions (file_structure, documentation)
│  ├─ Round 3: Architecture & Tech Constraints (architecture, tech_stack)
│  ├─ Round 4: Performance & Security Constraints (performance, security)
│  └─ Round 5: Quality Rules (quality_rules)
├─ Step 6: Write specs/*.md (if not --skip-specs)
└─ Step 7: Display Summary
Output:
├─ .workflow/project-tech.json (+ .backup if regenerate)
└─ .ccw/specs/*.md (scaffold or configured, unless --skip-specs)
```
## Implementation
### Step 1: Parse Input and Check Existing State
**Parse flags**:
```javascript
const regenerate = $ARGUMENTS.includes('--regenerate')
const skipSpecs = $ARGUMENTS.includes('--skip-specs')
const reset = $ARGUMENTS.includes('--reset')
```
**Check existing state**:
```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .ccw/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
```
**If BOTH_EXIST and no --regenerate and no --reset**: Exit early
```
Project already initialized:
- Tech analysis: .workflow/project-tech.json
- Guidelines: .ccw/specs/*.md
Use /workflow:spec:setup --regenerate to rebuild tech analysis
Use /workflow:spec:setup --reset to reconfigure guidelines
Use /workflow:spec:add to add individual rules
Use /workflow:status --project to view state
```
### Step 2: Get Project Metadata
```bash
bash(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
bash(mkdir -p .workflow)
```
### Step 3: Invoke cli-explore-agent
**For --regenerate**: Backup and preserve existing data
```bash
bash(cp .workflow/project-tech.json .workflow/project-tech.json.backup)
```
**Delegate analysis to agent**:
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description="Deep project analysis",
prompt=`
Analyze project for workflow initialization and generate .workflow/project-tech.json.
## MANDATORY FIRST STEPS
1. Execute: cat ~/.ccw/workflows/cli-templates/schemas/project-tech-schema.json (get schema reference)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
## Task
Generate complete project-tech.json following the schema structure:
- project_name: "${projectName}"
- initialized_at: ISO 8601 timestamp
- overview: {
description: "Brief project description",
technology_stack: {
languages: [{name, file_count, primary}],
frameworks: ["string"],
build_tools: ["string"],
test_frameworks: ["string"]
},
architecture: {style, layers: [], patterns: []},
key_components: [{name, path, description, importance}]
}
- features: []
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}
## Analysis Requirements
**Technology Stack**:
- Languages: File counts, mark primary
- Frameworks: From package.json, requirements.txt, go.mod, etc.
- Build tools: npm, cargo, maven, webpack, vite
- Test frameworks: jest, pytest, go test, junit
**Architecture**:
- Style: MVC, microservices, layered (from structure & imports)
- Layers: presentation, business-logic, data-access
- Patterns: singleton, factory, repository
- Key components: 5-10 modules {name, path, description, importance}
## Execution
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary
Project root: ${projectRoot}
`
)
```
### Step 4: Initialize Spec System (if not --skip-specs)
```javascript
// Skip spec initialization if --skip-specs flag is provided
if (!skipSpecs) {
// Initialize spec system if not already initialized
const specsCheck = Bash('test -f .ccw/specs/coding-conventions.md && echo EXISTS || echo NOT_FOUND')
if (specsCheck.includes('NOT_FOUND')) {
console.log('Initializing spec system...')
Bash('ccw spec init')
Bash('ccw spec rebuild')
}
} else {
console.log('Skipping spec initialization and questionnaire (--skip-specs)')
}
```
If `--skip-specs` is provided, skip directly to Step 7 (Display Summary) with limited output.
### Step 5: Multi-Round Interactive Questionnaire (if not --skip-specs)
#### Step 5.0: Check Existing Guidelines
If guidelines already have content, ask the user how to proceed:
```javascript
// Check if specs already have content via ccw spec list
@@ -76,7 +210,7 @@ const specsList = Bash('ccw spec list --json 2>/dev/null || echo "{}"')
const specsData = JSON.parse(specsList)
const isPopulated = (specsData.total || 0) > 5 // More than seed docs
if (isPopulated && !reset) {
AskUserQuestion({
questions: [{
question: "Project guidelines already contain entries. How would you like to proceed?",
@@ -93,9 +227,15 @@ if (isPopulated) {
// If Reset → clear all arrays before proceeding
// If Append → keep existing, wizard adds to them
}
// If --reset flag was provided, clear existing entries before proceeding
if (reset) {
// Reset specs content
console.log('Resetting existing guidelines...')
}
```
#### Step 5.1: Load Project Context
```javascript
// Load project context via ccw spec load for planning context
@@ -112,15 +252,15 @@ const archPatterns = specData.overview?.architecture?.patterns || []
const buildTools = specData.overview?.technology_stack?.build_tools || []
```
#### Step 5.2: Multi-Round Questionnaire
Each round uses `AskUserQuestion` with project-aware options. The user can always select "Other" to provide custom input.
**CRITICAL**: After each round, collect the user's answers and convert them into guideline entries. Do NOT batch all rounds — process each round's answers before proceeding to the next.
---
##### Round 1: Coding Conventions
Generate options dynamically based on detected language/framework:
@@ -175,11 +315,11 @@ AskUserQuestion({
})
```
**Process Round 1 answers** -> add to `conventions.coding_style` and `conventions.naming_patterns` arrays.
---
##### Round 2: File Structure & Documentation
```javascript
AskUserQuestion({
@@ -210,11 +350,11 @@ AskUserQuestion({
})
```
**Process Round 2 answers** -> add to `conventions.file_structure` and `conventions.documentation`.
---
##### Round 3: Architecture & Tech Stack Constraints
```javascript
// Build architecture-specific options
@@ -259,11 +399,11 @@ AskUserQuestion({
})
```
**Process Round 3 answers** -> add to `constraints.architecture` and `constraints.tech_stack`.
---
##### Round 4: Performance & Security Constraints
```javascript
AskUserQuestion({
@@ -294,11 +434,11 @@ AskUserQuestion({
})
```
**Process Round 4 answers** -> add to `constraints.performance` and `constraints.security`.
---
##### Round 5: Quality Rules
```javascript
AskUserQuestion({
@@ -318,9 +458,9 @@ AskUserQuestion({
})
```
**Process Round 5 answers** -> add to `quality_rules` array as `{ rule, scope, enforced_by }` objects.
### Step 6: Write specs/*.md (if not --skip-specs)
For each category of collected answers, append rules to the corresponding spec MD file. Each spec file uses YAML frontmatter with `readMode`, `priority`, `category`, and `keywords`.
@@ -404,51 +544,107 @@ keywords: [execution, quality, testing, coverage, lint]
Bash('ccw spec rebuild')
```
#### Answer Processing Rules
When converting user selections to guideline entries:
1. **Selected option** -> Use the option's `description` as the guideline string (it's more precise than the label)
2. **"Other" with custom text** -> Use the user's text directly as the guideline string
3. **Deduplication** -> Skip entries that already exist in the guidelines (exact string match)
4. **Quality rules** -> Convert to `{ rule: description, scope: "all", enforced_by: "code-review" }` format
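Under those rules, the selection-to-entry conversion might look like the following sketch. The option shape `{ label, description }` is assumed from the AskUserQuestion snippets in this document; `toGuidelineEntries` and `toQualityRule` are hypothetical helper names:

```javascript
// Sketch: convert one round's selections into guideline entries per the rules above.
// Selected options are assumed to carry { label, description }; "Other" input is a plain string.
function toGuidelineEntries(selections, existingEntries) {
  const entries = [];
  for (const sel of selections) {
    // Rules 1-2: prefer the option's description; fall back to the user's raw text
    const text = typeof sel === 'string' ? sel : sel.description;
    // Rule 3: exact-match deduplication against existing and newly collected entries
    if (!existingEntries.includes(text) && !entries.includes(text)) entries.push(text);
  }
  return entries;
}

// Rule 4: quality rules get wrapped with default scope and enforcement
const toQualityRule = (description) =>
  ({ rule: description, scope: 'all', enforced_by: 'code-review' });
```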
### Step 7: Display Summary
```javascript
const projectTech = JSON.parse(Read('.workflow/project-tech.json'));
if (skipSpecs) {
// Minimal summary for --skip-specs mode
console.log(`
Project initialized successfully (tech analysis only)
## Project Overview
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}
### Technology Stack
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
### Architecture
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules
---
Files created:
- Tech analysis: .workflow/project-tech.json
- Specs: (skipped via --skip-specs)
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
Next steps:
- Use /workflow:spec:setup (without --skip-specs) to configure guidelines
- Use /workflow:spec:add to create individual specs
- Use workflow-plan skill to start planning
`);
} else {
// Full summary with guidelines stats
const countConventions = newCodingStyle.length + newNamingPatterns.length
+ newFileStructure.length + newDocumentation.length
const countConstraints = newArchitecture.length + newTechStack.length
+ newPerformance.length + newSecurity.length
const countQuality = newQualityRules.length
// Get updated spec list
const specsList = Bash('ccw spec list --json 2>/dev/null || echo "{}"')
console.log(`
Project initialized and guidelines configured
## Project Overview
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}
### Technology Stack
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
### Architecture
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules
### Guidelines Summary
- Conventions: ${countConventions} rules added to coding-conventions.md
- Constraints: ${countConstraints} rules added to architecture-constraints.md
- Quality rules: ${countQuality} rules added to quality-rules.md
Spec index rebuilt. Use \`ccw spec list\` to view all specs.
---
Files created:
- Tech analysis: .workflow/project-tech.json
- Specs: .ccw/specs/ (configured)
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
Next steps:
- Use /workflow:spec:add to add individual rules later
- Specs are auto-loaded via hook on each prompt
- Use workflow-plan skill to start planning
`);
}
```
## Error Handling
- **Agent Failure**: Fall back to basic initialization with placeholder overview
- **Missing Tools**: Agent uses Qwen fallback or bash-only analysis
- **Empty Project**: Create minimal JSON with all gaps identified
- **No project-tech.json** (when --reset is used without prior init): Run full flow from Step 2
- **User cancels mid-wizard**: Save whatever was collected so far (partial is better than nothing)
- **File write failure**: Report error, suggest manual edit
## Related Commands
- `/workflow:spec:add` - Interactive wizard to create individual specs with scope selection
- `/workflow:session:sync` - Quick-sync session work to specs and project-tech
- `workflow-plan` skill - Start planning with initialized project context
- `/workflow:status --project` - View project state and guidelines

View File

@@ -658,9 +658,15 @@ ${recommendations.map(r => \`- ${r}\`).join('\\n')}
- "优化执行" → Analyze execution improvements
- "完成" → No further action
5. **Sync Session State** (automatic, unless `--dry-run`)
- Execute: `/workflow:session:sync -y "Execution complete: ${completedCount}/${totalCount} tasks succeeded"`
- Updates specs/*.md with any learnings from execution
- Updates project-tech.json with development index entry
**Success Criteria**:
- [ ] Statistics collected and displayed
- [ ] execution.md updated with final status
- [ ] Session state synced via /workflow:session:sync
- [ ] User informed of completion
---

View File

@@ -1,584 +0,0 @@
# Mermaid Utilities Library
Shared utilities for generating and validating Mermaid diagrams across all analysis skills.
## Sanitization Functions
### sanitizeId
Convert any text to a valid Mermaid node ID.
```javascript
/**
* Sanitize text to valid Mermaid node ID
* - Only alphanumeric and underscore allowed
* - Cannot start with number
* - Truncates to 50 chars max
*
* @param {string} text - Input text
* @returns {string} - Valid Mermaid ID
*/
function sanitizeId(text) {
if (!text) return '_empty';
return text
.replace(/[^a-zA-Z0-9_\u4e00-\u9fa5]/g, '_') // Allow Chinese chars
.replace(/^[0-9]/, '_$&') // Prefix number with _
.replace(/_+/g, '_') // Collapse multiple _
.substring(0, 50); // Limit length
}
// Examples:
// sanitizeId("User-Service") → "User_Service"
// sanitizeId("3rdParty") → "_3rdParty"
// sanitizeId("用户服务") → "用户服务"
```
### escapeLabel
Escape special characters for Mermaid labels.
```javascript
/**
* Escape special characters in Mermaid labels
* Uses HTML entity encoding for problematic chars
*
* @param {string} text - Label text
* @returns {string} - Escaped label
*/
function escapeLabel(text) {
if (!text) return '';
return text
.replace(/"/g, "'") // Avoid quote issues
.replace(/\(/g, '#40;') // (
.replace(/\)/g, '#41;') // )
.replace(/\{/g, '#123;') // {
.replace(/\}/g, '#125;') // }
.replace(/\[/g, '#91;') // [
.replace(/\]/g, '#93;') // ]
.replace(/</g, '#60;') // <
.replace(/>/g, '#62;') // >
.replace(/\|/g, '#124;') // |
.substring(0, 80); // Limit length
}
// Examples:
// escapeLabel("Process(data)") → "Process#40;data#41;"
// escapeLabel("Check {valid?}") → "Check #123;valid?#125;"
```
### sanitizeType
Sanitize type names for class diagrams.
```javascript
/**
* Sanitize type names for Mermaid classDiagram
* Removes generics syntax that causes issues
*
* @param {string} type - Type name
* @returns {string} - Sanitized type
*/
function sanitizeType(type) {
if (!type) return 'any';
return type
.replace(/<[^>]*>/g, '') // Remove generics <T>
.replace(/\|/g, ' or ') // Union types
.replace(/&/g, ' and ') // Intersection types
.replace(/\[\]/g, 'Array') // Array notation
.substring(0, 30);
}
// Examples:
// sanitizeType("Array<string>") → "Array"
// sanitizeType("string | number") → "string or number"
```
## Diagram Generation Functions
### generateFlowchartNode
Generate a flowchart node with proper shape.
```javascript
/**
* Generate flowchart node with shape
*
* @param {string} id - Node ID
* @param {string} label - Display label
* @param {string} type - Node type: start|end|process|decision|io|subroutine
* @returns {string} - Mermaid node definition
*/
function generateFlowchartNode(id, label, type = 'process') {
const safeId = sanitizeId(id);
const safeLabel = escapeLabel(label);
const shapes = {
start: `${safeId}(["${safeLabel}"])`, // Stadium shape
end: `${safeId}(["${safeLabel}"])`, // Stadium shape
process: `${safeId}["${safeLabel}"]`, // Rectangle
decision: `${safeId}{"${safeLabel}"}`, // Diamond
io: `${safeId}[/"${safeLabel}"/]`, // Parallelogram
subroutine: `${safeId}[["${safeLabel}"]]`, // Subroutine
database: `${safeId}[("${safeLabel}")]`, // Cylinder
manual: `${safeId}[/"${safeLabel}"\\]` // Trapezoid
};
return shapes[type] || shapes.process;
}
```
### generateFlowchartEdge
Generate a flowchart edge with optional label.
```javascript
/**
* Generate flowchart edge
*
* @param {string} from - Source node ID
* @param {string} to - Target node ID
* @param {string} label - Edge label (optional)
* @param {string} style - Edge style: solid|dashed|thick
* @returns {string} - Mermaid edge definition
*/
function generateFlowchartEdge(from, to, label = '', style = 'solid') {
const safeFrom = sanitizeId(from);
const safeTo = sanitizeId(to);
const safeLabel = label ? `|"${escapeLabel(label)}"|` : '';
const arrows = {
solid: '-->',
dashed: '-.->',
thick: '==>'
};
const arrow = arrows[style] || arrows.solid;
return ` ${safeFrom} ${arrow}${safeLabel} ${safeTo}`;
}
```
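Putting the two generators together, a minimal assembly might look like this. The helpers are restated here in trimmed form so the snippet runs standalone; the full versions above add length limits and more escapes:

```javascript
// Trimmed restatements of the helpers above, for a standalone sketch.
function sanitizeId(text) {
  if (!text) return '_empty';
  return text
    .replace(/[^a-zA-Z0-9_\u4e00-\u9fa5]/g, '_')
    .replace(/^[0-9]/, '_$&')
    .replace(/_+/g, '_')
    .substring(0, 50);
}
function escapeLabel(text) {
  if (!text) return '';
  return text
    .replace(/"/g, "'")
    .replace(/\(/g, '#40;')
    .replace(/\)/g, '#41;')
    .substring(0, 80);
}
function generateFlowchartEdge(from, to, label = '', style = 'solid') {
  const arrows = { solid: '-->', dashed: '-.->', thick: '==>' };
  const safeLabel = label ? `|"${escapeLabel(label)}"|` : '';
  return `  ${sanitizeId(from)} ${arrows[style] || arrows.solid}${safeLabel} ${sanitizeId(to)}`;
}

// Assemble a tiny two-node flowchart
const diagram = [
  'flowchart TD',
  `  ${sanitizeId('load-config')}["${escapeLabel('Load config (yaml)')}"]`,
  `  ${sanitizeId('parse')}["${escapeLabel('Parse')}"]`,
  generateFlowchartEdge('load-config', 'parse', 'ok')
].join('\n');
console.log(diagram);
```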
### generateAlgorithmFlowchart (Enhanced)
Generate algorithm flowchart with branch/loop support.
```javascript
/**
* Generate algorithm flowchart with decision support
*
* @param {Object} algorithm - Algorithm definition
* - name: Algorithm name
* - inputs: [{name, type}]
* - outputs: [{name, type}]
* - steps: [{id, description, type, next: [id], conditions: [text]}]
* @returns {string} - Complete Mermaid flowchart
*/
function generateAlgorithmFlowchart(algorithm) {
let mermaid = 'flowchart TD\n';
// Start node
mermaid += ` START(["开始: ${escapeLabel(algorithm.name)}"])\n`;
// Input node (if has inputs)
if (algorithm.inputs?.length > 0) {
const inputList = algorithm.inputs.map(i => `${i.name}: ${i.type}`).join(', ');
mermaid += ` INPUT[/"输入: ${escapeLabel(inputList)}"/]\n`;
mermaid += ` START --> INPUT\n`;
}
// Process nodes
const steps = algorithm.steps || [];
for (const step of steps) {
const nodeId = sanitizeId(step.id || `STEP_${step.step_num}`);
if (step.type === 'decision') {
mermaid += ` ${nodeId}{"${escapeLabel(step.description)}"}\n`;
} else if (step.type === 'io') {
mermaid += ` ${nodeId}[/"${escapeLabel(step.description)}"/]\n`;
} else if (step.type === 'loop_start') {
mermaid += ` ${nodeId}[["循环: ${escapeLabel(step.description)}"]]\n`;
} else {
mermaid += ` ${nodeId}["${escapeLabel(step.description)}"]\n`;
}
}
// Output node
const outputDesc = algorithm.outputs?.map(o => o.name).join(', ') || '结果';
mermaid += ` OUTPUT[/"输出: ${escapeLabel(outputDesc)}"/]\n`;
mermaid += ` END_(["结束"])\n`;
// Connect first step to input/start
if (steps.length > 0) {
const firstStep = sanitizeId(steps[0].id || 'STEP_1');
if (algorithm.inputs?.length > 0) {
mermaid += ` INPUT --> ${firstStep}\n`;
} else {
mermaid += ` START --> ${firstStep}\n`;
}
}
// Connect steps based on next array
for (const step of steps) {
const nodeId = sanitizeId(step.id || `STEP_${step.step_num}`);
if (step.next && step.next.length > 0) {
step.next.forEach((nextId, index) => {
const safeNextId = sanitizeId(nextId);
const condition = step.conditions?.[index];
if (condition) {
mermaid += ` ${nodeId} -->|"${escapeLabel(condition)}"| ${safeNextId}\n`;
} else {
mermaid += ` ${nodeId} --> ${safeNextId}\n`;
}
});
} else if (!step.type?.includes('end')) {
// Default: connect to next step or output
const stepIndex = steps.indexOf(step);
if (stepIndex < steps.length - 1) {
const nextStep = sanitizeId(steps[stepIndex + 1].id || `STEP_${stepIndex + 2}`);
mermaid += ` ${nodeId} --> ${nextStep}\n`;
} else {
mermaid += ` ${nodeId} --> OUTPUT\n`;
}
}
}
// Connect output to end
mermaid += ` OUTPUT --> END_\n`;
return mermaid;
}
```
## Diagram Validation
### validateMermaidSyntax
Comprehensive Mermaid syntax validation.
```javascript
/**
* Validate Mermaid diagram syntax
*
* @param {string} content - Mermaid diagram content
* @returns {Object} - {valid: boolean, issues: string[]}
*/
function validateMermaidSyntax(content) {
const issues = [];
// Check 1: Diagram type declaration
if (!content.match(/^(graph|flowchart|classDiagram|sequenceDiagram|stateDiagram|erDiagram|gantt|pie|mindmap)/m)) {
issues.push('Missing diagram type declaration');
}
// Check 2: Undefined values
if (content.includes('undefined') || content.includes('null')) {
issues.push('Contains undefined/null values');
}
// Check 3: Invalid arrow syntax
if (content.match(/-->\s*-->/)) {
issues.push('Double arrow syntax error');
}
// Check 4: Unescaped special characters in labels
const labelMatches = content.match(/\["[^"]*[(){}[\]<>][^"]*"\]/g);
if (labelMatches?.some(m => !m.includes('#'))) {
issues.push('Unescaped special characters in labels');
}
// Check 5: Node ID starts with number
if (content.match(/\n\s*[0-9][a-zA-Z0-9_]*[\[\({]/)) {
issues.push('Node ID cannot start with number');
}
// Check 6: Nested subgraph syntax error
if (content.match(/subgraph\s+\S+\s*\n[^e]*subgraph/)) {
// This is actually valid, only flag if brackets don't match
const subgraphCount = (content.match(/subgraph/g) || []).length;
const endCount = (content.match(/\bend\b/g) || []).length;
if (subgraphCount > endCount) {
issues.push('Unbalanced subgraph/end blocks');
}
}
// Check 7: Invalid arrow type for diagram type
const diagramType = content.match(/^(graph|flowchart|classDiagram|sequenceDiagram)/m)?.[1];
if (diagramType === 'classDiagram' && content.includes('-->|')) {
issues.push('Invalid edge label syntax for classDiagram');
}
// Check 8: Empty node labels
if (content.match(/\[""\]|\{\}|\(\)/)) {
issues.push('Empty node labels detected');
}
// Check 9: Reserved keywords as IDs
const reserved = ['end', 'graph', 'subgraph', 'direction', 'class', 'click'];
for (const keyword of reserved) {
const pattern = new RegExp(`\\n\\s*${keyword}\\s*[\\[\\(\\{]`, 'i');
if (content.match(pattern)) {
issues.push(`Reserved keyword "${keyword}" used as node ID`);
}
}
// Check 10: Line length (Mermaid has issues with very long lines)
const lines = content.split('\n');
for (let i = 0; i < lines.length; i++) {
if (lines[i].length > 500) {
issues.push(`Line ${i + 1} exceeds 500 characters`);
}
}
return {
valid: issues.length === 0,
issues
};
}
```
### validateDiagramDirectory
Validate all diagrams in a directory.
```javascript
/**
* Validate all Mermaid diagrams in directory
*
* @param {string} diagramDir - Path to diagrams directory
* @returns {Object[]} - Array of {file, valid, issues}
*/
function validateDiagramDirectory(diagramDir) {
const files = Glob(`${diagramDir}/*.mmd`);
const results = [];
for (const file of files) {
const content = Read(file);
const validation = validateMermaidSyntax(content);
results.push({
file: file.split('/').pop(),
path: file,
valid: validation.valid,
issues: validation.issues,
lines: content.split('\n').length
});
}
return results;
}
```
## Class Diagram Utilities
### generateClassDiagram
Generate class diagram with relationships.
```javascript
/**
* Generate class diagram from analysis data
*
* @param {Object} analysis - Data structure analysis
* - entities: [{name, type, properties, methods}]
* - relationships: [{from, to, type, label}]
* @param {Object} options - Generation options
* - maxClasses: Max classes to include (default: 15)
* - maxProperties: Max properties per class (default: 8)
* - maxMethods: Max methods per class (default: 6)
* @returns {string} - Mermaid classDiagram
*/
function generateClassDiagram(analysis, options = {}) {
const maxClasses = options.maxClasses || 15;
const maxProperties = options.maxProperties || 8;
const maxMethods = options.maxMethods || 6;
let mermaid = 'classDiagram\n';
const entities = (analysis.entities || []).slice(0, maxClasses);
// Generate classes
for (const entity of entities) {
const className = sanitizeId(entity.name);
mermaid += ` class ${className} {\n`;
// Properties
for (const prop of (entity.properties || []).slice(0, maxProperties)) {
const vis = {public: '+', private: '-', protected: '#'}[prop.visibility] || '+';
const type = sanitizeType(prop.type);
mermaid += ` ${vis}${type} ${prop.name}\n`;
}
// Methods
for (const method of (entity.methods || []).slice(0, maxMethods)) {
const vis = {public: '+', private: '-', protected: '#'}[method.visibility] || '+';
const params = (method.params || []).map(p => p.name).join(', ');
const returnType = sanitizeType(method.returnType || 'void');
mermaid += ` ${vis}${method.name}(${params}) ${returnType}\n`;
}
mermaid += ' }\n';
// Add stereotype if applicable
if (entity.type === 'interface') {
mermaid += ` <<interface>> ${className}\n`;
} else if (entity.type === 'abstract') {
mermaid += ` <<abstract>> ${className}\n`;
}
}
// Generate relationships
const arrows = {
inheritance: '--|>',
implementation: '..|>',
composition: '*--',
aggregation: 'o--',
association: '-->',
dependency: '..>'
};
for (const rel of (analysis.relationships || [])) {
const from = sanitizeId(rel.from);
const to = sanitizeId(rel.to);
const arrow = arrows[rel.type] || '-->';
const label = rel.label ? ` : ${escapeLabel(rel.label)}` : '';
// Only include if both entities exist
if (entities.some(e => sanitizeId(e.name) === from) &&
entities.some(e => sanitizeId(e.name) === to)) {
mermaid += ` ${from} ${arrow} ${to}${label}\n`;
}
}
return mermaid;
}
```
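The relationship rendering above can be sketched standalone; this is the same arrow table minus the `sanitizeId`/`escapeLabel` helpers (which are assumed elsewhere in this document), so the emitted line format can be checked in isolation:

```javascript
// Same relationship arrow mapping as generateClassDiagram.
const ARROWS = {
  inheritance: '--|>',
  implementation: '..|>',
  composition: '*--',
  aggregation: 'o--',
  association: '-->',
  dependency: '..>'
};

function relationshipLine(from, to, type, label) {
  const arrow = ARROWS[type] || '-->';        // unknown types fall back to association
  const suffix = label ? ' : ' + label : '';  // optional relationship label
  return '  ' + from + ' ' + arrow + ' ' + to + suffix;
}

console.log(relationshipLine('Dog', 'Animal', 'inheritance'));
// "  Dog --|> Animal"
console.log(relationshipLine('Order', 'LineItem', 'composition', 'contains'));
// "  Order *-- LineItem : contains"
```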
## Sequence Diagram Utilities
### generateSequenceDiagram
Generate sequence diagram from scenario.
```javascript
/**
* Generate sequence diagram from scenario
*
* @param {Object} scenario - Sequence scenario
* - name: Scenario name
* - actors: [{id, name, type}]
* - messages: [{from, to, description, type}]
* - blocks: [{type, condition, messages}]
* @returns {string} - Mermaid sequenceDiagram
*/
function generateSequenceDiagram(scenario) {
let mermaid = 'sequenceDiagram\n';
// Title
if (scenario.name) {
mermaid += ` title ${escapeLabel(scenario.name)}\n`;
}
// Participants
for (const actor of scenario.actors || []) {
const actorType = actor.type === 'external' ? 'actor' : 'participant';
mermaid += ` ${actorType} ${sanitizeId(actor.id)} as ${escapeLabel(actor.name)}\n`;
}
mermaid += '\n';
// Messages
for (const msg of scenario.messages || []) {
const from = sanitizeId(msg.from);
const to = sanitizeId(msg.to);
const desc = escapeLabel(msg.description);
let arrow;
switch (msg.type) {
case 'async': arrow = '->>'; break;
case 'response': arrow = '-->>'; break;
case 'create': arrow = '->>+'; break;
case 'destroy': arrow = '->>-'; break;
case 'self': arrow = '->>'; break;
default: arrow = '->>';
}
mermaid += ` ${from}${arrow}${to}: ${desc}\n`;
// Activation
if (msg.activate) {
mermaid += ` activate ${to}\n`;
}
if (msg.deactivate) {
mermaid += ` deactivate ${from}\n`;
}
// Notes
if (msg.note) {
mermaid += ` Note over ${to}: ${escapeLabel(msg.note)}\n`;
}
}
// Blocks (loops, alt, opt)
for (const block of scenario.blocks || []) {
switch (block.type) {
case 'loop':
mermaid += ` loop ${escapeLabel(block.condition)}\n`;
break;
case 'alt':
mermaid += ` alt ${escapeLabel(block.condition)}\n`;
break;
case 'opt':
mermaid += ` opt ${escapeLabel(block.condition)}\n`;
break;
}
for (const m of block.messages || []) {
mermaid += ` ${sanitizeId(m.from)}->>${sanitizeId(m.to)}: ${escapeLabel(m.description)}\n`;
}
mermaid += ' end\n';
}
return mermaid;
}
```
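The message-arrow selection above can be exercised standalone (same mapping as the `switch`, with the sanitization helpers omitted):

```javascript
// Arrow mapping mirroring the switch in generateSequenceDiagram.
function messageArrow(type) {
  const arrows = { async: '->>', response: '-->>', create: '->>+', destroy: '->>-', self: '->>' };
  return arrows[type] || '->>';
}

function messageLine(from, to, type, description) {
  return '  ' + from + messageArrow(type) + to + ': ' + description;
}

console.log(messageLine('Client', 'API', 'async', 'POST /login'));
// "  Client->>API: POST /login"
console.log(messageLine('API', 'Client', 'response', '200 OK + token'));
// "  API-->>Client: 200 OK + token"
```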
## Usage Examples
### Example 1: Algorithm with Branches
```javascript
const algorithm = {
  name: "User authentication flow",
  inputs: [{name: "credentials", type: "Object"}],
  outputs: [{name: "token", type: "JWT"}],
  steps: [
    {id: "validate", description: "Validate input format", type: "process"},
    {id: "check_user", description: "Does the user exist?", type: "decision",
      next: ["verify_pwd", "error_user"], conditions: ["Yes", "No"]},
    {id: "verify_pwd", description: "Verify password", type: "process"},
    {id: "pwd_ok", description: "Password correct?", type: "decision",
      next: ["gen_token", "error_pwd"], conditions: ["Yes", "No"]},
    {id: "gen_token", description: "Generate JWT token", type: "process"},
    {id: "error_user", description: "Return user-not-found error", type: "io"},
    {id: "error_pwd", description: "Return wrong-password error", type: "io"}
]
};
const flowchart = generateAlgorithmFlowchart(algorithm);
```
### Example 2: Validate Before Output
```javascript
const diagram = generateClassDiagram(analysis);
const validation = validateMermaidSyntax(diagram);
if (!validation.valid) {
console.log("Diagram has issues:", validation.issues);
// Fix issues or regenerate
} else {
Write(`${outputDir}/class-diagram.mmd`, diagram);
}
```

View File

@@ -706,10 +706,19 @@ if (!autoMode) {
| Export | Copy plan.md + plan-note.md to user-specified location |
| Done | Display artifact paths, end workflow |
### Step 4.5: Sync Session State
```bash
$session-sync -y "Plan complete: {domains} domains, {tasks} tasks"
```
Updates `specs/*.md` with planning insights and `project-tech.json` with a planning-session entry.
**Success Criteria**:
- `plan.md` generated with complete summary
- `.task/TASK-*.json` collected at session root (consumable by unified-execute)
- All artifacts present in session directory
- Session state synced via `$session-sync`
- User informed of completion and next steps
---

View File

@@ -119,6 +119,7 @@ Phase 4: Completion & Summary
└─ Ref: phases/04-completion-summary.md
├─ Generate unified summary report
├─ Update final state
├─ Sync session state: $session-sync -y "Dev cycle complete: {iterations} iterations"
├─ Close all agents
└─ Output: final cycle report with continuation instructions
```

View File

@@ -224,6 +224,7 @@ Phase 8: Fix Execution
Phase 9: Fix Completion
└─ Ref: phases/09-fix-completion.md
├─ Aggregate results → fix-summary.md
├─ Sync session state: $session-sync -y "Review cycle complete: {findings} findings, {fixed} fixed"
└─ Optional: complete workflow session if all fixes successful
Complete: Review reports + optional fix results
@@ -473,3 +474,9 @@ review-cycle src/auth/**
# Step 2: Fix (continue or standalone)
review-cycle --fix ${projectRoot}/.workflow/active/WFS-{session-id}/.review/
```
### Session Sync
```bash
# Auto-synced at Phase 9 (fix completion)
$session-sync -y "Review cycle complete: {findings} findings, {fixed} fixed"
```

View File

@@ -0,0 +1,212 @@
---
name: session-sync
description: Quick-sync session work to specs/*.md and project-tech.json
argument-hint: "[-y|--yes] [\"what was done\"]"
allowed-tools: AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# Session Sync
One-shot update of `specs/*.md` + `project-tech.json` from the current session context.
**Design**: Scan context -> extract -> write. No interactive wizards.
## Usage
```bash
$session-sync # Sync with preview + confirmation
$session-sync -y # Auto-sync, skip confirmation
$session-sync "Added JWT auth flow" # Sync with explicit summary
$session-sync -y "Fixed N+1 query" # Auto-sync with summary
```
## Process
```
Step 1: Gather Context
|- git diff --stat HEAD~3..HEAD (recent changes)
|- Active session folder (.workflow/.lite-plan/*) if exists
+- User summary ($ARGUMENTS or auto-generate from git log)
Step 2: Extract Updates
|- Guidelines: conventions / constraints / learnings
+- Tech: development_index entry
Step 3: Preview & Confirm (skip if --yes)
Step 4: Write both files
Step 5: One-line confirmation
```
## Implementation
### Step 1: Gather Context
```javascript
const AUTO_YES = "$ARGUMENTS".includes('--yes') || "$ARGUMENTS".includes('-y')
const userSummary = "$ARGUMENTS".replace(/--yes|-y/g, '').trim()
// Recent changes
const gitStat = Bash('git diff --stat HEAD~3..HEAD 2>/dev/null || git diff --stat HEAD 2>/dev/null')
const gitLog = Bash('git log --oneline -5')
// Active session (optional)
const sessionFolders = Glob('.workflow/.lite-plan/*/plan.json')
let sessionContext = null
if (sessionFolders.length > 0) {
const latest = sessionFolders[sessionFolders.length - 1]
sessionContext = JSON.parse(Read(latest))
}
// Build summary
const summary = userSummary
|| sessionContext?.summary
|| gitLog.split('\n')[0].replace(/^[a-f0-9]+ /, '')
```
### Step 2: Extract Updates
Analyze context and produce two update payloads. Use LLM reasoning (current agent) -- no CLI calls.
```javascript
// -- Guidelines extraction --
// Scan git diff + session for:
// - New patterns adopted -> convention
// - Restrictions discovered -> constraint
// - Surprises / gotchas -> learning
//
// Output: array of { type, category, text }
// RULE: Only extract genuinely reusable insights. Skip trivial/obvious items.
// RULE: Deduplicate against existing guidelines before adding.
// Load existing specs via ccw spec load
const existingSpecs = Bash('ccw spec load --dimension specs 2>/dev/null || echo ""')
const guidelineUpdates = [] // populated by agent analysis
// -- Tech extraction --
// Build one development_index entry from session work
function detectCategory(text) {
text = text.toLowerCase()
if (/\b(fix|bug|error|crash)\b/.test(text)) return 'bugfix'
if (/\b(refactor|cleanup|reorganize)\b/.test(text)) return 'refactor'
if (/\b(doc|readme|comment)\b/.test(text)) return 'docs'
if (/\b(add|new|create|implement)\b/.test(text)) return 'feature'
return 'enhancement'
}
function detectSubFeature(gitStat) {
// Most-changed directory from git diff --stat
const dirs = gitStat.match(/\S+\//g) || []
const counts = {}
dirs.forEach(d => {
    const seg = d.split('/').filter(Boolean).slice(-1)[0] || 'general' // deepest changed directory
counts[seg] = (counts[seg] || 0) + 1
})
return Object.entries(counts).sort((a, b) => b[1] - a[1])[0]?.[0] || 'general'
}
const techEntry = {
title: summary.slice(0, 60),
sub_feature: detectSubFeature(gitStat),
date: new Date().toISOString().split('T')[0],
description: summary.slice(0, 100),
status: 'completed',
session_id: sessionContext ? sessionFolders[sessionFolders.length - 1].match(/lite-plan\/([^/]+)/)?.[1] : null
}
```
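The category detector above is a plain keyword heuristic; repeated here verbatim so its classifications can be checked standalone:

```javascript
// Verbatim copy of the session-sync category detector.
function detectCategory(text) {
  text = text.toLowerCase()
  if (/\b(fix|bug|error|crash)\b/.test(text)) return 'bugfix'
  if (/\b(refactor|cleanup|reorganize)\b/.test(text)) return 'refactor'
  if (/\b(doc|readme|comment)\b/.test(text)) return 'docs'
  if (/\b(add|new|create|implement)\b/.test(text)) return 'feature'
  return 'enhancement'
}

console.log(detectCategory('Fix N+1 query in order listing')) // bugfix
console.log(detectCategory('Add JWT auth flow'))              // feature
console.log(detectCategory('Tighten retry backoff'))          // enhancement
```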
### Step 3: Preview & Confirm
```javascript
// Show preview
console.log(`
-- Sync Preview --
Guidelines (${guidelineUpdates.length} items):
${guidelineUpdates.map(g => ` [${g.type}/${g.category}] ${g.text}`).join('\n') || ' (none)'}
Tech [${detectCategory(summary)}]:
${techEntry.title}
Target files:
.ccw/specs/*.md
.workflow/project-tech.json
`)
if (!AUTO_YES) {
const approved = CONFIRM("Apply these updates? (modify/skip items if needed)") // BLOCKS (wait for user response)
if (!approved) {
console.log('Sync cancelled.')
return
}
}
```
### Step 4: Write
```javascript
// -- Update specs/*.md --
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
if (guidelineUpdates.length > 0) {
// Map guideline types to spec files
const specFileMap = {
convention: '.ccw/specs/coding-conventions.md',
constraint: '.ccw/specs/architecture-constraints.md',
learning: '.ccw/specs/coding-conventions.md' // learnings appended to conventions
}
for (const g of guidelineUpdates) {
const targetFile = specFileMap[g.type]
const existing = Read(targetFile)
const ruleText = g.type === 'learning'
? `- [${g.category}] ${g.text} (learned: ${new Date().toISOString().split('T')[0]})`
: `- [${g.category}] ${g.text}`
// Deduplicate: skip if text already in file
if (!existing.includes(g.text)) {
const newContent = existing.trimEnd() + '\n' + ruleText + '\n'
Write(targetFile, newContent)
}
}
// Rebuild spec index after writing
Bash('ccw spec rebuild')
}
// -- Update project-tech.json --
const techPath = '.workflow/project-tech.json'
const tech = JSON.parse(Read(techPath))
if (!tech.development_index) {
tech.development_index = { feature: [], enhancement: [], bugfix: [], refactor: [], docs: [] }
}
const category = detectCategory(summary)
tech.development_index[category].push(techEntry)
tech._metadata = tech._metadata || {} // scaffolded files may lack metadata
tech._metadata.last_updated = new Date().toISOString()
Write(techPath, JSON.stringify(tech, null, 2))
```
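For illustration, a successful sync appends one entry under the detected category in `project-tech.json`; all field values below are hypothetical:

```json
{
  "development_index": {
    "feature": [
      {
        "title": "Added JWT auth flow",
        "sub_feature": "auth",
        "date": "2026-03-06",
        "description": "Added JWT auth flow",
        "status": "completed",
        "session_id": null
      }
    ]
  },
  "_metadata": { "last_updated": "2026-03-06T08:00:00.000Z" }
}
```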
### Step 5: Confirm
```
Synced: ${guidelineUpdates.length} guidelines + 1 tech entry [${category}]
```
## Error Handling
| Error | Resolution |
|-------|------------|
| File missing | Create scaffold (same as $spec-setup Step 4) |
| No git history | Use user summary or session context only |
| No meaningful updates | Skip guidelines, still add tech entry |
| Duplicate entry | Skip silently (dedup check in Step 4) |
## Related Commands
- `$spec-setup` - Initialize project with specs scaffold
- `$spec-add` - Interactive wizard to create individual specs with scope selection
- `$workflow-plan` - Start planning with initialized project context

View File

@@ -0,0 +1,613 @@
---
name: spec-add
description: Add specs, conventions, constraints, or learnings to project guidelines interactively or automatically
argument-hint: "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] [--dimension <specs|personal>] [--scope <global|project>] [--interactive] \"rule text\""
allowed-tools: AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# Spec Add Command
## Overview
Unified command for adding specs one at a time. Supports both interactive wizard mode and direct CLI mode.
**Key Features**:
- Supports both project specs and personal specs
- Scope selection (global vs project) for personal specs
- Category-based organization for workflow stages
- Interactive wizard mode with smart defaults
- Direct CLI mode with auto-detection of type and category
- Auto-confirm mode (`-y`/`--yes`) for scripted usage
## Use Cases
1. **During Session**: Capture important decisions as they're made
2. **After Session**: Reflect on lessons learned before archiving
3. **Proactive**: Add team conventions or architectural rules
4. **Interactive**: Guided wizard for adding rules with full control over dimension, scope, and category
## Usage
```bash
$spec-add # Interactive wizard (all prompts)
$spec-add --interactive # Explicit interactive wizard
$spec-add "Use async/await instead of callbacks" # Direct mode (auto-detect type)
$spec-add -y "No direct DB access" --type constraint # Auto-confirm, skip confirmation
$spec-add --scope global --dimension personal # Create global personal spec (interactive)
$spec-add --dimension specs --category exploration # Project spec in exploration category (interactive)
$spec-add "Cache invalidation requires event sourcing" --type learning --category architecture
```
## Parameters
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `rule` | string | Yes (unless `--interactive`) | - | The rule, convention, or insight to add |
| `--type` | enum | No | auto-detect | Type: `convention`, `constraint`, `learning` |
| `--category` | string | No | auto-detect / `general` | Category for organization (see categories below) |
| `--dimension` | enum | No | Interactive | `specs` (project) or `personal` |
| `--scope` | enum | No | `project` | `global` or `project` (only for personal dimension) |
| `--interactive` | flag | No | - | Launch full guided wizard for adding rules |
| `-y` / `--yes` | flag | No | - | Auto-categorize and add without confirmation |
### Type Categories
**convention** - Coding style preferences (goes to `conventions` section)
- Subcategories: `coding_style`, `naming_patterns`, `file_structure`, `documentation`
**constraint** - Hard rules that must not be violated (goes to `constraints` section)
- Subcategories: `architecture`, `tech_stack`, `performance`, `security`
**learning** - Session-specific insights (goes to `learnings` array)
- Subcategories: `architecture`, `performance`, `security`, `testing`, `process`, `other`
### Workflow Stage Categories (for `--category`)
| Category | Use Case | Example Rules |
|----------|----------|---------------|
| `general` | Applies to all stages | "Use TypeScript strict mode" |
| `exploration` | Code exploration, debugging | "Always trace the call stack before modifying" |
| `planning` | Task planning, requirements | "Break down tasks into 2-hour chunks" |
| `execution` | Implementation, testing | "Run tests after each file modification" |
## Execution Process
```
Input Parsing:
|- Parse: rule text (positional argument, optional if --interactive)
|- Parse: --type (convention|constraint|learning)
|- Parse: --category (subcategory)
|- Parse: --dimension (specs|personal)
|- Parse: --scope (global|project)
|- Parse: --interactive (flag)
+- Parse: -y / --yes (flag)
Step 1: Parse Input
Step 2: Determine Mode
|- If --interactive OR no rule text -> Full Interactive Wizard (Path A)
+- If rule text provided -> Direct Mode (Path B)
Path A: Interactive Wizard
|- Step A1: Ask dimension (if not specified)
|- Step A2: Ask scope (if personal + scope not specified)
|- Step A3: Ask category (if not specified)
|- Step A4: Ask type (convention|constraint|learning)
|- Step A5: Ask content (rule text)
+- Continue to Step 3
Path B: Direct Mode
|- Step B1: Auto-detect type (if not specified) using detectType()
|- Step B2: Auto-detect category (if not specified) using detectCategory()
|- Step B3: Default dimension to 'specs' if not specified
+- Continue to Step 3
Step 3: Determine Target File
|- specs dimension -> .ccw/specs/coding-conventions.md or architecture-constraints.md
+- personal dimension -> ~/.ccw/personal/ or .ccw/personal/
Step 4: Validate and Write Spec
|- Ensure target directory and file exist
|- Check for duplicates
|- Append rule to appropriate section
+- Run ccw spec rebuild
Step 5: Display Confirmation
+- If -y/--yes: Minimal output
+- Otherwise: Full confirmation with location details
```
## Implementation
### Step 1: Parse Input
```javascript
// Parse arguments
const args = "$ARGUMENTS"
const argsLower = args.toLowerCase()
// Extract flags
const AUTO_YES = argsLower.includes('--yes') || argsLower.includes('-y')
const isInteractive = argsLower.includes('--interactive')
// Extract named parameters
const hasType = argsLower.includes('--type')
const hasCategory = argsLower.includes('--category')
const hasDimension = argsLower.includes('--dimension')
const hasScope = argsLower.includes('--scope')
let type = hasType ? args.match(/--type\s+(\w+)/i)?.[1]?.toLowerCase() : null
let category = hasCategory ? args.match(/--category\s+(\w+)/i)?.[1]?.toLowerCase() : null
let dimension = hasDimension ? args.match(/--dimension\s+(\w+)/i)?.[1]?.toLowerCase() : null
let scope = hasScope ? args.match(/--scope\s+(\w+)/i)?.[1]?.toLowerCase() : null
// Extract rule text (everything before flags, or quoted string)
let ruleText = args
  .replace(/--type\s+\w+/gi, '')
  .replace(/--category\s+\w+/gi, '')
  .replace(/--dimension\s+\w+/gi, '')
  .replace(/--scope\s+\w+/gi, '')
  .replace(/--interactive/gi, '')
  .replace(/--yes/gi, '')
  .replace(/-y\b/gi, '')
  .trim() // trim BEFORE stripping quotes, or trailing whitespace shields the closing quote
  .replace(/^["']|["']$/g, '')
  .trim()
// Validate values
if (scope && !['global', 'project'].includes(scope)) {
console.log("Invalid scope. Use 'global' or 'project'.")
return
}
if (dimension && !['specs', 'personal'].includes(dimension)) {
console.log("Invalid dimension. Use 'specs' or 'personal'.")
return
}
if (type && !['convention', 'constraint', 'learning'].includes(type)) {
console.log("Invalid type. Use 'convention', 'constraint', or 'learning'.")
return
}
if (category) {
const validCategories = [
'general', 'exploration', 'planning', 'execution',
'coding_style', 'naming_patterns', 'file_structure', 'documentation',
'architecture', 'tech_stack', 'performance', 'security',
'testing', 'process', 'other'
]
if (!validCategories.includes(category)) {
console.log(`Invalid category. Valid categories: ${validCategories.join(', ')}`)
return
}
}
```
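The rule-text extraction can be sketched as a standalone function (flag patterns condensed; trimming must happen before the surrounding quotes are stripped, otherwise leftover whitespace shields the closing quote):

```javascript
// Standalone sketch of the rule-text extraction pipeline.
function extractRuleText(args) {
  return args
    .replace(/--(type|category|dimension|scope)\s+\w+/gi, '') // drop flags with values
    .replace(/--interactive|--yes/gi, '')                     // drop bare flags
    .replace(/-y\b/gi, '')
    .trim()                                                   // trim first...
    .replace(/^["']|["']$/g, '')                              // ...then strip quotes
    .trim()
}

console.log(extractRuleText('-y "No direct DB access" --type constraint'))
// No direct DB access
```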
### Step 2: Determine Mode
```javascript
const useInteractiveWizard = isInteractive || !ruleText
```
### Path A: Interactive Wizard
```javascript
if (useInteractiveWizard) {
// --- Step A1: Ask dimension (if not specified) ---
if (!dimension) {
if (AUTO_YES) {
dimension = 'specs' // Default to project specs in auto mode
} else {
const dimensionAnswer = ASK_USER([
{
id: "dimension", type: "select",
prompt: "What type of spec do you want to create?",
options: [
{ label: "Project Spec", description: "Coding conventions, constraints, quality rules for this project (stored in .ccw/specs/)" },
{ label: "Personal Spec", description: "Personal preferences and constraints that follow you across projects (stored in ~/.ccw/personal/ or .ccw/personal/)" }
]
}
]) // BLOCKS (wait for user response)
dimension = dimensionAnswer.dimension === "Project Spec" ? "specs" : "personal"
}
}
// --- Step A2: Ask scope (if personal + scope not specified) ---
if (dimension === 'personal' && !scope) {
if (AUTO_YES) {
scope = 'project' // Default to project scope in auto mode
} else {
const scopeAnswer = ASK_USER([
{
id: "scope", type: "select",
prompt: "Where should this personal spec be stored?",
options: [
{ label: "Global (Recommended)", description: "Apply to ALL projects (~/.ccw/personal/)" },
{ label: "Project-only", description: "Apply only to this project (.ccw/personal/)" }
]
}
]) // BLOCKS (wait for user response)
scope = scopeAnswer.scope.includes("Global") ? "global" : "project"
}
}
// --- Step A3: Ask category (if not specified) ---
if (!category) {
if (AUTO_YES) {
category = 'general' // Default to general in auto mode
} else {
const categoryAnswer = ASK_USER([
{
id: "category", type: "select",
prompt: "Which workflow stage does this spec apply to?",
options: [
{ label: "General (Recommended)", description: "Applies to all stages (default)" },
{ label: "Exploration", description: "Code exploration, analysis, debugging" },
{ label: "Planning", description: "Task planning, requirements gathering" },
{ label: "Execution", description: "Implementation, testing, deployment" }
]
}
]) // BLOCKS (wait for user response)
const categoryLabel = categoryAnswer.category
category = categoryLabel.includes("General") ? "general"
: categoryLabel.includes("Exploration") ? "exploration"
: categoryLabel.includes("Planning") ? "planning"
: "execution"
}
}
// --- Step A4: Ask type (if not specified) ---
if (!type) {
if (AUTO_YES) {
type = 'convention' // Default to convention in auto mode
} else {
const typeAnswer = ASK_USER([
{
id: "type", type: "select",
prompt: "What type of rule is this?",
options: [
{ label: "Convention", description: "Coding style preference (e.g., use functional components)" },
{ label: "Constraint", description: "Hard rule that must not be violated (e.g., no direct DB access)" },
{ label: "Learning", description: "Insight or lesson learned (e.g., cache invalidation needs events)" }
]
}
]) // BLOCKS (wait for user response)
const typeLabel = typeAnswer.type
type = typeLabel.includes("Convention") ? "convention"
: typeLabel.includes("Constraint") ? "constraint"
: "learning"
}
}
// --- Step A5: Ask content (rule text) ---
if (!ruleText) {
if (AUTO_YES) {
console.log("Error: Rule text is required in auto mode. Provide rule text as argument.")
return
}
const contentAnswer = ASK_USER([
{
id: "content", type: "text",
prompt: "Enter the rule or guideline text:"
}
]) // BLOCKS (wait for user response)
ruleText = contentAnswer.content
}
}
```
### Path B: Direct Mode
**Auto-detect type if not specified**:
```javascript
function detectType(ruleText) {
const text = ruleText.toLowerCase();
// Constraint indicators
if (/\b(no|never|must not|forbidden|prohibited|always must)\b/.test(text)) {
return 'constraint';
}
// Learning indicators
if (/\b(learned|discovered|realized|found that|turns out)\b/.test(text)) {
return 'learning';
}
// Default to convention
return 'convention';
}
function detectCategory(ruleText, type) {
const text = ruleText.toLowerCase();
if (type === 'constraint' || type === 'learning') {
if (/\b(architecture|layer|module|dependency|circular)\b/.test(text)) return 'architecture';
if (/\b(security|auth|permission|sanitize|xss|sql)\b/.test(text)) return 'security';
if (/\b(performance|cache|lazy|async|sync|slow)\b/.test(text)) return 'performance';
if (/\b(test|coverage|mock|stub)\b/.test(text)) return 'testing';
}
if (type === 'convention') {
if (/\b(name|naming|prefix|suffix|camel|pascal)\b/.test(text)) return 'naming_patterns';
if (/\b(file|folder|directory|structure|organize)\b/.test(text)) return 'file_structure';
if (/\b(doc|comment|jsdoc|readme)\b/.test(text)) return 'documentation';
return 'coding_style';
}
return type === 'constraint' ? 'tech_stack' : 'other';
}
if (!useInteractiveWizard) {
if (!type) {
type = detectType(ruleText)
}
if (!category) {
category = detectCategory(ruleText, type)
}
if (!dimension) {
dimension = 'specs' // Default to project specs in direct mode
}
}
```
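The two detectors above are repeated verbatim below so a few representative inputs can be checked standalone:

```javascript
// Verbatim copies of the spec-add auto-detectors.
function detectType(ruleText) {
  const text = ruleText.toLowerCase();
  if (/\b(no|never|must not|forbidden|prohibited|always must)\b/.test(text)) return 'constraint';
  if (/\b(learned|discovered|realized|found that|turns out)\b/.test(text)) return 'learning';
  return 'convention';
}

function detectCategory(ruleText, type) {
  const text = ruleText.toLowerCase();
  if (type === 'constraint' || type === 'learning') {
    if (/\b(architecture|layer|module|dependency|circular)\b/.test(text)) return 'architecture';
    if (/\b(security|auth|permission|sanitize|xss|sql)\b/.test(text)) return 'security';
    if (/\b(performance|cache|lazy|async|sync|slow)\b/.test(text)) return 'performance';
    if (/\b(test|coverage|mock|stub)\b/.test(text)) return 'testing';
  }
  if (type === 'convention') {
    if (/\b(name|naming|prefix|suffix|camel|pascal)\b/.test(text)) return 'naming_patterns';
    if (/\b(file|folder|directory|structure|organize)\b/.test(text)) return 'file_structure';
    if (/\b(doc|comment|jsdoc|readme)\b/.test(text)) return 'documentation';
    return 'coding_style';
  }
  return type === 'constraint' ? 'tech_stack' : 'other';
}

const t1 = detectType('Never store auth tokens in localStorage');
console.log(t1, detectCategory('Never store auth tokens in localStorage', t1));
// constraint security
const t2 = detectType('Use 2-space indentation everywhere');
console.log(t2, detectCategory('Use 2-space indentation everywhere', t2));
// convention coding_style
```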
### Step 3: Ensure Guidelines File Exists
**Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)**
```bash
Bash('test -f .ccw/specs/coding-conventions.md && echo "EXISTS" || echo "NOT_FOUND"')
```
**If NOT_FOUND**, initialize spec system:
```bash
Bash('ccw spec init')
Bash('ccw spec rebuild')
```
### Step 4: Determine Target File
```javascript
const path = require('path')
const os = require('os')
const isConvention = type === 'convention'
const isConstraint = type === 'constraint'
const isLearning = type === 'learning'
let targetFile
let targetDir
if (dimension === 'specs') {
// Project specs - use .ccw/specs/ (same as frontend/backend spec-index-builder)
targetDir = '.ccw/specs'
if (isConstraint) {
targetFile = path.join(targetDir, 'architecture-constraints.md')
} else {
targetFile = path.join(targetDir, 'coding-conventions.md')
}
} else {
// Personal specs - use .ccw/personal/ (same as backend spec-index-builder)
if (scope === 'global') {
targetDir = path.join(os.homedir(), '.ccw', 'personal')
} else {
targetDir = path.join('.ccw', 'personal')
}
// Create type-based filename
const typePrefix = isConstraint ? 'constraints' : isLearning ? 'learnings' : 'conventions'
targetFile = path.join(targetDir, `${typePrefix}.md`)
}
```
### Step 5: Build Entry
```javascript
function buildEntry(rule, type, category, sessionId) {
if (type === 'learning') {
return {
date: new Date().toISOString().split('T')[0],
session_id: sessionId || null,
insight: rule,
category: category,
context: null
};
}
// For conventions and constraints, just return the rule string
return rule;
}
```
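Repeated standalone, the entry builder shows the asymmetry: learnings become dated objects, while conventions and constraints stay plain strings:

```javascript
// Verbatim copy of buildEntry for a standalone check.
function buildEntry(rule, type, category, sessionId) {
  if (type === 'learning') {
    return {
      date: new Date().toISOString().split('T')[0], // YYYY-MM-DD
      session_id: sessionId || null,
      insight: rule,
      category: category,
      context: null
    };
  }
  // Conventions and constraints are appended as plain rule strings
  return rule;
}

const learning = buildEntry('Cache invalidation needs events', 'learning', 'performance');
console.log(learning.insight);
// Cache invalidation needs events
console.log(buildEntry('Use 2-space indent', 'convention', 'coding_style'));
// Use 2-space indent
```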
### Step 6: Write Spec
```javascript
const fs = require('fs')
// Ensure directory exists
if (!fs.existsSync(targetDir)) {
fs.mkdirSync(targetDir, { recursive: true })
}
// Check if file exists
const fileExists = fs.existsSync(targetFile)
if (!fileExists) {
// Create new file with frontmatter
const frontmatter = `---
title: ${dimension === 'specs' ? 'Project' : 'Personal'} ${isConstraint ? 'Constraints' : isLearning ? 'Learnings' : 'Conventions'}
readMode: optional
priority: medium
category: ${category}
scope: ${dimension === 'personal' ? scope : 'project'}
dimension: ${dimension}
keywords: [${category}, ${isConstraint ? 'constraint' : isLearning ? 'learning' : 'convention'}]
---
# ${dimension === 'specs' ? 'Project' : 'Personal'} ${isConstraint ? 'Constraints' : isLearning ? 'Learnings' : 'Conventions'}
`
fs.writeFileSync(targetFile, frontmatter, 'utf8')
}
// Read existing content
let content = fs.readFileSync(targetFile, 'utf8')
// Deduplicate: skip if rule text already exists in the file
if (content.includes(ruleText)) {
console.log(`
Rule already exists in ${targetFile}
Text: "${ruleText}"
`)
return
}
// Format the new rule based on type
let newRule
if (isLearning) {
const entry = buildEntry(ruleText, type, category)
newRule = `- [learning/${category}] ${entry.insight} (${entry.date})`
} else {
newRule = `- [${category}] ${ruleText}`
}
// Append the rule
content = content.trimEnd() + '\n' + newRule + '\n'
fs.writeFileSync(targetFile, content, 'utf8')
// Rebuild spec index
Bash('ccw spec rebuild')
```
### Step 7: Display Confirmation
**If `-y`/`--yes` (auto mode)**:
```
Spec added: [${type}/${category}] "${ruleText}" -> ${targetFile}
```
**Otherwise (full confirmation)**:
```
Spec created successfully
Dimension: ${dimension}
Scope: ${dimension === 'personal' ? scope : 'project'}
Category: ${category}
Type: ${type}
Rule: "${ruleText}"
Location: ${targetFile}
Use 'ccw spec list' to view all specs
Use 'ccw spec load --category ${category}' to load specs by category
```
## Target File Resolution
### Project Specs (dimension: specs)
```
.ccw/specs/
|- coding-conventions.md <- conventions, learnings
|- architecture-constraints.md <- constraints
+- quality-rules.md <- quality rules
```
### Personal Specs (dimension: personal)
```
# Global (~/.ccw/personal/)
~/.ccw/personal/
|- conventions.md <- personal conventions (all projects)
|- constraints.md <- personal constraints (all projects)
+- learnings.md <- personal learnings (all projects)
# Project-local (.ccw/personal/)
.ccw/personal/
|- conventions.md <- personal conventions (this project only)
|- constraints.md <- personal constraints (this project only)
+- learnings.md <- personal learnings (this project only)
```
## Examples
### Interactive Wizard
```bash
$spec-add --interactive
# Prompts for: dimension -> scope (if personal) -> category -> type -> content
```
### Add a Convention (Direct)
```bash
$spec-add "Use async/await instead of callbacks" --type convention --category coding_style
```
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [coding_style] Use async/await instead of callbacks
```
### Add an Architectural Constraint (Direct)
```bash
$spec-add "No direct DB access from controllers" --type constraint --category architecture
```
Result in `.ccw/specs/architecture-constraints.md`:
```markdown
- [architecture] No direct DB access from controllers
```
### Capture a Learning (Direct, Auto-detect)
```bash
$spec-add "Cache invalidation requires event sourcing for consistency" --type learning
```
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [learning/performance] Cache invalidation requires event sourcing for consistency (2026-03-06)
```
### Auto-confirm Mode
```bash
$spec-add -y "No direct DB access from controllers" --type constraint
# Auto-detects category as 'tech_stack', writes without confirmation prompt
```
### Personal Spec (Global)
```bash
$spec-add --scope global --dimension personal --type convention "Prefer descriptive variable names"
```
Result in `~/.ccw/personal/conventions.md`:
```markdown
- [general] Prefer descriptive variable names
```
### Personal Spec (Project)
```bash
$spec-add --scope project --dimension personal --type constraint "No ORM in this project"
```
Result in `.ccw/personal/constraints.md`:
```markdown
- [general] No ORM in this project
```
## Error Handling
- **Duplicate Rule**: Warn and skip if exact rule text already exists in target file
- **Invalid Category**: Suggest valid categories for the type
- **Invalid Scope**: Exit with error - must be 'global' or 'project'
- **Invalid Dimension**: Exit with error - must be 'specs' or 'personal'
- **Invalid Type**: Exit with error - must be 'convention', 'constraint', or 'learning'
- **File not writable**: Check permissions, suggest manual creation
- **Invalid path**: Exit with error message
- **File Corruption**: Backup existing file before modification
## Related Commands
- `$spec-setup` - Initialize project with specs scaffold
- `$session-sync` - Quick-sync session work to specs and project-tech
- `$workflow-session-start` - Start a session
- `$workflow-session-complete` - Complete session (prompts for learnings)
- `ccw spec list` - View all specs
- `ccw spec load --category <cat>` - Load filtered specs
- `ccw spec rebuild` - Rebuild spec index

View File

@@ -0,0 +1,657 @@
---
name: spec-setup
description: Initialize project-level state and configure specs via interactive questionnaire using cli-explore-agent
argument-hint: "[--regenerate] [--skip-specs] [--reset]"
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# Workflow Spec Setup Command
## Overview
Initialize `.workflow/project-tech.json` and `.ccw/specs/*.md` with comprehensive project understanding by delegating analysis to **cli-explore-agent**, then interactively configure project guidelines through a multi-round questionnaire.
**Dual File System**:
- `project-tech.json`: Auto-generated technical analysis (stack, architecture, components)
- `specs/*.md`: User-maintained rules and constraints (created and populated interactively)
**Design Principle**: Questions are dynamically generated based on the project's tech stack, architecture, and patterns -- not generic boilerplate.
**Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow without interrupting the task flow.
## Usage
```bash
$spec-setup # Initialize (skip if exists)
$spec-setup --regenerate # Force regeneration of project-tech.json
$spec-setup --skip-specs # Initialize project-tech only, skip spec initialization and questionnaire
$spec-setup --reset # Reset specs content before questionnaire
```
## Execution Process
```
Input Parsing:
|- Parse --regenerate flag -> regenerate = true | false
|- Parse --skip-specs flag -> skipSpecs = true | false
+- Parse --reset flag -> reset = true | false
Decision:
|- BOTH_EXIST + no --regenerate + no --reset -> Exit: "Already initialized"
|- EXISTS + --regenerate -> Backup existing -> Continue analysis
|- EXISTS + --reset -> Reset specs, keep project-tech -> Skip to questionnaire
+- NOT_FOUND -> Continue full flow
Full Flow:
|- Step 1: Parse input and check existing state
|- Step 2: Get project metadata (name, root)
|- Step 3: Invoke cli-explore-agent (subagent)
| |- Structural scan (get_modules_by_depth.sh, find, wc)
| |- Semantic analysis (Gemini CLI)
| |- Synthesis and merge
| +- Write .workflow/project-tech.json
|- Step 4: Initialize Spec System (if not --skip-specs)
| |- Check if specs/*.md exist
| |- If NOT_FOUND -> Run ccw spec init
| +- Run ccw spec rebuild
|- Step 5: Multi-Round Interactive Questionnaire (if not --skip-specs)
| |- Check if guidelines already populated -> Ask: "Append / Reset / Cancel"
| |- Load project context from project-tech.json
| |- Round 1: Coding Conventions (coding_style, naming_patterns)
| |- Round 2: File & Documentation Conventions (file_structure, documentation)
| |- Round 3: Architecture & Tech Constraints (architecture, tech_stack)
| |- Round 4: Performance & Security Constraints (performance, security)
| +- Round 5: Quality Rules (quality_rules)
|- Step 6: Write specs/*.md (if not --skip-specs)
+- Step 7: Display Summary
Output:
|- .workflow/project-tech.json (+ .backup if regenerate)
+- .ccw/specs/*.md (scaffold or configured, unless --skip-specs)
```
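The state/flag decision matrix above can be sketched as a pure function (a sketch for illustration; the function name and return labels are not part of the command itself):

```javascript
// Decide the entry point from the Step 1 existence checks and parsed flags.
// Check order mirrors the Decision block: regenerate before reset.
function decideFlow(techExists, specsExists, { regenerate = false, reset = false } = {}) {
  if (techExists && specsExists && !regenerate && !reset) return 'exit-already-initialized'
  if (techExists && regenerate) return 'backup-then-analyze'
  if (specsExists && reset) return 'skip-to-questionnaire'
  return 'full-flow'
}
```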
## Implementation
### Step 1: Parse Input and Check Existing State
**Parse flags**:
```javascript
const regenerate = $ARGUMENTS.includes('--regenerate')
const skipSpecs = $ARGUMENTS.includes('--skip-specs')
const reset = $ARGUMENTS.includes('--reset')
```
**Check existing state**:
```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .ccw/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
```
**If BOTH_EXIST and no --regenerate and no --reset**: Exit early
```
Project already initialized:
- Tech analysis: .workflow/project-tech.json
- Guidelines: .ccw/specs/*.md
Use $spec-setup --regenerate to rebuild tech analysis
Use $spec-setup --reset to reconfigure guidelines
Use $spec-add to add individual rules
Use $workflow-status --project to view state
```
### Step 2: Get Project Metadata
```bash
bash(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
bash(mkdir -p .workflow)
```
### Step 3: Invoke cli-explore-agent (Subagent)
**For --regenerate**: Backup and preserve existing data
```bash
bash(cp .workflow/project-tech.json .workflow/project-tech.json.backup)
```
**Delegate analysis to subagent**:
```javascript
let exploreAgent = null
try {
exploreAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: .workflow/project-tech.json (if exists, for --regenerate)
---
Analyze project for workflow initialization and generate .workflow/project-tech.json.
## MANDATORY FIRST STEPS
1. Execute: cat ~/.ccw/workflows/cli-templates/schemas/project-tech-schema.json (get schema reference)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
## Task
Generate complete project-tech.json following the schema structure:
- project_name: "${projectName}"
- initialized_at: ISO 8601 timestamp
- overview: {
description: "Brief project description",
technology_stack: {
languages: [{name, file_count, primary}],
frameworks: ["string"],
build_tools: ["string"],
test_frameworks: ["string"]
},
architecture: {style, layers: [], patterns: []},
key_components: [{name, path, description, importance}]
}
- features: []
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated: ISO timestamp}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp: ISO timestamp, analysis_mode: "deep-scan"}
## Analysis Requirements
**Technology Stack**:
- Languages: File counts, mark primary
- Frameworks: From package.json, requirements.txt, go.mod, etc.
- Build tools: npm, cargo, maven, webpack, vite
- Test frameworks: jest, pytest, go test, junit
**Architecture**:
- Style: MVC, microservices, layered (from structure & imports)
- Layers: presentation, business-logic, data-access
- Patterns: singleton, factory, repository
- Key components: 5-10 modules {name, path, description, importance}
## Execution
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved development_index and statistics from .workflow/project-tech.json.backup' : ''}
5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
6. Report: Return brief completion summary
Project root: ${projectRoot}
`
})
// Wait for completion
const result = wait({ ids: [exploreAgent], timeout_ms: 600000 })
if (result.timed_out) {
send_input({ id: exploreAgent, message: 'Complete analysis now and write project-tech.json.' })
const retry = wait({ ids: [exploreAgent], timeout_ms: 300000 })
if (retry.timed_out) throw new Error('Agent timeout')
}
} finally {
if (exploreAgent) close_agent({ id: exploreAgent })
}
```
### Step 4: Initialize Spec System (if not --skip-specs)
```javascript
// Skip spec initialization if --skip-specs flag is provided
if (!skipSpecs) {
// Initialize spec system if not already initialized
const specsCheck = Bash('test -f .ccw/specs/coding-conventions.md && echo EXISTS || echo NOT_FOUND')
if (specsCheck.includes('NOT_FOUND')) {
console.log('Initializing spec system...')
Bash('ccw spec init')
Bash('ccw spec rebuild')
}
} else {
console.log('Skipping spec initialization and questionnaire (--skip-specs)')
}
```
If `--skip-specs` is provided, skip directly to Step 7 (Display Summary) with limited output.
### Step 5: Multi-Round Interactive Questionnaire (if not --skip-specs)
#### Step 5.0: Check Existing Guidelines
If guidelines already have content, ask the user how to proceed:
```javascript
// Check if specs already have content via ccw spec list
const specsList = Bash('ccw spec list --json 2>/dev/null || echo "{}"')
const specsData = JSON.parse(specsList)
const isPopulated = (specsData.total || 0) > 5 // More than seed docs
if (isPopulated && !reset) {
const mode = ASK_USER([
{
id: "mode", type: "select",
prompt: "Project guidelines already contain entries. How would you like to proceed?",
options: [
{ label: "Append", description: "Keep existing entries and add new ones from the wizard" },
{ label: "Reset", description: "Clear all existing entries and start fresh" },
{ label: "Cancel", description: "Exit without changes" }
],
default: "Append"
}
]) // BLOCKS (wait for user response)
// If Cancel -> exit
// If Reset -> clear all arrays before proceeding
// If Append -> keep existing, wizard adds to them
}
// If --reset flag was provided, clear existing entries before proceeding
if (reset) {
// Reset specs content
console.log('Resetting existing guidelines...')
}
```
#### Step 5.1: Load Project Context
```javascript
// Load project context from project-tech.json (written by the agent in Step 3)
const projectContext = Read('.workflow/project-tech.json')
const techData = JSON.parse(projectContext)
// Extract key info for generating project-aware questions
const languages = techData.overview?.technology_stack?.languages || []
const primaryLang = languages.find(l => l.primary)?.name || languages[0]?.name || 'Unknown'
const frameworks = techData.overview?.technology_stack?.frameworks || []
const testFrameworks = techData.overview?.technology_stack?.test_frameworks || []
const archStyle = techData.overview?.architecture?.style || 'Unknown'
const archPatterns = techData.overview?.architecture?.patterns || []
const buildTools = techData.overview?.technology_stack?.build_tools || []
```
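For illustration, the same extraction applied to a sample project-tech object (the sample data below is hypothetical, not from a real project):

```javascript
// Hypothetical project-tech.json content for a TypeScript/React project
const sample = {
  overview: {
    technology_stack: {
      languages: [
        { name: 'TypeScript', file_count: 120, primary: true },
        { name: 'CSS', file_count: 14 }
      ],
      frameworks: ['React'],
      test_frameworks: ['jest'],
      build_tools: ['vite']
    },
    architecture: { style: 'layered monolith', patterns: ['repository'] }
  }
}

// Same optional-chaining pattern as Step 5.1 above
const langs = sample.overview?.technology_stack?.languages || []
const primary = langs.find(l => l.primary)?.name || langs[0]?.name || 'Unknown'
const arch = sample.overview?.architecture?.style || 'Unknown'
```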
#### Step 5.2: Multi-Round Questionnaire
Each round uses `ASK_USER` with project-aware options. The user can always select "Other" to provide custom input.
**CRITICAL**: After each round, collect the user's answers and convert them into guideline entries. Do NOT batch all rounds -- process each round's answers before proceeding to the next.
---
##### Round 1: Coding Conventions
Generate options dynamically based on detected language/framework:
```javascript
// Build language-specific coding style options
const codingStyleOptions = []
if (['TypeScript', 'JavaScript'].includes(primaryLang)) {
codingStyleOptions.push(
{ label: "Strict TypeScript", description: "Use strict mode, no 'any' type, explicit return types for public APIs" },
{ label: "Functional style", description: "Prefer pure functions, immutability, avoid class-based patterns where possible" },
{ label: "Const over let", description: "Always use const; only use let when reassignment is truly needed" }
)
} else if (primaryLang === 'Python') {
codingStyleOptions.push(
{ label: "Type hints", description: "Use type hints for all function signatures and class attributes" },
{ label: "Functional style", description: "Prefer pure functions, list comprehensions, avoid mutable state" },
{ label: "PEP 8 strict", description: "Strict PEP 8 compliance with max line length 88 (Black formatter)" }
)
} else if (primaryLang === 'Go') {
codingStyleOptions.push(
{ label: "Error wrapping", description: "Always wrap errors with context using fmt.Errorf with %w" },
{ label: "Interface first", description: "Define interfaces at the consumer side, not the provider" },
{ label: "Table-driven tests", description: "Use table-driven test pattern for all unit tests" }
)
}
// Add universal options
codingStyleOptions.push(
{ label: "Early returns", description: "Prefer early returns / guard clauses over deep nesting" }
)
// Round 1: Coding Conventions
const round1 = ASK_USER([
{
id: "coding_style", type: "multi-select",
prompt: `Your project uses ${primaryLang}. Which coding style conventions do you follow?`,
options: codingStyleOptions.slice(0, 4) // Max 4 options
},
{
id: "naming", type: "multi-select",
prompt: `What naming conventions does your ${primaryLang} project use?`,
options: [
{ label: "camelCase variables", description: "Variables and functions use camelCase (e.g., getUserName)" },
{ label: "PascalCase types", description: "Classes, interfaces, type aliases use PascalCase (e.g., UserService)" },
{ label: "UPPER_SNAKE constants", description: "Constants use UPPER_SNAKE_CASE (e.g., MAX_RETRIES)" },
{ label: "Prefix interfaces", description: "Prefix interfaces with 'I' (e.g., IUserService)" }
]
}
]) // BLOCKS (wait for user response)
```
**Process Round 1 answers** -> add to `conventions.coding_style` and `conventions.naming_patterns` arrays.
---
##### Round 2: File Structure & Documentation
```javascript
// Round 2: File Structure & Documentation
const round2 = ASK_USER([
{
id: "file_structure", type: "multi-select",
prompt: `Your project has a ${archStyle} architecture. What file organization rules apply?`,
options: [
{ label: "Co-located tests", description: "Test files live next to source files (e.g., foo.ts + foo.test.ts)" },
{ label: "Separate test dir", description: "Tests in a dedicated __tests__ or tests/ directory" },
{ label: "One export per file", description: "Each file exports a single main component/class/function" },
{ label: "Index barrels", description: "Use index.ts barrel files for clean imports from directories" }
]
},
{
id: "documentation", type: "multi-select",
prompt: "What documentation standards does your project follow?",
options: [
{ label: "JSDoc/docstring public APIs", description: "All public functions and classes must have JSDoc/docstrings" },
{ label: "README per module", description: "Each major module/package has its own README" },
{ label: "Inline comments for why", description: "Comments explain 'why', not 'what' -- code should be self-documenting" },
{ label: "No comment requirement", description: "Code should be self-explanatory; comments only for non-obvious logic" }
]
}
]) // BLOCKS (wait for user response)
```
**Process Round 2 answers** -> add to `conventions.file_structure` and `conventions.documentation`.
---
##### Round 3: Architecture & Tech Stack Constraints
```javascript
// Build architecture-specific options
const archOptions = []
if (archStyle.toLowerCase().includes('monolith')) {
archOptions.push(
{ label: "No circular deps", description: "Modules must not have circular dependencies" },
{ label: "Layer boundaries", description: "Strict layer separation: UI -> Service -> Data (no skipping layers)" }
)
} else if (archStyle.toLowerCase().includes('microservice')) {
archOptions.push(
{ label: "Service isolation", description: "Services must not share databases or internal state" },
{ label: "API contracts", description: "All inter-service communication through versioned API contracts" }
)
}
archOptions.push(
{ label: "Stateless services", description: "Service/business logic must be stateless (state in DB/cache only)" },
{ label: "Dependency injection", description: "Use dependency injection for testability, no hardcoded dependencies" }
)
// Round 3: Architecture & Tech Stack Constraints
const round3 = ASK_USER([
{
id: "architecture", type: "multi-select",
prompt: `Your ${archStyle} architecture uses ${archPatterns.join(', ') || 'various'} patterns. What architecture constraints apply?`,
options: archOptions.slice(0, 4)
},
{
id: "tech_stack", type: "multi-select",
prompt: `Tech stack: ${frameworks.join(', ')}. What technology constraints apply?`,
options: [
{ label: "No new deps without review", description: "Adding new dependencies requires explicit justification and review" },
{ label: "Pin dependency versions", description: "All dependencies must use exact versions, not ranges" },
{ label: "Prefer native APIs", description: "Use built-in/native APIs over third-party libraries when possible" },
{ label: "Framework conventions", description: `Follow official ${frameworks[0] || 'framework'} conventions and best practices` }
]
}
]) // BLOCKS (wait for user response)
```
**Process Round 3 answers** -> add to `constraints.architecture` and `constraints.tech_stack`.
---
##### Round 4: Performance & Security Constraints
```javascript
// Round 4: Performance & Security Constraints
const round4 = ASK_USER([
{
id: "performance", type: "multi-select",
prompt: "What performance requirements does your project have?",
options: [
{ label: "API response time", description: "API endpoints must respond within 200ms (p95)" },
{ label: "Bundle size limit", description: "Frontend bundle size must stay under 500KB gzipped" },
{ label: "Lazy loading", description: "Large modules/routes must use lazy loading / code splitting" },
{ label: "No N+1 queries", description: "Database access must avoid N+1 query patterns" }
]
},
{
id: "security", type: "multi-select",
prompt: "What security requirements does your project enforce?",
options: [
{ label: "Input sanitization", description: "All user input must be validated and sanitized before use" },
{ label: "No secrets in code", description: "No API keys, passwords, or tokens in source code -- use env vars" },
{ label: "Auth on all endpoints", description: "All API endpoints require authentication unless explicitly public" },
{ label: "Parameterized queries", description: "All database queries must use parameterized/prepared statements" }
]
}
]) // BLOCKS (wait for user response)
```
**Process Round 4 answers** -> add to `constraints.performance` and `constraints.security`.
---
##### Round 5: Quality Rules
```javascript
// Round 5: Quality Rules
const round5 = ASK_USER([
{
id: "quality", type: "multi-select",
prompt: `Testing with ${testFrameworks.join(', ') || 'your test framework'}. What quality rules apply?`,
options: [
{ label: "Min test coverage", description: "Minimum 80% code coverage for new code; no merging below threshold" },
{ label: "No skipped tests", description: "Tests must not be skipped (.skip/.only) in committed code" },
{ label: "Lint must pass", description: "All code must pass linter checks before commit (enforced by pre-commit)" },
{ label: "Type check must pass", description: "Full type checking (tsc --noEmit) must pass with zero errors" }
]
}
]) // BLOCKS (wait for user response)
```
**Process Round 5 answers** -> add to `quality_rules` array as `{ rule, scope, enforced_by }` objects.
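That conversion can be sketched as follows (a minimal sketch; the option shape and the default `scope`/`enforced_by` values follow the Answer Processing Rules below, but the helper name is illustrative):

```javascript
// Convert selected Round 5 options into quality-rule entries.
function toQualityRules(selectedOptions) {
  return selectedOptions.map(opt => ({
    rule: opt.description, // the description is more precise than the label
    scope: 'all',
    enforced_by: 'code-review'
  }))
}

const entries = toQualityRules([
  { label: 'Lint must pass', description: 'All code must pass linter checks before commit' }
])
```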
### Step 6: Write specs/*.md (if not --skip-specs)
For each category of collected answers, append rules to the corresponding spec MD file. Each spec file uses YAML frontmatter with `readMode`, `priority`, `category`, and `keywords`.
**Category Assignment**: Based on the round and question type:
- Round 1-2 (conventions): `category: general` (applies to all stages)
- Round 3 (architecture/tech): `category: planning` (planning phase)
- Round 4 (performance/security): `category: execution` (implementation phase)
- Round 5 (quality): `category: execution` (testing phase)
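The assignment above can be captured in a small lookup (a sketch; the mapping is exactly the one listed, with `general` as a safe fallback):

```javascript
// Spec category per questionnaire round, as documented above.
const ROUND_CATEGORY = { 1: 'general', 2: 'general', 3: 'planning', 4: 'execution', 5: 'execution' }

function categoryForRound(round) {
  return ROUND_CATEGORY[round] || 'general'
}
```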
```javascript
// Helper: append rules to a spec MD file with category support
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
function appendRulesToSpecFile(filePath, rules, defaultCategory = 'general') {
if (rules.length === 0) return
  // Ensure .ccw/specs/ directory exists (use the same Bash tool as other steps)
  Bash('mkdir -p .ccw/specs')
// Check if file exists
if (!file_exists(filePath)) {
// Create file with frontmatter including category
const frontmatter = `---
title: ${filePath.includes('conventions') ? 'Coding Conventions' : filePath.includes('constraints') ? 'Architecture Constraints' : 'Quality Rules'}
readMode: optional
priority: medium
category: ${defaultCategory}
scope: project
dimension: specs
keywords: [${defaultCategory}, ${filePath.includes('conventions') ? 'convention' : filePath.includes('constraints') ? 'constraint' : 'quality'}]
---
# ${filePath.includes('conventions') ? 'Coding Conventions' : filePath.includes('constraints') ? 'Architecture Constraints' : 'Quality Rules'}
`
Write(filePath, frontmatter)
}
const existing = Read(filePath)
// Append new rules as markdown list items after existing content
const newContent = existing.trimEnd() + '\n' + rules.map(r => `- ${r}`).join('\n') + '\n'
Write(filePath, newContent)
}
// Write conventions (general category) - use .ccw/specs/ (same as frontend/backend)
appendRulesToSpecFile('.ccw/specs/coding-conventions.md',
[...newCodingStyle, ...newNamingPatterns, ...newFileStructure, ...newDocumentation],
'general')
// Write constraints (planning category)
appendRulesToSpecFile('.ccw/specs/architecture-constraints.md',
[...newArchitecture, ...newTechStack, ...newPerformance, ...newSecurity],
'planning')
// Write quality rules (execution category)
if (newQualityRules.length > 0) {
const qualityPath = '.ccw/specs/quality-rules.md'
if (!file_exists(qualityPath)) {
Write(qualityPath, `---
title: Quality Rules
readMode: required
priority: high
category: execution
scope: project
dimension: specs
keywords: [execution, quality, testing, coverage, lint]
---
# Quality Rules
`)
}
appendRulesToSpecFile(qualityPath,
newQualityRules.map(q => `${q.rule} (scope: ${q.scope}, enforced by: ${q.enforced_by})`),
'execution')
}
// Rebuild spec index after writing
Bash('ccw spec rebuild')
```
#### Answer Processing Rules
When converting user selections to guideline entries:
1. **Selected option** -> Use the option's `description` as the guideline string (it's more precise than the label)
2. **"Other" with custom text** -> Use the user's text directly as the guideline string
3. **Deduplication** -> Skip entries that already exist in the guidelines (exact string match)
4. **Quality rules** -> Convert to `{ rule: description, scope: "all", enforced_by: "code-review" }` format
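Rules 1-3 above can be sketched together (helper and variable names are illustrative):

```javascript
// Prefer option descriptions, accept custom "Other" text verbatim,
// and skip entries that already exist (exact string match).
function collectGuidelines(existing, selections, otherText) {
  const candidates = selections.map(s => s.description)
  if (otherText) candidates.push(otherText)
  const seen = new Set(existing)
  const added = []
  for (const c of candidates) {
    if (!seen.has(c)) { seen.add(c); added.push(c) }
  }
  return [...existing, ...added]
}

const result = collectGuidelines(
  ['Prefer early returns'],
  [
    { label: 'Early returns', description: 'Prefer early returns' },       // duplicate, skipped
    { label: 'Strict TS', description: "Use strict mode, no 'any' type" }  // new, kept
  ],
  'Max function length 40 lines' // custom "Other" text, kept verbatim
)
```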
### Step 7: Display Summary
```javascript
const projectTech = JSON.parse(Read('.workflow/project-tech.json'));
if (skipSpecs) {
// Minimal summary for --skip-specs mode
console.log(`
Project initialized successfully (tech analysis only)
## Project Overview
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}
### Technology Stack
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
### Architecture
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules
---
Files created:
- Tech analysis: .workflow/project-tech.json
- Specs: (skipped via --skip-specs)
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
Next steps:
- Use $spec-setup (without --skip-specs) to configure guidelines
- Use $spec-add to create individual specs
- Use $workflow-plan to start planning
`);
} else {
// Full summary with guidelines stats
const countConventions = newCodingStyle.length + newNamingPatterns.length
+ newFileStructure.length + newDocumentation.length
const countConstraints = newArchitecture.length + newTechStack.length
+ newPerformance.length + newSecurity.length
const countQuality = newQualityRules.length
// Get updated spec list
const specsList = Bash('ccw spec list --json 2>/dev/null || echo "{}"')
console.log(`
Project initialized and guidelines configured
## Project Overview
Name: ${projectTech.project_name}
Description: ${projectTech.overview.description}
### Technology Stack
Languages: ${projectTech.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectTech.overview.technology_stack.frameworks.join(', ')}
### Architecture
Style: ${projectTech.overview.architecture.style}
Components: ${projectTech.overview.key_components.length} core modules
### Guidelines Summary
- Conventions: ${countConventions} rules added to coding-conventions.md
- Constraints: ${countConstraints} rules added to architecture-constraints.md
- Quality rules: ${countQuality} rules added to quality-rules.md
Spec index rebuilt. Use \`ccw spec list\` to view all specs.
---
Files created:
- Tech analysis: .workflow/project-tech.json
- Specs: .ccw/specs/ (configured)
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
Next steps:
- Use $spec-add to add individual rules later
- Specs are auto-loaded via hook on each prompt
- Use $workflow-plan to start planning
`);
}
```
## Error Handling
| Situation | Action |
|-----------|--------|
| **Agent Failure** | Fall back to basic initialization with placeholder overview |
| **Missing Tools** | Agent falls back to Qwen, or to bash-only structural analysis |
| **Empty Project** | Create minimal JSON with all gaps identified |
| **No project-tech.json** (when --reset without prior init) | Run full flow from Step 2 |
| **User cancels mid-wizard** | Save whatever was collected so far (partial is better than nothing) |
| **File write failure** | Report error, suggest manual edit |
## Related Commands
- `$spec-add` - Interactive wizard to create individual specs with scope selection
- `$session-sync` - Quick-sync session work to specs and project-tech
- `$workflow-plan` - Start planning with initialized project context
- `$workflow-status --project` - View project state and guidelines


@@ -717,7 +717,17 @@ AskUserQuestion({
| Retry Failed | Filter tasks with `_execution.status === 'failed'`, re-execute, append `[RETRY]` events |
| View Events | Display execution-events.md content |
| Create Issue | `Skill(skill="issue:new", args="...")` from failed task details |
| Done | Display artifact paths, end workflow |
| Done | Display artifact paths, sync session state, end workflow |
### Step 4.5: Sync Session State
After completion (regardless of user selection), unless `--dry-run`:
```bash
$session-sync -y "Execution complete: {completed}/{total} tasks succeeded"
```
Updates specs/*.md with execution learnings and project-tech.json with a development-index entry.
---


@@ -208,7 +208,7 @@ Phase 2: Test-Cycle Execution (phases/02-test-cycle-execute.md)
│ ├─ spawn_agent(@cli-planning-agent) → IMPL-fix-N.json
│ ├─ spawn_agent(@test-fix-agent) → Apply fix & re-test
│ └─ Re-test → Back to decision
└─ Completion: Final validation → Summary → Auto-complete session
└─ Completion: Final validation → Summary → Sync session state → Auto-complete session
```
## Core Rules
@@ -387,5 +387,6 @@ try {
- `test-fix-agent` (~/.codex/agents/test-fix-agent.md) - Test execution, code fixes, criticality assignment
**Follow-up**:
- Session sync: `$session-sync -y "Test-fix cycle complete: {pass_rate}% pass rate"`
- Session auto-complete on success
- Issue creation for follow-up work (post-completion expansion)


@@ -186,6 +186,37 @@ class DeepWikiStore:
"CREATE INDEX IF NOT EXISTS idx_deepwiki_symbols_doc ON deepwiki_symbols(doc_file)"
)
# Generation progress table for LLM document generation tracking
conn.execute(
"""
CREATE TABLE IF NOT EXISTS generation_progress (
id INTEGER PRIMARY KEY AUTOINCREMENT,
symbol_key TEXT NOT NULL UNIQUE,
file_path TEXT NOT NULL,
symbol_name TEXT NOT NULL,
symbol_type TEXT NOT NULL,
layer INTEGER NOT NULL,
source_hash TEXT NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
attempts INTEGER DEFAULT 0,
last_tool TEXT,
last_error TEXT,
generated_at TEXT,
created_at TEXT DEFAULT CURRENT_TIMESTAMP,
updated_at TEXT DEFAULT CURRENT_TIMESTAMP
)
"""
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_progress_status ON generation_progress(status)"
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_progress_file ON generation_progress(file_path)"
)
conn.execute(
"CREATE INDEX IF NOT EXISTS idx_progress_hash ON generation_progress(source_hash)"
)
# Record schema version
conn.execute(
"""
@@ -720,6 +751,165 @@ class DeepWikiStore:
"db_path": str(self.db_path),
}
# === Generation Progress Operations ===
def get_progress(self, symbol_key: str) -> Optional[Dict[str, Any]]:
"""Get generation progress for a symbol.
Args:
symbol_key: Unique symbol identifier (file_path:symbol_name:line_start).
Returns:
Progress record dict if found, None otherwise.
"""
with self._lock:
conn = self._get_connection()
row = conn.execute(
"SELECT * FROM generation_progress WHERE symbol_key=?",
(symbol_key,),
).fetchone()
return dict(row) if row else None
def update_progress(self, symbol_key: str, data: Dict[str, Any]) -> None:
"""Update or create generation progress for a symbol.
Args:
symbol_key: Unique symbol identifier (file_path:symbol_name:line_start).
data: Dict with fields to update (file_path, symbol_name, symbol_type,
layer, source_hash, status, attempts, last_tool, last_error, generated_at).
"""
with self._lock:
conn = self._get_connection()
# Use CURRENT_TIMESTAMP's text format so stored values compare consistently
now = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
# Build update query dynamically
fields = list(data.keys())
placeholders = ["?"] * len(fields)
values = [data[f] for f in fields]
conn.execute(
f"""
INSERT INTO generation_progress(symbol_key, {', '.join(fields)}, created_at, updated_at)
VALUES(?, {', '.join(placeholders)}, ?, ?)
ON CONFLICT(symbol_key) DO UPDATE SET
{', '.join(f'{f}=excluded.{f}' for f in fields)},
updated_at=excluded.updated_at
""",
[symbol_key] + values + [now, now],
)
conn.commit()
def mark_completed(self, symbol_key: str, tool: str) -> None:
"""Mark a symbol's documentation as completed.
Args:
symbol_key: Unique symbol identifier.
tool: The LLM tool that generated the documentation.
"""
with self._lock:
conn = self._get_connection()
# Match CURRENT_TIMESTAMP's text format for consistent ordering
now = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
conn.execute(
"""
UPDATE generation_progress
SET status='completed', last_tool=?, generated_at=?, updated_at=?
WHERE symbol_key=?
""",
(tool, now, now, symbol_key),
)
conn.commit()
def mark_failed(self, symbol_key: str, error: str, tool: str | None = None) -> None:
"""Mark a symbol's documentation generation as failed.
Args:
symbol_key: Unique symbol identifier.
error: Error message describing the failure.
tool: The LLM tool that was used (optional).
"""
with self._lock:
conn = self._get_connection()
# Match CURRENT_TIMESTAMP's text format for consistent ordering
now = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
if tool:
conn.execute(
"""
UPDATE generation_progress
SET status='failed', last_error=?, last_tool=?,
attempts=attempts+1, updated_at=?
WHERE symbol_key=?
""",
(error, tool, now, symbol_key),
)
else:
conn.execute(
"""
UPDATE generation_progress
SET status='failed', last_error=?, attempts=attempts+1, updated_at=?
WHERE symbol_key=?
""",
(error, now, symbol_key),
)
conn.commit()
def get_pending_symbols(self, limit: int = 1000) -> List[Dict[str, Any]]:
"""Get all symbols with pending or failed status for retry.
Args:
limit: Maximum number of records to return.
Returns:
List of progress records with pending or failed status.
"""
with self._lock:
conn = self._get_connection()
rows = conn.execute(
"""
SELECT * FROM generation_progress
WHERE status IN ('pending', 'failed')
ORDER BY updated_at ASC
LIMIT ?
""",
(limit,),
).fetchall()
return [dict(row) for row in rows]
def get_completed_symbol_keys(self) -> set:
"""Get set of all completed symbol keys for orphan detection.
Returns:
Set of symbol_key strings for completed symbols.
"""
with self._lock:
conn = self._get_connection()
rows = conn.execute(
"SELECT symbol_key FROM generation_progress WHERE status='completed'"
).fetchall()
return {row["symbol_key"] for row in rows}
def delete_progress(self, symbol_keys: List[str]) -> int:
"""Delete progress records for orphaned symbols.
Args:
symbol_keys: List of symbol keys to delete.
Returns:
Number of records deleted.
"""
if not symbol_keys:
return 0
with self._lock:
conn = self._get_connection()
placeholders = ",".join("?" * len(symbol_keys))
cursor = conn.execute(
f"DELETE FROM generation_progress WHERE symbol_key IN ({placeholders})",
symbol_keys,
)
conn.commit()
return cursor.rowcount
# === Row Conversion Methods ===
def _row_to_deepwiki_file(self, row: sqlite3.Row) -> DeepWikiFile:


@@ -7,8 +7,15 @@ from __future__ import annotations
import hashlib
import logging
import shlex
import signal
import subprocess
import sys
import threading
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import List, Dict, Optional, Protocol, Any
from typing import List, Dict, Optional, Protocol, Any, Tuple, Set
from codexlens.storage.deepwiki_store import DeepWikiStore
from codexlens.storage.deepwiki_models import DeepWikiSymbol, DeepWikiFile, DeepWikiDoc
@@ -254,3 +261,787 @@ class DeepWikiGenerator:
)
return results
# =============================================================================
# TASK-002: LLMMarkdownGenerator Core Class
# =============================================================================
@dataclass
class GenerationResult:
"""Result of a documentation generation attempt."""
success: bool
content: Optional[str] = None
tool: Optional[str] = None
attempts: int = 0
error: Optional[str] = None
symbol: Optional[DeepWikiSymbol] = None
@dataclass
class GeneratorConfig:
"""Configuration for LLM generator."""
max_concurrent: int = 4
batch_size: int = 4
graceful_shutdown: bool = True
# Tool fallback chains: primary -> secondary -> tertiary
TOOL_CHAIN: Dict[str, List[str]] = {
"gemini": ["gemini", "qwen", "codex"],
"qwen": ["qwen", "gemini", "codex"],
"codex": ["codex", "gemini", "qwen"],
}
# Layer-based timeout settings (seconds)
TOOL_TIMEOUTS: Dict[str, Dict[str, int]] = {
"gemini": {"layer3": 120, "layer2": 60, "layer1": 30},
"qwen": {"layer3": 90, "layer2": 45, "layer1": 20},
"codex": {"layer3": 180, "layer2": 90, "layer1": 45},
}
# Required sections per layer for validation
REQUIRED_SECTIONS: Dict[int, List[str]] = {
3: ["Description", "Parameters", "Returns", "Example"],
2: ["Description", "Returns"],
1: ["Description"],
}
class LLMMarkdownGenerator:
"""LLM-powered Markdown generator with tool fallback and retry logic.
Implements the MarkdownGenerator protocol with:
- Tool fallback chain (gemini -> qwen -> codex)
- Layer-based timeouts
- SHA256 incremental updates
- Structure validation
"""
def __init__(
self,
primary_tool: str = "gemini",
db: DeepWikiStore | None = None,
force_mode: bool = False,
progress_tracker: Optional[Any] = None,
) -> None:
"""Initialize LLM generator.
Args:
primary_tool: Primary LLM tool to use (gemini/qwen/codex).
db: DeepWikiStore instance for progress tracking.
force_mode: If True, regenerate all docs regardless of hash.
progress_tracker: Optional ProgressTracker for timeout alerts.
"""
self.primary_tool = primary_tool
self.db = db or DeepWikiStore()
self.force_mode = force_mode
self.progress_tracker = progress_tracker
self._ensure_db_initialized()
def _ensure_db_initialized(self) -> None:
"""Ensure database is initialized."""
try:
self.db.initialize()
except Exception:
pass # Already initialized
def _classify_layer(self, symbol: DeepWikiSymbol) -> int:
"""Classify symbol into layer (1, 2, or 3).
Layer 3: class, function, async_function, interface (detailed docs)
Layer 2: method, property (compact docs)
Layer 1: variable, constant (minimal docs)
"""
symbol_type = symbol.type.lower()
if symbol_type in ("class", "function", "async_function", "interface"):
return 3
elif symbol_type in ("method", "property"):
return 2
else:
return 1
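The layer classification is a simple case-insensitive bucketing of the symbol type. Extracted as a free function for illustration:

```python
def classify_layer(symbol_type: str) -> int:
    """Layer 3: detailed docs; layer 2: compact; layer 1: minimal."""
    t = symbol_type.lower()
    if t in ("class", "function", "async_function", "interface"):
        return 3
    if t in ("method", "property"):
        return 2
    return 1

print(classify_layer("Class"), classify_layer("method"), classify_layer("constant"))  # 3 2 1
```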
def _build_prompt(self, symbol: DeepWikiSymbol, source_code: str, layer: int) -> str:
"""Build LLM prompt based on symbol layer.
Args:
symbol: Symbol to document.
source_code: Source code of the symbol.
layer: Layer (1, 2, or 3) determining prompt template.
Returns:
Prompt string for the LLM.
"""
file_ext = Path(symbol.source_file).suffix.lstrip(".")
if layer == 3:
# Full documentation template
return f"""Generate comprehensive Markdown documentation for this code symbol.
## Symbol Information
- Name: {symbol.name}
- Type: {symbol.type}
- File: {symbol.source_file}
- Lines: {symbol.line_range[0]}-{symbol.line_range[1]}
## Source Code
```{file_ext}
{source_code}
```
## Required Sections
Generate a Markdown document with these sections:
1. **Description** - Clear description of what this symbol does
2. **Parameters** - List all parameters with types and descriptions
3. **Returns** - What this symbol returns (if applicable)
4. **Example** - Code example showing usage
Format the output as clean Markdown. Use code fences for code blocks."""
elif layer == 2:
# Compact documentation template
return f"""Generate compact Markdown documentation for this code symbol.
## Symbol Information
- Name: {symbol.name}
- Type: {symbol.type}
- File: {symbol.source_file}
## Source Code
```{file_ext}
{source_code}
```
## Required Sections
Generate a Markdown document with these sections:
1. **Description** - Brief description of this symbol's purpose
2. **Returns** - Return value description (if applicable)
Keep it concise. Format as clean Markdown."""
else:
# Minimal documentation template (layer 1)
return f"""Generate minimal Markdown documentation for this code symbol.
## Symbol Information
- Name: {symbol.name}
- Type: {symbol.type}
## Source Code
```{file_ext}
{source_code}
```
## Required Sections
Generate a Markdown document with:
1. **Description** - One-line description of this symbol
Keep it minimal. Format as clean Markdown."""
def _call_cli_with_timeout(
self, tool: str, prompt: str, timeout: int
) -> str:
"""Call LLM CLI tool with timeout.
Args:
tool: CLI tool name (gemini/qwen/codex).
prompt: Prompt to send to the LLM.
timeout: Timeout in seconds.
Returns:
Generated content string.
Raises:
TimeoutError: If command times out.
RuntimeError: If command fails.
"""
# Build ccw cli command; the prompt is passed as its own argv element,
# so no shell escaping is needed.
cmd = [
"ccw", "cli", "-p", prompt,
"--tool", tool,
"--mode", "write",
]
try:
result = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=timeout,
cwd=str(Path.cwd()),
)
if result.returncode != 0:
raise RuntimeError(f"CLI failed: {result.stderr}")
return result.stdout.strip()
except subprocess.TimeoutExpired as exc:
raise TimeoutError(
f"Timeout after {timeout}s with {tool}"
) from exc
def _emit_timeout_alert(
self, symbol: DeepWikiSymbol, tool: str, timeout: int
) -> None:
"""Emit timeout alert to progress tracker and logs.
Args:
symbol: Symbol that timed out.
tool: Tool that timed out.
timeout: Timeout duration in seconds.
"""
alert_msg = f"TIMEOUT: {symbol.name} ({symbol.source_file}) with {tool} after {timeout}s"
logger.warning(alert_msg)
# Output to progress tracker if available
if self.progress_tracker:
self.progress_tracker.write_above(f"[WARNING] {alert_msg}")
def validate_structure(self, content: str, layer: int) -> bool:
"""Validate generated content has required structure.
Args:
content: Generated markdown content.
layer: Layer (1, 2, or 3).
Returns:
True if content passes validation, False otherwise.
"""
if not content or len(content.strip()) < 20:
return False
required = REQUIRED_SECTIONS.get(layer, ["Description"])
content_lower = content.lower()
for section in required:
if section.lower() not in content_lower:
return False
return True
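Validation is a length floor plus a case-insensitive substring check for each required section name. The same logic, standalone (constants copied from above):

```python
REQUIRED_SECTIONS = {
    3: ["Description", "Parameters", "Returns", "Example"],
    2: ["Description", "Returns"],
    1: ["Description"],
}

def validate_structure(content: str, layer: int) -> bool:
    # Reject empty or near-empty output, then require every section name.
    if not content or len(content.strip()) < 20:
        return False
    lower = content.lower()
    return all(s.lower() in lower for s in REQUIRED_SECTIONS.get(layer, ["Description"]))

doc = "## Description\nA helper.\n## Returns\nNone."
print(validate_structure(doc, 2))  # True
print(validate_structure(doc, 3))  # False (missing Parameters/Example)
```

Note the check is a substring match, so a section name appearing anywhere in the prose also passes; stricter heading-level parsing would be a possible refinement.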
def generate_with_retry(
self, symbol: DeepWikiSymbol, source_code: str
) -> GenerationResult:
"""Generate documentation with tool fallback chain.
Strategy: Immediate tool fallback
- Tool A fails -> Immediately try Tool B
- All 3 tools fail -> Mark as failed
Args:
symbol: Symbol to document.
source_code: Source code of the symbol.
Returns:
GenerationResult with success status and content.
"""
tool_chain = TOOL_CHAIN.get(self.primary_tool, ["gemini", "qwen", "codex"])
layer = self._classify_layer(symbol)
prompt = self._build_prompt(symbol, source_code, layer)
symbol_key = f"{symbol.source_file}:{symbol.name}:{symbol.line_range[0]}"
last_error = None
for attempt, tool in enumerate(tool_chain, 1):
timeout = TOOL_TIMEOUTS.get(tool, {}).get(f"layer{layer}", 60)
try:
# Update progress
if self.db:
self.db.update_progress(
symbol_key,
{
"file_path": symbol.source_file,
"symbol_name": symbol.name,
"symbol_type": symbol.type,
"layer": layer,
"source_hash": hashlib.sha256(source_code.encode()).hexdigest(),
"status": "processing",
"attempts": attempt,
"last_tool": tool,
},
)
result = self._call_cli_with_timeout(tool, prompt, timeout)
if result and self.validate_structure(result, layer):
# Success
if self.db:
self.db.mark_completed(symbol_key, tool)
return GenerationResult(
success=True,
content=result,
tool=tool,
attempts=attempt,
symbol=symbol,
)
# Invalid structure
last_error = f"Invalid structure from {tool}"
continue
except TimeoutError:
self._emit_timeout_alert(symbol, tool, timeout)
last_error = f"Timeout after {timeout}s with {tool}"
continue
except Exception as exc:
last_error = f"{type(exc).__name__}: {exc}"
continue
# All tools failed
if self.db:
self.db.mark_failed(symbol_key, last_error or "All tools failed")
return GenerationResult(
success=False,
content=None,
tool=None,
attempts=len(tool_chain),
error=last_error,
symbol=symbol,
)
def should_regenerate(self, symbol: DeepWikiSymbol, source_code: str) -> bool:
"""Check if symbol needs regeneration.
Conditions for regeneration:
1. --force mode is enabled
2. Symbol not in database (new)
3. Source code hash changed
4. Previous generation failed
Args:
symbol: Symbol to check.
source_code: Source code of the symbol.
Returns:
True if regeneration needed, False otherwise.
"""
if self.force_mode:
return True
current_hash = hashlib.sha256(source_code.encode()).hexdigest()
symbol_key = f"{symbol.source_file}:{symbol.name}:{symbol.line_range[0]}"
if self.db:
progress = self.db.get_progress(symbol_key)
if not progress:
return True # New symbol
if progress.get("source_hash") != current_hash:
return True # Code changed
if progress.get("status") == "failed":
return True # Retry failed
return False # Skip
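The four regeneration conditions reduce to a small predicate over the stored progress row. A sketch of the same decision, with `progress` standing in for the dict returned by `db.get_progress`:

```python
import hashlib

def needs_regen(progress, source_code: str, force: bool = False) -> bool:
    """Force, new symbol, changed source hash, or prior failure."""
    if force:
        return True
    if progress is None:
        return True  # new symbol
    current = hashlib.sha256(source_code.encode()).hexdigest()
    if progress.get("source_hash") != current:
        return True  # code changed
    return progress.get("status") == "failed"  # retry failures

src = "def f(): pass"
h = hashlib.sha256(src.encode()).hexdigest()
print(needs_regen({"source_hash": h, "status": "completed"}, src))  # False
```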
def _fallback_generate(
self, symbol: DeepWikiSymbol, source_code: str
) -> str:
"""Fallback to Mock generation when all LLM tools fail.
Args:
symbol: Symbol to document.
source_code: Source code of the symbol.
Returns:
Mock-generated markdown content.
"""
mock = MockMarkdownGenerator()
return mock.generate(symbol, source_code)
def generate(self, symbol: DeepWikiSymbol, source_code: str) -> str:
"""Generate Markdown documentation (implements MarkdownGenerator protocol).
Args:
symbol: Symbol to document.
source_code: Source code of the symbol.
Returns:
Generated markdown content.
"""
result = self.generate_with_retry(symbol, source_code)
if result.success and result.content:
return result.content
# Fallback to mock on failure
return self._fallback_generate(symbol, source_code)
# =============================================================================
# TASK-003: BatchProcessor + Graceful Interrupt
# TASK-004: ProgressTracker (rich progress bar)
# =============================================================================
class ProgressTracker:
"""Progress tracker using rich progress bar.
Shows real-time progress with:
- Progress bar: [=====> ] 120/500 (24%) eta: 5min
- Timeout alerts above progress bar
- Failure summary at completion
"""
def __init__(self, total: int) -> None:
"""Initialize progress tracker.
Args:
total: Total number of symbols to process.
"""
self.total = total
self.completed = 0
self.failed_symbols: List[Dict[str, Any]] = []
self._lock = threading.Lock()
self._started = False
# Lazy import rich to avoid dependency issues
try:
from rich.console import Console
from rich.progress import Progress, SpinnerColumn, TextColumn, BarColumn, TimeRemainingColumn
self._console = Console()
self._progress = Progress(
SpinnerColumn(),
TextColumn("[progress.description]{task.description}"),
BarColumn(),
TextColumn("[progress.percentage]{task.percentage:>3.0f}%"),
TextColumn("({task.completed}/{task.total})"),
TimeRemainingColumn(),
console=self._console,
)
self._task_id = None
self._rich_available = True
except ImportError:
self._rich_available = False
self._console = None
self._progress = None
self._task_id = None
def start(self) -> None:
"""Start the progress bar."""
if self._rich_available and self._progress:
self._progress.start()
self._task_id = self._progress.add_task(
"Generating docs", total=self.total
)
self._started = True
def update(self, symbol: DeepWikiSymbol, result: GenerationResult) -> None:
"""Update progress after a symbol is processed.
Args:
symbol: Processed symbol.
result: Generation result.
"""
with self._lock:
self.completed += 1
if self._rich_available and self._progress and self._task_id is not None:
self._progress.advance(self._task_id)
if not result.success:
self.failed_symbols.append({
"symbol": symbol.name,
"file": symbol.source_file,
"error": result.error or "Unknown error",
})
def write_above(self, message: str) -> None:
"""Write message above the progress bar.
Args:
message: Message to display.
"""
if self._rich_available and self._console:
self._console.print(message)
else:
print(message)
def print_summary(self) -> None:
"""Print final summary after all processing completes."""
self.stop()
success = self.completed - len(self.failed_symbols)
failed = len(self.failed_symbols)
if self._rich_available and self._console:
self._console.print(
f"\n[bold]Generation complete:[/bold] "
f"[green]{success}/{self.completed}[/green] successful"
)
if self.failed_symbols:
self._console.print(
f"\n[bold red]Failed symbols ({failed}):[/bold red]"
)
for item in self.failed_symbols:
self._console.print(
f" - [yellow]{item['symbol']}[/yellow] "
f"({item['file']}): {item['error']}"
)
else:
print(f"\nGeneration complete: {success}/{self.completed} successful")
if self.failed_symbols:
print(f"\nFailed symbols ({failed}):")
for item in self.failed_symbols:
print(f" - {item['symbol']} ({item['file']}): {item['error']}")
def stop(self) -> None:
"""Stop the progress bar."""
if self._rich_available and self._progress and self._started:
self._progress.stop()
self._started = False
class BatchProcessor:
"""Batch processor with concurrent execution and graceful interrupt.
Features:
- ThreadPoolExecutor with configurable concurrency (default: 4)
- Signal handlers for Ctrl+C graceful interrupt
- Orphaned document cleanup
- Integration with ProgressTracker
"""
def __init__(
self,
generator: LLMMarkdownGenerator,
config: GeneratorConfig | None = None,
) -> None:
"""Initialize batch processor.
Args:
generator: LLM generator instance.
config: Generator configuration.
"""
self.generator = generator
self.config = config or GeneratorConfig()
self.shutdown_event = threading.Event()
self._executor = None
self._progress: Optional[ProgressTracker] = None
def setup_signal_handlers(self) -> None:
"""Set up signal handlers for graceful Ctrl+C interrupt."""
def handle_sigint(signum: int, frame) -> None:
if self.shutdown_event.is_set():
# Second Ctrl+C: force exit
print("\n[WARNING] Forced exit, progress may be lost")
sys.exit(1)
# First Ctrl+C: graceful interrupt
print("\n[INFO] Completing current batch...")
self.shutdown_event.set()
signal.signal(signal.SIGINT, handle_sigint)
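The handler implements a two-stage interrupt: the first SIGINT only sets a shutdown event so the current batch can drain, the second forces exit. A sketch that exercises the state machine by calling the handler directly (the real class wires it to SIGINT via `signal.signal`; `sys.exit(1)` raises `SystemExit(1)`, as here):

```python
import threading

shutdown_event = threading.Event()

def handle_sigint(signum, frame):
    # Second Ctrl+C: force exit; first Ctrl+C: request graceful shutdown.
    if shutdown_event.is_set():
        raise SystemExit(1)
    shutdown_event.set()

handle_sigint(2, None)           # first interrupt: sets the event
print(shutdown_event.is_set())   # True
try:
    handle_sigint(2, None)       # second interrupt: forces exit
except SystemExit as exc:
    print(exc.code)              # 1
```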
def process_batch(
self, symbols: List[Tuple[DeepWikiSymbol, str]]
) -> List[GenerationResult]:
"""Process a batch of symbols concurrently.
Args:
symbols: List of (symbol, source_code) tuples.
Returns:
List of GenerationResult for each symbol.
"""
from concurrent.futures import ThreadPoolExecutor, as_completed
results: List[GenerationResult] = []
futures = []
with ThreadPoolExecutor(max_workers=self.config.max_concurrent) as executor:
self._executor = executor
for symbol, source_code in symbols:
if self.shutdown_event.is_set():
break
future = executor.submit(
self.generator.generate_with_retry,
symbol,
source_code,
)
futures.append((symbol, future))
# Wait for all submitted tasks
for symbol, future in futures:
try:
result = future.result(timeout=300)  # 5 min timeout per task
results.append(result)
if self._progress:
self._progress.update(symbol, result)
except Exception as exc:
error_result = GenerationResult(
success=False,
error=str(exc),
symbol=symbol,
)
results.append(error_result)
if self._progress:
self._progress.update(symbol, error_result)
return results
def cleanup_orphaned_docs(
self, current_symbols: List[DeepWikiSymbol]
) -> int:
"""Clean up documents for symbols that no longer exist in source.
Args:
current_symbols: List of current symbols in source code.
Returns:
Number of orphaned documents removed.
"""
if not self.generator.db:
return 0
current_keys = {
f"{s.source_file}:{s.name}:{s.line_range[0]}"
for s in current_symbols
}
stored_keys = self.generator.db.get_completed_symbol_keys()
orphaned_keys = list(stored_keys - current_keys)
if orphaned_keys:
deleted = self.generator.db.delete_progress(orphaned_keys)
logger.info(f"Cleaned up {deleted} orphaned documents")
return deleted
return 0
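Orphan detection is a set difference between the keys built from the current scan and the keys already stored as completed. The core of it, standalone:

```python
def find_orphans(current_keys: set, stored_keys: set) -> list:
    """Keys present in the store but absent from the latest source scan."""
    return sorted(stored_keys - current_keys)

current = {"a.py:foo:1", "a.py:bar:9"}
stored = {"a.py:foo:1", "old.py:baz:3"}
print(find_orphans(current, stored))  # ['old.py:baz:3']
```

Note that because the key embeds the start line (`file:name:line`), a symbol that merely moves within a file is treated as a new symbol and its old entry as an orphan.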
def run(
self,
path: Path,
tool: str = "gemini",
force: bool = False,
resume: bool = False,
) -> Dict[str, Any]:
"""Main entry point for batch processing.
Flow:
1. Scan source files
2. Extract symbols
3. SHA256 filter
4. Layer sort (3 -> 2 -> 1)
5. Batch process with concurrency
Args:
path: File or directory path to process.
tool: Primary LLM tool to use.
force: Force regenerate all docs.
resume: Resume from previous interrupted run.
Returns:
Processing summary dictionary.
"""
# Update generator settings
self.generator.primary_tool = tool
self.generator.force_mode = force
# Setup signal handlers
if self.config.graceful_shutdown:
self.setup_signal_handlers()
# Initialize database
self.generator._ensure_db_initialized()
# Phase 1: Scan files
path = Path(path)
if path.is_file():
files = [path]
elif path.is_dir():
files = []
for ext in DeepWikiGenerator.SUPPORTED_EXTENSIONS:
files.extend(path.rglob(f"*{ext}"))
else:
raise ValueError(f"Path not found: {path}")
# Phase 2: Extract symbols
all_symbols: List[Tuple[DeepWikiSymbol, str]] = []
temp_gen = DeepWikiGenerator(store=self.generator.db)
for file_path in files:
raw_symbols = temp_gen._extract_symbols_simple(file_path)
for sym in raw_symbols:
symbol = DeepWikiSymbol(
name=sym["name"],
symbol_type=sym["type"],
source_file=str(file_path),
doc_file=f".deepwiki/{file_path.stem}.md",
anchor=f"#{sym['name'].lower()}",
line_start=sym["line_start"],
line_end=sym["line_end"],
)
all_symbols.append((symbol, sym["source"]))
# Phase 3: SHA256 filter
symbols_to_process = [
(s, c) for s, c in all_symbols
if self.generator.should_regenerate(s, c)
]
if not symbols_to_process:
logger.info("All symbols up to date, nothing to process")
return {
"total_symbols": len(all_symbols),
"processed": 0,
"skipped": len(all_symbols),
"success": 0,
"failed": 0,
}
# Phase 4: Cleanup orphaned docs
current_symbols = [s for s, _ in all_symbols]
orphaned = self.cleanup_orphaned_docs(current_symbols)
# Phase 5: Sort by layer (3 -> 2 -> 1)
symbols_to_process.sort(
key=lambda x: self.generator._classify_layer(x[0]),
reverse=True
)
# Phase 6: Initialize progress tracker
self._progress = ProgressTracker(total=len(symbols_to_process))
self.generator.progress_tracker = self._progress
self._progress.start()
# Phase 7: Batch process
all_results: List[GenerationResult] = []
batch_size = self.config.batch_size
for i in range(0, len(symbols_to_process), batch_size):
if self.shutdown_event.is_set():
break
batch = symbols_to_process[i:i + batch_size]
results = self.process_batch(batch)
all_results.extend(results)
# Phase 8: Print summary
if self._progress:
self._progress.print_summary()
# Calculate statistics
success_count = sum(1 for r in all_results if r.success)
failed_count = len(all_results) - success_count
return {
"total_symbols": len(all_symbols),
"processed": len(all_results),
"skipped": len(all_symbols) - len(symbols_to_process),
"success": success_count,
"failed": failed_count,
"orphaned_cleaned": orphaned,
}
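Phases 5 and 7 above amount to a layer-descending stable sort followed by fixed-size batching. A minimal sketch, with a dict of name-to-layer standing in for real symbols:

```python
def order_and_batch(items, classify, batch_size=4):
    """Sort layer 3 -> 2 -> 1 (stable), then yield fixed-size batches."""
    ordered = sorted(items, key=classify, reverse=True)
    for i in range(0, len(ordered), batch_size):
        yield ordered[i:i + batch_size]

layers = {"a": 1, "b": 3, "c": 2, "d": 3}
batches = list(order_and_batch(list(layers), layers.get, batch_size=2))
print(batches)  # [['b', 'd'], ['c', 'a']]
```

Processing layer 3 first means the most detailed (and slowest) documents start earliest, which improves the ETA estimate shown by the progress bar.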