Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-12 02:37:45 +08:00)
refactor: spec-generator outputs from monolithic files to directory structure
Requirements, architecture, and epics now output as directories with individual files per design point (_index.md + REQ-*.md/ADR-*.md/EPIC-*.md), linked via relative paths for better referencing and downstream consumption. Phase 6 handoff bridge simplified to read directly from individual EPIC files.
@@ -49,9 +49,9 @@ Generates a complete specification package through 6 sequential phases:

 | 1 | `spec-config.json` | Session configuration and state |
 | 1 | `discovery-context.json` | Codebase exploration (optional) |
 | 2 | `product-brief.md` | Product brief with multi-perspective synthesis |
-| 3 | `requirements.md` | Detailed PRD with acceptance criteria |
-| 4 | `architecture.md` | Architecture decisions and component design |
-| 5 | `epics.md` | Epic/Story breakdown with dependencies |
+| 3 | `requirements/` | `_index.md` + `REQ-NNN-{slug}.md` + `NFR-{type}-NNN-{slug}.md` |
+| 4 | `architecture/` | `_index.md` + `ADR-NNN-{slug}.md` |
+| 5 | `epics/` | `_index.md` + `EPIC-NNN-{slug}.md` |
 | 6 | `readiness-report.md` | Quality validation report |
 | 6 | `spec-summary.md` | One-page executive summary |
@@ -17,11 +17,11 @@ Phase 1: Discovery -> spec-config.json + discovery-context.json

 Phase 2: Product Brief -> product-brief.md (multi-CLI parallel analysis)

-Phase 3: Requirements (PRD) -> requirements.md
+Phase 3: Requirements (PRD) -> requirements/ (_index.md + REQ-*.md + NFR-*.md)

-Phase 4: Architecture -> architecture.md (multi-CLI review)
+Phase 4: Architecture -> architecture/ (_index.md + ADR-*.md, multi-CLI review)

-Phase 5: Epics & Stories -> epics.md
+Phase 5: Epics & Stories -> epics/ (_index.md + EPIC-*.md)

 Phase 6: Readiness Check -> readiness-report.md + spec-summary.md
@@ -91,7 +91,7 @@ Phase 3: Requirements / PRD
 |- Gemini CLI: expand goals into functional + non-functional requirements
 |- Generate acceptance criteria per requirement
 |- User priority sorting: MoSCoW (interactive, -y auto-assigns)
-|- Output: requirements.md (from template)
+|- Output: requirements/ directory (_index.md + REQ-*.md + NFR-*.md, from template)

 Phase 4: Architecture
 |- Ref: phases/04-architecture.md
@@ -99,7 +99,7 @@ Phase 4: Architecture
 |- Codebase integration mapping (conditional)
 |- Codex CLI: architecture challenge + review
 |- Interactive ADR decisions (-y auto-accepts)
-|- Output: architecture.md (from template)
+|- Output: architecture/ directory (_index.md + ADR-*.md, from template)

 Phase 5: Epics & Stories
 |- Ref: phases/05-epics-stories.md
@@ -107,7 +107,7 @@ Phase 5: Epics & Stories
 |- Story generation: As a...I want...So that...
 |- Dependency mapping (Mermaid)
 |- Interactive validation (-y skips)
-|- Output: epics.md (from template)
+|- Output: epics/ directory (_index.md + EPIC-*.md, from template)

 Phase 6: Readiness Check
 |- Ref: phases/06-readiness-check.md
@@ -146,14 +146,21 @@ Bash(`mkdir -p "${workDir}"`);

 ```
 .workflow/.spec/SPEC-{slug}-{YYYY-MM-DD}/
-|- spec-config.json           # Session configuration + phase state
-|- discovery-context.json     # Codebase exploration results (optional)
-|- product-brief.md           # Phase 2: Product brief
-|- requirements.md            # Phase 3: Detailed PRD
-|- architecture.md            # Phase 4: Architecture decisions
-|- epics.md                   # Phase 5: Epic/Story breakdown
-|- readiness-report.md        # Phase 6: Quality report
-|- spec-summary.md            # Phase 6: One-page executive summary
+├── spec-config.json              # Session configuration + phase state
+├── discovery-context.json        # Codebase exploration results (optional)
+├── product-brief.md              # Phase 2: Product brief
+├── requirements/                 # Phase 3: Detailed PRD (directory)
+│   ├── _index.md                 # Summary, MoSCoW table, traceability, links
+│   ├── REQ-NNN-{slug}.md         # Individual functional requirement
+│   └── NFR-{type}-NNN-{slug}.md  # Individual non-functional requirement
+├── architecture/                 # Phase 4: Architecture decisions (directory)
+│   ├── _index.md                 # Overview, components, tech stack, links
+│   └── ADR-NNN-{slug}.md         # Individual Architecture Decision Record
+├── epics/                        # Phase 5: Epic/Story breakdown (directory)
+│   ├── _index.md                 # Epic table, dependency map, MVP scope
+│   └── EPIC-NNN-{slug}.md        # Individual Epic with Stories
+├── readiness-report.md           # Phase 6: Quality report
+└── spec-summary.md               # Phase 6: One-page executive summary
 ```

 ## State Management
@@ -178,7 +185,8 @@ Bash(`mkdir -p "${workDir}"`);
   },
   "has_codebase": false,
   "phasesCompleted": [
-    { "phase": 1, "name": "discovery", "output_file": "spec-config.json", "completed_at": "ISO8601" }
+    { "phase": 1, "name": "discovery", "output_file": "spec-config.json", "completed_at": "ISO8601" },
+    { "phase": 3, "name": "requirements", "output_dir": "requirements/", "output_index": "requirements/_index.md", "file_count": 8, "completed_at": "ISO8601" }
   ]
 }
 ```
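On resume, the skill can derive the next phase from `phasesCompleted` instead of re-running finished phases. A minimal sketch, using the same `Read` pseudocode convention as the phase files; the `PHASE_ORDER` constant is illustrative and not part of this commit:

```javascript
// Illustrative: pick the first phase (1-6) that has no completion entry yet
const PHASE_ORDER = ["discovery", "product-brief", "requirements", "architecture", "epics-stories", "readiness-check"];

const config = JSON.parse(Read(`${workDir}/spec-config.json`));
const completed = new Set((config.phasesCompleted || []).map(entry => entry.phase));

const nextIndex = PHASE_ORDER.findIndex((_, i) => !completed.has(i + 1));
const nextPhase = nextIndex === -1 ? null : nextIndex + 1; // null → all phases recorded as complete
```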
.claude/skills/spec-generator/phases/03-requirements.md (new file)
@@ -0,0 +1,179 @@
# Phase 3: Requirements (PRD)

Generate a detailed Product Requirements Document with functional/non-functional requirements, acceptance criteria, and MoSCoW prioritization.

## Objective

- Read product-brief.md and extract goals, scope, constraints
- Expand each goal into functional requirements with acceptance criteria
- Generate non-functional requirements
- Apply MoSCoW priority labels (user input or auto)
- Generate the `requirements/` directory using the template

## Input

- Dependency: `{workDir}/product-brief.md`
- Config: `{workDir}/spec-config.json`
- Template: `templates/requirements-prd.md` (directory structure: `_index.md` + `REQ-*.md` + `NFR-*.md`)

## Execution Steps

### Step 1: Load Phase 2 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);

// Extract key sections from product brief
// - Goals & Success Metrics table
// - Scope (in-scope items)
// - Target Users (personas)
// - Constraints
// - Technical perspective insights
```

### Step 2: Requirements Expansion via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Generate detailed functional and non-functional requirements from product brief.
Success: Complete PRD with testable acceptance criteria for every requirement.

PRODUCT BRIEF CONTEXT:
${productBrief}

TASK:
- For each goal in the product brief, generate 3-7 functional requirements
- Each requirement must have:
  - Unique ID: REQ-NNN (zero-padded)
  - Clear title
  - Detailed description
  - User story: As a [persona], I want [action] so that [benefit]
  - 2-4 specific, testable acceptance criteria
- Generate non-functional requirements:
  - Performance (response times, throughput)
  - Security (authentication, authorization, data protection)
  - Scalability (user load, data volume)
  - Usability (accessibility, learnability)
- Assign initial MoSCoW priority based on:
  - Must: Core functionality, cannot launch without
  - Should: Important but has workaround
  - Could: Nice-to-have, enhances experience
  - Won't: Explicitly deferred

MODE: analysis
EXPECTED: Structured requirements with: ID, title, description, user story, acceptance criteria, priority, traceability to goals
CONSTRAINTS: Every requirement must be specific enough to estimate and test. No vague requirements like 'system should be fast'.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```
### Step 3: User Priority Sorting (Interactive)

```javascript
if (!autoMode) {
  // Present requirements grouped by initial priority
  // Allow user to adjust MoSCoW labels
  AskUserQuestion({
    questions: [
      {
        question: "Review the Must-Have requirements. Any that should be reprioritized?",
        header: "Must-Have",
        multiSelect: false,
        options: [
          { label: "All correct", description: "Must-have requirements are accurate" },
          { label: "Too many", description: "Some should be Should/Could" },
          { label: "Missing items", description: "Some Should requirements should be Must" }
        ]
      },
      {
        question: "What is the target MVP scope?",
        header: "MVP Scope",
        multiSelect: false,
        options: [
          { label: "Must-Have only (Recommended)", description: "MVP includes only Must requirements" },
          { label: "Must + key Should", description: "Include critical Should items in MVP" },
          { label: "Comprehensive", description: "Include all Must and Should" }
        ]
      }
    ]
  });
  // Apply user adjustments to priorities
} else {
  // Auto mode: accept CLI-suggested priorities as-is
}
```
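The exact shape of the `AskUserQuestion` result is not spelled out here; assuming the follow-up discussion yields lists of requirement IDs to demote or promote, the adjustment could be applied roughly like this (an illustrative sketch, not part of the committed phase file):

```javascript
// Hypothetical helper: re-label MoSCoW priorities after the interactive review.
// `demote` / `promote` hold REQ IDs collected from the user's follow-up answers.
function applyMoscowAdjustments(funcReqs, { demote = [], promote = [] }) {
  return funcReqs.map(req => {
    if (demote.includes(req.id)) return { ...req, priority: "Should" };
    if (promote.includes(req.id)) return { ...req, priority: "Must" };
    return req;
  });
}

// e.g. applyMoscowAdjustments(funcReqs, { demote: ["004"], promote: ["007"] });
```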
### Step 4: Generate requirements/ directory

```javascript
// Read template
const template = Read('templates/requirements-prd.md');

// Create requirements directory
Bash(`mkdir -p "${workDir}/requirements"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI output into structured requirements
const funcReqs = parseFunctionalRequirements(cliOutput);   // [{id, slug, title, priority, ...}]
const nfReqs = parseNonFunctionalRequirements(cliOutput);  // [{id, type, slug, title, ...}]

// Step 4a: Write individual REQ-*.md files (one per functional requirement)
funcReqs.forEach(req => {
  // Use REQ-NNN-{slug}.md template from templates/requirements-prd.md
  // Fill: id, title, priority, description, user_story, acceptance_criteria, traces
  Write(`${workDir}/requirements/REQ-${req.id}-${req.slug}.md`, reqContent);
});

// Step 4b: Write individual NFR-*.md files (one per non-functional requirement)
nfReqs.forEach(nfr => {
  // Use NFR-{type}-NNN-{slug}.md template from templates/requirements-prd.md
  // Fill: id, type, category, title, requirement, metric, target, traces
  Write(`${workDir}/requirements/NFR-${nfr.type}-${nfr.id}-${nfr.slug}.md`, nfrContent);
});

// Step 4c: Write _index.md (summary + links to all individual files)
// Use _index.md template from templates/requirements-prd.md
// Fill: summary table, functional req links table, NFR links tables,
//       data requirements, integration requirements, traceability matrix
Write(`${workDir}/requirements/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 3,
  name: "requirements",
  output_dir: "requirements/",
  output_index: "requirements/_index.md",
  file_count: funcReqs.length + nfReqs.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
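`parseFunctionalRequirements` and the `{slug}` values are referenced above but not defined in this file. One possible implementation, assuming the CLI returns one `### REQ-NNN: Title` block per requirement (that heading format is an assumption, not guaranteed by the prompt):

```javascript
// Hypothetical parser for the Gemini output; the "### REQ-NNN: Title" block format is assumed
function parseFunctionalRequirements(cliOutput) {
  const blocks = cliOutput.split(/^### (?=REQ-\d{3}:)/m).slice(1);
  return blocks.map(block => {
    const [, id, title] = block.match(/^REQ-(\d{3}):\s*(.+)$/m);
    const priority = (block.match(/Priority:\s*(Must|Should|Could|Won't)/) || [])[1] || "Should";
    return { id, title: title.trim(), slug: slugify(title), priority, body: block };
  });
}

// Kebab-case slug used in REQ-NNN-{slug}.md and similar filenames
function slugify(title) {
  return title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');
}
```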
## Output

- **Directory**: `requirements/`
  - `_index.md` — Summary, MoSCoW table, traceability matrix, links
  - `REQ-NNN-{slug}.md` — Individual functional requirement (per requirement)
  - `NFR-{type}-NNN-{slug}.md` — Individual non-functional requirement (per NFR)
- **Format**: Markdown with YAML frontmatter, cross-linked via relative paths

## Quality Checklist

- [ ] Functional requirements: >= 3 with REQ-NNN IDs, each in own file
- [ ] Every requirement file has >= 1 acceptance criterion
- [ ] Every requirement has MoSCoW priority tag in frontmatter
- [ ] Non-functional requirements: >= 1, each in own file
- [ ] User stories present for Must-have requirements
- [ ] `_index.md` links to all individual requirement files
- [ ] Traceability links to product-brief.md goals
- [ ] All files have valid YAML frontmatter

## Next Phase

Proceed to [Phase 4: Architecture](04-architecture.md) with the generated `requirements/` directory.
.claude/skills/spec-generator/phases/04-architecture.md (new file)
@@ -0,0 +1,213 @@
# Phase 4: Architecture

Generate technical architecture decisions, component design, and technology selections based on requirements.

## Objective

- Analyze requirements to identify core components and system architecture
- Generate Architecture Decision Records (ADRs) with alternatives
- Map architecture to existing codebase (if applicable)
- Challenge architecture via Codex CLI review
- Generate the `architecture/` directory using the template

## Input

- Dependency: `{workDir}/requirements/_index.md` (and individual `REQ-*.md` files)
- Reference: `{workDir}/product-brief.md`
- Optional: `{workDir}/discovery-context.json`
- Config: `{workDir}/spec-config.json`
- Template: `templates/architecture-doc.md`

## Execution Steps

### Step 1: Load Phase 2-3 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirements = Read(`${workDir}/requirements/_index.md`);

let discoveryContext = null;
if (specConfig.has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) { /* no context */ }
}
```
### Step 2: Architecture Analysis via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Generate technical architecture for the specified requirements.
Success: Complete component architecture, tech stack, and ADRs with justified decisions.

PRODUCT BRIEF (summary):
${productBrief.slice(0, 3000)}

REQUIREMENTS:
${requirements.slice(0, 5000)}

${discoveryContext ? `EXISTING CODEBASE:
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join('; ') || 'none'}
- Architecture constraints: ${discoveryContext.architecture_constraints?.slice(0,3).join('; ') || 'none'}
` : ''}

TASK:
- Define system architecture style (monolith, microservices, serverless, etc.) with justification
- Identify core components and their responsibilities
- Create component interaction diagram (Mermaid graph TD format)
- Specify technology stack: languages, frameworks, databases, infrastructure
- Generate 2-4 Architecture Decision Records (ADRs):
  - Each ADR: context, decision, 2-3 alternatives with pros/cons, consequences
  - Focus on: data storage, API design, authentication, key technical choices
- Define data model: key entities and relationships (Mermaid erDiagram format)
- Identify security architecture: auth, authorization, data protection
- List API endpoints (high-level)
${discoveryContext ? '- Map new components to existing codebase modules' : ''}

MODE: analysis
EXPECTED: Complete architecture with: style justification, component diagram, tech stack table, ADRs, data model, security controls, API overview
CONSTRAINTS: Architecture must support all Must-have requirements. Prefer proven technologies over cutting-edge.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```
### Step 3: Architecture Review via Codex CLI

```javascript
// After receiving Gemini analysis, challenge it with Codex
Bash({
  command: `ccw cli -p "PURPOSE: Critical review of proposed architecture - identify weaknesses and risks.
Success: Actionable feedback with specific concerns and improvement suggestions.

PROPOSED ARCHITECTURE:
${geminiArchitectureOutput.slice(0, 5000)}

REQUIREMENTS CONTEXT:
${requirements.slice(0, 2000)}

TASK:
- Challenge each ADR: are the alternatives truly the best options?
- Identify scalability bottlenecks in the component design
- Assess security gaps: authentication, authorization, data protection
- Evaluate technology choices: maturity, community support, fit
- Check for over-engineering or under-engineering
- Verify architecture covers all Must-have requirements
- Rate overall architecture quality: 1-5 with justification

MODE: analysis
EXPECTED: Architecture review with: per-ADR feedback, scalability concerns, security gaps, technology risks, quality rating
CONSTRAINTS: Be genuinely critical, not just validating. Focus on actionable improvements.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```
### Step 4: Interactive ADR Decisions (Optional)

```javascript
if (!autoMode) {
  // Present ADRs with review feedback to user
  // For each ADR where review raised concerns:
  AskUserQuestion({
    questions: [
      {
        question: "Architecture review raised concerns. How should we proceed?",
        header: "ADR Review",
        multiSelect: false,
        options: [
          { label: "Accept as-is", description: "Architecture is sound, proceed" },
          { label: "Incorporate feedback", description: "Adjust ADRs based on review" },
          { label: "Simplify", description: "Reduce complexity, fewer components" }
        ]
      }
    ]
  });
  // Apply user decisions to architecture
}
```
### Step 5: Codebase Integration Mapping (Conditional)

```javascript
if (specConfig.has_codebase && discoveryContext) {
  // Map new architecture components to existing code
  const integrationMapping = discoveryContext.relevant_files.map(f => ({
    new_component: "...", // matched from architecture
    existing_module: f.path,
    integration_type: "Extend|Replace|New",
    notes: f.rationale
  }));
  // Include in architecture document
}
```
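The `new_component` field above is left as a placeholder. One illustrative way to fill it is a simple token-overlap match between component names and discovered file paths; this heuristic is an assumption, not part of the committed phase file:

```javascript
// Hypothetical matcher: choose the component whose name shares the most tokens with the path
function matchComponent(filePath, components) {
  const pathTokens = filePath.toLowerCase().split(/[\/\\._-]+/);
  let best = { name: "Unmapped", score: 0 };
  for (const component of components) {
    const score = component.name.toLowerCase().split(/\s+/)
      .filter(token => pathTokens.includes(token)).length;
    if (score > best.score) best = { name: component.name, score };
  }
  return best.name;
}
```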
### Step 6: Generate architecture/ directory

```javascript
const template = Read('templates/architecture-doc.md');

// Create architecture directory
Bash(`mkdir -p "${workDir}/architecture"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI outputs into structured ADRs
const adrs = parseADRs(geminiArchitectureOutput, codexReviewOutput); // [{id, slug, title, ...}]

// Step 6a: Write individual ADR-*.md files (one per decision)
adrs.forEach(adr => {
  // Use ADR-NNN-{slug}.md template from templates/architecture-doc.md
  // Fill: id, title, status, context, decision, alternatives, consequences, traces
  Write(`${workDir}/architecture/ADR-${adr.id}-${adr.slug}.md`, adrContent);
});

// Step 6b: Write _index.md (overview + components + tech stack + links to ADRs)
// Use _index.md template from templates/architecture-doc.md
// Fill: system overview, component diagram, tech stack, ADR links table,
//       data model, API design, security controls, infrastructure, codebase integration
Write(`${workDir}/architecture/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 4,
  name: "architecture",
  output_dir: "architecture/",
  output_index: "architecture/_index.md",
  file_count: adrs.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
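`adrContent` is assembled from the ADR template defined in `templates/architecture-doc.md`. An abridged sketch of that assembly (field names follow the comment above; the Alternatives table is omitted for brevity):

```javascript
// Hypothetical assembly of one ADR-NNN-{slug}.md body (abridged relative to the full template)
function renderAdr(adr, timestamp) {
  return [
    '---',
    `id: ADR-${adr.id}`,
    `status: ${adr.status || 'Accepted'}`,
    `traces_to: [${(adr.traces || []).join(', ')}]`,
    `date: ${timestamp}`,
    '---',
    '',
    `# ADR-${adr.id}: ${adr.title}`,
    '',
    '## Context',
    adr.context,
    '',
    '## Decision',
    adr.decision,
    '',
    '## Consequences',
    adr.consequences
  ].join('\n');
}
```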
## Output

- **Directory**: `architecture/`
  - `_index.md` — Overview, component diagram, tech stack, data model, security, links
  - `ADR-NNN-{slug}.md` — Individual Architecture Decision Record (per ADR)
- **Format**: Markdown with YAML frontmatter, cross-linked to requirements via relative paths

## Quality Checklist

- [ ] Component diagram present in `_index.md` (Mermaid or ASCII)
- [ ] Tech stack specified (languages, frameworks, key libraries)
- [ ] >= 1 ADR file with alternatives considered
- [ ] Each ADR file lists >= 2 options
- [ ] `_index.md` ADR table links to all individual ADR files
- [ ] Integration points identified
- [ ] Data model described
- [ ] Codebase mapping present (if has_codebase)
- [ ] All files have valid YAML frontmatter
- [ ] ADR files link back to requirement files

## Next Phase

Proceed to [Phase 5: Epics & Stories](05-epics-stories.md) with the generated `architecture/` directory.
.claude/skills/spec-generator/phases/05-epics-stories.md (new file)
@@ -0,0 +1,168 @@
# Phase 5: Epics & Stories

Decompose the specification into executable Epics and Stories with dependency mapping.

## Objective

- Group requirements into 3-7 logical Epics
- Tag MVP subset of Epics
- Generate 2-5 Stories per Epic in standard user story format
- Map cross-Epic dependencies (Mermaid diagram)
- Generate the `epics/` directory using the template

## Input

- Dependency: `{workDir}/requirements/_index.md`, `{workDir}/architecture/_index.md` (and individual files)
- Reference: `{workDir}/product-brief.md`
- Config: `{workDir}/spec-config.json`
- Template: `templates/epics-template.md` (directory structure: `_index.md` + `EPIC-*.md`)

## Execution Steps

### Step 1: Load Phase 2-4 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirements = Read(`${workDir}/requirements/_index.md`);
const architecture = Read(`${workDir}/architecture/_index.md`);
```
### Step 2: Epic Decomposition via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Decompose requirements into executable Epics and Stories for implementation planning.
Success: 3-7 Epics with prioritized Stories, dependency map, and MVP subset clearly defined.

PRODUCT BRIEF (summary):
${productBrief.slice(0, 2000)}

REQUIREMENTS:
${requirements.slice(0, 5000)}

ARCHITECTURE (summary):
${architecture.slice(0, 3000)}

TASK:
- Group requirements into 3-7 logical Epics:
  - Each Epic: EPIC-NNN ID, title, description, priority (Must/Should/Could)
  - Group by functional domain or user journey stage
  - Tag MVP Epics (minimum set for initial release)

- For each Epic, generate 2-5 Stories:
  - Each Story: STORY-{EPIC}-NNN ID, title
  - User story format: As a [persona], I want [action] so that [benefit]
  - 2-4 acceptance criteria per story (testable)
  - Relative size estimate: S/M/L/XL
  - Trace to source requirement(s): REQ-NNN

- Create dependency map:
  - Cross-Epic dependencies (which Epics block others)
  - Mermaid graph LR format
  - Recommended execution order with rationale

- Define MVP:
  - Which Epics are in MVP
  - MVP definition of done (3-5 criteria)
  - What is explicitly deferred post-MVP

MODE: analysis
EXPECTED: Structured output with: Epic list (ID, title, priority, MVP flag), Stories per Epic (ID, user story, AC, size, trace), dependency Mermaid diagram, execution order, MVP definition
CONSTRAINTS:
- Every Must-have requirement must appear in at least one Story
- Stories must be small enough to implement independently (no XL stories in MVP)
- Dependencies should be minimized across Epics
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```
### Step 3: Interactive Validation (Optional)

```javascript
if (!autoMode) {
  // Present Epic overview table and dependency diagram
  AskUserQuestion({
    questions: [
      {
        question: "Review the Epic breakdown. Any adjustments needed?",
        header: "Epics",
        multiSelect: false,
        options: [
          { label: "Looks good", description: "Epic structure is appropriate" },
          { label: "Merge epics", description: "Some epics should be combined" },
          { label: "Split epic", description: "An epic is too large, needs splitting" },
          { label: "Adjust MVP", description: "Change which epics are in MVP" }
        ]
      }
    ]
  });
  // Apply user adjustments
}
```
### Step 4: Generate epics/ directory

```javascript
const template = Read('templates/epics-template.md');

// Create epics directory
Bash(`mkdir -p "${workDir}/epics"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI output into structured Epics
const epicsList = parseEpics(cliOutput); // [{id, slug, title, priority, mvp, size, stories[], reqs[], adrs[], deps[]}]

// Step 4a: Write individual EPIC-*.md files (one per Epic, stories included)
epicsList.forEach(epic => {
  // Use EPIC-NNN-{slug}.md template from templates/epics-template.md
  // Fill: id, title, priority, mvp, size, description, requirements links,
  //       architecture links, dependency links, stories with user stories + AC
  Write(`${workDir}/epics/EPIC-${epic.id}-${epic.slug}.md`, epicContent);
});

// Step 4b: Write _index.md (overview + dependency map + MVP scope + traceability)
// Use _index.md template from templates/epics-template.md
// Fill: epic overview table (with links), dependency Mermaid diagram,
//       execution order, MVP scope, traceability matrix, estimation summary
Write(`${workDir}/epics/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 5,
  name: "epics-stories",
  output_dir: "epics/",
  output_index: "epics/_index.md",
  file_count: epicsList.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
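The dependency map in `_index.md` can be rendered directly from the parsed `deps[]` arrays. A small sketch, assuming each entry in `deps` is the ID of a blocking Epic (e.g., `"EPIC-001"`), matching the `graph LR` convention in `templates/epics-template.md`:

```javascript
// Build the Mermaid dependency map for epics/_index.md (blocker --> dependent)
function renderDependencyGraph(epicsList) {
  const edges = epicsList.flatMap(epic =>
    (epic.deps || []).map(blocker => `  ${blocker} --> EPIC-${epic.id}`)
  );
  return ['```mermaid', 'graph LR', ...edges, '```'].join('\n');
}
```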
## Output

- **Directory**: `epics/`
  - `_index.md` — Overview table, dependency map, MVP scope, traceability matrix, links
  - `EPIC-NNN-{slug}.md` — Individual Epic with Stories (per Epic)
- **Format**: Markdown with YAML frontmatter, cross-linked to requirements and architecture via relative paths

## Quality Checklist

- [ ] 3-7 Epic files with EPIC-NNN IDs
- [ ] >= 1 Epic tagged as MVP in frontmatter
- [ ] 2-5 Stories per Epic file
- [ ] Stories use "As a...I want...So that..." format
- [ ] `_index.md` has cross-Epic dependency map (Mermaid)
- [ ] `_index.md` links to all individual Epic files
- [ ] Relative sizing (S/M/L/XL) per Story
- [ ] Epic files link to requirement files and ADR files
- [ ] All files have valid YAML frontmatter

## Next Phase

Proceed to [Phase 6: Readiness Check](06-readiness-check.md) to validate the complete specification package.
@@ -13,7 +13,7 @@ Validate the complete specification package, generate quality report and executi

 ## Input

-- All Phase 2-5 outputs: `product-brief.md`, `requirements.md`, `architecture.md`, `epics.md`
+- All Phase 2-5 outputs: `product-brief.md`, `requirements/_index.md` (+ `REQ-*.md`, `NFR-*.md`), `architecture/_index.md` (+ `ADR-*.md`), `epics/_index.md` (+ `EPIC-*.md`)
 - Config: `{workDir}/spec-config.json`
 - Reference: `specs/quality-gates.md`
@@ -24,10 +24,16 @@ Validate the complete specification package, generate quality report and executi

 ```javascript
 const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
 const productBrief = Read(`${workDir}/product-brief.md`);
-const requirements = Read(`${workDir}/requirements.md`);
-const architecture = Read(`${workDir}/architecture.md`);
-const epics = Read(`${workDir}/epics.md`);
+const requirementsIndex = Read(`${workDir}/requirements/_index.md`);
+const architectureIndex = Read(`${workDir}/architecture/_index.md`);
+const epicsIndex = Read(`${workDir}/epics/_index.md`);
 const qualityGates = Read('specs/quality-gates.md');
+
+// Load individual files for deep validation
+const reqFiles = Glob(`${workDir}/requirements/REQ-*.md`);
+const nfrFiles = Glob(`${workDir}/requirements/NFR-*.md`);
+const adrFiles = Glob(`${workDir}/architecture/ADR-*.md`);
+const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
 ```

### Step 2: Cross-Document Validation via Gemini CLI
@@ -42,14 +48,14 @@ DOCUMENTS TO VALIDATE:
 === PRODUCT BRIEF ===
 ${productBrief.slice(0, 3000)}

-=== REQUIREMENTS ===
-${requirements.slice(0, 4000)}
+=== REQUIREMENTS INDEX (${reqFiles.length} REQ + ${nfrFiles.length} NFR files) ===
+${requirementsIndex.slice(0, 3000)}

-=== ARCHITECTURE ===
-${architecture.slice(0, 3000)}
+=== ARCHITECTURE INDEX (${adrFiles.length} ADR files) ===
+${architectureIndex.slice(0, 2500)}

-=== EPICS ===
-${epics.slice(0, 3000)}
+=== EPICS INDEX (${epicFiles.length} EPIC files) ===
+${epicsIndex.slice(0, 2500)}

 QUALITY CRITERIA (from quality-gates.md):
 ${qualityGates.slice(0, 2000)}
@@ -111,9 +117,9 @@ stepsCompleted: ["load-all", "cross-validation", "scoring", "report-generation"]
 version: 1
 dependencies:
   - product-brief.md
-  - requirements.md
-  - architecture.md
-  - epics.md
+  - requirements/_index.md
+  - architecture/_index.md
+  - epics/_index.md
 ---`;

 // Report content from CLI validation output:
@@ -139,9 +145,9 @@ stepsCompleted: ["synthesis"]
 version: 1
 dependencies:
   - product-brief.md
-  - requirements.md
-  - architecture.md
-  - epics.md
+  - requirements/_index.md
+  - architecture/_index.md
+  - epics/_index.md
   - readiness-report.md
 ---`;
@@ -161,13 +167,26 @@ Write(`${workDir}/spec-summary.md`, `${frontmatterSummary}\n\n${summaryContent}`

 ### Step 5: Update All Document Status

 ```javascript
-// Update frontmatter status to 'complete' in all documents
-const docs = ['product-brief.md', 'requirements.md', 'architecture.md', 'epics.md'];
-for (const doc of docs) {
+// Update frontmatter status to 'complete' in all documents (directories + single files)
+// product-brief.md is a single file
+const singleFiles = ['product-brief.md'];
+singleFiles.forEach(doc => {
   const content = Read(`${workDir}/${doc}`);
-  const updated = content.replace(/status: draft/, 'status: complete');
-  Write(`${workDir}/${doc}`, updated);
-}
+  Write(`${workDir}/${doc}`, content.replace(/status: draft/, 'status: complete'));
+});
+
+// Update all files in directories (index + individual files)
+const dirFiles = [
+  ...Glob(`${workDir}/requirements/*.md`),
+  ...Glob(`${workDir}/architecture/*.md`),
+  ...Glob(`${workDir}/epics/*.md`)
+];
+dirFiles.forEach(filePath => {
+  const content = Read(filePath);
+  if (content.includes('status: draft')) {
+    Write(filePath, content.replace(/status: draft/, 'status: complete'));
+  }
+});

 // Update spec-config.json
 specConfig.phasesCompleted.push({
@@ -214,23 +233,32 @@ AskUserQuestion({

 if (selection === "Execute via lite-plan") {
   // lite-plan accepts a text description directly
-  const epicsContent = Read(`${workDir}/epics.md`);
-  // Extract first MVP Epic's title + description as task input
-  const firstEpic = extractFirstMvpEpicDescription(epicsContent);
-  Skill(skill="workflow:lite-plan", args=`"${firstEpic}"`)
+  // Read first MVP Epic from individual EPIC-*.md files
+  const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
+  const firstMvpFile = epicFiles.find(f => {
+    const content = Read(f);
+    return content.includes('mvp: true');
+  });
+  const epicContent = Read(firstMvpFile);
+  const title = extractTitle(epicContent); // First # heading
+  const description = extractSection(epicContent, "Description");
+  Skill(skill="workflow:lite-plan", args=`"${title}: ${description}"`)
 }

 if (selection === "Full planning" || selection === "Create roadmap") {
   // === Bridge: Build brainstorm_artifacts compatible structure ===
   // This enables workflow:plan's context-search-agent to discover spec artifacts
   // via the standard .brainstorming/ directory convention.
+  // Reads from directory-based outputs (individual files), maps to .brainstorming/ format
+  // for context-search-agent auto-discovery → action-planning-agent consumption.

-  // Step A: Read all spec documents
+  // Step A: Read spec documents from directories
   const specSummary = Read(`${workDir}/spec-summary.md`);
   const productBrief = Read(`${workDir}/product-brief.md`);
-  const requirements = Read(`${workDir}/requirements.md`);
-  const architecture = Read(`${workDir}/architecture.md`);
-  const epics = Read(`${workDir}/epics.md`);
+  const requirementsIndex = Read(`${workDir}/requirements/_index.md`);
+  const architectureIndex = Read(`${workDir}/architecture/_index.md`);
+  const epicsIndex = Read(`${workDir}/epics/_index.md`);
+
+  // Read individual EPIC files (already split — direct mapping to feature-specs)
+  const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);

   // Step B: Build structured description from spec-summary
   const structuredDesc = `GOAL: ${extractGoal(specSummary)}
@@ -245,8 +273,8 @@ CONTEXT: Generated from spec session ${specConfig.session_id}. Source: ${workDir

 const brainstormDir = `.workflow/active/${sessionId}/.brainstorming`;
 Bash(`mkdir -p "${brainstormDir}/feature-specs"`);

-// D.1: guidance-specification.md (highest priority - action-planning-agent reads first)
-// Synthesized from spec-summary + product-brief key decisions + architecture decisions
+// D.1: guidance-specification.md (highest priority — action-planning-agent reads first)
+// Synthesized from spec-summary + product-brief + architecture/requirements indexes
 Write(`${brainstormDir}/guidance-specification.md`, `
 # ${specConfig.seed_analysis.problem_statement} - Confirmed Guidance Specification
@@ -259,80 +287,86 @@ ${extractSection(productBrief, "Vision")}
 ${extractSection(productBrief, "Goals")}

 ## 2. Requirements Summary
-${extractSection(requirements, "Requirement Summary")}
+${extractSection(requirementsIndex, "Functional Requirements")}

 ## 3. Architecture Decisions
-${extractSection(architecture, "Architecture Decision Records")}
-${extractSection(architecture, "Technology Stack")}
+${extractSection(architectureIndex, "Architecture Decision Records")}
+${extractSection(architectureIndex, "Technology Stack")}

 ## 4. Implementation Scope
-${extractSection(epics, "Epic Overview")}
-${extractSection(epics, "MVP Scope")}
+${extractSection(epicsIndex, "Epic Overview")}
+${extractSection(epicsIndex, "MVP Scope")}

 ## Feature Decomposition
-${extractFeatureTable(epics)}
+${extractSection(epicsIndex, "Traceability Matrix")}

 ## Appendix: Source Documents
 | Document | Path | Description |
 |----------|------|-------------|
 | Product Brief | ${workDir}/product-brief.md | Vision, goals, scope |
-| Requirements | ${workDir}/requirements.md | Functional + non-functional requirements |
-| Architecture | ${workDir}/architecture.md | ADRs, tech stack, components |
-| Epics | ${workDir}/epics.md | Epic/Story breakdown |
+| Requirements | ${workDir}/requirements/ | _index.md + REQ-*.md + NFR-*.md |
+| Architecture | ${workDir}/architecture/ | _index.md + ADR-*.md |
+| Epics | ${workDir}/epics/ | _index.md + EPIC-*.md |
 | Readiness Report | ${workDir}/readiness-report.md | Quality validation |
 `);
-// D.2: feature-index.json (each Epic mapped to a Feature)
-// Path: feature-specs/feature-index.json (matches context-search-agent discovery at line 344)
-const epicsList = parseEpics(epics); // Extract: id, slug, name, description, mvp, stories[]
-Write(`${brainstormDir}/feature-specs/feature-index.json`, JSON.stringify({
+// D.2: feature-index.json (each EPIC file mapped to a Feature)
+// Path: feature-specs/feature-index.json (matches context-search-agent discovery)
+// Directly read from individual EPIC-*.md files (no monolithic parsing needed)
+const features = epicFiles.map(epicFile => {
+  const content = Read(epicFile);
+  const fm = parseFrontmatter(content);            // Extract YAML frontmatter
+  const basename = path.basename(epicFile, '.md'); // EPIC-001-slug
+  const epicNum = fm.id.replace('EPIC-', '');      // 001
+  const slug = basename.replace(/^EPIC-\d+-/, ''); // slug
+  return {
+    id: `F-${epicNum}`,
+    slug: slug,
+    name: extractTitle(content),
+    description: extractSection(content, "Description"),
+    priority: fm.mvp ? "High" : "Medium",
+    spec_path: `${brainstormDir}/feature-specs/F-${epicNum}-${slug}.md`,
+    source_epic: fm.id,
+    source_file: epicFile
+  };
+});
+const featureIndex = {
   version: "1.0",
   source: "spec-generator",
   spec_session: specConfig.session_id,
-  features: epicsList.map(epic => ({
-    id: `F-${epic.id.replace('EPIC-', '')}`,
-    slug: epic.slug,
-    name: epic.name,
-    description: epic.description,
-    priority: epic.mvp ? "High" : "Medium",
-    spec_path: `${brainstormDir}/feature-specs/F-${epic.id.replace('EPIC-','')}-${epic.slug}.md`,
-    source_epic: epic.id,
-    stories: epic.stories
-  })),
+  features,
   cross_cutting_specs: []
-}, null, 2));
+};
+Write(`${brainstormDir}/feature-specs/feature-index.json`, JSON.stringify(featureIndex, null, 2));
-// D.3: Individual feature-spec files (one per Epic)
-epicsList.forEach(epic => {
-  const epicDetail = extractEpicDetail(epics, epic.id);
-  const relatedReqs = extractRelatedRequirements(requirements, epic.id);
-  Write(`${brainstormDir}/feature-specs/F-${epic.id.replace('EPIC-','')}-${epic.slug}.md`, `
-# Feature Spec: ${epic.id} - ${epic.name}
+// D.3: Feature-spec files — directly adapt from individual EPIC-*.md files
+// Since Epics are already individual documents, transform format directly
+// Filename pattern: F-{num}-{slug}.md (matches context-search-agent glob F-*-*.md)
+features.forEach(feature => {
+  const epicContent = Read(feature.source_file);
+  Write(feature.spec_path, `
+# Feature Spec: ${feature.source_epic} - ${feature.name}

-**Source**: ${workDir}/epics.md
-**Priority**: ${epic.mvp ? "MVP" : "Post-MVP"}
-**Related Requirements**: ${relatedReqs.join(', ')}
+**Source**: ${feature.source_file}
+**Priority**: ${feature.priority === "High" ? "MVP" : "Post-MVP"}

-## Scope
-${epicDetail.scope}
+## Description
+${extractSection(epicContent, "Description")}

 ## Stories
-${epicDetail.stories.map(s => `- ${s.id}: ${s.title} (${s.estimate})`).join('\n')}
+${extractSection(epicContent, "Stories")}

-## Acceptance Criteria
-${epicDetail.acceptanceCriteria}
+## Requirements
+${extractSection(epicContent, "Requirements")}

-## Architecture Notes
-${extractArchitectureNotes(architecture, epic.id)}
+## Architecture
+${extractSection(epicContent, "Architecture")}
 `);
 });

 // Step E: Invoke downstream workflow
 // context-search-agent will auto-discover .brainstorming/ files
 // → context-package.json.brainstorm_artifacts populated
-// → action-planning-agent loads guidance_specification (priority 1) + feature_index (priority 2)
+// → action-planning-agent loads guidance_specification (P1) + feature_index (P2)
 if (selection === "Full planning") {
   Skill(skill="workflow:plan", args=`"${structuredDesc}"`)
 } else {
@@ -341,11 +375,13 @@ ${extractArchitectureNotes(architecture, epic.id)}
 }

 if (selection === "Create Issues") {
-  // For each Epic, create an issue
-  const epics = Read(`${workDir}/epics.md`);
-  const epicsList = parseEpics(epics);
-  epicsList.forEach(epic => {
-    Skill(skill="issue:new", args=`"${epic.name}: ${epic.description}"`)
+  // For each EPIC file, create an issue (read directly from individual files)
+  const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
+  epicFiles.forEach(epicFile => {
+    const content = Read(epicFile);
+    const title = extractTitle(content);
+    const description = extractSection(content, "Description");
+    Skill(skill="issue:new", args=`"${title}: ${description}"`)
   });
 }
@@ -354,12 +390,17 @@ if (selection === "Create Issues") {

 #### Helper Functions Reference (pseudocode)

-The following helper functions are used in the handoff bridge. They operate on the markdown content loaded from spec documents:
+The following helper functions are used in the handoff bridge. They operate on markdown content from individual spec files:

 ```javascript
-// Extract the first MVP Epic's title + scope as a one-line task description
-function extractFirstMvpEpicDescription(epicsContent) {
-  // Find first Epic marked as MVP, return: "Epic Name - Brief scope description"
+// Extract title from a markdown document (first # heading)
+function extractTitle(markdown) {
+  // Return the text after the first # heading (e.g., "# EPIC-001: Title" → "Title")
 }

+// Parse YAML frontmatter from markdown (between --- markers)
+function parseFrontmatter(markdown) {
+  // Return object with: id, priority, mvp, size, requirements, architecture, dependencies
+}
+
 // Extract GOAL/SCOPE from spec-summary frontmatter or ## sections
@@ -370,31 +411,6 @@ function extractScope(specSummary) { /* Return the Scope/MVP boundary */ }
 function extractSection(markdown, sectionName) {
   // Return content between ## {sectionName} and next ## heading
 }
-
-// Build a markdown table of Epics from epics.md
-function extractFeatureTable(epicsContent) {
-  // Return: | Epic ID | Name | Priority | Story Count |
-}
-
-// Parse epics.md into structured Epic objects
-function parseEpics(epicsContent) {
-  // Returns: [{ id, slug, name, description, mvp, stories[] }]
-}
-
-// Extract detailed Epic section (scope, stories, acceptance criteria)
-function extractEpicDetail(epicsContent, epicId) {
-  // Returns: { scope, stories[], acceptanceCriteria }
-}
-
-// Find requirements related to a specific Epic
-function extractRelatedRequirements(requirementsContent, epicId) {
-  // Returns: ["REQ-001", "REQ-003"] based on traceability references
-}
-
-// Extract architecture notes relevant to a specific Epic
-function extractArchitectureNotes(architectureContent, epicId) {
-  // Returns: relevant ADRs, component references, tech stack notes
-}
 ```
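The remaining stubs stay as pseudocode in the phase file. For reference, one possible line-based reading of each contract (a sketch; a real YAML parser would be more robust, and list-valued frontmatter keys are not handled here):

```javascript
function extractTitle(markdown) {
  const heading = markdown.match(/^#\s+(.+)$/m);
  // "# EPIC-001: Title" → "Title"
  return heading ? heading[1].replace(/^EPIC-\d+:\s*/, '').trim() : '';
}

function parseFrontmatter(markdown) {
  const block = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!block) return {};
  return Object.fromEntries(
    block[1]
      .split('\n')
      .filter(line => line.includes(':'))
      .map(line => {
        const [key, ...rest] = line.split(':');
        const raw = rest.join(':').trim();
        // Minimal coercion: booleans only; everything else stays a raw string
        return [key.trim(), raw === 'true' ? true : raw === 'false' ? false : raw];
      })
  );
}

function extractSection(markdown, sectionName) {
  const pattern = new RegExp(`^## ${sectionName}\\s*\\n([\\s\\S]*?)(?=^## |$(?![\\s\\S]))`, 'm');
  const match = markdown.match(pattern);
  return match ? match[1].trim() : '';
}
```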
## Output
@@ -405,14 +421,14 @@ function extractArchitectureNotes(architectureContent, epicId) {

 ## Quality Checklist

-- [ ] All 4 documents validated (product-brief, requirements, architecture, epics)
-- [ ] All frontmatter parseable and valid
-- [ ] Cross-references checked
+- [ ] All document directories validated (product-brief, requirements/, architecture/, epics/)
+- [ ] All frontmatter parseable and valid (index + individual files)
+- [ ] Cross-references checked (relative links between directories)
 - [ ] Overall quality score calculated
 - [ ] No unresolved Error-severity issues
 - [ ] Traceability matrix generated
 - [ ] spec-summary.md created
-- [ ] All document statuses updated to 'complete'
+- [ ] All document statuses updated to 'complete' (all files in all directories)
 - [ ] Handoff options presented

 ## Completion
@@ -421,13 +437,13 @@ This is the final phase. The specification package is ready for execution handof

 ### Output Files Manifest

-| File | Phase | Description |
+| Path | Phase | Description |
 |------|-------|-------------|
 | `spec-config.json` | 1 | Session configuration and state |
 | `discovery-context.json` | 1 | Codebase exploration (optional) |
 | `product-brief.md` | 2 | Product brief with multi-perspective synthesis |
-| `requirements.md` | 3 | Detailed PRD with MoSCoW priorities |
-| `architecture.md` | 4 | Architecture decisions and component design |
-| `epics.md` | 5 | Epic/Story breakdown with dependencies |
+| `requirements/` | 3 | Directory: `_index.md` + `REQ-*.md` + `NFR-*.md` |
+| `architecture/` | 4 | Directory: `_index.md` + `ADR-*.md` |
+| `epics/` | 5 | Directory: `_index.md` + `EPIC-*.md` |
 | `readiness-report.md` | 6 | Quality validation report |
 | `spec-summary.md` | 6 | One-page executive summary |
.claude/skills/spec-generator/templates/architecture-doc.md (new file)
@@ -0,0 +1,254 @@
# Architecture Document Template (Directory Structure)

Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
| Output Location | `{workDir}/architecture/` |

## Output Structure

```
{workDir}/architecture/
├── _index.md           # Overview, components, tech stack, data model, security
├── ADR-001-{slug}.md   # Individual Architecture Decision Record
├── ADR-002-{slug}.md
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 4
document_type: architecture-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
- ../requirements/_index.md
---

# Architecture: {product_name}

{executive_summary - high-level architecture approach and key decisions}

## System Overview

### Architecture Style
{description of chosen architecture style: microservices, monolith, serverless, etc.}

### System Context Diagram

```mermaid
C4Context
  title System Context Diagram
  Person(user, "User", "Primary user")
  System(system, "{product_name}", "Core system")
  System_Ext(ext1, "{external_system}", "{description}")
  Rel(user, system, "Uses")
  Rel(system, ext1, "Integrates with")
```
## Component Architecture

### Component Diagram

```mermaid
graph TD
  subgraph "{product_name}"
    A[Component A] --> B[Component B]
    B --> C[Component C]
    A --> D[Component D]
  end
  B --> E[External Service]
```

### Component Descriptions

| Component | Responsibility | Technology | Dependencies |
|-----------|---------------|------------|--------------|
| {component_name} | {what it does} | {tech stack} | {depends on} |

## Technology Stack

### Core Technologies

| Layer | Technology | Version | Rationale |
|-------|-----------|---------|-----------|
| Frontend | {technology} | {version} | {why chosen} |
| Backend | {technology} | {version} | {why chosen} |
| Database | {technology} | {version} | {why chosen} |
| Infrastructure | {technology} | {version} | {why chosen} |

### Key Libraries & Frameworks

| Library | Purpose | License |
|---------|---------|---------|
| {library_name} | {purpose} | {license} |

## Architecture Decision Records

| ADR | Title | Status | Key Choice |
|-----|-------|--------|------------|
| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |

## Data Architecture

### Data Model

```mermaid
erDiagram
  ENTITY_A ||--o{ ENTITY_B : "has many"
  ENTITY_A {
    string id PK
    string name
    datetime created_at
  }
  ENTITY_B {
    string id PK
    string entity_a_id FK
    string value
  }
```

### Data Storage Strategy

| Data Type | Storage | Retention | Backup |
|-----------|---------|-----------|--------|
| {type} | {storage solution} | {retention policy} | {backup strategy} |

## API Design

### API Overview

| Endpoint | Method | Purpose | Auth |
|----------|--------|---------|------|
| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |

## Security Architecture

### Security Controls

| Control | Implementation | Requirement |
|---------|---------------|-------------|
| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
## Infrastructure & Deployment

### Deployment Architecture

{description of deployment model: containers, serverless, VMs, etc.}

### Environment Strategy

| Environment | Purpose | Configuration |
|-------------|---------|---------------|
| Development | Local development | {config} |
| Staging | Pre-production testing | {config} |
| Production | Live system | {config} |

## Codebase Integration

{if has_codebase is true:}

### Existing Code Mapping

| New Component | Existing Module | Integration Type | Notes |
|--------------|----------------|------------------|-------|
| {component} | {existing module path} | Extend/Replace/New | {notes} |

### Migration Notes
{any migration considerations for existing code}

## Quality Attributes

| Attribute | Target | Measurement | ADR Reference |
|-----------|--------|-------------|---------------|
| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |

## Risks & Mitigations

| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |

## Open Questions

- [ ] {architectural question 1}
- [ ] {architectural question 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
- Next: [Epics & Stories](../epics/_index.md)
```

---

## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)

```markdown
---
id: ADR-{NNN}
status: Accepted
traces_to: [{REQ-NNN}, {NFR-X-NNN}]
date: {timestamp}
---

# ADR-{NNN}: {decision_title}

## Context

{what is the situation that motivates this decision}

## Decision

{what is the chosen approach}

## Alternatives Considered

| Option | Pros | Cons |
|--------|------|------|
| {option_1 - chosen} | {pros} | {cons} |
| {option_2} | {pros} | {cons} |
| {option_3} | {pros} | {cons} |

## Consequences

- **Positive**: {positive outcomes}
- **Negative**: {tradeoffs accepted}
- **Risks**: {risks to monitor}

## Traces

- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | ADR/requirement number |
| `{slug}` | Auto-generated | Kebab-case from decision title |
| `{has_codebase}` | spec-config.json | Whether existing codebase exists |
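How a phase might substitute these variables into the template blocks above — a minimal sketch assuming plain `{placeholder}` string substitution (the committed phases only describe the fill step in comments):

```javascript
// Hypothetical placeholder substitution for the template blocks in this file
function fillTemplate(templateBlock, vars) {
  return templateBlock.replace(/\{([A-Za-z0-9_]+)\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match // unknown placeholders are left untouched
  );
}

// e.g. fillTemplate(adrTemplate, { NNN: "001", slug: "data-storage", decision_title: "Use PostgreSQL", timestamp: new Date().toISOString() });
```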
.claude/skills/spec-generator/templates/epics-template.md (new file)
@@ -0,0 +1,196 @@
# Epics & Stories Template (Directory Structure)
|
||||
|
||||
Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5.
|
||||
|
||||
## Usage Context
|
||||
|
||||
| Phase | Usage |
|
||||
|-------|-------|
|
||||
| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
|
||||
| Output Location | `{workDir}/epics/` |
|
||||
|
||||
## Output Structure
|
||||
|
||||
```
|
||||
{workDir}/epics/
|
||||
├── _index.md # Overview table + dependency map + MVP scope + execution order
|
||||
├── EPIC-001-{slug}.md # Individual Epic with its Stories
|
||||
├── EPIC-002-{slug}.md
|
||||
└── ...
|
||||
```
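
For orientation, a minimal Node.js sketch of how Phase 5 could scaffold this structure. The paths and epic data are illustrative assumptions, not the skill's actual implementation:

```js
// Illustrative scaffold for the epics/ directory (not the skill's real generator).
const fs = require("node:fs");
const path = require("node:path");

const workDir = ".workflow/.spec/SPEC-example-2026-01-01"; // hypothetical session dir
const epicsDir = path.join(workDir, "epics");
fs.mkdirSync(epicsDir, { recursive: true });

const epics = [
  { id: "EPIC-001", slug: "user-auth", title: "User Authentication" },
  { id: "EPIC-002", slug: "reporting", title: "Reporting" },
];

// One file per epic, plus the _index.md overview.
for (const epic of epics) {
  fs.writeFileSync(
    path.join(epicsDir, `${epic.id}-${epic.slug}.md`),
    `# ${epic.id}: ${epic.title}\n`
  );
}
fs.writeFileSync(
  path.join(epicsDir, "_index.md"),
  epics.map((e) => `- [${e.id}](${e.id}-${e.slug}.md): ${e.title}`).join("\n") + "\n"
);
```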

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 5
document_type: epics-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
  - ../requirements/_index.md
  - ../architecture/_index.md
---

# Epics & Stories: {product_name}

{executive_summary - overview of epic structure and MVP scope}

## Epic Overview

| Epic ID | Title | Priority | MVP | Stories | Est. Size |
|---------|-------|----------|-----|---------|-----------|
| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |

## Dependency Map

```mermaid
graph LR
    EPIC-001 --> EPIC-002
    EPIC-001 --> EPIC-003
    EPIC-002 --> EPIC-004
    EPIC-003 --> EPIC-005
```

### Dependency Notes
{explanation of why these dependencies exist and suggested execution order}

### Recommended Execution Order
1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
3. ...

## MVP Scope

### MVP Epics
{list of epics included in MVP with justification, linking to each}

### MVP Definition of Done
- [ ] {MVP completion criterion 1}
- [ ] {MVP completion criterion 2}
- [ ] {MVP completion criterion 3}

## Traceability Matrix

| Requirement | Epic | Stories | Architecture |
|-------------|------|---------|--------------|
| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |

## Estimation Summary

| Size | Meaning | Count |
|------|---------|-------|
| S | Small - well-understood, minimal risk | {n} |
| M | Medium - some complexity, moderate risk | {n} |
| L | Large - significant complexity, should consider splitting | {n} |
| XL | Extra Large - high complexity, must split before implementation | {n} |

## Risks & Considerations

| Risk | Affected Epics | Mitigation |
|------|---------------|------------|
| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |

## Open Questions

- [ ] {question about scope or implementation 1}
- [ ] {question about scope or implementation 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
- Handoff to: execution workflows (lite-plan, plan, req-plan)
```
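
The Dependency Map can be regenerated from the `dependencies` lists in the individual EPIC files instead of being maintained by hand. A rough sketch, assuming the simple frontmatter format shown in the EPIC template below (helper logic is hypothetical; a real generator might use a YAML parser):

```js
// Build a Mermaid dependency graph from EPIC frontmatter (illustrative sketch).
const fs = require("node:fs");
const path = require("node:path");

const epicsDir = "epics"; // hypothetical path inside the spec package
const edges = [];

for (const file of fs.readdirSync(epicsDir)) {
  if (!file.startsWith("EPIC-") || !file.endsWith(".md")) continue;
  const text = fs.readFileSync(path.join(epicsDir, file), "utf8");
  const id = (text.match(/^id:\s*(\S+)/m) || [])[1];
  if (!id) continue;
  const deps = (text.match(/^dependencies:\s*\[(.*)\]/m) || [, ""])[1]
    .split(",").map((s) => s.trim()).filter(Boolean);
  // Arrow points from prerequisite epic to the epic that depends on it.
  for (const dep of deps) edges.push(`    ${dep} --> ${id}`);
}

console.log(["graph LR", ...edges].join("\n"));
```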

---

## Template: EPIC-NNN-{slug}.md (Individual Epic)

```markdown
---
id: EPIC-{NNN}
priority: {Must|Should|Could}
mvp: {true|false}
size: {S|M|L|XL}
requirements: [REQ-{NNN}]
architecture: [ADR-{NNN}]
dependencies: [EPIC-{NNN}]
status: draft
---

# EPIC-{NNN}: {epic_title}

**Priority**: {Must|Should|Could}
**MVP**: {Yes|No}
**Estimated Size**: {S|M|L|XL}

## Description

{detailed epic description}

## Requirements

- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}

## Architecture

- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
- Component: {component_name}

## Dependencies

- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}

## Stories

### STORY-{EPIC}-001: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
- [ ] {criterion 3}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)

---

### STORY-{EPIC}-002: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
```
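
Because each epic now lives in its own file with frontmatter, a downstream consumer (for example, the Phase 6 handoff) can read just the fields it needs. A naive sketch, assuming the frontmatter stays limited to `key: value` and `[a, b]` list syntax; a real consumer would likely use a proper YAML parser:

```js
// Naive frontmatter reader for EPIC-*.md files (illustrative; prefer a YAML library in practice).
const fs = require("node:fs");
const path = require("node:path");

function readFrontmatter(file) {
  const text = fs.readFileSync(file, "utf8");
  const match = text.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return {};
  const fields = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    let value = line.slice(idx + 1).trim();
    if (value.startsWith("[") && value.endsWith("]")) {
      value = value.slice(1, -1).split(",").map((s) => s.trim()).filter(Boolean);
    }
    fields[key] = value;
  }
  return fields;
}

const epicsDir = "epics"; // hypothetical path relative to the spec package
const epics = fs
  .readdirSync(epicsDir)
  .filter((f) => f.startsWith("EPIC-") && f.endsWith(".md"))
  .map((f) => ({ file: f, ...readFrontmatter(path.join(epicsDir, f)) }));

// Values stay strings with this naive reader, so the MVP flag is compared as "true".
console.log(epics.filter((e) => e.mvp === "true").map((e) => e.id));
```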

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{EPIC}` | Auto-increment | Epic number (3 digits) |
| `{NNN}` | Auto-increment | Story/requirement number |
| `{slug}` | Auto-generated | Kebab-case from epic/story title |
| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |

.claude/skills/spec-generator/templates/requirements-prd.md
@@ -0,0 +1,224 @@

# Requirements PRD Template (Directory Structure)

Template for generating Product Requirements Document as a directory of individual requirement files in Phase 3.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
| Output Location | `{workDir}/requirements/` |

## Output Structure

```
{workDir}/requirements/
├── _index.md              # Summary + MoSCoW table + traceability matrix + links
├── REQ-001-{slug}.md      # Individual functional requirement
├── REQ-002-{slug}.md
├── NFR-P-001-{slug}.md    # Non-functional: Performance
├── NFR-S-001-{slug}.md    # Non-functional: Security
├── NFR-SC-001-{slug}.md   # Non-functional: Scalability
├── NFR-U-001-{slug}.md    # Non-functional: Usability
└── ...
```
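
The `{type}` prefix in NFR file names encodes the requirement category. A small sketch of the mapping implied by the structure above (the helper name is hypothetical; the listing itself is the authoritative source):

```js
// Category -> file-name prefix mapping implied by the structure above (illustrative).
const NFR_PREFIX = {
  Performance: "P",
  Security: "S",
  Scalability: "SC",
  Usability: "U",
};

const nfrFileName = (category, n, slug) =>
  `NFR-${NFR_PREFIX[category]}-${String(n).padStart(3, "0")}-${slug}.md`;

console.log(nfrFileName("Scalability", 1, "horizontal-scaling"));
// -> NFR-SC-001-horizontal-scaling.md
```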

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 3
document_type: requirements-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
---

# Requirements: {product_name}

{executive_summary - brief overview of what this PRD covers and key decisions}

## Requirement Summary

| Priority | Count | Coverage |
|----------|-------|----------|
| Must Have | {n} | {description of must-have scope} |
| Should Have | {n} | {description of should-have scope} |
| Could Have | {n} | {description of could-have scope} |
| Won't Have | {n} | {description of explicitly excluded} |

## Functional Requirements

| ID | Title | Priority | Traces To |
|----|-------|----------|-----------|
| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |

## Non-Functional Requirements

### Performance

| ID | Title | Target |
|----|-------|--------|
| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |

### Security

| ID | Title | Standard |
|----|-------|----------|
| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |

### Scalability

| ID | Title | Target |
|----|-------|--------|
| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |

### Usability

| ID | Title | Target |
|----|-------|--------|
| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |

## Data Requirements

### Data Entities

| Entity | Description | Key Attributes |
|--------|-------------|----------------|
| {entity_name} | {description} | {attr1, attr2, attr3} |

### Data Flows

{description of key data flows, optionally with Mermaid diagram}

## Integration Requirements

| System | Direction | Protocol | Data Format | Notes |
|--------|-----------|----------|-------------|-------|
| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |

## Constraints & Assumptions

### Constraints
- {technical or business constraint 1}
- {technical or business constraint 2}

### Assumptions
- {assumption 1 - must be validated}
- {assumption 2 - must be validated}

## Priority Rationale

{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}

## Traceability Matrix

| Goal | Requirements |
|------|-------------|
| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [Product Brief](../product-brief.md)
- Next: [Architecture](../architecture/_index.md)
```
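
The Requirement Summary counts can be derived from the `priority` field in each requirement's frontmatter rather than counted manually. A hedged sketch using a naive frontmatter scan (paths are hypothetical; a real generator might track the counts while writing the files):

```js
// Count requirements per MoSCoW priority by scanning frontmatter (illustrative only).
const fs = require("node:fs");
const path = require("node:path");

const reqDir = "requirements"; // hypothetical path inside the spec package
const counts = { Must: 0, Should: 0, Could: 0, "Won't": 0 };

for (const file of fs.readdirSync(reqDir)) {
  if (file === "_index.md" || !file.endsWith(".md")) continue;
  const text = fs.readFileSync(path.join(reqDir, file), "utf8");
  const match = text.match(/^priority:\s*(.+)$/m);
  if (match && match[1].trim() in counts) counts[match[1].trim()]++;
}

console.log(counts); // e.g. { Must: 8, Should: 4, Could: 3, "Won't": 1 }
```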

---

## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)

```markdown
---
id: REQ-{NNN}
type: functional
priority: {Must|Should|Could|Won't}
traces_to: [G-{NNN}]
status: draft
---

# REQ-{NNN}: {requirement_title}

**Priority**: {Must|Should|Could|Won't}

## Description

{detailed requirement description}

## User Story

As a {persona}, I want to {action} so that {benefit}.

## Acceptance Criteria

- [ ] {specific, testable criterion 1}
- [ ] {specific, testable criterion 2}
- [ ] {specific, testable criterion 3}

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```
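
Since requirements, ADRs, and epics now reference each other through relative links, it can be worth checking that those links resolve after generation. A minimal sketch (the file path is hypothetical, and this only covers plain `.md` links without fragments):

```js
// Check that relative markdown links in a requirement file resolve (illustrative).
const fs = require("node:fs");
const path = require("node:path");

function checkLinks(file) {
  const text = fs.readFileSync(file, "utf8");
  const broken = [];
  for (const [, target] of text.matchAll(/\]\(([^)#]+\.md)\)/g)) {
    const resolved = path.resolve(path.dirname(file), target);
    if (!fs.existsSync(resolved)) broken.push(target);
  }
  return broken;
}

console.log(checkLinks("requirements/REQ-001-example.md")); // hypothetical file
```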

---

## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)

```markdown
---
id: NFR-{type}-{NNN}
type: non-functional
category: {Performance|Security|Scalability|Usability}
priority: {Must|Should|Could}
status: draft
---

# NFR-{type}-{NNN}: {requirement_title}

**Category**: {Performance|Security|Scalability|Usability}
**Priority**: {Must|Should|Could}

## Requirement

{detailed requirement description}

## Metric & Target

| Metric | Target | Measurement Method |
|--------|--------|--------------------|
| {metric} | {target value} | {how measured} |

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
| `{slug}` | Auto-generated | Kebab-case from requirement title |
| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |