mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-14 17:41:22 +08:00
Add unit tests for various components and stores in the terminal dashboard
- Implement tests for AssociationHighlight, DashboardToolbar, QueuePanel, SessionGroupTree, and TerminalDashboardPage to ensure proper functionality and state management.
- Create tests for cliSessionStore, issueQueueIntegrationStore, queueExecutionStore, queueSchedulerStore, sessionManagerStore, and terminalGridStore to validate state resets and workspace scoping.
- Mock necessary dependencies and state management hooks to isolate tests and ensure accurate behavior.
110
.codex/skills/spec-generator/README.md
Normal file
@@ -0,0 +1,110 @@

# Spec Generator

Structured specification document generator producing a complete document chain (Product Brief -> PRD -> Architecture -> Epics).

## Usage

```bash
# Via workflow command
/workflow:spec "Build a task management system"
/workflow:spec -y "User auth with OAuth2"    # Auto mode
/workflow:spec -c "task management"          # Resume session
```

## Architecture

```
spec-generator/
|- SKILL.md                              # Entry point: metadata + architecture + flow
|- phases/
|  |- 01-discovery.md                    # Seed analysis + codebase exploration + spec type selection
|  |- 01-5-requirement-clarification.md  # Interactive requirement expansion
|  |- 02-product-brief.md                # Multi-CLI product brief + glossary generation
|  |- 03-requirements.md                 # PRD with MoSCoW priorities + RFC 2119 constraints
|  |- 04-architecture.md                 # Architecture + state machine + config model + observability
|  |- 05-epics-stories.md                # Epic/Story decomposition
|  |- 06-readiness-check.md              # Quality validation + handoff + iterate option
|  |- 06-5-auto-fix.md                   # Auto-fix loop for readiness issues (max 2 iterations)
|  |- 07-issue-export.md                 # Issue creation from Epics + export report
|- specs/
|  |- document-standards.md              # Format, frontmatter, naming rules
|  |- quality-gates.md                   # Per-phase quality criteria + iteration tracking
|  |- glossary-template.json             # Terminology glossary schema
|- templates/
|  |- product-brief.md                   # Product brief template (+ Concepts & Non-Goals)
|  |- requirements-prd.md                # PRD template
|  |- architecture-doc.md                # Architecture template (+ state machine, config, observability)
|  |- epics-template.md                  # Epic/Story template (+ versioning)
|  |- profiles/                          # Spec type specialization profiles
|  |  |- service-profile.md              # Service spec: lifecycle, observability, trust
|  |  |- api-profile.md                  # API spec: endpoints, auth, rate limiting
|  |  |- library-profile.md              # Library spec: public API, examples, compatibility
|- README.md                             # This file
```

## 7-Phase Pipeline

| Phase | Name | Output | CLI Tools | Key Features |
|-------|------|--------|-----------|--------------|
| 1 | Discovery | spec-config.json | Gemini (analysis) | Spec type selection |
| 1.5 | Req Expansion | refined-requirements.json | Gemini (analysis) | Multi-round interactive |
| 2 | Product Brief *(Agent)* | product-brief.md, glossary.json | Gemini + Codex + Claude (parallel) | Terminology glossary |
| 3 | Requirements *(Agent)* | requirements/ | Gemini + **Codex review** | RFC 2119, data model |
| 4 | Architecture *(Agent)* | architecture/ | Gemini + Codex (sequential) | State machine, config, observability |
| 5 | Epics & Stories *(Agent)* | epics/ | Gemini + **Codex review** | Glossary consistency |
| 6 | Readiness Check | readiness-report.md, spec-summary.md | Gemini + **Codex** (parallel) | Per-requirement verification |
| 6.5 | Auto-Fix *(Agent)* | Updated phase docs | Gemini (analysis) | Max 2 iterations |
| 7 | Issue Export | issue-export-report.md | ccw issue create | Epic→Issue mapping, wave assignment |

## Runtime Output

```
.workflow/.spec/SPEC-{slug}-{YYYY-MM-DD}/
|- spec-config.json            # Session state
|- discovery-context.json      # Codebase context (optional)
|- refined-requirements.json   # Phase 1.5 (requirement expansion)
|- glossary.json               # Phase 2 (terminology)
|- product-brief.md            # Phase 2
|- requirements/               # Phase 3 (directory)
|  |- _index.md
|  |- REQ-*.md
|  └── NFR-*.md
|- architecture/               # Phase 4 (directory)
|  |- _index.md
|  └── ADR-*.md
|- epics/                      # Phase 5 (directory)
|  |- _index.md
|  └── EPIC-*.md
|- readiness-report.md         # Phase 6
|- spec-summary.md             # Phase 6
└── issue-export-report.md     # Phase 7 (issue export)
```

## Flags

- `-y|--yes`: Auto mode - skip all interactive confirmations
- `-c|--continue`: Resume from last completed phase

Spec type is selected interactively in Phase 1 (defaults to `service` in auto mode).
Available types: `service`, `api`, `library`, `platform`
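
A minimal sketch of how these flags might be parsed (an illustrative helper, not part of the skill itself):

```javascript
// Illustrative parsing of the -y/--yes and -c/--continue flags described above
function parseFlags(args) {
  return {
    auto: args.includes("-y") || args.includes("--yes"),
    resume: args.includes("-c") || args.includes("--continue"),
    rest: args.filter(a => !["-y", "--yes", "-c", "--continue"].includes(a)),
  };
}

const flags = parseFlags(["-y", "User auth with OAuth2"]);
console.log(flags.auto, flags.resume); // true false
```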

## Handoff

After Phase 6, choose an execution path:

- `Export Issues (Phase 7)` - Create issues per Epic with spec links → team-planex
- `workflow-lite-plan` - Execute per Epic
- `workflow:req-plan-with-file` - Roadmap decomposition
- `workflow-plan` - Full planning
- `Iterate & improve` - Re-run failed phases (max 2 iterations)

## Design Principles

- **Document chain**: Each phase builds on previous outputs
- **Multi-perspective**: Gemini/Codex/Claude provide different viewpoints
- **Template-driven**: Consistent format via templates + frontmatter
- **Resumable**: spec-config.json tracks completed phases
- **Pure documentation**: No code generation - clean handoff to execution workflows
- **Type-specialized**: Profiles adapt templates to service/api/library/platform requirements
- **Iterative quality**: Phase 6.5 auto-fix repairs issues, max 2 iterations before handoff
- **Terminology-first**: glossary.json ensures consistent terminology across all documents
- **Agent-delegated**: Heavy document phases (2-5, 6.5) run in doc-generator agents to minimize main context usage
425
.codex/skills/spec-generator/SKILL.md
Normal file
@@ -0,0 +1,425 @@

---
name: spec-generator
description: Specification generator - 7-phase document chain producing product brief, PRD, architecture, epics, and issues. Agent-delegated heavy phases (2-5, 6.5) with Codex review gates. Triggers on "generate spec", "create specification", "spec generator", "workflow:spec".
allowed-tools: Agent, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, Read, Write, Edit, Bash, Glob, Grep, Skill
---

# Spec Generator

Structured specification document generator producing a complete specification package (Product Brief, PRD, Architecture, Epics, Issues) through 7 sequential phases with multi-CLI analysis, Codex review gates, and interactive refinement. Heavy document phases are delegated to `doc-generator` agents to minimize main context usage. **Document generation only** - execution handoff via issue export to team-planex or existing workflows.

## Architecture Overview

```
Phase 0: Specification Study (Read specs/ + templates/ - mandatory prerequisite) [Inline]
    |
Phase 1: Discovery -> spec-config.json + discovery-context.json [Inline]
    | (includes spec_type selection)
Phase 1.5: Req Expansion -> refined-requirements.json [Inline]
    | (interactive discussion + CLI gap analysis)
Phase 2: Product Brief -> product-brief.md + glossary.json [Agent]
    | (3-CLI parallel + synthesis)
Phase 3: Requirements (PRD) -> requirements/ (_index.md + REQ-*.md + NFR-*.md) [Agent]
    | (Gemini + Codex review)
Phase 4: Architecture -> architecture/ (_index.md + ADR-*.md) [Agent]
    | (Gemini + Codex review)
Phase 5: Epics & Stories -> epics/ (_index.md + EPIC-*.md) [Agent]
    | (Gemini + Codex review)
Phase 6: Readiness Check -> readiness-report.md + spec-summary.md [Inline]
    | (Gemini + Codex dual validation + per-req verification)
    ├── Pass (>=80%): Handoff or Phase 7
    ├── Review (60-79%): Handoff with caveats or Phase 7
    └── Fail (<60%): Phase 6.5 Auto-Fix (max 2 iterations)
            |
        Phase 6.5: Auto-Fix -> Updated Phase 2-5 documents [Agent]
            |
            └── Re-run Phase 6 validation
                |
Phase 7: Issue Export -> issue-export-report.md [Inline]
    (Epic→Issue mapping, ccw issue create, wave assignment)
```

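
The score gates in the Phase 6 branch above (Pass >= 80%, Review 60-79%, Fail < 60%) can be sketched as:

```javascript
// Readiness gate thresholds from the Phase 6 branch of the diagram
function readinessGate(score) {
  if (score >= 80) return "pass";
  if (score >= 60) return "review";
  return "auto-fix"; // triggers Phase 6.5 (max 2 iterations)
}

console.log([85, 72, 40].map(readinessGate).join(",")); // pass,review,auto-fix
```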
## Key Design Principles

1. **Document Chain**: Each phase builds on previous outputs, creating a traceable specification chain from idea to executable issues
2. **Agent-Delegated**: Heavy document phases (2-5, 6.5) run in `doc-generator` agents, keeping main context lean (summaries only)
3. **Multi-Perspective Analysis**: CLI tools (Gemini/Codex/Claude) provide product, technical, and user perspectives in parallel
4. **Codex Review Gates**: Phases 3, 5, and 6 include Codex CLI review for quality validation before output
5. **Interactive by Default**: Each phase offers user confirmation points; the `-y` flag enables full auto mode
6. **Resumable Sessions**: `spec-config.json` tracks completed phases; the `-c` flag resumes from the last checkpoint
7. **Template-Driven**: All documents are generated from standardized templates with YAML frontmatter
8. **Pure Documentation**: No code generation or execution - clean handoff via issue export to execution workflows
9. **Spec Type Specialization**: Templates adapt to the spec type (service/api/library/platform) via profiles for domain-specific depth
10. **Iterative Quality**: The Phase 6.5 auto-fix loop repairs issues found in the readiness check (max 2 iterations)
11. **Terminology Consistency**: glossary.json is generated in Phase 2 and injected into all subsequent phases

---

## Mandatory Prerequisites

> **Do NOT skip**: Before performing any operations, you **must** read the following documents in full. Proceeding without reading the specifications will produce outputs that do not meet quality standards.

### Specification Documents (Required Reading)

| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/document-standards.md](specs/document-standards.md) | Document format, frontmatter, naming conventions | **P0 - Must read before execution** |
| [specs/quality-gates.md](specs/quality-gates.md) | Per-phase quality gate criteria and scoring | **P0 - Must read before execution** |

### Template Files (Must read before generation)

| Document | Purpose |
|----------|---------|
| [templates/product-brief.md](templates/product-brief.md) | Product brief document template |
| [templates/requirements-prd.md](templates/requirements-prd.md) | PRD document template |
| [templates/architecture-doc.md](templates/architecture-doc.md) | Architecture document template |
| [templates/epics-template.md](templates/epics-template.md) | Epic/Story document template |

---

## Execution Flow

```
Input Parsing:
|- Parse $ARGUMENTS: extract idea/topic, flags (-y, -c, -m)
|- Detect mode: new | continue
|- If continue: read spec-config.json, resume from first incomplete phase
|- If new: proceed to Phase 1

Phase 1: Discovery & Seed Analysis
|- Ref: phases/01-discovery.md
|- Generate session ID: SPEC-{slug}-{YYYY-MM-DD}
|- Parse input (text or file reference)
|- Gemini CLI seed analysis (problem, users, domain, dimensions)
|- Codebase exploration (conditional, if project detected)
|- Spec type selection: service|api|library|platform (interactive, -y defaults to service)
|- User confirmation (interactive, -y skips)
|- Output: spec-config.json, discovery-context.json (optional)

Phase 1.5: Requirement Expansion & Clarification
|- Ref: phases/01-5-requirement-clarification.md
|- CLI gap analysis: completeness scoring, missing dimensions detection
|- Multi-round interactive discussion (max 5 rounds)
|  |- Round 1: present gap analysis + expansion suggestions
|  |- Round N: follow-up refinement based on user responses
|- User final confirmation of requirements
|- Auto mode (-y): CLI auto-expansion without interaction
|- Output: refined-requirements.json

Phase 2: Product Brief [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/02-product-brief.md
|- Agent executes: 3 parallel CLI analyses + synthesis + glossary generation
|- Agent writes: product-brief.md, glossary.json
|- Agent returns: JSON summary {files_created, quality_notes, key_decisions}
|- Orchestrator validates: files exist, spec-config.json updated

Phase 3: Requirements / PRD [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/03-requirements.md
|- Agent executes: Gemini expansion + Codex review (Step 2.5) + priority sorting
|- Agent writes: requirements/ directory (_index.md + REQ-*.md + NFR-*.md)
|- Agent returns: JSON summary {files_created, codex_review_integrated, key_decisions}
|- Orchestrator validates: directory exists, file count matches

Phase 4: Architecture [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/04-architecture.md
|- Agent executes: Gemini analysis + Codex review + codebase mapping
|- Agent writes: architecture/ directory (_index.md + ADR-*.md)
|- Agent returns: JSON summary {files_created, codex_review_rating, key_decisions}
|- Orchestrator validates: directory exists, ADR files present

Phase 5: Epics & Stories [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/05-epics-stories.md
|- Agent executes: Gemini decomposition + Codex review (Step 2.5) + validation
|- Agent writes: epics/ directory (_index.md + EPIC-*.md)
|- Agent returns: JSON summary {files_created, codex_review_integrated, mvp_epic_count}
|- Orchestrator validates: directory exists, MVP epics present

Phase 6: Readiness Check [INLINE + ENHANCED]
|- Ref: phases/06-readiness-check.md
|- Gemini CLI: cross-document validation (completeness, consistency, traceability)
|- Codex CLI: technical depth review (ADR quality, data model, security, observability)
|- Per-requirement verification: iterate all REQ-*.md / NFR-*.md
|  |- Check: AC exists + testable, Brief trace, Story coverage, Arch coverage
|  |- Generate: Per-Requirement Verification table
|- Merge dual CLI scores into quality report
|- Output: readiness-report.md, spec-summary.md
|- Handoff options: Phase 7 (issue export), lite-plan, req-plan, plan, iterate

Phase 6.5: Auto-Fix (conditional) [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/06-5-auto-fix.md + readiness-report.md
|- Agent executes: fix affected Phase 2-5 documents
|- Agent returns: JSON summary {files_modified, issues_fixed, phases_touched}
|- Re-run Phase 6 validation
|- Max 2 iterations, then force handoff

Phase 7: Issue Export [INLINE]
|- Ref: phases/07-issue-export.md
|- Read EPIC-*.md files, assign waves (MVP→wave-1, others→wave-2)
|- Create issues via ccw issue create (one per Epic)
|- Map Epic dependencies to issue dependencies
|- Generate issue-export-report.md
|- Update spec-config.json with issue_ids
|- Handoff: team-planex, wave-1 only, view issues, done

Complete: Full specification package with issues ready for execution

Phase 6/7 → Handoff Bridge (conditional, based on user selection):
├─ team-planex: Execute issues via coordinated team workflow
├─ lite-plan: Extract first MVP Epic description → direct text input
├─ plan / req-plan: Create WFS session + .brainstorming/ bridge files
│  ├─ guidance-specification.md (synthesized from spec outputs)
│  ├─ feature-specs/feature-index.json (Epic → Feature mapping)
│  └─ feature-specs/F-{num}-{slug}.md (one per Epic)
└─ context-search-agent auto-discovers .brainstorming/
   → context-package.json.brainstorm_artifacts populated
   → action-planning-agent consumes: guidance_spec (P1) → feature_index (P2)
```

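
The Phase 7 wave assignment described above (MVP Epics → wave-1, others → wave-2) can be sketched as follows; the epic shape `{ id, mvp }` is an assumption for illustration:

```javascript
// Sketch of Phase 7 wave assignment: MVP epics go to wave-1, the rest to wave-2
// (epic shape { id, mvp: boolean } is assumed, not the skill's actual schema)
function assignWaves(epics) {
  return epics.map(epic => ({ ...epic, wave: epic.mvp ? "wave-1" : "wave-2" }));
}

const waves = assignWaves([
  { id: "EPIC-001", mvp: true },
  { id: "EPIC-002", mvp: false },
]);
console.log(waves.map(e => `${e.id}:${e.wave}`).join(" ")); // EPIC-001:wave-1 EPIC-002:wave-2
```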
## Directory Setup

```javascript
// Session ID generation
const slug = topic.toLowerCase().replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-').slice(0, 40);
const date = new Date().toISOString().slice(0, 10);
const sessionId = `SPEC-${slug}-${date}`;
const workDir = `.workflow/.spec/${sessionId}`;

Bash(`mkdir -p "${workDir}"`);
```

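
For example, the slug logic above maps a free-text topic to a filesystem-safe session ID (standalone sketch with a fixed date for illustration):

```javascript
// Standalone sketch of the session-ID derivation above
function sessionIdFor(topic, now = new Date('2026-02-11')) {
  // Lowercase, collapse non-alphanumeric (and non-CJK) runs to '-', cap at 40 chars
  const slug = topic.toLowerCase().replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-').slice(0, 40);
  const date = now.toISOString().slice(0, 10);
  return `SPEC-${slug}-${date}`;
}

console.log(sessionIdFor('Build a task management system'));
// SPEC-build-a-task-management-system-2026-02-11
```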
## Output Structure

```
.workflow/.spec/SPEC-{slug}-{YYYY-MM-DD}/
├── spec-config.json              # Session configuration + phase state
├── discovery-context.json        # Codebase exploration results (optional)
├── refined-requirements.json     # Phase 1.5: Confirmed requirements after discussion
├── glossary.json                 # Phase 2: Terminology glossary for cross-doc consistency
├── product-brief.md              # Phase 2: Product brief
├── requirements/                 # Phase 3: Detailed PRD (directory)
│   ├── _index.md                 # Summary, MoSCoW table, traceability, links
│   ├── REQ-NNN-{slug}.md         # Individual functional requirement
│   └── NFR-{type}-NNN-{slug}.md  # Individual non-functional requirement
├── architecture/                 # Phase 4: Architecture decisions (directory)
│   ├── _index.md                 # Overview, components, tech stack, links
│   └── ADR-NNN-{slug}.md         # Individual Architecture Decision Record
├── epics/                        # Phase 5: Epic/Story breakdown (directory)
│   ├── _index.md                 # Epic table, dependency map, MVP scope
│   └── EPIC-NNN-{slug}.md        # Individual Epic with Stories
├── readiness-report.md           # Phase 6: Quality report (+ per-req verification table)
├── spec-summary.md               # Phase 6: One-page executive summary
└── issue-export-report.md        # Phase 7: Issue mapping table + spec links
```

## State Management

**spec-config.json** serves as the core state file:

```json
{
  "session_id": "SPEC-xxx-2026-02-11",
  "seed_input": "User input text",
  "input_type": "text",
  "timestamp": "ISO8601",
  "mode": "interactive",
  "complexity": "moderate",
  "depth": "standard",
  "focus_areas": [],
  "spec_type": "service",
  "iteration_count": 0,
  "iteration_history": [],
  "seed_analysis": {
    "problem_statement": "...",
    "target_users": [],
    "domain": "...",
    "constraints": [],
    "dimensions": []
  },
  "has_codebase": false,
  "refined_requirements_file": "refined-requirements.json",
  "issue_ids": [],
  "issues_created": 0,
  "phasesCompleted": [
    { "phase": 1, "name": "discovery", "output_file": "spec-config.json", "completed_at": "ISO8601" },
    { "phase": 1.5, "name": "requirement-clarification", "output_file": "refined-requirements.json", "discussion_rounds": 2, "completed_at": "ISO8601" },
    { "phase": 3, "name": "requirements", "output_dir": "requirements/", "output_index": "requirements/_index.md", "file_count": 8, "completed_at": "ISO8601" }
  ]
}
```

**Resume mechanism**: `-c|--continue` flag reads `spec-config.json.phasesCompleted`, resumes from first incomplete phase.
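
A minimal sketch of that resume check, assuming the `phasesCompleted` shape shown above (the explicit phase-order list is an assumption; Phase 6.5 is conditional and therefore omitted):

```javascript
// Hypothetical resume helper: find the first phase not yet recorded in phasesCompleted
const PHASE_ORDER = [1, 1.5, 2, 3, 4, 5, 6, 7];

function nextPhase(config) {
  const done = new Set((config.phasesCompleted || []).map(p => p.phase));
  return PHASE_ORDER.find(phase => !done.has(phase)) ?? null; // null => all complete
}

const config = { phasesCompleted: [{ phase: 1 }, { phase: 1.5 }, { phase: 2 }] };
console.log(nextPhase(config)); // 3
```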
## Core Rules

1. **Start Immediately**: First action is TaskCreate initialization, then Phase 0 (spec study), then Phase 1
2. **Progressive Phase Loading**: Read phase docs ONLY when that phase is about to execute
3. **Auto-Continue**: All phases run autonomously; check TaskList to execute the next pending phase
4. **Parse Every Output**: Extract required data from each phase for the next phase's context
5. **DO NOT STOP**: Run the 7-phase pipeline continuously until all phases complete or the user exits
6. **Respect -y Flag**: In auto mode, skip all AskUserQuestion calls and use recommended defaults
7. **Respect -c Flag**: In continue mode, load spec-config.json and resume from the checkpoint
8. **Inject Glossary**: From Phase 3 onward, inject glossary.json terms into every CLI prompt
9. **Load Profile**: Read templates/profiles/{spec_type}-profile.md and inject its requirements into Phase 2-5 prompts
10. **Iterate on Failure**: When the Phase 6 score is < 60%, auto-trigger Phase 6.5 (max 2 iterations)
11. **Agent Delegation**: Phases 2-5 and 6.5 MUST be delegated to `doc-generator` agents via the Task tool - never execute them inline
12. **Lean Context**: The orchestrator only sees agent return summaries (JSON), never full document content
13. **Validate Agent Output**: After each agent returns, verify the files exist on disk and spec-config.json was updated

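
Rule 8 (glossary injection) can be sketched as below; the `terms` array and its `term`/`definition` fields are assumptions about the glossary.json schema, used here for illustration only:

```javascript
// Hypothetical glossary injection: prepend glossary terms to a CLI prompt (rule 8)
function injectGlossary(prompt, glossary) {
  const terms = (glossary.terms || [])
    .map(t => `- ${t.term}: ${t.definition}`)
    .join("\n");
  return terms ? `GLOSSARY (use these terms consistently):\n${terms}\n\n${prompt}` : prompt;
}

const out = injectGlossary("Generate PRD", {
  terms: [{ term: "Epic", definition: "A large body of work decomposed into Stories" }],
});
console.log(out.startsWith("GLOSSARY")); // true
```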
## Agent Delegation Protocol

For Phases 2-5 and 6.5, the orchestrator delegates to a `doc-generator` agent via the Task tool. The orchestrator builds a lean context envelope, passing only paths, never file content.

### Context Envelope Template

```javascript
Task({
  subagent_type: "doc-generator",
  run_in_background: false,
  description: `Spec Phase ${N}: ${phaseName}`,
  prompt: `
## Spec Generator - Phase ${N}: ${phaseName}

### Session
- ID: ${sessionId}
- Work Dir: ${workDir}
- Auto Mode: ${autoMode}
- Spec Type: ${specType}

### Input (read from disk)
${inputFilesList}  // Only file paths - agent reads content itself

### Instructions
Read: ${skillDir}/phases/${phaseFile}  // Agent reads the phase doc for full instructions
Apply template: ${skillDir}/templates/${templateFile}

### Glossary (Phase 3+ only)
Read: ${workDir}/glossary.json

### Output
Write files to: ${workDir}/${outputPath}
Update: ${workDir}/spec-config.json (phasesCompleted)
Return: JSON summary { files_created, quality_notes, key_decisions }
`
});
```

### Orchestrator Post-Agent Validation

After each agent returns:

```javascript
// 1. Parse agent return summary
const summary = JSON.parse(agentResult);

// 2. Validate files exist
summary.files_created.forEach(file => {
  const exists = Glob(`${workDir}/${file}`);
  if (!exists.length) throw new Error(`Agent claimed to create ${file} but file not found`);
});

// 3. Verify spec-config.json updated
const config = JSON.parse(Read(`${workDir}/spec-config.json`));
const phaseComplete = config.phasesCompleted.some(p => p.phase === N);
if (!phaseComplete) throw new Error(`Agent did not update phasesCompleted for Phase ${N}`);

// 4. Store summary for downstream context (do NOT read full documents)
phasesSummaries[N] = summary;
```

---

## Reference Documents by Phase

### Phase 1: Discovery

| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/01-discovery.md](phases/01-discovery.md) | Seed analysis and session setup | Phase start |
| [templates/profiles/](templates/profiles/) | Spec type profiles | Spec type selection |
| [specs/document-standards.md](specs/document-standards.md) | Frontmatter format for spec-config.json | Config generation |

### Phase 1.5: Requirement Expansion & Clarification

| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/01-5-requirement-clarification.md](phases/01-5-requirement-clarification.md) | Interactive requirement discussion workflow | Phase start |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality criteria for refined requirements | Validation |

### Phase 2: Product Brief

| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/02-product-brief.md](phases/02-product-brief.md) | Multi-CLI analysis orchestration | Phase start |
| [templates/product-brief.md](templates/product-brief.md) | Document template | Document generation |
| [specs/glossary-template.json](specs/glossary-template.json) | Glossary schema | Glossary generation |

### Phase 3: Requirements

| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/03-requirements.md](phases/03-requirements.md) | PRD generation workflow | Phase start |
| [templates/requirements-prd.md](templates/requirements-prd.md) | Document template | Document generation |

### Phase 4: Architecture

| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/04-architecture.md](phases/04-architecture.md) | Architecture decision workflow | Phase start |
| [templates/architecture-doc.md](templates/architecture-doc.md) | Document template | Document generation |

### Phase 5: Epics & Stories

| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/05-epics-stories.md](phases/05-epics-stories.md) | Epic/Story decomposition | Phase start |
| [templates/epics-template.md](templates/epics-template.md) | Document template | Document generation |

### Phase 6: Readiness Check

| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/06-readiness-check.md](phases/06-readiness-check.md) | Cross-document validation | Phase start |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality scoring criteria | Validation |

### Phase 6.5: Auto-Fix

| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/06-5-auto-fix.md](phases/06-5-auto-fix.md) | Auto-fix workflow for readiness issues | When Phase 6 score < 60% |
| [specs/quality-gates.md](specs/quality-gates.md) | Iteration exit criteria | Validation |

### Phase 7: Issue Export

| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/07-issue-export.md](phases/07-issue-export.md) | Epic→Issue mapping and export | Phase start |
| [specs/quality-gates.md](specs/quality-gates.md) | Issue export quality criteria | Validation |

### Debugging & Troubleshooting

| Issue | Solution Document |
|-------|-------------------|
| Phase execution failed | Refer to the relevant phase documentation |
| Output does not meet expectations | [specs/quality-gates.md](specs/quality-gates.md) |
| Document format issues | [specs/document-standards.md](specs/document-standards.md) |

## Error Handling

| Phase | Error | Blocking? | Action |
|-------|-------|-----------|--------|
| Phase 1 | Empty input | Yes | Error and exit |
| Phase 1 | CLI seed analysis fails | No | Use basic parsing fallback |
| Phase 1.5 | Gap analysis CLI fails | No | Skip to user questions with basic prompts |
| Phase 1.5 | User skips discussion | No | Proceed with seed_analysis as-is |
| Phase 1.5 | Max rounds reached (5) | No | Force confirmation with current state |
| Phase 2 | Single CLI perspective fails | No | Continue with available perspectives |
| Phase 2 | All CLI calls fail | No | Generate basic brief from seed analysis |
| Phase 3 | Gemini CLI fails | No | Use Codex fallback |
| Phase 4 | Architecture review fails | No | Skip review, proceed with initial analysis |
| Phase 5 | Story generation fails | No | Generate epics without detailed stories |
| Phase 6 | Validation CLI fails | No | Generate partial report with available data |
| Phase 6.5 | Auto-fix CLI fails | No | Log failure, proceed to handoff with Review status |
| Phase 6.5 | Max iterations reached | No | Force handoff, report remaining issues |
| Phase 7 | ccw issue create fails for one Epic | No | Log error, continue with remaining Epics |
| Phase 7 | No EPIC files found | Yes | Error and return to Phase 5 |
| Phase 7 | All issue creations fail | Yes | Error with CLI diagnostic, suggest manual creation |
| Phase 2-5 | Agent fails to return | Yes | Retry once, then fall back to inline execution |
| Phase 2-5 | Agent returns incomplete files | No | Log gaps, attempt inline completion for missing files |

### CLI Fallback Chain

Gemini -> Codex -> Claude -> degraded mode (local analysis only)
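
The fallback chain can be sketched as follows; `runCli` is a hypothetical injected runner, not a real API of the skill:

```javascript
// Sketch of the CLI fallback chain: try each tool in order,
// degrade to local-only analysis if every CLI call fails.
const FALLBACK_CHAIN = ["gemini", "codex", "claude"];

function runWithFallback(prompt, runCli) {
  for (const tool of FALLBACK_CHAIN) {
    try {
      return { tool, output: runCli(tool, prompt) };
    } catch (e) {
      // CLI unavailable or failed: try the next tool in the chain
    }
  }
  return { tool: "degraded", output: null }; // local analysis only
}

// Example: gemini fails, codex succeeds
const result = runWithFallback("analyze", (tool) => {
  if (tool === "gemini") throw new Error("unavailable");
  return `${tool} ok`;
});
console.log(result.tool); // codex
```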
@@ -0,0 +1,404 @@

# Phase 1.5: Requirement Expansion & Clarification

Before formal document generation begins, deeply probe, expand, and confirm the original requirements through multi-round interactive discussion.

## Objective

- Identify ambiguities, omissions, and potential risks in the original requirements
- Use CLI-assisted analysis of requirement completeness to generate deep probing questions
- Support multi-round interactive discussion to refine requirements step by step
- Produce a user-confirmed `refined-requirements.json` as high-quality input for subsequent phases

## Input

- Dependency: `{workDir}/spec-config.json` (Phase 1 output)
- Optional: `{workDir}/discovery-context.json` (codebase context)

## Execution Steps

### Step 1: Load Phase 1 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const { seed_analysis, seed_input, focus_areas, has_codebase, depth } = specConfig;

let discoveryContext = null;
if (has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) { /* proceed without */ }
}
```

### Step 2: CLI Gap Analysis & Question Generation

Call the Gemini CLI to analyze the completeness of the original requirements, identify ambiguities, and generate probing questions.

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Deeply analyze the user's initial requirements; identify ambiguities, omissions, and areas needing clarification.
Success: Generate 3-5 high-quality probing questions covering functional scope, boundary conditions, non-functional requirements, user scenarios, and related dimensions.

ORIGINAL SEED INPUT:
${seed_input}

SEED ANALYSIS:
${JSON.stringify(seed_analysis, null, 2)}

FOCUS AREAS: ${focus_areas.join(', ')}
${discoveryContext ? `
CODEBASE CONTEXT:
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join(', ') || 'none'}
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
` : ''}

TASK:
1. Rate the completeness of the current requirement description (1-10, listing missing dimensions)
2. Identify 3-5 key ambiguous areas, each including:
   - A description of the ambiguity (why it is unclear)
   - 1-2 open-ended probing questions
   - 1-2 expansion suggestions (based on domain best practices)
3. Check the following dimensions for omissions:
   - Functional scope boundary (what is in/out of scope?)
   - Core user scenarios and flows
   - Non-functional requirements (performance, security, availability, scalability)
   - Integration points and external dependencies
   - Data model and storage needs
   - Error handling and exception scenarios
4. Provide requirement expansion suggestions based on domain experience

MODE: analysis
EXPECTED: JSON output:
{
  \"completeness_score\": 7,
  \"missing_dimensions\": [\"Performance requirements\", \"Error handling\"],
  \"clarification_areas\": [
    {
      \"area\": \"Scope boundary\",
      \"rationale\": \"Input does not clarify...\",
      \"questions\": [\"Question 1?\", \"Question 2?\"],
      \"suggestions\": [\"Suggestion 1\", \"Suggestion 2\"]
    }
  ],
  \"expansion_recommendations\": [
    {
      \"category\": \"Non-functional\",
      \"recommendation\": \"Consider adding...\",
      \"priority\": \"high|medium|low\"
    }
  ]
}
CONSTRAINTS: Questions must be open-ended, suggestions must be concrete and actionable, respond in the language of the user's input
" --tool gemini --mode analysis`,
  run_in_background: true
});
// Wait for CLI result before continuing
```

解析 CLI 输出为结构化数据:
|
||||
```javascript
|
||||
const gapAnalysis = {
|
||||
completeness_score: 0,
|
||||
missing_dimensions: [],
|
||||
clarification_areas: [],
|
||||
expansion_recommendations: []
|
||||
};
|
||||
// Parse from CLI output
|
||||
```
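The parsing step above is intentionally left as a stub. CLI tools often wrap JSON in markdown fences or surrounding prose, so a tolerant extractor is useful. A minimal sketch (the helper name and fence-stripping logic are assumptions, not part of the workflow's API):

```javascript
// Hypothetical helper: extract the first JSON object from raw CLI output,
// tolerating markdown code fences and surrounding prose. Returns the
// provided fallback when nothing parseable is found.
function parseCliJson(rawOutput, fallback) {
  // Strip markdown code fences such as ```json ... ```
  const cleaned = rawOutput.replace(/```(?:json)?/g, '');
  // Grab the outermost {...} span
  const start = cleaned.indexOf('{');
  const end = cleaned.lastIndexOf('}');
  if (start === -1 || end <= start) return fallback;
  try {
    return JSON.parse(cleaned.slice(start, end + 1));
  } catch (e) {
    return fallback;
  }
}

const raw = 'Here is the analysis:\n```json\n{"completeness_score": 7, "missing_dimensions": ["Error handling"]}\n```';
const gapAnalysis = parseCliJson(raw, {
  completeness_score: 0,
  missing_dimensions: [],
  clarification_areas: [],
  expansion_recommendations: []
});
// gapAnalysis.completeness_score → 7
```

Falling back to the zeroed-out default keeps the loop below well-defined even when the CLI returns malformed output.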
### Step 3: Interactive Discussion Loop

The core multi-round interaction loop. Each round: present analysis results → user responds → update requirement state → decide whether to continue.

```javascript
// Initialize requirement state
let requirementState = {
  problem_statement: seed_analysis.problem_statement,
  target_users: seed_analysis.target_users,
  domain: seed_analysis.domain,
  constraints: seed_analysis.constraints,
  confirmed_features: [],
  non_functional_requirements: [],
  boundary_conditions: [],
  integration_points: [],
  key_assumptions: [],
  discussion_rounds: 0
};

let discussionLog = [];
let userSatisfied = false;

// === Round 1: Present gap analysis results ===
// Display completeness_score, clarification_areas, expansion_recommendations
// Then ask user to respond

while (!userSatisfied && requirementState.discussion_rounds < 5) {
  requirementState.discussion_rounds++;

  if (requirementState.discussion_rounds === 1) {
    // --- First round: present initial gap analysis ---
    // Format questions and suggestions from gapAnalysis for display
    // Present as a structured summary to the user

    AskUserQuestion({
      questions: [
        {
          question: buildDiscussionPrompt(gapAnalysis, requirementState),
          header: "Req Expand",
          multiSelect: false,
          options: [
            { label: "I'll answer", description: "I have answers/feedback to provide (type in 'Other')" },
            { label: "Accept all suggestions", description: "Accept all expansion recommendations as-is" },
            { label: "Skip to generation", description: "Requirements are clear enough, proceed directly" }
          ]
        }
      ]
    });
  } else {
    // --- Subsequent rounds: refine based on user feedback ---
    // Call CLI with accumulated context for follow-up analysis
    Bash({
      command: `ccw cli -p "PURPOSE: Update the requirement understanding based on the user's latest response and identify remaining ambiguities.

CURRENT REQUIREMENT STATE:
${JSON.stringify(requirementState, null, 2)}

DISCUSSION HISTORY:
${JSON.stringify(discussionLog, null, 2)}

USER'S LATEST RESPONSE:
${lastUserResponse}

TASK:
1. Integrate the user's response into the requirement state
2. Identify 1-3 areas that still need clarification or expansion
3. Generate follow-up questions (if necessary)
4. If the requirements are sufficient, output a final requirement summary

MODE: analysis
EXPECTED: JSON output:
{
  \"updated_fields\": { /* fields to merge into requirementState */ },
  \"status\": \"need_more_discussion\" | \"ready_for_confirmation\",
  \"follow_up\": {
    \"remaining_areas\": [{\"area\": \"...\", \"questions\": [\"...\"]}],
    \"summary\": \"...\"
  }
}
CONSTRAINTS: Avoid repeating questions that have already been answered; focus on uncovered areas
" --tool gemini --mode analysis`,
      run_in_background: true
    });
    // Wait for CLI result, parse and continue

    // If status === "ready_for_confirmation", break to confirmation step
    // If status === "need_more_discussion", present follow-up questions

    AskUserQuestion({
      questions: [
        {
          question: buildFollowUpPrompt(followUpAnalysis, requirementState),
          header: "Follow-up",
          multiSelect: false,
          options: [
            { label: "I'll answer", description: "I have more feedback (type in 'Other')" },
            { label: "Looks good", description: "Requirements are sufficiently clear now" },
            { label: "Accept suggestions", description: "Accept remaining suggestions" }
          ]
        }
      ]
    });
  }

  // Process user response
  // - "Skip to generation" / "Looks good" → userSatisfied = true
  // - "Accept all suggestions" → merge suggestions into requirementState, userSatisfied = true
  // - "I'll answer" (with Other text) → record in discussionLog, continue loop
  // - User selects Other with custom text → parse and record

  discussionLog.push({
    round: requirementState.discussion_rounds,
    agent_prompt: currentPrompt,
    user_response: userResponse,
    timestamp: new Date().toISOString()
  });
}
```

#### Helper: Build Discussion Prompt

```javascript
function buildDiscussionPrompt(gapAnalysis, state) {
  let prompt = `## Requirement Analysis Results\n\n`;
  prompt += `**Completeness Score**: ${gapAnalysis.completeness_score}/10\n`;

  if (gapAnalysis.missing_dimensions.length > 0) {
    prompt += `**Missing Dimensions**: ${gapAnalysis.missing_dimensions.join(', ')}\n\n`;
  }

  prompt += `### Key Questions\n\n`;
  gapAnalysis.clarification_areas.forEach((area, i) => {
    prompt += `**${i+1}. ${area.area}**\n`;
    prompt += `   ${area.rationale}\n`;
    area.questions.forEach(q => { prompt += `   - ${q}\n`; });
    if (area.suggestions.length > 0) {
      prompt += `   Suggestions: ${area.suggestions.join('; ')}\n`;
    }
    prompt += `\n`;
  });

  if (gapAnalysis.expansion_recommendations.length > 0) {
    prompt += `### Expansion Recommendations\n\n`;
    gapAnalysis.expansion_recommendations.forEach(rec => {
      prompt += `- [${rec.priority}] **${rec.category}**: ${rec.recommendation}\n`;
    });
  }

  prompt += `\nPlease answer the questions above, or choose an option below.`;
  return prompt;
}
```
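The loop in Step 3 also calls `buildFollowUpPrompt`, which is not defined in this file. A plausible sketch, mirroring the style of `buildDiscussionPrompt` (field names follow the follow-up JSON schema in Step 3; the exact formatting is an assumption):

```javascript
// Hypothetical sketch of buildFollowUpPrompt: formats the CLI's follow-up
// analysis (remaining_areas, summary) for the next discussion round.
function buildFollowUpPrompt(followUpAnalysis, state) {
  let prompt = `## Round ${state.discussion_rounds} Follow-up\n\n`;
  if (followUpAnalysis.summary) {
    prompt += `**Current understanding**: ${followUpAnalysis.summary}\n\n`;
  }
  (followUpAnalysis.remaining_areas || []).forEach((area, i) => {
    prompt += `**${i + 1}. ${area.area}**\n`;
    (area.questions || []).forEach(q => { prompt += `   - ${q}\n`; });
  });
  prompt += `\nPlease answer, or choose an option below.`;
  return prompt;
}

const sample = buildFollowUpPrompt(
  { summary: "Scope is clear", remaining_areas: [{ area: "Data retention", questions: ["How long?"] }] },
  { discussion_rounds: 2 }
);
```

Defensive defaults (`|| []`) keep the helper safe when the CLI omits optional fields.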
|
||||
|
||||
### Step 4: Auto Mode Handling
|
||||
|
||||
```javascript
|
||||
if (autoMode) {
|
||||
// Skip interactive discussion
|
||||
// CLI generates default requirement expansion based on seed_analysis
|
||||
Bash({
|
||||
command: `ccw cli -p "PURPOSE: 基于种子分析自动生成需求扩展,无需用户交互。
|
||||
|
||||
SEED ANALYSIS:
|
||||
${JSON.stringify(seed_analysis, null, 2)}
|
||||
|
||||
SEED INPUT: ${seed_input}
|
||||
DEPTH: ${depth}
|
||||
${discoveryContext ? `CODEBASE: ${JSON.stringify(discoveryContext.tech_stack || {})}` : ''}
|
||||
|
||||
TASK:
|
||||
1. 基于领域最佳实践,自动扩展功能需求清单
|
||||
2. 推断合理的非功能性需求
|
||||
3. 识别明显的边界条件
|
||||
4. 列出关键假设
|
||||
|
||||
MODE: analysis
|
||||
EXPECTED: JSON output matching refined-requirements.json schema
|
||||
CONSTRAINTS: 保守推断,只添加高置信度的扩展
|
||||
" --tool gemini --mode analysis`,
|
||||
run_in_background: true
|
||||
});
|
||||
// Parse output directly into refined-requirements.json
|
||||
}
|
||||
```
|
||||
|
||||
### Step 5: Generate Requirement Confirmation Summary
|
||||
|
||||
在写入文件前,向用户展示最终的需求确认摘要(非 auto mode)。
|
||||
|
||||
```javascript
|
||||
if (!autoMode) {
|
||||
// Build confirmation summary from requirementState
|
||||
const summary = buildConfirmationSummary(requirementState);
|
||||
|
||||
AskUserQuestion({
|
||||
questions: [
|
||||
{
|
||||
question: `## Requirement Confirmation\n\n${summary}\n\nConfirm and proceed to specification generation?`,
|
||||
header: "Confirm",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Confirm & proceed", description: "Requirements confirmed, start spec generation" },
|
||||
{ label: "Need adjustments", description: "Go back and refine further" }
|
||||
]
|
||||
}
|
||||
]
|
||||
});
|
||||
|
||||
// If "Need adjustments" → loop back to Step 3
|
||||
// If "Confirm & proceed" → continue to Step 6
|
||||
}
|
||||
```
|
||||
|
||||
### Step 6: Write refined-requirements.json
|
||||
|
||||
```javascript
|
||||
const refinedRequirements = {
|
||||
session_id: specConfig.session_id,
|
||||
phase: "1.5",
|
||||
generated_at: new Date().toISOString(),
|
||||
source: autoMode ? "auto-expansion" : "interactive-discussion",
|
||||
discussion_rounds: requirementState.discussion_rounds,
|
||||
|
||||
// Core requirement content
|
||||
clarified_problem_statement: requirementState.problem_statement,
|
||||
confirmed_target_users: requirementState.target_users.map(u =>
|
||||
typeof u === 'string' ? { name: u, needs: [], pain_points: [] } : u
|
||||
),
|
||||
confirmed_domain: requirementState.domain,
|
||||
|
||||
confirmed_features: requirementState.confirmed_features.map(f => ({
|
||||
name: f.name,
|
||||
description: f.description,
|
||||
acceptance_criteria: f.acceptance_criteria || [],
|
||||
edge_cases: f.edge_cases || [],
|
||||
priority: f.priority || "unset"
|
||||
})),
|
||||
|
||||
non_functional_requirements: requirementState.non_functional_requirements.map(nfr => ({
|
||||
type: nfr.type, // Performance, Security, Usability, Scalability, etc.
|
||||
details: nfr.details,
|
||||
measurable_criteria: nfr.measurable_criteria || ""
|
||||
})),
|
||||
|
||||
boundary_conditions: {
|
||||
in_scope: requirementState.boundary_conditions.filter(b => b.scope === 'in'),
|
||||
out_of_scope: requirementState.boundary_conditions.filter(b => b.scope === 'out'),
|
||||
constraints: requirementState.constraints
|
||||
},
|
||||
|
||||
integration_points: requirementState.integration_points,
|
||||
key_assumptions: requirementState.key_assumptions,
|
||||
|
||||
// Traceability
|
||||
discussion_log: autoMode ? [] : discussionLog
|
||||
};
|
||||
|
||||
Write(`${workDir}/refined-requirements.json`, JSON.stringify(refinedRequirements, null, 2));
|
||||
```
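The structural requirements in this phase's quality checklist can be enforced mechanically before writing the file. A minimal sketch (thresholds mirror the checklist; the helper and the sample object are illustrative, not part of the workflow's API):

```javascript
// Minimal structural validation of a refined requirements object.
// Thresholds mirror the phase's quality checklist.
function validateRefinedRequirements(req) {
  const issues = [];
  if (!req.clarified_problem_statement || req.clarified_problem_statement.length < 30) {
    issues.push("problem statement too short (< 30 chars)");
  }
  if (!Array.isArray(req.confirmed_features) || req.confirmed_features.length < 2) {
    issues.push("fewer than 2 confirmed features");
  }
  if (!Array.isArray(req.non_functional_requirements) || req.non_functional_requirements.length < 1) {
    issues.push("no non-functional requirements");
  }
  if (!Array.isArray(req.key_assumptions) || req.key_assumptions.length < 1) {
    issues.push("no key assumptions");
  }
  return { valid: issues.length === 0, issues };
}

const result = validateRefinedRequirements({
  clarified_problem_statement: "Teams lack a single view of task status across projects and tools.",
  confirmed_features: [{ name: "Board view" }, { name: "Notifications" }],
  non_functional_requirements: [{ type: "Performance", details: "p95 < 200ms" }],
  key_assumptions: ["Single-tenant deployment"]
});
// result.valid → true
```

Running such a check before `Write` turns silent quality drift into an actionable issue list.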
### Step 7: Update spec-config.json

```javascript
specConfig.refined_requirements_file = "refined-requirements.json";
specConfig.phasesCompleted.push({
  phase: 1.5,
  name: "requirement-clarification",
  output_file: "refined-requirements.json",
  discussion_rounds: requirementState.discussion_rounds,
  completed_at: new Date().toISOString()
});

Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

## Output

- **File**: `refined-requirements.json`
- **Format**: JSON
- **Updated**: `spec-config.json` (added `refined_requirements_file` field and phase 1.5 to `phasesCompleted`)

## Quality Checklist

- [ ] Problem statement refined (>= 30 characters, more specific than seed)
- [ ] At least 2 confirmed features with descriptions
- [ ] At least 1 non-functional requirement identified
- [ ] Boundary conditions defined (in-scope + out-of-scope)
- [ ] Key assumptions listed (>= 1)
- [ ] Discussion rounds recorded (>= 1 in interactive mode)
- [ ] User explicitly confirmed requirements (non-auto mode)
- [ ] `refined-requirements.json` written with valid JSON
- [ ] `spec-config.json` updated with phase 1.5 completion

## Next Phase

Proceed to [Phase 2: Product Brief](02-product-brief.md). Phase 2 should load `refined-requirements.json` as primary input instead of relying solely on `spec-config.json.seed_analysis`.
257
.codex/skills/spec-generator/phases/01-discovery.md
Normal file
@@ -0,0 +1,257 @@
# Phase 1: Discovery

Parse input, analyze the seed idea, optionally explore codebase, establish session configuration.

## Objective

- Generate session ID and create output directory
- Parse user input (text description or file reference)
- Analyze seed via Gemini CLI to extract problem space dimensions
- Conditionally explore codebase for existing patterns and constraints
- Gather user preferences (depth, focus areas) via interactive confirmation
- Write `spec-config.json` as the session state file

## Input

- Dependency: `$ARGUMENTS` (user input from command)
- Flags: `-y` (auto mode), `-c` (continue mode)

## Execution Steps

### Step 1: Session Initialization

```javascript
// Parse arguments
const args = $ARGUMENTS;
const autoMode = args.includes('-y') || args.includes('--yes');
const continueMode = args.includes('-c') || args.includes('--continue');

// Extract the idea/topic (remove flags)
const idea = args.replace(/(-y|--yes|-c|--continue)\s*/g, '').trim();

// Generate session ID
const slug = idea.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-')
  .replace(/^-|-$/g, '')
  .slice(0, 40);
const date = new Date().toISOString().slice(0, 10);
const sessionId = `SPEC-${slug}-${date}`;
const workDir = `.workflow/.spec/${sessionId}`;

// Check for continue mode
if (continueMode) {
  // Find existing session
  const existingSessions = Glob('.workflow/.spec/SPEC-*/spec-config.json');
  // If slug matches an existing session, load it and resume
  // Read spec-config.json, find first incomplete phase, jump to that phase
  return; // Resume logic handled by orchestrator
}

// Create output directory
Bash(`mkdir -p "${workDir}"`);
```
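The session-ID derivation above can be exercised in isolation. This sketch wraps the same regex logic in a hypothetical helper (the character class keeps ASCII alphanumerics and CJK characters, `\u4e00-\u9fff`):

```javascript
// Worked example of the session-ID derivation, extracted into a helper.
function makeSessionId(idea, date) {
  const slug = idea.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-')  // collapse disallowed runs to '-'
    .replace(/^-|-$/g, '')                      // trim leading/trailing '-'
    .slice(0, 40);                              // cap slug length
  return `SPEC-${slug}-${date}`;
}

const id = makeSessionId("Build a Task Management System!", "2026-03-14");
// → "SPEC-build-a-task-management-system-2026-03-14"
```

Note the 40-character cap applies to the slug only; the `SPEC-` prefix and date suffix are added afterward.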
### Step 2: Input Parsing

```javascript
// Determine input type
if (idea.startsWith('@') || idea.endsWith('.md') || idea.endsWith('.txt')) {
  // File reference - read and extract content
  const filePath = idea.replace(/^@/, '');
  const fileContent = Read(filePath);
  // Use file content as the seed
  inputType = 'file';
  seedInput = fileContent;
} else {
  // Direct text description
  inputType = 'text';
  seedInput = idea;
}
```

### Step 3: Seed Analysis via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Analyze this seed idea/requirement to extract structured problem space dimensions.
Success: Clear problem statement, target users, domain identification, 3-5 exploration dimensions.

SEED INPUT:
${seedInput}

TASK:
- Extract a clear problem statement (what problem does this solve?)
- Identify target users (who benefits?)
- Determine the domain (technical, business, consumer, etc.)
- List constraints (budget, time, technical, regulatory)
- Generate 3-5 exploration dimensions (key areas to investigate)
- Assess complexity: simple (1-2 components), moderate (3-5 components), complex (6+ components)

MODE: analysis
EXPECTED: JSON output with fields: problem_statement, target_users[], domain, constraints[], dimensions[], complexity
CONSTRAINTS: Be specific and actionable, not vague
" --tool gemini --mode analysis`,
  run_in_background: true
});
// Wait for CLI result before continuing
```

Parse the CLI output into structured `seedAnalysis`:

```javascript
const seedAnalysis = {
  problem_statement: "...",
  target_users: ["..."],
  domain: "...",
  constraints: ["..."],
  dimensions: ["..."]
};
const complexity = "moderate"; // from CLI output
```
### Step 4: Codebase Exploration (Conditional)

```javascript
// Detect if running inside a project with code
const hasCodebase = Glob('**/*.{ts,js,py,java,go,rs}').length > 0
  || Glob('package.json').length > 0
  || Glob('Cargo.toml').length > 0;

if (hasCodebase) {
  Agent({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Explore codebase for spec: ${slug}`,
    prompt: `
## Spec Generator Context
Topic: ${seedInput}
Dimensions: ${seedAnalysis.dimensions.join(', ')}
Session: ${workDir}

## MANDATORY FIRST STEPS
1. Search for code related to topic keywords
2. Read project config files (package.json, pyproject.toml, etc.) if they exist

## Exploration Focus
- Identify existing implementations related to the topic
- Find patterns that could inform architecture decisions
- Map current architecture constraints
- Locate integration points and dependencies

## Output
Write findings to: ${workDir}/discovery-context.json

Schema:
{
  "relevant_files": [{"path": "...", "relevance": "high|medium|low", "rationale": "..."}],
  "existing_patterns": ["pattern descriptions"],
  "architecture_constraints": ["constraint descriptions"],
  "integration_points": ["integration point descriptions"],
  "tech_stack": {"languages": [], "frameworks": [], "databases": []},
  "_metadata": { "exploration_type": "spec-discovery", "timestamp": "ISO8601" }
}
`
  });
}
```
### Step 5: User Confirmation (Interactive)

```javascript
if (!autoMode) {
  // Confirm problem statement and select depth
  AskUserQuestion({
    questions: [
      {
        question: `Problem statement: "${seedAnalysis.problem_statement}" - Is this accurate?`,
        header: "Problem",
        multiSelect: false,
        options: [
          { label: "Accurate", description: "Proceed with this problem statement" },
          { label: "Needs adjustment", description: "I'll refine the problem statement" }
        ]
      },
      {
        question: "What specification depth do you need?",
        header: "Depth",
        multiSelect: false,
        options: [
          { label: "Light", description: "Quick overview - key decisions only" },
          { label: "Standard (Recommended)", description: "Balanced detail for most projects" },
          { label: "Comprehensive", description: "Maximum detail for complex/critical projects" }
        ]
      },
      {
        question: "Which areas should we focus on?",
        header: "Focus",
        multiSelect: true,
        options: seedAnalysis.dimensions.map(d => ({ label: d, description: `Explore ${d} in depth` }))
      },
      {
        question: "What type of specification is this?",
        header: "Spec Type",
        multiSelect: false,
        options: [
          { label: "Service (Recommended)", description: "Long-running service with lifecycle, state machine, observability" },
          { label: "API", description: "REST/GraphQL API with endpoints, auth, rate limiting" },
          { label: "Library/SDK", description: "Reusable package with public API surface, examples" },
          { label: "Platform", description: "Multi-component system, uses Service profile" }
        ]
      }
    ]
  });
} else {
  // Auto mode defaults
  depth = "standard";
  focusAreas = seedAnalysis.dimensions;
  specType = "service"; // default for auto mode
}
```
### Step 6: Write spec-config.json

```javascript
const specConfig = {
  session_id: sessionId,
  seed_input: seedInput,
  input_type: inputType,
  timestamp: new Date().toISOString(),
  mode: autoMode ? "auto" : "interactive",
  complexity: complexity,
  depth: depth,
  focus_areas: focusAreas,
  seed_analysis: seedAnalysis,
  has_codebase: hasCodebase,
  spec_type: specType, // "service" | "api" | "library" | "platform"
  iteration_count: 0,
  iteration_history: [],
  phasesCompleted: [
    {
      phase: 1,
      name: "discovery",
      output_file: "spec-config.json",
      completed_at: new Date().toISOString()
    }
  ]
};

Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
## Output

- **File**: `spec-config.json`
- **File**: `discovery-context.json` (optional, if codebase detected)
- **Format**: JSON

## Quality Checklist

- [ ] Session ID matches `SPEC-{slug}-{date}` format
- [ ] Problem statement exists and is >= 20 characters
- [ ] Target users identified (>= 1)
- [ ] 3-5 exploration dimensions generated
- [ ] spec-config.json written with all required fields
- [ ] Output directory created

## Next Phase

Proceed to [Phase 2: Product Brief](02-product-brief.md) with the generated spec-config.json.
298
.codex/skills/spec-generator/phases/02-product-brief.md
Normal file
@@ -0,0 +1,298 @@
# Phase 2: Product Brief

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Generate a product brief through multi-perspective CLI analysis, establishing the "what" and "why".

## Objective

- Read Phase 1 outputs (spec-config.json, discovery-context.json)
- Launch 3 parallel CLI analyses from product, technical, and user perspectives
- Synthesize convergent themes and conflicting views
- Optionally refine with user input
- Generate product-brief.md using template

## Input

- Dependency: `{workDir}/spec-config.json`
- Primary: `{workDir}/refined-requirements.json` (Phase 1.5 output, preferred over raw seed_analysis)
- Optional: `{workDir}/discovery-context.json`
- Template: `templates/product-brief.md`

## Execution Steps

### Step 1: Load Phase 1 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const { seed_analysis, seed_input, has_codebase, depth, focus_areas } = specConfig;

// Load refined requirements (Phase 1.5 output) - preferred over raw seed_analysis
let refinedReqs = null;
try {
  refinedReqs = JSON.parse(Read(`${workDir}/refined-requirements.json`));
} catch (e) {
  // No refined requirements, fall back to seed_analysis
}

let discoveryContext = null;
if (has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) {
    // No discovery context available, proceed without
  }
}

// Build shared context string for CLI prompts
// Prefer refined requirements over raw seed_analysis
const problem = refinedReqs?.clarified_problem_statement || seed_analysis.problem_statement;
const users = refinedReqs?.confirmed_target_users?.map(u => u.name || u).join(', ')
  || seed_analysis.target_users.join(', ');
const domain = refinedReqs?.confirmed_domain || seed_analysis.domain;
const constraints = refinedReqs?.boundary_conditions?.constraints?.join(', ')
  || seed_analysis.constraints.join(', ');
const features = refinedReqs?.confirmed_features?.map(f => f.name).join(', ') || '';
const nfrs = refinedReqs?.non_functional_requirements?.map(n => `${n.type}: ${n.details}`).join('; ') || '';

const sharedContext = `
SEED: ${seed_input}
PROBLEM: ${problem}
TARGET USERS: ${users}
DOMAIN: ${domain}
CONSTRAINTS: ${constraints}
FOCUS AREAS: ${focus_areas.join(', ')}
${features ? `CONFIRMED FEATURES: ${features}` : ''}
${nfrs ? `NON-FUNCTIONAL REQUIREMENTS: ${nfrs}` : ''}
${discoveryContext ? `
CODEBASE CONTEXT:
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join(', ') || 'none'}
- Architecture constraints: ${discoveryContext.architecture_constraints?.slice(0,3).join(', ') || 'none'}
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
` : ''}`;
```
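The fallback chain above (refined requirements preferred, seed analysis as backup) can be illustrated in isolation. A minimal sketch, with a hypothetical helper name; `refinedReqs` is null whenever Phase 1.5 was skipped:

```javascript
// Optional chaining short-circuits to undefined when refinedReqs is null,
// so || falls through to the seed analysis value.
function resolveProblem(refinedReqs, seedAnalysis) {
  return refinedReqs?.clarified_problem_statement || seedAnalysis.problem_statement;
}

const seed = { problem_statement: "Original seed statement" };
const a = resolveProblem(null, seed);                                       // Phase 1.5 skipped
const b = resolveProblem({ clarified_problem_statement: "Refined" }, seed); // Phase 1.5 ran
// a === "Original seed statement", b === "Refined"
```

One caveat of `||`: an empty string in the refined field also falls through to the seed value, which is the desired behavior here.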
### Step 2: Multi-CLI Parallel Analysis (3 perspectives)

Launch 3 CLI calls in parallel:

**Product Perspective (Gemini)**:
```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Product analysis for specification - identify market fit, user value, and success criteria.
Success: Clear vision, measurable goals, competitive positioning.

${sharedContext}

TASK:
- Define product vision (1-3 sentences, aspirational)
- Analyze market/competitive landscape
- Define 3-5 measurable success metrics
- Identify scope boundaries (in-scope vs out-of-scope)
- Assess user value proposition
- List assumptions that need validation

MODE: analysis
EXPECTED: Structured product analysis with: vision, goals with metrics, scope, competitive positioning, assumptions
CONSTRAINTS: Focus on 'what' and 'why', not 'how'
" --tool gemini --mode analysis`,
  run_in_background: true
});
```

**Technical Perspective (Codex)**:
```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Technical feasibility analysis for specification - assess implementation viability and constraints.
Success: Clear technical constraints, integration complexity, technology recommendations.

${sharedContext}

TASK:
- Assess technical feasibility of the core concept
- Identify technical constraints and blockers
- Evaluate integration complexity with existing systems
- Recommend technology approach (high-level)
- Identify technical risks and dependencies
- Estimate complexity: simple/moderate/complex

MODE: analysis
EXPECTED: Technical analysis with: feasibility assessment, constraints, integration complexity, tech recommendations, risks
CONSTRAINTS: Focus on feasibility and constraints, not detailed architecture
" --tool codex --mode analysis`,
  run_in_background: true
});
```

**User Perspective (Claude)**:
```javascript
Bash({
  command: `ccw cli -p "PURPOSE: User experience analysis for specification - understand user journeys, pain points, and UX considerations.
Success: Clear user personas, journey maps, UX requirements.

${sharedContext}

TASK:
- Elaborate user personas with goals and frustrations
- Map primary user journey (happy path)
- Identify key pain points in current experience
- Define UX success criteria
- List accessibility and usability considerations
- Suggest interaction patterns

MODE: analysis
EXPECTED: User analysis with: personas, journey map, pain points, UX criteria, interaction recommendations
CONSTRAINTS: Focus on user needs and experience, not implementation
" --tool claude --mode analysis`,
  run_in_background: true
});

// STOP: Wait for all 3 CLI results before continuing
```
### Step 3: Synthesize Perspectives

```javascript
// After receiving all 3 CLI results:
// Extract convergent themes (all agree)
// Identify conflicting views (need resolution)
// Note unique contributions from each perspective

const synthesis = {
  convergent_themes: [],   // themes all 3 perspectives agree on
  conflicts: [],           // areas where perspectives differ
  product_insights: [],    // unique from product perspective
  technical_insights: [],  // unique from technical perspective
  user_insights: []        // unique from user perspective
};
```
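One naive way the synthesis stub above could detect convergence is exact, case-insensitive matching of theme strings across the three result sets. Real synthesis is semantic rather than lexical, so treat this purely as a shape illustration (the helper name and matching rule are assumptions):

```javascript
// A theme counts as convergent when it appears (case-insensitively)
// in all three perspective outputs.
function findConvergentThemes(productThemes, technicalThemes, userThemes) {
  const norm = arr => new Set(arr.map(t => t.toLowerCase()));
  const tech = norm(technicalThemes);
  const user = norm(userThemes);
  return productThemes.filter(t => tech.has(t.toLowerCase()) && user.has(t.toLowerCase()));
}

const convergent = findConvergentThemes(
  ["Offline support", "Fast onboarding"],
  ["offline support", "API-first"],
  ["Offline support", "Accessibility"]
);
// → ["Offline support"]
```

Themes that appear in only one or two perspectives would feed the `conflicts` or per-perspective insight lists instead.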
### Step 4: Interactive Refinement (Optional)

```javascript
if (!autoMode) {
  // Present synthesis summary to user
  // AskUserQuestion with:
  // - Confirm vision statement
  // - Resolve any conflicts between perspectives
  // - Adjust scope if needed
  AskUserQuestion({
    questions: [
      {
        question: "Review the synthesized product brief. Any adjustments needed?",
        header: "Review",
        multiSelect: false,
        options: [
          { label: "Looks good", description: "Proceed to PRD generation" },
          { label: "Adjust scope", description: "Narrow or expand the scope" },
          { label: "Revise vision", description: "Refine the vision statement" }
        ]
      }
    ]
  });
}
```

### Step 5: Generate product-brief.md

```javascript
// Read template
const template = Read('templates/product-brief.md');

// Fill template with synthesized content
// Apply document-standards.md formatting rules
// Write with YAML frontmatter

const frontmatter = `---
session_id: ${specConfig.session_id}
phase: 2
document_type: product-brief
status: ${autoMode ? 'complete' : 'draft'}
generated_at: ${new Date().toISOString()}
stepsCompleted: ["load-context", "multi-cli-analysis", "synthesis", "generation"]
version: 1
dependencies:
  - spec-config.json
---`;

// Combine frontmatter + filled template content
Write(`${workDir}/product-brief.md`, `${frontmatter}\n\n${filledContent}`);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 2,
  name: "product-brief",
  output_file: "product-brief.md",
  completed_at: new Date().toISOString()
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
### Step 5.5: Generate glossary.json

```javascript
// Extract terminology from product brief and CLI analysis
// Generate structured glossary for cross-document consistency

const glossary = {
  session_id: specConfig.session_id,
  terms: [
    // Extract from product brief content:
    // - Key domain nouns from problem statement
    // - User persona names
    // - Technical terms from multi-perspective synthesis
    // Each term should have:
    // { term: "...", definition: "...", aliases: [], first_defined_in: "product-brief.md", category: "core|technical|business" }
  ]
};

Write(`${workDir}/glossary.json`, JSON.stringify(glossary, null, 2));
```

**Glossary Injection**: In all subsequent phase prompts, inject the following into the CONTEXT section:

```
TERMINOLOGY GLOSSARY (use these terms consistently):
${JSON.stringify(glossary.terms, null, 2)}
```

## Output

- **File**: `product-brief.md`
- **File**: `glossary.json`
- **Format**: Markdown with YAML frontmatter

## Quality Checklist

- [ ] Vision statement: clear, 1-3 sentences
- [ ] Problem statement: specific and measurable
- [ ] Target users: >= 1 persona with needs
- [ ] Goals: >= 2 with measurable metrics
- [ ] Scope: in-scope and out-of-scope defined
- [ ] Multi-perspective synthesis included
- [ ] YAML frontmatter valid

## Next Phase

Proceed to [Phase 3: Requirements](03-requirements.md) with the generated product-brief.md.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 2,
  "status": "complete",
  "files_created": ["product-brief.md", "glossary.json"],
  "quality_notes": ["list of any quality concerns or deviations"],
  "key_decisions": ["list of significant synthesis decisions made"]
}
```

The orchestrator will:
1. Validate that listed files exist on disk
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context
|
||||
248
.codex/skills/spec-generator/phases/03-requirements.md
Normal file
@@ -0,0 +1,248 @@
# Phase 3: Requirements (PRD)

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Generate a detailed Product Requirements Document with functional/non-functional requirements, acceptance criteria, and MoSCoW prioritization.

## Objective

- Read product-brief.md and extract goals, scope, constraints
- Expand each goal into functional requirements with acceptance criteria
- Generate non-functional requirements
- Apply MoSCoW priority labels (user input or auto)
- Generate the `requirements/` directory using the template

## Input

- Dependency: `{workDir}/product-brief.md`
- Config: `{workDir}/spec-config.json`
- Template: `templates/requirements-prd.md` (directory structure: `_index.md` + `REQ-*.md` + `NFR-*.md`)

## Execution Steps

### Step 1: Load Phase 2 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);

// Extract key sections from product brief
// - Goals & Success Metrics table
// - Scope (in-scope items)
// - Target Users (personas)
// - Constraints
// - Technical perspective insights
```

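The section extraction hinted at by the comments above could be implemented as a simple heading-based splitter. This is a sketch, not part of the skill's toolset; it assumes the product brief uses `##` headings for its top-level sections:

```javascript
// Hypothetical helper: pull the body of a "## Heading" section out of a
// markdown document, stopping at the next level-1 or level-2 heading.
function extractSection(markdown, heading) {
  const lines = markdown.split('\n');
  const start = lines.findIndex(l => l.trim() === `## ${heading}`);
  if (start === -1) return null;
  const rest = lines.slice(start + 1);
  const end = rest.findIndex(l => /^#{1,2} /.test(l));
  return rest.slice(0, end === -1 ? rest.length : end).join('\n').trim();
}
```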
### Step 2: Requirements Expansion via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Generate detailed functional and non-functional requirements from product brief.
Success: Complete PRD with testable acceptance criteria for every requirement.

PRODUCT BRIEF CONTEXT:
${productBrief}

TASK:
- For each goal in the product brief, generate 3-7 functional requirements
- Each requirement must have:
  - Unique ID: REQ-NNN (zero-padded)
  - Clear title
  - Detailed description
  - User story: As a [persona], I want [action] so that [benefit]
  - 2-4 specific, testable acceptance criteria
- Generate non-functional requirements:
  - Performance (response times, throughput)
  - Security (authentication, authorization, data protection)
  - Scalability (user load, data volume)
  - Usability (accessibility, learnability)
- Assign initial MoSCoW priority based on:
  - Must: Core functionality, cannot launch without
  - Should: Important but has workaround
  - Could: Nice-to-have, enhances experience
  - Won't: Explicitly deferred
- Use RFC 2119 keywords (MUST, SHOULD, MAY, MUST NOT, SHOULD NOT) to define behavioral constraints for each requirement. Example: 'The system MUST return a 401 response within 100ms for invalid tokens.'
- For each core domain entity referenced in requirements, define its data model: fields, types, constraints, and relationships to other entities
- Maintain terminology consistency with the glossary below:
TERMINOLOGY GLOSSARY:
\${glossary ? JSON.stringify(glossary.terms, null, 2) : 'N/A - generate terms inline'}

MODE: analysis
EXPECTED: Structured requirements with: ID, title, description, user story, acceptance criteria, priority, traceability to goals
CONSTRAINTS: Every requirement must be specific enough to estimate and test. No vague requirements like 'system should be fast'.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 2.5: Codex Requirements Review

After receiving Gemini expansion results, validate requirements quality via Codex CLI before proceeding:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Critical review of generated requirements - validate quality, testability, and scope alignment.
Success: Actionable feedback on requirement quality with specific issues identified.

GENERATED REQUIREMENTS:
${geminiRequirementsOutput.slice(0, 5000)}

PRODUCT BRIEF SCOPE:
${productBrief.slice(0, 2000)}

TASK:
- Verify every acceptance criterion is specific, measurable, and testable (not vague like 'should be fast')
- Validate RFC 2119 keyword usage: MUST/SHOULD/MAY used correctly per RFC 2119 semantics
- Check scope containment: no requirement exceeds the product brief's defined scope boundaries
- Assess data model completeness: all referenced entities have field-level definitions
- Identify duplicate or overlapping requirements
- Rate overall requirements quality: 1-5 with justification

MODE: analysis
EXPECTED: Requirements review with: per-requirement feedback, testability assessment, scope violations, data model gaps, quality rating
CONSTRAINTS: Be genuinely critical. Focus on requirements that would block implementation if left vague.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Wait for Codex review result
// Integrate feedback into requirements before writing files:
// - Fix vague acceptance criteria flagged by Codex
// - Correct RFC 2119 keyword misuse
// - Remove or flag requirements that exceed brief scope
// - Fill data model gaps identified by Codex
```

### Step 3: User Priority Sorting (Interactive)

```javascript
if (!autoMode) {
  // Present requirements grouped by initial priority
  // Allow user to adjust MoSCoW labels
  AskUserQuestion({
    questions: [
      {
        question: "Review the Must-Have requirements. Any that should be reprioritized?",
        header: "Must-Have",
        multiSelect: false,
        options: [
          { label: "All correct", description: "Must-have requirements are accurate" },
          { label: "Too many", description: "Some should be Should/Could" },
          { label: "Missing items", description: "Some Should requirements should be Must" }
        ]
      },
      {
        question: "What is the target MVP scope?",
        header: "MVP Scope",
        multiSelect: false,
        options: [
          { label: "Must-Have only (Recommended)", description: "MVP includes only Must requirements" },
          { label: "Must + key Should", description: "Include critical Should items in MVP" },
          { label: "Comprehensive", description: "Include all Must and Should" }
        ]
      }
    ]
  });
  // Apply user adjustments to priorities
} else {
  // Auto mode: accept CLI-suggested priorities as-is
}
```

### Step 4: Generate requirements/ directory

```javascript
// Read template
const template = Read('templates/requirements-prd.md');

// Create requirements directory
Bash(`mkdir -p "${workDir}/requirements"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI output into structured requirements
const funcReqs = parseFunctionalRequirements(cliOutput); // [{id, slug, title, priority, ...}]
const nfReqs = parseNonFunctionalRequirements(cliOutput); // [{id, type, slug, title, ...}]

// Step 4a: Write individual REQ-*.md files (one per functional requirement)
funcReqs.forEach(req => {
  // Use REQ-NNN-{slug}.md template from templates/requirements-prd.md
  // Fill: id, title, priority, description, user_story, acceptance_criteria, traces
  Write(`${workDir}/requirements/REQ-${req.id}-${req.slug}.md`, reqContent);
});

// Step 4b: Write individual NFR-*.md files (one per non-functional requirement)
nfReqs.forEach(nfr => {
  // Use NFR-{type}-NNN-{slug}.md template from templates/requirements-prd.md
  // Fill: id, type, category, title, requirement, metric, target, traces
  Write(`${workDir}/requirements/NFR-${nfr.type}-${nfr.id}-${nfr.slug}.md`, nfrContent);
});

// Step 4c: Write _index.md (summary + links to all individual files)
// Use _index.md template from templates/requirements-prd.md
// Fill: summary table, functional req links table, NFR links tables,
//       data requirements, integration requirements, traceability matrix
Write(`${workDir}/requirements/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 3,
  name: "requirements",
  output_dir: "requirements/",
  output_index: "requirements/_index.md",
  file_count: funcReqs.length + nfReqs.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

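The zero-padded IDs and kebab-case slugs used in these filenames could be produced by helpers along these lines (illustrative; the actual parsers above are left abstract):

```javascript
// Hypothetical helpers for REQ-NNN-{slug}.md filenames.
function padId(n) {
  return String(n).padStart(3, '0'); // 7 -> "007"
}

function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // runs of non-alphanumerics become one hyphen
    .replace(/^-+|-+$/g, '');    // trim leading/trailing hyphens
}

function reqFilename(id, title) {
  return `REQ-${padId(id)}-${slugify(title)}.md`;
}
```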
## Output

- **Directory**: `requirements/`
  - `_index.md` — Summary, MoSCoW table, traceability matrix, links
  - `REQ-NNN-{slug}.md` — Individual functional requirement (per requirement)
  - `NFR-{type}-NNN-{slug}.md` — Individual non-functional requirement (per NFR)
- **Format**: Markdown with YAML frontmatter, cross-linked via relative paths

## Quality Checklist

- [ ] Functional requirements: >= 3 with REQ-NNN IDs, each in own file
- [ ] Every requirement file has >= 1 acceptance criterion
- [ ] Every requirement has MoSCoW priority tag in frontmatter
- [ ] Non-functional requirements: >= 1, each in own file
- [ ] User stories present for Must-have requirements
- [ ] `_index.md` links to all individual requirement files
- [ ] Traceability links to product-brief.md goals
- [ ] All files have valid YAML frontmatter

## Next Phase

Proceed to [Phase 4: Architecture](04-architecture.md) with the generated `requirements/` directory.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 3,
  "status": "complete",
  "files_created": ["requirements/_index.md", "requirements/REQ-001-*.md", "..."],
  "file_count": 0,
  "codex_review_integrated": true,
  "quality_notes": ["list of quality concerns or Codex feedback items addressed"],
  "key_decisions": ["MoSCoW priority rationale", "scope adjustments from Codex review"]
}
```

The orchestrator will:
1. Validate that `requirements/` directory exists with `_index.md` and individual files
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

274
.codex/skills/spec-generator/phases/04-architecture.md
Normal file
@@ -0,0 +1,274 @@
# Phase 4: Architecture

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Generate technical architecture decisions, component design, and technology selections based on requirements.

## Objective

- Analyze requirements to identify core components and system architecture
- Generate Architecture Decision Records (ADRs) with alternatives
- Map architecture to existing codebase (if applicable)
- Challenge architecture via Codex CLI review
- Generate the `architecture/` directory using the template

## Input

- Dependency: `{workDir}/requirements/_index.md` (and individual `REQ-*.md` files)
- Reference: `{workDir}/product-brief.md`
- Optional: `{workDir}/discovery-context.json`
- Config: `{workDir}/spec-config.json`
- Template: `templates/architecture-doc.md`

## Execution Steps

### Step 1: Load Phase 2-3 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirements = Read(`${workDir}/requirements/_index.md`);

let discoveryContext = null;
if (specConfig.has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) { /* no context */ }
}

// Load glossary for terminology consistency
let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }

// Load spec type profile for specialized sections
const specType = specConfig.spec_type || 'service';
let profile = null;
try {
  profile = Read(`templates/profiles/${specType}-profile.md`);
} catch (e) { /* use base template only */ }
```

### Step 2: Architecture Analysis via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Generate technical architecture for the specified requirements.
Success: Complete component architecture, tech stack, and ADRs with justified decisions.

PRODUCT BRIEF (summary):
${productBrief.slice(0, 3000)}

REQUIREMENTS:
${requirements.slice(0, 5000)}

${discoveryContext ? `EXISTING CODEBASE:
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join('; ') || 'none'}
- Architecture constraints: ${discoveryContext.architecture_constraints?.slice(0,3).join('; ') || 'none'}
` : ''}

TASK:
- Define system architecture style (monolith, microservices, serverless, etc.) with justification
- Identify core components and their responsibilities
- Create component interaction diagram (Mermaid graph TD format)
- Specify technology stack: languages, frameworks, databases, infrastructure
- Generate 2-4 Architecture Decision Records (ADRs):
  - Each ADR: context, decision, 2-3 alternatives with pros/cons, consequences
  - Focus on: data storage, API design, authentication, key technical choices
- Define data model: key entities and relationships (Mermaid erDiagram format)
- Identify security architecture: auth, authorization, data protection
- List API endpoints (high-level)
${discoveryContext ? '- Map new components to existing codebase modules' : ''}
- For each core entity with a lifecycle, create an ASCII state machine diagram showing:
  - All states and transitions
  - Trigger events for each transition
  - Side effects of transitions
  - Error states and recovery paths
- Define a Configuration Model: list all configurable fields with name, type, default value, constraint, and description
- Define Error Handling strategy:
  - Classify errors (transient/permanent/degraded)
  - Per-component error behavior using RFC 2119 keywords
  - Recovery mechanisms
- Define Observability requirements:
  - Key metrics (name, type: counter/gauge/histogram, labels)
  - Structured log format and key log events
  - Health check endpoints
\${profile ? \`
SPEC TYPE PROFILE REQUIREMENTS (\${specType}):
\${profile}
\` : ''}
\${glossary ? \`
TERMINOLOGY GLOSSARY (use consistently):
\${JSON.stringify(glossary.terms, null, 2)}
\` : ''}

MODE: analysis
EXPECTED: Complete architecture with: style justification, component diagram, tech stack table, ADRs, data model, security controls, API overview
CONSTRAINTS: Architecture must support all Must-have requirements. Prefer proven technologies over cutting-edge.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 3: Architecture Review via Codex CLI

```javascript
// After receiving Gemini analysis, challenge it with Codex
Bash({
  command: `ccw cli -p "PURPOSE: Critical review of proposed architecture - identify weaknesses and risks.
Success: Actionable feedback with specific concerns and improvement suggestions.

PROPOSED ARCHITECTURE:
${geminiArchitectureOutput.slice(0, 5000)}

REQUIREMENTS CONTEXT:
${requirements.slice(0, 2000)}

TASK:
- Challenge each ADR: are the alternatives truly the best options?
- Identify scalability bottlenecks in the component design
- Assess security gaps: authentication, authorization, data protection
- Evaluate technology choices: maturity, community support, fit
- Check for over-engineering or under-engineering
- Verify architecture covers all Must-have requirements
- Rate overall architecture quality: 1-5 with justification

MODE: analysis
EXPECTED: Architecture review with: per-ADR feedback, scalability concerns, security gaps, technology risks, quality rating
CONSTRAINTS: Be genuinely critical, not just validating. Focus on actionable improvements.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 4: Interactive ADR Decisions (Optional)

```javascript
if (!autoMode) {
  // Present ADRs with review feedback to user
  // For each ADR where review raised concerns:
  AskUserQuestion({
    questions: [
      {
        question: "Architecture review raised concerns. How should we proceed?",
        header: "ADR Review",
        multiSelect: false,
        options: [
          { label: "Accept as-is", description: "Architecture is sound, proceed" },
          { label: "Incorporate feedback", description: "Adjust ADRs based on review" },
          { label: "Simplify", description: "Reduce complexity, fewer components" }
        ]
      }
    ]
  });
  // Apply user decisions to architecture
}
```

### Step 5: Codebase Integration Mapping (Conditional)

```javascript
if (specConfig.has_codebase && discoveryContext) {
  // Map new architecture components to existing code
  const integrationMapping = discoveryContext.relevant_files.map(f => ({
    new_component: "...", // matched from architecture
    existing_module: f.path,
    integration_type: "Extend|Replace|New",
    notes: f.rationale
  }));
  // Include in architecture document
}
```

### Step 6: Generate architecture/ directory

```javascript
const template = Read('templates/architecture-doc.md');

// Create architecture directory
Bash(`mkdir -p "${workDir}/architecture"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI outputs into structured ADRs
const adrs = parseADRs(geminiArchitectureOutput, codexReviewOutput); // [{id, slug, title, ...}]

// Step 6a: Write individual ADR-*.md files (one per decision)
adrs.forEach(adr => {
  // Use ADR-NNN-{slug}.md template from templates/architecture-doc.md
  // Fill: id, title, status, context, decision, alternatives, consequences, traces
  Write(`${workDir}/architecture/ADR-${adr.id}-${adr.slug}.md`, adrContent);
});

// Step 6b: Write _index.md (overview + components + tech stack + links to ADRs)
// Use _index.md template from templates/architecture-doc.md
// Fill: system overview, component diagram, tech stack, ADR links table,
//       data model, API design, security controls, infrastructure, codebase integration
Write(`${workDir}/architecture/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 4,
  name: "architecture",
  output_dir: "architecture/",
  output_index: "architecture/_index.md",
  file_count: adrs.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

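`parseADRs` is left abstract above. One plausible sketch, assuming the CLI output marks each decision with an `### ADR:` heading — that heading format is an assumption for illustration, not the tool's documented output:

```javascript
// Hypothetical parser: split CLI output into ADR records based on
// "### ADR: <title>" headings. The real output format may differ.
function parseADRs(output) {
  const adrs = [];
  const re = /^### ADR: (.+)$/gm;
  let match;
  while ((match = re.exec(output)) !== null) {
    const id = String(adrs.length + 1).padStart(3, '0');
    const title = match[1].trim();
    const slug = title.toLowerCase()
      .replace(/[^a-z0-9]+/g, '-')
      .replace(/^-+|-+$/g, '');
    adrs.push({ id, slug, title });
  }
  return adrs;
}
```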
## Output

- **Directory**: `architecture/`
  - `_index.md` — Overview, component diagram, tech stack, data model, security, links
  - `ADR-NNN-{slug}.md` — Individual Architecture Decision Record (per ADR)
- **Format**: Markdown with YAML frontmatter, cross-linked to requirements via relative paths

## Quality Checklist

- [ ] Component diagram present in `_index.md` (Mermaid or ASCII)
- [ ] Tech stack specified (languages, frameworks, key libraries)
- [ ] >= 1 ADR file with alternatives considered
- [ ] Each ADR file lists >= 2 options
- [ ] `_index.md` ADR table links to all individual ADR files
- [ ] Integration points identified
- [ ] Data model described
- [ ] Codebase mapping present (if has_codebase)
- [ ] All files have valid YAML frontmatter
- [ ] ADR files link back to requirement files

## Next Phase

Proceed to [Phase 5: Epics & Stories](05-epics-stories.md) with the generated `architecture/` directory.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 4,
  "status": "complete",
  "files_created": ["architecture/_index.md", "architecture/ADR-001-*.md", "..."],
  "file_count": 0,
  "codex_review_rating": 0,
  "quality_notes": ["list of quality concerns or review feedback addressed"],
  "key_decisions": ["architecture style choice", "key ADR decisions"]
}
```

The orchestrator will:
1. Validate that `architecture/` directory exists with `_index.md` and ADR files
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

241
.codex/skills/spec-generator/phases/05-epics-stories.md
Normal file
@@ -0,0 +1,241 @@
# Phase 5: Epics & Stories

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Decompose the specification into executable Epics and Stories with dependency mapping.

## Objective

- Group requirements into 3-7 logical Epics
- Tag MVP subset of Epics
- Generate 2-5 Stories per Epic in standard user story format
- Map cross-Epic dependencies (Mermaid diagram)
- Generate the `epics/` directory using the template

## Input

- Dependency: `{workDir}/requirements/_index.md`, `{workDir}/architecture/_index.md` (and individual files)
- Reference: `{workDir}/product-brief.md`
- Config: `{workDir}/spec-config.json`
- Template: `templates/epics-template.md` (directory structure: `_index.md` + `EPIC-*.md`)

## Execution Steps

### Step 1: Load Phase 2-4 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirements = Read(`${workDir}/requirements/_index.md`);
const architecture = Read(`${workDir}/architecture/_index.md`);

let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }
```

### Step 2: Epic Decomposition via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Decompose requirements into executable Epics and Stories for implementation planning.
Success: 3-7 Epics with prioritized Stories, dependency map, and MVP subset clearly defined.

PRODUCT BRIEF (summary):
${productBrief.slice(0, 2000)}

REQUIREMENTS:
${requirements.slice(0, 5000)}

ARCHITECTURE (summary):
${architecture.slice(0, 3000)}

TASK:
- Group requirements into 3-7 logical Epics:
  - Each Epic: EPIC-NNN ID, title, description, priority (Must/Should/Could)
  - Group by functional domain or user journey stage
  - Tag MVP Epics (minimum set for initial release)

- For each Epic, generate 2-5 Stories:
  - Each Story: STORY-{EPIC}-NNN ID, title
  - User story format: As a [persona], I want [action] so that [benefit]
  - 2-4 acceptance criteria per story (testable)
  - Relative size estimate: S/M/L/XL
  - Trace to source requirement(s): REQ-NNN

- Create dependency map:
  - Cross-Epic dependencies (which Epics block others)
  - Mermaid graph LR format
  - Recommended execution order with rationale

- Define MVP:
  - Which Epics are in MVP
  - MVP definition of done (3-5 criteria)
  - What is explicitly deferred post-MVP

MODE: analysis
EXPECTED: Structured output with: Epic list (ID, title, priority, MVP flag), Stories per Epic (ID, user story, AC, size, trace), dependency Mermaid diagram, execution order, MVP definition
CONSTRAINTS:
- Every Must-have requirement must appear in at least one Story
- Stories must be small enough to implement independently (no XL stories in MVP)
- Dependencies should be minimized across Epics
\${glossary ? \`- Maintain terminology consistency with glossary: \${glossary.terms.map(t => t.term).join(', ')}\` : ''}
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

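The first constraint above — every Must-have requirement appears in at least one Story — can be checked mechanically once the output is parsed. A sketch with assumed shapes (`mustReqIds` as an array of IDs, stories carrying a `traces` array):

```javascript
// Hypothetical coverage check: which Must-have REQ IDs are not traced
// by any story? Returns the uncovered IDs (empty array means full coverage).
function uncoveredMustReqs(mustReqIds, stories) {
  const traced = new Set(stories.flatMap(s => s.traces || []));
  return mustReqIds.filter(id => !traced.has(id));
}
```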
### Step 2.5: Codex Epics Review

After receiving Gemini decomposition results, validate epic/story quality via Codex CLI:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Critical review of epic/story decomposition - validate coverage, sizing, and dependency structure.
Success: Actionable feedback on epic quality with specific issues identified.

GENERATED EPICS AND STORIES:
${geminiEpicsOutput.slice(0, 5000)}

REQUIREMENTS (Must-Have):
${mustHaveRequirements.slice(0, 2000)}

TASK:
- Verify Must-Have requirement coverage: every Must requirement appears in at least one Story
- Check MVP story sizing: no XL stories in MVP epics (too large to implement independently)
- Validate dependency graph: no circular dependencies between Epics
- Assess acceptance criteria: every Story AC is specific and testable
- Verify traceability: Stories trace back to specific REQ-NNN IDs
- Check Epic granularity: 3-7 epics (not too few/many), 2-5 stories each
- Rate overall decomposition quality: 1-5 with justification

MODE: analysis
EXPECTED: Epic review with: coverage gaps, oversized stories, dependency issues, traceability gaps, quality rating
CONSTRAINTS: Focus on issues that would block execution planning. Be specific about which Story/Epic has problems.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Wait for Codex review result
// Integrate feedback into epics before writing files:
// - Add missing Stories for uncovered Must requirements
// - Split XL stories in MVP epics into smaller units
// - Fix dependency cycles identified by Codex
// - Improve vague acceptance criteria
```

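The circular-dependency check in that review can be sketched as a depth-first search over the epic dependency map. This is illustrative only — in the workflow the reviewing agent performs this reasoning in-prompt rather than in code:

```javascript
// Hypothetical check: detect a cycle in a map of { epicId: [dependsOn...] }.
function hasCycle(deps) {
  const visiting = new Set(); // nodes on the current DFS path
  const done = new Set();     // nodes fully explored
  const visit = (node) => {
    if (done.has(node)) return false;
    if (visiting.has(node)) return true; // back edge -> cycle
    visiting.add(node);
    for (const dep of deps[node] || []) {
      if (visit(dep)) return true;
    }
    visiting.delete(node);
    done.add(node);
    return false;
  };
  return Object.keys(deps).some(id => visit(id));
}
```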
### Step 3: Interactive Validation (Optional)

```javascript
if (!autoMode) {
  // Present Epic overview table and dependency diagram
  AskUserQuestion({
    questions: [
      {
        question: "Review the Epic breakdown. Any adjustments needed?",
        header: "Epics",
        multiSelect: false,
        options: [
          { label: "Looks good", description: "Epic structure is appropriate" },
          { label: "Merge epics", description: "Some epics should be combined" },
          { label: "Split epic", description: "An epic is too large, needs splitting" },
          { label: "Adjust MVP", description: "Change which epics are in MVP" }
        ]
      }
    ]
  });
  // Apply user adjustments
}
```

### Step 4: Generate epics/ directory
|
||||
|
||||
```javascript
|
||||
const template = Read('templates/epics-template.md');
|
||||
|
||||
// Create epics directory
|
||||
Bash(`mkdir -p "${workDir}/epics"`);
|
||||
|
||||
const status = autoMode ? 'complete' : 'draft';
|
||||
const timestamp = new Date().toISOString();
|
||||
|
||||
// Parse CLI output into structured Epics
|
||||
const epicsList = parseEpics(cliOutput); // [{id, slug, title, priority, mvp, size, stories[], reqs[], adrs[], deps[]}]
|
||||
|
||||
// Step 4a: Write individual EPIC-*.md files (one per Epic, stories included)
|
||||
epicsList.forEach(epic => {
|
||||
// Use EPIC-NNN-{slug}.md template from templates/epics-template.md
|
||||
// Fill: id, title, priority, mvp, size, description, requirements links,
|
||||
// architecture links, dependency links, stories with user stories + AC
|
||||
Write(`${workDir}/epics/EPIC-${epic.id}-${epic.slug}.md`, epicContent);
|
||||
});
|
||||
|
||||
// Step 4b: Write _index.md (overview + dependency map + MVP scope + traceability)
|
||||
// Use _index.md template from templates/epics-template.md
|
||||
// Fill: epic overview table (with links), dependency Mermaid diagram,
|
||||
// execution order, MVP scope, traceability matrix, estimation summary
|
||||
Write(`${workDir}/epics/_index.md`, indexContent);
|
||||
|
||||
// Update spec-config.json
|
||||
specConfig.phasesCompleted.push({
|
||||
phase: 5,
|
||||
name: "epics-stories",
|
||||
output_dir: "epics/",
|
||||
output_index: "epics/_index.md",
|
||||
file_count: epicsList.length + 1,
|
||||
completed_at: timestamp
|
||||
});
|
||||
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
|
||||
```
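
The Step 4a template fill can be sketched as a pure function. This is a minimal illustration only: the frontmatter field names (`id`, `title`, `priority`, `mvp`, `size`, `status`, `generated_at`) are assumptions here, and the authoritative layout is defined by `templates/epics-template.md`.

```javascript
// Hypothetical sketch of rendering one EPIC file from a parsed epic object.
// Field names are assumed; templates/epics-template.md is authoritative.
function renderEpicFile(epic, status, timestamp) {
  const frontmatter = [
    '---',
    `id: ${epic.id}`,
    `title: ${epic.title}`,
    `priority: ${epic.priority}`,
    `mvp: ${epic.mvp}`,
    `size: ${epic.size}`,
    `status: ${status}`,
    `generated_at: ${timestamp}`,
    '---'
  ].join('\n');
  return {
    filename: `EPIC-${epic.id}-${epic.slug}.md`,
    content: `${frontmatter}\n\n# EPIC-${epic.id}: ${epic.title}\n`
  };
}
```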

## Output

- **Directory**: `epics/`
  - `_index.md` — Overview table, dependency map, MVP scope, traceability matrix, links
  - `EPIC-NNN-{slug}.md` — Individual Epic with Stories (one file per Epic)
- **Format**: Markdown with YAML frontmatter, cross-linked to requirements and architecture via relative paths

## Quality Checklist

- [ ] 3-7 Epic files with EPIC-NNN IDs
- [ ] >= 1 Epic tagged as MVP in frontmatter
- [ ] 2-5 Stories per Epic file
- [ ] Stories use the "As a... I want... So that..." format
- [ ] `_index.md` has a cross-Epic dependency map (Mermaid)
- [ ] `_index.md` links to all individual Epic files
- [ ] Relative sizing (S/M/L/XL) per Story
- [ ] Epic files link to requirement files and ADR files
- [ ] All files have valid YAML frontmatter

## Next Phase

Proceed to [Phase 6: Readiness Check](06-readiness-check.md) to validate the complete specification package.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 5,
  "status": "complete",
  "files_created": ["epics/_index.md", "epics/EPIC-001-*.md", "..."],
  "file_count": 0,
  "codex_review_integrated": true,
  "mvp_epic_count": 0,
  "total_story_count": 0,
  "quality_notes": ["list of quality concerns or Codex feedback items addressed"],
  "key_decisions": ["MVP scope decisions", "dependency resolution choices"]
}
```

The orchestrator will:
1. Validate that the `epics/` directory exists with `_index.md` and EPIC files
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

172
.codex/skills/spec-generator/phases/06-5-auto-fix.md
Normal file
@@ -0,0 +1,172 @@

# Phase 6.5: Auto-Fix

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent when triggered by the orchestrator after Phase 6 identifies issues. The agent reads this file for instructions, applies fixes to affected documents, and returns a JSON summary.

Automatically repair specification issues identified in the Phase 6 Readiness Check.

## Objective

- Parse readiness-report.md to extract Error and Warning items
- Group issues by originating Phase (2-5)
- Re-generate affected sections with error context injected into CLI prompts
- Re-run Phase 6 validation after fixes

## Input

- Dependency: `{workDir}/readiness-report.md` (Phase 6 output)
- Config: `{workDir}/spec-config.json` (with iteration_count)
- All Phase 2-5 outputs

## Execution Steps

### Step 1: Parse Readiness Report

```javascript
const readinessReport = Read(`${workDir}/readiness-report.md`);
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));

// Load glossary for terminology consistency during fixes
let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }

// Extract issues from the readiness report
// Parse Error and Warning severity items
// Group by originating phase:
//   Phase 2 issues: vision, problem statement, scope, personas
//   Phase 3 issues: requirements, acceptance criteria, priority, traceability
//   Phase 4 issues: architecture, ADRs, tech stack, data model, state machine
//   Phase 5 issues: epics, stories, dependencies, MVP scope

const issuesByPhase = {
  2: [], // product brief issues
  3: [], // requirements issues
  4: [], // architecture issues
  5: []  // epics issues
};

// Parse structured issues from the report
// Each issue: { severity: "Error"|"Warning", description: "...", location: "file:section" }

// Map phase numbers to output files
const phaseOutputFile = {
  2: 'product-brief.md',
  3: 'requirements/_index.md',
  4: 'architecture/_index.md',
  5: 'epics/_index.md'
};
```
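
A minimal sketch of the issue extraction described in the comments above. The line format `[Error] description (at file:section)` is an assumption about how Phase 6 serializes issues; adjust the pattern to whatever readiness-report.md actually emits.

```javascript
// Parse issue lines of the assumed form:
//   [Error] Missing acceptance criteria (at requirements/REQ-003.md:AC)
function parseIssues(reportText) {
  const pattern = /^\[(Error|Warning)\]\s+(.+?)\s+\(at\s+(.+?)\)\s*$/;
  const issues = [];
  for (const line of reportText.split('\n')) {
    const m = line.match(pattern);
    if (m) issues.push({ severity: m[1], description: m[2], location: m[3] });
  }
  return issues;
}
```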

### Step 2: Fix Affected Phases (Sequential)

For each phase with issues (in order 2 -> 3 -> 4 -> 5):

```javascript
for (const [phase, issues] of Object.entries(issuesByPhase)) {
  if (issues.length === 0) continue;

  const errorContext = issues.map(i => `[${i.severity}] ${i.description} (at ${i.location})`).join('\n');

  // Read current phase output
  const currentOutput = Read(`${workDir}/${phaseOutputFile[phase]}`);

  Bash({
    command: `ccw cli -p "PURPOSE: Fix specification issues identified in readiness check for Phase ${phase}.
Success: All listed issues resolved while maintaining consistency with other documents.

CURRENT DOCUMENT:
${currentOutput.slice(0, 5000)}

ISSUES TO FIX:
${errorContext}

${glossary ? `GLOSSARY (maintain consistency):
${JSON.stringify(glossary.terms, null, 2)}` : ''}

TASK:
- Address each listed issue specifically
- Maintain all existing content that is not flagged
- Ensure terminology consistency with glossary
- Preserve YAML frontmatter and cross-references
- Use RFC 2119 keywords for behavioral requirements
- Increment document version number

MODE: analysis
EXPECTED: Corrected document content addressing all listed issues
CONSTRAINTS: Minimal changes - only fix flagged issues, do not restructure unflagged sections
" --tool gemini --mode analysis`,
    run_in_background: true
  });

  // Wait for result, apply fixes to document
  // Update document version in frontmatter
}
```

### Step 3: Update State

```javascript
specConfig.phasesCompleted.push({
  phase: 6.5,
  name: "auto-fix",
  iteration: specConfig.iteration_count,
  phases_fixed: Object.keys(issuesByPhase).filter(p => issuesByPhase[p].length > 0),
  completed_at: new Date().toISOString()
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

### Step 4: Re-run Phase 6 Validation

```javascript
// Re-execute Phase 6: Readiness Check
// This creates a new readiness-report.md
// If still Fail and iteration_count < 2: loop back to Step 1
// If Pass or iteration_count >= 2: proceed to handoff
```
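
The loop control above can be summarized as a small decision function. A sketch, assuming the gate labels and two-iteration cap used in this phase:

```javascript
// Decide whether to run another auto-fix pass or hand off.
function nextAction(gate, iterationCount, maxIterations = 2) {
  if (gate === 'Pass') return 'handoff';
  if (iterationCount >= maxIterations) return 'handoff'; // cap reached, force handoff
  return 'auto-fix';
}
```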

## Output

- **Updated**: Phase 2-5 documents (only affected ones)
- **Updated**: `spec-config.json` (iteration tracking)
- **Triggers**: Phase 6 re-validation

## Quality Checklist

- [ ] All Error-severity issues addressed
- [ ] Warning-severity issues attempted (best effort)
- [ ] Document versions incremented for modified files
- [ ] Terminology consistency maintained
- [ ] Cross-references still valid after fixes
- [ ] Iteration count not exceeded (max 2)

## Next Phase

Re-run [Phase 6: Readiness Check](06-readiness-check.md) to validate fixes.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 6.5,
  "status": "complete",
  "files_modified": ["list of files that were updated"],
  "issues_fixed": {
    "errors": 0,
    "warnings": 0
  },
  "quality_notes": ["list of fix decisions and remaining concerns"],
  "phases_touched": [2, 3, 4, 5]
}
```

The orchestrator will:
1. Validate that listed files were actually modified (check version increment)
2. Update `spec-config.json` iteration tracking
3. Re-trigger Phase 6 validation

581
.codex/skills/spec-generator/phases/06-readiness-check.md
Normal file
@@ -0,0 +1,581 @@

# Phase 6: Readiness Check

Validate the complete specification package, generate a quality report and executive summary, and provide execution handoff options.

## Objective

- Cross-document validation: completeness, consistency, traceability, depth
- Generate quality scores per dimension
- Produce readiness-report.md with an issue list and traceability matrix
- Produce spec-summary.md as a one-page executive summary
- Update all document frontmatter to `status: complete`
- Present handoff options to execution workflows

## Input

- All Phase 2-5 outputs: `product-brief.md`, `requirements/_index.md` (+ `REQ-*.md`, `NFR-*.md`), `architecture/_index.md` (+ `ADR-*.md`), `epics/_index.md` (+ `EPIC-*.md`)
- Config: `{workDir}/spec-config.json`
- Reference: `specs/quality-gates.md`

## Execution Steps

### Step 1: Load All Documents

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirementsIndex = Read(`${workDir}/requirements/_index.md`);
const architectureIndex = Read(`${workDir}/architecture/_index.md`);
const epicsIndex = Read(`${workDir}/epics/_index.md`);
const qualityGates = Read('specs/quality-gates.md');

// Load individual files for deep validation
const reqFiles = Glob(`${workDir}/requirements/REQ-*.md`);
const nfrFiles = Glob(`${workDir}/requirements/NFR-*.md`);
const adrFiles = Glob(`${workDir}/architecture/ADR-*.md`);
const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
```

### Step 2: Cross-Document Validation via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Validate specification package for completeness, consistency, traceability, and depth.
Success: Comprehensive quality report with scores, issues, and traceability matrix.

DOCUMENTS TO VALIDATE:

=== PRODUCT BRIEF ===
${productBrief.slice(0, 3000)}

=== REQUIREMENTS INDEX (${reqFiles.length} REQ + ${nfrFiles.length} NFR files) ===
${requirementsIndex.slice(0, 3000)}

=== ARCHITECTURE INDEX (${adrFiles.length} ADR files) ===
${architectureIndex.slice(0, 2500)}

=== EPICS INDEX (${epicFiles.length} EPIC files) ===
${epicsIndex.slice(0, 2500)}

QUALITY CRITERIA (from quality-gates.md):
${qualityGates.slice(0, 2000)}

TASK:
Perform 4-dimension validation:

1. COMPLETENESS (25%):
- All required sections present in each document?
- All template fields filled with substantive content?
- Score 0-100 with specific gaps listed

2. CONSISTENCY (25%):
- Terminology uniform across documents?
- Terminology glossary compliance: all core terms used consistently per glossary.json definitions?
- No synonym drift (e.g., "user" vs "client" vs "consumer" for the same concept)?
- User personas consistent?
- Scope containment: PRD requirements do not exceed the product brief's defined scope?
- Non-Goals respected: no requirement or story contradicts explicit Non-Goals?
- Tech stack references match between architecture and epics?
- Score 0-100 with inconsistencies listed

3. TRACEABILITY (25%):
- Every goal has >= 1 requirement?
- Every Must requirement has architecture coverage?
- Every Must requirement appears in >= 1 story?
- ADR choices reflected in epics?
- Build traceability matrix: Goal -> Requirement -> Architecture -> Epic/Story
- Score 0-100 with orphan items listed

4. DEPTH (25%):
- Acceptance criteria specific and testable?
- Architecture decisions justified with alternatives?
- Stories estimable by a dev team?
- Score 0-100 with vague areas listed

ALSO:
- List all issues found, classified as Error/Warning/Info
- Generate overall weighted score
- Determine gate: Pass (>=80) / Review (60-79) / Fail (<60)

MODE: analysis
EXPECTED: JSON-compatible output with: dimension scores, overall score, gate, issues list (severity + description + location), traceability matrix
CONSTRAINTS: Be thorough but fair. Focus on actionable issues.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```
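
The weighting and gate thresholds in the prompt reduce to a small calculation. A sketch, assuming each dimension is scored 0-100 (the validator's exact output shape may differ):

```javascript
// Weighted overall score (25% per dimension) and gate decision.
function computeGate(scores) {
  const overall =
    0.25 * scores.completeness +
    0.25 * scores.consistency +
    0.25 * scores.traceability +
    0.25 * scores.depth;
  const gate = overall >= 80 ? 'Pass' : overall >= 60 ? 'Review' : 'Fail';
  return { overall, gate };
}
```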

### Step 2b: Codex Technical Depth Review

Launch the Codex review in parallel with the Gemini validation for a deeper technical assessment:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Deep technical quality review of specification package - assess architectural rigor and implementation readiness.
Success: Technical quality assessment with specific actionable feedback on ADR quality, data model, security, and observability.

ARCHITECTURE INDEX:
${architectureIndex.slice(0, 3000)}

ADR FILES (summaries):
${adrFiles.map(f => Read(f).slice(0, 500)).join('\n---\n')}

REQUIREMENTS INDEX:
${requirementsIndex.slice(0, 2000)}

TASK:
- ADR Alternative Quality: Each ADR has >= 2 genuine alternatives with substantive pros/cons (not strawman options)
- Data Model Completeness: All entities referenced in requirements have field-level definitions with types and constraints
- Security Coverage: Authentication, authorization, data protection, and input validation addressed for all external interfaces
- Observability Specification: Metrics, logging, and health checks defined for service/platform types
- Error Handling: Error classification and recovery strategies defined per component
- Configuration Model: All configurable parameters documented with types, defaults, and constraints
- Rate each dimension 1-5 with specific gaps identified

MODE: analysis
EXPECTED: Technical depth review with: per-dimension scores (1-5), specific gaps, improvement recommendations, overall technical readiness assessment
CONSTRAINTS: Focus on gaps that would cause implementation ambiguity. Ignore cosmetic issues.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Codex result is merged with the Gemini result in Step 3
```

### Step 2c: Per-Requirement Verification

Iterate through all individual requirement files for fine-grained verification:

```javascript
// Load all requirement files (repeated from Step 1 so this snippet is self-contained)
const reqFiles = Glob(`${workDir}/requirements/REQ-*.md`);
const nfrFiles = Glob(`${workDir}/requirements/NFR-*.md`);
const allReqFiles = [...reqFiles, ...nfrFiles];

// Load reference documents for cross-checking
const productBrief = Read(`${workDir}/product-brief.md`);
const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
const adrFiles = Glob(`${workDir}/architecture/ADR-*.md`);

// Read all epic content for the coverage check
const epicContents = epicFiles.map(f => ({ path: f, content: Read(f) }));
const adrContents = adrFiles.map(f => ({ path: f, content: Read(f) }));

// Per-requirement verification
const verificationResults = allReqFiles.map(reqFile => {
  const content = Read(reqFile);
  const reqId = extractReqId(content); // e.g., REQ-001 or NFR-PERF-001
  const priority = extractPriority(content); // Must/Should/Could/Won't

  // Check 1: AC exists and is testable
  const hasAC = content.includes('- [ ]') || content.includes('Acceptance Criteria');
  const acTestable = !content.match(/should be (fast|good|reliable|secure)/i); // No vague AC

  // Check 2: Traces back to a Brief goal
  const tracesLinks = content.match(/product-brief\.md/);

  // Check 3: Must requirements have Story coverage (search EPIC files)
  const storyCoverage = priority !== 'Must' ? 'N/A' :
    epicContents.some(e => e.content.includes(reqId)) ? 'Covered' : 'MISSING';

  // Check 4: Must requirements have architecture coverage (search ADR files)
  const archCoverage = priority !== 'Must' ? 'N/A' :
    adrContents.some(a => a.content.includes(reqId)) ||
    Read(`${workDir}/architecture/_index.md`).includes(reqId) ? 'Covered' : 'MISSING';

  return {
    req_id: reqId,
    priority,
    ac_exists: hasAC ? 'Yes' : 'MISSING',
    ac_testable: acTestable ? 'Yes' : 'VAGUE',
    brief_trace: tracesLinks ? 'Yes' : 'MISSING',
    story_coverage: storyCoverage,
    arch_coverage: archCoverage,
    pass: hasAC && acTestable && tracesLinks &&
      (priority !== 'Must' || (storyCoverage === 'Covered' && archCoverage === 'Covered'))
  };
});

// Generate the Per-Requirement Verification table for readiness-report.md
const verificationTable = `
## Per-Requirement Verification

| Req ID | Priority | AC Exists | AC Testable | Brief Trace | Story Coverage | Arch Coverage | Status |
|--------|----------|-----------|-------------|-------------|----------------|---------------|--------|
${verificationResults.map(r =>
  `| ${r.req_id} | ${r.priority} | ${r.ac_exists} | ${r.ac_testable} | ${r.brief_trace} | ${r.story_coverage} | ${r.arch_coverage} | ${r.pass ? 'PASS' : 'FAIL'} |`
).join('\n')}

**Summary**: ${verificationResults.filter(r => r.pass).length}/${verificationResults.length} requirements pass all checks.
`;
```
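
`extractReqId` and `extractPriority` are used above but not defined in this phase. Hedged sketches, assuming the frontmatter conventions from the requirement templates (`id: REQ-001` / `id: NFR-PERF-001`, `priority: Must|Should|Could|Won't`):

```javascript
// Pull the requirement ID and MoSCoW priority out of YAML frontmatter.
function extractReqId(content) {
  const m = content.match(/^id:\s*((?:REQ|NFR)[A-Z-]*-\d+)/m);
  return m ? m[1] : null;
}

function extractPriority(content) {
  const m = content.match(/^priority:\s*(Must|Should|Could|Won't)/m);
  return m ? m[1] : null;
}
```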

### Step 3: Generate readiness-report.md

```javascript
const frontmatterReport = `---
session_id: ${specConfig.session_id}
phase: 6
document_type: readiness-report
status: complete
generated_at: ${new Date().toISOString()}
stepsCompleted: ["load-all", "cross-validation", "codex-technical-review", "per-req-verification", "scoring", "report-generation"]
version: 1
dependencies:
  - product-brief.md
  - requirements/_index.md
  - architecture/_index.md
  - epics/_index.md
---`;

// Report content assembled from the CLI validation output:
// - Quality Score Summary (4 dimensions + overall)
// - Gate Decision (Pass/Review/Fail)
// - Issue List (grouped by severity: Error, Warning, Info)
// - Traceability Matrix (Goal -> Req -> Arch -> Epic/Story)
// - Codex Technical Depth Review (per-dimension scores from Step 2b)
// - Per-Requirement Verification Table (from Step 2c)
// - Recommendations for improvement

Write(`${workDir}/readiness-report.md`, `${frontmatterReport}\n\n${reportContent}`);
```

### Step 4: Generate spec-summary.md

```javascript
const frontmatterSummary = `---
session_id: ${specConfig.session_id}
phase: 6
document_type: spec-summary
status: complete
generated_at: ${new Date().toISOString()}
stepsCompleted: ["synthesis"]
version: 1
dependencies:
  - product-brief.md
  - requirements/_index.md
  - architecture/_index.md
  - epics/_index.md
  - readiness-report.md
---`;

// One-page executive summary:
// - Product Name & Vision (from product-brief.md)
// - Problem & Target Users (from product-brief.md)
// - Key Requirements count (Must/Should/Could, from requirements/_index.md)
// - Architecture Style & Tech Stack (from architecture/_index.md)
// - Epic Overview (count, MVP scope from epics/_index.md)
// - Quality Score (from readiness-report.md)
// - Recommended Next Step
// - File manifest with links

Write(`${workDir}/spec-summary.md`, `${frontmatterSummary}\n\n${summaryContent}`);
```

### Step 5: Update All Document Status

```javascript
// Update frontmatter status to 'complete' in all documents (directories + single files)
// product-brief.md is a single file
const singleFiles = ['product-brief.md'];
singleFiles.forEach(doc => {
  const content = Read(`${workDir}/${doc}`);
  Write(`${workDir}/${doc}`, content.replace(/status: draft/, 'status: complete'));
});

// Update all files in directories (index + individual files)
const dirFiles = [
  ...Glob(`${workDir}/requirements/*.md`),
  ...Glob(`${workDir}/architecture/*.md`),
  ...Glob(`${workDir}/epics/*.md`)
];
dirFiles.forEach(filePath => {
  const content = Read(filePath);
  if (content.includes('status: draft')) {
    Write(filePath, content.replace(/status: draft/, 'status: complete'));
  }
});

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 6,
  name: "readiness-check",
  output_file: "readiness-report.md",
  completed_at: new Date().toISOString()
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

### Step 6: Handoff Options

```javascript
AskUserQuestion({
  questions: [
    {
      question: "Specification package is complete. What would you like to do next?",
      header: "Next Step",
      multiSelect: false,
      options: [
        {
          label: "Execute via lite-plan",
          description: "Start implementing with /workflow-lite-plan, one Epic at a time"
        },
        {
          label: "Create roadmap",
          description: "Generate execution roadmap with /workflow:req-plan-with-file"
        },
        {
          label: "Full planning",
          description: "Detailed planning with /workflow-plan for the full scope"
        },
        {
          label: "Export Issues (Phase 7)",
          description: "Create issues per Epic with spec links and wave assignment"
        },
        {
          label: "Iterate & improve",
          description: "Re-run failed phases based on readiness report issues (max 2 iterations)"
        }
      ]
    }
  ]
});

// Based on the user selection, execute the corresponding handoff:

if (selection === "Execute via lite-plan") {
  // lite-plan accepts a text description directly
  // Read the first MVP Epic from the individual EPIC-*.md files
  const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
  const firstMvpFile = epicFiles.find(f => {
    const content = Read(f);
    return content.includes('mvp: true');
  });
  const epicContent = Read(firstMvpFile);
  const title = extractTitle(epicContent); // First # heading
  const description = extractSection(epicContent, "Description");
  Skill(skill="workflow-lite-plan", args=`"${title}: ${description}"`)
}

if (selection === "Full planning" || selection === "Create roadmap") {
  // === Bridge: Build brainstorm_artifacts compatible structure ===
  // Reads from directory-based outputs (individual files), maps to .brainstorming/ format
  // for context-search-agent auto-discovery → action-planning-agent consumption.

  // Step A: Read spec documents from directories
  const specSummary = Read(`${workDir}/spec-summary.md`);
  const productBrief = Read(`${workDir}/product-brief.md`);
  const requirementsIndex = Read(`${workDir}/requirements/_index.md`);
  const architectureIndex = Read(`${workDir}/architecture/_index.md`);
  const epicsIndex = Read(`${workDir}/epics/_index.md`);

  // Read individual EPIC files (already split — direct mapping to feature-specs)
  const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);

  // Step B: Build structured description from spec-summary
  const structuredDesc = `GOAL: ${extractGoal(specSummary)}
SCOPE: ${extractScope(specSummary)}
CONTEXT: Generated from spec session ${specConfig.session_id}. Source: ${workDir}/`;

  // Step C: Create WFS session (provides session directory + .brainstorming/)
  Skill(skill="workflow:session:start", args=`--auto "${structuredDesc}"`)
  // → Produces sessionId (WFS-xxx) and session directory at .workflow/active/{sessionId}/

  // Step D: Create .brainstorming/ bridge files
  const brainstormDir = `.workflow/active/${sessionId}/.brainstorming`;
  Bash(`mkdir -p "${brainstormDir}/feature-specs"`);

  // D.1: guidance-specification.md (highest priority — action-planning-agent reads first)
  // Synthesized from spec-summary + product-brief + architecture/requirements indexes
  Write(`${brainstormDir}/guidance-specification.md`, `
# ${specConfig.seed_analysis.problem_statement} - Confirmed Guidance Specification

**Source**: spec-generator session ${specConfig.session_id}
**Generated**: ${new Date().toISOString()}
**Spec Directory**: ${workDir}

## 1. Project Positioning & Goals
${extractSection(productBrief, "Vision")}
${extractSection(productBrief, "Goals")}

## 2. Requirements Summary
${extractSection(requirementsIndex, "Functional Requirements")}

## 3. Architecture Decisions
${extractSection(architectureIndex, "Architecture Decision Records")}
${extractSection(architectureIndex, "Technology Stack")}

## 4. Implementation Scope
${extractSection(epicsIndex, "Epic Overview")}
${extractSection(epicsIndex, "MVP Scope")}

## Feature Decomposition
${extractSection(epicsIndex, "Traceability Matrix")}

## Appendix: Source Documents
| Document | Path | Description |
|----------|------|-------------|
| Product Brief | ${workDir}/product-brief.md | Vision, goals, scope |
| Requirements | ${workDir}/requirements/ | _index.md + REQ-*.md + NFR-*.md |
| Architecture | ${workDir}/architecture/ | _index.md + ADR-*.md |
| Epics | ${workDir}/epics/ | _index.md + EPIC-*.md |
| Readiness Report | ${workDir}/readiness-report.md | Quality validation |
`);

  // D.2: feature-index.json (each EPIC file mapped to a Feature)
  // Path: feature-specs/feature-index.json (matches context-search-agent discovery)
  // Directly read from individual EPIC-*.md files (no monolithic parsing needed)
  const features = epicFiles.map(epicFile => {
    const content = Read(epicFile);
    const fm = parseFrontmatter(content); // Extract YAML frontmatter
    const basename = path.basename(epicFile, '.md'); // EPIC-001-slug
    const epicNum = fm.id.replace('EPIC-', ''); // 001
    const slug = basename.replace(/^EPIC-\d+-/, ''); // slug
    return {
      id: `F-${epicNum}`,
      slug: slug,
      name: extractTitle(content),
      description: extractSection(content, "Description"),
      priority: fm.mvp ? "High" : "Medium",
      spec_path: `${brainstormDir}/feature-specs/F-${epicNum}-${slug}.md`,
      source_epic: fm.id,
      source_file: epicFile
    };
  });
  Write(`${brainstormDir}/feature-specs/feature-index.json`, JSON.stringify({
    version: "1.0",
    source: "spec-generator",
    spec_session: specConfig.session_id,
    features,
    cross_cutting_specs: []
  }, null, 2));

  // D.3: Feature-spec files — directly adapted from the individual EPIC-*.md files
  // Since Epics are already individual documents, transform the format directly
  // Filename pattern: F-{num}-{slug}.md (matches context-search-agent glob F-*-*.md)
  features.forEach(feature => {
    const epicContent = Read(feature.source_file);
    Write(feature.spec_path, `
# Feature Spec: ${feature.source_epic} - ${feature.name}

**Source**: ${feature.source_file}
**Priority**: ${feature.priority === "High" ? "MVP" : "Post-MVP"}

## Description
${extractSection(epicContent, "Description")}

## Stories
${extractSection(epicContent, "Stories")}

## Requirements
${extractSection(epicContent, "Requirements")}

## Architecture
${extractSection(epicContent, "Architecture")}
`);
  });

  // Step E: Invoke downstream workflow
  // context-search-agent will auto-discover .brainstorming/ files
  // → context-package.json.brainstorm_artifacts populated
  // → action-planning-agent loads guidance_specification (P1) + feature_index (P2)
  if (selection === "Full planning") {
    Skill(skill="workflow-plan", args=`"${structuredDesc}"`)
  } else {
    Skill(skill="workflow:req-plan-with-file", args=`"${extractGoal(specSummary)}"`)
  }
}

if (selection === "Export Issues (Phase 7)") {
  // Proceed to Phase 7: Issue Export
  // Read phases/07-issue-export.md and execute
}

// If the user selects "Other": export only, or return to a specific phase

if (selection === "Iterate & improve") {
  // Check iteration count
  if (specConfig.iteration_count >= 2) {
    // Max iterations reached, force handoff
    // Present handoff options again without iterate
    return;
  }

  // Update iteration tracking
  specConfig.iteration_count = (specConfig.iteration_count || 0) + 1;
  specConfig.iteration_history.push({
    iteration: specConfig.iteration_count,
    timestamp: new Date().toISOString(),
    readiness_score: overallScore,
    errors_found: errorCount,
    phases_to_fix: affectedPhases
  });
  Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));

  // Proceed to Phase 6.5: Auto-Fix
  // Read phases/06-5-auto-fix.md and execute
}
```

#### Helper Functions Reference (pseudocode)

The following helper functions are used in the handoff bridge. They operate on markdown content from individual spec files:

```javascript
// Extract the title from a markdown document (first # heading)
function extractTitle(markdown) {
  // Return the text after the first # heading (e.g., "# EPIC-001: Title" → "Title")
}

// Parse YAML frontmatter from markdown (between --- markers)
function parseFrontmatter(markdown) {
  // Return an object with: id, priority, mvp, size, requirements, architecture, dependencies
}

// Extract GOAL/SCOPE from spec-summary frontmatter or ## sections
function extractGoal(specSummary) { /* Return the Vision/Goal line */ }
function extractScope(specSummary) { /* Return the Scope/MVP boundary */ }

// Extract a named ## section from a markdown document
function extractSection(markdown, sectionName) {
  // Return content between ## {sectionName} and the next ## heading
}
```
|
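A minimal sketch of these helpers, assuming flat `key: value` frontmatter (no nested YAML lists) and exact `## Section` headings; this is one possible implementation, not the canonical one:

```javascript
// Sketch implementations of the handoff-bridge helpers.
// Assumption: frontmatter is flat "key: value" pairs (values stay strings).

// "# EPIC-001: Title" -> "Title"; a plain "# Title" -> "Title"
function extractTitle(markdown) {
  const m = markdown.match(/^#\s+(.+)$/m);
  if (!m) return null;
  const heading = m[1].trim();
  const id = heading.match(/^[A-Z]+-\d+:\s*(.+)$/);
  return id ? id[1] : heading;
}

// Parse "---\nkey: value\n---" into an object
function parseFrontmatter(markdown) {
  const m = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return {};
  const fm = {};
  for (const line of m[1].split("\n")) {
    const kv = line.match(/^(\w+):\s*(.*)$/);
    if (kv) fm[kv[1]] = kv[2];
  }
  return fm;
}

// Content between "## {sectionName}" and the next "## " heading
function extractSection(markdown, sectionName) {
  const lines = markdown.split("\n");
  const start = lines.findIndex(l => l.trim() === `## ${sectionName}`);
  if (start === -1) return "";
  let end = lines.length;
  for (let i = start + 1; i < lines.length; i++) {
    if (lines[i].startsWith("## ")) { end = i; break; }
  }
  return lines.slice(start + 1, end).join("\n").trim();
}
```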
||||
|
||||
## Output
|
||||
|
||||
- **File**: `readiness-report.md` - Quality validation report
|
||||
- **File**: `spec-summary.md` - One-page executive summary
|
||||
- **Format**: Markdown with YAML frontmatter
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
- [ ] All document directories validated (product-brief, requirements/, architecture/, epics/)
|
||||
- [ ] All frontmatter parseable and valid (index + individual files)
|
||||
- [ ] Cross-references checked (relative links between directories)
|
||||
- [ ] Overall quality score calculated
|
||||
- [ ] No unresolved Error-severity issues
|
||||
- [ ] Traceability matrix generated
|
||||
- [ ] spec-summary.md created
|
||||
- [ ] All document statuses updated to 'complete' (all files in all directories)
|
||||
- [ ] Handoff options presented
|
||||
|
||||
## Completion
|
||||
|
||||
This is the final phase. The specification package is ready for execution handoff.
|
||||
|
||||
### Output Files Manifest
|
||||
|
||||
| Path | Phase | Description |
|
||||
|------|-------|-------------|
|
||||
| `spec-config.json` | 1 | Session configuration and state |
|
||||
| `discovery-context.json` | 1 | Codebase exploration (optional) |
|
||||
| `product-brief.md` | 2 | Product brief with multi-perspective synthesis |
|
||||
| `requirements/` | 3 | Directory: `_index.md` + `REQ-*.md` + `NFR-*.md` |
|
||||
| `architecture/` | 4 | Directory: `_index.md` + `ADR-*.md` |
|
||||
| `epics/` | 5 | Directory: `_index.md` + `EPIC-*.md` |
|
||||
| `readiness-report.md` | 6 | Quality validation report |
|
||||
| `spec-summary.md` | 6 | One-page executive summary |
|
||||
329
.codex/skills/spec-generator/phases/07-issue-export.md
Normal file
@@ -0,0 +1,329 @@
|
||||
# Phase 7: Issue Export
|
||||
|
||||
Map specification Epics to issues, create them via `ccw issue create`, and generate an export report with spec document links.
|
||||
|
||||
> **Execution Mode: Inline**
|
||||
> This phase runs in the main orchestrator context (not delegated to agent) for direct access to `ccw issue create` CLI and interactive handoff options.
|
||||
|
||||
## Objective
|
||||
|
||||
- Read all EPIC-*.md files from Phase 5 output
|
||||
- Assign waves: MVP epics → wave-1, non-MVP → wave-2
|
||||
- Create one issue per Epic via `ccw issue create`
|
||||
- Map Epic dependencies to issue dependencies
|
||||
- Generate issue-export-report.md with mapping table and spec links
|
||||
- Present handoff options for execution
|
||||
|
||||
## Input
|
||||
|
||||
- Dependency: `{workDir}/epics/_index.md` (and individual `EPIC-*.md` files)
|
||||
- Reference: `{workDir}/readiness-report.md`, `{workDir}/spec-config.json`
|
||||
- Reference: `{workDir}/product-brief.md`, `{workDir}/requirements/_index.md`, `{workDir}/architecture/_index.md`
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 1: Load Epic Files
|
||||
|
||||
```javascript
|
||||
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
|
||||
const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
|
||||
const epicsIndex = Read(`${workDir}/epics/_index.md`);
|
||||
|
||||
// Parse each Epic file
|
||||
const epics = epicFiles.map(epicFile => {
|
||||
const content = Read(epicFile);
|
||||
const fm = parseFrontmatter(content);
|
||||
const title = extractTitle(content);
|
||||
const description = extractSection(content, "Description");
|
||||
const stories = extractSection(content, "Stories");
|
||||
const reqRefs = extractSection(content, "Requirements");
|
||||
const adrRefs = extractSection(content, "Architecture");
|
||||
const deps = fm.dependencies || [];
|
||||
|
||||
return {
|
||||
file: epicFile,
|
||||
id: fm.id, // e.g., EPIC-001
|
||||
title,
|
||||
description,
|
||||
stories,
|
||||
reqRefs,
|
||||
adrRefs,
|
||||
priority: fm.priority,
|
||||
mvp: fm.mvp || false,
|
||||
dependencies: deps, // other EPIC IDs this depends on
|
||||
size: fm.size
|
||||
};
|
||||
});
|
||||
```
|
||||
|
||||
### Step 2: Wave Assignment
|
||||
|
||||
```javascript
|
||||
const epicWaves = epics.map(epic => ({
|
||||
...epic,
|
||||
wave: epic.mvp ? 1 : 2
|
||||
}));
|
||||
|
||||
// Log wave assignment
|
||||
const wave1 = epicWaves.filter(e => e.wave === 1);
|
||||
const wave2 = epicWaves.filter(e => e.wave === 2);
|
||||
// wave-1: MVP epics (must-have, core functionality)
|
||||
// wave-2: Post-MVP epics (should-have, enhancements)
|
||||
```
|
||||
|
||||
### Step 3: Issue Creation Loop
|
||||
|
||||
```javascript
|
||||
const createdIssues = [];
|
||||
const epicToIssue = {}; // EPIC-ID -> Issue ID mapping
|
||||
|
||||
for (const epic of epicWaves) {
|
||||
// Build issue JSON matching roadmap-with-file schema
|
||||
const issueData = {
|
||||
title: `[${specConfig.session_id}] ${epic.title}`,
|
||||
status: "pending",
|
||||
priority: epic.wave === 1 ? 2 : 3, // wave-1 = higher priority
|
||||
context: `## ${epic.title}
|
||||
|
||||
${epic.description}
|
||||
|
||||
## Stories
|
||||
${epic.stories}
|
||||
|
||||
## Spec References
|
||||
- Epic: ${epic.file}
|
||||
- Requirements: ${epic.reqRefs}
|
||||
- Architecture: ${epic.adrRefs}
|
||||
- Product Brief: ${workDir}/product-brief.md
|
||||
- Full Spec: ${workDir}/`,
|
||||
source: "text",
|
||||
tags: [
|
||||
"spec-generated",
|
||||
`spec:${specConfig.session_id}`,
|
||||
`wave-${epic.wave}`,
|
||||
epic.mvp ? "mvp" : "post-mvp",
|
||||
`epic:${epic.id}`
|
||||
],
|
||||
extended_context: {
|
||||
notes: {
|
||||
session: specConfig.session_id,
|
||||
spec_dir: workDir,
|
||||
source_epic: epic.id,
|
||||
wave: epic.wave,
|
||||
depends_on_issues: [], // Resolved in Step 4 (issue IDs are unknown until all Epics are created)
|
||||
spec_documents: {
|
||||
product_brief: `${workDir}/product-brief.md`,
|
||||
requirements: `${workDir}/requirements/_index.md`,
|
||||
architecture: `${workDir}/architecture/_index.md`,
|
||||
epic: epic.file
|
||||
}
|
||||
}
|
||||
},
|
||||
lifecycle_requirements: {
|
||||
test_strategy: "acceptance",
|
||||
regression_scope: "affected",
|
||||
acceptance_type: "manual",
|
||||
commit_strategy: "per-epic"
|
||||
}
|
||||
};
|
||||
|
||||
// Create issue via ccw issue create (write the JSON to a temp file first:
|
||||
// piping `echo '<json>'` breaks when a value contains a single quote, e.g. "Won't")
|
||||
Write(`${workDir}/.tmp-issue.json`, JSON.stringify(issueData));
|
||||
const result = Bash(`ccw issue create < "${workDir}/.tmp-issue.json"`);
|
||||
|
||||
// Parse returned issue ID
|
||||
const issueId = JSON.parse(result).id; // e.g., ISS-20260308-001
|
||||
epicToIssue[epic.id] = issueId;
|
||||
|
||||
createdIssues.push({
|
||||
epic_id: epic.id,
|
||||
epic_title: epic.title,
|
||||
issue_id: issueId,
|
||||
wave: epic.wave,
|
||||
priority: issueData.priority,
|
||||
mvp: epic.mvp
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
### Step 4: Epic Dependency → Issue Dependency Mapping
|
||||
|
||||
```javascript
|
||||
// Map EPIC dependencies to Issue dependencies
|
||||
for (const epic of epicWaves) {
|
||||
if (epic.dependencies.length === 0) continue;
|
||||
|
||||
const issueId = epicToIssue[epic.id];
|
||||
const depIssueIds = epic.dependencies
|
||||
.map(depEpicId => epicToIssue[depEpicId])
|
||||
.filter(Boolean);
|
||||
|
||||
if (depIssueIds.length > 0) {
|
||||
// Record the resolved issue IDs for the Step 5 report.
|
||||
// This is informational only: issues were already created in Step 3,
|
||||
// and actual dependency enforcement happens in the execution phase.
|
||||
}
|
||||
}
|
||||
```
|
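One gap worth guarding here: nothing above checks for circular Epic dependencies, which would make the Dependency Map misleading. A hypothetical pre-export guard (`findCycle` is illustrative, not part of the phase spec) might look like:

```javascript
// Hypothetical guard: detect circular Epic dependencies via depth-first search.
function findCycle(epics) {
  const byId = Object.fromEntries(epics.map(e => [e.id, e]));
  const state = {}; // undefined = unvisited, 1 = on stack, 2 = done
  function visit(id, path) {
    if (state[id] === 1) return path.concat(id); // back-edge: cycle found
    if (state[id] === 2 || !byId[id]) return null;
    state[id] = 1;
    for (const dep of byId[id].dependencies || []) {
      const cycle = visit(dep, path.concat(id));
      if (cycle) return cycle;
    }
    state[id] = 2;
    return null;
  }
  for (const e of epics) {
    const cycle = visit(e.id, []);
    if (cycle) return cycle;
  }
  return null; // no circular dependencies
}
```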
||||
|
||||
### Step 5: Generate issue-export-report.md
|
||||
|
||||
```javascript
|
||||
const timestamp = new Date().toISOString();
|
||||
|
||||
const reportContent = `---
|
||||
session_id: ${specConfig.session_id}
|
||||
phase: 7
|
||||
document_type: issue-export-report
|
||||
status: complete
|
||||
generated_at: ${timestamp}
|
||||
stepsCompleted: ["load-epics", "wave-assignment", "issue-creation", "dependency-mapping", "report-generation"]
|
||||
version: 1
|
||||
dependencies:
|
||||
- epics/_index.md
|
||||
- readiness-report.md
|
||||
---
|
||||
|
||||
# Issue Export Report
|
||||
|
||||
## Summary
|
||||
|
||||
- **Session**: ${specConfig.session_id}
|
||||
- **Issues Created**: ${createdIssues.length}
|
||||
- **Wave 1 (MVP)**: ${wave1.length} issues
|
||||
- **Wave 2 (Post-MVP)**: ${wave2.length} issues
|
||||
- **Export Date**: ${timestamp}
|
||||
|
||||
## Issue Mapping
|
||||
|
||||
| Epic ID | Epic Title | Issue ID | Wave | Priority | MVP |
|
||||
|---------|-----------|----------|------|----------|-----|
|
||||
${createdIssues.map(i =>
|
||||
`| ${i.epic_id} | ${i.epic_title} | ${i.issue_id} | ${i.wave} | ${i.priority} | ${i.mvp ? 'Yes' : 'No'} |`
|
||||
).join('\n')}
|
||||
|
||||
## Spec Document Links
|
||||
|
||||
| Document | Path | Description |
|
||||
|----------|------|-------------|
|
||||
| Product Brief | ${workDir}/product-brief.md | Vision, goals, scope |
|
||||
| Requirements | ${workDir}/requirements/_index.md | Functional + non-functional requirements |
|
||||
| Architecture | ${workDir}/architecture/_index.md | Components, ADRs, tech stack |
|
||||
| Epics | ${workDir}/epics/_index.md | Epic/Story breakdown |
|
||||
| Readiness Report | ${workDir}/readiness-report.md | Quality validation |
|
||||
| Spec Summary | ${workDir}/spec-summary.md | Executive summary |
|
||||
|
||||
## Dependency Map
|
||||
|
||||
| Issue ID | Depends On |
|
||||
|----------|-----------|
|
||||
${createdIssues.map(i => {
|
||||
const epic = epicWaves.find(e => e.id === i.epic_id);
|
||||
const deps = (epic.dependencies || []).map(d => epicToIssue[d]).filter(Boolean);
|
||||
return `| ${i.issue_id} | ${deps.length > 0 ? deps.join(', ') : 'None'} |`;
|
||||
}).join('\n')}
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. **team-planex**: Execute all issues via coordinated team workflow
|
||||
2. **Wave 1 only**: Execute MVP issues first (${wave1.length} issues)
|
||||
3. **View issues**: Browse created issues via \`ccw issue list --tag spec:${specConfig.session_id}\`
|
||||
4. **Manual review**: Review individual issues before execution
|
||||
`;
|
||||
|
||||
Write(`${workDir}/issue-export-report.md`, reportContent);
|
||||
```
|
||||
|
||||
### Step 6: Update spec-config.json
|
||||
|
||||
```javascript
|
||||
specConfig.issue_ids = createdIssues.map(i => i.issue_id);
|
||||
specConfig.issues_created = createdIssues.length;
|
||||
specConfig.phasesCompleted.push({
|
||||
phase: 7,
|
||||
name: "issue-export",
|
||||
output_file: "issue-export-report.md",
|
||||
issues_created: createdIssues.length,
|
||||
wave_1_count: wave1.length,
|
||||
wave_2_count: wave2.length,
|
||||
completed_at: timestamp
|
||||
});
|
||||
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
|
||||
```
|
||||
|
||||
### Step 7: Handoff Options
|
||||
|
||||
```javascript
|
||||
AskUserQuestion({
|
||||
questions: [
|
||||
{
|
||||
question: `${createdIssues.length} issues created from ${epicWaves.length} Epics. What would you like to do next?`,
|
||||
header: "Next Step",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{
|
||||
label: "Execute via team-planex",
|
||||
description: `Execute all ${createdIssues.length} issues with coordinated team workflow`
|
||||
},
|
||||
{
|
||||
label: "Wave 1 only",
|
||||
description: `Execute ${wave1.length} MVP issues first`
|
||||
},
|
||||
{
|
||||
label: "View issues",
|
||||
description: "Browse created issues before deciding"
|
||||
},
|
||||
{
|
||||
label: "Done",
|
||||
description: "Export complete, handle manually"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
});
|
||||
|
||||
// Based on user selection:
|
||||
if (selection === "Execute via team-planex") {
|
||||
const issueIds = createdIssues.map(i => i.issue_id).join(',');
|
||||
Skill({ skill: "team-planex", args: `--issues ${issueIds}` });
|
||||
}
|
||||
|
||||
if (selection === "Wave 1 only") {
|
||||
const wave1Ids = createdIssues.filter(i => i.wave === 1).map(i => i.issue_id).join(',');
|
||||
Skill({ skill: "team-planex", args: `--issues ${wave1Ids}` });
|
||||
}
|
||||
|
||||
if (selection === "View issues") {
|
||||
Bash(`ccw issue list --tag spec:${specConfig.session_id}`);
|
||||
}
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
- **File**: `issue-export-report.md` — Issue mapping table + spec links + next steps
|
||||
- **Updated**: `.workflow/issues/issues.jsonl` — New issue entries appended
|
||||
- **Updated**: `spec-config.json` — Phase 7 completion + issue IDs
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
- [ ] All MVP Epics have corresponding issues created
|
||||
- [ ] All non-MVP Epics have corresponding issues created
|
||||
- [ ] Issue tags include `spec-generated` and `spec:{session_id}`
|
||||
- [ ] Issue `extended_context.notes.spec_documents` paths are correct
|
||||
- [ ] Wave assignment matches MVP status (MVP → wave-1, non-MVP → wave-2)
|
||||
- [ ] Epic dependencies mapped to issue dependency references
|
||||
- [ ] `issue-export-report.md` generated with mapping table
|
||||
- [ ] `spec-config.json` updated with `issue_ids` and `issues_created`
|
||||
- [ ] Handoff options presented
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Blocking? | Action |
|
||||
|-------|-----------|--------|
|
||||
| `ccw issue create` fails for one Epic | No | Log error, continue with remaining Epics, report partial creation |
|
||||
| No EPIC files found | Yes | Error and return to Phase 5 |
|
||||
| All issue creations fail | Yes | Error with CLI diagnostic, suggest manual creation |
|
||||
| Dependency EPIC not found in mapping | No | Skip dependency link, log warning |
|
||||
|
||||
## Completion
|
||||
|
||||
Phase 7 is the final phase. The specification package has been fully converted to executable issues ready for team-planex or manual execution.
|
||||
295
.codex/skills/spec-generator/specs/document-standards.md
Normal file
@@ -0,0 +1,295 @@
|
||||
# Document Standards
|
||||
|
||||
Defines format conventions, YAML frontmatter schema, naming rules, and content structure for all spec-generator outputs.
|
||||
|
||||
## When to Use
|
||||
|
||||
| Phase | Usage | Section |
|
||||
|-------|-------|---------|
|
||||
| All Phases | Frontmatter format | YAML Frontmatter Schema |
|
||||
| All Phases | File naming | Naming Conventions |
|
||||
| Phase 2-5 | Document structure | Content Structure |
|
||||
| Phase 6 | Validation reference | All sections |
|
||||
|
||||
---
|
||||
|
||||
## YAML Frontmatter Schema
|
||||
|
||||
Every generated document MUST begin with YAML frontmatter:
|
||||
|
||||
```yaml
|
||||
---
|
||||
session_id: SPEC-{slug}-{YYYY-MM-DD}
|
||||
phase: {1-7}
|
||||
document_type: {product-brief|requirements|architecture|epics|readiness-report|spec-summary|issue-export-report}
|
||||
status: draft|review|complete
|
||||
generated_at: {ISO8601 timestamp}
|
||||
stepsCompleted: []
|
||||
version: 1
|
||||
dependencies:
|
||||
- {list of input documents used}
|
||||
---
|
||||
```
|
||||
|
||||
### Field Definitions
|
||||
|
||||
| Field | Type | Required | Description |
|
||||
|-------|------|----------|-------------|
|
||||
| `session_id` | string | Yes | Session identifier matching spec-config.json |
|
||||
| `phase` | number | Yes | Phase number that generated this document (1-7) |
|
||||
| `document_type` | string | Yes | One of: product-brief, requirements, architecture, epics, readiness-report, spec-summary, issue-export-report |
|
||||
| `status` | enum | Yes | draft (initial), review (user reviewed), complete (finalized) |
|
||||
| `generated_at` | string | Yes | ISO8601 timestamp of generation |
|
||||
| `stepsCompleted` | array | Yes | List of step IDs completed during generation |
|
||||
| `version` | number | Yes | Document version, incremented on re-generation |
|
||||
| `dependencies` | array | No | List of input files this document depends on |
|
||||
|
||||
### Status Transitions
|
||||
|
||||
```
|
||||
draft -> review -> complete
|
||||
| ^
|
||||
+-------------------+ (direct promotion in auto mode)
|
||||
```
|
||||
|
||||
- **draft**: Initial generation, not yet user-reviewed
|
||||
- **review**: User has reviewed and provided feedback
|
||||
- **complete**: Finalized, ready for downstream consumption
|
||||
|
||||
In auto mode (`-y`), documents are promoted directly from `draft` to `complete`.
|
||||
|
||||
---
|
||||
|
||||
## Naming Conventions
|
||||
|
||||
### Session ID Format
|
||||
|
||||
```
|
||||
SPEC-{slug}-{YYYY-MM-DD}
|
||||
```
|
||||
|
||||
- **slug**: Lowercase, alphanumeric + Chinese characters, hyphens as separators, max 40 chars
|
||||
- **date**: UTC+8 date in YYYY-MM-DD format
|
||||
|
||||
Examples:
|
||||
- `SPEC-task-management-system-2026-02-11`
|
||||
- `SPEC-user-auth-oauth-2026-02-11`
|
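A sketch of ID construction under these rules (two simplifications: the slug is ASCII-only, although the spec also allows Chinese characters, and `toISOString` yields the UTC date rather than UTC+8):

```javascript
// Sketch: build a session ID from a seed phrase.
function makeSessionId(seed, date = new Date()) {
  const slug = seed
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs to one hyphen
    .replace(/^-+|-+$/g, "")     // trim leading/trailing hyphens
    .slice(0, 40);               // max 40 chars
  return `SPEC-${slug}-${date.toISOString().slice(0, 10)}`;
}
```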
||||
|
||||
### Output Files
|
||||
|
||||
| File | Phase | Description |
|
||||
|------|-------|-------------|
|
||||
| `spec-config.json` | 1 | Session configuration and state |
|
||||
| `discovery-context.json` | 1 | Codebase exploration results (optional) |
|
||||
| `refined-requirements.json` | 1.5 | Confirmed requirements after discussion |
|
||||
| `glossary.json` | 2 | Terminology glossary for cross-document consistency |
|
||||
| `product-brief.md` | 2 | Product brief document |
|
||||
| `requirements/` | 3 | PRD directory (`_index.md` + `REQ-*.md` + `NFR-*.md`) |
|
||||
| `architecture/` | 4 | Architecture directory (`_index.md` + `ADR-*.md`) |
|
||||
| `epics/` | 5 | Epic/Story directory (`_index.md` + `EPIC-*.md`) |
|
||||
| `readiness-report.md` | 6 | Quality validation report |
|
||||
| `spec-summary.md` | 6 | One-page executive summary |
|
||||
| `issue-export-report.md` | 7 | Issue export report with Epic→Issue mapping |
|
||||
|
||||
### Output Directory
|
||||
|
||||
```
|
||||
.workflow/.spec/{session-id}/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Content Structure
|
||||
|
||||
### Heading Hierarchy
|
||||
|
||||
- `#` (H1): Document title only (one per document)
|
||||
- `##` (H2): Major sections
|
||||
- `###` (H3): Subsections
|
||||
- `####` (H4): Detail items (use sparingly)
|
||||
|
||||
Maximum depth: 4 levels. Prefer flat structures.
|
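These two rules (single H1, maximum depth 4) can be checked mechanically; a sketch, assuming ATX-style `#` headings only:

```javascript
// Sketch: enforce the single-H1 rule and the 4-level maximum (ATX headings only).
function checkHeadings(markdown) {
  const issues = [];
  const headings = [...markdown.matchAll(/^(#{1,6})\s+(.+)$/gm)];
  const h1Count = headings.filter(h => h[1].length === 1).length;
  if (h1Count !== 1) issues.push(`expected exactly one H1, found ${h1Count}`);
  for (const h of headings) {
    if (h[1].length > 4) issues.push(`heading too deep (H${h[1].length}): ${h[2]}`);
  }
  return issues;
}
```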
||||
|
||||
### Section Ordering
|
||||
|
||||
Every document follows this general pattern:
|
||||
|
||||
1. **YAML Frontmatter** (mandatory)
|
||||
2. **Title** (H1)
|
||||
3. **Executive Summary** (2-3 sentences)
|
||||
4. **Core Content Sections** (H2, document-specific)
|
||||
5. **Open Questions / Risks** (if applicable)
|
||||
6. **References / Traceability** (links to upstream/downstream docs)
|
||||
|
||||
### Formatting Rules
|
||||
|
||||
| Element | Format | Example |
|
||||
|---------|--------|---------|
|
||||
| Requirements | `REQ-{NNN}` prefix | REQ-001: User login |
|
||||
| Acceptance criteria | Checkbox list | `- [ ] User can log in with email` |
|
||||
| Architecture decisions | `ADR-{NNN}` prefix | ADR-001: Use PostgreSQL |
|
||||
| Epics | `EPIC-{NNN}` prefix | EPIC-001: Authentication |
|
||||
| Stories | `STORY-{EPIC}-{NNN}` prefix | STORY-001-001: Login form |
|
||||
| Priority tags | MoSCoW labels | `[Must]`, `[Should]`, `[Could]`, `[Won't]` |
|
||||
| Mermaid diagrams | Fenced code blocks | `` ```mermaid ... ``` `` |
|
||||
| Code examples | Language-tagged blocks | `` ```typescript ... ``` `` |
|
||||
|
||||
### Cross-Reference Format
|
||||
|
||||
Use relative references between documents:
|
||||
|
||||
```markdown
|
||||
See [Product Brief](product-brief.md#section-name) for details.
|
||||
Derived from [REQ-001](requirements.md#req-001).
|
||||
```
|
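Phase 6 needs to resolve these links; a sketch that collects relative cross-references for validation (the `extractCrossRefs` name is illustrative, not part of the spec):

```javascript
// Sketch: collect relative cross-references so Phase 6 can verify that each
// target file (and optionally its anchor) exists.
function extractCrossRefs(markdown) {
  const refs = [];
  const linkRe = /\[([^\]]+)\]\(([^)#]+)(#[^)]*)?\)/g;
  let m;
  while ((m = linkRe.exec(markdown)) !== null) {
    if (!/^https?:/.test(m[2])) { // keep only relative document links
      refs.push({ text: m[1], file: m[2], anchor: m[3] || null });
    }
  }
  return refs;
}
```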
||||
|
||||
### Language
|
||||
|
||||
- Document body: Follow user's input language (Chinese or English)
|
||||
- Technical identifiers: Always English (REQ-001, ADR-001, EPIC-001)
|
||||
- YAML frontmatter keys: Always English
|
||||
|
||||
---
|
||||
|
||||
## spec-config.json Schema
|
||||
|
||||
```json
|
||||
{
|
||||
"session_id": "string (required)",
|
||||
"seed_input": "string (required) - original user input",
|
||||
"input_type": "text|file (required)",
|
||||
"timestamp": "ISO8601 (required)",
|
||||
"mode": "interactive|auto (required)",
|
||||
"complexity": "simple|moderate|complex (required)",
|
||||
"depth": "light|standard|comprehensive (required)",
|
||||
"focus_areas": ["string array"],
|
||||
"seed_analysis": {
|
||||
"problem_statement": "string",
|
||||
"target_users": ["string array"],
|
||||
"domain": "string",
|
||||
"constraints": ["string array"],
|
||||
"dimensions": ["string array - 3-5 exploration dimensions"]
|
||||
},
|
||||
"has_codebase": "boolean",
|
||||
"spec_type": "service|api|library|platform (required) - type of specification",
|
||||
"iteration_count": "number (required, default 0) - number of auto-fix iterations completed",
|
||||
"iteration_history": [
|
||||
{
|
||||
"iteration": "number",
|
||||
"timestamp": "ISO8601",
|
||||
"readiness_score": "number (0-100)",
|
||||
"errors_found": "number",
|
||||
"phases_fixed": ["number array - phase numbers that were re-generated"]
|
||||
}
|
||||
],
|
||||
"refined_requirements_file": "string (optional) - path to refined-requirements.json",
|
||||
"phasesCompleted": [
|
||||
{
|
||||
"phase": "number (1-7)",
|
||||
"name": "string (phase name)",
|
||||
"output_file": "string (primary output file)",
|
||||
"completed_at": "ISO8601"
|
||||
}
|
||||
],
|
||||
"issue_ids": ["string array (optional) - IDs of issues created in Phase 7"],
|
||||
"issues_created": "number (optional, default 0) - count of issues created in Phase 7"
|
||||
}
|
||||
```
|
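A sketch of a minimal required-field check against this schema (top-level fields only; nested objects such as `seed_analysis` are not validated here):

```javascript
// Sketch: minimal required-field validation for spec-config.json (top level only).
function validateSpecConfig(cfg) {
  const required = [
    "session_id", "seed_input", "input_type", "timestamp",
    "mode", "complexity", "depth", "spec_type", "iteration_count"
  ];
  const errors = required
    .filter(k => cfg[k] === undefined)
    .map(k => `missing: ${k}`);
  if (cfg.mode !== undefined && !["interactive", "auto"].includes(cfg.mode)) {
    errors.push(`bad mode: ${cfg.mode}`);
  }
  return errors; // empty array means this subset of checks passed
}
```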
||||
|
||||
---
|
||||
|
||||
## refined-requirements.json Schema
|
||||
|
||||
```json
|
||||
{
|
||||
"session_id": "string (required) - matches spec-config.json",
|
||||
"phase": "1.5",
|
||||
"generated_at": "ISO8601 (required)",
|
||||
"source": "interactive-discussion|auto-expansion (required)",
|
||||
"discussion_rounds": "number (required) - 0 for auto mode",
|
||||
"clarified_problem_statement": "string (required) - refined problem statement",
|
||||
"confirmed_target_users": [
|
||||
{
|
||||
"name": "string",
|
||||
"needs": ["string array"],
|
||||
"pain_points": ["string array"]
|
||||
}
|
||||
],
|
||||
"confirmed_domain": "string",
|
||||
"confirmed_features": [
|
||||
{
|
||||
"name": "string",
|
||||
"description": "string",
|
||||
"acceptance_criteria": ["string array"],
|
||||
"edge_cases": ["string array"],
|
||||
"priority": "must|should|could|unset"
|
||||
}
|
||||
],
|
||||
"non_functional_requirements": [
|
||||
{
|
||||
"type": "Performance|Security|Usability|Scalability|Reliability|...",
|
||||
"details": "string",
|
||||
"measurable_criteria": "string (optional)"
|
||||
}
|
||||
],
|
||||
"boundary_conditions": {
|
||||
"in_scope": ["string array"],
|
||||
"out_of_scope": ["string array"],
|
||||
"constraints": ["string array"]
|
||||
},
|
||||
"integration_points": ["string array"],
|
||||
"key_assumptions": ["string array"],
|
||||
"discussion_log": [
|
||||
{
|
||||
"round": "number",
|
||||
"agent_prompt": "string",
|
||||
"user_response": "string",
|
||||
"timestamp": "ISO8601"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## glossary.json Schema
|
||||
|
||||
```json
|
||||
{
|
||||
"session_id": "string (required) - matches spec-config.json",
|
||||
"generated_at": "ISO8601 (required)",
|
||||
"version": "number (required, default 1) - incremented on updates",
|
||||
"terms": [
|
||||
{
|
||||
"term": "string (required) - the canonical term",
|
||||
"definition": "string (required) - concise definition",
|
||||
"aliases": ["string array - acceptable alternative names"],
|
||||
"first_defined_in": "string (required) - source document path",
|
||||
"category": "core|technical|business (required)"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Glossary Usage Rules
|
||||
|
||||
- Terms MUST be defined before first use in any document
|
||||
- All documents MUST use the canonical term from glossary; aliases are for reference only
|
||||
- Glossary is generated in Phase 2 and injected into all subsequent phase prompts
|
||||
- Phase 6 validates glossary compliance across all documents
|
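The compliance check in the last rule could be sketched as follows (simplified: aliases are matched as word-bounded literals, so aliases containing regex metacharacters would need escaping first):

```javascript
// Sketch: flag alias usage so Phase 6 can report glossary violations.
// Assumes the glossary.json shape: { terms: [{ term, aliases }] }.
function findAliasViolations(markdown, glossary) {
  const violations = [];
  for (const { term, aliases } of glossary.terms) {
    for (const alias of aliases || []) {
      const hits = markdown.match(new RegExp(`\\b${alias}\\b`, "gi"));
      if (hits) violations.push({ alias, canonical: term, count: hits.length });
    }
  }
  return violations;
}
```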
||||
|
||||
---
|
||||
|
||||
## Validation Checklist
|
||||
|
||||
- [ ] Every document starts with valid YAML frontmatter
|
||||
- [ ] `session_id` matches across all documents in a session
|
||||
- [ ] `status` field reflects current document state
|
||||
- [ ] All cross-references resolve to valid targets
|
||||
- [ ] Heading hierarchy is correct (no skipped levels)
|
||||
- [ ] Technical identifiers use correct prefixes
|
||||
- [ ] Output files are in the correct directory
|
||||
- [ ] `glossary.json` created with >= 5 terms
|
||||
- [ ] `spec_type` field set in spec-config.json
|
||||
- [ ] All documents use glossary terms consistently
|
||||
- [ ] Non-Goals section present in product brief (if applicable)
|
||||
29
.codex/skills/spec-generator/specs/glossary-template.json
Normal file
@@ -0,0 +1,29 @@
|
||||
{
|
||||
"$schema": "glossary-v1",
|
||||
"description": "Template for terminology glossary used across spec-generator documents",
|
||||
"session_id": "",
|
||||
"generated_at": "",
|
||||
"version": 1,
|
||||
"terms": [
|
||||
{
|
||||
"term": "",
|
||||
"definition": "",
|
||||
"aliases": [],
|
||||
"first_defined_in": "product-brief.md",
|
||||
"category": "core"
|
||||
}
|
||||
],
|
||||
"_usage_notes": {
|
||||
"category_values": {
|
||||
"core": "Domain-specific terms central to the product (e.g., 'Workspace', 'Session')",
|
||||
"technical": "Technical terms specific to the architecture (e.g., 'gRPC', 'event bus')",
|
||||
"business": "Business/process terms (e.g., 'Sprint', 'SLA', 'stakeholder')"
|
||||
},
|
||||
"rules": [
|
||||
"Terms MUST be defined before first use in any document",
|
||||
"All documents MUST use the canonical 'term' field consistently",
|
||||
"Aliases are for reference only - prefer canonical term in all documents",
|
||||
"Phase 6 validates glossary compliance across all documents"
|
||||
]
|
||||
}
|
||||
}
|
||||
270
.codex/skills/spec-generator/specs/quality-gates.md
Normal file
@@ -0,0 +1,270 @@
|
||||
# Quality Gates
|
||||
|
||||
Per-phase quality gate criteria and scoring dimensions for spec-generator outputs.
|
||||
|
||||
## When to Use
|
||||
|
||||
| Phase | Usage | Section |
|
||||
|-------|-------|---------|
|
||||
| Phase 2-5 | Post-generation self-check | Per-Phase Gates |
|
||||
| Phase 6 | Cross-document validation | Cross-Document Validation |
|
||||
| Phase 6 | Final scoring | Scoring Dimensions |
|
||||
|
||||
---
|
||||
|
||||
## Quality Thresholds
|
||||
|
||||
| Gate | Score | Action |
|
||||
|------|-------|--------|
|
||||
| **Pass** | >= 80% | Continue to next phase |
|
||||
| **Review** | 60-79% | Log warnings, continue with caveats |
|
||||
| **Fail** | < 60% | Must address issues before continuing |
|
||||
|
||||
In auto mode (`-y`), Review-level issues are logged but do not block progress.
|
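The gate thresholds combine the four equally weighted scoring dimensions defined in the next section; a sketch of the combined verdict:

```javascript
// Sketch: combine the four equally weighted dimensions (25% each, scored 0-100)
// into an overall score and a gate verdict.
function gateVerdict({ completeness, consistency, traceability, depth }) {
  const overall = (completeness + consistency + traceability + depth) / 4;
  if (overall >= 80) return { overall, gate: "Pass" };
  if (overall >= 60) return { overall, gate: "Review" };
  return { overall, gate: "Fail" };
}
```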
||||
|
||||
---
|
||||
|
||||
## Scoring Dimensions
|
||||
|
||||
### 1. Completeness (25%)
|
||||
|
||||
All required sections present with substantive content.
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | All template sections filled with detailed content |
|
||||
| 75% | All sections present, some lack detail |
|
||||
| 50% | Major sections present but minor sections missing |
|
||||
| 25% | Multiple major sections missing or empty |
|
||||
| 0% | Document is a skeleton only |
|
||||
|
||||
### 2. Consistency (25%)
|
||||
|
||||
Terminology, formatting, and references are uniform across documents.
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | All terms consistent, all references valid, formatting uniform |
|
||||
| 75% | Minor terminology variations, all references valid |
|
||||
| 50% | Some inconsistent terms, 1-2 broken references |
|
||||
| 25% | Frequent inconsistencies, multiple broken references |
|
||||
| 0% | Documents contradict each other |
|
||||
|
||||
### 3. Traceability (25%)
|
||||
|
||||
Requirements, architecture decisions, and stories trace back to goals.
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | Every story traces to a requirement, every requirement traces to a goal |
|
||||
| 75% | Most items traceable, few orphans |
|
||||
| 50% | Partial traceability, some disconnected items |
|
||||
| 25% | Weak traceability, many orphan items |
|
||||
| 0% | No traceability between documents |
|
||||
|
||||
### 4. Depth (25%)
|
||||
|
||||
Content provides sufficient detail for execution teams.
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | Acceptance criteria specific and testable, architecture decisions justified, stories estimable |
|
||||
| 75% | Most items detailed enough, few vague areas |
|
||||
| 50% | Mix of detailed and vague content |
|
||||
| 25% | Mostly high-level, lacking actionable detail |
|
||||
| 0% | Too abstract for execution |
|
||||
|
||||
---
|
||||
|
||||
## Per-Phase Quality Gates
|
||||
|
||||
### Phase 1: Discovery
|
||||
|
||||
| Check | Criteria | Severity |
|
||||
|-------|----------|----------|
|
||||
| Session ID valid | Matches `SPEC-{slug}-{date}` format | Error |
|
||||
| Problem statement exists | Non-empty, >= 20 characters | Error |
|
||||
| Target users identified | >= 1 user group | Error |
|
||||
| Dimensions generated | 3-5 exploration dimensions | Warning |
|
||||
| Constraints listed | >= 0 (can be empty with justification) | Info |
|
||||
|
||||
### Phase 1.5: Requirement Expansion & Clarification
|
||||
|
||||
| Check | Criteria | Severity |
|
||||
|-------|----------|----------|
|
||||
| Problem statement refined | More specific than seed, >= 30 characters | Error |
|
||||
| Confirmed features | >= 2 features with descriptions | Error |
|
||||
| Non-functional requirements | >= 1 identified (performance, security, etc.) | Warning |
|
||||
| Boundary conditions | In-scope and out-of-scope defined | Warning |
|
||||
| Key assumptions | >= 1 assumption listed | Warning |
|
||||
| User confirmation | Explicit user confirmation recorded (non-auto mode) | Info |
|
||||
| Discussion rounds | >= 1 round of interaction (non-auto mode) | Info |
|
||||
|
||||
### Phase 2: Product Brief
|
||||
|
||||
| Check | Criteria | Severity |
|
||||
|-------|----------|----------|
|
||||
| Vision statement | Clear, 1-3 sentences | Error |
|
||||
| Problem statement | Specific and measurable | Error |
|
||||
| Target users | >= 1 persona with needs described | Error |
|
||||
| Goals defined | >= 2 measurable goals | Error |
|
||||
| Success metrics | >= 2 quantifiable metrics | Warning |
|
||||
| Scope boundaries | In-scope and out-of-scope listed | Warning |
|
||||
| Multi-perspective | >= 2 CLI perspectives synthesized | Info |
|
||||
| Terminology glossary generated | glossary.json created with >= 5 terms | Warning |
|
||||
| Non-Goals section present | At least 1 non-goal with rationale | Warning |
|
||||
| Concepts section present | Terminology table in product brief | Warning |
|
||||
|
||||
### Phase 3: Requirements (PRD)

| Check | Criteria | Severity |
|-------|----------|----------|
| Functional requirements | >= 3 with REQ-NNN IDs | Error |
| Acceptance criteria | Every requirement has >= 1 criterion | Error |
| MoSCoW priority | Every requirement tagged | Error |
| Non-functional requirements | >= 1 (performance, security, etc.) | Warning |
| User stories | >= 1 per Must-have requirement | Warning |
| Traceability | Requirements trace to product brief goals | Warning |
| RFC 2119 keywords used | Behavioral requirements use MUST/SHOULD/MAY | Warning |
| Data model defined | Core entities have field-level definitions | Warning |

### Phase 4: Architecture

| Check | Criteria | Severity |
|-------|----------|----------|
| Component diagram | Present (Mermaid or ASCII) | Error |
| Tech stack specified | Languages, frameworks, key libraries | Error |
| ADR present | >= 1 Architecture Decision Record | Error |
| ADR has alternatives | Each ADR lists >= 2 options considered | Warning |
| Integration points | External systems/APIs identified | Warning |
| Data model | Key entities and relationships described | Warning |
| Codebase mapping | Mapped to existing code (if has_codebase) | Info |
| State machine defined | >= 1 lifecycle state diagram (if service/platform type) | Warning |
| Configuration model defined | All config fields with type/default/constraint (if service type) | Warning |
| Error handling strategy | Per-component error classification and recovery | Warning |
| Observability metrics | >= 3 metrics defined (if service/platform type) | Warning |
| Trust model defined | Trust levels documented (if service type) | Info |
| Implementation guidance | Key decisions for implementers listed | Info |

### Phase 5: Epics & Stories

| Check | Criteria | Severity |
|-------|----------|----------|
| Epics defined | 3-7 epics with EPIC-NNN IDs | Error |
| MVP subset | >= 1 epic tagged as MVP | Error |
| Stories per epic | 2-5 stories per epic | Error |
| Story format | "As a...I want...So that..." pattern | Warning |
| Dependency map | Cross-epic dependencies documented | Warning |
| Estimation hints | Relative sizing (S/M/L/XL) per story | Info |
| Traceability | Stories trace to requirements | Warning |

### Phase 6: Readiness Check

| Check | Criteria | Severity |
|-------|----------|----------|
| All documents exist | product-brief, requirements, architecture, epics | Error |
| Frontmatter valid | All YAML frontmatter parseable and correct | Error |
| Cross-references valid | All document links resolve | Error |
| Overall score >= 60% | Weighted average across 4 dimensions | Error |
| No unresolved Errors | All Error-severity issues addressed | Error |
| Summary generated | spec-summary.md created | Warning |
| Per-requirement verified | All Must requirements pass 4-check verification | Error |
| Codex technical review | Technical depth assessment completed | Warning |
| Dual-source validation | Both Gemini and Codex scores recorded | Warning |

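The "Overall score >= 60%" gate is a weighted average across scoring dimensions; a minimal sketch (the dimension names and weights below are illustrative, not fixed by this skill):

```python
def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-dimension scores on a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Illustrative dimensions; the real set comes from the Phase 6 configuration.
weights = {"completeness": 0.3, "consistency": 0.3, "traceability": 0.2, "clarity": 0.2}
scores = {"completeness": 80, "consistency": 70, "traceability": 60, "clarity": 50}
```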
### Phase 7: Issue Export

| Check | Criteria | Severity |
|-------|----------|----------|
| All MVP epics have issues | Every MVP-tagged Epic has a corresponding issue created | Error |
| Issue tags correct | Each issue has `spec-generated` and `spec:{session_id}` tags | Error |
| Export report generated | `issue-export-report.md` exists with mapping table | Error |
| Wave assignment correct | MVP epics → wave-1, non-MVP epics → wave-2 | Warning |
| Spec document links valid | `extended_context.notes.spec_documents` paths resolve | Warning |
| Epic dependencies mapped | Cross-epic dependencies reflected in issue dependency references | Warning |
| All epics covered | Non-MVP epics also have corresponding issues | Info |

---

## Cross-Document Validation

Checks performed during Phase 6 across all documents:

### Completeness Matrix

```
Product Brief goals -> Requirements (each goal has >= 1 requirement)
Requirements -> Architecture (each Must requirement has design coverage)
Requirements -> Epics (each Must requirement appears in >= 1 story)
Architecture ADRs -> Epics (tech choices reflected in implementation stories)
Glossary terms -> All Documents (core terms used consistently)
Non-Goals (Brief) -> Requirements + Epics (no contradictions)
```

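The first matrix rule (each goal has >= 1 requirement) can be sketched as an ID-reference scan, assuming requirements cite goals by their G-NNN IDs somewhere in the PRD text:

```python
import re

def uncovered_goals(brief_text: str, requirements_text: str) -> set[str]:
    """Goals (G-NNN) declared in the product brief that no requirement references."""
    goals = set(re.findall(r"\bG-\d{3}\b", brief_text))
    referenced = set(re.findall(r"\bG-\d{3}\b", requirements_text))
    return goals - referenced
```

The same pattern extends to the other matrix rows by swapping the ID regex (REQ-NNN, ADR-NNN, EPIC-NNN).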
### Consistency Checks

| Check | Documents | Rule |
|-------|-----------|------|
| Terminology | All | Same term used consistently (no synonyms for same concept) |
| User personas | Brief + PRD + Epics | Same user names/roles throughout |
| Scope | Brief + PRD | PRD scope does not exceed brief scope |
| Tech stack | Architecture + Epics | Stories reference correct technologies |
| Glossary compliance | All | Core terms match glossary.json definitions, no synonym drift |
| Scope containment | Brief + PRD | PRD requirements do not introduce scope beyond brief boundaries |
| Non-Goals respected | Brief + PRD + Epics | No requirement/story contradicts explicit Non-Goals |

### Traceability Matrix Format

```markdown
| Goal | Requirements | Architecture | Epics |
|------|-------------|--------------|-------|
| G-001: ... | REQ-001, REQ-002 | ADR-001 | EPIC-001 |
| G-002: ... | REQ-003 | ADR-002 | EPIC-002, EPIC-003 |
```

---

## Issue Classification

### Error (Must Fix)

- Missing required document or section
- Broken cross-references
- Contradictory information between documents
- Empty acceptance criteria on Must-have requirements
- No MVP subset defined in epics

### Warning (Should Fix)

- Vague acceptance criteria
- Missing non-functional requirements
- No success metrics defined
- Incomplete traceability
- Missing architecture review notes

### Info (Nice to Have)

- Could add more detailed personas
- Consider additional ADR alternatives
- Story estimation hints missing
- Mermaid diagrams could be more detailed

---

## Iteration Quality Tracking

When Phase 6.5 (Auto-Fix) is triggered:

| Iteration | Expected Improvement | Max Iterations |
|-----------|---------------------|----------------|
| 1st | Fix all Error-severity issues | - |
| 2nd | Fix remaining Warnings, improve scores | Max reached |

### Iteration Exit Criteria

| Condition | Action |
|-----------|--------|
| Overall score >= 80% after fix | Pass, proceed to handoff |
| Overall score 60-79% after 2 iterations | Review, proceed with caveats |
| Overall score < 60% after 2 iterations | Fail, manual intervention required |
| No Error-severity issues remaining | Eligible for handoff regardless of score |
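The exit criteria above can be expressed as a decision function; a minimal sketch (thresholds are the documented ones, the function and its return labels are illustrative):

```python
def iteration_verdict(score: float, errors_remaining: int, iterations: int) -> str:
    """Map readiness-check results to an exit action, mirroring the table above."""
    if errors_remaining == 0:
        return "handoff"            # eligible regardless of score
    if score >= 80:
        return "handoff"            # pass after fix
    if iterations < 2:
        return "auto-fix"           # run another Phase 6.5 iteration
    # Max iterations reached with Errors still open:
    return "review" if score >= 60 else "manual-intervention"
```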
373
.codex/skills/spec-generator/templates/architecture-doc.md
Normal file
@@ -0,0 +1,373 @@
# Architecture Document Template (Directory Structure)

Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
| Output Location | `{workDir}/architecture/` |

## Output Structure

```
{workDir}/architecture/
├── _index.md           # Overview, components, tech stack, data model, security
├── ADR-001-{slug}.md   # Individual Architecture Decision Record
├── ADR-002-{slug}.md
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 4
document_type: architecture-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
- ../requirements/_index.md
---

# Architecture: {product_name}

{executive_summary - high-level architecture approach and key decisions}

## System Overview

### Architecture Style
{description of chosen architecture style: microservices, monolith, serverless, etc.}

### System Context Diagram

```mermaid
C4Context
    title System Context Diagram
    Person(user, "User", "Primary user")
    System(system, "{product_name}", "Core system")
    System_Ext(ext1, "{external_system}", "{description}")
    Rel(user, system, "Uses")
    Rel(system, ext1, "Integrates with")
```

## Component Architecture

### Component Diagram

```mermaid
graph TD
    subgraph "{product_name}"
        A[Component A] --> B[Component B]
        B --> C[Component C]
        A --> D[Component D]
    end
    B --> E[External Service]
```

### Component Descriptions

| Component | Responsibility | Technology | Dependencies |
|-----------|---------------|------------|--------------|
| {component_name} | {what it does} | {tech stack} | {depends on} |

## Technology Stack

### Core Technologies

| Layer | Technology | Version | Rationale |
|-------|-----------|---------|-----------|
| Frontend | {technology} | {version} | {why chosen} |
| Backend | {technology} | {version} | {why chosen} |
| Database | {technology} | {version} | {why chosen} |
| Infrastructure | {technology} | {version} | {why chosen} |

### Key Libraries & Frameworks

| Library | Purpose | License |
|---------|---------|---------|
| {library_name} | {purpose} | {license} |

## Architecture Decision Records

| ADR | Title | Status | Key Choice |
|-----|-------|--------|------------|
| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |

## Data Architecture

### Data Model

```mermaid
erDiagram
    ENTITY_A ||--o{ ENTITY_B : "has many"
    ENTITY_A {
        string id PK
        string name
        datetime created_at
    }
    ENTITY_B {
        string id PK
        string entity_a_id FK
        string value
    }
```

### Data Storage Strategy

| Data Type | Storage | Retention | Backup |
|-----------|---------|-----------|--------|
| {type} | {storage solution} | {retention policy} | {backup strategy} |

## API Design

### API Overview

| Endpoint | Method | Purpose | Auth |
|----------|--------|---------|------|
| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |

## Security Architecture

### Security Controls

| Control | Implementation | Requirement |
|---------|---------------|-------------|
| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |

## Infrastructure & Deployment

### Deployment Architecture

{description of deployment model: containers, serverless, VMs, etc.}

### Environment Strategy

| Environment | Purpose | Configuration |
|-------------|---------|---------------|
| Development | Local development | {config} |
| Staging | Pre-production testing | {config} |
| Production | Live system | {config} |

## Codebase Integration

{if has_codebase is true:}

### Existing Code Mapping

| New Component | Existing Module | Integration Type | Notes |
|--------------|----------------|------------------|-------|
| {component} | {existing module path} | Extend/Replace/New | {notes} |

### Migration Notes
{any migration considerations for existing code}

## Quality Attributes

| Attribute | Target | Measurement | ADR Reference |
|-----------|--------|-------------|---------------|
| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |

## State Machine

{For each core entity with a lifecycle (e.g., Order, Session, Task):}

### {Entity} Lifecycle

```
{ASCII state diagram showing all states, transitions, triggers, and error paths}

┌───────────┐
│  Created  │
└─────┬─────┘
      │ start()
      ▼
┌───────────┐    error    ┌───────────┐
│  Running  │ ──────────▶ │  Failed   │
└─────┬─────┘             └───────────┘
      │ complete()
      ▼
┌───────────┐
│ Completed │
└───────────┘
```

| From State | Event | To State | Side Effects | Error Handling |
|-----------|-------|----------|-------------|----------------|
| {from} | {event} | {to} | {side_effects} | {error_behavior} |

## Configuration Model

### Required Configuration

| Field | Type | Default | Constraint | Description |
|-------|------|---------|------------|-------------|
| {field_name} | {string/number/boolean/enum} | {default_value} | {validation rule} | {description} |

### Optional Configuration

| Field | Type | Default | Constraint | Description |
|-------|------|---------|------------|-------------|
| {field_name} | {type} | {default} | {constraint} | {description} |

### Environment Variables

| Variable | Maps To | Required |
|----------|---------|----------|
| {ENV_VAR} | {config_field} | {yes/no} |

## Error Handling

### Error Classification

| Category | Severity | Retry | Example |
|----------|----------|-------|---------|
| Transient | Low | Yes, with backoff | Network timeout, rate limit |
| Permanent | High | No | Invalid configuration, auth failure |
| Degraded | Medium | Partial | Dependency unavailable, fallback active |

### Per-Component Error Strategy

| Component | Error Scenario | Behavior | Recovery |
|-----------|---------------|----------|----------|
| {component} | {scenario} | {MUST/SHOULD behavior} | {recovery strategy} |

## Observability

### Metrics

| Metric Name | Type | Labels | Description |
|-------------|------|--------|-------------|
| {metric_name} | {counter/gauge/histogram} | {label1, label2} | {what it measures} |

### Logging

| Event | Level | Fields | Description |
|-------|-------|--------|-------------|
| {event_name} | {INFO/WARN/ERROR} | {structured fields} | {when logged} |

### Health Checks

| Check | Endpoint | Interval | Failure Action |
|-------|----------|----------|----------------|
| {check_name} | {/health/xxx} | {duration} | {action on failure} |

## Trust & Safety

### Trust Levels

| Level | Description | Approval Required | Allowed Operations |
|-------|-------------|-------------------|-------------------|
| High Trust | {description} | None | {operations} |
| Standard | {description} | {approval type} | {operations} |
| Low Trust | {description} | {approval type} | {operations} |

### Security Controls

{Detailed security controls beyond the basic auth covered in Security Architecture}

## Implementation Guidance

### Key Decisions for Implementers

| Decision | Options | Recommendation | Rationale |
|----------|---------|---------------|-----------|
| {decision_area} | {option_1, option_2} | {recommended} | {why} |

### Implementation Order

1. {component/module 1}: {why first}
2. {component/module 2}: {depends on #1}

### Testing Strategy

| Layer | Scope | Tools | Coverage Target |
|-------|-------|-------|-----------------|
| Unit | {scope} | {tools} | {target} |
| Integration | {scope} | {tools} | {target} |
| E2E | {scope} | {tools} | {target} |

## Risks & Mitigations

| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |

## Open Questions

- [ ] {architectural question 1}
- [ ] {architectural question 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
- Next: [Epics & Stories](../epics/_index.md)
```

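After rendering, the presence of required sections in `_index.md` can be spot-checked before the Phase 6 gates run; a minimal sketch (the heading list below is illustrative, derived from the template above rather than fixed by the skill):

```python
REQUIRED_SECTIONS = [
    "## System Overview",
    "## Component Architecture",
    "## Technology Stack",
    "## Architecture Decision Records",
]

def missing_sections(rendered: str) -> list[str]:
    """Required headings absent from a rendered _index.md."""
    return [s for s in REQUIRED_SECTIONS if s not in rendered]
```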
---

## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)

```markdown
---
id: ADR-{NNN}
status: Accepted
traces_to: [{REQ-NNN}, {NFR-X-NNN}]
date: {timestamp}
---

# ADR-{NNN}: {decision_title}

## Context

{what is the situation that motivates this decision}

## Decision

{what is the chosen approach}

## Alternatives Considered

| Option | Pros | Cons |
|--------|------|------|
| {option_1 - chosen} | {pros} | {cons} |
| {option_2} | {pros} | {cons} |
| {option_3} | {pros} | {cons} |

## Consequences

- **Positive**: {positive outcomes}
- **Negative**: {tradeoffs accepted}
- **Risks**: {risks to monitor}

## Traces

- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | ADR/requirement number |
| `{slug}` | Auto-generated | Kebab-case from decision title |
| `{has_codebase}` | spec-config.json | Whether existing codebase exists |

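The `{slug}` variable can be derived mechanically from the decision title; a minimal kebab-case sketch (the exact normalization rules are not specified by this template, so this is one reasonable interpretation):

```python
import re

def slugify(title: str) -> str:
    """Kebab-case slug from a decision/epic title."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics to hyphens
    return slug.strip("-")
```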
209
.codex/skills/spec-generator/templates/epics-template.md
Normal file
@@ -0,0 +1,209 @@
# Epics & Stories Template (Directory Structure)

Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
| Output Location | `{workDir}/epics/` |

## Output Structure

```
{workDir}/epics/
├── _index.md            # Overview table + dependency map + MVP scope + execution order
├── EPIC-001-{slug}.md   # Individual Epic with its Stories
├── EPIC-002-{slug}.md
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 5
document_type: epics-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
- ../requirements/_index.md
- ../architecture/_index.md
---

# Epics & Stories: {product_name}

{executive_summary - overview of epic structure and MVP scope}

## Epic Overview

| Epic ID | Title | Priority | MVP | Stories | Est. Size |
|---------|-------|----------|-----|---------|-----------|
| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |

## Dependency Map

```mermaid
graph LR
    EPIC-001 --> EPIC-002
    EPIC-001 --> EPIC-003
    EPIC-002 --> EPIC-004
    EPIC-003 --> EPIC-005
```

### Dependency Notes
{explanation of why these dependencies exist and suggested execution order}

### Recommended Execution Order
1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
3. ...

## MVP Scope

### MVP Epics
{list of epics included in MVP with justification, linking to each}

### MVP Definition of Done
- [ ] {MVP completion criterion 1}
- [ ] {MVP completion criterion 2}
- [ ] {MVP completion criterion 3}

## Traceability Matrix

| Requirement | Epic | Stories | Architecture |
|-------------|------|---------|--------------|
| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |

## Estimation Summary

| Size | Meaning | Count |
|------|---------|-------|
| S | Small - well-understood, minimal risk | {n} |
| M | Medium - some complexity, moderate risk | {n} |
| L | Large - significant complexity, should consider splitting | {n} |
| XL | Extra Large - high complexity, must split before implementation | {n} |

## Risks & Considerations

| Risk | Affected Epics | Mitigation |
|------|---------------|------------|
| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |

## Versioning & Changelog

### Version Strategy
- **Versioning Scheme**: {semver/calver/custom}
- **Breaking Change Definition**: {what constitutes a breaking change}
- **Deprecation Policy**: {how deprecated features are handled}

### Changelog

| Version | Date | Type | Description |
|---------|------|------|-------------|
| {version} | {date} | {Added/Changed/Fixed/Removed} | {description} |

## Open Questions

- [ ] {question about scope or implementation 1}
- [ ] {question about scope or implementation 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
- Handoff to: execution workflows (lite-plan, plan, req-plan)
```

---

## Template: EPIC-NNN-{slug}.md (Individual Epic)

```markdown
---
id: EPIC-{NNN}
priority: {Must|Should|Could}
mvp: {true|false}
size: {S|M|L|XL}
requirements: [REQ-{NNN}]
architecture: [ADR-{NNN}]
dependencies: [EPIC-{NNN}]
status: draft
---

# EPIC-{NNN}: {epic_title}

**Priority**: {Must|Should|Could}
**MVP**: {Yes|No}
**Estimated Size**: {S|M|L|XL}

## Description

{detailed epic description}

## Requirements

- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}

## Architecture

- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
- Component: {component_name}

## Dependencies

- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}

## Stories

### STORY-{EPIC}-001: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
- [ ] {criterion 3}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)

---

### STORY-{EPIC}-002: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{EPIC}` | Auto-increment | Epic number (3 digits) |
| `{NNN}` | Auto-increment | Story/requirement number |
| `{slug}` | Auto-generated | Kebab-case from epic/story title |
| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |

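The Recommended Execution Order in `_index.md` follows from the dependency map; a minimal topological-sort sketch using the standard library (the epic IDs and edges here are illustrative):

```python
from graphlib import TopologicalSorter

def execution_order(dependencies: dict[str, set[str]]) -> list[str]:
    """Epics ordered so every epic comes after the epics it depends on."""
    return list(TopologicalSorter(dependencies).static_order())

# {epic: set of epics it depends on}, mirroring the example dependency map
deps = {
    "EPIC-002": {"EPIC-001"},
    "EPIC-003": {"EPIC-001"},
    "EPIC-004": {"EPIC-002"},
}
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which doubles as a validity check on the map.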
153
.codex/skills/spec-generator/templates/product-brief.md
Normal file
@@ -0,0 +1,153 @@
# Product Brief Template

Template for generating product brief documents in Phase 2.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis |
| Output Location | `{workDir}/product-brief.md` |

---

## Template

```markdown
---
session_id: {session_id}
phase: 2
document_type: product-brief
status: draft
generated_at: {timestamp}
stepsCompleted: []
version: 1
dependencies:
- spec-config.json
---

# Product Brief: {product_name}

{executive_summary - 2-3 sentences capturing the essence of the product/feature}

## Concepts & Terminology

| Term | Definition | Aliases |
|------|-----------|---------|
| {term_1} | {definition} | {comma-separated aliases if any} |
| {term_2} | {definition} | |

{Note: All documents in this specification MUST use these terms consistently.}

## Vision

{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like}

## Problem Statement

### Current Situation
{description of the current state and pain points}

### Impact
{quantified impact of the problem - who is affected, how much, how often}

## Target Users

{for each user persona:}

### {Persona Name}
- **Role**: {user's role/context}
- **Needs**: {primary needs related to this product}
- **Pain Points**: {current frustrations}
- **Success Criteria**: {what success looks like for this user}

## Goals & Success Metrics

| Goal ID | Goal | Success Metric | Target |
|---------|------|----------------|--------|
| G-001 | {goal description} | {measurable metric} | {specific target} |
| G-002 | {goal description} | {measurable metric} | {specific target} |

## Scope

### In Scope
- {feature/capability 1}
- {feature/capability 2}
- {feature/capability 3}

### Out of Scope
- {explicitly excluded item 1}
- {explicitly excluded item 2}

### Non-Goals

{Explicit list of things this project will NOT do, with rationale for each:}

| Non-Goal | Rationale |
|----------|-----------|
| {non_goal_1} | {why this is explicitly excluded} |
| {non_goal_2} | {why this is explicitly excluded} |

### Assumptions
- {key assumption 1}
- {key assumption 2}

## Competitive Landscape

| Aspect | Current State | Proposed Solution | Advantage |
|--------|--------------|-------------------|-----------|
| {aspect} | {how it's done now} | {our approach} | {differentiator} |

## Constraints & Dependencies

### Technical Constraints
- {constraint 1}
- {constraint 2}

### Business Constraints
- {constraint 1}

### Dependencies
- {external dependency 1}
- {external dependency 2}

## Multi-Perspective Synthesis

### Product Perspective
{summary of product/market analysis findings}

### Technical Perspective
{summary of technical feasibility and constraints}

### User Perspective
{summary of user journey and UX considerations}

### Convergent Themes
{themes where all perspectives agree}

### Conflicting Views
{areas where perspectives differ, with notes on resolution approach}

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [spec-config.json](spec-config.json)
- Next: [Requirements PRD](requirements/_index.md)
```

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | Seed analysis | Product/feature name |
| `{executive_summary}` | CLI synthesis | 2-3 sentence summary |
| `{vision_statement}` | CLI product perspective | Aspirational vision |
| `{term_1}`, `{term_2}` | CLI synthesis | Domain terms with definitions and optional aliases |
| `{non_goal_1}`, `{non_goal_2}` | CLI synthesis | Explicit exclusions with rationale |
| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis |

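Downstream phases read these rendered documents back before filling variables forward; a minimal frontmatter splitter, assuming every document starts with a `---`-delimited block as the templates above do (full YAML parsing would need a library such as PyYAML):

```python
def split_frontmatter(text: str) -> tuple[str, str]:
    """Split a rendered document into (frontmatter, body).

    Returns ("", text) when no leading frontmatter block is present.
    """
    if not text.startswith("---\n"):
        return "", text
    end = text.index("\n---\n", 4)   # closing delimiter of the frontmatter block
    return text[4:end], text[end + 5:]
```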
@@ -0,0 +1,27 @@
# API Spec Profile

Defines additional required sections for API-type specifications.

## Required Sections (in addition to base template)

### In Architecture Document
- **Endpoint Definition**: MUST list all endpoints with method, path, auth, request/response schema
- **Authentication Model**: MUST define auth mechanism (OAuth2/JWT/API Key), token lifecycle
- **Rate Limiting**: MUST define rate limits per tier/endpoint, throttling behavior
- **Error Codes**: MUST define error response format, standard error codes with descriptions
- **API Versioning**: MUST define versioning strategy (URL/header/query), deprecation policy
- **Pagination**: SHOULD define pagination strategy for list endpoints
- **Idempotency**: SHOULD define idempotency requirements for write operations

### In Requirements Document
- **Endpoint Acceptance Criteria**: Each requirement SHOULD map to specific endpoints
- **SLA Definitions**: MUST define response time, availability targets per endpoint tier

### Quality Gate Additions
| Check | Criteria | Severity |
|-------|----------|----------|
| Endpoints documented | All endpoints with method + path | Error |
| Auth model defined | Authentication mechanism specified | Error |
| Error codes defined | Standard error format + codes | Warning |
| Rate limits defined | Per-endpoint or per-tier limits | Warning |
| API versioning strategy | Versioning approach specified | Warning |
@@ -0,0 +1,25 @@

# Library Spec Profile

Defines additional required sections for library/SDK-type specifications.

## Required Sections (in addition to base template)

### In Architecture Document

- **Public API Surface**: MUST define all public interfaces with signatures, parameters, return types
- **Usage Examples**: MUST provide >= 3 code examples showing common usage patterns
- **Compatibility Matrix**: MUST define supported language versions, runtime environments
- **Dependency Policy**: MUST define transitive dependency policy, version constraints
- **Extension Points**: SHOULD define plugin/extension mechanisms if applicable
- **Bundle Size**: SHOULD define target bundle size and tree-shaking strategy

### In Requirements Document

- **API Ergonomics**: Requirements SHOULD address developer experience and API consistency
- **Error Reporting**: MUST define error types, messages, and recovery hints for consumers

### Quality Gate Additions

| Check | Criteria | Severity |
|-------|----------|----------|
| Public API documented | All public interfaces with types | Error |
| Usage examples | >= 3 working examples | Warning |
| Compatibility matrix | Supported environments listed | Warning |
| Dependency policy | Transitive deps strategy defined | Info |
@@ -0,0 +1,28 @@

# Service Spec Profile

Defines additional required sections for service-type specifications.

## Required Sections (in addition to base template)

### In Architecture Document

- **Concepts & Terminology**: MUST define all domain terms with consistent aliases
- **State Machine**: MUST include ASCII state diagram for each entity with a lifecycle
- **Configuration Model**: MUST define all configurable fields with types, defaults, constraints
- **Error Handling**: MUST define per-component error classification and recovery strategies
- **Observability**: MUST define >= 3 metrics, structured log format, health check endpoints
- **Trust & Safety**: SHOULD define trust levels and approval matrix
- **Graceful Shutdown**: MUST describe shutdown sequence and cleanup procedures
- **Implementation Guidance**: SHOULD provide implementation order and key decisions

### In Requirements Document

- **Behavioral Constraints**: MUST use RFC 2119 keywords (MUST/SHOULD/MAY) for all requirements
- **Data Model**: MUST define core entities with field-level detail (type, constraint, relation)

### Quality Gate Additions

| Check | Criteria | Severity |
|-------|----------|----------|
| State machine present | >= 1 lifecycle state diagram | Error |
| Configuration model | All config fields documented | Warning |
| Observability metrics | >= 3 metrics defined | Warning |
| Error handling defined | Per-component strategy | Warning |
| RFC keywords used | Behavioral requirements use MUST/SHOULD/MAY | Warning |
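
Gate checks like these lend themselves to automation. Below is a minimal sketch of a checker for two of the service-profile gates; the function name, the heading/metric markers it scans for, and the message strings are illustrative assumptions, not part of the profile — only the ">= 3 metrics" threshold and severities come from the table above.

```python
import re

def check_service_gates(architecture_md: str) -> list[tuple[str, str]]:
    """Return (severity, message) findings for two service-profile gates.

    Assumes the architecture doc marks its state machine with a
    "State Machine" heading and lists metrics as "- metric: ..." lines.
    """
    findings = []
    # Error gate: at least one lifecycle state diagram section expected
    if "state machine" not in architecture_md.lower():
        findings.append(("Error", "No state machine section found"))
    # Warning gate: >= 3 observability metrics defined
    metrics = re.findall(r"^\s*-\s*metric:", architecture_md,
                         flags=re.MULTILINE | re.IGNORECASE)
    if len(metrics) < 3:
        findings.append(("Warning",
                         f"Only {len(metrics)} metrics defined (need >= 3)"))
    return findings
```

A real implementation would parse the markdown structure rather than grep for markers, but the severity split (Error blocks the gate, Warning does not) would stay the same.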
224
.codex/skills/spec-generator/templates/requirements-prd.md
Normal file
@@ -0,0 +1,224 @@

# Requirements PRD Template (Directory Structure)

Template for generating the Product Requirements Document as a directory of individual requirement files in Phase 3.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
| Output Location | `{workDir}/requirements/` |

## Output Structure

```
{workDir}/requirements/
├── _index.md             # Summary + MoSCoW table + traceability matrix + links
├── REQ-001-{slug}.md     # Individual functional requirement
├── REQ-002-{slug}.md
├── NFR-P-001-{slug}.md   # Non-functional: Performance
├── NFR-S-001-{slug}.md   # Non-functional: Security
├── NFR-SC-001-{slug}.md  # Non-functional: Scalability
├── NFR-U-001-{slug}.md   # Non-functional: Usability
└── ...
```
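
The filenames above combine a zero-padded counter (`{NNN}`) with a kebab-case slug derived from the requirement title, as described in the Variable Descriptions table further down. A minimal sketch of that naming rule (the function name is illustrative, not part of the skill):

```python
import re

def make_requirement_filename(counter: int, title: str, prefix: str = "REQ") -> str:
    """Build a filename like 'REQ-001-user-login.md'.

    {NNN} is the counter zero-padded to 3 digits; {slug} is the title
    lowercased with non-alphanumeric runs collapsed to hyphens.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{prefix}-{counter:03d}-{slug}.md"

print(make_requirement_filename(1, "User Login via OAuth2"))
# REQ-001-user-login-via-oauth2.md
```

The same helper covers non-functional requirements by passing a category prefix such as `NFR-P` or `NFR-SC`.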

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 3
document_type: requirements-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
---

# Requirements: {product_name}

{executive_summary - brief overview of what this PRD covers and key decisions}

## Requirement Summary

| Priority | Count | Coverage |
|----------|-------|----------|
| Must Have | {n} | {description of must-have scope} |
| Should Have | {n} | {description of should-have scope} |
| Could Have | {n} | {description of could-have scope} |
| Won't Have | {n} | {description of explicitly excluded scope} |

## Functional Requirements

| ID | Title | Priority | Traces To |
|----|-------|----------|-----------|
| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |

## Non-Functional Requirements

### Performance

| ID | Title | Target |
|----|-------|--------|
| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |

### Security

| ID | Title | Standard |
|----|-------|----------|
| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |

### Scalability

| ID | Title | Target |
|----|-------|--------|
| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |

### Usability

| ID | Title | Target |
|----|-------|--------|
| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |

## Data Requirements

### Data Entities

| Entity | Description | Key Attributes |
|--------|-------------|----------------|
| {entity_name} | {description} | {attr1, attr2, attr3} |

### Data Flows

{description of key data flows, optionally with Mermaid diagram}

## Integration Requirements

| System | Direction | Protocol | Data Format | Notes |
|--------|-----------|----------|-------------|-------|
| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |

## Constraints & Assumptions

### Constraints

- {technical or business constraint 1}
- {technical or business constraint 2}

### Assumptions

- {assumption 1 - must be validated}
- {assumption 2 - must be validated}

## Priority Rationale

{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}

## Traceability Matrix

| Goal | Requirements |
|------|--------------|
| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [Product Brief](../product-brief.md)
- Next: [Architecture](../architecture/_index.md)
```

---

## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)

```markdown
---
id: REQ-{NNN}
type: functional
priority: {Must|Should|Could|Won't}
traces_to: [G-{NNN}]
status: draft
---

# REQ-{NNN}: {requirement_title}

**Priority**: {Must|Should|Could|Won't}

## Description

{detailed requirement description}

## User Story

As a {persona}, I want to {action} so that {benefit}.

## Acceptance Criteria

- [ ] {specific, testable criterion 1}
- [ ] {specific, testable criterion 2}
- [ ] {specific, testable criterion 3}

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)

```markdown
---
id: NFR-{type}-{NNN}
type: non-functional
category: {Performance|Security|Scalability|Usability}
priority: {Must|Should|Could}
status: draft
---

# NFR-{type}-{NNN}: {requirement_title}

**Category**: {Performance|Security|Scalability|Usability}
**Priority**: {Must|Should|Could}

## Requirement

{detailed requirement description}

## Metric & Target

| Metric | Target | Measurement Method |
|--------|--------|--------------------|
| {metric} | {target value} | {how measured} |

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
| `{slug}` | Auto-generated | Kebab-case from requirement title |
| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |
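
Placeholder substitution itself can be sketched simply. The helper below fills known `{variable}` fields from a context mapping and deliberately leaves unknown placeholders intact, since some fields (e.g. epic links) are only filled in later phases; the function name and leave-intact policy are illustrative assumptions, not part of the skill contract.

```python
import re

def fill_template(template: str, context: dict[str, str]) -> str:
    """Replace known {variable} placeholders; keep unknown ones for later passes."""
    def repl(match: re.Match) -> str:
        key = match.group(1)
        # Fall back to the original "{key}" text when no value is known yet
        return context.get(key, match.group(0))
    return re.sub(r"\{(\w+)\}", repl, template)

filled = fill_template("# Requirements: {product_name} ({session_id})",
                       {"product_name": "Task Manager"})
# "# Requirements: Task Manager ({session_id})" — {session_id} survives for a later pass
```

Note this simple regex only matches single-word placeholders; descriptive ones such as `{description of must-have scope}` are prose hints for the generator, not substitution targets.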