mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-11 17:21:03 +08:00
# Phase 1.5: Requirement Expansion & Clarification

Before formal document generation begins, deeply probe, expand, and confirm the original requirements through multiple rounds of interactive discussion.

## Objective

- Identify ambiguities, omissions, and potential risks in the original requirements
- Use the CLI to analyze requirement completeness and generate deep probing questions
- Support multi-round interactive discussion to progressively refine the requirements
- Produce a user-confirmed `refined-requirements.json` as high-quality input for subsequent phases

## Input

- Dependency: `{workDir}/spec-config.json` (Phase 1 output)
- Optional: `{workDir}/discovery-context.json` (codebase context)

## Execution Steps

### Step 1: Load Phase 1 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const { seed_analysis, seed_input, focus_areas, has_codebase, depth } = specConfig;

let discoveryContext = null;
if (has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) { /* proceed without codebase context */ }
}
```

### Step 2: CLI Gap Analysis & Question Generation

Invoke the Gemini CLI to analyze the completeness of the original requirements, identify ambiguities, and generate probing questions.

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Deeply analyze the user's initial requirements to identify ambiguities, omissions, and areas needing clarification.
Success: 3-5 high-quality probing questions covering functional scope, boundary conditions, non-functional requirements, user scenarios, and similar dimensions.

ORIGINAL SEED INPUT:
${seed_input}

SEED ANALYSIS:
${JSON.stringify(seed_analysis, null, 2)}

FOCUS AREAS: ${focus_areas.join(', ')}
${discoveryContext ? `
CODEBASE CONTEXT:
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join(', ') || 'none'}
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
` : ''}

TASK:
1. Score the completeness of the current requirement description (1-10; list missing dimensions)
2. Identify 3-5 key ambiguous areas, each with:
   - A description of the ambiguity (why it is unclear)
   - 1-2 open-ended probing questions
   - 1-2 expansion suggestions (based on domain best practices)
3. Check the following dimensions for omissions:
   - Functional scope boundaries (what is in/out of scope?)
   - Core user scenarios and flows
   - Non-functional requirements (performance, security, usability, scalability)
   - Integration points and external dependencies
   - Data model and storage needs
   - Error handling and exception scenarios
4. Offer requirement expansion suggestions based on domain experience

MODE: analysis
EXPECTED: JSON output:
{
  \"completeness_score\": 7,
  \"missing_dimensions\": [\"Performance requirements\", \"Error handling\"],
  \"clarification_areas\": [
    {
      \"area\": \"Scope boundary\",
      \"rationale\": \"Input does not clarify...\",
      \"questions\": [\"Question 1?\", \"Question 2?\"],
      \"suggestions\": [\"Suggestion 1\", \"Suggestion 2\"]
    }
  ],
  \"expansion_recommendations\": [
    {
      \"category\": \"Non-functional\",
      \"recommendation\": \"Consider adding...\",
      \"priority\": \"high|medium|low\"
    }
  ]
}
CONSTRAINTS: Questions must be open-ended, suggestions must be concrete and actionable, and output must use the language of the user's input
" --tool gemini --mode analysis`,
  run_in_background: true
});
// Wait for CLI result before continuing
```

Parse the CLI output into structured data:
```javascript
const gapAnalysis = {
  completeness_score: 0,
  missing_dimensions: [],
  clarification_areas: [],
  expansion_recommendations: []
};
// Parse from CLI output
```
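
The "Parse from CLI output" comment above leaves the parsing unspecified. A minimal sketch, assuming the CLI returns the JSON embedded in free-form text (the `extractJson` helper name is ours, not part of the workflow API):

```javascript
// Hypothetical helper: extract the first JSON object embedded in raw CLI text.
// CLI tools often wrap JSON in prose, so locate the outermost braces and
// parse that slice, returning null on any failure.
function extractJson(rawOutput) {
  const start = rawOutput.indexOf('{');
  const end = rawOutput.lastIndexOf('}');
  if (start === -1 || end === -1 || end <= start) return null;
  try {
    return JSON.parse(rawOutput.slice(start, end + 1));
  } catch (e) {
    return null;
  }
}

// Example: tolerate surrounding prose in the CLI response
const raw = 'Analysis complete.\n{"completeness_score": 7, "missing_dimensions": ["Error handling"]}\nDone.';
const parsed = extractJson(raw);
```

A fallback to the zeroed `gapAnalysis` defaults when `extractJson` returns `null` keeps the phase from crashing on a malformed CLI response.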

### Step 3: Interactive Discussion Loop

The core multi-round interaction loop. Each round: present analysis results → user responds → update requirement state → decide whether to continue.

```javascript
// Initialize requirement state
let requirementState = {
  problem_statement: seed_analysis.problem_statement,
  target_users: seed_analysis.target_users,
  domain: seed_analysis.domain,
  constraints: seed_analysis.constraints,
  confirmed_features: [],
  non_functional_requirements: [],
  boundary_conditions: [],
  integration_points: [],
  key_assumptions: [],
  discussion_rounds: 0
};

let discussionLog = [];
let userSatisfied = false;

// === Round 1: Present gap analysis results ===
// Display completeness_score, clarification_areas, expansion_recommendations
// Then ask user to respond

while (!userSatisfied && requirementState.discussion_rounds < 5) {
  requirementState.discussion_rounds++;

  if (requirementState.discussion_rounds === 1) {
    // --- First round: present initial gap analysis ---
    // Format questions and suggestions from gapAnalysis for display
    // Present as a structured summary to the user

    AskUserQuestion({
      questions: [
        {
          question: buildDiscussionPrompt(gapAnalysis, requirementState),
          header: "Req Expand",
          multiSelect: false,
          options: [
            { label: "I'll answer", description: "I have answers/feedback to provide (type in 'Other')" },
            { label: "Accept all suggestions", description: "Accept all expansion recommendations as-is" },
            { label: "Skip to generation", description: "Requirements are clear enough, proceed directly" }
          ]
        }
      ]
    });
  } else {
    // --- Subsequent rounds: refine based on user feedback ---
    // Call CLI with accumulated context for follow-up analysis
    Bash({
      command: `ccw cli -p "PURPOSE: Based on the user's latest response, update the requirement understanding and identify remaining ambiguities.

CURRENT REQUIREMENT STATE:
${JSON.stringify(requirementState, null, 2)}

DISCUSSION HISTORY:
${JSON.stringify(discussionLog, null, 2)}

USER'S LATEST RESPONSE:
${lastUserResponse}

TASK:
1. Integrate the user's response into the requirement state
2. Identify 1-3 areas that still need clarification or could be expanded
3. Generate follow-up questions (if necessary)
4. If the requirements are sufficient, output a final requirement summary

MODE: analysis
EXPECTED: JSON output:
{
  \"updated_fields\": { /* fields to merge into requirementState */ },
  \"status\": \"need_more_discussion\" | \"ready_for_confirmation\",
  \"follow_up\": {
    \"remaining_areas\": [{\"area\": \"...\", \"questions\": [\"...\"]}],
    \"summary\": \"...\"
  }
}
CONSTRAINTS: Avoid repeating questions that have already been answered; focus on uncovered areas
" --tool gemini --mode analysis`,
      run_in_background: true
    });
    // Wait for CLI result, parse and continue

    // If status === "ready_for_confirmation", break to confirmation step
    // If status === "need_more_discussion", present follow-up questions

    AskUserQuestion({
      questions: [
        {
          question: buildFollowUpPrompt(followUpAnalysis, requirementState),
          header: "Follow-up",
          multiSelect: false,
          options: [
            { label: "I'll answer", description: "I have more feedback (type in 'Other')" },
            { label: "Looks good", description: "Requirements are sufficiently clear now" },
            { label: "Accept suggestions", description: "Accept remaining suggestions" }
          ]
        }
      ]
    });
  }

  // Process user response
  // - "Skip to generation" / "Looks good" → userSatisfied = true
  // - "Accept all suggestions" → merge suggestions into requirementState, userSatisfied = true
  // - "I'll answer" (with Other text) → record in discussionLog, continue loop
  // - User selects Other with custom text → parse and record

  discussionLog.push({
    round: requirementState.discussion_rounds,
    agent_prompt: currentPrompt,
    user_response: userResponse,
    timestamp: new Date().toISOString()
  });
}
```
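
The "Accept all suggestions" branch is described above only in comments. One illustrative merge, assuming the `gapAnalysis` shape shown in Step 2 (the `acceptAllSuggestions` helper and the `Unspecified` type are ours):

```javascript
// Hypothetical merge: fold accepted expansion recommendations into the
// requirement state. Non-functional suggestions become NFR entries; anything
// else is recorded as a key assumption pending later confirmation.
function acceptAllSuggestions(state, gapAnalysis) {
  for (const rec of gapAnalysis.expansion_recommendations) {
    if (rec.category === 'Non-functional') {
      state.non_functional_requirements.push({
        type: 'Unspecified',        // refined during confirmation
        details: rec.recommendation,
        measurable_criteria: ''
      });
    } else {
      state.key_assumptions.push(`[${rec.category}] ${rec.recommendation}`);
    }
  }
  return state;
}

const state = { non_functional_requirements: [], key_assumptions: [] };
acceptAllSuggestions(state, {
  expansion_recommendations: [
    { category: 'Non-functional', recommendation: 'Target p95 latency under 200ms', priority: 'high' },
    { category: 'Scope', recommendation: 'Exclude offline mode from v1', priority: 'medium' }
  ]
});
```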

#### Helper: Build Discussion Prompt

```javascript
function buildDiscussionPrompt(gapAnalysis, state) {
  let prompt = `## Requirement Analysis Results\n\n`;
  prompt += `**Completeness Score**: ${gapAnalysis.completeness_score}/10\n`;

  if (gapAnalysis.missing_dimensions.length > 0) {
    prompt += `**Missing Dimensions**: ${gapAnalysis.missing_dimensions.join(', ')}\n\n`;
  }

  prompt += `### Key Questions\n\n`;
  gapAnalysis.clarification_areas.forEach((area, i) => {
    prompt += `**${i+1}. ${area.area}**\n`;
    prompt += `   ${area.rationale}\n`;
    area.questions.forEach(q => { prompt += `   - ${q}\n`; });
    if (area.suggestions.length > 0) {
      prompt += `   Suggestions: ${area.suggestions.join('; ')}\n`;
    }
    prompt += `\n`;
  });

  if (gapAnalysis.expansion_recommendations.length > 0) {
    prompt += `### Expansion Recommendations\n\n`;
    gapAnalysis.expansion_recommendations.forEach(rec => {
      prompt += `- [${rec.priority}] **${rec.category}**: ${rec.recommendation}\n`;
    });
  }

  prompt += `\nPlease answer the questions above, or choose an option below.`;
  return prompt;
}
```

### Step 4: Auto Mode Handling

```javascript
if (autoMode) {
  // Skip interactive discussion
  // CLI generates default requirement expansion based on seed_analysis
  Bash({
    command: `ccw cli -p "PURPOSE: Automatically generate a requirement expansion from the seed analysis, without user interaction.

SEED ANALYSIS:
${JSON.stringify(seed_analysis, null, 2)}

SEED INPUT: ${seed_input}
DEPTH: ${depth}
${discoveryContext ? `CODEBASE: ${JSON.stringify(discoveryContext.tech_stack || {})}` : ''}

TASK:
1. Automatically expand the feature requirement list based on domain best practices
2. Infer reasonable non-functional requirements
3. Identify obvious boundary conditions
4. List key assumptions

MODE: analysis
EXPECTED: JSON output matching refined-requirements.json schema
CONSTRAINTS: Infer conservatively; only add high-confidence expansions
" --tool gemini --mode analysis`,
    run_in_background: true
  });
  // Parse output directly into refined-requirements.json
}
```

### Step 5: Generate Requirement Confirmation Summary

Before writing the file, present the final requirement confirmation summary to the user (non-auto mode).

```javascript
if (!autoMode) {
  // Build confirmation summary from requirementState
  const summary = buildConfirmationSummary(requirementState);

  AskUserQuestion({
    questions: [
      {
        question: `## Requirement Confirmation\n\n${summary}\n\nConfirm and proceed to specification generation?`,
        header: "Confirm",
        multiSelect: false,
        options: [
          { label: "Confirm & proceed", description: "Requirements confirmed, start spec generation" },
          { label: "Need adjustments", description: "Go back and refine further" }
        ]
      }
    ]
  });

  // If "Need adjustments" → loop back to Step 3
  // If "Confirm & proceed" → continue to Step 6
}
```

### Step 6: Write refined-requirements.json

```javascript
const refinedRequirements = {
  session_id: specConfig.session_id,
  phase: "1.5",
  generated_at: new Date().toISOString(),
  source: autoMode ? "auto-expansion" : "interactive-discussion",
  discussion_rounds: requirementState.discussion_rounds,

  // Core requirement content
  clarified_problem_statement: requirementState.problem_statement,
  confirmed_target_users: requirementState.target_users.map(u =>
    typeof u === 'string' ? { name: u, needs: [], pain_points: [] } : u
  ),
  confirmed_domain: requirementState.domain,

  confirmed_features: requirementState.confirmed_features.map(f => ({
    name: f.name,
    description: f.description,
    acceptance_criteria: f.acceptance_criteria || [],
    edge_cases: f.edge_cases || [],
    priority: f.priority || "unset"
  })),

  non_functional_requirements: requirementState.non_functional_requirements.map(nfr => ({
    type: nfr.type, // Performance, Security, Usability, Scalability, etc.
    details: nfr.details,
    measurable_criteria: nfr.measurable_criteria || ""
  })),

  boundary_conditions: {
    in_scope: requirementState.boundary_conditions.filter(b => b.scope === 'in'),
    out_of_scope: requirementState.boundary_conditions.filter(b => b.scope === 'out'),
    constraints: requirementState.constraints
  },

  integration_points: requirementState.integration_points,
  key_assumptions: requirementState.key_assumptions,

  // Traceability
  discussion_log: autoMode ? [] : discussionLog
};

Write(`${workDir}/refined-requirements.json`, JSON.stringify(refinedRequirements, null, 2));
```
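
Several items on this phase's Quality Checklist can be enforced in code before writing the file. A hedged sketch (the function name and violation messages are ours; the thresholds mirror the checklist):

```javascript
// Illustrative pre-write validation mirroring the Quality Checklist.
// Returns a list of violations; an empty list means the payload passes.
function validateRefinedRequirements(reqs) {
  const problems = [];
  if ((reqs.clarified_problem_statement || '').length < 30)
    problems.push('problem statement shorter than 30 characters');
  if ((reqs.confirmed_features || []).length < 2)
    problems.push('fewer than 2 confirmed features');
  if ((reqs.non_functional_requirements || []).length < 1)
    problems.push('no non-functional requirements');
  if ((reqs.key_assumptions || []).length < 1)
    problems.push('no key assumptions');
  return problems;
}

const issues = validateRefinedRequirements({
  clarified_problem_statement: 'Allow operators to replay failed queue jobs safely.',
  confirmed_features: [{ name: 'Replay' }, { name: 'Audit log' }],
  non_functional_requirements: [{ type: 'Performance', details: 'p95 < 200ms' }],
  key_assumptions: ['Single-region deployment']
});
```

If `issues` is non-empty in interactive mode, looping back to Step 3 is a reasonable recovery path.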

### Step 7: Update spec-config.json

```javascript
specConfig.refined_requirements_file = "refined-requirements.json";
specConfig.phasesCompleted.push({
  phase: 1.5,
  name: "requirement-clarification",
  output_file: "refined-requirements.json",
  discussion_rounds: requirementState.discussion_rounds,
  completed_at: new Date().toISOString()
});

Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
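
Re-running Phase 1.5 against the same session would append a duplicate entry to `phasesCompleted` with the push above. A defensive variant (our own guard, not part of the original flow) replaces an existing record for the phase instead of pushing blindly:

```javascript
// Hypothetical guard: record phase completion idempotently, so re-runs
// update the existing entry rather than appending duplicates.
function recordPhase(config, entry) {
  const i = config.phasesCompleted.findIndex(p => p.phase === entry.phase);
  if (i >= 0) config.phasesCompleted[i] = entry;  // overwrite prior run's record
  else config.phasesCompleted.push(entry);
  return config;
}

const cfg = { phasesCompleted: [{ phase: 1, name: 'discovery' }] };
recordPhase(cfg, { phase: 1.5, name: 'requirement-clarification' });
recordPhase(cfg, { phase: 1.5, name: 'requirement-clarification' }); // second run: no duplicate
```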

## Output

- **File**: `refined-requirements.json`
- **Format**: JSON
- **Updated**: `spec-config.json` (added `refined_requirements_file` field and phase 1.5 to `phasesCompleted`)

## Quality Checklist

- [ ] Problem statement refined (>= 30 characters, more specific than seed)
- [ ] At least 2 confirmed features with descriptions
- [ ] At least 1 non-functional requirement identified
- [ ] Boundary conditions defined (in-scope + out-of-scope)
- [ ] Key assumptions listed (>= 1)
- [ ] Discussion rounds recorded (>= 1 in interactive mode)
- [ ] User explicitly confirmed requirements (non-auto mode)
- [ ] `refined-requirements.json` written with valid JSON
- [ ] `spec-config.json` updated with phase 1.5 completion

## Next Phase

Proceed to [Phase 2: Product Brief](02-product-brief.md). Phase 2 should load `refined-requirements.json` as primary input instead of relying solely on `spec-config.json.seed_analysis`.

---

File: `.codex/skills/spec-generator/phases/01-discovery.md`

# Phase 1: Discovery

Parse input, analyze the seed idea, optionally explore codebase, establish session configuration.

## Objective

- Generate session ID and create output directory
- Parse user input (text description or file reference)
- Analyze seed via Gemini CLI to extract problem space dimensions
- Conditionally explore codebase for existing patterns and constraints
- Gather user preferences (depth, focus areas) via interactive confirmation
- Write `spec-config.json` as the session state file

## Input

- Dependency: `$ARGUMENTS` (user input from command)
- Flags: `-y` (auto mode), `-c` (continue mode)

## Execution Steps

### Step 1: Session Initialization

```javascript
// Parse arguments
const args = $ARGUMENTS;
const autoMode = args.includes('-y') || args.includes('--yes');
const continueMode = args.includes('-c') || args.includes('--continue');

// Extract the idea/topic (remove flags)
const idea = args.replace(/(-y|--yes|-c|--continue)\s*/g, '').trim();

// Generate session ID
const slug = idea.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-')
  .replace(/^-|-$/g, '')
  .slice(0, 40);
const date = new Date().toISOString().slice(0, 10);
const sessionId = `SPEC-${slug}-${date}`;
const workDir = `.workflow/.spec/${sessionId}`;

// Check for continue mode
if (continueMode) {
  // Find existing session
  const existingSessions = Glob('.workflow/.spec/SPEC-*/spec-config.json');
  // If slug matches an existing session, load it and resume
  // Read spec-config.json, find first incomplete phase, jump to that phase
  return; // Resume logic handled by orchestrator
}

// Create output directory
Bash(`mkdir -p "${workDir}"`);
```
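
The slug derivation above can be exercised in isolation. A standalone sketch (the `makeSlug` wrapper is ours; the regex chain is copied verbatim from the step):

```javascript
// Standalone version of the session-slug derivation used above.
function makeSlug(idea) {
  return idea.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-')  // collapse runs of non-alphanumerics (CJK kept)
    .replace(/^-|-$/g, '')                     // trim a leading/trailing dash
    .slice(0, 40);                             // cap length for directory names
}

// Example session ID for a sample idea and a fixed date
const sessionId = `SPEC-${makeSlug('Add OAuth2 login & token refresh!')}-2026-03-11`;
```

Note the order matters: trimming happens before `slice`, so a slug truncated at 40 characters can still end in a dash; that matches the original code's behavior.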

### Step 2: Input Parsing

```javascript
// Determine input type
let inputType, seedInput;
if (idea.startsWith('@') || idea.endsWith('.md') || idea.endsWith('.txt')) {
  // File reference - read and extract content
  const filePath = idea.replace(/^@/, '');
  const fileContent = Read(filePath);
  // Use file content as the seed
  inputType = 'file';
  seedInput = fileContent;
} else {
  // Direct text description
  inputType = 'text';
  seedInput = idea;
}
```

### Step 3: Seed Analysis via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Analyze this seed idea/requirement to extract structured problem space dimensions.
Success: Clear problem statement, target users, domain identification, 3-5 exploration dimensions.

SEED INPUT:
${seedInput}

TASK:
- Extract a clear problem statement (what problem does this solve?)
- Identify target users (who benefits?)
- Determine the domain (technical, business, consumer, etc.)
- List constraints (budget, time, technical, regulatory)
- Generate 3-5 exploration dimensions (key areas to investigate)
- Assess complexity: simple (1-2 components), moderate (3-5 components), complex (6+ components)

MODE: analysis
EXPECTED: JSON output with fields: problem_statement, target_users[], domain, constraints[], dimensions[], complexity
CONSTRAINTS: Be specific and actionable, not vague
" --tool gemini --mode analysis`,
  run_in_background: true
});
// Wait for CLI result before continuing
```

Parse the CLI output into structured `seedAnalysis`:
```javascript
const seedAnalysis = {
  problem_statement: "...",
  target_users: ["..."],
  domain: "...",
  constraints: ["..."],
  dimensions: ["..."]
};
const complexity = "moderate"; // from CLI output
```

### Step 4: Codebase Exploration (Conditional)

```javascript
// Detect if running inside a project with code
const hasCodebase = Glob('**/*.{ts,js,py,java,go,rs}').length > 0
  || Glob('package.json').length > 0
  || Glob('Cargo.toml').length > 0;

if (hasCodebase) {
  Agent({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Explore codebase for spec: ${slug}`,
    prompt: `
## Spec Generator Context
Topic: ${seedInput}
Dimensions: ${seedAnalysis.dimensions.join(', ')}
Session: ${workDir}

## MANDATORY FIRST STEPS
1. Search for code related to topic keywords
2. Read project config files (package.json, pyproject.toml, etc.) if they exist

## Exploration Focus
- Identify existing implementations related to the topic
- Find patterns that could inform architecture decisions
- Map current architecture constraints
- Locate integration points and dependencies

## Output
Write findings to: ${workDir}/discovery-context.json

Schema:
{
  "relevant_files": [{"path": "...", "relevance": "high|medium|low", "rationale": "..."}],
  "existing_patterns": ["pattern descriptions"],
  "architecture_constraints": ["constraint descriptions"],
  "integration_points": ["integration point descriptions"],
  "tech_stack": {"languages": [], "frameworks": [], "databases": []},
  "_metadata": { "exploration_type": "spec-discovery", "timestamp": "ISO8601" }
}
`
  });
}
```

### Step 5: User Confirmation (Interactive)

```javascript
if (!autoMode) {
  // Confirm problem statement and select depth
  AskUserQuestion({
    questions: [
      {
        question: `Problem statement: "${seedAnalysis.problem_statement}" - Is this accurate?`,
        header: "Problem",
        multiSelect: false,
        options: [
          { label: "Accurate", description: "Proceed with this problem statement" },
          { label: "Needs adjustment", description: "I'll refine the problem statement" }
        ]
      },
      {
        question: "What specification depth do you need?",
        header: "Depth",
        multiSelect: false,
        options: [
          { label: "Light", description: "Quick overview - key decisions only" },
          { label: "Standard (Recommended)", description: "Balanced detail for most projects" },
          { label: "Comprehensive", description: "Maximum detail for complex/critical projects" }
        ]
      },
      {
        question: "Which areas should we focus on?",
        header: "Focus",
        multiSelect: true,
        options: seedAnalysis.dimensions.map(d => ({ label: d, description: `Explore ${d} in depth` }))
      },
      {
        question: "What type of specification is this?",
        header: "Spec Type",
        multiSelect: false,
        options: [
          { label: "Service (Recommended)", description: "Long-running service with lifecycle, state machine, observability" },
          { label: "API", description: "REST/GraphQL API with endpoints, auth, rate limiting" },
          { label: "Library/SDK", description: "Reusable package with public API surface, examples" },
          { label: "Platform", description: "Multi-component system, uses Service profile" }
        ]
      }
    ]
  });
} else {
  // Auto mode defaults
  depth = "standard";
  focusAreas = seedAnalysis.dimensions;
  specType = "service"; // default for auto mode
}
```

### Step 6: Write spec-config.json

```javascript
const specConfig = {
  session_id: sessionId,
  seed_input: seedInput,
  input_type: inputType,
  timestamp: new Date().toISOString(),
  mode: autoMode ? "auto" : "interactive",
  complexity: complexity,
  depth: depth,
  focus_areas: focusAreas,
  seed_analysis: seedAnalysis,
  has_codebase: hasCodebase,
  spec_type: specType, // "service" | "api" | "library" | "platform"
  iteration_count: 0,
  iteration_history: [],
  phasesCompleted: [
    {
      phase: 1,
      name: "discovery",
      output_file: "spec-config.json",
      completed_at: new Date().toISOString()
    }
  ]
};

Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

## Output

- **File**: `spec-config.json`
- **File**: `discovery-context.json` (optional, if codebase detected)
- **Format**: JSON

## Quality Checklist

- [ ] Session ID matches `SPEC-{slug}-{date}` format
- [ ] Problem statement exists and is >= 20 characters
- [ ] Target users identified (>= 1)
- [ ] 3-5 exploration dimensions generated
- [ ] spec-config.json written with all required fields
- [ ] Output directory created

## Next Phase

Proceed to [Phase 2: Product Brief](02-product-brief.md) with the generated spec-config.json.

---

File: `.codex/skills/spec-generator/phases/02-product-brief.md`

# Phase 2: Product Brief

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Generate a product brief through multi-perspective CLI analysis, establishing the "what" and "why".

## Objective

- Read Phase 1 outputs (spec-config.json, discovery-context.json)
- Launch 3 parallel CLI analyses from product, technical, and user perspectives
- Synthesize convergent themes and conflicting views
- Optionally refine with user input
- Generate product-brief.md using template

## Input

- Dependency: `{workDir}/spec-config.json`
- Primary: `{workDir}/refined-requirements.json` (Phase 1.5 output, preferred over raw seed_analysis)
- Optional: `{workDir}/discovery-context.json`
- Template: `templates/product-brief.md`

## Execution Steps

### Step 1: Load Phase 1 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const { seed_analysis, seed_input, has_codebase, depth, focus_areas } = specConfig;

// Load refined requirements (Phase 1.5 output) - preferred over raw seed_analysis
let refinedReqs = null;
try {
  refinedReqs = JSON.parse(Read(`${workDir}/refined-requirements.json`));
} catch (e) {
  // No refined requirements, fall back to seed_analysis
}

let discoveryContext = null;
if (has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) {
    // No discovery context available, proceed without
  }
}

// Build shared context string for CLI prompts
// Prefer refined requirements over raw seed_analysis
const problem = refinedReqs?.clarified_problem_statement || seed_analysis.problem_statement;
const users = refinedReqs?.confirmed_target_users?.map(u => u.name || u).join(', ')
  || seed_analysis.target_users.join(', ');
const domain = refinedReqs?.confirmed_domain || seed_analysis.domain;
const constraints = refinedReqs?.boundary_conditions?.constraints?.join(', ')
  || seed_analysis.constraints.join(', ');
const features = refinedReqs?.confirmed_features?.map(f => f.name).join(', ') || '';
const nfrs = refinedReqs?.non_functional_requirements?.map(n => `${n.type}: ${n.details}`).join('; ') || '';

const sharedContext = `
SEED: ${seed_input}
PROBLEM: ${problem}
TARGET USERS: ${users}
DOMAIN: ${domain}
CONSTRAINTS: ${constraints}
FOCUS AREAS: ${focus_areas.join(', ')}
${features ? `CONFIRMED FEATURES: ${features}` : ''}
${nfrs ? `NON-FUNCTIONAL REQUIREMENTS: ${nfrs}` : ''}
${discoveryContext ? `
CODEBASE CONTEXT:
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join(', ') || 'none'}
- Architecture constraints: ${discoveryContext.architecture_constraints?.slice(0,3).join(', ') || 'none'}
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
` : ''}`;
```

### Step 2: Multi-CLI Parallel Analysis (3 Perspectives)

Launch 3 CLI calls in parallel:

**Product Perspective (Gemini)**:
```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Product analysis for specification - identify market fit, user value, and success criteria.
Success: Clear vision, measurable goals, competitive positioning.

${sharedContext}

TASK:
- Define product vision (1-3 sentences, aspirational)
- Analyze market/competitive landscape
- Define 3-5 measurable success metrics
- Identify scope boundaries (in-scope vs out-of-scope)
- Assess user value proposition
- List assumptions that need validation

MODE: analysis
EXPECTED: Structured product analysis with: vision, goals with metrics, scope, competitive positioning, assumptions
CONSTRAINTS: Focus on 'what' and 'why', not 'how'
" --tool gemini --mode analysis`,
  run_in_background: true
});
```

**Technical Perspective (Codex)**:
```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Technical feasibility analysis for specification - assess implementation viability and constraints.
Success: Clear technical constraints, integration complexity, technology recommendations.

${sharedContext}

TASK:
- Assess technical feasibility of the core concept
- Identify technical constraints and blockers
- Evaluate integration complexity with existing systems
- Recommend technology approach (high-level)
- Identify technical risks and dependencies
- Estimate complexity: simple/moderate/complex

MODE: analysis
EXPECTED: Technical analysis with: feasibility assessment, constraints, integration complexity, tech recommendations, risks
CONSTRAINTS: Focus on feasibility and constraints, not detailed architecture
" --tool codex --mode analysis`,
  run_in_background: true
});
```

**User Perspective (Claude)**:
```javascript
Bash({
  command: `ccw cli -p "PURPOSE: User experience analysis for specification - understand user journeys, pain points, and UX considerations.
Success: Clear user personas, journey maps, UX requirements.

${sharedContext}

TASK:
- Elaborate user personas with goals and frustrations
- Map primary user journey (happy path)
- Identify key pain points in current experience
- Define UX success criteria
- List accessibility and usability considerations
- Suggest interaction patterns

MODE: analysis
EXPECTED: User analysis with: personas, journey map, pain points, UX criteria, interaction recommendations
CONSTRAINTS: Focus on user needs and experience, not implementation
" --tool claude --mode analysis`,
  run_in_background: true
});

// STOP: Wait for all 3 CLI results before continuing
```

### Step 3: Synthesize Perspectives

```javascript
// After receiving all 3 CLI results:
// Extract convergent themes (all agree)
// Identify conflicting views (need resolution)
// Note unique contributions from each perspective

const synthesis = {
  convergent_themes: [],   // themes all 3 perspectives agree on
  conflicts: [],           // areas where perspectives differ
  product_insights: [],    // unique from product perspective
  technical_insights: [],  // unique from technical perspective
  user_insights: []        // unique from user perspective
};
```
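
The convergent/unique split can be seeded mechanically once each perspective's result is reduced to a normalized theme list (a shape we are assuming for illustration; real CLI output needs that normalization first, and conflict detection still requires judgment):

```javascript
// Illustrative synthesis: a theme present in all three perspectives is
// convergent; a theme seen in exactly one stays with its source perspective.
function synthesize(product, technical, user) {
  const count = new Map();
  for (const themes of [product, technical, user])
    for (const t of new Set(themes))            // dedupe within a perspective
      count.set(t, (count.get(t) || 0) + 1);
  return {
    convergent_themes: [...count].filter(([, n]) => n === 3).map(([t]) => t),
    product_insights: product.filter(t => count.get(t) === 1),
    technical_insights: technical.filter(t => count.get(t) === 1),
    user_insights: user.filter(t => count.get(t) === 1)
  };
}

const s = synthesize(
  ['audit trail', 'self-serve onboarding'],
  ['audit trail', 'event sourcing'],
  ['audit trail', 'progress visibility']
);
```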

### Step 4: Interactive Refinement (Optional)

```javascript
if (!autoMode) {
  // Present synthesis summary to user
  // AskUserQuestion with:
  // - Confirm vision statement
  // - Resolve any conflicts between perspectives
  // - Adjust scope if needed
  AskUserQuestion({
    questions: [
      {
        question: "Review the synthesized product brief. Any adjustments needed?",
        header: "Review",
        multiSelect: false,
        options: [
          { label: "Looks good", description: "Proceed to PRD generation" },
          { label: "Adjust scope", description: "Narrow or expand the scope" },
          { label: "Revise vision", description: "Refine the vision statement" }
        ]
      }
    ]
  });
}
```
|
||||
|
||||
### Step 5: Generate product-brief.md

```javascript
// Read template
const template = Read('templates/product-brief.md');

// Fill template with synthesized content
// Apply document-standards.md formatting rules
// Write with YAML frontmatter

const frontmatter = `---
session_id: ${specConfig.session_id}
phase: 2
document_type: product-brief
status: ${autoMode ? 'complete' : 'draft'}
generated_at: ${new Date().toISOString()}
stepsCompleted: ["load-context", "multi-cli-analysis", "synthesis", "generation"]
version: 1
dependencies:
  - spec-config.json
---`;

// Combine frontmatter + filled template content
Write(`${workDir}/product-brief.md`, `${frontmatter}\n\n${filledContent}`);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 2,
  name: "product-brief",
  output_file: "product-brief.md",
  completed_at: new Date().toISOString()
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

### Step 5.5: Generate glossary.json

```javascript
// Extract terminology from product brief and CLI analysis
// Generate structured glossary for cross-document consistency

const glossary = {
  session_id: specConfig.session_id,
  terms: [
    // Extract from product brief content:
    // - Key domain nouns from problem statement
    // - User persona names
    // - Technical terms from multi-perspective synthesis
    // Each term should have:
    // { term: "...", definition: "...", aliases: [], first_defined_in: "product-brief.md", category: "core|technical|business" }
  ]
};

Write(`${workDir}/glossary.json`, JSON.stringify(glossary, null, 2));
```

**Glossary Injection**: In all subsequent phase prompts, inject the following into the CONTEXT section:

```
TERMINOLOGY GLOSSARY (use these terms consistently):
${JSON.stringify(glossary.terms, null, 2)}
```

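One way to seed the `terms` array is a simple heuristic pass before the synthesis content is consulted. This is an illustrative sketch only, not the skill's prescribed extractor: it assumes candidate terms are repeated capitalized multi-word phrases, which is a rough proxy for domain nouns.

```javascript
// Heuristic sketch: collect capitalized phrases that appear at least
// minCount times in the markdown and shape them as glossary entries.
function extractCandidateTerms(markdown, minCount = 2) {
  const counts = new Map();
  const phraseRe = /\b([A-Z][a-z]+(?: [A-Z][a-z]+)+)\b/g;
  for (const match of markdown.matchAll(phraseRe)) {
    const term = match[1];
    counts.set(term, (counts.get(term) || 0) + 1);
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= minCount)
    .map(([term]) => ({
      term,
      definition: "",          // filled in during synthesis
      aliases: [],
      first_defined_in: "product-brief.md",
      category: "core"
    }));
}

const terms = extractCandidateTerms(
  "Queue Scheduler assigns work to sessions. A Queue Scheduler retries failed jobs."
);
// terms -> [{ term: "Queue Scheduler", ... }]
```

Definitions and categories still need the CLI analysis; the heuristic only narrows the candidate list.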
## Output

- **Files**: `product-brief.md`, `glossary.json`
- **Format**: Markdown with YAML frontmatter (`product-brief.md`); JSON (`glossary.json`)

## Quality Checklist

- [ ] Vision statement: clear, 1-3 sentences
- [ ] Problem statement: specific and measurable
- [ ] Target users: >= 1 persona with needs
- [ ] Goals: >= 2 with measurable metrics
- [ ] Scope: in-scope and out-of-scope defined
- [ ] Multi-perspective synthesis included
- [ ] YAML frontmatter valid

## Next Phase

Proceed to [Phase 3: Requirements](03-requirements.md) with the generated product-brief.md.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 2,
  "status": "complete",
  "files_created": ["product-brief.md", "glossary.json"],
  "quality_notes": ["list of any quality concerns or deviations"],
  "key_decisions": ["list of significant synthesis decisions made"]
}
```

The orchestrator will:

1. Validate that listed files exist on disk
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

248 .codex/skills/spec-generator/phases/03-requirements.md Normal file
@@ -0,0 +1,248 @@

# Phase 3: Requirements (PRD)

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Generate a detailed Product Requirements Document with functional/non-functional requirements, acceptance criteria, and MoSCoW prioritization.

## Objective

- Read product-brief.md and extract goals, scope, constraints
- Expand each goal into functional requirements with acceptance criteria
- Generate non-functional requirements
- Apply MoSCoW priority labels (user input or auto)
- Generate the `requirements/` directory using the template

## Input

- Dependency: `{workDir}/product-brief.md`
- Config: `{workDir}/spec-config.json`
- Template: `templates/requirements-prd.md` (directory structure: `_index.md` + `REQ-*.md` + `NFR-*.md`)

## Execution Steps

### Step 1: Load Phase 2 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);

// Load glossary for terminology consistency (written in Phase 2, Step 5.5)
let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }

// Extract key sections from product brief
// - Goals & Success Metrics table
// - Scope (in-scope items)
// - Target Users (personas)
// - Constraints
// - Technical perspective insights
```

### Step 2: Requirements Expansion via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Generate detailed functional and non-functional requirements from product brief.
Success: Complete PRD with testable acceptance criteria for every requirement.

PRODUCT BRIEF CONTEXT:
${productBrief}

TASK:
- For each goal in the product brief, generate 3-7 functional requirements
- Each requirement must have:
  - Unique ID: REQ-NNN (zero-padded)
  - Clear title
  - Detailed description
  - User story: As a [persona], I want [action] so that [benefit]
  - 2-4 specific, testable acceptance criteria
- Generate non-functional requirements:
  - Performance (response times, throughput)
  - Security (authentication, authorization, data protection)
  - Scalability (user load, data volume)
  - Usability (accessibility, learnability)
- Assign initial MoSCoW priority based on:
  - Must: Core functionality, cannot launch without
  - Should: Important but has workaround
  - Could: Nice-to-have, enhances experience
  - Won't: Explicitly deferred
- Use RFC 2119 keywords (MUST, SHOULD, MAY, MUST NOT, SHOULD NOT) to define behavioral constraints for each requirement. Example: 'The system MUST return a 401 response within 100ms for invalid tokens.'
- For each core domain entity referenced in requirements, define its data model: fields, types, constraints, and relationships to other entities
- Maintain terminology consistency with the glossary below:
TERMINOLOGY GLOSSARY:
${glossary ? JSON.stringify(glossary.terms, null, 2) : 'N/A - generate terms inline'}

MODE: analysis
EXPECTED: Structured requirements with: ID, title, description, user story, acceptance criteria, priority, traceability to goals
CONSTRAINTS: Every requirement must be specific enough to estimate and test. No vague requirements like 'system should be fast'.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 2.5: Codex Requirements Review

After receiving Gemini expansion results, validate requirements quality via Codex CLI before proceeding:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Critical review of generated requirements - validate quality, testability, and scope alignment.
Success: Actionable feedback on requirement quality with specific issues identified.

GENERATED REQUIREMENTS:
${geminiRequirementsOutput.slice(0, 5000)}

PRODUCT BRIEF SCOPE:
${productBrief.slice(0, 2000)}

TASK:
- Verify every acceptance criterion is specific, measurable, and testable (not vague like 'should be fast')
- Validate RFC 2119 keyword usage: MUST/SHOULD/MAY used correctly per RFC 2119 semantics
- Check scope containment: no requirement exceeds the product brief's defined scope boundaries
- Assess data model completeness: all referenced entities have field-level definitions
- Identify duplicate or overlapping requirements
- Rate overall requirements quality: 1-5 with justification

MODE: analysis
EXPECTED: Requirements review with: per-requirement feedback, testability assessment, scope violations, data model gaps, quality rating
CONSTRAINTS: Be genuinely critical. Focus on requirements that would block implementation if left vague.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Wait for Codex review result
// Integrate feedback into requirements before writing files:
// - Fix vague acceptance criteria flagged by Codex
// - Correct RFC 2119 keyword misuse
// - Remove or flag requirements that exceed brief scope
// - Fill data model gaps identified by Codex
```

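The review asks Codex for a 1-5 quality rating; the agent later reports it in its return summary. A small helper can pull that number out of the free-text review. This is a sketch under a stated assumption: the review contains a line like `Quality rating: 4/5` or `Rating: 4`, which the CLI does not guarantee, so callers must handle `null`.

```javascript
// Assumption: the review text mentions "rating" followed by a digit 1-5,
// optionally "/5". Returns the number, or null if no such pattern exists.
function extractQualityRating(reviewText) {
  const m = reviewText.match(/rating[^0-9]*([1-5])(?:\s*\/\s*5)?/i);
  return m ? Number(m[1]) : null;
}

extractQualityRating("Overall quality rating: 4/5 - solid but vague NFRs"); // -> 4
extractQualityRating("no explicit score given");                            // -> null
```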
### Step 3: User Priority Sorting (Interactive)

```javascript
if (!autoMode) {
  // Present requirements grouped by initial priority
  // Allow user to adjust MoSCoW labels
  AskUserQuestion({
    questions: [
      {
        question: "Review the Must-Have requirements. Any that should be reprioritized?",
        header: "Must-Have",
        multiSelect: false,
        options: [
          { label: "All correct", description: "Must-have requirements are accurate" },
          { label: "Too many", description: "Some should be Should/Could" },
          { label: "Missing items", description: "Some Should requirements should be Must" }
        ]
      },
      {
        question: "What is the target MVP scope?",
        header: "MVP Scope",
        multiSelect: false,
        options: [
          { label: "Must-Have only (Recommended)", description: "MVP includes only Must requirements" },
          { label: "Must + key Should", description: "Include critical Should items in MVP" },
          { label: "Comprehensive", description: "Include all Must and Should" }
        ]
      }
    ]
  });
  // Apply user adjustments to priorities
} else {
  // Auto mode: accept CLI-suggested priorities as-is
}
```

### Step 4: Generate requirements/ directory

```javascript
// Read template
const template = Read('templates/requirements-prd.md');

// Create requirements directory
Bash(`mkdir -p "${workDir}/requirements"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI output into structured requirements
const funcReqs = parseFunctionalRequirements(cliOutput); // [{id, slug, title, priority, ...}]
const nfReqs = parseNonFunctionalRequirements(cliOutput); // [{id, type, slug, title, ...}]

// Step 4a: Write individual REQ-*.md files (one per functional requirement)
funcReqs.forEach(req => {
  // Use REQ-NNN-{slug}.md template from templates/requirements-prd.md
  // Fill: id, title, priority, description, user_story, acceptance_criteria, traces
  Write(`${workDir}/requirements/REQ-${req.id}-${req.slug}.md`, reqContent);
});

// Step 4b: Write individual NFR-*.md files (one per non-functional requirement)
nfReqs.forEach(nfr => {
  // Use NFR-{type}-NNN-{slug}.md template from templates/requirements-prd.md
  // Fill: id, type, category, title, requirement, metric, target, traces
  Write(`${workDir}/requirements/NFR-${nfr.type}-${nfr.id}-${nfr.slug}.md`, nfrContent);
});

// Step 4c: Write _index.md (summary + links to all individual files)
// Use _index.md template from templates/requirements-prd.md
// Fill: summary table, functional req links table, NFR links tables,
//       data requirements, integration requirements, traceability matrix
Write(`${workDir}/requirements/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 3,
  name: "requirements",
  output_dir: "requirements/",
  output_index: "requirements/_index.md",
  file_count: funcReqs.length + nfReqs.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

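`parseFunctionalRequirements` is left abstract above because the CLI output format is not fixed. As a hypothetical sketch, assume each functional requirement starts with a `### REQ-NNN: Title` heading followed by a `Priority: Must|Should|Could|Won't` line; the field names match the `{id, slug, title, priority, ...}` shape the step expects.

```javascript
// Hypothetical parser: extracts {id, slug, title, priority} objects from
// CLI output that uses "### REQ-NNN: Title" headings.
function parseFunctionalRequirements(cliOutput) {
  const reqs = [];
  const blockRe = /^### (REQ-(\d{3})): (.+)$/gm;
  for (const m of cliOutput.matchAll(blockRe)) {
    const title = m[3].trim();
    const tail = cliOutput.slice(m.index);
    const prio = tail.match(/^Priority:\s*(Must|Should|Could|Won't)/m);
    reqs.push({
      id: m[2],  // "001"
      slug: title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-|-$/g, ''),
      title,
      priority: prio ? prio[1] : 'Should'  // default when the line is missing
    });
  }
  return reqs;
}

const parsed = parseFunctionalRequirements(
  "### REQ-001: User Login\nPriority: Must\n\n### REQ-002: Audit Log\nPriority: Could\n"
);
// parsed[0] -> { id: "001", slug: "user-login", title: "User Login", priority: "Must" }
```

A real implementation would also collect the description, user story, and acceptance criteria between headings; this shows only the ID/slug/priority plumbing.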
## Output

- **Directory**: `requirements/`
  - `_index.md` — Summary, MoSCoW table, traceability matrix, links
  - `REQ-NNN-{slug}.md` — Individual functional requirement (per requirement)
  - `NFR-{type}-NNN-{slug}.md` — Individual non-functional requirement (per NFR)
- **Format**: Markdown with YAML frontmatter, cross-linked via relative paths

## Quality Checklist

- [ ] Functional requirements: >= 3 with REQ-NNN IDs, each in own file
- [ ] Every requirement file has >= 1 acceptance criterion
- [ ] Every requirement has MoSCoW priority tag in frontmatter
- [ ] Non-functional requirements: >= 1, each in own file
- [ ] User stories present for Must-have requirements
- [ ] `_index.md` links to all individual requirement files
- [ ] Traceability links to product-brief.md goals
- [ ] All files have valid YAML frontmatter

## Next Phase

Proceed to [Phase 4: Architecture](04-architecture.md) with the generated `requirements/` directory.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 3,
  "status": "complete",
  "files_created": ["requirements/_index.md", "requirements/REQ-001-*.md", "..."],
  "file_count": 0,
  "codex_review_integrated": true,
  "quality_notes": ["list of quality concerns or Codex feedback items addressed"],
  "key_decisions": ["MoSCoW priority rationale", "scope adjustments from Codex review"]
}
```

The orchestrator will:

1. Validate that `requirements/` directory exists with `_index.md` and individual files
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

274 .codex/skills/spec-generator/phases/04-architecture.md Normal file
@@ -0,0 +1,274 @@

# Phase 4: Architecture

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Generate technical architecture decisions, component design, and technology selections based on requirements.

## Objective

- Analyze requirements to identify core components and system architecture
- Generate Architecture Decision Records (ADRs) with alternatives
- Map architecture to existing codebase (if applicable)
- Challenge architecture via Codex CLI review
- Generate the `architecture/` directory using the template

## Input

- Dependency: `{workDir}/requirements/_index.md` (and individual `REQ-*.md` files)
- Reference: `{workDir}/product-brief.md`
- Optional: `{workDir}/discovery-context.json`
- Config: `{workDir}/spec-config.json`
- Template: `templates/architecture-doc.md`

## Execution Steps

### Step 1: Load Phase 2-3 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirements = Read(`${workDir}/requirements/_index.md`);

let discoveryContext = null;
if (specConfig.has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) { /* no context */ }
}

// Load glossary for terminology consistency
let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }

// Load spec type profile for specialized sections
const specType = specConfig.spec_type || 'service';
let profile = null;
try {
  profile = Read(`templates/profiles/${specType}-profile.md`);
} catch (e) { /* use base template only */ }
```

### Step 2: Architecture Analysis via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Generate technical architecture for the specified requirements.
Success: Complete component architecture, tech stack, and ADRs with justified decisions.

PRODUCT BRIEF (summary):
${productBrief.slice(0, 3000)}

REQUIREMENTS:
${requirements.slice(0, 5000)}

${discoveryContext ? `EXISTING CODEBASE:
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join('; ') || 'none'}
- Architecture constraints: ${discoveryContext.architecture_constraints?.slice(0,3).join('; ') || 'none'}
` : ''}

TASK:
- Define system architecture style (monolith, microservices, serverless, etc.) with justification
- Identify core components and their responsibilities
- Create component interaction diagram (Mermaid graph TD format)
- Specify technology stack: languages, frameworks, databases, infrastructure
- Generate 2-4 Architecture Decision Records (ADRs):
  - Each ADR: context, decision, 2-3 alternatives with pros/cons, consequences
  - Focus on: data storage, API design, authentication, key technical choices
- Define data model: key entities and relationships (Mermaid erDiagram format)
- Identify security architecture: auth, authorization, data protection
- List API endpoints (high-level)
${discoveryContext ? '- Map new components to existing codebase modules' : ''}
- For each core entity with a lifecycle, create an ASCII state machine diagram showing:
  - All states and transitions
  - Trigger events for each transition
  - Side effects of transitions
  - Error states and recovery paths
- Define a Configuration Model: list all configurable fields with name, type, default value, constraint, and description
- Define Error Handling strategy:
  - Classify errors (transient/permanent/degraded)
  - Per-component error behavior using RFC 2119 keywords
  - Recovery mechanisms
- Define Observability requirements:
  - Key metrics (name, type: counter/gauge/histogram, labels)
  - Structured log format and key log events
  - Health check endpoints
${profile ? `
SPEC TYPE PROFILE REQUIREMENTS (${specType}):
${profile}
` : ''}
${glossary ? `
TERMINOLOGY GLOSSARY (use consistently):
${JSON.stringify(glossary.terms, null, 2)}
` : ''}

MODE: analysis
EXPECTED: Complete architecture with: style justification, component diagram, tech stack table, ADRs, data model, security controls, API overview
CONSTRAINTS: Architecture must support all Must-have requirements. Prefer proven technologies over cutting-edge.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 3: Architecture Review via Codex CLI

```javascript
// After receiving Gemini analysis, challenge it with Codex
Bash({
  command: `ccw cli -p "PURPOSE: Critical review of proposed architecture - identify weaknesses and risks.
Success: Actionable feedback with specific concerns and improvement suggestions.

PROPOSED ARCHITECTURE:
${geminiArchitectureOutput.slice(0, 5000)}

REQUIREMENTS CONTEXT:
${requirements.slice(0, 2000)}

TASK:
- Challenge each ADR: are the alternatives truly the best options?
- Identify scalability bottlenecks in the component design
- Assess security gaps: authentication, authorization, data protection
- Evaluate technology choices: maturity, community support, fit
- Check for over-engineering or under-engineering
- Verify architecture covers all Must-have requirements
- Rate overall architecture quality: 1-5 with justification

MODE: analysis
EXPECTED: Architecture review with: per-ADR feedback, scalability concerns, security gaps, technology risks, quality rating
CONSTRAINTS: Be genuinely critical, not just validating. Focus on actionable improvements.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 4: Interactive ADR Decisions (Optional)

```javascript
if (!autoMode) {
  // Present ADRs with review feedback to user
  // For each ADR where review raised concerns:
  AskUserQuestion({
    questions: [
      {
        question: "Architecture review raised concerns. How should we proceed?",
        header: "ADR Review",
        multiSelect: false,
        options: [
          { label: "Accept as-is", description: "Architecture is sound, proceed" },
          { label: "Incorporate feedback", description: "Adjust ADRs based on review" },
          { label: "Simplify", description: "Reduce complexity, fewer components" }
        ]
      }
    ]
  });
  // Apply user decisions to architecture
}
```

### Step 5: Codebase Integration Mapping (Conditional)

```javascript
if (specConfig.has_codebase && discoveryContext) {
  // Map new architecture components to existing code
  const integrationMapping = discoveryContext.relevant_files.map(f => ({
    new_component: "...", // matched from architecture
    existing_module: f.path,
    integration_type: "Extend|Replace|New",
    notes: f.rationale
  }));
  // Include in architecture document
}
```

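The `new_component` match above is left to agent judgment. For illustration only, here is a naive keyword-based matcher; it assumes component names can be slugified and searched for in file paths, which real codebases often violate, so treat it as a sketch of the entry shape rather than a working mapper.

```javascript
// Illustrative sketch: match architecture component names against discovered
// file paths by slugified substring, producing integrationMapping entries.
function mapComponentsToFiles(components, relevantFiles) {
  return relevantFiles.map((f) => {
    const hit = components.find((c) =>
      f.path.toLowerCase().includes(c.toLowerCase().replace(/\s+/g, '-'))
    );
    return {
      new_component: hit || 'unmapped',
      existing_module: f.path,
      integration_type: hit ? 'Extend' : 'New',  // "Replace" needs human judgment
      notes: f.rationale
    };
  });
}

const mapping = mapComponentsToFiles(
  ['Queue Scheduler'],
  [{ path: 'src/queue-scheduler/store.ts', rationale: 'existing scheduler state' }]
);
// mapping[0] -> { new_component: "Queue Scheduler", integration_type: "Extend", ... }
```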
### Step 6: Generate architecture/ directory

```javascript
const template = Read('templates/architecture-doc.md');

// Create architecture directory
Bash(`mkdir -p "${workDir}/architecture"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI outputs into structured ADRs
const adrs = parseADRs(geminiArchitectureOutput, codexReviewOutput); // [{id, slug, title, ...}]

// Step 6a: Write individual ADR-*.md files (one per decision)
adrs.forEach(adr => {
  // Use ADR-NNN-{slug}.md template from templates/architecture-doc.md
  // Fill: id, title, status, context, decision, alternatives, consequences, traces
  Write(`${workDir}/architecture/ADR-${adr.id}-${adr.slug}.md`, adrContent);
});

// Step 6b: Write _index.md (overview + components + tech stack + links to ADRs)
// Use _index.md template from templates/architecture-doc.md
// Fill: system overview, component diagram, tech stack, ADR links table,
//       data model, API design, security controls, infrastructure, codebase integration
Write(`${workDir}/architecture/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 4,
  name: "architecture",
  output_dir: "architecture/",
  output_index: "architecture/_index.md",
  file_count: adrs.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

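As a hypothetical sketch of what filling one ADR file could look like: the frontmatter field names below are illustrative (the authoritative schema lives in `templates/architecture-doc.md`), but the shape mirrors the `// Fill:` comment above.

```javascript
// Hypothetical renderer for a single ADR file body: YAML frontmatter
// followed by Context and Decision sections. Field names are assumptions,
// not the template's exact schema.
function renderAdrFile(adr, timestamp, status) {
  const frontmatter = [
    '---',
    `id: ADR-${adr.id}`,
    `title: ${adr.title}`,
    `status: ${status}`,
    `generated_at: ${timestamp}`,
    `traces: [${(adr.traces || []).join(', ')}]`,
    '---'
  ].join('\n');
  return `${frontmatter}\n\n## Context\n${adr.context}\n\n## Decision\n${adr.decision}\n`;
}

const body = renderAdrFile(
  { id: '001', title: 'Use SQLite', context: '...', decision: '...', traces: ['REQ-001'] },
  '2025-01-01T00:00:00Z',
  'draft'
);
// body starts with "---\nid: ADR-001"
```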
## Output

- **Directory**: `architecture/`
  - `_index.md` — Overview, component diagram, tech stack, data model, security, links
  - `ADR-NNN-{slug}.md` — Individual Architecture Decision Record (per ADR)
- **Format**: Markdown with YAML frontmatter, cross-linked to requirements via relative paths

## Quality Checklist

- [ ] Component diagram present in `_index.md` (Mermaid or ASCII)
- [ ] Tech stack specified (languages, frameworks, key libraries)
- [ ] >= 1 ADR file with alternatives considered
- [ ] Each ADR file lists >= 2 options
- [ ] `_index.md` ADR table links to all individual ADR files
- [ ] Integration points identified
- [ ] Data model described
- [ ] Codebase mapping present (if has_codebase)
- [ ] All files have valid YAML frontmatter
- [ ] ADR files link back to requirement files

## Next Phase

Proceed to [Phase 5: Epics & Stories](05-epics-stories.md) with the generated `architecture/` directory.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 4,
  "status": "complete",
  "files_created": ["architecture/_index.md", "architecture/ADR-001-*.md", "..."],
  "file_count": 0,
  "codex_review_rating": 0,
  "quality_notes": ["list of quality concerns or review feedback addressed"],
  "key_decisions": ["architecture style choice", "key ADR decisions"]
}
```

The orchestrator will:

1. Validate that `architecture/` directory exists with `_index.md` and ADR files
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

241 .codex/skills/spec-generator/phases/05-epics-stories.md Normal file
@@ -0,0 +1,241 @@

# Phase 5: Epics & Stories

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Decompose the specification into executable Epics and Stories with dependency mapping.

## Objective

- Group requirements into 3-7 logical Epics
- Tag MVP subset of Epics
- Generate 2-5 Stories per Epic in standard user story format
- Map cross-Epic dependencies (Mermaid diagram)
- Generate the `epics/` directory using the template

## Input

- Dependency: `{workDir}/requirements/_index.md`, `{workDir}/architecture/_index.md` (and individual files)
- Reference: `{workDir}/product-brief.md`
- Config: `{workDir}/spec-config.json`
- Template: `templates/epics-template.md` (directory structure: `_index.md` + `EPIC-*.md`)

## Execution Steps

### Step 1: Load Phase 2-4 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirements = Read(`${workDir}/requirements/_index.md`);
const architecture = Read(`${workDir}/architecture/_index.md`);

let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }
```

### Step 2: Epic Decomposition via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Decompose requirements into executable Epics and Stories for implementation planning.
Success: 3-7 Epics with prioritized Stories, dependency map, and MVP subset clearly defined.

PRODUCT BRIEF (summary):
${productBrief.slice(0, 2000)}

REQUIREMENTS:
${requirements.slice(0, 5000)}

ARCHITECTURE (summary):
${architecture.slice(0, 3000)}

TASK:
- Group requirements into 3-7 logical Epics:
  - Each Epic: EPIC-NNN ID, title, description, priority (Must/Should/Could)
  - Group by functional domain or user journey stage
  - Tag MVP Epics (minimum set for initial release)

- For each Epic, generate 2-5 Stories:
  - Each Story: STORY-{EPIC}-NNN ID, title
  - User story format: As a [persona], I want [action] so that [benefit]
  - 2-4 acceptance criteria per story (testable)
  - Relative size estimate: S/M/L/XL
  - Trace to source requirement(s): REQ-NNN

- Create dependency map:
  - Cross-Epic dependencies (which Epics block others)
  - Mermaid graph LR format
  - Recommended execution order with rationale

- Define MVP:
  - Which Epics are in MVP
  - MVP definition of done (3-5 criteria)
  - What is explicitly deferred post-MVP

MODE: analysis
EXPECTED: Structured output with: Epic list (ID, title, priority, MVP flag), Stories per Epic (ID, user story, AC, size, trace), dependency Mermaid diagram, execution order, MVP definition
CONSTRAINTS:
- Every Must-have requirement must appear in at least one Story
- Stories must be small enough to implement independently (no XL stories in MVP)
- Dependencies should be minimized across Epics
${glossary ? `- Maintain terminology consistency with glossary: ${glossary.terms.map(t => t.term).join(', ')}` : ''}
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 2.5: Codex Epics Review

After receiving Gemini decomposition results, validate epic/story quality via Codex CLI:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Critical review of epic/story decomposition - validate coverage, sizing, and dependency structure.
Success: Actionable feedback on epic quality with specific issues identified.

GENERATED EPICS AND STORIES:
${geminiEpicsOutput.slice(0, 5000)}

REQUIREMENTS (Must-Have):
${mustHaveRequirements.slice(0, 2000)}

TASK:
- Verify Must-Have requirement coverage: every Must requirement appears in at least one Story
- Check MVP story sizing: no XL stories in MVP epics (too large to implement independently)
- Validate dependency graph: no circular dependencies between Epics
- Assess acceptance criteria: every Story AC is specific and testable
- Verify traceability: Stories trace back to specific REQ-NNN IDs
- Check Epic granularity: 3-7 epics (not too few/many), 2-5 stories each
- Rate overall decomposition quality: 1-5 with justification

MODE: analysis
EXPECTED: Epic review with: coverage gaps, oversized stories, dependency issues, traceability gaps, quality rating
CONSTRAINTS: Focus on issues that would block execution planning. Be specific about which Story/Epic has problems.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Wait for Codex review result
// Integrate feedback into epics before writing files:
// - Add missing Stories for uncovered Must requirements
// - Split XL stories in MVP epics into smaller units
// - Fix dependency cycles identified by Codex
// - Improve vague acceptance criteria
```

### Step 3: Interactive Validation (Optional)

```javascript
if (!autoMode) {
  // Present Epic overview table and dependency diagram
  AskUserQuestion({
    questions: [
      {
        question: "Review the Epic breakdown. Any adjustments needed?",
        header: "Epics",
        multiSelect: false,
        options: [
          { label: "Looks good", description: "Epic structure is appropriate" },
          { label: "Merge epics", description: "Some epics should be combined" },
          { label: "Split epic", description: "An epic is too large, needs splitting" },
          { label: "Adjust MVP", description: "Change which epics are in MVP" }
        ]
      }
    ]
  });
  // Apply user adjustments
}
```
### Step 4: Generate epics/ directory

```javascript
const template = Read('templates/epics-template.md');

// Create epics directory
Bash(`mkdir -p "${workDir}/epics"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI output into structured Epics
const epicsList = parseEpics(cliOutput); // [{id, slug, title, priority, mvp, size, stories[], reqs[], adrs[], deps[]}]

// Step 4a: Write individual EPIC-*.md files (one per Epic, stories included)
epicsList.forEach(epic => {
  // Use the EPIC-NNN-{slug}.md template from templates/epics-template.md
  // Fill: id, title, priority, mvp, size, description, requirements links,
  //       architecture links, dependency links, stories with user stories + AC
  Write(`${workDir}/epics/EPIC-${epic.id}-${epic.slug}.md`, epicContent);
});

// Step 4b: Write _index.md (overview + dependency map + MVP scope + traceability)
// Use the _index.md template from templates/epics-template.md
// Fill: epic overview table (with links), dependency Mermaid diagram,
//       execution order, MVP scope, traceability matrix, estimation summary
Write(`${workDir}/epics/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 5,
  name: "epics-stories",
  output_dir: "epics/",
  output_index: "epics/_index.md",
  file_count: epicsList.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
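The `parseEpics` call above is left abstract. A minimal sketch, assuming the CLI emits one `## EPIC-NNN: Title` heading per epic with `### STORY-NNN:` subheadings — the heading format is an assumption, so adapt the regexes to the actual CLI output:

```javascript
// Sketch of parseEpics: extract epics and their stories from CLI markdown output.
// Assumes "## EPIC-NNN: Title" and "### STORY-NNN: Title" headings (hypothetical format).
function parseEpics(cliOutput) {
  const epics = [];
  let current = null;
  for (const line of cliOutput.split('\n')) {
    const epicMatch = line.match(/^## EPIC-(\d+):\s*(.+)$/);
    if (epicMatch) {
      const title = epicMatch[2].trim();
      current = {
        id: epicMatch[1],
        title,
        // derive a filesystem-safe slug from the title
        slug: title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/(^-|-$)/g, ''),
        stories: []
      };
      epics.push(current);
      continue;
    }
    const storyMatch = line.match(/^### STORY-(\d+):\s*(.+)$/);
    if (storyMatch && current) {
      current.stories.push({ id: storyMatch[1], title: storyMatch[2].trim() });
    }
  }
  return epics;
}
```

Priority, MVP flag, size, and trace fields would be parsed the same way from whatever markers the prompt asks the CLI to emit.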
## Output

- **Directory**: `epics/`
  - `_index.md` — Overview table, dependency map, MVP scope, traceability matrix, links
  - `EPIC-NNN-{slug}.md` — Individual Epic with Stories (one file per Epic)
- **Format**: Markdown with YAML frontmatter, cross-linked to requirements and architecture via relative paths

## Quality Checklist

- [ ] 3-7 Epic files with EPIC-NNN IDs
- [ ] >= 1 Epic tagged as MVP in frontmatter
- [ ] 2-5 Stories per Epic file
- [ ] Stories use the "As a... I want... So that..." format
- [ ] `_index.md` has a cross-Epic dependency map (Mermaid)
- [ ] `_index.md` links to all individual Epic files
- [ ] Relative sizing (S/M/L/XL) per Story
- [ ] Epic files link to requirement files and ADR files
- [ ] All files have valid YAML frontmatter

## Next Phase

Proceed to [Phase 6: Readiness Check](06-readiness-check.md) to validate the complete specification package.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 5,
  "status": "complete",
  "files_created": ["epics/_index.md", "epics/EPIC-001-*.md", "..."],
  "file_count": 0,
  "codex_review_integrated": true,
  "mvp_epic_count": 0,
  "total_story_count": 0,
  "quality_notes": ["list of quality concerns or Codex feedback items addressed"],
  "key_decisions": ["MVP scope decisions", "dependency resolution choices"]
}
```

The orchestrator will:
1. Validate that the `epics/` directory exists with `_index.md` and EPIC files
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

172
.codex/skills/spec-generator/phases/06-5-auto-fix.md
Normal file
@@ -0,0 +1,172 @@
# Phase 6.5: Auto-Fix

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent when triggered by the orchestrator after Phase 6 identifies issues. The agent reads this file for instructions, applies fixes to affected documents, and returns a JSON summary.

Automatically repair specification issues identified in the Phase 6 Readiness Check.

## Objective

- Parse readiness-report.md to extract Error and Warning items
- Group issues by originating Phase (2-5)
- Re-generate affected sections with error context injected into CLI prompts
- Re-run Phase 6 validation after fixes

## Input

- Dependency: `{workDir}/readiness-report.md` (Phase 6 output)
- Config: `{workDir}/spec-config.json` (with iteration_count)
- All Phase 2-5 outputs

## Execution Steps

### Step 1: Parse Readiness Report

```javascript
const readinessReport = Read(`${workDir}/readiness-report.md`);
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));

// Load glossary for terminology consistency during fixes
let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }

// Extract issues from the readiness report:
// parse Error- and Warning-severity items and group them by originating phase:
//   Phase 2 issues: vision, problem statement, scope, personas
//   Phase 3 issues: requirements, acceptance criteria, priority, traceability
//   Phase 4 issues: architecture, ADRs, tech stack, data model, state machine
//   Phase 5 issues: epics, stories, dependencies, MVP scope

const issuesByPhase = {
  2: [], // product brief issues
  3: [], // requirements issues
  4: [], // architecture issues
  5: []  // epics issues
};

// Parse structured issues from the report.
// Each issue: { severity: "Error"|"Warning", description: "...", location: "file:section" }

// Map phase numbers to output files
const phaseOutputFile = {
  2: 'product-brief.md',
  3: 'requirements/_index.md',
  4: 'architecture/_index.md',
  5: 'epics/_index.md'
};
```
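The parsing step above can be sketched concretely. This assumes the readiness report lists issues as `- [Error] description (at file:section)` and that the leading path segment identifies the originating phase — both the line format and the file-to-phase mapping are assumptions to adjust against the actual report:

```javascript
// Map a location's path prefix to the phase that owns it (assumed convention).
const PHASE_BY_FILE = {
  'product-brief.md': 2,
  'requirements': 3,
  'architecture': 4,
  'epics': 5
};

// Extract "- [Error|Warning] description (at location)" lines and group by phase.
function parseIssues(reportText) {
  const issuesByPhase = { 2: [], 3: [], 4: [], 5: [] };
  for (const line of reportText.split('\n')) {
    const m = line.match(/^- \[(Error|Warning)\]\s+(.+)\s+\(at ([^)]+)\)/);
    if (!m) continue;
    const issue = { severity: m[1], description: m[2], location: m[3] };
    const phase = Object.entries(PHASE_BY_FILE)
      .find(([prefix]) => issue.location.startsWith(prefix))?.[1];
    if (phase) issuesByPhase[phase].push(issue);
  }
  return issuesByPhase;
}
```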
### Step 2: Fix Affected Phases (Sequential)

For each phase with issues (in order 2 -> 3 -> 4 -> 5):

```javascript
for (const [phase, issues] of Object.entries(issuesByPhase)) {
  if (issues.length === 0) continue;

  const errorContext = issues.map(i => `[${i.severity}] ${i.description} (at ${i.location})`).join('\n');

  // Read the current phase output
  const currentOutput = Read(`${workDir}/${phaseOutputFile[phase]}`);

  Bash({
    command: `ccw cli -p "PURPOSE: Fix specification issues identified in readiness check for Phase ${phase}.
Success: All listed issues resolved while maintaining consistency with other documents.

CURRENT DOCUMENT:
${currentOutput.slice(0, 5000)}

ISSUES TO FIX:
${errorContext}

${glossary ? `GLOSSARY (maintain consistency):
${JSON.stringify(glossary.terms, null, 2)}` : ''}

TASK:
- Address each listed issue specifically
- Maintain all existing content that is not flagged
- Ensure terminology consistency with glossary
- Preserve YAML frontmatter and cross-references
- Use RFC 2119 keywords for behavioral requirements
- Increment the document version number

MODE: analysis
EXPECTED: Corrected document content addressing all listed issues
CONSTRAINTS: Minimal changes - only fix flagged issues, do not restructure unflagged sections
" --tool gemini --mode analysis`,
    run_in_background: true
  });

  // Wait for the result, apply fixes to the document,
  // and update the document version in its frontmatter.
}
```
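The "update the document version" step is mechanical once the frontmatter convention is fixed. A minimal sketch, assuming the integer `version: N` field written by earlier phases:

```javascript
// Bump the integer `version:` field in a document's YAML frontmatter.
function incrementVersion(content) {
  return content.replace(/^version:\s*(\d+)\s*$/m,
    (_, n) => `version: ${Number(n) + 1}`);
}
```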
### Step 3: Update State

```javascript
specConfig.phasesCompleted.push({
  phase: 6.5,
  name: "auto-fix",
  iteration: specConfig.iteration_count,
  phases_fixed: Object.keys(issuesByPhase).filter(p => issuesByPhase[p].length > 0),
  completed_at: new Date().toISOString()
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

### Step 4: Re-run Phase 6 Validation

```javascript
// Re-execute Phase 6: Readiness Check.
// This creates a new readiness-report.md.
// If it still fails and iteration_count < 2: loop back to Step 1.
// If it passes or iteration_count >= 2: proceed to handoff.
```

## Output

- **Updated**: Phase 2-5 documents (only the affected ones)
- **Updated**: `spec-config.json` (iteration tracking)
- **Triggers**: Phase 6 re-validation

## Quality Checklist

- [ ] All Error-severity issues addressed
- [ ] Warning-severity issues attempted (best effort)
- [ ] Document versions incremented for modified files
- [ ] Terminology consistency maintained
- [ ] Cross-references still valid after fixes
- [ ] Iteration count not exceeded (max 2)

## Next Phase

Re-run [Phase 6: Readiness Check](06-readiness-check.md) to validate the fixes.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 6.5,
  "status": "complete",
  "files_modified": ["list of files that were updated"],
  "issues_fixed": {
    "errors": 0,
    "warnings": 0
  },
  "quality_notes": ["list of fix decisions and remaining concerns"],
  "phases_touched": [2, 3, 4, 5]
}
```

The orchestrator will:
1. Validate that the listed files were actually modified (check the version increment)
2. Update `spec-config.json` iteration tracking
3. Re-trigger Phase 6 validation

581
.codex/skills/spec-generator/phases/06-readiness-check.md
Normal file
@@ -0,0 +1,581 @@
# Phase 6: Readiness Check

Validate the complete specification package, generate a quality report and executive summary, and provide execution handoff options.

## Objective

- Cross-document validation: completeness, consistency, traceability, depth
- Generate quality scores per dimension
- Produce readiness-report.md with an issue list and traceability matrix
- Produce spec-summary.md as a one-page executive summary
- Update all document frontmatter to `status: complete`
- Present handoff options to execution workflows

## Input

- All Phase 2-5 outputs: `product-brief.md`, `requirements/_index.md` (+ `REQ-*.md`, `NFR-*.md`), `architecture/_index.md` (+ `ADR-*.md`), `epics/_index.md` (+ `EPIC-*.md`)
- Config: `{workDir}/spec-config.json`
- Reference: `specs/quality-gates.md`

## Execution Steps

### Step 1: Load All Documents

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirementsIndex = Read(`${workDir}/requirements/_index.md`);
const architectureIndex = Read(`${workDir}/architecture/_index.md`);
const epicsIndex = Read(`${workDir}/epics/_index.md`);
const qualityGates = Read('specs/quality-gates.md');

// Load individual files for deep validation
const reqFiles = Glob(`${workDir}/requirements/REQ-*.md`);
const nfrFiles = Glob(`${workDir}/requirements/NFR-*.md`);
const adrFiles = Glob(`${workDir}/architecture/ADR-*.md`);
const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
```
### Step 2: Cross-Document Validation via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Validate specification package for completeness, consistency, traceability, and depth.
Success: Comprehensive quality report with scores, issues, and traceability matrix.

DOCUMENTS TO VALIDATE:

=== PRODUCT BRIEF ===
${productBrief.slice(0, 3000)}

=== REQUIREMENTS INDEX (${reqFiles.length} REQ + ${nfrFiles.length} NFR files) ===
${requirementsIndex.slice(0, 3000)}

=== ARCHITECTURE INDEX (${adrFiles.length} ADR files) ===
${architectureIndex.slice(0, 2500)}

=== EPICS INDEX (${epicFiles.length} EPIC files) ===
${epicsIndex.slice(0, 2500)}

QUALITY CRITERIA (from quality-gates.md):
${qualityGates.slice(0, 2000)}

TASK:
Perform 4-dimension validation:

1. COMPLETENESS (25%):
   - All required sections present in each document?
   - All template fields filled with substantive content?
   - Score 0-100 with specific gaps listed

2. CONSISTENCY (25%):
   - Terminology uniform across documents?
   - Terminology glossary compliance: all core terms used consistently per glossary.json definitions?
   - No synonym drift (e.g., "user" vs "client" vs "consumer" for the same concept)?
   - User personas consistent?
   - Scope containment: PRD requirements do not exceed the product brief's defined scope?
   - Non-Goals respected: no requirement or story contradicts explicit Non-Goals?
   - Tech stack references match between architecture and epics?
   - Score 0-100 with inconsistencies listed

3. TRACEABILITY (25%):
   - Every goal has >= 1 requirement?
   - Every Must requirement has architecture coverage?
   - Every Must requirement appears in >= 1 story?
   - ADR choices reflected in epics?
   - Build traceability matrix: Goal -> Requirement -> Architecture -> Epic/Story
   - Score 0-100 with orphan items listed

4. DEPTH (25%):
   - Acceptance criteria specific and testable?
   - Architecture decisions justified with alternatives?
   - Stories estimable by the dev team?
   - Score 0-100 with vague areas listed

ALSO:
- List all issues found, classified as Error/Warning/Info
- Generate overall weighted score
- Determine gate: Pass (>=80) / Review (60-79) / Fail (<60)

MODE: analysis
EXPECTED: JSON-compatible output with: dimension scores, overall score, gate, issues list (severity + description + location), traceability matrix
CONSTRAINTS: Be thorough but fair. Focus on actionable issues.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```
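The gate rule in the prompt above (four dimensions at 25% each; Pass >= 80, Review 60-79, Fail < 60) can also be applied mechanically once the CLI returns dimension scores, as a sanity check on the CLI's own gate decision:

```javascript
// Apply the equal-weight scoring rule and gate thresholds described above.
function computeGate(scores) {
  // scores: { completeness, consistency, traceability, depth }, each 0-100
  const overall = (scores.completeness + scores.consistency +
                   scores.traceability + scores.depth) / 4;
  const gate = overall >= 80 ? 'Pass' : overall >= 60 ? 'Review' : 'Fail';
  return { overall, gate };
}
```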
### Step 2b: Codex Technical Depth Review

Launch the Codex review in parallel with the Gemini validation for a deeper technical assessment:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Deep technical quality review of specification package - assess architectural rigor and implementation readiness.
Success: Technical quality assessment with specific actionable feedback on ADR quality, data model, security, and observability.

ARCHITECTURE INDEX:
${architectureIndex.slice(0, 3000)}

ADR FILES (summaries):
${adrFiles.map(f => Read(f).slice(0, 500)).join('\n---\n')}

REQUIREMENTS INDEX:
${requirementsIndex.slice(0, 2000)}

TASK:
- ADR Alternative Quality: Each ADR has >= 2 genuine alternatives with substantive pros/cons (not strawman options)
- Data Model Completeness: All entities referenced in requirements have field-level definitions with types and constraints
- Security Coverage: Authentication, authorization, data protection, and input validation addressed for all external interfaces
- Observability Specification: Metrics, logging, and health checks defined for service/platform types
- Error Handling: Error classification and recovery strategies defined per component
- Configuration Model: All configurable parameters documented with types, defaults, and constraints
- Rate each dimension 1-5 with specific gaps identified

MODE: analysis
EXPECTED: Technical depth review with: per-dimension scores (1-5), specific gaps, improvement recommendations, overall technical readiness assessment
CONSTRAINTS: Focus on gaps that would cause implementation ambiguity. Ignore cosmetic issues.
" --tool codex --mode analysis`,
  run_in_background: true
});

// The Codex result is merged with the Gemini result in Step 3
```
### Step 2c: Per-Requirement Verification

Iterate through all individual requirement files for fine-grained verification:

```javascript
// Load all requirement files
const reqFiles = Glob(`${workDir}/requirements/REQ-*.md`);
const nfrFiles = Glob(`${workDir}/requirements/NFR-*.md`);
const allReqFiles = [...reqFiles, ...nfrFiles];

// Load reference documents for cross-checking
const productBrief = Read(`${workDir}/product-brief.md`);
const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
const adrFiles = Glob(`${workDir}/architecture/ADR-*.md`);

// Read all epic and ADR content for coverage checks
const epicContents = epicFiles.map(f => ({ path: f, content: Read(f) }));
const adrContents = adrFiles.map(f => ({ path: f, content: Read(f) }));

// Per-requirement verification
const verificationResults = allReqFiles.map(reqFile => {
  const content = Read(reqFile);
  const reqId = extractReqId(content);       // e.g., REQ-001 or NFR-PERF-001
  const priority = extractPriority(content); // Must/Should/Could/Won't

  // Check 1: AC exists and is testable
  const hasAC = content.includes('- [ ]') || content.includes('Acceptance Criteria');
  const acTestable = !content.match(/should be (fast|good|reliable|secure)/i); // no vague AC

  // Check 2: Traces back to a Brief goal
  const tracesLinks = content.match(/product-brief\.md/);

  // Check 3: Must requirements have Story coverage (search EPIC files)
  const storyCoverage = priority !== 'Must' ? 'N/A' :
    (epicContents.some(e => e.content.includes(reqId)) ? 'Covered' : 'MISSING');

  // Check 4: Must requirements have architecture coverage (search ADR files + index)
  const archCoverage = priority !== 'Must' ? 'N/A' :
    ((adrContents.some(a => a.content.includes(reqId)) ||
      Read(`${workDir}/architecture/_index.md`).includes(reqId)) ? 'Covered' : 'MISSING');

  return {
    req_id: reqId,
    priority,
    ac_exists: hasAC ? 'Yes' : 'MISSING',
    ac_testable: acTestable ? 'Yes' : 'VAGUE',
    brief_trace: tracesLinks ? 'Yes' : 'MISSING',
    story_coverage: storyCoverage,
    arch_coverage: archCoverage,
    pass: hasAC && acTestable && Boolean(tracesLinks) &&
      (priority !== 'Must' || (storyCoverage === 'Covered' && archCoverage === 'Covered'))
  };
});

// Generate the Per-Requirement Verification table for readiness-report.md
const verificationTable = `
## Per-Requirement Verification

| Req ID | Priority | AC Exists | AC Testable | Brief Trace | Story Coverage | Arch Coverage | Status |
|--------|----------|-----------|-------------|-------------|----------------|---------------|--------|
${verificationResults.map(r =>
  `| ${r.req_id} | ${r.priority} | ${r.ac_exists} | ${r.ac_testable} | ${r.brief_trace} | ${r.story_coverage} | ${r.arch_coverage} | ${r.pass ? 'PASS' : 'FAIL'} |`
).join('\n')}

**Summary**: ${verificationResults.filter(r => r.pass).length}/${verificationResults.length} requirements pass all checks.
`;
```
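The `extractReqId` and `extractPriority` helpers used above can be sketched as frontmatter lookups. This assumes each requirement file carries `id:` and `priority:` fields in its YAML frontmatter — the field names are an assumption about how Phase 3 writes the files:

```javascript
// Pull the requirement ID (REQ-001 or NFR-PERF-001 style) from frontmatter.
function extractReqId(content) {
  const m = content.match(/^id:\s*((?:REQ|NFR)[A-Z-]*-\d+)\s*$/m);
  return m ? m[1] : null;
}

// Pull the MoSCoW priority from frontmatter.
function extractPriority(content) {
  const m = content.match(/^priority:\s*(Must|Should|Could|Won't)\s*$/m);
  return m ? m[1] : null;
}
```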
### Step 3: Generate readiness-report.md

```javascript
const frontmatterReport = `---
session_id: ${specConfig.session_id}
phase: 6
document_type: readiness-report
status: complete
generated_at: ${new Date().toISOString()}
stepsCompleted: ["load-all", "cross-validation", "codex-technical-review", "per-req-verification", "scoring", "report-generation"]
version: 1
dependencies:
- product-brief.md
- requirements/_index.md
- architecture/_index.md
- epics/_index.md
---`;

// Report content from the CLI validation output:
// - Quality Score Summary (4 dimensions + overall)
// - Gate Decision (Pass/Review/Fail)
// - Issue List (grouped by severity: Error, Warning, Info)
// - Traceability Matrix (Goal -> Req -> Arch -> Epic/Story)
// - Codex Technical Depth Review (per-dimension scores from Step 2b)
// - Per-Requirement Verification Table (from Step 2c)
// - Recommendations for improvement

Write(`${workDir}/readiness-report.md`, `${frontmatterReport}\n\n${reportContent}`);
```
### Step 4: Generate spec-summary.md

```javascript
const frontmatterSummary = `---
session_id: ${specConfig.session_id}
phase: 6
document_type: spec-summary
status: complete
generated_at: ${new Date().toISOString()}
stepsCompleted: ["synthesis"]
version: 1
dependencies:
- product-brief.md
- requirements/_index.md
- architecture/_index.md
- epics/_index.md
- readiness-report.md
---`;

// One-page executive summary:
// - Product Name & Vision (from product-brief.md)
// - Problem & Target Users (from product-brief.md)
// - Key Requirements count (Must/Should/Could, from requirements/_index.md)
// - Architecture Style & Tech Stack (from architecture/_index.md)
// - Epic Overview (count, MVP scope, from epics/_index.md)
// - Quality Score (from readiness-report.md)
// - Recommended Next Step
// - File manifest with links

Write(`${workDir}/spec-summary.md`, `${frontmatterSummary}\n\n${summaryContent}`);
```
### Step 5: Update All Document Status

```javascript
// Update the frontmatter status to 'complete' in all documents (single files + directories)

// product-brief.md is a single file
const singleFiles = ['product-brief.md'];
singleFiles.forEach(doc => {
  const content = Read(`${workDir}/${doc}`);
  Write(`${workDir}/${doc}`, content.replace(/status: draft/, 'status: complete'));
});

// Update all files in directories (index + individual files)
const dirFiles = [
  ...Glob(`${workDir}/requirements/*.md`),
  ...Glob(`${workDir}/architecture/*.md`),
  ...Glob(`${workDir}/epics/*.md`)
];
dirFiles.forEach(filePath => {
  const content = Read(filePath);
  if (content.includes('status: draft')) {
    Write(filePath, content.replace(/status: draft/, 'status: complete'));
  }
});

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 6,
  name: "readiness-check",
  output_file: "readiness-report.md",
  completed_at: new Date().toISOString()
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
### Step 6: Handoff Options

```javascript
AskUserQuestion({
  questions: [
    {
      question: "Specification package is complete. What would you like to do next?",
      header: "Next Step",
      multiSelect: false,
      options: [
        {
          label: "Execute via lite-plan",
          description: "Start implementing with /workflow-lite-plan, one Epic at a time"
        },
        {
          label: "Create roadmap",
          description: "Generate execution roadmap with /workflow:req-plan-with-file"
        },
        {
          label: "Full planning",
          description: "Detailed planning with /workflow-plan for the full scope"
        },
        {
          label: "Export Issues (Phase 7)",
          description: "Create issues per Epic with spec links and wave assignment"
        },
        {
          label: "Iterate & improve",
          description: "Re-run failed phases based on readiness report issues (max 2 iterations)"
        }
      ]
    }
  ]
});

// Based on the user selection, execute the corresponding handoff:
if (selection === "Execute via lite-plan") {
  // lite-plan accepts a text description directly.
  // Read the first MVP Epic from the individual EPIC-*.md files.
  const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
  const firstMvpFile = epicFiles.find(f => {
    const content = Read(f);
    return content.includes('mvp: true');
  });
  const epicContent = Read(firstMvpFile);
  const title = extractTitle(epicContent); // first # heading
  const description = extractSection(epicContent, "Description");
  Skill(skill="workflow-lite-plan", args=`"${title}: ${description}"`)
}
if (selection === "Full planning" || selection === "Create roadmap") {
  // === Bridge: Build a brainstorm_artifacts-compatible structure ===
  // Reads the directory-based outputs (individual files) and maps them to the
  // .brainstorming/ format for context-search-agent auto-discovery →
  // action-planning-agent consumption.

  // Step A: Read spec documents from directories
  const specSummary = Read(`${workDir}/spec-summary.md`);
  const productBrief = Read(`${workDir}/product-brief.md`);
  const requirementsIndex = Read(`${workDir}/requirements/_index.md`);
  const architectureIndex = Read(`${workDir}/architecture/_index.md`);
  const epicsIndex = Read(`${workDir}/epics/_index.md`);

  // Read individual EPIC files (already split — direct mapping to feature-specs)
  const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);

  // Step B: Build a structured description from spec-summary
  const structuredDesc = `GOAL: ${extractGoal(specSummary)}
SCOPE: ${extractScope(specSummary)}
CONTEXT: Generated from spec session ${specConfig.session_id}. Source: ${workDir}/`;

  // Step C: Create a WFS session (provides the session directory + .brainstorming/)
  Skill(skill="workflow:session:start", args=`--auto "${structuredDesc}"`)
  // → Produces sessionId (WFS-xxx) and a session directory at .workflow/active/{sessionId}/

  // Step D: Create .brainstorming/ bridge files
  const brainstormDir = `.workflow/active/${sessionId}/.brainstorming`;
  Bash(`mkdir -p "${brainstormDir}/feature-specs"`);

  // D.1: guidance-specification.md (highest priority — action-planning-agent reads it first)
  // Synthesized from spec-summary + product-brief + the architecture/requirements indexes
  Write(`${brainstormDir}/guidance-specification.md`, `
# ${specConfig.seed_analysis.problem_statement} - Confirmed Guidance Specification

**Source**: spec-generator session ${specConfig.session_id}
**Generated**: ${new Date().toISOString()}
**Spec Directory**: ${workDir}

## 1. Project Positioning & Goals
${extractSection(productBrief, "Vision")}
${extractSection(productBrief, "Goals")}

## 2. Requirements Summary
${extractSection(requirementsIndex, "Functional Requirements")}

## 3. Architecture Decisions
${extractSection(architectureIndex, "Architecture Decision Records")}
${extractSection(architectureIndex, "Technology Stack")}

## 4. Implementation Scope
${extractSection(epicsIndex, "Epic Overview")}
${extractSection(epicsIndex, "MVP Scope")}

## Feature Decomposition
${extractSection(epicsIndex, "Traceability Matrix")}

## Appendix: Source Documents
| Document | Path | Description |
|----------|------|-------------|
| Product Brief | ${workDir}/product-brief.md | Vision, goals, scope |
| Requirements | ${workDir}/requirements/ | _index.md + REQ-*.md + NFR-*.md |
| Architecture | ${workDir}/architecture/ | _index.md + ADR-*.md |
| Epics | ${workDir}/epics/ | _index.md + EPIC-*.md |
| Readiness Report | ${workDir}/readiness-report.md | Quality validation |
`);
  // D.2: feature-index.json (each EPIC file mapped to a Feature)
  // Path: feature-specs/feature-index.json (matches context-search-agent discovery).
  // Read directly from the individual EPIC-*.md files (no monolithic parsing needed).
  const features = epicFiles.map(epicFile => {
    const content = Read(epicFile);
    const fm = parseFrontmatter(content);            // extract YAML frontmatter
    const basename = path.basename(epicFile, '.md'); // EPIC-001-slug
    const epicNum = fm.id.replace('EPIC-', '');      // 001
    const slug = basename.replace(/^EPIC-\d+-/, ''); // slug
    return {
      id: `F-${epicNum}`,
      slug: slug,
      name: extractTitle(content),
      description: extractSection(content, "Description"),
      priority: fm.mvp ? "High" : "Medium",
      spec_path: `${brainstormDir}/feature-specs/F-${epicNum}-${slug}.md`,
      source_epic: fm.id,
      source_file: epicFile
    };
  });
  Write(`${brainstormDir}/feature-specs/feature-index.json`, JSON.stringify({
    version: "1.0",
    source: "spec-generator",
    spec_session: specConfig.session_id,
    features,
    cross_cutting_specs: []
  }, null, 2));
  // D.3: Feature-spec files — adapt directly from the individual EPIC-*.md files.
  // Since Epics are already individual documents, transform the format directly.
  // Filename pattern: F-{num}-{slug}.md (matches the context-search-agent glob F-*-*.md)
  features.forEach(feature => {
    const epicContent = Read(feature.source_file);
    Write(feature.spec_path, `
# Feature Spec: ${feature.source_epic} - ${feature.name}

**Source**: ${feature.source_file}
**Priority**: ${feature.priority === "High" ? "MVP" : "Post-MVP"}

## Description
${extractSection(epicContent, "Description")}

## Stories
${extractSection(epicContent, "Stories")}

## Requirements
${extractSection(epicContent, "Requirements")}

## Architecture
${extractSection(epicContent, "Architecture")}
`);
  });

  // Step E: Invoke the downstream workflow.
  // context-search-agent will auto-discover the .brainstorming/ files
  // → context-package.json.brainstorm_artifacts populated
  // → action-planning-agent loads guidance_specification (P1) + feature_index (P2)
  if (selection === "Full planning") {
    Skill(skill="workflow-plan", args=`"${structuredDesc}"`)
  } else {
    Skill(skill="workflow:req-plan-with-file", args=`"${extractGoal(specSummary)}"`)
  }
}
if (selection === "Export Issues (Phase 7)") {
|
||||
// Proceed to Phase 7: Issue Export
|
||||
// Read phases/07-issue-export.md and execute
|
||||
}
|
||||
|
||||
// If user selects "Other": Export only or return to specific phase
|
||||
|
||||
if (selection === "Iterate & improve") {
|
||||
// Check iteration count
|
||||
if (specConfig.iteration_count >= 2) {
|
||||
// Max iterations reached, force handoff
|
||||
// Present handoff options again without iterate
|
||||
return;
|
||||
}
|
||||
|
||||
// Update iteration tracking
|
||||
specConfig.iteration_count = (specConfig.iteration_count || 0) + 1;
|
||||
specConfig.iteration_history.push({
|
||||
iteration: specConfig.iteration_count,
|
||||
timestamp: new Date().toISOString(),
|
||||
readiness_score: overallScore,
|
||||
errors_found: errorCount,
|
||||
phases_to_fix: affectedPhases
|
||||
});
|
||||
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
|
||||
|
||||
// Proceed to Phase 6.5: Auto-Fix
|
||||
// Read phases/06-5-auto-fix.md and execute
|
||||
}
|
||||
```

#### Helper Functions Reference (pseudocode)

The following helper functions are used in the handoff bridge. They operate on markdown content from individual spec files:

```javascript
// Extract title from a markdown document (first # heading)
function extractTitle(markdown) {
  // Return the text after the first # heading (e.g., "# EPIC-001: Title" → "Title")
}

// Parse YAML frontmatter from markdown (between --- markers)
function parseFrontmatter(markdown) {
  // Return object with: id, priority, mvp, size, requirements, architecture, dependencies
}

// Extract GOAL/SCOPE from spec-summary frontmatter or ## sections
function extractGoal(specSummary) { /* Return the Vision/Goal line */ }
function extractScope(specSummary) { /* Return the Scope/MVP boundary */ }

// Extract a named ## section from a markdown document
function extractSection(markdown, sectionName) {
  // Return content between ## {sectionName} and next ## heading
}
```
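As one concrete reading of the pseudocode above, here is a minimal sketch of three of these helpers. It is not the canonical implementation; it assumes frontmatter is a flat block of single-line `key: value` pairs between `---` markers (no nested YAML) and that sections are delimited by flush-left `## ` headings.

```javascript
// Sketch only — assumes flat `key: value` frontmatter and flat `## ` sections.
function extractTitle(markdown) {
  // First "# ..." heading; "# EPIC-001: Title" → "Title", plain "# Title" → "Title"
  const m = markdown.match(/^#\s+(.+)$/m);
  if (!m) return "";
  const parts = m[1].split(/:\s*/);
  return parts.length > 1 ? parts.slice(1).join(": ") : m[1];
}

function parseFrontmatter(markdown) {
  // Capture everything between the leading --- markers
  const m = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return {};
  const fm = {};
  for (const line of m[1].split("\n")) {
    const kv = line.match(/^(\w+):\s*(.*)$/);
    if (!kv) continue;
    let value = kv[2].trim();
    if (value === "true") value = true;
    else if (value === "false") value = false;
    fm[kv[1]] = value;
  }
  return fm;
}

function extractSection(markdown, sectionName) {
  // Content between "## {sectionName}" and the next "## " heading (or EOF)
  const lines = markdown.split("\n");
  const start = lines.findIndex(l => l.trim() === `## ${sectionName}`);
  if (start === -1) return "";
  let end = lines.length;
  for (let i = start + 1; i < lines.length; i++) {
    if (lines[i].startsWith("## ")) { end = i; break; }
  }
  return lines.slice(start + 1, end).join("\n").trim();
}
```

A real implementation would also need `extractGoal`/`extractScope` and a proper YAML parser for list-valued keys such as `dependencies`.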

## Output

- **File**: `readiness-report.md` - Quality validation report
- **File**: `spec-summary.md` - One-page executive summary
- **Format**: Markdown with YAML frontmatter

## Quality Checklist

- [ ] All document directories validated (product-brief, requirements/, architecture/, epics/)
- [ ] All frontmatter parseable and valid (index + individual files)
- [ ] Cross-references checked (relative links between directories)
- [ ] Overall quality score calculated
- [ ] No unresolved Error-severity issues
- [ ] Traceability matrix generated
- [ ] spec-summary.md created
- [ ] All document statuses updated to 'complete' (all files in all directories)
- [ ] Handoff options presented

## Completion

This is the final phase. The specification package is ready for execution handoff.

### Output Files Manifest

| Path | Phase | Description |
|------|-------|-------------|
| `spec-config.json` | 1 | Session configuration and state |
| `discovery-context.json` | 1 | Codebase exploration (optional) |
| `product-brief.md` | 2 | Product brief with multi-perspective synthesis |
| `requirements/` | 3 | Directory: `_index.md` + `REQ-*.md` + `NFR-*.md` |
| `architecture/` | 4 | Directory: `_index.md` + `ADR-*.md` |
| `epics/` | 5 | Directory: `_index.md` + `EPIC-*.md` |
| `readiness-report.md` | 6 | Quality validation report |
| `spec-summary.md` | 6 | One-page executive summary |
.codex/skills/spec-generator/phases/07-issue-export.md (new file, 329 lines)
@@ -0,0 +1,329 @@

# Phase 7: Issue Export

Map specification Epics to issues, create them via `ccw issue create`, and generate an export report with spec document links.

> **Execution Mode: Inline**
> This phase runs in the main orchestrator context (not delegated to agent) for direct access to `ccw issue create` CLI and interactive handoff options.

## Objective

- Read all EPIC-*.md files from Phase 5 output
- Assign waves: MVP epics → wave-1, non-MVP → wave-2
- Create one issue per Epic via `ccw issue create`
- Map Epic dependencies to issue dependencies
- Generate issue-export-report.md with mapping table and spec links
- Present handoff options for execution

## Input

- Dependency: `{workDir}/epics/_index.md` (and individual `EPIC-*.md` files)
- Reference: `{workDir}/readiness-report.md`, `{workDir}/spec-config.json`
- Reference: `{workDir}/product-brief.md`, `{workDir}/requirements/_index.md`, `{workDir}/architecture/_index.md`

## Execution Steps

### Step 1: Load Epic Files

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
const epicsIndex = Read(`${workDir}/epics/_index.md`);

// Parse each Epic file
const epics = epicFiles.map(epicFile => {
  const content = Read(epicFile);
  const fm = parseFrontmatter(content);
  const title = extractTitle(content);
  const description = extractSection(content, "Description");
  const stories = extractSection(content, "Stories");
  const reqRefs = extractSection(content, "Requirements");
  const adrRefs = extractSection(content, "Architecture");
  const deps = fm.dependencies || [];

  return {
    file: epicFile,
    id: fm.id, // e.g., EPIC-001
    title,
    description,
    stories,
    reqRefs,
    adrRefs,
    priority: fm.priority,
    mvp: fm.mvp || false,
    dependencies: deps, // other EPIC IDs this depends on
    size: fm.size
  };
});
```

### Step 2: Wave Assignment

```javascript
const epicWaves = epics.map(epic => ({
  ...epic,
  wave: epic.mvp ? 1 : 2
}));

// Log wave assignment
const wave1 = epicWaves.filter(e => e.wave === 1);
const wave2 = epicWaves.filter(e => e.wave === 2);
// wave-1: MVP epics (must-have, core functionality)
// wave-2: Post-MVP epics (should-have, enhancements)
```

### Step 3: Issue Creation Loop

```javascript
const createdIssues = [];
const epicToIssue = {}; // EPIC-ID -> Issue ID mapping

for (const epic of epicWaves) {
  // Build issue JSON matching roadmap-with-file schema
  const issueData = {
    title: `[${specConfig.session_id}] ${epic.title}`,
    status: "pending",
    priority: epic.wave === 1 ? 2 : 3, // wave-1 = higher priority
    context: `## ${epic.title}

${epic.description}

## Stories
${epic.stories}

## Spec References
- Epic: ${epic.file}
- Requirements: ${epic.reqRefs}
- Architecture: ${epic.adrRefs}
- Product Brief: ${workDir}/product-brief.md
- Full Spec: ${workDir}/`,
    source: "text",
    tags: [
      "spec-generated",
      `spec:${specConfig.session_id}`,
      `wave-${epic.wave}`,
      epic.mvp ? "mvp" : "post-mvp",
      `epic:${epic.id}`
    ],
    extended_context: {
      notes: {
        session: specConfig.session_id,
        spec_dir: workDir,
        source_epic: epic.id,
        wave: epic.wave,
        depends_on_issues: [], // Filled in Step 4
        spec_documents: {
          product_brief: `${workDir}/product-brief.md`,
          requirements: `${workDir}/requirements/_index.md`,
          architecture: `${workDir}/architecture/_index.md`,
          epic: epic.file
        }
      }
    },
    lifecycle_requirements: {
      test_strategy: "acceptance",
      regression_scope: "affected",
      acceptance_type: "manual",
      commit_strategy: "per-epic"
    }
  };

  // Create issue via ccw issue create (pipe JSON via stdin to avoid shell-argument
  // escaping; note any single quotes inside the serialized JSON would still need escaping)
  const result = Bash(`echo '${JSON.stringify(issueData)}' | ccw issue create`);

  // Parse returned issue ID
  const issueId = JSON.parse(result).id; // e.g., ISS-20260308-001
  epicToIssue[epic.id] = issueId;

  createdIssues.push({
    epic_id: epic.id,
    epic_title: epic.title,
    issue_id: issueId,
    wave: epic.wave,
    priority: issueData.priority,
    mvp: epic.mvp
  });
}
```

### Step 4: Epic Dependency → Issue Dependency Mapping

```javascript
// Map EPIC dependencies to Issue dependencies
for (const epic of epicWaves) {
  if (epic.dependencies.length === 0) continue;

  const issueId = epicToIssue[epic.id];
  const depIssueIds = epic.dependencies
    .map(depEpicId => epicToIssue[depEpicId])
    .filter(Boolean);

  if (depIssueIds.length > 0) {
    // Update issue's extended_context.notes.depends_on_issues
    // This is informational — actual dependency enforcement is in execution phase
    // Note: ccw issue create already created the issue; dependency info is in the context
  }
}
```
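The mapping loop above reduces to a small pure function, which makes the non-blocking "skip unknown dependency" behavior from the Error Handling table easy to unit-test. A sketch, using the `epicToIssue` shape from Step 3 (the sample IDs are illustrative):

```javascript
// Resolve EPIC-level dependencies to issue IDs, silently dropping any
// dependency whose Epic was never mapped (the real flow logs a warning).
function resolveDependencies(epic, epicToIssue) {
  return (epic.dependencies || [])
    .map(depEpicId => epicToIssue[depEpicId])
    .filter(Boolean);
}

const epicToIssue = {
  "EPIC-001": "ISS-20260308-001",
  "EPIC-002": "ISS-20260308-002"
};
// EPIC-999 has no mapping, so it is dropped from the result
const deps = resolveDependencies(
  { id: "EPIC-003", dependencies: ["EPIC-001", "EPIC-999"] },
  epicToIssue
);
// deps → ["ISS-20260308-001"]
```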

### Step 5: Generate issue-export-report.md

```javascript
const timestamp = new Date().toISOString();

const reportContent = `---
session_id: ${specConfig.session_id}
phase: 7
document_type: issue-export-report
status: complete
generated_at: ${timestamp}
stepsCompleted: ["load-epics", "wave-assignment", "issue-creation", "dependency-mapping", "report-generation"]
version: 1
dependencies:
  - epics/_index.md
  - readiness-report.md
---

# Issue Export Report

## Summary

- **Session**: ${specConfig.session_id}
- **Issues Created**: ${createdIssues.length}
- **Wave 1 (MVP)**: ${wave1.length} issues
- **Wave 2 (Post-MVP)**: ${wave2.length} issues
- **Export Date**: ${timestamp}

## Issue Mapping

| Epic ID | Epic Title | Issue ID | Wave | Priority | MVP |
|---------|-----------|----------|------|----------|-----|
${createdIssues.map(i =>
  `| ${i.epic_id} | ${i.epic_title} | ${i.issue_id} | ${i.wave} | ${i.priority} | ${i.mvp ? 'Yes' : 'No'} |`
).join('\n')}

## Spec Document Links

| Document | Path | Description |
|----------|------|-------------|
| Product Brief | ${workDir}/product-brief.md | Vision, goals, scope |
| Requirements | ${workDir}/requirements/_index.md | Functional + non-functional requirements |
| Architecture | ${workDir}/architecture/_index.md | Components, ADRs, tech stack |
| Epics | ${workDir}/epics/_index.md | Epic/Story breakdown |
| Readiness Report | ${workDir}/readiness-report.md | Quality validation |
| Spec Summary | ${workDir}/spec-summary.md | Executive summary |

## Dependency Map

| Issue ID | Depends On |
|----------|-----------|
${createdIssues.map(i => {
  const epic = epicWaves.find(e => e.id === i.epic_id);
  const deps = (epic.dependencies || []).map(d => epicToIssue[d]).filter(Boolean);
  return `| ${i.issue_id} | ${deps.length > 0 ? deps.join(', ') : 'None'} |`;
}).join('\n')}

## Next Steps

1. **team-planex**: Execute all issues via coordinated team workflow
2. **Wave 1 only**: Execute MVP issues first (${wave1.length} issues)
3. **View issues**: Browse created issues via \`ccw issue list --tag spec:${specConfig.session_id}\`
4. **Manual review**: Review individual issues before execution
`;

Write(`${workDir}/issue-export-report.md`, reportContent);
```

### Step 6: Update spec-config.json

```javascript
specConfig.issue_ids = createdIssues.map(i => i.issue_id);
specConfig.issues_created = createdIssues.length;
specConfig.phasesCompleted.push({
  phase: 7,
  name: "issue-export",
  output_file: "issue-export-report.md",
  issues_created: createdIssues.length,
  wave_1_count: wave1.length,
  wave_2_count: wave2.length,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

### Step 7: Handoff Options

```javascript
AskUserQuestion({
  questions: [
    {
      question: `${createdIssues.length} issues created from ${epicWaves.length} Epics. What would you like to do next?`,
      header: "Next Step",
      multiSelect: false,
      options: [
        {
          label: "Execute via team-planex",
          description: `Execute all ${createdIssues.length} issues with coordinated team workflow`
        },
        {
          label: "Wave 1 only",
          description: `Execute ${wave1.length} MVP issues first`
        },
        {
          label: "View issues",
          description: "Browse created issues before deciding"
        },
        {
          label: "Done",
          description: "Export complete, handle manually"
        }
      ]
    }
  ]
});

// Based on user selection:
if (selection === "Execute via team-planex") {
  const issueIds = createdIssues.map(i => i.issue_id).join(',');
  Skill({ skill: "team-planex", args: `--issues ${issueIds}` });
}

if (selection === "Wave 1 only") {
  const wave1Ids = createdIssues.filter(i => i.wave === 1).map(i => i.issue_id).join(',');
  Skill({ skill: "team-planex", args: `--issues ${wave1Ids}` });
}

if (selection === "View issues") {
  Bash(`ccw issue list --tag spec:${specConfig.session_id}`);
}
```

## Output

- **File**: `issue-export-report.md` — Issue mapping table + spec links + next steps
- **Updated**: `.workflow/issues/issues.jsonl` — New issue entries appended
- **Updated**: `spec-config.json` — Phase 7 completion + issue IDs

## Quality Checklist

- [ ] All MVP Epics have corresponding issues created
- [ ] All non-MVP Epics have corresponding issues created
- [ ] Issue tags include `spec-generated` and `spec:{session_id}`
- [ ] Issue `extended_context.notes.spec_documents` paths are correct
- [ ] Wave assignment matches MVP status (MVP → wave-1, non-MVP → wave-2)
- [ ] Epic dependencies mapped to issue dependency references
- [ ] `issue-export-report.md` generated with mapping table
- [ ] `spec-config.json` updated with `issue_ids` and `issues_created`
- [ ] Handoff options presented

## Error Handling

| Error | Blocking? | Action |
|-------|-----------|--------|
| `ccw issue create` fails for one Epic | No | Log error, continue with remaining Epics, report partial creation |
| No EPIC files found | Yes | Error and return to Phase 5 |
| All issue creations fail | Yes | Error with CLI diagnostic, suggest manual creation |
| Dependency EPIC not found in mapping | No | Skip dependency link, log warning |

## Completion

Phase 7 is the final phase. The specification package has been fully converted to executable issues ready for team-planex or manual execution.