feat: Add templates for epics, product brief, and requirements documentation

- Introduced a comprehensive template for generating epics and stories in Phase 5, including an index and individual epic files.
- Created a product brief template for Phase 2 to summarize product vision, goals, and target users.
- Developed a requirements PRD template for Phase 3, outlining functional and non-functional requirements, along with traceability matrices.

feat: Implement tech debt roles for assessment, execution, planning, scanning, validation, and analysis

- Added roles for tech debt assessment, execution, planning, scanning, validation, and analysis, each with defined phases and processes for managing technical debt.
- Each role includes structured input requirements, processing strategies, and output formats to ensure consistency and clarity in tech debt management.
This commit is contained in:
catlog22
2026-03-07 13:32:04 +08:00
parent 7ee9b579fa
commit 29a1fea467
255 changed files with 14407 additions and 21120 deletions

View File

@@ -0,0 +1,140 @@
---
name: team-lifecycle-v4
description: Full lifecycle team skill with clean architecture. SKILL.md is a universal router — all roles read it. Beat model is coordinator-only. Structure is roles/ + specs/ + templates/. Triggers on "team lifecycle v4".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Agent(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Lifecycle v4
Orchestrate multi-agent software development: specification → planning → implementation → testing → review.
## Architecture
```
Skill(skill="team-lifecycle-v4", args="task description")
|
SKILL.md (this file) = Router
|
+--------------+--------------+
| |
no --role flag --role <name>
| |
Coordinator Worker
roles/coordinator/role.md roles/<name>/role.md
|
+-- analyze → dispatch → spawn workers → STOP
|
+-------+-------+-------+
v v v v
[team-worker agents, each loads roles/<role>/role.md]
```
## Role Registry
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | RESEARCH-* | false |
| writer | [roles/writer/role.md](roles/writer/role.md) | DRAFT-* | true |
| planner | [roles/planner/role.md](roles/planner/role.md) | PLAN-* | true |
| executor | [roles/executor/role.md](roles/executor/role.md) | IMPL-* | true |
| tester | [roles/tester/role.md](roles/tester/role.md) | TEST-* | false |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REVIEW-*, QUALITY-*, IMPROVE-* | false |
## Role Router
Parse `$ARGUMENTS`:
- Has `--role <name>` → Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` → Read `roles/coordinator/role.md`, execute entry router
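The router above can be sketched as follows (a minimal sketch; the function name and whitespace tokenization are illustrative, not part of the skill):

```python
def route(arguments: str) -> str:
    """Resolve which role spec to load, per the Role Router rules above."""
    tokens = arguments.split()
    if "--role" in tokens:
        idx = tokens.index("--role")
        if idx + 1 >= len(tokens):
            raise ValueError("--role requires a role name")
        return f"roles/{tokens[idx + 1]}/role.md"
    # No --role flag: the invocation is handled by the coordinator.
    return "roles/coordinator/role.md"
```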
## Shared Constants
- **Session prefix**: `TLV4`
- **Session path**: `.workflow/.team/TLV4-<slug>-<date>/`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
## Worker Spawn Template
Coordinator spawns workers using this template:
```
Agent({
subagent_type: "team-worker",
description: "Spawn <role> worker",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-lifecycle-v4/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).`
})
```
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | View execution status graph |
| `resume` / `continue` | Advance to next step |
| `revise <TASK-ID> [feedback]` | Revise specific task |
| `feedback <text>` | Inject feedback for revision |
| `recheck` | Re-run quality check |
| `improve [dimension]` | Auto-improve weakest dimension |
## Completion Action
When pipeline completes, coordinator presents:
```
AskUserQuestion({
questions: [{
question: "Pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up team" },
{ label: "Keep Active", description: "Keep session for follow-up work" },
{ label: "Export Results", description: "Export deliverables to target directory" }
]
}]
})
```
## Specs Reference
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
- [specs/quality-gates.md](specs/quality-gates.md) — Quality gate criteria and scoring
- [specs/knowledge-transfer.md](specs/knowledge-transfer.md) — Artifact and state transfer protocols
## Session Directory
```
.workflow/.team/TLV4-<slug>-<date>/
├── team-session.json # Session state + role registry
├── spec/ # Spec phase outputs
├── plan/ # Implementation plan + TASK-*.json
├── artifacts/ # All deliverables
├── wisdom/ # Cross-task knowledge
├── explorations/ # Shared explore cache
├── discussions/ # Discuss round records
└── .msg/ # Team message bus
```
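A sketch of creating this skeleton (folder names come from the tree above; the function name and the empty initial JSON are assumptions):

```python
from pathlib import Path

def init_session(root: str, slug: str, date: str) -> Path:
    """Create the TLV4 session directory skeleton shown above."""
    session = Path(root) / f"TLV4-{slug}-{date}"
    for sub in ("spec", "plan", "artifacts", "wisdom",
                "explorations", "discussions", ".msg"):
        (session / sub).mkdir(parents=True, exist_ok=True)
    # Session state + role registry; populated by the coordinator later.
    (session / "team-session.json").write_text("{}")
    return session
```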
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Role not found | Error with role registry |
| CLI tool fails | Worker fallback to direct implementation |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |

View File

@@ -0,0 +1,78 @@
---
role: analyst
prefix: RESEARCH
inner_loop: false
discuss_rounds: [DISCUSS-001]
message_types:
success: research_ready
error: error
---
# Analyst
Research and codebase exploration for context gathering.
## Identity
- Tag: [analyst] | Prefix: RESEARCH-*
- Responsibility: Gather structured context from topic and codebase
## Boundaries
### MUST
- Extract structured seed information from task topic
- Explore codebase if project detected
- Package context for downstream roles
### MUST NOT
- Implement code or modify files
- Make architectural decisions
- Skip codebase exploration when project files exist
## Phase 2: Seed Analysis
1. Read upstream artifacts via team_msg(operation="get_state")
2. Extract session folder from task description
3. Parse topic from task description
4. If the topic references a file (@path or .md/.txt) → read it
5. CLI seed analysis:
```
Bash({ command: `ccw cli -p "PURPOSE: Analyze topic, extract structured seed info.
TASK: • Extract problem statement • Identify target users • Determine domain
• List constraints • Identify 3-5 exploration dimensions
TOPIC: <topic-content>
MODE: analysis
EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], exploration_dimensions[]" --tool gemini --mode analysis`, run_in_background: false })
```
6. Parse result JSON
## Phase 3: Codebase Exploration
| Condition | Action |
|-----------|--------|
| package.json / Cargo.toml / pyproject.toml / go.mod exists | Explore |
| No project files | Skip (codebase_context = null) |
When project detected:
```
Bash({ command: `ccw cli -p "PURPOSE: Explore codebase for context
TASK: • Identify tech stack • Map architecture patterns • Document conventions • List integration points
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with: tech_stack[], architecture_patterns[], conventions[], integration_points[]" --tool gemini --mode analysis`, run_in_background: false })
```
## Phase 4: Context Packaging
1. Write spec-config.json → <session>/spec/
2. Write discovery-context.json → <session>/spec/
3. Inline Discuss (DISCUSS-001):
- Artifact: <session>/spec/discovery-context.json
- Perspectives: product, risk, coverage
4. Handle verdict per consensus protocol
5. Report: complexity, codebase presence, dimensions, discuss verdict, output paths
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI failure | Fallback to direct analysis |
| No project detected | Continue as new project |
| Topic too vague | Report with clarification questions |

View File

@@ -0,0 +1,56 @@
# Analyze Task
Parse user task -> detect capabilities -> build dependency graph -> design roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Capability | Prefix |
|----------|------------|--------|
| investigate, explore, research | analyst | RESEARCH |
| write, draft, document | writer | DRAFT |
| implement, build, code, fix | executor | IMPL |
| design, architect, plan | planner | PLAN |
| test, verify, validate | tester | TEST |
| analyze, review, audit | reviewer | REVIEW |
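The keyword table can be sketched as a capability detector (substring matching is a simplifying assumption; the real analysis may tokenize or weight signals differently):

```python
SIGNALS = {
    "analyst": ("investigate", "explore", "research"),
    "writer": ("write", "draft", "document"),
    "executor": ("implement", "build", "code", "fix"),
    "planner": ("design", "architect", "plan"),
    "tester": ("test", "verify", "validate"),
    "reviewer": ("analyze", "review", "audit"),
}

def detect_capabilities(task: str) -> list[str]:
    """Return roles whose signal keywords appear in the task description."""
    text = task.lower()
    return [role for role, kws in SIGNALS.items()
            if any(kw in text for kw in kws)]
```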
## Dependency Graph
Natural ordering tiers:
- Tier 0: analyst, planner (knowledge gathering)
- Tier 1: writer (creation requires context)
- Tier 2: executor (implementation requires plan/design)
- Tier 3: tester, reviewer (validation requires artifacts)
## Complexity Scoring
| Factor | Points |
|--------|--------|
| Per capability | +1 |
| Cross-domain | +2 |
| Parallel tracks | +1 per track |
| Serial depth > 3 | +1 |
Results: 1-3 Low, 4-6 Medium, 7+ High
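The scoring rules can be sketched as (parameter names are illustrative):

```python
def score_complexity(num_capabilities: int, cross_domain: bool,
                     parallel_tracks: int, serial_depth: int) -> tuple[int, str]:
    """Apply the factor table above and map the score to a level."""
    score = num_capabilities                  # +1 per capability
    score += 2 if cross_domain else 0         # cross-domain bonus
    score += parallel_tracks                  # +1 per parallel track
    score += 1 if serial_depth > 3 else 0     # deep serial chains
    level = "Low" if score <= 3 else ("Medium" if score <= 6 else "High")
    return score, level
```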
## Role Minimization
- Cap at 5 roles
- Merge overlapping capabilities
- Absorb trivial single-step roles
## Output
Write <session>/task-analysis.json:
```json
{
"task_description": "<original>",
"pipeline_type": "<spec-only|impl-only|full-lifecycle|...>",
"capabilities": [{ "name": "<cap>", "prefix": "<PREFIX>", "keywords": ["..."] }],
"dependency_graph": { "<TASK-ID>": { "role": "<role>", "blockedBy": ["..."], "priority": "P0|P1|P2" } },
"roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
"complexity": { "score": 0, "level": "Low|Medium|High" },
"needs_research": true
}
```

View File

@@ -0,0 +1,46 @@
# Dispatch Tasks
Create task chains from dependency graph with proper blockedBy relationships.
## Workflow
1. Read task-analysis.json -> extract dependency_graph
2. Read specs/pipelines.md -> get task registry for selected pipeline
3. Topological sort tasks (respect blockedBy)
4. Validate all owners exist in role registry (SKILL.md)
5. For each task (in order):
- TaskCreate with structured description (see template below)
- TaskUpdate with blockedBy + owner assignment
6. Update team-session.json with pipeline.tasks_total
7. Validate chain (no orphans, no cycles, all refs valid)
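Step 3's topological sort can be sketched with Kahn's algorithm over the blockedBy graph (a sketch; the graph shape mirrors task-analysis.json's dependency_graph):

```python
from collections import deque

def topo_sort(graph: dict[str, list[str]]) -> list[str]:
    """Order task ids so every blockedBy dependency comes first.

    graph maps task id -> list of blockedBy ids. Raises on cycles,
    which also satisfies the no-cycle check in step 7.
    """
    indegree = {t: len(deps) for t, deps in graph.items()}
    dependents = {t: [] for t in graph}
    for task, deps in graph.items():
        for dep in deps:
            dependents[dep].append(task)
    queue = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while queue:
        task = queue.popleft()
        order.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(graph):
        raise ValueError("dependency cycle detected")
    return order
```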
## Task Description Template
```
PURPOSE: <goal> | Success: <criteria>
TASK:
- <step 1>
- <step 2>
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <list>
- Key files: <list>
EXPECTED: <artifact path> + <quality criteria>
CONSTRAINTS: <scope limits>
---
InnerLoop: <true|false>
RoleSpec: .claude/skills/team-lifecycle-v4/roles/<role>/role.md
```
## InnerLoop Flag Rules
- true: Role has 2+ serial same-prefix tasks (writer: DRAFT-001->004)
- false: Role has 1 task, or tasks are parallel
## Dependency Validation
- No orphan tasks (all tasks have valid owner)
- No circular dependencies
- All blockedBy references exist
- Session reference in every task description
- RoleSpec reference in every task description
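The orphan and reference checks can be sketched as below (cycle detection is handled separately during topological sort; the dict shapes here are illustrative):

```python
def validate_chain(tasks: dict[str, dict], role_registry: set[str]) -> list[str]:
    """Return human-readable validation errors; an empty list means valid."""
    errors = []
    for tid, task in tasks.items():
        # No orphan tasks: every task must have a registered owner.
        if task["owner"] not in role_registry:
            errors.append(f"{tid}: owner {task['owner']!r} not in role registry")
        # All blockedBy references must point at existing tasks.
        for ref in task.get("blockedBy", []):
            if ref not in tasks:
                errors.append(f"{tid}: blockedBy reference {ref!r} does not exist")
    return errors
```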

View File

@@ -0,0 +1,98 @@
# Monitor Pipeline
Event-driven pipeline coordination. Beat model: coordinator wake -> process -> spawn -> STOP.
## Constants
- SPAWN_MODE: background
- ONE_STEP_PER_INVOCATION: true
- FAST_ADVANCE_AWARE: true
- WORKER_AGENT: team-worker
## Handler Router
| Source | Handler |
|--------|---------|
| Message contains [role-name] | handleCallback |
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## handleCallback
Worker completed. Process and advance.
1. Find matching worker by role in message
2. Check if progress update (inner loop) or final completion
3. Progress -> update session state, STOP
4. Completion -> mark task done, remove from active_workers
5. Check for checkpoints:
- QUALITY-001 -> display quality gate, pause for user commands
- PLAN-001 -> read plan.json complexity, create dynamic IMPL tasks per specs/pipelines.md routing
6. -> handleSpawnNext
## handleCheck
Read-only status report, then STOP.
Output:
```
[coordinator] Pipeline Status
[coordinator] Progress: <done>/<total> (<pct>%)
[coordinator] Active: <workers with elapsed time>
[coordinator] Ready: <pending tasks with resolved deps>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
## handleResume
1. No active workers -> handleSpawnNext
2. Has active -> check each status
- completed -> mark done
- in_progress -> still running
3. Some completed -> handleSpawnNext
4. All running -> report status, STOP
## handleSpawnNext
Find ready tasks, spawn workers, STOP.
1. Collect: completedSubjects, inProgressSubjects, readySubjects
2. No ready + work in progress -> report waiting, STOP
3. No ready + nothing in progress -> handleComplete
4. Has ready -> for each:
a. Check if inner loop role with active worker -> skip (worker picks up)
b. TaskUpdate -> in_progress
c. team_msg log -> task_unblocked
d. Spawn team-worker (see SKILL.md Spawn Template)
e. Add to active_workers
5. Update session, output summary, STOP
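Step 1's readiness collection can be sketched as (status values and dict shape are illustrative):

```python
def ready_tasks(tasks: dict[str, dict]) -> list[str]:
    """Pending tasks whose blockedBy dependencies are all completed."""
    done = {tid for tid, t in tasks.items() if t["status"] == "completed"}
    return sorted(tid for tid, t in tasks.items()
                  if t["status"] == "pending"
                  and all(dep in done for dep in t["blockedBy"]))
```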
## handleComplete
Pipeline done. Generate report and completion action.
1. Generate summary (deliverables, stats, discussions)
2. Read session.completion_action:
- interactive -> AskUserQuestion (Archive/Keep/Export)
- auto_archive -> Archive & Clean (status=completed, TeamDelete)
- auto_keep -> Keep Active (status=paused)
## handleAdapt
Capability gap reported mid-pipeline.
1. Parse gap description
2. Check if existing role covers it -> redirect
3. Role count < 5 -> generate dynamic role-spec in <session>/role-specs/
4. Create new task, spawn worker
5. Role count >= 5 -> merge or pause
## Fast-Advance Reconciliation
On every coordinator wake:
1. Read team_msg entries with type="fast_advance"
2. Sync active_workers with spawned successors
3. No duplicate spawns

View File

@@ -0,0 +1,116 @@
# Coordinator Role
Orchestrate team-lifecycle-v4: analyze -> dispatch -> spawn -> monitor -> report.
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Analyze task -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Parse task description (text-level only, no codebase reading)
- Create team and spawn team-worker agents in background
- Dispatch tasks with proper dependency chains
- Monitor progress via callbacks and route messages
- Maintain session state (team-session.json)
- Handle capability_gap reports
- Execute completion action when pipeline finishes
### MUST NOT
- Read source code or explore codebase (delegate to workers)
- Execute task work directly
- Modify task output artifacts
- Spawn workers with general-purpose agent (MUST use team-worker)
- Generate more than 5 worker roles
## Command Execution Protocol
When coordinator needs to execute a specific phase:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains [role-name] | -> handleCallback (monitor.md) |
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Capability gap | Message contains "capability_gap" | -> handleAdapt (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/TLV4-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For callback/check/resume/adapt/complete: load commands/monitor.md, execute handler, STOP.
## Phase 0: Session Resume Check
1. Scan .workflow/.team/TLV4-*/team-session.json for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (audit TaskList, reset in_progress->pending, rebuild team, kick first ready task)
4. Multiple -> AskUserQuestion for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse task description
2. Clarify if ambiguous (AskUserQuestion: scope, deliverables, constraints)
3. Delegate to commands/analyze.md
4. Output: task-analysis.json
5. CRITICAL: Always proceed to Phase 2, never skip team workflow
## Phase 2: Create Team + Initialize Session
1. Generate session ID: TLV4-<slug>-<date>
2. Create session folder structure
3. TeamCreate with team name
4. Read specs/pipelines.md -> select pipeline
5. Register roles in team-session.json
6. Initialize shared infrastructure (wisdom/*.md, explorations/cache-index.json)
7. Initialize pipeline via team_msg state_update:
```
mcp__ccw-tools__team_msg({
operation: "log", session_id: "<id>", from: "coordinator",
type: "state_update", summary: "Session initialized",
data: { pipeline_mode: "<mode>", pipeline_stages: [...], team_name: "<name>" }
})
```
8. Write team-session.json
## Phase 3: Create Task Chain
Delegate to commands/dispatch.md:
1. Read dependency graph from task-analysis.json
2. Read specs/pipelines.md for selected pipeline's task registry
3. Topological sort tasks
4. Create tasks via TaskCreate with blockedBy
5. Update team-session.json
## Phase 4: Spawn-and-Stop
Delegate to commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + blockedBy resolved)
2. Spawn team-worker agents (see SKILL.md Spawn Template)
3. Output status summary
4. STOP
## Phase 5: Report + Completion Action
1. Generate summary (deliverables, pipeline stats, discussions)
2. Execute completion action per session.completion_action:
- interactive -> AskUserQuestion (Archive/Keep/Export)
- auto_archive -> Archive & Clean
- auto_keep -> Keep Active
## Error Handling
| Error | Resolution |
|-------|------------|
| Task too vague | AskUserQuestion for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| Dependency cycle | Detect in analysis, halt |
| Role limit exceeded | Merge overlapping roles |

View File

@@ -0,0 +1,35 @@
# Fix
Revision workflow for bug fixes and feedback-driven changes.
## Workflow
1. Read original task + feedback/revision notes from task description
2. Load original implementation context (files modified, approach taken)
3. Analyze feedback to identify specific changes needed
4. Apply fixes:
- Agent mode: Edit tool for targeted changes
- CLI mode: Resume previous session with fix prompt
5. Re-validate convergence criteria
6. Report: original task, changes applied, validation result
## Fix Prompt Template (CLI mode)
```
PURPOSE: Fix issues in <task.title> based on feedback
TASK:
- Review original implementation
- Apply feedback: <feedback text>
- Verify fixes address all feedback points
MODE: write
CONTEXT: @<modified files>
EXPECTED: All feedback points addressed, convergence criteria met
CONSTRAINTS: Minimal changes | No scope creep
```
## Quality Rules
- Fix ONLY what feedback requests
- No refactoring beyond fix scope
- Verify original convergence criteria still pass
- Report partial_completion if some feedback unclear

View File

@@ -0,0 +1,62 @@
# Implement
Execute implementation from task JSON via agent or CLI delegation.
## Agent Mode
Direct implementation using Edit/Write/Bash tools:
1. Read task.files[] as target files
2. Read task.implementation[] as step-by-step instructions
3. For each step:
- Substitute [variable] placeholders with pre_analysis results
- New file → Write tool; Modify file → Edit tool
- Follow task.reference patterns
4. Apply task.rationale.chosen_approach
5. Mitigate task.risks[] during implementation
Quality rules:
- Verify module existence before referencing
- Incremental progress — small working changes
- Follow existing patterns from task.reference
- ASCII-only, no premature abstractions
## CLI Delegation Mode
Build prompt from task JSON, delegate to CLI:
Prompt structure:
```
PURPOSE: <task.title>
<task.description>
TARGET FILES:
<task.files[] with paths and changes>
IMPLEMENTATION STEPS:
<task.implementation[] numbered>
PRE-ANALYSIS CONTEXT:
<pre_analysis results>
REFERENCE:
<task.reference pattern and files>
DONE WHEN:
<task.convergence.criteria[]>
MODE: write
CONSTRAINTS: Only modify listed files | Follow existing patterns
```
CLI call:
```
Bash({ command: `ccw cli -p "<prompt>" --tool <tool> --mode write --rule development-implement-feature`,
run_in_background: false, timeout: 3600000 })
```
Resume strategy:
| Strategy | Command |
|----------|---------|
| new | --id <session>-<task_id> |
| resume | --resume <parent_id> |

View File

@@ -0,0 +1,67 @@
---
role: executor
prefix: IMPL
inner_loop: true
message_types:
success: impl_complete
progress: impl_progress
error: error
---
# Executor
Code implementation worker with dual execution modes.
## Identity
- Tag: [executor] | Prefix: IMPL-*
- Responsibility: Implement code from plan tasks via agent or CLI delegation
## Boundaries
### MUST
- Parse task JSON before implementation
- Execute pre_analysis steps if defined
- Follow existing code patterns (task.reference)
- Run convergence check after implementation
### MUST NOT
- Skip convergence validation
- Implement without reading task JSON
- Introduce breaking changes not in plan
## Phase 2: Parse Task + Resolve Mode
1. Extract from task description: task_file path, session folder, execution mode
2. Read task JSON (id, title, files[], implementation[], convergence.criteria[])
3. Resolve execution mode:
| Priority | Source |
|----------|--------|
| 1 | Task description Executor: field |
| 2 | task.meta.execution_config.method |
| 3 | plan.json recommended_execution |
| 4 | Auto: Low → agent, Medium/High → codex |
4. Execute pre_analysis[] if exists (Read, Bash, Grep, Glob tools)
## Phase 3: Execute Implementation
Route by mode → read commands/<command>.md:
- agent / gemini / codex / qwen → commands/implement.md
- Revision task → commands/fix.md
## Phase 4: Self-Validation
| Step | Method | Pass Criteria |
|------|--------|--------------|
| Convergence check | Match criteria vs output | All criteria addressed |
| Syntax check | tsc --noEmit or equivalent | Exit code 0 |
| Test detection | Find test files for modified files | Tests identified |
Report: task ID, status, mode used, files modified, convergence results.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Agent mode syntax errors | Retry with error context (max 3) |
| CLI mode failure | Retry or resume with --resume |
| pre_analysis failure | Follow on_error (fail/continue/skip) |
| CLI tool unavailable | Fallback: gemini → qwen → codex |
| Max retries exceeded | Report failure to coordinator |

View File

@@ -0,0 +1,76 @@
---
role: planner
prefix: PLAN
inner_loop: true
message_types:
success: plan_ready
revision: plan_revision
error: error
---
# Planner
Codebase-informed implementation planning with complexity assessment.
## Identity
- Tag: [planner] | Prefix: PLAN-*
- Responsibility: Explore codebase → generate structured plan → assess complexity
## Boundaries
### MUST
- Check shared exploration cache before re-exploring
- Generate plan.json + TASK-*.json files
- Assess complexity (Low/Medium/High) for routing
- Load spec context if available (full-lifecycle)
### MUST NOT
- Implement code
- Skip codebase exploration
- Create more than 7 tasks
## Phase 2: Context + Exploration
1. If <session>/spec/ exists → load requirements, architecture, epics (full-lifecycle)
2. Check <session>/explorations/cache-index.json for cached explorations
3. Explore codebase (cache-aware):
```
Bash({ command: `ccw cli -p "PURPOSE: Explore codebase to inform planning
TASK: • Search for relevant patterns • Identify files to modify • Document integration points
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with: relevant_files[], patterns[], integration_points[], recommendations[]" --tool gemini --mode analysis`, run_in_background: false })
```
4. Store results in <session>/explorations/
## Phase 3: Plan Generation
Generate plan.json + .task/TASK-*.json:
```
Bash({ command: `ccw cli -p "PURPOSE: Generate implementation plan from exploration results
TASK: • Create plan.json overview • Generate TASK-*.json files (2-7 tasks) • Define dependencies • Set convergence criteria
MODE: write
CONTEXT: @<session>/explorations/*.json
EXPECTED: Files: plan.json + .task/TASK-*.json
CONSTRAINTS: 2-7 tasks, include id/title/files[]/convergence.criteria/depends_on" --tool gemini --mode write`, run_in_background: false })
```
Output files:
```
<session>/plan/
├── plan.json # Overview + complexity assessment
└── .task/TASK-*.json # Individual task definitions
```
## Phase 4: Submit for Approval
1. Read plan.json and TASK-*.json
2. Report to coordinator: complexity, task count, approach, plan location
3. Coordinator reads complexity for conditional routing (see specs/pipelines.md)
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI exploration failure | Plan from description only |
| CLI planning failure | Fallback to direct planning |
| Plan rejected 3+ times | Notify coordinator |
| Cache index corrupt | Clear cache, re-explore |

View File

@@ -0,0 +1,34 @@
# Code Review
4-dimension code review for implementation quality.
## Inputs
- Plan file (plan.json)
- Git diff or modified files list
- Test results (if available)
## Dimensions
| Dimension | Critical Issues |
|-----------|----------------|
| Quality | Empty catch, any casts, @ts-ignore, console.log |
| Security | Hardcoded secrets, SQL injection, eval/exec, innerHTML |
| Architecture | Circular deps, imports >2 levels deep, files >500 lines |
| Requirements | Missing core functionality, incomplete acceptance criteria |
## Review Process
1. Gather modified files from executor's state (team_msg get_state)
2. Read each modified file
3. Score per dimension (0-100%)
4. Classify issues by severity (Critical/High/Medium/Low)
5. Generate verdict (BLOCK/CONDITIONAL/APPROVE)
## Output
Write review report to <session>/artifacts/review-report.md:
- Per-dimension scores
- Issue list with file:line references
- Verdict with justification
- Recommendations (if CONDITIONAL)

View File

@@ -0,0 +1,44 @@
# Spec Quality Review
4-dimension spec quality gate with discuss protocol.
## Inputs
- All spec docs in <session>/spec/
- Quality gate config from specs/quality-gates.md
## Dimensions
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All sections present with substance |
| Consistency | 25% | Terminology, format, references uniform |
| Traceability | 25% | Goals→Reqs→Arch→Stories chain |
| Depth | 25% | AC testable, ADRs justified, stories estimable |
## Review Process
1. Read all spec documents from <session>/spec/
2. Load quality gate thresholds from specs/quality-gates.md
3. Score each dimension
4. Run cross-document validation
5. Generate readiness-report.md + spec-summary.md
6. Run DISCUSS-003:
- Artifact: <session>/spec/readiness-report.md
- Perspectives: product, technical, quality, risk, coverage
- Handle verdict per consensus protocol
- DISCUSS-003 HIGH always triggers user pause
## Quality Gate
| Gate | Score |
|------|-------|
| PASS | >= 80% |
| REVIEW | 60-79% |
| FAIL | < 60% |
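Since the dimensions are equally weighted at 25%, the gate reduces to a simple average (a sketch; score representation is illustrative):

```python
def quality_gate(scores: dict[str, float]) -> str:
    """scores: dimension -> 0-100. Equal weights per the table above."""
    total = sum(scores.values()) / len(scores)
    if total >= 80:
        return "PASS"
    if total >= 60:
        return "REVIEW"
    return "FAIL"
```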
## Output
Write to <session>/artifacts/:
- readiness-report.md: Dimension scores, issue list, traceability matrix
- spec-summary.md: Executive summary of all spec docs

View File

@@ -0,0 +1,69 @@
---
role: reviewer
prefix: REVIEW
additional_prefixes: [QUALITY, IMPROVE]
inner_loop: false
discuss_rounds: [DISCUSS-003]
message_types:
success_review: review_result
success_quality: quality_result
fix: fix_required
error: error
---
# Reviewer
Quality review for both code (REVIEW-*) and specifications (QUALITY-*, IMPROVE-*).
## Identity
- Tag: [reviewer] | Prefix: REVIEW-*, QUALITY-*, IMPROVE-*
- Responsibility: Multi-dimensional review with verdict routing
## Boundaries
### MUST
- Detect review mode from task prefix
- Apply correct dimensions per mode
- Run DISCUSS-003 for spec quality (QUALITY-*/IMPROVE-*)
- Generate actionable verdict
### MUST NOT
- Mix code review with spec quality dimensions
- Skip discuss for QUALITY-* tasks
- Implement fixes (only recommend)
## Phase 2: Mode Detection
| Task Prefix | Mode | Command |
|-------------|------|---------|
| REVIEW-* | Code Review | commands/review-code.md |
| QUALITY-* | Spec Quality | commands/review-spec.md |
| IMPROVE-* | Spec Quality (recheck) | commands/review-spec.md |
## Phase 3: Review Execution
Route to command based on detected mode.
## Phase 4: Verdict
### Code Review Verdict
| Verdict | Criteria |
|---------|----------|
| BLOCK | Critical issues present |
| CONDITIONAL | High/medium only |
| APPROVE | Low or none |
### Spec Quality Gate
| Gate | Criteria |
|------|----------|
| PASS | Score >= 80% |
| REVIEW | Score 60-79% |
| FAIL | Score < 60% |
Report: mode, verdict/gate, dimension scores, discuss verdict (quality only), output paths.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Missing context | Request from coordinator |
| Invalid mode | Abort with error |
| Discuss fails | Proceed without discuss, log warning |

View File

@@ -0,0 +1,87 @@
---
role: tester
prefix: TEST
inner_loop: false
message_types:
success: test_result
fix: fix_required
error: error
---
# Tester
Test execution with iterative fix cycle.
## Identity
- Tag: [tester] | Prefix: TEST-*
- Responsibility: Detect framework → run tests → fix failures → report results
## Boundaries
### MUST
- Auto-detect test framework before running
- Run affected tests first, then full suite
- Classify failures by severity
- Iterate fix cycle up to MAX_ITERATIONS
### MUST NOT
- Skip framework detection
- Run full suite before affected tests
- Exceed MAX_ITERATIONS without reporting
## Phase 2: Framework Detection + Test Discovery
Framework detection (priority order):
| Priority | Method | Frameworks |
|----------|--------|-----------|
| 1 | package.json devDependencies | vitest, jest, mocha |
| 2 | package.json scripts.test | vitest, jest, mocha, pytest |
| 3 | Config files | vitest.config.*, jest.config.*, pytest.ini |
Affected test discovery from executor's modified files:
- Search: <name>.test.ts, <name>.spec.ts, tests/<name>.test.ts, __tests__/<name>.test.ts
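The search patterns can be sketched as candidate-path generation (a sketch for the TypeScript patterns; pytest discovery would differ):

```python
from pathlib import Path

def candidate_tests(modified: str) -> list[str]:
    """Candidate test paths for a modified file, per the patterns above."""
    path = Path(modified)
    stem = path.stem
    return [
        str(path.with_name(f"{stem}.test.ts")),   # sibling .test.ts
        str(path.with_name(f"{stem}.spec.ts")),   # sibling .spec.ts
        f"tests/{stem}.test.ts",                  # tests/ directory
        f"__tests__/{stem}.test.ts",              # __tests__/ directory
    ]
```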
## Phase 3: Test Execution + Fix Cycle
Config: MAX_ITERATIONS=10, PASS_RATE_TARGET=95%, AFFECTED_TESTS_FIRST=true
Loop:
1. Run affected tests → parse results
2. Pass rate met → run full suite
3. Failures → select strategy → fix → re-run
Strategy selection:
| Condition | Strategy |
|-----------|----------|
| Iteration <= 3 or pass >= 80% | Conservative: fix one critical failure |
| Critical failures < 5 | Surgical: fix specific pattern everywhere |
| Pass < 50% or iteration > 7 | Aggressive: fix all in batch |
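A sketch of the selection logic. The table's conditions overlap, so the precedence below (conservative, then aggressive, then surgical) is an assumption:

```python
def select_strategy(iteration: int, pass_rate: float,
                    critical_failures: int) -> str:
    """Pick a fix strategy from the table above; precedence is assumed."""
    if iteration <= 3 or pass_rate >= 0.80:
        return "conservative"   # fix one critical failure
    if pass_rate < 0.50 or iteration > 7:
        return "aggressive"     # batch-fix all failures
    if critical_failures < 5:
        return "surgical"       # fix a specific pattern everywhere
    return "aggressive"
```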
Test commands:
| Framework | Affected | Full Suite |
|-----------|---------|------------|
| vitest | vitest run <files> | vitest run |
| jest | jest <files> --no-coverage | jest --no-coverage |
| pytest | pytest <files> -v | pytest -v |
## Phase 4: Result Analysis
Failure classification:
| Severity | Patterns |
|----------|----------|
| Critical | SyntaxError, cannot find module, undefined |
| High | Assertion failures, toBe/toEqual |
| Medium | Timeout, async errors |
| Low | Warnings, deprecations |
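The classification can be sketched as first-match-wins pattern scanning (the exact regexes are illustrative expansions of the patterns above):

```python
import re

SEVERITY_PATTERNS = [
    ("critical", re.compile(r"SyntaxError|cannot find module|undefined", re.I)),
    ("high", re.compile(r"AssertionError|toBe|toEqual", re.I)),
    ("medium", re.compile(r"timeout|async", re.I)),
    ("low", re.compile(r"warning|deprecat", re.I)),
]

def classify_failure(message: str) -> str:
    """Map a failure message to severity; first matching pattern wins."""
    for severity, pattern in SEVERITY_PATTERNS:
        if pattern.search(message):
            return severity
    return "low"  # unmatched messages default to low
```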
Report routing:
| Condition | Type |
|-----------|------|
| Pass rate >= target | test_result (success) |
| Pass rate < target after max iterations | fix_required |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Framework not detected | Prompt coordinator |
| No tests found | Report to coordinator |
| Infinite fix loop | Abort after MAX_ITERATIONS |

View File

@@ -0,0 +1,95 @@
---
role: writer
prefix: DRAFT
inner_loop: true
discuss_rounds: [DISCUSS-002]
message_types:
success: draft_ready
revision: draft_revision
error: error
---
# Writer
Template-driven document generation with progressive dependency loading.
## Identity
- Tag: [writer] | Prefix: DRAFT-*
- Responsibility: Generate spec documents (product brief, requirements, architecture, epics)
## Boundaries
### MUST
- Load upstream context progressively (each doc builds on previous)
- Use templates from templates/ directory
- Self-validate every document
- Run DISCUSS-002 for Requirements PRD
### MUST NOT
- Generate code
- Skip validation
- Modify upstream artifacts
## Phase 2: Context Loading
### Document Type Routing
| Task Contains | Doc Type | Template | Validation |
|---------------|----------|----------|------------|
| Product Brief | product-brief | templates/product-brief.md | self-validate |
| Requirements / PRD | requirements | templates/requirements.md | DISCUSS-002 |
| Architecture | architecture | templates/architecture.md | self-validate |
| Epics | epics | templates/epics.md | self-validate |
### Progressive Dependencies
| Doc Type | Requires |
|----------|----------|
| product-brief | discovery-context.json |
| requirements | + product-brief.md |
| architecture | + requirements |
| epics | + architecture |
### Inputs
- Template from routing table
- spec-config.json from <session>/spec/
- discovery-context.json from <session>/spec/
- Prior decisions from context_accumulator (inner loop)
- Discussion feedback from <session>/discussions/ (if exists)
## Phase 3: Document Generation
CLI generation:
```
Bash({ command: `ccw cli -p "PURPOSE: Generate <doc-type> document following template
TASK: • Load template • Apply spec config and discovery context • Integrate prior feedback • Generate all sections
MODE: write
CONTEXT: @<session>/spec/*.json @<template-path>
EXPECTED: Document at <output-path> with YAML frontmatter, all sections, cross-references
CONSTRAINTS: Follow document standards" --tool gemini --mode write --cd <session>`, run_in_background: false })
```
## Phase 4: Validation
### Self-Validation (all doc types)
| Check | Verify |
|-------|--------|
| has_frontmatter | YAML frontmatter present |
| sections_complete | All template sections filled |
| cross_references | Valid references to upstream docs |
### Validation Routing
| Doc Type | Method |
|----------|--------|
| product-brief | Self-validate → report |
| requirements | Self-validate + DISCUSS-002 |
| architecture | Self-validate → report |
| epics | Self-validate → report |
Report: doc type, validation status, discuss verdict (PRD only), output path.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI failure | Retry once with alternative tool |
| Prior doc missing | Notify coordinator |
| Discussion contradicts prior | Note conflict, flag for coordinator |

# Knowledge Transfer Protocols
## 1. Transfer Channels
| Channel | Method | Producer | Consumer |
|---------|--------|----------|----------|
| Artifacts | Files in `<session>/artifacts/` | Task executor | Next task in pipeline |
| State Updates | `team_msg(type="state_update")` | Task executor | Coordinator + downstream |
| Wisdom | Append to `<session>/wisdom/*.md` | Any role | All roles |
| Context Accumulator | In-memory aggregation | Inner loop only | Current task |
| Exploration Cache | `<session>/explorations/` | Analyst / researcher | All roles |
## 2. Context Loading Protocol (Before Task Execution)
Every role MUST load context in this order before starting work.
| Step | Action | Required |
|------|--------|----------|
| 1 | `team_msg(operation="get_state", role=<upstream>)` | Yes |
| 2 | Read artifact files from upstream state's `ref` paths | Yes |
| 3 | Read `<session>/wisdom/*.md` if exists | Yes |
| 4 | Check `<session>/explorations/cache-index.json` before new exploration | If exploring |
**Loading rules**:
- Never skip step 1 -- state contains key decisions and findings
- If `ref` path in state does not exist, log warning and continue
- Wisdom files are append-only -- read all entries, newest last
## 3. Context Publishing Protocol (After Task Completion)
| Step | Action | Required |
|------|--------|----------|
| 1 | Write deliverable to `<session>/artifacts/<task-id>-<name>.md` | Yes |
| 2 | Send `team_msg(type="state_update")` with payload (see schema below) | Yes |
| 3 | Append wisdom entries for learnings, decisions, issues found | If applicable |
## 4. State Update Schema
Sent via `team_msg(type="state_update")` on task completion.
```json
{
"status": "task_complete",
"task_id": "<TASK-NNN>",
"ref": "<session>/artifacts/<filename>",
"key_findings": [
"Finding 1",
"Finding 2"
],
"decisions": [
"Decision with rationale"
],
"files_modified": [
"path/to/file.ts"
],
"verification": "self-validated | peer-reviewed | tested"
}
```
**Field rules**:
- `ref`: Always an artifact path, never inline content
- `key_findings`: Max 5 items, each under 100 chars
- `decisions`: Include rationale, not just the choice
- `files_modified`: Only for implementation tasks
- `verification`: One of `self-validated`, `peer-reviewed`, `tested`
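These rules can be enforced mechanically; a minimal validator sketch (it checks only the rules stated here):

```python
def validate_state_update(payload: dict) -> list:
    """Return a list of field-rule violations; an empty list means valid."""
    errors = []
    ref = payload.get("ref", "")
    if not ref or "\n" in ref:
        errors.append("ref must be an artifact path, never inline content")
    findings = payload.get("key_findings", [])
    if len(findings) > 5 or any(len(f) >= 100 for f in findings):
        errors.append("key_findings: max 5 items, each under 100 chars")
    if payload.get("verification") not in ("self-validated", "peer-reviewed", "tested"):
        errors.append("verification must be self-validated, peer-reviewed, or tested")
    return errors
```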
## 5. Exploration Cache Protocol
Prevents redundant research across tasks and discussion rounds.
| Step | Action |
|------|--------|
| 1 | Read `<session>/explorations/cache-index.json` |
| 2 | If angle already explored, read cached result from `explore-<angle>.json` |
| 3 | If not cached, perform exploration |
| 4 | Write result to `<session>/explorations/explore-<angle>.json` |
| 5 | Update `cache-index.json` with new entry |
**cache-index.json format**:
```json
{
"entries": [
{
"angle": "competitor-analysis",
"file": "explore-competitor-analysis.json",
"created_by": "RESEARCH-001",
"timestamp": "2026-01-15T10:30:00Z"
}
]
}
```
**Rules**:
- Cache key is the exploration `angle` (normalized to kebab-case)
- Cache entries never expire within a session
- Any role can read cached explorations; only the creator updates them
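A sketch of the lookup/registration steps against an in-memory cache-index (file I/O omitted for brevity; the key normalization follows the kebab-case rule above):

```python
import re

def normalize_angle(angle: str) -> str:
    """Normalize an exploration angle to kebab-case (the cache key)."""
    return re.sub(r"[^a-z0-9]+", "-", angle.lower()).strip("-")

def lookup_or_register(index: dict, angle: str, created_by: str, timestamp: str):
    """Return (entry, was_hit); register a new cache entry on a miss."""
    key = normalize_angle(angle)
    for entry in index["entries"]:
        if entry["angle"] == key:
            return entry, True
    entry = {
        "angle": key,
        "file": f"explore-{key}.json",
        "created_by": created_by,
        "timestamp": timestamp,
    }
    index["entries"].append(entry)
    return entry, False
```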

# Pipeline Definitions
## 1. Pipeline Selection Criteria
| Keywords | Pipeline |
|----------|----------|
| spec, design, document, requirements | `spec-only` |
| implement, build, fix, code | `impl-only` |
| full, lifecycle, end-to-end | `full-lifecycle` |
| frontend, UI, react, vue | `fe-only` or `fullstack` |
| Ambiguous / unclear | AskUserQuestion |
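One possible reading of the selection table (treating multiple keyword-group matches as ambiguous is an assumption, as is resolving `fe-only` vs `fullstack` later):

```python
import re

PIPELINE_KEYWORDS = [
    (("spec", "design", "document", "requirements"), "spec-only"),
    (("implement", "build", "fix", "code"), "impl-only"),
    (("full", "lifecycle", "end-to-end"), "full-lifecycle"),
    (("frontend", "ui", "react", "vue"), "fe-only"),
]

def select_pipeline(request: str) -> str:
    """Return the matching pipeline, or 'ask-user' when ambiguous or unmatched."""
    text = request.lower()
    matches = []
    for keywords, pipeline in PIPELINE_KEYWORDS:
        if any(re.search(rf"\b{re.escape(k)}\b", text) for k in keywords):
            matches.append(pipeline)
    if len(matches) == 1:
        return matches[0]
    return "ask-user"  # ambiguous or unclear → AskUserQuestion
```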
## 2. Spec-Only Pipeline
**6 tasks, 3 discussion rounds**
```
RESEARCH-001(+D1) -> DRAFT-001 -> DRAFT-002(+D2) -> DRAFT-003 -> DRAFT-004 -> QUALITY-001(+D3)
```
| Task | Role | Description | Discuss |
|------|------|-------------|---------|
| RESEARCH-001 | analyst | Research domain, competitors, constraints | D1: scope alignment |
| DRAFT-001 | writer | Product brief, self-validate | - |
| DRAFT-002 | writer | Requirements PRD | D2: requirements review |
| DRAFT-003 | writer | Architecture design, self-validate | - |
| DRAFT-004 | writer | Epics & stories, self-validate | - |
| QUALITY-001 | reviewer | Quality gate scoring | D3: readiness decision |
**Checkpoint**: After QUALITY-001 -- pause for user approval before any implementation.
## 3. Impl-Only Pipeline
**4 tasks, 0 discussion rounds**
```
PLAN-001 -> IMPL-001 -> TEST-001 + REVIEW-001
```
| Task | Role | Description |
|------|------|-------------|
| PLAN-001 | planner | Break down into implementation steps, assess complexity |
| IMPL-001 | implementer | Execute implementation plan |
| TEST-001 | tester | Validate against acceptance criteria |
| REVIEW-001 | reviewer | Code review |
TEST-001 and REVIEW-001 run in parallel after IMPL-001 completes.
## 4. Full-Lifecycle Pipeline
**10 tasks = spec-only (6) + impl (4)**
```
[Spec pipeline] -> PLAN-001(blockedBy: QUALITY-001) -> IMPL-001 -> TEST-001 + REVIEW-001
```
PLAN-001 is blocked until QUALITY-001 passes and user approves the checkpoint.
## 5. Frontend Pipelines
| Pipeline | Description |
|----------|-------------|
| `fe-only` | Frontend implementation only: PLAN-001 -> IMPL-001 (fe-implementer) -> TEST-001 + REVIEW-001 |
| `fullstack` | Backend + frontend: PLAN-001 -> IMPL-001 (backend) + IMPL-002 (frontend) -> TEST-001 + REVIEW-001 |
| `full-lifecycle-fe` | Full spec pipeline -> fullstack impl pipeline |
## 6. Conditional Routing
PLAN-001 outputs a complexity assessment that determines the impl topology.
| Complexity | Modules | Route |
|------------|---------|-------|
| Low | 1-2 | PLAN-001 -> IMPL-001 -> TEST + REVIEW |
| Medium | 3-4 | PLAN-001 -> ORCH-001 -> IMPL-{1..N} (parallel) -> TEST + REVIEW |
| High | 5+ | PLAN-001 -> ARCH-001 -> ORCH-001 -> IMPL-{1..N} -> TEST + REVIEW |
- **ORCH-001** (orchestrator): Coordinates parallel IMPL tasks, manages dependencies
- **ARCH-001** (architect): Detailed architecture decisions before orchestration
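The routing table as a sketch:

```python
def impl_route(module_count: int) -> list:
    """Return the implementation task chain for a given module count."""
    if module_count <= 2:       # Low complexity
        return ["PLAN-001", "IMPL-001", "TEST+REVIEW"]
    if module_count <= 4:       # Medium: orchestrator coordinates parallel IMPL
        return ["PLAN-001", "ORCH-001", "IMPL-{1..N}", "TEST+REVIEW"]
    # High: architect decides before orchestration
    return ["PLAN-001", "ARCH-001", "ORCH-001", "IMPL-{1..N}", "TEST+REVIEW"]
```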
## 7. Task Metadata Registry
| Task ID | Role | Phase | Depends On | Discuss | Priority |
|---------|------|-------|------------|---------|----------|
| RESEARCH-001 | analyst | research | - | D1 | P0 |
| DRAFT-001 | writer | product-brief | RESEARCH-001 | - | P0 |
| DRAFT-002 | writer | requirements | DRAFT-001 | D2 | P0 |
| DRAFT-003 | writer | architecture | DRAFT-002 | - | P0 |
| DRAFT-004 | writer | epics | DRAFT-003 | - | P0 |
| QUALITY-001 | reviewer | readiness | DRAFT-004 | D3 | P0 |
| PLAN-001 | planner | planning | QUALITY-001 (or user input) | - | P0 |
| ARCH-001 | architect | arch-detail | PLAN-001 | - | P1 |
| ORCH-001 | orchestrator | orchestration | PLAN-001 or ARCH-001 | - | P1 |
| IMPL-001 | implementer | implementation | PLAN-001 or ORCH-001 | - | P0 |
| IMPL-{N} | implementer | implementation | ORCH-001 | - | P0 |
| TEST-001 | tester | validation | IMPL-* | - | P0 |
| REVIEW-001 | reviewer | review | IMPL-* | - | P0 |
## 8. Dynamic Specialist Injection
When task content or user request matches trigger keywords, inject a specialist task.
| Trigger Keywords | Specialist Role | Task Prefix | Priority | Insert After |
|------------------|----------------|-------------|----------|--------------|
| security, vulnerability, OWASP | security-expert | SECURITY-* | P0 | PLAN |
| performance, optimization, latency | performance-optimizer | PERF-* | P1 | IMPL |
| data, pipeline, ETL, migration | data-engineer | DATA-* | P0 | parallel with IMPL |
| devops, CI/CD, deployment, infra | devops-engineer | DEVOPS-* | P1 | IMPL |
| ML, model, training, inference | ml-engineer | ML-* | P0 | parallel with IMPL |
**Injection rules**:
- Specialist tasks inherit the session context and wisdom
- They publish state_update on completion like any other task
- P0 specialists block downstream tasks; P1 run in parallel
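A sketch of trigger matching, using word boundaries so that, for example, `html` does not trigger the `ml` keyword:

```python
import re

SPECIALIST_TRIGGERS = [
    (("security", "vulnerability", "owasp"), ("security-expert", "SECURITY", "P0")),
    (("performance", "optimization", "latency"), ("performance-optimizer", "PERF", "P1")),
    (("data", "pipeline", "etl", "migration"), ("data-engineer", "DATA", "P0")),
    (("devops", "ci/cd", "deployment", "infra"), ("devops-engineer", "DEVOPS", "P1")),
    (("ml", "model", "training", "inference"), ("ml-engineer", "ML", "P0")),
]

def inject_specialists(text: str) -> list:
    """Return (role, task_prefix, priority) for every matching trigger group."""
    lower = text.lower()
    return [spec for keywords, spec in SPECIALIST_TRIGGERS
            if any(re.search(rf"\b{re.escape(k)}\b", lower) for k in keywords)]
```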

# Quality Gates
## 1. Quality Thresholds
| Result | Score | Action |
|--------|-------|--------|
| Pass | >= 80% | Proceed to next phase |
| Review | 60-79% | Revise flagged items, re-evaluate |
| Fail | < 60% | Return to producer for rework |
## 2. Scoring Dimensions
| Dimension | Weight | Criteria |
|-----------|--------|----------|
| Completeness | 25% | All required sections present with substantive content |
| Consistency | 25% | Terminology, formatting, cross-references are uniform |
| Traceability | 25% | Clear chain: Goals -> Requirements -> Architecture -> Stories |
| Depth | 25% | ACs are testable, ADRs justified, stories estimable |
**Score** = weighted average of all dimensions (0-100 per dimension).
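The weighted score and threshold mapping as a sketch:

```python
WEIGHTS = {"completeness": 0.25, "consistency": 0.25,
           "traceability": 0.25, "depth": 0.25}

def gate_result(scores: dict):
    """Weighted score (0-100) plus the action from the threshold table."""
    total = sum(scores[dim] * w for dim, w in WEIGHTS.items())
    if total >= 80:
        return total, "pass"    # proceed to next phase
    if total >= 60:
        return total, "review"  # revise flagged items, re-evaluate
    return total, "fail"        # return to producer for rework
```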
## 3. Per-Phase Quality Gates
### Phase 2: Product Brief
| Check | Pass Criteria |
|-------|---------------|
| Vision statement | Clear, one-paragraph, measurable outcome |
| Problem definition | Specific pain points with evidence |
| Target users | Defined personas or segments |
| Success goals | Quantifiable metrics (KPIs) |
| Success metrics | Measurement method specified |
### Phase 3: Requirements PRD
| Check | Pass Criteria |
|-------|---------------|
| Functional requirements | Each has unique ID (FR-NNN) |
| Acceptance criteria | Testable given/when/then format |
| Prioritization | MoSCoW applied to all requirements |
| User stories | Format: As a [role], I want [goal], so that [benefit] |
| Non-functional reqs | Performance, security, scalability addressed |
### Phase 4: Architecture
| Check | Pass Criteria |
|-------|---------------|
| Component diagram | All major components identified with boundaries |
| Tech stack | Each choice justified against alternatives |
| ADRs | At least 1 ADR per major decision, with status |
| Data model | Entities, relationships, key fields defined |
| Integration points | APIs, protocols, data formats specified |
### Phase 5: Epics & Stories
| Check | Pass Criteria |
|-------|---------------|
| Epic count | 2-8 epics (too few = too broad, too many = too granular) |
| MVP subset | Clearly marked MVP epics/stories |
| Stories per epic | 3-12 stories each |
| Story format | Title, description, ACs, estimate present |
### Phase 6: Readiness Gate
| Check | Pass Criteria |
|-------|---------------|
| All docs exist | Brief, PRD, Architecture, Epics all present |
| Cross-refs valid | All document references resolve correctly |
| Overall score | >= 60% across all dimensions |
| No P0 issues | Zero Error-class issues outstanding |
## 4. Cross-Document Validation
| Source | Target | Validation |
|--------|--------|------------|
| Brief goals | PRD requirements | Every goal has >= 1 requirement |
| PRD requirements | Architecture components | Every requirement maps to a component |
| PRD requirements | Epic stories | Every requirement covered by >= 1 story |
| Architecture components | Epic stories | Every component has implementation stories |
| Brief success metrics | Epic ACs | Metrics traceable to acceptance criteria |
## 5. Code Review Dimensions
For REVIEW-* tasks during implementation phases.
### Quality
| Check | Severity |
|-------|----------|
| Empty catch blocks | Error |
| `as any` type casts | Warning |
| `@ts-ignore` / `@ts-expect-error` | Warning |
| `console.log` in production code | Warning |
| Unused imports/variables | Info |
### Security
| Check | Severity |
|-------|----------|
| Hardcoded secrets/credentials | Error |
| SQL injection vectors | Error |
| `eval()` or `Function()` usage | Error |
| `innerHTML` assignment | Warning |
| Missing input validation | Warning |
### Architecture
| Check | Severity |
|-------|----------|
| Circular dependencies | Error |
| Deep cross-boundary imports (3+ levels) | Warning |
| Files > 500 lines | Warning |
| Functions > 50 lines | Info |
### Requirements Coverage
| Check | Severity |
|-------|----------|
| Core functionality implemented | Error if missing |
| Acceptance criteria covered | Error if missing |
| Edge cases handled | Warning |
| Error states handled | Warning |
## 6. Issue Classification
| Class | Label | Action |
|-------|-------|--------|
| Error | Must fix | Blocks progression, must resolve before proceeding |
| Warning | Should fix | Should resolve, can proceed with justification |
| Info | Nice to have | Optional improvement, log for future |

# Architecture Document Template (Directory Structure)
Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
| Output Location | `{workDir}/architecture/` |
## Output Structure
```
{workDir}/architecture/
├── _index.md # Overview, components, tech stack, data model, security
├── ADR-001-{slug}.md # Individual Architecture Decision Record
├── ADR-002-{slug}.md
└── ...
```
---
## Template: _index.md
```markdown
---
session_id: {session_id}
phase: 4
document_type: architecture-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
- ../requirements/_index.md
---
# Architecture: {product_name}
{executive_summary - high-level architecture approach and key decisions}
## System Overview
### Architecture Style
{description of chosen architecture style: microservices, monolith, serverless, etc.}
### System Context Diagram
~~~mermaid
C4Context
  title System Context Diagram
  Person(user, "User", "Primary user")
  System(system, "{product_name}", "Core system")
  System_Ext(ext1, "{external_system}", "{description}")
  Rel(user, system, "Uses")
  Rel(system, ext1, "Integrates with")
~~~
## Component Architecture
### Component Diagram
~~~mermaid
graph TD
  subgraph "{product_name}"
    A[Component A] --> B[Component B]
    B --> C[Component C]
    A --> D[Component D]
  end
  B --> E[External Service]
~~~
### Component Descriptions
| Component | Responsibility | Technology | Dependencies |
|-----------|---------------|------------|--------------|
| {component_name} | {what it does} | {tech stack} | {depends on} |
## Technology Stack
### Core Technologies
| Layer | Technology | Version | Rationale |
|-------|-----------|---------|-----------|
| Frontend | {technology} | {version} | {why chosen} |
| Backend | {technology} | {version} | {why chosen} |
| Database | {technology} | {version} | {why chosen} |
| Infrastructure | {technology} | {version} | {why chosen} |
### Key Libraries & Frameworks
| Library | Purpose | License |
|---------|---------|---------|
| {library_name} | {purpose} | {license} |
## Architecture Decision Records
| ADR | Title | Status | Key Choice |
|-----|-------|--------|------------|
| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |
## Data Architecture
### Data Model
~~~mermaid
erDiagram
  ENTITY_A ||--o{ ENTITY_B : "has many"
  ENTITY_A {
    string id PK
    string name
    datetime created_at
  }
  ENTITY_B {
    string id PK
    string entity_a_id FK
    string value
  }
~~~
### Data Storage Strategy
| Data Type | Storage | Retention | Backup |
|-----------|---------|-----------|--------|
| {type} | {storage solution} | {retention policy} | {backup strategy} |
## API Design
### API Overview
| Endpoint | Method | Purpose | Auth |
|----------|--------|---------|------|
| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |
## Security Architecture
### Security Controls
| Control | Implementation | Requirement |
|---------|---------------|-------------|
| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
## Infrastructure & Deployment
### Deployment Architecture
{description of deployment model: containers, serverless, VMs, etc.}
### Environment Strategy
| Environment | Purpose | Configuration |
|-------------|---------|---------------|
| Development | Local development | {config} |
| Staging | Pre-production testing | {config} |
| Production | Live system | {config} |
## Codebase Integration
{if has_codebase is true:}
### Existing Code Mapping
| New Component | Existing Module | Integration Type | Notes |
|--------------|----------------|------------------|-------|
| {component} | {existing module path} | Extend/Replace/New | {notes} |
### Migration Notes
{any migration considerations for existing code}
## Quality Attributes
| Attribute | Target | Measurement | ADR Reference |
|-----------|--------|-------------|---------------|
| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
## Risks & Mitigations
| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |
## Open Questions
- [ ] {architectural question 1}
- [ ] {architectural question 2}
## References
- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
- Next: [Epics & Stories](../epics/_index.md)
```
---
## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)
```markdown
---
id: ADR-{NNN}
status: Accepted
traces_to: [{REQ-NNN}, {NFR-X-NNN}]
date: {timestamp}
---
# ADR-{NNN}: {decision_title}
## Context
{what is the situation that motivates this decision}
## Decision
{what is the chosen approach}
## Alternatives Considered
| Option | Pros | Cons |
|--------|------|------|
| {option_1 - chosen} | {pros} | {cons} |
| {option_2} | {pros} | {cons} |
| {option_3} | {pros} | {cons} |
## Consequences
- **Positive**: {positive outcomes}
- **Negative**: {tradeoffs accepted}
- **Risks**: {risks to monitor}
## Traces
- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```
---
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | ADR/requirement number |
| `{slug}` | Auto-generated | Kebab-case from decision title |
| `{has_codebase}` | spec-config.json | Whether existing codebase exists |

# Epics & Stories Template (Directory Structure)
Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
| Output Location | `{workDir}/epics/` |
## Output Structure
```
{workDir}/epics/
├── _index.md # Overview table + dependency map + MVP scope + execution order
├── EPIC-001-{slug}.md # Individual Epic with its Stories
├── EPIC-002-{slug}.md
└── ...
```
---
## Template: _index.md
```markdown
---
session_id: {session_id}
phase: 5
document_type: epics-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
- ../requirements/_index.md
- ../architecture/_index.md
---
# Epics & Stories: {product_name}
{executive_summary - overview of epic structure and MVP scope}
## Epic Overview
| Epic ID | Title | Priority | MVP | Stories | Est. Size |
|---------|-------|----------|-----|---------|-----------|
| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |
## Dependency Map
~~~mermaid
graph LR
  EPIC-001 --> EPIC-002
  EPIC-001 --> EPIC-003
  EPIC-002 --> EPIC-004
  EPIC-003 --> EPIC-005
~~~
### Dependency Notes
{explanation of why these dependencies exist and suggested execution order}
### Recommended Execution Order
1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
3. ...
## MVP Scope
### MVP Epics
{list of epics included in MVP with justification, linking to each}
### MVP Definition of Done
- [ ] {MVP completion criterion 1}
- [ ] {MVP completion criterion 2}
- [ ] {MVP completion criterion 3}
## Traceability Matrix
| Requirement | Epic | Stories | Architecture |
|-------------|------|---------|--------------|
| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |
## Estimation Summary
| Size | Meaning | Count |
|------|---------|-------|
| S | Small - well-understood, minimal risk | {n} |
| M | Medium - some complexity, moderate risk | {n} |
| L | Large - significant complexity, should consider splitting | {n} |
| XL | Extra Large - high complexity, must split before implementation | {n} |
## Risks & Considerations
| Risk | Affected Epics | Mitigation |
|------|---------------|------------|
| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |
## Open Questions
- [ ] {question about scope or implementation 1}
- [ ] {question about scope or implementation 2}
## References
- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
- Handoff to: execution workflows (lite-plan, plan, req-plan)
```
---
## Template: EPIC-NNN-{slug}.md (Individual Epic)
```markdown
---
id: EPIC-{NNN}
priority: {Must|Should|Could}
mvp: {true|false}
size: {S|M|L|XL}
requirements: [REQ-{NNN}]
architecture: [ADR-{NNN}]
dependencies: [EPIC-{NNN}]
status: draft
---
# EPIC-{NNN}: {epic_title}
**Priority**: {Must|Should|Could}
**MVP**: {Yes|No}
**Estimated Size**: {S|M|L|XL}
## Description
{detailed epic description}
## Requirements
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
## Architecture
- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
- Component: {component_name}
## Dependencies
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}
## Stories
### STORY-{EPIC}-001: {story_title}
**User Story**: As a {persona}, I want to {action} so that {benefit}.
**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
- [ ] {criterion 3}
**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
---
### STORY-{EPIC}-002: {story_title}
**User Story**: As a {persona}, I want to {action} so that {benefit}.
**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
```
---
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{EPIC}` | Auto-increment | Epic number (3 digits) |
| `{NNN}` | Auto-increment | Story/requirement number |
| `{slug}` | Auto-generated | Kebab-case from epic/story title |
| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |

# Product Brief Template
Template for generating product brief documents in Phase 2.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis |
| Output Location | `{workDir}/product-brief.md` |
---
## Template
```markdown
---
session_id: {session_id}
phase: 2
document_type: product-brief
status: draft
generated_at: {timestamp}
stepsCompleted: []
version: 1
dependencies:
- spec-config.json
---
# Product Brief: {product_name}
{executive_summary - 2-3 sentences capturing the essence of the product/feature}
## Vision
{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like}
## Problem Statement
### Current Situation
{description of the current state and pain points}
### Impact
{quantified impact of the problem - who is affected, how much, how often}
## Target Users
{for each user persona:}
### {Persona Name}
- **Role**: {user's role/context}
- **Needs**: {primary needs related to this product}
- **Pain Points**: {current frustrations}
- **Success Criteria**: {what success looks like for this user}
## Goals & Success Metrics
| Goal ID | Goal | Success Metric | Target |
|---------|------|----------------|--------|
| G-001 | {goal description} | {measurable metric} | {specific target} |
| G-002 | {goal description} | {measurable metric} | {specific target} |
## Scope
### In Scope
- {feature/capability 1}
- {feature/capability 2}
- {feature/capability 3}
### Out of Scope
- {explicitly excluded item 1}
- {explicitly excluded item 2}
### Assumptions
- {key assumption 1}
- {key assumption 2}
## Competitive Landscape
| Aspect | Current State | Proposed Solution | Advantage |
|--------|--------------|-------------------|-----------|
| {aspect} | {how it's done now} | {our approach} | {differentiator} |
## Constraints & Dependencies
### Technical Constraints
- {constraint 1}
- {constraint 2}
### Business Constraints
- {constraint 1}
### Dependencies
- {external dependency 1}
- {external dependency 2}
## Multi-Perspective Synthesis
### Product Perspective
{summary of product/market analysis findings}
### Technical Perspective
{summary of technical feasibility and constraints}
### User Perspective
{summary of user journey and UX considerations}
### Convergent Themes
{themes where all perspectives agree}
### Conflicting Views
{areas where perspectives differ, with notes on resolution approach}
## Open Questions
- [ ] {unresolved question 1}
- [ ] {unresolved question 2}
## References
- Derived from: [spec-config.json](spec-config.json)
- Next: [Requirements PRD](requirements/_index.md)
```
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | Seed analysis | Product/feature name |
| `{executive_summary}` | CLI synthesis | 2-3 sentence summary |
| `{vision_statement}` | CLI product perspective | Aspirational vision |
| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis |

# Requirements PRD Template (Directory Structure)
Template for generating Product Requirements Document as a directory of individual requirement files in Phase 3.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
| Output Location | `{workDir}/requirements/` |
## Output Structure
```
{workDir}/requirements/
├── _index.md # Summary + MoSCoW table + traceability matrix + links
├── REQ-001-{slug}.md # Individual functional requirement
├── REQ-002-{slug}.md
├── NFR-P-001-{slug}.md # Non-functional: Performance
├── NFR-S-001-{slug}.md # Non-functional: Security
├── NFR-SC-001-{slug}.md # Non-functional: Scalability
├── NFR-U-001-{slug}.md # Non-functional: Usability
└── ...
```
---
## Template: _index.md
```markdown
---
session_id: {session_id}
phase: 3
document_type: requirements-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
---
# Requirements: {product_name}
{executive_summary - brief overview of what this PRD covers and key decisions}
## Requirement Summary
| Priority | Count | Coverage |
|----------|-------|----------|
| Must Have | {n} | {description of must-have scope} |
| Should Have | {n} | {description of should-have scope} |
| Could Have | {n} | {description of could-have scope} |
| Won't Have | {n} | {description of explicitly excluded} |
## Functional Requirements
| ID | Title | Priority | Traces To |
|----|-------|----------|-----------|
| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |
## Non-Functional Requirements
### Performance
| ID | Title | Target |
|----|-------|--------|
| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |
### Security
| ID | Title | Standard |
|----|-------|----------|
| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |
### Scalability
| ID | Title | Target |
|----|-------|--------|
| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |
### Usability
| ID | Title | Target |
|----|-------|--------|
| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |
## Data Requirements
### Data Entities
| Entity | Description | Key Attributes |
|--------|-------------|----------------|
| {entity_name} | {description} | {attr1, attr2, attr3} |
### Data Flows
{description of key data flows, optionally with Mermaid diagram}
## Integration Requirements
| System | Direction | Protocol | Data Format | Notes |
|--------|-----------|----------|-------------|-------|
| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |
## Constraints & Assumptions
### Constraints
- {technical or business constraint 1}
- {technical or business constraint 2}
### Assumptions
- {assumption 1 - must be validated}
- {assumption 2 - must be validated}
## Priority Rationale
{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}
## Traceability Matrix
| Goal | Requirements |
|------|-------------|
| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |
## Open Questions
- [ ] {unresolved question 1}
- [ ] {unresolved question 2}
## References
- Derived from: [Product Brief](../product-brief.md)
- Next: [Architecture](../architecture/_index.md)
```
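The traceability matrix above asserts that every goal is covered by at least one requirement. A minimal sketch of an automated coverage check — assuming goal IDs have been parsed from `product-brief.md` and `traces_to` lists from each requirement's frontmatter (the data shapes here are illustrative, not mandated by the skill):

```python
def untraced_goals(goal_ids, requirements):
    """Return goal IDs with no requirement tracing to them.

    requirements: iterable of dicts with a 'traces_to' list, e.g.
    {'id': 'REQ-001', 'traces_to': ['G-001']}.
    """
    traced = {g for req in requirements for g in req.get("traces_to", [])}
    return sorted(set(goal_ids) - traced)
```

An empty result means the matrix is complete; any returned IDs are goals that need a requirement (or an explicit Won't Have entry) before the PRD leaves draft status.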
---
## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)
```markdown
---
id: REQ-{NNN}
type: functional
priority: {Must|Should|Could|Won't}
traces_to: [G-{NNN}]
status: draft
---
# REQ-{NNN}: {requirement_title}
**Priority**: {Must|Should|Could|Won't}
## Description
{detailed requirement description}
## User Story
As a {persona}, I want to {action} so that {benefit}.
## Acceptance Criteria
- [ ] {specific, testable criterion 1}
- [ ] {specific, testable criterion 2}
- [ ] {specific, testable criterion 3}
## Traces
- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```
---
## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)
```markdown
---
id: NFR-{type}-{NNN}
type: non-functional
category: {Performance|Security|Scalability|Usability}
priority: {Must|Should|Could}
status: draft
---
# NFR-{type}-{NNN}: {requirement_title}
**Category**: {Performance|Security|Scalability|Usability}
**Priority**: {Must|Should|Could}
## Requirement
{detailed requirement description}
## Metric & Target
| Metric | Target | Measurement Method |
|--------|--------|--------------------|
| {metric} | {target value} | {how measured} |
## Traces
- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
```
---
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
| `{slug}` | Auto-generated | Kebab-case from requirement title |
| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |
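The `{NNN}` and `{slug}` rules in the table above can be sketched as follows — an illustrative implementation under the stated conventions (zero-padded 3-digit numbers, kebab-case slugs), not a prescribed one:

```python
import re

def make_slug(title: str) -> str:
    """Kebab-case slug from a requirement title, per the {slug} convention."""
    # Collapse any run of non-alphanumeric characters into a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def make_req_id(n: int) -> str:
    """Zero-padded 3-digit functional requirement ID, per {NNN}."""
    return f"REQ-{n:03d}"
```

For example, a title like "User Login & Session Handling" would yield the filename `REQ-001-user-login-session-handling.md`.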