
# Command: spec-quality

## Purpose

Runs a 5-dimension spec quality check with weighted scoring, determines the quality gate, and generates a readiness report.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Spec documents | `<session_folder>/spec/` (all `.md` files) | Yes |
| Original requirements | Product brief objectives section | Yes |
| Quality gate config | `specs/quality-gates.md` | No |
| Session folder | Task description `Session:` field | Yes |

Spec document phases (matched by filename/directory):

| Phase | Expected Path |
|-------|---------------|
| product-brief | `spec/product-brief.md` |
| prd | `spec/requirements/*.md` |
| architecture | `spec/architecture/_index.md` + `ADR-*.md` |
| user-stories | `spec/epics/*.md` |
| implementation-plan | `plan/plan.json` |
| test-strategy | `spec/test-strategy.md` |
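As a minimal sketch, the table above could drive document collection via glob patterns relative to the session folder. The `PHASE_GLOBS` mapping and `collect_docs` helper are assumed names (not defined by this command), as is the assumption that `ADR-*.md` files sit beside `_index.md` under `spec/architecture/`:

```python
from pathlib import Path

# Hypothetical phase-to-glob mapping mirroring the table above.
# Assumes ADR files live next to _index.md under spec/architecture/.
PHASE_GLOBS = {
    "product-brief": ["spec/product-brief.md"],
    "prd": ["spec/requirements/*.md"],
    "architecture": ["spec/architecture/_index.md", "spec/architecture/ADR-*.md"],
    "user-stories": ["spec/epics/*.md"],
    "implementation-plan": ["plan/plan.json"],
    "test-strategy": ["spec/test-strategy.md"],
}

def collect_docs(session_folder: str) -> dict[str, list[Path]]:
    """Resolve each phase's patterns under the session folder; a phase
    with no matching files maps to an empty list."""
    root = Path(session_folder)
    return {phase: sorted(p for pattern in patterns for p in root.glob(pattern))
            for phase, patterns in PHASE_GLOBS.items()}
```

A phase whose list comes back empty is the "Missing phase document" case handled under Error Handling.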

## Phase 3: 5-Dimension Scoring

### Dimension Weights

| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All required sections present with substance |
| Consistency | 20% | Terminology, format, references, naming |
| Traceability | 25% | Goals -> Reqs -> Components -> Stories chain |
| Depth | 20% | AC testable, ADRs justified, stories estimable |
| Coverage | 10% | Original requirements mapped to spec |

### Dimension 1: Completeness (25%)

Check each spec document for required sections.

Required sections by phase:

| Phase | Required Sections |
|-------|-------------------|
| product-brief | Vision Statement, Problem Statement, Target Audience, Success Metrics, Constraints |
| prd | Goals, Requirements, User Stories, Acceptance Criteria, Non-Functional Requirements |
| architecture | System Overview, Component Design, Data Models, API Specifications, Technology Stack |
| user-stories | Story List, Acceptance Criteria, Priority, Estimation |
| implementation-plan | Task Breakdown, Dependencies, Timeline, Resource Allocation |
| test-strategy | Test Scope, Test Cases, Coverage Goals, Test Environment |

Scoring formula:

- Section present: 50% credit
- Section has substantial content (>100 chars beyond the header): additional 50% credit
- Per-document score = (present_ratio * 50) + (substantial_ratio * 50)
- Overall = average across all documents

### Dimension 2: Consistency (20%)

Check cross-document consistency across four areas.

| Area | What to Check | Severity |
|------|---------------|----------|
| Terminology | Same concept with different casing/spelling across docs | Medium |
| Format | Mixed header styles at the same level across docs | Low |
| References | Broken links (`./` or `../` paths that don't resolve) | High |
| Naming | Mixed naming conventions (camelCase vs snake_case vs kebab-case) | Low |

Scoring:

- Penalty weights: High = 10, Medium = 5, Low = 2
- Score = max(0, 100 - (total_penalty / 100) * 100)
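A sketch of the scoring rule; note that the `(total_penalty / 100) * 100` term in the formula reduces to `total_penalty` itself:

```python
# Penalty weights from the scoring rule above.
PENALTY = {"High": 10, "Medium": 5, "Low": 2}

def consistency_score(issue_severities: list[str]) -> float:
    """Score = max(0, 100 - (total_penalty / 100) * 100).
    The (total_penalty / 100) * 100 factor simplifies to total_penalty."""
    total_penalty = sum(PENALTY[sev] for sev in issue_severities)
    return max(0, 100 - (total_penalty / 100) * 100)

# One broken reference, one terminology clash, two naming issues:
print(consistency_score(["High", "Medium", "Low", "Low"]))  # 81.0
```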

### Dimension 3: Traceability (25%)

Build and validate traceability chains: Goals -> Requirements -> Components -> Stories.

Chain building flow:

  1. Extract goals from the product brief (pattern: `- Goal: <text>`)
  2. Extract requirements from the PRD (pattern: `- REQ-NNN: <text>`)
  3. Extract components from the architecture docs (pattern: `- Component: <text>`)
  4. Extract stories from user stories (pattern: `- US-NNN: <text>`)
  5. Link stages by keyword overlap (threshold: 30% keyword match)

Chain completeness: A chain is complete when a goal links to at least one requirement, one component, and one story.

Scoring: (complete chains / total chains) * 100

Weak link identification: For each incomplete chain, report which link is missing (no requirements, no components, or no stories).
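The link test in step 5 could be sketched as below. The tokenizer and the choice to normalise overlap by the smaller keyword set are assumptions; the spec fixes only the 30% threshold:

```python
import re

def keywords(text: str) -> set[str]:
    # Hypothetical tokenizer: lowercase alphabetic words of 3+ letters.
    return set(re.findall(r"[a-z]{3,}", text.lower()))

def linked(a: str, b: str, threshold: float = 0.30) -> bool:
    """Treat two chain items as linked when their keyword overlap,
    measured against the smaller keyword set, meets the threshold."""
    ka, kb = keywords(a), keywords(b)
    if not ka or not kb:
        return False
    return len(ka & kb) / min(len(ka), len(kb)) >= threshold

goal = "Goal: reduce checkout latency for mobile users"
req = "REQ-001: checkout latency on mobile must stay under 2s"
print(linked(goal, req))  # True
```

A chain would then count as complete when a goal `linked`s to at least one requirement, one component, and one story.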


### Dimension 4: Depth (20%)

Assess the analytical depth of spec content across four sub-dimensions.

| Sub-dimension | Source | Testable Criteria |
|---------------|--------|-------------------|
| AC Testability | PRD / User Stories | Contains measurable verbs (display, return, validate), Given/When/Then, or numbers |
| ADR Justification | Architecture | Contains rationale, alternatives, consequences, or trade-offs |
| Story Estimability | User Stories | Has "As a / I want / So that" + AC, or an explicit estimate |
| Technical Detail | Architecture + Plan | Contains code blocks, API terms, HTTP methods, DB terms |

Scoring: Average of sub-dimension scores (each 0-100%)
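The AC Testability row, for instance, could be checked as below. The regexes are a sketch: the verb list covers only the examples the table gives, and a real check would likely use a longer list:

```python
import re

# Signals from the AC Testability row above; verb list is illustrative.
MEASURABLE_VERB = re.compile(r"\b(display|return|validate)\b", re.IGNORECASE)
GHERKIN = re.compile(r"\bgiven\b.*\bwhen\b.*\bthen\b", re.IGNORECASE | re.DOTALL)
HAS_NUMBER = re.compile(r"\d")

def ac_testable(ac: str) -> bool:
    """An acceptance criterion counts as testable if any signal fires."""
    return any(p.search(ac) for p in (MEASURABLE_VERB, GHERKIN, HAS_NUMBER))

print(ac_testable("Given a cart, when the user pays, then display a receipt"))  # True
print(ac_testable("The app should feel fast"))  # False
```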


### Dimension 5: Coverage (10%)

Map original requirements to spec requirements.

Flow:

  1. Extract original requirements from the product brief's objectives section
  2. Extract spec requirements from all documents (patterns: `- REQ-NNN:`, `- Requirement:`, `- Feature:`)
  3. For each original requirement, check keyword overlap with any spec requirement (threshold: 40%)
  4. Score = (covered_count / total_original) * 100
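The flow can be sketched as follows. The tokenizer and the choice to measure the 40% ratio against the original item's keywords are assumptions; the spec fixes only the threshold:

```python
import re

def _kw(text: str) -> set[str]:
    # Hypothetical tokenizer: lowercase alphabetic words of 3+ letters.
    return set(re.findall(r"[a-z]{3,}", text.lower()))

def coverage_score(original: list[str], spec_reqs: list[str],
                   threshold: float = 0.40) -> float:
    """Percent of original requirements matched by at least one spec
    requirement at >=40% keyword overlap, measured against the
    original item's keyword set (an assumption)."""
    if not original:
        return 100.0  # nothing to cover, per Error Handling below
    def covered_by(orig: str, spec: str) -> bool:
        ka = _kw(orig)
        return bool(ka) and len(ka & _kw(spec)) / len(ka) >= threshold
    covered = sum(1 for o in original
                  if any(covered_by(o, s) for s in spec_reqs))
    return covered / len(original) * 100

print(coverage_score(
    ["Support offline mode", "Export reports as PDF"],
    ["REQ-001: cache data for offline mode use",
     "REQ-002: render dashboards"]))  # 50.0
```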

## Quality Gate Decision Table

| Gate | Criteria | Message |
|------|----------|---------|
| PASS | Overall score >= 80% AND coverage >= 70% | Ready for implementation |
| FAIL | Overall score < 60% OR coverage < 50% | Major revisions required |
| REVIEW | All other cases | Improvements needed, may proceed with caution |
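Putting the dimension weights and the decision table together, a sketch (the function name and the shape of the `scores` dict are assumptions):

```python
# Weights from the Dimension Weights table.
WEIGHTS = {"completeness": 0.25, "consistency": 0.20,
           "traceability": 0.25, "depth": 0.20, "coverage": 0.10}

def quality_gate(scores: dict[str, float]) -> tuple[float, str, str]:
    """Compute the weighted overall score, then apply the decision
    table: PASS first, then FAIL, everything else is REVIEW."""
    overall = sum(scores[dim] * weight for dim, weight in WEIGHTS.items())
    coverage = scores["coverage"]
    if overall >= 80 and coverage >= 70:
        return overall, "PASS", "Ready for implementation"
    if overall < 60 or coverage < 50:
        return overall, "FAIL", "Major revisions required"
    return overall, "REVIEW", "Improvements needed, may proceed with caution"

overall, gate, message = quality_gate(
    {"completeness": 85, "consistency": 90,
     "traceability": 80, "depth": 75, "coverage": 72})
print(gate, message)  # PASS Ready for implementation
```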

## Phase 4: Validation

### Readiness Report Format

Write to `<session_folder>/spec/readiness-report.md`:

```markdown
# Specification Readiness Report

**Generated**: <timestamp>
**Overall Score**: <score>%
**Quality Gate**: <PASS|REVIEW|FAIL> - <message>
**Recommended Action**: <action>

## Dimension Scores

| Dimension | Score | Weight | Weighted Score |
|-----------|-------|--------|----------------|
| Completeness | <n>% | 25% | <n>% |
| Consistency | <n>% | 20% | <n>% |
| Traceability | <n>% | 25% | <n>% |
| Depth | <n>% | 20% | <n>% |
| Coverage | <n>% | 10% | <n>% |

## Completeness Analysis
(per-phase breakdown: sections present/expected, missing sections)

## Consistency Analysis
(issues by area: terminology, format, references, naming)

## Traceability Analysis
(complete chains / total, weak links)

## Depth Analysis
(per sub-dimension scores)

## Requirement Coverage
(covered / total, uncovered requirements list)
```

### Spec Summary Format

Write to `<session_folder>/spec/spec-summary.md`:

```markdown
# Specification Summary

**Overall Quality Score**: <score>%
**Quality Gate**: <gate>

## Documents Reviewed
(per document: phase, path, size, section list)

## Key Findings
### Strengths (dimensions scoring >= 80%)
### Areas for Improvement (dimensions scoring < 70%)
### Recommendations
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Spec folder empty | FAIL gate; report that no documents were found |
| Missing phase document | Score 0 for that phase in completeness; note in report |
| No original requirements found | Score coverage at 100% (nothing to cover) |
| Broken references | Flag in consistency; do not fail the entire review |