Add document standards, quality gates, and templates for team lifecycle phases

- Introduced `document-standards.md` to define the YAML frontmatter schema, naming conventions, and content structure for spec-generator outputs (see the illustrative sketch after this list).
- Created `quality-gates.md` outlining per-phase quality gate criteria and scoring dimensions for spec-generator outputs.
- Added templates for architecture documents, epics and stories, product briefs, and requirements PRDs to streamline documentation in their respective phases.
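For orientation, a frontmatter block conforming to such a schema might look like the sketch below. This is a hedged example only: every field name here is an illustrative assumption, since the actual schema lives in `document-standards.md` and is not shown in this diff.

```yaml
# Hypothetical frontmatter sketch -- all field names are assumptions,
# NOT taken from document-standards.md (the real schema is not shown in this commit).
---
title: Example Product Brief
type: product-brief   # assumed enum: architecture | epic | product-brief | prd
phase: discovery      # assumed: the lifecycle phase that produces the document
status: draft         # assumed workflow states: draft | in-review | approved
version: 0.1.0
---
```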
catlog22 committed 2026-03-04 23:54:20 +08:00
commit bbdd1840de (parent fd0c9efa4d)
103 changed files with 1959 additions and 1311 deletions

View File

@@ -2,7 +2,7 @@
 prefix: IMPL
 inner_loop: true
 additional_prefixes: [FIX]
-subagents: [explore]
+delegates_to: []
 message_types:
   success: impl_complete
   error: error
@@ -47,7 +47,7 @@ Implement optimization changes following the strategy plan. For FIX tasks, apply
 - **Independent pipeline**: Read `<session>/artifacts/pipelines/{P}/optimization-plan.md` -- extract this pipeline's plan
 4. For FIX: parse review/benchmark feedback for specific issues to address
-5. Use `explore` subagent to load implementation context for target files
+5. Use ACE search or CLI tools to load implementation context for target files
 6. For inner loop (single mode only): load context_accumulator from prior IMPL/FIX tasks
 **Shared-memory namespace**:

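Reading the first hunk as a whole, the IMPL agent's frontmatter after this commit comes out roughly as below. This is reconstructed from the hunk context above; the enclosing `---` delimiters are assumed, since the hunk starts at line 2 of the file and the delimiters themselves are not shown.

```yaml
---                           # assumed delimiter (the hunk begins at file line 2)
prefix: IMPL
inner_loop: true
additional_prefixes: [FIX]
delegates_to: []              # replaces the former `subagents: [explore]`
message_types:
  success: impl_complete
  error: error
---                           # assumed closing delimiter
```

The same `subagents:` to `delegates_to: []` rename recurs in the PROFILE, REVIEW, and STRATEGY files below, alongside matching prose edits that swap subagent invocations for ACE search or CLI tools.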
View File

@@ -1,7 +1,7 @@
 ---
 prefix: PROFILE
 inner_loop: false
-subagents: [explore]
+delegates_to: []
 message_types:
   success: profile_complete
   error: error
@@ -31,7 +31,7 @@ Profile application performance to identify CPU, memory, I/O, network, and rende
 | CLI entry / bin/ directory | CLI Tool | Startup time, throughput, memory peak |
 | No detection | Generic | All profiling dimensions |
-3. Use `explore` subagent to map performance-critical code paths within target scope
+3. Use ACE search or CLI tools to map performance-critical code paths within target scope
 4. Detect available profiling tools (test runners, benchmark harnesses, linting tools)
 ## Phase 3: Performance Profiling

View File

@@ -3,7 +3,7 @@ prefix: REVIEW
 inner_loop: false
 additional_prefixes: [QUALITY]
 discuss_rounds: [DISCUSS-REVIEW]
-subagents: [discuss]
+delegates_to: []
 message_types:
   success: review_complete
   error: error
@@ -65,7 +65,7 @@ Per-dimension review process:
 - Record findings with severity (Critical / High / Medium / Low)
 - Include specific file:line references and suggested fixes
-If any Critical findings detected, invoke `discuss` subagent (DISCUSS-REVIEW round) to validate the assessment before issuing verdict.
+If any Critical findings detected, use CLI tools for multi-perspective validation (DISCUSS-REVIEW round) to validate the assessment before issuing verdict.
 ## Phase 4: Verdict & Feedback

View File

@@ -2,7 +2,7 @@
 prefix: STRATEGY
 inner_loop: false
 discuss_rounds: [DISCUSS-OPT]
-subagents: [discuss]
+delegates_to: []
 message_types:
   success: strategy_complete
   error: error
@@ -55,7 +55,7 @@ Prioritize optimizations by impact/effort ratio:
 | P2 (Medium) | Medium impact + Low effort |
 | P3 (Low) | Low impact or High effort -- defer |
-If complexity is High, invoke `discuss` subagent (DISCUSS-OPT round) to evaluate trade-offs between competing strategies before finalizing the plan.
+If complexity is High, use CLI tools for multi-perspective analysis (DISCUSS-OPT round) to evaluate trade-offs between competing strategies before finalizing the plan.
 Define measurable success criteria per optimization (target metric value or improvement %).