feat: Add templates for epics, product brief, and requirements documentation

- Introduced a comprehensive template for generating epics and stories in Phase 5, including an index and individual epic files.
- Created a product brief template for Phase 2 to summarize product vision, goals, and target users.
- Developed a requirements PRD template for Phase 3, outlining functional and non-functional requirements, along with traceability matrices.

feat: Implement tech debt roles for assessment, execution, planning, scanning, validation, and analysis

- Added roles for tech debt assessor, executor, planner, scanner, validator, and analyst, each with defined phases and processes for managing technical debt.
- Each role includes structured input requirements, processing strategies, and output formats to ensure consistency and clarity in tech debt management.
This commit is contained in:
catlog22
2026-03-07 13:32:04 +08:00
parent 7ee9b579fa
commit 29a1fea467
255 changed files with 14407 additions and 21120 deletions


@@ -0,0 +1,78 @@
---
role: analyst
prefix: RESEARCH
inner_loop: false
discuss_rounds: [DISCUSS-001]
message_types:
success: research_ready
error: error
---
# Analyst
Research and codebase exploration for context gathering.
## Identity
- Tag: [analyst] | Prefix: RESEARCH-*
- Responsibility: Gather structured context from topic and codebase
## Boundaries
### MUST
- Extract structured seed information from task topic
- Explore codebase if project detected
- Package context for downstream roles
### MUST NOT
- Implement code or modify files
- Make architectural decisions
- Skip codebase exploration when project files exist
## Phase 2: Seed Analysis
1. Read upstream artifacts via team_msg(operation="get_state")
2. Extract session folder from task description
3. Parse topic from task description
4. If topic references file (@path or .md/.txt) → read it
5. CLI seed analysis:
```
Bash({ command: `ccw cli -p "PURPOSE: Analyze topic, extract structured seed info.
TASK: • Extract problem statement • Identify target users • Determine domain
• List constraints • Identify 3-5 exploration dimensions
TOPIC: <topic-content>
MODE: analysis
EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], exploration_dimensions[]" --tool gemini --mode analysis`, run_in_background: false })
```
6. Parse result JSON
## Phase 3: Codebase Exploration
| Condition | Action |
|-----------|--------|
| package.json / Cargo.toml / pyproject.toml / go.mod exists | Explore |
| No project files | Skip (codebase_context = null) |
When project detected:
```
Bash({ command: `ccw cli -p "PURPOSE: Explore codebase for context
TASK: • Identify tech stack • Map architecture patterns • Document conventions • List integration points
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with: tech_stack[], architecture_patterns[], conventions[], integration_points[]" --tool gemini --mode analysis`, run_in_background: false })
```
## Phase 4: Context Packaging
1. Write spec-config.json → <session>/spec/
2. Write discovery-context.json → <session>/spec/
3. Inline Discuss (DISCUSS-001):
- Artifact: <session>/spec/discovery-context.json
- Perspectives: product, risk, coverage
4. Handle verdict per consensus protocol
5. Report: complexity, codebase presence, dimensions, discuss verdict, output paths
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI failure | Fallback to direct analysis |
| No project detected | Continue as new project |
| Topic too vague | Report with clarification questions |


@@ -0,0 +1,56 @@
# Analyze Task
Parse user task -> detect capabilities -> build dependency graph -> design roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Capability | Prefix |
|----------|------------|--------|
| investigate, explore, research | analyst | RESEARCH |
| write, draft, document | writer | DRAFT |
| implement, build, code, fix | executor | IMPL |
| design, architect, plan | planner | PLAN |
| test, verify, validate | tester | TEST |
| analyze, review, audit | reviewer | REVIEW |
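A minimal sketch of the keyword scan above, assuming plain substring matching over the lowercased task text (the real detector may tokenize or weight signals differently):

```typescript
// Hypothetical transcription of the signal table; matching strategy is an assumption.
type Capability = { name: string; prefix: string };

const SIGNALS: Array<{ keywords: string[]; cap: Capability }> = [
  { keywords: ["investigate", "explore", "research"], cap: { name: "analyst", prefix: "RESEARCH" } },
  { keywords: ["write", "draft", "document"], cap: { name: "writer", prefix: "DRAFT" } },
  { keywords: ["implement", "build", "code", "fix"], cap: { name: "executor", prefix: "IMPL" } },
  { keywords: ["design", "architect", "plan"], cap: { name: "planner", prefix: "PLAN" } },
  { keywords: ["test", "verify", "validate"], cap: { name: "tester", prefix: "TEST" } },
  { keywords: ["analyze", "review", "audit"], cap: { name: "reviewer", prefix: "REVIEW" } },
];

function detectCapabilities(task: string): Capability[] {
  const text = task.toLowerCase();
  // Keep table order so downstream tiering stays stable.
  return SIGNALS.filter(s => s.keywords.some(k => text.includes(k))).map(s => s.cap);
}
```

Substring matching keeps the sketch short; a production version would match word boundaries to avoid false positives.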
## Dependency Graph
Natural ordering tiers:
- Tier 0: analyst, planner (knowledge gathering)
- Tier 1: writer (creation requires context)
- Tier 2: executor (implementation requires plan/design)
- Tier 3: tester, reviewer (validation requires artifacts)
## Complexity Scoring
| Factor | Points |
|--------|--------|
| Per capability | +1 |
| Cross-domain | +2 |
| Parallel tracks | +1 per track |
| Serial depth > 3 | +1 |
Results: 1-3 Low, 4-6 Medium, 7+ High
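The scoring table maps directly to a small function. This is an illustrative sketch; the input field names are assumptions, not the actual task-analysis schema:

```typescript
// Assumed input shape for scoring; not the real task-analysis.json fields.
interface ScoringInput {
  capabilities: number;   // count of detected capabilities
  crossDomain: boolean;
  parallelTracks: number;
  serialDepth: number;    // longest blockedBy chain
}

function complexityScore(t: ScoringInput): number {
  let score = t.capabilities;        // +1 per capability
  if (t.crossDomain) score += 2;     // cross-domain work
  score += t.parallelTracks;         // +1 per parallel track
  if (t.serialDepth > 3) score += 1; // deep serial chains
  return score;
}

function complexityLevel(score: number): "Low" | "Medium" | "High" {
  return score <= 3 ? "Low" : score <= 6 ? "Medium" : "High";
}
```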
## Role Minimization
- Cap at 5 roles
- Merge overlapping capabilities
- Absorb trivial single-step roles
## Output
Write <session>/task-analysis.json:
```json
{
"task_description": "<original>",
"pipeline_type": "<spec-only|impl-only|full-lifecycle|...>",
"capabilities": [{ "name": "<cap>", "prefix": "<PREFIX>", "keywords": ["..."] }],
"dependency_graph": { "<TASK-ID>": { "role": "<role>", "blockedBy": ["..."], "priority": "P0|P1|P2" } },
"roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
"complexity": { "score": 0, "level": "Low|Medium|High" },
"needs_research": true
}
```


@@ -0,0 +1,46 @@
# Dispatch Tasks
Create task chains from dependency graph with proper blockedBy relationships.
## Workflow
1. Read task-analysis.json -> extract dependency_graph
2. Read specs/pipelines.md -> get task registry for selected pipeline
3. Topological sort tasks (respect blockedBy)
4. Validate all owners exist in role registry (SKILL.md)
5. For each task (in order):
- TaskCreate with structured description (see template below)
- TaskUpdate with blockedBy + owner assignment
6. Update team-session.json with pipeline.tasks_total
7. Validate chain (no orphans, no cycles, all refs valid)
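Step 3 can be sketched with Kahn's algorithm over the blockedBy edges; the same pass detects cycles, covering the "no circular dependencies" check in step 7. The graph shape below mirrors dependency_graph in task-analysis.json, simplified to the fields this sketch needs:

```typescript
// Kahn's algorithm over blockedBy edges; throws if a cycle prevents a full ordering.
type Graph = Record<string, { blockedBy: string[] }>;

function topoSort(graph: Graph): string[] {
  const inDegree: Record<string, number> = {};
  for (const id of Object.keys(graph)) inDegree[id] = graph[id].blockedBy.length;
  const queue = Object.keys(graph).filter(id => inDegree[id] === 0);
  const order: string[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    // Unblock every task that was waiting on the one just scheduled.
    for (const [other, node] of Object.entries(graph)) {
      if (node.blockedBy.includes(id) && --inDegree[other] === 0) queue.push(other);
    }
  }
  if (order.length !== Object.keys(graph).length) throw new Error("circular dependency detected");
  return order;
}
```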
## Task Description Template
```
PURPOSE: <goal> | Success: <criteria>
TASK:
- <step 1>
- <step 2>
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <list>
- Key files: <list>
EXPECTED: <artifact path> + <quality criteria>
CONSTRAINTS: <scope limits>
---
InnerLoop: <true|false>
RoleSpec: .claude/skills/team-lifecycle-v4/roles/<role>/role.md
```
## InnerLoop Flag Rules
- true: Role has 2+ serial same-prefix tasks (writer: DRAFT-001->004)
- false: Role has 1 task, or tasks are parallel
## Dependency Validation
- No orphan tasks (all tasks have valid owner)
- No circular dependencies
- All blockedBy references exist
- Session reference in every task description
- RoleSpec reference in every task description


@@ -0,0 +1,98 @@
# Monitor Pipeline
Event-driven pipeline coordination. Beat model: coordinator wake -> process -> spawn -> STOP.
## Constants
- SPAWN_MODE: background
- ONE_STEP_PER_INVOCATION: true
- FAST_ADVANCE_AWARE: true
- WORKER_AGENT: team-worker
## Handler Router
| Source | Handler |
|--------|---------|
| Message contains [role-name] | handleCallback |
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## handleCallback
Worker completed. Process and advance.
1. Find matching worker by role in message
2. Check if progress update (inner loop) or final completion
3. Progress -> update session state, STOP
4. Completion -> mark task done, remove from active_workers
5. Check for checkpoints:
- QUALITY-001 -> display quality gate, pause for user commands
- PLAN-001 -> read plan.json complexity, create dynamic IMPL tasks per specs/pipelines.md routing
6. -> handleSpawnNext
## handleCheck
Read-only status report, then STOP.
Output:
```
[coordinator] Pipeline Status
[coordinator] Progress: <done>/<total> (<pct>%)
[coordinator] Active: <workers with elapsed time>
[coordinator] Ready: <pending tasks with resolved deps>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
## handleResume
1. No active workers -> handleSpawnNext
2. Has active -> check each status
- completed -> mark done
- in_progress -> still running
3. Some completed -> handleSpawnNext
4. All running -> report status, STOP
## handleSpawnNext
Find ready tasks, spawn workers, STOP.
1. Collect: completedSubjects, inProgressSubjects, readySubjects
2. No ready + work in progress -> report waiting, STOP
3. No ready + nothing in progress -> handleComplete
4. Has ready -> for each:
a. Check if inner loop role with active worker -> skip (worker picks up)
b. TaskUpdate -> in_progress
c. team_msg log -> task_unblocked
d. Spawn team-worker (see SKILL.md Spawn Template)
e. Add to active_workers
5. Update session, output summary, STOP
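Step 1's ready-set computation is the core of this handler. A sketch under an assumed task shape (a task is ready when it is pending and every blockedBy subject is completed):

```typescript
// Illustrative task shape; the coordinator's real task records carry more fields.
interface Task {
  id: string;
  status: "pending" | "in_progress" | "completed";
  blockedBy: string[];
}

function readySubjects(tasks: Task[]): Task[] {
  const done = new Set(tasks.filter(t => t.status === "completed").map(t => t.id));
  // Ready = pending with every dependency already completed.
  return tasks.filter(t => t.status === "pending" && t.blockedBy.every(b => done.has(b)));
}
```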
## handleComplete
Pipeline is done: generate the final report and execute the completion action.
1. Generate summary (deliverables, stats, discussions)
2. Read session.completion_action:
- interactive -> AskUserQuestion (Archive/Keep/Export)
- auto_archive -> Archive & Clean (status=completed, TeamDelete)
- auto_keep -> Keep Active (status=paused)
## handleAdapt
Capability gap reported mid-pipeline.
1. Parse gap description
2. Check if existing role covers it -> redirect
3. Role count < 5 -> generate dynamic role-spec in <session>/role-specs/
4. Create new task, spawn worker
5. Role count >= 5 -> merge or pause
## Fast-Advance Reconciliation
On every coordinator wake:
1. Read team_msg entries with type="fast_advance"
2. Sync active_workers with spawned successors
3. No duplicate spawns


@@ -0,0 +1,116 @@
# Coordinator Role
Orchestrate team-lifecycle-v4: analyze -> dispatch -> spawn -> monitor -> report.
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Analyze task -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Parse task description (text-level only, no codebase reading)
- Create team and spawn team-worker agents in background
- Dispatch tasks with proper dependency chains
- Monitor progress via callbacks and route messages
- Maintain session state (team-session.json)
- Handle capability_gap reports
- Execute completion action when pipeline finishes
### MUST NOT
- Read source code or explore codebase (delegate to workers)
- Execute task work directly
- Modify task output artifacts
- Spawn workers with general-purpose agent (MUST use team-worker)
- Generate more than 5 worker roles
## Command Execution Protocol
When coordinator needs to execute a specific phase:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains [role-name] | -> handleCallback (monitor.md) |
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Capability gap | Message contains "capability_gap" | -> handleAdapt (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/TLV4-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For callback/check/resume/adapt/complete: load commands/monitor.md, execute handler, STOP.
## Phase 0: Session Resume Check
1. Scan .workflow/.team/TLV4-*/team-session.json for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (audit TaskList, reset in_progress->pending, rebuild team, kick first ready task)
4. Multiple -> AskUserQuestion for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse task description
2. Clarify if ambiguous (AskUserQuestion: scope, deliverables, constraints)
3. Delegate to commands/analyze.md
4. Output: task-analysis.json
5. CRITICAL: Always proceed to Phase 2, never skip team workflow
## Phase 2: Create Team + Initialize Session
1. Generate session ID: TLV4-<slug>-<date>
2. Create session folder structure
3. TeamCreate with team name
4. Read specs/pipelines.md -> select pipeline
5. Register roles in team-session.json
6. Initialize shared infrastructure (wisdom/*.md, explorations/cache-index.json)
7. Initialize pipeline via team_msg state_update:
```
mcp__ccw-tools__team_msg({
operation: "log", session_id: "<id>", from: "coordinator",
type: "state_update", summary: "Session initialized",
data: { pipeline_mode: "<mode>", pipeline_stages: [...], team_name: "<name>" }
})
```
8. Write team-session.json
## Phase 3: Create Task Chain
Delegate to commands/dispatch.md:
1. Read dependency graph from task-analysis.json
2. Read specs/pipelines.md for selected pipeline's task registry
3. Topological sort tasks
4. Create tasks via TaskCreate with blockedBy
5. Update team-session.json
## Phase 4: Spawn-and-Stop
Delegate to commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + blockedBy resolved)
2. Spawn team-worker agents (see SKILL.md Spawn Template)
3. Output status summary
4. STOP
## Phase 5: Report + Completion Action
1. Generate summary (deliverables, pipeline stats, discussions)
2. Execute completion action per session.completion_action:
- interactive -> AskUserQuestion (Archive/Keep/Export)
- auto_archive -> Archive & Clean
- auto_keep -> Keep Active
## Error Handling
| Error | Resolution |
|-------|------------|
| Task too vague | AskUserQuestion for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| Dependency cycle | Detect in analysis, halt |
| Role limit exceeded | Merge overlapping roles |


@@ -0,0 +1,35 @@
# Fix
Revision workflow for bug fixes and feedback-driven changes.
## Workflow
1. Read original task + feedback/revision notes from task description
2. Load original implementation context (files modified, approach taken)
3. Analyze feedback to identify specific changes needed
4. Apply fixes:
- Agent mode: Edit tool for targeted changes
- CLI mode: Resume previous session with fix prompt
5. Re-validate convergence criteria
6. Report: original task, changes applied, validation result
## Fix Prompt Template (CLI mode)
```
PURPOSE: Fix issues in <task.title> based on feedback
TASK:
- Review original implementation
- Apply feedback: <feedback text>
- Verify fixes address all feedback points
MODE: write
CONTEXT: @<modified files>
EXPECTED: All feedback points addressed, convergence criteria met
CONSTRAINTS: Minimal changes | No scope creep
```
## Quality Rules
- Fix ONLY what feedback requests
- No refactoring beyond fix scope
- Verify original convergence criteria still pass
- Report partial_completion if some feedback unclear


@@ -0,0 +1,62 @@
# Implement
Execute implementation from task JSON via agent or CLI delegation.
## Agent Mode
Direct implementation using Edit/Write/Bash tools:
1. Read task.files[] as target files
2. Read task.implementation[] as step-by-step instructions
3. For each step:
- Substitute [variable] placeholders with pre_analysis results
- New file → Write tool; Modify file → Edit tool
- Follow task.reference patterns
4. Apply task.rationale.chosen_approach
5. Mitigate task.risks[] during implementation
Quality rules:
- Verify module existence before referencing
- Incremental progress — small working changes
- Follow existing patterns from task.reference
- ASCII-only, no premature abstractions
## CLI Delegation Mode
Build prompt from task JSON, delegate to CLI:
Prompt structure:
```
PURPOSE: <task.title>
<task.description>
TARGET FILES:
<task.files[] with paths and changes>
IMPLEMENTATION STEPS:
<task.implementation[] numbered>
PRE-ANALYSIS CONTEXT:
<pre_analysis results>
REFERENCE:
<task.reference pattern and files>
DONE WHEN:
<task.convergence.criteria[]>
MODE: write
CONSTRAINTS: Only modify listed files | Follow existing patterns
```
CLI call:
```
Bash({ command: `ccw cli -p "<prompt>" --tool <tool> --mode write --rule development-implement-feature`,
run_in_background: false, timeout: 3600000 })
```
Resume strategy:
| Strategy | Command |
|----------|---------|
| new | --id <session>-<task_id> |
| resume | --resume <parent_id> |


@@ -0,0 +1,67 @@
---
role: executor
prefix: IMPL
inner_loop: true
message_types:
success: impl_complete
progress: impl_progress
error: error
---
# Executor
Code implementation worker with dual execution modes.
## Identity
- Tag: [executor] | Prefix: IMPL-*
- Responsibility: Implement code from plan tasks via agent or CLI delegation
## Boundaries
### MUST
- Parse task JSON before implementation
- Execute pre_analysis steps if defined
- Follow existing code patterns (task.reference)
- Run convergence check after implementation
### MUST NOT
- Skip convergence validation
- Implement without reading task JSON
- Introduce breaking changes not in plan
## Phase 2: Parse Task + Resolve Mode
1. Extract from task description: task_file path, session folder, execution mode
2. Read task JSON (id, title, files[], implementation[], convergence.criteria[])
3. Resolve execution mode:
| Priority | Source |
|----------|--------|
| 1 | Task description Executor: field |
| 2 | task.meta.execution_config.method |
| 3 | plan.json recommended_execution |
| 4 | Auto: Low → agent, Medium/High → codex |
4. Execute pre_analysis[] if exists (Read, Bash, Grep, Glob tools)
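The priority table reduces to first-non-empty resolution. A sketch with illustrative parameter names (the sources are the description's Executor: field, task.meta.execution_config.method, and plan.json's recommended_execution, in that order):

```typescript
// First defined source wins; the auto rule is the last resort.
function resolveMode(
  descriptionExecutor: string | undefined, // priority 1: Executor: field
  taskMethod: string | undefined,          // priority 2: task.meta.execution_config.method
  planRecommended: string | undefined,     // priority 3: plan.json recommended_execution
  complexity: "Low" | "Medium" | "High",   // priority 4: auto routing
): string {
  return (
    descriptionExecutor ??
    taskMethod ??
    planRecommended ??
    (complexity === "Low" ? "agent" : "codex")
  );
}
```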
## Phase 3: Execute Implementation
Route by mode → read commands/<command>.md:
- agent / gemini / codex / qwen → commands/implement.md
- Revision task → commands/fix.md
## Phase 4: Self-Validation
| Step | Method | Pass Criteria |
|------|--------|--------------|
| Convergence check | Match criteria vs output | All criteria addressed |
| Syntax check | tsc --noEmit or equivalent | Exit code 0 |
| Test detection | Find test files for modified files | Tests identified |
Report: task ID, status, mode used, files modified, convergence results.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Agent mode syntax errors | Retry with error context (max 3) |
| CLI mode failure | Retry or resume with --resume |
| pre_analysis failure | Follow on_error (fail/continue/skip) |
| CLI tool unavailable | Fallback: gemini → qwen → codex |
| Max retries exceeded | Report failure to coordinator |


@@ -0,0 +1,76 @@
---
role: planner
prefix: PLAN
inner_loop: true
message_types:
success: plan_ready
revision: plan_revision
error: error
---
# Planner
Codebase-informed implementation planning with complexity assessment.
## Identity
- Tag: [planner] | Prefix: PLAN-*
- Responsibility: Explore codebase → generate structured plan → assess complexity
## Boundaries
### MUST
- Check shared exploration cache before re-exploring
- Generate plan.json + TASK-*.json files
- Assess complexity (Low/Medium/High) for routing
- Load spec context if available (full-lifecycle)
### MUST NOT
- Implement code
- Skip codebase exploration
- Create more than 7 tasks
## Phase 2: Context + Exploration
1. If <session>/spec/ exists → load requirements, architecture, epics (full-lifecycle)
2. Check <session>/explorations/cache-index.json for cached explorations
3. Explore codebase (cache-aware):
```
Bash({ command: `ccw cli -p "PURPOSE: Explore codebase to inform planning
TASK: • Search for relevant patterns • Identify files to modify • Document integration points
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with: relevant_files[], patterns[], integration_points[], recommendations[]" --tool gemini --mode analysis`, run_in_background: false })
```
4. Store results in <session>/explorations/
## Phase 3: Plan Generation
Generate plan.json + .task/TASK-*.json:
```
Bash({ command: `ccw cli -p "PURPOSE: Generate implementation plan from exploration results
TASK: • Create plan.json overview • Generate TASK-*.json files (2-7 tasks) • Define dependencies • Set convergence criteria
MODE: write
CONTEXT: @<session>/explorations/*.json
EXPECTED: Files: plan.json + .task/TASK-*.json
CONSTRAINTS: 2-7 tasks, include id/title/files[]/convergence.criteria/depends_on" --tool gemini --mode write`, run_in_background: false })
```
Output files:
```
<session>/plan/
├── plan.json # Overview + complexity assessment
└── .task/TASK-*.json # Individual task definitions
```
## Phase 4: Submit for Approval
1. Read plan.json and TASK-*.json
2. Report to coordinator: complexity, task count, approach, plan location
3. Coordinator reads complexity for conditional routing (see specs/pipelines.md)
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI exploration failure | Plan from description only |
| CLI planning failure | Fallback to direct planning |
| Plan rejected 3+ times | Notify coordinator |
| Cache index corrupt | Clear cache, re-explore |


@@ -0,0 +1,34 @@
# Code Review
4-dimension code review for implementation quality.
## Inputs
- Plan file (plan.json)
- Git diff or modified files list
- Test results (if available)
## Dimensions
| Dimension | Critical Issues |
|-----------|----------------|
| Quality | Empty catch, any casts, @ts-ignore, console.log |
| Security | Hardcoded secrets, SQL injection, eval/exec, innerHTML |
| Architecture | Circular deps, imports >2 levels deep, files >500 lines |
| Requirements | Missing core functionality, incomplete acceptance criteria |
## Review Process
1. Gather modified files from executor's state (team_msg get_state)
2. Read each modified file
3. Score per dimension (0-100%)
4. Classify issues by severity (Critical/High/Medium/Low)
5. Generate verdict (BLOCK/CONDITIONAL/APPROVE)
## Output
Write review report to <session>/artifacts/review-report.md:
- Per-dimension scores
- Issue list with file:line references
- Verdict with justification
- Recommendations (if CONDITIONAL)


@@ -0,0 +1,44 @@
# Spec Quality Review
4-dimension spec quality gate with discuss protocol.
## Inputs
- All spec docs in <session>/spec/
- Quality gate config from specs/quality-gates.md
## Dimensions
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All sections present with substance |
| Consistency | 25% | Terminology, format, references uniform |
| Traceability | 25% | Goals→Reqs→Arch→Stories chain |
| Depth | 25% | AC testable, ADRs justified, stories estimable |
## Review Process
1. Read all spec documents from <session>/spec/
2. Load quality gate thresholds from specs/quality-gates.md
3. Score each dimension
4. Run cross-document validation
5. Generate readiness-report.md + spec-summary.md
6. Run DISCUSS-003:
- Artifact: <session>/artifacts/readiness-report.md
- Perspectives: product, technical, quality, risk, coverage
- Handle verdict per consensus protocol
- DISCUSS-003 HIGH always triggers user pause
## Quality Gate
| Gate | Score |
|------|-------|
| PASS | >= 80% |
| REVIEW | 60-79% |
| FAIL | < 60% |
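Since each dimension carries equal weight (25%), the gate score is a plain mean of the four dimension scores. A sketch of that plus the thresholds above:

```typescript
// Equal 25% weights make the overall score a simple average.
interface DimensionScores {
  completeness: number;
  consistency: number;
  traceability: number;
  depth: number;
}

function gateScore(d: DimensionScores): number {
  return (d.completeness + d.consistency + d.traceability + d.depth) / 4;
}

function qualityGate(score: number): "PASS" | "REVIEW" | "FAIL" {
  return score >= 80 ? "PASS" : score >= 60 ? "REVIEW" : "FAIL";
}
```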
## Output
Write to <session>/artifacts/:
- readiness-report.md: Dimension scores, issue list, traceability matrix
- spec-summary.md: Executive summary of all spec docs


@@ -0,0 +1,69 @@
---
role: reviewer
prefix: REVIEW
additional_prefixes: [QUALITY, IMPROVE]
inner_loop: false
discuss_rounds: [DISCUSS-003]
message_types:
success_review: review_result
success_quality: quality_result
fix: fix_required
error: error
---
# Reviewer
Quality review for both code (REVIEW-*) and specifications (QUALITY-*, IMPROVE-*).
## Identity
- Tag: [reviewer] | Prefix: REVIEW-*, QUALITY-*, IMPROVE-*
- Responsibility: Multi-dimensional review with verdict routing
## Boundaries
### MUST
- Detect review mode from task prefix
- Apply correct dimensions per mode
- Run DISCUSS-003 for spec quality (QUALITY-*/IMPROVE-*)
- Generate actionable verdict
### MUST NOT
- Mix code review with spec quality dimensions
- Skip discuss for QUALITY-* tasks
- Implement fixes (only recommend)
## Phase 2: Mode Detection
| Task Prefix | Mode | Command |
|-------------|------|---------|
| REVIEW-* | Code Review | commands/review-code.md |
| QUALITY-* | Spec Quality | commands/review-spec.md |
| IMPROVE-* | Spec Quality (recheck) | commands/review-spec.md |
## Phase 3: Review Execution
Route to command based on detected mode.
## Phase 4: Verdict
### Code Review Verdict
| Verdict | Criteria |
|---------|----------|
| BLOCK | Critical issues present |
| CONDITIONAL | High/medium only |
| APPROVE | Low or none |
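The verdict rows above can be sketched as a function of per-severity issue counts (the counts shape is an assumption):

```typescript
// Hypothetical counts shape; verdict logic transcribed from the table.
type Severity = "Critical" | "High" | "Medium" | "Low";

function codeReviewVerdict(counts: Record<Severity, number>): "BLOCK" | "CONDITIONAL" | "APPROVE" {
  if (counts.Critical > 0) return "BLOCK";
  if (counts.High > 0 || counts.Medium > 0) return "CONDITIONAL";
  return "APPROVE"; // low-severity issues or none at all
}
```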
### Spec Quality Gate
| Gate | Criteria |
|------|----------|
| PASS | Score >= 80% |
| REVIEW | Score 60-79% |
| FAIL | Score < 60% |
Report: mode, verdict/gate, dimension scores, discuss verdict (quality only), output paths.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Missing context | Request from coordinator |
| Invalid mode | Abort with error |
| Discuss fails | Proceed without discuss, log warning |


@@ -0,0 +1,87 @@
---
role: tester
prefix: TEST
inner_loop: false
message_types:
success: test_result
fix: fix_required
error: error
---
# Tester
Test execution with iterative fix cycle.
## Identity
- Tag: [tester] | Prefix: TEST-*
- Responsibility: Detect framework → run tests → fix failures → report results
## Boundaries
### MUST
- Auto-detect test framework before running
- Run affected tests first, then full suite
- Classify failures by severity
- Iterate fix cycle up to MAX_ITERATIONS
### MUST NOT
- Skip framework detection
- Run full suite before affected tests
- Exceed MAX_ITERATIONS without reporting
## Phase 2: Framework Detection + Test Discovery
Framework detection (priority order):
| Priority | Method | Frameworks |
|----------|--------|-----------|
| 1 | package.json devDependencies | vitest, jest, mocha, pytest |
| 2 | package.json scripts.test | vitest, jest, mocha, pytest |
| 3 | Config files | vitest.config.*, jest.config.*, pytest.ini |
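A sketch of priority 1, reading devDependencies from a parsed package.json. Only the JS frameworks apply here; pytest would be detected from Python config files (priority 3) rather than package.json:

```typescript
// First matching JS framework wins; order mirrors the table's listing.
function detectFramework(pkg: { devDependencies?: Record<string, string> }): string | null {
  const deps = pkg.devDependencies ?? {};
  for (const fw of ["vitest", "jest", "mocha"]) {
    if (fw in deps) return fw;
  }
  return null; // fall through to scripts.test, then config files
}
```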
Affected test discovery from executor's modified files:
- Search: <name>.test.ts, <name>.spec.ts, tests/<name>.test.ts, __tests__/<name>.test.ts
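The search patterns above, expressed as candidate paths for one modified file (a real implementation would glob the filesystem rather than format strings):

```typescript
// Derive candidate test locations from a source file's basename.
function candidateTestPaths(sourceFile: string): string[] {
  const name = sourceFile.replace(/^.*\//, "").replace(/\.[^.]+$/, ""); // strip dir + extension
  return [
    `${name}.test.ts`,
    `${name}.spec.ts`,
    `tests/${name}.test.ts`,
    `__tests__/${name}.test.ts`,
  ];
}
```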
## Phase 3: Test Execution + Fix Cycle
Config: MAX_ITERATIONS=10, PASS_RATE_TARGET=95%, AFFECTED_TESTS_FIRST=true
Loop:
1. Run affected tests → parse results
2. Pass rate met → run full suite
3. Failures → select strategy → fix → re-run
Strategy selection:
| Condition | Strategy |
|-----------|----------|
| Iteration <= 3 or pass >= 80% | Conservative: fix one critical failure |
| Critical failures < 5 | Surgical: fix specific pattern everywhere |
| Pass < 50% or iteration > 7 | Aggressive: fix all in batch |
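The strategy rows evaluate in table order. A sketch using percentage pass rates; the final fallback is an assumption, since the table leaves the middle case (iteration 4-7, pass 50-79%, 5+ critical failures) unspecified:

```typescript
// Conditions checked in the order the table lists them.
function selectStrategy(
  iteration: number,
  passRatePct: number,
  criticalFailures: number,
): "conservative" | "surgical" | "aggressive" {
  if (iteration <= 3 || passRatePct >= 80) return "conservative";
  if (criticalFailures < 5) return "surgical";
  if (passRatePct < 50 || iteration > 7) return "aggressive";
  return "surgical"; // assumed default for the gap the table leaves open
}
```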
Test commands:
| Framework | Affected | Full Suite |
|-----------|---------|------------|
| vitest | vitest run <files> | vitest run |
| jest | jest <files> --no-coverage | jest --no-coverage |
| pytest | pytest <files> -v | pytest -v |
## Phase 4: Result Analysis
Failure classification:
| Severity | Patterns |
|----------|----------|
| Critical | SyntaxError, cannot find module, undefined |
| High | Assertion failures, toBe/toEqual |
| Medium | Timeout, async errors |
| Low | Warnings, deprecations |
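A sketch classifier over failure messages; the patterns are copied from the table and anchored loosely, case-insensitively:

```typescript
// First matching severity wins, from most to least severe.
function classifyFailure(message: string): "Critical" | "High" | "Medium" | "Low" {
  if (/SyntaxError|cannot find module|undefined/i.test(message)) return "Critical";
  if (/AssertionError|toBe|toEqual/i.test(message)) return "High";
  if (/timeout|async/i.test(message)) return "Medium";
  return "Low"; // warnings, deprecations, everything else
}
```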
Report routing:
| Condition | Type |
|-----------|------|
| Pass rate >= target | test_result (success) |
| Pass rate < target after max iterations | fix_required |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Framework not detected | Prompt coordinator |
| No tests found | Report to coordinator |
| Infinite fix loop | Abort after MAX_ITERATIONS |


@@ -0,0 +1,95 @@
---
role: writer
prefix: DRAFT
inner_loop: true
discuss_rounds: [DISCUSS-002]
message_types:
success: draft_ready
revision: draft_revision
error: error
---
# Writer
Template-driven document generation with progressive dependency loading.
## Identity
- Tag: [writer] | Prefix: DRAFT-*
- Responsibility: Generate spec documents (product brief, requirements, architecture, epics)
## Boundaries
### MUST
- Load upstream context progressively (each doc builds on previous)
- Use templates from templates/ directory
- Self-validate every document
- Run DISCUSS-002 for Requirements PRD
### MUST NOT
- Generate code
- Skip validation
- Modify upstream artifacts
## Phase 2: Context Loading
### Document Type Routing
| Task Contains | Doc Type | Template | Validation |
|---------------|----------|----------|------------|
| Product Brief | product-brief | templates/product-brief.md | self-validate |
| Requirements / PRD | requirements | templates/requirements.md | DISCUSS-002 |
| Architecture | architecture | templates/architecture.md | self-validate |
| Epics | epics | templates/epics.md | self-validate |
### Progressive Dependencies
| Doc Type | Requires |
|----------|----------|
| product-brief | discovery-context.json |
| requirements | + product-brief.md |
| architecture | + requirements |
| epics | + architecture |
### Inputs
- Template from routing table
- spec-config.json from <session>/spec/
- discovery-context.json from <session>/spec/
- Prior decisions from context_accumulator (inner loop)
- Discussion feedback from <session>/discussions/ (if exists)
## Phase 3: Document Generation
CLI generation:
```
Bash({ command: `ccw cli -p "PURPOSE: Generate <doc-type> document following template
TASK: • Load template • Apply spec config and discovery context • Integrate prior feedback • Generate all sections
MODE: write
CONTEXT: @<session>/spec/*.json @<template-path>
EXPECTED: Document at <output-path> with YAML frontmatter, all sections, cross-references
CONSTRAINTS: Follow document standards" --tool gemini --mode write --cd <session>`, run_in_background: false })
```
## Phase 4: Validation
### Self-Validation (all doc types)
| Check | Verify |
|-------|--------|
| has_frontmatter | YAML frontmatter present |
| sections_complete | All template sections filled |
| cross_references | Valid references to upstream docs |
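The has_frontmatter check can be sketched as below, assuming the usual convention of a YAML block delimited by `---` lines at the top of the document:

```typescript
// True when the document opens with --- and closes that block with another ---.
function hasFrontmatter(doc: string): boolean {
  const lines = doc.split("\n");
  return lines[0]?.trim() === "---" && lines.slice(1).some(l => l.trim() === "---");
}
```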
### Validation Routing
| Doc Type | Method |
|----------|--------|
| product-brief | Self-validate → report |
| requirements | Self-validate + DISCUSS-002 |
| architecture | Self-validate → report |
| epics | Self-validate → report |
Report: doc type, validation status, discuss verdict (PRD only), output path.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI failure | Retry once with alternative tool |
| Prior doc missing | Notify coordinator |
| Discussion contradicts prior | Note conflict, flag for coordinator |