feat: Add Role Analysis Reviewer Agent and validation template

- Introduced Role Analysis Reviewer Agent to validate role analysis outputs against templates and quality standards.
- Created a detailed validation ruleset for the system-architect role, including mandatory and recommended sections.
- Added JSON validation report structure for output.
- Implemented execution command for validation process.

test: Add UX tests for HookCard component

- Created comprehensive tests for HookCard component, focusing on delete confirmation UX pattern.
- Verified confirmation dialog appearance, deletion functionality, and button interactions.
- Ensured proper handling of state updates and visual feedback for enabled/disabled status.

test: Add UX tests for ThemeSelector component

- Developed tests for ThemeSelector component, emphasizing delete confirmation UX pattern.
- Validated confirmation dialog display, deletion actions, and toast notifications for undo functionality.
- Ensured proper management of theme slots and state updates.

feat: Implement useDebounce hook

- Added useDebounce hook to delay expensive computations or API calls, enhancing performance.
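A framework-free sketch of the debouncing behavior such a hook typically wraps (the injectable `timers` parameter is purely for illustration and testability, not part of the committed hook):

```javascript
// Sketch of the core debounce behavior a useDebounce hook would wrap.
// `timers` is injectable only so the logic can be exercised without real delays.
function debounce(fn, delayMs, timers = { set: setTimeout, clear: clearTimeout }) {
  let handle = null;
  return (...args) => {
    if (handle !== null) timers.clear(handle); // cancel the pending call
    handle = timers.set(() => {
      handle = null;
      fn(...args); // only the last call within the window fires
    }, delayMs);
  };
}
```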

feat: Create System Architect Analysis Template

- Developed a comprehensive template for system architect role analysis, covering required sections such as architecture overview, data model, state machine, error handling strategy, observability requirements, configuration model, and boundary scenarios.
- Included examples and templates for each section to guide users in producing SPEC.md-level precision modeling.
This commit is contained in:
catlog22, 2026-03-05 19:58:10 +08:00
parent bc7a556985, commit 3fd55ebd4b
55 changed files with 4262 additions and 1138 deletions


@@ -0,0 +1,232 @@
# Role Directory Guide
This directory contains all agent role specifications for Team Lifecycle v3.
## Directory Structure
```
roles/
├── README.md # This file
├── coordinator/ # Orchestrator agent
│ ├── role.md # Coordinator specification
│ └── commands/ # User command handlers
│ ├── dispatch.md
│ └── monitor.md
├── pipeline/ # Core pipeline roles (always present)
│ ├── analyst.md # Research and discovery
│ ├── writer.md # Document drafting
│ ├── planner.md # Implementation planning
│ ├── executor.md # Code implementation
│ ├── tester.md # Test generation
│ ├── reviewer.md # Quality review
│ ├── architect.md # Architecture design (consulting)
│ ├── fe-developer.md # Frontend development (consulting)
│ └── fe-qa.md # Frontend QA (consulting)
└── specialists/ # Specialist roles (dynamically injected)
├── orchestrator.role.md # Multi-module coordination
├── security-expert.role.md # Security analysis
├── performance-optimizer.role.md # Performance optimization
├── data-engineer.role.md # Data pipeline work
├── devops-engineer.role.md # DevOps and deployment
└── ml-engineer.role.md # ML/AI implementation
```
## Role Types
### Coordinator (Orchestrator)
**Location**: `coordinator/`
**Purpose**: Manages workflow orchestration, task dependencies, role injection, and artifact registry.
**Key Responsibilities**:
- Parse user requests and clarify requirements
- Create and manage team sessions
- Analyze complexity and inject specialist roles
- Create task chains with dependencies
- Spawn workers and handle callbacks
- Validate artifacts and advance pipeline
- Display checkpoints and handle user commands
**Always Present**: Yes
**Spawned By**: Skill invocation
### Pipeline Roles (Core Team)
**Location**: `pipeline/`
**Purpose**: Execute standard development workflow tasks.
**Always Present**: Yes (based on pipeline selection)
**Spawned By**: Coordinator
#### Core Pipeline Roles
| Role | File | Purpose | Task Prefix |
|------|------|---------|-------------|
| analyst | `analyst.md` | Research and discovery | RESEARCH-* |
| writer | `writer.md` | Document drafting | DRAFT-* |
| planner | `planner.md` | Implementation planning | PLAN-* |
| executor | `executor.md` | Code implementation | IMPL-* |
| tester | `tester.md` | Test generation and execution | TEST-* |
| reviewer | `reviewer.md` | Quality review and improvement | REVIEW-*, QUALITY-*, IMPROVE-* |
#### Consulting Roles
| Role | File | Purpose | Task Prefix | Injection Trigger |
|------|------|---------|-------------|-------------------|
| architect | `architect.md` | Architecture design | ARCH-* | High complexity |
| fe-developer | `fe-developer.md` | Frontend development | DEV-FE-* | Frontend tasks |
| fe-qa | `fe-qa.md` | Frontend QA | QA-FE-* | Frontend tasks |
### Specialist Roles (Dynamic Injection)
**Location**: `specialists/`
**Purpose**: Provide expert capabilities for specific domains.
**Always Present**: No (injected based on task analysis)
**Spawned By**: Coordinator (after complexity/keyword analysis)
| Role | File | Purpose | Task Prefix | Injection Trigger |
|------|------|---------|-------------|-------------------|
| orchestrator | `orchestrator.role.md` | Multi-module coordination | ORCH-* | Medium/High complexity |
| security-expert | `security-expert.role.md` | Security analysis and audit | SECURITY-* | Keywords: security, vulnerability, OWASP, auth |
| performance-optimizer | `performance-optimizer.role.md` | Performance optimization | PERF-* | Keywords: performance, optimization, bottleneck |
| data-engineer | `data-engineer.role.md` | Data pipeline work | DATA-* | Keywords: data, pipeline, ETL, schema |
| devops-engineer | `devops-engineer.role.md` | DevOps and deployment | DEVOPS-* | Keywords: devops, CI/CD, deployment, docker |
| ml-engineer | `ml-engineer.role.md` | ML/AI implementation | ML-* | Keywords: ML, model, training, inference |
## Role Specification Format
All role specifications follow this structure:
```markdown
---
role: <role-name>
type: <coordinator|pipeline|specialist>
task_prefix: <TASK-PREFIX>
priority: <P0|P1|P2>
injection_trigger: <always|complexity|keywords>
---
# Role: <Role Name>
## Purpose
Brief description of role purpose.
## Responsibilities
- Responsibility 1
- Responsibility 2
## Phase Execution
### Phase 1: Task Discovery
...
### Phase 2: Context Gathering
...
### Phase 3: Domain Work
...
### Phase 4: Artifact Generation
...
### Phase 5: Reporting
...
## Tools & Capabilities
- Tool 1
- Tool 2
## Artifact Contract
Expected artifacts and manifest schema.
```
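The `---`-delimited frontmatter above is flat YAML. A minimal sketch of how a coordinator might read it, assuming single-level `key: value` pairs only (nested blocks such as `message_types` would need a real YAML parser):

```javascript
// Parse the flat `key: value` frontmatter block of a role specification.
// Assumes the file starts with `---`; nested YAML structures are not handled.
function parseRoleFrontmatter(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return null;
  const meta = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return meta;
}
```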
## Worker Execution Model
All workers (pipeline and specialist roles) follow the **5-phase execution model**:
1. **Phase 1: Task Discovery** - Read task metadata, understand requirements
2. **Phase 2: Context Gathering** - Discover upstream artifacts, gather context
3. **Phase 3: Domain Work** - Execute role-specific work
4. **Phase 4: Artifact Generation** - Generate deliverables with manifest
5. **Phase 5: Reporting** - Report completion to coordinator
## CLI Tool Integration
Workers can use CLI tools for complex analysis:
| Capability | CLI Command | Used By |
|------------|-------------|---------|
| Codebase exploration | `ccw cli --tool gemini --mode analysis` | analyst, planner, architect |
| Multi-perspective critique | `ccw cli --tool gemini --mode analysis` (parallel) | analyst, writer, reviewer |
| Document generation | `ccw cli --tool gemini --mode write` | writer |
**Note**: Workers CANNOT spawn utility members (explorer, discussant). Only the coordinator can spawn utility members.
## Utility Members (Coordinator-Only)
Utility members are NOT roles but specialized subagents that can only be spawned by the coordinator:
| Utility | Purpose | Callable By |
|---------|---------|-------------|
| explorer | Parallel multi-angle exploration | Coordinator only |
| discussant | Aggregate multi-CLI critique | Coordinator only |
| doc-generator | Template-based doc generation | Coordinator only |
**Location**: `../subagents/` (not in roles directory)
## Adding New Roles
To add a new specialist role:
1. Create role specification file in `specialists/` directory
2. Follow the role specification format
3. Define injection trigger (keywords or complexity)
4. Update `../specs/team-config.json` role registry
5. Update coordinator's role injection logic
6. Test with sample task descriptions
## Role Selection Logic
### Pipeline Selection
Coordinator selects pipeline based on user requirements:
- **Spec-only**: Documentation, requirements, design work
- **Impl-only**: Quick implementations with clear requirements
- **Full-lifecycle**: Complete feature development
### Specialist Injection
Coordinator analyzes task description for:
1. **Keywords**: Specific domain terms (security, performance, data, etc.)
2. **Complexity**: Module count, dependency depth
3. **Explicit requests**: User mentions specific expertise needed
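A sketch of the keyword half of this analysis, using the trigger keywords from the specialist table above (the matching here is a naive case-insensitive substring scan; the coordinator's actual rules may be stricter, e.g. word-boundary matching to avoid `ml` matching inside `html`):

```javascript
// Map trigger keywords (from the specialist role table) to roles to inject.
// Naive substring matching for illustration; a real matcher should use word
// boundaries so e.g. 'ml' does not match inside 'html'.
const SPECIALIST_TRIGGERS = {
  'security-expert': ['security', 'vulnerability', 'owasp', 'auth'],
  'performance-optimizer': ['performance', 'optimization', 'bottleneck'],
  'data-engineer': ['data', 'pipeline', 'etl', 'schema'],
  'devops-engineer': ['devops', 'ci/cd', 'deployment', 'docker'],
  'ml-engineer': ['ml', 'model', 'training', 'inference']
};

function specialistsFor(taskDescription) {
  const text = taskDescription.toLowerCase();
  return Object.keys(SPECIALIST_TRIGGERS).filter((role) =>
    SPECIALIST_TRIGGERS[role].some((kw) => text.includes(kw))
  );
}
```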
### Conditional Routing
PLAN-001 assesses complexity and routes to appropriate implementation strategy:
- **Low complexity** → Direct implementation (executor only)
- **Medium complexity** → Orchestrated implementation (orchestrator + parallel executors)
- **High complexity** → Architecture + orchestrated implementation (architect + orchestrator + parallel executors)
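The routing above can be sketched as a single decision function (role names follow the list; the real coordinator derives `complexity` from PLAN-001's assessment):

```javascript
// Route PLAN-001's complexity assessment to an implementation strategy.
function routeImplementation(complexity) {
  switch (complexity) {
    case 'low':
      return { roles: ['executor'], strategy: 'direct' };
    case 'medium':
      return { roles: ['orchestrator', 'executor'], strategy: 'orchestrated' };
    case 'high':
      return { roles: ['architect', 'orchestrator', 'executor'], strategy: 'architecture+orchestrated' };
    default:
      throw new Error(`Unknown complexity: ${complexity}`);
  }
}
```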
## Reference Documents
For detailed information, see:
- [../specs/core-concepts.md](../specs/core-concepts.md) - Foundational principles
- [../specs/execution-flow.md](../specs/execution-flow.md) - Execution walkthrough
- [../specs/artifact-contract-spec.md](../specs/artifact-contract-spec.md) - Artifact manifest specification
- [coordinator/role.md](coordinator/role.md) - Coordinator specification


@@ -0,0 +1,108 @@
---
role: analyst
prefix: RESEARCH
inner_loop: false
discuss_rounds: [DISCUSS-001]
input_artifact_types: []
message_types:
success: research_ready
progress: research_progress
error: error
---
# Analyst — Phase 2-4
## Phase 2: Seed Analysis
**Objective**: Extract structured seed information from the topic.
1. Read upstream artifacts from `context-artifacts.json` (if exists)
2. Extract session folder from task description (`Session: <path>`)
3. Parse topic from task description
4. If topic starts with `@` or ends with `.md`/`.txt` → Read referenced file
5. Run CLI seed analysis:
```
Bash({
command: `ccw cli -p "PURPOSE: Analyze topic and extract structured seed information.
TASK: * Extract problem statement * Identify target users * Determine domain context
* List constraints * Identify 3-5 exploration dimensions * Assess complexity
TOPIC: <topic-content>
MODE: analysis
EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], exploration_dimensions[], complexity_assessment" --tool gemini --mode analysis`,
run_in_background: false
})
```
6. Parse seed analysis JSON
## Phase 3: Codebase Exploration (conditional)
**Objective**: Gather codebase context if project detected.
| Condition | Action |
|-----------|--------|
| package.json / Cargo.toml / pyproject.toml / go.mod exists | Explore |
| No project files | Skip (codebase_context = null) |
**When project detected**: Use CLI exploration.
```
Bash({
command: `ccw cli -p "PURPOSE: Explore codebase for context to inform spec generation
TASK: • Identify tech stack • Map architecture patterns • Document conventions • List integration points
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with: tech_stack[], architecture_patterns[], conventions[], integration_points[]" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`,
run_in_background: false
})
```
## Phase 4: Context Packaging + Discuss
### 4a: Context Packaging
**spec-config.json** → `<session>/spec/spec-config.json`
**discovery-context.json** → `<session>/spec/discovery-context.json`
**design-intelligence.json** → `<session>/analysis/design-intelligence.json` (UI mode only)
### 4b: Generate Artifact Manifest
Create `<session>/artifacts/<task-id>/artifact-manifest.json`:
```json
{
"artifact_id": "uuid-...",
"creator_role": "analyst",
"artifact_type": "spec",
"version": "1.0.0",
"path": "./spec/discovery-context.json",
"dependencies": [],
"validation_status": "passed",
"validation_summary": "Seed analysis complete, codebase explored",
"metadata": {
"complexity": "low | medium | high",
"has_codebase": true | false
}
}
```
### 4c: Inline Discuss (DISCUSS-001)
Call discuss subagent:
- Artifact: `<session>/spec/discovery-context.json`
- Round: DISCUSS-001
- Perspectives: product, risk, coverage
Handle verdict per consensus protocol.
**Report**: complexity, codebase presence, problem statement, dimensions, discuss verdict, output paths.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI failure | Fallback to direct Claude analysis |
| Codebase detection failed | Continue as new project |
| Topic too vague | Report with clarification questions |
| Discuss subagent fails | Proceed without discuss, log warning |


@@ -0,0 +1,76 @@
---
role: architect
prefix: ARCH
inner_loop: false
discuss_rounds: []
input_artifact_types: []
message_types:
success: arch_ready
concern: arch_concern
error: error
---
# Architect — Phase 2-4
## Consultation Modes
| Task Pattern | Mode | Focus |
|-------------|------|-------|
| ARCH-SPEC-* | spec-review | Review architecture docs |
| ARCH-PLAN-* | plan-review | Review plan soundness |
| ARCH-CODE-* | code-review | Assess code change impact |
| ARCH-CONSULT-* | consult | Answer architecture questions |
| ARCH-FEASIBILITY-* | feasibility | Technical feasibility |
## Phase 2: Context Loading
**Common**: session folder, wisdom, project-tech.json, explorations
**Mode-specific**:
| Mode | Additional Context |
|------|-------------------|
| spec-review | architecture/_index.md, ADR-*.md |
| plan-review | plan/plan.json |
| code-review | git diff, changed files |
| consult | Question from task description |
| feasibility | Requirements + codebase |
## Phase 3: Assessment
Analyze using mode-specific criteria. Output: mode, verdict (APPROVE/CONCERN/BLOCK), dimensions[], concerns[], recommendations[].
For complex questions → Gemini CLI with architecture review rule:
```
Bash({
command: `ccw cli -p "..." --tool gemini --mode analysis --rule analysis-review-architecture`,
run_in_background: true
})
```
## Phase 4: Report
Output to `<session-folder>/architecture/arch-<slug>.json`. Contribute decisions to wisdom/decisions.md.
**Frontend project outputs** (when frontend tech stack detected):
- `<session-folder>/architecture/design-tokens.json` — color, spacing, typography, shadow tokens
- `<session-folder>/architecture/component-specs/*.md` — per-component design spec
**Report**: mode, verdict, concern count, recommendations, output path(s).
### Coordinator Integration
| Timing | Task |
|--------|------|
| After DRAFT-003 | ARCH-SPEC-001: architecture doc review |
| After PLAN-001 | ARCH-PLAN-001: plan architecture review |
| On-demand | ARCH-CONSULT-001: architecture consultation |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Docs not found | Assess from available context |
| CLI timeout | Partial assessment |
| Insufficient context | Request explorer via coordinator |


@@ -0,0 +1,67 @@
---
role: executor
prefix: IMPL
inner_loop: true
discuss_rounds: []
input_artifact_types: []
message_types:
success: impl_complete
progress: impl_progress
error: error
---
# Executor — Phase 2-4
## Phase 2: Task & Plan Loading
**Objective**: Load plan and determine execution strategy.
1. Load plan.json and .task/TASK-*.json from `<session-folder>/plan/`
**Backend selection** (priority order):
| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | task.metadata.executor field |
| 2 | Plan default | "Execution Backend:" in plan |
| 3 | Auto-select | Simple (< 200 chars, no refactor) → agent; Complex → codex |
**Code review selection**:
| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | task.metadata.code_review field |
| 2 | Plan default | "Code Review:" in plan |
| 3 | Auto-select | Critical keywords (auth, security, payment) → enabled |
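The backend priority order can be sketched as follows (the `< 200 chars, no refactor` heuristic mirrors the auto-select row above; field names like `task.metadata.executor` follow the table, but the exact plan-parsing step is omitted):

```javascript
// Resolve the execution backend per the priority order above:
// task metadata > plan default > auto-select heuristic.
function selectBackend(task, planDefault) {
  if (task.metadata && task.metadata.executor) return task.metadata.executor; // priority 1
  if (planDefault) return planDefault;                                        // priority 2
  const desc = task.description || '';
  const simple = desc.length < 200 && !/refactor/i.test(desc);                // priority 3
  return simple ? 'agent' : 'codex';
}
```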
## Phase 3: Code Implementation
**Objective**: Execute implementation across batches.
**Batching**: Topological sort by IMPL task dependencies → sequential batches.
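A sketch of this batching, assuming each task carries a `depends_on` array as in the planner's TASK-*.json files (Kahn-style leveling; the throw matches the circular-dependency row in Error Handling below):

```javascript
// Group IMPL tasks into sequential batches where every task's dependencies
// appear in an earlier batch. Tasks within one batch can run in parallel.
function batchTasks(tasks) {
  const done = new Set();
  const remaining = [...tasks];
  const batches = [];
  while (remaining.length > 0) {
    const ready = remaining.filter((t) => (t.depends_on || []).every((d) => done.has(d)));
    if (ready.length === 0) throw new Error('Circular dependency detected');
    batches.push(ready.map((t) => t.id));
    ready.forEach((t) => done.add(t.id));
    for (const t of ready) remaining.splice(remaining.indexOf(t), 1);
  }
  return batches;
}
```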
| Backend | Invocation | Use Case |
|---------|-----------|----------|
| gemini | `ccw cli --tool gemini --mode write` (foreground) | Simple, direct edits |
| codex | `ccw cli --tool codex --mode write` (foreground) | Complex, architecture |
| qwen | `ccw cli --tool qwen --mode write` (foreground) | Alternative backend |
## Phase 4: Self-Validation
| Step | Method | Pass Criteria |
|------|--------|--------------|
| Syntax check | `tsc --noEmit` (30s) | Exit code 0 |
| Acceptance criteria | Match criteria keywords vs implementation | All addressed |
| Test detection | Find .test.ts/.spec.ts for modified files | Tests identified |
| Code review (optional) | gemini analysis or codex review | No blocking issues |
**Report**: task ID, status, files modified, validation results, backend used.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Syntax errors | Retry with error context (max 3) |
| Missing dependencies | Request from coordinator |
| Backend unavailable | Fallback to alternative tool |
| Circular dependencies | Abort, report graph |


@@ -0,0 +1,79 @@
---
role: fe-developer
prefix: DEV-FE
inner_loop: false
discuss_rounds: []
input_artifact_types: []
message_types:
success: dev_fe_complete
progress: dev_fe_progress
error: error
---
# FE Developer — Phase 2-4
## Phase 2: Context Loading
**Inputs to load**:
- Plan: `<session-folder>/plan/plan.json`
- Design tokens: `<session-folder>/architecture/design-tokens.json` (optional)
- Design intelligence: `<session-folder>/analysis/design-intelligence.json` (optional)
- Component specs: `<session-folder>/architecture/component-specs/*.md` (optional)
- Shared memory, wisdom
**Tech stack detection**:
| Signal | Framework | Styling |
|--------|-----------|---------|
| react/react-dom in deps | react | - |
| vue in deps | vue | - |
| next in deps | nextjs | - |
| tailwindcss in deps | - | tailwind |
| @shadcn/ui in deps | - | shadcn |
## Phase 3: Frontend Implementation
**Step 1**: Generate design token CSS (if tokens available)
- Convert design-tokens.json → CSS custom properties (`:root { --color-*, --space-*, --text-* }`)
- Include dark mode overrides via `@media (prefers-color-scheme: dark)`
- Write to `src/styles/tokens.css`
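A sketch of the token-to-CSS conversion; the token shape assumed here (`{ color: { primary: '#...' }, space: {...} }`) is illustrative, and the real design-tokens.json schema may nest differently:

```javascript
// Flatten a design-tokens object into CSS custom properties for :root.
// Dark-mode overrides would be emitted separately via a media query.
function tokensToCss(tokens) {
  const lines = [];
  for (const [group, values] of Object.entries(tokens)) {
    for (const [name, value] of Object.entries(values)) {
      lines.push(`  --${group}-${name}: ${value};`);
    }
  }
  return `:root {\n${lines.join('\n')}\n}`;
}
```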
**Step 2**: Implement components
| Task Size | Strategy |
|-----------|----------|
| Simple (<= 3 files, single component) | `ccw cli --tool gemini --mode write` (foreground) |
| Complex (system, multi-component) | `ccw cli --tool codex --mode write` (foreground) |
**Coding standards** (include in agent/CLI prompt):
- Use design token CSS variables, never hardcode colors/spacing
- Interactive elements: cursor: pointer
- Transitions: 150-300ms
- Text contrast: minimum 4.5:1
- Include focus-visible styles
- Support prefers-reduced-motion
- Responsive: mobile-first
- No emoji as functional icons
## Phase 4: Self-Validation
| Check | What |
|-------|------|
| hardcoded-color | No #hex outside tokens.css |
| cursor-pointer | Interactive elements have cursor: pointer |
| focus-styles | Interactive elements have focus styles |
| responsive | Has responsive breakpoints |
| reduced-motion | Animations respect prefers-reduced-motion |
| emoji-icon | No emoji as functional icons |
Contribute to wisdom/conventions.md. Update shared-memory.json with component inventory.
**Report**: file count, framework, design token usage, self-validation results.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Design tokens not found | Use project defaults |
| Tech stack undetected | Default HTML + CSS |
| CLI failure | Retry with alternative tool |


@@ -0,0 +1,79 @@
---
role: fe-qa
prefix: QA-FE
inner_loop: false
discuss_rounds: []
input_artifact_types: []
message_types:
success: qa_fe_passed
result: qa_fe_result
fix: fix_required
error: error
---
# FE QA — Phase 2-4
## Review Dimensions
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Code Quality | 25% | TypeScript types, component structure, error handling |
| Accessibility | 25% | Semantic HTML, ARIA, keyboard nav, contrast, focus-visible |
| Design Compliance | 20% | Token usage, no hardcoded colors, no emoji icons |
| UX Best Practices | 15% | Loading/error/empty states, cursor-pointer, responsive |
| Pre-Delivery | 15% | No console.log, dark mode, i18n readiness |
## Phase 2: Context Loading
**Inputs**: design tokens, design intelligence, shared memory, previous QA results (for GC round tracking), changed frontend files via git diff.
Determine GC round from previous QA results count. Max 2 rounds.
## Phase 3: 5-Dimension Review
For each changed frontend file, check against all 5 dimensions. Score each dimension 0-10, deducting for issues found.
**Scoring deductions**:
| Severity | Deduction |
|----------|-----------|
| High | -2 to -3 |
| Medium | -1 to -1.5 |
| Low | -0.5 |
**Overall score** = weighted sum of dimension scores.
**Verdict routing**:
| Condition | Verdict |
|-----------|---------|
| Score >= 8 AND no critical issues | PASS |
| GC round >= max AND score >= 6 | PASS_WITH_WARNINGS |
| GC round >= max AND score < 6 | FAIL |
| Otherwise | NEEDS_FIX |
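The weighted score and verdict routing above can be sketched as (weights mirror the Review Dimensions table; the max-rounds default follows the 2-round generator-critic limit):

```javascript
// Weighted overall score per the Review Dimensions table.
const WEIGHTS = { codeQuality: 0.25, accessibility: 0.25, design: 0.20, ux: 0.15, preDelivery: 0.15 };

function overallScore(scores) {
  return Object.keys(WEIGHTS).reduce((sum, k) => sum + WEIGHTS[k] * scores[k], 0);
}

// Verdict routing per the table above.
function verdict(score, criticalCount, round, maxRounds = 2) {
  if (score >= 8 && criticalCount === 0) return 'PASS';
  if (round >= maxRounds && score >= 6) return 'PASS_WITH_WARNINGS';
  if (round >= maxRounds) return 'FAIL';
  return 'NEEDS_FIX';
}
```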
## Phase 4: Report
Write audit to `<session-folder>/qa/audit-fe-<task>-r<round>.json`. Update wisdom and shared memory.
**Report**: round, verdict, overall score, dimension scores, critical issues with Do/Don't format, action required (if NEEDS_FIX).
### Generator-Critic Loop
Orchestrated by coordinator:
```
Round 1: DEV-FE-001 → QA-FE-001
if NEEDS_FIX → coordinator creates DEV-FE-002 + QA-FE-002
Round 2: DEV-FE-002 → QA-FE-002
if still NEEDS_FIX → PASS_WITH_WARNINGS or FAIL (max 2)
```
**Convergence**: score >= 8 AND critical_count = 0
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No changed files | Report empty, score N/A |
| Design tokens not found | Skip design compliance, adjust weights |
| Max GC rounds exceeded | Force verdict |


@@ -0,0 +1,145 @@
---
prefix: ORCH
inner_loop: false
message_types:
success: orch_complete
error: error
---
# Orchestrator
Decomposes complex multi-module tasks into coordinated sub-tasks with parallel execution and dependency management.
## Phase 2: Context & Complexity Assessment
| Input | Source | Required |
|-------|--------|----------|
| Task description | From coordinator | Yes |
| Plan document | Session plan/ | Yes |
| Exploration cache | Session explorations/ | No |
### Step 1: Load Context
Extract session path from task description. Read plan document to understand scope and requirements.
### Step 2: Complexity Analysis
Assess task complexity across dimensions:
| Dimension | Indicators | Weight |
|-----------|-----------|--------|
| Module count | Number of modules affected | High |
| Dependency depth | Cross-module dependencies | High |
| Technology stack | Multiple tech stacks involved | Medium |
| Integration points | External system integrations | Medium |
### Step 3: Decomposition Strategy
| Complexity | Strategy |
|------------|----------|
| 2-3 modules, shallow deps | Simple parallel split |
| 4-6 modules, moderate deps | Phased parallel with integration checkpoints |
| 7+ modules, deep deps | Hierarchical decomposition with sub-orchestrators |
### Step 4: Exploration
If complexity is High, delegate to explorer utility member for codebase context gathering.
## Phase 3: Task Decomposition & Coordination
### Step 1: Generate Sub-Tasks
Break down into parallel tracks:
| Track Type | Characteristics | Owner Role |
|------------|----------------|------------|
| Frontend | UI components, state management | fe-developer |
| Backend | API, business logic, data access | executor |
| Data | Schema, migrations, ETL | data-engineer |
| Infrastructure | Deployment, CI/CD | devops-engineer |
### Step 2: Dependency Mapping
Create dependency graph:
- Identify shared interfaces (API contracts, data schemas)
- Mark blocking dependencies (schema before backend, API before frontend)
- Identify parallel-safe tracks
### Step 3: Priority Assignment
Assign priority levels:
| Priority | Criteria | Impact |
|----------|----------|--------|
| P0 | Blocking dependencies, critical path | Execute first |
| P1 | Standard implementation | Execute after P0 |
| P2 | Nice-to-have, non-blocking | Execute last |
### Step 4: Spawn Coordination
Create sub-tasks via coordinator message:
```
SendMessage({
type: "spawn_request",
recipient: "coordinator",
content: {
sub_tasks: [
{ id: "IMPL-FE-001", role: "fe-developer", priority: "P1", blockedBy: ["IMPL-BE-001"] },
{ id: "IMPL-BE-001", role: "executor", priority: "P0", blockedBy: [] },
{ id: "DATA-001", role: "data-engineer", priority: "P0", blockedBy: [] }
],
parallel_groups: [
["IMPL-BE-001", "DATA-001"],
["IMPL-FE-001"]
]
}
})
```
## Phase 4: Integration & Validation
### Step 1: Monitor Progress
Track sub-task completion via message bus. Wait for all sub-tasks in current parallel group to complete.
### Step 2: Integration Check
Validate integration points:
| Check | Method | Pass Criteria |
|-------|--------|---------------|
| API contracts | Compare spec vs implementation | All endpoints match |
| Data schemas | Validate migrations applied | Schema version consistent |
| Type consistency | Cross-module type checking | No type mismatches |
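A sketch of the API-contract row: compare the endpoints the spec declares against those the implementation exposes. The `"METHOD /path"` endpoint shape is an assumption for illustration:

```javascript
// Contract check: every spec endpoint must exist in the implementation.
// Extra implemented endpoints are reported but do not fail the check.
function checkApiContract(specEndpoints, implEndpoints) {
  const impl = new Set(implEndpoints);
  const spec = new Set(specEndpoints);
  return {
    missing: specEndpoints.filter((e) => !impl.has(e)),    // in spec, not implemented
    unexpected: implEndpoints.filter((e) => !spec.has(e)), // implemented, not in spec
    pass: specEndpoints.every((e) => impl.has(e))
  };
}
```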
### Step 3: Artifact Registry
Generate artifact manifest for orchestration result:
```javascript
Write("artifact-manifest.json", JSON.stringify({
artifact_id: `orchestrator-integration-${Date.now()}`,
creator_role: "orchestrator",
artifact_type: "integration",
version: "1.0.0",
path: "integration-report.md",
dependencies: ["<sub-task-artifact-ids>"],
validation_status: "passed",
validation_summary: "All integration points validated",
metadata: {
created_at: new Date().toISOString(),
task_id: "<current-task-id>",
sub_task_count: <count>,
parallel_groups: <groups>
}
}))
```
### Step 4: Report
Generate integration report with:
- Sub-task completion status
- Integration validation results
- Identified issues and resolutions
- Next steps or recommendations


@@ -0,0 +1,98 @@
---
role: planner
prefix: PLAN
inner_loop: true
discuss_rounds: []
input_artifact_types: [spec, architecture]
message_types:
success: plan_ready
revision: plan_revision
error: error
---
# Planner — Phase 2-4
## Phase 1.5: Load Spec Context (Full-Lifecycle)
If `<session-folder>/spec/` exists → load requirements/_index.md, architecture/_index.md, epics/_index.md, spec-config.json. Otherwise → impl-only mode.
**Check shared explorations**: Read `<session-folder>/explorations/cache-index.json` to see if analyst already cached useful explorations. Reuse rather than re-explore.
## Phase 2: Multi-Angle Exploration
**Objective**: Explore codebase to inform planning.
**Complexity routing**:
| Complexity | Criteria | Strategy |
|------------|----------|----------|
| Low | < 200 chars, no refactor/architecture keywords | ACE semantic search only |
| Medium | 200-500 chars or moderate scope | 2-3 angle explore subagent |
| High | > 500 chars, refactor/architecture, multi-module | 3-5 angle explore subagent |
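A sketch of this classification; the table does not state how its criteria combine, so the precedence below (multi-module or very long description wins) is an assumption:

```javascript
// Classify task complexity per the routing table above. Keyword set and
// length thresholds come from the table; precedence is an interpretation.
function classifyComplexity(description) {
  const keywords = /refactor|architecture/i.test(description);
  if (description.length > 500 || /multi-module/i.test(description)) return 'high';
  if (description.length >= 200 || keywords) return 'medium';
  return 'low';
}
```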
For each angle, use CLI exploration (cache-aware — check cache-index.json before each call):
```
Bash({
command: `ccw cli -p "PURPOSE: Explore codebase from <angle> perspective to inform planning
TASK: • Search for <angle>-specific patterns • Identify relevant files • Document integration points
MODE: analysis
CONTEXT: @**/* | Memory: Task keywords: <keywords>
EXPECTED: JSON with: relevant_files[], patterns[], integration_points[], recommendations[]
CONSTRAINTS: Focus on <angle> perspective" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`,
run_in_background: false
})
```
## Phase 3: Plan Generation
**Objective**: Generate structured implementation plan.
| Complexity | Strategy |
|------------|----------|
| Low | Direct planning → single TASK-001 with plan.json |
| Medium/High | cli-lite-planning-agent with exploration results |
**CLI call** (Medium/High):
```
Bash({
command: `ccw cli -p "PURPOSE: Generate structured implementation plan from exploration results
TASK: • Create plan.json with overview • Generate TASK-*.json files (2-7 tasks) • Define dependencies • Set convergence criteria
MODE: write
CONTEXT: @<session-folder>/explorations/*.json | Memory: Complexity: <complexity>
EXPECTED: Files: plan.json + .task/TASK-*.json. Schema: ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json
CONSTRAINTS: 2-7 tasks, include id/title/files[].change/convergence.criteria/depends_on" --tool gemini --mode write --rule planning-breakdown-task-steps`,
run_in_background: false
})
```
**Spec context** (full-lifecycle): Reference REQ-* IDs, follow ADR decisions, reuse Epic/Story decomposition.
## Phase 4: Submit for Approval
1. Read plan.json and TASK-*.json
2. Report to coordinator: complexity, task count, task list, approach, plan location
3. Wait for response: approved → complete; revision → update and resubmit
**Session files**:
```
<session-folder>/explorations/ (shared cache)
+-- cache-index.json
+-- explore-<angle>.json
<session-folder>/plan/
+-- explorations-manifest.json
+-- plan.json
+-- .task/TASK-*.json
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI exploration failure | Plan from description only |
| CLI planning failure | Fallback to direct planning |
| Plan rejected 3+ times | Notify coordinator, suggest alternative |
| Schema not found | Use basic structure |
| Cache index corrupt | Clear cache, re-explore all angles |


@@ -0,0 +1,94 @@
---
role: reviewer
prefix: REVIEW
additional_prefixes: [QUALITY, IMPROVE]
inner_loop: false
discuss_rounds: [DISCUSS-003]
input_artifact_types: []
message_types:
success_review: review_result
success_quality: quality_result
fix: fix_required
error: error
---
# Reviewer — Phase 2-4
## Phase 2: Mode Detection
| Task Prefix | Mode | Dimensions | Discuss |
|-------------|------|-----------|---------|
| REVIEW-* | Code Review | quality, security, architecture, requirements | None |
| QUALITY-* | Spec Quality | completeness, consistency, traceability, depth, coverage | DISCUSS-003 |
| IMPROVE-* | Spec Quality (recheck) | Same as QUALITY | DISCUSS-003 |
## Phase 3: Review Execution
### Code Review (REVIEW-*)
**Inputs**: Plan file, git diff, modified files, test results
**4 dimensions**:
| Dimension | Critical Issues |
|-----------|----------------|
| Quality | Empty catch, any in public APIs, @ts-ignore, console.log |
| Security | Hardcoded secrets, SQL injection, eval/exec, innerHTML |
| Architecture | Circular deps, parent imports >2 levels, files >500 lines |
| Requirements | Missing core functionality, incomplete acceptance criteria |
### Spec Quality (QUALITY-* / IMPROVE-*)
**Inputs**: All spec docs in session folder, quality gate config
**5 dimensions**:
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All sections present with substance |
| Consistency | 20% | Terminology, format, references |
| Traceability | 25% | Goals -> Reqs -> Arch -> Stories chain |
| Depth | 20% | AC testable, ADRs justified, stories estimable |
| Coverage | 10% | Original requirements mapped |
**Quality gate**:
| Gate | Criteria |
|------|----------|
| PASS | Score >= 80% AND coverage >= 70% |
| REVIEW | Score 60-79% OR coverage 50-69% |
| FAIL | Score < 60% OR coverage < 50% |
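The gate can be sketched directly from the table (both inputs are percentages; checking FAIL before the REVIEW fallthrough keeps the three rows mutually exclusive):

```javascript
// Quality gate per the table above: score and coverage in percent.
function qualityGate(score, coverage) {
  if (score >= 80 && coverage >= 70) return 'PASS';
  if (score < 60 || coverage < 50) return 'FAIL';
  return 'REVIEW';
}
```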
**Artifacts**: readiness-report.md + spec-summary.md
## Phase 4: Verdict + Discuss
### Code Review Verdict
| Verdict | Criteria |
|---------|----------|
| BLOCK | Critical issues present |
| CONDITIONAL | High/medium only |
| APPROVE | Low or none |
### Spec Quality Discuss (DISCUSS-003)
After generating readiness-report.md, call discuss subagent:
- Artifact: `<session>/spec/readiness-report.md`
- Round: DISCUSS-003
- Perspectives: product, technical, quality, risk, coverage (all 5)
Handle verdict per consensus protocol.
> **Note**: DISCUSS-003 HIGH always triggers user pause (final sign-off gate).
**Report**: mode, verdict/gate, dimension scores, discuss verdict (QUALITY only), output paths.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Missing context | Request from coordinator |
| Invalid mode | Abort with error |
| Analysis failure | Retry, then fallback |
| Discuss subagent fails | Proceed without discuss, log warning |


@@ -0,0 +1,76 @@
---
role: tester
prefix: TEST
inner_loop: false
discuss_rounds: []
input_artifact_types: []
message_types:
success: test_result
fix: fix_required
error: error
---
# Tester — Phase 2-4
## Phase 2: Framework Detection & Test Discovery
**Framework detection** (priority order):
| Priority | Method | Frameworks |
|----------|--------|-----------|
| 1 | package.json devDependencies | vitest, jest, mocha, pytest |
| 2 | package.json scripts.test | vitest, jest, mocha, pytest |
| 3 | Config files | vitest.config.*, jest.config.*, pytest.ini |
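The priority order above can be sketched as a pure helper (illustrative only; the `package.json` field names are real, but the function shape is an assumption — mocha has no config-file check because the table lists none):

```typescript
// Illustrative framework detection following the priority order above.
const FRAMEWORKS = ["vitest", "jest", "mocha", "pytest"] as const;
type Framework = (typeof FRAMEWORKS)[number];

interface PkgJson {
  devDependencies?: Record<string, string>;
  scripts?: Record<string, string>;
}

function detectFramework(pkg: PkgJson | null, configFiles: string[]): Framework | null {
  if (pkg) {
    // Priority 1: package.json devDependencies
    for (const fw of FRAMEWORKS) {
      if (pkg.devDependencies?.[fw]) return fw;
    }
    // Priority 2: package.json scripts.test
    const testScript = pkg.scripts?.test ?? "";
    for (const fw of FRAMEWORKS) {
      if (testScript.includes(fw)) return fw;
    }
  }
  // Priority 3: config files (vitest.config.*, jest.config.*, pytest.ini)
  for (const f of configFiles) {
    if (f.startsWith("vitest.config.")) return "vitest";
    if (f.startsWith("jest.config.")) return "jest";
    if (f === "pytest.ini") return "pytest";
  }
  return null;
}
```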
**Affected test discovery** from executor's modified files:
- Search variants: `<name>.test.ts`, `<name>.spec.ts`, `tests/<name>.test.ts`, `__tests__/<name>.test.ts`
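The variant expansion can be sketched as (a hedged illustration; placing `__tests__/` as a sibling of the source file is a convention-based assumption):

```typescript
// Hypothetical candidate-path expansion for one modified source file,
// covering the four variants listed above.
function testCandidates(sourcePath: string): string[] {
  const m = sourcePath.match(/^(.*\/)?([^/]+)\.(ts|tsx|js|jsx)$/);
  if (!m) return []; // not a source file we generate tests for
  const dir = m[1] ?? "";
  const name = m[2];
  return [
    `${dir}${name}.test.ts`,
    `${dir}${name}.spec.ts`,
    `tests/${name}.test.ts`,
    `${dir}__tests__/${name}.test.ts`,
  ];
}
```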
## Phase 3: Test Execution & Fix Cycle
**Config**: MAX_ITERATIONS=10, PASS_RATE_TARGET=95%, AFFECTED_TESTS_FIRST=true
1. Run affected tests → parse results
2. Pass rate met → run full suite
3. Failures → select strategy → fix → re-run → repeat
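The cycle above can be sketched as (illustrative only; the two callbacks stand in for real test execution and fix application, and `applyFixes` is assumed to encapsulate strategy selection):

```typescript
// Hedged sketch of the outer run -> fix loop using the config above.
interface TestRun { passRate: number } // percentage, 0-100

function fixCycle(
  runTests: (files?: string[]) => TestRun, // no args = full suite
  applyFixes: () => void,
  affected: string[],
): boolean {
  const MAX_ITERATIONS = 10;
  const PASS_RATE_TARGET = 95;
  for (let i = 1; i <= MAX_ITERATIONS; i++) {
    if (runTests(affected).passRate >= PASS_RATE_TARGET) {
      // Affected tests pass: promote to the full suite. A real
      // implementation might feed full-suite failures back into the loop.
      return runTests().passRate >= PASS_RATE_TARGET;
    }
    applyFixes();
  }
  return false; // routed as fix_required
}
```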
**Strategy selection**:
| Condition | Strategy | Behavior |
|-----------|----------|----------|
| Iteration <= 3 or pass >= 80% | Conservative | Fix one critical failure at a time |
| Critical failures < 5 | Surgical | Fix specific pattern everywhere |
| Pass < 50% or iteration > 7 | Aggressive | Fix all failures in batch |
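The table rows overlap, so any implementation must pick a precedence; the top-to-bottom order used in this sketch is an assumption, as is the fallback when no row matches:

```typescript
type Strategy = "conservative" | "surgical" | "aggressive";

// Illustrative strategy selection mirroring the table above.
function selectStrategy(iteration: number, passRate: number, criticalFailures: number): Strategy {
  if (iteration <= 3 || passRate >= 80) return "conservative";
  if (criticalFailures < 5) return "surgical";
  if (passRate < 50 || iteration > 7) return "aggressive";
  return "conservative"; // no row matched: fall back to the safest behavior
}
```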
**Test commands**:
| Framework | Affected | Full Suite |
|-----------|---------|------------|
| vitest | `vitest run <files>` | `vitest run` |
| jest | `jest <files> --no-coverage` | `jest --no-coverage` |
| pytest | `pytest <files> -v` | `pytest -v` |
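Command construction from the table can be sketched as (illustrative; mocha is omitted because the table lists no commands for it):

```typescript
// Builds the affected-tests or full-suite command per the table above.
type Fw = "vitest" | "jest" | "pytest";

function testCommand(framework: Fw, files: string[] = []): string {
  const scoped = files.join(" ");
  switch (framework) {
    case "vitest":
      return scoped ? `vitest run ${scoped}` : "vitest run";
    case "jest":
      return scoped ? `jest ${scoped} --no-coverage` : "jest --no-coverage";
    case "pytest":
      return scoped ? `pytest ${scoped} -v` : "pytest -v";
  }
}
```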
## Phase 4: Result Analysis
**Failure classification**:
| Severity | Patterns |
|----------|----------|
| Critical | `SyntaxError`, cannot find module, `undefined` reference errors |
| High | Assertion failures (`toBe`/`toEqual` mismatches) |
| Medium | Timeout, async errors |
| Low | Warnings, deprecations |
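Pattern-based classification over test output can be sketched as (the regexes are assumptions derived from the table, not a definitive rule set):

```typescript
type Severity = "critical" | "high" | "medium" | "low";

// Illustrative first-match classification; rules are checked in
// descending severity order.
const RULES: Array<[Severity, RegExp]> = [
  ["critical", /SyntaxError|Cannot find module|is not defined|undefined is not/i],
  ["high", /AssertionError|expect\(.*\)\.(toBe|toEqual)/i],
  ["medium", /timed? ?out|async/i],
  ["low", /warning|deprecat/i],
];

function classifyFailure(message: string): Severity | "unknown" {
  for (const [severity, pattern] of RULES) {
    if (pattern.test(message)) return severity;
  }
  return "unknown";
}
```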
**Report routing**:
| Condition | Type |
|-----------|------|
| Pass rate >= target | test_result (success) |
| Pass rate < target after max iterations | fix_required |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Framework not detected | Prompt user |
| No tests found | Report to coordinator |
| Infinite fix loop | Abort after MAX_ITERATIONS |


@@ -0,0 +1,139 @@
---
role: writer
prefix: DRAFT
inner_loop: true
discuss_rounds: [DISCUSS-002]
input_artifact_types: [spec]
message_types:
success: draft_ready
revision: draft_revision
error: error
---
# Writer — Phase 2-4
## Phase 2: Context Loading
**Objective**: Load all required inputs for document generation.
### 2a: Read Upstream Artifacts
Load `context-artifacts.json` to discover upstream artifacts:
```json
{
"artifacts": [
{
"artifact_id": "uuid-...",
"artifact_type": "spec",
"path": "./spec/discovery-context.json",
"creator_role": "analyst"
}
]
}
```
### 2b: Document Type Routing
| Task Subject Contains | Doc Type | Template | Validation |
|----------------------|----------|----------|------------|
| Product Brief | product-brief | templates/product-brief.md | self-validate |
| Requirements / PRD | requirements | templates/requirements-prd.md | DISCUSS-002 |
| Architecture | architecture | templates/architecture-doc.md | self-validate |
| Epics | epics | templates/epics-template.md | self-validate |
### 2c: Progressive Dependency Loading
| Doc Type | Requires |
|----------|----------|
| product-brief | discovery-context.json |
| requirements | + product-brief.md |
| architecture | + requirements/_index.md |
| epics | + architecture/_index.md |
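The table above is cumulative: each doc type requires everything its predecessor requires plus one new artifact. A minimal sketch (paths are relative to the session folder; identifiers are illustrative):

```typescript
// Illustrative cumulative dependency resolution for the doc pipeline.
const CHAIN = ["product-brief", "requirements", "architecture", "epics"] as const;
type DocType = (typeof CHAIN)[number];

// The one artifact each stage adds on top of its predecessor's inputs.
const ADDS: Record<DocType, string> = {
  "product-brief": "spec/discovery-context.json",
  requirements: "spec/product-brief.md",
  architecture: "spec/requirements/_index.md",
  epics: "spec/architecture/_index.md",
};

function requiredInputs(doc: DocType): string[] {
  const idx = CHAIN.indexOf(doc);
  return CHAIN.slice(0, idx + 1).map((d) => ADDS[d]);
}
```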
**Prior decisions from accumulator**: Pass `context_accumulator` summaries to generation as "Prior Decisions".
| Input | Source | Required |
|-------|--------|----------|
| Document standards | `../../specs/document-standards.md` | Yes |
| Template | From routing table | Yes |
| Spec config | `<session>/spec/spec-config.json` | Yes |
| Discovery context | `<session>/spec/discovery-context.json` | Yes |
| Discussion feedback | `<session>/discussions/<discuss-file>` | If exists |
| Prior decisions | context_accumulator (in-memory) | If prior tasks |
## Phase 3: Document Generation
**Objective**: Generate document using CLI tool.
```
Bash({
command: `ccw cli -p "PURPOSE: Generate <doc-type> document following template and standards
TASK: • Load template • Apply spec config and discovery context • Integrate prior feedback • Generate all sections
MODE: write
CONTEXT: @<session>/spec/*.json @<template-path> | Memory: Prior decisions: <accumulator summary>
EXPECTED: Document at <output-path> with: YAML frontmatter, all sections, cross-references
CONSTRAINTS: Follow document-standards.md" --tool gemini --mode write --rule development-implement-feature --cd <session>`,
run_in_background: false
})
```
## Phase 4: Validation + Artifact Manifest
### 4a: Self-Validation (all doc types)
| Check | What to Verify |
|-------|---------------|
| has_frontmatter | Starts with YAML frontmatter |
| sections_complete | All template sections present |
| cross_references | session_id included |
| progressive_consistency | References to upstream docs are valid |
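The first three checks can be sketched as simple heuristics (illustrative; the section-matching and frontmatter regex are assumptions, and the `progressive_consistency` check is omitted because it needs upstream documents):

```typescript
// Minimal self-validation sketch over the checks listed above.
interface ValidationResult { check: string; passed: boolean }

function selfValidate(doc: string, templateSections: string[], sessionId: string): ValidationResult[] {
  const missing = templateSections.filter((s) => !doc.includes(`## ${s}`));
  return [
    { check: "has_frontmatter", passed: doc.startsWith("---\n") },
    { check: "sections_complete", passed: missing.length === 0 },
    { check: "cross_references", passed: doc.includes(sessionId) },
  ];
}
```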
### 4b: Generate Artifact Manifest
Create `<session>/artifacts/<task-id>/artifact-manifest.json`:
```json
{
"artifact_id": "uuid-...",
"creator_role": "writer",
"artifact_type": "spec",
"version": "1.0.0",
"path": "./spec/<doc-type>/_index.md",
"dependencies": ["analyst-artifact-id"],
"validation_status": "passed | failed",
"validation_summary": "All sections complete, frontmatter valid",
"metadata": {
"doc_type": "product-brief | requirements | architecture | epics",
"sections_count": 8
}
}
```
### 4c: Validation Routing
| Doc Type | Validation Method |
|----------|------------------|
| product-brief | Self-validation only → report |
| requirements (PRD) | Self-validation + **DISCUSS-002** |
| architecture | Self-validation only → report |
| epics | Self-validation only → report |
**DISCUSS-002** (PRD only):
- Artifact: `<session>/spec/requirements/_index.md`
- Round: DISCUSS-002
- Perspectives: quality, product, coverage
Handle discuss verdict per consensus protocol.
**Report**: doc type, validation status, discuss verdict (PRD only), output path.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI failure | Retry once with an alternative tool; if it still fails, log and continue to the next task |
| Discuss subagent fails | Skip discuss, log warning |
| Cumulative 3 task failures | SendMessage to coordinator, STOP |
| Prior doc not found | Notify coordinator, request prerequisite |
| Discussion contradicts prior docs | Note conflict, flag for coordinator |


@@ -0,0 +1,65 @@
# Role Library - Team Lifecycle v3
Dynamic role specification library for team-lifecycle-v3. Role definitions are loaded at runtime to extend the built-in role detection table.
## Purpose
- Extend role inference beyond hardcoded defaults
- Support domain-specific specialist roles
- Enable dynamic role injection based on task keywords
- Maintain backward compatibility with v2 core roles
## Role Categories
### Core Pipeline Roles (v2 inherited)
- analyst, writer, planner, executor, tester, reviewer
- architect, fe-developer, fe-qa
### Specialist Roles (v3 new)
- **orchestrator**: Complex task decomposition and parallel coordination
- **security-expert**: Security analysis and vulnerability scanning
- **performance-optimizer**: Performance profiling and optimization
- **data-engineer**: Data pipeline and schema design
- **devops-engineer**: Infrastructure as code and CI/CD
- **ml-engineer**: Machine learning pipeline implementation
## Dynamic Role Injection
Specialist roles are injected at runtime when coordinator detects matching keywords in task descriptions:
| Keywords | Injected Role |
|----------|---------------|
| security, vulnerability, OWASP | security-expert |
| performance, optimization, bottleneck | performance-optimizer |
| data, pipeline, ETL, schema | data-engineer |
| devops, CI/CD, deployment, docker | devops-engineer |
| machine learning, ML, model, training | ml-engineer |
| orchestrate, complex, multi-module | orchestrator |
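The matching above can be sketched as a case-insensitive substring check (illustrative only; a real matcher would need word boundaries so that short keywords like `ml` do not match inside unrelated words):

```typescript
// Hypothetical keyword matcher for dynamic role injection;
// the keyword lists mirror the table above.
const ROLE_KEYWORDS: Record<string, string[]> = {
  "security-expert": ["security", "vulnerability", "owasp"],
  "performance-optimizer": ["performance", "optimization", "bottleneck"],
  "data-engineer": ["data", "pipeline", "etl", "schema"],
  "devops-engineer": ["devops", "ci/cd", "deployment", "docker"],
  "ml-engineer": ["machine learning", "ml", "model", "training"],
  orchestrator: ["orchestrate", "complex", "multi-module"],
};

function injectRoles(taskDescription: string): string[] {
  const text = taskDescription.toLowerCase();
  return Object.entries(ROLE_KEYWORDS)
    .filter(([, keywords]) => keywords.some((k) => text.includes(k)))
    .map(([role]) => role);
}
```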
## Role Definition Format
Each role definition is a `.role.md` file with YAML frontmatter + description.
### Schema
```yaml
---
role: <role-name>
keywords: [<keyword1>, <keyword2>, ...]
responsibility_type: <Orchestration|Code generation|Validation|Read-only analysis>
task_prefix: <PREFIX>
default_inner_loop: <true|false>
category: <domain-category>
capabilities: [<capability1>, <capability2>, ...]
---
<Role description and responsibilities>
```
## Usage
The role library is loaded by the coordinator during Phase 1 (Requirements Collection) to extend role detection. Custom role definitions override built-in roles that share the same `role` identifier.
## Extensibility
Users can add custom role definitions by creating new `.role.md` files in this directory following the schema above.


@@ -0,0 +1,37 @@
---
role: data-engineer
keywords: [data, pipeline, ETL, database, schema, migration, analytics]
responsibility_type: Code generation
task_prefix: DATA
default_inner_loop: false
category: data
capabilities:
- data_pipeline_design
- schema_design
- etl_implementation
---
# Data Engineer
Designs and implements data pipelines, schemas, and ETL processes.
## Responsibilities
- Design database schemas and data models
- Implement ETL pipelines for data processing
- Create data migration scripts
- Optimize data storage and retrieval
- Implement data validation and quality checks
## Typical Tasks
- Design and implement data warehouse schema
- Build ETL pipeline for analytics
- Create database migration scripts
- Implement data validation framework
## Integration Points
- Called by coordinator when data keywords detected
- Works with executor for backend integration
- Coordinates with planner for data architecture


@@ -0,0 +1,37 @@
---
role: devops-engineer
keywords: [devops, CI/CD, deployment, infrastructure, docker, kubernetes, terraform]
responsibility_type: Code generation
task_prefix: DEVOPS
default_inner_loop: false
category: devops
capabilities:
- infrastructure_as_code
- ci_cd_pipeline
- deployment_automation
---
# DevOps Engineer
Implements infrastructure as code, CI/CD pipelines, and deployment automation.
## Responsibilities
- Design and implement CI/CD pipelines
- Create infrastructure as code (Terraform, CloudFormation)
- Implement deployment automation
- Configure monitoring and alerting
- Manage containerization and orchestration
## Typical Tasks
- Set up CI/CD pipeline for new project
- Implement infrastructure as code for cloud resources
- Create Docker containerization strategy
- Configure Kubernetes deployment
## Integration Points
- Called by coordinator when devops keywords detected
- Works with executor for deployment integration
- Coordinates with planner for infrastructure architecture


@@ -0,0 +1,37 @@
---
role: ml-engineer
keywords: [machine learning, ML, model, training, inference, neural network, AI]
responsibility_type: Code generation
task_prefix: ML
default_inner_loop: false
category: machine-learning
capabilities:
- model_training
- feature_engineering
- model_deployment
---
# ML Engineer
Implements machine learning pipelines, model training, and inference systems.
## Responsibilities
- Design and implement ML training pipelines
- Perform feature engineering and data preprocessing
- Train and evaluate ML models
- Implement model serving and inference
- Monitor model performance and drift
## Typical Tasks
- Build ML training pipeline
- Implement feature engineering pipeline
- Deploy model serving infrastructure
- Create model monitoring system
## Integration Points
- Called by coordinator when ML keywords detected
- Works with data-engineer for data pipeline integration
- Coordinates with planner for ML architecture


@@ -0,0 +1,39 @@
---
role: orchestrator
keywords: [orchestrate, coordinate, complex, multi-module, decompose, parallel, dependency]
responsibility_type: Orchestration
task_prefix: ORCH
default_inner_loop: false
category: orchestration
weight: 1.5
capabilities:
- task_decomposition
- parallel_coordination
- dependency_management
---
# Orchestrator
Decomposes complex multi-module tasks into coordinated sub-tasks with dependency management and parallel execution support.
## Responsibilities
- Analyze complex requirements and decompose into manageable sub-tasks
- Coordinate parallel execution of multiple implementation tracks
- Manage dependencies between sub-tasks
- Integrate results from parallel workers
- Validate integration points and cross-module consistency
## Typical Tasks
- Break down large features into frontend + backend + data components
- Coordinate multi-team parallel development
- Manage complex refactoring across multiple modules
- Orchestrate migration strategies with phased rollout
## Integration Points
- Works with planner to receive high-level plans
- Spawns multiple executor/fe-developer workers in parallel
- Integrates with tester for cross-module validation
- Reports to coordinator with integration status


@@ -0,0 +1,37 @@
---
role: performance-optimizer
keywords: [performance, optimization, bottleneck, latency, throughput, profiling, benchmark]
responsibility_type: Read-only analysis
task_prefix: PERF
default_inner_loop: false
category: performance
capabilities:
- performance_profiling
- bottleneck_identification
- optimization_recommendations
---
# Performance Optimizer
Analyzes code and architecture for performance bottlenecks and provides optimization recommendations.
## Responsibilities
- Profile code execution and identify bottlenecks
- Analyze database query performance
- Review caching strategies and effectiveness
- Assess resource utilization (CPU, memory, I/O)
- Recommend optimization strategies
## Typical Tasks
- Performance audit of critical paths
- Database query optimization review
- Caching strategy assessment
- Load testing analysis and recommendations
## Integration Points
- Called by coordinator when performance keywords detected
- Works with reviewer for performance-focused code review
- Reports findings with impact levels and optimization priorities


@@ -0,0 +1,37 @@
---
role: security-expert
keywords: [security, vulnerability, OWASP, compliance, audit, penetration, threat]
responsibility_type: Read-only analysis
task_prefix: SECURITY
default_inner_loop: false
category: security
capabilities:
- vulnerability_scanning
- threat_modeling
- compliance_checking
---
# Security Expert
Performs security analysis, vulnerability scanning, and compliance checking for code and architecture.
## Responsibilities
- Scan code for OWASP Top 10 vulnerabilities
- Perform threat modeling and attack surface analysis
- Check compliance with security standards (GDPR, HIPAA, etc.)
- Review authentication and authorization implementations
- Assess data protection and encryption strategies
## Typical Tasks
- Security audit of authentication module
- Vulnerability assessment of API endpoints
- Compliance review for data handling
- Threat modeling for new features
## Integration Points
- Called by coordinator when security keywords detected
- Works with reviewer for security-focused code review
- Reports findings with severity levels (Critical/High/Medium/Low)