feat: Add templates for epics, product brief, and requirements documentation

- Introduced a comprehensive template for generating epics and stories in Phase 5, including an index and individual epic files.
- Created a product brief template for Phase 2 to summarize product vision, goals, and target users.
- Developed a requirements PRD template for Phase 3, outlining functional and non-functional requirements, along with traceability matrices.

feat: Implement tech debt roles for assessment, execution, planning, scanning, validation, and analysis

- Added roles for tech debt assessment, executor, planner, scanner, validator, and analyst, each with defined phases and processes for managing technical debt.
- Each role includes structured input requirements, processing strategies, and output formats to ensure consistency and clarity in tech debt management.
catlog22
2026-03-07 13:32:04 +08:00
parent 7ee9b579fa
commit 29a1fea467
255 changed files with 14407 additions and 21120 deletions

View File

@@ -1,373 +1,74 @@
---
name: team-quality-assurance
description: Unified team skill for quality assurance. Full closed-loop QA combining issue discovery and software testing. All roles invoke this skill with the --role arg for role-specific execution. Triggers on "team quality-assurance", "team qa".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Agent(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Quality Assurance
Unified team skill: quality assurance combining issue discovery and software testing into a closed loop of scout -> strategy -> generate -> execute -> analyze. Uses multi-perspective scanning, a Generator-Executor pipeline, and a shared defect-pattern database for progressive quality assurance. Supports discovery, testing, and full closed-loop modes with parallel generation and GC loops. All team members invoke with `--role=<name>` to route to role-specific execution.
## Architecture
```
Skill(skill="team-quality-assurance", args="<task-description>")
                    |
        SKILL.md (this file) = Router
                    |
     +--------------+--------------+
     |                             |
no --role flag               --role <name>
     |                             |
Coordinator                     Worker
roles/coordinator/role.md       roles/<name>/role.md
     |
     +-- analyze -> dispatch -> spawn workers -> STOP
                    |
     +-------+-------+-------+-------+-------+
     v       v       v       v       v
  [scout] [strat]  [gen]  [exec] [analyst]
team-worker agents, each loads roles/<role>/role.md
```
## Command Architecture
```
roles/
├── coordinator/
│ ├── role.md # Pipeline orchestration (mode selection, task dispatch, monitoring)
│ └── commands/
│ ├── dispatch.md # Task chain creation
│ └── monitor.md # Progress monitoring
├── scout/
│ ├── role.md # Multi-perspective issue scanning
│ └── commands/
│ └── scan.md # Multi-perspective CLI fan-out scanning
├── strategist/
│ ├── role.md # Test strategy formulation
│ └── commands/
│ └── analyze-scope.md # Change scope analysis
├── generator/
│ ├── role.md # Test case generation
│ └── commands/
│ └── generate-tests.md # Layer-based test code generation
├── executor/
│ ├── role.md # Test execution and fix cycles
│ └── commands/
│ └── run-fix-cycle.md # Iterative test-fix loop
└── analyst/
├── role.md # Quality analysis reporting
└── commands/
└── quality-report.md # Defect pattern + coverage analysis
```
**Design principle**: role.md retains Phase 1 (Task Discovery) and Phase 5 (Report) inline. Phase 2-4 delegate to `commands/*.md` based on complexity.
## Role Registry
| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| scout | [roles/scout/role.md](roles/scout/role.md) | SCOUT-* | false |
| strategist | [roles/strategist/role.md](roles/strategist/role.md) | QASTRAT-* | false |
| generator | [roles/generator/role.md](roles/generator/role.md) | QAGEN-* | false |
| executor | [roles/executor/role.md](roles/executor/role.md) | QARUN-* | true |
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | QAANA-* | false |
## Role Router
### Input Parsing
Parse `$ARGUMENTS`:
- Has `--role <name>` -> Read `roles/<name>/role.md`, execute its phases
- No `--role` -> Orchestration Mode: auto route to coordinator (`roles/coordinator/role.md`)
## Shared Constants
- **Session prefix**: `QA`
- **Session path**: `.workflow/.team/QA-<slug>-<date>/`
- **Team name**: `quality-assurance`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`
## Dispatch & Orchestration
> **COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and role instructions are reduced to summaries, **you MUST immediately `Read` the corresponding role.md to reload before continuing execution**. Do not execute any Phase based on summaries.
### Dispatch
1. Extract `--role` from arguments
2. If no `--role` -> route to coordinator (Orchestration Mode)
3. Look up role in registry -> Read the role file -> Execute its phases
### Orchestration Mode
When invoked without `--role`, coordinator auto-starts. User just provides task description.
**Invocation**: `Skill(skill="team-quality-assurance", args="<task-description>")`
**Lifecycle**:
```
User provides task description
-> coordinator Phase 1-3: Mode detection + requirement clarification -> TeamCreate -> Create task chain
-> coordinator Phase 4: spawn first batch workers (background) -> STOP
-> Worker executes -> SendMessage callback -> coordinator advances next step
-> Loop until pipeline complete -> Phase 5 report
```
**User Commands** (wake paused coordinator):
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
---
## Shared Infrastructure
The following templates apply to all worker roles. Each role.md only needs to write **Phase 2-4** role-specific logic.
### Worker Phase 1: Task Discovery (shared by all workers)
Every worker executes the same task discovery flow on startup:
1. Call `TaskList()` to get all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No tasks -> idle wait
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress
**Resume Artifact Check** (prevent duplicate output after resume):
- Check whether this task's output artifact already exists
- Artifact complete -> skip to Phase 5 report completion
- Artifact incomplete or missing -> normal Phase 2-4 execution
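The filter in step 2 can be sketched as follows (a minimal sketch; the task fields and function name are illustrative, following the conventions above):

```javascript
// Minimal sketch of the shared task-discovery filter.
// Assumption: TaskList() items expose subject, owner, status, and blockedBy.
function findNextTask(tasks, rolePrefix, roleName) {
  return tasks.find(t =>
    t.subject.startsWith(rolePrefix) &&  // subject matches this role's prefix
    t.owner === roleName &&              // owned by this role
    t.status === "pending" &&            // not yet started
    t.blockedBy.length === 0             // no unresolved dependencies
  );
}
```

A worker that gets `undefined` back simply idles until the next wake.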
### Worker Phase 5: Report (shared by all workers)
Standard reporting flow after task completion:
1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
- Parameters: operation="log", session_id=<session-id>, from=<role>, type=<message-type>, data={ref: "<artifact-path>"}
- `to` and `summary` auto-defaulted -- do NOT specify explicitly
- **CLI fallback**: `ccw team log --session-id <session-id> --from <role> --type <type> --json`
2. **SendMessage**: Send result to coordinator
3. **TaskUpdate**: Mark task completed
4. **Loop**: Return to Phase 1 to check next task
### Wisdom Accumulation (all roles)
Cross-task knowledge accumulation. Coordinator creates `wisdom/` directory at session initialization.
**Directory**:
```
<session-folder>/wisdom/
├── learnings.md # Patterns and insights
├── decisions.md # Architecture and design decisions
├── conventions.md # Codebase conventions
└── issues.md # Known risks and issues
```
**Worker Load** (Phase 2): Extract `Session: <path>` from task description, read wisdom directory files.
**Worker Contribute** (Phase 4/5): Write this task's discoveries to corresponding wisdom files.
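The `Session: <path>` extraction used by Worker Load can be sketched as (assumes the task description carries a `session:` line; matching is case-insensitive):

```javascript
// Sketch: pull the session folder out of a task description.
// Assumption: descriptions contain a line like "session: <path>" (any case).
function extractSession(description) {
  const match = description.match(/^session:\s*(\S+)/im);
  return match ? match[1] : null;
}
```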
### Role Isolation Rules
#### Output Tagging
All outputs must carry `[role_name]` prefix.
#### Coordinator Isolation
| Allowed | Forbidden |
|---------|-----------|
| Requirement clarification (AskUserQuestion) | Direct test writing |
| Create task chain (TaskCreate) | Direct test execution or scanning |
| Mode selection + quality gating | Direct coverage analysis |
| Monitor progress (message bus) | Bypassing workers |
#### Worker Isolation
| Allowed | Forbidden |
|---------|-----------|
| Process tasks with own prefix | Process tasks with other role prefixes |
| Share state via team_msg(type='state_update') | Create tasks for other roles |
| SendMessage to coordinator | Communicate directly with other workers |
| Delegate to commands/ files | Modify resources outside own responsibility |
### Team Configuration
| Setting | Value |
|---------|-------|
| Team name | quality-assurance |
| Session directory | `.workflow/.team/QA-<slug>-<date>/` |
| Test layers | L1: Unit (80%), L2: Integration (60%), L3: E2E (40%) |
| Scan perspectives | bug, security, ux, test-coverage, code-quality |
### Shared Memory
Cross-role accumulated knowledge stored via team_msg(type='state_update'):
| Field | Owner | Content |
|-------|-------|---------|
| `discovered_issues` | scout | Multi-perspective scan findings |
| `test_strategy` | strategist | Layer selection, coverage targets, scope |
| `generated_tests` | generator | Test file paths and metadata |
| `execution_results` | executor | Test run results and coverage data |
| `defect_patterns` | analyst | Recurring defect pattern database |
| `quality_score` | analyst | Overall quality assessment |
| `coverage_history` | analyst | Coverage trend over time |
Each role reads in Phase 2, writes own fields in Phase 5.
### Message Bus (All Roles)
Before every SendMessage, the sender must call `mcp__ccw-tools__team_msg` to log the message.
**Message types by role**:
| Role | Types |
|------|-------|
| coordinator | `mode_selected`, `gc_loop_trigger`, `quality_gate`, `task_unblocked`, `error`, `shutdown` |
| scout | `scan_ready`, `issues_found`, `error` |
| strategist | `strategy_ready`, `error` |
| generator | `tests_generated`, `tests_revised`, `error` |
| executor | `tests_passed`, `tests_failed`, `coverage_report`, `error` |
| analyst | `analysis_ready`, `quality_report`, `error` |
---
## Three-Mode Pipeline Architecture
### Mode Auto-Detection
| Condition | Mode |
|-----------|------|
| Explicit `--mode=discovery` flag | discovery |
| Explicit `--mode=testing` flag | testing |
| Explicit `--mode=full` flag | full |
| Task description contains: discovery/scan/issue keywords | discovery |
| Task description contains: test/coverage/TDD keywords | testing |
| No explicit flag and no keyword match | full (default) |
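A sketch of the detection order (explicit flag first, then keywords; the keyword regexes are illustrative, not exhaustive):

```javascript
// Sketch of mode auto-detection. Explicit --mode flags win; otherwise keyword
// matching decides, and ambiguity falls through to "full".
function detectMode(args) {
  const flag = args.match(/--mode=(discovery|testing|full)/);
  if (flag) return flag[1];
  const wantsDiscovery = /\b(discovery|scan|issues?)\b/i.test(args);
  const wantsTesting = /\b(test|coverage|tdd)\b/i.test(args);
  if (wantsDiscovery && !wantsTesting) return "discovery";
  if (wantsTesting && !wantsDiscovery) return "testing";
  return "full"; // both keyword types or no clear match
}
```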
### Pipeline Diagrams
```
Discovery Mode (issue discovery first):
SCOUT-001(multi-perspective scan) -> QASTRAT-001 -> QAGEN-001 -> QARUN-001 -> QAANA-001
Testing Mode (skip scout, test first):
QASTRAT-001(change analysis) -> QAGEN-001(L1) -> QARUN-001(L1) -> QAGEN-002(L2) -> QARUN-002(L2) -> QAANA-001
Full QA Mode (complete closed loop):
SCOUT-001(scan) -> QASTRAT-001(strategy)
-> [QAGEN-001(L1) || QAGEN-002(L2)](parallel) -> [QARUN-001 || QARUN-002](parallel)
-> QAANA-001(analysis) -> SCOUT-002(regression scan)
```
### Generator-Executor Pipeline (GC Loop)
Generator and executor iterate per test layer until coverage targets are met:
```
QAGEN -> QARUN -> (if coverage < target) -> QAGEN-fix -> QARUN-2
(if coverage >= target) -> next layer or QAANA
```
Coordinator monitors GC loop progress. After 3 GC iterations without convergence, accept current coverage with warning.
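The loop's exit decision can be sketched as (an illustrative helper; the 3-iteration cap mirrors the rule above):

```javascript
// Sketch of GC-loop control: advance when the target is met, cap fix
// iterations at maxIterations, otherwise schedule another fix round.
function gcLoopNext(coverage, target, iteration, maxIterations = 3) {
  if (coverage >= target) return { action: "advance" };   // next layer or QAANA
  if (iteration >= maxIterations)
    return { action: "accept", warning: "coverage below target" };
  return { action: "fix", nextIteration: iteration + 1 }; // QAGEN-fix -> QARUN
}
```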
In Full QA mode, spawn N generator agents in parallel (one per test layer). Each receives a QAGEN-N task with layer assignment. Use `run_in_background: true` for all spawns, then coordinator stops and waits for callbacks. Similarly spawn N executor agents in parallel for QARUN-N tasks.
### Cadence Control
**Beat model**: Event-driven, each beat = coordinator wake -> process -> spawn -> STOP.
```
Beat Cycle (single beat)
═══════════════════════════════════════════════════════════
Event Coordinator Workers
───────────────────────────────────────────────────────────
callback/resume ──> ┌─ handleCallback ─┐
│ mark completed │
│ check pipeline │
├─ handleSpawnNext ─┤
│ find ready tasks │
│ spawn workers ───┼──> [Worker A] Phase 1-5
│ (parallel OK) ──┼──> [Worker B] Phase 1-5
└─ STOP (idle) ─────┘ │
callback <─────────────────────────────────────────┘
(next beat) SendMessage + TaskUpdate(completed)
═══════════════════════════════════════════════════════════
```
**Pipeline beat view**:
```
Discovery mode (5 beats, strictly serial)
──────────────────────────────────────────────────────────
Beat 1 2 3 4 5
│ │ │ │ │
SCOUT -> STRAT -> GEN -> RUN -> ANA
▲ ▲
pipeline pipeline
start done
Legend: STRAT=QASTRAT GEN=QAGEN RUN=QARUN ANA=QAANA
Testing mode (6 beats, layer progression)
──────────────────────────────────────────────────────────
Beat 1 2 3 4 5 6
│ │ │ │ │ │
STRAT -> GEN-L1 -> RUN-L1 -> GEN-L2 -> RUN-L2 -> ANA
▲ ▲
no scout analysis
(test only)
Full QA mode (6 beats, with parallel windows + regression)
──────────────────────────────────────────────────────────
Beat 1 2 3 4 5 6
│ │ ┌────┴────┐ ┌────┴────┐ │ │
SCOUT -> STRAT -> GEN-L1||GEN-L2 -> RUN-1||RUN-2 -> ANA -> SCOUT-2
▲ ▲
parallel gen regression
scan
```
**Checkpoints**:
| Trigger | Location | Behavior |
|---------|----------|----------|
| GC loop limit | QARUN coverage < target | After 3 iterations, accept current coverage with warning |
| Pipeline stall | No ready + no running | Check missing tasks, report to user |
| Regression scan (full mode) | QAANA-001 complete | Trigger SCOUT-002 for regression verification |
**Stall Detection** (executed by the coordinator's `handleCheck`):
| Check | Condition | Resolution |
|-------|-----------|------------|
| Worker no response | in_progress task no callback | Report waiting task list, suggest user `resume` |
| Pipeline deadlock | no ready + no running + has pending | Check blockedBy dependency chain, report blocking point |
| GC loop exceeded | generator/executor iteration > 3 | Terminate loop, output latest coverage report |
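The three checks can be sketched together (assumption: tasks expose `status` and `blockedBy`; worker-callback tracking is reduced to the in_progress state):

```javascript
// Sketch of handleCheck stall detection. Returns the first stall condition found.
function detectStall(tasks, gcIterations, maxGC = 3) {
  const ready = tasks.filter(t => t.status === "pending" && t.blockedBy.length === 0);
  const running = tasks.filter(t => t.status === "in_progress");
  const pending = tasks.filter(t => t.status === "pending");
  if (gcIterations > maxGC) return "gc-exceeded";                 // terminate loop
  if (running.length > 0 && ready.length === 0) return "waiting"; // suggest `resume`
  if (ready.length === 0 && running.length === 0 && pending.length > 0)
    return "deadlock";                                            // inspect blockedBy chain
  return "ok";
}
```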
### Task Metadata Registry
| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|-------------|-------------|
| SCOUT-001 | scout | discovery | (none) | Multi-perspective issue scanning |
| QASTRAT-001 | strategist | strategy | SCOUT-001 or (none) | Change scope analysis + test strategy |
| QAGEN-001 | generator | generation | QASTRAT-001 | L1 unit test generation |
| QAGEN-002 | generator | generation | QASTRAT-001 (full mode) | L2 integration test generation |
| QARUN-001 | executor | execution | QAGEN-001 | L1 test execution + fix cycles |
| QARUN-002 | executor | execution | QAGEN-002 (full mode) | L2 test execution + fix cycles |
| QAANA-001 | analyst | analysis | QARUN-001 (+ QARUN-002) | Defect pattern analysis + quality report |
| SCOUT-002 | scout | regression | QAANA-001 (full mode) | Regression scan after fixes |
---
## Coordinator Spawn Template
### v5 Worker Spawn (all roles)
Coordinator spawns workers with the `team-worker` agent using this template:
```
Agent({
subagent_type: "team-worker",
description: "Spawn <role> worker",
team_name: "quality-assurance",
name: "<role>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-quality-assurance/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: quality-assurance
requirement: <task-description>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).`
})
```
**Inner Loop roles** (executor): Set `inner_loop: true`.
**Single-task roles** (scout, strategist, generator, analyst): Set `inner_loop: false`.
### Parallel Spawn (N agents for same role)
> When pipeline has parallel tasks assigned to the same role, spawn N distinct team-worker agents with unique names.
**Parallel detection**:
| Condition | Action |
|-----------|--------|
| N parallel tasks for same role prefix | Spawn N agents named `<role>-1`, `<role>-2` ... |
| Single task for role | Standard spawn (single agent) |
**Parallel spawn template**:
```
Agent({
subagent_type: "team-worker",
description: "Spawn <role>-<N> worker",
team_name: "quality-assurance",
name: "<role>-<N>",
run_in_background: true,
prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-quality-assurance/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: quality-assurance
requirement: <task-description>
agent_name: <role>-<N>
inner_loop: <true|false>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery, owner=<role>-<N>) -> role Phase 2-4 -> built-in Phase 5 (report).`
})
```
**Dispatch must match agent names**: parallel tasks use the instance-specific owner `<role>-<N>` when created in dispatch.
---
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | View pipeline status graph |
| `resume` / `continue` | Advance to next step |
| `--mode=discovery` | Force discovery mode |
| `--mode=testing` | Force testing mode |
| `--mode=full` | Force full QA mode |
## Completion Action
When the pipeline completes (all tasks done, coordinator Phase 5), the coordinator presents:
```
AskUserQuestion({
  questions: [{
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
```
| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete() -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-quality-assurance", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |
## Session Directory
```
.workflow/.team/QA-<slug>-<YYYY-MM-DD>/
├── .msg/
│   ├── meta.json           # Session state + shared memory
│   └── messages.jsonl      # Team message bus
├── wisdom/                 # Cross-task knowledge
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── scan/                   # Scout output
│   └── scan-results.json
├── strategy/               # Strategist output
│   └── test-strategy.md
├── tests/                  # Generator output
│   ├── L1-unit/
│   ├── L2-integration/
│   └── L3-e2e/
├── results/                # Executor output
│   ├── run-001.json
│   └── coverage-001.json
└── analysis/               # Analyst output
    └── quality-report.md
```
## Specs Reference
- [specs/pipelines.md](specs/pipelines.md) — Pipeline definitions and task registry
- [specs/team-config.json](specs/team-config.json) — Team configuration and shared memory schema
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode -> auto route to coordinator |
| Role file not found | Error with expected path (roles/<name>/role.md) |
| Task prefix conflict | Log warning, proceed |
| GC loop exceeded | After 3 iterations, accept current coverage with warning |
| CLI tool fails | Worker fallback to direct implementation |
| Scout finds no issues | Report clean scan, skip to testing mode |
| Test environment broken | Notify user, suggest manual fix |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |

View File

@@ -0,0 +1,80 @@
---
role: analyst
prefix: QAANA
inner_loop: false
message_types:
success: analysis_ready
report: quality_report
error: error
---
# Quality Analyst
Analyze defect patterns, coverage gaps, test effectiveness, and generate comprehensive quality reports. Maintain defect pattern database and provide quality scoring.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/.msg/meta.json | Yes |
| Discovered issues | meta.json -> discovered_issues | No |
| Test strategy | meta.json -> test_strategy | No |
| Generated tests | meta.json -> generated_tests | No |
| Execution results | meta.json -> execution_results | No |
| Historical patterns | meta.json -> defect_patterns | No |
1. Extract session path from task description
2. Read .msg/meta.json for all accumulated QA data
3. Read coverage data from `coverage/coverage-summary.json` if available
4. Read layer execution results from `<session>/results/run-*.json`
5. Select analysis mode:
| Data Points | Mode |
|-------------|------|
| <= 5 issues + results | Direct inline analysis |
| > 5 issues | CLI-assisted deep analysis via gemini |
## Phase 3: Multi-Dimensional Analysis
**Five analysis dimensions**:
1. **Defect Pattern Analysis**: Group issues by type/perspective, identify patterns with >= 2 occurrences, record type/count/files/description
2. **Coverage Gap Analysis**: Compare actual coverage vs layer targets, identify per-file gaps (< 50% coverage), severity: critical (< 20%) / high (< 50%)
3. **Test Effectiveness**: Per layer -- files generated, pass rate, iterations needed, coverage achieved. Effective = pass_rate >= 95% AND iterations <= 2
4. **Quality Trend**: Compare against coverage_history. Trend: improving (delta > 5%), declining (delta < -5%), stable
5. **Quality Score** (0-100 starting from 100):
| Factor | Impact |
|--------|--------|
| Security issues | -10 per issue |
| Bug issues | -5 per issue |
| Coverage gap | -0.5 per gap percentage |
| Test failures | -(100 - pass_rate) * 0.3 per layer |
| Effective test layers | +5 per layer |
| Improving trend | +3 |
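As a sketch, the factors combine additively from a baseline of 100, clamped to 0-100 (the field names are illustrative, not a fixed schema):

```javascript
// Sketch of the quality score: start at 100, apply penalties and bonuses, clamp.
function qualityScore({ securityIssues, bugIssues, coverageGapPct, layers, improvingTrend }) {
  let score = 100;
  score -= securityIssues * 10;              // -10 per security issue
  score -= bugIssues * 5;                    // -5 per bug issue
  score -= coverageGapPct * 0.5;             // -0.5 per gap percentage point
  for (const layer of layers) {
    score -= (100 - layer.passRate) * 0.3;   // test-failure penalty per layer
    if (layer.passRate >= 95 && layer.iterations <= 2)
      score += 5;                            // effective layer bonus
  }
  if (improvingTrend) score += 3;
  return Math.max(0, Math.min(100, score));
}
```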
For CLI-assisted mode:
```
PURPOSE: Deep quality analysis on QA results to identify defect patterns and improvement opportunities
TASK: Classify defects by root cause, identify high-density files, analyze coverage gaps vs risk, generate recommendations
MODE: analysis
```
## Phase 4: Report Generation & Output
1. Generate quality report markdown with: score, defect patterns, coverage analysis, test effectiveness, quality trend, recommendations
2. Write report to `<session>/analysis/quality-report.md`
3. Update `<session>/.msg/meta.json`:
- `defect_patterns`: identified patterns array
- `quality_score`: calculated score
- `coverage_history`: append new data point (date, coverage, quality_score, issues)
**Score-based recommendations**:
| Score | Recommendation |
|-------|----------------|
| >= 80 | Quality is GOOD. Maintain current testing practices. |
| 60-79 | Quality needs IMPROVEMENT. Focus on coverage gaps and recurring patterns. |
| < 60 | Quality is CONCERNING. Recommend comprehensive review and testing effort. |

View File

@@ -0,0 +1,72 @@
# Analyze Task
Parse user task -> detect QA capabilities -> build dependency graph -> design roles.
**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.
## Signal Detection
| Keywords | Capability | Prefix |
|----------|------------|--------|
| scan, discover, find issues, audit | scout | SCOUT |
| strategy, plan, test layers, coverage | strategist | QASTRAT |
| generate tests, write tests, create tests | generator | QAGEN |
| run tests, execute, fix tests | executor | QARUN |
| analyze, report, quality score | analyst | QAANA |
## QA Mode Detection
| Condition | Mode |
|-----------|------|
| Keywords: discovery, scan, issues, bug-finding | discovery |
| Keywords: test, coverage, TDD, unit, integration | testing |
| Both keyword types OR no clear match | full |
## Dependency Graph
Natural ordering tiers for QA pipeline:
- Tier 0: scout (issue discovery)
- Tier 1: strategist (strategy requires scout discoveries)
- Tier 2: generator (generation requires strategy)
- Tier 3: executor (execution requires generated tests)
- Tier 4: analyst (analysis requires execution results)
## Pipeline Definitions
```
Discovery Mode: SCOUT -> QASTRAT -> QAGEN(L1) -> QARUN(L1) -> QAANA
Testing Mode: QASTRAT -> QAGEN(L1) -> QARUN(L1) -> QAGEN(L2) -> QARUN(L2) -> QAANA
Full Mode: SCOUT -> QASTRAT -> [QAGEN(L1) || QAGEN(L2)] -> [QARUN(L1) || QARUN(L2)] -> QAANA -> SCOUT(regression)
```
## Complexity Scoring
| Factor | Points |
|--------|--------|
| Per capability | +1 |
| Cross-domain (test + discovery) | +2 |
| Parallel tracks | +1 per track |
| Serial depth > 3 | +1 |
Results: 1-3 Low, 4-6 Medium, 7+ High
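The scoring rules above can be sketched as (input counts are assumed to come from signal detection; the names are illustrative):

```javascript
// Sketch of complexity scoring with the thresholds from the table above.
function complexity({ capabilities, crossDomain, parallelTracks, serialDepth }) {
  let score = capabilities;        // +1 per capability
  if (crossDomain) score += 2;     // test + discovery in one task
  score += parallelTracks;         // +1 per parallel track
  if (serialDepth > 3) score += 1; // deep serial chain
  const level = score <= 3 ? "Low" : score <= 6 ? "Medium" : "High";
  return { score, level };
}
```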
## Role Minimization
- Cap at 6 roles (coordinator + 5 workers)
- Merge overlapping capabilities
- Absorb trivial single-step roles
## Output
Write <session>/task-analysis.json:
```json
{
"task_description": "<original>",
"pipeline_mode": "<discovery|testing|full>",
"capabilities": [{ "name": "<cap>", "prefix": "<PREFIX>", "keywords": ["..."] }],
"dependency_graph": { "<TASK-ID>": { "role": "<role>", "blockedBy": ["..."], "priority": "P0|P1|P2" } },
"roles": [{ "name": "<role>", "prefix": "<PREFIX>", "inner_loop": false }],
"complexity": { "score": 0, "level": "Low|Medium|High" },
"gc_loop_enabled": true
}
```

View File

@@ -1,167 +1,111 @@
# Command: dispatch
> Task chain creation and dependency management. Creates the pipeline task chain for the selected QA mode, builds blockedBy relationships from the dependency graph, and assigns tasks to worker roles.
## Workflow
1. Read task-analysis.json -> extract pipeline_mode and dependency_graph
2. Read specs/pipelines.md -> get task registry for selected pipeline
3. Topological sort tasks (respect blockedBy)
4. Validate all owners exist in role registry (SKILL.md)
5. For each task (in order):
- TaskCreate with structured description (see template below)
- TaskUpdate with blockedBy + owner assignment
6. Update session.json with pipeline.tasks_total
7. Validate chain (no orphans, no cycles, all refs valid)
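Step 3's ordering can be sketched with Kahn-style peeling (the `{ id, blockedBy }` task shape is illustrative):

```javascript
// Sketch of topological ordering over blockedBy edges; throws on cycles (step 7).
function topoSort(tasks) {
  const remaining = new Map(tasks.map(t => [t.id, new Set(t.blockedBy)]));
  const order = [];
  while (remaining.size > 0) {
    // A task is ready once none of its blockers are still unordered.
    const ready = [...remaining.keys()].filter(id =>
      [...remaining.get(id)].every(dep => !remaining.has(dep)));
    if (ready.length === 0) throw new Error("dependency cycle detected");
    for (const id of ready) { order.push(id); remaining.delete(id); }
  }
  return order;
}
```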
## When to Use
- Coordinator Phase 3: the QA mode is determined and the task chain needs to be created
- Team created, workers spawned
**Trigger conditions**:
- After coordinator Phase 2 completes
- A mode switch requires rebuilding the task chain
- The GC loop requires creating fix tasks
## Strategy
### Delegation Mode
**Mode**: Direct. The coordinator operates TaskCreate/TaskUpdate itself.
### Decision Logic
```javascript
// Select the pipeline by qaMode
function buildPipeline(qaMode, sessionFolder, taskDescription) {
  const pipelines = {
    'discovery': [
      { prefix: 'SCOUT', owner: 'scout', desc: 'Multi-perspective issue scanning', blockedBy: [] },
      { prefix: 'QASTRAT', owner: 'strategist', desc: 'Test strategy formulation', blockedBy: ['SCOUT'] },
      { prefix: 'QAGEN', owner: 'generator', desc: 'Test code generation (L1)', meta: 'layer: L1', blockedBy: ['QASTRAT'] },
      { prefix: 'QARUN', owner: 'executor', desc: 'Test execution (L1)', meta: 'layer: L1', blockedBy: ['QAGEN'] },
      { prefix: 'QAANA', owner: 'analyst', desc: 'Quality analysis report', blockedBy: ['QARUN'] }
    ],
    'testing': [
      { prefix: 'QASTRAT', owner: 'strategist', desc: 'Test strategy formulation', blockedBy: [] },
      { prefix: 'QAGEN-L1', owner: 'generator', desc: 'Test code generation (L1)', meta: 'layer: L1', blockedBy: ['QASTRAT'] },
      { prefix: 'QARUN-L1', owner: 'executor', desc: 'Test execution (L1)', meta: 'layer: L1', blockedBy: ['QAGEN-L1'] },
      { prefix: 'QAGEN-L2', owner: 'generator', desc: 'Test code generation (L2)', meta: 'layer: L2', blockedBy: ['QARUN-L1'] },
      { prefix: 'QARUN-L2', owner: 'executor', desc: 'Test execution (L2)', meta: 'layer: L2', blockedBy: ['QAGEN-L2'] },
      { prefix: 'QAANA', owner: 'analyst', desc: 'Quality analysis report', blockedBy: ['QARUN-L2'] }
    ],
    'full': [
      { prefix: 'SCOUT', owner: 'scout', desc: 'Multi-perspective issue scanning', blockedBy: [] },
      { prefix: 'QASTRAT', owner: 'strategist', desc: 'Test strategy formulation', blockedBy: ['SCOUT'] },
      { prefix: 'QAGEN-L1', owner: 'generator-1', desc: 'Test code generation (L1)', meta: 'layer: L1', blockedBy: ['QASTRAT'] },
      { prefix: 'QAGEN-L2', owner: 'generator-2', desc: 'Test code generation (L2)', meta: 'layer: L2', blockedBy: ['QASTRAT'] },
      { prefix: 'QARUN-L1', owner: 'executor-1', desc: 'Test execution (L1)', meta: 'layer: L1', blockedBy: ['QAGEN-L1'] },
      { prefix: 'QARUN-L2', owner: 'executor-2', desc: 'Test execution (L2)', meta: 'layer: L2', blockedBy: ['QAGEN-L2'] },
      { prefix: 'QAANA', owner: 'analyst', desc: 'Quality analysis report', blockedBy: ['QARUN-L1', 'QARUN-L2'] },
      { prefix: 'SCOUT-REG', owner: 'scout', desc: 'Regression scan', blockedBy: ['QAANA'] }
    ]
  }
  return pipelines[qaMode] || pipelines['discovery']
}
```
## Execution Steps
### Step 1: Context Preparation
```javascript
const pipeline = buildPipeline(qaMode, sessionFolder, taskDescription)
```
### Step 2: Execute Strategy
```javascript
const taskIds = {}
for (const stage of pipeline) {
  // Build the task description (includes session and layer info)
  const fullDesc = [
    stage.desc,
    `\nsession: ${sessionFolder}`,
    stage.meta ? `\n${stage.meta}` : '',
    `\n\nGoal: ${taskDescription}`
  ].join('')
  // Create the task
  TaskCreate({
    subject: `${stage.prefix}-001: ${stage.desc}`,
    description: fullDesc,
    activeForm: `${stage.desc} in progress`
  })
  // Record the task ID (looked up via TaskList rather than assuming TaskCreate returns it)
  const allTasks = TaskList()
  const newTask = allTasks.find(t => t.subject.startsWith(`${stage.prefix}-001`))
  taskIds[stage.prefix] = newTask.id
  // Assign owner and dependencies
  const blockedByIds = stage.blockedBy
    .map(dep => taskIds[dep])
    .filter(Boolean)
  TaskUpdate({
    taskId: newTask.id,
    owner: stage.owner,
    addBlockedBy: blockedByIds
  })
}
```
### Step 3: Result Processing
```javascript
// Validate the task chain
const allTasks = TaskList()
const chainTasks = pipeline.map(s => taskIds[s.prefix]).filter(Boolean)
const chainValid = chainTasks.length === pipeline.length
if (!chainValid) {
mcp__ccw-tools__team_msg({
operation: "log", session_id: teamName, from: "coordinator",
to: "user", type: "error",
})
}
```
## GC Loop Task Creation
When the executor reports that coverage misses the target, the coordinator invokes this logic to append tasks:
```javascript
function createGCLoopTasks(gcIteration, targetLayer, sessionFolder) {
  // Create the fix task
  TaskCreate({
    subject: `QAGEN-fix-${gcIteration}: Fix ${targetLayer} tests (GC #${gcIteration})`,
    description: `Fix failing tests and fill coverage gaps\nsession: ${sessionFolder}\nlayer: ${targetLayer}\ntype: gc-fix`,
    activeForm: `GC loop #${gcIteration} fixing`
  })
  // Create the re-run task
  TaskCreate({
    subject: `QARUN-gc-${gcIteration}: Re-run ${targetLayer} (GC #${gcIteration})`,
    description: `Re-run tests to verify fixes\nsession: ${sessionFolder}\nlayer: ${targetLayer}`,
    activeForm: `GC loop #${gcIteration} running`
  })
  // Set dependency: QARUN-gc blocked by QAGEN-fix
  // ... TaskUpdate addBlockedBy
}
```
## Output Format
```
## Task Chain Created
### Mode: [discovery|testing|full]
### Pipeline Stages: [count]
- [prefix]-001: [description] (owner: [role], blocked by: [deps])
### Verification: PASS/FAIL
```
## Task Description Template
```
PURPOSE: <goal> | Success: <criteria>
TASK:
- <step 1>
- <step 2>
CONTEXT:
- Session: <session-folder>
- Layer: <L1-unit|L2-integration|L3-e2e> (if applicable)
- Upstream artifacts: <list>
- Shared memory: <session>/wisdom/.msg/meta.json
EXPECTED: <artifact path> + <quality criteria>
CONSTRAINTS: <scope limits>
---
InnerLoop: <true|false>
RoleSpec: .claude/skills/team-quality-assurance/roles/<role>/role.md
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Task creation fails | Retry once, then report to user |
| Dependency cycle detected | Flatten dependencies, warn coordinator |
| Invalid qaMode | Default to 'discovery' mode |
| Agent/CLI failure | Retry once, then fallback to inline execution |
| Timeout (>5 min) | Report partial results, notify coordinator |
## Pipeline Task Registry
### Discovery Mode
```
SCOUT-001 (scout): Multi-perspective issue scanning
blockedBy: []
QASTRAT-001 (strategist): Test strategy formulation
blockedBy: [SCOUT-001]
QAGEN-001 (generator): L1 unit test generation
blockedBy: [QASTRAT-001], meta: layer=L1
QARUN-001 (executor): L1 test execution + fix cycles
blockedBy: [QAGEN-001], inner_loop: true, meta: layer=L1
QAANA-001 (analyst): Quality analysis report
blockedBy: [QARUN-001]
```
### Testing Mode
```
QASTRAT-001 (strategist): Test strategy formulation
blockedBy: []
QAGEN-L1-001 (generator): L1 unit test generation
blockedBy: [QASTRAT-001], meta: layer=L1
QARUN-L1-001 (executor): L1 test execution + fix cycles
blockedBy: [QAGEN-L1-001], inner_loop: true, meta: layer=L1
QAGEN-L2-001 (generator): L2 integration test generation
blockedBy: [QARUN-L1-001], meta: layer=L2
QARUN-L2-001 (executor): L2 test execution + fix cycles
blockedBy: [QAGEN-L2-001], inner_loop: true, meta: layer=L2
QAANA-001 (analyst): Quality analysis report
blockedBy: [QARUN-L2-001]
```
### Full Mode
```
SCOUT-001 (scout): Multi-perspective issue scanning
blockedBy: []
QASTRAT-001 (strategist): Test strategy formulation
blockedBy: [SCOUT-001]
QAGEN-L1-001 (generator-1): L1 unit test generation
blockedBy: [QASTRAT-001], meta: layer=L1
QAGEN-L2-001 (generator-2): L2 integration test generation
blockedBy: [QASTRAT-001], meta: layer=L2
QARUN-L1-001 (executor-1): L1 test execution + fix cycles
blockedBy: [QAGEN-L1-001], inner_loop: true, meta: layer=L1
QARUN-L2-001 (executor-2): L2 test execution + fix cycles
blockedBy: [QAGEN-L2-001], inner_loop: true, meta: layer=L2
QAANA-001 (analyst): Quality analysis report
blockedBy: [QARUN-L1-001, QARUN-L2-001]
SCOUT-002 (scout): Regression scan after fixes
blockedBy: [QAANA-001]
```
## InnerLoop Flag Rules
- true: executor roles (run-fix cycles)
- false: scout, strategist, generator, analyst roles
## Dependency Validation
- No orphan tasks (all tasks have valid owner)
- No circular dependencies
- All blockedBy references exist
- Session reference in every task description
- RoleSpec reference in every task description
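These validation rules can be sketched as a single pass over the created tasks (an illustrative sketch; the task shape `{ id, subject, owner, blockedBy, description }` is assumed from the `TaskList()` usage earlier in this command):

```javascript
// Minimal dependency-validation sketch for the rules listed above.
function validateTaskChain(tasks) {
  const ids = new Set(tasks.map(t => t.id))
  const errors = []
  for (const t of tasks) {
    if (!t.owner) errors.push(`orphan task: ${t.subject}`)
    for (const dep of t.blockedBy || []) {
      if (!ids.has(dep)) errors.push(`missing blockedBy ref ${dep} in ${t.subject}`)
    }
    if (!/session:/.test(t.description || '')) errors.push(`no session ref: ${t.subject}`)
    if (!/RoleSpec:/.test(t.description || '')) errors.push(`no RoleSpec ref: ${t.subject}`)
  }
  // Cycle check: repeatedly remove tasks whose dependencies are already removed
  const remaining = new Set(tasks.map(t => t.id))
  let progress = true
  while (progress && remaining.size > 0) {
    progress = false
    for (const t of tasks) {
      if (remaining.has(t.id) &&
          (t.blockedBy || []).every(d => !remaining.has(d) || !ids.has(d))) {
        remaining.delete(t.id)
        progress = true
      }
    }
  }
  if (remaining.size > 0) errors.push(`dependency cycle among: ${[...remaining].join(', ')}`)
  return errors
}
```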
## Log After Creation
```
mcp__ccw-tools__team_msg({
operation: "log",
session_id: <session-id>,
from: "coordinator",
type: "pipeline_selected",
data: { pipeline: "<mode>", task_count: <N> }
})
```

# Monitor Pipeline
Event-driven pipeline coordination. Beat model: coordinator wake -> process -> spawn -> STOP.
## Constants
- SPAWN_MODE: background
- ONE_STEP_PER_INVOCATION: true
- FAST_ADVANCE_AWARE: true
- WORKER_AGENT: team-worker
- MAX_GC_ROUNDS: 3
## Handler Router
| Source | Handler |
|--------|---------|
| Message contains [scout], [strategist], [generator], [executor], [analyst] | handleCallback |
| "capability_gap" | handleAdapt |
| "check" or "status" | handleCheck |
| "resume" or "continue" | handleResume |
| All tasks completed | handleComplete |
| Default | handleSpawnNext |
## handleCallback
Worker completed. Process and advance.
1. Parse message to identify role and task ID:
| Message Pattern | Role Detection |
|----------------|---------------|
| `[scout]` or task ID `SCOUT-*` | scout |
| `[strategist]` or task ID `QASTRAT-*` | strategist |
| `[generator]` or task ID `QAGEN-*` | generator |
| `[executor]` or task ID `QARUN-*` | executor |
| `[analyst]` or task ID `QAANA-*` | analyst |
### Pipeline Stage Order
```
SCOUT -> QASTRAT -> QAGEN -> QARUN -> QAANA
```
2. Check if progress update (inner loop) or final completion
3. Progress -> update session state, STOP
4. Completion -> mark task done via TaskUpdate(status="completed"), remove from active_workers
5. Check for checkpoints:
   - QARUN-* completes -> read meta.json for coverage:
     - coverage >= target OR gc_rounds >= MAX_GC_ROUNDS -> proceed to handleSpawnNext
     - coverage < target AND gc_rounds < MAX_GC_ROUNDS -> create GC fix tasks, increment gc_rounds
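The coverage checkpoint decision above reduces to a small helper (an illustrative sketch; `coverage` and `gcRounds` are assumed to come from `<session>/.msg/meta.json`):

```javascript
// GC checkpoint decision: advance the pipeline, or append fix/re-run tasks.
function gcDecision(coverage, target, gcRounds, maxRounds = 3) {
  if (coverage == null || coverage >= target || gcRounds >= maxRounds) {
    return 'spawn-next'      // accept current coverage and advance
  }
  return 'create-gc-tasks'   // append QAGEN-fix + QARUN-gc, increment gc_rounds
}
```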
**GC Fix Task Creation** (when coverage below target):
```
TaskCreate({
  subject: "QAGEN-fix-<round>: Fix tests for <layer> (GC #<round>)",
  description: "PURPOSE: Fix failing tests and improve coverage | Success: Coverage meets target
TASK:
- Load execution results and failing test details
- Fix broken tests and add missing coverage
- Re-validate fixes
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <session>/.msg/meta.json
- Layer: <layer>
- Previous results: <session>/results/run-<layer>.json
EXPECTED: Fixed test files | Improved coverage
CONSTRAINTS: Only modify test files | No source changes
---
InnerLoop: false
RoleSpec: .claude/skills/team-quality-assurance/roles/generator/role.md"
})
TaskCreate({
  subject: "QARUN-gc-<round>: Re-execute <layer> (GC #<round>)",
  description: "PURPOSE: Re-execute tests after fixes | Success: Coverage >= target
TASK: Execute test suite, measure coverage, report results
CONTEXT:
- Session: <session-folder>
- Layer: <layer>
EXPECTED: <session>/results/run-<layer>-gc-<round>.json
CONSTRAINTS: Read-only execution
---
InnerLoop: false
RoleSpec: .claude/skills/team-quality-assurance/roles/executor/role.md",
  blockedBy: ["QAGEN-fix-<round>"]
})
```
6. -> handleSpawnNext
## handleCheck
Read-only status report, then STOP.
Output:
```
[coordinator] QA Pipeline Status
[coordinator] Mode: <pipeline_mode>
[coordinator] Progress: <done>/<total> (<pct>%)
[coordinator] GC Rounds: <gc_rounds>/3
[coordinator] Pipeline Graph:
  SCOUT-001: <done|run|wait> <summary>
  QASTRAT-001: <done|run|wait> <summary>
  QAGEN-001: <done|run|wait> <summary>
  QARUN-001: <done|run|wait> <summary>
  QAANA-001: <done|run|wait> <summary>
[coordinator] Active Workers: <list with elapsed time>
[coordinator] Ready: <pending tasks with resolved deps>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
Then STOP.
## handleResume
1. No active workers -> handleSpawnNext
2. Has active workers -> check each status:
   - completed -> mark done via TaskUpdate
   - in_progress -> still running
3. Some completed -> handleSpawnNext
4. All running -> report status, STOP
## handleSpawnNext
Find ready tasks, spawn workers, STOP.
1. Collect from TaskList():
   - completedSubjects: status = completed
   - inProgressSubjects: status = in_progress
   - readySubjects: status = pending AND all blockedBy in completedSubjects
2. No ready + work in progress -> report waiting, STOP
3. No ready + nothing in progress -> handleComplete
4. Has ready -> for each:
   a. Determine role from task prefix:
| Prefix | Role | inner_loop |
|--------|------|------------|
| SCOUT-* | scout | false |
| QASTRAT-* | strategist | false |
| QAGEN-* | generator | false |
| QARUN-* | executor | true |
| QAANA-* | analyst | false |
   b. If an inner-loop role already has an active worker -> skip (worker picks up the next task)
   c. TaskUpdate -> in_progress
   d. team_msg log -> task_unblocked
   e. Spawn team-worker:
```
Agent({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <subject>",
  team_name: "quality-assurance",
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-quality-assurance/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
team_name: quality-assurance
requirement: <task-description>
inner_loop: <true|false>
## Current Task
- Task ID: <task-id>
- Task: <subject>
Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role Phase 2-4 -> built-in Phase 5 (report).`
})
```
   f. Add to active_workers
5. Update session, output summary, STOP
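The ready-task scan can be sketched as follows (a minimal illustration; `blockedBy` entries are assumed to be matched against completed task subjects, per the collection rules in handleSpawnNext):

```javascript
// Classify the task list and decide the coordinator's next action.
function findReadyTasks(tasks) {
  const completed = new Set(
    tasks.filter(t => t.status === 'completed').map(t => t.subject)
  )
  const inProgress = tasks.filter(t => t.status === 'in_progress')
  const ready = tasks.filter(t =>
    t.status === 'pending' &&
    (t.blockedBy || []).every(dep => completed.has(dep))
  )
  if (ready.length === 0) {
    // Work still running -> wait for a callback; nothing left -> complete.
    return { action: inProgress.length > 0 ? 'wait' : 'complete', ready: [] }
  }
  return { action: 'spawn', ready }
}
```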
## handleComplete
Pipeline done. Generate report and completion action.
1. Verify all tasks (including GC fix/recheck tasks) have status "completed" or "deleted"
2. If any tasks incomplete -> return to handleSpawnNext
3. If all complete:
   - Read final state from meta.json (quality_score, coverage, gc_rounds)
   - Generate summary (deliverables, stats, discussions)
4. Read session.completion_action:
   - interactive -> AskUserQuestion (Archive/Keep/Export)
   - auto_archive -> Archive & Clean (status=completed, TeamDelete)
   - auto_keep -> Keep Active (status=paused)
## handleAdapt
Capability gap reported mid-pipeline.
1. Parse gap description
2. Check if an existing role covers it -> redirect
3. Role count < 6 -> generate dynamic role-spec in <session>/role-specs/
4. Create new task, spawn worker
5. Role count >= 6 -> merge or pause
## Fast-Advance Reconciliation
On every coordinator wake:
1. Read team_msg entries with type="fast_advance"
2. Sync active_workers with spawned successors
3. No duplicate spawns
## State Persistence
After every handler execution:
1. Reconcile active_workers with actual TaskList states
2. Remove entries for completed/deleted tasks
3. Write updated meta.json
4. STOP and wait for the next event
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| Pipeline stall (no ready, no running, has pending) | Check blockedBy chains, report to user |
| GC loop exceeded | Accept current coverage with warning, proceed |
| Scout finds 0 issues | Skip to testing mode, proceed to QASTRAT |

# Coordinator Role
Orchestrate team-quality-assurance: analyze -> dispatch -> spawn -> monitor -> report.
## Identity
- Name: coordinator | Tag: [coordinator]
- Responsibility: Parse requirements -> Mode selection -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- All output (SendMessage, team_msg, logs) must carry `[coordinator]` identifier
- Parse task description and detect QA mode
- Create team and spawn team-worker agents in background
- Dispatch tasks with proper dependency chains
- Monitor progress via callbacks and route messages
- Maintain session state
- Handle GC loop (generator-executor coverage cycles)
- Execute completion action when pipeline finishes
### MUST NOT
- Read source code or explore codebase (delegate to workers)
- Execute scan, test, or analysis work directly
- Modify test files or source code
- Spawn workers with general-purpose agent (MUST use team-worker)
- Generate more than 6 worker roles
## Command Execution Protocol
When coordinator needs to execute a specific phase:
1. Read `commands/<command>.md`
2. Follow the workflow defined in the command
3. Commands are inline execution guides, NOT separate agents
4. Execute synchronously, complete before proceeding
## Entry Router
When coordinator is invoked, detect invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains [scout], [strategist], [generator], [executor], [analyst] | -> handleCallback (monitor.md) |
| Status check | Args contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Args contain "resume" or "continue" | -> handleResume (monitor.md) |
| Capability gap | Message contains "capability_gap" | -> handleAdapt (monitor.md) |
| Pipeline complete | All tasks completed | -> handleComplete (monitor.md) |
| Interrupted session | Active session in .workflow/.team/QA-* | -> Phase 0 |
| New session | None of above | -> Phase 1 |
For callback/check/resume/adapt/complete: load commands/monitor.md, execute the matched handler, then STOP.
## Phase 0: Session Resume Check
1. Scan .workflow/.team/QA-*/session.json for active/paused sessions
2. No sessions -> Phase 1
3. Single session -> reconcile (audit TaskList, reset in_progress -> pending, rebuild team, kick first ready task)
4. Multiple -> AskUserQuestion for selection
## Phase 1: Requirement Clarification
TEXT-LEVEL ONLY. No source code reading.
1. Parse task description and extract flags
2. **QA Mode Selection**:
| Condition | Mode |
|-----------|------|
| Task description contains: test/coverage/TDD keywords | testing |
| No explicit flag and no keyword match | full (default) |
3. Clarify if ambiguous (AskUserQuestion: scope, deliverables, constraints)
4. Delegate to commands/analyze.md
5. Output: task-analysis.json
6. CRITICAL: Always proceed to Phase 2, never skip team workflow
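The mode-selection rules reduce to a small helper (an illustrative sketch; only the testing-keyword row is shown in the table above, so other keyword sets are analogous assumptions):

```javascript
// Resolve the QA mode from an explicit flag or keyword match, defaulting to 'full'.
function detectQaMode(description, explicitMode) {
  if (explicitMode) return explicitMode
  if (/\b(test|coverage|tdd)\b/i.test(description)) return 'testing'
  return 'full'
}
```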
## Phase 2: Create Team + Initialize Session
1. Generate session ID: QA-<slug>-<date>
2. Create session folder structure
3. TeamCreate with team name "quality-assurance"
4. Read specs/pipelines.md -> select pipeline based on mode
5. Register roles in session.json
6. Initialize shared infrastructure (wisdom/*.md)
7. Initialize pipeline via team_msg state_update:
```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: "<id>", from: "coordinator",
  type: "state_update", summary: "Session initialized",
  data: {
    pipeline_mode: "<discovery|testing|full>",
    pipeline_stages: [...],
    team_name: "quality-assurance",
    discovered_issues: [],
    test_strategy: {},
    generated_tests: {},
    execution_results: {},
    defect_patterns: [],
    coverage_history: [],
    quality_score: null
  }
})
```
8. Write session.json
## Phase 3: Create Task Chain
Delegate to commands/dispatch.md:
1. Read dependency graph from task-analysis.json
2. Read specs/pipelines.md for the selected pipeline's task registry
3. Topologically sort tasks
4. Create tasks via TaskCreate with blockedBy
5. Update session.json
## Phase 4: Spawn-and-Stop
Delegate to commands/monitor.md#handleSpawnNext:
1. Find ready tasks (pending + blockedBy resolved)
2. Spawn team-worker agents (see SKILL.md Spawn Template)
3. Output status summary
4. STOP
## Phase 5: Report + Completion Action
1. Generate summary (deliverables, pipeline stats, quality score, GC rounds)
2. Execute completion action per session.completion_action:
   - interactive -> AskUserQuestion (Archive/Keep/Export)
   - auto_archive -> Archive & Clean
   - auto_keep -> Keep Active
## Error Handling
| Error | Resolution |
|-------|------------|
| Task too vague | AskUserQuestion for clarification |
| Session corruption | Attempt recovery, fallback to manual |
| Worker crash | Reset task to pending, respawn |
| Dependency cycle | Detect in analysis, halt |
| Scout finds nothing | Skip to testing mode |
| Test environment broken | Notify user, suggest manual fix |
| GC loop stuck > 3 | Accept current coverage with warning |
| quality_score < 60 | Report with WARNING, suggest re-run |

---
role: executor
prefix: QARUN
inner_loop: true
additional_prefixes: [QARUN-gc]
message_types:
success: tests_passed
failure: tests_failed
coverage: coverage_report
error: error
---
# Test Executor
Run test suites, collect coverage data, and perform automatic fix cycles when tests fail. Implements the execution side of the Generator-Executor (GC) loop.
## Phase 2: Environment Detection
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/wisdom/.msg/meta.json | Yes |
| Test strategy | meta.json -> test_strategy | Yes |
| Generated tests | meta.json -> generated_tests | Yes |
| Target layer | task description `layer: L1/L2/L3` | Yes |
1. Extract session path and target layer from task description
2. Read .msg/meta.json for strategy and generated test file list
3. Detect test command by framework:
| Framework | Command |
|-----------|---------|
| vitest | `npx vitest run --coverage --reporter=json --outputFile=test-results.json` |
| jest | `npx jest --coverage --json --outputFile=test-results.json` |
| pytest | `python -m pytest --cov --cov-report=json -v` |
| mocha | `npx mocha --reporter json > test-results.json` |
| unknown | `npm test -- --coverage` |
4. Get test files from `generated_tests[targetLayer].files`
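The framework table above reduces to a simple lookup with a safe fallback (a sketch; the commands mirror the table rather than define new ones):

```javascript
// Framework -> test command lookup; unknown frameworks fall back to npm test.
const TEST_COMMANDS = {
  vitest: 'npx vitest run --coverage --reporter=json --outputFile=test-results.json',
  jest: 'npx jest --coverage --json --outputFile=test-results.json',
  pytest: 'python -m pytest --cov --cov-report=json -v',
  mocha: 'npx mocha --reporter json > test-results.json'
}
const testCommand = (framework) => TEST_COMMANDS[framework] || 'npm test -- --coverage'
```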
## Phase 3: Iterative Test-Fix Cycle
**Max iterations**: 5. **Pass threshold**: 95% or all tests pass.
Per iteration:
1. Run test command, capture output
2. Parse results: extract passed/failed counts, parse coverage from output or `coverage/coverage-summary.json`
3. If all pass (0 failures) -> exit loop (success)
4. If pass rate >= 95% and iteration >= 2 -> exit loop (good enough)
5. If iteration >= MAX -> exit loop (report current state)
6. Extract failure details (error lines, assertion failures)
7. Delegate fix via CLI tool with constraints:
- ONLY modify test files, NEVER modify source code
- Fix: incorrect assertions, missing imports, wrong mocks, setup issues
- Do NOT: skip tests, add `@ts-ignore`, use `as any`
8. Increment iteration, repeat
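The iteration policy above can be sketched as follows (an illustrative sketch; `runTests`, `parseResults`, and `delegateFix` are hypothetical stand-ins for the Bash test command and the CLI fix delegation described in the steps):

```javascript
// Run-fix loop: exit on full pass, on "good enough" (>= 95% after 2 runs),
// or after maxIterations attempts.
function testFixCycle(runTests, parseResults, delegateFix, maxIterations = 5) {
  let result = null
  for (let i = 1; i <= maxIterations; i++) {
    const output = runTests()
    result = parseResults(output)   // { passed, failed, passRate, coverage }
    if (result.failed === 0) return { ...result, iterations: i, allPassed: true }
    if (result.passRate >= 0.95 && i >= 2) return { ...result, iterations: i, allPassed: false }
    if (i === maxIterations) break
    delegateFix(result)             // constrained: may only modify test files, never source
  }
  return { ...result, iterations: maxIterations, allPassed: false }
}
```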
## Phase 4: Result Analysis & Output
1. Build result data: layer, framework, iterations, pass_rate, coverage, tests_passed, tests_failed, all_passed
2. Save results to `<session>/results/run-<layer>.json`
3. Save last test output to `<session>/results/output-<layer>.txt`
4. Update `<session>/wisdom/.msg/meta.json` under `execution_results[layer]` and top-level `execution_results.pass_rate`, `execution_results.coverage`
5. Message type: `tests_passed` if all_passed, else `tests_failed`

---
role: generator
prefix: QAGEN
inner_loop: false
additional_prefixes: [QAGEN-fix]
message_types:
success: tests_generated
revised: tests_revised
error: error
---
# Test Generator
Generate test code according to the strategist's strategy and layer plan. Supports L1 unit tests, L2 integration tests, and L3 E2E tests, following the project's existing test patterns and framework conventions.
## Phase 2: Strategy & Pattern Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/wisdom/.msg/meta.json | Yes |
| Test strategy | meta.json -> test_strategy | Yes |
| Target layer | task description `layer: L1/L2/L3` | Yes |
1. Extract session path and target layer from task description
2. Read .msg/meta.json for test strategy (layers, coverage targets)
3. Determine if this is a GC fix task (subject contains "fix")
4. Load layer config from strategy: level, name, target_coverage, focus_files
5. Learn existing test patterns -- find 3 similar test files via Glob(`**/*.{test,spec}.{ts,tsx,js,jsx}`)
6. Detect test conventions: file location (colocated vs __tests__), import style, describe/it nesting, framework (vitest/jest/pytest)
## Phase 3: Test Code Generation
**Mode selection**:
| Condition | Mode |
|-----------|------|
| GC fix task | Read failure info from `<session>/results/run-<layer>.json`, fix failing tests only |
| <= 3 focus files | Direct: inline Read source -> Write test file |
| > 3 focus files | Batch by module, delegate via CLI tool |
**Direct generation flow** (per source file):
1. Read source file content, extract exports
2. Determine test file path following project conventions
3. If test exists -> analyze missing cases -> append new tests via Edit
4. If no test -> generate full test file via Write
5. Include: happy path, edge cases, error cases per export
**GC fix flow**:
1. Read execution results and failure output from results directory
2. Read each failing test file
3. Fix assertions, imports, mocks, or test setup
4. Do NOT modify source code, do NOT skip/ignore tests
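Step 1 of the GC fix flow might look like the sketch below. The `run-<layer>.json` schema shown here is an assumption for illustration; the executor's actual output format governs:

```python
import json

def load_failures(results_path: str) -> list[dict]:
    """Extract failing test entries from an executor results file.

    Assumes a schema like {"tests": [{"file", "status", "message"}, ...]};
    the real executor output may differ.
    """
    with open(results_path) as f:
        report = json.load(f)
    # Keep only failures, with the message the generator needs to fix them
    return [
        {"file": t["file"], "message": t.get("message", "")}
        for t in report.get("tests", [])
        if t.get("status") == "failed"
    ]
```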
**General rules**:
- Follow existing test patterns exactly (imports, naming, structure)
- Target coverage per layer config
- Do NOT use `any` type assertions or `@ts-ignore`
## Phase 4: Self-Validation & Output
1. Collect generated/modified test files
2. Run syntax check (TypeScript: `tsc --noEmit`, or framework-specific)
3. Auto-fix syntax errors (max 3 attempts)
4. Write test metadata to `<session>/wisdom/.msg/meta.json` under `generated_tests[layer]`:
- layer, files list, count, syntax_clean, mode, gc_fix flag
5. Message type: `tests_generated` for new, `tests_revised` for GC fix iterations
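The metadata write in step 4 should merge into `meta.json` rather than overwrite it, since other roles share the file. A minimal sketch (field names follow this doc; the merge strategy is illustrative):

```python
import json
from pathlib import Path

def record_generated_tests(meta_path: str, layer: str, files: list[str],
                           syntax_clean: bool, gc_fix: bool) -> dict:
    """Merge this run's test metadata into .msg/meta.json without
    clobbering other layers or unrelated fields."""
    path = Path(meta_path)
    meta = json.loads(path.read_text()) if path.exists() else {}
    meta.setdefault("generated_tests", {})[layer] = {
        "layer": layer,
        "files": files,
        "count": len(files),
        "syntax_clean": syntax_clean,
        "mode": "gc_fix" if gc_fix else "generate",
        "gc_fix": gc_fix,
    }
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(meta, indent=2))
    return meta
```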

---
role: scout
prefix: SCOUT
inner_loop: false
message_types:
success: scan_ready
error: error
issues: issues_found
---
# Multi-Perspective Scout
Scan the codebase from multiple perspectives (bug, security, test-coverage, code-quality, UX) to discover potential issues. Produce structured scan results with severity-ranked findings.
## Phase 2: Context & Scope Assessment
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/wisdom/.msg/meta.json | No |
1. Extract session path and target scope from task description
2. Determine scan scope: explicit scope from task or `**/*` default
3. Get recent changed files: `git diff --name-only HEAD~5 2>/dev/null || echo ""`
4. Read .msg/meta.json for historical defect patterns (`defect_patterns`)
5. Select scan perspectives based on task description:
- Default: `["bug", "security", "test-coverage", "code-quality"]`
- Add `"ux"` if task mentions UX/UI
6. Assess complexity to determine scan strategy:
| Complexity | Condition | Strategy |
|------------|-----------|----------|
| Low | < 5 changed files, no specific keywords | ACE search + Grep inline |
| Medium | 5-15 files or specific perspective requested | CLI fan-out (3 core perspectives) |
| High | > 15 files or full-project scan | CLI fan-out (all perspectives) |
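The complexity table above maps directly to a small decision function. A sketch (strategy names are illustrative labels, not CLI flags):

```python
def scan_strategy(changed_files: list[str], full_scan: bool = False,
                  requested_perspective: str = "") -> str:
    """Map scope signals to a scan strategy per the complexity table."""
    if full_scan or len(changed_files) > 15:
        return "cli-fanout-all"       # High: all perspectives via CLI
    if len(changed_files) >= 5 or requested_perspective:
        return "cli-fanout-core"      # Medium: 3 core perspectives via CLI
    return "inline"                   # Low: ACE search + Grep inline
```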
## Phase 3: Multi-Perspective Scan
**Low complexity**: Use `mcp__ace-tool__search_context` for quick pattern-based scan.
**Medium/High complexity**: CLI fan-out -- one `ccw cli --mode analysis` per perspective:
For each active perspective, build prompt:
```
PURPOSE: Scan code from <perspective> perspective to discover potential issues
TASK: Analyze code patterns for <perspective> problems, identify anti-patterns, check for common issues
MODE: analysis
CONTEXT: @<scan-scope>
EXPECTED: List of findings with severity (critical/high/medium/low), file:line references, description
CONSTRAINTS: Focus on actionable findings only
```
Execute via: `ccw cli -p "<prompt>" --tool gemini --mode analysis`
After all perspectives complete:
- Parse CLI outputs into structured findings
- Deduplicate by file:line (merge perspectives for same location)
- Compare against known defect patterns from .msg/meta.json
- Rank by severity: critical > high > medium > low
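The dedup-and-rank steps can be sketched as follows. Merging keeps the highest severity seen at a `file:line` and the union of perspectives that flagged it; the dict shapes are assumptions for illustration:

```python
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def dedup_and_rank(findings: list[dict]) -> list[dict]:
    """Merge findings at the same file:line, then sort by severity."""
    merged: dict[tuple, dict] = {}
    for f in findings:
        key = (f["file"], f["line"])
        if key not in merged:
            merged[key] = {**f, "perspectives": {f["perspective"]}}
        else:
            m = merged[key]
            m["perspectives"].add(f["perspective"])
            # Keep the most severe classification for the merged finding
            if SEVERITY_ORDER[f["severity"]] < SEVERITY_ORDER[m["severity"]]:
                m["severity"] = f["severity"]
    ranked = sorted(merged.values(), key=lambda m: SEVERITY_ORDER[m["severity"]])
    for m in ranked:
        m["perspectives"] = sorted(m["perspectives"])
        m.pop("perspective", None)  # superseded by the merged set
    return ranked
```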
## Phase 4: Result Aggregation
1. Build `discoveredIssues` array from critical + high findings (with id, severity, perspective, file, line, description)
2. Write scan results to `<session>/scan/scan-results.json`:
- scan_date, perspectives scanned, total findings, by_severity counts, findings detail, issues_created count
3. Update `<session>/wisdom/.msg/meta.json`: merge `discovered_issues` field
4. Contribute to wisdom/issues.md if new patterns found

---
role: strategist
prefix: QASTRAT
inner_loop: false
message_types:
success: strategy_ready
error: error
---
# Test Strategist
Analyze change scope, determine test layers (L1-L3), define coverage targets, and generate a test strategy document. Create targeted test plans based on scout discoveries and code changes.
## Phase 2: Context & Change Analysis
| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | <session>/wisdom/.msg/meta.json | Yes |
| Discovered issues | meta.json -> discovered_issues | No |
| Defect patterns | meta.json -> defect_patterns | No |
1. Extract session path from task description
2. Read .msg/meta.json for scout discoveries and historical patterns
3. Analyze change scope: `git diff --name-only HEAD~5`
4. Categorize changed files:
| Category | Pattern |
|----------|---------|
| Source | `\.(ts|tsx|js|jsx|py|java|go|rs)$` |
| Test | `\.(test|spec)\.(ts|tsx|js|jsx)$` or `test_` |
| Config | `\.(json|yaml|yml|toml|env)$` |
5. Detect test framework from package.json / project files
6. Check existing coverage baseline from `coverage/coverage-summary.json`
7. Select analysis mode:
| Total scope (changed files + issues) | Mode |
|--------------------------------------|------|
| <= 5 | Direct inline analysis |
| 6-15 | Single CLI analysis |
| > 15 | Multi-dimension CLI analysis |
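The file categorization in step 4 can be sketched with ordered regex checks. Order matters: test patterns are checked before source so `*.test.ts` is not misfiled as source (the patterns are taken from the table above):

```python
import re

CATEGORY_PATTERNS = [
    ("test", re.compile(r"(\.(test|spec)\.(ts|tsx|js|jsx)$)|(^|/)test_")),
    ("source", re.compile(r"\.(ts|tsx|js|jsx|py|java|go|rs)$")),
    ("config", re.compile(r"\.(json|yaml|yml|toml|env)$")),
]

def categorize(path: str) -> str:
    """Classify a changed file; first matching pattern wins."""
    for name, pattern in CATEGORY_PATTERNS:
        if pattern.search(path):
            return name
    return "other"
```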
## Phase 3: Strategy Generation
**Layer Selection Logic**:
| Condition | Layer | Target |
|-----------|-------|--------|
| Has source file changes | L1: Unit Tests | 80% |
| >= 3 source files OR critical issues | L2: Integration Tests | 60% |
| >= 3 critical/high severity issues | L3: E2E Tests | 40% |
| No changes but has scout issues | L1 focused on issue files | 80% |
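The layer-selection table translates to a small rule chain. A sketch with the doc's thresholds (issue/layer dict shapes are illustrative):

```python
def select_layers(source_files: list[str], issues: list[dict]) -> list[dict]:
    """Apply the layer-selection rules to changed sources and scout issues."""
    critical = [i for i in issues if i.get("severity") == "critical"]
    crit_or_high = [i for i in issues if i.get("severity") in ("critical", "high")]
    layers = []
    if source_files:
        layers.append({"level": "L1", "target_coverage": 80,
                       "focus_files": source_files})
    elif issues:
        # No changes but scout issues: L1 focused on the flagged files
        layers.append({"level": "L1", "target_coverage": 80,
                       "focus_files": sorted({i["file"] for i in issues})})
    if len(source_files) >= 3 or critical:
        layers.append({"level": "L2", "target_coverage": 60})
    if len(crit_or_high) >= 3:
        layers.append({"level": "L3", "target_coverage": 40})
    return layers
```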
For CLI-assisted analysis, use:
```
PURPOSE: Analyze code changes and scout findings to determine optimal test strategy
TASK: Classify changed files by risk, map issues to test requirements, identify integration points, recommend test layers with coverage targets
MODE: analysis
```
Build strategy document with: scope analysis, layer configs (level, name, target_coverage, focus_files, rationale), priority issues list.
**Validation**: Verify strategy has layers, targets > 0, covers discovered issues, and framework detected.
## Phase 4: Output & Persistence
1. Write strategy to `<session>/strategy/test-strategy.md`
2. Update `<session>/wisdom/.msg/meta.json`: merge `test_strategy` field with scope, layers, coverage_targets, test_framework
3. Contribute to wisdom/decisions.md with layer selection rationale

# QA Pipelines
Pipeline definitions and task registry for team-quality-assurance.
## Pipeline Modes
| Mode | Description | Entry Role |
|------|-------------|------------|
| discovery | Scout-first: issue discovery then testing | scout |
| testing | Skip scout, direct test pipeline | strategist |
| full | Complete QA closed loop + regression scan | scout |
## Pipeline Definitions
### Discovery Mode (5 tasks, serial)
```
SCOUT-001 -> QASTRAT-001 -> QAGEN-001 -> QARUN-001 -> QAANA-001
```
| Task ID | Role | Dependencies | Description |
|---------|------|-------------|-------------|
| SCOUT-001 | scout | (none) | Multi-perspective issue scanning |
| QASTRAT-001 | strategist | SCOUT-001 | Change scope analysis + test strategy |
| QAGEN-001 | generator | QASTRAT-001 | L1 unit test generation |
| QARUN-001 | executor | QAGEN-001 | L1 test execution + fix cycles |
| QAANA-001 | analyst | QARUN-001 | Defect pattern analysis + quality report |
### Testing Mode (6 tasks, progressive layers)
```
QASTRAT-001 -> QAGEN-L1-001 -> QARUN-L1-001 -> QAGEN-L2-001 -> QARUN-L2-001 -> QAANA-001
```
| Task ID | Role | Dependencies | Layer | Description |
|---------|------|-------------|-------|-------------|
| QASTRAT-001 | strategist | (none) | — | Test strategy formulation |
| QAGEN-L1-001 | generator | QASTRAT-001 | L1 | L1 unit test generation |
| QARUN-L1-001 | executor | QAGEN-L1-001 | L1 | L1 test execution + fix cycles |
| QAGEN-L2-001 | generator | QARUN-L1-001 | L2 | L2 integration test generation |
| QARUN-L2-001 | executor | QAGEN-L2-001 | L2 | L2 test execution + fix cycles |
| QAANA-001 | analyst | QARUN-L2-001 | — | Quality analysis report |
### Full Mode (8 tasks, parallel windows + regression)
```
SCOUT-001 -> QASTRAT-001 -> [QAGEN-L1-001 || QAGEN-L2-001] -> [QARUN-L1-001 || QARUN-L2-001] -> QAANA-001 -> SCOUT-002
```
| Task ID | Role | Dependencies | Layer | Description |
|---------|------|-------------|-------|-------------|
| SCOUT-001 | scout | (none) | — | Multi-perspective issue scanning |
| QASTRAT-001 | strategist | SCOUT-001 | — | Test strategy formulation |
| QAGEN-L1-001 | generator-1 | QASTRAT-001 | L1 | L1 unit test generation (parallel) |
| QAGEN-L2-001 | generator-2 | QASTRAT-001 | L2 | L2 integration test generation (parallel) |
| QARUN-L1-001 | executor-1 | QAGEN-L1-001 | L1 | L1 test execution + fix cycles (parallel) |
| QARUN-L2-001 | executor-2 | QAGEN-L2-001 | L2 | L2 test execution + fix cycles (parallel) |
| QAANA-001 | analyst | QARUN-L1-001, QARUN-L2-001 | — | Quality analysis report |
| SCOUT-002 | scout | QAANA-001 | — | Regression scan after fixes |
## GC Loop
The generator and executor iterate per test layer until coverage targets are met:
```
QAGEN -> QARUN -> (if coverage < target) -> QAGEN-fix -> QARUN-gc
                  (if coverage >= target) -> next layer or QAANA
```
- Max iterations: 3 per layer
- After 3 iterations: accept current coverage with warning
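The loop's termination rule can be sketched as a single decision function (the return labels are illustrative, not message types from this skill):

```python
MAX_GC_ITERATIONS = 3

def next_action(coverage: float, target: float, iteration: int) -> str:
    """Decide the next pipeline step for one test layer.

    Returns "advance" (target met), "gc_fix" (iterate generator/executor),
    or "accept_with_warning" (iteration budget exhausted).
    """
    if coverage >= target:
        return "advance"
    if iteration >= MAX_GC_ITERATIONS:
        return "accept_with_warning"
    return "gc_fix"
```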
## Coverage Targets
| Layer | Name | Default Target |
|-------|------|----------------|
| L1 | Unit Tests | 80% |
| L2 | Integration Tests | 60% |
| L3 | E2E Tests | 40% |
## Scan Perspectives
| Perspective | Focus |
|-------------|-------|
| bug | Logic errors, crash paths, null references |
| security | Vulnerabilities, auth bypass, data exposure |
| test-coverage | Untested code paths, missing assertions |
| code-quality | Anti-patterns, complexity, maintainability |
| ux | User-facing issues, accessibility (optional) |
## Session Directory
```
.workflow/.team/QA-<slug>-<YYYY-MM-DD>/
├── .msg/messages.jsonl # Message bus log
├── .msg/meta.json # Session state + cross-role state
├── wisdom/ # Cross-task knowledge
│ ├── learnings.md
│ ├── decisions.md
│ ├── conventions.md
│ └── issues.md
├── scan/ # Scout output
│ └── scan-results.json
├── strategy/ # Strategist output
│ └── test-strategy.md
├── tests/ # Generator output
│ ├── L1-unit/
│ ├── L2-integration/
│ └── L3-e2e/
├── results/ # Executor output
│ ├── run-001.json
│ └── coverage-001.json
└── analysis/ # Analyst output
└── quality-report.md
```