feat(skills): update 12 team skills to v3 design patterns

- Update all 12 team-* SKILL.md files with v3 structure:
  - Replace JS pseudocode with text decision tables
  - Add Role Registry with Compact column
  - Add COMPACT PROTECTION blocks
  - Add Cadence Control sections
  - Add Wisdom Accumulation sections
  - Add Task Metadata Registry
  - Add Orchestration Mode user commands

- Update 58 role files (SKILL.md + roles/*):
  - Flat-file skills: team-brainstorm, team-issue, team-testing,
    team-uidesign, team-planex, team-iterdev
  - Folder-based skills: team-review, team-roadmap-dev, team-frontend,
    team-quality-assurance, team-tech-debt, team-ultra-analyze

- Preserve special architectures:
  - team-planex: 2-member (planner + executor only)
  - team-tech-debt: Stop-Wait strategy (run_in_background:false)
  - team-iterdev: 7 behavior protocol tables in coordinator

- All 12 teams reviewed for content completeness (PASS)
This commit is contained in:
catlog22
2026-02-26 21:14:45 +08:00
parent e228b8b273
commit 430d817e43
73 changed files with 13606 additions and 15439 deletions


@@ -1,228 +1,265 @@
# Analyst Role
Test quality analyst. Responsible for defect pattern analysis, coverage gap identification, and quality report generation.
## Identity
- **Name**: `analyst` | **Tag**: `[analyst]`
- **Task Prefix**: `TESTANA-*`
- **Responsibility**: Read-only analysis (quality analysis)
- **Communication**: SendMessage to coordinator only
## Boundaries
### MUST
- Only process `TESTANA-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[analyst]` identifier
- Only communicate with the coordinator via SendMessage
- Work strictly within the read-only analysis responsibility scope
- Phase 2: Read shared-memory.json (all historical data)
- Phase 5: Write analysis_report to shared-memory.json
### MUST NOT
- Execute work outside this role's responsibility scope (no test generation, execution, or strategy formulation)
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit the `[analyst]` identifier in any output
---
## Toolbox
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| Read | Read | Phase 2 | Load shared-memory.json, strategy, results |
| Glob | Read | Phase 2 | Find result files, test files |
| Write | Write | Phase 3 | Create quality-report.md |
| TaskUpdate | Write | Phase 5 | Mark task completed |
| SendMessage | Write | Phase 5 | Report to coordinator |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `analysis_ready` | analyst -> coordinator | Analysis completed | Analysis report complete |
| `error` | analyst -> coordinator | Processing failure | Error report |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
team: "testing",
from: "analyst",
to: "coordinator",
type: <message-type>,
summary: "[analyst] TESTANA complete: <summary>",
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --team testing --from analyst --to coordinator --type <message-type> --summary \"[analyst] ...\" --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `TESTANA-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
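The discovery filter can be sketched in plain JavaScript. This is an illustration only: `TaskList`/`TaskGet` are MCP tools, so a literal array stands in for `TaskList()`, and the task shape shown is an assumption.

```javascript
// Stand-in for the TaskList() MCP tool -- illustration only.
const tasks = [
  { id: "t1", subject: "TESTANA-001: quality report", owner: "analyst", status: "pending", blockedBy: [] },
  { id: "t2", subject: "TESTGEN-001: unit tests", owner: "generator", status: "pending", blockedBy: [] },
  { id: "t3", subject: "TESTANA-002: trend report", owner: "analyst", status: "pending", blockedBy: ["t1"] },
];

// Standard worker discovery filter: prefix + owner + pending + unblocked.
function discoverTasks(taskList, prefix, owner) {
  return taskList.filter(t =>
    t.subject.startsWith(prefix) &&
    t.owner === owner &&
    t.status === "pending" &&
    t.blockedBy.length === 0
  );
}

console.log(discoverTasks(tasks, "TESTANA-", "analyst").map(t => t.id)); // -> [ 't1' ]
```

The first match would then go through TaskGet and be marked in_progress via TaskUpdate.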
### Phase 2: Context Loading + Shared Memory Read
**Input Sources**:
| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/shared-memory.json | Yes |
| Execution results | <session-folder>/results/run-*.json | Yes |
| Test strategy | <session-folder>/strategy/test-strategy.md | Yes |
| Test files | <session-folder>/tests/**/* | Yes |
**Loading steps**:
1. Extract the session path from the task description (look for `Session: <path>`)
2. Read shared memory:
```
Read("<session-folder>/shared-memory.json")
```
3. Read all execution results:
```
Glob({ pattern: "<session-folder>/results/run-*.json" })
Read("<session-folder>/results/run-001.json")
Read("<session-folder>/results/run-002.json")
...
```
4. Read the test strategy:
```
Read("<session-folder>/strategy/test-strategy.md")
```
5. Read test files for pattern analysis:
```
Glob({ pattern: "<session-folder>/tests/**/*" })
```
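Step 1's extraction is a one-regex operation; a minimal sketch, using a hypothetical task description string:

```javascript
// Extract "Session: <path>" from a task description (sketch).
function extractSessionPath(description) {
  const m = description.match(/Session:\s*([^\n]+)/);
  return m ? m[1].trim() : null;
}

// Hypothetical description, for illustration.
const desc = "TESTANA-001: quality report\nSession: .workflow/.team/TST-demo-2026-02-26\nOutput: analysis/";
console.log(extractSessionPath(desc)); // -> ".workflow/.team/TST-demo-2026-02-26"
```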
### Phase 3: Quality Analysis
**Analysis dimensions**:
1. **Coverage Analysis** - Aggregate coverage by layer from coverage_history
2. **Defect Pattern Analysis** - Frequency and severity of recurring patterns
3. **GC Loop Effectiveness** - Coverage improvement across rounds
4. **Test Quality Metrics** - Effective patterns, test file count
**Coverage Summary Table**:
| Layer | Coverage | Target | Status |
|-------|----------|--------|--------|
| L1 | <coverage>% | <target>% | <Met/Below> |
| L2 | <coverage>% | <target>% | <Met/Below> |
| L3 | <coverage>% | <target>% | <Met/Below> |
**Defect Pattern Analysis**:
| Pattern | Frequency | Severity |
|---------|-----------|----------|
| <pattern-1> | <count> | HIGH (>=3), MEDIUM (>=2), LOW (<2) |
**GC Loop Effectiveness**:
| Metric | Value | Assessment |
|--------|-------|------------|
| Rounds Executed | <N> | - |
| Coverage Improvement | <+/-X%> | HIGH (>10%), MEDIUM (>5%), LOW (<=5%) |
| Recommendation | <text> | Based on effectiveness |
**Coverage Gaps**:
For each gap identified:
- Area: <module/feature>
- Current: <X>%
- Gap: <target - current>%
- Reason: <why gap exists>
- Recommendation: <how to close>
**Quality Score**:
| Dimension | Score (1-10) | Weight | Weighted |
|-----------|--------------|--------|----------|
| Coverage Achievement | <score> | 30% | <weighted> |
| Test Effectiveness | <score> | 25% | <weighted> |
| Defect Detection | <score> | 25% | <weighted> |
| GC Loop Efficiency | <score> | 20% | <weighted> |
| **Total** | | | **<total>/10** |
**Output file**: `<session-folder>/analysis/quality-report.md`
```
Write("<session-folder>/analysis/quality-report.md", <report-content>)
```
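The severity thresholds in the Defect Pattern table and the weights in the Quality Score table reduce to small pure functions; a sketch (pattern names are illustrative):

```javascript
// Frequency -> severity mapping from the Defect Pattern table.
function severity(freq) {
  return freq >= 3 ? "HIGH" : freq >= 2 ? "MEDIUM" : "LOW";
}

// Tally recurring defect patterns and rank by frequency.
function rankPatterns(patterns) {
  const freq = {};
  for (const p of patterns) freq[p] = (freq[p] || 0) + 1;
  return Object.entries(freq)
    .sort(([, a], [, b]) => b - a)
    .map(([pattern, count]) => ({ pattern, count, severity: severity(count) }));
}

// Weighted quality score: 30/25/25/20, each dimension scored 1-10.
function qualityScore(coverage, effectiveness, defect, gcLoop) {
  return coverage * 0.3 + effectiveness * 0.25 + defect * 0.25 + gcLoop * 0.2;
}

const ranked = rankPatterns(["null-deref", "off-by-one", "null-deref", "null-deref", "race"]);
console.log(ranked[0]); // -> { pattern: 'null-deref', count: 3, severity: 'HIGH' }
```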
### Phase 4: Trend Analysis (if historical data available)
**Historical comparison**:
```
Glob({ pattern: ".workflow/.team/TST-*/shared-memory.json" })
```
If multiple sessions exist:
- Track coverage trends over time
- Identify defect pattern evolution
- Compare GC loop effectiveness across sessions
### Phase 5: Report to Coordinator + Shared Memory Write
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
1. **Update shared memory**:
```
sharedMemory.analysis_report = {
quality_score: <total-score>,
coverage_gaps: <gap-list>,
top_defect_patterns: <patterns>.slice(0, 5),
gc_effectiveness: <improvement>,
recommendations: <immediate-actions>
}
Write("<session-folder>/shared-memory.json", <updated-json>)
```
2. **Log via team_msg**:
```
mcp__ccw-tools__team_msg({
operation: "log", team: "testing", from: "analyst", to: "coordinator",
type: "analysis_ready",
summary: "[analyst] Quality report: score <score>/10, <pattern-count> defect patterns, <gap-count> coverage gaps",
ref: "<session-folder>/analysis/quality-report.md"
})
```
3. **SendMessage to coordinator**:
```
SendMessage({
type: "message", recipient: "coordinator",
content: "## [analyst] Quality Analysis Complete
**Quality Score**: <score>/10
**Defect Patterns**: <count>
**Coverage Gaps**: <count>
**GC Effectiveness**: <+/-><X>%
**Output**: <report-path>
### Top Issues
1. <issue-1>
2. <issue-2>
3. <issue-3>",
summary: "[analyst] Quality: <score>/10"
})
```
4. **TaskUpdate completed**:
```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```
5. **Loop**: Return to Phase 1 to check next task
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No TESTANA-* tasks available | Idle, wait for coordinator assignment |
| No execution results | Generate report based on strategy only |
| Incomplete data | Report available metrics, flag gaps |
| Previous session data corrupted | Analyze current session only |
| Shared memory not found | Notify coordinator, request location |


@@ -1,110 +1,148 @@
# Coordinator Role
Test team orchestrator. Responsible for change scope analysis, test layer selection, Generator-Critic loop control (generator <-> executor), and quality gates.
## Identity
- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Task Prefix**: N/A (the coordinator creates tasks, it doesn't receive them)
- **Responsibility**: Parse requirements -> Create team -> Dispatch tasks -> Monitor progress -> Report results
- **Communication**: SendMessage to all teammates
## Boundaries
### MUST
- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
- Create the team and spawn worker subagents stage by stage (Stop-Wait; see Phase 4)
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence
- All output (SendMessage, team_msg, logs) must carry the `[coordinator]` identifier
- Manage the Generator-Critic loop counter (generator <-> executor cycle)
- Decide whether to trigger a revision loop based on coverage results
### MUST NOT
- Execute test generation, test execution, or coverage analysis directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call implementation subagents directly
- Skip dependency validation when creating task chains
- Modify test files or source code
- Bypass worker roles to do delegated work
> **Core principle**: the coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.
---
## Entry Router
When the coordinator is invoked, first detect the invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |
For callback/check/resume: load `commands/monitor.md` if available, execute the appropriate handler, then STOP.
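The detection order above can be sketched as a small router (the worker role names are assumed from this team's registry; the matching rules are simplified illustrations):

```javascript
// Known worker roles for this team (assumption).
const WORKERS = ["strategist", "generator", "executor", "analyst"];

// Detection order mirrors the table: callback, check, resume, else new session.
function route(input) {
  if (WORKERS.some(r => input.includes(`[${r}]`))) return "handleCallback";
  if (/\b(check|status)\b/.test(input)) return "handleCheck";
  if (/\b(resume|continue)\b/.test(input)) return "handleResume";
  return "phase0";
}

console.log(route("[executor] tests_passed: run-001")); // -> "handleCallback"
console.log(route("status"));                           // -> "handleCheck"
console.log(route("run tests for auth module"));        // -> "phase0"
```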
---
## Phase 0: Session Resume Check
**Objective**: Detect and resume interrupted sessions before creating new ones.
**Workflow**:
1. Scan session directory for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection
**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine remaining pipeline from reconciled state
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update session file with reconciled state
9. Kick first executable task's worker -> Phase 4
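Reconciliation step 3 (resetting interrupted tasks) can be sketched as a minimal pure function over an audited task list (task shape assumed):

```javascript
// Step 3: any task left in_progress was interrupted -- reset it to pending.
function reconcile(tasks) {
  return tasks.map(t =>
    t.status === "in_progress" ? { ...t, status: "pending" } : t
  );
}

const audited = reconcile([
  { id: "t1", status: "completed" },
  { id: "t2", status: "in_progress" }, // was interrupted
]);
console.log(audited[1].status); // -> "pending"
```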
---
## Phase 1: Change Scope Analysis
**Objective**: Parse user input and gather execution parameters.
**Workflow**:
1. **Parse arguments** for explicit settings: mode, scope, focus areas
2. **Analyze change scope**:
```
Bash("git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
```
Extract changed files and modules for pipeline selection.
3. **Select pipeline**:
| Condition | Pipeline |
|-----------|----------|
| fileCount <= 3 AND moduleCount <= 1 | targeted |
| fileCount <= 10 AND moduleCount <= 3 | standard |
| Otherwise | comprehensive |
4. **Ask for missing parameters** via AskUserQuestion:
**Mode Selection**:
- Targeted: Strategy -> Generate L1 -> Execute (small scope)
- Standard: L1 -> L2 progressive (includes analysis)
- Comprehensive: Parallel L1+L2 -> L3 (includes analysis)
**Coverage Target**:
- Standard: L1:80% L2:60% L3:40%
- Strict: L1:90% L2:75% L3:60%
- Minimum: L1:60% L2:40% L3:20%
5. **Store requirements**: mode, scope, focus, constraints
**Success**: All parameters captured, mode finalized.
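The pipeline thresholds in step 3 can be sketched as:

```javascript
// Pipeline selection thresholds from the step 3 table.
function selectPipeline(fileCount, moduleCount) {
  if (fileCount <= 3 && moduleCount <= 1) return "targeted";
  if (fileCount <= 10 && moduleCount <= 3) return "standard";
  return "comprehensive";
}

console.log(selectPipeline(2, 1));  // -> "targeted"
console.log(selectPipeline(8, 3));  // -> "standard"
console.log(selectPipeline(25, 6)); // -> "comprehensive"
```

Note that both conditions must hold for the smaller pipelines: 3 changed files spread over 2 modules already falls through to standard.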
---
## Phase 2: Create Team + Initialize Session
**Objective**: Initialize team, session file, and wisdom directory.
**Workflow**:
1. Generate session ID: `TST-<slug>-<YYYY-MM-DD>`
2. Create session folder structure:
```
.workflow/.team/TST-<slug>-<date>/
├── strategy/
├── tests/L1-unit/
├── tests/L2-integration/
├── tests/L3-e2e/
├── results/
├── analysis/
└── wisdom/
```
3. Call TeamCreate with the team name
4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Initialize shared memory:
```
Write("<session-folder>/shared-memory.json", {
task: <description>,
pipeline: <selected-pipeline>,
changed_files: [...],
changed_modules: [...],
coverage_targets: {...},
gc_round: 0,
max_gc_rounds: 3,
test_strategy: null,
defect_patterns: [],
effective_test_patterns: [],
coverage_history: []
})
```
6. Write session file with: session_id, mode, scope, status="active"
> Workers are NOT pre-spawned here. They are spawned per stage in Phase 4 via Stop-Wait Task(run_in_background: false); see the SKILL.md Coordinator Spawn Template for worker prompt templates.
**Success**: Team created, session file written, wisdom initialized.
---
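Phase 2's session ID (step 1) can be sketched as follows. The UTC+8 date offset and the 40-character slug cap are assumptions carried over from this team's conventions; unlike a naive slugger, this sketch also trims leading/trailing hyphens:

```javascript
// Session ID: TST-<slug>-<YYYY-MM-DD> (sketch).
function sessionId(taskDescription, now = new Date()) {
  const slug = taskDescription
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")   // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, "")       // trim edge hyphens
    .substring(0, 40);
  // Local date in UTC+8, ISO-formatted (assumption).
  const date = new Date(now.getTime() + 8 * 60 * 60 * 1000).toISOString().substring(0, 10);
  return `TST-${slug}-${date}`;
}

console.log(sessionId("Add login tests!", new Date("2026-02-26T10:00:00Z")));
// -> "TST-add-login-tests-2026-02-26"
```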
## Phase 3: Create Task Chain
**Objective**: Dispatch tasks based on mode with proper dependencies.
### Targeted Pipeline
| Task ID | Role | Blocked By | Description |
|---------|------|------------|-------------|
| STRATEGY-001 | strategist | (none) | Analyze change scope, define test strategy |
| TESTGEN-001 | generator | STRATEGY-001 | Generate L1 unit tests |
| TESTRUN-001 | executor | TESTGEN-001 | Execute L1 tests, collect coverage |
### Standard Pipeline
| Task ID | Role | Blocked By | Description |
|---------|------|------------|-------------|
| STRATEGY-001 | strategist | (none) | Analyze change scope |
| TESTGEN-001 | generator | STRATEGY-001 | Generate L1 unit tests |
| TESTRUN-001 | executor | TESTGEN-001 | Execute L1 tests |
| TESTGEN-002 | generator | TESTRUN-001 | Generate L2 integration tests |
| TESTRUN-002 | executor | TESTGEN-002 | Execute L2 tests |
| TESTANA-001 | analyst | TESTRUN-002 | Quality analysis report |
### Comprehensive Pipeline
| Task ID | Role | Blocked By | Description |
|---------|------|------------|-------------|
| STRATEGY-001 | strategist | (none) | Analyze change scope |
| TESTGEN-001 | generator | STRATEGY-001 | Generate L1 unit tests |
| TESTGEN-002 | generator | STRATEGY-001 | Generate L2 integration tests (parallel) |
| TESTRUN-001 | executor | TESTGEN-001 | Execute L1 tests |
| TESTRUN-002 | executor | TESTGEN-002 | Execute L2 tests (parallel) |
| TESTGEN-003 | generator | TESTRUN-001, TESTRUN-002 | Generate L3 E2E tests |
| TESTRUN-003 | executor | TESTGEN-003 | Execute L3 tests |
| TESTANA-001 | analyst | TESTRUN-003 | Quality analysis report |
**Task creation pattern**:
```
TaskCreate({ subject: "<TASK-ID>: <description>", description: "Session: <session-folder>\n...", activeForm: "..." })
TaskUpdate({ taskId: <id>, owner: "<role>", addBlockedBy: [...] })
```
---
## Phase 4: Coordination Loop + Generator-Critic Control
> **Design principle (Stop-Wait)**: Model execution has no time concept. No polling or sleep loops.
> - Use synchronous Task(run_in_background: false) calls. Worker return = phase complete signal.
> - Follow the Phase 3 task chain, spawning workers stage by stage.
> - Worker prompts use the SKILL.md Coordinator Spawn Template.
### Callback Message Handling
| Received Message | Action |
|-----------------|--------|
| strategist: strategy_ready | Read strategy -> team_msg log -> TaskUpdate completed |
| generator: tests_generated | team_msg log -> TaskUpdate completed -> unblock TESTRUN |
| executor: tests_passed | Read coverage -> **Quality gate** -> proceed to next layer |
| executor: tests_failed | **Generator-Critic decision** -> decide whether to trigger revision |
| executor: coverage_report | Read coverage data -> update shared memory |
| analyst: analysis_ready | Read report -> team_msg log -> Phase 5 |
### Generator-Critic Loop Control
When receiving `tests_failed` or `coverage_report`:
**Decision table**:
| Condition | Action |
|-----------|--------|
| passRate < 0.95 AND gcRound < maxRounds | Create TESTGEN-fix task, increment gc_round, trigger revision |
| coverage < target AND gcRound < maxRounds | Create TESTGEN-fix task, increment gc_round, trigger revision |
| gcRound >= maxRounds | Accept current coverage, log warning, proceed |
| Coverage met | Log success, proceed to next layer |
**GC Loop trigger message**:
```
mcp__ccw-tools__team_msg({
operation: "log", team: "testing", from: "coordinator", to: "generator",
type: "gc_loop_trigger",
summary: "[coordinator] GC round <N>: coverage <X>% < target <Y>%, revise tests"
})
```
**Spawn-and-Stop pattern**:
1. Find tasks with: status=pending, blockedBy all resolved, owner assigned
2. For each ready task -> spawn worker (see SKILL.md Spawn Template)
3. Output status summary
4. STOP
**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
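The routing above can be sketched in the skill's template pseudocode style (handler names from the list; the exact event shape is an assumption):

```
onWake(event):
  if event.source == "worker_callback": handleCallback(event)   # automatic advance
  elif event.user_command == "check":   handleCheck()           # status summary only, no advance
  elif event.user_command == "resume":  handleResume()          # advance pipeline
```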
---
## Phase 5: Report + Next Steps
**Objective**: Completion report and follow-up options.
**Workflow**:
1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Generate summary:
```
## [coordinator] Testing Complete
**Task**: <description>
**Pipeline**: <selected-pipeline>
**GC Rounds**: <count>
**Changed Files**: <count>
### Coverage
<For each layer>: **<layer>**: <coverage>% (target: <target>%)
### Quality Report
<analysis-summary>
```
4. Update session status -> "completed"
5. Offer next steps via AskUserQuestion:
- New test: Run tests on new changes
- Deepen test: Add test layers or increase coverage
- Close team: Shutdown all teammates and cleanup
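The step-5 prompt can follow the AskUserQuestion template used elsewhere in this team (labels mirror the options above):

```
AskUserQuestion({
  questions: [{
    question: "Testing complete. Next step:",
    header: "Next",
    multiSelect: false,
    options: [
      { label: "New test", description: "Run tests on new changes" },
      { label: "Deepen test", description: "Add test layers or increase coverage" },
      { label: "Close team", description: "Shutdown all teammates and cleanup" }
    ]
  }]
})
```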
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Teammate no response | Send tracking message, 2 times -> respawn worker |
| GC loop exceeded (3 rounds) | Accept current coverage, log to shared memory |
| Test environment failure | Report to user, suggest manual fix |
| All tests fail | Check test framework config, notify analyst |
| Coverage tool unavailable | Degrade to pass rate judgment |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
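The no-response row can be sketched as (retry count from the table; respawn details follow the SKILL.md Spawn Template):

```
if teammate gives no response:
  send tracking message (retry up to 2 times)
  if still silent: respawn worker via Spawn Template, reassign its task
```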


@@ -1,204 +1,300 @@
# Executor Role
Test executor. Executes tests, collects coverage, attempts auto-fix for failures. Acts as the Critic in the Generator-Critic loop.
## Identity
- **Name**: `executor` | **Tag**: `[executor]`
- **Task Prefix**: `TESTRUN-*`
- **Responsibility**: Validation (test execution and verification)
## Boundaries
### MUST
- Only process `TESTRUN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[executor]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within validation responsibility scope
- Phase 2: Read shared-memory.json
- Phase 5: Write execution_results + defect_patterns to shared-memory.json
- Report coverage and pass rate for coordinator's GC decision
### MUST NOT
- Execute work outside this role's responsibility scope (no test generation, strategy formulation, or trend analysis)
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit `[executor]` identifier in any output
---
## Toolbox
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| Read | Read | Phase 2 | Load shared-memory.json |
| Glob | Read | Phase 2 | Find test files to execute |
| Bash | Execute | Phase 3 | Run test commands |
| Write | Write | Phase 3 | Save test results |
| Task | Delegate | Phase 3 | Delegate fix to code-developer |
| TaskUpdate | Write | Phase 5 | Mark task completed |
| SendMessage | Write | Phase 5 | Report to coordinator |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `tests_passed` | executor -> coordinator | All tests pass + coverage met | Tests passed |
| `tests_failed` | executor -> coordinator | Tests fail or coverage below target | Tests failed / coverage insufficient |
| `coverage_report` | executor -> coordinator | Coverage data collected | Coverage data |
| `error` | executor -> coordinator | Execution environment failure | Error report |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
team: "testing",
from: "executor",
to: "coordinator",
type: <message-type>,
summary: "[executor] TESTRUN complete: <summary>",
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --team testing --from executor --to coordinator --type <message-type> --summary \"[executor] ...\" --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `TESTRUN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
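In the skill's placeholder style, the same flow reads (illustrative sketch, not literal code):

```
tasks = TaskList()
mine  = tasks where subject starts with "TESTRUN-"
              and owner == "executor"
              and status == "pending"
              and blockedBy is empty
if mine is empty: idle
task = TaskGet({ taskId: mine[0].id })
TaskUpdate({ taskId: task.id, status: "in_progress" })
```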
### Phase 2: Context Loading
**Input Sources**:
| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/shared-memory.json | Yes |
| Test directory | Task description (Input: <path>) | Yes |
| Coverage target | Task description | Yes |
**Loading steps**:
1. Extract session path from task description (look for `Session: <path>`)
2. Extract test directory from task description (look for `Input: <path>`)
3. Extract coverage target from task description (default: 80%)
```
Read("<session-folder>/shared-memory.json")
```
4. Determine test framework from shared memory:

| Framework | Detection |
|-----------|-----------|
| Jest | sharedMemory.test_strategy.framework === "Jest" |
| Pytest | sharedMemory.test_strategy.framework === "Pytest" |
| Vitest | sharedMemory.test_strategy.framework === "Vitest" |
| Unknown | Default to Jest |

5. Find test files to execute:

```
Glob({ pattern: "<session-folder>/<test-dir>/**/*" })
```
### Phase 3: Test Execution + Fix Cycle
**Iterative test-fix cycle** (max 3 iterations):

| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results -> check pass rate |
| 3 | Pass rate >= 95% AND coverage >= target -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Delegate fix to code-developer subagent |
| 6 | Increment iteration counter |
| 7 | Iteration >= MAX (3) -> exit loop (report failures) |
| 8 | Go to Step 1 |

**Test commands by framework**:

| Framework | Command |
|-----------|---------|
| Jest | `npx jest --coverage --json --outputFile=<session>/results/jest-output.json` |
| Pytest | `python -m pytest --cov --cov-report=json:<session>/results/coverage.json -v` |
| Vitest | `npx vitest run --coverage --reporter=json` |

**Execution**:

```
Bash("<test-command> 2>&1 || true")
```

**Result parsing**:

| Metric | Parse Method |
|--------|--------------|
| Passed | Output does not contain "FAIL" or "FAILED" |
| Pass rate | Parse from test output (e.g., "X passed, Y failed") |
| Coverage | Parse from coverage output (e.g., "All files \| XX") |

**Auto-fix delegation** (on failure):

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Fix test failures (iteration <N>)",
  prompt: "Fix these test failures:

<test-output>

Only fix the test files, not the source code."
})
```

**Result data structure**:

```
{
  run_id: "run-<N>",
  pass_rate: <0.0-1.0>,
  coverage: <percentage>,
  coverage_target: <target>,
  iterations: <N>,
  passed: <pass_rate >= 0.95 && coverage >= target>,
  failure_summary: <string or null>,
  timestamp: <ISO-date>
}
```

**Save results**:

```
Write("<session-folder>/results/run-<N>.json", <result-json>)
```
### Phase 4: Defect Pattern Extraction
**Extract patterns from failures** (if failure_summary exists):
| Pattern Type | Detection |
|--------------|-----------|
| Null reference | "null", "undefined", "Cannot read property" |
| Async timing | "timeout", "async", "await", "promise" |
| Import errors | "Cannot find module", "import" |
| Type mismatches | "type", "expected", "received" |
**Record effective test patterns** (if pass_rate > 0.8):
| Pattern | Detection |
|---------|-----------|
| Happy path | Tests with "should succeed" or "valid input" |
| Edge cases | Tests with "edge", "boundary", "limit" |
| Error handling | Tests with "should fail", "error", "throw" |
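The detection rules above can be sketched as a keyword map (keywords taken from the tables; simple substring matching is an assumption):

```
extractDefectPatterns(output):
  patterns = []
  if output contains any of ["null", "undefined", "Cannot read property"]: patterns += "null reference"
  if output contains any of ["timeout", "async", "await", "promise"]:      patterns += "async timing"
  if output contains any of ["Cannot find module", "import"]:              patterns += "import errors"
  if output contains any of ["type", "expected", "received"]:              patterns += "type mismatches"
  return patterns
```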
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
1. **Update shared memory**:
```
sharedMemory.execution_results.push(<result-data>)
if (<result-data>.defect_patterns) {
  sharedMemory.defect_patterns = [
    ...sharedMemory.defect_patterns,
    ...<result-data>.defect_patterns
  ]
}
if (<result-data>.effective_patterns) {
  sharedMemory.effective_test_patterns = [
    ...new Set([...sharedMemory.effective_test_patterns, ...<result-data>.effective_patterns])
  ]
}
sharedMemory.coverage_history.push({
  layer: <test-dir>,
  coverage: <coverage>,
  target: <target>,
  pass_rate: <pass_rate>,
  timestamp: <ISO-date>
})
Write("<session-folder>/shared-memory.json", <updated-json>)
```

2. **Log via team_msg**:

```
mcp__ccw-tools__team_msg({
  operation: "log", team: "testing", from: "executor", to: "coordinator",
  type: <passed ? "tests_passed" : "tests_failed">,
  summary: "[executor] <passed|failed>: pass=<pass_rate>%, coverage=<coverage>% (target: <target>%), iterations=<N>",
  ref: "<session-folder>/results/run-<N>.json"
})
```

3. **SendMessage to coordinator**:

```
SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [executor] Test Execution Results

**Task**: <task-subject>
**Pass Rate**: <pass_rate>%
**Coverage**: <coverage>% (target: <target>%)
**Fix Iterations**: <N>/3
**Status**: <PASSED|NEEDS REVISION>

<if-defect-patterns>
### Defect Patterns
- <pattern-1>
- <pattern-2>
</if-defect-patterns>",
  summary: "[executor] <PASSED|FAILED>: <coverage>% coverage"
})
```

4. **TaskUpdate completed**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

5. **Loop**: Return to Phase 1 to check next task
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No TESTRUN-* tasks available | Idle, wait for coordinator assignment |
| Test command fails to start | Check framework installation, notify coordinator |
| Coverage tool unavailable | Report pass rate only |
| All tests timeout | Increase timeout, retry once |
| Auto-fix makes tests worse | Revert, report original failures |
| Shared memory not found | Notify coordinator, request location |
| Context/Plan file not found | Notify coordinator, request location |


@@ -1,192 +1,272 @@
# Generator Role
Test case generator. Generates test code by layer (L1 unit / L2 integration / L3 E2E). Acts as the Generator in the Generator-Critic loop.
## Identity
- **Name**: `generator` | **Tag**: `[generator]`
- **Task Prefix**: `TESTGEN-*`
- **Responsibility**: Code generation (test code creation)
## Boundaries
### MUST
- Only process `TESTGEN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[generator]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within code generation responsibility scope
- Phase 2: Read shared-memory.json + test strategy
- Phase 5: Write generated_tests to shared-memory.json
- Generate executable test code
### MUST NOT
- Execute work outside this role's responsibility scope (no test execution, coverage analysis, or strategy formulation)
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify source code (only generate test code)
- Omit `[generator]` identifier in any output
---
## Toolbox
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| Read | Read | Phase 2 | Load shared-memory.json, strategy, source files |
| Glob | Read | Phase 2 | Find test files, source files |
| Write | Write | Phase 3 | Create test files |
| Edit | Write | Phase 3 | Modify existing test files |
| Bash | Read | Phase 4 | Syntax validation (tsc --noEmit) |
| Task | Delegate | Phase 3 | Delegate to code-developer for complex generation |
| TaskUpdate | Write | Phase 5 | Mark task completed |
| SendMessage | Write | Phase 5 | Report to coordinator |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `tests_generated` | generator -> coordinator | Tests created | Test generation complete |
| `tests_revised` | generator -> coordinator | Tests revised after failure | Tests revised (GC loop) |
| `error` | generator -> coordinator | Processing failure | Error report |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
team: "testing",
from: "generator",
to: "coordinator",
type: <message-type>,
summary: "[generator] TESTGEN complete: <summary>",
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --team testing --from generator --to coordinator --type <message-type> --summary \"[generator] ...\" --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `TESTGEN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
### Phase 2: Context Loading
**Input Sources**:
| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/shared-memory.json | Yes |
| Test strategy | <session-folder>/strategy/test-strategy.md | Yes |
| Source files | From test_strategy.priority_files | Yes |
| Wisdom | <session-folder>/wisdom/ | No |
**Loading steps**:
1. Extract session path from task description (look for `Session: <path>`)
2. Extract layer from task description (look for `Layer: <L1-unit|L2-integration|L3-e2e>`)
3. Read shared memory:
```
Read("<session-folder>/shared-memory.json")
```
4. Read test strategy:

```
Read("<session-folder>/strategy/test-strategy.md")
```
5. Read source files to test (limit to 20 files):
```
Read("<source-file-1>")
Read("<source-file-2>")
...
```
6. Check if this is a revision (GC loop):
| Condition | Revision Mode |
|-----------|---------------|
| Task subject contains "fix" or "revised" | Yes - load previous failures |
| Otherwise | No - fresh generation |
**For revision mode**:
- Read latest result file for failure details
- Load effective test patterns from shared memory
7. Read wisdom files if available
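Revision detection (step 6) can be sketched as:

```
isRevision = task.subject contains "fix" or "revised"
if isRevision:
  previousFailures  = Read(latest "<session-folder>/results/*.json")
  effectivePatterns = sharedMemory.effective_test_patterns || []
```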
### Phase 3: Test Generation
**Strategy selection**:

| File Count | Complexity | Strategy |
|------------|------------|----------|
| <= 3 files | Low | Direct: inline Write/Edit |
| 3-5 files | Medium | Single agent: one code-developer for all |
| > 5 files | High | Batch agent: group by module, one agent per batch |

**Direct generation (low complexity)**:

For each source file:
1. Generate test path based on layer convention
2. Generate test code covering: happy path, edge cases, error handling
3. Write test file

```
Write("<session-folder>/tests/<layer>/<test-file>", <test-code>)
```

**Agent delegation (medium/high complexity)**:

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Generate <layer> tests",
  prompt: "Generate <layer> tests using <framework> for the following files:

<file-list-with-content>

<if-revision>
## Previous Failures
<failure-details>
</if-revision>

<if-effective-patterns>
## Effective Patterns (from previous rounds)
<pattern-list>
</if-effective-patterns>

Write test files to: <session-folder>/tests/<layer>/
Use <framework> conventions.
Each test file should cover: happy path, edge cases, error handling."
})
```

**Output verification**:

```
Glob({ pattern: "<session-folder>/tests/<layer>/**/*" })
```
### Phase 4: Self-Validation
**Validation checks**:

| Check | Method | Pass Criteria | Action on Fail |
|-------|--------|---------------|----------------|
| Syntax | `tsc --noEmit` or equivalent | No errors | Auto-fix imports and types |
| File count | Count generated files | >= 1 file | Report issue |
| Import resolution | Check no broken imports | All imports resolve | Fix import paths |
**Syntax check command**:
```
Bash("cd \"<session-folder>\" && npx tsc --noEmit tests/<layer>/**/*.ts 2>&1 || true")
```
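The check-then-fix pass can be sketched as (the "error TS" marker applies to TypeScript projects; other stacks need an equivalent check):

```
syntaxOutput = Bash("cd \"<session-folder>\" && npx tsc --noEmit tests/<layer>/**/*.ts 2>&1 || true")
if syntaxOutput contains "error TS":
  fix common issues: missing imports, wrong import paths, type annotations
  re-run syntax check once; report remaining errors to coordinator
```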
If syntax errors found, attempt auto-fix for common issues (imports, types).

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Update shared memory**:

```
sharedMemory.generated_tests = [
  ...sharedMemory.generated_tests,
  ...<new-test-files>.map(f => ({
    file: f,
    layer: <layer>,
    round: <is-revision ? gc_round : 0>,
    revised: <is-revision>
  }))
]
Write("<session-folder>/shared-memory.json", <updated-json>)
```

2. **Log via team_msg**:

```
mcp__ccw-tools__team_msg({
  operation: "log", team: "testing", from: "generator", to: "coordinator",
  type: <is-revision ? "tests_revised" : "tests_generated">,
  summary: "[generator] <Generated|Revised> <file-count> <layer> test files",
  ref: "<session-folder>/tests/<layer>/"
})
```

3. **SendMessage to coordinator**:

```
SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [generator] Tests <Generated|Revised>\n\n**Layer**: <layer>\n**Files**: <file-count>\n**Framework**: <framework>\n**Revision**: <Yes/No>\n**Output**: <path>",
  summary: "[generator] <file-count> <layer> tests <generated|revised>"
})
```

4. **TaskUpdate completed**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

5. **Loop**: Return to Phase 1 to check next task
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No TESTGEN-* tasks available | Idle, wait for coordinator assignment |
| Source file not found | Skip, notify coordinator |
| Test framework unknown | Default to Jest patterns |
| Revision with no failure data | Generate additional tests instead of revising |
| Syntax errors in generated tests | Auto-fix imports and types |
| Shared memory not found | Notify coordinator, request location |
| Context/Plan file not found | Notify coordinator, request location |


@@ -1,179 +1,218 @@
# Strategist Role
Test strategy designer. Analyzes git diff, determines test layers, defines coverage targets and test priorities.
## Identity
- **Name**: `strategist` | **Tag**: `[strategist]`
- **Task Prefix**: `STRATEGY-*`
- **Responsibility**: Read-only analysis (strategy formulation)
## Boundaries
### MUST
- Only process `STRATEGY-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[strategist]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within read-only analysis responsibility scope
- Phase 2: Read shared-memory.json
- Phase 5: Write test_strategy to shared-memory.json
### MUST NOT
- Execute work outside this role's responsibility scope (no test generation, execution, or result analysis)
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit `[strategist]` identifier in any output
---
## Toolbox
### Tool Capabilities
| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| Read | Read | Phase 2 | Load shared-memory.json, existing test patterns |
| Bash | Read | Phase 2 | Git diff analysis, framework detection |
| Glob | Read | Phase 2 | Find test files, config files |
| Write | Write | Phase 3 | Create test-strategy.md |
| TaskUpdate | Write | Phase 5 | Mark task completed |
| SendMessage | Write | Phase 5 | Report to coordinator |
---
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `strategy_ready` | strategist -> coordinator | Strategy completed | Strategy formulation complete |
| `error` | strategist -> coordinator | Processing failure | Error report |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
team: "testing",
from: "strategist",
to: "coordinator",
type: <message-type>,
summary: "[strategist] STRATEGY complete: <summary>",
ref: <artifact-path>
})
```
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --team testing --from strategist --to coordinator --type <message-type> --summary \"[strategist] ...\" --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `STRATEGY-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
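The eligibility filter in that flow can be sketched as a pure predicate. The task shape (`subject`, `owner`, `status`, `blockedBy`) is assumed from the flow description, not a normative API:

```javascript
// Phase 1 eligibility filter: keep only tasks that match this role's
// prefix and owner, are still pending, and have no blocking tasks.
function eligibleTasks(tasks, prefix, owner) {
  return tasks.filter(t =>
    t.subject.startsWith(prefix) &&
    t.owner === owner &&
    t.status === 'pending' &&
    t.blockedBy.length === 0
  );
}
```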
### Phase 2: Context Loading
**Input Sources**:
| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/shared-memory.json | Yes |
| Git diff | `git diff HEAD~1` or `git diff --cached` | Yes |
| Changed files | From git diff --name-only | Yes |
**Loading steps**:
1. Extract session path from task description (look for `Session: <path>`)
2. Read shared-memory.json for changed files and modules
```
Read("<session-folder>/shared-memory.json")
```
3. Get detailed git diff for analysis:
```
Bash("git diff HEAD~1 -- <file1> <file2> ... 2>/dev/null || git diff --cached -- <files>")
```
4. Detect test framework from project files:
| Framework | Detection Method |
|-----------|-----------------|
| Jest | Check jest.config.js or jest.config.ts exists |
| Pytest | Check pytest.ini or pyproject.toml exists |
| Vitest | Check vitest.config.ts or vitest.config.js exists |
```
Bash("test -f jest.config.js || test -f jest.config.ts && echo \"yes\" || echo \"no\"")
```
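The detection table can be sketched as a pure function with the file-existence check injected. This is illustrative, not normative — note it deliberately checks explicit Jest/Vitest configs before `pyproject.toml`, since that file exists in most modern Python projects whether or not pytest is used:

```javascript
// Map the first matching config file to a framework name,
// mirroring the detection table above. `exists(path)` is injected
// so the sketch stays testable without touching the filesystem.
function detectFramework(exists) {
  if (exists('jest.config.js') || exists('jest.config.ts')) return 'Jest';
  if (exists('vitest.config.ts') || exists('vitest.config.js')) return 'Vitest';
  if (exists('pytest.ini') || exists('pyproject.toml')) return 'Pytest';
  return 'Unknown';
}
```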
### Phase 3: Strategy Formulation
**Analysis dimensions**:
| Change Type | Analysis | Impact |
|-------------|----------|--------|
| New files | Need new tests | High priority |
| Modified functions | Need updated tests | Medium priority |
| Deleted files | Need test cleanup | Low priority |
| Config changes | May need integration tests | Variable |
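One way to apply these dimensions to `git diff --name-status` output is a per-line classifier. The A/M/D letters follow git's name-status format; the config-file pattern here is an illustrative guess, not a fixed rule:

```javascript
// Classify one `git diff --name-status` line into the dimensions above.
// Config-like files are routed to the integration-test bucket first.
function classifyChange(statusLine) {
  const [status, file] = statusLine.split(/\s+/);
  if (/(^|\/)[^/]*config[^/]*\.|\.(json|ya?ml|toml)$/.test(file)) {
    return { file, type: 'Config change', impact: 'May need integration tests', priority: 'Variable' };
  }
  const byStatus = {
    A: { type: 'New file', impact: 'Need new tests', priority: 'High' },
    M: { type: 'Modified', impact: 'Need updated tests', priority: 'Medium' },
    D: { type: 'Deleted', impact: 'Need test cleanup', priority: 'Low' }
  };
  return { file, ...(byStatus[status] || { type: 'Other', impact: 'Review manually', priority: 'Variable' }) };
}
```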
**Strategy structure**:
1. **Change Analysis Table**: File, Change Type, Impact, Priority
2. **Test Layer Recommendations**:
- L1 Unit Tests: Scope, Coverage Target, Priority Files, Test Patterns
- L2 Integration Tests: Scope, Coverage Target, Integration Points
- L3 E2E Tests: Scope, Coverage Target, User Scenarios
3. **Risk Assessment**: Risk, Probability, Impact, Mitigation
4. **Test Execution Order**: Prioritized sequence
**Output file**: `<session-folder>/strategy/test-strategy.md`
```
Write("<session-folder>/strategy/test-strategy.md", <strategy-content>)
```
### Phase 4: Self-Validation
**Validation checks**:
| Check | Criteria | Action |
|-------|----------|--------|
| Has L1 scope | L1 scope not empty | If empty, set default based on changed files |
| Has coverage targets | L1 target > 0 | If missing, use default (80/60/40) |
| Has priority files | Priority list not empty | If empty, use all changed files |
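The fallback actions in this table can be sketched as a gap-filling pass. The 80/60/40 defaults come from the coverage-target row; the strategy object's shape is assumed from Phase 5's `test_strategy` write:

```javascript
// Default coverage targets per the validation table (80/60/40).
const DEFAULT_TARGETS = { L1: 80, L2: 60, L3: 40 };

// Fill missing scope, targets, and priorities with the documented defaults.
function fillStrategyGaps(strategy, changedFiles) {
  const s = { layers: {}, coverage_targets: {}, priority_files: [], ...strategy };
  if (!s.layers.L1 || s.layers.L1.length === 0) {
    s.layers = { ...s.layers, L1: [...changedFiles] };   // default L1 scope
  }
  if (!s.coverage_targets.L1) {
    s.coverage_targets = { ...DEFAULT_TARGETS, ...s.coverage_targets };
  }
  if (s.priority_files.length === 0) {
    s.priority_files = [...changedFiles];                // all changed files
  }
  return s;
}
```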
### Phase 5: Report to Coordinator
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
1. **Update shared memory**:
```
sharedMemory.test_strategy = {
framework: <detected-framework>,
layers: { L1: [...], L2: [...], L3: [...] },
coverage_targets: { L1: <n>, L2: <n>, L3: <n> },
priority_files: [...],
risks: [...]
}
Write("<session-folder>/shared-memory.json", <updated-json>)
```
2. **Log via team_msg**:
```
mcp__ccw-tools__team_msg({
operation: "log", team: "testing", from: "strategist", to: "coordinator",
type: "strategy_ready",
summary: "[strategist] Strategy complete: <file-count> files, L1-L3 layers defined",
ref: "<session-folder>/strategy/test-strategy.md"
})
```
3. **SendMessage to coordinator**:
```
SendMessage({
type: "message", recipient: "coordinator",
content: "## [strategist] Test Strategy Ready\n\n**Files**: <count>\n**Layers**: L1(<count>), L2(<count>), L3(<count>)\n**Framework**: <framework>\n**Output**: <path>",
summary: "[strategist] Strategy ready"
})
```
4. **TaskUpdate completed**:
```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```
5. **Loop**: Return to Phase 1 to check next task
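Step 1's shared-memory update is a read-modify-write: only the `test_strategy` key may be replaced, everything else in the file must survive. A minimal sketch with injected I/O (nothing here binds to real file APIs; a missing or unreadable file falls back to an empty object):

```javascript
// Merge test_strategy into shared memory without clobbering other keys.
// readFile/writeFile are injected so the sketch is testable in isolation.
function mergeTestStrategy(readFile, writeFile, memoryPath, testStrategy) {
  let memory = {};
  try { memory = JSON.parse(readFile(memoryPath)); } catch {}  // tolerate missing file
  memory.test_strategy = testStrategy;                         // replace only this key
  writeFile(memoryPath, JSON.stringify(memory, null, 2));
  return memory;
}
```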
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No STRATEGY-* tasks available | Idle, wait for coordinator assignment |
| No changed files | Analyze full codebase, recommend smoke tests |
| Unknown test framework | Recommend Jest/Pytest based on project language |
| All files are config | Recommend integration tests only |
| Shared memory not found | Notify coordinator, request location |