mirror of https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-28 09:23:08 +08:00

chore: remove skills_lib from remote tracking

- Add .claude/skills_lib/ to .gitignore
- Remove folder from git index (local files preserved)

---
name: team-lifecycle-v2
description: Unified team skill for full lifecycle - spec/impl/test. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team lifecycle".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Lifecycle

Unified team skill covering specification, implementation, testing, and review. All team members invoke this skill with `--role=xxx` to route to role-specific execution.

## Architecture Overview

```
┌───────────────────────────────────────────────────┐
│         Skill(skill="team-lifecycle-v2")          │
│  args="<task description>"  or  args="--role=xxx" │
└───────────────────┬───────────────────────────────┘
                    │ Role Router
                    │
         ┌──── --role present? ────┐
         │ NO                      │ YES
         ↓                         ↓
  Orchestration Mode          Role Dispatch
  (auto → coordinator)        (route to role.md)
                                   │
  ┌──────────┬───────┬──────┬──────┴───┬───────┬────────┬──────┬────────┐
  ↓          ↓       ↓      ↓          ↓       ↓        ↓      ↓
┌──────────┐┌───────┐┌──────┐┌──────────┐┌───────┐┌────────┐┌──────┐┌────────┐
│coordinator││analyst││writer││discussant││planner││executor││tester││reviewer│
│  roles/  ││roles/ ││roles/││  roles/  ││roles/ ││ roles/ ││roles/││ roles/ │
└──────────┘└───────┘└──────┘└──────────┘└───────┘└────────┘└──────┘└────────┘
      ↑              ↑
      on-demand by coordinator
  ┌──────────┐   ┌─────────┐
  │ explorer │   │architect│
  │ (service)│   │(consult)│
  └──────────┘   └─────────┘
```

## Command Architecture

Each role is organized as a folder with a `role.md` orchestrator and optional `commands/` for delegation:

```
roles/
├── coordinator/
│   ├── role.md                  # Orchestrator (Phase 1/5 inline, Phase 2-4 delegate)
│   └── commands/
│       ├── dispatch.md          # Task chain creation (3 modes)
│       └── monitor.md           # Coordination loop + message routing
├── analyst/
│   ├── role.md
│   └── commands/
├── writer/
│   ├── role.md
│   └── commands/
│       └── generate-doc.md      # Multi-CLI document generation (4 doc types)
├── discussant/
│   ├── role.md
│   └── commands/
│       └── critique.md          # Multi-perspective CLI critique
├── planner/
│   ├── role.md
│   └── commands/
│       └── explore.md           # Multi-angle codebase exploration
├── executor/
│   ├── role.md
│   └── commands/
│       └── implement.md         # Multi-backend code implementation
├── tester/
│   ├── role.md
│   └── commands/
│       └── validate.md          # Test-fix cycle
├── reviewer/
│   ├── role.md
│   └── commands/
│       ├── code-review.md       # 4-dimension code review
│       └── spec-quality.md      # 5-dimension spec quality check
├── explorer/                    # Service role (on-demand)
│   └── role.md                  # Multi-strategy code search & pattern discovery
├── architect/                   # Consulting role (on-demand)
│   ├── role.md                  # Multi-mode architecture assessment
│   └── commands/
│       └── assess.md            # Mode-specific assessment strategies
├── fe-developer/                # Frontend pipeline role
│   └── role.md                  # Frontend component/page implementation
└── fe-qa/                       # Frontend pipeline role
    ├── role.md                  # 5-dimension frontend QA + GC loop
    └── commands/
        └── pre-delivery-checklist.md
```

**Design principle**: role.md keeps Phase 1 (Task Discovery) and Phase 5 (Report) inline. Phases 2-4 either stay inline (simple logic) or delegate to `commands/*.md` via `Read("commands/xxx.md")` when they involve subagent delegation, CLI fan-out, or complex strategies.

**Command files** are self-contained: each includes Strategy, Execution Steps, and Error Handling. Any subagent can `Read()` a command file and execute it independently.
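
The delegate-or-inline rule can be sketched as a small resolver. This is a minimal sketch, assuming the role and command names from the tree above; `availableFiles` stands in for a real filesystem check, and the inline fallback matches the Error Handling table at the end of this skill.

```javascript
// Sketch of the role.md delegation rule: prefer a commands/*.md file,
// fall back to inline execution in role.md when the file is missing.
// availableFiles is a stand-in for the real filesystem.
function resolvePhase(role, command, availableFiles) {
  const path = `roles/${role}/commands/${command}.md`;
  if (availableFiles.has(path)) {
    return { mode: "delegate", path };                        // subagent will Read() this file
  }
  return { mode: "inline", path: `roles/${role}/role.md` };   // fallback
}

const files = new Set(["roles/coordinator/commands/dispatch.md"]);
```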

## Role Router

### Input Parsing

Parse `$ARGUMENTS` to extract `--role`:

```javascript
const args = "$ARGUMENTS"
const roleMatch = args.match(/--role[=\s]+(\w+)/)
const teamName = args.match(/--team[=\s]+([\w-]+)/)?.[1] || "lifecycle"

if (!roleMatch) {
  // No --role: Orchestration Mode → auto-route to coordinator
  // See "Orchestration Mode" section below
}

const role = roleMatch ? roleMatch[1] : "coordinator"
```

### Role Dispatch

```javascript
const VALID_ROLES = {
  "coordinator":  { file: "roles/coordinator/role.md",  prefix: null },
  "analyst":      { file: "roles/analyst/role.md",      prefix: "RESEARCH" },
  "writer":       { file: "roles/writer/role.md",       prefix: "DRAFT" },
  "discussant":   { file: "roles/discussant/role.md",   prefix: "DISCUSS" },
  "planner":      { file: "roles/planner/role.md",      prefix: "PLAN" },
  "executor":     { file: "roles/executor/role.md",     prefix: "IMPL" },
  "tester":       { file: "roles/tester/role.md",       prefix: "TEST" },
  "reviewer":     { file: "roles/reviewer/role.md",     prefix: ["REVIEW", "QUALITY"] },
  "explorer":     { file: "roles/explorer/role.md",     prefix: "EXPLORE", type: "service" },
  "architect":    { file: "roles/architect/role.md",    prefix: "ARCH",    type: "consulting" },
  "fe-developer": { file: "roles/fe-developer/role.md", prefix: "DEV-FE",  type: "frontend-pipeline" },
  "fe-qa":        { file: "roles/fe-qa/role.md",        prefix: "QA-FE",   type: "frontend-pipeline" }
}

if (!VALID_ROLES[role]) {
  throw new Error(`Unknown role: ${role}. Available: ${Object.keys(VALID_ROLES).join(', ')}`)
}

// Read and execute role-specific logic
Read(VALID_ROLES[role].file)
// → Execute the 5-phase process defined in that file
```

### Orchestration Mode (no-argument trigger)

When invoked without `--role`, the skill automatically enters coordinator orchestration mode. The user only needs to pass a task description to trigger the full pipeline.

**Trigger**:

```javascript
// User invocation (no --role) — auto-routes to coordinator
Skill(skill="team-lifecycle-v2", args="<task description>")

// Equivalent to
Skill(skill="team-lifecycle-v2", args="--role=coordinator <task description>")
```

**Flow**:

```javascript
if (!roleMatch) {
  // Orchestration Mode: auto-route to coordinator
  // coordinator role.md will execute:
  //   Phase 1: Requirement clarification
  //   Phase 2: TeamCreate + spawn all worker agents
  //            (each agent prompt contains a Skill(args="--role=xxx") callback)
  //   Phase 3: Create the task chain
  //   Phase 4: Monitoring and coordination loop
  //   Phase 5: Report results

  const role = "coordinator"
  Read(VALID_ROLES[role].file)
}
```

**Full call chain**:

```
User: Skill(args="<task description>")
  │
  ├─ SKILL.md: no --role → Orchestration Mode → read coordinator role.md
  │
  ├─ coordinator Phase 2: TeamCreate + spawn workers
  │    each worker prompt contains a Skill(args="--role=xxx") callback
  │
  ├─ coordinator Phase 3: dispatch task chain
  │
  ├─ worker receives task → Skill(args="--role=xxx") → SKILL.md Role Router → role.md
  │    each worker automatically gets:
  │    ├─ role definition (role.md: identity, boundaries, message types)
  │    ├─ available commands (commands/*.md)
  │    └─ execution logic (5-phase process)
  │
  └─ coordinator Phase 4-5: monitor → report results
```

### Available Roles

| Role | Task Prefix | Responsibility | Role File |
|------|-------------|----------------|-----------|
| `coordinator` | N/A | Pipeline orchestration, requirement clarification, task dispatch | [roles/coordinator/role.md](roles/coordinator/role.md) |
| `analyst` | RESEARCH-* | Seed analysis, codebase exploration, context gathering | [roles/analyst/role.md](roles/analyst/role.md) |
| `writer` | DRAFT-* | Product Brief / PRD / Architecture / Epics generation | [roles/writer/role.md](roles/writer/role.md) |
| `discussant` | DISCUSS-* | Multi-perspective critique, consensus building | [roles/discussant/role.md](roles/discussant/role.md) |
| `planner` | PLAN-* | Multi-angle exploration, structured planning | [roles/planner/role.md](roles/planner/role.md) |
| `executor` | IMPL-* | Code implementation following plans | [roles/executor/role.md](roles/executor/role.md) |
| `tester` | TEST-* | Adaptive test-fix cycles, quality gates | [roles/tester/role.md](roles/tester/role.md) |
| `reviewer` | REVIEW-* + QUALITY-* | Code review + spec quality validation (auto-switch by prefix) | [roles/reviewer/role.md](roles/reviewer/role.md) |
| `explorer` | EXPLORE-* | Code search, pattern discovery, dependency tracing (service role, on-demand) | [roles/explorer/role.md](roles/explorer/role.md) |
| `architect` | ARCH-* | Architecture assessment, tech feasibility, design review (consulting role, on-demand) | [roles/architect/role.md](roles/architect/role.md) |
| `fe-developer` | DEV-FE-* | Frontend component/page implementation, design token consumption (frontend pipeline) | [roles/fe-developer/role.md](roles/fe-developer/role.md) |
| `fe-qa` | QA-FE-* | 5-dimension frontend QA, accessibility, design compliance, GC loop (frontend pipeline) | [roles/fe-qa/role.md](roles/fe-qa/role.md) |
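
The reviewer's auto-switch by prefix can be sketched as a pure routing function. The command file names come from the Command Architecture tree above; the function itself is illustrative, not part of the skill.

```javascript
// Sketch of the reviewer's auto-switch: REVIEW-* tasks run a code
// review, QUALITY-* tasks run a spec quality check.
function reviewerCommandFor(taskSubject) {
  if (taskSubject.startsWith("REVIEW-")) return "commands/code-review.md";
  if (taskSubject.startsWith("QUALITY-")) return "commands/spec-quality.md";
  return null; // not a reviewer task
}
```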

## Shared Infrastructure

### Role Isolation Rules

**Core principle**: each role may only perform work within its own scope of responsibility.

#### Output Tagging (mandatory)

All role output must carry a `[role_name]` tag prefix:

```javascript
// SendMessage — both content and summary must carry the tag
SendMessage({
  content: `## [${role}] ...`,
  summary: `[${role}] ...`
})

// team_msg — summary must carry the tag
mcp__ccw-tools__team_msg({
  summary: `[${role}] ...`
})
```

#### Coordinator Isolation

| Allowed | Forbidden |
|---------|-----------|
| Requirement clarification (AskUserQuestion) | ❌ Writing/modifying code directly |
| Creating task chains (TaskCreate) | ❌ Invoking implementation subagents (code-developer, etc.) |
| Dispatching tasks to workers | ❌ Performing analysis/testing/review directly |
| Monitoring progress (message bus) | ❌ Bypassing workers to complete tasks itself |
| Reporting results to the user | ❌ Modifying source code or artifact files |

#### Worker Isolation

| Allowed | Forbidden |
|---------|-----------|
| Processing tasks with its own prefix | ❌ Processing tasks with other roles' prefixes |
| SendMessage to coordinator | ❌ Communicating directly with other workers |
| Using tools declared in its Toolbox | ❌ Creating tasks for other roles (TaskCreate) |
| Delegating to commands in `commands/` | ❌ Modifying resources outside its responsibility |

### Message Bus (All Roles)

Before every `SendMessage`, the role must call `mcp__ccw-tools__team_msg` to log:

```javascript
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: role,
  to: "coordinator",
  type: "<type>",
  summary: `[${role}] <summary>`,
  ref: "<file_path>"
})
```

**Message types by role**:

| Role | Types |
|------|-------|
| coordinator | `plan_approved`, `plan_revision`, `task_unblocked`, `fix_required`, `error`, `shutdown` |
| analyst | `research_ready`, `research_progress`, `error` |
| writer | `draft_ready`, `draft_revision`, `impl_progress`, `error` |
| discussant | `discussion_ready`, `discussion_blocked`, `impl_progress`, `error` |
| planner | `plan_ready`, `plan_revision`, `impl_progress`, `error` |
| executor | `impl_complete`, `impl_progress`, `error` |
| tester | `test_result`, `impl_progress`, `fix_required`, `error` |
| reviewer | `review_result`, `quality_result`, `fix_required`, `error` |
| explorer | `explore_ready`, `explore_progress`, `task_failed` |
| architect | `arch_ready`, `arch_concern`, `arch_progress`, `error` |
| fe-developer | `dev_fe_complete`, `dev_fe_progress`, `error` |
| fe-qa | `qa_fe_passed`, `qa_fe_result`, `fix_required`, `error` |

### CLI Fallback

When the `mcp__ccw-tools__team_msg` MCP is unavailable:

```javascript
Bash(`ccw team log --team "${teamName}" --from "${role}" --to "coordinator" --type "<type>" --summary "[${role}] <summary>" --json`)
Bash(`ccw team list --team "${teamName}" --last 10 --json`)
Bash(`ccw team status --team "${teamName}" --json`)
```
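
The MCP-first-then-CLI rule can be sketched as a small wrapper. This is a hedged sketch: `mcpLog` and `runCli` are hypothetical stand-ins that a caller would wire to `mcp__ccw-tools__team_msg` and `Bash(\`ccw team log ...\`)` respectively.

```javascript
// Sketch of the logging fallback: try the MCP tool first, drop to the
// ccw CLI when it is unavailable. mcpLog/runCli are hypothetical
// stand-ins injected by the caller.
function logTeamMessage(entry, mcpLog, runCli) {
  try {
    return { via: "mcp", result: mcpLog(entry) };
  } catch (_unavailable) {
    const cmd = `ccw team log --team "${entry.team}" --from "${entry.from}"` +
      ` --to "${entry.to}" --type "${entry.type}"` +
      ` --summary "${entry.summary}" --json`;
    return { via: "cli", result: runCli(cmd) };
  }
}
```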

### Wisdom Accumulation (All Roles)

Cross-task knowledge accumulation. The coordinator creates the `wisdom/` directory at session initialization; all workers read from and contribute to it during execution.

**Directory structure**:
```
{sessionFolder}/wisdom/
├── learnings.md      # Patterns and insights discovered
├── decisions.md      # Architecture and design decisions
├── conventions.md    # Codebase conventions
└── issues.md         # Known risks and issues
```

**Phase 2 load (all workers)**:
```javascript
// Load wisdom context at start of Phase 2
const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim()
let wisdom = {}
if (sessionFolder) {
  try { wisdom.learnings   = Read(`${sessionFolder}/wisdom/learnings.md`) } catch {}
  try { wisdom.decisions   = Read(`${sessionFolder}/wisdom/decisions.md`) } catch {}
  try { wisdom.conventions = Read(`${sessionFolder}/wisdom/conventions.md`) } catch {}
  try { wisdom.issues      = Read(`${sessionFolder}/wisdom/issues.md`) } catch {}
}
```

**Phase 4/5 contribution (on task completion)**:
```javascript
// Contribute wisdom after task completion
if (sessionFolder) {
  const timestamp = new Date().toISOString().substring(0, 10)

  // Role-specific contributions:
  //   analyst   → learnings (exploration dimensions, codebase patterns)
  //   writer    → conventions (document structure, naming patterns)
  //   planner   → decisions (task decomposition rationale)
  //   executor  → learnings (implementation patterns), issues (bugs encountered)
  //   tester    → issues (test failures, edge cases), learnings (test patterns)
  //   reviewer  → conventions (code quality patterns), issues (review findings)
  //   explorer  → conventions (codebase patterns), learnings (dependency insights)
  //   architect → decisions (architecture choices), issues (architectural risks)

  try {
    const targetFile = `${sessionFolder}/wisdom/${wisdomTarget}.md`
    const existing = Read(targetFile)
    const entry = `- [${timestamp}] [${role}] ${wisdomEntry}`
    Write(targetFile, existing + '\n' + entry)
  } catch {} // wisdom not initialized
}
```

**Coordinator injection**: when spawning workers, the coordinator passes `Session: {sessionFolder}` in the task description, which workers use to locate the wisdom directory. Existing wisdom content gives later workers context, enabling cross-task knowledge transfer.

### Task Lifecycle (All Worker Roles)

```javascript
// Standard task lifecycle every worker role follows
// Phase 1: Discovery
const tasks = TaskList()
const prefixes = Array.isArray(VALID_ROLES[role].prefix) ? VALID_ROLES[role].prefix : [VALID_ROLES[role].prefix]
const myTasks = tasks.filter(t =>
  prefixes.some(p => t.subject.startsWith(`${p}-`)) &&
  t.owner === role &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })

// Phase 1.5: Resume Artifact Check (prevents duplicate output)
// When a session resumes from a pause, the coordinator has already reset
// in_progress tasks to pending. Before starting work, the worker must
// check whether this task's output artifact already exists.
//   Artifact exists and is complete:
//     → skip straight to Phase 5 and report completion (avoid overwriting prior work)
//   Artifact exists but is incomplete (e.g. empty file or missing key sections):
//     → run Phase 2-4 normally (continue from the existing artifact, not from scratch)
//   Artifact does not exist:
//     → run Phase 2-4 normally
//
// Each role checks its own output path:
//   analyst    → sessionFolder/spec/discovery-context.json
//   writer     → sessionFolder/spec/{product-brief.md | requirements/ | architecture/ | epics/}
//   discussant → sessionFolder/discussions/discuss-NNN-*.md
//   planner    → sessionFolder/plan/plan.json
//   executor   → git diff (committed code changes)
//   tester     → test pass rate
//   reviewer   → sessionFolder/spec/readiness-report.md (quality) or review findings (code)

// Phase 2-4: Role-specific (see roles/{role}/role.md)

// Phase 5: Report + Loop — all output must carry the [role] tag
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: role, to: "coordinator", type: "...", summary: `[${role}] ...` })
SendMessage({ type: "message", recipient: "coordinator", content: `## [${role}] ...`, summary: `[${role}] ...` })
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task → back to Phase 1
```

## Three-Mode Pipeline

```
Spec-only:
  RESEARCH-001 → DISCUSS-001 → DRAFT-001 → DISCUSS-002
  → DRAFT-002 → DISCUSS-003 → DRAFT-003 → DISCUSS-004
  → DRAFT-004 → DISCUSS-005 → QUALITY-001 → DISCUSS-006

Impl-only (backend):
  PLAN-001 → IMPL-001 → TEST-001 + REVIEW-001

Full-lifecycle (backend):
  [Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006) → IMPL-001 → TEST-001 + REVIEW-001
```
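
As a sketch, the impl-only chain above can be expressed as task records whose `blockedBy` fields encode the arrows; TEST-001 and REVIEW-001 both wait on IMPL-001. The record shape is illustrative, not the exact TaskCreate payload.

```javascript
// Sketch of the impl-only chain: blockedBy encodes the pipeline arrows.
function buildImplOnlyChain() {
  return [
    { subject: "PLAN-001",   owner: "planner",  blockedBy: [] },
    { subject: "IMPL-001",   owner: "executor", blockedBy: ["PLAN-001"] },
    { subject: "TEST-001",   owner: "tester",   blockedBy: ["IMPL-001"] },
    { subject: "REVIEW-001", owner: "reviewer", blockedBy: ["IMPL-001"] },
  ];
}

// A task is runnable once everything it is blocked by has completed.
function runnable(tasks, completed) {
  return tasks.filter(t =>
    !completed.includes(t.subject) &&
    t.blockedBy.every(b => completed.includes(b))
  ).map(t => t.subject);
}
```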

### Frontend Pipelines

The coordinator auto-detects frontend tasks from task keywords and routes them to a frontend sub-pipeline:

```
FE-only (pure frontend):
  PLAN-001 → DEV-FE-001 → QA-FE-001
  (GC loop: if QA-FE verdict=NEEDS_FIX → DEV-FE-002 → QA-FE-002, max 2 rounds)

Fullstack (frontend and backend in parallel):
  PLAN-001 → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001

Full-lifecycle + FE:
  [Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006)
  → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001
```

### Frontend Detection

In Phase 1 the coordinator auto-detects frontend tasks from task keywords plus project files, and selects the pipeline mode (fe-only / fullstack / impl-only). See [roles/coordinator/role.md](roles/coordinator/role.md) for the detection logic.
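
A hypothetical sketch of the mode selection; the actual keyword list and project-file checks live in roles/coordinator/role.md, so both regexes here are assumptions for illustration only.

```javascript
// Hypothetical sketch of pipeline-mode selection from task keywords;
// the real detection logic lives in roles/coordinator/role.md.
function detectPipelineMode(taskText) {
  const feHints = /\b(component|page|ui|frontend|css|styling)\b/i;   // assumed keywords
  const beHints = /\b(api|server|database|endpoint|backend)\b/i;     // assumed keywords
  const fe = feHints.test(taskText);
  const be = beHints.test(taskText);
  if (fe && be) return "fullstack";
  if (fe) return "fe-only";
  return "impl-only";
}
```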

### Generator-Critic Loop (fe-developer ↔ fe-qa)

```
┌──────────────┐  DEV-FE artifact    ┌──────────┐
│ fe-developer │ ──────────────────→ │  fe-qa   │
│ (Generator)  │                     │ (Critic) │
│              │ ←────────────────── │          │
└──────────────┘  QA-FE feedback     └──────────┘
            (max 2 rounds)

Convergence: fe-qa.score >= 8 && fe-qa.critical_count === 0
```
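
The loop and its convergence test can be sketched as follows; `generate` and `critique` are hypothetical stand-ins for the DEV-FE and QA-FE task rounds.

```javascript
// Sketch of the Generator-Critic loop: alternate DEV-FE and QA-FE
// rounds until the convergence condition holds or 2 rounds elapse.
function runGcLoop(generate, critique, maxRounds = 2) {
  let feedback = null;
  for (let round = 1; round <= maxRounds; round++) {
    const artifact = generate(feedback);   // DEV-FE: build, or fix per feedback
    feedback = critique(artifact);         // QA-FE: { score, critical_count, ... }
    if (feedback.score >= 8 && feedback.critical_count === 0) {
      return { converged: true, rounds: round };
    }
  }
  return { converged: false, rounds: maxRounds };
}
```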

## Unified Session Directory

All session artifacts are stored under a single session folder:

```
.workflow/.team/TLS-{slug}-{YYYY-MM-DD}/
├── team-session.json        # Session state (status, progress, completed_tasks)
├── spec/                    # Spec artifacts (analyst, writer, reviewer output)
│   ├── spec-config.json
│   ├── discovery-context.json
│   ├── product-brief.md
│   ├── requirements/        # _index.md + REQ-*.md + NFR-*.md
│   ├── architecture/        # _index.md + ADR-*.md
│   ├── epics/               # _index.md + EPIC-*.md
│   ├── readiness-report.md
│   └── spec-summary.md
├── discussions/             # Discussion records (discussant output)
│   └── discuss-001..006.md
├── plan/                    # Plan artifacts (planner output)
│   ├── exploration-{angle}.json
│   ├── explorations-manifest.json
│   ├── plan.json
│   └── .task/
│       └── TASK-*.json
├── explorations/            # Explorer output (cached for cross-role reuse)
│   └── explore-*.json
├── architecture/            # Architect output (assessment reports)
│   └── arch-*.json
├── wisdom/                  # Cross-task accumulated knowledge
│   ├── learnings.md         # Patterns and insights discovered
│   ├── decisions.md         # Architectural decisions made
│   ├── conventions.md       # Codebase conventions found
│   └── issues.md            # Known issues and risks
├── qa/                      # QA output (fe-qa audit reports)
│   └── audit-fe-*.json
└── build/                   # Frontend build output (fe-developer)
    ├── token-files/
    └── component-files/
```

Messages remain at `.workflow/.team-msg/{team-name}/` (unchanged).
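
The session folder name follows `TLS-{slug}-{YYYY-MM-DD}`. A minimal sketch, assuming a simple lowercase-and-dashes slug rule (the exact slug algorithm is not specified in this skill):

```javascript
// Sketch of the TLS-{slug}-{YYYY-MM-DD} naming scheme; the slug rule
// (lowercase, non-alphanumerics collapsed to dashes) is an assumption.
function sessionFolderFor(topic, date) {
  const slug = topic.toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
  const day = date.toISOString().substring(0, 10);
  return `.workflow/.team/TLS-${slug}-${day}/`;
}
```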

## Session Resume

The coordinator supports `--resume` / `--continue` flags to resume interrupted sessions:

1. Scans `.workflow/.team/TLS-*/team-session.json` for `status: "active"` or `"paused"`
2. Multiple matches → `AskUserQuestion` for user selection
3. **Audit TaskList** — fetch the real current status of all tasks
4. **Reconcile** — bidirectionally sync session.completed_tasks ↔ TaskList status:
   - Completed in session but not marked in TaskList → correct TaskList to completed
   - Completed in TaskList but not recorded in session → backfill into session
   - in_progress status (interrupted by pause) → reset to pending
5. Determines remaining pipeline from reconciled state
6. Rebuilds team (`TeamCreate` + worker spawns for needed roles only)
7. Creates missing tasks with correct `blockedBy` dependency chain (uses `TASK_METADATA` lookup)
8. Verifies dependency chain integrity for existing tasks
9. Updates session file with reconciled state + current_phase
10. **Kick** — send a `task_unblocked` message to the worker owning the first runnable task, breaking the resume deadlock
11. Jumps to Phase 4 coordination loop
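
Step 4's bidirectional sync can be sketched as a pure function over the session record and the live task list; the returned `taskUpdates` correspond to the TaskUpdate calls the coordinator would issue.

```javascript
// Sketch of the step-4 reconcile: sync session.completed_tasks with
// TaskList status and reset interrupted in_progress tasks to pending.
function reconcile(sessionCompleted, taskList) {
  const completed = new Set(sessionCompleted);
  const taskUpdates = [];
  for (const t of taskList) {
    if (completed.has(t.subject) && t.status !== "completed") {
      taskUpdates.push({ subject: t.subject, status: "completed" }); // correct TaskList
    } else if (t.status === "completed") {
      completed.add(t.subject);                                      // backfill session
    } else if (t.status === "in_progress") {
      taskUpdates.push({ subject: t.subject, status: "pending" });   // reset interrupted
    }
  }
  return { completed: [...completed], taskUpdates };
}
```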

## Coordinator Spawn Template

When the coordinator creates teammates, it uses this pattern:

```javascript
TeamCreate({ team_name: teamName })

// For each worker role:
Task({
  subagent_type: "general-purpose",
  description: `Spawn ${roleName} worker`, // ← required parameter
  team_name: teamName,
  name: "<role_name>",
  prompt: `You are the <ROLE_NAME_UPPER> of team "${teamName}".

## ⚠️ Prime Directive (MUST)
All of your work must be executed through the role definition loaded via Skill; do not improvise:
  Skill(skill="team-lifecycle-v2", args="--role=<role_name>")
This call loads your role definition (role.md), available commands (commands/*.md), and full execution logic.

Current requirement: ${taskDescription}
Constraints: ${constraints}
Session: ${sessionFolder}

## Role Rules (mandatory)
- You may only process tasks with the <PREFIX>-* prefix; never do other roles' work
- All output (SendMessage, team_msg) must carry the [<role_name>] tag prefix
- Communicate only with the coordinator; never contact other workers directly
- Never use TaskCreate to create tasks for other roles

## Message Bus (required)
Before every SendMessage, first call mcp__ccw-tools__team_msg to log.

## Workflow (in strict order)
1. Call Skill(skill="team-lifecycle-v2", args="--role=<role_name>") to load your role definition and execution logic
2. Follow the 5-phase process in role.md (TaskList → find <PREFIX>-* task → execute → report)
3. team_msg log + SendMessage the result to the coordinator (with the [<role_name>] tag)
4. TaskUpdate completed → check for the next task → back to step 1`
})
```

See [roles/coordinator/role.md](roles/coordinator/role.md) for the full spawn implementation with per-role prompts.

## Shared Spec Resources

In spec mode, the Writer and Reviewer roles use the standards and templates bundled with this skill (copied from spec-generator, maintained independently):

| Resource | Path | Usage |
|----------|------|-------|
| Document Standards | `specs/document-standards.md` | YAML frontmatter, naming conventions, content structure |
| Quality Gates | `specs/quality-gates.md` | Per-phase quality gates, scoring rubrics |
| Product Brief Template | `templates/product-brief.md` | DRAFT-001 document generation |
| Requirements Template | `templates/requirements-prd.md` | DRAFT-002 document generation |
| Architecture Template | `templates/architecture-doc.md` | DRAFT-003 document generation |
| Epics Template | `templates/epics-template.md` | DRAFT-004 document generation |

> Before executing each DRAFT-* task, the Writer **must first Read** the corresponding template file and document-standards.md.
> When referencing from a `roles/` subdirectory, the paths are `../../specs/` and `../../templates/`.

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode → auto-route to coordinator |
| Role file not found | Error with expected path (roles/{name}/role.md) |
| Command file not found | Fall back to inline execution in role.md |
| Task prefix conflict | Log warning, proceed |
# Role: analyst

Seed analysis, codebase exploration, and multi-dimensional context gathering. Maps to spec-generator Phase 1 (Discovery).

## Role Identity

- **Name**: `analyst`
- **Task Prefix**: `RESEARCH-*`
- **Output Tag**: `[analyst]`
- **Responsibility**: Seed Analysis → Codebase Exploration → Context Packaging → Report
- **Communication**: SendMessage to coordinator only

## Role Boundaries

### MUST
- Only process RESEARCH-* tasks
- Communicate only with coordinator
- Use Toolbox tools (ACE search, Gemini CLI)
- Generate discovery-context.json and spec-config.json
- Support file reference input (@ prefix or .md/.txt extension)

### MUST NOT
- Create tasks for other roles
- Directly contact other workers
- Modify spec documents (only create discovery-context.json and spec-config.json)
- Skip the seed analysis step
- Proceed without codebase detection

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `research_ready` | analyst → coordinator | Research complete | With discovery-context.json path and dimension summary |
| `research_progress` | analyst → coordinator | Long research in progress | Intermediate progress update |
| `error` | analyst → coordinator | Unrecoverable error | Codebase access failure, CLI timeout, etc. |

## Message Bus

Before every `SendMessage`, you MUST call `mcp__ccw-tools__team_msg` to log:

```javascript
// Research complete
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "analyst",
  to: "coordinator",
  type: "research_ready",
  summary: "[analyst] Research done: 5 exploration dimensions",
  ref: `${sessionFolder}/spec/discovery-context.json`
})

// Error report
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "analyst",
  to: "coordinator",
  type: "error",
  summary: "[analyst] Codebase access failed"
})
```

### CLI Fallback

When the `mcp__ccw-tools__team_msg` MCP is unavailable:

```bash
ccw team log --team "${teamName}" --from "analyst" --to "coordinator" --type "research_ready" --summary "[analyst] Research done" --ref "${sessionFolder}/spec/discovery-context.json" --json
```

## Toolbox

### Available Commands
- None (simple enough for inline execution)

### Subagent Capabilities
- None

### CLI Capabilities
- `ccw cli --tool gemini --mode analysis` for seed analysis

## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('RESEARCH-') &&
  t.owner === 'analyst' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```

### Phase 2: Seed Analysis

```javascript
// Extract session folder from task description
const sessionMatch = task.description.match(/Session:\s*(.+)/)
const sessionFolder = sessionMatch ? sessionMatch[1].trim() : '.workflow/.team/default'

// Parse topic from task description
const topicLines = task.description.split('\n').filter(l => !l.startsWith('Session:') && !l.startsWith('输出:') && l.trim())
const rawTopic = topicLines[0] || task.subject.replace('RESEARCH-001: ', '')

// Support file reference input (consistent with spec-generator Phase 1)
const topic = (rawTopic.startsWith('@') || rawTopic.endsWith('.md') || rawTopic.endsWith('.txt'))
  ? Read(rawTopic.replace(/^@/, ''))
  : rawTopic

// Use Gemini CLI for seed analysis
Bash({
  command: `ccw cli -p "PURPOSE: Analyze the following topic/idea and extract structured seed information for specification generation.
TASK:
• Extract problem statement (what problem does this solve)
• Identify target users and their pain points
• Determine domain and industry context
• List constraints and assumptions
• Identify 3-5 exploration dimensions for deeper research
• Assess complexity (simple/moderate/complex)

TOPIC: ${topic}

MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON output with fields: problem_statement, target_users[], domain, constraints[], exploration_dimensions[], complexity_assessment
CONSTRAINTS: Output as valid JSON" --tool gemini --mode analysis --rule analysis-analyze-technical-document`,
  run_in_background: true
})
// Wait for CLI result, then parse seedAnalysis from output
```

### Phase 3: Codebase Exploration (conditional)

```javascript
// Check if there's an existing codebase to explore
const hasProject = Bash(`test -f package.json || test -f Cargo.toml || test -f pyproject.toml || test -f go.mod; echo $?`)

let codebaseContext = null
if (hasProject === '0') {
  mcp__ccw-tools__team_msg({
    operation: "log",
    team: teamName,
    from: "analyst",
    to: "coordinator",
    type: "research_progress",
    summary: "[analyst] Seed analysis complete, starting codebase exploration"
  })

  // Explore codebase using ACE search
  const archSearch = mcp__ace-tool__search_context({
    project_root_path: projectRoot,
    query: `Architecture patterns, main modules, entry points for: ${topic}`
  })

  // Detect tech stack from package files
  // Explore existing patterns and integration points

  codebaseContext = {
    tech_stack,
    architecture_patterns,
    existing_conventions,
    integration_points,
    constraints_from_codebase: []
  }
}
```

### Phase 4: Context Packaging

```javascript
// Generate spec-config.json
const specConfig = {
  session_id: `SPEC-${topicSlug}-${dateStr}`,
  topic: topic,
  status: "research_complete",
  complexity: seedAnalysis.complexity_assessment || "moderate",
  depth: task.description.match(/讨论深度:\s*(.+)/)?.[1] || "standard",
  focus_areas: seedAnalysis.exploration_dimensions || [],
  mode: "interactive", // team mode is always interactive
  phases_completed: ["discovery"],
  created_at: new Date().toISOString(),
  session_folder: sessionFolder,
  discussion_depth: task.description.match(/讨论深度:\s*(.+)/)?.[1] || "standard"
}
Write(`${sessionFolder}/spec/spec-config.json`, JSON.stringify(specConfig, null, 2))

// Generate discovery-context.json
const discoveryContext = {
  session_id: specConfig.session_id,
  phase: 1,
  document_type: "discovery-context",
  status: "complete",
  generated_at: new Date().toISOString(),
  seed_analysis: {
    problem_statement: seedAnalysis.problem_statement,
    target_users: seedAnalysis.target_users,
    domain: seedAnalysis.domain,
    constraints: seedAnalysis.constraints,
    exploration_dimensions: seedAnalysis.exploration_dimensions,
    complexity: seedAnalysis.complexity_assessment
  },
  codebase_context: codebaseContext,
  recommendations: { focus_areas: [], risks: [], open_questions: [] }
}
Write(`${sessionFolder}/spec/discovery-context.json`, JSON.stringify(discoveryContext, null, 2))
```
|
||||
|
||||
### Phase 5: Report to Coordinator

```javascript
const dimensionCount = discoveryContext.seed_analysis.exploration_dimensions?.length || 0
const hasCodebase = codebaseContext !== null

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "analyst", to: "coordinator",
  type: "research_ready",
  summary: `[analyst] Research complete: ${dimensionCount} exploration dimensions, ${hasCodebase ? 'with' : 'no'} codebase context, complexity=${specConfig.complexity}`,
  ref: `${sessionFolder}/spec/discovery-context.json`
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `[analyst] ## Research Analysis Results

**Task**: ${task.subject}
**Complexity**: ${specConfig.complexity}
**Codebase**: ${hasCodebase ? 'Existing project detected' : 'New project'}

### Problem Statement
${discoveryContext.seed_analysis.problem_statement}

### Target Users
${(discoveryContext.seed_analysis.target_users || []).map(u => '- ' + u).join('\n')}

### Exploration Dimensions
${(discoveryContext.seed_analysis.exploration_dimensions || []).map((d, i) => (i+1) + '. ' + d).join('\n')}

### Output Locations
- Config: ${sessionFolder}/spec/spec-config.json
- Context: ${sessionFolder}/spec/discovery-context.json

Research is ready; discussion round DISCUSS-001 can begin.`,
  summary: `[analyst] Research ready: ${dimensionCount} dimensions, ${specConfig.complexity}`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next RESEARCH task → back to Phase 1
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No RESEARCH-* tasks available | Idle, wait for coordinator assignment |
| Gemini CLI analysis failure | Fall back to direct Claude analysis without the CLI |
| Codebase detection failed | Continue as new project (no codebase context) |
| Session folder cannot be created | Notify coordinator, request alternative path |
| Topic too vague for analysis | Report to coordinator with clarification questions |
| Unexpected error | Log error via team_msg, report to coordinator |

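The fallback rows above follow one pattern that can be sketched as a generic wrapper. This is an illustrative sketch only; `runCli` and `runDirect` are hypothetical stand-ins for the real Gemini CLI call and the direct Claude analysis, not actual tool APIs:

```javascript
// Illustrative sketch: try the CLI path first, degrade to direct analysis.
// `runCli` and `runDirect` are hypothetical stand-ins, not real tool APIs.
function withFallback(runCli, runDirect) {
  try {
    return { source: 'cli', result: runCli() }
  } catch (err) {
    // CLI failure → continue with direct analysis instead of aborting
    return { source: 'direct', result: runDirect(), error: String(err) }
  }
}
```

The same shape covers the "codebase detection failed" row: the fallback branch returns a usable (if degraded) result and records why the primary path failed.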
@@ -1,271 +0,0 @@

# Assess Command

## Purpose
Multi-mode architecture assessment with mode-specific analysis strategies. Delegated from architect role.md Phase 3.

## Input Context

```javascript
// Provided by role.md Phase 2
const { consultMode, sessionFolder, wisdom, explorations, projectTech, task } = context
```

## Mode Strategies

### spec-review (ARCH-SPEC-*)

Reviews the technical soundness of architecture documents.

```javascript
const dimensions = [
  { name: 'consistency', weight: 0.25 },
  { name: 'scalability', weight: 0.25 },
  { name: 'security', weight: 0.25 },
  { name: 'tech-fitness', weight: 0.25 }
]

// Load architecture documents
const archIndex = Read(`${sessionFolder}/spec/architecture/_index.md`)
const adrFiles = Glob({ pattern: `${sessionFolder}/spec/architecture/ADR-*.md` })
const adrs = adrFiles.map(f => ({ path: f, content: Read(f) }))

// Check ADR consistency
const adrDecisions = adrs.map(adr => {
  const status = adr.content.match(/status:\s*(\w+)/i)?.[1]
  const context = adr.content.match(/## Context\n([\s\S]*?)##/)?.[1]?.trim()
  const decision = adr.content.match(/## Decision\n([\s\S]*?)##/)?.[1]?.trim()
  return { path: adr.path, status, context, decision }
})

// Cross-reference: ADR decisions vs architecture index
// Flag contradictions between ADRs
// Check if tech choices align with project-tech.json

for (const dim of dimensions) {
  const score = evaluateDimension(dim.name, archIndex, adrs, projectTech)
  assessment.dimensions.push({ name: dim.name, score, weight: dim.weight })
}
```

### plan-review (ARCH-PLAN-*)

Reviews the architectural soundness of the implementation plan.

```javascript
const plan = JSON.parse(Read(`${sessionFolder}/plan/plan.json`))
const taskFiles = Glob({ pattern: `${sessionFolder}/plan/.task/TASK-*.json` })
const tasks = taskFiles.map(f => JSON.parse(Read(f)))

// 1. Dependency cycle detection
function detectCycles(tasks) {
  const graph = {}
  tasks.forEach(t => { graph[t.id] = t.depends_on || [] })
  const visited = new Set(), inStack = new Set()
  function dfs(node) {
    if (inStack.has(node)) return true // cycle
    if (visited.has(node)) return false
    visited.add(node); inStack.add(node)
    for (const dep of (graph[node] || [])) {
      if (dfs(dep)) return true
    }
    inStack.delete(node)
    return false
  }
  return Object.keys(graph).filter(n => dfs(n))
}
const cycles = detectCycles(tasks)
if (cycles.length > 0) {
  assessment.concerns.push({
    severity: 'high',
    concern: `Circular dependency detected: ${cycles.join(' → ')}`,
    suggestion: 'Break cycle by extracting shared interface or reordering tasks'
  })
}

// 2. Task granularity check
tasks.forEach(t => {
  const fileCount = (t.files || []).length
  if (fileCount > 8) {
    assessment.concerns.push({
      severity: 'medium',
      task: t.id,
      concern: `Task touches ${fileCount} files — may be too coarse`,
      suggestion: 'Split into smaller tasks with clearer boundaries'
    })
  }
})

// 3. Convention compliance (from wisdom)
if (wisdom.conventions) {
  // Check if plan follows discovered conventions
}

// 4. Architecture alignment (from wisdom.decisions)
if (wisdom.decisions) {
  // Verify plan doesn't contradict previous architectural decisions
}
```

### code-review (ARCH-CODE-*)

Assesses the architectural impact of code changes.

```javascript
const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`)
  .split('\n').filter(Boolean)

// 1. Layer violation detection
function detectLayerViolation(file, content) {
  // Check import depth — deeper layers should not import from shallower
  const imports = (content.match(/from\s+['"]([^'"]+)['"]/g) || [])
    .map(i => i.match(/['"]([^'"]+)['"]/)?.[1]).filter(Boolean)
  // isUpwardImport: project-specific layering helper, defined elsewhere
  return imports.filter(imp => isUpwardImport(file, imp))
}

// 2. New dependency analysis
const pkgChanges = changedFiles.filter(f => f.includes('package.json'))
if (pkgChanges.length > 0) {
  for (const pkg of pkgChanges) {
    const diff = Bash(`git diff HEAD~1 -- ${pkg} 2>/dev/null || git diff --cached -- ${pkg}`)
    const newDeps = (diff.match(/\+\s+"([^"]+)":\s+"[^"]+"/g) || [])
      .map(d => d.match(/"([^"]+)"/)?.[1]).filter(Boolean)
    if (newDeps.length > 0) {
      assessment.recommendations.push({
        area: 'dependencies',
        suggestion: `New dependencies added: ${newDeps.join(', ')}. Verify license compatibility and bundle size impact.`
      })
    }
  }
}

// 3. Module boundary changes
const indexChanges = changedFiles.filter(f => f.endsWith('index.ts') || f.endsWith('index.js'))
if (indexChanges.length > 0) {
  assessment.concerns.push({
    severity: 'medium',
    concern: `Module boundary files modified: ${indexChanges.join(', ')}`,
    suggestion: 'Verify public API changes are intentional and backward compatible'
  })
}

// 4. Architectural impact scoring
assessment.architectural_impact = changedFiles.length > 10 ? 'high'
  : indexChanges.length > 0 || pkgChanges.length > 0 ? 'medium' : 'low'
```

### consult (ARCH-CONSULT-*)

Answers architecture decision consultations.

```javascript
const question = task.description
  .replace(/Session:.*\n?/g, '')
  .replace(/Requester:.*\n?/g, '')
  .trim()

const isComplex = question.length > 200 ||
  /architect|design|pattern|refactor|migrate|scalab/i.test(question)

if (isComplex) {
  // Use cli-explore-agent for deep exploration
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Architecture consultation: ${question.substring(0, 80)}`,
    prompt: `## Architecture Consultation

Question: ${question}

## Steps
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Search for relevant architectural patterns in codebase
3. Read .workflow/project-tech.json (if exists)
4. Analyze architectural implications

## Output
Write to: ${sessionFolder}/architecture/consult-exploration.json
Schema: { relevant_files[], patterns[], architectural_implications[], options[] }`
  })

  // Parse exploration results into assessment
  try {
    const exploration = JSON.parse(Read(`${sessionFolder}/architecture/consult-exploration.json`))
    assessment.recommendations = (exploration.options || []).map(opt => ({
      area: 'architecture',
      suggestion: `${opt.name}: ${opt.description}`,
      trade_offs: opt.trade_offs || []
    }))
  } catch {}
} else {
  // Simple consultation — direct analysis
  assessment.recommendations.push({
    area: 'architecture',
    suggestion: `Direct answer based on codebase context and wisdom`
  })
}
```

### feasibility (ARCH-FEASIBILITY-*)

Assesses technical feasibility.

```javascript
const proposal = task.description
  .replace(/Session:.*\n?/g, '')
  .replace(/Requester:.*\n?/g, '')
  .trim()

// 1. Tech stack compatibility
const techStack = projectTech?.tech_stack || {}
// Check if proposal requires technologies not in current stack

// 2. Codebase readiness
// Use ACE search to find relevant integration points
const searchResults = mcp__ace-tool__search_context({
  project_root_path: '.',
  query: proposal
})

// 3. Effort estimation
const touchPoints = (searchResults?.relevant_files || []).length
const effort = touchPoints > 20 ? 'high' : touchPoints > 5 ? 'medium' : 'low'

// 4. Risk assessment
assessment.verdict = 'FEASIBLE' // FEASIBLE | RISKY | INFEASIBLE
assessment.effort_estimate = effort
assessment.prerequisites = []
assessment.risks = []

if (touchPoints > 20) {
  assessment.verdict = 'RISKY'
  assessment.risks.push({
    risk: 'High touch-point count suggests significant refactoring',
    mitigation: 'Phase the implementation, start with core module'
  })
}
```

## Verdict Logic

```javascript
function determineVerdict(assessment) {
  const highConcerns = (assessment.concerns || []).filter(c => c.severity === 'high')
  const mediumConcerns = (assessment.concerns || []).filter(c => c.severity === 'medium')

  if (highConcerns.length >= 2) return 'BLOCK'
  if (highConcerns.length >= 1 || mediumConcerns.length >= 3) return 'CONCERN'
  return 'APPROVE'
}

assessment.overall_verdict = determineVerdict(assessment)
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Architecture docs not found | Assess from available context, note limitation in report |
| Plan file missing | Report to coordinator via arch_concern |
| Git diff fails (no commits) | Use staged changes or skip code-review mode |
| CLI exploration timeout | Provide partial assessment, flag as incomplete |
| Exploration results unparseable | Fall back to direct analysis without exploration |

@@ -1,368 +0,0 @@

# Role: architect

Architecture consultant. Provides architecture decision consulting, technical feasibility assessment, and design pattern advice. A consulting role that contributes expert judgment at key points in the spec and impl pipelines.

## Role Identity

- **Name**: `architect`
- **Task Prefix**: `ARCH-*`
- **Responsibility**: Context loading → Mode detection → Architecture analysis → Package assessment → Report
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[architect]`
- **Role Type**: Consulting (advisory role; does not block the main pipeline; its output is referenced by callers)

## Role Boundaries

### MUST

- Handle only tasks with the `ARCH-*` prefix
- Tag all output (SendMessage, team_msg, logs) with `[architect]`
- Communicate only with the coordinator, via SendMessage
- Produce structured assessment reports for callers to consume
- Switch consultation mode automatically based on the task prefix

### MUST NOT

- ❌ Modify source code files directly
- ❌ Take on other roles' duties such as requirements analysis, implementation, or testing
- ❌ Communicate directly with other worker roles
- ❌ Create tasks for other roles
- ❌ Make final decisions (advice only; decisions rest with the coordinator/user)
- ❌ Omit the `[architect]` tag from output

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `arch_ready` | architect → coordinator | Consultation complete | Architecture assessment/advice is ready |
| `arch_concern` | architect → coordinator | Significant risk found | Major architectural risk detected |
| `arch_progress` | architect → coordinator | Long analysis progress | Progress update during complex analysis |
| `error` | architect → coordinator | Analysis failure | Analysis failed or context insufficient |

## Message Bus

**Before** every SendMessage, log the message via `mcp__ccw-tools__team_msg`:

```javascript
// Consultation complete
mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "architect", to: "coordinator",
  type: "arch_ready",
  summary: "[architect] ARCH complete: 3 recommendations, 1 concern",
  ref: outputPath
})

// Risk alert
mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "architect", to: "coordinator",
  type: "arch_concern",
  summary: "[architect] RISK: circular dependency in module graph"
})
```

### CLI Fallback

When the `mcp__ccw-tools__team_msg` MCP is unavailable, use the `ccw team` CLI as an equivalent fallback:

```javascript
Bash(`ccw team log --team "${teamName}" --from "architect" --to "coordinator" --type "arch_ready" --summary "[architect] ARCH complete" --ref "${outputPath}" --json`)
```

**Parameter mapping**: `team_msg(params)` → `ccw team log --team <team> --from architect --to coordinator --type <type> --summary "<text>" [--ref <path>] [--json]`

## Toolbox

### Available Commands
- `commands/assess.md` — Multi-mode architecture assessment (Phase 3)

### Subagent Capabilities

| Agent Type | Used By | Purpose |
|------------|---------|---------|
| `cli-explore-agent` | commands/assess.md | Deep architecture exploration (module dependencies, layering) |

### CLI Capabilities

| CLI Tool | Mode | Used By | Purpose |
|----------|------|---------|---------|
| `ccw cli --tool gemini --mode analysis` | analysis | commands/assess.md | Architecture analysis, pattern evaluation |

## Consultation Modes

Selected automatically from the task subject prefix:

| Mode | Task Pattern | Focus | Output |
|------|-------------|-------|--------|
| `spec-review` | ARCH-SPEC-* | Review architecture documents (ADRs, component diagrams) | Architecture review report |
| `plan-review` | ARCH-PLAN-* | Review architectural soundness of the implementation plan | Plan review feedback |
| `code-review` | ARCH-CODE-* | Assess architectural impact of code changes | Architectural impact analysis |
| `consult` | ARCH-CONSULT-* | Answer architecture decision questions | Decision recommendations |
| `feasibility` | ARCH-FEASIBILITY-* | Assess technical feasibility | Feasibility report |

## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('ARCH-') &&
  t.owner === 'architect' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```

### Phase 2: Context Loading & Mode Detection

```javascript
const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim()

// Auto-detect consultation mode from task subject
const MODE_MAP = {
  'ARCH-SPEC': 'spec-review',
  'ARCH-PLAN': 'plan-review',
  'ARCH-CODE': 'code-review',
  'ARCH-CONSULT': 'consult',
  'ARCH-FEASIBILITY': 'feasibility'
}
const modePrefix = Object.keys(MODE_MAP).find(p => task.subject.startsWith(p))
const consultMode = modePrefix ? MODE_MAP[modePrefix] : 'consult'

// Load wisdom (accumulated knowledge from previous tasks)
let wisdom = {}
if (sessionFolder) {
  try { wisdom.learnings = Read(`${sessionFolder}/wisdom/learnings.md`) } catch {}
  try { wisdom.decisions = Read(`${sessionFolder}/wisdom/decisions.md`) } catch {}
  try { wisdom.conventions = Read(`${sessionFolder}/wisdom/conventions.md`) } catch {}
}

// Load project tech context
let projectTech = {}
try { projectTech = JSON.parse(Read('.workflow/project-tech.json')) } catch {}

// Load exploration results if available
let explorations = []
if (sessionFolder) {
  try {
    const exploreFiles = Glob({ pattern: `${sessionFolder}/explorations/*.json` })
    explorations = exploreFiles.map(f => {
      try { return JSON.parse(Read(f)) } catch { return null }
    }).filter(Boolean)
  } catch {}
}
```

### Phase 3: Architecture Assessment

Delegate to the command file for mode-specific analysis:

```javascript
try {
  const assessCommand = Read("commands/assess.md")
  // Execute mode-specific strategy defined in command file
  // Input: consultMode, sessionFolder, wisdom, explorations, projectTech
  // Output: assessment object
} catch {
  // Fallback: inline execution (see below)
}
```

**Command**: [commands/assess.md](commands/assess.md)

**Inline Fallback** (when command file unavailable):

```javascript
const assessment = {
  mode: consultMode,
  overall_verdict: 'APPROVE', // APPROVE | CONCERN | BLOCK
  dimensions: [],
  concerns: [],
  recommendations: [],
  _metadata: { timestamp: new Date().toISOString(), wisdom_loaded: Object.keys(wisdom).length > 0 }
}

// Mode-specific analysis
if (consultMode === 'spec-review') {
  // Load architecture documents, check ADR consistency, scalability, security
  const archIndex = Read(`${sessionFolder}/spec/architecture/_index.md`)
  const adrFiles = Glob({ pattern: `${sessionFolder}/spec/architecture/ADR-*.md` })
  // Score dimensions: consistency, scalability, security, tech-fitness
}

if (consultMode === 'plan-review') {
  // Load plan.json, check task granularity, dependency cycles, convention compliance
  const plan = JSON.parse(Read(`${sessionFolder}/plan/plan.json`))
  // Detect circular dependencies, oversized tasks, missing risk assessment
}

if (consultMode === 'code-review') {
  // Analyze changed files for layer violations, new deps, module boundary changes
  const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`)
    .split('\n').filter(Boolean)
  // Check import depth, package.json changes, index.ts modifications
}

if (consultMode === 'consult') {
  // Free-form consultation — use CLI for complex questions
  const question = task.description.replace(/Session:.*\n?/g, '').replace(/Requester:.*\n?/g, '').trim()
  const isComplex = question.length > 200 || /architect|design|pattern|refactor|migrate/i.test(question)
  if (isComplex) {
    Bash({
      command: `ccw cli -p "PURPOSE: Architecture consultation — ${question}
TASK: • Analyze architectural implications • Identify options with trade-offs • Recommend approach
MODE: analysis
CONTEXT: @**/*
EXPECTED: Structured analysis with options, trade-offs, recommendation
CONSTRAINTS: Architecture-level only" --tool gemini --mode analysis --rule analysis-review-architecture`,
      run_in_background: true
    })
    // Wait for result, parse into assessment
  }
}

if (consultMode === 'feasibility') {
  // Assess technical feasibility against current codebase
  // Output: verdict (FEASIBLE|RISKY|INFEASIBLE), risks, effort estimate, prerequisites
}
```

### Phase 4: Package & Wisdom Contribution

```javascript
// Write assessment to session
const outputPath = sessionFolder
  ? `${sessionFolder}/architecture/arch-${task.subject.replace(/[^a-zA-Z0-9-]/g, '-').toLowerCase()}.json`
  : '.workflow/.tmp/arch-assessment.json'

Bash(`mkdir -p "$(dirname '${outputPath}')"`)
Write(outputPath, JSON.stringify(assessment, null, 2))

// Contribute to wisdom: record architectural decisions
if (sessionFolder && assessment.recommendations?.length > 0) {
  try {
    const decisionsPath = `${sessionFolder}/wisdom/decisions.md`
    const existing = Read(decisionsPath)
    const newDecisions = assessment.recommendations
      .map(r => `- [${new Date().toISOString().substring(0, 10)}] ${r.area || r.dimension}: ${r.suggestion}`)
      .join('\n')
    Write(decisionsPath, existing + '\n' + newDecisions)
  } catch {} // wisdom not initialized
}
```

### Phase 5: Report to Coordinator

```javascript
const verdict = assessment.overall_verdict || assessment.verdict || 'N/A'
const concernCount = (assessment.concerns || []).length
const highConcerns = (assessment.concerns || []).filter(c => c.severity === 'high').length
const recCount = (assessment.recommendations || []).length

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "architect", to: "coordinator",
  type: highConcerns > 0 ? "arch_concern" : "arch_ready",
  summary: `[architect] ARCH ${consultMode}: ${verdict}, ${concernCount} concerns, ${recCount} recommendations`,
  ref: outputPath
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `[architect] ## Architecture Assessment

**Task**: ${task.subject}
**Mode**: ${consultMode}
**Verdict**: ${verdict}

### Summary
- **Concerns**: ${concernCount} (${highConcerns} high)
- **Recommendations**: ${recCount}
${assessment.architectural_impact ? `- **Impact**: ${assessment.architectural_impact}` : ''}

${assessment.dimensions?.length > 0 ? `### Dimension Scores
${assessment.dimensions.map(d => `- **${d.name}**: ${d.score}%`).join('\n')}` : ''}

${concernCount > 0 ? `### Concerns
${assessment.concerns.map(c => `- [${(c.severity || 'medium').toUpperCase()}] ${c.task || c.file || ''}: ${c.concern}`).join('\n')}` : ''}

### Recommendations
${(assessment.recommendations || []).map(r => `- ${r.area || r.dimension || ''}: ${r.suggestion}`).join('\n') || 'None'}

### Output: ${outputPath}`,
  summary: `[architect] ARCH ${consultMode}: ${verdict}`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next ARCH task → back to Phase 1
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('ARCH-') &&
  t.owner === 'architect' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
  // Continue → back to Phase 1
}
```

## Coordinator Integration

The coordinator creates ARCH-* tasks for the architect on demand at key pipeline points:

### Spec Pipeline (after DRAFT-003, before DISCUSS-004)

```javascript
TaskCreate({
  subject: 'ARCH-SPEC-001: Expert review of architecture documents',
  description: `Review the technical soundness of the architecture documents\n\nSession: ${sessionFolder}\nInput: ${sessionFolder}/spec/architecture/`,
  activeForm: 'Reviewing architecture'
})
TaskUpdate({ taskId: archSpecId, owner: 'architect' })
// DISCUSS-004 addBlockedBy [archSpecId]
```

### Impl Pipeline (after PLAN-001, before IMPL-001)

```javascript
TaskCreate({
  subject: 'ARCH-PLAN-001: Architecture review of implementation plan',
  description: `Review the architectural soundness of the implementation plan\n\nSession: ${sessionFolder}\nPlan: ${sessionFolder}/plan/plan.json`,
  activeForm: 'Reviewing plan'
})
TaskUpdate({ taskId: archPlanId, owner: 'architect' })
// IMPL-001 addBlockedBy [archPlanId]
```

### On-Demand (any point via coordinator)

```javascript
TaskCreate({
  subject: 'ARCH-CONSULT-001: Architecture decision consultation',
  description: `${question}\n\nSession: ${sessionFolder}\nRequester: ${role}`,
  activeForm: 'Consulting on architecture'
})
TaskUpdate({ taskId: archConsultId, owner: 'architect' })
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No ARCH-* tasks available | Idle, wait for coordinator assignment |
| Architecture documents not found | Assess from available context, note limitation |
| Plan file not found | Report to coordinator, request location |
| CLI analysis timeout | Provide partial assessment, note incomplete |
| Insufficient context | Request explorer to gather more context via coordinator |
| Conflicting requirements | Flag as concern, provide options |
| Command file not found | Fall back to inline execution |
| Unexpected error | Log error via team_msg, report to coordinator |

@@ -1,523 +0,0 @@

# Dispatch Command - Task Chain Creation

**Purpose**: Create task chains based on execution mode, aligned with SKILL.md Three-Mode Pipeline

**Invoked by**: Coordinator role.md Phase 3

**Output Tag**: `[coordinator]`

---

## Task Chain Strategies

### Role-Task Mapping (Source of Truth: SKILL.md VALID_ROLES)

| Task Prefix | Role | VALID_ROLES Key |
|-------------|------|-----------------|
| RESEARCH-* | analyst | `analyst` |
| DISCUSS-* | discussant | `discussant` |
| DRAFT-* | writer | `writer` |
| QUALITY-* | reviewer | `reviewer` |
| PLAN-* | planner | `planner` |
| IMPL-* | executor | `executor` |
| TEST-* | tester | `tester` |
| REVIEW-* | reviewer | `reviewer` |
| DEV-FE-* | fe-developer | `fe-developer` |
| QA-FE-* | fe-qa | `fe-qa` |
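The table above can be expressed as a small lookup. This is an illustrative sketch only; `ROLE_BY_PREFIX` and `resolveOwner` are hypothetical names, not part of the dispatch command itself:

```javascript
// Sketch of the prefix → role table; longest prefix wins so that
// "DEV-FE-*" and "QA-FE-*" are never shadowed by shorter keys.
const ROLE_BY_PREFIX = {
  'RESEARCH': 'analyst', 'DISCUSS': 'discussant', 'DRAFT': 'writer',
  'QUALITY': 'reviewer', 'PLAN': 'planner', 'IMPL': 'executor',
  'TEST': 'tester', 'REVIEW': 'reviewer',
  'DEV-FE': 'fe-developer', 'QA-FE': 'fe-qa'
}

function resolveOwner(subject) {
  // Subjects look like "RESEARCH-001" or "DEV-FE-003"
  const prefix = Object.keys(ROLE_BY_PREFIX)
    .sort((a, b) => b.length - a.length)
    .find(p => subject.startsWith(p + '-'))
  return prefix ? ROLE_BY_PREFIX[prefix] : null
}
```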

---

### Strategy 1: Spec-Only Mode (12 tasks)

Pipeline: `RESEARCH → DISCUSS → DRAFT → DISCUSS → DRAFT → DISCUSS → DRAFT → DISCUSS → DRAFT → DISCUSS → QUALITY → DISCUSS`

```javascript
if (requirements.mode === "spec-only") {
  Output("[coordinator] Creating spec-only task chain (12 tasks)")

  // Task 1: Seed Analysis
  TaskCreate({
    subject: "RESEARCH-001",
    owner: "analyst",
    description: `Seed analysis: codebase exploration and context gathering\nSession: ${sessionFolder}\nScope: ${requirements.scope}\nFocus: ${requirements.focus.join(", ")}\nDepth: ${requirements.depth}`,
    blockedBy: [],
    status: "pending"
  })

  // Task 2: Critique Research
  TaskCreate({
    subject: "DISCUSS-001",
    owner: "discussant",
    description: `Critique research findings from RESEARCH-001, identify gaps and clarify scope\nSession: ${sessionFolder}`,
    blockedBy: ["RESEARCH-001"],
    status: "pending"
  })

  // Task 3: Product Brief
  TaskCreate({
    subject: "DRAFT-001",
    owner: "writer",
    description: `Generate Product Brief based on RESEARCH-001 findings and DISCUSS-001 feedback\nSession: ${sessionFolder}`,
    blockedBy: ["DISCUSS-001"],
    status: "pending"
  })

  // Task 4: Critique Product Brief
  TaskCreate({
    subject: "DISCUSS-002",
    owner: "discussant",
    description: `Critique Product Brief (DRAFT-001), evaluate completeness and clarity\nSession: ${sessionFolder}`,
    blockedBy: ["DRAFT-001"],
    status: "pending"
  })

  // Task 5: Requirements/PRD
  TaskCreate({
    subject: "DRAFT-002",
    owner: "writer",
    description: `Generate Requirements/PRD incorporating DISCUSS-002 feedback\nSession: ${sessionFolder}`,
    blockedBy: ["DISCUSS-002"],
    status: "pending"
  })

  // Task 6: Critique Requirements
  TaskCreate({
    subject: "DISCUSS-003",
    owner: "discussant",
    description: `Critique Requirements/PRD (DRAFT-002), validate coverage and feasibility\nSession: ${sessionFolder}`,
    blockedBy: ["DRAFT-002"],
    status: "pending"
  })

  // Task 7: Architecture Document
  TaskCreate({
    subject: "DRAFT-003",
    owner: "writer",
    description: `Generate Architecture Document incorporating DISCUSS-003 feedback\nSession: ${sessionFolder}`,
    blockedBy: ["DISCUSS-003"],
    status: "pending"
  })

  // Task 8: Critique Architecture
  TaskCreate({
    subject: "DISCUSS-004",
    owner: "discussant",
    description: `Critique Architecture Document (DRAFT-003), evaluate design decisions\nSession: ${sessionFolder}`,
    blockedBy: ["DRAFT-003"],
    status: "pending"
  })

  // Task 9: Epics
  TaskCreate({
    subject: "DRAFT-004",
    owner: "writer",
    description: `Generate Epics document incorporating DISCUSS-004 feedback\nSession: ${sessionFolder}`,
    blockedBy: ["DISCUSS-004"],
    status: "pending"
  })

  // Task 10: Critique Epics
  TaskCreate({
    subject: "DISCUSS-005",
    owner: "discussant",
    description: `Critique Epics (DRAFT-004), validate task decomposition and priorities\nSession: ${sessionFolder}`,
    blockedBy: ["DRAFT-004"],
    status: "pending"
  })

  // Task 11: Spec Quality Check
  TaskCreate({
    subject: "QUALITY-001",
    owner: "reviewer",
    description: `5-dimension spec quality validation across all spec artifacts\nSession: ${sessionFolder}`,
    blockedBy: ["DISCUSS-005"],
    status: "pending"
  })

  // Task 12: Final Review Discussion
  TaskCreate({
    subject: "DISCUSS-006",
    owner: "discussant",
    description: `Final review discussion: address QUALITY-001 findings, sign-off\nSession: ${sessionFolder}`,
    blockedBy: ["QUALITY-001"],
    status: "pending"
  })

  Output("[coordinator] Spec-only task chain created (12 tasks)")
  Output("[coordinator] Starting with: RESEARCH-001 (analyst)")
}
```

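As a design note, a strictly linear chain like the one above could also be generated from a declarative list, which keeps the `blockedBy` links consistent by construction. A hedged sketch, not part of the command (`SPEC_CHAIN` and `buildChain` are hypothetical names):

```javascript
// Each entry is [subject, owner]; every task is blocked by its predecessor.
const SPEC_CHAIN = [
  ['RESEARCH-001', 'analyst'], ['DISCUSS-001', 'discussant'],
  ['DRAFT-001', 'writer'], ['DISCUSS-002', 'discussant'],
  ['DRAFT-002', 'writer'], ['DISCUSS-003', 'discussant'],
  ['DRAFT-003', 'writer'], ['DISCUSS-004', 'discussant'],
  ['DRAFT-004', 'writer'], ['DISCUSS-005', 'discussant'],
  ['QUALITY-001', 'reviewer'], ['DISCUSS-006', 'discussant']
]

function buildChain(chain) {
  return chain.map(([subject, owner], i) => ({
    subject, owner,
    blockedBy: i === 0 ? [] : [chain[i - 1][0]],
    status: 'pending'
  }))
}
```

Each generated object could then be passed to `TaskCreate` in order; the tradeoff is losing the per-task description text that the explicit calls above carry.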
---
|
||||
|
||||
### Strategy 2: Impl-Only Mode (4 tasks)

Pipeline: `PLAN → IMPL → TEST + REVIEW`

```javascript
if (requirements.mode === "impl-only") {
  Output("[coordinator] Creating impl-only task chain (4 tasks)")

  // Verify spec exists
  const specExists = AskUserQuestion({
    question: "Implementation mode requires existing specifications. Do you have a spec file?",
    choices: ["yes", "no"]
  })

  if (specExists === "no") {
    Output("[coordinator] ERROR: impl-only mode requires existing specifications")
    Output("[coordinator] Please run spec-only mode first or use full-lifecycle mode")
    throw new Error("Missing specifications for impl-only mode")
  }

  const specFile = AskUserQuestion({
    question: "Provide path to specification file:",
    type: "text"
  })

  const specContent = Read(specFile)
  if (!specContent) {
    throw new Error(`Specification file not found: ${specFile}`)
  }

  Output(`[coordinator] Using specification: ${specFile}`)

  // Task 1: Planning
  TaskCreate({
    subject: "PLAN-001",
    owner: "planner",
    description: `Multi-angle codebase exploration and structured planning\nSession: ${sessionFolder}\nSpec: ${specFile}\nScope: ${requirements.scope}`,
    blockedBy: [],
    status: "pending"
  })

  // Task 2: Implementation
  TaskCreate({
    subject: "IMPL-001",
    owner: "executor",
    description: `Code implementation following PLAN-001\nSession: ${sessionFolder}\nSpec: ${specFile}`,
    blockedBy: ["PLAN-001"],
    status: "pending"
  })

  // Task 3: Testing (parallel with REVIEW-001)
  TaskCreate({
    subject: "TEST-001",
    owner: "tester",
    description: `Adaptive test-fix cycles and quality gates\nSession: ${sessionFolder}`,
    blockedBy: ["IMPL-001"],
    status: "pending"
  })

  // Task 4: Code Review (parallel with TEST-001)
  TaskCreate({
    subject: "REVIEW-001",
    owner: "reviewer",
    description: `4-dimension code review of IMPL-001 output\nSession: ${sessionFolder}`,
    blockedBy: ["IMPL-001"],
    status: "pending"
  })

  Output("[coordinator] Impl-only task chain created (4 tasks)")
  Output("[coordinator] Starting with: PLAN-001 (planner)")
}
```

---
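Unlike the spec chain, this pipeline fans out at the end: TEST-001 and REVIEW-001 share the same blocker. A small runnable sketch (task shapes assumed; plain objects stand in for `TaskList` results) shows that both become ready in the same instant IMPL-001 completes:

```javascript
// Impl-only chain state right after IMPL-001 finishes.
const tasks = [
  { subject: "PLAN-001",   blockedBy: [],           status: "completed" },
  { subject: "IMPL-001",   blockedBy: ["PLAN-001"], status: "completed" },
  { subject: "TEST-001",   blockedBy: ["IMPL-001"], status: "pending" },
  { subject: "REVIEW-001", blockedBy: ["IMPL-001"], status: "pending" }
]

// A task is ready when it is pending and every blocker is completed.
const done = new Set(tasks.filter(t => t.status === "completed").map(t => t.subject))
const ready = tasks
  .filter(t => t.status === "pending")
  .filter(t => t.blockedBy.every(d => done.has(d)))
  .map(t => t.subject)

console.log(ready)  // [ 'TEST-001', 'REVIEW-001' ]
```

In parallel execution mode the coordinator can therefore dispatch the tester and reviewer concurrently.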
### Strategy 3: Full-Lifecycle Mode (16 tasks)

Pipeline: `[Spec pipeline 12] → PLAN(blockedBy: DISCUSS-006) → IMPL → TEST + REVIEW`

```javascript
if (requirements.mode === "full-lifecycle") {
  Output("[coordinator] Creating full-lifecycle task chain (16 tasks)")

  // ========================================
  // SPEC PHASE (12 tasks) — same as spec-only
  // ========================================

  TaskCreate({ subject: "RESEARCH-001", owner: "analyst", description: `Seed analysis: codebase exploration and context gathering\nSession: ${sessionFolder}\nScope: ${requirements.scope}\nFocus: ${requirements.focus.join(", ")}\nDepth: ${requirements.depth}`, blockedBy: [], status: "pending" })
  TaskCreate({ subject: "DISCUSS-001", owner: "discussant", description: `Critique research findings from RESEARCH-001\nSession: ${sessionFolder}`, blockedBy: ["RESEARCH-001"], status: "pending" })
  TaskCreate({ subject: "DRAFT-001", owner: "writer", description: `Generate Product Brief\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-001"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-002", owner: "discussant", description: `Critique Product Brief (DRAFT-001)\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-001"], status: "pending" })
  TaskCreate({ subject: "DRAFT-002", owner: "writer", description: `Generate Requirements/PRD\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-002"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-003", owner: "discussant", description: `Critique Requirements/PRD (DRAFT-002)\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-002"], status: "pending" })
  TaskCreate({ subject: "DRAFT-003", owner: "writer", description: `Generate Architecture Document\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-003"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-004", owner: "discussant", description: `Critique Architecture Document (DRAFT-003)\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-003"], status: "pending" })
  TaskCreate({ subject: "DRAFT-004", owner: "writer", description: `Generate Epics\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-004"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-005", owner: "discussant", description: `Critique Epics (DRAFT-004)\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-004"], status: "pending" })
  TaskCreate({ subject: "QUALITY-001", owner: "reviewer", description: `5-dimension spec quality validation\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-005"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-006", owner: "discussant", description: `Final review discussion and sign-off\nSession: ${sessionFolder}`, blockedBy: ["QUALITY-001"], status: "pending" })

  // ========================================
  // IMPL PHASE (4 tasks) — blocked by spec completion
  // ========================================

  TaskCreate({
    subject: "PLAN-001",
    owner: "planner",
    description: `Multi-angle codebase exploration and structured planning\nSession: ${sessionFolder}\nScope: ${requirements.scope}`,
    blockedBy: ["DISCUSS-006"], // Blocked until spec phase completes
    status: "pending"
  })

  TaskCreate({
    subject: "IMPL-001",
    owner: "executor",
    description: `Code implementation following PLAN-001\nSession: ${sessionFolder}`,
    blockedBy: ["PLAN-001"],
    status: "pending"
  })

  TaskCreate({
    subject: "TEST-001",
    owner: "tester",
    description: `Adaptive test-fix cycles and quality gates\nSession: ${sessionFolder}`,
    blockedBy: ["IMPL-001"],
    status: "pending"
  })

  TaskCreate({
    subject: "REVIEW-001",
    owner: "reviewer",
    description: `4-dimension code review of IMPL-001 output\nSession: ${sessionFolder}`,
    blockedBy: ["IMPL-001"],
    status: "pending"
  })

  Output("[coordinator] Full-lifecycle task chain created (16 tasks)")
  Output("[coordinator] Starting with: RESEARCH-001 (analyst)")
}
```

---
### Strategy 4: FE-Only Mode (3 tasks)

Pipeline: `PLAN → DEV-FE → QA-FE` (with GC loop: max 2 rounds)

```javascript
if (requirements.mode === "fe-only") {
  Output("[coordinator] Creating fe-only task chain (3 tasks)")

  TaskCreate({
    subject: "PLAN-001",
    owner: "planner",
    description: `Multi-angle codebase exploration and structured planning (frontend focus)\nSession: ${sessionFolder}\nScope: ${requirements.scope}`,
    blockedBy: [],
    status: "pending"
  })

  TaskCreate({
    subject: "DEV-FE-001",
    owner: "fe-developer",
    description: `Frontend component/page implementation following PLAN-001\nSession: ${sessionFolder}`,
    blockedBy: ["PLAN-001"],
    status: "pending"
  })

  TaskCreate({
    subject: "QA-FE-001",
    owner: "fe-qa",
    description: `5-dimension frontend QA for DEV-FE-001 output\nSession: ${sessionFolder}`,
    blockedBy: ["DEV-FE-001"],
    status: "pending"
  })

  // Note: the GC loop (DEV-FE-002 → QA-FE-002) is created dynamically by the coordinator
  // when the QA-FE-001 verdict = NEEDS_FIX (max 2 rounds)

  Output("[coordinator] FE-only task chain created (3 tasks)")
  Output("[coordinator] Starting with: PLAN-001 (planner)")
}
```

---
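The GC-loop decision above can be sketched as a small pure function. This is a hypothetical illustration of the cap logic only — the verdict string `NEEDS_FIX`, the round numbering, and the helper name `nextGcRound` are assumptions based on the notes in this document, not a fixed API:

```javascript
// Maximum number of dynamically created fix rounds, per the note above.
const MAX_GC_ROUNDS = 2

// Given the QA verdict and the round just finished, return the subjects of the
// next DEV-FE/QA-FE pair to create, or null when no further round is allowed.
function nextGcRound(verdict, round) {
  if (verdict !== "NEEDS_FIX") return null        // PASS → pipeline is done
  if (round >= MAX_GC_ROUNDS) return null         // cap reached → escalate instead
  const n = String(round + 1).padStart(3, "0")    // 1 → "002"
  return { dev: `DEV-FE-${n}`, qa: `QA-FE-${n}` }
}

console.log(nextGcRound("NEEDS_FIX", 1))  // { dev: 'DEV-FE-002', qa: 'QA-FE-002' }
console.log(nextGcRound("PASS", 1))       // null
console.log(nextGcRound("NEEDS_FIX", 2))  // null (cap reached)
```

The cap keeps a persistently failing frontend from looping forever: after round 2 the coordinator must escalate rather than spawn another fix cycle.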
### Strategy 5: Fullstack Mode (6 tasks)

Pipeline: `PLAN → IMPL ∥ DEV-FE → TEST ∥ QA-FE → REVIEW`

```javascript
if (requirements.mode === "fullstack") {
  Output("[coordinator] Creating fullstack task chain (6 tasks)")

  TaskCreate({
    subject: "PLAN-001",
    owner: "planner",
    description: `Multi-angle codebase exploration and structured planning (fullstack)\nSession: ${sessionFolder}\nScope: ${requirements.scope}`,
    blockedBy: [],
    status: "pending"
  })

  // Backend + Frontend in parallel
  TaskCreate({
    subject: "IMPL-001",
    owner: "executor",
    description: `Backend implementation following PLAN-001\nSession: ${sessionFolder}`,
    blockedBy: ["PLAN-001"],
    status: "pending"
  })

  TaskCreate({
    subject: "DEV-FE-001",
    owner: "fe-developer",
    description: `Frontend implementation following PLAN-001\nSession: ${sessionFolder}`,
    blockedBy: ["PLAN-001"],
    status: "pending"
  })

  // Testing + QA in parallel
  TaskCreate({
    subject: "TEST-001",
    owner: "tester",
    description: `Backend test-fix cycles\nSession: ${sessionFolder}`,
    blockedBy: ["IMPL-001"],
    status: "pending"
  })

  TaskCreate({
    subject: "QA-FE-001",
    owner: "fe-qa",
    description: `Frontend QA for DEV-FE-001\nSession: ${sessionFolder}`,
    blockedBy: ["DEV-FE-001"],
    status: "pending"
  })

  // Final review after all testing
  TaskCreate({
    subject: "REVIEW-001",
    owner: "reviewer",
    description: `Full code review (backend + frontend)\nSession: ${sessionFolder}`,
    blockedBy: ["TEST-001", "QA-FE-001"],
    status: "pending"
  })

  Output("[coordinator] Fullstack task chain created (6 tasks)")
  Output("[coordinator] Starting with: PLAN-001 (planner)")
}
```

---
### Strategy 6: Full-Lifecycle-FE Mode (18 tasks)

Pipeline: `[Spec 12] → PLAN(blockedBy: DISCUSS-006) → IMPL ∥ DEV-FE → TEST ∥ QA-FE → REVIEW`

```javascript
if (requirements.mode === "full-lifecycle-fe") {
  Output("[coordinator] Creating full-lifecycle-fe task chain (18 tasks)")

  // SPEC PHASE (12 tasks) — same as spec-only
  TaskCreate({ subject: "RESEARCH-001", owner: "analyst", description: `Seed analysis\nSession: ${sessionFolder}\nScope: ${requirements.scope}`, blockedBy: [], status: "pending" })
  TaskCreate({ subject: "DISCUSS-001", owner: "discussant", description: `Critique research findings\nSession: ${sessionFolder}`, blockedBy: ["RESEARCH-001"], status: "pending" })
  TaskCreate({ subject: "DRAFT-001", owner: "writer", description: `Generate Product Brief\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-001"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-002", owner: "discussant", description: `Critique Product Brief\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-001"], status: "pending" })
  TaskCreate({ subject: "DRAFT-002", owner: "writer", description: `Generate Requirements/PRD\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-002"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-003", owner: "discussant", description: `Critique Requirements\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-002"], status: "pending" })
  TaskCreate({ subject: "DRAFT-003", owner: "writer", description: `Generate Architecture Document\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-003"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-004", owner: "discussant", description: `Critique Architecture\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-003"], status: "pending" })
  TaskCreate({ subject: "DRAFT-004", owner: "writer", description: `Generate Epics\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-004"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-005", owner: "discussant", description: `Critique Epics\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-004"], status: "pending" })
  TaskCreate({ subject: "QUALITY-001", owner: "reviewer", description: `Spec quality validation\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-005"], status: "pending" })
  TaskCreate({ subject: "DISCUSS-006", owner: "discussant", description: `Final review and sign-off\nSession: ${sessionFolder}`, blockedBy: ["QUALITY-001"], status: "pending" })

  // IMPL PHASE (6 tasks) — fullstack, blocked by spec
  TaskCreate({ subject: "PLAN-001", owner: "planner", description: `Fullstack planning\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-006"], status: "pending" })
  TaskCreate({ subject: "IMPL-001", owner: "executor", description: `Backend implementation\nSession: ${sessionFolder}`, blockedBy: ["PLAN-001"], status: "pending" })
  TaskCreate({ subject: "DEV-FE-001", owner: "fe-developer", description: `Frontend implementation\nSession: ${sessionFolder}`, blockedBy: ["PLAN-001"], status: "pending" })
  TaskCreate({ subject: "TEST-001", owner: "tester", description: `Backend test-fix cycles\nSession: ${sessionFolder}`, blockedBy: ["IMPL-001"], status: "pending" })
  TaskCreate({ subject: "QA-FE-001", owner: "fe-qa", description: `Frontend QA\nSession: ${sessionFolder}`, blockedBy: ["DEV-FE-001"], status: "pending" })
  TaskCreate({ subject: "REVIEW-001", owner: "reviewer", description: `Full code review\nSession: ${sessionFolder}`, blockedBy: ["TEST-001", "QA-FE-001"], status: "pending" })

  Output("[coordinator] Full-lifecycle-fe task chain created (18 tasks)")
  Output("[coordinator] Starting with: RESEARCH-001 (analyst)")
}
```

---
## Task Metadata Reference

```javascript
// Unified metadata for all pipelines (used by Session Resume)
const TASK_METADATA = {
  // Spec pipeline (12 tasks)
  "RESEARCH-001": { role: "analyst", deps: [], description: "Seed analysis: codebase exploration and context gathering" },
  "DISCUSS-001": { role: "discussant", deps: ["RESEARCH-001"], description: "Critique research findings, identify gaps" },
  "DRAFT-001": { role: "writer", deps: ["DISCUSS-001"], description: "Generate Product Brief" },
  "DISCUSS-002": { role: "discussant", deps: ["DRAFT-001"], description: "Critique Product Brief" },
  "DRAFT-002": { role: "writer", deps: ["DISCUSS-002"], description: "Generate Requirements/PRD" },
  "DISCUSS-003": { role: "discussant", deps: ["DRAFT-002"], description: "Critique Requirements/PRD" },
  "DRAFT-003": { role: "writer", deps: ["DISCUSS-003"], description: "Generate Architecture Document" },
  "DISCUSS-004": { role: "discussant", deps: ["DRAFT-003"], description: "Critique Architecture Document" },
  "DRAFT-004": { role: "writer", deps: ["DISCUSS-004"], description: "Generate Epics" },
  "DISCUSS-005": { role: "discussant", deps: ["DRAFT-004"], description: "Critique Epics" },
  "QUALITY-001": { role: "reviewer", deps: ["DISCUSS-005"], description: "5-dimension spec quality validation" },
  "DISCUSS-006": { role: "discussant", deps: ["QUALITY-001"], description: "Final review discussion and sign-off" },

  // Impl pipeline (4 tasks) — deps shown for impl-only mode
  // In full-lifecycle, PLAN-001 deps = ["DISCUSS-006"]
  "PLAN-001": { role: "planner", deps: [], description: "Multi-angle codebase exploration and structured planning" },
  "IMPL-001": { role: "executor", deps: ["PLAN-001"], description: "Code implementation following plan" },
  "TEST-001": { role: "tester", deps: ["IMPL-001"], description: "Adaptive test-fix cycles and quality gates" },
  "REVIEW-001": { role: "reviewer", deps: ["IMPL-001"], description: "4-dimension code review" },

  // Frontend pipeline tasks
  "DEV-FE-001": { role: "fe-developer", deps: ["PLAN-001"], description: "Frontend component/page implementation" },
  "QA-FE-001": { role: "fe-qa", deps: ["DEV-FE-001"], description: "5-dimension frontend QA" },
  // GC loop tasks (created dynamically)
  "DEV-FE-002": { role: "fe-developer", deps: ["QA-FE-001"], description: "Frontend fixes (GC round 2)" },
  "QA-FE-002": { role: "fe-qa", deps: ["DEV-FE-002"], description: "Frontend QA re-check (GC round 2)" }
}

// Pipeline chain constants
const SPEC_CHAIN = [
  "RESEARCH-001", "DISCUSS-001", "DRAFT-001", "DISCUSS-002",
  "DRAFT-002", "DISCUSS-003", "DRAFT-003", "DISCUSS-004",
  "DRAFT-004", "DISCUSS-005", "QUALITY-001", "DISCUSS-006"
]

const IMPL_CHAIN = ["PLAN-001", "IMPL-001", "TEST-001", "REVIEW-001"]

const FE_CHAIN = ["DEV-FE-001", "QA-FE-001"]

const FULLSTACK_CHAIN = ["PLAN-001", "IMPL-001", "DEV-FE-001", "TEST-001", "QA-FE-001", "REVIEW-001"]
```

---
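Session Resume consumes this metadata by asking "given the set of completed subjects, which tasks are runnable now?". A minimal runnable sketch (a trimmed metadata object stands in for the full `TASK_METADATA`; the helper name `readyTasks` is illustrative):

```javascript
// Trimmed TASK_METADATA-style map: subject → { deps }.
const META = {
  "RESEARCH-001": { deps: [] },
  "DISCUSS-001":  { deps: ["RESEARCH-001"] },
  "DRAFT-001":    { deps: ["DISCUSS-001"] }
}

// A subject is ready when it is not yet completed and all of its deps are.
function readyTasks(meta, completed) {
  const done = new Set(completed)
  return Object.keys(meta).filter(subject =>
    !done.has(subject) && meta[subject].deps.every(d => done.has(d))
  )
}

console.log(readyTasks(META, []))                // [ 'RESEARCH-001' ]
console.log(readyTasks(META, ["RESEARCH-001"]))  // [ 'DISCUSS-001' ]
```

On resume, the coordinator reconstructs the completed set from the session file and dispatches exactly the tasks this predicate returns.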
## Execution Method Handling

### Sequential Execution

```javascript
if (requirements.executionMethod === "sequential") {
  Output("[coordinator] Sequential execution: tasks will run one at a time")
  // Only one task is active at a time;
  // the next task is activated only after its predecessor completes
}
```

### Parallel Execution

```javascript
if (requirements.executionMethod === "parallel") {
  Output("[coordinator] Parallel execution: independent tasks will run concurrently")
  // Tasks whose dependencies are all met can run in parallel
  // e.g., TEST-001 and REVIEW-001 both depend on IMPL-001 → run together
  // e.g., IMPL-001 and DEV-FE-001 both depend on PLAN-001 → run together
}
```

---
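Parallel execution amounts to grouping the dependency graph into "waves": each wave contains the tasks whose dependencies are all satisfied by earlier waves. A runnable sketch over the fullstack chain (the `waves` helper is illustrative, not part of the coordinator's API):

```javascript
// Fullstack dependency graph: subject → blockers.
const DEPS = {
  "PLAN-001":   [],
  "IMPL-001":   ["PLAN-001"],
  "DEV-FE-001": ["PLAN-001"],
  "TEST-001":   ["IMPL-001"],
  "QA-FE-001":  ["DEV-FE-001"],
  "REVIEW-001": ["TEST-001", "QA-FE-001"]
}

// Repeatedly collect every task whose deps are already done; each
// collection is one wave of concurrently runnable tasks.
function waves(deps) {
  const done = new Set(), out = []
  while (done.size < Object.keys(deps).length) {
    const wave = Object.keys(deps).filter(t =>
      !done.has(t) && deps[t].every(d => done.has(d)))
    if (wave.length === 0) throw new Error("dependency cycle")
    wave.forEach(t => done.add(t))
    out.push(wave)
  }
  return out
}

console.log(waves(DEPS))
// [ [ 'PLAN-001' ], [ 'IMPL-001', 'DEV-FE-001' ],
//   [ 'TEST-001', 'QA-FE-001' ], [ 'REVIEW-001' ] ]
```

Sequential mode simply flattens these waves and runs one task at a time; parallel mode dispatches each wave concurrently, so the fullstack pipeline needs 4 rounds instead of 6.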
## Output Format

All outputs from this command use the `[coordinator]` tag:

```
[coordinator] Creating spec-only task chain (12 tasks)
[coordinator] Starting with: RESEARCH-001 (analyst)
```
@@ -1,368 +0,0 @@
|
||||
# Monitor Command - Coordination Loop

**Purpose**: Monitor task progress, route messages, and handle checkpoints

**Invoked by**: Coordinator role.md Phase 4

**Output Tag**: `[coordinator]`

---
## Coordination Loop

> **Design principle**: Model execution has no notion of wall-clock time, so any form of polling or waiting is forbidden.
> A synchronous `Task(run_in_background: false)` call serves as the wait mechanism:
> the worker returning IS the phase-completion signal (a natural callback), so no sleep-based polling is needed.

```javascript
Output("[coordinator] Entering coordination loop (Stop-Wait mode)...")

// Get all tasks and filter for pending work
const pendingTasks = TaskList().filter(t => t.status !== 'completed')

for (const task of pendingTasks) {
  // Re-fetch task state each iteration: earlier iterations may have
  // completed dependencies since the initial snapshot was taken
  const allTasks = TaskList()

  // Check if all dependencies are met
  const allDepsMet = (task.blockedBy || []).every(depSubject => {
    const dep = allTasks.find(t => t.subject === depSubject)
    return dep && dep.status === 'completed'
  })

  if (!allDepsMet) {
    Output(`[coordinator] Task ${task.subject} blocked by dependencies, skipping`)
    continue
  }

  // Determine role via TASK_METADATA lookup (fall back to task owner)
  const taskMeta = TASK_METADATA[task.subject]
  const role = taskMeta ? taskMeta.role : task.owner

  Output(`[coordinator] Starting task: ${task.subject} (role: ${role})`)

  // Mark as in_progress
  TaskUpdate({ taskId: task.id, status: 'in_progress' })

  // ============================================================
  // Spawn worker using SKILL.md Coordinator Spawn Template
  // Key: worker MUST call Skill() to load its role definition
  // ============================================================
  Task({
    subagent_type: "general-purpose",
    description: `Spawn ${role} worker for ${task.subject}`,
    team_name: teamName,
    name: role,
    prompt: `You are the ${role.toUpperCase()} of team "${teamName}".

## ⚠️ Prime Directive (MUST)
All of your work must follow the role definition loaded via the Skill call — do not improvise:
Skill(skill="team-lifecycle-v2", args="--role=${role}")
This call loads your role definition (role.md), available commands (commands/*.md), and the full execution logic.

Current task: ${task.subject} - ${task.description}
Session: ${sessionFolder}

## Role Rules (mandatory)
- You may only handle tasks with the ${task.subject.replace(/-\d+$/, "")}-* prefix; never perform another role's work
- Every output (SendMessage, team_msg) must carry the [${role}] tag prefix
- Communicate only with the coordinator; never contact other workers directly
- Never use TaskCreate to create tasks for other roles

## Message Bus (required)
Before every SendMessage, call mcp__ccw-tools__team_msg to log the message.

## Workflow (strict order)
1. Call Skill(skill="team-lifecycle-v2", args="--role=${role}") to load your role definition and execution logic
2. Execute the 5-Phase flow from role.md (TaskList → find task → execute → report)
3. team_msg log + SendMessage the result to the coordinator (with the [${role}] tag)
4. TaskUpdate completed → check for the next task → return to step 1`,
    run_in_background: false
  })

  // Worker returned — check status
  const completedTask = TaskGet({ taskId: task.id })
  Output(`[coordinator] Task ${task.subject} status: ${completedTask.status}`)

  if (completedTask.status === "completed") {
    handleTaskComplete({ subject: task.subject, output: completedTask })
  }

  // Update session progress
  const session = Read(sessionFile)
  const allTasksNow = TaskList()
  session.tasks_completed = allTasksNow.filter(t => t.status === "completed").length
  Write(sessionFile, session)

  // Check if all tasks are complete
  const remaining = allTasksNow.filter(t => t.status !== "completed")
  if (remaining.length === 0) {
    Output("[coordinator] All tasks completed!")
    break
  }
}

Output("[coordinator] Coordination loop complete")
```

---
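The Stop-Wait skeleton above can be reduced to a small runnable simulation (plain objects and a callback stand in for `TaskList`/`Task`; the `coordinate` helper is illustrative): pick the first pending task whose dependencies are met, run it synchronously, mark it completed, repeat.

```javascript
// Stop-Wait scheduling: runWorker is a stand-in for the synchronous
// Task(run_in_background: false) spawn — returning means the phase is done.
function coordinate(tasks, runWorker) {
  const order = []
  for (;;) {
    const done = new Set(tasks.filter(t => t.status === "completed").map(t => t.subject))
    const next = tasks.find(t =>
      t.status === "pending" && (t.blockedBy || []).every(d => done.has(d)))
    if (!next) break                 // nothing runnable → finished (or deadlocked)
    next.status = "in_progress"
    runWorker(next)                  // synchronous call IS the wait mechanism
    next.status = "completed"
    order.push(next.subject)
  }
  return order
}

const order = coordinate(
  [
    { subject: "RESEARCH-001", blockedBy: [], status: "pending" },
    { subject: "DISCUSS-001", blockedBy: ["RESEARCH-001"], status: "pending" }
  ],
  () => {})
console.log(order)  // [ 'RESEARCH-001', 'DISCUSS-001' ]
```

Because the scheduler re-derives the completed set on every pass, a task unblocked by an earlier iteration is picked up without any polling or sleeping.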
## Message Handlers

### handleTaskComplete

```javascript
function handleTaskComplete(message) {
  const subject = message.subject

  Output(`[coordinator] Task completed: ${subject}`)

  // Check for dependent tasks
  const allTasks = TaskList()
  const dependentTasks = allTasks.filter(t =>
    (t.blockedBy || []).includes(subject) && t.status === 'pending'
  )

  Output(`[coordinator] Checking ${dependentTasks.length} dependent tasks`)

  for (const depTask of dependentTasks) {
    // Check if all dependencies are met
    const allDepsMet = (depTask.blockedBy || []).every(depSubject => {
      const dep = allTasks.find(t => t.subject === depSubject)
      return dep && dep.status === 'completed'
    })

    if (allDepsMet) {
      Output(`[coordinator] Unblocking task: ${depTask.subject} (${depTask.owner})`)
    }
  }

  // Special checkpoint: Spec phase complete before implementation
  if (subject === "DISCUSS-006" && (requirements.mode === "full-lifecycle" || requirements.mode === "full-lifecycle-fe")) {
    Output("[coordinator] Spec phase complete. Checkpoint before implementation.")
    handleSpecCompleteCheckpoint()
  }
}
```

---
### handleTaskBlocked

```javascript
function handleTaskBlocked(message) {
  const subject = message.subject
  const reason = message.reason

  Output(`[coordinator] Task blocked: ${subject}`)
  Output(`[coordinator] Reason: ${reason}`)

  // Check if block reason is dependency-related
  if (reason.includes("dependency")) {
    Output("[coordinator] Dependency block detected. Waiting for predecessor tasks.")
    return
  }

  // Check if block reason is ambiguity-related
  if (reason.includes("ambiguous") || reason.includes("unclear")) {
    Output("[coordinator] Ambiguity detected. Routing to analyst for research.")
    handleAmbiguityBlock(subject, reason)
    return
  }

  // Unknown block reason - escalate to user
  Output("[coordinator] Unknown block reason. Escalating to user.")
  const userDecision = AskUserQuestion({
    question: `Task ${subject} is blocked: ${reason}. How to proceed?`,
    choices: [
      "retry - Retry the task",
      "skip - Skip this task",
      "abort - Abort entire workflow",
      "manual - Provide manual input"
    ]
  })

  switch (userDecision) {
    case "retry":
      // Task will be retried in the next coordination loop iteration
      break

    case "skip":
      const task = TaskList().find(t => t.subject === subject)
      if (task) TaskUpdate({ taskId: task.id, status: "completed" })
      Output(`[coordinator] Task ${subject} skipped by user`)
      break

    case "abort":
      Output("[coordinator] Workflow aborted by user")
      loopActive = false
      break

    case "manual":
      const manualInput = AskUserQuestion({
        question: `Provide manual input for task ${subject}:`,
        type: "text"
      })
      const taskToComplete = TaskList().find(t => t.subject === subject)
      if (taskToComplete) TaskUpdate({ taskId: taskToComplete.id, status: "completed" })
      Output(`[coordinator] Task ${subject} completed with manual input`)
      break
  }
}

// Route ambiguity to analyst (explorer as fallback)
function handleAmbiguityBlock(subject, reason) {
  Output(`[coordinator] Creating research task for ambiguity in ${subject}`)

  // Spawn analyst on-demand to research the ambiguity
  Task({
    subagent_type: "general-purpose",
    description: `Spawn analyst for ambiguity research`,
    team_name: teamName,
    name: "analyst",
    prompt: `You are the ANALYST of team "${teamName}".

## ⚠️ Prime Directive (MUST)
Skill(skill="team-lifecycle-v2", args="--role=analyst")

## Urgent Research Task
Blocked task: ${subject}
Block reason: ${reason}
Session: ${sessionFolder}

Investigate the ambiguity and report your findings to the coordinator via SendMessage.`,
    run_in_background: false
  })

  Output(`[coordinator] Ambiguity research complete for ${subject}`)
}
```

---
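The three-way routing in `handleTaskBlocked` is a pure classification over the reason string, so it can be factored out and tested in isolation. A minimal sketch (the helper name `classifyBlock` and the returned labels are illustrative, not part of the skill's API):

```javascript
// Classify a worker-reported block reason into a coordinator action:
//   "wait"     → dependency block: predecessor will unblock it
//   "analyst"  → ambiguity: spawn analyst for on-demand research
//   "escalate" → anything else: ask the user how to proceed
function classifyBlock(reason) {
  if (reason.includes("dependency")) return "wait"
  if (reason.includes("ambiguous") || reason.includes("unclear")) return "analyst"
  return "escalate"
}

console.log(classifyBlock("dependency missing: PLAN-001"))   // wait
console.log(classifyBlock("spec is ambiguous about auth"))   // analyst
console.log(classifyBlock("disk full"))                      // escalate
```

Note the ordering matters: a reason mentioning both "dependency" and "unclear" is treated as a dependency block first, which matches the early returns in the handler above.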
### handleDiscussionNeeded

```javascript
function handleDiscussionNeeded(message) {
  const subject = message.subject
  const question = message.question
  const context = message.context

  Output(`[coordinator] Discussion needed for task: ${subject}`)
  Output(`[coordinator] Question: ${question}`)

  // Route to user
  const userResponse = AskUserQuestion({
    question: `Task ${subject} needs clarification:\n\n${question}\n\nContext: ${context}`,
    type: "text"
  })

  Output(`[coordinator] User response received for ${subject}`)
}
```

---
## Checkpoint Handlers

### handleSpecCompleteCheckpoint

```javascript
function handleSpecCompleteCheckpoint() {
  Output("[coordinator] ========================================")
  Output("[coordinator] SPEC PHASE COMPLETE - CHECKPOINT")
  Output("[coordinator] ========================================")

  // Ask user to review
  const userDecision = AskUserQuestion({
    question: "Spec phase complete (DISCUSS-006 done). Review specifications before proceeding to implementation?",
    choices: [
      "proceed - Proceed to implementation (PLAN-001)",
      "review - Review spec artifacts in session folder",
      "revise - Request spec revision",
      "stop - Stop here (spec-only)"
    ]
  })

  switch (userDecision) {
    case "proceed":
      Output("[coordinator] Proceeding to implementation phase (PLAN-001)")
      break

    case "review":
      Output("[coordinator] Spec artifacts are in: " + sessionFolder + "/spec/")
      Output("[coordinator] Please review and then re-invoke to continue.")
      handleSpecCompleteCheckpoint()  // Re-prompt after review
      break

    case "revise":
      const revisionScope = AskUserQuestion({
        question: "Which spec artifacts need revision? (e.g., DRAFT-002 requirements, DRAFT-003 architecture)",
        type: "text"
      })
      Output(`[coordinator] Revision requested: ${revisionScope}`)
      handleSpecCompleteCheckpoint()  // Re-prompt once revision is dispatched
      break

    case "stop":
      Output("[coordinator] Stopping at spec phase (user request)")
      loopActive = false
      break
  }
}
```

---
## Message Routing Tables

### Spec Phase Messages

| Message Type | Sender Role | Trigger | Coordinator Action |
|--------------|-------------|---------|--------------------|
| `research_ready` | analyst | RESEARCH-* done | Update session, unblock DISCUSS-001 |
| `discussion_ready` | discussant | DISCUSS-* done | Unblock next DRAFT-* or QUALITY-* |
| `draft_ready` | writer | DRAFT-* done | Unblock next DISCUSS-* |
| `quality_result` | reviewer | QUALITY-* done | Unblock DISCUSS-006 |
| `error` | any worker | Task failed | Log error, escalate to user |

### Impl Phase Messages

| Message Type | Sender Role | Trigger | Coordinator Action |
|--------------|-------------|---------|--------------------|
| `plan_ready` | planner | PLAN-001 done | Unblock IMPL-001 (+ DEV-FE-001 for fullstack) |
| `impl_complete` | executor | IMPL-001 done | Unblock TEST-001 + REVIEW-001 |
| `test_result` | tester | TEST-001 done | Log results |
| `review_result` | reviewer | REVIEW-001 done | Log results |
| `dev_fe_complete` | fe-developer | DEV-FE-* done | Unblock QA-FE-* |
| `qa_fe_result` | fe-qa | QA-FE-* done | Check verdict, maybe create GC round |
| `error` | any worker | Task failed | Log error, escalate to user |

---
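These routing tables map naturally onto a dispatch table keyed by message type. A hypothetical sketch (handler bodies return description strings instead of performing real coordinator actions; `ROUTES` and `route` are illustrative names):

```javascript
// Dispatch table mirroring the routing tables above: message type → action.
const ROUTES = {
  plan_ready:    msg => `unblock IMPL-001 after ${msg.subject}`,
  impl_complete: msg => `unblock TEST-001 + REVIEW-001 after ${msg.subject}`,
  qa_fe_result:  msg => `check verdict for ${msg.subject}, maybe create GC round`,
  error:         msg => `escalate ${msg.subject} failure to user`
}

// Look up the handler for a message; unknown types fall through explicitly
// rather than being silently dropped.
function route(msg) {
  const handler = ROUTES[msg.type]
  return handler ? handler(msg) : `unknown message type: ${msg.type}`
}

console.log(route({ type: "impl_complete", subject: "IMPL-001" }))
// unblock TEST-001 + REVIEW-001 after IMPL-001
```

Keeping the routing as data rather than a chain of `if` statements makes it easy to extend when a new role (and message type) is added to the team.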
## Progress Tracking

```javascript
function logProgress() {
  const session = Read(sessionFile)
  const completedCount = session.tasks_completed
  const totalCount = session.tasks_total
  const percentage = Math.round((completedCount / totalCount) * 100)

  Output(`[coordinator] Progress: ${completedCount}/${totalCount} tasks (${percentage}%)`)
  Output(`[coordinator] Current phase: ${session.current_phase}`)
}
```

---
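The percentage math above can be isolated and checked directly; this sketch adds a division-by-zero guard that the original does not state (an assumption worth making explicit, since `tasks_total` comes from a session file):

```javascript
// Format a progress line like logProgress does, guarding total === 0.
function progress(completed, total) {
  const pct = total === 0 ? 0 : Math.round((completed / total) * 100)
  return `${completed}/${total} tasks (${pct}%)`
}

console.log(progress(1, 12))   // 1/12 tasks (8%)
console.log(progress(16, 16))  // 16/16 tasks (100%)
```

The 1/12 → 8% case matches the example output shown below for the spec pipeline.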
## Output Format

All outputs from this command use the `[coordinator]` tag:

```
[coordinator] Entering coordination loop (Stop-Wait mode)...
[coordinator] Starting task: RESEARCH-001 (role: analyst)
[coordinator] Task RESEARCH-001 status: completed
[coordinator] Checking 1 dependent tasks
[coordinator] Unblocking task: DISCUSS-001 (discussant)
[coordinator] Progress: 1/12 tasks (8%)
```
@@ -1,695 +0,0 @@
|
||||
# Coordinator Role

## Role Identity

**Role**: Coordinator
**Output Tag**: `[coordinator]`
**Responsibility**: Orchestrate the team-lifecycle workflow by managing team creation, task dispatching, progress monitoring, and session state persistence.

## Role Boundaries

### MUST
- Parse user requirements and clarify ambiguous inputs
- Create team and spawn worker subagents
- Dispatch tasks with proper dependency chains
- Monitor task progress and route messages
- Handle session resume and reconciliation
- Maintain session state persistence
- Provide progress reports and next-step options

### MUST NOT
- Execute spec/impl/research work directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Skip dependency validation
- Proceed without user confirmation at checkpoints
## Message Types

| Message Type | Sender | Trigger | Coordinator Action |
|--------------|--------|---------|-------------------|
| `task_complete` | Worker | Task finished | Update session, check dependencies, kick next task |
| `task_blocked` | Worker | Dependency missing | Log block reason, wait for predecessor |
| `discussion_needed` | Worker | Ambiguity found | Route to user via AskUserQuestion |
| `research_ready` | analyst | Research done | Checkpoint with user before impl |
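The routing above can be sketched as a plain dispatch map. This is an illustration only: the message shape (`type`, `task`) and the handler bodies are assumptions, not the actual Task tool protocol.

```javascript
// Hypothetical message router mirroring the message-type table.
// Handler bodies are placeholders for the real coordinator actions.
const handlers = {
  task_complete:     (msg) => `update session, check dependents of ${msg.task}`,
  task_blocked:      (msg) => `log block reason for ${msg.task}, wait for predecessor`,
  discussion_needed: (msg) => `route ${msg.task} ambiguity to user via AskUserQuestion`,
  research_ready:    ()    => "checkpoint with user before impl"
}

function route(msg) {
  const handler = handlers[msg.type]
  if (!handler) throw new Error(`Unknown message type: ${msg.type}`)
  return handler(msg)
}
```

Unknown message types fail loudly rather than being silently dropped, which matches the `error` escalation behavior described elsewhere in this skill.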
## Toolbox

### Available Commands
- `commands/dispatch.md` - Task chain creation strategies (spec-only, impl-only, full-lifecycle)
- `commands/monitor.md` - Coordination loop with message routing and checkpoint handling

### Subagent Capabilities
- `TeamCreate` - Initialize team with session metadata
- `TeamSpawn` - Spawn worker subagents (analyst, writer, discussant, planner, executor, tester, reviewer, etc.)
- `TaskCreate` - Create tasks with dependencies
- `TaskUpdate` - Update task status/metadata
- `TaskGet` - Retrieve task details
- `AskUserQuestion` - Interactive user prompts

### CLI Capabilities
- Session file I/O (`Read`, `Write`)
- Directory scanning (`Glob`)
- Background execution for long-running tasks
---

## Execution Flow

### Phase 0: Session Resume Check

**Purpose**: Detect and resume interrupted sessions

```javascript
// Scan for session files
const sessionFiles = Glob("D:/Claude_dms3/.workflow/.sessions/team-lifecycle-*.json")

if (sessionFiles.length === 0) {
  // No existing session, proceed to Phase 1
  goto Phase1
}

if (sessionFiles.length === 1) {
  // Single session found
  const session = Read(sessionFiles[0])
  if (session.status === "active" || session.status === "paused") {
    Output("[coordinator] Resuming session: " + session.session_id)
    goto SessionReconciliation
  }
}

if (sessionFiles.length > 1) {
  // Multiple sessions - ask user
  const choices = sessionFiles.map(f => {
    const s = Read(f)
    return `${s.session_id} (${s.status}) - ${s.mode} - ${s.tasks_completed}/${s.tasks_total}`
  })

  const answer = AskUserQuestion({
    question: "Multiple sessions found. Which to resume?",
    choices: ["Create new session", ...choices]
  })

  if (answer === "Create new session") {
    goto Phase1
  } else {
    const selectedSession = Read(sessionFiles[answer.index - 1])
    goto SessionReconciliation
  }
}
// Session Reconciliation Process
SessionReconciliation: {
  Output("[coordinator] Reconciling session state...")

  // Pipeline constants (aligned with SKILL.md Three-Mode Pipeline)
  const SPEC_CHAIN = [
    "RESEARCH-001", "DISCUSS-001", "DRAFT-001", "DISCUSS-002",
    "DRAFT-002", "DISCUSS-003", "DRAFT-003", "DISCUSS-004",
    "DRAFT-004", "DISCUSS-005", "QUALITY-001", "DISCUSS-006"
  ]

  const IMPL_CHAIN = ["PLAN-001", "IMPL-001", "TEST-001", "REVIEW-001"]

  const FE_CHAIN = ["DEV-FE-001", "QA-FE-001"]

  const FULLSTACK_CHAIN = ["PLAN-001", "IMPL-001", "DEV-FE-001", "TEST-001", "QA-FE-001", "REVIEW-001"]

  // Task metadata — role must match VALID_ROLES in SKILL.md
  const TASK_METADATA = {
    // Spec pipeline (12 tasks)
    "RESEARCH-001": { role: "analyst", phase: "spec", deps: [], description: "Seed analysis: codebase exploration and context gathering" },
    "DISCUSS-001": { role: "discussant", phase: "spec", deps: ["RESEARCH-001"], description: "Critique research findings" },
    "DRAFT-001": { role: "writer", phase: "spec", deps: ["DISCUSS-001"], description: "Generate Product Brief" },
    "DISCUSS-002": { role: "discussant", phase: "spec", deps: ["DRAFT-001"], description: "Critique Product Brief" },
    "DRAFT-002": { role: "writer", phase: "spec", deps: ["DISCUSS-002"], description: "Generate Requirements/PRD" },
    "DISCUSS-003": { role: "discussant", phase: "spec", deps: ["DRAFT-002"], description: "Critique Requirements/PRD" },
    "DRAFT-003": { role: "writer", phase: "spec", deps: ["DISCUSS-003"], description: "Generate Architecture Document" },
    "DISCUSS-004": { role: "discussant", phase: "spec", deps: ["DRAFT-003"], description: "Critique Architecture Document" },
    "DRAFT-004": { role: "writer", phase: "spec", deps: ["DISCUSS-004"], description: "Generate Epics" },
    "DISCUSS-005": { role: "discussant", phase: "spec", deps: ["DRAFT-004"], description: "Critique Epics" },
    "QUALITY-001": { role: "reviewer", phase: "spec", deps: ["DISCUSS-005"], description: "5-dimension spec quality validation" },
    "DISCUSS-006": { role: "discussant", phase: "spec", deps: ["QUALITY-001"], description: "Final review discussion and sign-off" },

    // Impl pipeline (deps shown for impl-only; full-lifecycle adds PLAN-001 → ["DISCUSS-006"])
    "PLAN-001": { role: "planner", phase: "impl", deps: [], description: "Multi-angle codebase exploration and structured planning" },
    "IMPL-001": { role: "executor", phase: "impl", deps: ["PLAN-001"], description: "Code implementation following plan" },
    "TEST-001": { role: "tester", phase: "impl", deps: ["IMPL-001"], description: "Adaptive test-fix cycles and quality gates" },
    "REVIEW-001": { role: "reviewer", phase: "impl", deps: ["IMPL-001"], description: "4-dimension code review" },

    // Frontend pipeline tasks
    "DEV-FE-001": { role: "fe-developer", phase: "impl", deps: ["PLAN-001"], description: "Frontend component/page implementation" },
    "QA-FE-001": { role: "fe-qa", phase: "impl", deps: ["DEV-FE-001"], description: "5-dimension frontend QA" }
  }

  // Helper: Get predecessor task
  function getPredecessor(taskId, chain) {
    const index = chain.indexOf(taskId)
    return index > 0 ? chain[index - 1] : null
  }
  // Step 1: Audit current state
  const session = Read(sessionFile)
  const teamState = TeamGet(session.team_id)
  const allTasks = teamState.tasks

  Output("[coordinator] Session audit:")
  Output(`  Mode: ${session.mode}`)
  Output(`  Tasks completed: ${session.tasks_completed}/${session.tasks_total}`)
  Output(`  Status: ${session.status}`)

  // Step 2: Reconcile task states
  const completedTasks = allTasks.filter(t => t.status === "completed")
  const activeTasks = allTasks.filter(t => t.status === "active")
  const blockedTasks = allTasks.filter(t => t.status === "blocked")
  const pendingTasks = allTasks.filter(t => t.status === "pending")

  Output("[coordinator] Task breakdown:")
  Output(`  Completed: ${completedTasks.length}`)
  Output(`  Active: ${activeTasks.length}`)
  Output(`  Blocked: ${blockedTasks.length}`)
  Output(`  Pending: ${pendingTasks.length}`)

  // Step 3: Determine remaining work
  const expectedChain =
    session.mode === "spec-only" ? SPEC_CHAIN :
    session.mode === "impl-only" ? IMPL_CHAIN :
    session.mode === "fe-only" ? ["PLAN-001", ...FE_CHAIN] :
    session.mode === "fullstack" ? FULLSTACK_CHAIN :
    session.mode === "full-lifecycle-fe" ? [...SPEC_CHAIN, ...FULLSTACK_CHAIN] :
    [...SPEC_CHAIN, ...IMPL_CHAIN] // full-lifecycle default

  const remainingTaskIds = expectedChain.filter(id =>
    !completedTasks.some(t => t.subject === id)
  )

  Output(`[coordinator] Remaining tasks: ${remainingTaskIds.join(", ")}`)

  // Step 4: Rebuild team if needed
  if (!teamState || teamState.status === "disbanded") {
    Output("[coordinator] Team disbanded, recreating...")
    TeamCreate({
      team_id: session.team_id,
      session_id: session.session_id,
      mode: session.mode
    })
  }
  // Step 5: Create missing tasks
  for (const taskId of remainingTaskIds) {
    const existingTask = allTasks.find(t => t.subject === taskId)
    if (!existingTask) {
      const metadata = TASK_METADATA[taskId]
      TaskCreate({
        subject: taskId,
        owner: metadata.role,
        description: `${metadata.description}\nSession: ${sessionFolder}`,
        blockedBy: metadata.deps,
        status: "pending"
      })
      Output(`[coordinator] Created missing task: ${taskId} (${metadata.role})`)
    }
  }

  // Step 6: Verify dependencies
  for (const taskId of remainingTaskIds) {
    const task = allTasks.find(t => t.subject === taskId)
    if (!task) continue
    const metadata = TASK_METADATA[taskId]
    const allDepsMet = metadata.deps.every(depId =>
      completedTasks.some(t => t.subject === depId)
    )

    if (allDepsMet && task.status !== "completed") {
      Output(`[coordinator] Unblocked task: ${taskId} (${metadata.role})`)
    }
  }

  // Step 7: Update session state
  session.status = "active"
  session.resumed_at = new Date().toISOString()
  session.tasks_completed = completedTasks.length
  Write(sessionFile, session)

  // Step 8: Report reconciliation
  Output("[coordinator] Session reconciliation complete")
  Output(`[coordinator] Ready to resume from: ${remainingTaskIds[0] || "all tasks complete"}`)

  // Step 9: Kick next task
  if (remainingTaskIds.length > 0) {
    const nextTaskId = remainingTaskIds[0]
    const nextTask = TaskGet(nextTaskId)
    const metadata = TASK_METADATA[nextTaskId]

    if (metadata.deps.every(depId => completedTasks.some(t => t.subject === depId))) {
      TaskUpdate(nextTaskId, { status: "active" })
      Output(`[coordinator] Kicking task: ${nextTaskId}`)
      goto Phase4_CoordinationLoop
    } else {
      Output(`[coordinator] Next task ${nextTaskId} blocked on: ${metadata.deps.join(", ")}`)
      goto Phase4_CoordinationLoop
    }
  } else {
    Output("[coordinator] All tasks complete!")
    goto Phase5_Report
  }
}
```
---

### Phase 1: Requirement Clarification

**Purpose**: Parse user input and clarify execution parameters

```javascript
Output("[coordinator] Phase 1: Requirement Clarification")

// Parse $ARGUMENTS
const userInput = $ARGUMENTS

// Extract mode if specified
let mode = null
if (userInput.includes("spec-only")) mode = "spec-only"
if (userInput.includes("impl-only")) mode = "impl-only"
if (userInput.includes("full-lifecycle")) mode = "full-lifecycle"

// Extract scope if specified
let scope = null
if (userInput.includes("scope:")) {
  scope = userInput.match(/scope:\s*([^\n]+)/)[1]
}

// Extract focus areas
let focus = []
if (userInput.includes("focus:")) {
  focus = userInput.match(/focus:\s*([^\n]+)/)[1].split(",").map(s => s.trim())
}

// Extract depth preference
let depth = "standard"
if (userInput.includes("depth:shallow")) depth = "shallow"
if (userInput.includes("depth:deep")) depth = "deep"

// Ask for missing parameters
if (!mode) {
  mode = AskUserQuestion({
    question: "Select execution mode:",
    choices: [
      "spec-only - Generate specifications only",
      "impl-only - Implementation only (requires existing spec)",
      "full-lifecycle - Complete spec + implementation",
      "fe-only - Frontend-only pipeline (plan → dev → QA)",
      "fullstack - Backend + frontend parallel pipeline",
      "full-lifecycle-fe - Full lifecycle with frontend (spec → fullstack)"
    ]
  })
}
if (!scope) {
  scope = AskUserQuestion({
    question: "Describe the project scope:",
    type: "text"
  })
}

if (focus.length === 0) {
  const focusAnswer = AskUserQuestion({
    question: "Any specific focus areas? (optional)",
    type: "text",
    optional: true
  })
  if (focusAnswer) {
    focus = focusAnswer.split(",").map(s => s.trim())
  }
}

// Determine execution method
const executionMethod = AskUserQuestion({
  question: "Execution method:",
  choices: [
    "sequential - One task at a time (safer, slower)",
    "parallel - Multiple tasks in parallel (faster, more complex)"
  ]
})

// Store clarified requirements
const requirements = {
  mode,
  scope,
  focus,
  depth,
  executionMethod,
  originalInput: userInput
}

// --- Frontend Detection ---
// Auto-detect frontend tasks and adjust pipeline mode
const FE_KEYWORDS = /component|page|UI|前端|frontend|CSS|HTML|React|Vue|Tailwind|组件|页面|样式|layout|responsive|Svelte|Next\.js|Nuxt|shadcn|设计系统|design.system/i
const BE_KEYWORDS = /API|database|server|后端|backend|middleware|auth|REST|GraphQL|migration|schema|model|controller|service/i

function detectImplMode(taskDescription) {
  const hasFE = FE_KEYWORDS.test(taskDescription)
  const hasBE = BE_KEYWORDS.test(taskDescription)

  // Also check project files for frontend frameworks
  const hasFEFiles = Bash(`test -f package.json && (grep -q react package.json || grep -q vue package.json || grep -q svelte package.json || grep -q next package.json); echo $?`) === '0'

  if (hasFE && hasBE) return 'fullstack'
  if (hasFE || hasFEFiles) return 'fe-only'
  return 'impl-only' // default backend
}

// Apply frontend detection for implementation modes
if (mode === 'impl-only' || mode === 'full-lifecycle') {
  const detectedMode = detectImplMode(scope + ' ' + userInput)
  if (detectedMode !== 'impl-only') {
    // Frontend detected: upgrade pipeline mode
    if (mode === 'impl-only') {
      mode = detectedMode // fe-only or fullstack
    } else if (mode === 'full-lifecycle') {
      mode = 'full-lifecycle-fe' // spec + fullstack
    }
    requirements.mode = mode
    Output(`[coordinator] Frontend detected → pipeline upgraded to: ${mode}`)
  }
}

Output("[coordinator] Requirements clarified:")
Output(`  Mode: ${mode}`)
Output(`  Scope: ${scope}`)
Output(`  Focus: ${focus.join(", ") || "none"}`)
Output(`  Depth: ${depth}`)
Output(`  Execution: ${executionMethod}`)

goto Phase2
```
---

### Phase 2: Create Team + Initialize Session

**Purpose**: Initialize team and session state

```javascript
Output("[coordinator] Phase 2: Team Creation")

// Generate session ID
const sessionId = `team-lifecycle-${Date.now()}`
const teamId = sessionId

// Create team
TeamCreate({
  team_id: teamId,
  session_id: sessionId,
  mode: requirements.mode,
  scope: requirements.scope,
  focus: requirements.focus,
  depth: requirements.depth,
  executionMethod: requirements.executionMethod
})

Output(`[coordinator] Team created: ${teamId}`)

// Initialize wisdom directory
const wisdomDir = `${sessionFolder}/wisdom`
Bash(`mkdir -p "${wisdomDir}"`)
Write(`${wisdomDir}/learnings.md`, `# Learnings\n\n<!-- Auto-accumulated by team roles -->\n`)
Write(`${wisdomDir}/decisions.md`, `# Decisions\n\n<!-- Architectural and design decisions -->\n`)
Write(`${wisdomDir}/conventions.md`, `# Conventions\n\n<!-- Codebase conventions discovered -->\n<!-- explorer-patterns -->\n`)
Write(`${wisdomDir}/issues.md`, `# Known Issues\n\n<!-- Risks and issues found during execution -->\n`)

// Initialize session file
const sessionFile = `D:/Claude_dms3/.workflow/.sessions/${sessionId}.json`
const sessionData = {
  session_id: sessionId,
  team_id: teamId,
  mode: requirements.mode,
  scope: requirements.scope,
  focus: requirements.focus,
  depth: requirements.depth,
  executionMethod: requirements.executionMethod,
  status: "active",
  created_at: new Date().toISOString(),
  tasks_total: requirements.mode === "spec-only" ? 12 :
               requirements.mode === "impl-only" ? 4 :
               requirements.mode === "fe-only" ? 3 :
               requirements.mode === "fullstack" ? 6 :
               requirements.mode === "full-lifecycle-fe" ? 18 : 16,
  tasks_completed: 0,
  current_phase: requirements.mode === "impl-only" ? "impl" : "spec"
}

Write(sessionFile, sessionData)
Output(`[coordinator] Session file created: ${sessionFile}`)

// ⚠️ Workers are NOT pre-spawned here.
// Workers are spawned per-stage in Phase 4 via Stop-Wait Task(run_in_background: false).
// See SKILL.md Coordinator Spawn Template for worker prompt templates.
//
// Worker roles by mode (spawned on-demand, must match VALID_ROLES in SKILL.md):
//   spec-only:         analyst, discussant, writer, reviewer
//   impl-only:         planner, executor, tester, reviewer
//   fe-only:           planner, fe-developer, fe-qa
//   fullstack:         planner, executor, fe-developer, tester, fe-qa, reviewer
//   full-lifecycle:    analyst, discussant, writer, reviewer, planner, executor, tester
//   full-lifecycle-fe: all of the above + fe-developer, fe-qa
//   On-demand (ambiguity): analyst or explorer

goto Phase3
```
---

### Phase 3: Create Task Chain

**Purpose**: Dispatch tasks based on execution mode

```javascript
Output("[coordinator] Phase 3: Task Dispatching")

// Delegate to command file
const dispatchStrategy = Read("commands/dispatch.md")

// Execute strategy defined in command file
// (dispatch.md contains the complete task chain creation logic)

goto Phase4
```
---

### Phase 4: Coordination Loop

**Purpose**: Monitor task progress and route messages

> **Design Principle (Stop-Wait)**: Model execution has no notion of elapsed time, so polling-style waiting of any kind is forbidden.
> - ❌ Forbidden: `while` loop + `sleep` + status check
> - ✅ Instead: synchronous `Task(run_in_background: false)` calls; a worker's return is the stage-completion signal
>
> Spawn workers stage by stage, executing each synchronously in the order of the task chain created in Phase 3.
> Worker prompts use the SKILL.md Coordinator Spawn Template.

```javascript
Output("[coordinator] Phase 4: Coordination Loop")

// Delegate to command file
const monitorStrategy = Read("commands/monitor.md")

// Execute strategy defined in command file
// (monitor.md contains the complete message routing and checkpoint logic)

goto Phase5
```
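The Stop-Wait principle reduces to a short sketch: stages run strictly in order, and each synchronous call's return value is the completion signal. Here `runStage` is a hypothetical stand-in for the real `Task(run_in_background: false)` call.

```javascript
// Stop-Wait sketch: no polling, no sleep; each stage blocks until its worker returns.
function runChain(chain, runStage) {
  const results = []
  for (const taskId of chain) {
    results.push(runStage(taskId)) // return value = stage-completion signal
  }
  return results
}
```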
---

### Phase 5: Report + Persistent Loop

**Purpose**: Provide completion report and offer next steps

```javascript
Output("[coordinator] Phase 5: Completion Report")

// Load session state
const session = Read(sessionFile)
const teamState = TeamGet(session.team_id)

// Generate report
Output("[coordinator] ========================================")
Output("[coordinator] TEAM LIFECYCLE EXECUTION COMPLETE")
Output("[coordinator] ========================================")
Output(`[coordinator] Session ID: ${session.session_id}`)
Output(`[coordinator] Mode: ${session.mode}`)
Output(`[coordinator] Tasks Completed: ${session.tasks_completed}/${session.tasks_total}`)
Output(`[coordinator] Duration: ${calculateDuration(session.created_at, new Date())}`)

// List deliverables
const completedTasks = teamState.tasks.filter(t => t.status === "completed")
Output("[coordinator] Deliverables:")
for (const task of completedTasks) {
  Output(`  ✓ ${task.subject}: ${task.description}`)
  if (task.output_file) {
    Output(`    Output: ${task.output_file}`)
  }
}

// Update session status
session.status = "completed"
session.completed_at = new Date().toISOString()
Write(sessionFile, session)
// Offer next steps
const nextAction = AskUserQuestion({
  question: "What would you like to do next?",
  choices: [
    "exit - End session",
    "review - Review specific deliverables",
    "extend - Add more tasks to this session",
    "handoff-lite-plan - Create lite-plan from spec",
    "handoff-full-plan - Create full-plan from spec",
    "handoff-req-plan - Create req-plan from requirements",
    "handoff-create-issues - Generate GitHub issues"
  ]
})

switch (nextAction) {
  case "exit":
    Output("[coordinator] Session ended. Goodbye!")
    break

  case "review":
    const taskToReview = AskUserQuestion({
      question: "Which task output to review?",
      choices: completedTasks.map(t => t.subject)
    })
    const reviewTask = completedTasks.find(t => t.subject === taskToReview)
    if (reviewTask.output_file) {
      const content = Read(reviewTask.output_file)
      Output(`[coordinator] Task: ${reviewTask.subject}`)
      Output(content)
    }
    goto Phase5 // Loop back for more actions

  case "extend":
    const extensionScope = AskUserQuestion({
      question: "Describe additional work:",
      type: "text"
    })
    Output("[coordinator] Creating extension tasks...")
    // Create custom tasks based on extension scope
    // (Implementation depends on extension requirements)
    goto Phase4 // Return to coordination loop

  case "handoff-lite-plan":
    Output("[coordinator] Generating lite-plan from specifications...")
    // Read spec completion output (DISCUSS-006 = final sign-off)
    const specOutput = Read(getTaskOutput("DISCUSS-006"))
    // Create lite-plan format
    const litePlan = generateLitePlan(specOutput)
    const litePlanFile = `D:/Claude_dms3/.workflow/.sessions/${session.session_id}-lite-plan.md`
    Write(litePlanFile, litePlan)
    Output(`[coordinator] Lite-plan created: ${litePlanFile}`)
    goto Phase5

  case "handoff-full-plan":
    Output("[coordinator] Generating full-plan from specifications...")
    const fullSpecOutput = Read(getTaskOutput("DISCUSS-006"))
    const fullPlan = generateFullPlan(fullSpecOutput)
    const fullPlanFile = `D:/Claude_dms3/.workflow/.sessions/${session.session_id}-full-plan.md`
    Write(fullPlanFile, fullPlan)
    Output(`[coordinator] Full-plan created: ${fullPlanFile}`)
    goto Phase5

  case "handoff-req-plan":
    Output("[coordinator] Generating req-plan from requirements...")
    const reqAnalysis = Read(getTaskOutput("RESEARCH-001"))
    const reqPlan = generateReqPlan(reqAnalysis)
    const reqPlanFile = `D:/Claude_dms3/.workflow/.sessions/${session.session_id}-req-plan.md`
    Write(reqPlanFile, reqPlan)
    Output(`[coordinator] Req-plan created: ${reqPlanFile}`)
    goto Phase5

  case "handoff-create-issues":
    Output("[coordinator] Generating GitHub issues...")
    const issuesSpec = Read(getTaskOutput("DISCUSS-006"))
    const issues = generateGitHubIssues(issuesSpec)
    const issuesFile = `D:/Claude_dms3/.workflow/.sessions/${session.session_id}-issues.json`
    Write(issuesFile, issues)
    Output(`[coordinator] Issues created: ${issuesFile}`)
    Output("[coordinator] Use GitHub CLI to import: gh issue create --title ... --body ...")
    goto Phase5
}
// Helper functions
function calculateDuration(start, end) {
  const diff = new Date(end) - new Date(start)
  const minutes = Math.floor(diff / 60000)
  const seconds = Math.floor((diff % 60000) / 1000)
  return `${minutes}m ${seconds}s`
}

function getTaskOutput(taskId) {
  const task = TaskGet(taskId)
  return task.output_file
}

function generateLitePlan(specOutput) {
  // Parse spec output and create lite-plan format
  return `# Lite Plan\n\n${specOutput}\n\n## Implementation Steps\n- Step 1\n- Step 2\n...`
}

function generateFullPlan(specOutput) {
  // Parse spec output and create full-plan format with detailed breakdown
  return `# Full Plan\n\n${specOutput}\n\n## Detailed Implementation\n### Phase 1\n### Phase 2\n...`
}

function generateReqPlan(reqAnalysis) {
  // Parse requirements and create req-plan format
  return `# Requirements Plan\n\n${reqAnalysis}\n\n## Acceptance Criteria\n- Criterion 1\n- Criterion 2\n...`
}

function generateGitHubIssues(specOutput) {
  // Parse spec and generate GitHub issue JSON
  return {
    issues: [
      { title: "Issue 1", body: "Description", labels: ["feature"] },
      { title: "Issue 2", body: "Description", labels: ["bug"] }
    ]
  }
}
```
---

## Session File Structure

```json
{
  "session_id": "team-lifecycle-1234567890",
  "team_id": "team-lifecycle-1234567890",
  "mode": "full-lifecycle",
  "scope": "Build authentication system",
  "focus": ["security", "scalability"],
  "depth": "standard",
  "executionMethod": "sequential",
  "status": "active",
  "created_at": "2026-02-18T10:00:00Z",
  "completed_at": null,
  "resumed_at": null,
  "tasks_total": 16,
  "tasks_completed": 5,
  "current_phase": "spec"
}
```
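Two small helpers show how resume logic can consume this structure: one derives the remaining chain from completed task IDs, the other computes the progress percentage. This is a sketch; the function names are illustrative, not part of the skill's API.

```javascript
// Derive resume state from a session object and the expected task chain.
function remainingTasks(expectedChain, completedIds) {
  const done = new Set(completedIds)
  return expectedChain.filter(id => !done.has(id))
}

// Progress percentage from the session's counters.
function progressPercent(session) {
  return Math.round((session.tasks_completed / session.tasks_total) * 100)
}
```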
---

## Error Handling

| Error Type | Coordinator Action |
|------------|-------------------|
| Task timeout | Log timeout, mark task as failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect cycle, report to user, halt execution |
| Invalid mode | Reject with error message, ask user to clarify |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
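For the dependency-cycle row, a Kahn-style topological sort over a `deps` map is one way to detect cycles before dispatch. This is a minimal sketch, independent of the Task tools, and it assumes every dependency also appears as a key in the map.

```javascript
// Kahn's algorithm: returns true if the dependency graph contains a cycle.
// deps maps a task ID to the array of task IDs it is blocked by.
function hasCycle(deps) {
  const ids = Object.keys(deps)
  const indegree = {}
  for (const id of ids) indegree[id] = deps[id].length
  const queue = ids.filter(id => indegree[id] === 0)
  let visited = 0
  while (queue.length > 0) {
    const ready = queue.shift()
    visited += 1
    for (const id of ids) {
      // Releasing `ready` unblocks every task that depends on it.
      if (deps[id].includes(ready) && --indegree[id] === 0) queue.push(id)
    }
  }
  return visited !== ids.length // nodes never visited sit on a cycle
}
```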
# Command: Multi-Perspective Critique

Phase 3 of discussant execution - launch parallel CLI analyses for each required perspective.

## Overview

This command executes multi-perspective critique by routing to specialized CLI tools based on perspective type. Each perspective produces structured critique with strengths, weaknesses, suggestions, and ratings.
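The routing reduces to a single lookup table. A sketch, assuming the tool assignments given in the perspective definitions of this file:

```javascript
// Perspective → CLI tool routing, as defined by this command file.
const PERSPECTIVE_CLI = {
  product: 'gemini',
  technical: 'codex',
  quality: 'claude',
  risk: 'gemini',
  coverage: 'gemini'
}

function cliFor(perspective) {
  const tool = PERSPECTIVE_CLI[perspective]
  if (!tool) throw new Error(`Unknown perspective: ${perspective}`)
  return tool
}
```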
## Perspective Definitions

### 1. Product Perspective (gemini)

**Focus**: Market fit, user value, business viability, competitive differentiation

**CLI Tool**: gemini

**Output Structure**:
```json
{
  "perspective": "product",
  "strengths": ["string"],
  "weaknesses": ["string"],
  "suggestions": ["string"],
  "rating": 1-5
}
```

**Prompt Template**:
```
Analyze from Product Manager perspective:
- Market fit and user value proposition
- Business viability and ROI potential
- Competitive differentiation
- User experience and adoption barriers

Artifact: {artifactContent}

Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5)
```
### 2. Technical Perspective (codex)

**Focus**: Feasibility, tech debt, performance, security, maintainability

**CLI Tool**: codex

**Output Structure**:
```json
{
  "perspective": "technical",
  "strengths": ["string"],
  "weaknesses": ["string"],
  "suggestions": ["string"],
  "rating": 1-5
}
```

**Prompt Template**:
```
Analyze from Tech Lead perspective:
- Technical feasibility and implementation complexity
- Architecture decisions and tech debt implications
- Performance and scalability considerations
- Security vulnerabilities and risks
- Code maintainability and extensibility

Artifact: {artifactContent}

Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5)
```
### 3. Quality Perspective (claude)

**Focus**: Completeness, testability, consistency, standards compliance

**CLI Tool**: claude

**Output Structure**:
```json
{
  "perspective": "quality",
  "strengths": ["string"],
  "weaknesses": ["string"],
  "suggestions": ["string"],
  "rating": 1-5
}
```

**Prompt Template**:
```
Analyze from QA Lead perspective:
- Specification completeness and clarity
- Testability and test coverage potential
- Consistency across requirements/design
- Standards compliance (coding, documentation, accessibility)
- Ambiguity detection and edge case coverage

Artifact: {artifactContent}

Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5)
```
### 4. Risk Perspective (gemini)

**Focus**: Risk identification, dependency analysis, assumption validation, failure modes

**CLI Tool**: gemini

**Output Structure**:
```json
{
  "perspective": "risk",
  "strengths": ["string"],
  "weaknesses": ["string"],
  "suggestions": ["string"],
  "rating": 1-5,
  "risk_level": "low|medium|high|critical"
}
```

**Prompt Template**:
```
Analyze from Risk Analyst perspective:
- Risk identification (technical, business, operational)
- Dependency analysis and external risks
- Assumption validation and hidden dependencies
- Failure modes and mitigation strategies
- Timeline and resource risks

Artifact: {artifactContent}

Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5), risk_level
```
### 5. Coverage Perspective (gemini)

**Focus**: Requirement completeness vs original intent, scope drift, gap detection

**CLI Tool**: gemini

**Output Structure**:
```json
{
  "perspective": "coverage",
  "strengths": ["string"],
  "weaknesses": ["string"],
  "suggestions": ["string"],
  "rating": 1-5,
  "covered_requirements": ["REQ-ID"],
  "partial_requirements": ["REQ-ID"],
  "missing_requirements": ["REQ-ID"],
  "scope_creep": ["description"]
}
```

**Prompt Template**:
```
Analyze from Requirements Analyst perspective:
- Compare current artifact against original requirements in discovery-context.json
- Identify covered requirements (fully addressed)
- Identify partial requirements (partially addressed)
- Identify missing requirements (not addressed)
- Detect scope creep (new items not in original requirements)

Original Requirements: {discoveryContext}
Current Artifact: {artifactContent}

Output JSON with:
- strengths[], weaknesses[], suggestions[], rating (1-5)
- covered_requirements[] (REQ-IDs fully addressed)
- partial_requirements[] (REQ-IDs partially addressed)
- missing_requirements[] (REQ-IDs not addressed) ← CRITICAL if non-empty
- scope_creep[] (new items not in original requirements)
```
## Execution Pattern

### Parallel CLI Execution

```javascript
// Load artifact content
const artifactPath = `${sessionFolder}/${config.artifact}`
const artifactContent = config.type === 'json'
  ? JSON.parse(Read(artifactPath))
  : Read(artifactPath)

// Load discovery context for coverage perspective
let discoveryContext = null
try {
  discoveryContext = JSON.parse(Read(`${sessionFolder}/spec/discovery-context.json`))
} catch { /* may not exist in early rounds */ }

// Launch parallel CLI analyses
const perspectiveResults = []

for (const perspective of config.perspectives) {
  let cliTool, prompt

  switch (perspective) {
    case 'product':
      cliTool = 'gemini'
      prompt = `Analyze from Product Manager perspective:
- Market fit and user value proposition
- Business viability and ROI potential
- Competitive differentiation
- User experience and adoption barriers

Artifact:
${JSON.stringify(artifactContent, null, 2)}

Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5)`
      break

    case 'technical':
      cliTool = 'codex'
      prompt = `Analyze from Tech Lead perspective:
- Technical feasibility and implementation complexity
- Architecture decisions and tech debt implications
- Performance and scalability considerations
- Security vulnerabilities and risks
- Code maintainability and extensibility

Artifact:
${JSON.stringify(artifactContent, null, 2)}

Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5)`
      break

    case 'quality':
      cliTool = 'claude'
      prompt = `Analyze from QA Lead perspective:
- Specification completeness and clarity
- Testability and test coverage potential
- Consistency across requirements/design
- Standards compliance (coding, documentation, accessibility)
- Ambiguity detection and edge case coverage

Artifact:
${JSON.stringify(artifactContent, null, 2)}

Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5)`
      break

    case 'risk':
      cliTool = 'gemini'
      prompt = `Analyze from Risk Analyst perspective:
- Risk identification (technical, business, operational)
- Dependency analysis and external risks
- Assumption validation and hidden dependencies
- Failure modes and mitigation strategies
- Timeline and resource risks

Artifact:
${JSON.stringify(artifactContent, null, 2)}

Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5), risk_level`
      break

    case 'coverage':
      cliTool = 'gemini'
      prompt = `Analyze from Requirements Analyst perspective:
- Compare current artifact against original requirements in discovery-context.json
- Identify covered requirements (fully addressed)
- Identify partial requirements (partially addressed)
- Identify missing requirements (not addressed)
- Detect scope creep (new items not in original requirements)

Original Requirements:
${discoveryContext ? JSON.stringify(discoveryContext, null, 2) : 'Not available'}

Current Artifact:
${JSON.stringify(artifactContent, null, 2)}

Output JSON with:
- strengths[], weaknesses[], suggestions[], rating (1-5)
- covered_requirements[] (REQ-IDs fully addressed)
- partial_requirements[] (REQ-IDs partially addressed)
- missing_requirements[] (REQ-IDs not addressed) ← CRITICAL if non-empty
- scope_creep[] (new items not in original requirements)`
      break
  }

  // Execute CLI analysis (run_in_background: true per CLAUDE.md)
  Bash({
    command: `ccw cli -p "${prompt.replace(/"/g, '\\"')}" --tool ${cliTool} --mode analysis`,
    run_in_background: true,
    description: `[discussant] ${perspective} perspective analysis`
  })
}

// Wait for all CLI results via hook callbacks
// Results will be collected in perspectiveResults array
```
## Critical Divergence Detection

### Coverage Gap Detection

```javascript
const coverageResult = perspectiveResults.find(p => p.perspective === 'coverage')
if (coverageResult?.missing_requirements?.length > 0) {
  // Flag as critical divergence
  synthesis.divergent_views.push({
    topic: 'requirement_coverage_gap',
    description: `${coverageResult.missing_requirements.length} requirements from discovery-context not covered: ${coverageResult.missing_requirements.join(', ')}`,
    severity: 'high',
    source: 'coverage'
  })
}
```

### Risk Level Detection

```javascript
const riskResult = perspectiveResults.find(p => p.perspective === 'risk')
if (riskResult?.risk_level === 'high' || riskResult?.risk_level === 'critical') {
  synthesis.risk_flags.push({
    level: riskResult.risk_level,
    description: riskResult.weaknesses.join('; ')
  })
}
```
## Fallback Strategy

### CLI Failure Fallback

```javascript
// If CLI analysis fails for a perspective, fall back to direct Claude analysis
try {
  // CLI execution
  Bash({ command: `ccw cli -p "..." --tool ${cliTool} --mode analysis`, run_in_background: true })
} catch (error) {
  // Fallback: direct Claude analysis
  const fallbackResult = {
    perspective: perspective,
    strengths: ["Direct analysis: ..."],
    weaknesses: ["Direct analysis: ..."],
    suggestions: ["Direct analysis: ..."],
    rating: 3,
    _fallback: true
  }
  perspectiveResults.push(fallbackResult)
}
```

### All CLI Failures

```javascript
if (perspectiveResults.every(r => r._fallback)) {
  // Generate basic discussion from direct reading
  const basicDiscussion = {
    convergent_themes: ["Basic analysis from direct reading"],
    divergent_views: [],
    action_items: ["Review artifact manually"],
    open_questions: [],
    decisions: [],
    risk_flags: [],
    overall_sentiment: 'neutral',
    consensus_reached: true,
    _basic_mode: true
  }
}
```
## Output Format

Each perspective produces:

```json
{
  "perspective": "product|technical|quality|risk|coverage",
  "strengths": ["string"],
  "weaknesses": ["string"],
  "suggestions": ["string"],
  "rating": 1-5,

  // Risk perspective only
  "risk_level": "low|medium|high|critical",

  // Coverage perspective only
  "covered_requirements": ["REQ-ID"],
  "partial_requirements": ["REQ-ID"],
  "missing_requirements": ["REQ-ID"],
  "scope_creep": ["description"]
}
```
## Integration with Phase 4

Phase 4 (Consensus Synthesis) consumes the `perspectiveResults` array to:
1. Extract convergent themes (2+ perspectives agree)
2. Extract divergent views (perspectives conflict)
3. Detect coverage gaps (missing_requirements non-empty)
4. Assess risk flags (high/critical risk_level)
5. Determine consensus_reached (true if no critical divergences)
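These consumption steps can be sketched as a small, self-contained reduction over a `perspectiveResults` array. This is an illustrative sketch using the structures shown earlier (`synthesizeConsensus` is a hypothetical helper name, not part of the skill's API), not the shipped Phase 4 implementation:

```javascript
// Minimal sketch of the Phase 4 consumption steps.
function synthesizeConsensus(perspectiveResults) {
  const synthesis = {
    convergent_themes: [],
    divergent_views: [],
    risk_flags: [],
    consensus_reached: true
  }

  // Step 1: convergent themes = strengths mentioned by 2+ perspectives
  const counts = {}
  for (const p of perspectiveResults) {
    for (const s of p.strengths || []) counts[s] = (counts[s] || 0) + 1
  }
  synthesis.convergent_themes = Object.keys(counts).filter(s => counts[s] >= 2)

  // Step 3: coverage gaps become high-severity divergences
  const coverage = perspectiveResults.find(p => p.perspective === 'coverage')
  if (coverage?.missing_requirements?.length > 0) {
    synthesis.divergent_views.push({
      topic: 'requirement_coverage_gap',
      severity: 'high',
      source: 'coverage'
    })
  }

  // Step 4: high/critical risk_level raises a risk flag
  const risk = perspectiveResults.find(p => p.perspective === 'risk')
  if (risk && ['high', 'critical'].includes(risk.risk_level)) {
    synthesis.risk_flags.push({ level: risk.risk_level })
  }

  // Step 5: any high-severity divergence blocks consensus
  synthesis.consensus_reached =
    !synthesis.divergent_views.some(d => d.severity === 'high')
  return synthesis
}
```

Step 2 (divergent views from conflicting ratings) is omitted here for brevity; the real implementation also compares per-perspective ratings.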
# Role: discussant

Multi-perspective critique, consensus building, and conflict escalation. The key differentiator of the spec team workflow: ensuring quality feedback at each phase transition.

## Role Identity

- **Name**: `discussant`
- **Task Prefix**: `DISCUSS-*`
- **Output Tag**: `[discussant]`
- **Responsibility**: Load Artifact → Multi-Perspective Critique → Synthesize Consensus → Report
- **Communication**: SendMessage to coordinator only

## Role Boundaries

### MUST
- Only process DISCUSS-* tasks
- Communicate only with coordinator
- Write discussion records to `discussions/` folder
- Tag all SendMessage and team_msg calls with `[discussant]`
- Load roundConfig with all 6 rounds
- Execute multi-perspective critique via CLI tools
- Detect coverage gaps from coverage perspective
- Synthesize consensus with convergent/divergent analysis
- Report consensus_reached vs discussion_blocked paths

### MUST NOT
- Create tasks
- Contact other workers directly
- Modify spec documents directly
- Skip perspectives defined in roundConfig
- Proceed without artifact loading
- Ignore critical divergences

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `discussion_ready` | discussant → coordinator | Discussion complete, consensus reached | With discussion record path and decision summary |
| `discussion_blocked` | discussant → coordinator | Cannot reach consensus | With divergence points and options, needs coordinator |
| `impl_progress` | discussant → coordinator | Long discussion progress | Multi-perspective analysis progress |
| `error` | discussant → coordinator | Discussion cannot proceed | Input artifact missing, etc. |

## Message Bus

Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log:

```javascript
// Discussion complete
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "discussant", to: "coordinator", type: "discussion_ready", summary: "[discussant] Scope discussion consensus reached: 3 decisions", ref: `${sessionFolder}/discussions/discuss-001-scope.md` })

// Discussion blocked
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "discussant", to: "coordinator", type: "discussion_blocked", summary: "[discussant] Cannot reach consensus on tech stack", data: { reason: "...", options: [...] } })

// Error report
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "discussant", to: "coordinator", type: "error", summary: "[discussant] Input artifact missing" })
```

### CLI Fallback

When `mcp__ccw-tools__team_msg` MCP is unavailable:

```javascript
Bash(`ccw team log --team "${teamName}" --from "discussant" --to "coordinator" --type "discussion_ready" --summary "[discussant] Discussion complete" --ref "${sessionFolder}/discussions/discuss-001-scope.md" --json`)
```

## Discussion Dimension Model

Each discussion round analyzes from 5 perspectives:

| Perspective | Focus | Representative | CLI Tool |
|-------------|-------|----------------|----------|
| **Product** | Market fit, user value, business viability, competitive differentiation | Product Manager | gemini |
| **Technical** | Feasibility, tech debt, performance, security, maintainability | Tech Lead | codex |
| **Quality** | Completeness, testability, consistency, standards compliance | QA Lead | claude |
| **Risk** | Risk identification, dependency analysis, assumption validation, failure modes | Risk Analyst | gemini |
| **Coverage** | Requirement completeness vs original intent, scope drift, gap detection | Requirements Analyst | gemini |
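The table's perspective-to-tool routing can be expressed as a small lookup. This is an illustrative sketch (the `toolForPerspective` helper is hypothetical; the skill inlines this mapping in its switch statement):

```javascript
// Which CLI tool analyzes each perspective, mirroring the table above.
const PERSPECTIVE_TOOLS = {
  product: 'gemini',
  technical: 'codex',
  quality: 'claude',
  risk: 'gemini',
  coverage: 'gemini'
}

function toolForPerspective(perspective) {
  const tool = PERSPECTIVE_TOOLS[perspective]
  if (!tool) throw new Error(`Unknown perspective: ${perspective}`)
  return tool
}
```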
## Discussion Round Configuration

| Round | Artifact | Key Perspectives | Focus |
|-------|----------|-----------------|-------|
| DISCUSS-001 | discovery-context | product + risk + **coverage** | Scope confirmation, direction, initial coverage check |
| DISCUSS-002 | product-brief | product + technical + quality + **coverage** | Positioning, feasibility, requirement coverage |
| DISCUSS-003 | requirements | quality + product + **coverage** | Completeness, priority, gap detection |
| DISCUSS-004 | architecture | technical + risk | Tech choices, security |
| DISCUSS-005 | epics | product + technical + quality + **coverage** | MVP scope, estimation, requirement tracing |
| DISCUSS-006 | readiness-report | all 5 perspectives | Final sign-off |
## Toolbox

### Available Commands
- `commands/critique.md` - Multi-perspective CLI critique (Phase 3)

### Subagent Capabilities
None (discussant uses CLI tools directly)

### CLI Capabilities
- **gemini**: Product perspective, Risk perspective, Coverage perspective
- **codex**: Technical perspective
- **claude**: Quality perspective

## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('DISCUSS-') &&
  t.owner === 'discussant' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Artifact Loading

```javascript
const sessionMatch = task.description.match(/Session:\s*(.+)/)
const sessionFolder = sessionMatch ? sessionMatch[1].trim() : ''
const roundMatch = task.subject.match(/DISCUSS-(\d+)/)
const roundNumber = roundMatch ? parseInt(roundMatch[1]) : 0

const roundConfig = {
  1: { artifact: 'spec/discovery-context.json', type: 'json', outputFile: 'discuss-001-scope.md', perspectives: ['product', 'risk', 'coverage'], label: 'Scope Discussion' },
  2: { artifact: 'spec/product-brief.md', type: 'md', outputFile: 'discuss-002-brief.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Brief Review' },
  3: { artifact: 'spec/requirements/_index.md', type: 'md', outputFile: 'discuss-003-requirements.md', perspectives: ['quality', 'product', 'coverage'], label: 'Requirements Discussion' },
  4: { artifact: 'spec/architecture/_index.md', type: 'md', outputFile: 'discuss-004-architecture.md', perspectives: ['technical', 'risk'], label: 'Architecture Discussion' },
  5: { artifact: 'spec/epics/_index.md', type: 'md', outputFile: 'discuss-005-epics.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Epics Discussion' },
  6: { artifact: 'spec/readiness-report.md', type: 'md', outputFile: 'discuss-006-final.md', perspectives: ['product', 'technical', 'quality', 'risk', 'coverage'], label: 'Final Sign-off' }
}

const config = roundConfig[roundNumber]
// Load target artifact and prior discussion records for continuity
Bash(`mkdir -p ${sessionFolder}/discussions`)
```
### Phase 3: Multi-Perspective Critique

**Delegate to**: `Read("commands/critique.md")`

Launch parallel CLI analyses for each required perspective. See `commands/critique.md` for full implementation.

### Phase 4: Consensus Synthesis

```javascript
const synthesis = {
  convergent_themes: [],
  divergent_views: [],
  action_items: [],
  open_questions: [],
  decisions: [],
  risk_flags: [],
  overall_sentiment: '', // positive/neutral/concerns/critical
  consensus_reached: true // false if major unresolvable conflicts
}

// Extract convergent themes (items mentioned positively by 2+ perspectives)
// Extract divergent views (items where perspectives conflict)
// Check coverage gaps from coverage perspective (if present)
const coverageResult = perspectiveResults.find(p => p.perspective === 'coverage')
if (coverageResult?.missing_requirements?.length > 0) {
  synthesis.coverage_gaps = coverageResult.missing_requirements
  synthesis.divergent_views.push({
    topic: 'requirement_coverage_gap',
    description: `${coverageResult.missing_requirements.length} requirements from discovery-context not covered: ${coverageResult.missing_requirements.join(', ')}`,
    severity: 'high',
    source: 'coverage'
  })
}

// Check for unresolvable conflicts
const criticalDivergences = synthesis.divergent_views.filter(d => d.severity === 'high')
if (criticalDivergences.length > 0) synthesis.consensus_reached = false

// Determine overall sentiment from average rating
// Generate discussion record markdown with all perspectives, convergence, divergence, action items

Write(`${sessionFolder}/discussions/${config.outputFile}`, discussionRecord)
```
### Phase 5: Report to Coordinator

```javascript
if (synthesis.consensus_reached) {
  mcp__ccw-tools__team_msg({
    operation: "log", team: teamName,
    from: "discussant", to: "coordinator",
    type: "discussion_ready",
    summary: `[discussant] ${config.label} discussion complete: ${synthesis.action_items.length} action items, ${synthesis.open_questions.length} open questions, overall ${synthesis.overall_sentiment}`,
    ref: `${sessionFolder}/discussions/${config.outputFile}`
  })

  SendMessage({
    type: "message",
    recipient: "coordinator",
    content: `[discussant] ## Discussion Result: ${config.label}

**Task**: ${task.subject}
**Consensus**: Reached
**Overall Assessment**: ${synthesis.overall_sentiment}

### Action Items (${synthesis.action_items.length})
${synthesis.action_items.map((item, i) => (i+1) + '. ' + item).join('\n') || 'None'}

### Open Questions (${synthesis.open_questions.length})
${synthesis.open_questions.map((q, i) => (i+1) + '. ' + q).join('\n') || 'None'}

### Discussion Record
${sessionFolder}/discussions/${config.outputFile}

Consensus reached; ready to advance to the next phase.`,
    summary: `[discussant] ${config.label} consensus reached: ${synthesis.action_items.length} action items`
  })

  TaskUpdate({ taskId: task.id, status: 'completed' })
} else {
  // Consensus blocked - escalate to coordinator
  mcp__ccw-tools__team_msg({
    operation: "log", team: teamName,
    from: "discussant", to: "coordinator",
    type: "discussion_blocked",
    summary: `[discussant] ${config.label} discussion blocked: ${criticalDivergences.length} critical divergences need decisions`,
    data: {
      reason: criticalDivergences.map(d => d.description).join('; '),
      options: criticalDivergences.map(d => ({ label: d.topic, description: d.options?.join(' vs ') || d.description }))
    }
  })

  SendMessage({
    type: "message",
    recipient: "coordinator",
    content: `[discussant] ## Discussion Blocked: ${config.label}

**Task**: ${task.subject}
**Status**: Consensus not reached; coordinator intervention required

### Critical Divergences
${criticalDivergences.map((d, i) => (i+1) + '. **' + d.topic + '**: ' + d.description).join('\n\n')}

Collect the user's decisions on these divergence points via AskUserQuestion.`,
    summary: `[discussant] ${config.label} blocked: ${criticalDivergences.length} divergences`
  })
  // Keep task in_progress, wait for coordinator resolution
}

// Check for next DISCUSS task → back to Phase 1
```
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No DISCUSS-* tasks available | Idle, wait for coordinator assignment |
| Target artifact not found | Notify coordinator with `[discussant]` tag, request prerequisite completion |
| CLI perspective analysis failure | Fall back to direct Claude analysis for that perspective |
| All CLI analyses fail | Generate basic discussion from direct reading |
| Consensus timeout (all perspectives diverge) | Escalate as discussion_blocked with `[discussant]` tag |
| Prior discussion records missing | Continue without continuity context |
| Session folder not found | Notify coordinator with `[discussant]` tag, request session path |
| Unexpected error | Log error via team_msg with `[discussant]` tag, report to coordinator |
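A minimal sketch of how this table can be dispatched in code; the scenario keys and the `resolveError` helper are hypothetical names for illustration, not part of the skill's contract:

```javascript
// Map an error scenario to the resolution described in the table above.
function resolveError(scenario) {
  switch (scenario) {
    case 'no_tasks':             return { action: 'idle' }
    case 'artifact_missing':     return { action: 'notify_coordinator', tag: '[discussant]' }
    case 'cli_perspective_fail': return { action: 'fallback_direct_analysis' }
    case 'all_cli_fail':         return { action: 'basic_discussion' }
    case 'consensus_timeout':    return { action: 'escalate', type: 'discussion_blocked' }
    default:                     return { action: 'report_error', tag: '[discussant]' }
  }
}
```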
# Implement Command

## Purpose
Multi-backend code implementation with progress tracking and batch execution support.

## Execution Paths

### Path 1: Simple Task + Agent Backend (Direct Edit)

**Criteria**:
```javascript
function isSimpleTask(task) {
  return task.description.length < 200 &&
    !task.description.includes("refactor") &&
    !task.description.includes("architecture") &&
    !task.description.includes("multiple files")
}
```

**Execution**:
```javascript
if (isSimpleTask(task) && executor === "agent") {
  // Direct file edit without subagent overhead
  const targetFile = task.metadata?.target_file
  if (targetFile) {
    const content = Read(targetFile)
    const prompt = buildExecutionPrompt(task, plan, [task])

    // Apply edit directly
    Edit(targetFile, oldContent, newContent)

    return {
      success: true,
      files_modified: [targetFile],
      method: "direct_edit"
    }
  }
}
```

### Path 2: Agent Backend (code-developer subagent)

**Execution**:
```javascript
if (executor === "agent") {
  const prompt = buildExecutionPrompt(task, plan, [task])

  const result = Subagent({
    type: "code-developer",
    prompt: prompt,
    run_in_background: false // Synchronous execution
  })

  return {
    success: result.success,
    files_modified: result.files_modified || [],
    method: "subagent"
  }
}
```

### Path 3: Codex Backend (CLI)

**Execution**:
```javascript
if (executor === "codex") {
  const prompt = buildExecutionPrompt(task, plan, [task])

  team_msg({
    to: "coordinator",
    type: "progress_update",
    task_id: task.task_id,
    status: "executing_codex",
    message: "Starting Codex implementation..."
  }, "[executor]")

  const result = Bash(
    `ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --cd ${task.metadata?.working_dir || "."}`,
    { run_in_background: true, timeout: 300000 }
  )

  // Wait for CLI completion via hook callback
  return {
    success: true,
    files_modified: [], // Will be detected by git diff
    method: "codex_cli"
  }
}
```

### Path 4: Gemini Backend (CLI)

**Execution**:
```javascript
if (executor === "gemini") {
  const prompt = buildExecutionPrompt(task, plan, [task])

  team_msg({
    to: "coordinator",
    type: "progress_update",
    task_id: task.task_id,
    status: "executing_gemini",
    message: "Starting Gemini implementation..."
  }, "[executor]")

  const result = Bash(
    `ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --cd ${task.metadata?.working_dir || "."}`,
    { run_in_background: true, timeout: 300000 }
  )

  // Wait for CLI completion via hook callback
  return {
    success: true,
    files_modified: [], // Will be detected by git diff
    method: "gemini_cli"
  }
}
```
## Prompt Building

### Single Task Prompt

```javascript
function buildExecutionPrompt(task, plan, tasks) {
  const context = extractContextFromPlan(plan, task)

  return `
# Implementation Task: ${task.task_id}

## Task Description
${task.description}

## Acceptance Criteria
${task.acceptance_criteria?.map((c, i) => `${i + 1}. ${c}`).join("\n") || "None specified"}

## Context from Plan
${context}

## Files to Modify
${task.metadata?.target_files?.join("\n") || "Auto-detect based on task"}

## Constraints
- Follow existing code style and patterns
- Preserve backward compatibility
- Add appropriate error handling
- Include inline comments for complex logic
- Update related tests if applicable

## Expected Output
- Modified files with implementation
- Brief summary of changes made
- Any assumptions or decisions made during implementation
`.trim()
}
```

### Batch Task Prompt

```javascript
function buildBatchPrompt(tasks, plan) {
  const taskDescriptions = tasks.map((task, i) => `
### Task ${i + 1}: ${task.task_id}
**Description**: ${task.description}
**Acceptance Criteria**:
${task.acceptance_criteria?.map((c, j) => `  ${j + 1}. ${c}`).join("\n") || "  None specified"}
**Target Files**: ${task.metadata?.target_files?.join(", ") || "Auto-detect"}
`).join("\n")

  return `
# Batch Implementation: ${tasks.length} Tasks

## Tasks to Implement
${taskDescriptions}

## Context from Plan
${extractContextFromPlan(plan, tasks[0])}

## Batch Execution Guidelines
- Implement tasks in the order listed
- Ensure each task's acceptance criteria are met
- Maintain consistency across all implementations
- Report any conflicts or dependencies discovered
- Follow existing code patterns and style

## Expected Output
- All tasks implemented successfully
- Summary of changes per task
- Any cross-task considerations or conflicts
`.trim()
}
```

### Context Extraction

```javascript
function extractContextFromPlan(plan, task) {
  // Extract relevant sections from plan
  const sections = []

  // Architecture context
  const archMatch = plan.match(/## Architecture[\s\S]*?(?=##|$)/)
  if (archMatch) {
    sections.push("### Architecture\n" + archMatch[0])
  }

  // Technical stack
  const techMatch = plan.match(/## Technical Stack[\s\S]*?(?=##|$)/)
  if (techMatch) {
    sections.push("### Technical Stack\n" + techMatch[0])
  }

  // Related tasks context
  const taskSection = plan.match(new RegExp(`${task.task_id}[\\s\\S]*?(?=IMPL-\\d+|$)`))
  if (taskSection) {
    sections.push("### Task Context\n" + taskSection[0])
  }

  return sections.join("\n\n") || "No additional context available"
}
```
## Progress Tracking

### Batch Progress Updates

```javascript
function reportBatchProgress(batchIndex, totalBatches, currentTask) {
  if (totalBatches > 1) {
    team_msg({
      to: "coordinator",
      type: "progress_update",
      batch_index: batchIndex + 1,
      total_batches: totalBatches,
      current_task: currentTask.task_id,
      message: `Processing batch ${batchIndex + 1}/${totalBatches}: ${currentTask.task_id}`
    }, "[executor]")
  }
}
```

### Long-Running Task Updates

```javascript
function reportLongRunningTask(task, elapsedSeconds) {
  if (elapsedSeconds > 60 && elapsedSeconds % 30 === 0) {
    team_msg({
      to: "coordinator",
      type: "progress_update",
      task_id: task.task_id,
      elapsed_seconds: elapsedSeconds,
      message: `Still processing ${task.task_id} (${elapsedSeconds}s elapsed)...`
    }, "[executor]")
  }
}
```
## Utility Functions

### Prompt Escaping

```javascript
function escapePrompt(prompt) {
  return prompt
    .replace(/\\/g, "\\\\")
    .replace(/"/g, '\\"')
    .replace(/\n/g, "\\n")
    .replace(/\$/g, "\\$")
}
```

### File Change Detection

```javascript
function detectModifiedFiles() {
  const gitDiff = Bash("git diff --name-only HEAD")
  return gitDiff.stdout.split("\n").filter(f => f.trim())
}
```

### Simple Task Detection

```javascript
function isSimpleTask(task) {
  const simpleIndicators = [
    task.description.length < 200,
    !task.description.toLowerCase().includes("refactor"),
    !task.description.toLowerCase().includes("architecture"),
    !task.description.toLowerCase().includes("multiple files"),
    !task.description.toLowerCase().includes("complex"),
    task.metadata?.target_files?.length === 1
  ]

  return simpleIndicators.filter(Boolean).length >= 4
}
```
## Error Recovery

### Retry Logic

```javascript
function executeWithRetry(task, executor, maxRetries = 3) {
  let attempt = 0
  let lastError = null

  while (attempt < maxRetries) {
    try {
      const result = executeTask(task, executor)
      if (result.success) {
        return result
      }
      lastError = result.error
    } catch (error) {
      lastError = error.message
    }

    attempt++
    if (attempt < maxRetries) {
      team_msg({
        to: "coordinator",
        type: "progress_update",
        task_id: task.task_id,
        message: `Retry attempt ${attempt}/${maxRetries} after error: ${lastError}`
      }, "[executor]")
    }
  }

  return {
    success: false,
    error: lastError,
    retry_count: maxRetries
  }
}
```

### Backend Fallback

```javascript
function executeWithFallback(task, primaryExecutor) {
  const result = executeTask(task, primaryExecutor)

  if (!result.success && primaryExecutor !== "agent") {
    team_msg({
      to: "coordinator",
      type: "progress_update",
      task_id: task.task_id,
      message: `${primaryExecutor} failed, falling back to agent backend...`
    }, "[executor]")

    return executeTask(task, "agent")
  }

  return result
}
```
# Executor Role

## 1. Role Identity

- **Name**: executor
- **Task Prefix**: IMPL-*
- **Output Tag**: `[executor]`
- **Responsibility**: Load plan → Route to backend → Implement code → Self-validate → Report

## 2. Role Boundaries

### MUST
- Only process IMPL-* tasks
- Follow approved plan exactly
- Use declared execution backends (agent/codex/gemini)
- Self-validate all implementations (syntax + acceptance criteria)
- Tag all outputs with `[executor]`

### MUST NOT
- Create tasks
- Contact other workers directly
- Modify plan files
- Skip self-validation
- Proceed without plan approval

## 3. Message Types

| Type | Direction | Purpose | Format |
|------|-----------|---------|--------|
| `task_request` | FROM coordinator | Receive IMPL-* task assignment | `{ type: "task_request", task_id, description }` |
| `task_complete` | TO coordinator | Report implementation success | `{ type: "task_complete", task_id, status: "success", files_modified, validation_results }` |
| `task_failed` | TO coordinator | Report implementation failure | `{ type: "task_failed", task_id, error, retry_count }` |
| `progress_update` | TO coordinator | Report batch progress | `{ type: "progress_update", task_id, batch_index, total_batches }` |
## 4. Message Bus
|
||||
|
||||
**Primary**: Use `team_msg` for all coordinator communication with `[executor]` tag:
|
||||
```javascript
|
||||
team_msg({
|
||||
to: "coordinator",
|
||||
type: "task_complete",
|
||||
task_id: "IMPL-001",
|
||||
status: "success",
|
||||
files_modified: ["src/auth.ts"],
|
||||
validation_results: { syntax: "pass", acceptance: "pass" }
|
||||
}, "[executor]")
|
||||
```
|
||||
|
||||
**CLI Fallback**: When message bus unavailable, write to `.workflow/.team/messages/executor-{timestamp}.json`
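
A minimal sketch of what that fallback write might produce. The helper names below are assumptions for illustration, not part of the skill API; the real role would write the file via its `Write` tool.

```javascript
// Hypothetical helpers: build the fallback message path and payload the
// executor would write when the message bus is unavailable.
function fallbackMessagePath(role, date = new Date()) {
  // ISO timestamp with ":" and "." replaced so it is filesystem-safe
  const stamp = date.toISOString().replace(/[:.]/g, "-")
  return `.workflow/.team/messages/${role}-${stamp}.json`
}

function fallbackMessagePayload(msg, tag) {
  return JSON.stringify({ ...msg, tag, logged_at: new Date().toISOString() }, null, 2)
}

const p = fallbackMessagePath("executor", new Date(0))
// p → ".workflow/.team/messages/executor-1970-01-01T00-00-00-000Z.json"
```

The coordinator can later drain this directory in timestamp order to recover messages sent while the bus was down.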

## 5. Toolbox

### Available Commands

- `commands/implement.md` - Multi-backend code implementation with progress tracking

### Subagent Capabilities

- `code-developer` - Synchronous agent execution for simple tasks and the agent backend

### CLI Capabilities

- `ccw cli --tool codex --mode write` - Codex backend implementation
- `ccw cli --tool gemini --mode write` - Gemini backend implementation

## 6. Execution (5-Phase)

### Phase 1: Task & Plan Loading

**Task Discovery**:

```javascript
const tasks = Glob(".workflow/.team/tasks/IMPL-*.json")
  .filter(task => task.status === "pending" && task.assigned_to === "executor")
```

**Plan Path Extraction**:

```javascript
const planPath = task.metadata?.plan_path || ".workflow/plan.md"
const plan = Read(planPath)
```

**Execution Backend Resolution**:

```javascript
function resolveExecutor(task, plan) {
  // Priority 1: Task-level override
  if (task.metadata?.executor) {
    return task.metadata.executor // "agent" | "codex" | "gemini"
  }

  // Priority 2: Plan-level default
  const planMatch = plan.match(/Execution Backend:\s*(agent|codex|gemini)/i)
  if (planMatch) {
    return planMatch[1].toLowerCase()
  }

  // Priority 3: Auto-select based on task complexity
  const isSimple = task.description.length < 200 &&
    !task.description.includes("refactor") &&
    !task.description.includes("architecture")

  return isSimple ? "agent" : "codex" // Default: codex for complex, agent for simple
}
```

**Code Review Resolution**:

```javascript
function resolveCodeReview(task, plan) {
  // Priority 1: Task-level override
  if (task.metadata?.code_review !== undefined) {
    return task.metadata.code_review // boolean
  }

  // Priority 2: Plan-level default
  const reviewMatch = plan.match(/Code Review:\s*(enabled|disabled)/i)
  if (reviewMatch) {
    return reviewMatch[1].toLowerCase() === "enabled"
  }

  // Priority 3: Default based on task type
  const criticalKeywords = ["auth", "security", "payment", "api", "database"]
  const isCritical = criticalKeywords.some(kw =>
    task.description.toLowerCase().includes(kw)
  )

  return isCritical // Enable review for critical paths
}
```

### Phase 2: Task Grouping

**Dependency-Based Batching**:

```javascript
function createBatches(tasks, plan) {
  // Extract dependencies from plan
  const dependencies = new Map()
  const depRegex = /IMPL-(\d+).*depends on.*IMPL-(\d+)/gi
  let match
  while ((match = depRegex.exec(plan)) !== null) {
    const [_, taskId, depId] = match
    if (!dependencies.has(`IMPL-${taskId}`)) {
      dependencies.set(`IMPL-${taskId}`, [])
    }
    dependencies.get(`IMPL-${taskId}`).push(`IMPL-${depId}`)
  }

  // Topological sort for execution order
  const batches = []
  const completed = new Set()
  const remaining = new Set(tasks.map(t => t.task_id))

  while (remaining.size > 0) {
    const batch = []

    for (const taskId of remaining) {
      const deps = dependencies.get(taskId) || []
      const depsCompleted = deps.every(dep => completed.has(dep))

      if (depsCompleted) {
        batch.push(tasks.find(t => t.task_id === taskId))
      }
    }

    if (batch.length === 0) {
      // Circular dependency detected
      throw new Error(`Circular dependency detected in remaining tasks: ${[...remaining].join(", ")}`)
    }

    batches.push(batch)
    batch.forEach(task => {
      completed.add(task.task_id)
      remaining.delete(task.task_id)
    })
  }

  return batches
}
```
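
As a concrete illustration, here is a self-contained copy of the batching logic run against a small invented plan (task ids and the plan text are made up for the example):

```javascript
// Self-contained copy of createBatches with sample input, showing how the
// dependency lines in the plan turn into sequential batches.
function createBatches(tasks, plan) {
  const dependencies = new Map()
  const depRegex = /IMPL-(\d+).*depends on.*IMPL-(\d+)/gi
  let match
  while ((match = depRegex.exec(plan)) !== null) {
    const [, taskId, depId] = match
    if (!dependencies.has(`IMPL-${taskId}`)) dependencies.set(`IMPL-${taskId}`, [])
    dependencies.get(`IMPL-${taskId}`).push(`IMPL-${depId}`)
  }
  const batches = []
  const completed = new Set()
  const remaining = new Set(tasks.map(t => t.task_id))
  while (remaining.size > 0) {
    const batch = []
    for (const taskId of remaining) {
      const deps = dependencies.get(taskId) || []
      if (deps.every(dep => completed.has(dep))) {
        batch.push(tasks.find(t => t.task_id === taskId))
      }
    }
    if (batch.length === 0) throw new Error("Circular dependency")
    batches.push(batch)
    batch.forEach(t => { completed.add(t.task_id); remaining.delete(t.task_id) })
  }
  return batches
}

const samplePlan = "IMPL-002 depends on IMPL-001\nIMPL-003 depends on IMPL-001"
const sampleTasks = ["IMPL-001", "IMPL-002", "IMPL-003"].map(id => ({ task_id: id }))
const batches = createBatches(sampleTasks, samplePlan)
// Batch 1 holds only IMPL-001; IMPL-002 and IMPL-003 can run together in batch 2.
```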

### Phase 3: Code Implementation

**Delegate to Command**:

```javascript
const implementCommand = Read("commands/implement.md")
// Command handles:
// - buildExecutionPrompt (context + acceptance criteria)
// - buildBatchPrompt (multi-task batching)
// - 4 execution paths: simple+agent, agent, codex, gemini
// - Progress updates via team_msg
```

### Phase 4: Self-Validation

**Syntax Check**:

```javascript
const syntaxCheck = Bash("tsc --noEmit", { timeout: 30000 })
const syntaxPass = syntaxCheck.exitCode === 0
```

**Acceptance Criteria Verification**:

```javascript
function verifyAcceptance(task, implementation) {
  const criteria = task.acceptance_criteria || []
  const results = criteria.map(criterion => {
    // Simple keyword matching for automated verification
    const keywords = criterion.toLowerCase().match(/\b\w+\b/g) || []
    const matched = keywords.some(kw =>
      implementation.toLowerCase().includes(kw)
    )
    return { criterion, matched, status: matched ? "pass" : "manual_review" }
  })

  const allPassed = results.every(r => r.status === "pass")
  return { allPassed, results }
}
```

**Test File Detection**:

```javascript
function findAffectedTests(modifiedFiles) {
  const testFiles = []

  for (const file of modifiedFiles) {
    const baseName = file.replace(/\.(ts|js|tsx|jsx)$/, "")
    const testVariants = [
      `${baseName}.test.ts`,
      `${baseName}.test.js`,
      `${baseName}.spec.ts`,
      `${baseName}.spec.js`,
      // Mirrored locations use baseName so the source extension is stripped
      // (e.g. src/auth.ts → tests/auth.test.ts, not tests/auth.ts.test.ts)
      `${baseName.replace(/^src\//, "tests/")}.test.ts`,
      `${baseName.replace(/^src\//, "__tests__/")}.test.ts`
    ]

    for (const variant of testVariants) {
      if (Bash(`test -f ${variant}`).exitCode === 0) {
        testFiles.push(variant)
      }
    }
  }

  return testFiles
}
```
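
For illustration, the candidate list generated for one modified file can be isolated from the `Bash` existence check (the helper name is an assumption for the example):

```javascript
// Hypothetical helper mirroring the variant list in findAffectedTests,
// without the filesystem check, so the path generation is visible on its own.
function testCandidates(file) {
  const baseName = file.replace(/\.(ts|js|tsx|jsx)$/, "")
  return [
    `${baseName}.test.ts`,
    `${baseName}.test.js`,
    `${baseName}.spec.ts`,
    `${baseName}.spec.js`,
    `${baseName.replace(/^src\//, "tests/")}.test.ts`,
    `${baseName.replace(/^src\//, "__tests__/")}.test.ts`
  ]
}

const candidates = testCandidates("src/auth.ts")
// candidates[0] → "src/auth.test.ts"; mirrored variants include
// "tests/auth.test.ts" and "__tests__/auth.test.ts"
```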

**Optional Code Review**:

```javascript
const codeReviewEnabled = resolveCodeReview(task, plan)

if (codeReviewEnabled) {
  const executor = resolveExecutor(task, plan)

  if (executor === "gemini") {
    // Gemini review: use the Gemini CLI in analysis mode
    const reviewResult = Bash(
      `ccw cli -p "Review implementation for: ${task.description}. Check: code quality, security, architecture compliance." --tool gemini --mode analysis`,
      { run_in_background: true }
    )
  } else if (executor === "codex") {
    // Codex review: use the Codex CLI review mode
    const reviewResult = Bash(
      `ccw cli --tool codex --mode review --uncommitted`,
      { run_in_background: true }
    )
  }

  // Wait for review results and append to validation
}
```

### Phase 5: Report to Coordinator

**Success Report**:

```javascript
team_msg({
  to: "coordinator",
  type: "task_complete",
  task_id: task.task_id,
  status: "success",
  files_modified: modifiedFiles,
  validation_results: {
    syntax: syntaxPass ? "pass" : "fail",
    acceptance: acceptanceResults.allPassed ? "pass" : "manual_review",
    tests_found: affectedTests.length,
    code_review: codeReviewEnabled ? "completed" : "skipped"
  },
  execution_backend: executor,
  timestamp: new Date().toISOString()
}, "[executor]")
```

**Failure Report**:

```javascript
team_msg({
  to: "coordinator",
  type: "task_failed",
  task_id: task.task_id,
  error: errorMessage,
  retry_count: task.retry_count || 0,
  validation_results: {
    syntax: syntaxPass ? "pass" : "fail",
    acceptance: "not_verified"
  },
  timestamp: new Date().toISOString()
}, "[executor]")
```

## 7. Error Handling

| Error Type | Recovery Strategy | Escalation |
|------------|-------------------|------------|
| Syntax errors | Retry with error context (max 3 attempts) | Report to coordinator after 3 failures |
| Missing dependencies | Request dependency resolution from coordinator | Immediate escalation |
| Backend unavailable | Fall back to agent backend | Report backend switch |
| Validation failure | Include validation details in report | Manual review required |
| Circular dependencies | Abort batch, report dependency graph | Immediate escalation |
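
The retry and backend-fallback strategies in the table can be combined. A minimal sketch, assuming `executeTask` is injected as a parameter (an assumption for the example; in the role it dispatches to agent/codex/gemini):

```javascript
// Hedged sketch: retry the chosen backend, then fall back to the "agent"
// backend once if it never succeeds. Not the skill's actual implementation.
function runWithRecovery(task, backend, executeTask, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    const result = executeTask(task, backend)
    if (result.success) return { ...result, backend, attempts: attempt }
  }
  if (backend !== "agent") {
    // Primary backend exhausted its retries: switch to agent once.
    const fallback = executeTask(task, "agent")
    return { ...fallback, backend: "agent", fell_back: true }
  }
  return { success: false, backend, attempts: maxRetries }
}

// A codex backend that always fails forces the agent fallback:
let calls = 0
const fake = (task, backend) => { calls++; return { success: backend === "agent" } }
const outcome = runWithRecovery({ task_id: "IMPL-001" }, "codex", fake)
// outcome.success === true, outcome.fell_back === true, calls === 4
```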

## 8. Execution Backends

| Backend | Tool | Invocation | Mode | Use Case |
|---------|------|------------|------|----------|
| **agent** | code-developer | Subagent call (synchronous) | N/A | Simple tasks, direct edits |
| **codex** | ccw cli | `ccw cli --tool codex --mode write` | write | Complex tasks, architecture changes |
| **gemini** | ccw cli | `ccw cli --tool gemini --mode write` | write | Alternative backend, analysis-heavy tasks |

**Backend Selection Logic**:

1. Task metadata override → Use specified backend
2. Plan default → Use plan-level backend
3. Auto-select → Simple tasks use agent, complex use codex
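
The three-step order can be restated compactly (a self-contained copy of `resolveExecutor` from Phase 1, with invented task and plan data):

```javascript
// Compact restatement of the selection order; sample values are made up.
function selectBackend(task, plan) {
  if (task.metadata?.executor) return task.metadata.executor        // 1. override
  const m = plan.match(/Execution Backend:\s*(agent|codex|gemini)/i)
  if (m) return m[1].toLowerCase()                                  // 2. plan default
  const isSimple = task.description.length < 200 &&
    !task.description.includes("refactor") &&
    !task.description.includes("architecture")
  return isSimple ? "agent" : "codex"                               // 3. auto-select
}

const samplePlan = "Execution Backend: gemini"
const a = selectBackend({ metadata: { executor: "codex" }, description: "x" }, samplePlan) // "codex"
const b = selectBackend({ metadata: {}, description: "x" }, samplePlan)                    // "gemini"
const c = selectBackend({ metadata: {}, description: "small fix" }, "no default here")     // "agent"
```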
# Explorer Role

Dedicated code search and pattern discovery. A service role, invoked on demand by analyst/planner/executor/discussant.

## 1. Role Identity

- **Name**: explorer
- **Task Prefix**: EXPLORE-*
- **Output Tag**: `[explorer]`
- **Role Type**: Service (invoked on demand; not a fixed stop in the main pipeline)
- **Responsibility**: Parse request → Multi-strategy search → Dependency trace → Package results → Report

## 2. Role Boundaries

### MUST

- Only process EXPLORE-* tasks
- Output structured JSON for downstream consumption
- Use priority-ordered search strategies (ACE → Grep → cli-explore-agent)
- Tag all outputs with `[explorer]`
- Cache results in `{session}/explorations/` for cross-role reuse

### MUST NOT

- Create tasks
- Contact other workers directly
- Modify any source code files
- Execute analysis, planning, or implementation
- Make architectural decisions (only discover patterns)

## 3. Message Types

| Type | Direction | Purpose | Format |
|------|-----------|---------|--------|
| `explore_ready` | TO coordinator | Search complete | `{ type: "explore_ready", task_id, file_count, pattern_count, output_path }` |
| `explore_progress` | TO coordinator | Multi-angle progress | `{ type: "explore_progress", task_id, angle, status }` |
| `task_failed` | TO coordinator | Search failure | `{ type: "task_failed", task_id, error, fallback_used }` |

## 4. Message Bus

**Primary**: Use `team_msg` for all coordinator communication with the `[explorer]` tag:

```javascript
team_msg({
  to: "coordinator",
  type: "explore_ready",
  task_id: "EXPLORE-001",
  file_count: 15,
  pattern_count: 3,
  output_path: `${sessionFolder}/explorations/explore-001.json`
}, "[explorer]")
```

**CLI Fallback**: When the message bus is unavailable:

```bash
ccw team log --team "${teamName}" --from "explorer" --to "coordinator" --type "explore_ready" --summary "[explorer] 15 files, 3 patterns" --json
```

## 5. Toolbox

### Available Commands

- None (inline execution; the search logic is straightforward)

### Search Tools (priority order)

| Tool | Priority | Use Case |
|------|----------|----------|
| `mcp__ace-tool__search_context` | P0 | Semantic code search |
| `Grep` / `Glob` | P1 | Pattern matching, file discovery |
| `Read` | P1 | File content reading |
| `Bash` (rg, find) | P2 | Structured search fallback |
| `WebSearch` | P3 | External docs/best practices |

### Subagent Capabilities

- `cli-explore-agent` — Deep multi-angle codebase exploration

## 6. Execution (5-Phase)

### Phase 1: Task Discovery & Request Parsing

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('EXPLORE-') &&
  t.owner === 'explorer' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (myTasks.length === 0) return
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })

// Parse structured request from task description
const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim()
const exploreMode = task.description.match(/Mode:\s*([^\n]+)/)?.[1]?.trim() || 'codebase'
const angles = (task.description.match(/Angles:\s*([^\n]+)/)?.[1] || 'general').split(',').map(a => a.trim())
const keywords = (task.description.match(/Keywords:\s*([^\n]+)/)?.[1] || '').split(',').map(k => k.trim()).filter(Boolean)
const requester = task.description.match(/Requester:\s*([^\n]+)/)?.[1]?.trim() || 'coordinator'

const outputDir = sessionFolder ? `${sessionFolder}/explorations` : '.workflow/.tmp'
Bash(`mkdir -p "${outputDir}"`)
```

### Phase 2: Multi-Strategy Search

```javascript
const findings = {
  relevant_files: [],  // { path, rationale, role, discovery_source, key_symbols }
  patterns: [],        // { name, description, files }
  dependencies: [],    // { file, imports[] }
  external_refs: [],   // { keyword, results[] }
  _metadata: { angles, mode: exploreMode, requester, timestamp: new Date().toISOString() }
}

// === Strategy 1: ACE Semantic Search (P0) ===
if (exploreMode !== 'external') {
  for (const kw of keywords) {
    try {
      // MCP tool invocation, shown as pseudocode
      const results = mcp__ace-tool__search_context({ project_root_path: '.', query: kw })
      // Deduplicate and add to findings.relevant_files with discovery_source: 'ace-search'
    } catch { /* ACE unavailable, fall through */ }
  }
}

// === Strategy 2: Grep Pattern Scan (P1) ===
if (exploreMode !== 'external') {
  for (const kw of keywords) {
    // Find imports/exports/definitions
    const defResults = Grep({
      pattern: `(class|function|const|export|interface|type)\\s+.*${kw}`,
      glob: '*.{ts,tsx,js,jsx,py,go,rs}',
      '-n': true, output_mode: 'content'
    })
    // Add to findings with discovery_source: 'grep-scan'
  }
}

// === Strategy 3: Dependency Tracing ===
if (exploreMode !== 'external') {
  for (const file of findings.relevant_files.slice(0, 10)) {
    try {
      const content = Read(file.path)
      const imports = (content.match(/from\s+['"]([^'"]+)['"]/g) || [])
        .map(i => i.match(/['"]([^'"]+)['"]/)?.[1]).filter(Boolean)
      if (imports.length > 0) {
        findings.dependencies.push({ file: file.path, imports })
      }
    } catch {}
  }
}

// === Strategy 4: Deep Exploration (multi-angle, via cli-explore-agent) ===
if (angles.length > 1 && exploreMode !== 'external') {
  for (const angle of angles) {
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Explore: ${angle}`,
      prompt: `## Exploration: ${angle} angle
Keywords: ${keywords.join(', ')}

## Steps
1. rg -l "${keywords[0]}" --type-add 'code:*.{ts,tsx,js,py,go,rs}' --type code
2. Read .workflow/project-tech.json (if exists)
3. Focus on ${angle} perspective

## Output
Write to: ${outputDir}/exploration-${angle}.json
Schema: { relevant_files[], patterns[], dependencies[] }`
    })
    // Merge angle results into main findings
    try {
      const angleData = JSON.parse(Read(`${outputDir}/exploration-${angle}.json`))
      findings.relevant_files.push(...(angleData.relevant_files || []))
      findings.patterns.push(...(angleData.patterns || []))
    } catch {}
  }
}

// === Strategy 5: External Search (P3) ===
if (exploreMode === 'external' || exploreMode === 'hybrid') {
  for (const kw of keywords.slice(0, 3)) {
    try {
      const results = WebSearch({ query: `${kw} best practices documentation` })
      findings.external_refs.push({ keyword: kw, results })
    } catch {}
  }
}

// Deduplicate relevant_files by path
const seen = new Set()
findings.relevant_files = findings.relevant_files.filter(f => {
  if (seen.has(f.path)) return false
  seen.add(f.path)
  return true
})
```
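
The path-based deduplication at the end keeps the first hit per file, so higher-priority strategies win. A self-contained illustration with sample findings:

```javascript
// Same filter as above, extracted into a function; sample entries invented.
function dedupeByPath(files) {
  const seen = new Set()
  return files.filter(f => {
    if (seen.has(f.path)) return false
    seen.add(f.path)
    return true
  })
}

const merged = dedupeByPath([
  { path: "src/auth.ts", discovery_source: "ace-search" },
  { path: "src/auth.ts", discovery_source: "grep-scan" }, // duplicate path, dropped
  { path: "src/session.ts", discovery_source: "grep-scan" }
])
// merged keeps the ace-search entry for src/auth.ts plus src/session.ts
```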

### Phase 3: Wisdom Contribution

```javascript
// If the wisdom directory exists, contribute discovered patterns
if (sessionFolder) {
  try {
    const conventionsPath = `${sessionFolder}/wisdom/conventions.md`
    const existing = Read(conventionsPath)
    if (findings.patterns.length > 0) {
      const newPatterns = findings.patterns
        .map(p => `- ${p.name}: ${p.description || ''}`)
        .join('\n')
      Edit({
        file_path: conventionsPath,
        old_string: '<!-- explorer-patterns -->',
        new_string: `<!-- explorer-patterns -->\n${newPatterns}`
      })
    }
  } catch {} // wisdom not initialized
}
```

### Phase 4: Package Results

```javascript
const outputPath = `${outputDir}/explore-${task.subject.replace(/[^a-zA-Z0-9-]/g, '-').toLowerCase()}.json`
Write(outputPath, JSON.stringify(findings, null, 2))
```

### Phase 5: Report to Coordinator

```javascript
const summary = `${findings.relevant_files.length} files, ${findings.patterns.length} patterns, ${findings.dependencies.length} deps`

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "explorer", to: "coordinator",
  type: "explore_ready",
  summary: `[explorer] EXPLORE complete: ${summary}`,
  ref: outputPath
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `[explorer] ## Exploration Results

**Task**: ${task.subject}
**Mode**: ${exploreMode} | **Angles**: ${angles.join(', ')} | **Requester**: ${requester}

### Files: ${findings.relevant_files.length}
${findings.relevant_files.slice(0, 8).map(f => `- \`${f.path}\` (${f.role}) — ${f.rationale}`).join('\n')}

### Patterns: ${findings.patterns.length}
${findings.patterns.slice(0, 5).map(p => `- ${p.name}: ${p.description || ''}`).join('\n') || 'None'}

### Output: ${outputPath}`,
  summary: `[explorer] ${summary}`
})

TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next EXPLORE task → back to Phase 1
```

## 7. Coordinator Integration

Explorer is a service role; the coordinator creates EXPLORE-* tasks on demand in scenarios such as:

| Trigger | Task Example | Requester |
|---------|-------------|-----------|
| RESEARCH-001 needs codebase context | `EXPLORE-001: codebase context search` | analyst |
| PLAN-001 needs multi-angle exploration | `EXPLORE-002: implementation-related code exploration` | planner |
| DISCUSS-004 needs external best practices | `EXPLORE-003: external documentation search` | discussant |
| IMPL-001 hits unfamiliar code | `EXPLORE-004: dependency tracing` | executor |

**Task Description Template**:

```
Search description

Session: {sessionFolder}
Mode: codebase|external|hybrid
Angles: architecture,patterns,dependencies
Keywords: auth,middleware,session
Requester: analyst
```
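
These fields are what explorer's Phase 1 regexes extract. A self-contained sketch of that parsing (the function name and sample description are invented for the example):

```javascript
// Sketch of the Phase 1 field extraction, using the same `Name: value`
// line regexes; returns the same defaults as the role code above.
function parseExploreRequest(description) {
  const field = name => description.match(new RegExp(`${name}:\\s*([^\\n]+)`))?.[1]?.trim()
  return {
    session: field("Session"),
    mode: field("Mode") || "codebase",
    angles: (field("Angles") || "general").split(",").map(a => a.trim()),
    keywords: (field("Keywords") || "").split(",").map(k => k.trim()).filter(Boolean),
    requester: field("Requester") || "coordinator"
  }
}

const req = parseExploreRequest(
  "Search auth middleware\n\nSession: .workflow/s1\nMode: codebase\nAngles: architecture,patterns\nKeywords: auth,middleware\nRequester: analyst"
)
// req.angles → ["architecture", "patterns"]; req.requester → "analyst"
```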

## 8. Result Caching

```
{sessionFolder}/explorations/
├── explore-explore-001-*.json       # Consolidated results
├── exploration-architecture.json    # Angle-specific (from cli-explore-agent)
└── exploration-patterns.json
```

Downstream roles can read these cached exploration results directly in their own Phase 2, avoiding duplicate searches.

## 9. Error Handling

| Error Type | Recovery Strategy | Escalation |
|------------|-------------------|------------|
| ACE unavailable | Fall back to Grep + rg | Continue with degraded results |
| cli-explore-agent failure | Fall back to direct search | Report partial results |
| No results found | Report empty, suggest broader keywords | Coordinator decides |
| Web search fails | Skip external refs | Continue with codebase results |
| Session folder missing | Use .workflow/.tmp | Notify coordinator |

# Role: fe-developer

Frontend development. Consumes plan/architecture outputs and implements frontend components, pages, and styling code.

## Role Identity

- **Name**: `fe-developer`
- **Task Prefix**: `DEV-FE-*`
- **Output Tag**: `[fe-developer]`
- **Role Type**: Pipeline (frontend sub-pipeline worker)
- **Responsibility**: Context loading → Design token consumption → Component implementation → Report

## Role Boundaries

### MUST

- Only process tasks with the `DEV-FE-*` prefix
- Tag all outputs with `[fe-developer]`
- Communicate with the coordinator only via SendMessage
- Follow existing design tokens and component specs (when they exist)
- Produce accessibility-compliant frontend code (semantic HTML, ARIA attributes, keyboard navigation)
- Follow the project's existing frontend tech stack and conventions

### MUST NOT

- ❌ Modify backend code or API contracts
- ❌ Communicate with other workers directly
- ❌ Create tasks for other roles
- ❌ Skip design token/spec checks (when they exist)
- ❌ Introduce new frontend dependencies without architecture review

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `dev_fe_complete` | fe-developer → coordinator | Implementation done | Frontend implementation complete |
| `dev_fe_progress` | fe-developer → coordinator | Long task progress | Progress update |
| `error` | fe-developer → coordinator | Implementation failure | Implementation failed |

## Message Bus

**Before** every SendMessage, log the message via `mcp__ccw-tools__team_msg`:

```javascript
mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "fe-developer", to: "coordinator",
  type: "dev_fe_complete",
  summary: "[fe-developer] DEV-FE complete: 3 components, 1 page",
  ref: outputPath
})
```

### CLI Fallback

```javascript
Bash(`ccw team log --team "${teamName}" --from "fe-developer" --to "coordinator" --type "dev_fe_complete" --summary "[fe-developer] DEV-FE complete" --ref "${outputPath}" --json`)
```

## Toolbox

### Available Commands

- None (inline execution — implementation delegated to subagent)

### Subagent Capabilities

| Agent Type | Purpose |
|------------|---------|
| `code-developer` | Component/page code implementation |

### CLI Capabilities

| CLI Tool | Mode | Purpose |
|----------|------|---------|
| `ccw cli --tool gemini --mode write` | write | Frontend code generation |

## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('DEV-FE-') &&
  t.owner === 'fe-developer' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (myTasks.length === 0) return
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```

### Phase 2: Context Loading

```javascript
const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim()

// Load plan context
let plan = null
try { plan = JSON.parse(Read(`${sessionFolder}/plan/plan.json`)) } catch {}

// Load design tokens (if architect produced them)
let designTokens = null
try { designTokens = JSON.parse(Read(`${sessionFolder}/architecture/design-tokens.json`)) } catch {}

// Load design intelligence (from analyst via ui-ux-pro-max)
let designIntel = {}
try { designIntel = JSON.parse(Read(`${sessionFolder}/analysis/design-intelligence.json`)) } catch {}

// Load component specs (if available)
let componentSpecs = []
try {
  const specFiles = Glob({ pattern: `${sessionFolder}/architecture/component-specs/*.md` })
  componentSpecs = specFiles.map(f => ({ path: f, content: Read(f) }))
} catch {}

// Load shared memory (cross-role state)
let sharedMemory = {}
try { sharedMemory = JSON.parse(Read(`${sessionFolder}/shared-memory.json`)) } catch {}

// Load wisdom
let wisdom = {}
if (sessionFolder) {
  try { wisdom.conventions = Read(`${sessionFolder}/wisdom/conventions.md`) } catch {}
  try { wisdom.decisions = Read(`${sessionFolder}/wisdom/decisions.md`) } catch {}
}

// Extract design constraints from design intelligence
const antiPatterns = designIntel.recommendations?.anti_patterns || []
const implementationChecklist = designIntel.design_system?.implementation_checklist || []
const stackGuidelines = designIntel.stack_guidelines || {}

// Detect frontend tech stack
let techStack = {}
try { techStack = JSON.parse(Read('.workflow/project-tech.json')) } catch {}
const feTech = detectFrontendStack(techStack)
// Override with design intelligence detection if available
if (designIntel.detected_stack) {
  const diStack = designIntel.detected_stack
  if (['react', 'nextjs', 'vue', 'svelte', 'nuxt'].includes(diStack)) feTech.framework = diStack
}

function detectFrontendStack(tech) {
  const deps = tech?.dependencies || {}
  const stack = { framework: 'html', styling: 'css', ui_lib: null }
  if (deps.react || deps['react-dom']) stack.framework = 'react'
  if (deps.vue) stack.framework = 'vue'
  if (deps.svelte) stack.framework = 'svelte'
  if (deps.next) stack.framework = 'nextjs'
  if (deps.nuxt) stack.framework = 'nuxt'
  if (deps.tailwindcss) stack.styling = 'tailwind'
  if (deps['@shadcn/ui'] || deps['shadcn-ui']) stack.ui_lib = 'shadcn'
  if (deps['@mui/material']) stack.ui_lib = 'mui'
  if (deps['antd']) stack.ui_lib = 'antd'
  return stack
}
```

### Phase 3: Frontend Implementation

#### Step 1: Generate Design Token CSS (if tokens available)

```javascript
// Note: parentheses added around the Scope check; without them `&&` binds
// tighter than `||`, so token CSS could be emitted even when designTokens is null.
if (designTokens && (task.description.includes('Scope: tokens') || task.description.includes('Scope: full'))) {
  // Convert design-tokens.json to CSS custom properties
  let cssVars = ':root {\n'

  // Colors
  if (designTokens.color) {
    for (const [name, token] of Object.entries(designTokens.color)) {
      const value = typeof token.$value === 'object' ? token.$value.light : token.$value
      cssVars += `  --color-${name}: ${value};\n`
    }
  }

  // Typography
  if (designTokens.typography?.['font-family']) {
    for (const [name, token] of Object.entries(designTokens.typography['font-family'])) {
      const value = Array.isArray(token.$value) ? token.$value.join(', ') : token.$value
      cssVars += `  --font-${name}: ${value};\n`
    }
  }
  if (designTokens.typography?.['font-size']) {
    for (const [name, token] of Object.entries(designTokens.typography['font-size'])) {
      cssVars += `  --text-${name}: ${token.$value};\n`
    }
  }

  // Spacing, border-radius, shadow, transition
  for (const category of ['spacing', 'border-radius', 'shadow', 'transition']) {
    const prefix = { spacing: 'space', 'border-radius': 'radius', shadow: 'shadow', transition: 'duration' }[category]
    if (designTokens[category]) {
      for (const [name, token] of Object.entries(designTokens[category])) {
        cssVars += `  --${prefix}-${name}: ${token.$value};\n`
      }
    }
  }

  cssVars += '}\n'

  // Dark mode overrides
  if (designTokens.color) {
    const darkOverrides = Object.entries(designTokens.color)
      .filter(([, token]) => typeof token.$value === 'object' && token.$value.dark)
    if (darkOverrides.length > 0) {
      cssVars += '\n@media (prefers-color-scheme: dark) {\n  :root {\n'
      for (const [name, token] of darkOverrides) {
        cssVars += `    --color-${name}: ${token.$value.dark};\n`
      }
      cssVars += '  }\n}\n'
    }
  }

  Bash(`mkdir -p src/styles`)
  Write('src/styles/tokens.css', cssVars)
}
```

#### Step 2: Implement Components

```javascript
const taskId = task.subject.match(/DEV-FE-(\d+)/)?.[0]
const taskDetail = plan?.task_ids?.includes(taskId)
  ? JSON.parse(Read(`${sessionFolder}/plan/.task/${taskId}.json`))
  : { title: task.subject, description: task.description, files: [] }

const isSimple = (taskDetail.files || []).length <= 3 &&
  !task.description.includes('system') &&
  !task.description.includes('多组件')

if (isSimple) {
  Task({
    subagent_type: "code-developer",
    run_in_background: false,
    description: `Frontend implementation: ${taskDetail.title}`,
    prompt: `## Frontend Implementation

Task: ${taskDetail.title}
Description: ${taskDetail.description}

${designTokens ? `## Design Tokens\nImport from: src/styles/tokens.css\nUse CSS custom properties (var(--color-primary), var(--space-md), etc.)\n${JSON.stringify(designTokens, null, 2).substring(0, 1000)}` : ''}
${componentSpecs.length > 0 ? `## Component Specs\n${componentSpecs.map(s => s.content.substring(0, 500)).join('\n---\n')}` : ''}

## Tech Stack
- Framework: ${feTech.framework}
- Styling: ${feTech.styling}
${feTech.ui_lib ? `- UI Library: ${feTech.ui_lib}` : ''}

## Stack-Specific Guidelines
${JSON.stringify(stackGuidelines, null, 2).substring(0, 500)}

## Implementation Checklist (MUST verify each item)
${implementationChecklist.map(item => `- [ ] ${item}`).join('\n') || '- [ ] Semantic HTML\n- [ ] Keyboard accessible\n- [ ] Responsive layout\n- [ ] Dark mode support'}

## Anti-Patterns to AVOID
${antiPatterns.map(p => `- ❌ ${p}`).join('\n') || 'None specified'}

## Coding Standards
- Use design token CSS variables, never hardcode colors/spacing
- All interactive elements must have cursor: pointer
- Transitions: 150-300ms (use var(--duration-normal))
- Text contrast: minimum 4.5:1 ratio
- Include focus-visible styles for keyboard navigation
- Support prefers-reduced-motion
- Responsive: mobile-first with md/lg breakpoints
- No emoji as functional icons

## Files to modify/create
${(taskDetail.files || []).map(f => `- ${f.path}: ${f.change}`).join('\n') || 'Determine from task description'}

## Conventions
${wisdom.conventions || 'Follow project existing patterns'}`
  })
} else {
  Bash({
    command: `ccw cli -p "PURPOSE: Implement frontend components for '${taskDetail.title}'
TASK: ${taskDetail.description}
MODE: write
CONTEXT: @src/**/*.{tsx,jsx,vue,svelte,css,scss,html} @public/**/*
EXPECTED: Production-ready frontend code with accessibility, responsive design, design token usage
CONSTRAINTS: Framework=${feTech.framework}, Styling=${feTech.styling}${feTech.ui_lib ? ', UI=' + feTech.ui_lib : ''}
ANTI-PATTERNS: ${antiPatterns.join(', ') || 'None'}
CHECKLIST: ${implementationChecklist.join(', ') || 'Semantic HTML, keyboard accessible, responsive, dark mode'}" --tool gemini --mode write --rule development-implement-component-ui`,
    run_in_background: true
  })
}
```

### Phase 4: Self-Validation + Wisdom + Shared Memory

```javascript
// === Self-Validation (pre-QA check) ===
const implementedFiles = Glob({ pattern: 'src/**/*.{tsx,jsx,vue,svelte,html,css}' })
const selfCheck = { passed: [], failed: [] }

for (const file of implementedFiles.slice(0, 20)) {
  try {
    const content = Read(file)

    // Check: no hardcoded colors (hex outside tokens.css)
    if (file !== 'src/styles/tokens.css' && /#[0-9a-fA-F]{3,8}/.test(content)) {
      selfCheck.failed.push({ file, check: 'hardcoded-color', message: 'Hardcoded color — use var(--color-*)' })
    }

    // Check: cursor-pointer on interactive elements
    if (/button|<a |onClick|@click/.test(content) && !/cursor-pointer/.test(content)) {
      selfCheck.failed.push({ file, check: 'cursor-pointer', message: 'Missing cursor-pointer on interactive element' })
    }

    // Check: focus styles
    if (/button|input|select|textarea|<a /.test(content) && !/focus/.test(content)) {
      selfCheck.failed.push({ file, check: 'focus-styles', message: 'Missing focus styles for keyboard navigation' })
    }

    // Check: responsive breakpoints
    if (/className|class=/.test(content) && !/md:|lg:|@media/.test(content) && /\.(tsx|jsx|vue|html)$/.test(file)) {
      selfCheck.failed.push({ file, check: 'responsive', message: 'No responsive breakpoints found' })
    }

    // Check: prefers-reduced-motion for animations
    if (/animation|@keyframes/.test(content) && !/prefers-reduced-motion/.test(content)) {
      selfCheck.failed.push({ file, check: 'reduced-motion', message: 'Animation without prefers-reduced-motion' })
    }

    // Check: emoji as icons
    if (/[\u{1F300}-\u{1F9FF}]/u.test(content)) {
      selfCheck.failed.push({ file, check: 'emoji-icon', message: 'Emoji used as icon — use SVG/icon library' })
    }
  } catch {}
}

// === Wisdom Contribution ===
if (sessionFolder) {
  const timestamp = new Date().toISOString().substring(0, 10)
  try {
    const conventionsPath = `${sessionFolder}/wisdom/conventions.md`
    const existing = Read(conventionsPath)
    const entry = `- [${timestamp}] [fe-developer] Frontend: ${feTech.framework}/${feTech.styling}, component pattern used`
    Write(conventionsPath, existing + '\n' + entry)
  } catch {}
}

// === Update Shared Memory ===
if (sessionFolder) {
  try {
    sharedMemory.component_inventory = implementedFiles.map(f => ({ path: f, status: 'implemented' }))
    Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))
  } catch {}
}
```

### Phase 5: Report to Coordinator

```javascript
const changedFiles = Bash(`git diff --name-only HEAD 2>/dev/null || echo "unknown"`)
  .split('\n').filter(Boolean)
const feFiles = changedFiles.filter(f =>
  /\.(tsx|jsx|vue|svelte|css|scss|html)$/.test(f)
)

const resultStatus = selfCheck.failed.length === 0 ? 'complete' : 'complete_with_warnings'

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "fe-developer", to: "coordinator",
  type: "dev_fe_complete",
  summary: `[fe-developer] DEV-FE complete: ${feFiles.length} files, self-check: ${selfCheck.failed.length} issues`,
  ref: sessionFolder
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `[fe-developer] ## Frontend Implementation Complete

**Task**: ${task.subject}
**Status**: ${resultStatus}
**Framework**: ${feTech.framework} | **Styling**: ${feTech.styling}
**Design Intelligence**: ${designIntel._source || 'not available'}

### Files Modified
${feFiles.slice(0, 10).map(f => `- \`${f}\``).join('\n') || 'See git diff'}

### Design Token Usage
${designTokens ? 'Applied design tokens from architecture → src/styles/tokens.css' : 'No design tokens available — used project defaults'}

### Self-Validation
${selfCheck.failed.length === 0 ? '✅ All checks passed' : `⚠️ ${selfCheck.failed.length} issues:\n${selfCheck.failed.slice(0, 5).map(f => `- [${f.check}] ${f.file}: ${f.message}`).join('\n')}`}

### Accessibility
- Semantic HTML structure
- ARIA attributes applied
- Keyboard navigation supported
- Focus-visible styles included`,
  summary: `[fe-developer] DEV-FE complete: ${feFiles.length} files`
})

TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next DEV-FE task → back to Phase 1
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No DEV-FE-* tasks | Idle, wait for coordinator |
| Design tokens not found | Use project defaults, note in report |
| Component spec missing | Implement from task description only |
| Tech stack undetected | Default to HTML + CSS, ask coordinator |
| Subagent failure | Fallback to CLI write mode |
| Build/lint errors | Report to coordinator for QA-FE review |

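The subagent-failure row maps to a simple try/catch dispatch. A minimal sketch, where `runSubagent` and `runCliFallback` are hypothetical stand-ins for the `Task()` and background `Bash()` calls used in Step 2:

```javascript
// Degrade from the interactive subagent to the CLI write pipeline on failure.
// The callables are injected so the fallback policy is testable in isolation.
function implementWithFallback(taskDetail, runSubagent, runCliFallback) {
  try {
    return { via: 'subagent', result: runSubagent(taskDetail) }
  } catch (err) {
    // Any subagent error triggers the background CLI fallback.
    return { via: 'cli', result: runCliFallback(taskDetail), error: String(err) }
  }
}

const ok = implementWithFallback({ title: 'Button' }, () => 'done', () => 'cli-done')
const fb = implementWithFallback(
  { title: 'Button' },
  () => { throw new Error('subagent unavailable') },
  () => 'cli-done'
)
```
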
@@ -1,116 +0,0 @@
# Command: pre-delivery-checklist

> CSS-level precision checklist for final delivery, merging the ui-ux-pro-max Pre-Delivery Checklist with the ux-guidelines Do/Don't rules.

## When to Use

- Phase 3 of fe-qa role, Dimension 5: Pre-Delivery
- Final review or code-review type tasks

## Strategy

### Delegation Mode

**Mode**: Direct (inline pattern matching in fe-qa Phase 3)

## Checklist Items

### Accessibility

| # | Check | Pattern | Severity | Do | Don't |
|---|-------|---------|----------|-----|-------|
| 1 | Images have alt text | `<img` without `alt=` | CRITICAL | Always provide descriptive alt text | Leave alt empty without role="presentation" |
| 2 | Form inputs have labels | `<input` without `<label`/`aria-label` | HIGH | Associate every input with a label | Use placeholder as sole label |
| 3 | Focus states visible | Interactive elements without `focus` styles | HIGH | Add focus-visible outline | Remove default focus ring without replacement |
| 4 | Color contrast 4.5:1 | Light text on light background | HIGH | Ensure 4.5:1 minimum ratio | Use low-contrast decorative text for content |
| 5 | prefers-reduced-motion | Animations without media query | MEDIUM | Wrap in @media (prefers-reduced-motion: no-preference) | Force animations on all users |
| 6 | Heading hierarchy | Skipped heading levels (h1→h3) | MEDIUM | Use sequential heading levels | Skip levels for visual sizing |

### Interaction

| # | Check | Pattern | Severity | Do | Don't |
|---|-------|---------|----------|-----|-------|
| 7 | cursor-pointer on clickable | Buttons/links without cursor-pointer | MEDIUM | Add cursor: pointer to all clickable elements | Leave default cursor |
| 8 | Transitions 150-300ms | Duration outside range | LOW | Use 150-300ms for micro-interactions | Use >500ms or <100ms transitions |
| 9 | Loading states | Async ops without loading indicator | MEDIUM | Show skeleton/spinner during fetch | Leave blank screen while loading |
| 10 | Error states | Async ops without error handling | HIGH | Show user-friendly error message | Silently fail or show raw error |

### Design Compliance

| # | Check | Pattern | Severity | Do | Don't |
|---|-------|---------|----------|-----|-------|
| 11 | No hardcoded colors | Hex values outside tokens.css | HIGH | Use var(--color-*) tokens | Hardcode #hex values |
| 12 | No hardcoded spacing | px values for margin/padding | MEDIUM | Use var(--space-*) tokens | Hardcode pixel values |
| 13 | No emoji as icons | Unicode emoji in UI | HIGH | Use proper SVG/icon library | Use emoji for functional icons |
| 14 | Dark mode support | No prefers-color-scheme | MEDIUM | Support light/dark themes | Design for light mode only |

### Layout

| # | Check | Pattern | Severity | Do | Don't |
|---|-------|---------|----------|-----|-------|
| 15 | Responsive breakpoints | No md:/lg:/@media | MEDIUM | Mobile-first responsive design | Desktop-only layout |
| 16 | No horizontal scroll | Fixed widths > viewport | HIGH | Use relative/fluid widths | Set fixed pixel widths on containers |

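Each row above reduces to a regex predicate over file content. Checklist item #1 as a standalone sketch (simplified: a real reviewer also handles multi-line tags and files mixing compliant and non-compliant images):

```javascript
// Item #1: flag markup containing an <img> tag with no alt= attribute.
const missingAlt = (html) => /<img\s/.test(html) && !/<img\s[^>]*alt=/.test(html)

const bad  = missingAlt('<div><img src="hero.png"></div>')        // true: no alt
const good = missingAlt('<img src="hero.png" alt="Hero banner">') // false
```
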
## Execution

```javascript
function runPreDeliveryChecklist(fileContents) {
  const results = { passed: 0, failed: 0, items: [] }

  const checks = [
    { id: 1, check: "Images have alt text", test: (c) => /<img\s/.test(c) && !/<img\s[^>]*alt=/.test(c), severity: 'CRITICAL' },
    { id: 7, check: "cursor-pointer on clickable", test: (c) => /button|onClick/.test(c) && !/cursor-pointer/.test(c), severity: 'MEDIUM' },
    { id: 11, check: "No hardcoded colors", test: (c, f) => f !== 'src/styles/tokens.css' && /#[0-9a-fA-F]{6}/.test(c), severity: 'HIGH' },
    { id: 13, check: "No emoji as icons", test: (c) => /[\u{1F300}-\u{1F9FF}]/u.test(c), severity: 'HIGH' },
    { id: 14, check: "Dark mode support", test: (c) => !/prefers-color-scheme|dark:|\.dark/.test(c), severity: 'MEDIUM', global: true },
    { id: 15, check: "Responsive breakpoints", test: (c) => !/md:|lg:|@media.*min-width/.test(c), severity: 'MEDIUM', global: true }
  ]

  // Per-file checks
  for (const [file, content] of Object.entries(fileContents)) {
    for (const check of checks.filter(c => !c.global)) {
      if (check.test(content, file)) {
        results.failed++
        results.items.push({ ...check, file, status: 'FAIL' })
      } else {
        results.passed++
        results.items.push({ ...check, file, status: 'PASS' })
      }
    }
  }

  // Global checks (across all content)
  const allContent = Object.values(fileContents).join('\n')
  for (const check of checks.filter(c => c.global)) {
    if (check.test(allContent)) {
      results.failed++
      results.items.push({ ...check, file: 'global', status: 'FAIL' })
    } else {
      results.passed++
      results.items.push({ ...check, file: 'global', status: 'PASS' })
    }
  }

  return results
}
```

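A trimmed, self-contained variant of the function shows the per-file vs. global split with one check of each kind; the full check set lives in the tables above:

```javascript
// Minimal runPreDeliveryChecklist: one per-file check, one global check.
function runChecklist(fileContents) {
  const results = { passed: 0, failed: 0, items: [] }
  const checks = [
    { id: 11, check: 'No hardcoded colors',
      test: (c, f) => f !== 'src/styles/tokens.css' && /#[0-9a-fA-F]{6}/.test(c) },
    { id: 14, check: 'Dark mode support',
      test: (c) => !/prefers-color-scheme|dark:|\.dark/.test(c), global: true }
  ]
  // Per-file checks run once per (file, check) pair.
  for (const [file, content] of Object.entries(fileContents)) {
    for (const check of checks.filter(c => !c.global)) {
      const fail = check.test(content, file)
      results[fail ? 'failed' : 'passed']++
      results.items.push({ id: check.id, file, status: fail ? 'FAIL' : 'PASS' })
    }
  }
  // Global checks run once over the concatenated content.
  const all = Object.values(fileContents).join('\n')
  for (const check of checks.filter(c => c.global)) {
    const fail = check.test(all)
    results[fail ? 'failed' : 'passed']++
    results.items.push({ id: check.id, file: 'global', status: fail ? 'FAIL' : 'PASS' })
  }
  return results
}

const res = runChecklist({
  'src/styles/tokens.css': ':root { --color-primary: #0055ff; }',
  'src/App.tsx': 'export const App = () => <div className="dark:bg-black">hi</div>'
})
```

Hex values inside `tokens.css` are exempt by design: that file is where the tokens themselves are defined.
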
## Output Format

```
## Pre-Delivery Checklist Results
- Passed: X / Y
- Failed: Z

### Failed Items
- [CRITICAL] #1 Images have alt text — src/components/Hero.tsx
- [HIGH] #11 No hardcoded colors — src/styles/custom.css
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No files to check | Report empty checklist, score 10/10 |
| File read error | Skip file, note in report |
| Regex error | Skip check, note in report |

@@ -1,510 +0,0 @@
# Role: fe-qa

Frontend quality assurance. A 5-dimension code review plus a Generator-Critic loop ensures frontend code quality. Merges the ui-ux-pro-max Pre-Delivery Checklist, the ux-guidelines Do/Don't rules, and the industry anti-pattern library.

## Role Identity

- **Name**: `fe-qa`
- **Task Prefix**: `QA-FE-*`
- **Output Tag**: `[fe-qa]`
- **Role Type**: Pipeline (frontend sub-pipeline worker)
- **Responsibility**: Context loading → Multi-dimension review → GC feedback → Report

## Role Boundaries

### MUST
- Handle only tasks with the `QA-FE-*` prefix
- Tag all output with the `[fe-qa]` identifier
- Communicate with the coordinator only via SendMessage
- Run the 5-dimension review (code quality, accessibility, design compliance, UX best practices, pre-delivery)
- Provide actionable fix suggestions (Do/Don't format)
- Support the Generator-Critic loop (at most 2 rounds)
- Load design-intelligence.json for industry anti-pattern checks

### MUST NOT
- ❌ Modify source code directly (provide review feedback only)
- ❌ Communicate directly with other workers
- ❌ Create tasks for other roles
- ❌ Skip accessibility checks
- ❌ Mark a review as passed when the score is below threshold

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `qa_fe_passed` | fe-qa → coordinator | All dimensions pass | Frontend QA passed |
| `qa_fe_result` | fe-qa → coordinator | Review complete (may have issues) | Review result (with issues) |
| `fix_required` | fe-qa → coordinator | Critical issues found | fe-developer fix required |
| `error` | fe-qa → coordinator | Review failure | Review failed |

## Message Bus

```javascript
mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "fe-qa", to: "coordinator",
  type: "qa_fe_result",
  summary: "[fe-qa] QA-FE: score=8.5, 0 critical, 2 medium",
  ref: outputPath
})
```

### CLI Fallback

```javascript
Bash(`ccw team log --team "${teamName}" --from "fe-qa" --to "coordinator" --type "qa_fe_result" --summary "[fe-qa] QA-FE complete" --json`)
```

## Toolbox

### Available Commands
- [commands/pre-delivery-checklist.md](commands/pre-delivery-checklist.md) — CSS-level precision delivery checks

### CLI Capabilities

| CLI Tool | Mode | Purpose |
|----------|------|---------|
| `ccw cli --tool gemini --mode analysis` | analysis | Frontend code review |
| `ccw cli --tool codex --mode review` | review | Git-aware code review |

## Review Dimensions

| Dimension | Weight | Source | Focus |
|-----------|--------|--------|-------|
| Code Quality | 25% | Standard code review | TypeScript type safety, component structure, state management, error handling |
| Accessibility | 25% | ux-guidelines rules | Semantic HTML, ARIA, keyboard navigation, color contrast, focus-visible, prefers-reduced-motion |
| Design Compliance | 20% | design-intelligence.json | Design token usage, industry anti-patterns, emoji checks, spacing/typography consistency |
| UX Best Practices | 15% | ux-guidelines Do/Don't | Loading states, error states, empty states, cursor-pointer, responsiveness, animation duration |
| Pre-Delivery | 15% | Pre-Delivery Checklist | Dark mode, no console.log, no hardcoded values, i18n readiness, must-have checks |

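The weights above combine into the overall score as a weighted sum. A minimal sketch with illustrative per-dimension scores (the weights sum to 1.0, so a perfect review scores 10):

```javascript
// Weighted overall score, matching the dimension table.
const dims = [
  { name: 'code-quality',      weight: 0.25, score: 9 },
  { name: 'accessibility',     weight: 0.25, score: 8 },
  { name: 'design-compliance', weight: 0.20, score: 10 },
  { name: 'ux-practices',      weight: 0.15, score: 7 },
  { name: 'pre-delivery',      weight: 0.15, score: 9 }
]
const overall = dims.reduce((sum, d) => sum + d.score * d.weight, 0)
// 9*0.25 + 8*0.25 + 10*0.20 + 7*0.15 + 9*0.15 = 8.65
```
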
## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('QA-FE-') &&
  t.owner === 'fe-qa' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (myTasks.length === 0) return
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```

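The claim filter above reduces to a pure predicate over the task list. A sketch with illustrative task data:

```javascript
// Select claimable tasks: matching prefix and owner, pending, and unblocked.
const claimable = (tasks, prefix, owner) => tasks.filter(t =>
  t.subject.startsWith(prefix) &&
  t.owner === owner &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

const picked = claimable([
  { id: 1, subject: 'QA-FE-001',  owner: 'fe-qa',        status: 'pending', blockedBy: [] },
  { id: 2, subject: 'QA-FE-002',  owner: 'fe-qa',        status: 'pending', blockedBy: [1] },
  { id: 3, subject: 'DEV-FE-001', owner: 'fe-developer', status: 'pending', blockedBy: [] }
], 'QA-FE-', 'fe-qa')
```

Only the first task qualifies: the second is blocked, the third belongs to a different role.
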
### Phase 2: Context Loading

```javascript
const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim()

// Load design tokens for compliance check
let designTokens = null
try { designTokens = JSON.parse(Read(`${sessionFolder}/architecture/design-tokens.json`)) } catch {}

// Load design intelligence (from analyst via ui-ux-pro-max)
let designIntel = {}
try { designIntel = JSON.parse(Read(`${sessionFolder}/analysis/design-intelligence.json`)) } catch {}

// Load shared memory for industry context + QA history
let sharedMemory = {}
try { sharedMemory = JSON.parse(Read(`${sessionFolder}/shared-memory.json`)) } catch {}

const industryContext = sharedMemory.industry_context || {}
const antiPatterns = designIntel.recommendations?.anti_patterns || []
const mustHave = designIntel.recommendations?.must_have || []

// Determine audit strictness from industry (standard / strict for medical/financial)
const strictness = industryContext.config?.strictness || 'standard'

// Load component specs
let componentSpecs = []
try {
  const specFiles = Glob({ pattern: `${sessionFolder}/architecture/component-specs/*.md` })
  componentSpecs = specFiles.map(f => ({ path: f, content: Read(f) }))
} catch {}

// Load previous QA results (for GC loop tracking)
let previousQA = []
try {
  const qaFiles = Glob({ pattern: `${sessionFolder}/qa/audit-fe-*.json` })
  previousQA = qaFiles.map(f => JSON.parse(Read(f)))
} catch {}

// Determine GC round
const gcRound = previousQA.filter(q => q.task_subject === task.subject).length + 1
const maxGCRounds = 2

// Get changed frontend files
const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached 2>/dev/null || echo ""`)
  .split('\n').filter(f => /\.(tsx|jsx|vue|svelte|css|scss|html|ts|js)$/.test(f))

// Read file contents for review
const fileContents = {}
for (const file of changedFiles.slice(0, 30)) {
  try { fileContents[file] = Read(file) } catch {}
}
```

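The GC-round bookkeeping above counts prior audits for the same task subject. As a standalone sketch:

```javascript
// GC round = number of previous audit records for this subject, plus one.
const gcRoundFor = (previousQA, subject) =>
  previousQA.filter(q => q.task_subject === subject).length + 1

const r1 = gcRoundFor([], 'QA-FE-001')
const r2 = gcRoundFor(
  [{ task_subject: 'QA-FE-001' }, { task_subject: 'QA-FE-002' }],
  'QA-FE-001'
)
```

The per-task audit files written in Phase 4 (`audit-fe-*-r${gcRound}.json`) are what feed this counter on the next pass.
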
### Phase 3: 5-Dimension Review

```javascript
const review = {
  task_subject: task.subject,
  gc_round: gcRound,
  timestamp: new Date().toISOString(),
  dimensions: [],
  issues: [],
  overall_score: 0,
  verdict: 'PENDING'
}

// === Dimension 1: Code Quality (25%) ===
const codeQuality = { name: 'code-quality', weight: 0.25, score: 10, issues: [] }
for (const [file, content] of Object.entries(fileContents)) {
  if (/:\s*any\b/.test(content)) {
    codeQuality.issues.push({ file, severity: 'medium', issue: 'Using `any` type', fix: 'Replace with specific type', do: 'Define proper TypeScript types', dont: 'Use `any` to bypass type checking' })
    codeQuality.score -= 1.5
  }
  if (/\.tsx$/.test(file) && /export/.test(content) && !/ErrorBoundary/.test(content) && /throw/.test(content)) {
    codeQuality.issues.push({ file, severity: 'low', issue: 'No error boundary for component with throw', fix: 'Wrap with ErrorBoundary' })
    codeQuality.score -= 0.5
  }
  if (/style=\{?\{/.test(content) && designTokens) {
    codeQuality.issues.push({ file, severity: 'medium', issue: 'Inline styles detected', fix: 'Use design tokens or CSS classes', do: 'Use var(--color-*) tokens', dont: 'Hardcode style values inline' })
    codeQuality.score -= 1.5
  }
  // Matches both `catch {}` and `catch (e) {}` with an empty body
  if (/catch\s*(\([^)]*\))?\s*\{\s*\}/.test(content)) {
    codeQuality.issues.push({ file, severity: 'high', issue: 'Empty catch block', fix: 'Add error handling logic', do: 'Log or handle the error', dont: 'Silently swallow exceptions' })
    codeQuality.score -= 2
  }
  if (content.split('\n').length > 300) {
    codeQuality.issues.push({ file, severity: 'medium', issue: 'File exceeds 300 lines', fix: 'Split into smaller modules' })
    codeQuality.score -= 1
  }
}
codeQuality.score = Math.max(0, codeQuality.score)
review.dimensions.push(codeQuality)

// === Dimension 2: Accessibility (25%) ===
const a11y = { name: 'accessibility', weight: 0.25, score: 10, issues: [] }
for (const [file, content] of Object.entries(fileContents)) {
  if (!/\.(tsx|jsx|vue|svelte|html)$/.test(file)) continue

  if (/<img\s/.test(content) && !/<img\s[^>]*alt=/.test(content)) {
    a11y.issues.push({ file, severity: 'high', issue: 'Image missing alt attribute', fix: 'Add descriptive alt text', do: 'Always provide alt text', dont: 'Leave alt empty without role="presentation"' })
    a11y.score -= 3
  }
  if (/onClick/.test(content) && !/onKeyDown|onKeyPress|onKeyUp|role=.button/.test(content)) {
    a11y.issues.push({ file, severity: 'medium', issue: 'Click handler without keyboard equivalent', fix: 'Add onKeyDown or role="button" tabIndex={0}' })
    a11y.score -= 1.5
  }
  if (/<input\s/.test(content) && !/<label/.test(content) && !/aria-label/.test(content)) {
    a11y.issues.push({ file, severity: 'high', issue: 'Form input without label', fix: 'Add <label> or aria-label', do: 'Associate every input with a label', dont: 'Use placeholder as sole label' })
    a11y.score -= 2
  }
  if (/<button\s/.test(content) && /<button\s[^>]*>\s*</.test(content) && !/aria-label/.test(content)) {
    a11y.issues.push({ file, severity: 'high', issue: 'Button may lack accessible text (icon-only?)', fix: 'Add aria-label', do: 'Add aria-label for icon-only buttons', dont: 'Use title as sole accessible name' })
    a11y.score -= 2
  }
  // Heading hierarchy
  const headings = content.match(/<h([1-6])/g)?.map(h => parseInt(h[2])) || []
  for (let i = 1; i < headings.length; i++) {
    if (headings[i] - headings[i-1] > 1) {
      a11y.issues.push({ file, severity: 'medium', issue: `Heading level skipped: h${headings[i-1]} → h${headings[i]}`, fix: 'Use sequential heading levels' })
      a11y.score -= 1
    }
  }
  // Focus-visible styles
  if (/button|<a |input|select/.test(content) && !/focus-visible|focus:/.test(content)) {
    a11y.issues.push({ file, severity: 'high', issue: 'Interactive element missing focus styles', fix: 'Add focus-visible outline', do: 'Add focus-visible outline', dont: 'Remove default focus ring without replacement' })
    a11y.score -= 2
  }
  // ARIA role with tabindex
  if (/role="(button|link)"/.test(content) && !/tabindex/.test(content)) {
    a11y.issues.push({ file, severity: 'medium', issue: 'Element with ARIA role may need tabindex', fix: 'Add tabindex={0}' })
    a11y.score -= 1
  }
  // Hardcoded color contrast
  if (/#[0-9a-f]{3,6}/i.test(content) && !/token|theme|var\(--/.test(content)) {
    a11y.issues.push({ file, severity: 'low', issue: 'Hardcoded color — verify contrast ratio', fix: 'Use design tokens for consistent contrast' })
    a11y.score -= 0.5
  }
}

// Strict mode: additional checks for medical/financial
if (strictness === 'strict') {
  for (const [file, content] of Object.entries(fileContents)) {
    if (/animation|transition|@keyframes/.test(content) && !/prefers-reduced-motion/.test(content)) {
      a11y.issues.push({ file, severity: 'high', issue: 'Animation without prefers-reduced-motion', fix: 'Wrap in @media (prefers-reduced-motion: no-preference)', do: 'Respect motion preferences', dont: 'Force animations on all users' })
      a11y.score -= 2
    }
  }
}
a11y.score = Math.max(0, a11y.score)
review.dimensions.push(a11y)

// === Dimension 3: Design Compliance (20%) ===
const designCompliance = { name: 'design-compliance', weight: 0.20, score: 10, issues: [] }
for (const [file, content] of Object.entries(fileContents)) {
  if (file !== 'src/styles/tokens.css' && /#[0-9a-fA-F]{3,8}/.test(content)) {
    const count = (content.match(/#[0-9a-fA-F]{3,8}/g) || []).length
    designCompliance.issues.push({ file, severity: 'high', issue: `${count} hardcoded color(s)`, fix: 'Use var(--color-*) tokens', do: 'Use var(--color-primary)', dont: 'Hardcode #hex values' })
    designCompliance.score -= 2
  }
  if (/margin|padding/.test(content) && /:\s*\d+px/.test(content) && !/var\(--space/.test(content)) {
    designCompliance.issues.push({ file, severity: 'medium', issue: 'Hardcoded spacing', fix: 'Use var(--space-*) tokens', do: 'Use var(--space-md)', dont: 'Hardcode 16px' })
    designCompliance.score -= 1
  }
  if (/font-size:\s*\d+/.test(content) && !/var\(--/.test(content)) {
    designCompliance.issues.push({ file, severity: 'medium', issue: 'Hardcoded font size', fix: 'Use var(--text-*) tokens' })
    designCompliance.score -= 1
  }
  if (/[\u{1F300}-\u{1F9FF}]/u.test(content)) {
    designCompliance.issues.push({ file, severity: 'high', issue: 'Emoji used as functional icon', fix: 'Use SVG/icon library', do: 'Use proper SVG/icon library', dont: 'Use emoji for functional icons' })
    designCompliance.score -= 2
  }
  // Industry anti-patterns from design-intelligence.json
  for (const pattern of antiPatterns) {
    if (typeof pattern === 'string') {
      const pl = pattern.toLowerCase()
      if (pl.includes('gradient') && /gradient/.test(content)) {
        designCompliance.issues.push({ file, severity: 'high', issue: `Industry anti-pattern: ${pattern}` })
        designCompliance.score -= 3
      }
      if (pl.includes('emoji') && /[\u{1F300}-\u{1F9FF}]/u.test(content)) {
        designCompliance.issues.push({ file, severity: 'high', issue: `Industry anti-pattern: ${pattern}` })
        designCompliance.score -= 2
      }
    }
  }
}
if (!designTokens) designCompliance.score = 7
designCompliance.score = Math.max(0, designCompliance.score)
review.dimensions.push(designCompliance)

// === Dimension 4: UX Best Practices (15%) ===
const uxPractices = { name: 'ux-practices', weight: 0.15, score: 10, issues: [] }
for (const [file, content] of Object.entries(fileContents)) {
  // cursor-pointer on clickable (CSS files)
  if (/button|<a |onClick|@click/.test(content) && !/cursor-pointer/.test(content) && /\.(css|scss)$/.test(file)) {
    uxPractices.issues.push({ file, severity: 'medium', issue: 'Missing cursor: pointer on clickable', fix: 'Add cursor: pointer', do: 'Add cursor: pointer to all clickable elements', dont: 'Leave default cursor' })
    uxPractices.score -= 1
  }
  // Transition duration range (150-300ms)
  const durations = content.match(/duration[:-]\s*(\d+)/g) || []
  for (const d of durations) {
    const ms = parseInt(d.match(/\d+/)[0])
    if (ms > 0 && (ms < 100 || ms > 500)) {
      uxPractices.issues.push({ file, severity: 'low', issue: `Transition ${ms}ms outside 150-300ms range`, fix: 'Use 150-300ms for micro-interactions' })
      uxPractices.score -= 0.5
    }
  }
  if (!/\.(tsx|jsx|vue|svelte)$/.test(file)) continue
  // Loading states
  if (/fetch|useQuery|useSWR|axios/.test(content) && !/loading|isLoading|skeleton|spinner/i.test(content)) {
    uxPractices.issues.push({ file, severity: 'medium', issue: 'Data fetching without loading state', fix: 'Add loading indicator', do: 'Show skeleton/spinner during fetch', dont: 'Leave blank screen while loading' })
    uxPractices.score -= 1
  }
  // Error states
  if (/fetch|useQuery|useSWR|axios/.test(content) && !/error|isError|catch/i.test(content)) {
    uxPractices.issues.push({ file, severity: 'high', issue: 'Data fetching without error handling', fix: 'Add error state UI', do: 'Show user-friendly error message', dont: 'Silently fail or show raw error' })
    uxPractices.score -= 2
  }
  // Empty states
  if (/\.map\(/.test(content) && !/empty|no.*data|no.*result|length\s*===?\s*0/i.test(content)) {
    uxPractices.issues.push({ file, severity: 'low', issue: 'List rendering without empty state', fix: 'Add empty state message' })
    uxPractices.score -= 0.5
  }
  // Responsive breakpoints
  if (/className|class=/.test(content) && !/md:|lg:|@media/.test(content)) {
    uxPractices.issues.push({ file, severity: 'medium', issue: 'No responsive breakpoints', fix: 'Mobile-first responsive design', do: 'Mobile-first responsive design', dont: 'Design for desktop only' })
    uxPractices.score -= 1
  }
}
uxPractices.score = Math.max(0, uxPractices.score)
review.dimensions.push(uxPractices)

// === Dimension 5: Pre-Delivery (15%) ===
// Detailed checklist: commands/pre-delivery-checklist.md
const preDelivery = { name: 'pre-delivery', weight: 0.15, score: 10, issues: [] }
const allContent = Object.values(fileContents).join('\n')

// Per-file checks
for (const [file, content] of Object.entries(fileContents)) {
  if (/console\.(log|debug|info)\(/.test(content) && !/test|spec|\.test\./.test(file)) {
    preDelivery.issues.push({ file, severity: 'medium', issue: 'console.log in production code', fix: 'Remove or use proper logger' })
    preDelivery.score -= 1
  }
  if (/\.(tsx|jsx)$/.test(file) && />\s*[A-Z][a-z]+\s+[a-z]+/.test(content) && !/t\(|intl|i18n|formatMessage/.test(content)) {
    preDelivery.issues.push({ file, severity: 'low', issue: 'Hardcoded text — consider i18n', fix: 'Extract to translation keys' })
    preDelivery.score -= 0.5
  }
  if (/TODO|FIXME|HACK|XXX/.test(content)) {
    preDelivery.issues.push({ file, severity: 'low', issue: 'TODO/FIXME comment found', fix: 'Resolve or create issue' })
    preDelivery.score -= 0.5
  }
}

// Global checklist items (from pre-delivery-checklist.md)
const checklist = [
  { check: "No emoji as functional icons", test: () => /[\u{1F300}-\u{1F9FF}]/u.test(allContent), severity: 'high' },
  { check: "cursor-pointer on clickable", test: () => /button|onClick/.test(allContent) && !/cursor-pointer/.test(allContent), severity: 'medium' },
  { check: "Focus states visible", test: () => /button|input|<a /.test(allContent) && !/focus/.test(allContent), severity: 'high' },
  { check: "prefers-reduced-motion", test: () => /animation|@keyframes/.test(allContent) && !/prefers-reduced-motion/.test(allContent), severity: 'medium' },
  { check: "Responsive breakpoints", test: () => !/md:|lg:|@media.*min-width/.test(allContent), severity: 'medium' },
  { check: "No hardcoded colors", test: () => { const nt = Object.entries(fileContents).filter(([f]) => f !== 'src/styles/tokens.css'); return nt.some(([, c]) => /#[0-9a-fA-F]{6}/.test(c)) }, severity: 'high' },
  { check: "Dark mode support", test: () => !/prefers-color-scheme|dark:|\.dark/.test(allContent), severity: 'medium' }
]
for (const item of checklist) {
  try {
    if (item.test()) {
      preDelivery.issues.push({ check: item.check, severity: item.severity, issue: `Pre-delivery: ${item.check}` })
      preDelivery.score -= (item.severity === 'high' ? 2 : item.severity === 'medium' ? 1 : 0.5)
    }
  } catch {}
}

// Must-have checks from industry config
for (const req of mustHave) {
  if (req === 'wcag-aaa' && !/aria-/.test(allContent)) {
    preDelivery.issues.push({ severity: 'high', issue: 'WCAG AAA required but no ARIA attributes found' })
    preDelivery.score -= 3
  }
  if (req === 'high-contrast' && !/high-contrast|forced-colors/.test(allContent)) {
    preDelivery.issues.push({ severity: 'medium', issue: 'High contrast mode not supported' })
    preDelivery.score -= 1
  }
}
preDelivery.score = Math.max(0, preDelivery.score)
review.dimensions.push(preDelivery)

// === Calculate Overall Score ===
|
||||
review.overall_score = review.dimensions.reduce((sum, d) => sum + d.score * d.weight, 0)
|
||||
review.issues = review.dimensions.flatMap(d => d.issues)
|
||||
const criticalCount = review.issues.filter(i => i.severity === 'high').length
|
||||
|
||||
if (review.overall_score >= 8 && criticalCount === 0) {
|
||||
review.verdict = 'PASS'
|
||||
} else if (gcRound >= maxGCRounds) {
|
||||
review.verdict = review.overall_score >= 6 ? 'PASS_WITH_WARNINGS' : 'FAIL'
|
||||
} else {
|
||||
review.verdict = 'NEEDS_FIX'
|
||||
}
|
||||
```

### Phase 4: Package Results + Shared Memory

```javascript
const outputPath = sessionFolder
  ? `${sessionFolder}/qa/audit-fe-${task.subject.replace(/[^a-zA-Z0-9-]/g, '-').toLowerCase()}-r${gcRound}.json`
  : '.workflow/.tmp/qa-fe-audit.json'

Bash(`mkdir -p "$(dirname '${outputPath}')"`)
Write(outputPath, JSON.stringify(review, null, 2))

// Wisdom contribution
if (sessionFolder && review.issues.length > 0) {
  try {
    const issuesPath = `${sessionFolder}/wisdom/issues.md`
    const existing = Read(issuesPath)
    const timestamp = new Date().toISOString().substring(0, 10)
    const highIssues = review.issues.filter(i => i.severity === 'high')
    if (highIssues.length > 0) {
      const entries = highIssues.map(i => `- [${timestamp}] [fe-qa] ${i.issue} in ${i.file || 'global'}`).join('\n')
      Write(issuesPath, existing + '\n' + entries)
    }
  } catch {}
}

// Update shared memory with QA history
if (sessionFolder) {
  try {
    sharedMemory.qa_history = sharedMemory.qa_history || []
    sharedMemory.qa_history.push({
      task_subject: task.subject,
      gc_round: gcRound,
      verdict: review.verdict,
      score: review.overall_score,
      critical_count: criticalCount,
      total_issues: review.issues.length,
      timestamp: new Date().toISOString()
    })
    Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))
  } catch {}
}
```

### Phase 5: Report to Coordinator

```javascript
const msgType = review.verdict === 'PASS' || review.verdict === 'PASS_WITH_WARNINGS'
  ? 'qa_fe_passed'
  : criticalCount > 0 ? 'fix_required' : 'qa_fe_result'

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "fe-qa", to: "coordinator",
  type: msgType,
  summary: `[fe-qa] QA-FE R${gcRound}: ${review.verdict}, score=${review.overall_score.toFixed(1)}, ${criticalCount} critical`,
  ref: outputPath
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `[fe-qa] ## Frontend QA Review

**Task**: ${task.subject}
**Round**: ${gcRound}/${maxGCRounds}
**Verdict**: ${review.verdict}
**Score**: ${review.overall_score.toFixed(1)}/10
**Strictness**: ${strictness}
**Design Intelligence**: ${designIntel._source || 'not available'}

### Dimension Scores
${review.dimensions.map(d => `- **${d.name}**: ${d.score.toFixed(1)}/10 (${d.issues.length} issues)`).join('\n')}

### Critical Issues (${criticalCount})
${review.issues.filter(i => i.severity === 'high').map(i => `- \`${i.file || i.check}\`: ${i.issue} → ${i.fix || ''}${i.do ? `\n  ✅ Do: ${i.do}` : ''}${i.dont ? `\n  ❌ Don't: ${i.dont}` : ''}`).join('\n') || 'None'}

### Medium Issues
${review.issues.filter(i => i.severity === 'medium').slice(0, 5).map(i => `- \`${i.file || i.check}\`: ${i.issue} → ${i.fix || ''}`).join('\n') || 'None'}

${review.verdict === 'NEEDS_FIX' ? `\n### Action Required\nfe-developer must fix the ${criticalCount} critical issue(s) and resubmit.` : ''}

### Output: ${outputPath}`,
  summary: `[fe-qa] QA-FE R${gcRound}: ${review.verdict}, ${review.overall_score.toFixed(1)}/10`
})

TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next QA-FE task → back to Phase 1
```

## Generator-Critic Loop

The fe-developer ↔ fe-qa loop is orchestrated by the coordinator:

```
Round 1: DEV-FE-001 → QA-FE-001
  if QA verdict = NEEDS_FIX:
    coordinator creates DEV-FE-002 (fix task, blockedBy QA-FE-001)
    coordinator creates QA-FE-002 (re-review, blockedBy DEV-FE-002)
Round 2: DEV-FE-002 → QA-FE-002
  if still NEEDS_FIX: verdict = PASS_WITH_WARNINGS or FAIL (max 2 rounds)
```

**Convergence condition**: `overall_score >= 8 && critical_count === 0`
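
The round flow above can be sketched as a plain function. This is a minimal simulation, not the coordinator's actual implementation: `runDev` and `runQA` are hypothetical stand-ins for the task creation/execution machinery, and `runQA` is assumed to return a `review` object shaped like the one built in the QA phase.

```javascript
// Minimal sketch of the generator-critic loop (hypothetical helpers runDev/runQA).
function gcLoop(runDev, runQA, maxRounds = 2) {
  let review = null
  for (let round = 1; round <= maxRounds; round++) {
    runDev(round)           // DEV-FE-00{round}: implementation or fix round
    review = runQA(round)   // QA-FE-00{round}: returns { overall_score, issues }
    const critical = review.issues.filter(i => i.severity === 'high').length
    // Convergence condition from above
    if (review.overall_score >= 8 && critical === 0) {
      return { verdict: 'PASS', rounds: round }
    }
  }
  // Max rounds exhausted → forced verdict
  return {
    verdict: review.overall_score >= 6 ? 'PASS_WITH_WARNINGS' : 'FAIL',
    rounds: maxRounds
  }
}
```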

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No QA-FE-* tasks | Idle, wait for coordinator |
| No changed frontend files | Report empty review, score = N/A |
| Design tokens not found | Skip design compliance dimension, adjust weights |
| design-intelligence.json not found | Skip industry anti-patterns, use standard strictness |
| Git diff fails | Use Glob to find recent frontend files |
| Max GC rounds exceeded | Force verdict (PASS_WITH_WARNINGS or FAIL) |
| ui-ux-pro-max not installed | Continue without design intelligence, note in report |

# Command: Multi-Angle Exploration

Phase 2 of planner execution: assess complexity, select exploration angles, and execute parallel exploration.

## Overview

This command performs multi-angle codebase exploration based on task complexity. Low complexity uses direct semantic search, while Medium/High complexity launches parallel cli-explore-agent subagents for comprehensive analysis.

## Complexity Assessment

### assessComplexity Function

```javascript
function assessComplexity(desc) {
  let score = 0
  if (/refactor|architect|restructure|模块|系统/.test(desc)) score += 2
  if (/multiple|多个|across|跨/.test(desc)) score += 2
  if (/integrate|集成|api|database/.test(desc)) score += 1
  if (/security|安全|performance|性能/.test(desc)) score += 1
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}

const complexity = assessComplexity(task.description)
```

### Complexity Levels

| Level | Score | Characteristics | Angle Count |
|-------|-------|----------------|-------------|
| **Low** | 0-1 | Simple feature, single module, clear scope | 1 |
| **Medium** | 2-3 | Multiple modules, integration points, moderate scope | 3 |
| **High** | 4+ | Architecture changes, cross-cutting concerns, complex scope | 4 |
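
As a quick sanity check, the scoring maps onto the levels above as follows (the function is repeated verbatim so the example runs standalone; the sample descriptions are illustrative):

```javascript
// assessComplexity as defined in the Complexity Assessment section.
function assessComplexity(desc) {
  let score = 0
  if (/refactor|architect|restructure|模块|系统/.test(desc)) score += 2
  if (/multiple|多个|across|跨/.test(desc)) score += 2
  if (/integrate|集成|api|database/.test(desc)) score += 1
  if (/security|安全|performance|性能/.test(desc)) score += 1
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}

console.log(assessComplexity('fix typo in README'))           // → Low   (score 0)
console.log(assessComplexity('refactor the auth module'))     // → Medium (refactor: +2)
console.log(assessComplexity(
  'refactor data access across multiple services using the database'
))                                                            // → High  (+2 +2 +1 = 5)
```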

## Angle Selection

### ANGLE_PRESETS

```javascript
const ANGLE_PRESETS = {
  architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
  security: ['security', 'auth-patterns', 'dataflow', 'validation'],
  performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
  bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
  feature: ['patterns', 'integration-points', 'testing', 'dependencies']
}
```

### selectAngles Function

```javascript
function selectAngles(desc, count) {
  const text = desc.toLowerCase()
  let preset = 'feature'
  if (/refactor|architect|restructure|modular/.test(text)) preset = 'architecture'
  else if (/security|auth|permission|access/.test(text)) preset = 'security'
  else if (/performance|slow|optimi|cache/.test(text)) preset = 'performance'
  else if (/fix|bug|error|issue|broken/.test(text)) preset = 'bugfix'
  return ANGLE_PRESETS[preset].slice(0, count)
}

const angleCount = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1)
const selectedAngles = selectAngles(task.description, angleCount)
```

### Angle Definitions

| Angle | Focus | Use Case |
|-------|-------|----------|
| **architecture** | System structure, layer boundaries, design patterns | Refactoring, restructuring |
| **dependencies** | Module dependencies, coupling, external libraries | Integration, modularity |
| **modularity** | Component boundaries, separation of concerns | Architecture changes |
| **integration-points** | API boundaries, data flow between modules | Feature development |
| **security** | Auth/authz, input validation, data protection | Security features |
| **auth-patterns** | Authentication flows, session management | Auth implementation |
| **dataflow** | Data transformation, state propagation | Bug fixes, features |
| **validation** | Input validation, error handling | Security, quality |
| **performance** | Bottlenecks, optimization opportunities | Performance tuning |
| **bottlenecks** | Slow operations, resource contention | Performance issues |
| **caching** | Cache strategies, invalidation patterns | Performance optimization |
| **data-access** | Database queries, data fetching patterns | Performance, features |
| **error-handling** | Error propagation, recovery strategies | Bug fixes |
| **state-management** | State updates, consistency | Bug fixes, features |
| **edge-cases** | Boundary conditions, error scenarios | Bug fixes, testing |
| **patterns** | Code patterns, conventions, best practices | Feature development |
| **testing** | Test coverage, test strategies | Feature development |
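
Taken together with `ANGLE_PRESETS`, the selection logic can be exercised end to end (the preset table and function are repeated so the sketch is self-contained; the sample task descriptions are illustrative):

```javascript
// ANGLE_PRESETS and selectAngles as defined in the Angle Selection section.
const ANGLE_PRESETS = {
  architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
  security: ['security', 'auth-patterns', 'dataflow', 'validation'],
  performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
  bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
  feature: ['patterns', 'integration-points', 'testing', 'dependencies']
}

function selectAngles(desc, count) {
  const text = desc.toLowerCase()
  let preset = 'feature'
  if (/refactor|architect|restructure|modular/.test(text)) preset = 'architecture'
  else if (/security|auth|permission|access/.test(text)) preset = 'security'
  else if (/performance|slow|optimi|cache/.test(text)) preset = 'performance'
  else if (/fix|bug|error|issue|broken/.test(text)) preset = 'bugfix'
  return ANGLE_PRESETS[preset].slice(0, count)
}

// A bug report routes to the bugfix preset, truncated to the angle count:
console.log(selectAngles('Fix login form error on page reload', 3))
// → ['error-handling', 'dataflow', 'state-management']
```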

## Exploration Execution

### Low Complexity: Direct Semantic Search

```javascript
if (complexity === 'Low') {
  // Direct exploration via semantic search
  const results = mcp__ace-tool__search_context({
    project_root_path: projectRoot,
    query: task.description
  })

  // Transform ACE results to exploration JSON
  const exploration = {
    project_structure: "Analyzed via ACE semantic search",
    relevant_files: results.files.map(f => ({
      path: f.path,
      rationale: f.relevance_reason || "Semantic match to task description",
      role: "modify_target",
      discovery_source: "ace-search",
      key_symbols: f.symbols || []
    })),
    patterns: results.patterns || [],
    dependencies: results.dependencies || [],
    integration_points: results.integration_points || [],
    constraints: [],
    clarification_needs: [],
    _metadata: {
      exploration_angle: selectedAngles[0],
      complexity: 'Low',
      discovery_method: 'ace-semantic-search'
    }
  }

  Write(`${planDir}/exploration-${selectedAngles[0]}.json`, JSON.stringify(exploration, null, 2))
}
```

### Medium/High Complexity: Parallel cli-explore-agent

```javascript
else {
  // Launch parallel cli-explore-agent for each angle
  selectedAngles.forEach((angle, index) => {
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Explore: ${angle}`,
      prompt: `
## Task Objective
Execute **${angle}** exploration for task planning context.

## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${planDir}/exploration-${angle}.json

## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task.description}
- **Spec Context**: ${specContext ? 'Available — use spec/requirements, spec/architecture, spec/epics for informed exploration' : 'Not available (impl-only mode)'}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}

## MANDATORY FIRST STEPS
1. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files)
2. Execute: cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json (get output schema)
3. Read: .workflow/project-tech.json (if exists - technology stack)

## Expected Output
Write JSON to: ${planDir}/exploration-${angle}.json
Follow explore-json-schema.json structure with ${angle}-focused findings.

**MANDATORY**: Every file in relevant_files MUST have:
- **rationale** (required): Specific selection basis tied to ${angle} topic (>10 chars, not generic)
- **role** (required): modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only
- **discovery_source** (recommended): bash-scan|cli-analysis|ace-search|dependency-trace|manual
- **key_symbols** (recommended): Key functions/classes/types relevant to task

## Exploration Focus by Angle

${getAngleFocusGuide(angle)}

## Output Schema Structure

\`\`\`json
{
  "project_structure": "string - high-level architecture overview",
  "relevant_files": [
    {
      "path": "string - relative file path",
      "rationale": "string - WHY this file matters for ${angle} (>10 chars, specific)",
      "role": "modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only",
      "discovery_source": "bash-scan|cli-analysis|ace-search|dependency-trace|manual",
      "key_symbols": ["function/class/type names"]
    }
  ],
  "patterns": ["string - code patterns relevant to ${angle}"],
  "dependencies": ["string - module/library dependencies"],
  "integration_points": ["string - API/interface boundaries"],
  "constraints": ["string - technical constraints"],
  "clarification_needs": ["string - questions needing user input"],
  "_metadata": {
    "exploration_angle": "${angle}",
    "complexity": "${complexity}",
    "discovery_method": "cli-explore-agent"
  }
}
\`\`\`
`
    })
  })
}
```

### Angle Focus Guide

```javascript
function getAngleFocusGuide(angle) {
  const guides = {
    architecture: `
**Architecture Focus**:
- Identify layer boundaries (presentation, business, data)
- Map module dependencies and coupling
- Locate design patterns (factory, strategy, observer, etc.)
- Find architectural decision records (ADRs)
- Analyze component responsibilities`,

    dependencies: `
**Dependencies Focus**:
- Map internal module dependencies (import/require statements)
- Identify external library usage (package.json, requirements.txt)
- Trace dependency chains and circular dependencies
- Locate shared utilities and common modules
- Analyze coupling strength between modules`,

    modularity: `
**Modularity Focus**:
- Identify module boundaries and interfaces
- Analyze separation of concerns
- Locate tightly coupled code
- Find opportunities for extraction/refactoring
- Map public vs private APIs`,

    'integration-points': `
**Integration Points Focus**:
- Locate API endpoints and routes
- Identify data flow between modules
- Find event emitters/listeners
- Map external service integrations
- Analyze interface contracts`,

    security: `
**Security Focus**:
- Locate authentication/authorization logic
- Identify input validation points
- Find sensitive data handling
- Analyze access control mechanisms
- Locate security-related middleware`,

    'auth-patterns': `
**Auth Patterns Focus**:
- Identify authentication flows (login, logout, refresh)
- Locate session management code
- Find token generation/validation
- Map user permission checks
- Analyze auth middleware`,

    dataflow: `
**Dataflow Focus**:
- Trace data transformations
- Identify state propagation paths
- Locate data validation points
- Map data sources and sinks
- Analyze data mutation points`,

    validation: `
**Validation Focus**:
- Locate input validation logic
- Identify schema definitions
- Find error handling for invalid data
- Map validation middleware
- Analyze sanitization functions`,

    performance: `
**Performance Focus**:
- Identify computational bottlenecks
- Locate database queries (N+1 problems)
- Find synchronous blocking operations
- Map resource-intensive operations
- Analyze algorithm complexity`,

    bottlenecks: `
**Bottlenecks Focus**:
- Locate slow operations (profiling data)
- Identify resource contention points
- Find inefficient algorithms
- Map hot paths in code
- Analyze concurrency issues`,

    caching: `
**Caching Focus**:
- Locate existing cache implementations
- Identify cacheable operations
- Find cache invalidation logic
- Map cache key strategies
- Analyze cache hit/miss patterns`,

    'data-access': `
**Data Access Focus**:
- Locate database query patterns
- Identify ORM/query builder usage
- Find data fetching strategies
- Map data access layers
- Analyze query optimization opportunities`,

    'error-handling': `
**Error Handling Focus**:
- Locate try-catch blocks
- Identify error propagation paths
- Find error recovery strategies
- Map error logging points
- Analyze error types and handling`,

    'state-management': `
**State Management Focus**:
- Locate state containers (Redux, Vuex, etc.)
- Identify state update patterns
- Find state synchronization logic
- Map state dependencies
- Analyze state consistency mechanisms`,

    'edge-cases': `
**Edge Cases Focus**:
- Identify boundary conditions
- Locate null/undefined handling
- Find empty array/object handling
- Map error scenarios
- Analyze exceptional flows`,

    patterns: `
**Patterns Focus**:
- Identify code patterns and conventions
- Locate design pattern implementations
- Find naming conventions
- Map code organization patterns
- Analyze best practices usage`,

    testing: `
**Testing Focus**:
- Locate test files and test utilities
- Identify test coverage gaps
- Find test patterns (unit, integration, e2e)
- Map mocking/stubbing strategies
- Analyze test organization`
  }

  return guides[angle] || `**${angle} Focus**: Analyze codebase from ${angle} perspective`
}
```

## Explorations Manifest

```javascript
// Build explorations manifest
const explorationManifest = {
  session_id: `${taskSlug}-${dateStr}`,
  task_description: task.description,
  complexity: complexity,
  exploration_count: selectedAngles.length,
  explorations: selectedAngles.map(angle => ({
    angle: angle,
    file: `exploration-${angle}.json`,
    path: `${planDir}/exploration-${angle}.json`
  }))
}
Write(`${planDir}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2))
```

## Output Schema

### explore-json-schema.json Structure

```json
{
  "project_structure": "string - high-level architecture overview",
  "relevant_files": [
    {
      "path": "string - relative file path",
      "rationale": "string - specific selection basis (>10 chars)",
      "role": "modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only",
      "discovery_source": "bash-scan|cli-analysis|ace-search|dependency-trace|manual",
      "key_symbols": ["string - function/class/type names"]
    }
  ],
  "patterns": ["string - code patterns relevant to angle"],
  "dependencies": ["string - module/library dependencies"],
  "integration_points": ["string - API/interface boundaries"],
  "constraints": ["string - technical constraints"],
  "clarification_needs": ["string - questions needing user input"],
  "_metadata": {
    "exploration_angle": "string - angle name",
    "complexity": "Low|Medium|High",
    "discovery_method": "ace-semantic-search|cli-explore-agent"
  }
}
```

## Integration with Phase 3

Phase 3 (Plan Generation) consumes:
1. `explorations-manifest.json` - list of exploration files
2. `exploration-{angle}.json` - per-angle exploration results
3. `specContext` (if available) - requirements, architecture, epics

These inputs are passed to cli-lite-planning-agent for plan generation.

## Error Handling

### Exploration Agent Failure

```javascript
try {
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Explore: ${angle}`,
    prompt: `...`
  })
} catch (error) {
  // Skip exploration, continue with available explorations
  console.error(`[planner] Exploration failed for angle: ${angle}`, error)
  // Remove failed angle from manifest
  explorationManifest.explorations = explorationManifest.explorations.filter(e => e.angle !== angle)
  explorationManifest.exploration_count = explorationManifest.explorations.length
}
```

### All Explorations Fail

```javascript
if (explorationManifest.exploration_count === 0) {
  // Fallback: Plan from task description only
  console.warn(`[planner] All explorations failed, planning from task description only`)
  // Proceed to Phase 3 with empty explorations
}
```

### ACE Search Failure (Low Complexity)

```javascript
try {
  const results = mcp__ace-tool__search_context({
    project_root_path: projectRoot,
    query: task.description
  })
} catch (error) {
  // Fallback: Use ripgrep for basic file discovery
  const rgResults = Bash(`rg -l "${task.description}" --type ts`)
  const exploration = {
    project_structure: "Basic file discovery via ripgrep",
    relevant_files: rgResults.split('\n').map(path => ({
      path: path.trim(),
      rationale: "Matched task description keywords",
      role: "modify_target",
      discovery_source: "bash-scan",
      key_symbols: []
    })),
    patterns: [],
    dependencies: [],
    integration_points: [],
    constraints: [],
    clarification_needs: [],
    _metadata: {
      exploration_angle: selectedAngles[0],
      complexity: 'Low',
      discovery_method: 'ripgrep-fallback'
    }
  }
  Write(`${planDir}/exploration-${selectedAngles[0]}.json`, JSON.stringify(exploration, null, 2))
}
```

# Role: planner

Multi-angle code exploration and structured implementation planning. Submits plans to the coordinator for approval.

## Role Identity

- **Name**: `planner`
- **Task Prefix**: `PLAN-*`
- **Output Tag**: `[planner]`
- **Responsibility**: Code exploration → Implementation planning → Coordinator approval
- **Communication**: SendMessage to coordinator only

## Role Boundaries

### MUST
- Only process PLAN-* tasks
- Communicate only with coordinator
- Write plan artifacts to `plan/` folder
- Tag all SendMessage and team_msg calls with `[planner]`
- Assess complexity (Low/Medium/High)
- Execute multi-angle exploration based on complexity
- Generate plan.json + .task/TASK-*.json following schemas
- Submit plan for coordinator approval
- Load spec context in full-lifecycle mode

### MUST NOT
- Create tasks
- Contact other workers directly
- Implement code
- Modify spec documents
- Skip complexity assessment
- Proceed without exploration (Medium/High complexity)
- Generate plan without schema validation

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `plan_ready` | planner → coordinator | Plan generation complete | With plan.json path and task count summary |
| `plan_revision` | planner → coordinator | Plan revised and resubmitted | Describes changes made |
| `impl_progress` | planner → coordinator | Exploration phase progress | Optional, for long explorations |
| `error` | planner → coordinator | Unrecoverable error | Exploration failure, schema missing, etc. |

## Message Bus

Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log:

```javascript
// Plan ready
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "plan_ready", summary: "[planner] Plan ready: 3 tasks, Medium complexity", ref: `${sessionFolder}/plan/plan.json` })

// Plan revision
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "plan_revision", summary: "[planner] Split task-2 into two subtasks per feedback" })

// Error report
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "error", summary: "[planner] plan-overview-base-schema.json not found, using default structure" })
```

### CLI Fallback

When `mcp__ccw-tools__team_msg` MCP is unavailable:

```javascript
Bash(`ccw team log --team "${teamName}" --from "planner" --to "coordinator" --type "plan_ready" --summary "[planner] Plan ready: 3 tasks" --ref "${sessionFolder}/plan/plan.json" --json`)
```

## Toolbox

### Available Commands
- `commands/explore.md` - Multi-angle codebase exploration (Phase 2)

### Subagent Capabilities
- **cli-explore-agent**: Per-angle exploration (Medium/High complexity)
- **cli-lite-planning-agent**: Plan generation (Medium/High complexity)

### CLI Capabilities
None directly (delegates to subagents)

## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('PLAN-') &&
  t.owner === 'planner' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```

### Phase 1.5: Load Spec Context (Full-Lifecycle Mode)

```javascript
// Extract session folder from task description (set by coordinator)
const sessionMatch = task.description.match(/Session:\s*(.+)/)
const sessionFolder = sessionMatch ? sessionMatch[1].trim() : `.workflow/.team/default`
const planDir = `${sessionFolder}/plan`
Bash(`mkdir -p ${planDir}`)

// Check if spec directory exists (full-lifecycle mode)
const specDir = `${sessionFolder}/spec`
let specContext = null
try {
  const reqIndex = Read(`${specDir}/requirements/_index.md`)
  const archIndex = Read(`${specDir}/architecture/_index.md`)
  const epicsIndex = Read(`${specDir}/epics/_index.md`)
  const specConfig = JSON.parse(Read(`${specDir}/spec-config.json`))
  specContext = { reqIndex, archIndex, epicsIndex, specConfig }
} catch { /* impl-only mode has no spec */ }
```

### Phase 2: Multi-Angle Exploration

**Delegate to**: `Read("commands/explore.md")`

Execute complexity assessment, angle selection, and parallel exploration. See `commands/explore.md` for the full implementation.

### Phase 3: Plan Generation

```javascript
// Read schema reference
const schema = Bash(`cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json`)

if (complexity === 'Low') {
  // Direct Claude planning
  Bash(`mkdir -p ${planDir}/.task`)
  // Generate plan.json + .task/TASK-*.json following schemas

  const plan = {
    session_id: `${taskSlug}-${dateStr}`,
    task_description: task.description,
    complexity: 'Low',
    approach: "Direct implementation based on semantic search",
    task_count: 1,
    task_ids: ['TASK-001'],
    exploration_refs: [`${planDir}/exploration-${selectedAngles[0]}.json`]
  }
  Write(`${planDir}/plan.json`, JSON.stringify(plan, null, 2))

  const taskDetail = {
    id: 'TASK-001',
    title: task.subject,
    description: task.description,
    files: [],
    convergence: { criteria: ["Implementation complete", "Tests pass"] },
    depends_on: []
  }
  Write(`${planDir}/.task/TASK-001.json`, JSON.stringify(taskDetail, null, 2))

} else {
  // Use cli-lite-planning-agent for Medium/High
  Task({
    subagent_type: "cli-lite-planning-agent",
    run_in_background: false,
    description: "Generate detailed implementation plan",
    prompt: `Generate implementation plan.
Output: ${planDir}/plan.json + ${planDir}/.task/TASK-*.json
Schema: cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json
Task Description: ${task.description}
Explorations: ${JSON.stringify(explorationManifest)}
Complexity: ${complexity}
${specContext ? `Spec Context:
- Requirements: ${specContext.reqIndex.substring(0, 500)}
- Architecture: ${specContext.archIndex.substring(0, 500)}
- Epics: ${specContext.epicsIndex.substring(0, 500)}
Reference REQ-* IDs, follow ADR decisions, reuse Epic/Story decomposition.` : ''}
Requirements: 2-7 tasks, each with id, title, files[].change, convergence.criteria, depends_on`
  })
}
```

### Phase 4: Submit for Approval

```javascript
const plan = JSON.parse(Read(`${planDir}/plan.json`))
const planTasks = plan.task_ids.map(id => JSON.parse(Read(`${planDir}/.task/${id}.json`)))
const taskCount = plan.task_count || plan.task_ids.length

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "planner", to: "coordinator",
  type: "plan_ready",
  summary: `[planner] Plan ready: ${taskCount} tasks, ${complexity} complexity`,
  ref: `${planDir}/plan.json`
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `[planner] ## Plan Ready for Review

**Task**: ${task.subject}
**Complexity**: ${complexity}
**Tasks**: ${taskCount}

### Task Summary
${planTasks.map((t, i) => (i+1) + '. ' + t.title).join('\n')}

### Approach
${plan.approach}

### Plan Location
${planDir}/plan.json
Task Files: ${planDir}/.task/

Please review and approve or request revisions.`,
  summary: `[planner] Plan ready: ${taskCount} tasks`
})

// Wait for coordinator response (approve → mark completed, revision → update and resubmit)
```
|
||||
|
||||

### Phase 5: After Approval

```javascript
TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next PLAN task → back to Phase 1
```

## Session Files

```
{sessionFolder}/plan/
├── exploration-{angle}.json
├── explorations-manifest.json
├── planning-context.md
├── plan.json
└── .task/
    └── TASK-*.json
```

> **Note**: `sessionFolder` is extracted from the task description (`Session: .workflow/.team/TLS-xxx`). Plan outputs go to the `plan/` subdirectory. In full-lifecycle mode, spec products are available at `../spec/`.
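The extraction the note describes can be sketched as follows (illustrative only — the helper name and regex are assumptions; only the `Session: <path>` line format comes from the note):

```javascript
// Hypothetical sketch: pulling sessionFolder out of a task description.
// Only the "Session: <path>" convention comes from the note above.
function extractSessionFolder(description) {
  const match = description.match(/Session:\s*(\S+)/)
  return match ? match[1] : null
}

const desc = "Implement login flow\nSession: .workflow/.team/TLS-042"
console.log(extractSessionFolder(desc)) // ".workflow/.team/TLS-042"
```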

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No PLAN-* tasks available | Idle, wait for coordinator assignment |
| Exploration agent failure | Skip exploration, plan from task description only |
| Planning agent failure | Fallback to direct Claude planning |
| Plan rejected 3+ times | Notify coordinator with `[planner]` tag, suggest alternative approach |
| Schema file not found | Use basic plan structure without schema validation, log error with `[planner]` tag |
| Spec context load failure | Continue in impl-only mode (no spec context) |
| Session folder not found | Notify coordinator with `[planner]` tag, request session path |
| Unexpected error | Log error via team_msg with `[planner]` tag, report to coordinator |
@@ -1,689 +0,0 @@

# Code Review Command

## Purpose
4-dimension code review analyzing quality, security, architecture, and requirements compliance.
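All four dimension reviewers below return the same `{ critical, high, medium, low }` bucket shape. As an illustration of how those buckets might combine before the verdict step, a minimal tally helper (the function name and sample data are made up for this sketch):

```javascript
// Illustrative only: merges per-dimension issue buckets into one tally.
// The { critical, high, medium, low } shape mirrors the reviewers below.
function tallyIssues(...dimensions) {
  const tally = { critical: 0, high: 0, medium: 0, low: 0 }
  for (const dim of dimensions) {
    for (const severity of Object.keys(tally)) {
      tally[severity] += (dim[severity] || []).length
    }
  }
  return tally
}

const quality = { critical: [], high: [{ type: "any-type-usage" }], medium: [], low: [] }
const security = { critical: [{ type: "hardcoded-secret" }], high: [], medium: [], low: [] }
console.log(tallyIssues(quality, security)) // { critical: 1, high: 1, medium: 0, low: 0 }
```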

## Review Dimensions

### 1. Quality Review

```javascript
function reviewQuality(files, gitDiff) {
  const issues = {
    critical: [],
    high: [],
    medium: [],
    low: []
  }

  for (const file of files) {
    const content = file.content
    const lines = content.split("\n")

    // Check for @ts-ignore / @ts-expect-error
    lines.forEach((line, idx) => {
      if (line.includes("@ts-ignore") || line.includes("@ts-expect-error")) {
        const hasJustification = line.includes("//") && line.split("//")[1].trim().length > 10

        if (!hasJustification) {
          issues.high.push({
            file: file.path,
            line: idx + 1,
            type: "ts-ignore-without-justification",
            message: "TypeScript error suppression without explanation",
            code: line.trim()
          })
        }
      }
    })

    // Check for 'any' type usage
    const anyMatches = Grep("\\bany\\b", { path: file.path, "-n": true })
    if (anyMatches) {
      anyMatches.forEach(match => {
        // Exclude comments and type definitions that are intentionally generic
        if (!match.line.includes("//") && !match.line.includes("Generic")) {
          issues.high.push({
            file: file.path,
            line: match.lineNumber,
            type: "any-type-usage",
            message: "Using 'any' type reduces type safety",
            code: match.line.trim()
          })
        }
      })
    }

    // Check for console.log in production code
    const consoleMatches = Grep("console\\.(log|debug|info)", { path: file.path, "-n": true })
    if (consoleMatches && !file.path.includes("test")) {
      consoleMatches.forEach(match => {
        issues.high.push({
          file: file.path,
          line: match.lineNumber,
          type: "console-log",
          message: "Console statements should be removed from production code",
          code: match.line.trim()
        })
      })
    }

    // Check for empty catch blocks
    const emptyCatchRegex = /catch\s*\([^)]*\)\s*\{\s*\}/g
    let match
    while ((match = emptyCatchRegex.exec(content)) !== null) {
      const lineNumber = content.substring(0, match.index).split("\n").length
      issues.critical.push({
        file: file.path,
        line: lineNumber,
        type: "empty-catch",
        message: "Empty catch block silently swallows errors",
        code: match[0]
      })
    }

    // Check for magic numbers
    const magicNumberRegex = /(?<![a-zA-Z0-9_])((?!0|1|2|10|100|1000)\d{2,})(?![a-zA-Z0-9_])/g
    while ((match = magicNumberRegex.exec(content)) !== null) {
      const lineNumber = content.substring(0, match.index).split("\n").length
      const line = lines[lineNumber - 1]

      // Exclude if in comment or constant definition
      if (!line.includes("//") && !line.includes("const") && !line.includes("=")) {
        issues.medium.push({
          file: file.path,
          line: lineNumber,
          type: "magic-number",
          message: "Magic number should be extracted to named constant",
          code: line.trim()
        })
      }
    }

    // Check for duplicate code (simple heuristic: identical lines)
    const lineHashes = new Map()
    lines.forEach((line, idx) => {
      const trimmed = line.trim()
      if (trimmed.length > 30 && !trimmed.startsWith("//")) {
        if (!lineHashes.has(trimmed)) {
          lineHashes.set(trimmed, [])
        }
        lineHashes.get(trimmed).push(idx + 1)
      }
    })

    lineHashes.forEach((occurrences, line) => {
      if (occurrences.length > 2) {
        issues.medium.push({
          file: file.path,
          line: occurrences[0],
          type: "duplicate-code",
          message: `Duplicate code found at lines: ${occurrences.join(", ")}`,
          code: line
        })
      }
    })
  }

  return issues
}
```

### 2. Security Review

```javascript
function reviewSecurity(files) {
  const issues = {
    critical: [],
    high: [],
    medium: [],
    low: []
  }

  for (const file of files) {
    const content = file.content

    // Check for eval/exec usage
    const evalMatches = Grep("\\b(eval|exec|Function\\(|setTimeout\\(.*string|setInterval\\(.*string)\\b", {
      path: file.path,
      "-n": true
    })
    if (evalMatches) {
      evalMatches.forEach(match => {
        issues.high.push({
          file: file.path,
          line: match.lineNumber,
          type: "dangerous-eval",
          message: "eval/exec usage can lead to code injection vulnerabilities",
          code: match.line.trim()
        })
      })
    }

    // Check for innerHTML/dangerouslySetInnerHTML
    const innerHTMLMatches = Grep("(innerHTML|dangerouslySetInnerHTML)", {
      path: file.path,
      "-n": true
    })
    if (innerHTMLMatches) {
      innerHTMLMatches.forEach(match => {
        issues.high.push({
          file: file.path,
          line: match.lineNumber,
          type: "xss-risk",
          message: "Direct HTML injection can lead to XSS vulnerabilities",
          code: match.line.trim()
        })
      })
    }

    // Check for hardcoded secrets
    const secretPatterns = [
      /api[_-]?key\s*=\s*['"][^'"]{20,}['"]/i,
      /password\s*=\s*['"][^'"]+['"]/i,
      /secret\s*=\s*['"][^'"]{20,}['"]/i,
      /token\s*=\s*['"][^'"]{20,}['"]/i,
      /aws[_-]?access[_-]?key/i,
      /private[_-]?key\s*=\s*['"][^'"]+['"]/i
    ]

    secretPatterns.forEach(pattern => {
      // Rebuild from source so the case-insensitive flag is kept alongside g/m
      const matches = content.match(new RegExp(pattern.source, "gim"))
      if (matches) {
        matches.forEach(match => {
          const lineNumber = content.substring(0, content.indexOf(match)).split("\n").length
          issues.critical.push({
            file: file.path,
            line: lineNumber,
            type: "hardcoded-secret",
            message: "Hardcoded secrets should be moved to environment variables",
            code: match.replace(/['"][^'"]+['"]/, "'***'") // Redact secret
          })
        })
      }
    })

    // Check for SQL injection vectors
    const sqlInjectionMatches = Grep("(query|execute)\\s*\\(.*\\+.*\\)", {
      path: file.path,
      "-n": true
    })
    if (sqlInjectionMatches) {
      sqlInjectionMatches.forEach(match => {
        if (!match.line.includes("//") && !match.line.includes("prepared")) {
          issues.critical.push({
            file: file.path,
            line: match.lineNumber,
            type: "sql-injection",
            message: "String concatenation in SQL queries can lead to SQL injection",
            code: match.line.trim()
          })
        }
      })
    }

    // Check for insecure random
    const insecureRandomMatches = Grep("Math\\.random\\(\\)", {
      path: file.path,
      "-n": true
    })
    if (insecureRandomMatches) {
      insecureRandomMatches.forEach(match => {
        // Check if used for security purposes
        const context = content.substring(
          Math.max(0, content.indexOf(match.line) - 200),
          content.indexOf(match.line) + 200
        )
        if (context.match(/token|key|secret|password|session/i)) {
          issues.medium.push({
            file: file.path,
            line: match.lineNumber,
            type: "insecure-random",
            message: "Math.random() is not cryptographically secure, use crypto.randomBytes()",
            code: match.line.trim()
          })
        }
      })
    }

    // Check for missing input validation
    const functionMatches = Grep("(function|const.*=.*\\(|async.*\\()", {
      path: file.path,
      "-n": true
    })
    if (functionMatches) {
      functionMatches.forEach(match => {
        // Simple heuristic: check if function has parameters but no validation
        if (match.line.includes("(") && !match.line.includes("()")) {
          const nextLines = content.split("\n").slice(match.lineNumber, match.lineNumber + 5).join("\n")
          const hasValidation = nextLines.match(/if\s*\(|throw|assert|validate|check/)

          if (!hasValidation && !match.line.includes("test") && !match.line.includes("mock")) {
            issues.low.push({
              file: file.path,
              line: match.lineNumber,
              type: "missing-validation",
              message: "Function parameters should be validated",
              code: match.line.trim()
            })
          }
        }
      })
    }
  }

  return issues
}
```
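The redaction in the hardcoded-secret check can be exercised on its own (the sample finding is invented; the replace call is the same one used in the check above):

```javascript
// Illustrative: the redaction replace applied to a made-up finding.
const finding = 'apiKey = "sk_live_1234567890abcdefghij"'
const redacted = finding.replace(/['"][^'"]+['"]/, "'***'")
console.log(redacted) // apiKey = '***'
```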

### 3. Architecture Review

```javascript
function reviewArchitecture(files) {
  const issues = {
    critical: [],
    high: [],
    medium: [],
    low: []
  }

  for (const file of files) {
    const content = file.content
    const lines = content.split("\n")

    // Check for parent directory imports
    const importMatches = Grep("from\\s+['\"](\\.\\./)+", {
      path: file.path,
      "-n": true
    })
    if (importMatches) {
      importMatches.forEach(match => {
        const parentLevels = (match.line.match(/\.\.\//g) || []).length

        if (parentLevels > 2) {
          issues.high.push({
            file: file.path,
            line: match.lineNumber,
            type: "excessive-parent-imports",
            message: `Import traverses ${parentLevels} parent directories, consider restructuring`,
            code: match.line.trim()
          })
        } else if (parentLevels === 2) {
          issues.medium.push({
            file: file.path,
            line: match.lineNumber,
            type: "parent-imports",
            message: "Consider using absolute imports or restructuring modules",
            code: match.line.trim()
          })
        }
      })
    }

    // Check for large files
    const lineCount = lines.length
    if (lineCount > 500) {
      issues.medium.push({
        file: file.path,
        line: 1,
        type: "large-file",
        message: `File has ${lineCount} lines, consider splitting into smaller modules`,
        code: `Total lines: ${lineCount}`
      })
    }

    // Check for circular dependencies (simple heuristic)
    const imports = lines
      .filter(line => line.match(/^import.*from/))
      .map(line => {
        const match = line.match(/from\s+['"](.+?)['"]/)
        return match ? match[1] : null
      })
      .filter(Boolean)

    // Check if any imported file imports this file back
    for (const importPath of imports) {
      const resolvedPath = resolveImportPath(file.path, importPath)
      if (resolvedPath && Bash(`test -f ${resolvedPath}`).exitCode === 0) {
        const importedContent = Read(resolvedPath)
        const reverseImport = importedContent.includes(file.path.replace(/\.[jt]sx?$/, ""))

        if (reverseImport) {
          issues.critical.push({
            file: file.path,
            line: 1,
            type: "circular-dependency",
            message: `Circular dependency detected with ${resolvedPath}`,
            code: `${file.path} ↔ ${resolvedPath}`
          })
        }
      }
    }

    // Check for tight coupling (many imports from same module)
    const importCounts = {}
    imports.forEach(imp => {
      const baseModule = imp.split("/")[0]
      importCounts[baseModule] = (importCounts[baseModule] || 0) + 1
    })

    Object.entries(importCounts).forEach(([module, count]) => {
      if (count > 5) {
        issues.medium.push({
          file: file.path,
          line: 1,
          type: "tight-coupling",
          message: `File imports ${count} items from '${module}', consider facade pattern`,
          code: `Imports from ${module}: ${count}`
        })
      }
    })

    // Check for missing abstractions (long functions)
    const functionRegex = /(function|const.*=.*\(|async.*\()/g
    let match
    while ((match = functionRegex.exec(content)) !== null) {
      const startLine = content.substring(0, match.index).split("\n").length
      const functionBody = extractFunctionBody(content, match.index)
      const functionLines = functionBody.split("\n").length

      if (functionLines > 50) {
        issues.medium.push({
          file: file.path,
          line: startLine,
          type: "long-function",
          message: `Function has ${functionLines} lines, consider extracting smaller functions`,
          code: match[0].trim()
        })
      }
    }
  }

  return issues
}

function resolveImportPath(fromFile, importPath) {
  if (importPath.startsWith(".")) {
    const dir = fromFile.substring(0, fromFile.lastIndexOf("/"))
    const resolved = `${dir}/${importPath}`.replace(/\/\.\//g, "/")

    // Try with extensions
    for (const ext of [".ts", ".js", ".tsx", ".jsx"]) {
      if (Bash(`test -f ${resolved}${ext}`).exitCode === 0) {
        return `${resolved}${ext}`
      }
    }
  }
  return null
}

function extractFunctionBody(content, startIndex) {
  let braceCount = 0
  let inFunction = false
  let body = ""

  for (let i = startIndex; i < content.length; i++) {
    const char = content[i]

    if (char === "{") {
      braceCount++
      inFunction = true
    } else if (char === "}") {
      braceCount--
    }

    if (inFunction) {
      body += char
    }

    if (inFunction && braceCount === 0) {
      break
    }
  }

  return body
}
```

### 4. Requirements Verification

```javascript
function verifyRequirements(plan, files, gitDiff) {
  const issues = {
    critical: [],
    high: [],
    medium: [],
    low: []
  }

  // Extract acceptance criteria from plan
  const acceptanceCriteria = extractAcceptanceCriteria(plan)

  // Verify each criterion
  for (const criterion of acceptanceCriteria) {
    const verified = verifyCriterion(criterion, files, gitDiff)

    if (!verified.met) {
      issues.high.push({
        file: "plan",
        line: criterion.lineNumber,
        type: "unmet-acceptance-criteria",
        message: `Acceptance criterion not met: ${criterion.text}`,
        code: criterion.text
      })
    } else if (verified.partial) {
      issues.medium.push({
        file: "plan",
        line: criterion.lineNumber,
        type: "partial-acceptance-criteria",
        message: `Acceptance criterion partially met: ${criterion.text}`,
        code: criterion.text
      })
    }
  }

  // Check for missing error handling
  const errorHandlingRequired = plan.match(/error handling|exception|validation/i)
  if (errorHandlingRequired) {
    const hasErrorHandling = files.some(file =>
      file.content.match(/try\s*\{|catch\s*\(|throw\s+new|\.catch\(/)
    )

    if (!hasErrorHandling) {
      issues.high.push({
        file: "implementation",
        line: 1,
        type: "missing-error-handling",
        message: "Plan requires error handling but none found in implementation",
        code: "No try-catch or error handling detected"
      })
    }
  }

  // Check for missing tests
  const testingRequired = plan.match(/test|testing|coverage/i)
  if (testingRequired) {
    const hasTests = files.some(file =>
      file.path.match(/\.(test|spec)\.[jt]sx?$/)
    )

    if (!hasTests) {
      issues.medium.push({
        file: "implementation",
        line: 1,
        type: "missing-tests",
        message: "Plan requires tests but no test files found",
        code: "No test files detected"
      })
    }
  }

  return issues
}

function extractAcceptanceCriteria(plan) {
  const criteria = []
  const lines = plan.split("\n")

  let inAcceptanceSection = false
  lines.forEach((line, idx) => {
    if (line.match(/acceptance criteria/i)) {
      inAcceptanceSection = true
    } else if (line.match(/^##/)) {
      inAcceptanceSection = false
    } else if (inAcceptanceSection && line.match(/^[-*]\s+/)) {
      criteria.push({
        text: line.replace(/^[-*]\s+/, "").trim(),
        lineNumber: idx + 1
      })
    }
  })

  return criteria
}

function verifyCriterion(criterion, files, gitDiff) {
  // Extract keywords from criterion
  const keywords = criterion.text.toLowerCase().match(/\b\w{4,}\b/g) || []
  if (keywords.length === 0) {
    return { met: true, partial: false, matchRatio: 1 }
  }

  // Count distinct keywords that appear anywhere in the implementation
  // (a Set keeps the ratio in [0, 1] even when multiple files match)
  const matched = new Set()
  for (const file of files) {
    const content = file.content.toLowerCase()
    for (const keyword of keywords) {
      if (content.includes(keyword)) {
        matched.add(keyword)
      }
    }
  }

  const matchRatio = matched.size / keywords.length

  return {
    met: matchRatio >= 0.7,
    partial: matchRatio >= 0.4 && matchRatio < 0.7,
    matchRatio: matchRatio
  }
}
```
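The 0.7/0.4 thresholds can be felt with a toy example (an inline restatement of the keyword-ratio check; the criterion keywords and implementation string are made up):

```javascript
// Toy restatement of the keyword-ratio gate: >= 0.7 met, 0.4-0.7 partial.
const keywords = ["login", "returns", "token", "expiry"]
const implementation = "function login() { return { token: sign(user) } }".toLowerCase()

const matched = keywords.filter(k => implementation.includes(k))
const matchRatio = matched.length / keywords.length

console.log(matchRatio) // 0.5 → partial ("login" and "token" match)
```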

## Verdict Determination

```javascript
function determineVerdict(qualityIssues, securityIssues, architectureIssues, requirementIssues) {
  const allIssues = {
    critical: [
      ...qualityIssues.critical,
      ...securityIssues.critical,
      ...architectureIssues.critical,
      ...requirementIssues.critical
    ],
    high: [
      ...qualityIssues.high,
      ...securityIssues.high,
      ...architectureIssues.high,
      ...requirementIssues.high
    ],
    medium: [
      ...qualityIssues.medium,
      ...securityIssues.medium,
      ...architectureIssues.medium,
      ...requirementIssues.medium
    ],
    low: [
      ...qualityIssues.low,
      ...securityIssues.low,
      ...architectureIssues.low,
      ...requirementIssues.low
    ]
  }

  // BLOCK: Any critical issues
  if (allIssues.critical.length > 0) {
    return {
      verdict: "BLOCK",
      reason: `${allIssues.critical.length} critical issue(s) must be fixed`,
      blocking_issues: allIssues.critical
    }
  }

  // CONDITIONAL: High or medium issues
  if (allIssues.high.length > 0 || allIssues.medium.length > 0) {
    return {
      verdict: "CONDITIONAL",
      reason: `${allIssues.high.length} high and ${allIssues.medium.length} medium issue(s) should be addressed`,
      blocking_issues: []
    }
  }

  // APPROVE: Only low issues or none
  return {
    verdict: "APPROVE",
    reason: allIssues.low.length > 0
      ? `${allIssues.low.length} low-priority issue(s) noted`
      : "No issues found",
    blocking_issues: []
  }
}
```

## Report Formatting

```javascript
function formatCodeReviewReport(report) {
  const { verdict, dimensions, recommendations, blocking_issues } = report

  let markdown = `# Code Review Report\n\n`
  markdown += `**Verdict**: ${verdict}\n\n`

  if (blocking_issues.length > 0) {
    markdown += `## Blocking Issues\n\n`
    blocking_issues.forEach(issue => {
      markdown += `- **${issue.type}** (${issue.file}:${issue.line})\n`
      markdown += `  ${issue.message}\n`
      markdown += `  \`\`\`\n  ${issue.code}\n  \`\`\`\n\n`
    })
  }

  markdown += `## Review Dimensions\n\n`

  markdown += `### Quality Issues\n`
  markdown += formatIssuesByDimension(dimensions.quality)

  markdown += `### Security Issues\n`
  markdown += formatIssuesByDimension(dimensions.security)

  markdown += `### Architecture Issues\n`
  markdown += formatIssuesByDimension(dimensions.architecture)

  markdown += `### Requirements Issues\n`
  markdown += formatIssuesByDimension(dimensions.requirements)

  if (recommendations.length > 0) {
    markdown += `## Recommendations\n\n`
    recommendations.forEach((rec, i) => {
      markdown += `${i + 1}. ${rec}\n`
    })
  }

  return markdown
}

function formatIssuesByDimension(issues) {
  let markdown = ""

  const severities = ["critical", "high", "medium", "low"]
  severities.forEach(severity => {
    if (issues[severity].length > 0) {
      markdown += `\n**${severity.toUpperCase()}** (${issues[severity].length})\n\n`
      issues[severity].forEach(issue => {
        markdown += `- ${issue.message} (${issue.file}:${issue.line})\n`
        markdown += `  \`${issue.code}\`\n\n`
      })
    }
  })

  return markdown || "No issues found.\n\n"
}
```
@@ -1,845 +0,0 @@

# Spec Quality Command

## Purpose
5-dimension spec quality check with readiness report generation and quality gate determination.
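Each dimension scorer below returns `score`, `weight`, and `weighted_score`. A sketch of rolling those into an overall gate (the helper name and the 80-point pass threshold are assumptions for illustration, not defined by this command):

```javascript
// Illustrative roll-up: sums weighted_score values and applies a gate.
// The 80-point pass threshold is an assumption for this example.
function overallGate(dimensions, threshold = 80) {
  const total = dimensions.reduce((sum, d) => sum + d.weighted_score, 0)
  return { total, pass: total >= threshold }
}

const result = overallGate([
  { score: 90, weight: 25, weighted_score: 90 * 0.25 }, // completeness
  { score: 80, weight: 20, weighted_score: 80 * 0.20 }, // consistency
  { score: 70, weight: 25, weighted_score: 70 * 0.25 }  // traceability
])
console.log(result.total) // 56 (the remaining dimensions would add their shares)
```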
## Quality Dimensions

### 1. Completeness (Weight: 25%)

```javascript
function scoreCompleteness(specDocs) {
  const requiredSections = {
    "product-brief": [
      "Vision Statement",
      "Problem Statement",
      "Target Audience",
      "Success Metrics",
      "Constraints"
    ],
    "prd": [
      "Goals",
      "Requirements",
      "User Stories",
      "Acceptance Criteria",
      "Non-Functional Requirements"
    ],
    "architecture": [
      "System Overview",
      "Component Design",
      "Data Models",
      "API Specifications",
      "Technology Stack"
    ],
    "user-stories": [
      "Story List",
      "Acceptance Criteria",
      "Priority",
      "Estimation"
    ],
    "implementation-plan": [
      "Task Breakdown",
      "Dependencies",
      "Timeline",
      "Resource Allocation"
    ],
    "test-strategy": [
      "Test Scope",
      "Test Cases",
      "Coverage Goals",
      "Test Environment"
    ]
  }

  let totalScore = 0
  let totalWeight = 0
  const details = []

  for (const doc of specDocs) {
    const phase = doc.phase
    const expectedSections = requiredSections[phase] || []

    if (expectedSections.length === 0) continue

    let presentCount = 0
    let substantialCount = 0

    for (const section of expectedSections) {
      const sectionRegex = new RegExp(`##\\s+${section}`, "i")
      const sectionMatch = doc.content.match(sectionRegex)

      if (sectionMatch) {
        presentCount++

        // Check if section has substantial content (not just header)
        const sectionIndex = doc.content.indexOf(sectionMatch[0])
        const nextSectionIndex = doc.content.indexOf("\n##", sectionIndex + 1)
        const sectionContent = nextSectionIndex > -1
          ? doc.content.substring(sectionIndex, nextSectionIndex)
          : doc.content.substring(sectionIndex)

        // Substantial = more than 100 chars excluding header
        const contentWithoutHeader = sectionContent.replace(sectionRegex, "").trim()
        if (contentWithoutHeader.length > 100) {
          substantialCount++
        }
      }
    }

    const presentRatio = presentCount / expectedSections.length
    const substantialRatio = substantialCount / expectedSections.length

    // Score: 50% for presence, 50% for substance
    const docScore = (presentRatio * 50) + (substantialRatio * 50)

    totalScore += docScore
    totalWeight += 100

    details.push({
      phase: phase,
      score: docScore,
      present: presentCount,
      substantial: substantialCount,
      expected: expectedSections.length,
      missing: expectedSections.filter(s => !doc.content.match(new RegExp(`##\\s+${s}`, "i")))
    })
  }

  const overallScore = totalWeight > 0 ? (totalScore / totalWeight) * 100 : 0

  return {
    score: overallScore,
    weight: 25,
    weighted_score: overallScore * 0.25,
    details: details
  }
}
```
### 2. Consistency (Weight: 20%)
|
||||
|
||||
```javascript
|
||||
function scoreConsistency(specDocs) {
|
||||
const issues = []
|
||||
|
||||
// 1. Terminology consistency
|
||||
const terminologyMap = new Map()
|
||||
|
||||
for (const doc of specDocs) {
|
||||
// Extract key terms (capitalized phrases, technical terms)
|
||||
const terms = doc.content.match(/\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b/g) || []
|
||||
|
||||
terms.forEach(term => {
|
||||
const normalized = term.toLowerCase()
|
||||
if (!terminologyMap.has(normalized)) {
|
||||
terminologyMap.set(normalized, new Set())
|
||||
}
|
||||
terminologyMap.get(normalized).add(term)
|
||||
})
|
||||
}
|
||||
|
||||
// Find inconsistent terminology (same concept, different casing/spelling)
|
||||
terminologyMap.forEach((variants, normalized) => {
|
||||
if (variants.size > 1) {
|
||||
issues.push({
|
||||
type: "terminology",
|
||||
severity: "medium",
|
||||
message: `Inconsistent terminology: ${[...variants].join(", ")}`,
|
||||
suggestion: `Standardize to one variant`
|
||||
})
|
||||
}
|
||||
})
|
||||
|
||||
// 2. Format consistency
|
||||
const headerStyles = new Map()
|
||||
for (const doc of specDocs) {
|
||||
const headers = doc.content.match(/^#{1,6}\s+.+$/gm) || []
|
||||
headers.forEach(header => {
|
||||
const level = header.match(/^#+/)[0].length
|
||||
const style = header.includes("**") ? "bold" : "plain"
|
||||
const key = `level-${level}`
|
||||
|
||||
if (!headerStyles.has(key)) {
|
||||
headerStyles.set(key, new Set())
|
||||
}
|
||||
headerStyles.get(key).add(style)
|
||||
})
|
||||
}
|
||||
|
||||
headerStyles.forEach((styles, level) => {
|
||||
if (styles.size > 1) {
|
||||
issues.push({
|
||||
type: "format",
|
||||
severity: "low",
|
||||
message: `Inconsistent header style at ${level}: ${[...styles].join(", ")}`,
|
||||
suggestion: "Use consistent header formatting"
|
||||
})
|
||||
}
|
||||
})
|
||||
|
||||
// 3. Reference consistency
|
||||
const references = new Map()
|
||||
for (const doc of specDocs) {
|
||||
// Extract references to other documents/sections
|
||||
const refs = doc.content.match(/\[.*?\]\(.*?\)/g) || []
|
||||
refs.forEach(ref => {
|
||||
const linkMatch = ref.match(/\((.*?)\)/)
|
||||
if (linkMatch) {
|
||||
const link = linkMatch[1]
|
||||
if (!references.has(link)) {
|
||||
references.set(link, [])
|
||||
}
|
||||
references.get(link).push(doc.phase)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// Check for broken references
|
||||
references.forEach((sources, link) => {
|
||||
if (link.startsWith("./") || link.startsWith("../")) {
|
||||
// Check if file exists
|
||||
const exists = Bash(`test -f ${link}`).exitCode === 0
|
||||
if (!exists) {
|
||||
issues.push({
|
||||
type: "reference",
|
||||
severity: "high",
|
||||
          message: `Broken reference: ${link} (referenced in ${sources.join(", ")})`,
          suggestion: "Fix or remove broken reference"
        })
      }
    }
  })

  // 4. Naming convention consistency
  const namingPatterns = {
    camelCase: /\b[a-z]+(?:[A-Z][a-z]+)+\b/g,
    PascalCase: /\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\b/g,
    snake_case: /\b[a-z]+(?:_[a-z]+)+\b/g,
    kebab_case: /\b[a-z]+(?:-[a-z]+)+\b/g
  }

  const namingCounts = {}
  for (const doc of specDocs) {
    Object.entries(namingPatterns).forEach(([pattern, regex]) => {
      const matches = doc.content.match(regex) || []
      namingCounts[pattern] = (namingCounts[pattern] || 0) + matches.length
    })
  }

  const dominantPattern = Object.entries(namingCounts)
    .sort((a, b) => b[1] - a[1])[0]?.[0]

  Object.entries(namingCounts).forEach(([pattern, count]) => {
    if (pattern !== dominantPattern && count > 10) {
      issues.push({
        type: "naming",
        severity: "low",
        message: `Mixed naming conventions: ${pattern} (${count} occurrences) vs ${dominantPattern}`,
        suggestion: `Standardize to ${dominantPattern}`
      })
    }
  })

  // Calculate score based on issues
  const severityWeights = { high: 10, medium: 5, low: 2 }
  const totalPenalty = issues.reduce((sum, issue) => sum + severityWeights[issue.severity], 0)
  const maxPenalty = 100 // Arbitrary max for normalization

  const score = Math.max(0, 100 - (totalPenalty / maxPenalty) * 100)

  return {
    score: score,
    weight: 20,
    weighted_score: score * 0.20,
    issues: issues,
    details: {
      terminology_issues: issues.filter(i => i.type === "terminology").length,
      format_issues: issues.filter(i => i.type === "format").length,
      reference_issues: issues.filter(i => i.type === "reference").length,
      naming_issues: issues.filter(i => i.type === "naming").length
    }
  }
}
```

### 3. Traceability (Weight: 25%)

```javascript
function scoreTraceability(specDocs) {
  const chains = []

  // Extract traceability elements
  const goals = extractElements(specDocs, "product-brief", /^[-*]\s+Goal:\s*(.+)$/gm)
  const requirements = extractElements(specDocs, "prd", /^[-*]\s+(?:REQ-\d+|Requirement):\s*(.+)$/gm)
  const components = extractElements(specDocs, "architecture", /^[-*]\s+(?:Component|Module):\s*(.+)$/gm)
  const stories = extractElements(specDocs, "user-stories", /^[-*]\s+(?:US-\d+|Story):\s*(.+)$/gm)

  // Build traceability chains: Goals → Requirements → Components → Stories
  for (const goal of goals) {
    const chain = {
      goal: goal.text,
      requirements: [],
      components: [],
      stories: [],
      complete: false
    }

    // Find requirements that reference this goal
    const goalKeywords = extractKeywords(goal.text)
    for (const req of requirements) {
      if (hasKeywordOverlap(req.text, goalKeywords, 0.3)) {
        chain.requirements.push(req.text)

        // Find components that implement this requirement
        const reqKeywords = extractKeywords(req.text)
        for (const comp of components) {
          if (hasKeywordOverlap(comp.text, reqKeywords, 0.3)) {
            chain.components.push(comp.text)
          }
        }

        // Find stories that implement this requirement
        for (const story of stories) {
          if (hasKeywordOverlap(story.text, reqKeywords, 0.3)) {
            chain.stories.push(story.text)
          }
        }
      }
    }

    // Check if chain is complete
    chain.complete = chain.requirements.length > 0 &&
      chain.components.length > 0 &&
      chain.stories.length > 0

    chains.push(chain)
  }

  // Calculate score
  const completeChains = chains.filter(c => c.complete).length
  const totalChains = chains.length
  const score = totalChains > 0 ? (completeChains / totalChains) * 100 : 0

  // Identify weak links
  const weakLinks = []
  chains.forEach((chain, idx) => {
    if (!chain.complete) {
      if (chain.requirements.length === 0) {
        weakLinks.push(`Goal ${idx + 1} has no linked requirements`)
      }
      if (chain.components.length === 0) {
        weakLinks.push(`Goal ${idx + 1} has no linked components`)
      }
      if (chain.stories.length === 0) {
        weakLinks.push(`Goal ${idx + 1} has no linked stories`)
      }
    }
  })

  return {
    score: score,
    weight: 25,
    weighted_score: score * 0.25,
    details: {
      total_chains: totalChains,
      complete_chains: completeChains,
      weak_links: weakLinks
    },
    chains: chains
  }
}

function extractElements(specDocs, phase, regex) {
  const elements = []
  const doc = specDocs.find(d => d.phase === phase)

  if (doc) {
    let match
    while ((match = regex.exec(doc.content)) !== null) {
      elements.push({
        text: match[1].trim(),
        phase: phase
      })
    }
  }

  return elements
}

function extractKeywords(text) {
  // Extract meaningful words (4+ chars, not common words)
  const commonWords = new Set(["that", "this", "with", "from", "have", "will", "should", "must", "can"])
  const words = text.toLowerCase().match(/\b\w{4,}\b/g) || []
  return words.filter(w => !commonWords.has(w))
}

function hasKeywordOverlap(text, keywords, threshold) {
  if (keywords.length === 0) return false // Guard: empty keyword list would otherwise divide by zero
  const textLower = text.toLowerCase()
  const matchCount = keywords.filter(kw => textLower.includes(kw)).length
  return matchCount / keywords.length >= threshold
}
```
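
The keyword helpers are pure functions, so their matching behavior can be spot-checked in isolation. A small self-contained exercise (the sample goal and requirement strings are illustrative only):

```javascript
// Standalone copies of the keyword helpers above, exercised on sample spec lines.
function extractKeywords(text) {
  const commonWords = new Set(["that", "this", "with", "from", "have", "will", "should", "must", "can"])
  const words = text.toLowerCase().match(/\b\w{4,}\b/g) || []
  return words.filter(w => !commonWords.has(w))
}

function hasKeywordOverlap(text, keywords, threshold) {
  if (keywords.length === 0) return false
  const textLower = text.toLowerCase()
  const matchCount = keywords.filter(kw => textLower.includes(kw)).length
  return matchCount / keywords.length >= threshold
}

const goal = "Reduce checkout latency for mobile users"
const keywords = extractKeywords(goal)
// keywords: ["reduce", "checkout", "latency", "mobile", "users"] ("for" is under 4 chars)

// 2 of 5 keywords appear in the related requirement: 0.4 >= 0.3, so it links.
const related = hasKeywordOverlap("REQ: checkout flow must respond in under 200ms on mobile", keywords, 0.3)
const unrelated = hasKeywordOverlap("REQ: admin dashboard shows daily revenue", keywords, 0.3)
```

At the 0.3 threshold used for chain building, even loosely worded requirements link to a goal as long as roughly a third of the goal's keywords reappear.
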

### 4. Depth (Weight: 20%)

```javascript
function scoreDepth(specDocs) {
  const dimensions = []

  // 1. Acceptance Criteria Testability
  const acDoc = specDocs.find(d => d.phase === "prd" || d.phase === "user-stories")
  if (acDoc) {
    const acMatches = acDoc.content.match(/Acceptance Criteria:[\s\S]*?(?=\n##|\n\n[-*]|$)/gi) || []
    let testableCount = 0
    let totalCount = 0

    acMatches.forEach(section => {
      const criteria = section.match(/^[-*]\s+(.+)$/gm) || []
      totalCount += criteria.length

      criteria.forEach(criterion => {
        // Testable if contains measurable verbs or specific conditions
        const testablePatterns = [
          /\b(should|must|will)\s+(display|show|return|validate|check|verify|calculate|send|receive)\b/i,
          /\b(when|if|given)\b.*\b(then|should|must)\b/i,
          /\b\d+\b/, // Contains numbers (measurable)
          /\b(success|error|fail|pass)\b/i
        ]

        const isTestable = testablePatterns.some(pattern => pattern.test(criterion))
        if (isTestable) testableCount++
      })
    })

    const acScore = totalCount > 0 ? (testableCount / totalCount) * 100 : 0
    dimensions.push({
      name: "Acceptance Criteria Testability",
      score: acScore,
      testable: testableCount,
      total: totalCount
    })
  }

  // 2. ADR Justification
  const archDoc = specDocs.find(d => d.phase === "architecture")
  if (archDoc) {
    const adrMatches = archDoc.content.match(/##\s+(?:ADR|Decision)[\s\S]*?(?=\n##|$)/gi) || []
    let justifiedCount = 0
    let totalCount = adrMatches.length

    adrMatches.forEach(adr => {
      // Justified if contains rationale, alternatives, or consequences
      const hasJustification = adr.match(/\b(rationale|reason|because|alternative|consequence|trade-?off)\b/i)
      if (hasJustification) justifiedCount++
    })

    const adrScore = totalCount > 0 ? (justifiedCount / totalCount) * 100 : 100 // Default 100 if no ADRs
    dimensions.push({
      name: "ADR Justification",
      score: adrScore,
      justified: justifiedCount,
      total: totalCount
    })
  }

  // 3. User Stories Estimability
  const storiesDoc = specDocs.find(d => d.phase === "user-stories")
  if (storiesDoc) {
    const storyMatches = storiesDoc.content.match(/^[-*]\s+(?:US-\d+|Story)[\s\S]*?(?=\n[-*]|$)/gim) || []
    let estimableCount = 0
    let totalCount = storyMatches.length

    storyMatches.forEach(story => {
      // Estimable if has clear scope, AC, and no ambiguity
      const hasScope = story.match(/\b(as a|I want|so that)\b/i)
      const hasAC = story.match(/acceptance criteria/i)
      const hasEstimate = story.match(/\b(points?|hours?|days?|estimate)\b/i)

      if ((hasScope && hasAC) || hasEstimate) estimableCount++
    })

    const storiesScore = totalCount > 0 ? (estimableCount / totalCount) * 100 : 0
    dimensions.push({
      name: "User Stories Estimability",
      score: storiesScore,
      estimable: estimableCount,
      total: totalCount
    })
  }

  // 4. Technical Detail Sufficiency
  const techDocs = specDocs.filter(d => d.phase === "architecture" || d.phase === "implementation-plan")
  let detailScore = 0

  if (techDocs.length > 0) {
    const detailIndicators = [
      /```[\s\S]*?```/, // Code blocks
      /\b(API|endpoint|schema|model|interface|class|function)\b/i,
      /\b(GET|POST|PUT|DELETE|PATCH)\b/, // HTTP methods
      /\b(database|table|collection|index)\b/i,
      /\b(authentication|authorization|security)\b/i
    ]

    let indicatorCount = 0
    techDocs.forEach(doc => {
      detailIndicators.forEach(pattern => {
        if (pattern.test(doc.content)) indicatorCount++
      })
    })

    detailScore = Math.min(100, (indicatorCount / (detailIndicators.length * techDocs.length)) * 100)
    dimensions.push({
      name: "Technical Detail Sufficiency",
      score: detailScore,
      indicators_found: indicatorCount,
      indicators_expected: detailIndicators.length * techDocs.length
    })
  }

  // Calculate overall depth score (guard: avoid NaN when no dimensions were scored)
  const overallScore = dimensions.length > 0
    ? dimensions.reduce((sum, d) => sum + d.score, 0) / dimensions.length
    : 0

  return {
    score: overallScore,
    weight: 20,
    weighted_score: overallScore * 0.20,
    dimensions: dimensions
  }
}
```

### 5. Requirement Coverage (Weight: 10%)

```javascript
function scoreRequirementCoverage(specDocs, originalRequirements) {
  // Extract original requirements from task description or initial brief
  const originalReqs = originalRequirements || extractOriginalRequirements(specDocs)

  if (originalReqs.length === 0) {
    return {
      score: 100, // No requirements to cover
      weight: 10,
      weighted_score: 10,
      details: {
        total: 0,
        covered: 0,
        uncovered: []
      }
    }
  }

  // Extract all requirements from spec documents
  const specReqs = []
  for (const doc of specDocs) {
    const reqMatches = doc.content.match(/^[-*]\s+(?:REQ-\d+|Requirement|Feature):\s*(.+)$/gm) || []
    reqMatches.forEach(match => {
      specReqs.push(match.replace(/^[-*]\s+(?:REQ-\d+|Requirement|Feature):\s*/, "").trim())
    })
  }

  // Map original requirements to spec requirements
  const coverage = []
  for (const origReq of originalReqs) {
    const keywords = extractKeywords(origReq)
    const covered = specReqs.some(specReq => hasKeywordOverlap(specReq, keywords, 0.4))

    coverage.push({
      requirement: origReq,
      covered: covered
    })
  }

  const coveredCount = coverage.filter(c => c.covered).length
  const score = (coveredCount / originalReqs.length) * 100

  return {
    score: score,
    weight: 10,
    weighted_score: score * 0.10,
    details: {
      total: originalReqs.length,
      covered: coveredCount,
      uncovered: coverage.filter(c => !c.covered).map(c => c.requirement)
    }
  }
}

function extractOriginalRequirements(specDocs) {
  // Try to find original requirements in product brief
  const briefDoc = specDocs.find(d => d.phase === "product-brief")
  if (!briefDoc) return []

  const reqSection = briefDoc.content.match(/##\s+(?:Requirements|Objectives)[\s\S]*?(?=\n##|$)/i)
  if (!reqSection) return []

  const reqs = reqSection[0].match(/^[-*]\s+(.+)$/gm) || []
  return reqs.map(r => r.replace(/^[-*]\s+/, "").trim())
}
```
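
Taken together, the five scorers feed an overall score. A minimal aggregation sketch, assuming each scorer returns the `weighted_score` shape shown above (the sample values are illustrative; the weights 25 + 20 + 25 + 20 + 10 sum to 100):

```javascript
// Sum the weighted_score fields produced by the five dimension scorers.
function aggregateOverallScore(dimensionResults) {
  return dimensionResults.reduce((sum, d) => sum + d.weighted_score, 0)
}

const results = [
  { name: "completeness", score: 80, weight: 25, weighted_score: 80 * 0.25 },
  { name: "consistency", score: 70, weight: 20, weighted_score: 70 * 0.20 },
  { name: "traceability", score: 90, weight: 25, weighted_score: 90 * 0.25 },
  { name: "depth", score: 60, weight: 20, weighted_score: 60 * 0.20 },
  { name: "coverage", score: 100, weight: 10, weighted_score: 100 * 0.10 }
]

const weightSum = results.reduce((s, d) => s + d.weight, 0) // 100
const overall = aggregateOverallScore(results)
// 20 + 14 + 22.5 + 12 + 10 = 78.5
```
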

## Quality Gate Determination

```javascript
function determineQualityGate(overallScore, coverageScore) {
  // PASS: Score ≥80% AND coverage ≥70%
  if (overallScore >= 80 && coverageScore >= 70) {
    return {
      gate: "PASS",
      message: "Specification meets quality standards and is ready for implementation",
      action: "Proceed to implementation phase"
    }
  }

  // FAIL: Score <60% OR coverage <50%
  if (overallScore < 60 || coverageScore < 50) {
    return {
      gate: "FAIL",
      message: "Specification requires major revisions before implementation",
      action: "Address critical gaps and resubmit for review"
    }
  }

  // REVIEW: Between PASS and FAIL
  return {
    gate: "REVIEW",
    message: "Specification needs improvements but may proceed with caution",
    action: "Address recommendations and consider re-review"
  }
}
```
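
Because the gate function is pure, its threshold behavior is easy to spot-check. A trimmed copy that returns just the gate value, exercised at boundary inputs:

```javascript
// Trimmed copy of determineQualityGate above (gate field only).
function gateOf(overallScore, coverageScore) {
  if (overallScore >= 80 && coverageScore >= 70) return "PASS"
  if (overallScore < 60 || coverageScore < 50) return "FAIL"
  return "REVIEW"
}

const pass = gateOf(80, 70)    // both thresholds met exactly
const fail = gateOf(59.9, 90)  // overall below 60 forces FAIL regardless of coverage
const review = gateOf(85, 65)  // strong score but coverage between 50 and 70
```

Note that a high overall score cannot rescue weak coverage: (85, 65) lands in REVIEW, not PASS.
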

## Readiness Report Generation

```javascript
function formatReadinessReport(report, specDocs) {
  const { overall_score, quality_gate, dimensions, phase_gates } = report

  let markdown = `# Specification Readiness Report\n\n`
  markdown += `**Generated**: ${new Date().toISOString()}\n\n`
  markdown += `**Overall Score**: ${overall_score.toFixed(1)}%\n\n`
  markdown += `**Quality Gate**: ${quality_gate.gate} - ${quality_gate.message}\n\n`
  markdown += `**Recommended Action**: ${quality_gate.action}\n\n`

  markdown += `---\n\n`

  markdown += `## Dimension Scores\n\n`
  markdown += `| Dimension | Score | Weight | Weighted Score |\n`
  markdown += `|-----------|-------|--------|----------------|\n`

  Object.entries(dimensions).forEach(([name, data]) => {
    markdown += `| ${name} | ${data.score.toFixed(1)}% | ${data.weight}% | ${data.weighted_score.toFixed(1)}% |\n`
  })

  markdown += `\n---\n\n`

  // Completeness Details
  markdown += `## Completeness Analysis\n\n`
  dimensions.completeness.details.forEach(detail => {
    markdown += `### ${detail.phase}\n`
    markdown += `- Score: ${detail.score.toFixed(1)}%\n`
    markdown += `- Sections Present: ${detail.present}/${detail.expected}\n`
    markdown += `- Substantial Content: ${detail.substantial}/${detail.expected}\n`
    if (detail.missing.length > 0) {
      markdown += `- Missing: ${detail.missing.join(", ")}\n`
    }
    markdown += `\n`
  })

  // Consistency Details
  markdown += `## Consistency Analysis\n\n`
  if (dimensions.consistency.issues.length > 0) {
    markdown += `**Issues Found**: ${dimensions.consistency.issues.length}\n\n`
    dimensions.consistency.issues.forEach(issue => {
      markdown += `- **${issue.severity.toUpperCase()}**: ${issue.message}\n`
      markdown += `  *Suggestion*: ${issue.suggestion}\n\n`
    })
  } else {
    markdown += `No consistency issues found.\n\n`
  }

  // Traceability Details
  markdown += `## Traceability Analysis\n\n`
  markdown += `- Complete Chains: ${dimensions.traceability.details.complete_chains}/${dimensions.traceability.details.total_chains}\n\n`
  if (dimensions.traceability.details.weak_links.length > 0) {
    markdown += `**Weak Links**:\n`
    dimensions.traceability.details.weak_links.forEach(link => {
      markdown += `- ${link}\n`
    })
    markdown += `\n`
  }

  // Depth Details
  markdown += `## Depth Analysis\n\n`
  dimensions.depth.dimensions.forEach(dim => {
    markdown += `### ${dim.name}\n`
    markdown += `- Score: ${dim.score.toFixed(1)}%\n`
    if (dim.testable !== undefined) {
      markdown += `- Testable: ${dim.testable}/${dim.total}\n`
    }
    if (dim.justified !== undefined) {
      markdown += `- Justified: ${dim.justified}/${dim.total}\n`
    }
    if (dim.estimable !== undefined) {
      markdown += `- Estimable: ${dim.estimable}/${dim.total}\n`
    }
    markdown += `\n`
  })

  // Coverage Details
  markdown += `## Requirement Coverage\n\n`
  markdown += `- Covered: ${dimensions.coverage.details.covered}/${dimensions.coverage.details.total}\n`
  if (dimensions.coverage.details.uncovered.length > 0) {
    markdown += `\n**Uncovered Requirements**:\n`
    dimensions.coverage.details.uncovered.forEach(req => {
      markdown += `- ${req}\n`
    })
  }
  markdown += `\n`

  // Phase Gates
  if (phase_gates) {
    markdown += `---\n\n`
    markdown += `## Phase-Level Quality Gates\n\n`
    Object.entries(phase_gates).forEach(([phase, gate]) => {
      markdown += `### ${phase}\n`
      markdown += `- Gate: ${gate.status}\n`
      markdown += `- Score: ${gate.score.toFixed(1)}%\n`
      if (gate.issues.length > 0) {
        markdown += `- Issues: ${gate.issues.join(", ")}\n`
      }
      markdown += `\n`
    })
  }

  return markdown
}
```

## Spec Summary Generation

```javascript
function formatSpecSummary(specDocs, report) {
  let markdown = `# Specification Summary\n\n`

  markdown += `**Overall Quality Score**: ${report.overall_score.toFixed(1)}%\n`
  markdown += `**Quality Gate**: ${report.quality_gate.gate}\n\n`

  markdown += `---\n\n`

  // Document Overview
  markdown += `## Documents Reviewed\n\n`
  specDocs.forEach(doc => {
    markdown += `### ${doc.phase}\n`
    markdown += `- Path: ${doc.path}\n`
    markdown += `- Size: ${doc.content.length} characters\n`

    // Extract key sections
    const sections = doc.content.match(/^##\s+(.+)$/gm) || []
    if (sections.length > 0) {
      markdown += `- Sections: ${sections.map(s => s.replace(/^##\s+/, "")).join(", ")}\n`
    }
    markdown += `\n`
  })

  markdown += `---\n\n`

  // Key Findings
  markdown += `## Key Findings\n\n`

  // Strengths
  const strengths = []
  Object.entries(report.dimensions).forEach(([name, data]) => {
    if (data.score >= 80) {
      strengths.push(`${name}: ${data.score.toFixed(1)}%`)
    }
  })

  if (strengths.length > 0) {
    markdown += `### Strengths\n`
    strengths.forEach(s => markdown += `- ${s}\n`)
    markdown += `\n`
  }

  // Areas for Improvement
  const improvements = []
  Object.entries(report.dimensions).forEach(([name, data]) => {
    if (data.score < 70) {
      improvements.push(`${name}: ${data.score.toFixed(1)}%`)
    }
  })

  if (improvements.length > 0) {
    markdown += `### Areas for Improvement\n`
    improvements.forEach(i => markdown += `- ${i}\n`)
    markdown += `\n`
  }

  // Recommendations
  if (report.recommendations && report.recommendations.length > 0) {
    markdown += `### Recommendations\n`
    report.recommendations.forEach((rec, i) => {
      markdown += `${i + 1}. ${rec}\n`
    })
    markdown += `\n`
  }

  return markdown
}
```

## Phase-Level Quality Gates

```javascript
function calculatePhaseGates(specDocs) {
  const gates = {}

  for (const doc of specDocs) {
    const phase = doc.phase
    const issues = []
    let score = 100

    // Check minimum content threshold
    if (doc.content.length < 500) {
      issues.push("Insufficient content")
      score -= 30
    }

    // Check for required sections (phase-specific)
    const requiredSections = getRequiredSections(phase)
    const missingSections = requiredSections.filter(section =>
      !doc.content.match(new RegExp(`##\\s+${section}`, "i"))
    )

    if (missingSections.length > 0) {
      issues.push(`Missing sections: ${missingSections.join(", ")}`)
      score -= missingSections.length * 15
    }

    // Determine gate status
    let status = "PASS"
    if (score < 60) status = "FAIL"
    else if (score < 80) status = "REVIEW"

    gates[phase] = {
      status: status,
      score: Math.max(0, score),
      issues: issues
    }
  }

  return gates
}

function getRequiredSections(phase) {
  const sectionMap = {
    "product-brief": ["Vision", "Problem", "Target Audience"],
    "prd": ["Goals", "Requirements", "User Stories"],
    "architecture": ["Overview", "Components", "Data Models"],
    "user-stories": ["Stories", "Acceptance Criteria"],
    "implementation-plan": ["Tasks", "Dependencies"],
    "test-strategy": ["Test Cases", "Coverage"]
  }

  return sectionMap[phase] || []
}
```

# Reviewer Role

## 1. Role Identity

- **Name**: reviewer
- **Task Prefix**: REVIEW-* + QUALITY-*
- **Output Tag**: `[reviewer]`
- **Responsibility**: Discover Task → Branch by Prefix → Review/Score → Report

## 2. Role Boundaries

### MUST
- Only process REVIEW-* and QUALITY-* tasks
- Communicate only with coordinator
- Generate readiness-report.md for QUALITY tasks
- Tag all outputs with `[reviewer]`

### MUST NOT
- Create tasks
- Contact other workers directly
- Modify source code
- Skip quality dimensions
- Approve without verification

## 3. Message Types

| Type | Direction | Purpose | Format |
|------|-----------|---------|--------|
| `task_request` | FROM coordinator | Receive REVIEW-*/QUALITY-* task assignment | `{ type: "task_request", task_id, description, review_mode }` |
| `task_complete` | TO coordinator | Report review success | `{ type: "task_complete", task_id, status: "success", verdict, score, issues }` |
| `task_failed` | TO coordinator | Report review failure | `{ type: "task_failed", task_id, error }` |

## 4. Message Bus

**Primary**: Use `team_msg` for all coordinator communication with `[reviewer]` tag:
```javascript
// Code review completion
team_msg({
  to: "coordinator",
  type: "task_complete",
  task_id: "REVIEW-001",
  status: "success",
  verdict: "APPROVE",
  issues: { critical: 0, high: 2, medium: 5, low: 3 },
  recommendations: ["Fix console.log statements", "Add error handling"]
}, "[reviewer]")

// Spec quality completion
team_msg({
  to: "coordinator",
  type: "task_complete",
  task_id: "QUALITY-001",
  status: "success",
  overall_score: 85.5,
  quality_gate: "PASS",
  dimensions: {
    completeness: 90,
    consistency: 85,
    traceability: 80,
    depth: 88,
    coverage: 82
  }
}, "[reviewer]")
```

**CLI Fallback**: When the message bus is unavailable, write to `.workflow/.team/messages/reviewer-{timestamp}.json`

## 5. Toolbox

### Available Commands
- `commands/code-review.md` - 4-dimension code review (quality, security, architecture, requirements)
- `commands/spec-quality.md` - 5-dimension spec quality check (completeness, consistency, traceability, depth, coverage)

### CLI Capabilities
- None (uses Grep-based analysis)

## 6. Execution (5-Phase) - Dual-Prefix

### Phase 1: Task Discovery

**Dual Prefix Filter**:
```javascript
const tasks = Glob(".workflow/.team/tasks/{REVIEW,QUALITY}-*.json")
  .filter(task => task.status === "pending" && task.assigned_to === "reviewer")

// Determine review mode
const reviewMode = task.task_id.startsWith("REVIEW-") ? "code" : "spec"
```

### Phase 2: Context Loading (Branch by Mode)

**Code Review Context (REVIEW-*)**:
```javascript
if (reviewMode === "code") {
  // Load plan
  const planPath = task.metadata?.plan_path || ".workflow/plan.md"
  const plan = Read(planPath)

  // Get git diff
  const implTaskId = task.metadata?.impl_task_id
  const gitDiff = Bash("git diff HEAD").stdout

  // Load modified files
  const modifiedFiles = Bash("git diff --name-only HEAD").stdout.split("\n").filter(Boolean)
  const fileContents = modifiedFiles.map(f => ({
    path: f,
    content: Read(f)
  }))

  // Load test results if available
  const testTaskId = task.metadata?.test_task_id
  const testResults = testTaskId ? Read(`.workflow/.team/tasks/${testTaskId}.json`) : null
}
```

**Spec Quality Context (QUALITY-*)**:
```javascript
if (reviewMode === "spec") {
  // Load session folder
  const sessionFolder = task.metadata?.session_folder || ".workflow/.sessions/latest"

  // Load quality gates
  const qualityGates = task.metadata?.quality_gates || {
    pass_threshold: 80,
    fail_threshold: 60,
    coverage_threshold: 70
  }

  // Load all spec documents
  const specDocs = Glob(`${sessionFolder}/**/*.md`).map(path => ({
    path: path,
    content: Read(path),
    phase: extractPhase(path)
  }))
}
```

### Phase 3: Review Execution (Delegate by Mode)

**Code Review**:
```javascript
if (reviewMode === "code") {
  const codeReviewCommand = Read("commands/code-review.md")
  // Command handles:
  // - reviewQuality (ts-ignore, any, console.log, empty catch)
  // - reviewSecurity (eval/exec, secrets, SQL injection, XSS)
  // - reviewArchitecture (parent imports, large files)
  // - verifyRequirements (plan acceptance criteria vs implementation)
  // - Verdict determination (BLOCK/CONDITIONAL/APPROVE)
}
```

**Spec Quality**:
```javascript
if (reviewMode === "spec") {
  const specQualityCommand = Read("commands/spec-quality.md")
  // Command handles:
  // - scoreCompleteness (section content checks)
  // - scoreConsistency (terminology, format, references)
  // - scoreTraceability (goals → reqs → arch → stories chain)
  // - scoreDepth (AC testable, ADRs justified, stories estimable)
  // - scoreRequirementCoverage (original requirements → document mapping)
  // - Quality gate determination (PASS ≥80%, FAIL <60%, else REVIEW)
  // - readiness-report.md generation
  // - spec-summary.md generation
}
```

### Phase 4: Report Generation (Branch by Mode)

**Code Review Report**:
```javascript
if (reviewMode === "code") {
  const report = {
    verdict: verdict, // BLOCK | CONDITIONAL | APPROVE
    dimensions: {
      quality: qualityIssues,
      security: securityIssues,
      architecture: architectureIssues,
      requirements: requirementIssues
    },
    recommendations: recommendations,
    blocking_issues: blockingIssues
  }

  // Write review report
  Write(`.workflow/.team/reviews/${task.task_id}-report.md`, formatCodeReviewReport(report))
}
```

**Spec Quality Report**:
```javascript
if (reviewMode === "spec") {
  const report = {
    overall_score: overallScore,
    quality_gate: qualityGate, // PASS | REVIEW | FAIL
    dimensions: {
      completeness: completenessScore,
      consistency: consistencyScore,
      traceability: traceabilityScore,
      depth: depthScore,
      coverage: coverageScore
    },
    phase_gates: phaseGates,
    recommendations: recommendations
  }

  // Write readiness report
  Write(`${sessionFolder}/readiness-report.md`, formatReadinessReport(report))

  // Write spec summary
  Write(`${sessionFolder}/spec-summary.md`, formatSpecSummary(specDocs, report))
}
```

### Phase 5: Report to Coordinator (Branch by Mode)

**Code Review Completion**:
```javascript
if (reviewMode === "code") {
  team_msg({
    to: "coordinator",
    type: "task_complete",
    task_id: task.task_id,
    status: "success",
    verdict: verdict,
    issues: {
      critical: blockingIssues.length,
      high: highIssues.length,
      medium: mediumIssues.length,
      low: lowIssues.length
    },
    recommendations: recommendations,
    report_path: `.workflow/.team/reviews/${task.task_id}-report.md`,
    timestamp: new Date().toISOString()
  }, "[reviewer]")
}
```

**Spec Quality Completion**:
```javascript
if (reviewMode === "spec") {
  team_msg({
    to: "coordinator",
    type: "task_complete",
    task_id: task.task_id,
    status: "success",
    overall_score: overallScore,
    quality_gate: qualityGate,
    dimensions: {
      completeness: completenessScore,
      consistency: consistencyScore,
      traceability: traceabilityScore,
      depth: depthScore,
      coverage: coverageScore
    },
    report_path: `${sessionFolder}/readiness-report.md`,
    summary_path: `${sessionFolder}/spec-summary.md`,
    timestamp: new Date().toISOString()
  }, "[reviewer]")
}
```

## 7. Code Review Dimensions

### Quality Dimension

**Anti-patterns**:
- `@ts-ignore` / `@ts-expect-error` without justification
- `any` type usage
- `console.log` in production code
- Empty catch blocks
- Magic numbers
- Duplicate code

**Severity**:
- Critical: Empty catch, any in public APIs
- High: @ts-ignore without comment, console.log
- Medium: Magic numbers, duplicate code
- Low: Minor style issues

### Security Dimension

**Vulnerabilities**:
- `eval()` / `exec()` usage
- `innerHTML` / `dangerouslySetInnerHTML`
- Hardcoded secrets (API keys, passwords)
- SQL injection vectors
- XSS vulnerabilities
- Insecure dependencies

**Severity**:
- Critical: Hardcoded secrets, SQL injection
- High: eval/exec, innerHTML
- Medium: Insecure dependencies
- Low: Missing input validation

### Architecture Dimension

**Issues**:
- Parent directory imports (`../../../`)
- Large files (>500 lines)
- Circular dependencies
- Missing abstractions
- Tight coupling

**Severity**:
- Critical: Circular dependencies
- High: Excessive parent imports (>2 levels)
- Medium: Large files, tight coupling
- Low: Minor structure issues

### Requirements Dimension

**Verification**:
- Acceptance criteria coverage
- Feature completeness
- Edge case handling
- Error handling

**Severity**:
- Critical: Missing core functionality
- High: Incomplete acceptance criteria
- Medium: Missing edge cases
- Low: Minor feature gaps

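The anti-pattern checks in the Quality and Security dimensions amount to Grep-style pattern matching over file contents. A minimal sketch of such a scan; the pattern list and severity mapping here are illustrative, not the `code-review.md` command's exact rules:

```javascript
// Scan file content for a few of the anti-patterns listed above and
// tally hits per pattern with a severity label.
const antiPatterns = [
  { name: "empty catch", regex: /catch\s*\([^)]*\)\s*\{\s*\}/g, severity: "critical" },
  { name: "ts-ignore", regex: /@ts-ignore/g, severity: "high" },
  { name: "console.log", regex: /console\.log\(/g, severity: "high" },
  { name: "eval", regex: /\beval\(/g, severity: "high" }
]

function scanFile(path, content) {
  const findings = []
  for (const { name, regex, severity } of antiPatterns) {
    const count = (content.match(regex) || []).length
    if (count > 0) findings.push({ path, name, severity, count })
  }
  return findings
}

// Hypothetical file content with one empty catch and two console.log calls.
const sample = `
try { risky() } catch (e) {}
console.log("debug")
console.log("more debug")
`
const findings = scanFile("src/sample.ts", sample)
```
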
## 8. Spec Quality Dimensions

### Completeness (Weight: 25%)

**Checks**:
- All required sections present
- Section content depth (not just headers)
- Cross-phase coverage
- Artifact completeness

**Scoring**:
- 100%: All sections with substantial content
- 75%: All sections present, some thin
- 50%: Missing 1-2 sections
- 25%: Missing 3+ sections
- 0%: Critical sections missing

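A sketch of how that rubric might map onto section counts. The `completenessTier` helper is hypothetical; the actual scorer lives in `commands/spec-quality.md`:

```javascript
// Map the completeness rubric above onto counts of missing/thin sections.
// Illustrative only; tier boundaries follow the scoring list verbatim.
function completenessTier({ criticalMissing, missing, thin }) {
  if (criticalMissing) return 0   // critical sections missing
  if (missing >= 3) return 25     // missing 3+ sections
  if (missing >= 1) return 50     // missing 1-2 sections
  if (thin > 0) return 75         // all present, some thin
  return 100                      // all sections with substantial content
}

const full = completenessTier({ criticalMissing: false, missing: 0, thin: 0 })
const gaps = completenessTier({ criticalMissing: false, missing: 2, thin: 1 })
const fatal = completenessTier({ criticalMissing: true, missing: 0, thin: 0 })
```
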
### Consistency (Weight: 20%)
|
||||
|
||||
**Checks**:
|
||||
- Terminology consistency
|
||||
- Format consistency
|
||||
- Reference consistency
|
||||
- Naming conventions
|
||||
|
||||
**Scoring**:
|
||||
- 100%: Fully consistent
|
||||
- 75%: Minor inconsistencies (1-2)
|
||||
- 50%: Moderate inconsistencies (3-5)
|
||||
- 25%: Major inconsistencies (6+)
|
||||
- 0%: Chaotic inconsistency
|
||||
|
||||
### Traceability (Weight: 25%)
|
||||
|
||||
**Checks**:
|
||||
- Goals → Requirements chain
|
||||
- Requirements → Architecture chain
|
||||
- Architecture → User Stories chain
|
||||
- Bidirectional references
|
||||
|
||||
**Scoring**:
|
||||
- 100%: Full traceability chain
|
||||
- 75%: 1 weak link
|
||||
- 50%: 2 weak links
|
||||
- 25%: 3+ weak links
|
||||
- 0%: No traceability
|
||||
|
||||
### Depth (Weight: 20%)
|
||||
|
||||
**Checks**:
|
||||
- Acceptance criteria testable
|
||||
- ADRs justified
|
||||
- User stories estimable
|
||||
- Technical details sufficient
|
||||
|
||||
**Scoring**:
|
||||
- 100%: All items detailed
|
||||
- 75%: 1-2 shallow items
|
||||
- 50%: 3-5 shallow items
|
||||
- 25%: 6+ shallow items
|
||||
- 0%: All items shallow
|
||||
|
||||
### Coverage (Weight: 10%)
|
||||
|
||||
**Checks**:
|
||||
- Original requirements mapped
|
||||
- All features documented
|
||||
- All constraints addressed
|
||||
- All stakeholders considered
|
||||
|
||||
**Scoring**:
|
||||
- 100%: Full coverage (100%)
|
||||
- 75%: High coverage (80-99%)
|
||||
- 50%: Moderate coverage (60-79%)
|
||||
- 25%: Low coverage (40-59%)
|
||||
- 0%: Minimal coverage (<40%)
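The five weights above sum to 100%, so an overall spec score is a weighted average of the per-dimension scores. A minimal sketch (the helper name and input shape are illustrative, not part of the skill's API; dimension scores are assumed to come from the rubrics above):

```javascript
// Illustrative helper: combine per-dimension scores (0-100) into one overall
// spec quality score using the weights defined above.
const SPEC_WEIGHTS = {
  completeness: 0.25,
  consistency: 0.20,
  traceability: 0.25,
  depth: 0.20,
  coverage: 0.10
}

function computeSpecScore(dimensionScores) {
  let total = 0
  for (const [dimension, weight] of Object.entries(SPEC_WEIGHTS)) {
    // Missing dimensions count as 0, per the scoring rubrics
    total += (dimensionScores[dimension] ?? 0) * weight
  }
  return Math.round(total)
}

// Example: strong completeness/traceability, thin depth
// computeSpecScore({ completeness: 100, consistency: 75,
//                    traceability: 100, depth: 50, coverage: 75 }) // → 83
```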
## 9. Verdict/Gate Determination

### Code Review Verdicts

| Verdict | Criteria | Action |
|---------|----------|--------|
| **BLOCK** | Critical issues present | Must fix before merge |
| **CONDITIONAL** | High/medium issues only | Fix recommended, merge allowed |
| **APPROVE** | Low issues or none | Ready to merge |

### Spec Quality Gates

| Gate | Criteria | Action |
|------|----------|--------|
| **PASS** | Score ≥80% AND coverage ≥70% | Ready for implementation |
| **REVIEW** | Score 60-79% OR coverage 50-69% | Revisions recommended |
| **FAIL** | Score <60% OR coverage <50% | Major revisions required |
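Both tables map directly onto small decision functions. A sketch under the assumption that severity counts and the score/coverage percentages are computed as described earlier (note FAIL is checked before defaulting to REVIEW, so its criteria take precedence):

```javascript
// Code review verdict from severity counts, per the verdicts table above.
function determineReviewVerdict(counts) {
  if (counts.critical > 0) return "BLOCK"
  if (counts.high > 0 || counts.medium > 0) return "CONDITIONAL"
  return "APPROVE"
}

// Spec quality gate from overall score and coverage percentages,
// per the gates table above.
function determineSpecGate(score, coverage) {
  if (score >= 80 && coverage >= 70) return "PASS"
  if (score < 60 || coverage < 50) return "FAIL"
  return "REVIEW"
}
```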
## 10. Error Handling

| Error Type | Recovery Strategy | Escalation |
|------------|-------------------|------------|
| Missing context | Request from coordinator | Immediate escalation |
| Invalid review mode | Abort with error | Report to coordinator |
| Analysis failure | Retry with verbose logging | Report after 2 failures |
| Report generation failure | Use fallback template | Report with partial results |
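The "retry, then report after 2 failures" rows above imply a small retry wrapper. A minimal sketch, assuming the reporting callback is supplied by the role (the names here are illustrative, not part of the skill's API):

```javascript
// Illustrative retry helper: run an operation, retry on failure, and
// escalate to the coordinator once the attempt budget is exhausted.
function retryWithEscalation(operation, report, maxAttempts = 2) {
  const errors = []
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { success: true, result: operation(attempt) }
    } catch (err) {
      errors.push(String(err))
    }
  }
  // All attempts failed: report with the full failure history
  report({ type: "task_failed", error: errors[errors.length - 1], attempts: maxAttempts })
  return { success: false, errors }
}
```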
@@ -1,538 +0,0 @@

# Validate Command

## Purpose
Test-fix cycle with strategy engine for automated test failure resolution.

## Configuration

```javascript
const MAX_ITERATIONS = 10
const PASS_RATE_TARGET = 95 // percentage
```
## Main Iteration Loop

```javascript
function runTestFixCycle(task, framework, affectedTests, modifiedFiles) {
  let iteration = 0
  let bestPassRate = 0
  let bestResults = null

  while (iteration < MAX_ITERATIONS) {
    iteration++

    // Phase 1: Run Tests
    const testCommand = buildTestCommand(framework, affectedTests, iteration === 1)
    const testOutput = Bash(testCommand, { timeout: 120000 })
    // `let` (not `const`): both may be reassigned after a full-suite run below
    let results = parseTestResults(testOutput.stdout + testOutput.stderr, framework)

    let passRate = results.total > 0 ? (results.passed / results.total * 100) : 0

    // Track best result
    if (passRate > bestPassRate) {
      bestPassRate = passRate
      bestResults = results
    }

    // Progress update for long cycles
    if (iteration > 5) {
      team_msg({
        to: "coordinator",
        type: "progress_update",
        task_id: task.task_id,
        iteration: iteration,
        pass_rate: passRate.toFixed(1),
        tests_passed: results.passed,
        tests_failed: results.failed,
        message: `Test-fix cycle iteration ${iteration}/${MAX_ITERATIONS}`
      }, "[tester]")
    }

    // Phase 2: Check Success
    if (passRate >= PASS_RATE_TARGET) {
      // Quality gate: run the full suite if only affected tests passed
      if (affectedTests.length > 0 && iteration === 1) {
        team_msg({
          to: "coordinator",
          type: "progress_update",
          task_id: task.task_id,
          message: "Affected tests passed, running full suite..."
        }, "[tester]")

        const fullSuiteCommand = buildTestCommand(framework, [], false)
        const fullOutput = Bash(fullSuiteCommand, { timeout: 300000 })
        const fullResults = parseTestResults(fullOutput.stdout + fullOutput.stderr, framework)
        const fullPassRate = fullResults.total > 0 ? (fullResults.passed / fullResults.total * 100) : 0

        if (fullPassRate >= PASS_RATE_TARGET) {
          return {
            success: true,
            results: fullResults,
            iterations: iteration,
            full_suite_run: true
          }
        } else {
          // Full suite failed: continue fixing against the full-suite results
          results = fullResults
          passRate = fullPassRate
        }
      } else {
        return {
          success: true,
          results: results,
          iterations: iteration,
          full_suite_run: affectedTests.length === 0
        }
      }
    }

    // Phase 3: Analyze Failures
    if (results.failures.length === 0) {
      break // No failures to fix
    }

    const classified = classifyFailures(results.failures)

    // Phase 4: Select Strategy
    const strategy = selectStrategy(iteration, passRate, results.failures)

    team_msg({
      to: "coordinator",
      type: "progress_update",
      task_id: task.task_id,
      iteration: iteration,
      strategy: strategy,
      failures: {
        critical: classified.critical.length,
        high: classified.high.length,
        medium: classified.medium.length,
        low: classified.low.length
      }
    }, "[tester]")

    // Phase 5: Apply Fixes
    const fixResult = applyFixes(strategy, results.failures, framework, modifiedFiles)

    if (!fixResult.success) {
      // Fix application failed; try the next iteration with a different strategy
      continue
    }
  }

  // Max iterations reached
  return {
    success: false,
    results: bestResults,
    iterations: MAX_ITERATIONS,
    best_pass_rate: bestPassRate,
    error: "Max iterations reached without achieving target pass rate"
  }
}
```
## Strategy Selection

```javascript
function selectStrategy(iteration, passRate, failures) {
  const classified = classifyFailures(failures)

  // Conservative: early iterations or high pass rate
  if (iteration <= 3 || passRate >= 80) {
    return "conservative"
  }

  // Surgical: a small number of critical failures
  if (classified.critical.length > 0 && classified.critical.length < 5) {
    return "surgical"
  }

  // Aggressive: low pass rate or many iterations
  if (passRate < 50 || iteration > 7) {
    return "aggressive"
  }

  return "conservative"
}
```
## Fix Application

### Conservative Strategy

```javascript
function applyConservativeFixes(failures, framework, modifiedFiles) {
  const classified = classifyFailures(failures)

  // Fix only the first critical failure
  if (classified.critical.length > 0) {
    const failure = classified.critical[0]
    return fixSingleFailure(failure, framework, modifiedFiles)
  }

  // If no critical, fix the first high-priority failure
  if (classified.high.length > 0) {
    const failure = classified.high[0]
    return fixSingleFailure(failure, framework, modifiedFiles)
  }

  return { success: false, error: "No fixable failures found" }
}
```
### Surgical Strategy

```javascript
function applySurgicalFixes(failures, framework, modifiedFiles) {
  // Identify common pattern
  const pattern = identifyCommonPattern(failures)

  if (!pattern) {
    return { success: false, error: "No common pattern identified" }
  }

  // Apply pattern-based fix across all occurrences
  const fixes = []

  for (const failure of failures) {
    if (matchesPattern(failure, pattern)) {
      const fix = generatePatternFix(failure, pattern, framework)
      fixes.push(fix)
    }
  }

  // Apply all fixes in batch
  for (const fix of fixes) {
    applyFix(fix, modifiedFiles)
  }

  return {
    success: true,
    fixes_applied: fixes.length,
    pattern: pattern
  }
}

function identifyCommonPattern(failures) {
  // Group failures by error type
  const errorTypes = {}

  for (const failure of failures) {
    const errorType = extractErrorType(failure.error)
    if (!errorTypes[errorType]) {
      errorTypes[errorType] = []
    }
    errorTypes[errorType].push(failure)
  }

  // Find the most common error type
  let maxCount = 0
  let commonPattern = null

  for (const [errorType, instances] of Object.entries(errorTypes)) {
    if (instances.length > maxCount) {
      maxCount = instances.length
      commonPattern = {
        type: errorType,
        instances: instances,
        count: instances.length
      }
    }
  }

  return maxCount >= 3 ? commonPattern : null
}

function extractErrorType(error) {
  const errorLower = error.toLowerCase()

  if (errorLower.includes("cannot find module")) return "missing_import"
  if (errorLower.includes("is not defined")) return "undefined_variable"
  if (errorLower.includes("expected") && errorLower.includes("received")) return "assertion_mismatch"
  if (errorLower.includes("timeout")) return "timeout"
  if (errorLower.includes("syntaxerror")) return "syntax_error"

  return "unknown"
}
```
### Aggressive Strategy

```javascript
function applyAggressiveFixes(failures, framework, modifiedFiles) {
  const classified = classifyFailures(failures)
  const fixes = []

  // Fix all critical failures
  for (const failure of classified.critical) {
    const fix = generateFix(failure, framework, modifiedFiles)
    if (fix) {
      fixes.push(fix)
    }
  }

  // Fix all high-priority failures
  for (const failure of classified.high) {
    const fix = generateFix(failure, framework, modifiedFiles)
    if (fix) {
      fixes.push(fix)
    }
  }

  // Apply all fixes
  for (const fix of fixes) {
    applyFix(fix, modifiedFiles)
  }

  return {
    success: fixes.length > 0,
    fixes_applied: fixes.length
  }
}
```
### Fix Generation

```javascript
function generateFix(failure, framework, modifiedFiles) {
  const errorType = extractErrorType(failure.error)

  switch (errorType) {
    case "missing_import":
      return generateImportFix(failure, modifiedFiles)

    case "undefined_variable":
      return generateVariableFix(failure, modifiedFiles)

    case "assertion_mismatch":
      return generateAssertionFix(failure, framework)

    case "timeout":
      return generateTimeoutFix(failure, framework)

    case "syntax_error":
      return generateSyntaxFix(failure, modifiedFiles)

    default:
      return null
  }
}

function generateImportFix(failure, modifiedFiles) {
  // Extract module name from error
  const moduleMatch = failure.error.match(/Cannot find module ['"](.+?)['"]/)
  if (!moduleMatch) return null

  const moduleName = moduleMatch[1]

  // Find test file
  const testFile = extractTestFile(failure.test)
  if (!testFile) return null

  // Check if module exists in modified files
  const sourceFile = modifiedFiles.find(f =>
    f.includes(moduleName) || f.endsWith(`${moduleName}.ts`) || f.endsWith(`${moduleName}.js`)
  )

  if (!sourceFile) return null

  // Generate import statement
  const relativePath = calculateRelativePath(testFile, sourceFile)
  const importStatement = `import { } from '${relativePath}'`

  return {
    file: testFile,
    type: "add_import",
    content: importStatement,
    line: 1 // Add at top of file
  }
}

function generateAssertionFix(failure, framework) {
  // Extract expected vs received values
  const expectedMatch = failure.error.match(/Expected:\s*(.+?)(?:\n|$)/)
  const receivedMatch = failure.error.match(/Received:\s*(.+?)(?:\n|$)/)

  if (!expectedMatch || !receivedMatch) return null

  const expected = expectedMatch[1].trim()
  const received = receivedMatch[1].trim()

  // Find test file and line
  const testFile = extractTestFile(failure.test)
  const testLine = extractTestLine(failure.error)

  if (!testFile || !testLine) return null

  return {
    file: testFile,
    type: "update_assertion",
    line: testLine,
    old_value: expected,
    new_value: received,
    note: "Auto-updated assertion based on actual behavior"
  }
}
```
## Test Result Parsing

```javascript
function parseTestResults(output, framework) {
  const results = {
    total: 0,
    passed: 0,
    failed: 0,
    skipped: 0,
    failures: []
  }

  if (framework === "jest" || framework === "vitest") {
    // Parse summary line
    const summaryMatch = output.match(/Tests:\s+(?:(\d+)\s+failed,\s+)?(?:(\d+)\s+passed,\s+)?(\d+)\s+total/)
    if (summaryMatch) {
      results.failed = summaryMatch[1] ? parseInt(summaryMatch[1]) : 0
      results.passed = summaryMatch[2] ? parseInt(summaryMatch[2]) : 0
      results.total = parseInt(summaryMatch[3])
    }

    // Alternative format
    if (results.total === 0) {
      const altMatch = output.match(/(\d+)\s+passed.*?(\d+)\s+total/)
      if (altMatch) {
        results.passed = parseInt(altMatch[1])
        results.total = parseInt(altMatch[2])
        results.failed = results.total - results.passed
      }
    }

    // Extract failure details
    const failureRegex = /●\s+(.*?)\n\n([\s\S]*?)(?=\n\n●|\n\nTest Suites:|\n\n$)/g
    let match
    while ((match = failureRegex.exec(output)) !== null) {
      results.failures.push({
        test: match[1].trim(),
        error: match[2].trim()
      })
    }

  } else if (framework === "pytest") {
    // Parse pytest summary
    const summaryMatch = output.match(/=+\s+(?:(\d+)\s+failed,?\s+)?(?:(\d+)\s+passed)?/)
    if (summaryMatch) {
      results.failed = summaryMatch[1] ? parseInt(summaryMatch[1]) : 0
      results.passed = summaryMatch[2] ? parseInt(summaryMatch[2]) : 0
      results.total = results.failed + results.passed
    }

    // Extract failure details
    const failureRegex = /FAILED\s+(.*?)\s+-\s+([\s\S]*?)(?=\n_+|FAILED|=+\s+\d+)/g
    let match
    while ((match = failureRegex.exec(output)) !== null) {
      results.failures.push({
        test: match[1].trim(),
        error: match[2].trim()
      })
    }
  }

  return results
}
```
## Test Command Building

```javascript
function buildTestCommand(framework, affectedTests, isFirstRun) {
  const testFiles = affectedTests.length > 0 ? affectedTests.join(" ") : ""

  switch (framework) {
    case "vitest":
      return testFiles
        ? `vitest run ${testFiles} --reporter=verbose`
        : `vitest run --reporter=verbose`

    case "jest":
      return testFiles
        ? `jest ${testFiles} --no-coverage --verbose`
        : `jest --no-coverage --verbose`

    case "mocha":
      return testFiles
        ? `mocha ${testFiles} --reporter spec`
        : `mocha --reporter spec`

    case "pytest":
      return testFiles
        ? `pytest ${testFiles} -v --tb=short`
        : `pytest -v --tb=short`

    default:
      throw new Error(`Unsupported test framework: ${framework}`)
  }
}
```
## Utility Functions

### Extract Test File

```javascript
function extractTestFile(testName) {
  // Extract file path from test name
  // Format: "path/to/file.test.ts > describe block > test name"
  const fileMatch = testName.match(/^(.*?\.(?:test|spec)\.[jt]sx?)/)
  return fileMatch ? fileMatch[1] : null
}
```

### Extract Test Line

```javascript
function extractTestLine(error) {
  // Extract line number from error stack
  const lineMatch = error.match(/:(\d+):\d+/)
  return lineMatch ? parseInt(lineMatch[1]) : null
}
```

### Calculate Relative Path

```javascript
function calculateRelativePath(fromFile, toFile) {
  const fromParts = fromFile.split("/")
  const toParts = toFile.split("/")

  // Remove filename
  fromParts.pop()

  // Find common base
  let commonLength = 0
  while (commonLength < fromParts.length &&
         commonLength < toParts.length &&
         fromParts[commonLength] === toParts[commonLength]) {
    commonLength++
  }

  // Build relative path
  const upLevels = fromParts.length - commonLength
  const downPath = toParts.slice(commonLength)

  const relativeParts = []
  for (let i = 0; i < upLevels; i++) {
    relativeParts.push("..")
  }
  relativeParts.push(...downPath)

  let path = relativeParts.join("/")

  // Remove file extension
  path = path.replace(/\.[jt]sx?$/, "")

  // Ensure starts with ./
  if (!path.startsWith(".")) {
    path = "./" + path
  }

  return path
}
```
@@ -1,385 +0,0 @@

# Tester Role

## 1. Role Identity

- **Name**: tester
- **Task Prefix**: TEST-*
- **Output Tag**: `[tester]`
- **Responsibility**: Detect Framework → Run Tests → Fix Cycle → Report Results

## 2. Role Boundaries

### MUST
- Only process TEST-* tasks
- Communicate only with coordinator
- Use detected test framework
- Run affected tests before full suite
- Tag all outputs with `[tester]`

### MUST NOT
- Create tasks
- Contact other workers directly
- Modify production code beyond test fixes
- Skip framework detection
- Run full suite without affected tests first
## 3. Message Types

| Type | Direction | Purpose | Format |
|------|-----------|---------|--------|
| `task_request` | FROM coordinator | Receive TEST-* task assignment | `{ type: "task_request", task_id, description, impl_task_id }` |
| `task_complete` | TO coordinator | Report test success | `{ type: "task_complete", task_id, status: "success", pass_rate, tests_run, iterations }` |
| `task_failed` | TO coordinator | Report test failure | `{ type: "task_failed", task_id, error, failures, pass_rate }` |
| `progress_update` | TO coordinator | Report fix cycle progress | `{ type: "progress_update", task_id, iteration, pass_rate, strategy }` |
## 4. Message Bus

**Primary**: Use `team_msg` for all coordinator communication with the `[tester]` tag:
```javascript
team_msg({
  to: "coordinator",
  type: "task_complete",
  task_id: "TEST-001",
  status: "success",
  pass_rate: 98.5,
  tests_run: 45,
  iterations: 3,
  framework: "vitest"
}, "[tester]")
```

**CLI Fallback**: When the message bus is unavailable, write to `.workflow/.team/messages/tester-{timestamp}.json`
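A minimal sketch of that fallback, using the filename convention above. The helper name is illustrative; the writer function is injected so the sketch works with the skill's `Write` tool or any other file writer:

```javascript
// Illustrative fallback: persist the message as a JSON file when the
// message bus is unavailable, following the naming convention above.
function sendViaCliFallback(message, writeFile, now = Date.now()) {
  const path = `.workflow/.team/messages/tester-${now}.json`
  // Tag the payload so the coordinator can attribute it to this role
  writeFile(path, JSON.stringify({ ...message, tag: "[tester]" }, null, 2))
  return path
}
```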
## 5. Toolbox

### Available Commands
- `commands/validate.md` - Test-fix cycle with strategy engine

### CLI Capabilities
- None (uses project's test framework directly via Bash)

## 6. Execution (5-Phase)

### Phase 1: Task Discovery

**Task Loading**:
```javascript
// Glob returns file paths; parse each task file before filtering
const tasks = Glob(".workflow/.team/tasks/TEST-*.json")
  .map(path => JSON.parse(Read(path)))
  .filter(task => task.status === "pending" && task.assigned_to === "tester")
```

**Implementation Task Linking**:
```javascript
const implTaskId = task.metadata?.impl_task_id
const implTask = implTaskId ? JSON.parse(Read(`.workflow/.team/tasks/${implTaskId}.json`)) : null
const modifiedFiles = implTask?.metadata?.files_modified || []
```
### Phase 2: Test Framework Detection

**Framework Detection**:
```javascript
function detectTestFramework() {
  // Check package.json for JS test frameworks
  const packageJson = Read("package.json")
  const pkg = JSON.parse(packageJson)

  // Priority 1: Check dependencies
  if (pkg.devDependencies?.vitest || pkg.dependencies?.vitest) {
    return "vitest"
  }
  if (pkg.devDependencies?.jest || pkg.dependencies?.jest) {
    return "jest"
  }
  if (pkg.devDependencies?.mocha || pkg.dependencies?.mocha) {
    return "mocha"
  }

  // Priority 2: Check test scripts
  const testScript = pkg.scripts?.test || ""
  if (testScript.includes("vitest")) return "vitest"
  if (testScript.includes("jest")) return "jest"
  if (testScript.includes("mocha")) return "mocha"
  if (testScript.includes("pytest")) return "pytest"

  // Priority 3: Check config files
  const configFiles = Glob("{vitest,jest,mocha}.config.{js,ts,json}")
  if (configFiles.some(f => f.includes("vitest"))) return "vitest"
  if (configFiles.some(f => f.includes("jest"))) return "jest"
  if (configFiles.some(f => f.includes("mocha"))) return "mocha"

  // pytest is a Python tool, so it never appears in package.json
  // dependencies; detect it from its config file instead
  if (Bash("test -f pytest.ini").exitCode === 0) return "pytest"

  return "unknown"
}
```
**Affected Test Discovery**:
```javascript
function findAffectedTests(modifiedFiles) {
  const testFiles = []

  for (const file of modifiedFiles) {
    const baseName = file.replace(/\.(ts|js|tsx|jsx|py)$/, "")
    const dir = file.substring(0, file.lastIndexOf("/"))

    const testVariants = [
      // Same directory variants
      `${baseName}.test.ts`,
      `${baseName}.test.js`,
      `${baseName}.spec.ts`,
      `${baseName}.spec.js`,
      `${baseName}_test.py`,
      `test_${baseName.split("/").pop()}.py`,

      // Test directory variants
      `${file.replace(/^src\//, "tests/")}`,
      `${file.replace(/^src\//, "__tests__/")}`,
      `${file.replace(/^src\//, "test/")}`,
      `${dir}/__tests__/${file.split("/").pop().replace(/\.(ts|js|tsx|jsx)$/, ".test.ts")}`,

      // Python variants
      `${file.replace(/^src\//, "tests/").replace(/\.py$/, "_test.py")}`,
      `${file.replace(/^src\//, "tests/test_")}`
    ]

    for (const variant of testVariants) {
      if (Bash(`test -f ${variant}`).exitCode === 0) {
        testFiles.push(variant)
      }
    }
  }

  return [...new Set(testFiles)] // Deduplicate
}
```
### Phase 3: Test Execution & Fix Cycle

**Delegate to Command**:
```javascript
const validateCommand = Read("commands/validate.md")
// Command handles:
// - MAX_ITERATIONS=10, PASS_RATE_TARGET=95
// - Main iteration loop with strategy selection
// - Quality gate check (affected tests → full suite)
// - applyFixes by strategy (conservative/aggressive/surgical)
// - Progress updates for long cycles (iteration > 5)
```
### Phase 4: Result Analysis

**Test Result Parsing**:
```javascript
function parseTestResults(output, framework) {
  const results = {
    total: 0,
    passed: 0,
    failed: 0,
    skipped: 0,
    failures: []
  }

  if (framework === "jest" || framework === "vitest") {
    // Parse Jest/Vitest output
    const totalMatch = output.match(/Tests:\s+(\d+)\s+total/)
    const passedMatch = output.match(/(\d+)\s+passed/)
    const failedMatch = output.match(/(\d+)\s+failed/)
    const skippedMatch = output.match(/(\d+)\s+skipped/)

    results.total = totalMatch ? parseInt(totalMatch[1]) : 0
    results.passed = passedMatch ? parseInt(passedMatch[1]) : 0
    results.failed = failedMatch ? parseInt(failedMatch[1]) : 0
    results.skipped = skippedMatch ? parseInt(skippedMatch[1]) : 0

    // Extract failure details
    const failureRegex = /●\s+(.*?)\n\n\s+(.*?)(?=\n\n●|\n\nTest Suites:)/gs
    let match
    while ((match = failureRegex.exec(output)) !== null) {
      results.failures.push({
        test: match[1].trim(),
        error: match[2].trim()
      })
    }
  } else if (framework === "pytest") {
    // Parse pytest output
    const summaryMatch = output.match(/=+\s+(\d+)\s+failed,\s+(\d+)\s+passed/)
    if (summaryMatch) {
      results.failed = parseInt(summaryMatch[1])
      results.passed = parseInt(summaryMatch[2])
      results.total = results.failed + results.passed
    }

    // Extract failure details
    const failureRegex = /FAILED\s+(.*?)\s+-\s+(.*?)(?=\n_+|\nFAILED|$)/gs
    let match
    while ((match = failureRegex.exec(output)) !== null) {
      results.failures.push({
        test: match[1].trim(),
        error: match[2].trim()
      })
    }
  }

  return results
}
```
**Failure Classification**:
```javascript
function classifyFailures(failures) {
  const classified = {
    critical: [], // Syntax errors, missing imports
    high: [],     // Assertion failures, logic errors
    medium: [],   // Timeout, flaky tests
    low: []       // Warnings, deprecations
  }

  for (const failure of failures) {
    const error = failure.error.toLowerCase()

    if (error.includes("syntaxerror") ||
        error.includes("cannot find module") ||
        error.includes("is not defined")) {
      classified.critical.push(failure)
    } else if (error.includes("expected") ||
               error.includes("assertion") ||
               // error is lowercased above, so match matcher names in lowercase
               error.includes("tobe") ||
               error.includes("toequal")) {
      classified.high.push(failure)
    } else if (error.includes("timeout") ||
               error.includes("async")) {
      classified.medium.push(failure)
    } else {
      classified.low.push(failure)
    }
  }

  return classified
}
```
### Phase 5: Report to Coordinator

**Success Report**:
```javascript
team_msg({
  to: "coordinator",
  type: "task_complete",
  task_id: task.task_id,
  status: "success",
  pass_rate: (results.passed / results.total * 100).toFixed(1),
  tests_run: results.total,
  tests_passed: results.passed,
  tests_failed: results.failed,
  iterations: iterationCount,
  framework: framework,
  affected_tests: affectedTests.length,
  full_suite_run: fullSuiteRun,
  timestamp: new Date().toISOString()
}, "[tester]")
```

**Failure Report**:
```javascript
const classified = classifyFailures(results.failures)

team_msg({
  to: "coordinator",
  type: "task_failed",
  task_id: task.task_id,
  error: "Test failures exceeded threshold",
  pass_rate: (results.passed / results.total * 100).toFixed(1),
  tests_run: results.total,
  failures: {
    critical: classified.critical.length,
    high: classified.high.length,
    medium: classified.medium.length,
    low: classified.low.length
  },
  failure_details: classified,
  iterations: iterationCount,
  framework: framework,
  timestamp: new Date().toISOString()
}, "[tester]")
```
## 7. Strategy Engine

### Strategy Selection

```javascript
function selectStrategy(iteration, passRate, failures) {
  const classified = classifyFailures(failures)

  // Conservative: early iterations or high pass rate
  if (iteration <= 3 || passRate >= 80) {
    return "conservative"
  }

  // Surgical: a small number of critical failures
  if (classified.critical.length > 0 && classified.critical.length < 5) {
    return "surgical"
  }

  // Aggressive: low pass rate or many iterations
  if (passRate < 50 || iteration > 7) {
    return "aggressive"
  }

  return "conservative"
}
```
### Fix Application

```javascript
function applyFixes(strategy, failures, framework) {
  if (strategy === "conservative") {
    // Fix only critical failures, one at a time
    const critical = classifyFailures(failures).critical
    if (critical.length > 0) {
      return fixFailure(critical[0], framework)
    }
  } else if (strategy === "surgical") {
    // Fix a specific pattern across all occurrences
    const pattern = identifyCommonPattern(failures)
    return fixPattern(pattern, framework)
  } else if (strategy === "aggressive") {
    // Fix all failures in batch
    return fixAllFailures(failures, framework)
  }

  // Fall through: nothing applicable to fix for this strategy
  return { success: false, error: "No applicable fixes for strategy" }
}
```
## 8. Error Handling

| Error Type | Recovery Strategy | Escalation |
|------------|-------------------|------------|
| Framework not detected | Prompt user for framework | Immediate escalation |
| No tests found | Report to coordinator | Manual intervention |
| Test command fails | Retry with verbose output | Report after 2 failures |
| Infinite fix loop | Abort after MAX_ITERATIONS | Report iteration history |
| Pass rate below target | Report best attempt | Include failure classification |

## 9. Configuration

| Parameter | Default | Description |
|-----------|---------|-------------|
| MAX_ITERATIONS | 10 | Maximum fix-test cycles |
| PASS_RATE_TARGET | 95 | Target pass rate (%) |
| AFFECTED_TESTS_FIRST | true | Run affected tests before full suite |
| PARALLEL_TESTS | true | Enable parallel test execution |
| TIMEOUT_PER_TEST | 30000 | Timeout per test (ms) |
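For reference, the defaults above expressed as a single object. This is an illustrative sketch only; the validate command hardcodes the first two values as standalone constants:

```javascript
// Illustrative defaults mirroring the configuration table above.
const TESTER_DEFAULTS = {
  MAX_ITERATIONS: 10,         // maximum fix-test cycles
  PASS_RATE_TARGET: 95,       // target pass rate (%)
  AFFECTED_TESTS_FIRST: true, // run affected tests before the full suite
  PARALLEL_TESTS: true,       // enable parallel test execution
  TIMEOUT_PER_TEST: 30000     // timeout per test (ms)
}
```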
## 10. Test Framework Commands

| Framework | Affected Tests Command | Full Suite Command |
|-----------|------------------------|-------------------|
| vitest | `vitest run ${files.join(" ")}` | `vitest run` |
| jest | `jest ${files.join(" ")} --no-coverage` | `jest --no-coverage` |
| mocha | `mocha ${files.join(" ")}` | `mocha` |
| pytest | `pytest ${files.join(" ")} -v` | `pytest -v` |
|
||||
@@ -1,698 +0,0 @@
|
||||
# Command: Generate Document

Multi-CLI document generation for 4 document types: Product Brief, Requirements/PRD, Architecture, Epics & Stories.

## Pre-Steps (All Document Types)

```javascript
// 1. Load document standards
const docStandards = Read('../../specs/document-standards.md')

// 2. Load appropriate template
const templateMap = {
  'product-brief': '../../templates/product-brief.md',
  'requirements': '../../templates/requirements-prd.md',
  'architecture': '../../templates/architecture-doc.md',
  'epics': '../../templates/epics-template.md'
}
const template = Read(templateMap[docType])

// 3. Build shared context
const seedAnalysis = specConfig?.seed_analysis ||
  (priorDocs.discoveryContext ? JSON.parse(priorDocs.discoveryContext).seed_analysis : {})

const sharedContext = `
SEED: ${specConfig?.topic || ''}
PROBLEM: ${seedAnalysis.problem_statement || ''}
TARGET USERS: ${(seedAnalysis.target_users || []).join(', ')}
DOMAIN: ${seedAnalysis.domain || ''}
CONSTRAINTS: ${(seedAnalysis.constraints || []).join(', ')}
FOCUS AREAS: ${(specConfig?.focus_areas || []).join(', ')}
${priorDocs.discoveryContext ? `
CODEBASE CONTEXT:
- Existing patterns: ${JSON.parse(priorDocs.discoveryContext).existing_patterns?.slice(0,5).join(', ') || 'none'}
- Tech stack: ${JSON.stringify(JSON.parse(priorDocs.discoveryContext).tech_stack || {})}
` : ''}`

// 4. Route to specific document type
```
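
Step 4 can be sketched as a dispatch table; the stub handlers here are placeholders standing in for the DRAFT-001 through DRAFT-004 branches of this command:

```javascript
// Illustrative dispatch table; real generation happens in the DRAFT-001..004 branches.
const DOC_GENERATORS = {
  'product-brief': () => 'DRAFT-001',
  'requirements':  () => 'DRAFT-002',
  'architecture':  () => 'DRAFT-003',
  'epics':         () => 'DRAFT-004',
}

function routeDocType(docType) {
  const generate = DOC_GENERATORS[docType]
  if (!generate) throw new Error(`Unknown document type: ${docType}`)
  return generate()
}
```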

## DRAFT-001: Product Brief

3-way parallel CLI analysis (product/technical/user perspectives), then synthesize.

```javascript
if (docType === 'product-brief') {
  // === Parallel CLI Analysis ===

  // Product Perspective (Gemini)
  Bash({
    command: `ccw cli -p "PURPOSE: Product analysis for specification - identify market fit, user value, and success criteria.
Success: Clear vision, measurable goals, competitive positioning.

${sharedContext}

TASK:
- Define product vision (1-3 sentences, aspirational)
- Analyze market/competitive landscape
- Define 3-5 measurable success metrics
- Identify scope boundaries (in-scope vs out-of-scope)
- Assess user value proposition
- List assumptions that need validation

MODE: analysis
EXPECTED: Structured product analysis with: vision, goals with metrics, scope, competitive positioning, assumptions
CONSTRAINTS: Focus on 'what' and 'why', not 'how'
" --tool gemini --mode analysis`,
    run_in_background: true
  })

  // Technical Perspective (Codex)
  Bash({
    command: `ccw cli -p "PURPOSE: Technical feasibility analysis for specification - assess implementation viability and constraints.
Success: Clear technical constraints, integration complexity, technology recommendations.

${sharedContext}

TASK:
- Assess technical feasibility of the core concept
- Identify technical constraints and blockers
- Evaluate integration complexity with existing systems
- Recommend technology approach (high-level)
- Identify technical risks and dependencies
- Estimate complexity: simple/moderate/complex

MODE: analysis
EXPECTED: Technical analysis with: feasibility assessment, constraints, integration complexity, tech recommendations, risks
CONSTRAINTS: Focus on feasibility and constraints, not detailed architecture
" --tool codex --mode analysis`,
    run_in_background: true
  })

  // User Perspective (Claude)
  Bash({
    command: `ccw cli -p "PURPOSE: User experience analysis for specification - understand user journeys, pain points, and UX considerations.
Success: Clear user personas, journey maps, UX requirements.

${sharedContext}

TASK:
- Elaborate user personas with goals and frustrations
- Map primary user journey (happy path)
- Identify key pain points in current experience
- Define UX success criteria
- List accessibility and usability considerations
- Suggest interaction patterns

MODE: analysis
EXPECTED: User analysis with: personas, journey map, pain points, UX criteria, interaction recommendations
CONSTRAINTS: Focus on user needs and experience, not implementation
" --tool claude --mode analysis`,
    run_in_background: true
  })

  // STOP: Wait for all 3 CLI results

  // === Synthesize Three Perspectives ===
  const synthesis = {
    convergent_themes: [],  // Themes consistent across all three perspectives
    conflicts: [],          // Conflicting viewpoints
    product_insights: [],   // Unique product perspective insights
    technical_insights: [], // Unique technical perspective insights
    user_insights: []       // Unique user perspective insights
  }

  // Parse CLI outputs and identify:
  // - Common themes mentioned by 2+ perspectives
  // - Conflicts (e.g., product wants feature X, technical says infeasible)
  // - Unique insights from each perspective

  // === Integrate Discussion Feedback ===
  if (discussionFeedback) {
    // Extract consensus and adjustments from discuss-001-scope.md
    // Merge discussion conclusions into synthesis
  }

  // === Generate Document from Template ===
  const frontmatter = `---
session_id: ${specConfig?.session_id || 'unknown'}
phase: 2
document_type: product-brief
status: draft
generated_at: ${new Date().toISOString()}
version: 1
dependencies:
  - spec-config.json
  - discovery-context.json
---`

  // Fill template sections:
  // - Vision (from product perspective + synthesis)
  // - Problem Statement (from seed analysis + user perspective)
  // - Target Users (from user perspective + personas)
  // - Goals (from product perspective + metrics)
  // - Scope (from product perspective + technical constraints)
  // - Success Criteria (from all three perspectives)
  // - Assumptions (from product + technical perspectives)

  const filledContent = fillTemplate(template, {
    vision: productPerspective.vision,
    problem: seedAnalysis.problem_statement,
    users: userPerspective.personas,
    goals: productPerspective.goals,
    scope: synthesis.scope_boundaries,
    success_criteria: synthesis.convergent_themes,
    assumptions: [...productPerspective.assumptions, ...technicalPerspective.assumptions]
  })

  Write(`${sessionFolder}/spec/product-brief.md`, `${frontmatter}\n\n${filledContent}`)

  return {
    outputPath: 'spec/product-brief.md',
    documentSummary: `Product Brief generated with ${synthesis.convergent_themes.length} convergent themes, ${synthesis.conflicts.length} conflicts resolved`
  }
}
```
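
The "themes mentioned by 2+ perspectives" rule can be made concrete; a minimal sketch, assuming each perspective's themes arrive as an array of strings (the `findConvergentThemes` helper is illustrative, not part of the skill):

```javascript
// Count how many perspectives mention each theme; 2+ mentions → convergent.
function findConvergentThemes(perspectives) {
  const counts = new Map()
  for (const themes of perspectives) {
    for (const theme of new Set(themes)) { // de-duplicate within one perspective
      counts.set(theme, (counts.get(theme) || 0) + 1)
    }
  }
  return [...counts.entries()]
    .filter(([, mentions]) => mentions >= 2)
    .map(([theme]) => theme)
}
```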

## DRAFT-002: Requirements/PRD

Gemini CLI expansion to generate REQ-NNN and NFR-{type}-NNN files.

```javascript
if (docType === 'requirements') {
  // === Requirements Expansion CLI ===
  Bash({
    command: `ccw cli -p "PURPOSE: Generate detailed functional and non-functional requirements from product brief.
Success: Complete PRD with testable acceptance criteria for every requirement.

PRODUCT BRIEF CONTEXT:
${priorDocs.productBrief?.slice(0, 3000) || ''}

${sharedContext}

TASK:
- For each goal in the product brief, generate 3-7 functional requirements
- Each requirement must have:
  - Unique ID: REQ-NNN (zero-padded)
  - Clear title
  - Detailed description
  - User story: As a [persona], I want [action] so that [benefit]
  - 2-4 specific, testable acceptance criteria
- Generate non-functional requirements:
  - Performance (response times, throughput)
  - Security (authentication, authorization, data protection)
  - Scalability (user load, data volume)
  - Usability (accessibility, learnability)
- Assign MoSCoW priority: Must/Should/Could/Won't
- Output structure per requirement: ID, title, description, user_story, acceptance_criteria[], priority, traces

MODE: analysis
EXPECTED: Structured requirements with: ID, title, description, user story, acceptance criteria, priority, traceability to goals
CONSTRAINTS: Every requirement must be specific enough to estimate and test. No vague requirements.
" --tool gemini --mode analysis`,
    run_in_background: true
  })

  // Wait for CLI result

  // === Integrate Discussion Feedback ===
  if (discussionFeedback) {
    // Extract requirement adjustments from discuss-002-brief.md
    // Merge new/modified/deleted requirements
  }

  // === Generate requirements/ Directory ===
  Bash(`mkdir -p "${sessionFolder}/spec/requirements"`)

  const timestamp = new Date().toISOString()

  // Parse CLI output → funcReqs[], nfReqs[]
  const funcReqs = parseFunctionalRequirements(cliOutput)
  const nfReqs = parseNonFunctionalRequirements(cliOutput)

  // Write individual REQ-*.md files (one per functional requirement)
  funcReqs.forEach(req => {
    const reqFrontmatter = `---
id: REQ-${req.id}
title: "${req.title}"
priority: ${req.priority}
status: draft
traces:
  - product-brief.md
---`
    const reqContent = `${reqFrontmatter}

# REQ-${req.id}: ${req.title}

## Description
${req.description}

## User Story
${req.user_story}

## Acceptance Criteria
${req.acceptance_criteria.map((ac, i) => `${i+1}. ${ac}`).join('\n')}
`
    Write(`${sessionFolder}/spec/requirements/REQ-${req.id}-${req.slug}.md`, reqContent)
  })

  // Write individual NFR-*.md files
  nfReqs.forEach(nfr => {
    const nfrFrontmatter = `---
id: NFR-${nfr.type}-${nfr.id}
type: ${nfr.type}
title: "${nfr.title}"
status: draft
traces:
  - product-brief.md
---`
    const nfrContent = `${nfrFrontmatter}

# NFR-${nfr.type}-${nfr.id}: ${nfr.title}

## Requirement
${nfr.requirement}

## Metric & Target
${nfr.metric} — Target: ${nfr.target}
`
    Write(`${sessionFolder}/spec/requirements/NFR-${nfr.type}-${nfr.id}-${nfr.slug}.md`, nfrContent)
  })

  // Write _index.md (summary + links)
  const indexFrontmatter = `---
session_id: ${specConfig?.session_id || 'unknown'}
phase: 3
document_type: requirements-index
status: draft
generated_at: ${timestamp}
version: 1
dependencies:
  - product-brief.md
---`
  const indexContent = `${indexFrontmatter}

# Requirements (PRD)

## Summary
Total: ${funcReqs.length} functional + ${nfReqs.length} non-functional requirements

## Functional Requirements
| ID | Title | Priority | Status |
|----|-------|----------|--------|
${funcReqs.map(r => `| [REQ-${r.id}](REQ-${r.id}-${r.slug}.md) | ${r.title} | ${r.priority} | draft |`).join('\n')}

## Non-Functional Requirements
| ID | Type | Title |
|----|------|-------|
${nfReqs.map(n => `| [NFR-${n.type}-${n.id}](NFR-${n.type}-${n.id}-${n.slug}.md) | ${n.type} | ${n.title} |`).join('\n')}

## MoSCoW Summary
- **Must**: ${funcReqs.filter(r => r.priority === 'Must').length}
- **Should**: ${funcReqs.filter(r => r.priority === 'Should').length}
- **Could**: ${funcReqs.filter(r => r.priority === 'Could').length}
- **Won't**: ${funcReqs.filter(r => r.priority === "Won't").length}
`
  Write(`${sessionFolder}/spec/requirements/_index.md`, indexContent)

  return {
    outputPath: 'spec/requirements/_index.md',
    documentSummary: `Requirements generated: ${funcReqs.length} functional, ${nfReqs.length} non-functional`
  }
}
```
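
The zero-padded IDs and filename slugs used above can be produced with two small helpers; a sketch (`makeReqId` and `slugify` are illustrative names, the real parsers may differ):

```javascript
// Zero-padded functional requirement ID, e.g. 7 → "REQ-007".
function makeReqId(n) {
  return `REQ-${String(n).padStart(3, '0')}`
}

// Filename-safe slug from a requirement title.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, '')     // trim leading/trailing hyphens
}
```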

## DRAFT-003: Architecture

Two-stage CLI: Gemini architecture design + Codex architecture review.

```javascript
if (docType === 'architecture') {
  // === Stage 1: Architecture Design (Gemini) ===
  Bash({
    command: `ccw cli -p "PURPOSE: Generate technical architecture for the specified requirements.
Success: Complete component architecture, tech stack, and ADRs with justified decisions.

PRODUCT BRIEF (summary):
${priorDocs.productBrief?.slice(0, 3000) || ''}

REQUIREMENTS:
${priorDocs.requirementsIndex?.slice(0, 5000) || ''}

${sharedContext}

TASK:
- Define system architecture style (monolith, microservices, serverless, etc.) with justification
- Identify core components and their responsibilities
- Create component interaction diagram (Mermaid graph TD format)
- Specify technology stack: languages, frameworks, databases, infrastructure
- Generate 2-4 Architecture Decision Records (ADRs):
  - Each ADR: context, decision, 2-3 alternatives with pros/cons, consequences
  - Focus on: data storage, API design, authentication, key technical choices
- Define data model: key entities and relationships (Mermaid erDiagram format)
- Identify security architecture: auth, authorization, data protection
- List API endpoints (high-level)

MODE: analysis
EXPECTED: Complete architecture with: style justification, component diagram, tech stack table, ADRs, data model, security controls, API overview
CONSTRAINTS: Architecture must support all Must-have requirements. Prefer proven technologies.
" --tool gemini --mode analysis`,
    run_in_background: true
  })

  // Wait for Gemini result

  // === Stage 2: Architecture Review (Codex) ===
  Bash({
    command: `ccw cli -p "PURPOSE: Critical review of proposed architecture - identify weaknesses and risks.
Success: Actionable feedback with specific concerns and improvement suggestions.

PROPOSED ARCHITECTURE:
${geminiArchitectureOutput.slice(0, 5000)}

REQUIREMENTS CONTEXT:
${priorDocs.requirementsIndex?.slice(0, 2000) || ''}

TASK:
- Challenge each ADR: are the alternatives truly the best options?
- Identify scalability bottlenecks in the component design
- Assess security gaps: authentication, authorization, data protection
- Evaluate technology choices: maturity, community support, fit
- Check for over-engineering or under-engineering
- Verify architecture covers all Must-have requirements
- Rate overall architecture quality: 1-5 with justification

MODE: analysis
EXPECTED: Architecture review with: per-ADR feedback, scalability concerns, security gaps, technology risks, quality rating
CONSTRAINTS: Be genuinely critical, not just validating. Focus on actionable improvements.
" --tool codex --mode analysis`,
    run_in_background: true
  })

  // Wait for Codex result

  // === Integrate Discussion Feedback ===
  if (discussionFeedback) {
    // Extract architecture feedback from discuss-003-requirements.md
    // Merge into architecture design
  }

  // === Codebase Integration Mapping (conditional) ===
  let integrationMapping = null
  if (priorDocs.discoveryContext) {
    const dc = JSON.parse(priorDocs.discoveryContext)
    if (dc.relevant_files) {
      integrationMapping = dc.relevant_files.map(f => ({
        new_component: '...',
        existing_module: f.path,
        integration_type: 'Extend|Replace|New',
        notes: f.rationale
      }))
    }
  }

  // === Generate architecture/ Directory ===
  Bash(`mkdir -p "${sessionFolder}/spec/architecture"`)

  const timestamp = new Date().toISOString()
  const adrs = parseADRs(geminiArchitectureOutput, codexReviewOutput)

  // Write individual ADR-*.md files
  adrs.forEach(adr => {
    const adrFrontmatter = `---
id: ADR-${adr.id}
title: "${adr.title}"
status: draft
traces:
  - ../requirements/_index.md
---`
    const adrContent = `${adrFrontmatter}

# ADR-${adr.id}: ${adr.title}

## Context
${adr.context}

## Decision
${adr.decision}

## Alternatives
${adr.alternatives.map((alt, i) => `### Option ${i+1}: ${alt.name}\n- **Pros**: ${alt.pros.join(', ')}\n- **Cons**: ${alt.cons.join(', ')}`).join('\n\n')}

## Consequences
${adr.consequences}

## Review Feedback
${adr.reviewFeedback || 'N/A'}
`
    Write(`${sessionFolder}/spec/architecture/ADR-${adr.id}-${adr.slug}.md`, adrContent)
  })

  // Write _index.md (with Mermaid component diagram + ER diagram + links)
  const archIndexFrontmatter = `---
session_id: ${specConfig?.session_id || 'unknown'}
phase: 4
document_type: architecture-index
status: draft
generated_at: ${timestamp}
version: 1
dependencies:
  - ../product-brief.md
  - ../requirements/_index.md
---`

  const archIndexContent = `${archIndexFrontmatter}

# Architecture Document

## System Overview
${geminiArchitectureOutput.system_overview}

## Component Diagram
\`\`\`mermaid
${geminiArchitectureOutput.component_diagram}
\`\`\`

## Technology Stack
${geminiArchitectureOutput.tech_stack_table}

## Architecture Decision Records
| ID | Title | Status |
|----|-------|--------|
${adrs.map(a => `| [ADR-${a.id}](ADR-${a.id}-${a.slug}.md) | ${a.title} | draft |`).join('\n')}

## Data Model
\`\`\`mermaid
${geminiArchitectureOutput.data_model_diagram}
\`\`\`

## API Design
${geminiArchitectureOutput.api_overview}

## Security Controls
${geminiArchitectureOutput.security_controls}

## Review Summary
${codexReviewOutput.summary}
Quality Rating: ${codexReviewOutput.quality_rating}/5
`

  Write(`${sessionFolder}/spec/architecture/_index.md`, archIndexContent)

  return {
    outputPath: 'spec/architecture/_index.md',
    documentSummary: `Architecture generated with ${adrs.length} ADRs, quality rating ${codexReviewOutput.quality_rating}/5`
  }
}
```

## DRAFT-004: Epics & Stories

Gemini CLI decomposition to generate EPIC-*.md files.

```javascript
if (docType === 'epics') {
  // === Epic Decomposition CLI ===
  Bash({
    command: `ccw cli -p "PURPOSE: Decompose requirements into executable Epics and Stories for implementation planning.
Success: 3-7 Epics with prioritized Stories, dependency map, and MVP subset clearly defined.

PRODUCT BRIEF (summary):
${priorDocs.productBrief?.slice(0, 2000) || ''}

REQUIREMENTS:
${priorDocs.requirementsIndex?.slice(0, 5000) || ''}

ARCHITECTURE (summary):
${priorDocs.architectureIndex?.slice(0, 3000) || ''}

TASK:
- Group requirements into 3-7 logical Epics:
  - Each Epic: EPIC-NNN ID, title, description, priority (Must/Should/Could)
  - Group by functional domain or user journey stage
  - Tag MVP Epics (minimum set for initial release)
- For each Epic, generate 2-5 Stories:
  - Each Story: STORY-{EPIC}-NNN ID, title
  - User story format: As a [persona], I want [action] so that [benefit]
  - 2-4 acceptance criteria per story (testable)
  - Relative size estimate: S/M/L/XL
  - Trace to source requirement(s): REQ-NNN
- Create dependency map:
  - Cross-Epic dependencies (which Epics block others)
  - Mermaid graph LR format
  - Recommended execution order with rationale
- Define MVP:
  - Which Epics are in MVP
  - MVP definition of done (3-5 criteria)
  - What is explicitly deferred post-MVP

MODE: analysis
EXPECTED: Structured output with: Epic list (ID, title, priority, MVP flag), Stories per Epic (ID, user story, AC, size, trace), dependency Mermaid diagram, execution order, MVP definition
CONSTRAINTS: Every Must-have requirement must appear in at least one Story. Stories must be small enough to implement independently. Dependencies should be minimized across Epics.
" --tool gemini --mode analysis`,
    run_in_background: true
  })

  // Wait for CLI result

  // === Integrate Discussion Feedback ===
  if (discussionFeedback) {
    // Extract execution feedback from discuss-004-architecture.md
    // Adjust Epic granularity, MVP scope
  }

  // === Generate epics/ Directory ===
  Bash(`mkdir -p "${sessionFolder}/spec/epics"`)

  const timestamp = new Date().toISOString()
  const epicsList = parseEpics(cliOutput)

  // Write individual EPIC-*.md files (with stories)
  epicsList.forEach(epic => {
    const epicFrontmatter = `---
id: EPIC-${epic.id}
title: "${epic.title}"
priority: ${epic.priority}
mvp: ${epic.mvp}
size: ${epic.size}
requirements:
${epic.reqs.map(r => ` - ${r}`).join('\n')}
architecture:
${epic.adrs.map(a => ` - ${a}`).join('\n')}
dependencies:
${epic.deps.map(d => ` - ${d}`).join('\n')}
status: draft
---`
    const storiesContent = epic.stories.map(s => `### ${s.id}: ${s.title}

**User Story**: ${s.user_story}
**Size**: ${s.size}
**Traces**: ${s.traces.join(', ')}

**Acceptance Criteria**:
${s.acceptance_criteria.map((ac, i) => `${i+1}. ${ac}`).join('\n')}
`).join('\n')

    const epicContent = `${epicFrontmatter}

# EPIC-${epic.id}: ${epic.title}

## Description
${epic.description}

## Stories
${storiesContent}

## Requirements
${epic.reqs.map(r => `- [${r}](../requirements/${r}.md)`).join('\n')}

## Architecture
${epic.adrs.map(a => `- [${a}](../architecture/${a}.md)`).join('\n')}
`
    Write(`${sessionFolder}/spec/epics/EPIC-${epic.id}-${epic.slug}.md`, epicContent)
  })

  // Write _index.md (with Mermaid dependency diagram + MVP + links)
  const epicsIndexFrontmatter = `---
session_id: ${specConfig?.session_id || 'unknown'}
phase: 5
document_type: epics-index
status: draft
generated_at: ${timestamp}
version: 1
dependencies:
  - ../requirements/_index.md
  - ../architecture/_index.md
---`

  const epicsIndexContent = `${epicsIndexFrontmatter}

# Epics & Stories

## Epic Overview
| ID | Title | Priority | MVP | Size | Status |
|----|-------|----------|-----|------|--------|
${epicsList.map(e => `| [EPIC-${e.id}](EPIC-${e.id}-${e.slug}.md) | ${e.title} | ${e.priority} | ${e.mvp ? '✓' : ''} | ${e.size} | draft |`).join('\n')}

## Dependency Map
\`\`\`mermaid
${cliOutput.dependency_diagram}
\`\`\`

## Execution Order
${cliOutput.execution_order}

## MVP Scope
${cliOutput.mvp_definition}

### MVP Epics
${epicsList.filter(e => e.mvp).map(e => `- EPIC-${e.id}: ${e.title}`).join('\n')}

### Post-MVP
${epicsList.filter(e => !e.mvp).map(e => `- EPIC-${e.id}: ${e.title}`).join('\n')}

## Traceability Matrix
${generateTraceabilityMatrix(epicsList, funcReqs)}
`

  Write(`${sessionFolder}/spec/epics/_index.md`, epicsIndexContent)

  return {
    outputPath: 'spec/epics/_index.md',
    documentSummary: `Epics generated: ${epicsList.length} total, ${epicsList.filter(e => e.mvp).length} in MVP`
  }
}
```
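
The constraint that every Must-have requirement appears in at least one Story can be checked mechanically; a sketch assuming the parsed shapes described under Helper Functions (`findUncoveredMustReqs` is an illustrative name):

```javascript
// Return Must-priority requirement IDs that no story traces back to.
function findUncoveredMustReqs(epicsList, funcReqs) {
  const traced = new Set()
  for (const epic of epicsList) {
    for (const story of epic.stories) {
      for (const reqId of story.traces) traced.add(reqId)
    }
  }
  return funcReqs
    .filter(req => req.priority === 'Must' && !traced.has(`REQ-${req.id}`))
    .map(req => `REQ-${req.id}`)
}
```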

## Helper Functions

```javascript
function parseFunctionalRequirements(cliOutput) {
  // Parse CLI JSON output to extract functional requirements
  // Returns: [{ id, title, description, user_story, acceptance_criteria[], priority, slug }]
}

function parseNonFunctionalRequirements(cliOutput) {
  // Parse CLI JSON output to extract non-functional requirements
  // Returns: [{ id, type, title, requirement, metric, target, slug }]
}

function parseADRs(geminiOutput, codexOutput) {
  // Parse architecture outputs to extract ADRs with review feedback
  // Returns: [{ id, title, context, decision, alternatives[], consequences, reviewFeedback, slug }]
}

function parseEpics(cliOutput) {
  // Parse CLI JSON output to extract Epics and Stories
  // Returns: [{ id, title, description, priority, mvp, size, stories[], reqs[], adrs[], deps[], slug }]
}

function fillTemplate(template, data) {
  // Fill template placeholders with data
  // Apply document-standards.md formatting rules
}

function generateTraceabilityMatrix(epics, requirements) {
  // Generate traceability matrix showing Epic → Requirement mappings
}
```
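
`fillTemplate` above is a stub; a minimal substitution sketch, assuming `{{key}}` placeholders (the real helper also applies document-standards.md formatting rules):

```javascript
// Replace {{key}} tokens with values from data; unknown keys are left intact.
function fillTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in data ? String(data[key]) : match
  )
}
```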
# Role: writer

Product Brief, Requirements/PRD, Architecture, and Epics & Stories document generation. Maps to spec-generator Phases 2-5.

## Role Identity

- **Name**: `writer`
- **Task Prefix**: `DRAFT-*`
- **Output Tag**: `[writer]`
- **Responsibility**: Load Context → Generate Document → Incorporate Feedback → Report
- **Communication**: SendMessage to coordinator only

## Role Boundaries

### MUST
- Only process DRAFT-* tasks
- Read templates before generating documents
- Follow document-standards.md formatting rules
- Integrate discussion feedback when available
- Generate proper frontmatter for all documents

### MUST NOT
- Create tasks for other roles
- Contact other workers directly
- Skip template loading
- Modify discussion records
- Generate documents without loading prior dependencies

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `draft_ready` | writer → coordinator | Document writing complete | With document path and type |
| `draft_revision` | writer → coordinator | Document revised and resubmitted | Describes changes made |
| `impl_progress` | writer → coordinator | Long writing progress | Multi-document stage progress |
| `error` | writer → coordinator | Unrecoverable error | Template missing, insufficient context, etc. |

## Message Bus

Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log:

```javascript
// Document ready
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "writer",
  to: "coordinator",
  type: "draft_ready",
  summary: "[writer] Product Brief complete",
  ref: `${sessionFolder}/product-brief.md`
})

// Document revision
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "writer",
  to: "coordinator",
  type: "draft_revision",
  summary: "[writer] Requirements revised per discussion feedback"
})

// Error report
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "writer",
  to: "coordinator",
  type: "error",
  summary: "[writer] Input artifact missing, cannot generate document"
})
```

### CLI Fallback

When `mcp__ccw-tools__team_msg` MCP is unavailable:

```bash
ccw team log --team "${teamName}" --from "writer" --to "coordinator" --type "draft_ready" --summary "[writer] Brief complete" --ref "${sessionFolder}/product-brief.md" --json
```

## Toolbox

### Available Commands
- `commands/generate-doc.md` - Multi-CLI document generation for 4 doc types

### Subagent Capabilities
- None

### CLI Capabilities
- `gemini`, `codex`, `claude` for multi-perspective analysis

## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('DRAFT-') &&
  t.owner === 'writer' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```

### Phase 2: Context & Discussion Loading

```javascript
// Extract session folder from task description
const sessionMatch = task.description.match(/Session:\s*(.+)/)
const sessionFolder = sessionMatch ? sessionMatch[1].trim() : ''

// Load session config
let specConfig = null
try { specConfig = JSON.parse(Read(`${sessionFolder}/spec/spec-config.json`)) } catch {}

// Determine document type from task subject
const docType = task.subject.includes('Product Brief') ? 'product-brief'
  : task.subject.includes('Requirements') || task.subject.includes('PRD') ? 'requirements'
  : task.subject.includes('Architecture') ? 'architecture'
  : task.subject.includes('Epics') ? 'epics'
  : 'unknown'

// Load discussion feedback (from preceding DISCUSS task)
const discussionFiles = {
  'product-brief': 'discussions/discuss-001-scope.md',
  'requirements': 'discussions/discuss-002-brief.md',
  'architecture': 'discussions/discuss-003-requirements.md',
  'epics': 'discussions/discuss-004-architecture.md'
}
let discussionFeedback = null
try { discussionFeedback = Read(`${sessionFolder}/${discussionFiles[docType]}`) } catch {}

// Load prior documents progressively
const priorDocs = {}
if (docType !== 'product-brief') {
  try { priorDocs.discoveryContext = Read(`${sessionFolder}/spec/discovery-context.json`) } catch {}
}
if (['requirements', 'architecture', 'epics'].includes(docType)) {
  try { priorDocs.productBrief = Read(`${sessionFolder}/spec/product-brief.md`) } catch {}
}
if (['architecture', 'epics'].includes(docType)) {
  try { priorDocs.requirementsIndex = Read(`${sessionFolder}/spec/requirements/_index.md`) } catch {}
}
if (docType === 'epics') {
  try { priorDocs.architectureIndex = Read(`${sessionFolder}/spec/architecture/_index.md`) } catch {}
}
```

### Phase 3: Document Generation

**Delegate to command file**:

```javascript
// Load and execute document generation command
const generateDocCommand = Read('commands/generate-doc.md')

// Execute command with context:
// - docType
// - sessionFolder
// - specConfig
// - discussionFeedback
// - priorDocs
// - task

// Command will handle:
// - Loading document standards
// - Loading appropriate template
// - Building shared context
// - Routing to type-specific generation (DRAFT-001/002/003/004)
// - Integrating discussion feedback
// - Writing output files

// Returns: { outputPath, documentSummary }
```
|
||||
|
||||
### Phase 4: Self-Validation

```javascript
const docContent = Read(`${sessionFolder}/${outputPath}`)

// requiredSections: heading list from the template used in Phase 3
const validationChecks = {
  has_frontmatter: /^---\n[\s\S]+?\n---/.test(docContent),
  sections_complete: requiredSections.every(s => docContent.includes(s)),
  cross_references: docContent.includes('session_id'),
  discussion_integrated: !discussionFeedback || docContent.includes('Discussion')
}

const allValid = Object.values(validationChecks).every(v => v)
```

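These checks can be exercised outside the agent runtime. Below is a minimal standalone Node sketch with the `Read` tool call replaced by an in-memory string and the template-driven section check omitted; the function and sample names are illustrative, not part of the skill:

```javascript
// Standalone sketch of the Phase 4 self-validation checks.
// `docContent` is a plain string here; in the role it comes from Read().
function validateDocument(docContent, discussionFeedback) {
  const checks = {
    has_frontmatter: /^---\n[\s\S]+?\n---/.test(docContent),
    cross_references: docContent.includes('session_id'),
    discussion_integrated: !discussionFeedback || docContent.includes('Discussion')
  }
  return { checks, allValid: Object.values(checks).every(Boolean) }
}

const sample = [
  '---',
  'session_id: SPEC-demo-2026-02-11',
  'document_type: product-brief',
  '---',
  '# Demo',
  '## Discussion Notes'
].join('\n')

console.log(validateDocument(sample, 'feedback text').allValid) // → true
```

When there is no discussion feedback yet (first draft), `discussion_integrated` passes by construction.
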
### Phase 5: Report to Coordinator

```javascript
const docTypeLabel = {
  'product-brief': 'Product Brief',
  'requirements': 'Requirements/PRD',
  'architecture': 'Architecture Document',
  'epics': 'Epics & Stories'
}

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "writer", to: "coordinator",
  type: "draft_ready",
  summary: `[writer] ${docTypeLabel[docType]} complete: ${allValid ? 'validation passed' : 'validation partially failed'}`,
  ref: `${sessionFolder}/${outputPath}`
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `[writer] ## Document Writing Result

**Task**: ${task.subject}
**Document Type**: ${docTypeLabel[docType]}
**Validation Status**: ${allValid ? 'PASS' : 'PARTIAL'}

### Document Summary
${documentSummary}

### Discussion Feedback Integration
${discussionFeedback ? 'Prior discussion feedback integrated' : 'First draft'}

### Self-Validation Results
${Object.entries(validationChecks).map(([k, v]) => '- ' + k + ': ' + (v ? 'PASS' : 'FAIL')).join('\n')}

### Output Location
${sessionFolder}/${outputPath}

Document is ready for the discussion round.`,
  summary: `[writer] ${docTypeLabel[docType]} ready`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next DRAFT task → back to Phase 1
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No DRAFT-* tasks available | Idle, wait for coordinator assignment |
| Prior document not found | Notify coordinator, request prerequisite |
| CLI analysis failure | Retry with fallback tool, then direct generation |
| Template sections incomplete | Generate best-effort, note gaps in report |
| Discussion feedback contradicts prior docs | Note conflict in document, flag for next discussion |
| Session folder missing | Notify coordinator, request session path |
| Unexpected error | Log error via team_msg, report to coordinator |

@@ -1,192 +0,0 @@

# Document Standards

Defines format conventions, YAML frontmatter schema, naming rules, and content structure for all spec-generator outputs.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| All Phases | Frontmatter format | YAML Frontmatter Schema |
| All Phases | File naming | Naming Conventions |
| Phase 2-5 | Document structure | Content Structure |
| Phase 6 | Validation reference | All sections |

---

## YAML Frontmatter Schema

Every generated document MUST begin with YAML frontmatter:

```yaml
---
session_id: SPEC-{slug}-{YYYY-MM-DD}
phase: {1-6}
document_type: {product-brief|requirements|architecture|epics|readiness-report|spec-summary}
status: draft|review|complete
generated_at: {ISO8601 timestamp}
stepsCompleted: []
version: 1
dependencies:
  - {list of input documents used}
---
```

### Field Definitions

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `session_id` | string | Yes | Session identifier matching spec-config.json |
| `phase` | number | Yes | Phase number that generated this document (1-6) |
| `document_type` | string | Yes | One of: product-brief, requirements, architecture, epics, readiness-report, spec-summary |
| `status` | enum | Yes | draft (initial), review (user reviewed), complete (finalized) |
| `generated_at` | string | Yes | ISO8601 timestamp of generation |
| `stepsCompleted` | array | Yes | List of step IDs completed during generation |
| `version` | number | Yes | Document version, incremented on re-generation |
| `dependencies` | array | No | List of input files this document depends on |

### Status Transitions

```
draft -> review -> complete
  |                   ^
  +-------------------+  (direct promotion in auto mode)
```

- **draft**: Initial generation, not yet user-reviewed
- **review**: User has reviewed and provided feedback
- **complete**: Finalized, ready for downstream consumption

In auto mode (`-y`), documents are promoted directly from `draft` to `complete`.

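The transition rules above, including the auto-mode shortcut, can be encoded as a small guard. A sketch, with illustrative function names:

```javascript
// Allowed status transitions; draft -> complete is only legal in auto mode (-y).
const TRANSITIONS = {
  draft: ['review'],
  review: ['complete'],
  complete: []
}

function canTransition(from, to, { autoMode = false } = {}) {
  if (from === 'draft' && to === 'complete') return autoMode
  return (TRANSITIONS[from] || []).includes(to)
}

console.log(canTransition('draft', 'review'))                       // → true
console.log(canTransition('draft', 'complete'))                     // → false
console.log(canTransition('draft', 'complete', { autoMode: true })) // → true
```
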
---

## Naming Conventions

### Session ID Format

```
SPEC-{slug}-{YYYY-MM-DD}
```

- **slug**: Lowercase, alphanumeric + Chinese characters, hyphens as separators, max 40 chars
- **date**: UTC+8 date in YYYY-MM-DD format

Examples:
- `SPEC-task-management-system-2026-02-11`
- `SPEC-user-auth-oauth-2026-02-11`

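A slug generator matching this convention can be sketched as follows; function names are illustrative, and the CJK range used to keep Chinese characters is an assumption about the convention's intent:

```javascript
// Lowercase, keep ASCII alphanumerics and CJK, collapse everything else
// into single hyphens, trim, cap at 40 chars.
function toSlug(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-')
    .replace(/^-+|-+$/g, '')
    .slice(0, 40)
    .replace(/-+$/, '')
}

function sessionId(title, date) {
  return `SPEC-${toSlug(title)}-${date}`
}

console.log(sessionId('Task Management System', '2026-02-11'))
// → SPEC-task-management-system-2026-02-11
```
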
### Output Files

| File | Phase | Description |
|------|-------|-------------|
| `spec-config.json` | 1 | Session configuration and state |
| `discovery-context.json` | 1 | Codebase exploration results (optional) |
| `product-brief.md` | 2 | Product brief document |
| `requirements.md` | 3 | PRD document |
| `architecture.md` | 4 | Architecture decisions document |
| `epics.md` | 5 | Epic/Story breakdown document |
| `readiness-report.md` | 6 | Quality validation report |
| `spec-summary.md` | 6 | One-page executive summary |

### Output Directory

```
.workflow/.spec/{session-id}/
```

---

## Content Structure

### Heading Hierarchy

- `#` (H1): Document title only (one per document)
- `##` (H2): Major sections
- `###` (H3): Subsections
- `####` (H4): Detail items (use sparingly)

Maximum depth: 4 levels. Prefer flat structures.

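These hierarchy rules can be checked mechanically. A minimal lint sketch (illustrative; it scans raw lines, so `#` lines inside fenced code blocks would also be counted):

```javascript
// Flag headings deeper than 4 levels or that skip a level.
function lintHeadings(markdown) {
  const issues = []
  let prev = 0
  for (const line of markdown.split('\n')) {
    const m = line.match(/^(#{1,6})\s/)
    if (!m) continue
    const depth = m[1].length
    if (depth > 4) issues.push(`too deep: ${line.trim()}`)
    if (depth > prev + 1) issues.push(`skipped level: ${line.trim()}`)
    prev = depth
  }
  return issues
}

console.log(lintHeadings('# Title\n### Oops\n## Section'))
// → ['skipped level: ### Oops']
```
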
### Section Ordering

Every document follows this general pattern:

1. **YAML Frontmatter** (mandatory)
2. **Title** (H1)
3. **Executive Summary** (2-3 sentences)
4. **Core Content Sections** (H2, document-specific)
5. **Open Questions / Risks** (if applicable)
6. **References / Traceability** (links to upstream/downstream docs)

### Formatting Rules

| Element | Format | Example |
|---------|--------|---------|
| Requirements | `REQ-{NNN}` prefix | REQ-001: User login |
| Acceptance criteria | Checkbox list | `- [ ] User can log in with email` |
| Architecture decisions | `ADR-{NNN}` prefix | ADR-001: Use PostgreSQL |
| Epics | `EPIC-{NNN}` prefix | EPIC-001: Authentication |
| Stories | `STORY-{EPIC}-{NNN}` prefix | STORY-001-001: Login form |
| Priority tags | MoSCoW labels | `[Must]`, `[Should]`, `[Could]`, `[Won't]` |
| Mermaid diagrams | Fenced code blocks | `` ```mermaid ... ``` `` |
| Code examples | Language-tagged blocks | `` ```typescript ... ``` `` |

### Cross-Reference Format

Use relative references between documents:

```markdown
See [Product Brief](product-brief.md#section-name) for details.
Derived from [REQ-001](requirements.md#req-001).
```

### Language

- Document body: Follow the user's input language (Chinese or English)
- Technical identifiers: Always English (REQ-001, ADR-001, EPIC-001)
- YAML frontmatter keys: Always English

---

## spec-config.json Schema

```json
{
  "session_id": "string (required)",
  "seed_input": "string (required) - original user input",
  "input_type": "text|file (required)",
  "timestamp": "ISO8601 (required)",
  "mode": "interactive|auto (required)",
  "complexity": "simple|moderate|complex (required)",
  "depth": "light|standard|comprehensive (required)",
  "focus_areas": ["string array"],
  "seed_analysis": {
    "problem_statement": "string",
    "target_users": ["string array"],
    "domain": "string",
    "constraints": ["string array"],
    "dimensions": ["string array - 3-5 exploration dimensions"]
  },
  "has_codebase": "boolean",
  "phasesCompleted": [
    {
      "phase": "number (1-6)",
      "name": "string (phase name)",
      "output_file": "string (primary output file)",
      "completed_at": "ISO8601"
    }
  ]
}
```

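The required fields and enums above lend themselves to a small validation sketch; the field list is abbreviated and the helper names are illustrative:

```javascript
// Per-field predicates for the required spec-config.json fields (subset).
const REQUIRED = {
  session_id: v => typeof v === 'string' && v.startsWith('SPEC-'),
  seed_input: v => typeof v === 'string' && v.length > 0,
  input_type: v => ['text', 'file'].includes(v),
  mode: v => ['interactive', 'auto'].includes(v),
  complexity: v => ['simple', 'moderate', 'complex'].includes(v),
  depth: v => ['light', 'standard', 'comprehensive'].includes(v)
}

// Returns the list of field names that are missing or invalid.
function validateSpecConfig(config) {
  return Object.entries(REQUIRED)
    .filter(([key, check]) => !check(config[key]))
    .map(([key]) => key)
}

console.log(validateSpecConfig({
  session_id: 'SPEC-demo-2026-02-11',
  seed_input: 'Build a task manager',
  input_type: 'text',
  mode: 'auto',
  complexity: 'moderate'
  // depth missing
}))
// → ['depth']
```
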
---

## Validation Checklist

- [ ] Every document starts with valid YAML frontmatter
- [ ] `session_id` matches across all documents in a session
- [ ] `status` field reflects current document state
- [ ] All cross-references resolve to valid targets
- [ ] Heading hierarchy is correct (no skipped levels)
- [ ] Technical identifiers use correct prefixes
- [ ] Output files are in the correct directory

@@ -1,207 +0,0 @@

# Quality Gates

Per-phase quality gate criteria and scoring dimensions for spec-generator outputs.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| Phase 2-5 | Post-generation self-check | Per-Phase Gates |
| Phase 6 | Cross-document validation | Cross-Document Validation |
| Phase 6 | Final scoring | Scoring Dimensions |

---

## Quality Thresholds

| Gate | Score | Action |
|------|-------|--------|
| **Pass** | >= 80% | Continue to next phase |
| **Review** | 60-79% | Log warnings, continue with caveats |
| **Fail** | < 60% | Must address issues before continuing |

In auto mode (`-y`), Review-level issues are logged but do not block progress.

---

## Scoring Dimensions

### 1. Completeness (25%)

All required sections present with substantive content.

| Score | Criteria |
|-------|----------|
| 100% | All template sections filled with detailed content |
| 75% | All sections present, some lack detail |
| 50% | Major sections present but minor sections missing |
| 25% | Multiple major sections missing or empty |
| 0% | Document is a skeleton only |

### 2. Consistency (25%)

Terminology, formatting, and references are uniform across documents.

| Score | Criteria |
|-------|----------|
| 100% | All terms consistent, all references valid, formatting uniform |
| 75% | Minor terminology variations, all references valid |
| 50% | Some inconsistent terms, 1-2 broken references |
| 25% | Frequent inconsistencies, multiple broken references |
| 0% | Documents contradict each other |

### 3. Traceability (25%)

Requirements, architecture decisions, and stories trace back to goals.

| Score | Criteria |
|-------|----------|
| 100% | Every story traces to a requirement, every requirement traces to a goal |
| 75% | Most items traceable, few orphans |
| 50% | Partial traceability, some disconnected items |
| 25% | Weak traceability, many orphan items |
| 0% | No traceability between documents |

### 4. Depth (25%)

Content provides sufficient detail for execution teams.

| Score | Criteria |
|-------|----------|
| 100% | Acceptance criteria specific and testable, architecture decisions justified, stories estimable |
| 75% | Most items detailed enough, few vague areas |
| 50% | Mix of detailed and vague content |
| 25% | Mostly high-level, lacking actionable detail |
| 0% | Too abstract for execution |

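The four equally weighted dimensions combine into the overall score that the gate thresholds defined earlier act on. A sketch with illustrative names:

```javascript
// Each dimension contributes 25% to the overall score.
const WEIGHTS = { completeness: 0.25, consistency: 0.25, traceability: 0.25, depth: 0.25 }

function overallScore(scores) {
  return Object.entries(WEIGHTS)
    .reduce((sum, [dim, w]) => sum + w * (scores[dim] ?? 0), 0)
}

// Map a score to the Pass / Review / Fail gates.
function gate(score) {
  if (score >= 80) return 'Pass'
  if (score >= 60) return 'Review'
  return 'Fail'
}

const total = overallScore({ completeness: 100, consistency: 75, traceability: 75, depth: 50 })
console.log(total, gate(total)) // → 75 Review
```
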
---

## Per-Phase Quality Gates

### Phase 1: Discovery

| Check | Criteria | Severity |
|-------|----------|----------|
| Session ID valid | Matches `SPEC-{slug}-{date}` format | Error |
| Problem statement exists | Non-empty, >= 20 characters | Error |
| Target users identified | >= 1 user group | Error |
| Dimensions generated | 3-5 exploration dimensions | Warning |
| Constraints listed | >= 0 (can be empty with justification) | Info |

### Phase 2: Product Brief

| Check | Criteria | Severity |
|-------|----------|----------|
| Vision statement | Clear, 1-3 sentences | Error |
| Problem statement | Specific and measurable | Error |
| Target users | >= 1 persona with needs described | Error |
| Goals defined | >= 2 measurable goals | Error |
| Success metrics | >= 2 quantifiable metrics | Warning |
| Scope boundaries | In-scope and out-of-scope listed | Warning |
| Multi-perspective | >= 2 CLI perspectives synthesized | Info |

### Phase 3: Requirements (PRD)

| Check | Criteria | Severity |
|-------|----------|----------|
| Functional requirements | >= 3 with REQ-NNN IDs | Error |
| Acceptance criteria | Every requirement has >= 1 criterion | Error |
| MoSCoW priority | Every requirement tagged | Error |
| Non-functional requirements | >= 1 (performance, security, etc.) | Warning |
| User stories | >= 1 per Must-have requirement | Warning |
| Traceability | Requirements trace to product brief goals | Warning |

### Phase 4: Architecture

| Check | Criteria | Severity |
|-------|----------|----------|
| Component diagram | Present (Mermaid or ASCII) | Error |
| Tech stack specified | Languages, frameworks, key libraries | Error |
| ADR present | >= 1 Architecture Decision Record | Error |
| ADR has alternatives | Each ADR lists >= 2 options considered | Warning |
| Integration points | External systems/APIs identified | Warning |
| Data model | Key entities and relationships described | Warning |
| Codebase mapping | Mapped to existing code (if has_codebase) | Info |

### Phase 5: Epics & Stories

| Check | Criteria | Severity |
|-------|----------|----------|
| Epics defined | 3-7 epics with EPIC-NNN IDs | Error |
| MVP subset | >= 1 epic tagged as MVP | Error |
| Stories per epic | 2-5 stories per epic | Error |
| Story format | "As a...I want...So that..." pattern | Warning |
| Dependency map | Cross-epic dependencies documented | Warning |
| Estimation hints | Relative sizing (S/M/L/XL) per story | Info |
| Traceability | Stories trace to requirements | Warning |

### Phase 6: Readiness Check

| Check | Criteria | Severity |
|-------|----------|----------|
| All documents exist | product-brief, requirements, architecture, epics | Error |
| Frontmatter valid | All YAML frontmatter parseable and correct | Error |
| Cross-references valid | All document links resolve | Error |
| Overall score >= 60% | Weighted average across 4 dimensions | Error |
| No unresolved Errors | All Error-severity issues addressed | Error |
| Summary generated | spec-summary.md created | Warning |

---

## Cross-Document Validation

Checks performed during Phase 6 across all documents:

### Completeness Matrix

```
Product Brief goals -> Requirements (each goal has >= 1 requirement)
Requirements -> Architecture (each Must requirement has design coverage)
Requirements -> Epics (each Must requirement appears in >= 1 story)
Architecture ADRs -> Epics (tech choices reflected in implementation stories)
```

### Consistency Checks

| Check | Documents | Rule |
|-------|-----------|------|
| Terminology | All | Same term used consistently (no synonyms for same concept) |
| User personas | Brief + PRD + Epics | Same user names/roles throughout |
| Scope | Brief + PRD | PRD scope does not exceed brief scope |
| Tech stack | Architecture + Epics | Stories reference correct technologies |

### Traceability Matrix Format

```markdown
| Goal | Requirements | Architecture | Epics |
|------|-------------|--------------|-------|
| G-001: ... | REQ-001, REQ-002 | ADR-001 | EPIC-001 |
| G-002: ... | REQ-003 | ADR-002 | EPIC-002, EPIC-003 |
```

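One of the completeness checks above, requirements appearing in at least one story, can be sketched as an ID-prefix scan over the raw documents; this is illustrative, and the Must-only filter is omitted for brevity:

```javascript
// Collect unique IDs like REQ-001 from a document string.
function extractIds(text, prefix) {
  return [...new Set(text.match(new RegExp(`${prefix}-\\d{3}`, 'g')) || [])]
}

// Requirement IDs that never appear in the epics document are orphans.
function findOrphanRequirements(requirementsDoc, epicsDoc) {
  return extractIds(requirementsDoc, 'REQ').filter(id => !epicsDoc.includes(id))
}

const requirementsDoc = 'REQ-001: login [Must]\nREQ-002: export [Must]'
const epicsDoc = 'STORY-001-001 covers REQ-001'
console.log(findOrphanRequirements(requirementsDoc, epicsDoc)) // → ['REQ-002']
```
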
---

## Issue Classification

### Error (Must Fix)

- Missing required document or section
- Broken cross-references
- Contradictory information between documents
- Empty acceptance criteria on Must-have requirements
- No MVP subset defined in epics

### Warning (Should Fix)

- Vague acceptance criteria
- Missing non-functional requirements
- No success metrics defined
- Incomplete traceability
- Missing architecture review notes

### Info (Nice to Have)

- Could add more detailed personas
- Consider additional ADR alternatives
- Story estimation hints missing
- Mermaid diagrams could be more detailed

@@ -1,156 +0,0 @@

{
  "team_name": "team-lifecycle",
  "team_display_name": "Team Lifecycle",
  "description": "Unified team skill covering spec-to-dev-to-test full lifecycle",
  "version": "2.0.0",
  "architecture": "folder-based",
  "role_structure": "roles/{name}/role.md + roles/{name}/commands/*.md",

  "roles": {
    "coordinator": {
      "task_prefix": null,
      "responsibility": "Pipeline orchestration, requirement clarification, task chain creation, message dispatch",
      "message_types": ["plan_approved", "plan_revision", "task_unblocked", "fix_required", "error", "shutdown"]
    },
    "analyst": {
      "task_prefix": "RESEARCH",
      "responsibility": "Seed analysis, codebase exploration, multi-dimensional context gathering",
      "message_types": ["research_ready", "research_progress", "error"]
    },
    "writer": {
      "task_prefix": "DRAFT",
      "responsibility": "Product Brief / PRD / Architecture / Epics document generation",
      "message_types": ["draft_ready", "draft_revision", "impl_progress", "error"]
    },
    "discussant": {
      "task_prefix": "DISCUSS",
      "responsibility": "Multi-perspective critique, consensus building, conflict escalation",
      "message_types": ["discussion_ready", "discussion_blocked", "impl_progress", "error"]
    },
    "planner": {
      "task_prefix": "PLAN",
      "responsibility": "Multi-angle code exploration, structured implementation planning",
      "message_types": ["plan_ready", "plan_revision", "impl_progress", "error"]
    },
    "executor": {
      "task_prefix": "IMPL",
      "responsibility": "Code implementation following approved plans",
      "message_types": ["impl_complete", "impl_progress", "error"]
    },
    "tester": {
      "task_prefix": "TEST",
      "responsibility": "Adaptive test-fix cycles, progressive testing, quality gates",
      "message_types": ["test_result", "impl_progress", "fix_required", "error"]
    },
    "reviewer": {
      "task_prefix": "REVIEW",
      "additional_prefixes": ["QUALITY"],
      "responsibility": "Code review (REVIEW-*) + Spec quality validation (QUALITY-*)",
      "message_types": ["review_result", "quality_result", "fix_required", "error"]
    },
    "explorer": {
      "task_prefix": "EXPLORE",
      "responsibility": "Code search, pattern discovery, dependency tracing. Service role, on-demand by coordinator",
      "role_type": "service",
      "message_types": ["explore_ready", "explore_progress", "task_failed"]
    },
    "architect": {
      "task_prefix": "ARCH",
      "responsibility": "Architecture assessment, tech feasibility, design pattern review. Consulting role, on-demand by coordinator",
      "role_type": "consulting",
      "consultation_modes": ["spec-review", "plan-review", "code-review", "consult", "feasibility"],
      "message_types": ["arch_ready", "arch_concern", "arch_progress", "error"]
    },
    "fe-developer": {
      "task_prefix": "DEV-FE",
      "responsibility": "Frontend component/page implementation, design token consumption, responsive UI",
      "role_type": "frontend-pipeline",
      "message_types": ["dev_fe_complete", "dev_fe_progress", "error"]
    },
    "fe-qa": {
      "task_prefix": "QA-FE",
      "responsibility": "5-dimension frontend review (quality, a11y, design compliance, UX, pre-delivery), GC loop",
      "role_type": "frontend-pipeline",
      "message_types": ["qa_fe_passed", "qa_fe_result", "fix_required", "error"]
    }
  },

  "pipelines": {
    "spec-only": {
      "description": "Specification pipeline: research → discuss → draft → quality",
      "task_chain": [
        "RESEARCH-001",
        "DISCUSS-001", "DRAFT-001", "DISCUSS-002",
        "DRAFT-002", "DISCUSS-003", "DRAFT-003", "DISCUSS-004",
        "DRAFT-004", "DISCUSS-005", "QUALITY-001", "DISCUSS-006"
      ]
    },
    "impl-only": {
      "description": "Implementation pipeline: plan → implement → test + review",
      "task_chain": ["PLAN-001", "IMPL-001", "TEST-001", "REVIEW-001"]
    },
    "full-lifecycle": {
      "description": "Full lifecycle: spec pipeline → implementation pipeline",
      "task_chain": "spec-only + impl-only (PLAN-001 blockedBy DISCUSS-006)"
    },
    "fe-only": {
      "description": "Frontend-only pipeline: plan → frontend dev → frontend QA",
      "task_chain": ["PLAN-001", "DEV-FE-001", "QA-FE-001"],
      "gc_loop": { "max_rounds": 2, "convergence": "score >= 8 && critical === 0" }
    },
    "fullstack": {
      "description": "Fullstack pipeline: plan → backend + frontend parallel → test + QA",
      "task_chain": ["PLAN-001", "IMPL-001||DEV-FE-001", "TEST-001||QA-FE-001", "REVIEW-001"],
      "sync_points": ["REVIEW-001"]
    },
    "full-lifecycle-fe": {
      "description": "Full lifecycle with frontend: spec → plan → backend + frontend → test + QA",
      "task_chain": "spec-only + fullstack (PLAN-001 blockedBy DISCUSS-006)"
    }
  },

  "frontend_detection": {
    "keywords": ["component", "page", "UI", "前端", "frontend", "CSS", "HTML", "React", "Vue", "Tailwind", "组件", "页面", "样式", "layout", "responsive", "Svelte", "Next.js", "Nuxt", "shadcn", "设计系统", "design system"],
    "file_patterns": ["*.tsx", "*.jsx", "*.vue", "*.svelte", "*.css", "*.scss", "*.html"],
    "routing_rules": {
      "frontend_only": "All tasks match frontend keywords, no backend/API mentions",
      "fullstack": "Mix of frontend and backend tasks",
      "backend_only": "No frontend keywords detected (default impl-only)"
    }
  },

  "ui_ux_pro_max": {
    "skill_name": "ui-ux-pro-max",
    "install_command": "/plugin install ui-ux-pro-max@ui-ux-pro-max-skill",
    "invocation": "Skill(skill=\"ui-ux-pro-max\", args=\"...\")",
    "domains": ["product", "style", "typography", "color", "landing", "chart", "ux", "web"],
    "stacks": ["html-tailwind", "react", "nextjs", "vue", "svelte", "shadcn", "swiftui", "react-native", "flutter"],
    "fallback": "llm-general-knowledge",
    "design_intelligence_chain": ["analyst → design-intelligence.json", "architect → design-tokens.json", "fe-developer → tokens.css", "fe-qa → anti-pattern audit"]
  },

  "shared_memory": {
    "file": "shared-memory.json",
    "schema": {
      "design_intelligence": "From analyst via ui-ux-pro-max",
      "design_token_registry": "From architect, consumed by fe-developer/fe-qa",
      "component_inventory": "From fe-developer, list of implemented components",
      "style_decisions": "Accumulated design decisions",
      "qa_history": "From fe-qa, audit trail",
      "industry_context": "Industry + strictness config"
    }
  },

  "collaboration_patterns": ["CP-1", "CP-2", "CP-4", "CP-5", "CP-6", "CP-10"],

  "session_dirs": {
    "base": ".workflow/.team/TLS-{slug}-{YYYY-MM-DD}/",
    "spec": "spec/",
    "discussions": "discussions/",
    "plan": "plan/",
    "explorations": "explorations/",
    "architecture": "architecture/",
    "wisdom": "wisdom/",
    "messages": ".workflow/.team-msg/{team-name}/"
  }
}

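The `frontend_detection` routing rules can be sketched as a small classifier. The keyword lists below are abbreviated from the config, and the backend keyword list is an assumption for illustration (the config itself detects backend only by the absence of frontend keywords):

```javascript
// Abbreviated keyword lists; BE_KEYWORDS is illustrative only.
const FE_KEYWORDS = ['component', 'page', 'UI', 'frontend', 'CSS', 'React', 'Vue', 'Tailwind']
const BE_KEYWORDS = ['API', 'backend', 'database', 'endpoint', 'server']

// Route a task list to fe-only / fullstack / impl-only.
function routePipeline(tasks) {
  const fe = tasks.some(t => FE_KEYWORDS.some(k => t.includes(k)))
  const be = tasks.some(t => BE_KEYWORDS.some(k => t.includes(k)))
  if (fe && be) return 'fullstack'
  if (fe) return 'fe-only'
  return 'impl-only'
}

console.log(routePipeline(['Build React component for settings page'])) // → fe-only
console.log(routePipeline(['Build React UI', 'Add REST API endpoint'])) // → fullstack
console.log(routePipeline(['Refactor database layer']))                 // → impl-only
```
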
@@ -1,254 +0,0 @@

# Architecture Document Template (Directory Structure)

Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
| Output Location | `{workDir}/architecture/` |

## Output Structure

```
{workDir}/architecture/
├── _index.md            # Overview, components, tech stack, data model, security
├── ADR-001-{slug}.md    # Individual Architecture Decision Record
├── ADR-002-{slug}.md
└── ...
```

---

## Template: _index.md

````markdown
---
session_id: {session_id}
phase: 4
document_type: architecture-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
  - ../requirements/_index.md
---

# Architecture: {product_name}

{executive_summary - high-level architecture approach and key decisions}

## System Overview

### Architecture Style

{description of chosen architecture style: microservices, monolith, serverless, etc.}

### System Context Diagram

```mermaid
C4Context
    title System Context Diagram
    Person(user, "User", "Primary user")
    System(system, "{product_name}", "Core system")
    System_Ext(ext1, "{external_system}", "{description}")
    Rel(user, system, "Uses")
    Rel(system, ext1, "Integrates with")
```

## Component Architecture

### Component Diagram

```mermaid
graph TD
    subgraph "{product_name}"
        A[Component A] --> B[Component B]
        B --> C[Component C]
        A --> D[Component D]
    end
    B --> E[External Service]
```

### Component Descriptions

| Component | Responsibility | Technology | Dependencies |
|-----------|---------------|------------|--------------|
| {component_name} | {what it does} | {tech stack} | {depends on} |

## Technology Stack

### Core Technologies

| Layer | Technology | Version | Rationale |
|-------|-----------|---------|-----------|
| Frontend | {technology} | {version} | {why chosen} |
| Backend | {technology} | {version} | {why chosen} |
| Database | {technology} | {version} | {why chosen} |
| Infrastructure | {technology} | {version} | {why chosen} |

### Key Libraries & Frameworks

| Library | Purpose | License |
|---------|---------|---------|
| {library_name} | {purpose} | {license} |

## Architecture Decision Records

| ADR | Title | Status | Key Choice |
|-----|-------|--------|------------|
| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |

## Data Architecture

### Data Model

```mermaid
erDiagram
    ENTITY_A ||--o{ ENTITY_B : "has many"
    ENTITY_A {
        string id PK
        string name
        datetime created_at
    }
    ENTITY_B {
        string id PK
        string entity_a_id FK
        string value
    }
```

### Data Storage Strategy

| Data Type | Storage | Retention | Backup |
|-----------|---------|-----------|--------|
| {type} | {storage solution} | {retention policy} | {backup strategy} |

## API Design

### API Overview

| Endpoint | Method | Purpose | Auth |
|----------|--------|---------|------|
| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |

## Security Architecture

### Security Controls

| Control | Implementation | Requirement |
|---------|---------------|-------------|
| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |

## Infrastructure & Deployment

### Deployment Architecture

{description of deployment model: containers, serverless, VMs, etc.}

### Environment Strategy

| Environment | Purpose | Configuration |
|-------------|---------|---------------|
| Development | Local development | {config} |
| Staging | Pre-production testing | {config} |
| Production | Live system | {config} |

## Codebase Integration

{if has_codebase is true:}

### Existing Code Mapping

| New Component | Existing Module | Integration Type | Notes |
|--------------|----------------|------------------|-------|
| {component} | {existing module path} | Extend/Replace/New | {notes} |

### Migration Notes

{any migration considerations for existing code}

## Quality Attributes

| Attribute | Target | Measurement | ADR Reference |
|-----------|--------|-------------|---------------|
| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |

## Risks & Mitigations

| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |

## Open Questions

- [ ] {architectural question 1}
- [ ] {architectural question 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
- Next: [Epics & Stories](../epics/_index.md)
````

---

## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)
|
||||
|
||||
```markdown
|
||||
---
|
||||
id: ADR-{NNN}
|
||||
status: Accepted
|
||||
traces_to: [{REQ-NNN}, {NFR-X-NNN}]
|
||||
date: {timestamp}
|
||||
---
|
||||
|
||||
# ADR-{NNN}: {decision_title}
|
||||
|
||||
## Context
|
||||
|
||||
{what is the situation that motivates this decision}
|
||||
|
||||
## Decision
|
||||
|
||||
{what is the chosen approach}
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
| Option | Pros | Cons |
|
||||
|--------|------|------|
|
||||
| {option_1 - chosen} | {pros} | {cons} |
|
||||
| {option_2} | {pros} | {cons} |
|
||||
| {option_3} | {pros} | {cons} |
|
||||
|
||||
## Consequences
|
||||
|
||||
- **Positive**: {positive outcomes}
|
||||
- **Negative**: {tradeoffs accepted}
|
||||
- **Risks**: {risks to monitor}
|
||||
|
||||
## Traces
|
||||
|
||||
- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
|
||||
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Variable Descriptions
|
||||
|
||||
| Variable | Source | Description |
|
||||
|----------|--------|-------------|
|
||||
| `{session_id}` | spec-config.json | Session identifier |
|
||||
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
|
||||
| `{product_name}` | product-brief.md | Product/feature name |
|
||||
| `{NNN}` | Auto-increment | ADR/requirement number |
|
||||
| `{slug}` | Auto-generated | Kebab-case from decision title |
|
||||
| `{has_codebase}` | spec-config.json | Whether existing codebase exists |
|
||||
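
The `{NNN}` and `{slug}` conventions above can be pinned down with a small helper; a minimal sketch (the function names are illustrative, not part of the workflow itself):

```python
import re

def make_slug(title: str) -> str:
    """Kebab-case slug from a decision title, per the {slug} convention."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def make_adr_id(n: int) -> str:
    """Zero-padded ADR number, per the {NNN} auto-increment convention."""
    return f"ADR-{n:03d}"

print(make_adr_id(7))                           # ADR-007
print(make_slug("Use PostgreSQL for Storage"))  # use-postgresql-for-storage
```

Combining the two yields filenames such as `ADR-007-use-postgresql-for-storage.md`.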
@@ -1,196 +0,0 @@

# Epics & Stories Template (Directory Structure)

Template for generating the epic/story breakdown as a directory of individual Epic files in Phase 5.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
| Output Location | `{workDir}/epics/` |

## Output Structure

```
{workDir}/epics/
├── _index.md             # Overview table + dependency map + MVP scope + execution order
├── EPIC-001-{slug}.md    # Individual Epic with its Stories
├── EPIC-002-{slug}.md
└── ...
```

---

## Template: _index.md

````markdown
---
session_id: {session_id}
phase: 5
document_type: epics-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
  - ../requirements/_index.md
  - ../architecture/_index.md
---

# Epics & Stories: {product_name}

{executive_summary - overview of epic structure and MVP scope}

## Epic Overview

| Epic ID | Title | Priority | MVP | Stories | Est. Size |
|---------|-------|----------|-----|---------|-----------|
| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |

## Dependency Map

```mermaid
graph LR
    EPIC-001 --> EPIC-002
    EPIC-001 --> EPIC-003
    EPIC-002 --> EPIC-004
    EPIC-003 --> EPIC-005
```

### Dependency Notes

{explanation of why these dependencies exist and suggested execution order}

### Recommended Execution Order

1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
3. ...

## MVP Scope

### MVP Epics

{list of epics included in MVP with justification, linking to each}

### MVP Definition of Done

- [ ] {MVP completion criterion 1}
- [ ] {MVP completion criterion 2}
- [ ] {MVP completion criterion 3}

## Traceability Matrix

| Requirement | Epic | Stories | Architecture |
|-------------|------|---------|--------------|
| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |

## Estimation Summary

| Size | Meaning | Count |
|------|---------|-------|
| S | Small - well-understood, minimal risk | {n} |
| M | Medium - some complexity, moderate risk | {n} |
| L | Large - significant complexity, should consider splitting | {n} |
| XL | Extra Large - high complexity, must split before implementation | {n} |

## Risks & Considerations

| Risk | Affected Epics | Mitigation |
|------|---------------|------------|
| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |

## Open Questions

- [ ] {question about scope or implementation 1}
- [ ] {question about scope or implementation 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
- Handoff to: execution workflows (lite-plan, plan, req-plan)
````

---

## Template: EPIC-NNN-{slug}.md (Individual Epic)

```markdown
---
id: EPIC-{NNN}
priority: {Must|Should|Could}
mvp: {true|false}
size: {S|M|L|XL}
requirements: [REQ-{NNN}]
architecture: [ADR-{NNN}]
dependencies: [EPIC-{NNN}]
status: draft
---

# EPIC-{NNN}: {epic_title}

**Priority**: {Must|Should|Could}
**MVP**: {Yes|No}
**Estimated Size**: {S|M|L|XL}

## Description

{detailed epic description}

## Requirements

- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}

## Architecture

- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
- Component: {component_name}

## Dependencies

- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}

## Stories

### STORY-{EPIC}-001: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
- [ ] {criterion 3}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)

---

### STORY-{EPIC}-002: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{EPIC}` | Auto-increment | Epic number (3 digits) |
| `{NNN}` | Auto-increment | Story/requirement number |
| `{slug}` | Auto-generated | Kebab-case from epic/story title |
| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |

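
The Recommended Execution Order in `_index.md` follows directly from the Dependency Map; a minimal sketch of deriving it via topological sort (the edge data mirrors the mermaid example above and is purely illustrative):

```python
from graphlib import TopologicalSorter

# Edges from the example Dependency Map: "A --> B" means B depends on A.
edges = [("EPIC-001", "EPIC-002"), ("EPIC-001", "EPIC-003"),
         ("EPIC-002", "EPIC-004"), ("EPIC-003", "EPIC-005")]

# Build predecessor sets: node -> set of blocking epics.
deps = {}
for blocker, blocked in edges:
    deps.setdefault(blocker, set())
    deps.setdefault(blocked, set()).add(blocker)

# static_order() yields each epic only after all of its blockers.
order = list(TopologicalSorter(deps).static_order())
print(order)  # EPIC-001 first; every epic appears after its blockers
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which doubles as a sanity check on the map.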
@@ -1,133 +0,0 @@

# Product Brief Template

Template for generating product brief documents in Phase 2.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis |
| Output Location | `{workDir}/product-brief.md` |

---

## Template

```markdown
---
session_id: {session_id}
phase: 2
document_type: product-brief
status: draft
generated_at: {timestamp}
stepsCompleted: []
version: 1
dependencies:
  - spec-config.json
---

# Product Brief: {product_name}

{executive_summary - 2-3 sentences capturing the essence of the product/feature}

## Vision

{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like}

## Problem Statement

### Current Situation

{description of the current state and pain points}

### Impact

{quantified impact of the problem - who is affected, how much, how often}

## Target Users

{for each user persona:}

### {Persona Name}

- **Role**: {user's role/context}
- **Needs**: {primary needs related to this product}
- **Pain Points**: {current frustrations}
- **Success Criteria**: {what success looks like for this user}

## Goals & Success Metrics

| Goal ID | Goal | Success Metric | Target |
|---------|------|----------------|--------|
| G-001 | {goal description} | {measurable metric} | {specific target} |
| G-002 | {goal description} | {measurable metric} | {specific target} |

## Scope

### In Scope

- {feature/capability 1}
- {feature/capability 2}
- {feature/capability 3}

### Out of Scope

- {explicitly excluded item 1}
- {explicitly excluded item 2}

### Assumptions

- {key assumption 1}
- {key assumption 2}

## Competitive Landscape

| Aspect | Current State | Proposed Solution | Advantage |
|--------|--------------|-------------------|-----------|
| {aspect} | {how it's done now} | {our approach} | {differentiator} |

## Constraints & Dependencies

### Technical Constraints

- {constraint 1}
- {constraint 2}

### Business Constraints

- {constraint 1}

### Dependencies

- {external dependency 1}
- {external dependency 2}

## Multi-Perspective Synthesis

### Product Perspective

{summary of product/market analysis findings}

### Technical Perspective

{summary of technical feasibility and constraints}

### User Perspective

{summary of user journey and UX considerations}

### Convergent Themes

{themes where all perspectives agree}

### Conflicting Views

{areas where perspectives differ, with notes on resolution approach}

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [spec-config.json](spec-config.json)
- Next: [Requirements PRD](requirements/_index.md)
```

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | Seed analysis | Product/feature name |
| `{executive_summary}` | CLI synthesis | 2-3 sentence summary |
| `{vision_statement}` | CLI product perspective | Aspirational vision |
| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis |

@@ -1,224 +0,0 @@

# Requirements PRD Template (Directory Structure)

Template for generating the Product Requirements Document as a directory of individual requirement files in Phase 3.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
| Output Location | `{workDir}/requirements/` |

## Output Structure

```
{workDir}/requirements/
├── _index.md             # Summary + MoSCoW table + traceability matrix + links
├── REQ-001-{slug}.md     # Individual functional requirement
├── REQ-002-{slug}.md
├── NFR-P-001-{slug}.md   # Non-functional: Performance
├── NFR-S-001-{slug}.md   # Non-functional: Security
├── NFR-SC-001-{slug}.md  # Non-functional: Scalability
├── NFR-U-001-{slug}.md   # Non-functional: Usability
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 3
document_type: requirements-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
---

# Requirements: {product_name}

{executive_summary - brief overview of what this PRD covers and key decisions}

## Requirement Summary

| Priority | Count | Coverage |
|----------|-------|----------|
| Must Have | {n} | {description of must-have scope} |
| Should Have | {n} | {description of should-have scope} |
| Could Have | {n} | {description of could-have scope} |
| Won't Have | {n} | {description of explicitly excluded} |

## Functional Requirements

| ID | Title | Priority | Traces To |
|----|-------|----------|-----------|
| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |

## Non-Functional Requirements

### Performance

| ID | Title | Target |
|----|-------|--------|
| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |

### Security

| ID | Title | Standard |
|----|-------|----------|
| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |

### Scalability

| ID | Title | Target |
|----|-------|--------|
| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |

### Usability

| ID | Title | Target |
|----|-------|--------|
| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |

## Data Requirements

### Data Entities

| Entity | Description | Key Attributes |
|--------|-------------|----------------|
| {entity_name} | {description} | {attr1, attr2, attr3} |

### Data Flows

{description of key data flows, optionally with Mermaid diagram}

## Integration Requirements

| System | Direction | Protocol | Data Format | Notes |
|--------|-----------|----------|-------------|-------|
| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |

## Constraints & Assumptions

### Constraints

- {technical or business constraint 1}
- {technical or business constraint 2}

### Assumptions

- {assumption 1 - must be validated}
- {assumption 2 - must be validated}

## Priority Rationale

{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}

## Traceability Matrix

| Goal | Requirements |
|------|-------------|
| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [Product Brief](../product-brief.md)
- Next: [Architecture](../architecture/_index.md)
```

---

## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)

```markdown
---
id: REQ-{NNN}
type: functional
priority: {Must|Should|Could|Won't}
traces_to: [G-{NNN}]
status: draft
---

# REQ-{NNN}: {requirement_title}

**Priority**: {Must|Should|Could|Won't}

## Description

{detailed requirement description}

## User Story

As a {persona}, I want to {action} so that {benefit}.

## Acceptance Criteria

- [ ] {specific, testable criterion 1}
- [ ] {specific, testable criterion 2}
- [ ] {specific, testable criterion 3}

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)

```markdown
---
id: NFR-{type}-{NNN}
type: non-functional
category: {Performance|Security|Scalability|Usability}
priority: {Must|Should|Could}
status: draft
---

# NFR-{type}-{NNN}: {requirement_title}

**Category**: {Performance|Security|Scalability|Usability}
**Priority**: {Must|Should|Could}

## Requirement

{detailed requirement description}

## Metric & Target

| Metric | Target | Measurement Method |
|--------|--------|--------------------|
| {metric} | {target value} | {how measured} |

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
| `{slug}` | Auto-generated | Kebab-case from requirement title |
| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |

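
The Traceability Matrix in `_index.md` can be rebuilt mechanically from each requirement file's `traces_to` frontmatter; a minimal sketch of the grouping step (frontmatter parsing is elided, and the sample IDs are hypothetical):

```python
from collections import defaultdict

# {requirement_id: traces_to goals}, as parsed from each file's frontmatter.
frontmatter = {
    "REQ-001": ["G-001"],
    "REQ-002": ["G-001"],
    "REQ-003": ["G-002"],
    "NFR-P-001": ["G-001"],
    "NFR-S-001": ["G-002"],
}

# Invert the mapping: goal -> list of requirements that trace to it.
matrix = defaultdict(list)
for req_id, goals in frontmatter.items():
    for goal in goals:
        matrix[goal].append(req_id)

# Emit one matrix row per goal, in the _index.md table format.
for goal in sorted(matrix):
    print(f"| {goal} | {', '.join(matrix[goal])} |")
```

Regenerating the matrix this way keeps it consistent with the individual files instead of relying on hand-maintained tables.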
@@ -1,274 +0,0 @@

---
name: team-skill-designer
description: Design and generate unified team skills with role-based routing. All team members invoke ONE skill, SKILL.md routes to role-specific execution via --role arg. Triggers on "design team skill", "create team skill", "team skill designer".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep
---

# Team Skill Designer v2

Meta-skill for creating unified team skills where all team members invoke ONE skill with role-based routing. Generates a complete skill package with SKILL.md as the role router and a `roles/` folder for per-role execution detail.

**v2 Style**: Generated skill packages follow the v3 authoring spec: text + decision tables + flow symbols, no pseudocode, `<placeholder>` placeholders, explicit cadence control.

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│ Team Skill Designer (this meta-skill)                           │
│ → Collect requirements → Analyze patterns → Generate skill pkg  │
└───────────────┬─────────────────────────────────────────────────┘
                │
    ┌───────────┼───────────┬───────────┬───────────┐
    ↓           ↓           ↓           ↓           ↓
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │ Phase 4 │ │ Phase 5 │
│ Require │ │ Pattern │ │ Skill   │ │ Integ   │ │ Valid   │
│ Collect │ │ Analyze │ │ Gen     │ │ Verify  │ │         │
└─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘
    ↓           ↓           ↓           ↓           ↓
  team-      pattern-    preview/    integ-     validated
config.json  analysis    SKILL.md +  report     skill pkg
             .json       roles/*.md  .json      → delivery
```

## Key Innovation: Unified Skill + Role Router

**Before** (command approach): 5 separate command files → 5 separate skill paths

**After** (unified skill approach):

```
.claude/skills/team-<name>/
├── SKILL.md            → Skill(skill="team-<name>", args="--role=xxx")
├── roles/
│   ├── coordinator/
│   │   ├── role.md
│   │   └── commands/
│   ├── <worker-1>/
│   │   ├── role.md
│   │   └── commands/
│   └── <worker-N>/
│       ├── role.md
│       └── commands/
└── specs/
    └── team-config.json
```

→ 1 skill entry point; the `--role` arg routes to per-role execution

## Core Design Patterns

### Pattern 1: Role Router (Unified Entry Point)

SKILL.md parses `$ARGUMENTS` to extract `--role`:

```
Input: Skill(skill="team-<name>", args="--role=planner")
→ Parse --role=planner
→ Read roles/planner/role.md
→ Execute planner-specific phases
```

No `--role` → Orchestration Mode (auto-route to coordinator).
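
A minimal sketch of the router's `--role` parsing, in Python for illustration only (the generated SKILL.md expresses this flow as text + decision tables, never as code):

```python
def route(arguments: str) -> str:
    """Return the role to dispatch to; no --role means Orchestration Mode."""
    for token in arguments.split():
        if token.startswith("--role="):
            return token.removeprefix("--role=")
    return "coordinator"  # Orchestration Mode: auto-route to coordinator

print(route("--role=planner"))         # planner
print(route("build a login feature"))  # coordinator
```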

### Pattern 2: SKILL.md = Orchestration-Level Only

SKILL.md contains orchestration-level content only:

| Included | Excluded |
|----------|----------|
| Role Router (parse → dispatch) | Message bus code |
| Architecture diagram | Task lifecycle code |
| Role Registry table (with markdown links) | Tool declarations / usage examples |
| Pipeline definition + Cadence Control | Role-specific detection logic |
| Coordinator Spawn Template | Implementation-level code blocks |
| Shared Infrastructure (Phase 1/5 templates) | |
| Compact Protection | |

### Pattern 3: Role Files = Self-Contained Execution

Each `roles/<role>/role.md` contains everything the role needs to execute:
- Identity, Boundaries (MUST/MUST NOT)
- Toolbox table (command links)
- Phase 2-4 core logic (text + decision tables, no pseudocode)
- Error Handling table

**Key principle**: after loading role.md, a subagent can execute without re-reading SKILL.md.

### Pattern 4: v3 Style Output

Generated skill packages follow these style rules:

| Rule | Description |
|------|-------------|
| No pseudocode | Flows use text + decision tables + flow symbols (→ ├─ └─) |
| Code blocks limited to tool calls | Only calls that actually execute: Task(), TaskCreate(), Bash(), Read(), etc. |
| `<placeholder>` placeholders | Never `${variable}` or `{{handlebars}}` |
| Decision tables | All branch logic uses `\| Condition \| Action \|` tables |
| Phase 1/5 shared | SKILL.md defines Shared Infrastructure; role.md covers only Phase 2-4 |
| Cadence Control | SKILL.md includes the cadence diagram and checkpoint definitions |
| Compact Protection | Phase Reference table has a Compact column; critical phases are flagged for re-read |
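
The placeholder rule can be enforced mechanically in Phase 4; a sketch of a checker that flags forbidden `${variable}` / `{{handlebars}}` syntax in generated files (an assumed helper, not part of the designer itself):

```python
import re

# v3 style allows only <placeholder>; these two syntaxes are forbidden.
FORBIDDEN = [re.compile(r"\$\{[^}]+\}"), re.compile(r"\{\{[^}]+\}\}")]

def find_style_violations(text: str) -> list:
    """Return every ${...} or {{...}} occurrence found in the text."""
    hits = []
    for pattern in FORBIDDEN:
        hits.extend(pattern.findall(text))
    return hits

sample = "Read <config-path> then expand ${workDir} and {{team_name}}"
print(find_style_violations(sample))  # ['${workDir}', '{{team_name}}']
```

A non-empty result maps to the "Template variable unresolved" row in Error Handling: fail with the specific variable name.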
### Pattern 5: Batch Role Generation

Phase 1 collects all roles in one pass (not one by one):
- Team name + all role definitions in one pass
- Coordinator always generated
- Worker roles collected as a batch

### Pattern 6: Coordinator Commands Alignment

**dispatch.md constraints**: owner values match the Role Registry | task IDs match the Pipeline diagram | no ghost role names

**monitor.md constraints**: spawn prompt contains the full `Skill(skill="...", args="--role=...")` | Task() includes description + team_name + name | Message Routing role names match the Registry

**Verification timing**: checked automatically in Phase 4.

---

## Mandatory Prerequisites

> **Do NOT skip**: Read these before any execution.

### Specification Documents

| Document | Purpose | When |
|----------|---------|------|
| [specs/team-design-patterns.md](specs/team-design-patterns.md) | Infrastructure patterns (9) + collaboration index | Required reading in Phase 0 |
| [specs/collaboration-patterns.md](specs/collaboration-patterns.md) | 11 collaboration patterns with convergence control | Required reading in Phase 0 |
| [specs/quality-standards.md](specs/quality-standards.md) | Quality criteria (4 dimensions + command standards) | Required reading before Phase 3 |

### Template Files

| Document | Purpose |
|----------|---------|
| [templates/skill-router-template.md](templates/skill-router-template.md) | Template for generating SKILL.md (v3 style) |
| [templates/role-template.md](templates/role-template.md) | Template for generating role.md (v3 style) |
| [templates/role-command-template.md](templates/role-command-template.md) | Template for generating command files (v3 style) |

---

## Cadence Control

**Cadence model**: serial 5-phase flow; each phase produces one artifact that becomes the next phase's input.

```
Phase Cadence (design-generation beat)
═══════════════════════════════════════════════════════════════════
Phase    0           1            2            3            4           5
         │           │            │            │            │           │
    Read specs → Collect    → Pattern   → Generate  → Verify    → Validate
                 reqs         analysis    skill       integ       & deliver
         │           │            │            │            │           │
     [memory]     config       analysis     preview/     report     delivery
                  .json        .json        SKILL.md     .json      → skills/
                                            roles/
                                            commands/
═══════════════════════════════════════════════════════════════════

Phase artifact chain:
  Phase 0 → [in-memory] specs + templates internalized
  Phase 1 → team-config.json (role definitions + pipeline)
  Phase 2 → pattern-analysis.json (pattern mapping + collaboration patterns)
  Phase 3 → preview/ complete skill package
  Phase 4 → integration-report.json (consistency report)
  Phase 5 → validation-report.json + delivery → .claude/skills/team-<name>/

Checkpoints:
  Phase 3 complete → ⏸ show the preview structure to the user; enter Phase 4 after confirmation
  Phase 5 scoring  → ⏸ if score < 60%, fall back to Phase 3 and regenerate
```

**Phase handoffs**:

| Current Phase | Completion Condition | Artifact | Next |
|---------------|----------------------|----------|------|
| Phase 0 | 3 specs + 3 templates read | in-memory | → Phase 1 |
| Phase 1 | team-config.json written successfully | team-config.json | → Phase 2 |
| Phase 2 | pattern-analysis.json written successfully | pattern-analysis.json | → Phase 3 |
| Phase 3 | all files generated under preview/ | preview/* | → ⏸ user confirmation → Phase 4 |
| Phase 4 | integration-report.json has no FAIL items | integration-report.json | → Phase 5 |
| Phase 5 | score ≥ 80% | delivery to skills/ | → done |

**Fallback mechanism**:

| Condition | Fall Back To | Action |
|-----------|--------------|--------|
| Phase 4 finds a FAIL item | Phase 3 | Fix, then regenerate |
| Phase 5 score < 60% | Phase 3 | Major rework |
| Phase 5 score 60-79% | Phase 4 | Re-verify after applying fix suggestions |
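
The handoff and fallback tables above can be read as a single decision function; a hedged sketch (the function and return strings are illustrative, not workflow identifiers):

```python
def next_step(phase4_failed, score=None):
    """Map verification results onto the fallback table's actions."""
    if phase4_failed:
        return "rerun Phase 3"      # fix, then regenerate
    if score is None:
        return "run Phase 5 scoring"
    if score < 60:
        return "rerun Phase 3"      # major rework
    if score < 80:
        return "rerun Phase 4"      # re-verify after fix suggestions
    return "deliver"                # score >= 80%: delivery to skills/

print(next_step(False, 85))  # deliver
```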

---

## Execution Flow

### Phase Reference Documents

| Phase | Document | Purpose | Compact |
|-------|----------|---------|---------|
| 0 | (inline) | Read specs + templates | N/A |
| 1 | [phases/01-requirements-collection.md](phases/01-requirements-collection.md) | Batch collect team + all role definitions | Compactable after completion |
| 2 | [phases/02-pattern-analysis.md](phases/02-pattern-analysis.md) | Per-role pattern matching and phase mapping | Compactable after completion |
| 3 | [phases/03-skill-generation.md](phases/03-skill-generation.md) | Generate unified skill package | **⚠️ Must re-read after compaction** |
| 4 | [phases/04-integration-verification.md](phases/04-integration-verification.md) | Verify internal consistency | Must re-read after compaction |
| 5 | [phases/05-validation.md](phases/05-validation.md) | Quality gate and delivery | Must re-read after compaction |

> **⚠️ COMPACT PROTECTION**: Phase files are execution documents. After context compression, when a phase's instructions survive only as a summary, **immediately `Read` the corresponding phase file to reload it before continuing**. Never execute any Step from a summary alone.

### Phase 0: Specification Study (Inline)

**The following files must be read before any generation**:

1. Read `specs/team-design-patterns.md` → 9 infrastructure patterns
2. Read `specs/collaboration-patterns.md` → 11 collaboration patterns
3. Read `specs/quality-standards.md` → quality standards
4. Read `templates/skill-router-template.md` → SKILL.md generation template
5. Read `templates/role-template.md` → role.md generation template
6. Read `templates/role-command-template.md` → command file template

### Phase 1-5: Delegated

Each phase reads and executes its corresponding phase file. See the Phase Reference Documents table above.

---

## Directory Setup

Working directory: `.workflow/.scratchpad/team-skill-<timestamp>/`

```bash
Bash("mkdir -p .workflow/.scratchpad/team-skill-$(date +%Y%m%d%H%M%S)")
```

## Output Structure

```
.workflow/.scratchpad/team-skill-<timestamp>/
├── team-config.json          # Phase 1 output
├── pattern-analysis.json     # Phase 2 output
├── integration-report.json   # Phase 4 output
├── validation-report.json    # Phase 5 output
└── preview/                  # Phase 3 output (preview before delivery)
    ├── SKILL.md
    ├── roles/
    │   ├── coordinator/
    │   │   ├── role.md
    │   │   └── commands/
    │   └── <role-N>/
    │       ├── role.md
    │       └── commands/
    └── specs/
        └── team-config.json

Final delivery → .claude/skills/team-<name>/
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Specs not found | Fall back to inline pattern knowledge |
| Role name conflicts | AskUserQuestion for rename |
| Task prefix conflicts | Suggest alternative prefix |
| Template variable unresolved | FAIL with the specific variable name |
| Quality score < 60% | Re-run Phase 3 with additional context |
| Phase file compressed | Re-read phase file before continuing |

# Phase 1: Requirements Collection (Task-Driven Inference)

Analyze the task requirements, infer appropriate roles, and generate the team configuration.

## Objective

- Determine the team name and display name
- Analyze the task description to infer needed roles (coordinator always included)
- For each role: name, responsibility type, task prefix, capabilities
- Build the pipeline from the inferred roles
- Generate `team-config.json`

## Input

| Source | Description |
|--------|-------------|
| User request | `$ARGUMENTS` or interactive input |
| Specification | `specs/team-design-patterns.md` (read in Phase 0) |

## Execution Steps

### Step 1: Team Name + Task Description

Prompt the user for the team name and core task description.

```
AskUserQuestion({
  questions: [
    { question: "Team name (lowercase, used as .claude/skills/team-{name}/)",
      header: "Team Name",
      options: ["custom", "dev", "spec", "security"] },
    { question: "Core task of this team? (system will infer roles automatically)",
      header: "Task Desc",
      options: ["custom", "fullstack dev", "code review + refactor", "doc writing"] }
  ]
})
```
### Step 2: Role Inference (Task-Driven)

Coordinator is **always** included. Scan the task description for intent signals to infer worker roles.

#### Role Signal Detection Table

| Role | Keywords (CN/EN) | Responsibility Type | Task Prefix |
|------|-------------------|---------------------|-------------|
| planner | plan, design, architect, explore, analyze requirements | Orchestration | PLAN |
| executor | implement, develop, build, code, create, refactor, migrate | Code generation | IMPL |
| tester | test, verify, validate, QA, regression, fix, bug | Validation | TEST |
| reviewer | review, audit, inspect, code quality | Read-only analysis | REVIEW |
| analyst | research, analyze, investigate, diagnose | Orchestration | RESEARCH |
| writer | document, write doc, generate report | Code generation | DRAFT |
| debugger | debug, troubleshoot, root cause | Orchestration | DEBUG |
| security | security, vulnerability, OWASP, compliance | Read-only analysis | SEC |

**Inference**: For each role, check whether the task description matches any keyword. Add matched roles to the inferred list.

#### Implicit Role Completion Table

| Condition | Add Role | Reason |
|-----------|----------|--------|
| Has executor, missing planner | Add planner (before executor) | Code needs planning first |
| Has executor, missing tester | Add tester (after executor) | Code needs validation |
| Has debugger, missing tester | Add tester | Bug fixes need verification |
| Has writer, missing reviewer | Add reviewer | Documents need review |

**Minimum guarantee**: If fewer than 2 worker roles are inferred, fall back to the standard set: planner + executor + tester + reviewer.

**Pipeline type tag** (for Step 5):

| Condition | Pipeline Type |
|-----------|---------------|
| Has writer role | Document |
| Has debugger role | Debug |
| Default | Standard |
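The inference rules above (keyword scan, implicit completion, minimum guarantee) can be sketched as plain matching logic. This is an illustrative Python sketch, not the skill's actual mechanism — the keyword lists are abbreviated from the tables, and the function name is an assumption:

```python
ROLE_SIGNALS = {  # role -> keywords, abbreviated from the signal detection table
    "planner": ["plan", "design", "architect", "explore"],
    "executor": ["implement", "develop", "build", "code", "refactor"],
    "tester": ["test", "verify", "validate", "regression"],
    "reviewer": ["review", "audit", "inspect"],
    "writer": ["document", "write doc", "generate report"],
    "debugger": ["debug", "troubleshoot", "root cause"],
}

# (present role, missing role to add) pairs, from the completion table
IMPLICIT = [("executor", "planner"), ("executor", "tester"),
            ("debugger", "tester"), ("writer", "reviewer")]

def infer_roles(task_description: str) -> list[str]:
    text = task_description.lower()
    roles = [r for r, kws in ROLE_SIGNALS.items()
             if any(kw in text for kw in kws)]
    # Implicit completion: add the missing partner role
    for present, missing in IMPLICIT:
        if present in roles and missing not in roles:
            roles.append(missing)
    # Minimum guarantee: fall back to the standard set
    if len(roles) < 2:
        roles = ["planner", "executor", "tester", "reviewer"]
    return roles
```

For example, "implement a REST API" matches only executor directly, then gains planner and tester via implicit completion.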
### Step 3: Role Confirmation (Interactive)

Present the inferred roles to the user for confirmation.

```
AskUserQuestion({
  questions: [
    { question: "Inferred roles: <roles-summary>. Adjust?",
      header: "Confirm",
      options: ["Confirm (Recommended)", "Add role", "Remove role", "Re-describe"] }
  ]
})
```

| User Choice | Action |
|-------------|--------|
| Confirm | Proceed with the inferred roles |
| Add role | AskUserQuestion for the new role name + responsibility type |
| Remove role | AskUserQuestion for which role to remove |
| Re-describe | Return to Step 1 and re-enter the task description |

### Step 4: Capability Assignment (Per Role)

For each worker role, assign capabilities based on its responsibility type.

#### Tool Assignment Table

| Responsibility Type | Extra Tools (beyond base set) | Adaptive Routing |
|---------------------|-------------------------------|------------------|
| Read-only analysis | Task(*) | No |
| Code generation | Write(*), Edit(*), Task(*) | Yes |
| Orchestration | Write(*), Task(*) | Yes |
| Validation | Write(*), Edit(*), Task(*) | No |

> **Base tools** (all roles): SendMessage, TaskUpdate, TaskList, TaskGet, TodoWrite, Read, Bash, Glob, Grep

#### Message Type Assignment Table

| Responsibility Type | Message Types |
|---------------------|---------------|
| Read-only analysis | `<role>_result` (analysis complete), `error` |
| Code generation | `<role>_complete`, `<role>_progress`, `error` |
| Orchestration | `<role>_ready`, `<role>_progress`, `error` |
| Validation | `<role>_result`, `fix_required`, `error` |

**Coordinator** gets special tools: TeamCreate, TeamDelete, AskUserQuestion, TaskCreate + all base tools.
**Coordinator** message types: `plan_approved`, `plan_revision`, `task_unblocked`, `shutdown`, `error`.

#### Toolbox Assignment Table

| Responsibility Type | Commands | Subagents | CLI Tools |
|---------------------|----------|-----------|-----------|
| Read-only analysis | review, analyze | (none) | gemini (analysis), codex (review) |
| Code generation | implement, validate | code-developer | (none) |
| Orchestration | explore, plan | cli-explore-agent, cli-lite-planning-agent | gemini (analysis) |
| Validation | validate | code-developer | (none) |

**Coordinator** always gets: commands=[dispatch, monitor], no subagents, no CLI tools.

### Step 5: Pipeline Definition

Sort roles into execution stages by weight, then build the dependency chain.

#### Stage Weight Table

| Role | Weight | Stage Position |
|------|--------|----------------|
| analyst, debugger, security | 1 | Analysis/Exploration |
| planner | 2 | Planning |
| executor, writer | 3 | Implementation |
| tester, reviewer | 4 | Validation/Review |

**Pipeline construction flow**:

1. Group worker roles by weight
2. Sort groups by weight ascending (1 → 2 → 3 → 4)
3. Within the same weight → parallel (same stage)
4. Each stage is `blockedBy` all roles in the previous stage
5. Generate the diagram: `Requirements → [Stage1] → [Stage2] → ... → Report`
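The construction flow above amounts to a stable sort-and-group by weight. A minimal Python sketch, assuming the weight values from the stage weight table and an illustrative stage data shape:

```python
from itertools import groupby

ROLE_WEIGHT = {  # from the stage weight table
    "analyst": 1, "debugger": 1, "security": 1,
    "planner": 2,
    "executor": 3, "writer": 3,
    "tester": 4, "reviewer": 4,
}

def build_pipeline(worker_roles: list[str]) -> list[dict]:
    """Group roles into stages by weight; each stage is blocked by the previous stage."""
    ordered = sorted(worker_roles, key=lambda r: ROLE_WEIGHT[r])
    stages = [list(g) for _, g in groupby(ordered, key=lambda r: ROLE_WEIGHT[r])]
    pipeline = []
    for i, roles in enumerate(stages):
        pipeline.append({
            "stage": i + 1,
            "roles": roles,                        # same weight -> parallel
            "blockedBy": stages[i - 1] if i > 0 else [],
        })
    return pipeline
```

Roles sharing a weight land in one parallel stage; each later stage depends on every role of the stage before it.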
|
||||
|
||||
### Step 6: Generate Configuration
|
||||
|
||||
Assemble all collected data into `team-config.json`.
|
||||
|
||||
#### Config Schema
|
||||
|
||||
| Field | Source | Example |
|
||||
|-------|--------|---------|
|
||||
| `team_name` | Step 1 | `"lifecycle"` |
|
||||
| `team_display_name` | Capitalized team_name | `"Lifecycle"` |
|
||||
| `skill_name` | `team-<team_name>` | `"team-lifecycle"` |
|
||||
| `skill_path` | `.claude/skills/team-<team_name>/` | |
|
||||
| `pipeline_type` | Step 2 tag | `"Standard"` |
|
||||
| `pipeline` | Step 5 output | `{ stages: [...], diagram: "..." }` |
|
||||
| `roles` | All roles with full metadata | Array |
|
||||
| `worker_roles` | Roles excluding coordinator | Array |
|
||||
| `all_roles_tools_union` | Union of all roles' allowed_tools | Comma-separated string |
|
||||
| `role_list` | All role names | Comma-separated string |
|
||||
|
||||
```
|
||||
Write("<work-dir>/team-config.json", <config-json>)
|
||||
```
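Putting the schema together, the assembly can be sketched as below. This is an illustrative sketch: the field names follow the schema table, while the function signature and role dict shape are assumptions:

```python
def build_team_config(team_name: str, pipeline_type: str,
                      pipeline: dict, roles: list[dict]) -> dict:
    """Assemble the team-config.json payload per the config schema table."""
    workers = [r for r in roles if r["name"] != "coordinator"]
    # Union of every role's allowed tools, rendered as a comma-separated string
    tools = sorted({t for r in roles for t in r["allowed_tools"]})
    return {
        "team_name": team_name,
        "team_display_name": team_name.capitalize(),
        "skill_name": f"team-{team_name}",
        "skill_path": f".claude/skills/team-{team_name}/",
        "pipeline_type": pipeline_type,
        "pipeline": pipeline,
        "roles": roles,
        "worker_roles": workers,
        "all_roles_tools_union": ", ".join(tools),
        "role_list": ", ".join(r["name"] for r in roles),
    }
```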
## Output

| Item | Value |
|------|-------|
| File | `team-config.json` |
| Format | JSON |
| Location | `<work-dir>/team-config.json` |

## Quality Checklist

- [ ] Team name is lowercase and valid as a folder/skill name
- [ ] Coordinator always included
- [ ] At least 2 worker roles defined
- [ ] Task prefixes are UPPERCASE and unique across roles
- [ ] Pipeline stages reference valid roles
- [ ] All roles have message types defined
- [ ] Allowed tools include the minimum set per responsibility type

## Next Phase

-> [Phase 2: Pattern Analysis](02-pattern-analysis.md)
# Phase 2: Pattern Analysis

Analyze the applicable patterns for each role in the team.

## Objective

- Per-role: find the most similar existing command reference
- Per-role: select infrastructure + collaboration patterns
- Per-role: map the 5-phase structure to role responsibilities
- Generate `pattern-analysis.json`

## Input

| Source | Description |
|--------|-------------|
| `team-config.json` | Phase 1 output |
| `specs/team-design-patterns.md` | Infrastructure patterns (read in Phase 0) |
| `specs/collaboration-patterns.md` | Collaboration patterns (read in Phase 0) |

## Execution Steps

### Step 1: Load Configuration

```
Read("<work-dir>/team-config.json")
```

### Step 2: Per-Role Similarity Mapping

For each worker role, find the most similar existing command based on its responsibility type.

#### Similarity Mapping Table

| Responsibility Type | Primary Reference | Secondary Reference | Reason |
|---------------------|-------------------|---------------------|--------|
| Read-only analysis | review | plan | Both analyze code and report findings with severity classification |
| Code generation | execute | test | Both write/modify code and self-validate |
| Orchestration | plan | coordinate | Both coordinate sub-tasks and produce structured output |
| Validation | test | review | Both validate quality with structured criteria |

For each worker role:

1. Look up the responsibility type in the table above
2. Record `similar_to.primary` and `similar_to.secondary`
3. Set `reference_command` = `.claude/commands/team/<primary>.md`
### Step 3: Per-Role Phase Mapping

Map the generic 5-phase structure to role-specific phase names.

#### Phase Structure Mapping Table

| Responsibility Type | Phase 2 | Phase 3 | Phase 4 |
|---------------------|---------|---------|---------|
| Read-only analysis | Context Loading | Analysis Execution | Finding Summary |
| Code generation | Task & Plan Loading | Code Implementation | Self-Validation |
| Orchestration | Context & Complexity Assessment | Orchestrated Execution | Result Aggregation |
| Validation | Environment Detection | Execution & Fix Cycle | Result Analysis |

> Phase 1 is always "Task Discovery" and Phase 5 is always "Report to Coordinator" for all roles.

### Step 4: Per-Role Infrastructure Patterns

#### Core Patterns (mandatory for all roles)

| Pattern | Name |
|---------|------|
| pattern-1 | Message Bus |
| pattern-2 | YAML Front Matter (adapted: no YAML in skill role files) |
| pattern-3 | Task Lifecycle |
| pattern-4 | Five Phase |
| pattern-6 | Coordinator Spawn |
| pattern-7 | Error Handling |

#### Conditional Pattern Selection Table

| Condition | Add Pattern |
|-----------|-------------|
| Role has `adaptive_routing = true` | pattern-5 (Complexity Adaptive) |
| Responsibility type is Code generation or Orchestration | pattern-8 (Session Files) |

#### Pattern 9 Selection

| Condition | Uses Pattern 9 |
|-----------|----------------|
| Role has subagents defined (length > 0) | Yes |
| Role has CLI tools defined (length > 0) | Yes |
| Neither | No |
### Step 5: Command-to-Phase Mapping

For each worker role, map commands to phases and determine the extraction reasons.

**Per-command extraction reasons**:

| Condition | Extraction Reason |
|-----------|-------------------|
| Role has subagents | `subagent-delegation` |
| Role has CLI tools | `cli-fan-out` |
| Role has adaptive routing | `complexity-adaptive` |

Record the `phase_commands` mapping (from config): which command runs in which phase.

### Step 6: Collaboration Pattern Selection

Select team-level collaboration patterns based on team composition.

#### Collaboration Pattern Selection Decision Table

| Condition | Pattern | Name |
|-----------|---------|------|
| Always | CP-1 | Linear Pipeline (base) |
| Any role has Validation or Read-only analysis type | CP-2 | Review-Fix Cycle |
| Any role has Orchestration type | CP-3 | Fan-out/Fan-in |
| Worker roles >= 4 | CP-6 | Incremental Delivery |
| Always | CP-5 | Escalation Chain |
| Always | CP-10 | Post-Mortem |

#### Convergence Defaults Table

| Pattern | Max Iterations | Success Gate |
|---------|----------------|--------------|
| CP-1 | 1 | all_stages_completed |
| CP-2 | 5 | verdict_approve_or_conditional |
| CP-3 | 1 | quorum_100_percent |
| CP-5 | null | issue_resolved_at_any_level |
| CP-6 | 3 | all_increments_validated |
| CP-10 | 1 | report_generated |
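The decision table reads as a short predicate list. An illustrative Python sketch — the pattern IDs and conditions come from the table, while the role dict shape is an assumption:

```python
def select_collaboration_patterns(worker_roles: list[dict]) -> list[str]:
    """Apply the collaboration pattern decision table to the team's worker roles."""
    types = {r["responsibility_type"] for r in worker_roles}
    patterns = ["CP-1"]  # Linear Pipeline is always the base
    if types & {"Validation", "Read-only analysis"}:
        patterns.append("CP-2")  # Review-Fix Cycle
    if "Orchestration" in types:
        patterns.append("CP-3")  # Fan-out/Fan-in
    if len(worker_roles) >= 4:
        patterns.append("CP-6")  # Incremental Delivery
    patterns += ["CP-5", "CP-10"]  # Escalation Chain + Post-Mortem, always on
    return patterns
```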
### Step 7: Read Reference Commands

For each unique `similar_to.primary` across all roles:

```
Read(".claude/commands/team/<primary-ref>.md")
```

Store the content for Phase 3 reference. Skip silently if the file is not found.

### Step 8: Generate Analysis Document

Assemble all analysis results into `pattern-analysis.json`.

#### Output Schema

| Field | Source |
|-------|--------|
| `team_name` | config |
| `role_count` / `worker_count` | config |
| `role_analysis[]` | Steps 2-5 (per-role: similarity, phases, patterns, commands) |
| `collaboration_patterns[]` | Step 6 |
| `convergence_config[]` | Step 6 |
| `referenced_commands[]` | Step 7 |
| `pipeline` | config |
| `skill_patterns` | Fixed: role_router, shared_infrastructure, progressive_loading |
| `command_architecture` | Per-role command mapping + pattern-9 flag |

```
Write("<work-dir>/pattern-analysis.json", <analysis-json>)
```

## Output

| Item | Value |
|------|-------|
| File | `pattern-analysis.json` |
| Format | JSON |
| Location | `<work-dir>/pattern-analysis.json` |

## Quality Checklist

- [ ] Every worker role has a similarity mapping
- [ ] Every worker role has a 5-phase structure
- [ ] Infrastructure patterns include all mandatory patterns
- [ ] Collaboration patterns selected at team level
- [ ] Referenced commands are readable
- [ ] Skill-specific patterns documented

## Next Phase

-> [Phase 3: Skill Package Generation](03-skill-generation.md)
# Phase 3: Skill Package Generation

> **COMPACT PROTECTION**: This is the core generation phase. If context compression has occurred and this file is only a summary, **MUST `Read` this file again before executing any Step**. Do not generate from memory.

Generate the unified team skill package: SKILL.md (role router) + per-role `role.md` + per-role `commands/*.md`.

## Objective

- Generate `SKILL.md` with the role router and shared infrastructure
- Generate `roles/coordinator/role.md` + `commands/`
- Generate `roles/<worker>/role.md` + `commands/` for each worker role
- Generate `specs/team-config.json`
- Write all files to the `preview/` directory first

## Input

| Source | Description |
|--------|-------------|
| `team-config.json` | Phase 1 output (roles, pipeline, capabilities) |
| `pattern-analysis.json` | Phase 2 output (patterns, phase mapping, similarity) |
| `templates/skill-router-template.md` | SKILL.md generation template |
| `templates/role-template.md` | role.md generation template |
| `templates/role-command-template.md` | command file generation template |

## Execution Steps

### Step 1: Load Inputs + Create Preview Directory

1. Read `<work-dir>/team-config.json` and `<work-dir>/pattern-analysis.json`
2. Read all 3 template files from `templates/`
3. Create the directory structure:

```
Bash("mkdir -p <preview-dir>/roles/<role-name>/commands <preview-dir>/specs")
```

Repeat for every role in the config.

### Step 2: Generate SKILL.md (Role Router)

Use `templates/skill-router-template.md` as the base. Fill the template using config values.

#### Template Variable Mapping

| Template Variable | Config Source |
|-------------------|--------------|
| `<skill-name>` | `config.skill_name` |
| `<team-name>` | `config.team_name` |
| `<team-display-name>` | `config.team_display_name` |
| `<all-tools-union>` | `config.all_roles_tools_union` |
| `<roles-table>` | Generated from `config.roles[]` (see Roles Table below) |
| `<role-dispatch-entries>` | Per-role: `"<name>": { file, prefix }` |
| `<message-bus-table>` | Per worker role: name + message types |
| `<spawn-blocks>` | Per worker role: Task() spawn block |
| `<pipeline-diagram>` | `config.pipeline.diagram` |
| `<architecture-diagram>` | Generated from role names |
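Template filling pairs naturally with the "Template variable unresolved | FAIL with specific variable name" rule from Error Handling. A minimal sketch, assuming every remaining `<kebab-case>` token in the output is an unfilled variable (the real templates may legitimately contain such tokens, so treat this as illustrative only):

```python
import re

def fill_template(template: str, values: dict[str, str]) -> str:
    """Substitute <variable> placeholders; fail loudly on any left unresolved."""
    for name, value in values.items():
        template = template.replace(f"<{name}>", value)
    leftover = re.findall(r"<([a-z][a-z0-9-]*)>", template)
    if leftover:
        # Name the specific unresolved variable, per the error handling table
        raise ValueError(f"Template variable unresolved: <{leftover[0]}>")
    return template
```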
#### Roles Table Generation

For each role in `config.roles[]`, produce one row:

```
| <role-name> | <task-prefix> | <description> | [roles/<name>/role.md](roles/<name>/role.md) |
```

#### Spawn Block Generation

For each worker role, generate a `Task()` spawn block containing:

- `subagent_type: "general-purpose"`
- `team_name: <team-name>`
- `name: "<role-name>"`
- A prompt with: primary directive (MUST call Skill), role constraints, message bus requirement, workflow steps

**The spawn prompt must include**:

1. Primary directive: `Skill(skill="<skill-name>", args="--role=<role-name>")`
2. Task prefix constraint: only handle `<PREFIX>-*` tasks
3. Output tag: all messages tagged `[<role-name>]`
4. Communication rule: only talk to the coordinator
5. Message bus: call `team_msg` before every `SendMessage`

```
Write("<preview-dir>/SKILL.md", <generated-skill-md>)
```

### Step 3: Generate Coordinator Role File

Build the coordinator `role.md` with these sections:

| Section | Content Source |
|---------|---------------|
| Role Identity | Fixed: name=coordinator, prefix=N/A, type=Orchestration |
| Message Types | Fixed 5 types: plan_approved, plan_revision, task_unblocked, shutdown, error |
| Execution Phase 1 | Requirement clarification via AskUserQuestion |
| Execution Phase 2 | TeamCreate + spawn blocks (same as SKILL.md Step 2) |
| Execution Phase 3 | Task chain creation from `config.pipeline.stages` |
| Execution Phase 4 | Message-driven coordination loop |
| Execution Phase 5 | Report + next steps (new requirement or shutdown) |
| Error Handling | Fixed table: unresponsive, rejected plan, stuck tests, critical review |

#### Task Chain Generation

For each stage in `config.pipeline.stages[]`:

1. Create the task: `TaskCreate({ subject: "<PREFIX>-001: <role> work" })`
2. Set the owner: `TaskUpdate({ owner: "<role-name>" })`
3. Set dependencies: `addBlockedBy` = all prefixes from the previous stage
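The actual work is done via the `TaskCreate`/`TaskUpdate` tool calls above; the data they would carry can be sketched like this (illustrative Python, with an assumed stage/role dict shape and one task per role per stage):

```python
def build_task_chain(stages: list[dict]) -> list[dict]:
    """Emit one task per role per stage, blocked by the previous stage's task IDs."""
    tasks, prev_ids = [], []
    for stage in stages:
        ids = []
        for role in stage["roles"]:
            task_id = f"{role['task_prefix']}-001"
            tasks.append({
                "subject": f"{task_id}: {role['name']} work",
                "owner": role["name"],
                "addBlockedBy": list(prev_ids),  # everything from the previous stage
            })
            ids.append(task_id)
        prev_ids = ids
    return tasks
```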
#### Coordination Handler Table

For each worker role, generate one row:

```
| <ROLE-UPPER>: <trigger> | team_msg log -> TaskUpdate <PREFIX> completed -> check next |
```

```
Write("<preview-dir>/roles/coordinator/role.md", <coordinator-md>)
```

Generate the coordinator command files (dispatch.md, monitor.md) using `templates/role-command-template.md`.

### Step 4: Generate Worker Role Files

**For each worker role**, generate `role.md` using `templates/role-template.md`.

#### Per-Role Template Variable Mapping

| Template Variable | Source |
|-------------------|--------|
| `<role-name>` | `role.name` |
| `<task-prefix>` | `role.task_prefix` |
| `<responsibility-type>` | `role.responsibility_type` |
| `<description>` | `role.description` |
| `<message-types-table>` | From `role.message_types[]` |
| `<primary-msg-type>` | First non-error, non-progress message type |
| `<phase-2-name>` | From `pattern-analysis.phase_structure.phase2` |
| `<phase-3-name>` | From `pattern-analysis.phase_structure.phase3` |
| `<phase-4-name>` | From `pattern-analysis.phase_structure.phase4` |
| `<commands-table>` | From `role.commands[]` with phase mapping |
| `<subagents-table>` | From `role.subagents[]` |
| `<cli-tools-table>` | From `role.cli_tools[]` |

#### Role File Sections (v3 style)

Each generated `role.md` must contain:

1. **Role Identity**: name, prefix, output tag, responsibility, communication rule
2. **Role Boundaries**: MUST / MUST NOT lists
3. **Message Types**: table with type, direction, trigger
4. **Message Bus**: `team_msg` call pattern + CLI fallback
5. **Toolbox**: commands table, subagents table, CLI tools table
6. **Execution (5-Phase)**:
   - Phase 1: Task Discovery (TaskList -> filter by prefix -> TaskGet -> TaskUpdate in_progress)
   - Phase 2-4: Content varies by responsibility type (see Phase Content table below)
   - Phase 5: Report to Coordinator (team_msg + SendMessage + TaskUpdate completed + check next)
7. **Error Handling**: table with scenario/resolution

#### Phase Content by Responsibility Type

| Type | Phase 2 | Phase 3 | Phase 4 |
|------|---------|---------|---------|
| Read-only analysis | Load plan + get changed files + read contents | Domain-specific analysis per file | Classify findings by severity |
| Code generation | Extract plan path + load plan tasks | Implement tasks (adaptive: direct edit or delegate to code-developer) | Self-validation (syntax check + auto-fix) |
| Orchestration | Assess complexity (Low/Medium/High) | Execute (adaptive: direct search or delegate to sub-agent) | Aggregate results |
| Validation | Detect changed files for scope | Iterative test-fix cycle (max 5 iterations) | Analyze results (iterations, pass rate) |

```
Write("<preview-dir>/roles/<role-name>/role.md", <role-md>)
```
### Step 5: Generate Command Files

#### Command Extraction Decision Table

| Condition | Extract to command file? |
|-----------|--------------------------|
| Role has subagents (delegation needed) | Yes |
| Role has CLI tools (fan-out needed) | Yes |
| Role has adaptive routing (complexity branching) | Yes |
| None of the above | No (all phases execute inline in role.md) |

For each role that needs command files, generate them from `templates/role-command-template.md`.

#### Pre-built Command Patterns

| Command | Description | Delegation Mode | Used By Phase |
|---------|-------------|-----------------|---------------|
| explore | Multi-angle codebase exploration | Subagent Fan-out | Phase 2 |
| analyze | Multi-perspective code analysis | CLI Fan-out | Phase 3 |
| implement | Code implementation via delegation | Sequential Delegation | Phase 3 |
| validate | Iterative test-fix cycle | Sequential Delegation | Phase 3 |
| review | 4-dimensional code review | CLI Fan-out | Phase 3 |
| dispatch | Task chain creation (coordinator) | Direct | Phase 3 |
| monitor | Message-driven coordination (coordinator) | Message-Driven | Phase 4 |

If a command name is not in the pre-built list, generate a skeleton with TODO placeholders.

```
Write("<preview-dir>/roles/<role-name>/commands/<cmd>.md", <cmd-content>)
```

### Step 6: Copy Team Config

```
Write("<preview-dir>/specs/team-config.json", <config-json>)
```

### Step 7: Preview Checkpoint

**PAUSE**: Present the generated preview structure to the user for confirmation before proceeding to Phase 4.

Display:

- File tree of `<preview-dir>/`
- Role count and names
- Pipeline diagram
- Total file count

Wait for user confirmation before advancing.

## Output

| Item | Value |
|------|-------|
| Directory | `<work-dir>/preview/` |
| Files | SKILL.md + roles/*/role.md + roles/*/commands/*.md + specs/team-config.json |

## Quality Checklist

- [ ] SKILL.md has frontmatter, architecture diagram, role router, role dispatch, shared infrastructure
- [ ] Every role has `role.md` with all 7 required sections
- [ ] Coordinator has dispatch.md and monitor.md command files
- [ ] Worker roles with delegation have command files
- [ ] All generated files use v3 style: text + decision tables + flow symbols, no pseudocode
- [ ] Spawn blocks include the complete prompt with primary directive + constraints
- [ ] All `<placeholder>` variables resolved (no `${variable}` syntax in output)
- [ ] Pipeline diagram matches the actual role stages

## Next Phase

-> [Phase 4: Integration Verification](04-integration-verification.md)
# Phase 4: Integration Verification

Verify that the generated skill package is internally consistent.

## Objective

- Verify SKILL.md role router references match the actual role files
- Verify task prefixes are unique across all roles
- Verify message types are consistent between the config and generated files
- Verify the coordinator spawn template uses the correct skill invocation
- Verify role file structural compliance
- Verify coordinator commands alignment
- Generate `integration-report.json`

## Input

| Source | Description |
|--------|-------------|
| `<work-dir>/preview/` | Phase 3 generated skill package |
| `team-config.json` | Phase 1 configuration |

## Execution Steps

### Step 1: Load Generated Files

1. Read `<work-dir>/team-config.json`
2. Read `<preview-dir>/SKILL.md`
3. Read each `<preview-dir>/roles/<role-name>/role.md`
4. Read each `<preview-dir>/roles/<role-name>/commands/*.md`

### Step 2: Run 6 Integration Checks

#### Check 1: Router Consistency

For each role in the config, verify 3 conditions in SKILL.md:

| Item | Method | Pass Criteria |
|------|--------|---------------|
| Router entry | SKILL.md contains `"<role-name>"` | Found |
| Role file exists | `roles/<role-name>/role.md` is readable | File exists |
| Role link valid | SKILL.md contains `roles/<role-name>/role.md` | Found |

**Status**: PASS if all 3 conditions are met for every role, FAIL otherwise.

#### Check 2: Prefix Uniqueness

| Item | Method | Pass Criteria |
|------|--------|---------------|
| All task prefixes | Collect `task_prefix` from each worker role | No duplicates |

**Status**: PASS if all prefixes are unique, FAIL if any duplicate is found.
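Prefix uniqueness is the simplest check to make concrete. A minimal sketch (the result dict shape is an assumption; the FAIL/PASS semantics follow the table):

```python
from collections import Counter

def check_prefix_uniqueness(worker_roles: list[dict]) -> dict:
    """FAIL with the duplicated prefixes listed, PASS otherwise."""
    counts = Counter(r["task_prefix"] for r in worker_roles)
    duplicates = sorted(p for p, n in counts.items() if n > 1)
    return {"status": "FAIL" if duplicates else "PASS", "duplicates": duplicates}
```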
#### Check 3: Message Type Consistency

For each worker role:

| Item | Method | Pass Criteria |
|------|--------|---------------|
| Config message types | List the types from `role.message_types[]` | Baseline |
| Types in role file | Search role.md for each type string | All present |

**Status**: PASS if all configured types are found in the role file, WARN if any are missing.

#### Check 4: Spawn Template Verification

For each worker role, verify in SKILL.md:

| Item | Method | Pass Criteria |
|------|--------|---------------|
| Spawn present | SKILL.md contains `name: "<role-name>"` | Found |
| Skill call correct | Contains `Skill(skill="<skill-name>", args="--role=<role-name>")` | Found |
| Prefix in prompt | Contains `<PREFIX>-*` | Found |

**Status**: PASS if all 3 conditions are met, FAIL otherwise.

#### Check 5: Role File Pattern Compliance

For each role file, check the structural elements:

| Item | Search Pattern | Required |
|------|---------------|----------|
| Role Identity section | `## Role Identity` | Yes |
| 5-Phase structure | `Phase 1` and `Phase 5` both present | Yes |
| Task lifecycle | `TaskList`, `TaskGet`, `TaskUpdate` all present | Yes |
| Message bus | `team_msg` present | Yes |
| SendMessage | `SendMessage` present | Yes |
| Error Handling | `## Error Handling` | Yes |

**Status**: PASS if all 6 items are found, PARTIAL if some are missing, MISSING if the file is not found.

#### Check 5b: Command File Verification

For each role's command files:

| Item | Search Pattern | Required |
|------|---------------|----------|
| Strategy section | `## Strategy` | Yes |
| Execution Steps | `## Execution Steps` | Yes |
| Error Handling | `## Error Handling` | Yes |
| When to Use | `## When to Use` | Yes |
| Self-contained | No `Read("../` cross-command references | Yes |

**Status**: PASS if all items are found, PARTIAL if some are missing, MISSING if the file is not found.

#### Check 6: Coordinator Commands Alignment

> **Critical**: dispatch.md and monitor.md are the most common source of integration failures.

**6a: dispatch.md role names**

| Item | Method | Pass Criteria |
|------|--------|---------------|
| Owner values | Extract all `owner: "<name>"` from dispatch.md | Every name exists in config roles |
| No ghost roles | Compare dispatch roles vs config roles | No invalid role names |

**6b: monitor.md spawn completeness**

| Item | Method | Pass Criteria |
|------|--------|---------------|
| Has `description:` | Search for `description:` | Found |
| Has `team_name:` | Search for `team_name:` | Found |
| Has `name:` param | Search for `name:` | Found |
| Has Skill callback | Search for `Skill(skill=` | Found |
| Has role boundaries | Search for role constraint / MUST keywords | Found |
| Not minimal prompt | No `prompt: \`Execute task` anti-pattern | Confirmed |

**6c: Pipeline alignment**

| Item | Method | Pass Criteria |
|------|--------|---------------|
| Pipeline task IDs | From `config.pipeline_tasks` (if defined) | Baseline |
| Dispatch task IDs | Extract `subject: "<id>"` from dispatch.md | Match pipeline |

**Status**: PASS if no mismatches, WARN if pipeline_tasks is not defined, FAIL if mismatches are found.

### Step 3: Generate Report

Compute the overall status: PASS if all checks pass (excluding SKIP), NEEDS_ATTENTION otherwise.
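The aggregation rule can be sketched in one function (illustrative; this assumes WARN and FAIL both demote the overall status, and that SKIP is excluded as stated):

```python
def overall_status(check_results: dict[str, str]) -> str:
    """PASS only if every non-skipped check passed, else NEEDS_ATTENTION."""
    relevant = [s for s in check_results.values() if s != "SKIP"]
    return "PASS" if all(s == "PASS" for s in relevant) else "NEEDS_ATTENTION"
```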
|
||||
|
||||
#### Report Schema
|
||||
|
||||
| Field | Content |
|
||||
|-------|---------|
|
||||
| `team_name` | Config team name |
|
||||
| `skill_name` | Config skill name |
|
||||
| `checks.router_consistency` | Check 1 results per role |
|
||||
| `checks.prefix_uniqueness` | Check 2 result |
|
||||
| `checks.message_types` | Check 3 results per role |
|
||||
| `checks.spawn_template` | Check 4 results per role |
|
||||
| `checks.pattern_compliance` | Check 5 results per role |
|
||||
| `checks.command_files` | Check 5b results per role |
|
||||
| `checks.coordinator_commands` | Check 6a/6b/6c results |
|
||||
| `overall` | PASS or NEEDS_ATTENTION |
|
||||
| `file_count` | skill_md: 1, role_files: N, total: N+2 |
|
||||
|
||||
```
|
||||
Write("<work-dir>/integration-report.json", <report-json>)
|
||||
```
|
||||
|
||||
## Output

| Item | Value |
|------|-------|
| File | `integration-report.json` |
| Format | JSON |
| Location | `<work-dir>/integration-report.json` |

## Quality Checklist

- [ ] Every role in config has a router entry in SKILL.md
- [ ] Every role has a file in `roles/`
- [ ] Task prefixes are unique
- [ ] Spawn template uses correct `Skill(skill="...", args="--role=...")`
- [ ] Spawn template includes `description`, `team_name`, `name` parameters
- [ ] All role files have the 5-phase structure
- [ ] All role files have message bus integration
- [ ] dispatch.md `owner` values all exist in config roles (no ghost roles)
- [ ] monitor.md spawn prompt contains the full Skill callback (not a minimal prompt)
- [ ] Task IDs in dispatch.md match the pipeline diagram in SKILL.md

## Next Phase

-> [Phase 5: Validation](05-validation.md)

@@ -1,209 +0,0 @@
# Phase 5: Validation

Verify quality and deliver the final skill package.

## Objective

- SKILL.md structural completeness check
- Per-role structural completeness check
- Per-role command file quality check
- Quality scoring across 5 dimensions
- Deliver the final skill package to `.claude/skills/team-<name>/`

## Input

| Source | Description |
|--------|-------------|
| `<work-dir>/preview/` | Phase 3 generated skill package |
| `integration-report.json` | Phase 4 integration check results |
| `specs/quality-standards.md` | Quality criteria (read in Phase 0) |

## Execution Steps

### Step 1: Load Files

1. Read `<work-dir>/team-config.json`
2. Read `<work-dir>/integration-report.json`
3. Read `<preview-dir>/SKILL.md`
4. Read each `<preview-dir>/roles/<role-name>/role.md`
5. Read each `<preview-dir>/roles/<role-name>/commands/*.md`

### Step 2: SKILL.md Structural Check

#### SKILL.md Structure Checklist

| # | Check Item | Search For |
|---|------------|------------|
| 1 | Frontmatter | `---` block at file start |
| 2 | Architecture Overview | `## Architecture Overview` |
| 3 | Role Router | `## Role Router` |
| 4 | Role Dispatch Code | `VALID_ROLES` |
| 5 | Orchestration Mode | `Orchestration Mode` |
| 6 | Available Roles Table | `| Role | Task Prefix` |
| 7 | Shared Infrastructure | `## Shared Infrastructure` |
| 8 | Role Isolation Rules | `Role Isolation` |
| 9 | Pipeline Diagram | `## Pipeline` |
| 10 | Coordinator Spawn Template | `Coordinator Spawn` |
| 11 | Spawn Skill Directive | `MUST` + primary directive |
| 12 | Spawn Description Param | `description:` in spawn block |
| 13 | Error Handling | `## Error Handling` |

**SKILL.md score** = (passed items / 13) * 100

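The score formula above is a plain pass ratio; a minimal sketch (the helper name is hypothetical):

```javascript
// Percentage score for a structural checklist (Step 2 uses total = 13,
// Step 3 uses total = 15, Step 3b uses total = 7).
function checklistScore(passed, total) {
  return Math.round((passed / total) * 100);
}
```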
### Step 3: Per-Role Structural Check

#### Role Structure Checklist

| # | Check Item | Search For |
|---|------------|------------|
| 1 | Role Identity | `## Role Identity` |
| 2 | Role Boundaries | `## Role Boundaries` |
| 3 | Output Tag | `Output Tag` |
| 4 | Message Types Table | `## Message Types` |
| 5 | Message Bus | `## Message Bus` |
| 6 | CLI Fallback | `CLI` fallback section |
| 7 | Toolbox Section | `## Toolbox` |
| 8 | 5-Phase Execution | `## Execution` |
| 9 | Phase 1 Task Discovery | `Phase 1` + `Task Discovery` |
| 10 | TaskList Usage | `TaskList` |
| 11 | TaskGet Usage | `TaskGet` |
| 12 | TaskUpdate Usage | `TaskUpdate` |
| 13 | team_msg Before SendMessage | `team_msg` |
| 14 | SendMessage to Coordinator | `SendMessage` |
| 15 | Error Handling | `## Error Handling` |

**Per-role score** = (passed items / 15) * 100

| Score | Status |
|-------|--------|
| >= 80% | PASS |
| < 80% | PARTIAL |
| File missing | MISSING (score = 0) |

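The status mapping above can be sketched as a small function (hypothetical helper name):

```javascript
// Step 3 status mapping for one role file.
function roleStatus(fileExists, score) {
  if (!fileExists) return "MISSING";   // score = 0
  return score >= 80 ? "PASS" : "PARTIAL";
}
```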
### Step 3b: Command File Quality Check

For each role's command files:

#### Command Quality Checklist

| # | Check Item | Search For |
|---|------------|------------|
| 1 | When to Use section | `## When to Use` |
| 2 | Strategy section | `## Strategy` |
| 3 | Delegation mode declared | `Delegation Mode` |
| 4 | Execution Steps section | `## Execution Steps` |
| 5 | Error Handling section | `## Error Handling` |
| 6 | Output Format section | `## Output Format` |
| 7 | Self-contained (no cross-ref) | No `Read("../` patterns |

**Per-command score** = (passed items / 7) * 100. The role's command score is the average across all of its commands.

### Step 4: Quality Scoring

#### Quality Scoring Table

| Dimension | Weight | Source | Calculation |
|-----------|--------|--------|-------------|
| `skill_md` | Equal | Step 2 | SKILL.md checklist score |
| `roles_avg` | Equal | Step 3 | Average of all role scores |
| `integration` | Equal | Phase 4 report | PASS = 100, otherwise 50 |
| `consistency` | Equal | Cross-check | Start at 100, apply the deductions below |
| `command_quality` | Equal | Step 3b | Average of all command scores |

**Consistency deductions**:

| Mismatch | Deduction |
|----------|-----------|
| Skill name not in SKILL.md | -20 |
| Team name not in SKILL.md | -20 |
| Any role name not in SKILL.md | -10 per role |

**Overall score** = average of all 5 dimension scores.

#### Delivery Decision Table

| Score Range | Gate | Action |
|-------------|------|--------|
| >= 80% | PASS | Deliver to `.claude/skills/team-<name>/` |
| 60-79% | REVIEW | Deliver with warnings, suggest fixes |
| < 60% | FAIL | Do not deliver; return to Phase 3 for rework |

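Step 4 as a whole can be sketched in three small functions: the consistency deductions, the equal-weight overall score, and the delivery gate. Helper names and the mismatch-object shape are illustrative only:

```javascript
// Consistency dimension: start at 100 and apply the deduction table.
function consistencyScore(m) {
  // m: { skillNameMissing, teamNameMissing, missingRoleCount }
  let s = 100;
  if (m.skillNameMissing) s -= 20;
  if (m.teamNameMissing) s -= 20;
  s -= 10 * m.missingRoleCount;
  return Math.max(s, 0);
}

// Overall score: equal-weight average of the 5 dimension scores.
function overallScore(dimensionScores) {
  return dimensionScores.reduce((a, b) => a + b, 0) / dimensionScores.length;
}

// Delivery gate from the decision table above.
function qualityGate(score) {
  return score >= 80 ? "PASS" : score >= 60 ? "REVIEW" : "FAIL";
}
```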
### Step 5: Generate Validation Report

#### Report Schema

| Field | Content |
|-------|---------|
| `team_name` | Config team name |
| `skill_name` | Config skill name |
| `timestamp` | ISO timestamp |
| `scores` | All 5 dimension scores |
| `overall_score` | Average score |
| `quality_gate` | PASS / REVIEW / FAIL |
| `skill_md_checks` | Step 2 results |
| `role_results` | Step 3 results per role |
| `integration_status` | Phase 4 overall status |
| `delivery.source` | Preview directory |
| `delivery.destination` | `.claude/skills/<skill-name>/` |
| `delivery.ready` | true if gate is not FAIL |

```
Write("<work-dir>/validation-report.json", <report-json>)
```

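A hypothetical report object matching the schema above; every value is illustrative, not real output from the skill:

```javascript
// Illustrative validation-report.json contents; field names follow the schema table.
const report = {
  team_name: "docs-team",
  skill_name: "team-docs",
  timestamp: "2025-01-01T00:00:00.000Z",
  scores: { skill_md: 92, roles_avg: 88, integration: 100, consistency: 90, command_quality: 85 },
  overall_score: 91,   // equal-weight average of the 5 dimension scores
  quality_gate: "PASS",
  integration_status: "PASS",
  delivery: {
    source: "<work-dir>/preview/",
    destination: ".claude/skills/team-docs/",
    ready: true        // gate is not FAIL
  }
};
```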
### Step 6: Deliver Final Package

**Only execute if `quality_gate` is not FAIL.**

1. Create the destination directory structure:

```
Bash("mkdir -p .claude/skills/<skill-name>/roles/<role-name>/commands .claude/skills/<skill-name>/specs")
```

2. Copy files from preview to destination:

| Source | Destination |
|--------|-------------|
| `<preview-dir>/SKILL.md` | `.claude/skills/<skill-name>/SKILL.md` |
| `<preview-dir>/roles/<name>/role.md` | `.claude/skills/<skill-name>/roles/<name>/role.md` |
| `<preview-dir>/roles/<name>/commands/*.md` | `.claude/skills/<skill-name>/roles/<name>/commands/*.md` |
| `<preview-dir>/specs/team-config.json` | `.claude/skills/<skill-name>/specs/team-config.json` |

3. Report the delivery summary:
   - Destination path
   - Skill name
   - Quality score and gate
   - Role list
   - Usage examples: `Skill(skill="<skill-name>", args="--role=<role-name>")`

4. List delivered files:

```
Bash("find .claude/skills/<skill-name> -type f | sort")
```

**If gate is FAIL**: Report the failure with the score and suggest returning to Phase 3 for rework.

## Output

| Item | Value |
|------|-------|
| File | `validation-report.json` |
| Format | JSON |
| Location | `<work-dir>/validation-report.json` |
| Delivery | `.claude/skills/team-<name>/` (if gate passes) |

## Quality Checklist

- [ ] SKILL.md passes all 13 routing-level structural checks
- [ ] All role files pass structural checks (>= 80%)
- [ ] All command files pass quality checks (>= 80%)
- [ ] Integration report is PASS
- [ ] Overall score >= 80%
- [ ] Final package delivered to `.claude/skills/team-<name>/`
- [ ] Usage instructions provided to user

## Completion

This is the final phase. The unified team skill is ready for use.

File diff suppressed because it is too large
@@ -1,242 +0,0 @@
# Quality Standards for Team Skills (v2)

Quality assessment criteria for generated team skill packages (v3 style: text + decision tables, no pseudocode).

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| Phase 5 | Score generated commands | All dimensions |
| Phase 3 | Guide generation quality | Checklist |

---

## Quality Dimensions

### 1. Completeness (25%)

| Score | Criteria |
|-------|----------|
| 100% | All 15 required sections present with substantive content |
| 80% | 12+ sections present, minor gaps in non-critical areas |
| 60% | Core sections present (front matter, message bus, 5 phases, error handling) |
| 40% | Missing critical sections |
| 0% | Skeleton only |

**Required Sections Checklist (role.md files):**

- [ ] Role Identity (name, task prefix, output tag, responsibility)
- [ ] Role Boundaries (MUST / MUST NOT)
- [ ] Toolbox section (Available Commands with markdown links)
- [ ] Phase 2: Context Loading (decision tables, no pseudocode)
- [ ] Phase 3: Core Work (decision tables + tool call templates)
- [ ] Phase 4: Validation/Summary (checklist tables)
- [ ] Error Handling table
- [ ] Phase 1/5: Reference to SKILL.md Shared Infrastructure (not inline)
- [ ] No JavaScript pseudocode in any phase
- [ ] All branching logic expressed as decision tables

**Required Sections Checklist (SKILL.md):**

- [ ] Frontmatter (name, description, allowed-tools)
- [ ] Architecture Overview (role routing diagram with flow symbols)
- [ ] Role Router (Input Parsing + Role Registry table with markdown links)
- [ ] Shared Infrastructure (Worker Phase 1 Task Discovery + Phase 5 Report templates)
- [ ] Pipeline Definitions with Cadence Control (beat diagram + checkpoints)
- [ ] Compact Protection (Phase Reference table with Compact column)
- [ ] Coordinator Spawn Template
- [ ] Role Isolation Rules table
- [ ] Error Handling table

**SKILL.md MUST NOT contain:**

- [ ] ❌ No JavaScript pseudocode (VALID_ROLES object, routing functions, etc.)
- [ ] ❌ No role-specific implementation logic (belongs in role.md or commands/*.md)
- [ ] ❌ No `${variable}` notation (use `<placeholder>` instead)

> **Note**: For `commands/*.md` file quality criteria, see [Command File Quality Standards](#command-file-quality-standards) below.

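A presence check over the required-section headings can be sketched as a simple scan. The heading list below is a subset chosen for illustration; the authoritative lists are the checklists above:

```javascript
// Report which required headings are absent from a role.md string.
const REQUIRED_HEADINGS = ["## Role Identity", "## Role Boundaries", "## Toolbox", "## Error Handling"];

function missingSections(roleMd, required = REQUIRED_HEADINGS) {
  return required.filter(h => !roleMd.includes(h));
}
```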
### 2. Pattern Compliance (25%)

| Score | Criteria |
|-------|----------|
| 100% | All 9 infrastructure patterns + selected collaboration patterns fully implemented |
| 80% | 7 core infra patterns + at least 1 collaboration pattern with convergence |
| 60% | Minimum 6 infra patterns, collaboration patterns present but incomplete |
| 40% | Missing critical patterns (message bus or task lifecycle) |
| 0% | No pattern compliance |

**Infrastructure Pattern Checklist:**

- [ ] Pattern 1: Message bus - team_msg before every SendMessage
- [ ] Pattern 1b: CLI fallback section
- [ ] Pattern 2: YAML front matter - all fields present
- [ ] Pattern 3: Task lifecycle - TaskList/Get/Update flow
- [ ] Pattern 4: Five-phase structure (Phase 1/5 shared in SKILL.md, Phase 2-4 in role.md)
- [ ] Pattern 5: Complexity-adaptive (if applicable)
- [ ] Pattern 6: Coordinator spawn compatible
- [ ] Pattern 7: Error handling table
- [ ] Pattern 8: Session files (if applicable)
- [ ] Pattern 9: Compact Protection (Phase Reference table + re-read directives)

**Collaboration Pattern Checklist:**

- [ ] At least one CP selected (CP-1 minimum)
- [ ] Each selected CP has convergence criteria defined
- [ ] Each selected CP has a feedback loop mechanism
- [ ] Each selected CP has timeout/fallback behavior
- [ ] CP-specific message types registered in the message bus section
- [ ] Escalation path defined (CP-5) for error scenarios

### 3. Integration (25%)

| Score | Criteria |
|-------|----------|
| 100% | All integration checks pass, spawn snippet ready |
| 80% | Minor integration notes, no blocking issues |
| 60% | Some checks need attention but functional |
| 40% | Task prefix conflict or missing critical tools |
| 0% | Incompatible with team system |

### 4. Consistency (25%)

| Score | Criteria |
|-------|----------|
| 100% | Role name, task prefix, message types consistent throughout |
| 80% | Minor inconsistencies in non-critical areas |
| 60% | Some mixed terminology but intent clear |
| 40% | Confusing or contradictory content |
| 0% | Internally inconsistent |

---

## Quality Gates

| Gate | Threshold | Action |
|------|-----------|--------|
| PASS | >= 80% | Deliver to `.claude/skills/team-{name}/` |
| REVIEW | 60-79% | Fix recommendations, re-validate |
| FAIL | < 60% | Major rework needed, re-run from Phase 3 |

---

## Issue Classification

### Errors (Must Fix)

- Missing YAML front matter
- Missing `group: team`
- No message bus section
- No task lifecycle (TaskList/Get/Update)
- No SendMessage to coordinator
- Task prefix conflicts with an existing prefix
- **Coordinator dispatch `owner` values not in Role Registry** — all task owners must match a role in the SKILL.md Role Registry table
- **Monitor spawn prompt missing Skill callback** — the spawn prompt must contain `Skill(skill="team-xxx", args="--role=yyy")`
- **Spawn template missing `description` parameter** — Task() requires `description` as a mandatory field
- **Spawn template missing `team_name` or `name` parameter** — the agent will not join the team or have an identity

### Warnings (Should Fix)

- Missing error handling table
- Incomplete phase implementation (skeleton only)
- Missing team_msg before some SendMessage calls
- Missing CLI fallback section (`### CLI 回退` with `ccw team` examples)
- No complexity-adaptive routing when the role is complex
- **Dispatch task IDs not aligned with pipeline diagram** — task IDs (e.g., RESEARCH-001, DRAFT-001) must match the pipeline defined in SKILL.md
- **Coordinator commands reference roles not in Message Routing Tables** — all roles in dispatch/monitor must appear in the SKILL.md Available Roles table

### Info (Nice to Have)

- Decision tables could cover more edge cases
- Additional tool call examples
- Session file structure documentation

---

## Coordinator Commands Consistency Standards

Quality assessment for the coordinator's `dispatch.md` and `monitor.md` command files. These files are the most common source of integration failures.

### 6. Coordinator-SKILL Alignment (Applies to coordinator commands)

| Score | Criteria |
|-------|----------|
| 100% | All 5 alignment checks pass |
| 80% | 4/5 pass, one minor mismatch |
| 60% | 3/5 pass, cosmetic role naming issues |
| 40% | Critical mismatch: roles not in VALID_ROLES or missing Skill callback |
| 0% | dispatch/monitor written independently of SKILL.md |

#### Check 1: Role Name Alignment

- [ ] Every `owner` value in dispatch.md TaskCreate calls exists in the SKILL.md Role Registry table
- [ ] No invented role names (e.g., "spec-writer" when the Role Registry has "writer")
- [ ] No typos or case mismatches in role names

#### Check 2: Task ID-Pipeline Alignment

- [ ] Task IDs in dispatch.md match the pipeline diagram in SKILL.md
- [ ] Task prefix mapping is consistent (e.g., RESEARCH-* → analyst, DRAFT-* → writer)
- [ ] Dependency chain in dispatch.md matches the pipeline flow arrows

#### Check 3: Spawn Template Completeness

- [ ] monitor.md Task() calls include ALL required parameters: `description`, `team_name`, `name`, `prompt`
- [ ] Spawn prompt contains the `Skill(skill="team-xxx", args="--role=yyy")` callback
- [ ] Spawn prompt includes role boundaries (task prefix constraint, output tag, communication rules)
- [ ] Spawn prompt is NOT a minimal generic instruction (e.g., "Execute task X")

#### Check 4: Message Routing Table Alignment

- [ ] All roles in dispatch.md appear in monitor.md's Message Routing Tables
- [ ] All message types used by roles are listed in the routing tables
- [ ] Sender roles in routing tables match Role Registry entries

#### Check 5: v3 Style Compliance

- [ ] No JavaScript pseudocode in any generated file
- [ ] All branching logic expressed as decision tables
- [ ] Code blocks contain only actual tool calls
- [ ] `<placeholder>` notation used (not `${variable}`)
- [ ] Phase 1/5 reference SKILL.md Shared Infrastructure (not inline)

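Check 3 above can be sketched as a string scan over a spawn block. The helper name and the exact parameter spellings assumed here are illustrative:

```javascript
// Check 3 sketch: verify a monitor.md spawn block carries the four required
// Task() parameters and the Skill(...) role callback.
function spawnComplete(spawnBlock) {
  const params = ["description:", "team_name:", "name:", "prompt:"];
  const hasCallback = /Skill\(skill="[^"]+",\s*args="--role=[^"]+"\)/.test(spawnBlock);
  return params.every(p => spawnBlock.includes(p)) && hasCallback;
}
```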
---
## Command File Quality Standards

Quality assessment criteria for generated command `.md` files in `roles/{name}/commands/`.

### 5. Command File Quality (Applies to folder-based roles)

| Score | Criteria |
|-------|----------|
| 100% | All 4 dimensions pass, all command files self-contained |
| 80% | 3/4 dimensions pass, minor gaps in one area |
| 60% | 2/4 dimensions pass, some cross-references or missing sections |
| 40% | Missing required sections or broken references |
| 0% | No command files or non-functional |

#### Dimension 1: Structural Completeness

Each command file MUST contain:

- [ ] `## When to Use` - Trigger conditions
- [ ] `## Strategy` with delegation mode (Subagent / CLI / Sequential / Direct)
- [ ] `## Execution Steps` with decision tables and tool call templates
- [ ] `## Error Handling` table with Scenario/Resolution
- [ ] `## Output Format` section

#### Dimension 2: Self-Containment

- [ ] No `Ref:` or cross-references to other command files
- [ ] No imports or dependencies on sibling commands
- [ ] All context loaded within the command (task, plan, files)
- [ ] Any subagent can `Read()` the command and execute it independently

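The self-containment rule has a mechanical core: count parent-directory reads. A minimal sketch (hypothetical helper name, matching only the `Read("../` pattern named above):

```javascript
// Dimension 2 sketch: count cross-references that break self-containment.
function crossRefCount(commandMd) {
  return (commandMd.match(/Read\("\.\.\//g) || []).length;
}
```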
#### Dimension 3: Toolbox Consistency

- [ ] Every command listed in the role.md Toolbox has a corresponding file in `commands/`
- [ ] Every file in `commands/` is listed in the role.md Toolbox
- [ ] Phase mapping in the Toolbox matches the command's `## When to Use` phase reference
- [ ] Delegation mode in the command matches the role's subagent/CLI capabilities

#### Dimension 4: Pattern Compliance

- [ ] Pre-built command patterns (explore, analyze, implement, validate, review, dispatch, monitor) follow templates/role-command-template.md
- [ ] Custom commands follow the template skeleton structure
- [ ] Delegation mode is appropriate for the command's complexity
- [ ] Output format is structured and parseable by the calling role.md

@@ -1,590 +0,0 @@
# Team Command Design Patterns

> Extracted from 5 production team commands: coordinate, plan, execute, test, review
> Extended with 10 collaboration patterns for diverse team interaction models

---

## Pattern Architecture

```
Team Design Patterns
├── Section A: Infrastructure Patterns (9) ← HOW to build a team command
│   ├── Pattern 1: Message Bus Integration
│   ├── Pattern 2: YAML Front Matter
│   ├── Pattern 3: Task Lifecycle
│   ├── Pattern 4: Five-Phase Execution
│   ├── Pattern 5: Complexity-Adaptive Routing
│   ├── Pattern 6: Coordinator Spawn Integration
│   ├── Pattern 7: Error Handling Table
│   ├── Pattern 8: Session File Structure
│   └── Pattern 9: Compact Protection
│
└── Section B: Collaboration Patterns (10) ← HOW agents interact
    ├── CP-1: Linear Pipeline
    ├── CP-2: Review-Fix Cycle
    ├── CP-3: Parallel Fan-out/Fan-in
    ├── CP-4: Consensus Gate
    ├── CP-5: Escalation Chain
    ├── CP-6: Incremental Delivery
    ├── CP-7: Swarming
    ├── CP-8: Consulting/Advisory
    ├── CP-9: Dual-Track
    └── CP-10: Post-Mortem
```

**Section B** collaboration patterns are documented in: [collaboration-patterns.md](collaboration-patterns.md)

---

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| Phase 0 | Understand all patterns before design | All sections |
| Phase 2 | Select applicable infrastructure + collaboration patterns | Pattern catalog |
| Phase 3 | Apply patterns during generation | Implementation details |
| Phase 4 | Verify compliance | Checklists |

---

# Section A: Infrastructure Patterns

## Pattern 1: Message Bus Integration

Every teammate must use `mcp__ccw-tools__team_msg` for persistent logging before every `SendMessage`.

### Structure

```javascript
// BEFORE every SendMessage, call:
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "<role-name>",        // planner | executor | tester | <new-role>
  to: "coordinator",
  type: "<message-type>",
  summary: "<human-readable summary>",
  ref: "<optional file path>",
  data: { /* optional structured payload */ }
})
```

### Standard Message Types

| Type | Direction | Trigger | Payload |
|------|-----------|---------|---------|
| `plan_ready` | planner -> coordinator | Plan generation complete | `{ taskCount, complexity }` |
| `plan_approved` | coordinator -> planner | Plan reviewed | `{ approved: true }` |
| `plan_revision` | planner -> coordinator | Plan modified per feedback | `{ changes }` |
| `task_unblocked` | coordinator -> any | Dependency resolved | `{ taskId }` |
| `impl_complete` | executor -> coordinator | Implementation done | `{ changedFiles, syntaxClean }` |
| `impl_progress` | any -> coordinator | Progress update | `{ batch, total }` |
| `test_result` | tester -> coordinator | Test cycle end | `{ passRate, iterations }` |
| `review_result` | tester -> coordinator | Review done | `{ verdict, findings }` |
| `fix_required` | any -> coordinator | Critical issues | `{ details[] }` |
| `error` | any -> coordinator | Blocking error | `{ message }` |
| `shutdown` | coordinator -> all | Team dissolved | `{}` |

### Collaboration Pattern Message Types

| Type | Used By | Direction | Trigger |
|------|---------|-----------|---------|
| `vote` | CP-4 Consensus | any -> coordinator | Agent casts a vote on a proposal |
| `escalate` | CP-5 Escalation | any -> coordinator | Agent escalates an unresolved issue |
| `increment_ready` | CP-6 Incremental | executor -> coordinator | Increment delivered for validation |
| `swarm_join` | CP-7 Swarming | any -> coordinator | Agent joins a swarm on a blocker |
| `consult_request` | CP-8 Consulting | any -> specialist | Agent requests expert advice |
| `consult_response` | CP-8 Consulting | specialist -> requester | Expert provides advice |
| `sync_checkpoint` | CP-9 Dual-Track | any -> coordinator | Track reaches a sync point |
| `retro_finding` | CP-10 Post-Mortem | any -> coordinator | Retrospective insight |

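The "log before every SendMessage" rule can be enforced with a small wrapper. `teamMsg` and `sendMessage` stand in for the real tools and are injected here only so the ordering is testable; the names are illustrative:

```javascript
// Pattern 1 sketch: a wrapper that guarantees the persistent log entry
// is written before the coordinator is notified.
function notifyCoordinator(teamMsg, sendMessage, team, from, type, summary) {
  teamMsg({ operation: "log", team, from, to: "coordinator", type, summary });
  return sendMessage({ to: "coordinator", content: summary });
}
```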
### Adding New Message Types

When designing a new role, define role-specific message types following the convention:

- `{action}_ready` - Work product ready for review
- `{action}_complete` - Work phase finished
- `{action}_progress` - Intermediate progress update

### CLI Fallback

When the `mcp__ccw-tools__team_msg` MCP is unavailable, use the `ccw team` CLI as an equivalent fallback:

```javascript
// Fallback: Replace the MCP call with the Bash CLI (parameters map 1:1)
Bash(`ccw team log --team "${teamName}" --from "<role>" --to "coordinator" --type "<type>" --summary "<summary>" [--ref <path>] [--data '<json>'] --json`)
```

**Parameter mapping**: `team_msg(params)` → `ccw team <operation> --team <team> [--from/--to/--type/--summary/--ref/--data/--id/--last] [--json]`

**Coordinator** uses all 4 operations: `log`, `list`, `status`, `read`
**Teammates** primarily use: `log`

### Message Bus Section Template

```markdown
## 消息总线

每次 SendMessage **前**,必须调用 `mcp__ccw-tools__team_msg` 记录消息:

\`\`\`javascript
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "<role>", to: "coordinator", type: "<type>", summary: "<summary>" })
\`\`\`

### 支持的 Message Types

| Type | 方向 | 触发时机 | 说明 |
|------|------|----------|------|
| `<type>` | <role> → coordinator | <when> | <what> |

### CLI 回退

当 `mcp__ccw-tools__team_msg` MCP 不可用时,使用 `ccw team` CLI 作为等效回退:

\`\`\`javascript
// 回退: 将 MCP 调用替换为 Bash CLI(参数一一对应)
Bash(\`ccw team log --team "${teamName}" --from "<role>" --to "coordinator" --type "<type>" --summary "<summary>" --json\`)
\`\`\`

**参数映射**: `team_msg(params)` → `ccw team log --team <team> --from <role> --to coordinator --type <type> --summary "<text>" [--ref <path>] [--data '<json>'] [--json]`
```

---

## Pattern 2: YAML Front Matter

Every team command file must start with standardized YAML front matter.

### Structure

```yaml
---
name: <command-name>
description: Team <role> - <capabilities in Chinese>
argument-hint: ""
allowed-tools: SendMessage(*), TaskUpdate(*), TaskList(*), TaskGet(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*), Task(*)
group: team
---
```

### Field Rules

| Field | Rule | Example |
|-------|------|---------|
| `name` | Lowercase, matches filename | `plan`, `execute`, `test` |
| `description` | `Team <role> -` prefix + Chinese capability list | `Team planner - 多角度代码探索、结构化实现规划` |
| `argument-hint` | Empty string for teammates, has a hint for the coordinator | `""` |
| `allowed-tools` | Start with `SendMessage(*), TaskUpdate(*), TaskList(*), TaskGet(*)` | See each role |
| `group` | Always `team` | `team` |

### Minimum Tool Set (All Teammates)

```
SendMessage(*), TaskUpdate(*), TaskList(*), TaskGet(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Grep(*)
```

### Role-Specific Additional Tools

| Role Type | Additional Tools |
|-----------|-----------------|
| Read-only (reviewer, analyzer) | None extra |
| Write-capable (executor, fixer) | `Write(*), Edit(*)` |
| Agent-delegating (planner, executor) | `Task(*)` |

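A front matter check matching the field rules above can be sketched as a small parser. The helper name is illustrative, and the field list is the minimal subset named in this pattern:

```javascript
// Pattern 2 sketch: confirm a command file opens with a front matter block
// containing the mandatory fields.
function frontMatterOk(fileText) {
  const m = fileText.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return false;
  return ["name:", "description:", "allowed-tools:", "group: team"].every(k => m[1].includes(k));
}
```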
---

## Pattern 3: Task Lifecycle

All teammates follow the same task discovery and lifecycle pattern.

### Standard Flow

```javascript
// Step 1: Find my tasks
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('<PREFIX>-') &&   // PLAN-*, IMPL-*, TEST-*, REVIEW-*
  t.owner === '<role-name>' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0               // Not blocked
)

// Step 2: No tasks -> idle
if (myTasks.length === 0) return

// Step 3: Claim task (lowest ID first)
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })

// Step 4: Execute work
// ... role-specific logic ...

// Step 5: Complete and loop
TaskUpdate({ taskId: task.id, status: 'completed' })

// Step 6: Check for the next task
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('<PREFIX>-') &&
  t.owner === '<role-name>' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
  // Continue with the next task -> back to Step 3
}
```

### Task Prefix Convention

| Prefix | Role | Example |
|--------|------|---------|
| `PLAN-` | planner | `PLAN-001: Explore and plan implementation` |
| `IMPL-` | executor | `IMPL-001: Implement approved plan` |
| `TEST-` | tester | `TEST-001: Test-fix cycle` |
| `REVIEW-` | tester | `REVIEW-001: Code review and requirement verification` |
| `<NEW>-` | new role | Must be unique, uppercase, hyphen-suffixed |

### Task Chain (defined in coordinate.md)

```
PLAN-001 → IMPL-001 → TEST-001 + REVIEW-001
         ↑ blockedBy ↑ blockedBy
```

---

## Pattern 4: Five-Phase Execution Structure
|
||||
|
||||
Every team command follows a consistent 5-phase internal structure.
|
||||
|
||||
### Standard Phases
|
||||
|
||||
| Phase | Purpose | Common Actions |
|
||||
|-------|---------|----------------|
|
||||
| Phase 1: Task Discovery | Find and claim assigned tasks | TaskList, TaskGet, TaskUpdate |
|
||||
| Phase 2: Context Loading | Load necessary context for work | Read plan/config, detect framework |
|
||||
| Phase 3: Core Work | Execute primary responsibility | Role-specific logic |
|
||||
| Phase 4: Validation/Summary | Verify work quality | Syntax check, criteria verification |
|
||||
| Phase 5: Report + Loop | Report to coordinator, check next | SendMessage, TaskUpdate, TaskList |
|
||||
|
||||
### Phase Structure Template
|
||||
|
||||
```markdown
|
||||
### Phase N: <Phase Name>
|
||||
|
||||
\`\`\`javascript
|
||||
// Implementation code
|
||||
\`\`\`
|
||||
```
|
||||
|
||||
---

## Pattern 5: Complexity-Adaptive Routing

All roles that process tasks of varying difficulty should implement adaptive routing.

### Decision Logic

```javascript
function assessComplexity(description) {
  let score = 0
  if (/refactor|architect|restructure|module|system/.test(description)) score += 2
  if (/multiple|across|cross/.test(description)) score += 2
  if (/integrate|api|database/.test(description)) score += 1
  if (/security|performance/.test(description)) score += 1
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
```

### Routing Table

| Complexity | Direct Claude | CLI Agent | Sub-agent |
|------------|---------------|-----------|-----------|
| Low | Direct execution | - | - |
| Medium | - | `cli-explore-agent` / `cli-lite-planning-agent` | - |
| High | - | CLI agent | `code-developer` / `universal-executor` |
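
Combining the decision logic with the routing table gives a single routing function — a self-contained sketch (the scorer is inlined so the snippet runs standalone):

```javascript
// Score a task description for complexity, as in the Decision Logic above.
function assessComplexity(description) {
  let score = 0
  if (/refactor|architect|restructure|module|system/.test(description)) score += 2
  if (/multiple|across|cross/.test(description)) score += 2
  if (/integrate|api|database/.test(description)) score += 1
  if (/security|performance/.test(description)) score += 1
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}

// Map the complexity level to an execution route per the Routing Table.
function routeFor(description) {
  const routes = {
    Low: 'direct execution',
    Medium: 'cli-explore-agent / cli-lite-planning-agent',
    High: 'subagent (code-developer / universal-executor)'
  }
  return routes[assessComplexity(description)]
}

console.log(routeFor('fix typo in README'))                       // → direct execution
console.log(routeFor('refactor auth module'))                     // → cli-explore-agent / cli-lite-planning-agent
console.log(routeFor('refactor modules across multiple services')) // → subagent (code-developer / universal-executor)
```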

### Sub-agent Delegation Pattern

```javascript
Task({
  subagent_type: "<agent-type>",
  run_in_background: false,
  description: "<brief description>",
  prompt: `
## Task Objective
${taskDescription}

## Output Location
${sessionFolder}/${outputFile}

## MANDATORY FIRST STEPS
1. Read: .workflow/project-tech.json (if exists)
2. Read: .workflow/project-guidelines.json (if exists)

## Expected Output
${expectedFormat}
`
})
```

---

## Pattern 6: Coordinator Spawn Integration

New teammates must be spawnable from coordinate.md using the standard pattern.

### Skill Path Format (Folder-Based)

Team commands use folder-based organization with colon-separated skill paths:

```
File location: .claude/commands/team/{team-name}/{role-name}.md
Skill path:    team:{team-name}:{role-name}

Example:
.claude/commands/team/spec/analyst.md → team:spec:analyst
.claude/commands/team/security/scanner.md → team:security:scanner
```

### Spawn Template

> **⚠️ CRITICAL**: The spawn prompt must include the full Skill callback instruction. If the prompt is oversimplified (e.g. just "Execute task X"), the agent will improvise on its own instead of loading its role definition via Skill → role.md.

```javascript
Task({
  subagent_type: "general-purpose",
  description: `Spawn ${roleName} worker`, // ← required parameter
  team_name: teamName,
  name: "<role-name>",
  prompt: `You are team "${teamName}" <ROLE>.

## ⚠️ Primary Directive (MUST)
All of your work must be executed after loading the role definition via the Skill call below; do not improvise:
Skill(skill="team-${teamName}", args="--role=<role-name>")

When you receive <PREFIX>-* tasks, execute via the Skill callback above.

Current requirement: ${taskDescription}
Constraints: ${constraints}

## Message Bus (Required)
Before each SendMessage, call mcp__ccw-tools__team_msg:
mcp__ccw-tools__team_msg({ operation: "log", team: "${teamName}", from: "<role>", to: "coordinator", type: "<type>", summary: "<summary>" })

Workflow:
1. Call Skill(skill="team-${teamName}", args="--role=<role-name>") to load the role definition
2. Execute the 5-Phase flow from role.md (TaskList → find <PREFIX>-* tasks → execute → report)
3. team_msg log + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
})
```

### Spawn Anti-Patterns (Must Avoid)

| Anti-Pattern | Consequence | Correct Practice |
|--------------|-------------|------------------|
| Prompt lacks the `Skill(...)` callback | Agent improvises and never loads role.md | Always include the full Skill callback instruction |
| Missing `description` parameter | Task() call fails (required parameter) | Always provide `description` |
| Missing `team_name` parameter | Agent does not join the team and cannot exchange messages | Always provide `team_name` |
| Missing `name` parameter | Agent has no role identity | Always provide `name` |
| `owner` value in dispatch/monitor not in VALID_ROLES | Skill routing fails | `owner` must exactly match a VALID_ROLES key |
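
The required-parameter rows above can be enforced mechanically before calling Task() — a minimal guard sketch; the `VALID_ROLES` set here is an illustrative stand-in for the registry referenced by dispatch/monitor:

```javascript
// Validate spawn arguments against the anti-pattern table before Task().
// VALID_ROLES is a hypothetical example set, not the real registry.
const VALID_ROLES = new Set(['planner', 'executor', 'tester', 'reviewer'])

function validateSpawn({ description, team_name, name, prompt }) {
  const errors = []
  if (!description) errors.push('missing description (required parameter)')
  if (!team_name) errors.push('missing team_name (agent would not join the team)')
  if (!name) errors.push('missing name (agent would have no role identity)')
  else if (!VALID_ROLES.has(name)) errors.push(`name "${name}" not in VALID_ROLES`)
  if (!prompt || !prompt.includes('Skill(')) {
    errors.push('prompt lacks the Skill(...) callback instruction')
  }
  return errors
}

console.log(validateSpawn({
  description: 'Spawn executor worker',
  team_name: 't1',
  name: 'executor',
  prompt: 'Skill(skill="team-t1", args="--role=executor")'
})) // → []
```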

---

## Pattern 7: Error Handling Table

Every command ends with a standardized error handling table.

### Template

```markdown
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No tasks available | Idle, wait for coordinator assignment |
| Plan/Context file not found | Notify coordinator, request location |
| Sub-agent failure | Retry once, then fall back to direct execution |
| Max iterations exceeded | Report to coordinator, suggest intervention |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
```

---

## Pattern 8: Session File Structure

Roles that produce artifacts follow standard session directory patterns.

### Convention

```
.workflow/.team-<purpose>/{identifier}-{YYYY-MM-DD}/
├── <work-product-files>
├── manifest.json   (if multiple outputs)
└── .task/          (if generating task files)
    ├── TASK-001.json
    └── TASK-002.json
```
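
A path builder for this convention — a small sketch, assuming the date stamp is the ISO `YYYY-MM-DD` form shown above:

```javascript
// Build a session directory path following the
// .workflow/.team-<purpose>/{identifier}-{YYYY-MM-DD}/ convention.
function sessionDir(purpose, identifier, date = new Date()) {
  const stamp = date.toISOString().slice(0, 10) // YYYY-MM-DD
  return `.workflow/.team-${purpose}/${identifier}-${stamp}/`
}

console.log(sessionDir('spec', 'auth-feature', new Date('2025-03-01')))
// → .workflow/.team-spec/auth-feature-2025-03-01/
```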

---

# Section B: Collaboration Patterns

> Complete specification: [collaboration-patterns.md](collaboration-patterns.md)

## Collaboration Pattern Quick Reference

Every collaboration pattern has these standard elements:

| Element | Description |
|---------|-------------|
| **Entry Condition** | When to activate this pattern |
| **Workflow** | Step-by-step execution flow |
| **Convergence Criteria** | How the pattern terminates successfully |
| **Feedback Loop** | How information flows back to enable correction |
| **Timeout/Fallback** | What happens when the pattern doesn't converge |
| **Max Iterations** | Hard limit on cycles (where applicable) |

### Pattern Selection Guide

| Scenario | Recommended Pattern | Why |
|----------|---------------------|-----|
| Standard feature development | CP-1: Linear Pipeline | Well-defined sequential stages |
| Code review with fixes needed | CP-2: Review-Fix Cycle | Iterative improvement until quality met |
| Multi-angle analysis needed | CP-3: Fan-out/Fan-in | Parallel exploration, aggregated results |
| Critical decision (architecture, security) | CP-4: Consensus Gate | Multiple perspectives before committing |
| Agent stuck / self-repair failed | CP-5: Escalation Chain | Progressive expertise levels |
| Large feature (many files) | CP-6: Incremental Delivery | Validated increments reduce risk |
| Blocking issue stalls pipeline | CP-7: Swarming | All resources on one problem |
| Domain-specific expertise needed | CP-8: Consulting | Expert advice without role change |
| Design + Implementation parallel | CP-9: Dual-Track | Faster delivery with sync checkpoints |
| Post-completion learning | CP-10: Post-Mortem | Capture insights for future improvement |
| Multi-issue plan + execute overlap | CP-11: Beat Pipeline | Per-item dispatch eliminates stage idle time |

---

## Pattern Summary Checklist

When designing a new team command, verify:

### Infrastructure Patterns
- [ ] YAML front matter with `group: team`
- [ ] Message bus section with `team_msg` logging
- [ ] CLI fallback section with `ccw team` CLI examples and parameter mapping
- [ ] Role-specific message types defined
- [ ] Task lifecycle: TaskList -> TaskGet -> TaskUpdate flow
- [ ] Unique task prefix (no collision with existing PLAN/IMPL/TEST/REVIEW; scan `team/**/*.md`)
- [ ] 5-phase execution structure
- [ ] Complexity-adaptive routing (if applicable)
- [ ] Coordinator spawn template integration
- [ ] Error handling table
- [ ] SendMessage communication to coordinator only
- [ ] Session file structure (if producing artifacts)

### Collaboration Patterns
- [ ] At least one collaboration pattern selected
- [ ] Convergence criteria defined (max iterations / quality gate / timeout)
- [ ] Feedback loop implemented (how results flow back)
- [ ] Timeout/fallback behavior specified
- [ ] Pattern-specific message types registered
- [ ] Coordinator aware of pattern (can route messages accordingly)
- [ ] If using CP-11: intermediate artifact protocol defined (file path + format)
- [ ] If using CP-11: inline conflict check implemented (no heavy subagent for dependency detection)

---

## Pattern 9: Parallel Subagent Orchestration

Roles that need to perform complex, multi-perspective work can delegate to subagents or CLI tools rather than executing everything inline. This pattern defines three delegation modes and context management rules.

### Delegation Modes

#### Mode A: Subagent Fan-out

Launch multiple Task agents in parallel for independent work streams.

```javascript
// Launch 2-4 parallel agents for different perspectives
const agents = [
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: "Explore angle 1",
    prompt: `Analyze from perspective 1: ${taskDescription}`
  }),
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: "Explore angle 2",
    prompt: `Analyze from perspective 2: ${taskDescription}`
  })
]
// Aggregate results after all complete
```

**When to use**: Multi-angle exploration, parallel code analysis, independent subtask execution.

#### Mode B: CLI Fan-out

Launch multiple `ccw cli` calls for multi-perspective analysis.

```javascript
// Parallel CLI calls for different analysis angles
Bash(`ccw cli -p "PURPOSE: Analyze from security angle..." --tool gemini --mode analysis`, { run_in_background: true })
Bash(`ccw cli -p "PURPOSE: Analyze from performance angle..." --tool gemini --mode analysis`, { run_in_background: true })
// Wait for all CLI results, then synthesize
```

**When to use**: Multi-dimensional code review, architecture analysis, security + performance audits.

#### Mode C: Sequential Delegation

Delegate a single heavy task to a specialized agent.

```javascript
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement complex feature",
  prompt: `## Goal\n${plan.summary}\n\n## Tasks\n${taskDetails}`
})
```

**When to use**: Complex implementation, test-fix cycles, large-scope refactoring.

### Context Management Hierarchy

| Level | Location | Context Size | Use Case |
|-------|----------|--------------|----------|
| Small | role.md inline | < 200 lines | Simple logic, direct execution |
| Medium | commands/*.md | 200-500 lines | Structured delegation with strategy |
| Large | Subagent prompt | Unlimited | Full autonomous execution |

**Rule**: role.md Phases 1 and 5 are always inline (standardized). Phases 2-4 are either inline (small) or delegated to commands (medium/large).

### Command File Extraction Criteria

Extract a phase into a command file when ANY of these conditions are met:

1. **Subagent delegation**: Phase launches Task() agents
2. **CLI fan-out**: Phase runs parallel `ccw cli` calls
3. **Complex strategy**: Phase has >3 conditional branches
4. **Reusable logic**: Same logic used by multiple roles

If none apply, keep the phase inline in role.md.
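
The ANY-of criteria reduce to a simple predicate — a trivial but faithful sketch, with a hypothetical `phase` descriptor shape chosen for illustration:

```javascript
// Decide whether a phase should be extracted into a command file.
// Extraction is warranted when ANY of the four criteria holds.
function shouldExtract(phase) {
  return (
    phase.launchesSubagents ||       // 1. Subagent delegation
    phase.runsParallelCli ||         // 2. CLI fan-out
    phase.conditionalBranches > 3 || // 3. Complex strategy
    phase.sharedByRoles > 1          // 4. Reusable logic
  )
}

console.log(shouldExtract({
  launchesSubagents: false,
  runsParallelCli: false,
  conditionalBranches: 2,
  sharedByRoles: 1
})) // → false: keep inline in role.md
```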

### Relationship to Other Patterns

- **Pattern 5 (Complexity-Adaptive)**: Pattern 9 provides the delegation mechanisms that Pattern 5 routes to. Low complexity → inline, Medium → CLI agent, High → Subagent fan-out.
- **CP-3 (Parallel Fan-out)**: Pattern 9 Modes A/B are the implementation mechanisms for CP-3 at the role level.
- **Pattern 4 (Five-Phase)**: Pattern 9 does NOT replace the 5-phase structure. It provides delegation options WITHIN phases 2-4.

### Checklist

- [ ] Delegation mode selected based on task characteristics
- [ ] Context management level appropriate (small/medium/large)
- [ ] Command files extracted only when criteria met
- [ ] Subagent prompts include mandatory first steps (read project config)
- [ ] CLI fan-out uses `--mode analysis` by default
- [ ] Results aggregated after parallel completion
- [ ] Error handling covers agent/CLI failure with fallback

---

# Role Command Template

Template for generating command files in `roles/<role-name>/commands/<command>.md` (v3 style).

## Purpose

| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand command file structure |
| Phase 3 | Apply with role-specific content |

## Style Rules

Generated output follows v3 conventions:

| Rule | Description |
|------|-------------|
| No JS pseudocode | All logic uses text + decision tables + flow symbols |
| Code blocks = tool calls only | Only Task(), Bash(), Read(), Grep(), etc. |
| `<placeholder>` in output | Not `${variable}` in generated content |
| Decision tables | Strategy selection and error routing all use tables |
| Self-contained | Each command executable independently |

> **Note**: The template itself uses `{{handlebars}}` for variable substitution during Phase 3 generation. The **generated output** must not contain `{{handlebars}}` or JS pseudocode.

---

## Template

```markdown
# Command: {{command_name}}

## Purpose

{{command_description}}

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
{{#each context_inputs}}
| {{this.name}} | {{this.source}} | {{this.required}} |
{{/each}}

## Phase 3: Core Work

{{core_work_content}}

## Phase 4: Validation

{{validation_content}}

## Error Handling

| Scenario | Resolution |
|----------|------------|
{{#each error_handlers}}
| {{this.scenario}} | {{this.resolution}} |
{{/each}}
```

---

## 7 Pre-built Command Patterns

Each pattern below provides the complete v3-style structure. During Phase 3 generation, select the matching pattern and customize it with team-specific content.

### 1. explore.md (Multi-angle Exploration)

**Maps to**: Orchestration roles, Phase 2
**Delegation**: Subagent Fan-out

```markdown
# Command: explore

## Purpose

Multi-angle codebase exploration using parallel exploration agents. Discovers patterns, dependencies, and architecture before planning.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | TaskGet result | Yes |
| Project root | `git rev-parse --show-toplevel` | Yes |
| Existing explorations | <session-folder>/explorations/ | No |
| Wisdom | <session-folder>/wisdom/ | No |

## Phase 3: Core Work

### Angle Selection

Determine exploration angles from the task description:

| Signal in Description | Angle |
|-----------------------|-------|
| architect, structure, design | architecture |
| pattern, convention, style | patterns |
| depend, import, module | dependencies |
| test, spec, coverage | testing |
| No signals matched | general + patterns (default) |

### Execution Strategy

| Angle Count | Strategy |
|-------------|----------|
| 1 angle | Single agent exploration |
| 2-4 angles | Parallel agents, one per angle |

**Per-angle agent spawn**:

\`\`\`
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore: <angle>",
  prompt: "Explore the codebase from the perspective of <angle>.
Focus on: <task-description>
Project root: <project-root>

Report findings as structured markdown with file references."
})
\`\`\`

### Result Aggregation

After all agents complete:

1. Merge key findings across all angles (deduplicate)
2. Collect relevant file paths (deduplicate)
3. Extract discovered patterns
4. Write aggregated results to `<session-folder>/explorations/<task-id>.md`

### Output Format

\`\`\`
## Exploration Results

### Angles Explored: [list]

### Key Findings
- [finding with file:line reference]

### Relevant Files
- [file path with relevance note]

### Patterns Found
- [pattern name: description]
\`\`\`

## Phase 4: Validation

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| All angles covered | Compare planned vs completed | All planned angles explored |
| Findings non-empty | Check result count | At least 1 finding per angle |
| File references valid | Verify referenced files exist | >= 80% of files exist |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Agent returns no results | Retry with broader search scope |
| Agent timeout | Use partial results, note incomplete angles |
| Project root not found | Fall back to current directory |
| Exploration cache exists | Load cached results, skip re-exploration |
```
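
The angle-selection table above can be prototyped directly — a sketch that mirrors the signal rows, including the general + patterns default:

```javascript
// Map task-description signals to exploration angles, mirroring the
// Angle Selection table. Falls back to ['general', 'patterns'].
const ANGLE_SIGNALS = [
  [/architect|structure|design/, 'architecture'],
  [/pattern|convention|style/, 'patterns'],
  [/depend|import|module/, 'dependencies'],
  [/test|spec|coverage/, 'testing']
]

function selectAngles(description) {
  const angles = ANGLE_SIGNALS
    .filter(([re]) => re.test(description))
    .map(([, angle]) => angle)
  return angles.length > 0 ? angles : ['general', 'patterns']
}

console.log(selectAngles('redesign the import structure'))
// → [ 'architecture', 'dependencies' ]
console.log(selectAngles('update changelog'))
// → [ 'general', 'patterns' ]
```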

### 2. analyze.md (Multi-perspective Analysis)

**Maps to**: Read-only analysis roles, Phase 3
**Delegation**: CLI Fan-out

```markdown
# Command: analyze

## Purpose

Multi-perspective code analysis using parallel CLI calls. Each perspective produces severity-ranked findings with file:line references.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Target files | `git diff --name-only HEAD~1` or `--cached` | Yes |
| Plan file | <session-folder>/plan/plan.json | No |
| Wisdom | <session-folder>/wisdom/ | No |

**File discovery**:

\`\`\`
Bash("git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
\`\`\`

## Phase 3: Core Work

### Perspective Selection

Determine analysis perspectives from the task description:

| Signal in Description | Perspective |
|-----------------------|-------------|
| security, auth, inject, xss | security |
| performance, speed, optimize, memory | performance |
| quality, clean, maintain, debt | code-quality |
| architect, pattern, structure | architecture |
| No signals matched | code-quality + architecture (default) |

### Execution Strategy

| Perspective Count | Strategy |
|-------------------|----------|
| 1 perspective | Single CLI call |
| 2-4 perspectives | Parallel CLI calls, one per perspective |

**Per-perspective CLI call**:

\`\`\`
Bash("ccw cli -p \"PURPOSE: Analyze code from <perspective> perspective
TASK: Review changes in: <file-list>
MODE: analysis
CONTEXT: @<file-patterns>
EXPECTED: Findings with severity, file:line references, remediation
CONSTRAINTS: Focus on <perspective>\" --tool gemini --mode analysis", { run_in_background: true })
\`\`\`

### Finding Aggregation

After all perspectives complete:

1. Parse findings from each CLI response
2. Classify by severity: Critical / High / Medium / Low
3. Deduplicate across perspectives
4. Sort by severity, then by file location

### Output Format

\`\`\`
## Analysis Results

### Perspectives Analyzed: [list]

### Findings by Severity
#### Critical
- [finding with file:line]
#### High
- [finding]
#### Medium
- [finding]
#### Low
- [finding]
\`\`\`

## Phase 4: Validation

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| All perspectives covered | Compare planned vs completed | All perspectives analyzed |
| Findings have file refs | Check file:line format | >= 90% of findings have references |
| No duplicate findings | Dedup check | No identical findings |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| CLI tool unavailable | Fall back to secondary tool |
| CLI returns empty | Retry with broader scope |
| Too many findings | Prioritize critical/high, summarize medium/low |
| Target files empty | Report no changes to analyze |
```
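
The aggregation steps (classify → deduplicate → sort) can be sketched as a pure function — the finding shape (`severity`, `file`, `message`) is an assumption for illustration:

```javascript
// Aggregate findings from multiple perspectives: deduplicate identical
// findings, then sort by severity rank and file location.
const SEVERITY_RANK = { Critical: 0, High: 1, Medium: 2, Low: 3 }

function aggregateFindings(perspectiveResults) {
  const seen = new Set()
  const merged = []
  for (const findings of perspectiveResults) {
    for (const f of findings) {
      const key = `${f.severity}|${f.file}|${f.message}`
      if (!seen.has(key)) { seen.add(key); merged.push(f) }
    }
  }
  return merged.sort((a, b) =>
    SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity] ||
    a.file.localeCompare(b.file)
  )
}

const result = aggregateFindings([
  [{ severity: 'Low', file: 'b.ts:3', message: 'console.log left in' }],
  [{ severity: 'Critical', file: 'a.ts:10', message: 'SQL injection' },
   { severity: 'Low', file: 'b.ts:3', message: 'console.log left in' }]
])
console.log(result.map(f => f.severity)) // → [ 'Critical', 'Low' ]
```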

### 3. implement.md (Code Implementation)

**Maps to**: Code generation roles, Phase 3
**Delegation**: Sequential Delegation

```markdown
# Command: implement

## Purpose

Code implementation via subagent delegation with batch routing. Reads plan tasks and executes code changes, grouping by module for efficiency.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Plan file | <session-folder>/plan/plan.json | Yes |
| Task files | <session-folder>/plan/.task/<task-id>.json | Yes |
| Wisdom conventions | <session-folder>/wisdom/conventions.md | No |

**Loading steps**:

1. Extract plan path from task description
2. Read plan.json -> get task list
3. Read each task file for detailed specs
4. Load coding conventions from wisdom

## Phase 3: Core Work

### Strategy Selection

| Task Count | Strategy | Description |
|------------|----------|-------------|
| <= 2 | Direct | Inline Edit/Write by this role |
| 3-5 | Single agent | One code-developer subagent for all tasks |
| > 5 | Batch agent | Group by module, one agent per batch |

### Direct Strategy (1-2 tasks)

For each task, for each file in the task spec:
1. Read existing file (if modifying)
2. Apply changes via Edit or Write
3. Verify file saved

### Agent Strategy (3+ tasks)

**Single agent spawn**:

\`\`\`
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement <N> tasks",
  prompt: "## Goal
<plan-summary>

## Tasks
<task-list-with-descriptions>

Complete each task according to its convergence criteria."
})
\`\`\`

**Batch agent** (> 5 tasks): Group tasks by module/directory, spawn one agent per batch using the template above.

### Output Tracking

After implementation:
1. Get the list of changed files: `Bash("git diff --name-only")`
2. Count completed vs total tasks
3. Record changed file paths for the validation phase

## Phase 4: Validation

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax clean | Language-specific check (tsc, python -c, etc.) | No syntax errors |
| All files created | Verify plan-specified files exist | All files present |
| Import resolution | Check for broken imports | All imports resolve |

**Auto-fix on failure** (max 2 attempts):

| Attempt | Action |
|---------|--------|
| 1 | Parse error, apply targeted fix |
| 2 | Delegate fix to code-developer subagent |
| Failed | Report remaining issues to coordinator |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Plan file not found | Notify coordinator, request plan path |
| Agent fails on task | Retry once, then mark task as blocked |
| Syntax errors after impl | Attempt auto-fix, report if unresolved |
| File conflict | Check git status, resolve or report |
```
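
The batch-agent grouping can be sketched as grouping task file paths by their top-level directory — a simplification, since real task specs may touch several directories:

```javascript
// Group tasks into batches by the top-level directory of their
// primary file, so each batch maps to one code-developer agent.
function batchByModule(tasks) {
  const batches = new Map()
  for (const task of tasks) {
    const module = task.file.split('/')[0] // crude module = first path segment
    if (!batches.has(module)) batches.set(module, [])
    batches.get(module).push(task.id)
  }
  return batches
}

const batches = batchByModule([
  { id: 'T1', file: 'auth/login.ts' },
  { id: 'T2', file: 'auth/session.ts' },
  { id: 'T3', file: 'billing/invoice.ts' }
])
console.log([...batches.entries()])
// → [ [ 'auth', [ 'T1', 'T2' ] ], [ 'billing', [ 'T3' ] ] ]
```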

### 4. validate.md (Test-Fix Cycle)

**Maps to**: Validation roles, Phase 3
**Delegation**: Sequential Delegation

```markdown
# Command: validate

## Purpose

Iterative test-fix cycle with max-iteration control. Runs tests, identifies failures, delegates fixes, and re-validates until passing or max iterations reached.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Test command | Auto-detect from project config | Yes |
| Changed files | `git diff --name-only` | Yes |
| Plan file | <session-folder>/plan/plan.json | No |

**Test command detection**:

| Detection Signal | Test Command |
|------------------|--------------|
| package.json has "test" script | `npm test` |
| pytest.ini or conftest.py exists | `pytest` |
| Makefile has "test" target | `make test` |
| go.mod exists | `go test ./...` |
| No signal detected | Notify coordinator |

## Phase 3: Core Work

### Test-Fix Cycle

| Step | Action | Exit Condition |
|------|--------|----------------|
| 1. Run tests | `Bash("<test-command> 2>&1 || true")` | - |
| 2. Parse results | Extract pass/fail counts | - |
| 3. Check pass rate | Compare against threshold | Pass rate >= 95% -> exit SUCCESS |
| 4. Extract failures | Parse failing test names and errors | - |
| 5. Delegate fix | Spawn code-developer subagent | - |
| 6. Increment counter | iteration++ | iteration >= 5 -> exit MAX_REACHED |
| 7. Loop | Go to Step 1 | - |

**Fix delegation**:

\`\`\`
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Fix test failures (iteration <N>)",
  prompt: "Test failures:
<test-output>

Fix the failing tests. Changed files: <file-list>"
})
\`\`\`

### Outcome Routing

| Outcome | Action |
|---------|--------|
| SUCCESS (pass rate >= 95%) | Proceed to Phase 4 |
| MAX_REACHED (5 iterations) | Report failures, mark for manual intervention |
| ENV_ERROR (test env broken) | Report environment issue to coordinator |

## Phase 4: Validation

| Metric | Source | Threshold |
|--------|--------|-----------|
| Pass rate | Final test run | >= 95% |
| Iterations used | Counter | Report count |
| Remaining failures | Last test output | List details |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No test command found | Notify coordinator |
| Max iterations exceeded | Report failures, suggest manual intervention |
| Test environment broken | Report environment issue |
| Flaky tests detected | Re-run once to confirm, exclude if consistently flaky |
```
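
The cycle's termination logic can be modeled as a pure function over successive pass rates — a sketch with the fix step stubbed out, using the 95% threshold and 5-iteration cap from the tables above:

```javascript
// Decide the test-fix cycle outcome from successive pass rates
// (one entry per test run), using threshold 0.95 and max 5 iterations.
function cycleOutcome(passRates, threshold = 0.95, maxIterations = 5) {
  for (let i = 0; i < passRates.length && i < maxIterations; i++) {
    if (passRates[i] >= threshold) {
      return { outcome: 'SUCCESS', iterations: i + 1 }
    }
    // Otherwise a fix would be delegated here, then the loop re-runs tests.
  }
  return { outcome: 'MAX_REACHED', iterations: Math.min(passRates.length, maxIterations) }
}

console.log(cycleOutcome([0.80, 0.92, 0.97]))
// → { outcome: 'SUCCESS', iterations: 3 }
console.log(cycleOutcome([0.5, 0.6, 0.6, 0.7, 0.7, 0.9]))
// → { outcome: 'MAX_REACHED', iterations: 5 }
```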
|
||||
|
||||
### 5. review.md (Multi-dimensional Review)
|
||||
|
||||
**Maps to**: Read-only analysis roles (reviewer type), Phase 3
|
||||
**Delegation**: CLI Fan-out
|
||||
|
||||
```markdown
|
||||
# Command: review
|
||||
|
||||
## Purpose
|
||||
|
||||
Multi-dimensional code review producing a verdict (BLOCK/CONDITIONAL/APPROVE) with categorized findings across quality, security, architecture, and requirements dimensions.
|
||||
|
||||
## Phase 2: Context Loading
|
||||
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| Plan file | <session-folder>/plan/plan.json | Yes |
|
||||
| Git diff | `git diff HEAD~1` or `git diff --cached` | Yes |
|
||||
| Modified files | From git diff --name-only | Yes |
|
||||
| Test results | Tester output (if available) | No |
|
||||
| Wisdom | <session-folder>/wisdom/ | No |
|
||||
|
||||
## Phase 3: Core Work
|
||||
|
||||
### Dimension Overview
|
||||
|
||||
| Dimension | Focus | What to Detect |
|
||||
|-----------|-------|----------------|
|
||||
| Quality | Code correctness, type safety, clean code | Empty catch, ts-ignore, any type, console.log |
|
||||
| Security | Vulnerability patterns, secret exposure | Hardcoded secrets, SQL injection, eval, XSS |
|
||||
| Architecture | Module structure, coupling, file size | Circular deps, deep imports, large files |
|
||||
| Requirements | Acceptance criteria coverage | Unmet criteria, missing error handling, missing tests |

### Per-Dimension Detection

For each dimension, scan modified files using pattern detection:

**Example: Quality scan for console statements**:

\`\`\`
Grep(pattern="console\\.(log|debug|info)", path="<file-path>", output_mode="content", "-n"=true)
\`\`\`

**Example: Architecture scan for deep imports**:

\`\`\`
Grep(pattern="from\\s+['\"](\\.\\./){3,}", path="<file-path>", output_mode="content", "-n"=true)
\`\`\`

### Requirements Verification

1. Read plan file -> extract acceptance criteria section
2. For each criterion -> extract keywords (4+ char meaningful words)
3. Search modified files for keyword matches
4. Score coverage:

| Match Rate | Status |
|------------|--------|
| >= 70% | Met |
| 40-69% | Partial |
| < 40% | Unmet |

### Verdict Routing

| Verdict | Criteria | Action |
|---------|----------|--------|
| BLOCK | Any critical-severity issues found | Must fix before merge |
| CONDITIONAL | High or medium issues, no critical | Should address, can merge with tracking |
| APPROVE | Only low issues or none | Ready to merge |

### Report Format

\`\`\`
# Code Review Report

**Verdict**: <BLOCK|CONDITIONAL|APPROVE>

## Blocking Issues (if BLOCK)
- **<type>** (<file>:<line>): <message>

## Review Dimensions

### Quality Issues
**CRITICAL** (<count>)
- <message> (<file>:<line>)

### Security Issues
(same format per severity)

### Architecture Issues
(same format per severity)

### Requirements Issues
(same format per severity)

## Recommendations
1. <actionable recommendation>
\`\`\`

## Phase 4: Validation

| Field | Description |
|-------|-------------|
| Total issues | Sum across all dimensions and severities |
| Critical count | Must be 0 for APPROVE |
| Blocking issues | Listed explicitly in report header |
| Dimensions covered | Must be 4/4 |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Plan file not found | Skip requirements dimension, note in report |
| Git diff empty | Report no changes to review |
| File read fails | Skip file, note in report |
| No modified files | Report empty review |
| Codex unavailable | Skip codex review dimension, report 3-dimension review |
```
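The coverage scoring and verdict routing defined in the review template can be sketched as follows. This is a minimal illustration, not the template's actual implementation: the word-based tokenizer and the `{"severity": ...}` issue shape are assumptions; the thresholds match the tables above.

```python
# Minimal sketch of keyword-coverage scoring and verdict routing.
# Tokenization and input shapes are illustrative assumptions.
import re

def keyword_coverage(criterion: str, file_texts: list[str]) -> float:
    """Fraction of a criterion's keywords (4+ char words) found in any modified file."""
    keywords = {w.lower() for w in re.findall(r"[A-Za-z_]{4,}", criterion)}
    if not keywords:
        return 1.0
    corpus = "\n".join(file_texts).lower()
    hits = sum(1 for k in keywords if k in corpus)
    return hits / len(keywords)

def coverage_status(match_rate: float) -> str:
    """Map match rate to the Met/Partial/Unmet table."""
    if match_rate >= 0.70:
        return "Met"
    if match_rate >= 0.40:
        return "Partial"
    return "Unmet"

def verdict(issues: list[dict]) -> str:
    """issues: [{"severity": "critical"|"high"|"medium"|"low", ...}]"""
    severities = {i["severity"] for i in issues}
    if "critical" in severities:
        return "BLOCK"
    if severities & {"high", "medium"}:
        return "CONDITIONAL"
    return "APPROVE"
```

A real reviewer would likely stem keywords and weight matches, but the threshold boundaries are the part the template pins down.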
### 6. dispatch.md (Task Distribution)

**Maps to**: Coordinator role, Phase 3
**Delegation**: Direct (coordinator acts directly)

```markdown
# Command: dispatch

## Purpose

Task chain creation with dependency management. Creates all pipeline tasks with correct blockedBy relationships and role-based ownership.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Pipeline definition | SKILL.md Pipeline Definitions | Yes |
| Task metadata | SKILL.md Task Metadata Registry | Yes |
| Session folder | From Phase 2 initialization | Yes |
| Mode | From Phase 1 requirements | Yes |

## Phase 3: Core Work

### Pipeline Selection

Select pipeline based on mode:

| Mode | Pipeline | Task Count |
|------|----------|------------|
{{#each pipeline_modes}}
| {{this.mode}} | {{this.pipeline}} | {{this.task_count}} |
{{/each}}

### Task Creation Flow

For each task in the selected pipeline (in dependency order):

1. **Create task**:

\`\`\`
TaskCreate({
  subject: "<TASK-ID>: <description>",
  description: "<detailed-description>\n\nSession: <session-folder>",
  activeForm: "<TASK-ID> in progress"
})
\`\`\`

2. **Set owner and dependencies**:

\`\`\`
TaskUpdate({
  taskId: <new-task-id>,
  owner: "<role-name>",
  addBlockedBy: [<dependency-task-ids>]
})
\`\`\`

3. Record created task ID for downstream dependency references

### Dependency Mapping

Follow SKILL.md Task Metadata Registry for:
- Task ID naming convention
- Role assignment (owner field)
- Dependencies (blockedBy relationships)
- Task description with session folder reference

### Parallel Task Handling

| Condition | Action |
|-----------|--------|
| Tasks share same blockedBy and no mutual dependency | Create both; they run in parallel |
| N parallel tasks for same role | Use instance-specific owner: `<role>-1`, `<role>-2` |

## Phase 4: Validation

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| All tasks created | Compare pipeline spec vs TaskList | Count matches |
| Dependencies correct | Verify blockedBy for each task | All deps point to valid tasks |
| Owners assigned | Check owner field | Every task has valid role owner |
| No orphan tasks | Verify all tasks reachable from pipeline start | No disconnected tasks |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Task creation fails | Retry, then report to user |
| Dependency cycle detected | Flatten dependencies, warn |
| Role not spawned yet | Queue task, spawn role first |
| Task prefix conflict | Log warning, proceed |
```
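The task creation flow above (create in dependency order, wire blockedBy, record IDs) can be sketched as below. The pipeline shape and the `task_create`/`task_update` callables are hypothetical stand-ins for the real TaskCreate/TaskUpdate tools; mapping spec IDs to created task IDs is the essential bookkeeping step.

```python
# Sketch of dispatch: create tasks in dependency order, wiring blockedBy
# from previously created task IDs. A spec referencing an uncreated
# dependency indicates an out-of-order or cyclic pipeline.

def dispatch(pipeline, task_create, task_update):
    """pipeline: [{"id": "IMPL-001", "owner": "executor", "deps": [...]}, ...]
    listed in dependency order. Returns {spec_id: created_task_id}."""
    created = {}
    for spec in pipeline:
        missing = [d for d in spec["deps"] if d not in created]
        if missing:
            raise ValueError(f"{spec['id']} depends on uncreated tasks: {missing}")
        task_id = task_create(subject=spec["id"], description=spec.get("desc", ""))
        task_update(task_id, owner=spec["owner"],
                    blocked_by=[created[d] for d in spec["deps"]])
        created[spec["id"]] = task_id
    return created
```

The `missing` check is a cheap guard against the "dependency cycle detected" error scenario: a cycle can never be listed in a valid dependency order, so it surfaces as an uncreated dependency.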
### 7. monitor.md (Message-Driven Coordination)

**Maps to**: Coordinator role, Phase 4
**Delegation**: Message-Driven (no polling)

```markdown
# Command: monitor

## Purpose

Message-driven coordination. Team members (spawned in Phase 2) execute tasks autonomously and report via SendMessage. The coordinator receives messages and routes next actions.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Pipeline state | TaskList() | Yes |
| Session file | <session-folder>/team-session.json | Yes |
| Team config | Team member list | Yes |

## Phase 3: Core Work

### Design Principles

| Principle | Description |
|-----------|-------------|
| No re-spawning | Team members already spawned in Phase 2 -- do NOT spawn again here |
| No polling loops | No `while` + `sleep` + status check (wastes API turns) |
| Event-driven | Worker SendMessage is the trigger signal |
| One beat per wake | Coordinator processes one event then STOPs |

### Entry Handlers

When the coordinator wakes, route based on Entry Router detection:

| Handler | Trigger | Actions |
|---------|---------|---------|
| handleCallback | Worker `[role-name]` message received | 1. Log received message 2. Check task status 3. Route to next action |
| handleCheck | User says "check"/"status" | 1. Load TaskList 2. Output status graph 3. STOP (no advancement) |
| handleResume | User says "resume"/"continue" | 1. Load TaskList 2. Find ready tasks 3. Spawn/notify workers 4. STOP |

### handleCallback Flow

1. Identify sender role from message tag `[role-name]`
2. Log received message via team_msg
3. Load TaskList for current state
4. Route based on message content:

| Message Content | Action |
|-----------------|--------|
| Contains "fix_required" or "error" | Assess severity -> escalate to user if critical |
| Normal completion | Check pipeline progress (see below) |

5. Check pipeline progress:

| State | Condition | Action |
|-------|-----------|--------|
| All done | completed count >= total pipeline tasks | -> Phase 5 |
| Tasks unblocked | pending tasks with empty blockedBy | Notify/spawn workers for unblocked tasks |
| Checkpoint | Pipeline at spec->impl transition | Pause, ask user to `resume` |
| Stalled | No ready + no running + has pending | Report blocking point |

6. Output status summary -> STOP

### handleCheck Flow (Status Only)

1. Load all tasks via TaskList
2. Build status overview:

\`\`\`
Pipeline Status:
Completed: <N>/<total>
In Progress: <list>
Pending: <list>
Blocked: <list with blockedBy details>
\`\`\`

3. STOP (no pipeline advancement)

### handleResume Flow

1. Load TaskList
2. Find tasks with: status=pending, blockedBy all resolved
3. For each ready task:

| Condition | Action |
|-----------|--------|
| Worker already spawned and idle | SendMessage to worker: "Task <subject> unblocked, please proceed" |
| Worker not spawned | Spawn worker using SKILL.md Spawn Template |

4. Output status summary -> STOP

### Status Graph Format

\`\`\`
Pipeline Progress: <completed>/<total>

[DONE] TASK-001 (role) - description
[DONE] TASK-002 (role) - description
[>> ] TASK-003 (role) - description <- in_progress
[ ] TASK-004 (role) - description <- blocked by TASK-003
...
\`\`\`

## Phase 4: Validation

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Message routed | Verify handler executed | Handler completed without error |
| State consistent | TaskList reflects actions taken | Tasks updated correctly |
| No orphan workers | All spawned workers have assigned tasks | No idle workers without tasks |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Teammate reports error | Assess severity -> retry SendMessage or escalate to user |
| Task stuck (no callback) | Send follow-up to teammate; after two follow-ups -> suggest respawn |
| Critical issue beyond scope | AskUserQuestion: retry/skip/terminate |
| Session file corrupted | Rebuild state from TaskList |
```
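Step 5 of the handleCallback flow is a pure state classification over the task list, which a sketch makes concrete. The task dict fields (`status`, `blockedBy`) are assumed stand-ins for the real TaskList() output, and the checkpoint state is omitted since it needs pipeline-specific knowledge.

```python
# Simplified sketch of the pipeline-progress check: classify the pipeline
# as all_done / tasks_unblocked / stalled / waiting from task statuses.

def pipeline_state(tasks: list[dict]) -> str:
    done = [t for t in tasks if t["status"] == "completed"]
    if len(done) >= len(tasks):
        return "all_done"
    running = [t for t in tasks if t["status"] == "in_progress"]
    pending = [t for t in tasks if t["status"] == "pending"]
    completed_ids = {t["id"] for t in done}
    # A pending task is ready when every blocker has completed.
    ready = [t for t in pending if all(b in completed_ids for b in t["blockedBy"])]
    if ready:
        return "tasks_unblocked"
    if not running and pending:
        return "stalled"  # nothing running, nothing ready, work remains
    return "waiting"
```

The "waiting" result corresponds to the one-beat-per-wake design: the coordinator simply STOPs and lets the next worker callback re-trigger the check.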
---

## Variable Reference

| Variable | Source | Description |
|----------|--------|-------------|
| `{{command_name}}` | Command identifier | e.g., "explore", "analyze" |
| `{{command_description}}` | One-line description | What this command does |
| `{{context_inputs}}` | Array of {name, source, required} | Context loading table rows |
| `{{core_work_content}}` | Generated from pattern | Phase 3 content |
| `{{validation_content}}` | Generated from pattern | Phase 4 content |
| `{{error_handlers}}` | Array of {scenario, resolution} | Error handling table rows |
| `{{pipeline_modes}}` | config.pipeline_modes | Array of {mode, pipeline, task_count} for dispatch |

## Self-Containment Rules

1. **No cross-command references**: Each command.md must be executable independently
2. **Include all context inputs**: List all required context (files, configs) in Phase 2
3. **Complete error handling**: Every command handles its own failures
4. **Explicit output format**: Define what the command produces
5. **Strategy in decision tables**: All routing logic in tables, not code

## Key Differences from v1

| Aspect | v1 (old) | v2 (this template) |
|--------|----------|---------------------|
| Strategy logic | JS `if/else` + regex matching | Decision tables |
| Execution steps | JS code blocks (pseudocode) | Step lists + actual tool call templates |
| Result processing | JS object construction | Text aggregation description |
| Output format | Embedded in JS template literals | Standalone markdown format block |
| Error handling | JS try/catch with fallbacks | Decision table with clear routing |
| Context prep | JS variable assignments | Phase 2 table + loading steps |
| Monitor design | JS while loop + polling | Event-driven handlers + STOP pattern |

---
# Role File Template

Template for generating per-role execution detail files in `roles/<role-name>/role.md` (v3 style).

## Purpose

| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand role file structure |
| Phase 3 | Apply with role-specific content |

## Style Rules

Generated output follows v3 conventions:

| Rule | Description |
|------|-------------|
| Phase 1/5 shared | Reference "See SKILL.md Shared Infrastructure" instead of inline code |
| No JS pseudocode | Message Bus and Task Lifecycle all use text + tool call templates |
| Decision tables | All branching logic uses `| Condition | Action |` tables |
| Code blocks = tool calls only | Only actual executable calls (Read(), TaskList(), SendMessage(), etc.) |
| `<placeholder>` in output | Not `${variable}` in generated content |
| Phase 2-4 only | Role files define Phase 2-4 role-specific logic |

> **Note**: The template itself uses `{{handlebars}}` for variable substitution during Phase 3 generation. The **generated output** must not contain `{{handlebars}}` or JS pseudocode.

---

## Template

### Worker Role Template

```markdown
# {{display_name}} Role

{{role_description}}

## Identity

- **Name**: `{{role_name}}` | **Tag**: `[{{role_name}}]`
- **Task Prefix**: `{{task_prefix}}-*`
- **Responsibility**: {{responsibility_type}}

## Boundaries

### MUST
- Only process `{{task_prefix}}-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[{{role_name}}]` identifier
- Only communicate with the coordinator via SendMessage
- Work strictly within the {{responsibility_type}} responsibility scope
{{#each must_rules}}
- {{this}}
{{/each}}

### MUST NOT
- Execute work outside this role's responsibility scope
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit the `[{{role_name}}]` identifier in any output
{{#each must_not_rules}}
- {{this}}
{{/each}}

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
{{#each commands}}
| `{{this.name}}` | [commands/{{this.name}}.md](commands/{{this.name}}.md) | Phase {{this.phase}} | {{this.description}} |
{{/each}}

{{#if has_no_commands}}
> No command files -- all phases execute inline.
{{/if}}

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
{{#each tool_capabilities}}
| `{{this.tool}}` | {{this.type}} | {{this.used_by}} | {{this.purpose}} |
{{/each}}

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
{{#each message_types}}
| `{{this.type}}` | {{this.direction}} | {{this.trigger}} | {{this.description}} |
{{/each}}

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

\`\`\`
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <team-name>,
  from: "{{role_name}}",
  to: "coordinator",
  type: <message-type>,
  summary: "[{{role_name}}] <task-prefix> complete: <task-subject>",
  ref: <artifact-path>
})
\`\`\`

**CLI fallback** (when MCP unavailable):

\`\`\`
Bash("ccw team log --team <team-name> --from {{role_name}} --to coordinator --type <message-type> --summary \"[{{role_name}}] <task-prefix> complete\" --ref <artifact-path> --json")
\`\`\`

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `{{task_prefix}}-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Fall back to `{{role_name}}` for single-instance roles.

### Phase 2: {{phase2_name}}

{{phase2_content}}

### Phase 3: {{phase3_name}}

{{phase3_content}}

### Phase 4: {{phase4_name}}

{{phase4_content}}

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[{{role_name}}]` prefix -> TaskUpdate completed -> loop to Phase 1 for the next task.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No {{task_prefix}}-* tasks available | Idle, wait for coordinator assignment |
| Context/Plan file not found | Notify coordinator, request location |
{{#if has_commands}}
| Command file not found | Fall back to inline execution |
{{/if}}
{{#each additional_error_handlers}}
| {{this.scenario}} | {{this.resolution}} |
{{/each}}
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |
```
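The Phase 1 discovery filter in the worker template (prefix + owner + pending + unblocked) can be sketched as below. The task dict fields are assumed stand-ins for the real TaskList() output; the point is the conjunction of the four conditions.

```python
# Sketch of the worker's Phase 1 task-discovery filter. Returns the first
# ready task for this worker, or None (worker idles and waits).

def discover_task(tasks, prefix, owner):
    completed = {t["id"] for t in tasks if t["status"] == "completed"}
    for t in tasks:
        if (t["subject"].startswith(prefix + "-")      # prefix match
                and t["owner"] == owner                 # owner match
                and t["status"] == "pending"            # not yet claimed
                and all(b in completed for b in t["blockedBy"])):  # unblocked
            return t  # caller then TaskGet + TaskUpdate in_progress
    return None
```

For parallel instances, `owner` would come from the `--agent-name` argument (e.g., `executor-2`) rather than the bare role name.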
---

### Coordinator Role Template

The coordinator role is special and always generated. Its template differs from worker roles:

```markdown
# Coordinator Role

Orchestrate the {{team_display_name}} workflow: team creation, task dispatching, progress monitoring, session state.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Parse requirements -> Create team -> Dispatch tasks -> Monitor progress -> Report results

## Boundaries

### MUST
- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
- Create team and spawn worker subagents in background
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence
{{#each coordinator_must_rules}}
- {{this}}
{{/each}}

### MUST NOT
- Execute {{team_purpose}} work directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call implementation subagents directly
- Skip dependency validation when creating task chains
{{#each coordinator_must_not_rules}}
- {{this}}
{{/each}}

> **Core principle**: The coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.

---

## Entry Router

When the coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |

For callback/check/resume: load `commands/monitor.md`, execute the appropriate handler, then STOP.

---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:
1. Scan session directory for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine remaining pipeline from reconciled state
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update session file with reconciled state
9. Kick first executable task's worker -> Phase 4

---

## Phase 1: Requirement Clarification

**Objective**: Parse user input and gather execution parameters.

**Workflow**:

1. **Parse arguments** for explicit settings: mode, scope, focus areas
2. **Ask for missing parameters** via AskUserQuestion:

{{phase1_questions}}

3. **Store requirements**: mode, scope, focus, constraints

**Success**: All parameters captured, mode finalized.

---

## Phase 2: Create Team + Initialize Session

**Objective**: Initialize team, session file, and wisdom directory.

**Workflow**:
1. Generate session ID
2. Create session folder
3. Call TeamCreate with team name
4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Write session file with: session_id, mode, scope, status="active"
6. Spawn worker roles (see SKILL.md Coordinator Spawn Template)

**Success**: Team created, session file written, wisdom initialized, workers spawned.

---

## Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on mode with proper dependencies.

{{#if has_dispatch_command}}
Delegate to `commands/dispatch.md`, which creates the full task chain:
1. Reads SKILL.md Task Metadata Registry for task definitions
2. Creates tasks via TaskCreate with correct blockedBy
3. Assigns owner based on role mapping
4. Includes `Session: <session-folder>` in every task description
{{else}}
{{phase3_dispatch_content}}
{{/if}}

---

## Phase 4: Spawn-and-Stop

**Objective**: Spawn the first batch of ready workers in background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPs

**Workflow**:
{{#if has_monitor_command}}
1. Load `commands/monitor.md`
{{/if}}
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task -> spawn worker (see SKILL.md Spawn Template)
4. Output status summary
5. STOP

**Pipeline advancement** is driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)

---

## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Update session status -> "completed"
4. Offer next steps to user

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fall back to manual reconciliation |
{{#each coordinator_error_handlers}}
| {{this.error}} | {{this.resolution}} |
{{/each}}
```
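The Entry Router's detection order in the coordinator template is worth pinning down, since callback detection must win over keyword matching. A rough sketch, where the message/argument inputs and the known-roles set are illustrative assumptions:

```python
# Sketch of Entry Router detection: worker callback first, then status
# keywords, then resume keywords, else treat as a new session.
import re

def route(message: str, arguments: str, known_roles: set[str]) -> str:
    tag = re.search(r"\[([a-z-]+)\]", message or "")
    if tag and tag.group(1) in known_roles:
        return "handleCallback"
    args = (arguments or "").lower()
    if "check" in args or "status" in args:
        return "handleCheck"
    if "resume" in args or "continue" in args:
        return "handleResume"
    return "phase0_session_resume_check"
```

Restricting the tag match to known worker roles keeps a `[something]` in ordinary user text from being mistaken for a callback.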
---

## Phase 2-4 Content by Responsibility Type

The following sections provide Phase 2-4 content templates based on `responsibility_type`. During Phase 3 generation, select the matching section and fill it into `{{phase2_content}}`, `{{phase3_content}}`, `{{phase4_content}}`.

### Read-only Analysis

**Phase 2: Context Loading**

```
| Input | Source | Required |
|-------|--------|----------|
| Plan file | <session-folder>/plan/plan.json | Yes |
| Git diff | `git diff HEAD~1` or `git diff --cached` | Yes |
| Modified files | From git diff --name-only | Yes |
| Wisdom | <session-folder>/wisdom/ | No |

**Loading steps**:

1. Extract session path from task description
2. Read plan file for criteria reference
3. Get changed files list

\`\`\`
Bash("git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
\`\`\`

4. Read file contents for analysis (limit to 20 files)
5. Load wisdom files if available
```

**Phase 3: Analysis Execution**

```
Delegate to `commands/<analysis-command>.md` if available, otherwise execute inline.

Analysis strategy selection:

| Condition | Strategy |
|-----------|----------|
| Single-dimension analysis | Direct inline scan |
| Multi-dimension analysis | Per-dimension sequential scan |
| Deep analysis needed | CLI fan-out to external tool |

For each dimension, scan modified files for patterns. Record findings with severity levels.
```

**Phase 4: Finding Summary**

```
Classify findings by severity:

| Severity | Criteria |
|----------|----------|
| Critical | Must fix before merge |
| High | Should fix, may merge with tracking |
| Medium | Recommended improvement |
| Low | Informational, optional |

Generate structured report with file:line references and remediation suggestions.
```

### Code Generation

**Phase 2: Task & Plan Loading**

```
**Loading steps**:

1. Extract session path from task description
2. Read plan file -> extract task list and acceptance criteria
3. Read individual task files from `.task/` directory
4. Load wisdom files for conventions and patterns

Fail-safe: If plan file not found -> SendMessage to coordinator requesting location.
```

**Phase 3: Code Implementation**

```
Implementation strategy selection:

| Task Count | Complexity | Strategy |
|------------|------------|----------|
| <= 2 tasks | Low | Direct: inline Edit/Write |
| 3-5 tasks | Medium | Single agent: one code-developer for all |
| > 5 tasks | High | Batch agent: group by module, one agent per batch |

{{#if phase3_command}}
Delegate to `commands/{{phase3_command}}.md`.
{{else}}
Execute inline based on strategy selection above.
{{/if}}
```

**Phase 4: Self-Validation**

```
Validation checks:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | `Bash("tsc --noEmit 2>&1 || true")` or equivalent | No errors |
| File existence | Verify all planned files exist | All files present |
| Import resolution | Check no broken imports | All imports resolve |

If validation fails -> attempt auto-fix (max 2 attempts) -> report remaining issues.
```

### Orchestration

**Phase 2: Context & Complexity Assessment**

```
Complexity assessment:

| Signal | Weight | Keywords |
|--------|--------|----------|
| Structural change | +2 | refactor, architect, restructure, module, system |
| Cross-cutting | +2 | multiple, across, cross |
| Integration | +1 | integrate, api, database |
| Non-functional | +1 | security, performance |

| Score | Complexity | Approach |
|-------|------------|----------|
| >= 4 | High | Multi-stage with sub-orchestration |
| 2-3 | Medium | Standard pipeline |
| 0-1 | Low | Simplified flow |
```
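The complexity assessment above reduces to a weighted keyword scan followed by a threshold lookup. A minimal sketch, using the exact weights and keywords from the tables (the substring-matching approach is an assumption):

```python
# Sketch of complexity assessment: each signal contributes its weight at
# most once, regardless of how many of its keywords appear.
SIGNALS = [
    (2, ["refactor", "architect", "restructure", "module", "system"]),  # structural
    (2, ["multiple", "across", "cross"]),                               # cross-cutting
    (1, ["integrate", "api", "database"]),                              # integration
    (1, ["security", "performance"]),                                   # non-functional
]

def assess(requirement: str) -> tuple[int, str]:
    text = requirement.lower()
    score = sum(w for w, kws in SIGNALS if any(k in text for k in kws))
    if score >= 4:
        return score, "High"
    if score >= 2:
        return score, "Medium"
    return score, "Low"
```

Counting each signal once (not once per keyword) keeps a keyword-dense sentence from inflating the score past what the signal actually indicates.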
**Phase 3: Orchestrated Execution**

```
Launch execution based on complexity:

| Complexity | Execution Pattern |
|------------|-------------------|
| High | Parallel sub-agents + synchronization barriers |
| Medium | Sequential stages with dependency tracking |
| Low | Direct delegation to single worker |
```

**Phase 4: Result Aggregation**

```
Merge and summarize sub-agent results:

1. Collect all sub-agent outputs
2. Deduplicate findings across agents
3. Prioritize by severity/importance
4. Generate consolidated summary
```

### Validation

**Phase 2: Environment Detection**

```
**Detection steps**:

1. Get changed files from git diff
2. Detect test framework from project files

| Detection | Method |
|-----------|--------|
| Changed files | `Bash("git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")` |
| Test command | Check package.json scripts, pytest.ini, Makefile |
| Coverage tool | Check for nyc, coverage.py, jest --coverage config |
```
**Phase 3: Execution & Fix Cycle**

```
Iterative test-fix cycle:

| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results -> check pass rate |
| 3 | Pass rate >= 95% -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Delegate fix to code-developer subagent |
| 6 | Increment iteration counter |
| 7 | iteration >= MAX (5) -> exit loop (report failures) |
| 8 | Go to Step 1 |
```
|
||||
|
||||
**Phase 4: Result Analysis**
|
||||
|
||||
```
|
||||
Analyze test outcomes:
|
||||
|
||||
| Metric | Source | Threshold |
|
||||
|--------|--------|-----------|
|
||||
| Pass rate | Test output parser | >= 95% |
|
||||
| Coverage | Coverage tool output | >= 80% |
|
||||
| Flaky tests | Compare runs | 0 flaky |
|
||||
|
||||
Generate test report with: pass/fail counts, coverage data, failure details, fix attempts made.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Variable Reference

| Variable | Source | Description |
|----------|--------|-------------|
| `{{role_name}}` | config.role_name | Role identifier |
| `{{display_name}}` | config.display_name | Human-readable role name |
| `{{task_prefix}}` | config.task_prefix | UPPERCASE task prefix |
| `{{responsibility_type}}` | config.responsibility_type | Role type (read-only analysis, code generation, orchestration, validation) |
| `{{role_description}}` | config.role_description | One-line role description |
| `{{phase2_name}}` | patterns.phase_structure.phase2 | Phase 2 label |
| `{{phase3_name}}` | patterns.phase_structure.phase3 | Phase 3 label |
| `{{phase4_name}}` | patterns.phase_structure.phase4 | Phase 4 label |
| `{{phase2_content}}` | Generated from responsibility template | Phase 2 text content |
| `{{phase3_content}}` | Generated from responsibility template | Phase 3 text content |
| `{{phase4_content}}` | Generated from responsibility template | Phase 4 text content |
| `{{message_types}}` | config.message_types | Array of message type definitions |
| `{{commands}}` | config.commands | Array of command definitions |
| `{{has_commands}}` | config.commands.length > 0 | Boolean: has extracted commands |
| `{{has_no_commands}}` | config.commands.length === 0 | Boolean: all phases inline |
| `{{tool_capabilities}}` | config.tool_capabilities | Array of tool/subagent/CLI capabilities |
| `{{must_rules}}` | config.must_rules | Additional MUST rules |
| `{{must_not_rules}}` | config.must_not_rules | Additional MUST NOT rules |
| `{{additional_error_handlers}}` | config.additional_error_handlers | Array of {scenario, resolution} |

## Key Differences from v1

| Aspect | v1 (old) | v2 (this template) |
|--------|----------|---------------------|
| Phase 1/5 | Inline JS code | Reference SKILL.md Shared Infrastructure |
| Message Bus | JS function call pseudocode | Text description + actual tool call template |
| Task Lifecycle | JS filter/map code | Step list description |
| Phase 2-4 | JS code per responsibility_type | Text + decision tables per responsibility_type |
| Command delegation | JS try/catch block | Text "Delegate to commands/xxx.md" |
| Coordinator template | JS spawn loops | Text phases with decision tables |
# Skill Router Template

Template for the generated SKILL.md with role-based routing (v3 style).

## Purpose

| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand the generated SKILL.md structure |
| Phase 3 | Apply with team-specific content |

## Style Rules

Generated output follows v3 conventions:

| Rule | Description |
|------|-------------|
| No pseudocode | Flow uses text + decision tables + flow symbols |
| Code blocks = tool calls only | Only Task(), TaskCreate(), Bash(), Read() etc. |
| `<placeholder>` in output | Not `${variable}` or `{{handlebars}}` in generated content |
| Decision tables | All branching logic uses `| Condition | Action |` tables |
| Cadence Control | Beat diagram + checkpoint definitions |
| Compact Protection | Phase Reference with Compact column |

> **Note**: The template itself uses `{{handlebars}}` for variable substitution during Phase 3 generation. The **generated output** must not contain `{{handlebars}}` or JS pseudocode.
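For illustration, the Phase 3 substitution step for plain `{{var}}` placeholders might look like the sketch below. This is a hypothetical minimal renderer, not the actual generator: real generation must also expand `{{#each}}`/`{{#if}}` block helpers, which this deliberately leaves alone.

```python
import re

def render(template: str, config: dict) -> str:
    # Replace each {{var}} with the matching config value;
    # unknown keys are left intact so block helpers survive.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(config.get(m.group(1), m.group(0))),
        template,
    )

out = render("# Team {{team_display_name}}", {"team_display_name": "Lifecycle"})
```

Leaving unknown placeholders untouched is what lets a two-pass generator expand block helpers separately from simple variables.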

---

## Template

```markdown
---
name: team-{{team_name}}
description: Unified team skill for {{team_name}}. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team {{team_name}}".
allowed-tools: {{all_roles_tools_union}}
---

# Team {{team_display_name}}

Unified team skill: {{team_purpose}}. All team members invoke with `--role=xxx` to route to role-specific execution.

## Architecture

\`\`\`
{{architecture_diagram}}
\`\`\`

## Role Router

### Input Parsing

Parse `$ARGUMENTS` to extract `--role`. If absent -> Orchestration Mode (auto route to coordinator).

### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
{{#each roles}}
| {{this.name}} | [roles/{{this.name}}/role.md](roles/{{this.name}}/role.md) | {{this.task_prefix}}-* | {{this.type}} | Must re-read after compression |
{{/each}}

> **COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and role instructions are reduced to summaries, **you MUST immediately `Read` the corresponding role.md to reload it before continuing execution**. Do not execute any Phase based on summaries.

### Dispatch

1. Extract `--role` from arguments
2. If no `--role` -> route to coordinator (Orchestration Mode)
3. Look up the role in the registry -> Read the role file -> Execute its phases

### Orchestration Mode

When invoked without `--role`, the coordinator auto-starts. The user just provides a task description.

**Invocation**: `Skill(skill="team-{{team_name}}", args="<task-description>")`

**Lifecycle**:
\`\`\`
User provides task description
-> coordinator Phase 1-3: Requirement clarification -> TeamCreate -> Create task chain
-> coordinator Phase 4: spawn first batch of workers (background) -> STOP
-> Worker executes -> SendMessage callback -> coordinator advances next step
-> Loop until pipeline complete -> Phase 5 report
\`\`\`

**User Commands** (wake a paused coordinator):

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |

---

## Shared Infrastructure

The following templates apply to all worker roles. Each role.md only needs to define its **Phase 2-4** role-specific logic.

### Worker Phase 1: Task Discovery (shared by all workers)

Every worker executes the same task discovery flow on startup:

1. Call `TaskList()` to get all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No tasks -> idle wait
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` to mark in_progress

**Resume Artifact Check** (prevents duplicate output after resume):
- Check whether this task's output artifact already exists
- Artifact complete -> skip to Phase 5 and report completion
- Artifact incomplete or missing -> normal Phase 2-4 execution

### Worker Phase 5: Report (shared by all workers)

Standard reporting flow after task completion:

1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log the message
   - Parameters: operation="log", team=<team-name>, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
   - **CLI fallback**: When MCP is unavailable -> `ccw team log --team <team> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
2. **SendMessage**: Send the result to the coordinator (content and summary both prefixed with `[<role>]`)
3. **TaskUpdate**: Mark the task completed
4. **Loop**: Return to Phase 1 to check for the next task

### Wisdom Accumulation (all roles)

Cross-task knowledge accumulation. The coordinator creates the `wisdom/` directory at session initialization.

**Directory**:
\`\`\`
<session-folder>/wisdom/
+-- learnings.md    # Patterns and insights
+-- decisions.md    # Architecture and design decisions
+-- conventions.md  # Codebase conventions
+-- issues.md       # Known risks and issues
\`\`\`

**Worker Load** (Phase 2): Extract `Session: <path>` from the task description, then read the wisdom directory files.
**Worker Contribute** (Phase 4/5): Write this task's discoveries to the corresponding wisdom files.

### Role Isolation Rules

| Allowed | Forbidden |
|---------|-----------|
| Process tasks with own prefix | Process tasks with other role prefixes |
| SendMessage to coordinator | Communicate directly with other workers |
| Use tools declared in Toolbox | Create tasks for other roles |
| Delegate to commands/ files | Modify resources outside own responsibility |

Coordinator additional restrictions: Do not write or modify code directly, do not call implementation subagents, do not execute analysis/test/review directly.

---

## Pipeline Definitions

### Pipeline Diagram

\`\`\`
{{pipeline_diagram}}
\`\`\`

### Cadence Control

**Beat model**: Event-driven; each beat = coordinator wake -> process -> spawn -> STOP.

\`\`\`
Beat Cycle (single beat)
========================================================
Event                 Coordinator              Workers
--------------------------------------------------------
callback/resume --> +- handleCallback --+
                    | mark completed    |
                    | check pipeline    |
                    +- handleSpawnNext -+
                    | find ready tasks  |
                    | spawn workers ----+--> [Worker A] Phase 1-5
                    |  (parallel OK) ---+--> [Worker B] Phase 1-5
                    +- STOP (idle) -----+         |
                                                  |
callback <----------------------------------------+
(next beat)     SendMessage + TaskUpdate(completed)
========================================================
\`\`\`

{{cadence_beat_view}}

**Checkpoints**:

{{checkpoint_table}}

**Stall Detection** (coordinator `handleCheck` executes):

| Check | Condition | Resolution |
|-------|-----------|------------|
| Worker no response | in_progress task has no callback | Report waiting task list, suggest user `resume` |
| Pipeline deadlock | no ready + no running + has pending | Check blockedBy dependency chain, report blocking point |
{{#if has_gc_loop}}
| GC loop exceeded | iteration > max_rounds | Terminate loop, output latest report |
{{/if}}

### Task Metadata Registry

| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|-------------|-------------|
{{#each task_metadata}}
| {{this.task_id}} | {{this.role}} | {{this.phase}} | {{this.dependencies}} | {{this.description}} |
{{/each}}

## Coordinator Spawn Template

When the coordinator spawns workers, use background mode (Spawn-and-Stop):

\`\`\`
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Directive
All your work must be executed through Skill to load the role definition:
Skill(skill="team-{{team_name}}", args="--role=<role>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks; do not execute other roles' work
- Prefix all output with the [<role>] identifier
- Only communicate with the coordinator
- Do not use TaskCreate for other roles
- Call mcp__ccw-tools__team_msg before every SendMessage

## Workflow
1. Call Skill -> load role definition and execution logic
2. Follow the role.md 5-Phase flow
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
})
\`\`\`

{{#if has_parallel_spawn}}
### Parallel Spawn (N agents for the same role)

> When the pipeline has parallel tasks assigned to the same role, spawn N distinct agents with unique names. A single agent can only process tasks serially.

**Parallel detection**:

| Condition | Action |
|-----------|--------|
| N parallel tasks for same role prefix | Spawn N agents named `<role>-1`, `<role>-2` ... |
| Single task for role | Standard spawn (single agent) |

**Parallel spawn template**:

\`\`\`
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role>-<N> worker",
  team_name: <team-name>,
  name: "<role>-<N>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE> (<role>-<N>).
Your agent name is "<role>-<N>"; use this name for task discovery owner matching.

## Primary Directive
Skill(skill="team-{{team_name}}", args="--role=<role> --agent-name=<role>-<N>")

## Role Guidelines
- Only process tasks where owner === "<role>-<N>" with the <PREFIX>-* prefix
- Prefix all output with the [<role>] identifier

## Workflow
1. TaskList -> find tasks where owner === "<role>-<N>" with the <PREFIX>-* prefix
2. Skill -> execute role definition
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
})
\`\`\`

**Dispatch must match agent names**: In dispatch, parallel tasks use an instance-specific owner: `<role>-<N>`. In role.md, task discovery uses `--agent-name` for owner matching.
{{/if}}

## Session Directory

\`\`\`
{{session_directory_tree}}
\`\`\`

{{#if has_session_resume}}
## Session Resume

The coordinator supports `--resume` / `--continue` for interrupted sessions:

1. Scan the session directory for sessions with status "active" or "paused"
2. Multiple matches -> AskUserQuestion for selection
3. Audit TaskList -> reconcile session state <-> task status
4. Reset in_progress -> pending (interrupted tasks)
5. Rebuild the team and spawn only the needed workers
6. Create missing tasks with correct blockedBy
7. Kick off the first executable task -> Phase 4 coordination loop
{{/if}}

{{#if shared_resources}}
## Shared Resources

| Resource | Path | Usage |
|----------|------|-------|
{{#each shared_resources}}
| {{this.name}} | [{{this.path}}]({{this.path}}) | {{this.usage}} |
{{/each}}
{{/if}}

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode -> auto route to coordinator |
| Role file not found | Error with expected path (roles/<name>/role.md) |
| Command file not found | Fall back to inline execution in role.md |
{{#each additional_error_handlers}}
| {{this.scenario}} | {{this.resolution}} |
{{/each}}
```
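The Role Router's dispatch flow and unknown-role error handling described in the template above can be sketched as follows. The registry contents and the `dispatch` helper are illustrative placeholders, not part of the actual tool surface:

```python
import shlex

# Hypothetical registry mirroring the Role Registry table.
ROLE_REGISTRY = {
    "coordinator": "roles/coordinator/role.md",
    "executor": "roles/executor/role.md",
    "tester": "roles/tester/role.md",
}

def dispatch(arguments: str) -> tuple[str, str]:
    """Return (role, role_file). No --role means Orchestration Mode,
    which routes to the coordinator."""
    role = "coordinator"
    for token in shlex.split(arguments):
        if token.startswith("--role="):
            role = token.split("=", 1)[1]
    if role not in ROLE_REGISTRY:
        raise ValueError(
            f"Unknown role {role!r}; available: {sorted(ROLE_REGISTRY)}")
    return role, ROLE_REGISTRY[role]
```

A bare task description such as `"fix the login flow"` carries no `--role` token, so it falls through to the coordinator, matching the Orchestration Mode row of the Error Handling table.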

---

## Variable Reference

| Variable | Source | Description |
|----------|--------|-------------|
| `{{team_name}}` | config.team_name | Team identifier (lowercase) |
| `{{team_display_name}}` | config.team_display_name | Human-readable team name |
| `{{team_purpose}}` | config.team_purpose | One-line team purpose |
| `{{all_roles_tools_union}}` | Union of all roles' allowed-tools | Combined tool list |
| `{{roles}}` | config.roles[] | Array of role definitions |
| `{{architecture_diagram}}` | Generated from role structure | ASCII architecture diagram |
| `{{pipeline_diagram}}` | Generated from task chain | ASCII pipeline diagram |
| `{{cadence_beat_view}}` | Generated from pipeline | Pipeline beat view diagram |
| `{{checkpoint_table}}` | Generated from pipeline | Checkpoint trigger/location/behavior table |
| `{{task_metadata}}` | Generated from pipeline | Task metadata registry entries |
| `{{session_directory_tree}}` | Generated from session structure | Session directory tree |
| `{{has_parallel_spawn}}` | config.has_parallel_spawn | Boolean: pipeline has parallel same-role tasks |
| `{{has_session_resume}}` | config.has_session_resume | Boolean: supports session resume |
| `{{has_gc_loop}}` | config.has_gc_loop | Boolean: has guard-and-correct loops |
| `{{shared_resources}}` | config.shared_resources | Array of shared resource definitions |
| `{{additional_error_handlers}}` | config.additional_error_handlers | Array of {scenario, resolution} |

## Key Differences from v1

| Aspect | v1 (old) | v2 (this template) |
|--------|----------|---------------------|
| Role lookup | `VALID_ROLES` JS object | Role Registry decision table with markdown links |
| Routing | JS regex + if/else | Text dispatch flow (3 steps) |
| Spawn template | JS code with `${variable}` | Text template with `<placeholder>` |
| Infrastructure | Inline JS per role | Shared Infrastructure section (Phase 1/5 templates) |
| Pipeline | ASCII only | Cadence Control + beat view + checkpoints |
| Compact safety | None | Compact Protection with re-read mandate |
| Orchestration Mode | JS if/else block | Decision table + lifecycle flow diagram |