From e228b8b2730f4f95f6e3038f95049bc7760c88ad Mon Sep 17 00:00:00 2001
From: catlog22
Date: Thu, 26 Feb 2026 16:48:21 +0800
Subject: [PATCH] Remove obsolete documentation and configuration files for team lifecycle specifications

- Deleted document standards for spec-generator outputs.
- Removed quality gates criteria and scoring dimensions.
- Eliminated team configuration JSON file.
- Cleared architecture document template for generating ADRs.
- Purged epics and stories template for breakdown generation.
- Erased product brief template for Phase 2 documentation.
- Removed requirements PRD template for Phase 3 documentation.
---
 .claude/skills/team-lifecycle-v2/SKILL.md | 601 ------
 .../team-lifecycle-v2/roles/analyst/role.md | 271 -----
 .../roles/architect/commands/assess.md | 271 -----
 .../team-lifecycle-v2/roles/architect/role.md | 368 -------
 .../roles/coordinator/commands/dispatch.md | 523 ----------
 .../roles/coordinator/commands/monitor.md | 626 ------------
 .../roles/coordinator/role.md | 779 ---------------
 .../roles/discussant/commands/critique.md | 396 --------
 .../roles/discussant/role.md | 265 -----
 .../roles/executor/commands/implement.md | 356 -------
 .../team-lifecycle-v2/roles/executor/role.md | 324 ------
 .../team-lifecycle-v2/roles/explorer/role.md | 301 ------
 .../roles/fe-developer/role.md | 410 --------
 .../fe-qa/commands/pre-delivery-checklist.md | 116 ---
 .../team-lifecycle-v2/roles/fe-qa/role.md | 510 ----------
 .../roles/planner/commands/explore.md | 466 ---------
 .../team-lifecycle-v2/roles/planner/role.md | 253 -----
 .../roles/reviewer/commands/code-review.md | 689 ------------
 .../roles/reviewer/commands/spec-quality.md | 845 ----------------
 .../team-lifecycle-v2/roles/reviewer/role.md | 429 --------
 .../roles/tester/commands/validate.md | 538 ----------
 .../team-lifecycle-v2/roles/tester/role.md | 385 --------
 .../roles/writer/commands/generate-doc.md | 698 -------------
 .../team-lifecycle-v2/roles/writer/role.md | 257 -----
 .../specs/document-standards.md | 192 ----
 .../team-lifecycle-v2/specs/quality-gates.md | 207 ----
 .../team-lifecycle-v2/specs/team-config.json | 156 ---
 .../templates/architecture-doc.md | 254 -----
 .../templates/epics-template.md | 196 ----
 .../templates/product-brief.md | 133 ---
 .../templates/requirements-prd.md | 224 -----
 .claude/skills/team-lifecycle/SKILL.md | 410 --------
 .../skills/team-lifecycle/roles/analyst.md | 215 ----
 .../team-lifecycle/roles/coordinator.md | 925 ------------------
 .../skills/team-lifecycle/roles/discussant.md | 236 -----
 .../skills/team-lifecycle/roles/executor.md | 312 ------
 .../skills/team-lifecycle/roles/planner.md | 298 ------
 .../skills/team-lifecycle/roles/reviewer.md | 622 ------------
 .claude/skills/team-lifecycle/roles/tester.md | 294 ------
 .claude/skills/team-lifecycle/roles/writer.md | 739 --------------
 .../specs/document-standards.md | 192 ----
 .../team-lifecycle/specs/quality-gates.md | 207 ----
 .../team-lifecycle/specs/team-config.json | 80 --
 .../templates/architecture-doc.md | 254 -----
 .../templates/epics-template.md | 196 ----
 .../team-lifecycle/templates/product-brief.md | 133 ---
 .../templates/requirements-prd.md | 224 -----
 47 files changed, 17376 deletions(-)
 delete mode 100644 .claude/skills/team-lifecycle-v2/SKILL.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/analyst/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/architect/commands/assess.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/architect/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/coordinator/commands/dispatch.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/coordinator/commands/monitor.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/coordinator/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/discussant/commands/critique.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/discussant/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/executor/commands/implement.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/executor/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/explorer/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/fe-developer/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/fe-qa/commands/pre-delivery-checklist.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/fe-qa/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/planner/commands/explore.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/planner/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/reviewer/commands/code-review.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/reviewer/commands/spec-quality.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/reviewer/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/tester/commands/validate.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/tester/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/writer/commands/generate-doc.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/roles/writer/role.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/specs/document-standards.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/specs/quality-gates.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/specs/team-config.json
 delete mode 100644 .claude/skills/team-lifecycle-v2/templates/architecture-doc.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/templates/epics-template.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/templates/product-brief.md
 delete mode 100644 .claude/skills/team-lifecycle-v2/templates/requirements-prd.md
 delete mode 100644 .claude/skills/team-lifecycle/SKILL.md
 delete mode 100644 .claude/skills/team-lifecycle/roles/analyst.md
 delete mode 100644 .claude/skills/team-lifecycle/roles/coordinator.md
 delete mode 100644 .claude/skills/team-lifecycle/roles/discussant.md
 delete mode 100644 .claude/skills/team-lifecycle/roles/executor.md
 delete mode 100644 .claude/skills/team-lifecycle/roles/planner.md
 delete mode 100644 .claude/skills/team-lifecycle/roles/reviewer.md
 delete mode 100644 .claude/skills/team-lifecycle/roles/tester.md
 delete mode 100644 .claude/skills/team-lifecycle/roles/writer.md
 delete mode 100644 .claude/skills/team-lifecycle/specs/document-standards.md
 delete mode 100644 .claude/skills/team-lifecycle/specs/quality-gates.md
 delete mode 100644 .claude/skills/team-lifecycle/specs/team-config.json
 delete mode 100644 .claude/skills/team-lifecycle/templates/architecture-doc.md
 delete mode 100644 .claude/skills/team-lifecycle/templates/epics-template.md
 delete mode 100644 .claude/skills/team-lifecycle/templates/product-brief.md
 delete mode 100644 .claude/skills/team-lifecycle/templates/requirements-prd.md

diff --git a/.claude/skills/team-lifecycle-v2/SKILL.md b/.claude/skills/team-lifecycle-v2/SKILL.md
deleted file mode 100644
index 7b833034..00000000
--- a/.claude/skills/team-lifecycle-v2/SKILL.md
+++ /dev/null
@@ -1,601 +0,0 @@
----
-name: team-lifecycle-v2
-description: Unified team skill for full lifecycle - spec/impl/test. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team lifecycle".
-allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
----
-
-# Team Lifecycle
-
-Unified team skill covering specification, implementation, testing, and review. All team members invoke this skill with `--role=xxx` to route to role-specific execution.
- -## Architecture Overview - -``` -┌───────────────────────────────────────────────────┐ -│ Skill(skill="team-lifecycle-v2") │ -│ args="任务描述" 或 args="--role=xxx" │ -└───────────────────┬───────────────────────────────┘ - │ Role Router - │ - ┌──── --role present? ────┐ - │ NO │ YES - ↓ ↓ - Orchestration Mode Role Dispatch - (auto → coordinator) (route to role.md) - │ - ┌────┴────┬───────┬───────┬───────┬───────┬───────┬───────┐ - ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ -┌──────────┐┌───────┐┌──────┐┌──────────┐┌───────┐┌────────┐┌──────┐┌────────┐ -│coordinator││analyst││writer││discussant││planner││executor││tester││reviewer│ -│ roles/ ││roles/ ││roles/││ roles/ ││roles/ ││ roles/ ││roles/││ roles/ │ -└──────────┘└───────┘└──────┘└──────────┘└───────┘└────────┘└──────┘└────────┘ - ↑ ↑ - on-demand by coordinator - ┌──────────┐ ┌─────────┐ - │ explorer │ │architect│ - │ (service)│ │(consult)│ - └──────────┘ └─────────┘ -``` - -## Command Architecture - -Each role is organized as a folder with a `role.md` orchestrator and optional `commands/` for delegation: - -``` -roles/ -├── coordinator/ -│ ├── role.md # Orchestrator (Phase 1/5 inline, Phase 2-4 delegate) -│ └── commands/ -│ ├── dispatch.md # Task chain creation (3 modes) -│ └── monitor.md # Coordination loop + message routing -├── analyst/ -│ ├── role.md -│ └── commands/ -├── writer/ -│ ├── role.md -│ └── commands/ -│ └── generate-doc.md # Multi-CLI document generation (4 doc types) -├── discussant/ -│ ├── role.md -│ └── commands/ -│ └── critique.md # Multi-perspective CLI critique -├── planner/ -│ ├── role.md -│ └── commands/ -│ └── explore.md # Multi-angle codebase exploration -├── executor/ -│ ├── role.md -│ └── commands/ -│ └── implement.md # Multi-backend code implementation -├── tester/ -│ ├── role.md -│ └── commands/ -│ └── validate.md # Test-fix cycle -├── reviewer/ -│ ├── role.md -│ └── commands/ -│ ├── code-review.md # 4-dimension code review -│ └── spec-quality.md # 5-dimension spec quality check -├── explorer/ # Service role 
(on-demand) -│ └── role.md # Multi-strategy code search & pattern discovery -└── architect/ # Consulting role (on-demand) - ├── role.md # Multi-mode architecture assessment - └── commands/ - └── assess.md # Mode-specific assessment strategies -├── fe-developer/ # Frontend pipeline role -│ └── role.md # Frontend component/page implementation -└── fe-qa/ # Frontend pipeline role - ├── role.md - └── commands/ - └── pre-delivery-checklist.md - └── role.md # 5-dimension frontend QA + GC loop -``` - -**Design principle**: role.md keeps Phase 1 (Task Discovery) and Phase 5 (Report) inline. Phases 2-4 either stay inline (simple logic) or delegate to `commands/*.md` via `Read("commands/xxx.md")` when they involve subagent delegation, CLI fan-out, or complex strategies. - -**Command files** are self-contained: each includes Strategy, Execution Steps, and Error Handling. Any subagent can `Read()` a command file and execute it independently. - -## Role Router - -### Input Parsing - -Parse `$ARGUMENTS` to extract `--role`: - -```javascript -const args = "$ARGUMENTS" -const roleMatch = args.match(/--role[=\s]+(\w+)/) -const teamName = args.match(/--team[=\s]+([\w-]+)/)?.[1] || "lifecycle" - -if (!roleMatch) { - // No --role: Orchestration Mode → auto route to coordinator - // See "Orchestration Mode" section below -} - -const role = roleMatch ? 
roleMatch[1] : "coordinator" -``` - -### Role Dispatch - -```javascript -const VALID_ROLES = { - "coordinator": { file: "roles/coordinator/role.md", prefix: null }, - "analyst": { file: "roles/analyst/role.md", prefix: "RESEARCH" }, - "writer": { file: "roles/writer/role.md", prefix: "DRAFT" }, - "discussant": { file: "roles/discussant/role.md", prefix: "DISCUSS" }, - "planner": { file: "roles/planner/role.md", prefix: "PLAN" }, - "executor": { file: "roles/executor/role.md", prefix: "IMPL" }, - "tester": { file: "roles/tester/role.md", prefix: "TEST" }, - "reviewer": { file: "roles/reviewer/role.md", prefix: ["REVIEW", "QUALITY"] }, - "explorer": { file: "roles/explorer/role.md", prefix: "EXPLORE", type: "service" }, - "architect": { file: "roles/architect/role.md", prefix: "ARCH", type: "consulting" }, - "fe-developer":{ file: "roles/fe-developer/role.md",prefix: "DEV-FE", type: "frontend-pipeline" }, - "fe-qa": { file: "roles/fe-qa/role.md", prefix: "QA-FE", type: "frontend-pipeline" } -} - -if (!VALID_ROLES[role]) { - throw new Error(`Unknown role: ${role}. 
Available: ${Object.keys(VALID_ROLES).join(', ')}`) -} - -// Read and execute role-specific logic -Read(VALID_ROLES[role].file) -// → Execute the 5-phase process defined in that file -``` - -### Orchestration Mode(无参数触发) - -当不带 `--role` 调用时,自动进入 coordinator 编排模式。用户只需传任务描述即可触发完整流程。 - -**触发方式**: - -```javascript -// 用户调用(无 --role)— 自动路由到 coordinator -Skill(skill="team-lifecycle-v2", args="任务描述") - -// 等价于 -Skill(skill="team-lifecycle-v2", args="--role=coordinator 任务描述") -``` - -**流程**: - -```javascript -if (!roleMatch) { - // Orchestration Mode: 自动路由到 coordinator - // coordinator Entry Router 先检测命令类型: - // - Worker 回调 → handleCallback → 自动推进 - // - "check" → handleCheck → 状态报告 - // - "resume" → handleResume → 手动推进 - // - 新任务 → Phase 1-3 → spawn first batch → STOP - - const role = "coordinator" - Read(VALID_ROLES[role].file) -} -``` - -**完整调用链(Spawn-and-Stop)**: - -``` -用户: Skill(args="任务描述") - │ - ├─ SKILL.md: 无 --role → Orchestration Mode → 读取 coordinator role.md - │ - ├─ coordinator Phase 1-3: 需求澄清 → TeamCreate → 创建任务链 - │ - ├─ coordinator Phase 4: spawn 首批 worker(后台) → STOP ← 立即停止 - │ - │ ┌─────────────────────────────────────────────────────┐ - │ │ Worker 在后台执行,完成后 SendMessage 回调 │ - │ │ │ - │ │ 三种唤醒源推进流水线: │ - │ │ 1. Worker 回调 → coordinator 自动推进下一步 │ - │ │ 2. 用户 "check" → 输出执行状态图 │ - │ │ 3. 
用户 "resume" → 手动推进 │ - │ └─────────────────────────────────────────────────────┘ - │ - ├─ worker 收到任务 → Skill(args="--role=xxx") → SKILL.md Role Router → role.md - │ 每个 worker 自动获取: - │ ├─ 角色定义 (role.md: identity, boundaries, message types) - │ ├─ 可用命令 (commands/*.md) - │ └─ 执行逻辑 (5-phase process) - │ - ├─ worker 完成 → SendMessage(to: coordinator) → 唤醒 coordinator → spawn 下一批 → STOP - │ (循环直到 pipeline 完成) - │ - └─ Pipeline 完成 → Phase 5: 结果汇报 -``` - -### User Commands(流水线推进指令) - -当 coordinator 已 spawn worker 并 STOP 后,用户可用以下指令唤醒 coordinator: - -| Command | Usage | Action | -|---------|-------|--------| -| `check` | `Skill(args="check")` | 输出执行状态图(pipeline graph),显示每个任务状态,不做推进 | -| `resume` | `Skill(args="resume")` | 检查所有 worker 成员状态,完成的任务自动推进到下一步 | -| `status` | `Skill(args="status")` | 同 `check` | - -> Worker 完成后的 SendMessage 回调也会自动唤醒 coordinator 推进流水线, -> 无需用户手动 `resume`。`resume` 用于手动推进(如 checkpoint 后、或回调未触发时)。 - -### Available Roles - -| Role | Task Prefix | Responsibility | Role File | -|------|-------------|----------------|-----------| -| `coordinator` | N/A | Pipeline orchestration, requirement clarification, task dispatch | [roles/coordinator/role.md](roles/coordinator/role.md) | -| `analyst` | RESEARCH-* | Seed analysis, codebase exploration, context gathering | [roles/analyst/role.md](roles/analyst/role.md) | -| `writer` | DRAFT-* | Product Brief / PRD / Architecture / Epics generation | [roles/writer/role.md](roles/writer/role.md) | -| `discussant` | DISCUSS-* | Multi-perspective critique, consensus building | [roles/discussant/role.md](roles/discussant/role.md) | -| `planner` | PLAN-* | Multi-angle exploration, structured planning | [roles/planner/role.md](roles/planner/role.md) | -| `executor` | IMPL-* | Code implementation following plans | [roles/executor/role.md](roles/executor/role.md) | -| `tester` | TEST-* | Adaptive test-fix cycles, quality gates | [roles/tester/role.md](roles/tester/role.md) | -| `reviewer` | `REVIEW-*` + `QUALITY-*` | Code review 
+ Spec quality validation (auto-switch by prefix) | [roles/reviewer/role.md](roles/reviewer/role.md) | -| `explorer` | EXPLORE-* | Code search, pattern discovery, dependency tracing (service role, on-demand) | [roles/explorer/role.md](roles/explorer/role.md) | -| `architect` | ARCH-* | Architecture assessment, tech feasibility, design review (consulting role, on-demand) | [roles/architect/role.md](roles/architect/role.md) | -| `fe-developer` | DEV-FE-* | Frontend component/page implementation, design token consumption (frontend pipeline) | [roles/fe-developer/role.md](roles/fe-developer/role.md) | -| `fe-qa` | QA-FE-* | 5-dimension frontend QA, accessibility, design compliance, GC loop (frontend pipeline) | [roles/fe-qa/role.md](roles/fe-qa/role.md) | - -## Shared Infrastructure - -### Role Isolation Rules - -**核心原则**: 每个角色仅能执行自己职责范围内的工作。 - -#### Output Tagging(强制) - -所有角色的输出必须带 `[role_name]` 标识前缀: - -```javascript -// SendMessage — content 和 summary 都必须带标识 -SendMessage({ - content: `## [${role}] ...`, - summary: `[${role}] ...` -}) - -// team_msg — summary 必须带标识 -mcp__ccw-tools__team_msg({ - summary: `[${role}] ...` -}) -``` - -#### Coordinator 隔离 - -| 允许 | 禁止 | -|------|------| -| 需求澄清 (AskUserQuestion) | ❌ 直接编写/修改代码 | -| 创建任务链 (TaskCreate) | ❌ 调用实现类 subagent (code-developer 等) | -| 分发任务给 worker | ❌ 直接执行分析/测试/审查 | -| 监控进度 (消息总线) | ❌ 绕过 worker 自行完成任务 | -| 汇报结果给用户 | ❌ 修改源代码或产物文件 | - -#### Worker 隔离 - -| 允许 | 禁止 | -|------|------| -| 处理自己前缀的任务 | ❌ 处理其他角色前缀的任务 | -| SendMessage 给 coordinator | ❌ 直接与其他 worker 通信 | -| 使用 Toolbox 中声明的工具 | ❌ 为其他角色创建任务 (TaskCreate) | -| 委派给 commands/ 中的命令 | ❌ 修改不属于本职责的资源 | - -### Message Bus (All Roles) - -Every SendMessage **before**, must call `mcp__ccw-tools__team_msg` to log: - -```javascript -mcp__ccw-tools__team_msg({ - operation: "log", - team: teamName, - from: role, - to: "coordinator", - type: "", - summary: `[${role}] `, - ref: "" -}) -``` - -**Message types by role**: - -| Role | Types | -|------|-------| -| coordinator | 
`plan_approved`, `plan_revision`, `task_unblocked`, `fix_required`, `error`, `shutdown` | -| analyst | `research_ready`, `research_progress`, `error` | -| writer | `draft_ready`, `draft_revision`, `impl_progress`, `error` | -| discussant | `discussion_ready`, `discussion_blocked`, `impl_progress`, `error` | -| planner | `plan_ready`, `plan_revision`, `impl_progress`, `error` | -| executor | `impl_complete`, `impl_progress`, `error` | -| tester | `test_result`, `impl_progress`, `fix_required`, `error` | -| reviewer | `review_result`, `quality_result`, `fix_required`, `error` | -| explorer | `explore_ready`, `explore_progress`, `task_failed` | -| architect | `arch_ready`, `arch_concern`, `arch_progress`, `error` | -| fe-developer | `dev_fe_complete`, `dev_fe_progress`, `error` | -| fe-qa | `qa_fe_passed`, `qa_fe_result`, `fix_required`, `error` | - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "${role}" --to "coordinator" --type "" --summary "[${role}] " --json`) -Bash(`ccw team list --team "${teamName}" --last 10 --json`) -Bash(`ccw team status --team "${teamName}" --json`) -``` - -### Wisdom Accumulation (All Roles) - -跨任务知识积累机制。Coordinator 在 session 初始化时创建 `wisdom/` 目录,所有 worker 在执行过程中读取和贡献 wisdom。 - -**目录结构**: -``` -{sessionFolder}/wisdom/ -├── learnings.md # 发现的模式和洞察 -├── decisions.md # 架构和设计决策 -├── conventions.md # 代码库约定 -└── issues.md # 已知风险和问题 -``` - -**Phase 2 加载(所有 worker)**: -```javascript -// Load wisdom context at start of Phase 2 -const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim() -let wisdom = {} -if (sessionFolder) { - try { wisdom.learnings = Read(`${sessionFolder}/wisdom/learnings.md`) } catch {} - try { wisdom.decisions = Read(`${sessionFolder}/wisdom/decisions.md`) } catch {} - try { wisdom.conventions = Read(`${sessionFolder}/wisdom/conventions.md`) } catch {} - try { wisdom.issues = Read(`${sessionFolder}/wisdom/issues.md`) } 
catch {} -} -``` - -**Phase 4/5 贡献(任务完成时)**: -```javascript -// Contribute wisdom after task completion -if (sessionFolder) { - const timestamp = new Date().toISOString().substring(0, 10) - - // Role-specific contributions: - // analyst → learnings (exploration dimensions, codebase patterns) - // writer → conventions (document structure, naming patterns) - // planner → decisions (task decomposition rationale) - // executor → learnings (implementation patterns), issues (bugs encountered) - // tester → issues (test failures, edge cases), learnings (test patterns) - // reviewer → conventions (code quality patterns), issues (review findings) - // explorer → conventions (codebase patterns), learnings (dependency insights) - // architect → decisions (architecture choices), issues (architectural risks) - - try { - const targetFile = `${sessionFolder}/wisdom/${wisdomTarget}.md` - const existing = Read(targetFile) - const entry = `- [${timestamp}] [${role}] ${wisdomEntry}` - Write(targetFile, existing + '\n' + entry) - } catch {} // wisdom not initialized -} -``` - -**Coordinator 注入**: Coordinator 在 spawn worker 时通过 task description 传递 `Session: {sessionFolder}`,worker 据此定位 wisdom 目录。已有 wisdom 内容为后续 worker 提供上下文,实现跨任务知识传递。 - -### Task Lifecycle (All Worker Roles) - -```javascript -// Standard task lifecycle every worker role follows -// Phase 1: Discovery -const tasks = TaskList() -const prefixes = Array.isArray(VALID_ROLES[role].prefix) ? 
VALID_ROLES[role].prefix : [VALID_ROLES[role].prefix] -const myTasks = tasks.filter(t => - prefixes.some(p => t.subject.startsWith(`${p}-`)) && - t.owner === role && - t.status === 'pending' && - t.blockedBy.length === 0 -) -if (myTasks.length === 0) return // idle -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) - -// Phase 1.5: Resume Artifact Check (防止重复产出) -// 当 session 从暂停恢复时,coordinator 已将 in_progress 任务重置为 pending。 -// Worker 在开始工作前,必须检查该任务的输出产物是否已存在。 -// 如果产物已存在且内容完整: -// → 直接跳到 Phase 5 报告完成(避免覆盖上次成果) -// 如果产物存在但不完整(如文件为空或缺少关键 section): -// → 正常执行 Phase 2-4(基于已有产物继续,而非从头开始) -// 如果产物不存在: -// → 正常执行 Phase 2-4 -// -// 每个 role 检查自己的输出路径: -// analyst → sessionFolder/spec/discovery-context.json -// writer → sessionFolder/spec/{product-brief.md | requirements/ | architecture/ | epics/} -// discussant → sessionFolder/discussions/discuss-NNN-*.md -// planner → sessionFolder/plan/plan.json -// executor → git diff (已提交的代码变更) -// tester → test pass rate -// reviewer → sessionFolder/spec/readiness-report.md (quality) 或 review findings (code) - -// Phase 2-4: Role-specific (see roles/{role}/role.md) - -// Phase 5: Report + Loop — 所有输出必须带 [role] 标识 -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: role, to: "coordinator", type: "...", summary: `[${role}] ...` }) -SendMessage({ type: "message", recipient: "coordinator", content: `## [${role}] ...`, summary: `[${role}] ...` }) -TaskUpdate({ taskId: task.id, status: 'completed' }) -// Check for next task → back to Phase 1 -``` - -## Three-Mode Pipeline - -``` -Spec-only: - RESEARCH-001 → DISCUSS-001 → DRAFT-001 → DISCUSS-002 - → DRAFT-002 → DISCUSS-003 → DRAFT-003 → DISCUSS-004 - → DRAFT-004 → DISCUSS-005 → QUALITY-001 → DISCUSS-006 - -Impl-only (backend): - PLAN-001 → IMPL-001 → TEST-001 + REVIEW-001 - -Full-lifecycle (backend): - [Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006) → IMPL-001 → TEST-001 + REVIEW-001 -``` - -### Frontend Pipelines - 
-Coordinator 根据任务关键词自动检测前端任务并路由到前端子流水线: - -``` -FE-only (纯前端): - PLAN-001 → DEV-FE-001 → QA-FE-001 - (GC loop: if QA-FE verdict=NEEDS_FIX → DEV-FE-002 → QA-FE-002, max 2 rounds) - -Fullstack (前后端并行): - PLAN-001 → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001 - -Full-lifecycle + FE: - [Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006) - → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001 -``` - -### Frontend Detection - -Coordinator 在 Phase 1 根据任务关键词 + 项目文件自动检测前端任务并选择流水线模式(fe-only / fullstack / impl-only)。检测逻辑见 [roles/coordinator/role.md](roles/coordinator/role.md)。 - -### Generator-Critic Loop (fe-developer ↔ fe-qa) - -``` -┌──────────────┐ DEV-FE artifact ┌──────────┐ -│ fe-developer │ ──────────────────→ │ fe-qa │ -│ (Generator) │ │ (Critic) │ -│ │ ←────────────────── │ │ -└──────────────┘ QA-FE feedback └──────────┘ - (max 2 rounds) - -Convergence: fe-qa.score >= 8 && fe-qa.critical_count === 0 -``` - -## Unified Session Directory - -All session artifacts are stored under a single session folder: - -``` -.workflow/.team/TLS-{slug}-{YYYY-MM-DD}/ -├── team-session.json # Session state (status, progress, completed_tasks) -├── spec/ # Spec artifacts (analyst, writer, reviewer output) -│ ├── spec-config.json -│ ├── discovery-context.json -│ ├── product-brief.md -│ ├── requirements/ # _index.md + REQ-*.md + NFR-*.md -│ ├── architecture/ # _index.md + ADR-*.md -│ ├── epics/ # _index.md + EPIC-*.md -│ ├── readiness-report.md -│ └── spec-summary.md -├── discussions/ # Discussion records (discussant output) -│ └── discuss-001..006.md -├── plan/ # Plan artifacts (planner output) -│ ├── exploration-{angle}.json -│ ├── explorations-manifest.json -│ ├── plan.json -│ └── .task/ -│ └── TASK-*.json -├── explorations/ # Explorer output (cached for cross-role reuse) -│ └── explore-*.json -├── architecture/ # Architect output (assessment reports) -│ └── arch-*.json -└── wisdom/ # Cross-task accumulated knowledge - ├── learnings.md # Patterns and insights 
discovered - ├── decisions.md # Architectural decisions made - ├── conventions.md # Codebase conventions found - └── issues.md # Known issues and risks -├── qa/ # QA output (fe-qa audit reports) -│ └── audit-fe-*.json -└── build/ # Frontend build output (fe-developer) - ├── token-files/ - └── component-files/ -``` - -Messages remain at `.workflow/.team-msg/{team-name}/` (unchanged). - -## Session Resume - -Coordinator supports `--resume` / `--continue` flags to resume interrupted sessions: - -1. Scans `.workflow/.team/TLS-*/team-session.json` for `status: "active"` or `"paused"` -2. Multiple matches → `AskUserQuestion` for user selection -3. **Audit TaskList** — 获取当前所有任务的真实状态 -4. **Reconcile** — 双向同步 session.completed_tasks ↔ TaskList 状态: - - session 已完成但 TaskList 未标记 → 修正 TaskList 为 completed - - TaskList 已完成但 session 未记录 → 补录到 session - - in_progress 状态(暂停中断)→ 重置为 pending -5. Determines remaining pipeline from reconciled state -6. Rebuilds team (`TeamCreate` + worker spawns for needed roles only) -7. Creates missing tasks with correct `blockedBy` dependency chain (uses `TASK_METADATA` lookup) -8. Verifies dependency chain integrity for existing tasks -9. Updates session file with reconciled state + current_phase -10. **Kick** — 向首个可执行任务的 worker 发送 `task_unblocked` 消息,打破 resume 死锁 -11. Jumps to Phase 4 coordination loop - -## Coordinator Spawn Template - -When coordinator spawns workers, use **background mode** (Spawn-and-Stop pattern): - -```javascript -TeamCreate({ team_name: teamName }) - -// For each ready worker — 后台 spawn,立即返回: -Task({ - subagent_type: "general-purpose", - description: `Spawn ${roleName} worker`, // ← 必填参数 - team_name: teamName, - name: "", - run_in_background: true, // ← KEY: 后台执行,coordinator 立即返回 - prompt: `你是 team "${teamName}" 的 . 
- -## ⚠️ 首要指令(MUST) -你的所有工作必须通过调用 Skill 获取角色定义后执行,禁止自行发挥: -Skill(skill="team-lifecycle-v2", args="--role=") -此调用会加载你的角色定义(role.md)、可用命令(commands/*.md)和完整执行逻辑。 - -当前需求: ${taskDescription} -约束: ${constraints} -Session: ${sessionFolder} - -## 角色准则(强制) -- 你只能处理 -* 前缀的任务,不得执行其他角色的工作 -- 所有输出(SendMessage、team_msg)必须带 [] 标识前缀 -- 仅与 coordinator 通信,不得直接联系其他 worker -- 不得使用 TaskCreate 为其他角色创建任务 - -## 消息总线(必须) -每次 SendMessage 前,先调用 mcp__ccw-tools__team_msg 记录。 - -## 工作流程(严格按顺序) -1. 调用 Skill(skill="team-lifecycle-v2", args="--role=") 获取角色定义和执行逻辑 -2. 按 role.md 中的 5-Phase 流程执行(TaskList → 找到 -* 任务 → 执行 → 汇报) -3. team_msg log + SendMessage 结果给 coordinator(带 [] 标识) -4. TaskUpdate completed → 检查下一个任务 → 回到步骤 1` -}) - -// ⚠️ Spawn 后立即 STOP — 不阻塞等待 -// Worker 完成后通过 SendMessage 回调唤醒 coordinator -// 用户也可通过 "check" / "resume" 命令手动推进 -``` - -See [roles/coordinator/role.md](roles/coordinator/role.md) for the full spawn implementation with per-role prompts and Entry Router. - -## Shared Spec Resources - -Writer 和 Reviewer 角色在 spec 模式下使用本 skill 内置的标准和模板(从 spec-generator 复制,独立维护): - -| Resource | Path | Usage | -|----------|------|-------| -| Document Standards | `specs/document-standards.md` | YAML frontmatter、命名规范、内容结构 | -| Quality Gates | `specs/quality-gates.md` | Per-phase 质量门禁、评分标尺 | -| Product Brief Template | `templates/product-brief.md` | DRAFT-001 文档生成 | -| Requirements Template | `templates/requirements-prd.md` | DRAFT-002 文档生成 | -| Architecture Template | `templates/architecture-doc.md` | DRAFT-003 文档生成 | -| Epics Template | `templates/epics-template.md` | DRAFT-004 文档生成 | - -> Writer 在执行每个 DRAFT-* 任务前 **必须先 Read** 对应的 template 文件和 document-standards.md。 -> 从 `roles/` 子目录引用时路径为 `../../specs/` 和 `../../templates/`。 - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| Unknown --role value | Error with available role list | -| Missing --role arg | Orchestration Mode → auto route to coordinator | -| Role file not found | Error with expected path 
(roles/{name}/role.md) | -| Command file not found | Fall back to inline execution in role.md | -| Task prefix conflict | Log warning, proceed | diff --git a/.claude/skills/team-lifecycle-v2/roles/analyst/role.md b/.claude/skills/team-lifecycle-v2/roles/analyst/role.md deleted file mode 100644 index ded63cf9..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/analyst/role.md +++ /dev/null @@ -1,271 +0,0 @@ -# Role: analyst - -Seed analysis, codebase exploration, and multi-dimensional context gathering. Maps to spec-generator Phase 1 (Discovery). - -## Role Identity - -- **Name**: `analyst` -- **Task Prefix**: `RESEARCH-*` -- **Output Tag**: `[analyst]` -- **Responsibility**: Seed Analysis → Codebase Exploration → Context Packaging → Report -- **Communication**: SendMessage to coordinator only - -## Role Boundaries - -### MUST -- Only process RESEARCH-* tasks -- Communicate only with coordinator -- Use Toolbox tools (ACE search, Gemini CLI) -- Generate discovery-context.json and spec-config.json -- Support file reference input (@ prefix or .md/.txt extension) - -### MUST NOT -- Create tasks for other roles -- Directly contact other workers -- Modify spec documents (only create discovery-context.json and spec-config.json) -- Skip seed analysis step -- Proceed without codebase detection - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `research_ready` | analyst → coordinator | Research complete | With discovery-context.json path and dimension summary | -| `research_progress` | analyst → coordinator | Long research progress | Intermediate progress update | -| `error` | analyst → coordinator | Unrecoverable error | Codebase access failure, CLI timeout, etc. 
| - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Research complete -mcp__ccw-tools__team_msg({ - operation: "log", - team: teamName, - from: "analyst", - to: "coordinator", - type: "research_ready", - summary: "[analyst] Research done: 5 exploration dimensions", - ref: `${sessionFolder}/spec/discovery-context.json` -}) - -// Error report -mcp__ccw-tools__team_msg({ - operation: "log", - team: teamName, - from: "analyst", - to: "coordinator", - type: "error", - summary: "[analyst] Codebase access failed" -}) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```bash -ccw team log --team "${teamName}" --from "analyst" --to "coordinator" --type "research_ready" --summary "[analyst] Research done" --ref "${sessionFolder}/discovery-context.json" --json -``` - -## Toolbox - -### Available Commands -- None (simple enough for inline execution) - -### Subagent Capabilities -- None - -### CLI Capabilities -- `ccw cli --tool gemini --mode analysis` for seed analysis - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('RESEARCH-') && - t.owner === 'analyst' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Seed Analysis - -```javascript -// Extract session folder from task description -const sessionMatch = task.description.match(/Session:\s*(.+)/) -const sessionFolder = sessionMatch ? 
sessionMatch[1].trim() : '.workflow/.team/default' - -// Parse topic from task description -const topicLines = task.description.split('\n').filter(l => !l.startsWith('Session:') && !l.startsWith('输出:') && l.trim()) -const rawTopic = topicLines[0] || task.subject.replace('RESEARCH-001: ', '') - -// Support file-reference input (consistent with spec-generator Phase 1) -const topic = (rawTopic.startsWith('@') || rawTopic.endsWith('.md') || rawTopic.endsWith('.txt')) - ? Read(rawTopic.replace(/^@/, '')) - : rawTopic - -// Use Gemini CLI for seed analysis -Bash({ - command: `ccw cli -p "PURPOSE: Analyze the following topic/idea and extract structured seed information for specification generation. -TASK: -• Extract problem statement (what problem does this solve) -• Identify target users and their pain points -• Determine domain and industry context -• List constraints and assumptions -• Identify 3-5 exploration dimensions for deeper research -• Assess complexity (simple/moderate/complex) - -TOPIC: ${topic} - -MODE: analysis -CONTEXT: @**/* -EXPECTED: JSON output with fields: problem_statement, target_users[], domain, constraints[], exploration_dimensions[], complexity_assessment -CONSTRAINTS: Output as valid JSON" --tool gemini --mode analysis --rule analysis-analyze-technical-document`, - run_in_background: true -}) -// Wait for CLI result, then parse seedAnalysis from output -``` - -### Phase 3: Codebase Exploration (conditional) - -```javascript -// Check if there's an existing codebase to explore -const hasProject = Bash(`test -f package.json || test -f Cargo.toml || test -f pyproject.toml || test -f go.mod; echo $?`) - -if (hasProject === '0') { - mcp__ccw-tools__team_msg({ - operation: "log", - team: teamName, - from: "analyst", - to: "coordinator", - type: "research_progress", - summary: "[analyst] Seed analysis complete, starting codebase exploration" - }) - - // Explore codebase using ACE search - const archSearch = mcp__ace-tool__search_context({ - project_root_path: projectRoot, - query: `Architecture patterns, main modules,
entry points for: ${topic}` - }) - - // Detect tech stack from package files - // Explore existing patterns and integration points - - var codebaseContext = { - tech_stack, - architecture_patterns, - existing_conventions, - integration_points, - constraints_from_codebase: [] - } -} else { - var codebaseContext = null -} -``` - -### Phase 4: Context Packaging - -```javascript -// Generate spec-config.json -const specConfig = { - session_id: `SPEC-${topicSlug}-${dateStr}`, - topic: topic, - status: "research_complete", - complexity: seedAnalysis.complexity_assessment || "moderate", - depth: task.description.match(/讨论深度:\s*(.+)/)?.[1] || "standard", - focus_areas: seedAnalysis.exploration_dimensions || [], - mode: "interactive", // team mode is always interactive - phases_completed: ["discovery"], - created_at: new Date().toISOString(), - session_folder: sessionFolder, - discussion_depth: task.description.match(/讨论深度:\s*(.+)/)?.[1] || "standard" -} -Write(`${sessionFolder}/spec/spec-config.json`, JSON.stringify(specConfig, null, 2)) - -// Generate discovery-context.json -const discoveryContext = { - session_id: specConfig.session_id, - phase: 1, - document_type: "discovery-context", - status: "complete", - generated_at: new Date().toISOString(), - seed_analysis: { - problem_statement: seedAnalysis.problem_statement, - target_users: seedAnalysis.target_users, - domain: seedAnalysis.domain, - constraints: seedAnalysis.constraints, - exploration_dimensions: seedAnalysis.exploration_dimensions, - complexity: seedAnalysis.complexity_assessment - }, - codebase_context: codebaseContext, - recommendations: { focus_areas: [], risks: [], open_questions: [] } -} -Write(`${sessionFolder}/spec/discovery-context.json`, JSON.stringify(discoveryContext, null, 2)) -``` - -### Phase 5: Report to Coordinator - -```javascript -const dimensionCount = discoveryContext.seed_analysis.exploration_dimensions?.length || 0 -const hasCodebase = codebaseContext !== null - -mcp__ccw-tools__team_msg({ - operation:
"log", team: teamName, - from: "analyst", to: "coordinator", - type: "research_ready", - summary: `[analyst] Research complete: ${dimensionCount} exploration dimensions, codebase context: ${hasCodebase ? 'yes' : 'no'}, complexity=${specConfig.complexity}`, - ref: `${sessionFolder}/spec/discovery-context.json` -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `[analyst] ## Research Analysis Results - -**Task**: ${task.subject} -**Complexity**: ${specConfig.complexity} -**Codebase**: ${hasCodebase ? 'existing project detected' : 'greenfield project'} - -### Problem Statement -${discoveryContext.seed_analysis.problem_statement} - -### Target Users -${(discoveryContext.seed_analysis.target_users || []).map(u => '- ' + u).join('\n')} - -### Exploration Dimensions -${(discoveryContext.seed_analysis.exploration_dimensions || []).map((d, i) => (i+1) + '. ' + d).join('\n')} - -### Output Locations -- Config: ${sessionFolder}/spec/spec-config.json -- Context: ${sessionFolder}/spec/discovery-context.json - -Research is ready; discussion round DISCUSS-001 can begin.`, - summary: `[analyst] Research ready: ${dimensionCount} dimensions, ${specConfig.complexity}` -}) - -TaskUpdate({ taskId: task.id, status: 'completed' }) - -// Check for next RESEARCH task → back to Phase 1 -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No RESEARCH-* tasks available | Idle, wait for coordinator assignment | -| Gemini CLI analysis failure | Fall back to direct Claude analysis without CLI | -| Codebase detection failed | Continue as new project (no codebase context) | -| Session folder cannot be created | Notify coordinator, request alternative path | -| Topic too vague for analysis | Report to coordinator with clarification questions | -| Unexpected error | Log error via team_msg, report to coordinator | diff --git a/.claude/skills/team-lifecycle-v2/roles/architect/commands/assess.md b/.claude/skills/team-lifecycle-v2/roles/architect/commands/assess.md deleted file mode 100644 index 8fd76f66..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/architect/commands/assess.md +++ /dev/null @@ -1,271 +0,0 @@ -# Assess Command - -## Purpose -Multi-mode
architecture assessment with mode-specific analysis strategies. Delegated from architect role.md Phase 3. - -## Input Context - -```javascript -// Provided by role.md Phase 2 -const { consultMode, sessionFolder, wisdom, explorations, projectTech, task } = context -``` - -## Mode Strategies - -### spec-review (ARCH-SPEC-*) - -Review the technical soundness of architecture documents. - -```javascript -const dimensions = [ - { name: 'consistency', weight: 0.25 }, - { name: 'scalability', weight: 0.25 }, - { name: 'security', weight: 0.25 }, - { name: 'tech-fitness', weight: 0.25 } -] - -// Load architecture documents -const archIndex = Read(`${sessionFolder}/spec/architecture/_index.md`) -const adrFiles = Glob({ pattern: `${sessionFolder}/spec/architecture/ADR-*.md` }) -const adrs = adrFiles.map(f => ({ path: f, content: Read(f) })) - -// Check ADR consistency -const adrDecisions = adrs.map(adr => { - const status = adr.content.match(/status:\s*(\w+)/i)?.[1] - const context = adr.content.match(/## Context\n([\s\S]*?)##/)?.[1]?.trim() - const decision = adr.content.match(/## Decision\n([\s\S]*?)##/)?.[1]?.trim() - return { path: adr.path, status, context, decision } -}) - -// Cross-reference: ADR decisions vs architecture index -// Flag contradictions between ADRs -// Check if tech choices align with project-tech.json - -for (const dim of dimensions) { - const score = evaluateDimension(dim.name, archIndex, adrs, projectTech) - assessment.dimensions.push({ name: dim.name, score, weight: dim.weight }) -} -``` - -### plan-review (ARCH-PLAN-*) - -Review the architectural soundness of the implementation plan. - -```javascript -const plan = JSON.parse(Read(`${sessionFolder}/plan/plan.json`)) -const taskFiles = Glob({ pattern: `${sessionFolder}/plan/.task/TASK-*.json` }) -const tasks = taskFiles.map(f => JSON.parse(Read(f))) - -// 1.
Dependency cycle detection -function detectCycles(tasks) { - const graph = {} - tasks.forEach(t => { graph[t.id] = t.depends_on || [] }) - const visited = new Set(), inStack = new Set() - function dfs(node) { - if (inStack.has(node)) return true // cycle - if (visited.has(node)) return false - visited.add(node); inStack.add(node) - for (const dep of (graph[node] || [])) { - if (dfs(dep)) { inStack.delete(node); return true } // unwind the stack before reporting - } - inStack.delete(node) - return false - } - return Object.keys(graph).filter(n => dfs(n)) -} -const cycles = detectCycles(tasks) -if (cycles.length > 0) { - assessment.concerns.push({ - severity: 'high', - concern: `Circular dependency detected: ${cycles.join(' → ')}`, - suggestion: 'Break cycle by extracting shared interface or reordering tasks' - }) -} - -// 2. Task granularity check -tasks.forEach(t => { - const fileCount = (t.files || []).length - if (fileCount > 8) { - assessment.concerns.push({ - severity: 'medium', - task: t.id, - concern: `Task touches ${fileCount} files — may be too coarse`, - suggestion: 'Split into smaller tasks with clearer boundaries' - }) - } -}) - -// 3. Convention compliance (from wisdom) -if (wisdom.conventions) { - // Check if plan follows discovered conventions -} - -// 4. Architecture alignment (from wisdom.decisions) -if (wisdom.decisions) { - // Verify plan doesn't contradict previous architectural decisions -} -``` - -### code-review (ARCH-CODE-*) - -Assess the architectural impact of code changes. - -```javascript -const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`) - .split('\n').filter(Boolean) - -// 1. Layer violation detection -function detectLayerViolation(file, content) { - // Check import depth — deeper layers should not import from shallower - const imports = (content.match(/from\s+['"]([^'"]+)['"]/g) || []) - .map(i => i.match(/['"]([^'"]+)['"]/)?.[1]).filter(Boolean) - return imports.filter(imp => isUpwardImport(file, imp)) -} - -// 2.
New dependency analysis -const pkgChanges = changedFiles.filter(f => f.includes('package.json')) -if (pkgChanges.length > 0) { - for (const pkg of pkgChanges) { - const diff = Bash(`git diff HEAD~1 -- ${pkg} 2>/dev/null || git diff --cached -- ${pkg}`) - const newDeps = (diff.match(/\+\s+"([^"]+)":\s+"[^"]+"/g) || []) - .map(d => d.match(/"([^"]+)"/)?.[1]).filter(Boolean) - if (newDeps.length > 0) { - assessment.recommendations.push({ - area: 'dependencies', - suggestion: `New dependencies added: ${newDeps.join(', ')}. Verify license compatibility and bundle size impact.` - }) - } - } -} - -// 3. Module boundary changes -const indexChanges = changedFiles.filter(f => f.endsWith('index.ts') || f.endsWith('index.js')) -if (indexChanges.length > 0) { - assessment.concerns.push({ - severity: 'medium', - concern: `Module boundary files modified: ${indexChanges.join(', ')}`, - suggestion: 'Verify public API changes are intentional and backward compatible' - }) -} - -// 4. Architectural impact scoring -assessment.architectural_impact = changedFiles.length > 10 ? 'high' - : indexChanges.length > 0 || pkgChanges.length > 0 ? 'medium' : 'low' -``` - -### consult (ARCH-CONSULT-*) - -Answer architecture decision consultations. - -```javascript -const question = task.description - .replace(/Session:.*\n?/g, '') - .replace(/Requester:.*\n?/g, '') - .trim() - -const isComplex = question.length > 200 || - /architect|design|pattern|refactor|migrate|scalab/i.test(question) - -if (isComplex) { - // Use cli-explore-agent for deep exploration - Task({ - subagent_type: "cli-explore-agent", - run_in_background: false, - description: `Architecture consultation: ${question.substring(0, 80)}`, - prompt: `## Architecture Consultation - -Question: ${question} - -## Steps -1. Run: ccw tool exec get_modules_by_depth '{}' -2. Search for relevant architectural patterns in codebase -3. Read .workflow/project-tech.json (if exists) -4.
Analyze architectural implications - -## Output -Write to: ${sessionFolder}/architecture/consult-exploration.json -Schema: { relevant_files[], patterns[], architectural_implications[], options[] }` - }) - - // Parse exploration results into assessment - try { - const exploration = JSON.parse(Read(`${sessionFolder}/architecture/consult-exploration.json`)) - assessment.recommendations = (exploration.options || []).map(opt => ({ - area: 'architecture', - suggestion: `${opt.name}: ${opt.description}`, - trade_offs: opt.trade_offs || [] - })) - } catch {} -} else { - // Simple consultation — direct analysis - assessment.recommendations.push({ - area: 'architecture', - suggestion: `Direct answer based on codebase context and wisdom` - }) -} -``` - -### feasibility (ARCH-FEASIBILITY-*) - -Assess technical feasibility. - -```javascript -const proposal = task.description - .replace(/Session:.*\n?/g, '') - .replace(/Requester:.*\n?/g, '') - .trim() - -// 1. Tech stack compatibility -const techStack = projectTech?.tech_stack || {} -// Check if proposal requires technologies not in current stack - -// 2. Codebase readiness -// Use ACE search to find relevant integration points -const searchResults = mcp__ace-tool__search_context({ - project_root_path: '.', - query: proposal -}) - -// 3. Effort estimation -const touchPoints = (searchResults?.relevant_files || []).length -const effort = touchPoints > 20 ? 'high' : touchPoints > 5 ? 'medium' : 'low' - -// 4.
Risk assessment -assessment.verdict = 'FEASIBLE' // FEASIBLE | RISKY | INFEASIBLE -assessment.effort_estimate = effort -assessment.prerequisites = [] -assessment.risks = [] - -if (touchPoints > 20) { - assessment.verdict = 'RISKY' - assessment.risks.push({ - risk: 'High touch-point count suggests significant refactoring', - mitigation: 'Phase the implementation, start with core module' - }) -} -``` - -## Verdict Logic - -```javascript -function determineVerdict(assessment) { - const highConcerns = (assessment.concerns || []).filter(c => c.severity === 'high') - const mediumConcerns = (assessment.concerns || []).filter(c => c.severity === 'medium') - - if (highConcerns.length >= 2) return 'BLOCK' - if (highConcerns.length >= 1 || mediumConcerns.length >= 3) return 'CONCERN' - return 'APPROVE' -} - -assessment.overall_verdict = determineVerdict(assessment) -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| Architecture docs not found | Assess from available context, note limitation in report | -| Plan file missing | Report to coordinator via arch_concern | -| Git diff fails (no commits) | Use staged changes or skip code-review mode | -| CLI exploration timeout | Provide partial assessment, flag as incomplete | -| Exploration results unparseable | Fall back to direct analysis without exploration | diff --git a/.claude/skills/team-lifecycle-v2/roles/architect/role.md b/.claude/skills/team-lifecycle-v2/roles/architect/role.md deleted file mode 100644 index 37b0e25f..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/architect/role.md +++ /dev/null @@ -1,368 +0,0 @@ -# Role: architect - -Architecture advisor. Provides architecture decision consultation, technical feasibility assessment, and design pattern guidance. A consulting role that offers expert judgment at key checkpoints in the spec and impl pipelines. - -## Role Identity - -- **Name**: `architect` -- **Task Prefix**: `ARCH-*` -- **Responsibility**: Context loading → Mode detection → Architecture analysis → Package assessment → Report -- **Communication**: SendMessage to coordinator only -- **Output Tag**: `[architect]` -- **Role Type**:
Consulting (advisory role; does not block the main pipeline; its output is consumed by other roles) - -## Role Boundaries - -### MUST - -- Only process tasks with the `ARCH-*` prefix -- All output (SendMessage, team_msg, logs) must carry the `[architect]` tag -- Communicate with the coordinator only, via SendMessage -- Produce structured assessment reports for callers to consume -- Switch consultation mode automatically based on the task prefix - -### MUST NOT - -- ❌ Modify source code files directly -- ❌ Perform other roles' duties such as requirements analysis, implementation, or testing -- ❌ Communicate directly with other worker roles -- ❌ Create tasks for other roles -- ❌ Make final decisions (advice only; decisions rest with the coordinator/user) -- ❌ Omit the `[architect]` tag from output - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `arch_ready` | architect → coordinator | Consultation complete | Architecture assessment/advice is ready | -| `arch_concern` | architect → coordinator | Significant risk found | Significant architecture risk discovered | -| `arch_progress` | architect → coordinator | Long analysis progress | Progress update for long-running analysis | -| `error` | architect → coordinator | Analysis failure | Analysis failed or insufficient context | - -## Message Bus - -**Before** every SendMessage, MUST call `mcp__ccw-tools__team_msg` to log the message: - -```javascript -// Consultation complete -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "architect", to: "coordinator", - type: "arch_ready", - summary: "[architect] ARCH complete: 3 recommendations, 1 concern", - ref: outputPath -}) - -// Risk alert -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "architect", to: "coordinator", - type: "arch_concern", - summary: "[architect] RISK: circular dependency in module graph" -}) -``` - -### CLI Fallback - -When the `mcp__ccw-tools__team_msg` MCP is unavailable, use the `ccw team` CLI as an equivalent fallback: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "architect" --to "coordinator" --type "arch_ready" --summary "[architect] ARCH complete" --ref "${outputPath}" --json`) -``` - -**Parameter mapping**: `team_msg(params)` → `ccw team log --team --from architect --to coordinator --type --summary "" [--ref ] [--json]` - -## Toolbox - -### Available Commands -- `commands/assess.md` — Multi-mode architecture assessment (Phase 3) - -### Subagent Capabilities - -| Agent Type | Used By | Purpose | -|------------|---------|---------| -| 
`cli-explore-agent` | commands/assess.md | Deep architecture exploration (module dependencies, layering) | - -### CLI Capabilities - -| CLI Tool | Mode | Used By | Purpose | -|----------|------|---------|---------| -| `ccw cli --tool gemini --mode analysis` | analysis | commands/assess.md | Architecture analysis, pattern assessment | - -## Consultation Modes - -Mode is selected automatically from the task subject prefix: - -| Mode | Task Pattern | Focus | Output | -|------|-------------|-------|--------| -| `spec-review` | ARCH-SPEC-* | Review architecture docs (ADRs, component diagrams) | Architecture review report | -| `plan-review` | ARCH-PLAN-* | Review architectural soundness of the implementation plan | Plan review comments | -| `code-review` | ARCH-CODE-* | Assess architectural impact of code changes | Architecture impact analysis | -| `consult` | ARCH-CONSULT-* | Answer architecture decision questions | Decision recommendation | -| `feasibility` | ARCH-FEASIBILITY-* | Technical feasibility assessment | Feasibility report | - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('ARCH-') && - t.owner === 'architect' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Context Loading & Mode Detection - -```javascript -const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim() - -// Auto-detect consultation mode from task subject -const MODE_MAP = { - 'ARCH-SPEC': 'spec-review', - 'ARCH-PLAN': 'plan-review', - 'ARCH-CODE': 'code-review', - 'ARCH-CONSULT': 'consult', - 'ARCH-FEASIBILITY': 'feasibility' -} -const modePrefix = Object.keys(MODE_MAP).find(p => task.subject.startsWith(p)) -const consultMode = modePrefix ?
MODE_MAP[modePrefix] : 'consult' - -// Load wisdom (accumulated knowledge from previous tasks) -let wisdom = {} -if (sessionFolder) { - try { wisdom.learnings = Read(`${sessionFolder}/wisdom/learnings.md`) } catch {} - try { wisdom.decisions = Read(`${sessionFolder}/wisdom/decisions.md`) } catch {} - try { wisdom.conventions = Read(`${sessionFolder}/wisdom/conventions.md`) } catch {} -} - -// Load project tech context -let projectTech = {} -try { projectTech = JSON.parse(Read('.workflow/project-tech.json')) } catch {} - -// Load exploration results if available -let explorations = [] -if (sessionFolder) { - try { - const exploreFiles = Glob({ pattern: `${sessionFolder}/explorations/*.json` }) - explorations = exploreFiles.map(f => { - try { return JSON.parse(Read(f)) } catch { return null } - }).filter(Boolean) - } catch {} -} -``` - -### Phase 3: Architecture Assessment - -Delegate to command file for mode-specific analysis: - -```javascript -try { - const assessCommand = Read("commands/assess.md") - // Execute mode-specific strategy defined in command file - // Input: consultMode, sessionFolder, wisdom, explorations, projectTech - // Output: assessment object -} catch { - // Fallback: inline execution (see below) -} -``` - -**Command**: [commands/assess.md](commands/assess.md) - -**Inline Fallback** (when command file unavailable): - -```javascript -const assessment = { - mode: consultMode, - overall_verdict: 'APPROVE', // APPROVE | CONCERN | BLOCK - dimensions: [], - concerns: [], - recommendations: [], - _metadata: { timestamp: new Date().toISOString(), wisdom_loaded: Object.keys(wisdom).length > 0 } -} - -// Mode-specific analysis -if (consultMode === 'spec-review') { - // Load architecture documents, check ADR consistency, scalability, security - const archIndex = Read(`${sessionFolder}/spec/architecture/_index.md`) - const adrFiles = Glob({ pattern: `${sessionFolder}/spec/architecture/ADR-*.md` }) - // Score dimensions: consistency, scalability, security, 
tech-fitness -} - -if (consultMode === 'plan-review') { - // Load plan.json, check task granularity, dependency cycles, convention compliance - const plan = JSON.parse(Read(`${sessionFolder}/plan/plan.json`)) - // Detect circular dependencies, oversized tasks, missing risk assessment -} - -if (consultMode === 'code-review') { - // Analyze changed files for layer violations, new deps, module boundary changes - const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`) - .split('\n').filter(Boolean) - // Check import depth, package.json changes, index.ts modifications -} - -if (consultMode === 'consult') { - // Free-form consultation — use CLI for complex questions - const question = task.description.replace(/Session:.*\n?/g, '').replace(/Requester:.*\n?/g, '').trim() - const isComplex = question.length > 200 || /architect|design|pattern|refactor|migrate/i.test(question) - if (isComplex) { - Bash({ - command: `ccw cli -p "PURPOSE: Architecture consultation — ${question} -TASK: • Analyze architectural implications • Identify options with trade-offs • Recommend approach -MODE: analysis -CONTEXT: @**/* -EXPECTED: Structured analysis with options, trade-offs, recommendation -CONSTRAINTS: Architecture-level only" --tool gemini --mode analysis --rule analysis-review-architecture`, - run_in_background: true - }) - // Wait for result, parse into assessment - } -} - -if (consultMode === 'feasibility') { - // Assess technical feasibility against current codebase - // Output: verdict (FEASIBLE|RISKY|INFEASIBLE), risks, effort estimate, prerequisites -} -``` - -### Phase 4: Package & Wisdom Contribution - -```javascript -// Write assessment to session -const outputPath = sessionFolder - ? 
`${sessionFolder}/architecture/arch-${task.subject.replace(/[^a-zA-Z0-9-]/g, '-').toLowerCase()}.json` - : '.workflow/.tmp/arch-assessment.json' - -Bash(`mkdir -p "$(dirname '${outputPath}')"`) -Write(outputPath, JSON.stringify(assessment, null, 2)) - -// Contribute to wisdom: record architectural decisions -if (sessionFolder && assessment.recommendations?.length > 0) { - try { - const decisionsPath = `${sessionFolder}/wisdom/decisions.md` - const existing = Read(decisionsPath) - const newDecisions = assessment.recommendations - .map(r => `- [${new Date().toISOString().substring(0, 10)}] ${r.area || r.dimension}: ${r.suggestion}`) - .join('\n') - Write(decisionsPath, existing + '\n' + newDecisions) - } catch {} // wisdom not initialized -} -``` - -### Phase 5: Report to Coordinator - -```javascript -const verdict = assessment.overall_verdict || assessment.verdict || 'N/A' -const concernCount = (assessment.concerns || []).length -const highConcerns = (assessment.concerns || []).filter(c => c.severity === 'high').length -const recCount = (assessment.recommendations || []).length - -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "architect", to: "coordinator", - type: highConcerns > 0 ? "arch_concern" : "arch_ready", - summary: `[architect] ARCH ${consultMode}: ${verdict}, ${concernCount} concerns, ${recCount} recommendations`, - ref: outputPath -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `[architect] ## Architecture Assessment - -**Task**: ${task.subject} -**Mode**: ${consultMode} -**Verdict**: ${verdict} - -### Summary -- **Concerns**: ${concernCount} (${highConcerns} high) -- **Recommendations**: ${recCount} -${assessment.architectural_impact ? `- **Impact**: ${assessment.architectural_impact}` : ''} - -${assessment.dimensions?.length > 0 ? `### Dimension Scores -${assessment.dimensions.map(d => `- **${d.name}**: ${d.score}%`).join('\n')}` : ''} - -${concernCount > 0 ? 
`### Concerns -${assessment.concerns.map(c => `- [${(c.severity || 'medium').toUpperCase()}] ${c.task || c.file || ''}: ${c.concern}`).join('\n')}` : ''} - -### Recommendations -${(assessment.recommendations || []).map(r => `- ${r.area || r.dimension || ''}: ${r.suggestion}`).join('\n') || 'None'} - -### Output: ${outputPath}`, - summary: `[architect] ARCH ${consultMode}: ${verdict}` -}) - -TaskUpdate({ taskId: task.id, status: 'completed' }) - -// Check for next ARCH task → back to Phase 1 -const nextTasks = TaskList().filter(t => - t.subject.startsWith('ARCH-') && - t.owner === 'architect' && - t.status === 'pending' && - t.blockedBy.length === 0 -) -if (nextTasks.length > 0) { - // Continue → back to Phase 1 -} -``` - -## Coordinator Integration - -The coordinator creates ARCH-* tasks for the architect on demand at key checkpoints: - -### Spec Pipeline (after DRAFT-003, before DISCUSS-004) - -```javascript -TaskCreate({ - subject: 'ARCH-SPEC-001: Expert review of architecture documents', - description: `Review technical soundness of architecture documents\n\nSession: ${sessionFolder}\nInput: ${sessionFolder}/spec/architecture/`, - activeForm: 'Reviewing architecture' -}) -TaskUpdate({ taskId: archSpecId, owner: 'architect' }) -// DISCUSS-004 addBlockedBy [archSpecId] -``` - -### Impl Pipeline (after PLAN-001, before IMPL-001) - -```javascript -TaskCreate({ - subject: 'ARCH-PLAN-001: Architecture review of implementation plan', - description: `Review architectural soundness of the implementation plan\n\nSession: ${sessionFolder}\nPlan: ${sessionFolder}/plan/plan.json`, - activeForm: 'Reviewing plan' -}) -TaskUpdate({ taskId: archPlanId, owner: 'architect' }) -// IMPL-001 addBlockedBy [archPlanId] -``` - -### On-Demand (any point via coordinator) - -```javascript -TaskCreate({ - subject: 'ARCH-CONSULT-001: Architecture decision consultation', - description: `${question}\n\nSession: ${sessionFolder}\nRequester: ${role}`, - activeForm: 'Consulting on architecture' -}) -TaskUpdate({ taskId: archConsultId, owner: 'architect' }) -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No ARCH-* tasks available | Idle, wait for coordinator assignment | -| Architecture documents not found | Assess from
available context, note limitation | -| Plan file not found | Report to coordinator, request location | -| CLI analysis timeout | Provide partial assessment, note incomplete | -| Insufficient context | Request explorer to gather more context via coordinator | -| Conflicting requirements | Flag as concern, provide options | -| Command file not found | Fall back to inline execution | -| Unexpected error | Log error via team_msg, report to coordinator | diff --git a/.claude/skills/team-lifecycle-v2/roles/coordinator/commands/dispatch.md b/.claude/skills/team-lifecycle-v2/roles/coordinator/commands/dispatch.md deleted file mode 100644 index a78ace33..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/coordinator/commands/dispatch.md +++ /dev/null @@ -1,523 +0,0 @@ -# Dispatch Command - Task Chain Creation - -**Purpose**: Create task chains based on execution mode, aligned with SKILL.md Three-Mode Pipeline - -**Invoked by**: Coordinator role.md Phase 3 - -**Output Tag**: `[coordinator]` - ---- - -## Task Chain Strategies - -### Role-Task Mapping (Source of Truth: SKILL.md VALID_ROLES) - -| Task Prefix | Role | VALID_ROLES Key | -|-------------|------|-----------------| -| RESEARCH-* | analyst | `analyst` | -| DISCUSS-* | discussant | `discussant` | -| DRAFT-* | writer | `writer` | -| QUALITY-* | reviewer | `reviewer` | -| PLAN-* | planner | `planner` | -| IMPL-* | executor | `executor` | -| TEST-* | tester | `tester` | -| REVIEW-* | reviewer | `reviewer` | -| DEV-FE-* | fe-developer | `fe-developer` | -| QA-FE-* | fe-qa | `fe-qa` | - ---- - -### Strategy 1: Spec-Only Mode (12 tasks) - -Pipeline: `RESEARCH → DISCUSS → DRAFT → DISCUSS → DRAFT → DISCUSS → DRAFT → DISCUSS → DRAFT → DISCUSS → QUALITY → DISCUSS` - -```javascript -if (requirements.mode === "spec-only") { - Output("[coordinator] Creating spec-only task chain (12 tasks)") - - // Task 1: Seed Analysis - TaskCreate({ - subject: "RESEARCH-001", - owner: "analyst", - description: `Seed analysis: codebase 
exploration and context gathering\nSession: ${sessionFolder}\nScope: ${requirements.scope}\nFocus: ${requirements.focus.join(", ")}\nDepth: ${requirements.depth}`, - blockedBy: [], - status: "pending" - }) - - // Task 2: Critique Research - TaskCreate({ - subject: "DISCUSS-001", - owner: "discussant", - description: `Critique research findings from RESEARCH-001, identify gaps and clarify scope\nSession: ${sessionFolder}`, - blockedBy: ["RESEARCH-001"], - status: "pending" - }) - - // Task 3: Product Brief - TaskCreate({ - subject: "DRAFT-001", - owner: "writer", - description: `Generate Product Brief based on RESEARCH-001 findings and DISCUSS-001 feedback\nSession: ${sessionFolder}`, - blockedBy: ["DISCUSS-001"], - status: "pending" - }) - - // Task 4: Critique Product Brief - TaskCreate({ - subject: "DISCUSS-002", - owner: "discussant", - description: `Critique Product Brief (DRAFT-001), evaluate completeness and clarity\nSession: ${sessionFolder}`, - blockedBy: ["DRAFT-001"], - status: "pending" - }) - - // Task 5: Requirements/PRD - TaskCreate({ - subject: "DRAFT-002", - owner: "writer", - description: `Generate Requirements/PRD incorporating DISCUSS-002 feedback\nSession: ${sessionFolder}`, - blockedBy: ["DISCUSS-002"], - status: "pending" - }) - - // Task 6: Critique Requirements - TaskCreate({ - subject: "DISCUSS-003", - owner: "discussant", - description: `Critique Requirements/PRD (DRAFT-002), validate coverage and feasibility\nSession: ${sessionFolder}`, - blockedBy: ["DRAFT-002"], - status: "pending" - }) - - // Task 7: Architecture Document - TaskCreate({ - subject: "DRAFT-003", - owner: "writer", - description: `Generate Architecture Document incorporating DISCUSS-003 feedback\nSession: ${sessionFolder}`, - blockedBy: ["DISCUSS-003"], - status: "pending" - }) - - // Task 8: Critique Architecture - TaskCreate({ - subject: "DISCUSS-004", - owner: "discussant", - description: `Critique Architecture Document (DRAFT-003), evaluate design decisions\nSession: 
${sessionFolder}`, - blockedBy: ["DRAFT-003"], - status: "pending" - }) - - // Task 9: Epics - TaskCreate({ - subject: "DRAFT-004", - owner: "writer", - description: `Generate Epics document incorporating DISCUSS-004 feedback\nSession: ${sessionFolder}`, - blockedBy: ["DISCUSS-004"], - status: "pending" - }) - - // Task 10: Critique Epics - TaskCreate({ - subject: "DISCUSS-005", - owner: "discussant", - description: `Critique Epics (DRAFT-004), validate task decomposition and priorities\nSession: ${sessionFolder}`, - blockedBy: ["DRAFT-004"], - status: "pending" - }) - - // Task 11: Spec Quality Check - TaskCreate({ - subject: "QUALITY-001", - owner: "reviewer", - description: `5-dimension spec quality validation across all spec artifacts\nSession: ${sessionFolder}`, - blockedBy: ["DISCUSS-005"], - status: "pending" - }) - - // Task 12: Final Review Discussion - TaskCreate({ - subject: "DISCUSS-006", - owner: "discussant", - description: `Final review discussion: address QUALITY-001 findings, sign-off\nSession: ${sessionFolder}`, - blockedBy: ["QUALITY-001"], - status: "pending" - }) - - Output("[coordinator] Spec-only task chain created (12 tasks)") - Output("[coordinator] Starting with: RESEARCH-001 (analyst)") -} -``` - ---- - -### Strategy 2: Impl-Only Mode (4 tasks) - -Pipeline: `PLAN → IMPL → TEST + REVIEW` - -```javascript -if (requirements.mode === "impl-only") { - Output("[coordinator] Creating impl-only task chain (4 tasks)") - - // Verify spec exists - const specExists = AskUserQuestion({ - question: "Implementation mode requires existing specifications. 
Do you have a spec file?", - choices: ["yes", "no"] - }) - - if (specExists === "no") { - Output("[coordinator] ERROR: impl-only mode requires existing specifications") - Output("[coordinator] Please run spec-only mode first or use full-lifecycle mode") - throw new Error("Missing specifications for impl-only mode") - } - - const specFile = AskUserQuestion({ - question: "Provide path to specification file:", - type: "text" - }) - - const specContent = Read(specFile) - if (!specContent) { - throw new Error(`Specification file not found: ${specFile}`) - } - - Output(`[coordinator] Using specification: ${specFile}`) - - // Task 1: Planning - TaskCreate({ - subject: "PLAN-001", - owner: "planner", - description: `Multi-angle codebase exploration and structured planning\nSession: ${sessionFolder}\nSpec: ${specFile}\nScope: ${requirements.scope}`, - blockedBy: [], - status: "pending" - }) - - // Task 2: Implementation - TaskCreate({ - subject: "IMPL-001", - owner: "executor", - description: `Code implementation following PLAN-001\nSession: ${sessionFolder}\nSpec: ${specFile}`, - blockedBy: ["PLAN-001"], - status: "pending" - }) - - // Task 3: Testing (parallel with REVIEW-001) - TaskCreate({ - subject: "TEST-001", - owner: "tester", - description: `Adaptive test-fix cycles and quality gates\nSession: ${sessionFolder}`, - blockedBy: ["IMPL-001"], - status: "pending" - }) - - // Task 4: Code Review (parallel with TEST-001) - TaskCreate({ - subject: "REVIEW-001", - owner: "reviewer", - description: `4-dimension code review of IMPL-001 output\nSession: ${sessionFolder}`, - blockedBy: ["IMPL-001"], - status: "pending" - }) - - Output("[coordinator] Impl-only task chain created (4 tasks)") - Output("[coordinator] Starting with: PLAN-001 (planner)") -} -``` - ---- - -### Strategy 3: Full-Lifecycle Mode (16 tasks) - -Pipeline: `[Spec pipeline 12] → PLAN(blockedBy: DISCUSS-006) → IMPL → TEST + REVIEW` - -```javascript -if (requirements.mode === "full-lifecycle") { - 
Output("[coordinator] Creating full-lifecycle task chain (16 tasks)") - - // ======================================== - // SPEC PHASE (12 tasks) — same as spec-only - // ======================================== - - TaskCreate({ subject: "RESEARCH-001", owner: "analyst", description: `Seed analysis: codebase exploration and context gathering\nSession: ${sessionFolder}\nScope: ${requirements.scope}\nFocus: ${requirements.focus.join(", ")}\nDepth: ${requirements.depth}`, blockedBy: [], status: "pending" }) - TaskCreate({ subject: "DISCUSS-001", owner: "discussant", description: `Critique research findings from RESEARCH-001\nSession: ${sessionFolder}`, blockedBy: ["RESEARCH-001"], status: "pending" }) - TaskCreate({ subject: "DRAFT-001", owner: "writer", description: `Generate Product Brief\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-001"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-002", owner: "discussant", description: `Critique Product Brief (DRAFT-001)\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-001"], status: "pending" }) - TaskCreate({ subject: "DRAFT-002", owner: "writer", description: `Generate Requirements/PRD\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-002"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-003", owner: "discussant", description: `Critique Requirements/PRD (DRAFT-002)\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-002"], status: "pending" }) - TaskCreate({ subject: "DRAFT-003", owner: "writer", description: `Generate Architecture Document\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-003"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-004", owner: "discussant", description: `Critique Architecture Document (DRAFT-003)\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-003"], status: "pending" }) - TaskCreate({ subject: "DRAFT-004", owner: "writer", description: `Generate Epics\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-004"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-005", owner: 
"discussant", description: `Critique Epics (DRAFT-004)\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-004"], status: "pending" }) - TaskCreate({ subject: "QUALITY-001", owner: "reviewer", description: `5-dimension spec quality validation\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-005"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-006", owner: "discussant", description: `Final review discussion and sign-off\nSession: ${sessionFolder}`, blockedBy: ["QUALITY-001"], status: "pending" }) - - // ======================================== - // IMPL PHASE (4 tasks) — blocked by spec completion - // ======================================== - - TaskCreate({ - subject: "PLAN-001", - owner: "planner", - description: `Multi-angle codebase exploration and structured planning\nSession: ${sessionFolder}\nScope: ${requirements.scope}`, - blockedBy: ["DISCUSS-006"], // Blocked until spec phase completes - status: "pending" - }) - - TaskCreate({ - subject: "IMPL-001", - owner: "executor", - description: `Code implementation following PLAN-001\nSession: ${sessionFolder}`, - blockedBy: ["PLAN-001"], - status: "pending" - }) - - TaskCreate({ - subject: "TEST-001", - owner: "tester", - description: `Adaptive test-fix cycles and quality gates\nSession: ${sessionFolder}`, - blockedBy: ["IMPL-001"], - status: "pending" - }) - - TaskCreate({ - subject: "REVIEW-001", - owner: "reviewer", - description: `4-dimension code review of IMPL-001 output\nSession: ${sessionFolder}`, - blockedBy: ["IMPL-001"], - status: "pending" - }) - - Output("[coordinator] Full-lifecycle task chain created (16 tasks)") - Output("[coordinator] Starting with: RESEARCH-001 (analyst)") -} -``` - ---- - -### Strategy 4: FE-Only Mode (3 tasks) - -Pipeline: `PLAN → DEV-FE → QA-FE` (with GC loop: max 2 rounds) - -```javascript -if (requirements.mode === "fe-only") { - Output("[coordinator] Creating fe-only task chain (3 tasks)") - - TaskCreate({ - subject: "PLAN-001", - owner: "planner", - description: 
`Multi-angle codebase exploration and structured planning (frontend focus)\nSession: ${sessionFolder}\nScope: ${requirements.scope}`, - blockedBy: [], - status: "pending" - }) - - TaskCreate({ - subject: "DEV-FE-001", - owner: "fe-developer", - description: `Frontend component/page implementation following PLAN-001\nSession: ${sessionFolder}`, - blockedBy: ["PLAN-001"], - status: "pending" - }) - - TaskCreate({ - subject: "QA-FE-001", - owner: "fe-qa", - description: `5-dimension frontend QA for DEV-FE-001 output\nSession: ${sessionFolder}`, - blockedBy: ["DEV-FE-001"], - status: "pending" - }) - - // Note: GC loop (DEV-FE-002 → QA-FE-002) created dynamically by coordinator - // when QA-FE-001 verdict = NEEDS_FIX (max 2 rounds) - - Output("[coordinator] FE-only task chain created (3 tasks)") - Output("[coordinator] Starting with: PLAN-001 (planner)") -} -``` - ---- - -### Strategy 5: Fullstack Mode (6 tasks) - -Pipeline: `PLAN → IMPL ∥ DEV-FE → TEST ∥ QA-FE → REVIEW` - -```javascript -if (requirements.mode === "fullstack") { - Output("[coordinator] Creating fullstack task chain (6 tasks)") - - TaskCreate({ - subject: "PLAN-001", - owner: "planner", - description: `Multi-angle codebase exploration and structured planning (fullstack)\nSession: ${sessionFolder}\nScope: ${requirements.scope}`, - blockedBy: [], - status: "pending" - }) - - // Backend + Frontend in parallel - TaskCreate({ - subject: "IMPL-001", - owner: "executor", - description: `Backend implementation following PLAN-001\nSession: ${sessionFolder}`, - blockedBy: ["PLAN-001"], - status: "pending" - }) - - TaskCreate({ - subject: "DEV-FE-001", - owner: "fe-developer", - description: `Frontend implementation following PLAN-001\nSession: ${sessionFolder}`, - blockedBy: ["PLAN-001"], - status: "pending" - }) - - // Testing + QA in parallel - TaskCreate({ - subject: "TEST-001", - owner: "tester", - description: `Backend test-fix cycles\nSession: ${sessionFolder}`, - blockedBy: ["IMPL-001"], - status: 
"pending" - }) - - TaskCreate({ - subject: "QA-FE-001", - owner: "fe-qa", - description: `Frontend QA for DEV-FE-001\nSession: ${sessionFolder}`, - blockedBy: ["DEV-FE-001"], - status: "pending" - }) - - // Final review after all testing - TaskCreate({ - subject: "REVIEW-001", - owner: "reviewer", - description: `Full code review (backend + frontend)\nSession: ${sessionFolder}`, - blockedBy: ["TEST-001", "QA-FE-001"], - status: "pending" - }) - - Output("[coordinator] Fullstack task chain created (6 tasks)") - Output("[coordinator] Starting with: PLAN-001 (planner)") -} -``` - ---- - -### Strategy 6: Full-Lifecycle-FE Mode (18 tasks) - -Pipeline: `[Spec 12] → PLAN(blockedBy: DISCUSS-006) → IMPL ∥ DEV-FE → TEST ∥ QA-FE → REVIEW` - -```javascript -if (requirements.mode === "full-lifecycle-fe") { - Output("[coordinator] Creating full-lifecycle-fe task chain (18 tasks)") - - // SPEC PHASE (12 tasks) — same as spec-only - TaskCreate({ subject: "RESEARCH-001", owner: "analyst", description: `Seed analysis\nSession: ${sessionFolder}\nScope: ${requirements.scope}`, blockedBy: [], status: "pending" }) - TaskCreate({ subject: "DISCUSS-001", owner: "discussant", description: `Critique research findings\nSession: ${sessionFolder}`, blockedBy: ["RESEARCH-001"], status: "pending" }) - TaskCreate({ subject: "DRAFT-001", owner: "writer", description: `Generate Product Brief\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-001"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-002", owner: "discussant", description: `Critique Product Brief\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-001"], status: "pending" }) - TaskCreate({ subject: "DRAFT-002", owner: "writer", description: `Generate Requirements/PRD\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-002"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-003", owner: "discussant", description: `Critique Requirements\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-002"], status: "pending" }) - TaskCreate({ subject: 
"DRAFT-003", owner: "writer", description: `Generate Architecture Document\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-003"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-004", owner: "discussant", description: `Critique Architecture\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-003"], status: "pending" }) - TaskCreate({ subject: "DRAFT-004", owner: "writer", description: `Generate Epics\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-004"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-005", owner: "discussant", description: `Critique Epics\nSession: ${sessionFolder}`, blockedBy: ["DRAFT-004"], status: "pending" }) - TaskCreate({ subject: "QUALITY-001", owner: "reviewer", description: `Spec quality validation\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-005"], status: "pending" }) - TaskCreate({ subject: "DISCUSS-006", owner: "discussant", description: `Final review and sign-off\nSession: ${sessionFolder}`, blockedBy: ["QUALITY-001"], status: "pending" }) - - // IMPL PHASE (6 tasks) — fullstack, blocked by spec - TaskCreate({ subject: "PLAN-001", owner: "planner", description: `Fullstack planning\nSession: ${sessionFolder}`, blockedBy: ["DISCUSS-006"], status: "pending" }) - TaskCreate({ subject: "IMPL-001", owner: "executor", description: `Backend implementation\nSession: ${sessionFolder}`, blockedBy: ["PLAN-001"], status: "pending" }) - TaskCreate({ subject: "DEV-FE-001", owner: "fe-developer", description: `Frontend implementation\nSession: ${sessionFolder}`, blockedBy: ["PLAN-001"], status: "pending" }) - TaskCreate({ subject: "TEST-001", owner: "tester", description: `Backend test-fix cycles\nSession: ${sessionFolder}`, blockedBy: ["IMPL-001"], status: "pending" }) - TaskCreate({ subject: "QA-FE-001", owner: "fe-qa", description: `Frontend QA\nSession: ${sessionFolder}`, blockedBy: ["DEV-FE-001"], status: "pending" }) - TaskCreate({ subject: "REVIEW-001", owner: "reviewer", description: `Full code review\nSession: 
${sessionFolder}`, blockedBy: ["TEST-001", "QA-FE-001"], status: "pending" }) - - Output("[coordinator] Full-lifecycle-fe task chain created (18 tasks)") - Output("[coordinator] Starting with: RESEARCH-001 (analyst)") -} -``` - ---- - -## Task Metadata Reference - -```javascript -// Unified metadata for all pipelines (used by Session Resume) -const TASK_METADATA = { - // Spec pipeline (12 tasks) - "RESEARCH-001": { role: "analyst", deps: [], description: "Seed analysis: codebase exploration and context gathering" }, - "DISCUSS-001": { role: "discussant", deps: ["RESEARCH-001"], description: "Critique research findings, identify gaps" }, - "DRAFT-001": { role: "writer", deps: ["DISCUSS-001"], description: "Generate Product Brief" }, - "DISCUSS-002": { role: "discussant", deps: ["DRAFT-001"], description: "Critique Product Brief" }, - "DRAFT-002": { role: "writer", deps: ["DISCUSS-002"], description: "Generate Requirements/PRD" }, - "DISCUSS-003": { role: "discussant", deps: ["DRAFT-002"], description: "Critique Requirements/PRD" }, - "DRAFT-003": { role: "writer", deps: ["DISCUSS-003"], description: "Generate Architecture Document" }, - "DISCUSS-004": { role: "discussant", deps: ["DRAFT-003"], description: "Critique Architecture Document" }, - "DRAFT-004": { role: "writer", deps: ["DISCUSS-004"], description: "Generate Epics" }, - "DISCUSS-005": { role: "discussant", deps: ["DRAFT-004"], description: "Critique Epics" }, - "QUALITY-001": { role: "reviewer", deps: ["DISCUSS-005"], description: "5-dimension spec quality validation" }, - "DISCUSS-006": { role: "discussant", deps: ["QUALITY-001"], description: "Final review discussion and sign-off" }, - - // Impl pipeline (4 tasks) — deps shown for impl-only mode - // In full-lifecycle, PLAN-001 deps = ["DISCUSS-006"] - "PLAN-001": { role: "planner", deps: [], description: "Multi-angle codebase exploration and structured planning" }, - "IMPL-001": { role: "executor", deps: ["PLAN-001"], description: "Code implementation 
following plan" }, - "TEST-001": { role: "tester", deps: ["IMPL-001"], description: "Adaptive test-fix cycles and quality gates" }, - "REVIEW-001": { role: "reviewer", deps: ["IMPL-001"], description: "4-dimension code review" }, - - // Frontend pipeline tasks - "DEV-FE-001": { role: "fe-developer", deps: ["PLAN-001"], description: "Frontend component/page implementation" }, - "QA-FE-001": { role: "fe-qa", deps: ["DEV-FE-001"], description: "5-dimension frontend QA" }, - // GC loop tasks (created dynamically) - "DEV-FE-002": { role: "fe-developer", deps: ["QA-FE-001"], description: "Frontend fixes (GC round 2)" }, - "QA-FE-002": { role: "fe-qa", deps: ["DEV-FE-002"], description: "Frontend QA re-check (GC round 2)" } -} - -// Pipeline chain constants -const SPEC_CHAIN = [ - "RESEARCH-001", "DISCUSS-001", "DRAFT-001", "DISCUSS-002", - "DRAFT-002", "DISCUSS-003", "DRAFT-003", "DISCUSS-004", - "DRAFT-004", "DISCUSS-005", "QUALITY-001", "DISCUSS-006" -] - -const IMPL_CHAIN = ["PLAN-001", "IMPL-001", "TEST-001", "REVIEW-001"] - -const FE_CHAIN = ["DEV-FE-001", "QA-FE-001"] - -const FULLSTACK_CHAIN = ["PLAN-001", "IMPL-001", "DEV-FE-001", "TEST-001", "QA-FE-001", "REVIEW-001"] -``` - ---- - -## Execution Method Handling - -### Sequential Execution - -```javascript -if (requirements.executionMethod === "sequential") { - Output("[coordinator] Sequential execution: tasks will run one at a time") - // Only one task active at a time - // Next task activated only after predecessor completes -} -``` - -### Parallel Execution - -```javascript -if (requirements.executionMethod === "parallel") { - Output("[coordinator] Parallel execution: independent tasks will run concurrently") - // Tasks with all deps met can run in parallel - // e.g., TEST-001 and REVIEW-001 both depend on IMPL-001 → run together - // e.g., IMPL-001 and DEV-FE-001 both depend on PLAN-001 → run together -} -``` - ---- - -## Output Format - -All outputs from this command use the `[coordinator]` tag: - -``` 
-[coordinator] Creating spec-only task chain (12 tasks) -[coordinator] Starting with: RESEARCH-001 (analyst) -``` diff --git a/.claude/skills/team-lifecycle-v2/roles/coordinator/commands/monitor.md b/.claude/skills/team-lifecycle-v2/roles/coordinator/commands/monitor.md deleted file mode 100644 index 36fc3579..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/coordinator/commands/monitor.md +++ /dev/null @@ -1,626 +0,0 @@ -# Monitor Command - Async Step-by-Step Coordination - -**Purpose**: Event-driven pipeline coordination with spawn-and-stop pattern. Three wake-up sources: worker callbacks (auto-advance), user `check` (status report), user `resume` (manual advance). - -**Invoked by**: Coordinator role.md Phase 4, or user commands (`check`, `resume`) - -**Output Tag**: `[coordinator]` - ---- - -## Design Principle - -> **Spawn-and-Stop + Callback**: Coordinator spawns worker(s) in background, outputs status, then STOPS. -> Three wake-up sources drive the pipeline forward: -> -> 1. **Worker callback** (automatic): a worker finishes and sends SendMessage → coordinator receives the message → auto-advances -> 2. **User `check`** (manual): prints the execution status graph; does not advance -> 3. **User `resume`** (manual): checks member status and advances the pipeline -> -> - ❌ Forbidden: blocking loops — `Task(run_in_background: false)` waiting serially on every worker -> - ❌ Forbidden: `while` loop + `sleep` + polling -> - ✅ Required: `Task(run_in_background: true)` — spawn in the background and return immediately -> - ✅ Required: worker SendMessage callbacks trigger the next step -> - ✅ Required: user commands (`check` / `resume`) as a manual fallback -> -> **Rationale**: Each coordinator invocation performs exactly one step (spawn or advance), then STOPS and hands control back. -> A finished worker wakes the coordinator automatically via its SendMessage callback. -> The user can also run `check` at any time to inspect status, or `resume` to advance manually. - ---- - -## Wake-up Sources - -| Source | Trigger | Action | -|--------|---------|--------| -| **Worker callback** | Worker sends SendMessage to coordinator (tagged `[role]`) | Detect completed tasks → auto-advance → spawn next batch → STOP | -| **User `check`** | `check`, `status`, `--check` | Print execution status graph (pipeline graph); no advancement → STOP | -| **User `resume`** | `resume`, `continue`, `next`, `--resume` | Check all member statuses → advance pipeline → spawn next batch → STOP | -| **Initial** | (no keyword; automatic after dispatch) | Spawn the first batch of ready tasks → STOP | - ---- - -## Invocation Detection - -```javascript -const args = $ARGUMENTS - -// 1. Worker callback detection — message carries a [role] tag -const callbackMatch = args.match(/\[(\w[\w-]*)\]/) -const WORKER_ROLES = ['analyst','writer','discussant','planner','executor','tester','reviewer','explorer','architect','fe-developer','fe-qa'] -const isCallback = callbackMatch && WORKER_ROLES.includes(callbackMatch[1]) - -// 2. User command detection -const isCheck = /\b(check|status|--check)\b/i.test(args) -const isResume = /\b(resume|continue|next|--resume|--continue)\b/i.test(args) - -// 3.
Route -if (isCallback) { - handleCallback(callbackMatch[1], args) // worker callback → auto-advance -} else if (isCheck) { - handleCheck() // status report -} else if (isResume) { - handleResume() // manual advance -} else { - handleSpawnNext() // initial spawn -} -``` - ---- - -## Pipeline Constants (aligned with dispatch.md) - -```javascript -const TASK_METADATA = { - // Spec pipeline (12 tasks) - "RESEARCH-001": { role: "analyst", deps: [] }, - "DISCUSS-001": { role: "discussant", deps: ["RESEARCH-001"] }, - "DRAFT-001": { role: "writer", deps: ["DISCUSS-001"] }, - "DISCUSS-002": { role: "discussant", deps: ["DRAFT-001"] }, - "DRAFT-002": { role: "writer", deps: ["DISCUSS-002"] }, - "DISCUSS-003": { role: "discussant", deps: ["DRAFT-002"] }, - "DRAFT-003": { role: "writer", deps: ["DISCUSS-003"] }, - "DISCUSS-004": { role: "discussant", deps: ["DRAFT-003"] }, - "DRAFT-004": { role: "writer", deps: ["DISCUSS-004"] }, - "DISCUSS-005": { role: "discussant", deps: ["DRAFT-004"] }, - "QUALITY-001": { role: "reviewer", deps: ["DISCUSS-005"] }, - "DISCUSS-006": { role: "discussant", deps: ["QUALITY-001"] }, - // Impl pipeline - "PLAN-001": { role: "planner", deps: [] }, - "IMPL-001": { role: "executor", deps: ["PLAN-001"] }, - "TEST-001": { role: "tester", deps: ["IMPL-001"] }, - "REVIEW-001": { role: "reviewer", deps: ["IMPL-001"] }, - // Frontend pipeline - "DEV-FE-001": { role: "fe-developer", deps: ["PLAN-001"] }, - "QA-FE-001": { role: "fe-qa", deps: ["DEV-FE-001"] }, - "DEV-FE-002": { role: "fe-developer", deps: ["QA-FE-001"] }, - "QA-FE-002": { role: "fe-qa", deps: ["DEV-FE-002"] } -} - -function getExpectedChain(mode) { - const SPEC = ["RESEARCH-001","DISCUSS-001","DRAFT-001","DISCUSS-002","DRAFT-002","DISCUSS-003","DRAFT-003","DISCUSS-004","DRAFT-004","DISCUSS-005","QUALITY-001","DISCUSS-006"] - const IMPL = ["PLAN-001","IMPL-001","TEST-001","REVIEW-001"] - const FE = ["DEV-FE-001","QA-FE-001"] - const FULLSTACK = ["PLAN-001","IMPL-001","DEV-FE-001","TEST-001","QA-FE-001","REVIEW-001"] - - switch
(mode) { - case "spec-only": return SPEC - case "impl-only": return IMPL - case "fe-only": return ["PLAN-001", ...FE] - case "fullstack": return FULLSTACK - case "full-lifecycle": return [...SPEC, ...IMPL] - case "full-lifecycle-fe": return [...SPEC, ...FULLSTACK] - default: return [...SPEC, ...IMPL] - } -} -``` - ---- - -## Handler: handleCallback - -> Triggered by a worker callback. Detect which task completed, update state, and auto-advance to the next step. - -```javascript -function handleCallback(senderRole, messageContent) { - const session = Read(sessionFile) - const allTasks = TaskList() - - Output(`[coordinator] Received callback from [${senderRole}]`) - - // Find completed task from this role - const activeWorkers = session.active_workers || [] - const callbackWorker = activeWorkers.find(w => w.role === senderRole) - - if (callbackWorker) { - const task = allTasks.find(t => t.subject === callbackWorker.task_subject) - - if (task && task.status === 'completed') { - Output(`[coordinator] ✓ ${callbackWorker.task_subject} confirmed complete`) - - // Remove from active workers - session.active_workers = activeWorkers.filter(w => w !== callbackWorker) - session.tasks_completed = allTasks.filter(t => t.status === 'completed').length - Write(sessionFile, session) - - // Handle checkpoints - handleCheckpoints(callbackWorker.task_subject, session) - - // Auto-advance: spawn next ready tasks - handleSpawnNext() - } else { - // Task not yet marked complete — worker sent progress message - Output(`[coordinator] ${callbackWorker.task_subject} progress update from ${senderRole}`) - Output("[coordinator] Waiting for task completion...") - // STOP — don't advance yet - } - } else { - // Callback from unknown/already-completed worker - Output(`[coordinator] Info: message from ${senderRole} (no active task tracked)`) - - // Still check if any active workers completed - const doneWorkers = activeWorkers.filter(w => { - const t = allTasks.find(t2 => t2.subject === w.task_subject) - return t && t.status === 'completed' - }) - - if
(doneWorkers.length > 0) { - for (const w of doneWorkers) { - Output(`[coordinator] ✓ ${w.task_subject} completed (${w.role})`) - handleCheckpoints(w.task_subject, session) - } - session.active_workers = activeWorkers.filter(w => !doneWorkers.includes(w)) - session.tasks_completed = allTasks.filter(t => t.status === 'completed').length - Write(sessionFile, session) - handleSpawnNext() - } - // STOP - } -} -``` - ---- - -## Handler: handleCheck - -> Read the current state and print the execution status graph (pipeline graph); take no advancing action. - -```javascript -function handleCheck() { - // 1. Load session - const session = Read(sessionFile) - const allTasks = TaskList() - - const completed = allTasks.filter(t => t.status === 'completed') - const inProgress = allTasks.filter(t => t.status === 'in_progress') - const pending = allTasks.filter(t => t.status === 'pending') - const expectedChain = getExpectedChain(session.mode) - - // 2. Header - Output("[coordinator] ═══════════════════════════════════") - Output("[coordinator] Pipeline Status") - Output("[coordinator] ═══════════════════════════════════") - Output(`[coordinator] Mode: ${session.mode} | Progress: ${completed.length}/${expectedChain.length} (${Math.round(completed.length/expectedChain.length*100)}%)`) - Output("") - - // 3. Execution Status Graph — visualize the pipeline execution point - Output("[coordinator] Execution Graph:") - Output("") - renderPipelineGraph(expectedChain, allTasks, session) - Output("") - - // 4. Active workers detail - if (inProgress.length > 0) { - Output("[coordinator] Active Workers:") - for (const t of inProgress) { - const worker = (session.active_workers || []).find(w => w.task_subject === t.subject) - const elapsed = worker ? timeSince(worker.spawned_at) : 'unknown' - Output(` ▸ ${t.subject} (${t.owner}) — running ${elapsed}`) - } - Output("") - } - - // 5.
Next ready - if (pending.length > 0) { - const nextReady = pending.filter(t => { - const meta = TASK_METADATA[t.subject] - return meta && meta.deps.every(dep => completed.some(c => c.subject === dep)) - }) - if (nextReady.length > 0) { - Output(`[coordinator] Ready to spawn: ${nextReady.map(t => t.subject).join(', ')}`) - } - } - - Output("") - Output("[coordinator] Commands: 'resume' to advance | 'check' to refresh") - - // STOP — no further action -} - -// Render pipeline graph with execution point markers -function renderPipelineGraph(chain, allTasks, session) { - // Status icon mapping - function icon(subject) { - const task = allTasks.find(t => t.subject === subject) - if (!task) return '·' // not created - if (task.status === 'completed') return '✓' - if (task.status === 'in_progress') return '▶' - return '○' // pending - } - - // Detect pipeline segments for visual grouping - const specTasks = chain.filter(s => s.startsWith('RESEARCH') || s.startsWith('DISCUSS') || s.startsWith('DRAFT') || s.startsWith('QUALITY')) - const implTasks = chain.filter(s => s.startsWith('PLAN') || s.startsWith('IMPL') || s.startsWith('TEST') || s.startsWith('REVIEW')) - const feTasks = chain.filter(s => s.startsWith('DEV-FE') || s.startsWith('QA-FE')) - - // Render each segment - if (specTasks.length > 0) { - Output(" Spec Phase:") - const specLine = specTasks.map(s => `[${icon(s)} ${s}]`).join(' → ') - Output(` ${specLine}`) - } - - if (implTasks.length > 0 || feTasks.length > 0) { - Output(" Impl Phase:") - - // Check for parallel branches (fullstack) - const hasParallel = implTasks.length > 0 && feTasks.length > 0 - - if (hasParallel) { - // Show PLAN first - const plan = implTasks.find(s => s.startsWith('PLAN')) - if (plan) Output(` [${icon(plan)} ${plan}]`) - - // Parallel branches - const backendTasks = implTasks.filter(s => !s.startsWith('PLAN')) - const beLine = backendTasks.map(s => `[${icon(s)} ${s}]`).join(' → ') - const feLine = feTasks.map(s => `[${icon(s)} 
${s}]`).join(' → ') - - Output(` ├─ BE: ${beLine}`) - Output(` └─ FE: ${feLine}`) - } else { - // Sequential - const allImpl = [...implTasks, ...feTasks] - const implLine = allImpl.map(s => `[${icon(s)} ${s}]`).join(' → ') - Output(` ${implLine}`) - } - } - - // Legend - Output("") - Output(" ✓=done ▶=running ○=pending ·=not created") -} - -function timeSince(isoString) { - const diff = Date.now() - new Date(isoString).getTime() - const mins = Math.floor(diff / 60000) - if (mins < 1) return '<1m' - if (mins < 60) return `${mins}m` - return `${Math.floor(mins/60)}h${mins%60}m` -} -``` - ---- - -## Handler: handleResume - -> Check whether each active worker has finished, process the results, and advance the pipeline. - -```javascript -function handleResume() { - const session = Read(sessionFile) - const allTasks = TaskList() - const activeWorkers = session.active_workers || [] - - // Case 1: No active workers → just spawn next - if (activeWorkers.length === 0) { - handleSpawnNext() - return - } - - // Case 2: Check each active worker's completion - const doneWorkers = [] - const runningWorkers = [] - - for (const worker of activeWorkers) { - const task = allTasks.find(t => t.subject === worker.task_subject) - if (task && task.status === 'completed') { - Output(`[coordinator] ✓ ${worker.task_subject} completed (${worker.role})`) - doneWorkers.push(worker) - } else if (task && task.status === 'in_progress') { - Output(`[coordinator] ⋯ ${worker.task_subject} still running (${worker.role})`) - runningWorkers.push(worker) - } else { - // Worker returned abnormally - Output(`[coordinator] ✗ ${worker.task_subject} status: ${task?.status || 'unknown'} (${worker.role})`) - handleWorkerFailure(task, worker, session) - // Don't add to either list — will be re-evaluated - } - } - - // Update session with remaining active workers - session.active_workers = runningWorkers - session.tasks_completed = allTasks.filter(t => t.status === 'completed').length - Write(sessionFile, session) - - // Handle checkpoints for completed workers - for (const 
worker of doneWorkers) { - handleCheckpoints(worker.task_subject, session) - } - - // Advance pipeline - if (doneWorkers.length > 0) { - // Some workers completed → try to spawn next batch - handleSpawnNext() - } else if (runningWorkers.length > 0) { - // All still running - Output("") - Output(`[coordinator] ${runningWorkers.length} worker(s) still executing.`) - Output("[coordinator] Use 'check' to monitor or 'resume' again later.") - // STOP - } else { - // Edge case: all failed → try to spawn next anyway - handleSpawnNext() - } -} -``` - ---- - -## Handler: handleSpawnNext - -> Find all ready tasks, spawn them in one batch, save state, then STOP. - -```javascript -function handleSpawnNext() { - const session = Read(sessionFile) - const allTasks = TaskList() - - const completedSubjects = allTasks.filter(t => t.status === 'completed').map(t => t.subject) - const inProgressSubjects = allTasks.filter(t => t.status === 'in_progress').map(t => t.subject) - const expectedChain = getExpectedChain(session.mode) - - // Find ALL ready tasks (deps met, not completed, not in_progress) - const readySubjects = expectedChain.filter(subject => { - if (completedSubjects.includes(subject)) return false - if (inProgressSubjects.includes(subject)) return false - const meta = TASK_METADATA[subject] - if (!meta) return false - // Full-lifecycle: PLAN-001 deps override to include DISCUSS-006 - let deps = meta.deps - if (subject === "PLAN-001" && (session.mode === "full-lifecycle" || session.mode === "full-lifecycle-fe")) { - deps = ["DISCUSS-006"] - } - return deps.every(dep => completedSubjects.includes(dep)) - }) - - // Case 1: Nothing ready, but work in progress - if (readySubjects.length === 0 && inProgressSubjects.length > 0) { - Output(`[coordinator] Waiting for: ${inProgressSubjects.join(', ')}`) - Output("[coordinator] Use 'check' to monitor or 'resume' after completion.") - // STOP - return - } - - // Case 2: All done - if (readySubjects.length === 0 && inProgressSubjects.length === 0) { - Output("[coordinator] ✓ 
All pipeline tasks completed!") - session.status = "completed" - session.completed_at = new Date().toISOString() - session.active_workers = [] - Write(sessionFile, session) - // → goto Phase 5 (report) - return "PIPELINE_COMPLETE" - } - - // Case 3: Spawn ready tasks - const newWorkers = [] - - for (const subject of readySubjects) { - const meta = TASK_METADATA[subject] - const task = allTasks.find(t => t.subject === subject) - - // Mark as in_progress - TaskUpdate({ taskId: task.id, status: 'in_progress' }) - - // Log to message bus - mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, from: "coordinator", - to: meta.role, type: "task_unblocked", - summary: `[coordinator] Spawning: ${subject} → ${meta.role}` - }) - - // Spawn worker in background — NON-BLOCKING - Task({ - subagent_type: "general-purpose", - description: `Spawn ${meta.role} worker for ${subject}`, - team_name: teamName, - name: meta.role, - prompt: buildWorkerPrompt(task, meta, session), - run_in_background: true // ← Spawn-and-Stop: runs in the background, returns immediately - }) - - newWorkers.push({ - task_subject: subject, - role: meta.role, - spawned_at: new Date().toISOString() - }) - - Output(`[coordinator] ▸ Spawned: ${meta.role} → ${subject}`) - } - - // Save state - const existingWorkers = session.active_workers || [] - session.active_workers = [...existingWorkers, ...newWorkers] - session.tasks_completed = completedSubjects.length - Write(sessionFile, session) - - // Status summary - Output("") - Output(`[coordinator] ${newWorkers.length} agent(s) spawned.`) - Output(`[coordinator] Pipeline: ${completedSubjects.length}/${expectedChain.length} completed`) - Output("") - Output("[coordinator] Type 'check' to see status, 'resume' to advance after completion.") - // STOP — coordinator finishes output, control returns to user -} -``` - ---- - -## Worker Prompt Builder - -```javascript -function buildWorkerPrompt(task, meta, session) { - const prefix = task.subject.split('-')[0] - - return `You are the ${meta.role.toUpperCase()} on team "${teamName}". - -## ⚠️ Prime Directive (MUST) -All of your work must be performed after loading your role definition via the Skill call below; do not improvise: -Skill(skill="team-lifecycle-v2", args="--role=${meta.role}") -This call loads your role definition (role.md), available commands (commands/*.md), and the complete execution logic. - -Current task: ${task.subject} - ${task.description} -Session: ${session.sessionFolder || sessionFolder} - -## Role Rules (mandatory) -- Handle only tasks with the ${prefix}-* prefix; never perform another role's work -- All output (SendMessage, team_msg) must carry the [${meta.role}] tag prefix -- Communicate only with the coordinator; never contact other workers directly -- Never use TaskCreate to create tasks for other roles - -## Message Bus (required) -Before every SendMessage, call mcp__ccw-tools__team_msg to log the message first. - -## Workflow (strictly in order) -1. Call Skill(skill="team-lifecycle-v2", args="--role=${meta.role}") to load the role definition and execution logic -2. Execute the 5-Phase flow in role.md (TaskList → find task → execute → report) -3. team_msg log + SendMessage the result to the coordinator (with the [${meta.role}] tag) -4. TaskUpdate completed → check for the next task → return to step 1` -} -``` - ---- - -## Checkpoint Handlers - -```javascript -function handleCheckpoints(subject, session) { - // Spec → Impl transition checkpoint - if (subject === "DISCUSS-006" && (session.mode === "full-lifecycle" || session.mode === "full-lifecycle-fe")) { - Output("") - Output("[coordinator] ════════════════════════════════════════") - Output("[coordinator] SPEC PHASE COMPLETE — CHECKPOINT") - Output("[coordinator] ════════════════════════════════════════") - Output("[coordinator] Spec artifacts ready in session folder.") - Output("[coordinator] 'resume' to proceed to implementation phase (PLAN-001).") - Output("[coordinator] Review spec in session folder before continuing.") - // STOP — let user review before advancing to impl - } -} -``` - ---- - -## Worker Failure Handler - -```javascript -function handleWorkerFailure(task, worker, session) { - Output(`[coordinator] Worker failure: ${worker.task_subject} (${worker.role})`) - - // Reset task to pending for retry - if (task) { - TaskUpdate({ taskId: task.id, status: 'pending' }) - } - - mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, from: "coordinator", - to: "user", type: "error", - summary: 
`[coordinator] Worker ${worker.role} failed on ${worker.task_subject}, reset to pending` - }) - - Output(`[coordinator] Task ${worker.task_subject} reset to pending. Will retry on next 'resume'.`) -} -``` - ---- - -## Session State: active_workers - -```json -{ - "active_workers": [ - { - "task_subject": "TEST-001", - "role": "tester", - "spawned_at": "2026-02-26T10:00:00Z" - }, - { - "task_subject": "REVIEW-001", - "role": "reviewer", - "spawned_at": "2026-02-26T10:00:00Z" - } - ] -} -``` - ---- - -## Output Format - -### check: status graph example - -``` -[coordinator] ═══════════════════════════════════ -[coordinator] Pipeline Status -[coordinator] ═══════════════════════════════════ -[coordinator] Mode: fullstack | Progress: 3/6 (50%) - -[coordinator] Execution Graph: - - Impl Phase: - [✓ PLAN-001] - ├─ BE: [✓ IMPL-001] → [▶ TEST-001] → [○ REVIEW-001] - └─ FE: [✓ DEV-FE-001] → [▶ QA-FE-001] - - ✓=done ▶=running ○=pending ·=not created - -[coordinator] Active Workers: - ▸ TEST-001 (tester) — running 3m - ▸ QA-FE-001 (fe-qa) — running 2m - -[coordinator] Commands: 'resume' to advance | 'check' to refresh -``` - -### callback: auto-advance example - -``` -[coordinator] Received callback from [tester] -[coordinator] ✓ TEST-001 confirmed complete -[coordinator] ▸ Spawned: reviewer → REVIEW-001 -[coordinator] Pipeline: 4/6 completed -[coordinator] Type 'check' to see status, 'resume' to advance after completion. -``` - -### resume: manual advance example - -``` -[coordinator] ✓ PLAN-001 completed (planner) -[coordinator] ⋯ IMPL-001 still running (executor) -[coordinator] ▸ Spawned: fe-developer → DEV-FE-001 -[coordinator] 2 agent(s) active. -[coordinator] Pipeline: 1/6 completed -[coordinator] Type 'check' to see status, 'resume' to advance after completion. 
-``` - ---- - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| Worker incomplete (detected on resume) | Reset task to pending; re-spawn on next resume | -| All workers still running | Report status; suggest resuming later | -| No ready tasks, some tasks in progress | Report waiting state; suggest 'check' to monitor | -| Pipeline complete | Return PIPELINE_COMPLETE and go to Phase 5 | -| Session file missing | Report error; suggest re-initialization | diff --git a/.claude/skills/team-lifecycle-v2/roles/coordinator/role.md b/.claude/skills/team-lifecycle-v2/roles/coordinator/role.md deleted file mode 100644 index e7af143d..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/coordinator/role.md +++ /dev/null @@ -1,779 +0,0 @@ -# Coordinator Role - -## Role Identity - -**Role**: Coordinator -**Output Tag**: `[coordinator]` -**Responsibility**: Orchestrate the team-lifecycle workflow by managing team creation, task dispatching, progress monitoring, and session state persistence. - -## Role Boundaries - -### MUST -- Parse user requirements and clarify ambiguous inputs -- Create team and spawn worker subagents -- Dispatch tasks with proper dependency chains -- Monitor task progress and route messages -- Handle session resume and reconciliation -- Maintain session state persistence -- Provide progress reports and next-step options - -### MUST NOT -- Execute spec/impl/research work directly (delegate to workers) -- Modify task outputs (workers own their deliverables) -- Skip dependency validation -- Proceed without user confirmation at checkpoints - -## Message Types - -| Message Type | Sender | Trigger | Coordinator Action | -|--------------|--------|---------|-------------------| -| `task_complete` | Worker | Task finished | Update session, check dependencies, kick next task | -| `task_blocked` | Worker | Dependency missing | Log block reason, wait for predecessor | -| `discussion_needed` | Worker | Ambiguity found | Route to user via AskUserQuestion | -| `research_ready` | analyst | Research done | Checkpoint with user before impl | - -## Toolbox - -### Available Commands -- `commands/dispatch.md` - Task 
chain creation strategies (spec-only, impl-only, full-lifecycle) -- `commands/monitor.md` - Coordination loop with message routing and checkpoint handling - -### Subagent Capabilities -- `TeamCreate` - Initialize team with session metadata -- `TeamSpawn` - Spawn worker subagents (analyst, writer, discussant, planner, executor, tester, reviewer, etc.) -- `TaskCreate` - Create tasks with dependencies -- `TaskUpdate` - Update task status/metadata -- `TaskGet` - Retrieve task details -- `AskUserQuestion` - Interactive user prompts - -### CLI Capabilities -- Session file I/O (`Read`, `Write`) -- Directory scanning (`Glob`) -- Background execution for long-running tasks - ---- - -## Execution Flow - -### Entry Router: Command Detection - -**Purpose**: Detect invocation type and route to appropriate handler. The coordinator has three wake-up sources: worker callbacks, user commands, and the initial invocation. - -```javascript -const args = $ARGUMENTS - -// ─── 1. Worker callback detection ─── -// After a worker finishes it SendMessages the coordinator; the message carries a [role] tag -const callbackMatch = args.match(/\[(\w[\w-]*)\]/) -const WORKER_ROLES = ['analyst','writer','discussant','planner','executor','tester','reviewer','explorer','architect','fe-developer','fe-qa'] -const isCallback = callbackMatch && WORKER_ROLES.includes(callbackMatch[1]) - -// ─── 2. User command detection ─── -const isCheck = /\b(check|status|--check)\b/i.test(args) -const isResume = /\b(resume|continue|next|--resume|--continue)\b/i.test(args) - -// ─── 3. Route ─── -if (isCallback || isCheck || isResume) { - // Need active session - const sessionFile = findActiveSession() - if (!sessionFile) { - Output("[coordinator] No active session found. 
Start a new session by providing a task description.") - return - } - - // Load monitor command and execute - Read("commands/monitor.md") - - if (isCallback) { - // Worker callback → auto-advance - handleCallback(callbackMatch[1], args) - } else if (isCheck) { - // Status report → print the execution status graph - handleCheck() - } else if (isResume) { - // Manual advance → check member status and advance - handleResume() - } - - // STOP — stop immediately after every command finishes - return -} - -// ─── 4. Normal invoke → check for session resume or new session ─── -goto Phase0 - -// Helper: find active session file -function findActiveSession() { - const sessionFiles = Glob("D:/Claude_dms3/.workflow/.sessions/team-lifecycle-*.json") - for (const f of sessionFiles) { - const s = Read(f) - if (s.status === "active") return f - } - return null -} -``` - ---- - -### Phase 0: Session Resume Check - -**Purpose**: Detect and resume interrupted sessions - -```javascript -// Scan for session files -const sessionFiles = Glob("D:/Claude_dms3/.workflow/.sessions/team-lifecycle-*.json") - -if (sessionFiles.length === 0) { - // No existing session, proceed to Phase 1 - goto Phase1 -} - -if (sessionFiles.length === 1) { - // Single session found - const session = Read(sessionFiles[0]) - if (session.status === "active" || session.status === "paused") { - Output("[coordinator] Resuming session: " + session.session_id) - goto SessionReconciliation - } -} - -if (sessionFiles.length > 1) { - // Multiple sessions - ask user - const choices = sessionFiles.map(f => { - const s = Read(f) - return `${s.session_id} (${s.status}) - ${s.mode} - ${s.tasks_completed}/${s.tasks_total}` - }) - - const answer = AskUserQuestion({ - question: "Multiple sessions found. 
Which to resume?", - choices: ["Create new session", ...choices] - }) - - if (answer === "Create new session") { - goto Phase1 - } else { - const selectedSession = Read(sessionFiles[answer.index - 1]) - goto SessionReconciliation - } -} - -// Session Reconciliation Process -SessionReconciliation: { - Output("[coordinator] Reconciling session state...") - - // Pipeline constants (aligned with SKILL.md Three-Mode Pipeline) - const SPEC_CHAIN = [ - "RESEARCH-001", "DISCUSS-001", "DRAFT-001", "DISCUSS-002", - "DRAFT-002", "DISCUSS-003", "DRAFT-003", "DISCUSS-004", - "DRAFT-004", "DISCUSS-005", "QUALITY-001", "DISCUSS-006" - ] - - const IMPL_CHAIN = ["PLAN-001", "IMPL-001", "TEST-001", "REVIEW-001"] - - const FE_CHAIN = ["DEV-FE-001", "QA-FE-001"] - - const FULLSTACK_CHAIN = ["PLAN-001", "IMPL-001", "DEV-FE-001", "TEST-001", "QA-FE-001", "REVIEW-001"] - - // Task metadata — role must match VALID_ROLES in SKILL.md - const TASK_METADATA = { - // Spec pipeline (12 tasks) - "RESEARCH-001": { role: "analyst", phase: "spec", deps: [], description: "Seed analysis: codebase exploration and context gathering" }, - "DISCUSS-001": { role: "discussant", phase: "spec", deps: ["RESEARCH-001"], description: "Critique research findings" }, - "DRAFT-001": { role: "writer", phase: "spec", deps: ["DISCUSS-001"], description: "Generate Product Brief" }, - "DISCUSS-002": { role: "discussant", phase: "spec", deps: ["DRAFT-001"], description: "Critique Product Brief" }, - "DRAFT-002": { role: "writer", phase: "spec", deps: ["DISCUSS-002"], description: "Generate Requirements/PRD" }, - "DISCUSS-003": { role: "discussant", phase: "spec", deps: ["DRAFT-002"], description: "Critique Requirements/PRD" }, - "DRAFT-003": { role: "writer", phase: "spec", deps: ["DISCUSS-003"], description: "Generate Architecture Document" }, - "DISCUSS-004": { role: "discussant", phase: "spec", deps: ["DRAFT-003"], description: "Critique Architecture Document" }, - "DRAFT-004": { role: "writer", phase: "spec", deps: 
["DISCUSS-004"], description: "Generate Epics" }, - "DISCUSS-005": { role: "discussant", phase: "spec", deps: ["DRAFT-004"], description: "Critique Epics" }, - "QUALITY-001": { role: "reviewer", phase: "spec", deps: ["DISCUSS-005"], description: "5-dimension spec quality validation" }, - "DISCUSS-006": { role: "discussant", phase: "spec", deps: ["QUALITY-001"], description: "Final review discussion and sign-off" }, - - // Impl pipeline (deps shown for impl-only; full-lifecycle adds PLAN-001 → ["DISCUSS-006"]) - "PLAN-001": { role: "planner", phase: "impl", deps: [], description: "Multi-angle codebase exploration and structured planning" }, - "IMPL-001": { role: "executor", phase: "impl", deps: ["PLAN-001"], description: "Code implementation following plan" }, - "TEST-001": { role: "tester", phase: "impl", deps: ["IMPL-001"], description: "Adaptive test-fix cycles and quality gates" }, - "REVIEW-001": { role: "reviewer", phase: "impl", deps: ["IMPL-001"], description: "4-dimension code review" }, - - // Frontend pipeline tasks - "DEV-FE-001": { role: "fe-developer", phase: "impl", deps: ["PLAN-001"], description: "Frontend component/page implementation" }, - "QA-FE-001": { role: "fe-qa", phase: "impl", deps: ["DEV-FE-001"], description: "5-dimension frontend QA" } - } - - // Helper: Get predecessor task - function getPredecessor(taskId, chain) { - const index = chain.indexOf(taskId) - return index > 0 ? 
chain[index - 1] : null - } - - // Step 1: Audit current state - const session = Read(sessionFile) - const teamState = TeamGet(session.team_id) - const allTasks = teamState.tasks - - Output("[coordinator] Session audit:") - Output(` Mode: ${session.mode}`) - Output(` Tasks completed: ${session.tasks_completed}/${session.tasks_total}`) - Output(` Status: ${session.status}`) - - // Step 2: Reconcile task states - const completedTasks = allTasks.filter(t => t.status === "completed") - const activeTasks = allTasks.filter(t => t.status === "active") - const blockedTasks = allTasks.filter(t => t.status === "blocked") - const pendingTasks = allTasks.filter(t => t.status === "pending") - - Output("[coordinator] Task breakdown:") - Output(` Completed: ${completedTasks.length}`) - Output(` Active: ${activeTasks.length}`) - Output(` Blocked: ${blockedTasks.length}`) - Output(` Pending: ${pendingTasks.length}`) - - // Step 3: Determine remaining work - const expectedChain = - session.mode === "spec-only" ? SPEC_CHAIN : - session.mode === "impl-only" ? IMPL_CHAIN : - session.mode === "fe-only" ? ["PLAN-001", ...FE_CHAIN] : - session.mode === "fullstack" ? FULLSTACK_CHAIN : - session.mode === "full-lifecycle-fe" ? 
[...SPEC_CHAIN, ...FULLSTACK_CHAIN] : - [...SPEC_CHAIN, ...IMPL_CHAIN] // full-lifecycle default - - const remainingTaskIds = expectedChain.filter(id => - !completedTasks.some(t => t.subject === id) - ) - - Output(`[coordinator] Remaining tasks: ${remainingTaskIds.join(", ")}`) - - // Step 4: Rebuild team if needed - if (!teamState || teamState.status === "disbanded") { - Output("[coordinator] Team disbanded, recreating...") - TeamCreate({ - team_id: session.team_id, - session_id: session.session_id, - mode: session.mode - }) - } - - // Step 5: Create missing tasks - for (const taskId of remainingTaskIds) { - const existingTask = allTasks.find(t => t.subject === taskId) - if (!existingTask) { - const metadata = TASK_METADATA[taskId] - TaskCreate({ - subject: taskId, - owner: metadata.role, - description: `${metadata.description}\nSession: ${sessionFolder}`, - blockedBy: metadata.deps, - status: "pending" - }) - Output(`[coordinator] Created missing task: ${taskId} (${metadata.role})`) - } - } - - // Step 6: Verify dependencies - for (const taskId of remainingTaskIds) { - const task = allTasks.find(t => t.subject === taskId) - if (!task) continue - const metadata = TASK_METADATA[taskId] - const allDepsMet = metadata.deps.every(depId => - completedTasks.some(t => t.subject === depId) - ) - - if (allDepsMet && task.status !== "completed") { - Output(`[coordinator] Unblocked task: ${taskId} (${metadata.role})`) - } - } - - // Step 7: Update session state - session.status = "active" - session.resumed_at = new Date().toISOString() - session.tasks_completed = completedTasks.length - Write(sessionFile, session) - - // Step 8: Report reconciliation - Output("[coordinator] Session reconciliation complete") - Output(`[coordinator] Ready to resume from: ${remainingTaskIds[0] || "all tasks complete"}`) - - // Step 9: Kick next task - if (remainingTaskIds.length > 0) { - const nextTaskId = remainingTaskIds[0] - const nextTask = TaskGet(nextTaskId) - const metadata = 
TASK_METADATA[nextTaskId] - - if (metadata.deps.every(depId => completedTasks.some(t => t.subject === depId))) { - TaskUpdate(nextTaskId, { status: "active" }) - Output(`[coordinator] Kicking task: ${nextTaskId}`) - goto Phase4_CoordinationLoop - } else { - Output(`[coordinator] Next task ${nextTaskId} blocked on: ${metadata.deps.join(", ")}`) - goto Phase4_CoordinationLoop - } - } else { - Output("[coordinator] All tasks complete!") - goto Phase5_Report - } -} -``` - ---- - -### Phase 1: Requirement Clarification - -**Purpose**: Parse user input and clarify execution parameters - -```javascript -Output("[coordinator] Phase 1: Requirement Clarification") - -// Parse $ARGUMENTS -const userInput = $ARGUMENTS - -// Extract mode if specified -let mode = null -if (userInput.includes("spec-only")) mode = "spec-only" -if (userInput.includes("impl-only")) mode = "impl-only" -if (userInput.includes("full-lifecycle")) mode = "full-lifecycle" - -// Extract scope if specified -let scope = null -if (userInput.includes("scope:")) { - scope = userInput.match(/scope:\s*([^\n]+)/)[1] -} - -// Extract focus areas -let focus = [] -if (userInput.includes("focus:")) { - focus = userInput.match(/focus:\s*([^\n]+)/)[1].split(",").map(s => s.trim()) -} - -// Extract depth preference -let depth = "standard" -if (userInput.includes("depth:shallow")) depth = "shallow" -if (userInput.includes("depth:deep")) depth = "deep" - -// Ask for missing parameters -if (!mode) { - mode = AskUserQuestion({ - question: "Select execution mode:", - choices: [ - "spec-only - Generate specifications only", - "impl-only - Implementation only (requires existing spec)", - "full-lifecycle - Complete spec + implementation", - "fe-only - Frontend-only pipeline (plan → dev → QA)", - "fullstack - Backend + frontend parallel pipeline", - "full-lifecycle-fe - Full lifecycle with frontend (spec → fullstack)" - ] - }) -} - -if (!scope) { - scope = AskUserQuestion({ - question: "Describe the project scope:", - type: 
"text" - }) -} - -if (focus.length === 0) { - const focusAnswer = AskUserQuestion({ - question: "Any specific focus areas? (optional)", - type: "text", - optional: true - }) - if (focusAnswer) { - focus = focusAnswer.split(",").map(s => s.trim()) - } -} - -// Determine execution method -const executionMethod = AskUserQuestion({ - question: "Execution method:", - choices: [ - "sequential - One task at a time (safer, slower)", - "parallel - Multiple tasks in parallel (faster, more complex)" - ] -}) - -// Store clarified requirements -const requirements = { - mode, - scope, - focus, - depth, - executionMethod, - originalInput: userInput -} - -// --- Frontend Detection --- -// Auto-detect frontend tasks and adjust pipeline mode -const FE_KEYWORDS = /component|page|UI|前端|frontend|CSS|HTML|React|Vue|Tailwind|组件|页面|样式|layout|responsive|Svelte|Next\.js|Nuxt|shadcn|设计系统|design.system/i -const BE_KEYWORDS = /API|database|server|后端|backend|middleware|auth|REST|GraphQL|migration|schema|model|controller|service/i - -function detectImplMode(taskDescription) { - const hasFE = FE_KEYWORDS.test(taskDescription) - const hasBE = BE_KEYWORDS.test(taskDescription) - - // Also check project files for frontend frameworks - const hasFEFiles = Bash(`test -f package.json && (grep -q react package.json || grep -q vue package.json || grep -q svelte package.json || grep -q next package.json); echo $?`) === '0' - - if (hasFE && hasBE) return 'fullstack' - if (hasFE || hasFEFiles) return 'fe-only' - return 'impl-only' // default backend -} - -// Apply frontend detection for implementation modes -if (mode === 'impl-only' || mode === 'full-lifecycle') { - const detectedMode = detectImplMode(scope + ' ' + userInput) - if (detectedMode !== 'impl-only') { - // Frontend detected — upgrade pipeline mode - if (mode === 'impl-only') { - mode = detectedMode // fe-only or fullstack - } else if (mode === 'full-lifecycle') { - mode = 'full-lifecycle-fe' // spec + fullstack - } - requirements.mode = mode - 
Output(`[coordinator] Frontend detected → pipeline upgraded to: ${mode}`) - } -} - -Output("[coordinator] Requirements clarified:") -Output(` Mode: ${mode}`) -Output(` Scope: ${scope}`) -Output(` Focus: ${focus.join(", ") || "none"}`) -Output(` Depth: ${depth}`) -Output(` Execution: ${executionMethod}`) - -goto Phase2 -``` - ---- - -### Phase 2: Create Team + Initialize Session - -**Purpose**: Initialize team and session state - -```javascript -Output("[coordinator] Phase 2: Team Creation") - -// Generate session ID -const sessionId = `team-lifecycle-${Date.now()}` -const teamId = sessionId - -// Create team -TeamCreate({ - team_id: teamId, - session_id: sessionId, - mode: requirements.mode, - scope: requirements.scope, - focus: requirements.focus, - depth: requirements.depth, - executionMethod: requirements.executionMethod -}) - -Output(`[coordinator] Team created: ${teamId}`) - -// Initialize wisdom directory -const wisdomDir = `${sessionFolder}/wisdom` -Bash(`mkdir -p "${wisdomDir}"`) -Write(`${wisdomDir}/learnings.md`, `# Learnings\n\n\n`) -Write(`${wisdomDir}/decisions.md`, `# Decisions\n\n\n`) -Write(`${wisdomDir}/conventions.md`, `# Conventions\n\n\n\n`) -Write(`${wisdomDir}/issues.md`, `# Known Issues\n\n\n`) - -// Initialize session file -const sessionFile = `D:/Claude_dms3/.workflow/.sessions/${sessionId}.json` -const sessionData = { - session_id: sessionId, - team_id: teamId, - mode: requirements.mode, - scope: requirements.scope, - focus: requirements.focus, - depth: requirements.depth, - executionMethod: requirements.executionMethod, - status: "active", - created_at: new Date().toISOString(), - tasks_total: requirements.mode === "spec-only" ? 12 : - requirements.mode === "impl-only" ? 4 : - requirements.mode === "fe-only" ? 3 : - requirements.mode === "fullstack" ? 6 : - requirements.mode === "full-lifecycle-fe" ? 18 : 16, - tasks_completed: 0, - current_phase: requirements.mode === "impl-only" ? 
"impl" : "spec" -} - -Write(sessionFile, sessionData) -Output(`[coordinator] Session file created: ${sessionFile}`) - -// ⚠️ Workers are NOT pre-spawned here. -// Workers are spawned on-demand in Phase 4 via Task(run_in_background: true). -// Coordinator spawns → STOPS → worker 回调或用户 check/resume 唤醒 coordinator. -// See SKILL.md Coordinator Spawn Template for worker prompt templates. -// -// Worker roles by mode (spawned on-demand, must match VALID_ROLES in SKILL.md): -// spec-only: analyst, discussant, writer, reviewer -// impl-only: planner, executor, tester, reviewer -// fe-only: planner, fe-developer, fe-qa -// fullstack: planner, executor, fe-developer, tester, fe-qa, reviewer -// full-lifecycle: analyst, discussant, writer, reviewer, planner, executor, tester -// full-lifecycle-fe: all of the above + fe-developer, fe-qa -// On-demand (ambiguity): analyst or explorer - -goto Phase3 -``` - ---- - -### Phase 3: Create Task Chain - -**Purpose**: Dispatch tasks based on execution mode - -```javascript -Output("[coordinator] Phase 3: Task Dispatching") - -// Delegate to command file -const dispatchStrategy = Read("commands/dispatch.md") - -// Execute strategy defined in command file -// (dispatch.md contains the complete task chain creation logic) - -goto Phase4 -``` - ---- - -### Phase 4: Spawn-and-Stop - -**Purpose**: Spawn first batch of ready workers, then STOP. 
Subsequent advancement is driven by worker callbacks or user commands. - -> **Design Principles (Spawn-and-Stop + Callback)**: -> - ❌ Forbidden: a blocking loop of `Task(run_in_background: false)` waiting serially for all workers -> - ❌ Forbidden: `while` + `sleep` + polling -> - ✅ Adopted: `Task(run_in_background: true)` background spawn, return immediately -> - ✅ Adopted: worker SendMessage callbacks automatically wake the coordinator -> - ✅ Adopted: user `check` / `resume` commands assist advancement -> -> The coordinator performs only one step at a time, then STOPs and hands back control. -> The pipeline advances via three wake-up sources: worker callbacks (automatic), user resume (manual), user check (status). - -```javascript -Output("[coordinator] Phase 4: Spawning first batch...") - -// Load monitor command logic -Read("commands/monitor.md") - -// Spawn first batch of ready tasks → STOP -const result = handleSpawnNext() - -if (result === "PIPELINE_COMPLETE") { - goto Phase5 -} - -// STOP — coordinator finishes its output; control returns to the user -// Subsequent advancement: -// 1. Worker completes → SendMessage callback → Entry Router → handleCallback → auto-advance -// 2. User types "check" → Entry Router → handleCheck → status report -// 3. User types "resume" → Entry Router → handleResume → manual advance -Output("") -Output("[coordinator] Coordinator paused. Pipeline will advance via:") -Output(" • Worker callbacks (automatic)") -Output(" • 'check' — view execution status graph") -Output(" • 'resume' — manually advance pipeline") -``` - ---- - -### Phase 5: Report + Persistent Loop - -**Purpose**: Provide completion report and offer next steps - -```javascript -Output("[coordinator] Phase 5: Completion Report") - -// Load session state -const session = Read(sessionFile) -const teamState = TeamGet(session.team_id) - -// Generate report -Output("[coordinator] ========================================") -Output("[coordinator] TEAM LIFECYCLE EXECUTION COMPLETE") -Output("[coordinator] ========================================") -Output(`[coordinator] Session ID: ${session.session_id}`) -Output(`[coordinator] Mode: ${session.mode}`) -Output(`[coordinator] Tasks Completed: ${session.tasks_completed}/${session.tasks_total}`) -Output(`[coordinator] Duration: ${calculateDuration(session.created_at, new Date())}`) - -// List deliverables -const completedTasks = 
teamState.tasks.filter(t => t.status === "completed") -Output("[coordinator] Deliverables:") -for (const task of completedTasks) { - Output(` ✓ ${task.subject}: ${task.description}`) - if (task.output_file) { - Output(` Output: ${task.output_file}`) - } -} - -// Update session status -session.status = "completed" -session.completed_at = new Date().toISOString() -Write(sessionFile, session) - -// Offer next steps -const nextAction = AskUserQuestion({ - question: "What would you like to do next?", - choices: [ - "exit - End session", - "review - Review specific deliverables", - "extend - Add more tasks to this session", - "handoff-lite-plan - Create lite-plan from spec", - "handoff-full-plan - Create full-plan from spec", - "handoff-req-plan - Create req-plan from requirements", - "handoff-create-issues - Generate GitHub issues" - ] -}) - -switch (nextAction) { - case "exit": - Output("[coordinator] Session ended. Goodbye!") - break - - case "review": - const taskToReview = AskUserQuestion({ - question: "Which task output to review?", - choices: completedTasks.map(t => t.subject) - }) - const reviewTask = completedTasks.find(t => t.subject === taskToReview) - if (reviewTask.output_file) { - const content = Read(reviewTask.output_file) - Output(`[coordinator] Task: ${reviewTask.subject}`) - Output(content) - } - goto Phase5 // Loop back for more actions - - case "extend": - const extensionScope = AskUserQuestion({ - question: "Describe additional work:", - type: "text" - }) - Output("[coordinator] Creating extension tasks...") - // Create custom tasks based on extension scope - // (Implementation depends on extension requirements) - goto Phase4 // Return to coordination loop - - case "handoff-lite-plan": - Output("[coordinator] Generating lite-plan from specifications...") - // Read spec completion output (DISCUSS-006 = final sign-off) - const specOutput = Read(getTaskOutput("DISCUSS-006")) - // Create lite-plan format - const litePlan = generateLitePlan(specOutput) - 
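The handoff generators in this file are stubs. As one concrete illustration, an issues-style handoff might split a spec on `## Epic:` headings and emit one issue per epic. Both the helper name `sketchGenerateIssues` and the heading convention are assumptions for illustration; adjust to the real epics template.

```javascript
// Hypothetical sketch of a generateGitHubIssues-style handoff:
// split the spec markdown on "## Epic: " headings, one issue per epic.
function sketchGenerateIssues(specMarkdown) {
  const sections = specMarkdown.split(/^## Epic: /m).slice(1)
  return {
    issues: sections.map(s => {
      const [title, ...body] = s.split('\n')
      return { title: title.trim(), body: body.join('\n').trim(), labels: ['feature'] }
    })
  }
}
```

The resulting JSON mirrors the shape the `handoff-create-issues` case writes to `${session.session_id}-issues.json`.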
const litePlanFile = `D:/Claude_dms3/.workflow/.sessions/${session.session_id}-lite-plan.md` - Write(litePlanFile, litePlan) - Output(`[coordinator] Lite-plan created: ${litePlanFile}`) - goto Phase5 - - case "handoff-full-plan": - Output("[coordinator] Generating full-plan from specifications...") - const fullSpecOutput = Read(getTaskOutput("DISCUSS-006")) - const fullPlan = generateFullPlan(fullSpecOutput) - const fullPlanFile = `D:/Claude_dms3/.workflow/.sessions/${session.session_id}-full-plan.md` - Write(fullPlanFile, fullPlan) - Output(`[coordinator] Full-plan created: ${fullPlanFile}`) - goto Phase5 - - case "handoff-req-plan": - Output("[coordinator] Generating req-plan from requirements...") - const reqAnalysis = Read(getTaskOutput("RESEARCH-001")) - const reqPlan = generateReqPlan(reqAnalysis) - const reqPlanFile = `D:/Claude_dms3/.workflow/.sessions/${session.session_id}-req-plan.md` - Write(reqPlanFile, reqPlan) - Output(`[coordinator] Req-plan created: ${reqPlanFile}`) - goto Phase5 - - case "handoff-create-issues": - Output("[coordinator] Generating GitHub issues...") - const issuesSpec = Read(getTaskOutput("DISCUSS-006")) - const issues = generateGitHubIssues(issuesSpec) - const issuesFile = `D:/Claude_dms3/.workflow/.sessions/${session.session_id}-issues.json` - Write(issuesFile, issues) - Output(`[coordinator] Issues created: ${issuesFile}`) - Output("[coordinator] Use GitHub CLI to import: gh issue create --title ... 
--body ...") - goto Phase5 -} - -// Helper functions -function calculateDuration(start, end) { - const diff = new Date(end) - new Date(start) - const minutes = Math.floor(diff / 60000) - const seconds = Math.floor((diff % 60000) / 1000) - return `${minutes}m ${seconds}s` -} - -function getTaskOutput(taskId) { - const task = TaskGet(taskId) - return task.output_file -} - -function generateLitePlan(specOutput) { - // Parse spec output and create lite-plan format - return `# Lite Plan\n\n${specOutput}\n\n## Implementation Steps\n- Step 1\n- Step 2\n...` -} - -function generateFullPlan(specOutput) { - // Parse spec output and create full-plan format with detailed breakdown - return `# Full Plan\n\n${specOutput}\n\n## Detailed Implementation\n### Phase 1\n### Phase 2\n...` -} - -function generateReqPlan(reqAnalysis) { - // Parse requirements and create req-plan format - return `# Requirements Plan\n\n${reqAnalysis}\n\n## Acceptance Criteria\n- Criterion 1\n- Criterion 2\n...` -} - -function generateGitHubIssues(specOutput) { - // Parse spec and generate GitHub issue JSON - return { - issues: [ - { title: "Issue 1", body: "Description", labels: ["feature"] }, - { title: "Issue 2", body: "Description", labels: ["bug"] } - ] - } -} -``` - ---- - -## Session File Structure - -```json -{ - "session_id": "team-lifecycle-1234567890", - "team_id": "team-lifecycle-1234567890", - "mode": "full-lifecycle", - "scope": "Build authentication system", - "focus": ["security", "scalability"], - "depth": "standard", - "executionMethod": "sequential", - "status": "active", - "created_at": "2026-02-18T10:00:00Z", - "completed_at": null, - "resumed_at": null, - "tasks_total": 16, - "tasks_completed": 5, - "current_phase": "spec", - "active_workers": [ - { - "task_subject": "DISCUSS-003", - "role": "discussant", - "spawned_at": "2026-02-18T10:15:00Z" - } - ] -} -``` - ---- - -## Error Handling - -| Error Type | Coordinator Action | -|------------|-------------------| -| Task timeout | Log 
timeout, mark task as failed, ask user to retry or skip | -| Worker crash | Respawn worker, reassign task | -| Dependency cycle | Detect cycle, report to user, halt execution | -| Invalid mode | Reject with error message, ask user to clarify | -| Session corruption | Attempt recovery, fallback to manual reconciliation | diff --git a/.claude/skills/team-lifecycle-v2/roles/discussant/commands/critique.md b/.claude/skills/team-lifecycle-v2/roles/discussant/commands/critique.md deleted file mode 100644 index bc429a4c..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/discussant/commands/critique.md +++ /dev/null @@ -1,396 +0,0 @@ -# Command: Multi-Perspective Critique - -Phase 3 of discussant execution - launch parallel CLI analyses for each required perspective. - -## Overview - -This command executes multi-perspective critique by routing to specialized CLI tools based on perspective type. Each perspective produces structured critique with strengths, weaknesses, suggestions, and ratings. - -## Perspective Definitions - -### 1. Product Perspective (gemini) - -**Focus**: Market fit, user value, business viability, competitive differentiation - -**CLI Tool**: gemini - -**Output Structure**: -```json -{ - "perspective": "product", - "strengths": ["string"], - "weaknesses": ["string"], - "suggestions": ["string"], - "rating": 1-5 -} -``` - -**Prompt Template**: -``` -Analyze from Product Manager perspective: -- Market fit and user value proposition -- Business viability and ROI potential -- Competitive differentiation -- User experience and adoption barriers - -Artifact: {artifactContent} - -Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5) -``` - -### 2. 
Technical Perspective (codex) - -**Focus**: Feasibility, tech debt, performance, security, maintainability - -**CLI Tool**: codex - -**Output Structure**: -```json -{ - "perspective": "technical", - "strengths": ["string"], - "weaknesses": ["string"], - "suggestions": ["string"], - "rating": 1-5 -} -``` - -**Prompt Template**: -``` -Analyze from Tech Lead perspective: -- Technical feasibility and implementation complexity -- Architecture decisions and tech debt implications -- Performance and scalability considerations -- Security vulnerabilities and risks -- Code maintainability and extensibility - -Artifact: {artifactContent} - -Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5) -``` - -### 3. Quality Perspective (claude) - -**Focus**: Completeness, testability, consistency, standards compliance - -**CLI Tool**: claude - -**Output Structure**: -```json -{ - "perspective": "quality", - "strengths": ["string"], - "weaknesses": ["string"], - "suggestions": ["string"], - "rating": 1-5 -} -``` - -**Prompt Template**: -``` -Analyze from QA Lead perspective: -- Specification completeness and clarity -- Testability and test coverage potential -- Consistency across requirements/design -- Standards compliance (coding, documentation, accessibility) -- Ambiguity detection and edge case coverage - -Artifact: {artifactContent} - -Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5) -``` - -### 4. 
Risk Perspective (gemini) - -**Focus**: Risk identification, dependency analysis, assumption validation, failure modes - -**CLI Tool**: gemini - -**Output Structure**: -```json -{ - "perspective": "risk", - "strengths": ["string"], - "weaknesses": ["string"], - "suggestions": ["string"], - "rating": 1-5, - "risk_level": "low|medium|high|critical" -} -``` - -**Prompt Template**: -``` -Analyze from Risk Analyst perspective: -- Risk identification (technical, business, operational) -- Dependency analysis and external risks -- Assumption validation and hidden dependencies -- Failure modes and mitigation strategies -- Timeline and resource risks - -Artifact: {artifactContent} - -Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5), risk_level -``` - -### 5. Coverage Perspective (gemini) - -**Focus**: Requirement completeness vs original intent, scope drift, gap detection - -**CLI Tool**: gemini - -**Output Structure**: -```json -{ - "perspective": "coverage", - "strengths": ["string"], - "weaknesses": ["string"], - "suggestions": ["string"], - "rating": 1-5, - "covered_requirements": ["REQ-ID"], - "partial_requirements": ["REQ-ID"], - "missing_requirements": ["REQ-ID"], - "scope_creep": ["description"] -} -``` - -**Prompt Template**: -``` -Analyze from Requirements Analyst perspective: -- Compare current artifact against original requirements in discovery-context.json -- Identify covered requirements (fully addressed) -- Identify partial requirements (partially addressed) -- Identify missing requirements (not addressed) -- Detect scope creep (new items not in original requirements) - -Original Requirements: {discoveryContext} -Current Artifact: {artifactContent} - -Output JSON with: -- strengths[], weaknesses[], suggestions[], rating (1-5) -- covered_requirements[] (REQ-IDs fully addressed) -- partial_requirements[] (REQ-IDs partially addressed) -- missing_requirements[] (REQ-IDs not addressed) ← CRITICAL if non-empty -- scope_creep[] (new items not 
in original requirements) -``` - -## Execution Pattern - -### Parallel CLI Execution - -```javascript -// Load artifact content -const artifactPath = `${sessionFolder}/${config.artifact}` -const artifactContent = config.type === 'json' - ? JSON.parse(Read(artifactPath)) - : Read(artifactPath) - -// Load discovery context for coverage perspective -let discoveryContext = null -try { - discoveryContext = JSON.parse(Read(`${sessionFolder}/spec/discovery-context.json`)) -} catch { /* may not exist in early rounds */ } - -// Launch parallel CLI analyses -const perspectiveResults = [] - -for (const perspective of config.perspectives) { - let cliTool, prompt - - switch(perspective) { - case 'product': - cliTool = 'gemini' - prompt = `Analyze from Product Manager perspective: -- Market fit and user value proposition -- Business viability and ROI potential -- Competitive differentiation -- User experience and adoption barriers - -Artifact: -${JSON.stringify(artifactContent, null, 2)} - -Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5)` - break - - case 'technical': - cliTool = 'codex' - prompt = `Analyze from Tech Lead perspective: -- Technical feasibility and implementation complexity -- Architecture decisions and tech debt implications -- Performance and scalability considerations -- Security vulnerabilities and risks -- Code maintainability and extensibility - -Artifact: -${JSON.stringify(artifactContent, null, 2)} - -Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5)` - break - - case 'quality': - cliTool = 'claude' - prompt = `Analyze from QA Lead perspective: -- Specification completeness and clarity -- Testability and test coverage potential -- Consistency across requirements/design -- Standards compliance (coding, documentation, accessibility) -- Ambiguity detection and edge case coverage - -Artifact: -${JSON.stringify(artifactContent, null, 2)} - -Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5)` 
- break - - case 'risk': - cliTool = 'gemini' - prompt = `Analyze from Risk Analyst perspective: -- Risk identification (technical, business, operational) -- Dependency analysis and external risks -- Assumption validation and hidden dependencies -- Failure modes and mitigation strategies -- Timeline and resource risks - -Artifact: -${JSON.stringify(artifactContent, null, 2)} - -Output JSON with: strengths[], weaknesses[], suggestions[], rating (1-5), risk_level` - break - - case 'coverage': - cliTool = 'gemini' - prompt = `Analyze from Requirements Analyst perspective: -- Compare current artifact against original requirements in discovery-context.json -- Identify covered requirements (fully addressed) -- Identify partial requirements (partially addressed) -- Identify missing requirements (not addressed) -- Detect scope creep (new items not in original requirements) - -Original Requirements: -${discoveryContext ? JSON.stringify(discoveryContext, null, 2) : 'Not available'} - -Current Artifact: -${JSON.stringify(artifactContent, null, 2)} - -Output JSON with: -- strengths[], weaknesses[], suggestions[], rating (1-5) -- covered_requirements[] (REQ-IDs fully addressed) -- partial_requirements[] (REQ-IDs partially addressed) -- missing_requirements[] (REQ-IDs not addressed) ← CRITICAL if non-empty -- scope_creep[] (new items not in original requirements)` - break - } - - // Execute CLI analysis (run_in_background: true per CLAUDE.md) - Bash({ - command: `ccw cli -p "${prompt.replace(/"/g, '\\"')}" --tool ${cliTool} --mode analysis`, - run_in_background: true, - description: `[discussant] ${perspective} perspective analysis` - }) -} - -// Wait for all CLI results via hook callbacks -// Results will be collected in perspectiveResults array -``` - -## Critical Divergence Detection - -### Coverage Gap Detection - -```javascript -const coverageResult = perspectiveResults.find(p => p.perspective === 'coverage') -if (coverageResult?.missing_requirements?.length > 0) { - // 
Flag as critical divergence - synthesis.divergent_views.push({ - topic: 'requirement_coverage_gap', - description: `${coverageResult.missing_requirements.length} requirements from discovery-context not covered: ${coverageResult.missing_requirements.join(', ')}`, - severity: 'high', - source: 'coverage' - }) -} -``` - -### Risk Level Detection - -```javascript -const riskResult = perspectiveResults.find(p => p.perspective === 'risk') -if (riskResult?.risk_level === 'high' || riskResult?.risk_level === 'critical') { - synthesis.risk_flags.push({ - level: riskResult.risk_level, - description: riskResult.weaknesses.join('; ') - }) -} -``` - -## Fallback Strategy - -### CLI Failure Fallback - -```javascript -// If CLI analysis fails for a perspective, fallback to direct Claude analysis -try { - // CLI execution - Bash({ command: `ccw cli -p "..." --tool ${cliTool} --mode analysis`, run_in_background: true }) -} catch (error) { - // Fallback: Direct Claude analysis - const fallbackResult = { - perspective: perspective, - strengths: ["Direct analysis: ..."], - weaknesses: ["Direct analysis: ..."], - suggestions: ["Direct analysis: ..."], - rating: 3, - _fallback: true - } - perspectiveResults.push(fallbackResult) -} -``` - -### All CLI Failures - -```javascript -if (perspectiveResults.every(r => r._fallback)) { - // Generate basic discussion from direct reading - const basicDiscussion = { - convergent_themes: ["Basic analysis from direct reading"], - divergent_views: [], - action_items: ["Review artifact manually"], - open_questions: [], - decisions: [], - risk_flags: [], - overall_sentiment: 'neutral', - consensus_reached: true, - _basic_mode: true - } -} -``` - -## Output Format - -Each perspective produces: - -```json -{ - "perspective": "product|technical|quality|risk|coverage", - "strengths": ["string"], - "weaknesses": ["string"], - "suggestions": ["string"], - "rating": 1-5, - - // Risk perspective only - "risk_level": "low|medium|high|critical", - - // Coverage 
perspective only - "covered_requirements": ["REQ-ID"], - "partial_requirements": ["REQ-ID"], - "missing_requirements": ["REQ-ID"], - "scope_creep": ["description"] -} -``` - -## Integration with Phase 4 - -Phase 4 (Consensus Synthesis) consumes `perspectiveResults` array to: -1. Extract convergent themes (2+ perspectives agree) -2. Extract divergent views (perspectives conflict) -3. Detect coverage gaps (missing_requirements non-empty) -4. Assess risk flags (high/critical risk_level) -5. Determine consensus_reached (true if no critical divergences) diff --git a/.claude/skills/team-lifecycle-v2/roles/discussant/role.md b/.claude/skills/team-lifecycle-v2/roles/discussant/role.md deleted file mode 100644 index 855a7ef3..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/discussant/role.md +++ /dev/null @@ -1,265 +0,0 @@ -# Role: discussant - -Multi-perspective critique, consensus building, and conflict escalation. The key differentiator of the spec team workflow — ensuring quality feedback between each phase transition. 
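The critique-and-consensus loop this role implements can be sketched as a small standalone function. This is an illustrative sketch only, assuming the perspective-result shape (`rating`, `weaknesses`) defined in the critique command; the sentiment thresholds and the "shared weakness becomes a divergent view" rule are assumptions for demonstration, not part of the spec.

```javascript
// Sketch: fold per-perspective critique results into a consensus decision.
// Assumed input shape: [{ perspective, rating: 1-5, weaknesses: [string] }]
function synthesize(perspectiveResults) {
  const ratings = perspectiveResults.map(p => p.rating)
  const avg = ratings.reduce((a, b) => a + b, 0) / ratings.length

  // Treat any weakness raised by two or more perspectives as a divergent view
  const counts = new Map()
  for (const p of perspectiveResults) {
    for (const w of p.weaknesses) counts.set(w, (counts.get(w) || 0) + 1)
  }
  const divergent = [...counts].filter(([, n]) => n >= 2).map(([topic]) => topic)

  return {
    overall_sentiment: avg >= 4 ? "positive" : avg >= 3 ? "neutral" : "concerns",
    divergent_views: divergent,
    consensus_reached: divergent.length === 0
  }
}

const result = synthesize([
  { perspective: "product", rating: 4, weaknesses: ["unclear scope"] },
  { perspective: "risk", rating: 3, weaknesses: ["unclear scope"] }
])
console.log(result.consensus_reached) // prints: false ("unclear scope" raised twice)
```

A real discussant would also carry action items and open questions through this step; the sketch shows only the consensus gate.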
- -## Role Identity - -- **Name**: `discussant` -- **Task Prefix**: `DISCUSS-*` -- **Output Tag**: `[discussant]` -- **Responsibility**: Load Artifact → Multi-Perspective Critique → Synthesize Consensus → Report -- **Communication**: SendMessage to coordinator only - -## Role Boundaries - -### MUST -- Only process DISCUSS-* tasks -- Communicate only with coordinator -- Write discussion records to `discussions/` folder -- Tag all SendMessage and team_msg calls with `[discussant]` -- Load roundConfig with all 6 rounds -- Execute multi-perspective critique via CLI tools -- Detect coverage gaps from coverage perspective -- Synthesize consensus with convergent/divergent analysis -- Report consensus_reached vs discussion_blocked paths - -### MUST NOT -- Create tasks -- Contact other workers directly -- Modify spec documents directly -- Skip perspectives defined in roundConfig -- Proceed without artifact loading -- Ignore critical divergences - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `discussion_ready` | discussant → coordinator | Discussion complete, consensus reached | With discussion record path and decision summary | -| `discussion_blocked` | discussant → coordinator | Cannot reach consensus | With divergence points and options, needs coordinator | -| `impl_progress` | discussant → coordinator | Long discussion progress | Multi-perspective analysis progress | -| `error` | discussant → coordinator | Discussion cannot proceed | Input artifact missing, etc. 
| - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Discussion complete -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "discussant", to: "coordinator", type: "discussion_ready", summary: "[discussant] Scope discussion consensus reached: 3 decisions", ref: `${sessionFolder}/discussions/discuss-001-scope.md` }) - -// Discussion blocked -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "discussant", to: "coordinator", type: "discussion_blocked", summary: "[discussant] Cannot reach consensus on tech stack", data: { reason: "...", options: [...] } }) - -// Error report -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "discussant", to: "coordinator", type: "error", summary: "[discussant] Input artifact missing" }) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "discussant" --to "coordinator" --type "discussion_ready" --summary "[discussant] Discussion complete" --ref "${sessionFolder}/discussions/discuss-001-scope.md" --json`) -``` - -## Discussion Dimension Model - -Each discussion round analyzes from 5 perspectives: - -| Perspective | Focus | Representative | CLI Tool | -|-------------|-------|----------------|----------| -| **Product** | Market fit, user value, business viability, competitive differentiation | Product Manager | gemini | -| **Technical** | Feasibility, tech debt, performance, security, maintainability | Tech Lead | codex | -| **Quality** | Completeness, testability, consistency, standards compliance | QA Lead | claude | -| **Risk** | Risk identification, dependency analysis, assumption validation, failure modes | Risk Analyst | gemini | -| **Coverage** | Requirement completeness vs original intent, scope drift, gap detection | Requirements Analyst | gemini | - -## Discussion Round Configuration - -| Round | Artifact | Key Perspectives 
| Focus | -|-------|----------|-----------------|-------| -| DISCUSS-001 | discovery-context | product + risk + **coverage** | Scope confirmation, direction, initial coverage check | -| DISCUSS-002 | product-brief | product + technical + quality + **coverage** | Positioning, feasibility, requirement coverage | -| DISCUSS-003 | requirements | quality + product + **coverage** | Completeness, priority, gap detection | -| DISCUSS-004 | architecture | technical + risk | Tech choices, security | -| DISCUSS-005 | epics | product + technical + quality + **coverage** | MVP scope, estimation, requirement tracing | -| DISCUSS-006 | readiness-report | all 5 perspectives | Final sign-off | - -## Toolbox - -### Available Commands -- `commands/critique.md` - Multi-perspective CLI critique (Phase 3) - -### Subagent Capabilities -None (discussant uses CLI tools directly) - -### CLI Capabilities -- **gemini**: Product perspective, Risk perspective, Coverage perspective -- **codex**: Technical perspective -- **claude**: Quality perspective - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('DISCUSS-') && - t.owner === 'discussant' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Artifact Loading - -```javascript -const sessionMatch = task.description.match(/Session:\s*(.+)/) -const sessionFolder = sessionMatch ? sessionMatch[1].trim() : '' -const roundMatch = task.subject.match(/DISCUSS-(\d+)/) -const roundNumber = roundMatch ? 
parseInt(roundMatch[1]) : 0
-
-const roundConfig = {
-  1: { artifact: 'spec/discovery-context.json', type: 'json', outputFile: 'discuss-001-scope.md', perspectives: ['product', 'risk', 'coverage'], label: 'Scope Discussion' },
-  2: { artifact: 'spec/product-brief.md', type: 'md', outputFile: 'discuss-002-brief.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Brief Review' },
-  3: { artifact: 'spec/requirements/_index.md', type: 'md', outputFile: 'discuss-003-requirements.md', perspectives: ['quality', 'product', 'coverage'], label: 'Requirements Discussion' },
-  4: { artifact: 'spec/architecture/_index.md', type: 'md', outputFile: 'discuss-004-architecture.md', perspectives: ['technical', 'risk'], label: 'Architecture Discussion' },
-  5: { artifact: 'spec/epics/_index.md', type: 'md', outputFile: 'discuss-005-epics.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Epics Discussion' },
-  6: { artifact: 'spec/readiness-report.md', type: 'md', outputFile: 'discuss-006-final.md', perspectives: ['product', 'technical', 'quality', 'risk', 'coverage'], label: 'Final Sign-off' }
-}
-
-const config = roundConfig[roundNumber]
-// Load target artifact and prior discussion records for continuity
-Bash(`mkdir -p ${sessionFolder}/discussions`)
-```
-
-### Phase 3: Multi-Perspective Critique
-
-**Delegate to**: `Read("commands/critique.md")`
-
-Launch parallel CLI analyses for each required perspective. See `commands/critique.md` for full implementation.
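Once the parallel CLI analyses return, each raw output still has to be turned into the structured perspective result consumed downstream. This is a hedged sketch of that normalization step, not part of the original command spec: the prose-stripping regex, defaults, and clamping are assumptions layered on the documented output format and fallback strategy.

```javascript
// Sketch: normalize raw CLI output into the documented perspective-result shape.
// CLI tools often wrap JSON in prose, so grab the first {...} block; on any
// parse failure, return the neutral _fallback record from the fallback strategy.
function parsePerspectiveResult(perspective, rawOutput) {
  let parsed
  try {
    const match = rawOutput.match(/\{[\s\S]*\}/)
    parsed = JSON.parse(match ? match[0] : rawOutput)
  } catch {
    return { perspective, strengths: [], weaknesses: [], suggestions: [], rating: 3, _fallback: true }
  }
  return {
    perspective,
    strengths: parsed.strengths || [],
    weaknesses: parsed.weaknesses || [],
    suggestions: parsed.suggestions || [],
    // Clamp to the documented 1-5 scale
    rating: Math.min(5, Math.max(1, Number(parsed.rating) || 3)),
    ...(parsed.risk_level ? { risk_level: parsed.risk_level } : {}),
    ...(parsed.missing_requirements ? { missing_requirements: parsed.missing_requirements } : {})
  }
}

const ok = parsePerspectiveResult("risk", 'Analysis: {"strengths":["clear"],"weaknesses":[],"suggestions":[],"rating":7,"risk_level":"low"}')
console.log(ok.rating, ok.risk_level) // prints: 5 low
const bad = parsePerspectiveResult("quality", "not json")
console.log(bad._fallback) // prints: true
```

Keeping this step strict means Phase 4 can assume every entry in `perspectiveResults` has the same shape, whether it came from gemini, codex, claude, or the fallback path.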
-
-### Phase 4: Consensus Synthesis
-
-```javascript
-const synthesis = {
-  convergent_themes: [],
-  divergent_views: [],
-  action_items: [],
-  open_questions: [],
-  decisions: [],
-  risk_flags: [],
-  overall_sentiment: '', // positive/neutral/concerns/critical
-  consensus_reached: true // false if major unresolvable conflicts
-}
-
-// Extract convergent themes (items mentioned positively by 2+ perspectives)
-// Extract divergent views (items where perspectives conflict)
-// Check coverage gaps from coverage perspective (if present)
-const coverageResult = perspectiveResults.find(p => p.perspective === 'coverage')
-if (coverageResult?.missing_requirements?.length > 0) {
-  synthesis.coverage_gaps = coverageResult.missing_requirements
-  synthesis.divergent_views.push({
-    topic: 'requirement_coverage_gap',
-    description: `${coverageResult.missing_requirements.length} requirements from discovery-context not covered: ${coverageResult.missing_requirements.join(', ')}`,
-    severity: 'high',
-    source: 'coverage'
-  })
-}
-// Check for unresolvable conflicts
-const criticalDivergences = synthesis.divergent_views.filter(d => d.severity === 'high')
-if (criticalDivergences.length > 0) synthesis.consensus_reached = false
-
-// Determine overall sentiment from average rating
-// Generate discussion record markdown with all perspectives, convergence, divergence, action items
-
-Write(`${sessionFolder}/discussions/${config.outputFile}`, discussionRecord)
-```
-
-### Phase 5: Report to Coordinator
-
-```javascript
-if (synthesis.consensus_reached) {
-  mcp__ccw-tools__team_msg({
-    operation: "log", team: teamName,
-    from: "discussant", to: "coordinator",
-    type: "discussion_ready",
-    summary: `[discussant] ${config.label} discussion complete: ${synthesis.action_items.length} action items, ${synthesis.open_questions.length} open questions, overall ${synthesis.overall_sentiment}`,
-    ref: `${sessionFolder}/discussions/${config.outputFile}`
-  })
-
-  SendMessage({
-    type: "message",
-    recipient: "coordinator",
-    content:
`[discussant] ## Discussion Result: ${config.label}
-
-**Task**: ${task.subject}
-**Consensus**: Reached
-**Overall Assessment**: ${synthesis.overall_sentiment}
-
-### Action Items (${synthesis.action_items.length})
-${synthesis.action_items.map((item, i) => (i+1) + '. ' + item).join('\n') || 'None'}
-
-### Open Questions (${synthesis.open_questions.length})
-${synthesis.open_questions.map((q, i) => (i+1) + '. ' + q).join('\n') || 'None'}
-
-### Discussion Record
-${sessionFolder}/discussions/${config.outputFile}
-
-Consensus reached; ready to advance to the next phase.`,
-    summary: `[discussant] ${config.label} consensus reached: ${synthesis.action_items.length} action items`
-  })
-
-  TaskUpdate({ taskId: task.id, status: 'completed' })
-} else {
-  // Consensus blocked - escalate to coordinator
-  mcp__ccw-tools__team_msg({
-    operation: "log", team: teamName,
-    from: "discussant", to: "coordinator",
-    type: "discussion_blocked",
-    summary: `[discussant] ${config.label} discussion blocked: ${criticalDivergences.length} critical divergences require decisions`,
-    data: {
-      reason: criticalDivergences.map(d => d.description).join('; '),
-      options: criticalDivergences.map(d => ({ label: d.topic, description: d.options?.join(' vs ') || d.description }))
-    }
-  })
-
-  SendMessage({
-    type: "message",
-    recipient: "coordinator",
-    content: `[discussant] ## Discussion Blocked: ${config.label}
-
-**Task**: ${task.subject}
-**Status**: Unable to reach consensus; coordinator intervention required
-
-### Critical Divergences
-${criticalDivergences.map((d, i) => (i+1) + '. **' + d.topic + '**: ' + d.description).join('\n\n')}
-
-Please collect the user's decisions on these divergence points via AskUserQuestion.`,
-    summary: `[discussant] ${config.label} blocked: ${criticalDivergences.length} divergences`
-  })
-  // Keep task in_progress, wait for coordinator resolution
-}
-
-// Check for next DISCUSS task → back to Phase 1
-```
-
-## Error Handling
-
-| Scenario | Resolution |
-|----------|------------|
-| No DISCUSS-* tasks available | Idle, wait for coordinator assignment |
-| Target artifact not found | Notify coordinator with `[discussant]` tag, request prerequisite completion |
-| CLI perspective analysis failure | Fallback to direct Claude analysis for that perspective |
-| All CLI analyses fail | Generate basic discussion from direct reading |
-| Consensus timeout (all perspectives diverge) | Escalate as discussion_blocked with `[discussant]` tag |
-| Prior discussion records missing | Continue without continuity context |
-| Session folder not found | Notify coordinator with `[discussant]` tag, request session path |
-| Unexpected error | Log error via team_msg with `[discussant]` tag, report to coordinator |
diff --git a/.claude/skills/team-lifecycle-v2/roles/executor/commands/implement.md b/.claude/skills/team-lifecycle-v2/roles/executor/commands/implement.md
deleted file mode 100644
index 8078f6d9..00000000
--- a/.claude/skills/team-lifecycle-v2/roles/executor/commands/implement.md
+++ /dev/null
@@ -1,356 +0,0 @@
-# Implement Command
-
-## Purpose
-Multi-backend code implementation with progress tracking and batch execution support.
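As an illustration of how the execution paths might be tied together by a top-level dispatcher, here is a hedged sketch. The `paths` callbacks are hypothetical stand-ins for the real direct-edit, subagent, and CLI implementations; the simple-task heuristic shown is a simplified version of the full check.

```javascript
// Sketch: route a task to one of the backend execution paths.
// `paths` is an injected table of path implementations (hypothetical names).
function routeImplementation(task, executor, paths) {
  const simple = task.description.length < 200 && executor === "agent"
  // Path 1: simple task + agent backend with a known target file -> direct edit
  if (simple && task.metadata?.target_file) return paths.directEdit(task)
  // Path 2: agent backend -> code-developer subagent
  if (executor === "agent") return paths.subagent(task)
  // Paths 3-4: codex/gemini -> CLI execution
  if (executor === "codex" || executor === "gemini") return paths.cli(task, executor)
  throw new Error(`Unknown executor backend: ${executor}`)
}

const paths = {
  directEdit: t => ({ method: "direct_edit" }),
  subagent: t => ({ method: "subagent" }),
  cli: (t, tool) => ({ method: `${tool}_cli` })
}
console.log(routeImplementation({ description: "fix typo", metadata: { target_file: "a.ts" } }, "agent", paths).method) // prints: direct_edit
console.log(routeImplementation({ description: "x".repeat(300), metadata: {} }, "codex", paths).method) // prints: codex_cli
```

Injecting the path table keeps the routing decision testable without actually invoking a subagent or CLI.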
- -## Execution Paths - -### Path 1: Simple Task + Agent Backend (Direct Edit) - -**Criteria**: -```javascript -function isSimpleTask(task) { - return task.description.length < 200 && - !task.description.includes("refactor") && - !task.description.includes("architecture") && - !task.description.includes("multiple files") -} -``` - -**Execution**: -```javascript -if (isSimpleTask(task) && executor === "agent") { - // Direct file edit without subagent overhead - const targetFile = task.metadata?.target_file - if (targetFile) { - const content = Read(targetFile) - const prompt = buildExecutionPrompt(task, plan, [task]) - - // Apply edit directly - Edit(targetFile, oldContent, newContent) - - return { - success: true, - files_modified: [targetFile], - method: "direct_edit" - } - } -} -``` - -### Path 2: Agent Backend (code-developer subagent) - -**Execution**: -```javascript -if (executor === "agent") { - const prompt = buildExecutionPrompt(task, plan, [task]) - - const result = Subagent({ - type: "code-developer", - prompt: prompt, - run_in_background: false // Synchronous execution - }) - - return { - success: result.success, - files_modified: result.files_modified || [], - method: "subagent" - } -} -``` - -### Path 3: Codex Backend (CLI) - -**Execution**: -```javascript -if (executor === "codex") { - const prompt = buildExecutionPrompt(task, plan, [task]) - - team_msg({ - to: "coordinator", - type: "progress_update", - task_id: task.task_id, - status: "executing_codex", - message: "Starting Codex implementation..." 
- }, "[executor]") - - const result = Bash( - `ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --cd ${task.metadata?.working_dir || "."}`, - { run_in_background: true, timeout: 300000 } - ) - - // Wait for CLI completion via hook callback - return { - success: true, - files_modified: [], // Will be detected by git diff - method: "codex_cli" - } -} -``` - -### Path 4: Gemini Backend (CLI) - -**Execution**: -```javascript -if (executor === "gemini") { - const prompt = buildExecutionPrompt(task, plan, [task]) - - team_msg({ - to: "coordinator", - type: "progress_update", - task_id: task.task_id, - status: "executing_gemini", - message: "Starting Gemini implementation..." - }, "[executor]") - - const result = Bash( - `ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --cd ${task.metadata?.working_dir || "."}`, - { run_in_background: true, timeout: 300000 } - ) - - // Wait for CLI completion via hook callback - return { - success: true, - files_modified: [], // Will be detected by git diff - method: "gemini_cli" - } -} -``` - -## Prompt Building - -### Single Task Prompt - -```javascript -function buildExecutionPrompt(task, plan, tasks) { - const context = extractContextFromPlan(plan, task) - - return ` -# Implementation Task: ${task.task_id} - -## Task Description -${task.description} - -## Acceptance Criteria -${task.acceptance_criteria?.map((c, i) => `${i + 1}. 
${c}`).join("\n") || "None specified"} - -## Context from Plan -${context} - -## Files to Modify -${task.metadata?.target_files?.join("\n") || "Auto-detect based on task"} - -## Constraints -- Follow existing code style and patterns -- Preserve backward compatibility -- Add appropriate error handling -- Include inline comments for complex logic -- Update related tests if applicable - -## Expected Output -- Modified files with implementation -- Brief summary of changes made -- Any assumptions or decisions made during implementation -`.trim() -} -``` - -### Batch Task Prompt - -```javascript -function buildBatchPrompt(tasks, plan) { - const taskDescriptions = tasks.map((task, i) => ` -### Task ${i + 1}: ${task.task_id} -**Description**: ${task.description} -**Acceptance Criteria**: -${task.acceptance_criteria?.map((c, j) => ` ${j + 1}. ${c}`).join("\n") || " None specified"} -**Target Files**: ${task.metadata?.target_files?.join(", ") || "Auto-detect"} - `).join("\n") - - return ` -# Batch Implementation: ${tasks.length} Tasks - -## Tasks to Implement -${taskDescriptions} - -## Context from Plan -${extractContextFromPlan(plan, tasks[0])} - -## Batch Execution Guidelines -- Implement tasks in the order listed -- Ensure each task's acceptance criteria are met -- Maintain consistency across all implementations -- Report any conflicts or dependencies discovered -- Follow existing code patterns and style - -## Expected Output -- All tasks implemented successfully -- Summary of changes per task -- Any cross-task considerations or conflicts -`.trim() -} -``` - -### Context Extraction - -```javascript -function extractContextFromPlan(plan, task) { - // Extract relevant sections from plan - const sections = [] - - // Architecture context - const archMatch = plan.match(/## Architecture[\s\S]*?(?=##|$)/) - if (archMatch) { - sections.push("### Architecture\n" + archMatch[0]) - } - - // Technical stack - const techMatch = plan.match(/## Technical Stack[\s\S]*?(?=##|$)/) - if 
(techMatch) { - sections.push("### Technical Stack\n" + techMatch[0]) - } - - // Related tasks context - const taskSection = plan.match(new RegExp(`${task.task_id}[\\s\\S]*?(?=IMPL-\\d+|$)`)) - if (taskSection) { - sections.push("### Task Context\n" + taskSection[0]) - } - - return sections.join("\n\n") || "No additional context available" -} -``` - -## Progress Tracking - -### Batch Progress Updates - -```javascript -function reportBatchProgress(batchIndex, totalBatches, currentTask) { - if (totalBatches > 1) { - team_msg({ - to: "coordinator", - type: "progress_update", - batch_index: batchIndex + 1, - total_batches: totalBatches, - current_task: currentTask.task_id, - message: `Processing batch ${batchIndex + 1}/${totalBatches}: ${currentTask.task_id}` - }, "[executor]") - } -} -``` - -### Long-Running Task Updates - -```javascript -function reportLongRunningTask(task, elapsedSeconds) { - if (elapsedSeconds > 60 && elapsedSeconds % 30 === 0) { - team_msg({ - to: "coordinator", - type: "progress_update", - task_id: task.task_id, - elapsed_seconds: elapsedSeconds, - message: `Still processing ${task.task_id} (${elapsedSeconds}s elapsed)...` - }, "[executor]") - } -} -``` - -## Utility Functions - -### Prompt Escaping - -```javascript -function escapePrompt(prompt) { - return prompt - .replace(/\\/g, "\\\\") - .replace(/"/g, '\\"') - .replace(/\n/g, "\\n") - .replace(/\$/g, "\\$") -} -``` - -### File Change Detection - -```javascript -function detectModifiedFiles() { - const gitDiff = Bash("git diff --name-only HEAD") - return gitDiff.stdout.split("\n").filter(f => f.trim()) -} -``` - -### Simple Task Detection - -```javascript -function isSimpleTask(task) { - const simpleIndicators = [ - task.description.length < 200, - !task.description.toLowerCase().includes("refactor"), - !task.description.toLowerCase().includes("architecture"), - !task.description.toLowerCase().includes("multiple files"), - !task.description.toLowerCase().includes("complex"), - 
task.metadata?.target_files?.length === 1 - ] - - return simpleIndicators.filter(Boolean).length >= 4 -} -``` - -## Error Recovery - -### Retry Logic - -```javascript -function executeWithRetry(task, executor, maxRetries = 3) { - let attempt = 0 - let lastError = null - - while (attempt < maxRetries) { - try { - const result = executeTask(task, executor) - if (result.success) { - return result - } - lastError = result.error - } catch (error) { - lastError = error.message - } - - attempt++ - if (attempt < maxRetries) { - team_msg({ - to: "coordinator", - type: "progress_update", - task_id: task.task_id, - message: `Retry attempt ${attempt}/${maxRetries} after error: ${lastError}` - }, "[executor]") - } - } - - return { - success: false, - error: lastError, - retry_count: maxRetries - } -} -``` - -### Backend Fallback - -```javascript -function executeWithFallback(task, primaryExecutor) { - const result = executeTask(task, primaryExecutor) - - if (!result.success && primaryExecutor !== "agent") { - team_msg({ - to: "coordinator", - type: "progress_update", - task_id: task.task_id, - message: `${primaryExecutor} failed, falling back to agent backend...` - }, "[executor]") - - return executeTask(task, "agent") - } - - return result -} -``` diff --git a/.claude/skills/team-lifecycle-v2/roles/executor/role.md b/.claude/skills/team-lifecycle-v2/roles/executor/role.md deleted file mode 100644 index a6f16a9d..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/executor/role.md +++ /dev/null @@ -1,324 +0,0 @@ -# Executor Role - -## 1. Role Identity - -- **Name**: executor -- **Task Prefix**: IMPL-* -- **Output Tag**: `[executor]` -- **Responsibility**: Load plan → Route to backend → Implement code → Self-validate → Report - -## 2. 
Role Boundaries - -### MUST -- Only process IMPL-* tasks -- Follow approved plan exactly -- Use declared execution backends (agent/codex/gemini) -- Self-validate all implementations (syntax + acceptance criteria) -- Tag all outputs with `[executor]` - -### MUST NOT -- Create tasks -- Contact other workers directly -- Modify plan files -- Skip self-validation -- Proceed without plan approval - -## 3. Message Types - -| Type | Direction | Purpose | Format | -|------|-----------|---------|--------| -| `task_request` | FROM coordinator | Receive IMPL-* task assignment | `{ type: "task_request", task_id, description }` | -| `task_complete` | TO coordinator | Report implementation success | `{ type: "task_complete", task_id, status: "success", files_modified, validation_results }` | -| `task_failed` | TO coordinator | Report implementation failure | `{ type: "task_failed", task_id, error, retry_count }` | -| `progress_update` | TO coordinator | Report batch progress | `{ type: "progress_update", task_id, batch_index, total_batches }` | - -## 4. Message Bus - -**Primary**: Use `team_msg` for all coordinator communication with `[executor]` tag: -```javascript -team_msg({ - to: "coordinator", - type: "task_complete", - task_id: "IMPL-001", - status: "success", - files_modified: ["src/auth.ts"], - validation_results: { syntax: "pass", acceptance: "pass" } -}, "[executor]") -``` - -**CLI Fallback**: When message bus unavailable, write to `.workflow/.team/messages/executor-{timestamp}.json` - -## 5. Toolbox - -### Available Commands -- `commands/implement.md` - Multi-backend code implementation with progress tracking - -### Subagent Capabilities -- `code-developer` - Synchronous agent execution for simple tasks and agent backend - -### CLI Capabilities -- `ccw cli --tool codex --mode write` - Codex backend implementation -- `ccw cli --tool gemini --mode write` - Gemini backend implementation - -## 6. 
Execution (5-Phase) - -### Phase 1: Task & Plan Loading - -**Task Discovery**: -```javascript -const tasks = Glob(".workflow/.team/tasks/IMPL-*.json") - .filter(task => task.status === "pending" && task.assigned_to === "executor") -``` - -**Plan Path Extraction**: -```javascript -const planPath = task.metadata?.plan_path || ".workflow/plan.md" -const plan = Read(planPath) -``` - -**Execution Backend Resolution**: -```javascript -function resolveExecutor(task, plan) { - // Priority 1: Task-level override - if (task.metadata?.executor) { - return task.metadata.executor // "agent" | "codex" | "gemini" - } - - // Priority 2: Plan-level default - const planMatch = plan.match(/Execution Backend:\s*(agent|codex|gemini)/i) - if (planMatch) { - return planMatch[1].toLowerCase() - } - - // Priority 3: Auto-select based on task complexity - const isSimple = task.description.length < 200 && - !task.description.includes("refactor") && - !task.description.includes("architecture") - - return isSimple ? 
"agent" : "codex" // Default: codex for complex, agent for simple -} -``` - -**Code Review Resolution**: -```javascript -function resolveCodeReview(task, plan) { - // Priority 1: Task-level override - if (task.metadata?.code_review !== undefined) { - return task.metadata.code_review // boolean - } - - // Priority 2: Plan-level default - const reviewMatch = plan.match(/Code Review:\s*(enabled|disabled)/i) - if (reviewMatch) { - return reviewMatch[1].toLowerCase() === "enabled" - } - - // Priority 3: Default based on task type - const criticalKeywords = ["auth", "security", "payment", "api", "database"] - const isCritical = criticalKeywords.some(kw => - task.description.toLowerCase().includes(kw) - ) - - return isCritical // Enable review for critical paths -} -``` - -### Phase 2: Task Grouping - -**Dependency-Based Batching**: -```javascript -function createBatches(tasks, plan) { - // Extract dependencies from plan - const dependencies = new Map() - const depRegex = /IMPL-(\d+).*depends on.*IMPL-(\d+)/gi - let match - while ((match = depRegex.exec(plan)) !== null) { - const [_, taskId, depId] = match - if (!dependencies.has(`IMPL-${taskId}`)) { - dependencies.set(`IMPL-${taskId}`, []) - } - dependencies.get(`IMPL-${taskId}`).push(`IMPL-${depId}`) - } - - // Topological sort for execution order - const batches = [] - const completed = new Set() - const remaining = new Set(tasks.map(t => t.task_id)) - - while (remaining.size > 0) { - const batch = [] - - for (const taskId of remaining) { - const deps = dependencies.get(taskId) || [] - const depsCompleted = deps.every(dep => completed.has(dep)) - - if (depsCompleted) { - batch.push(tasks.find(t => t.task_id === taskId)) - } - } - - if (batch.length === 0) { - // Circular dependency detected - throw new Error(`Circular dependency detected in remaining tasks: ${[...remaining].join(", ")}`) - } - - batches.push(batch) - batch.forEach(task => { - completed.add(task.task_id) - remaining.delete(task.task_id) - }) - } - - 
return batches -} -``` - -### Phase 3: Code Implementation - -**Delegate to Command**: -```javascript -const implementCommand = Read("commands/implement.md") -// Command handles: -// - buildExecutionPrompt (context + acceptance criteria) -// - buildBatchPrompt (multi-task batching) -// - 4 execution paths: simple+agent, agent, codex, gemini -// - Progress updates via team_msg -``` - -### Phase 4: Self-Validation - -**Syntax Check**: -```javascript -const syntaxCheck = Bash("tsc --noEmit", { timeout: 30000 }) -const syntaxPass = syntaxCheck.exitCode === 0 -``` - -**Acceptance Criteria Verification**: -```javascript -function verifyAcceptance(task, implementation) { - const criteria = task.acceptance_criteria || [] - const results = criteria.map(criterion => { - // Simple keyword matching for automated verification - const keywords = criterion.toLowerCase().match(/\b\w+\b/g) || [] - const matched = keywords.some(kw => - implementation.toLowerCase().includes(kw) - ) - return { criterion, matched, status: matched ? 
"pass" : "manual_review" } - }) - - const allPassed = results.every(r => r.status === "pass") - return { allPassed, results } -} -``` - -**Test File Detection**: -```javascript -function findAffectedTests(modifiedFiles) { - const testFiles = [] - - for (const file of modifiedFiles) { - const baseName = file.replace(/\.(ts|js|tsx|jsx)$/, "") - const testVariants = [ - `${baseName}.test.ts`, - `${baseName}.test.js`, - `${baseName}.spec.ts`, - `${baseName}.spec.js`, - `${file.replace(/^src\//, "tests/")}.test.ts`, - `${file.replace(/^src\//, "__tests__/")}.test.ts` - ] - - for (const variant of testVariants) { - if (Bash(`test -f ${variant}`).exitCode === 0) { - testFiles.push(variant) - } - } - } - - return testFiles -} -``` - -**Optional Code Review**: -```javascript -const codeReviewEnabled = resolveCodeReview(task, plan) - -if (codeReviewEnabled) { - const executor = resolveExecutor(task, plan) - - if (executor === "gemini") { - // Gemini Review: Use Gemini CLI for review - const reviewResult = Bash( - `ccw cli -p "Review implementation for: ${task.description}. Check: code quality, security, architecture compliance." --tool gemini --mode analysis`, - { run_in_background: true } - ) - } else if (executor === "codex") { - // Codex Review: Use Codex CLI review mode - const reviewResult = Bash( - `ccw cli --tool codex --mode review --uncommitted`, - { run_in_background: true } - ) - } - - // Wait for review results and append to validation -} -``` - -### Phase 5: Report to Coordinator - -**Success Report**: -```javascript -team_msg({ - to: "coordinator", - type: "task_complete", - task_id: task.task_id, - status: "success", - files_modified: modifiedFiles, - validation_results: { - syntax: syntaxPass ? "pass" : "fail", - acceptance: acceptanceResults.allPassed ? "pass" : "manual_review", - tests_found: affectedTests.length, - code_review: codeReviewEnabled ? 
"completed" : "skipped" - }, - execution_backend: executor, - timestamp: new Date().toISOString() -}, "[executor]") -``` - -**Failure Report**: -```javascript -team_msg({ - to: "coordinator", - type: "task_failed", - task_id: task.task_id, - error: errorMessage, - retry_count: task.retry_count || 0, - validation_results: { - syntax: syntaxPass ? "pass" : "fail", - acceptance: "not_verified" - }, - timestamp: new Date().toISOString() -}, "[executor]") -``` - -## 7. Error Handling - -| Error Type | Recovery Strategy | Escalation | -|------------|-------------------|------------| -| Syntax errors | Retry with error context (max 3 attempts) | Report to coordinator after 3 failures | -| Missing dependencies | Request dependency resolution from coordinator | Immediate escalation | -| Backend unavailable | Fallback to agent backend | Report backend switch | -| Validation failure | Include validation details in report | Manual review required | -| Circular dependencies | Abort batch, report dependency graph | Immediate escalation | - -## 8. Execution Backends - -| Backend | Tool | Invocation | Mode | Use Case | -|---------|------|------------|------|----------| -| **agent** | code-developer | Subagent call (synchronous) | N/A | Simple tasks, direct edits | -| **codex** | ccw cli | `ccw cli --tool codex --mode write` | write | Complex tasks, architecture changes | -| **gemini** | ccw cli | `ccw cli --tool gemini --mode write` | write | Alternative backend, analysis-heavy tasks | - -**Backend Selection Logic**: -1. Task metadata override → Use specified backend -2. Plan default → Use plan-level backend -3. 
Auto-select → Simple tasks use agent, complex use codex diff --git a/.claude/skills/team-lifecycle-v2/roles/explorer/role.md b/.claude/skills/team-lifecycle-v2/roles/explorer/role.md deleted file mode 100644 index 90e369f8..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/explorer/role.md +++ /dev/null @@ -1,301 +0,0 @@ -# Explorer Role - -Dedicated code search and pattern discovery. A service role, invoked on demand by analyst/planner/executor/discussant. - -## 1. Role Identity - -- **Name**: explorer -- **Task Prefix**: EXPLORE-* -- **Output Tag**: `[explorer]` -- **Role Type**: Service (invoked on demand; not part of the main pipeline) -- **Responsibility**: Parse request → Multi-strategy search → Dependency trace → Package results → Report - -## 2. Role Boundaries - -### MUST -- Only process EXPLORE-* tasks -- Output structured JSON for downstream consumption -- Use priority-ordered search strategies (ACE → Grep → cli-explore-agent) -- Tag all outputs with `[explorer]` -- Cache results in `{session}/explorations/` for cross-role reuse - -### MUST NOT -- Create tasks -- Contact other workers directly -- Modify any source code files -- Execute analysis, planning, or implementation -- Make architectural decisions (only discover patterns) - -## 3. Message Types - -| Type | Direction | Purpose | Format | -|------|-----------|---------|--------| -| `explore_ready` | TO coordinator | Search complete | `{ type: "explore_ready", task_id, file_count, pattern_count, output_path }` | -| `explore_progress` | TO coordinator | Multi-angle progress | `{ type: "explore_progress", task_id, angle, status }` | -| `task_failed` | TO coordinator | Search failure | `{ type: "task_failed", task_id, error, fallback_used }` | - -## 4.
Message Bus - -**Primary**: Use `team_msg` for all coordinator communication with `[explorer]` tag: -```javascript -team_msg({ - to: "coordinator", - type: "explore_ready", - task_id: "EXPLORE-001", - file_count: 15, - pattern_count: 3, - output_path: `${sessionFolder}/explorations/explore-001.json` -}, "[explorer]") -``` - -**CLI Fallback**: When message bus unavailable: -```bash -ccw team log --team "${teamName}" --from "explorer" --to "coordinator" --type "explore_ready" --summary "[explorer] 15 files, 3 patterns" --json -``` - -## 5. Toolbox - -### Available Commands -- None (inline execution, search logic is straightforward) - -### Search Tools (priority order) - -| Tool | Priority | Use Case | -|------|----------|----------| -| `mcp__ace-tool__search_context` | P0 | Semantic code search | -| `Grep` / `Glob` | P1 | Pattern matching, file discovery | -| `Read` | P1 | File content reading | -| `Bash` (rg, find) | P2 | Structured search fallback | -| `WebSearch` | P3 | External docs/best practices | - -### Subagent Capabilities -- `cli-explore-agent` — Deep multi-angle codebase exploration - -## 6. 
Execution (5-Phase) - -### Phase 1: Task Discovery & Request Parsing - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('EXPLORE-') && - t.owner === 'explorer' && - t.status === 'pending' && - t.blockedBy.length === 0 -) -if (myTasks.length === 0) return -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) - -// Parse structured request from task description -const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim() -const exploreMode = task.description.match(/Mode:\s*([^\n]+)/)?.[1]?.trim() || 'codebase' -const angles = (task.description.match(/Angles:\s*([^\n]+)/)?.[1] || 'general').split(',').map(a => a.trim()) -const keywords = (task.description.match(/Keywords:\s*([^\n]+)/)?.[1] || '').split(',').map(k => k.trim()).filter(Boolean) -const requester = task.description.match(/Requester:\s*([^\n]+)/)?.[1]?.trim() || 'coordinator' - -const outputDir = sessionFolder ? 
`${sessionFolder}/explorations` : '.workflow/.tmp' -Bash(`mkdir -p "${outputDir}"`) -``` - -### Phase 2: Multi-Strategy Search - -```javascript -const findings = { - relevant_files: [], // { path, rationale, role, discovery_source, key_symbols } - patterns: [], // { name, description, files } - dependencies: [], // { file, imports[] } - external_refs: [], // { keyword, results[] } - _metadata: { angles, mode: exploreMode, requester, timestamp: new Date().toISOString() } -} - -// === Strategy 1: ACE Semantic Search (P0) === -if (exploreMode !== 'external') { - for (const kw of keywords) { - try { - const results = mcp__ace-tool__search_context({ project_root_path: '.', query: kw }) - // Deduplicate and add to findings.relevant_files with discovery_source: 'ace-search' - } catch { /* ACE unavailable, fall through */ } - } -} - -// === Strategy 2: Grep Pattern Scan (P1) === -if (exploreMode !== 'external') { - for (const kw of keywords) { - // Find imports/exports/definitions - const defResults = Grep({ - pattern: `(class|function|const|export|interface|type)\\s+.*${kw}`, - glob: '*.{ts,tsx,js,jsx,py,go,rs}', - '-n': true, output_mode: 'content' - }) - // Add to findings with discovery_source: 'grep-scan' - } -} - -// === Strategy 3: Dependency Tracing === -if (exploreMode !== 'external') { - for (const file of findings.relevant_files.slice(0, 10)) { - try { - const content = Read(file.path) - const imports = (content.match(/from\s+['"]([^'"]+)['"]/g) || []) - .map(i => i.match(/['"]([^'"]+)['"]/)?.[1]).filter(Boolean) - if (imports.length > 0) { - findings.dependencies.push({ file: file.path, imports }) - } - } catch {} - } -} - -// === Strategy 4: Deep Exploration (multi-angle, via cli-explore-agent) === -if (angles.length > 1 && exploreMode !== 'external') { - for (const angle of angles) { - Task({ - subagent_type: "cli-explore-agent", - run_in_background: false, - description: `Explore: ${angle}`, - prompt: `## Exploration: ${angle} angle -Keywords: 
${keywords.join(', ')} - -## Steps -1. rg -l "${keywords[0]}" --type-add 'code:*.{ts,tsx,js,py,go,rs}' --type code -2. Read .workflow/project-tech.json (if exists) -3. Focus on ${angle} perspective - -## Output -Write to: ${outputDir}/exploration-${angle}.json -Schema: { relevant_files[], patterns[], dependencies[] }` - }) - // Merge angle results into main findings - try { - const angleData = JSON.parse(Read(`${outputDir}/exploration-${angle}.json`)) - findings.relevant_files.push(...(angleData.relevant_files || [])) - findings.patterns.push(...(angleData.patterns || [])) - } catch {} - } -} - -// === Strategy 5: External Search (P3) === -if (exploreMode === 'external' || exploreMode === 'hybrid') { - for (const kw of keywords.slice(0, 3)) { - try { - const results = WebSearch({ query: `${kw} best practices documentation` }) - findings.external_refs.push({ keyword: kw, results }) - } catch {} - } -} - -// Deduplicate relevant_files by path -const seen = new Set() -findings.relevant_files = findings.relevant_files.filter(f => { - if (seen.has(f.path)) return false - seen.add(f.path) - return true -}) -``` - -### Phase 3: Wisdom Contribution - -```javascript -// If wisdom directory exists, contribute discovered patterns -if (sessionFolder) { - try { - const conventionsPath = `${sessionFolder}/wisdom/conventions.md` - const existing = Read(conventionsPath) - if (findings.patterns.length > 0) { - const newPatterns = findings.patterns - .map(p => `- ${p.name}: ${p.description || ''}`) - .join('\n') - Edit({ - file_path: conventionsPath, - old_string: '', - new_string: `\n${newPatterns}` - }) - } - } catch {} // wisdom not initialized -} -``` - -### Phase 4: Package Results - -```javascript -const outputPath = `${outputDir}/explore-${task.subject.replace(/[^a-zA-Z0-9-]/g, '-').toLowerCase()}.json` -Write(outputPath, JSON.stringify(findings, null, 2)) -``` - -### Phase 5: Report to Coordinator - -```javascript -const summary = `${findings.relevant_files.length} files, 
${findings.patterns.length} patterns, ${findings.dependencies.length} deps` - -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "explorer", to: "coordinator", - type: "explore_ready", - summary: `[explorer] EXPLORE complete: ${summary}`, - ref: outputPath -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `[explorer] ## Exploration Results - -**Task**: ${task.subject} -**Mode**: ${exploreMode} | **Angles**: ${angles.join(', ')} | **Requester**: ${requester} - -### Files: ${findings.relevant_files.length} -${findings.relevant_files.slice(0, 8).map(f => `- \`${f.path}\` (${f.role}) — ${f.rationale}`).join('\n')} - -### Patterns: ${findings.patterns.length} -${findings.patterns.slice(0, 5).map(p => `- ${p.name}: ${p.description || ''}`).join('\n') || 'None'} - -### Output: ${outputPath}`, - summary: `[explorer] ${summary}` -}) - -TaskUpdate({ taskId: task.id, status: 'completed' }) -// Check for next EXPLORE task → back to Phase 1 -``` - -## 7. Coordinator Integration - -Explorer 是服务角色,coordinator 在以下场景按需创建 EXPLORE-* 任务: - -| Trigger | Task Example | Requester | -|---------|-------------|-----------| -| RESEARCH-001 需要代码库上下文 | `EXPLORE-001: 代码库上下文搜索` | analyst | -| PLAN-001 需要多角度探索 | `EXPLORE-002: 实现相关代码探索` | planner | -| DISCUSS-004 需要外部最佳实践 | `EXPLORE-003: 外部文档搜索` | discussant | -| IMPL-001 遇到未知代码 | `EXPLORE-004: 依赖追踪` | executor | - -**Task Description Template**: -``` -搜索描述 - -Session: {sessionFolder} -Mode: codebase|external|hybrid -Angles: architecture,patterns,dependencies -Keywords: auth,middleware,session -Requester: analyst -``` - -## 8. Result Caching - -``` -{sessionFolder}/explorations/ -├── explore-explore-001-*.json # Consolidated results -├── exploration-architecture.json # Angle-specific (from cli-explore-agent) -└── exploration-patterns.json -``` - -后续角色 Phase 2 可直接读取已有探索结果,避免重复搜索。 - -## 9. 
Error Handling - -| Error Type | Recovery Strategy | Escalation | -|------------|-------------------|------------| -| ACE unavailable | Fallback to Grep + rg | Continue with degraded results | -| cli-explore-agent failure | Fallback to direct search | Report partial results | -| No results found | Report empty, suggest broader keywords | Coordinator decides | -| Web search fails | Skip external refs | Continue with codebase results | -| Session folder missing | Use .workflow/.tmp | Notify coordinator | diff --git a/.claude/skills/team-lifecycle-v2/roles/fe-developer/role.md b/.claude/skills/team-lifecycle-v2/roles/fe-developer/role.md deleted file mode 100644 index 540ef26b..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/fe-developer/role.md +++ /dev/null @@ -1,410 +0,0 @@ -# Role: fe-developer - -Frontend development. Consumes plan/architecture outputs and implements frontend components, pages, and styling code. - -## Role Identity - -- **Name**: `fe-developer` - -**Task Prefix**: `DEV-FE-*` - -**Output Tag**: `[fe-developer]` - -**Role Type**: Pipeline (frontend sub-pipeline worker) - -**Responsibility**: Context loading → Design token consumption → Component implementation → Report - -## Role Boundaries - -### MUST -- Only process tasks with the `DEV-FE-*` prefix -- Tag all outputs with `[fe-developer]` -- Communicate with the coordinator only via SendMessage -- Follow existing design tokens and component specs (if present) -- Produce accessibility-compliant frontend code (semantic HTML, ARIA attributes, keyboard navigation) -- Follow the project's existing frontend tech stack and conventions - -### MUST NOT -- ❌ Modify backend code or API interfaces -- ❌ Communicate with other workers directly -- ❌ Create tasks for other roles -- ❌ Skip design token/spec checks (if present) -- ❌ Introduce new frontend dependencies without architecture review - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `dev_fe_complete` | fe-developer → coordinator | Implementation done | Frontend implementation complete | -| `dev_fe_progress` | fe-developer → coordinator | Long task progress | Progress update | -| `error` | fe-developer → coordinator | Implementation failure | Implementation failed | - -## Message Bus - -Before every SendMessage, log the message via `mcp__ccw-tools__team_msg`: - -```javascript -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "fe-developer", to: "coordinator", - type:
"dev_fe_complete", - summary: "[fe-developer] DEV-FE complete: 3 components, 1 page", - ref: outputPath -}) -``` - -### CLI 回退 - -```javascript -Bash(`ccw team log --team "${teamName}" --from "fe-developer" --to "coordinator" --type "dev_fe_complete" --summary "[fe-developer] DEV-FE complete" --ref "${outputPath}" --json`) -``` - -## Toolbox - -### Available Commands -- None (inline execution — implementation delegated to subagent) - -### Subagent Capabilities - -| Agent Type | Purpose | -|------------|---------| -| `code-developer` | 组件/页面代码实现 | - -### CLI Capabilities - -| CLI Tool | Mode | Purpose | -|----------|------|---------| -| `ccw cli --tool gemini --mode write` | write | 前端代码生成 | - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('DEV-FE-') && - t.owner === 'fe-developer' && - t.status === 'pending' && - t.blockedBy.length === 0 -) -if (myTasks.length === 0) return -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Context Loading - -```javascript -const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim() - -// Load plan context -let plan = null -try { plan = JSON.parse(Read(`${sessionFolder}/plan/plan.json`)) } catch {} - -// Load design tokens (if architect produced them) -let designTokens = null -try { designTokens = JSON.parse(Read(`${sessionFolder}/architecture/design-tokens.json`)) } catch {} - -// Load design intelligence (from analyst via ui-ux-pro-max) -let designIntel = {} -try { designIntel = JSON.parse(Read(`${sessionFolder}/analysis/design-intelligence.json`)) } catch {} - -// Load component specs (if available) -let componentSpecs = [] -try { - const specFiles = Glob({ pattern: `${sessionFolder}/architecture/component-specs/*.md` }) - componentSpecs = specFiles.map(f => ({ path: f, content: Read(f) })) -} catch {} - -// Load shared memory 
(cross-role state) -let sharedMemory = {} -try { sharedMemory = JSON.parse(Read(`${sessionFolder}/shared-memory.json`)) } catch {} - -// Load wisdom -let wisdom = {} -if (sessionFolder) { - try { wisdom.conventions = Read(`${sessionFolder}/wisdom/conventions.md`) } catch {} - try { wisdom.decisions = Read(`${sessionFolder}/wisdom/decisions.md`) } catch {} -} - -// Extract design constraints from design intelligence -const antiPatterns = designIntel.recommendations?.anti_patterns || [] -const implementationChecklist = designIntel.design_system?.implementation_checklist || [] -const stackGuidelines = designIntel.stack_guidelines || {} - -// Detect frontend tech stack -let techStack = {} -try { techStack = JSON.parse(Read('.workflow/project-tech.json')) } catch {} -const feTech = detectFrontendStack(techStack) -// Override with design intelligence detection if available -if (designIntel.detected_stack) { - const diStack = designIntel.detected_stack - if (['react', 'nextjs', 'vue', 'svelte', 'nuxt'].includes(diStack)) feTech.framework = diStack -} - -function detectFrontendStack(tech) { - const deps = tech?.dependencies || {} - const stack = { framework: 'html', styling: 'css', ui_lib: null } - if (deps.react || deps['react-dom']) stack.framework = 'react' - if (deps.vue) stack.framework = 'vue' - if (deps.svelte) stack.framework = 'svelte' - if (deps.next) stack.framework = 'nextjs' - if (deps.nuxt) stack.framework = 'nuxt' - if (deps.tailwindcss) stack.styling = 'tailwind' - if (deps['@shadcn/ui'] || deps['shadcn-ui']) stack.ui_lib = 'shadcn' - if (deps['@mui/material']) stack.ui_lib = 'mui' - if (deps['antd']) stack.ui_lib = 'antd' - return stack -} -``` - -### Phase 3: Frontend Implementation - -#### Step 1: Generate Design Token CSS (if tokens available) - -```javascript -if (designTokens && (task.description.includes('Scope: tokens') || task.description.includes('Scope: full'))) { - // Convert design-tokens.json to CSS custom properties - let cssVars = ':root {\n'
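// --- Miniature of this block's token → CSS-variable conversion (illustrative
// only; the single-category token shape is an assumption based on the
// W3C-style { $value } entries this file reads from design-tokens.json) ---
function tokensToCssVars(tokens) {
  let out = ':root {\n'
  for (const [name, token] of Object.entries(tokens.color || {})) {
    // Light/dark tokens store an object; flat tokens store the value directly
    const value = typeof token.$value === 'object' ? token.$value.light : token.$value
    out += `  --color-${name}: ${value};\n`
  }
  return out + '}\n'
}
// e.g. tokensToCssVars({ color: { primary: { $value: { light: '#1d4ed8', dark: '#60a5fa' } } } })
// emits a :root block containing `--color-primary: #1d4ed8;`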
- - // Colors - if (designTokens.color) { - for (const [name, token] of Object.entries(designTokens.color)) { - const value = typeof token.$value === 'object' ? token.$value.light : token.$value - cssVars += ` --color-${name}: ${value};\n` - } - } - - // Typography - if (designTokens.typography?.['font-family']) { - for (const [name, token] of Object.entries(designTokens.typography['font-family'])) { - const value = Array.isArray(token.$value) ? token.$value.join(', ') : token.$value - cssVars += ` --font-${name}: ${value};\n` - } - } - if (designTokens.typography?.['font-size']) { - for (const [name, token] of Object.entries(designTokens.typography['font-size'])) { - cssVars += ` --text-${name}: ${token.$value};\n` - } - } - - // Spacing, border-radius, shadow, transition - for (const category of ['spacing', 'border-radius', 'shadow', 'transition']) { - const prefix = { spacing: 'space', 'border-radius': 'radius', shadow: 'shadow', transition: 'duration' }[category] - if (designTokens[category]) { - for (const [name, token] of Object.entries(designTokens[category])) { - cssVars += ` --${prefix}-${name}: ${token.$value};\n` - } - } - } - - cssVars += '}\n' - - // Dark mode overrides - if (designTokens.color) { - const darkOverrides = Object.entries(designTokens.color) - .filter(([, token]) => typeof token.$value === 'object' && token.$value.dark) - if (darkOverrides.length > 0) { - cssVars += '\n@media (prefers-color-scheme: dark) {\n :root {\n' - for (const [name, token] of darkOverrides) { - cssVars += ` --color-${name}: ${token.$value.dark};\n` - } - cssVars += ' }\n}\n' - } - } - - Bash(`mkdir -p src/styles`) - Write('src/styles/tokens.css', cssVars) -} -``` - -#### Step 2: Implement Components - -```javascript -const taskId = task.subject.match(/DEV-FE-(\d+)/)?.[0] -const taskDetail = plan?.task_ids?.includes(taskId) - ? 
JSON.parse(Read(`${sessionFolder}/plan/.task/${taskId}.json`)) - : { title: task.subject, description: task.description, files: [] } - -const isSimple = (taskDetail.files || []).length <= 3 && - !task.description.includes('system') && - !task.description.includes('多组件') - -if (isSimple) { - Task({ - subagent_type: "code-developer", - run_in_background: false, - description: `Frontend implementation: ${taskDetail.title}`, - prompt: `## Frontend Implementation - -Task: ${taskDetail.title} -Description: ${taskDetail.description} - -${designTokens ? `## Design Tokens\nImport from: src/styles/tokens.css\nUse CSS custom properties (var(--color-primary), var(--space-md), etc.)\n${JSON.stringify(designTokens, null, 2).substring(0, 1000)}` : ''} -${componentSpecs.length > 0 ? `## Component Specs\n${componentSpecs.map(s => s.content.substring(0, 500)).join('\n---\n')}` : ''} - -## Tech Stack -- Framework: ${feTech.framework} -- Styling: ${feTech.styling} -${feTech.ui_lib ? `- UI Library: ${feTech.ui_lib}` : ''} - -## Stack-Specific Guidelines -${JSON.stringify(stackGuidelines, null, 2).substring(0, 500)} - -## Implementation Checklist (MUST verify each item) -${implementationChecklist.map(item => `- [ ] ${item}`).join('\n') || '- [ ] Semantic HTML\n- [ ] Keyboard accessible\n- [ ] Responsive layout\n- [ ] Dark mode support'} - -## Anti-Patterns to AVOID -${antiPatterns.map(p => `- ❌ ${p}`).join('\n') || 'None specified'} - -## Coding Standards -- Use design token CSS variables, never hardcode colors/spacing -- All interactive elements must have cursor: pointer -- Transitions: 150-300ms (use var(--duration-normal)) -- Text contrast: minimum 4.5:1 ratio -- Include focus-visible styles for keyboard navigation -- Support prefers-reduced-motion -- Responsive: mobile-first with md/lg breakpoints -- No emoji as functional icons - -## Files to modify/create -${(taskDetail.files || []).map(f => `- ${f.path}: ${f.change}`).join('\n') || 'Determine from task description'} - -## 
Conventions -${wisdom.conventions || 'Follow project existing patterns'}` - }) -} else { - Bash({ - command: `ccw cli -p "PURPOSE: Implement frontend components for '${taskDetail.title}' -TASK: ${taskDetail.description} -MODE: write -CONTEXT: @src/**/*.{tsx,jsx,vue,svelte,css,scss,html} @public/**/* -EXPECTED: Production-ready frontend code with accessibility, responsive design, design token usage -CONSTRAINTS: Framework=${feTech.framework}, Styling=${feTech.styling}${feTech.ui_lib ? ', UI=' + feTech.ui_lib : ''} -ANTI-PATTERNS: ${antiPatterns.join(', ') || 'None'} -CHECKLIST: ${implementationChecklist.join(', ') || 'Semantic HTML, keyboard accessible, responsive, dark mode'}" --tool gemini --mode write --rule development-implement-component-ui`, - run_in_background: true - }) -} -``` - -### Phase 4: Self-Validation + Wisdom + Shared Memory - -```javascript -// === Self-Validation (pre-QA check) === -const implementedFiles = Glob({ pattern: 'src/**/*.{tsx,jsx,vue,svelte,html,css}' }) -const selfCheck = { passed: [], failed: [] } - -for (const file of implementedFiles.slice(0, 20)) { - try { - const content = Read(file) - - // Check: no hardcoded colors (hex outside tokens.css) - if (file !== 'src/styles/tokens.css' && /#[0-9a-fA-F]{3,8}/.test(content)) { - selfCheck.failed.push({ file, check: 'hardcoded-color', message: 'Hardcoded color — use var(--color-*)' }) - } - - // Check: cursor-pointer on interactive elements - if (/button| ({ path: f, status: 'implemented' })) - Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2)) - } catch {} -} -``` - -### Phase 5: Report to Coordinator - -```javascript -const changedFiles = Bash(`git diff --name-only HEAD 2>/dev/null || echo "unknown"`) - .split('\n').filter(Boolean) -const feFiles = changedFiles.filter(f => - /\.(tsx|jsx|vue|svelte|css|scss|html)$/.test(f) -) - -const resultStatus = selfCheck.failed.length === 0 ? 
'complete' : 'complete_with_warnings' - -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "fe-developer", to: "coordinator", - type: "dev_fe_complete", - summary: `[fe-developer] DEV-FE complete: ${feFiles.length} files, self-check: ${selfCheck.failed.length} issues`, - ref: sessionFolder -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `[fe-developer] ## Frontend Implementation Complete - -**Task**: ${task.subject} -**Status**: ${resultStatus} -**Framework**: ${feTech.framework} | **Styling**: ${feTech.styling} -**Design Intelligence**: ${designIntel._source || 'not available'} - -### Files Modified -${feFiles.slice(0, 10).map(f => `- \`${f}\``).join('\n') || 'See git diff'} - -### Design Token Usage -${designTokens ? 'Applied design tokens from architecture → src/styles/tokens.css' : 'No design tokens available — used project defaults'} - -### Self-Validation -${selfCheck.failed.length === 0 ? '✅ All checks passed' : `⚠️ ${selfCheck.failed.length} issues:\n${selfCheck.failed.slice(0, 5).map(f => `- [${f.check}] ${f.file}: ${f.message}`).join('\n')}`} - -### Accessibility -- Semantic HTML structure -- ARIA attributes applied -- Keyboard navigation supported -- Focus-visible styles included`, - summary: `[fe-developer] DEV-FE complete: ${feFiles.length} files` -}) - -TaskUpdate({ taskId: task.id, status: 'completed' }) -// Check for next DEV-FE task → back to Phase 1 -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No DEV-FE-* tasks | Idle, wait for coordinator | -| Design tokens not found | Use project defaults, note in report | -| Component spec missing | Implement from task description only | -| Tech stack undetected | Default to HTML + CSS, ask coordinator | -| Subagent failure | Fallback to CLI write mode | -| Build/lint errors | Report to coordinator for QA-FE review | diff --git a/.claude/skills/team-lifecycle-v2/roles/fe-qa/commands/pre-delivery-checklist.md 
b/.claude/skills/team-lifecycle-v2/roles/fe-qa/commands/pre-delivery-checklist.md deleted file mode 100644 index b8c9e4b0..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/fe-qa/commands/pre-delivery-checklist.md +++ /dev/null @@ -1,116 +0,0 @@ -# Command: pre-delivery-checklist - -> 最终交付前的 CSS 级别精准检查清单,融合 ui-ux-pro-max Pre-Delivery Checklist 和 ux-guidelines Do/Don't 规则。 - -## When to Use - -- Phase 3 of fe-qa role, Dimension 5: Pre-Delivery -- Final review or code-review type tasks - -## Strategy - -### Delegation Mode - -**Mode**: Direct (inline pattern matching in fe-qa Phase 3) - -## Checklist Items - -### Accessibility - -| # | Check | Pattern | Severity | Do | Don't | -|---|-------|---------|----------|-----|-------| -| 1 | Images have alt text | `500ms or <100ms transitions | -| 9 | Loading states | Async ops without loading indicator | MEDIUM | Show skeleton/spinner during fetch | Leave blank screen while loading | -| 10 | Error states | Async ops without error handling | HIGH | Show user-friendly error message | Silently fail or show raw error | - -### Design Compliance - -| # | Check | Pattern | Severity | Do | Don't | -|---|-------|---------|----------|-----|-------| -| 11 | No hardcoded colors | Hex values outside tokens.css | HIGH | Use var(--color-*) tokens | Hardcode #hex values | -| 12 | No hardcoded spacing | px values for margin/padding | MEDIUM | Use var(--space-*) tokens | Hardcode pixel values | -| 13 | No emoji as icons | Unicode emoji in UI | HIGH | Use proper SVG/icon library | Use emoji for functional icons | -| 14 | Dark mode support | No prefers-color-scheme | MEDIUM | Support light/dark themes | Design for light mode only | - -### Layout - -| # | Check | Pattern | Severity | Do | Don't | -|---|-------|---------|----------|-----|-------| -| 15 | Responsive breakpoints | No md:/lg:/@media | MEDIUM | Mobile-first responsive design | Desktop-only layout | -| 16 | No horizontal scroll | Fixed widths > viewport | HIGH | Use relative/fluid 
widths | Set fixed pixel widths on containers | - -## Execution - -```javascript -function runPreDeliveryChecklist(fileContents) { - const results = { passed: 0, failed: 0, items: [] } - - const checks = [ - { id: 1, check: "Images have alt text", test: (c) => /]*alt=/.test(c), severity: 'CRITICAL' }, - { id: 7, check: "cursor-pointer on clickable", test: (c) => /button|onClick/.test(c) && !/cursor-pointer/.test(c), severity: 'MEDIUM' }, - { id: 11, check: "No hardcoded colors", test: (c, f) => f !== 'src/styles/tokens.css' && /#[0-9a-fA-F]{6}/.test(c), severity: 'HIGH' }, - { id: 13, check: "No emoji as icons", test: (c) => /[\u{1F300}-\u{1F9FF}]/u.test(c), severity: 'HIGH' }, - { id: 14, check: "Dark mode support", test: (c) => !/prefers-color-scheme|dark:|\.dark/.test(c), severity: 'MEDIUM', global: true }, - { id: 15, check: "Responsive breakpoints", test: (c) => !/md:|lg:|@media.*min-width/.test(c), severity: 'MEDIUM', global: true } - ] - - // Per-file checks - for (const [file, content] of Object.entries(fileContents)) { - for (const check of checks.filter(c => !c.global)) { - if (check.test(content, file)) { - results.failed++ - results.items.push({ ...check, file, status: 'FAIL' }) - } else { - results.passed++ - results.items.push({ ...check, file, status: 'PASS' }) - } - } - } - - // Global checks (across all content) - const allContent = Object.values(fileContents).join('\n') - for (const check of checks.filter(c => c.global)) { - if (check.test(allContent)) { - results.failed++ - results.items.push({ ...check, file: 'global', status: 'FAIL' }) - } else { - results.passed++ - results.items.push({ ...check, file: 'global', status: 'PASS' }) - } - } - - return results -} -``` - -## Output Format - -``` -## Pre-Delivery Checklist Results -- Passed: X / Y -- Failed: Z - -### Failed Items -- [CRITICAL] #1 Images have alt text — src/components/Hero.tsx -- [HIGH] #11 No hardcoded colors — src/styles/custom.css -``` - -## Error Handling - -| Scenario | 
Resolution | -|----------|------------| -| No files to check | Report empty checklist, score 10/10 | -| File read error | Skip file, note in report | -| Regex error | Skip check, note in report | diff --git a/.claude/skills/team-lifecycle-v2/roles/fe-qa/role.md b/.claude/skills/team-lifecycle-v2/roles/fe-qa/role.md deleted file mode 100644 index 6e57627d..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/fe-qa/role.md +++ /dev/null @@ -1,510 +0,0 @@ -# Role: fe-qa - -前端质量保证。5 维度代码审查 + Generator-Critic 循环确保前端代码质量。融合 ui-ux-pro-max 的 Pre-Delivery Checklist、ux-guidelines Do/Don't 规则、行业反模式库。 - -## Role Identity - -- **Name**: `fe-qa` -- **Task Prefix**: `QA-FE-*` -- **Output Tag**: `[fe-qa]` -- **Role Type**: Pipeline(前端子流水线 worker) -- **Responsibility**: Context loading → Multi-dimension review → GC feedback → Report - -## Role Boundaries - -### MUST -- 仅处理 `QA-FE-*` 前缀的任务 -- 所有输出带 `[fe-qa]` 标识 -- 仅通过 SendMessage 与 coordinator 通信 -- 执行 5 维度审查(代码质量、可访问性、设计合规、UX 最佳实践、Pre-Delivery) -- 提供可操作的修复建议(Do/Don't 格式) -- 支持 Generator-Critic 循环(最多 2 轮) -- 加载 design-intelligence.json 用于行业反模式检查 - -### MUST NOT -- ❌ 直接修改源代码(仅提供审查意见) -- ❌ 直接与其他 worker 通信 -- ❌ 为其他角色创建任务 -- ❌ 跳过可访问性检查 -- ❌ 在评分未达标时标记通过 - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `qa_fe_passed` | fe-qa → coordinator | All dimensions pass | 前端质检通过 | -| `qa_fe_result` | fe-qa → coordinator | Review complete (may have issues) | 审查结果(含问题) | -| `fix_required` | fe-qa → coordinator | Critical issues found | 需要 fe-developer 修复 | -| `error` | fe-qa → coordinator | Review failure | 审查失败 | - -## Message Bus - -```javascript -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "fe-qa", to: "coordinator", - type: "qa_fe_result", - summary: "[fe-qa] QA-FE: score=8.5, 0 critical, 2 medium", - ref: outputPath -}) -``` - -### CLI 回退 - -```javascript -Bash(`ccw team log --team "${teamName}" --from "fe-qa" --to "coordinator" --type "qa_fe_result" 
--summary "[fe-qa] QA-FE complete" --json`) -``` - -## Toolbox - -### Available Commands -- [commands/pre-delivery-checklist.md](commands/pre-delivery-checklist.md) — CSS-level precision delivery checks - -### CLI Capabilities - -| CLI Tool | Mode | Purpose | -|----------|------|---------| -| `ccw cli --tool gemini --mode analysis` | analysis | Frontend code review | -| `ccw cli --tool codex --mode review` | review | Git-aware code review | - -## Review Dimensions - -| Dimension | Weight | Source | Focus | -|-----------|--------|--------|-------| -| Code Quality | 25% | Standard code review | TypeScript type safety, component structure, state management, error handling | -| Accessibility | 25% | ux-guidelines rules | Semantic HTML, ARIA, keyboard navigation, color contrast, focus-visible, prefers-reduced-motion | -| Design Compliance | 20% | design-intelligence.json | Design token usage, industry anti-patterns, emoji checks, spacing/typography consistency | -| UX Best Practices | 15% | ux-guidelines Do/Don't | Loading states, error states, empty states, cursor-pointer, responsiveness, animation duration | -| Pre-Delivery | 15% | Pre-Delivery Checklist | Dark mode, no console.log, no hardcoded values, i18n readiness, must-have checks | - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('QA-FE-') && - t.owner === 'fe-qa' && - t.status === 'pending' && - t.blockedBy.length === 0 -) -if (myTasks.length === 0) return -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Context Loading - -```javascript -const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim() - -// Load design tokens for compliance check -let designTokens = null -try { designTokens = JSON.parse(Read(`${sessionFolder}/architecture/design-tokens.json`)) } catch {} - -// Load design intelligence (from analyst via ui-ux-pro-max) -let designIntel = {} -try { designIntel = JSON.parse(Read(`${sessionFolder}/analysis/design-intelligence.json`)) } catch {} - -// Load shared memory for industry context + QA history -let sharedMemory = {} -try { sharedMemory =
JSON.parse(Read(`${sessionFolder}/shared-memory.json`)) } catch {} - -const industryContext = sharedMemory.industry_context || {} -const antiPatterns = designIntel.recommendations?.anti_patterns || [] -const mustHave = designIntel.recommendations?.must_have || [] - -// Determine audit strictness from industry (standard / strict for medical/financial) -const strictness = industryContext.config?.strictness || 'standard' - -// Load component specs -let componentSpecs = [] -try { - const specFiles = Glob({ pattern: `${sessionFolder}/architecture/component-specs/*.md` }) - componentSpecs = specFiles.map(f => ({ path: f, content: Read(f) })) -} catch {} - -// Load previous QA results (for GC loop tracking) -let previousQA = [] -try { - const qaFiles = Glob({ pattern: `${sessionFolder}/qa/audit-fe-*.json` }) - previousQA = qaFiles.map(f => JSON.parse(Read(f))) -} catch {} - -// Determine GC round -const gcRound = previousQA.filter(q => q.task_subject === task.subject).length + 1 -const maxGCRounds = 2 - -// Get changed frontend files -const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached 2>/dev/null || echo ""`) - .split('\n').filter(f => /\.(tsx|jsx|vue|svelte|css|scss|html|ts|js)$/.test(f)) - -// Read file contents for review -const fileContents = {} -for (const file of changedFiles.slice(0, 30)) { - try { fileContents[file] = Read(file) } catch {} -} -``` - -### Phase 3: 5-Dimension Review - -```javascript -const review = { - task_subject: task.subject, - gc_round: gcRound, - timestamp: new Date().toISOString(), - dimensions: [], - issues: [], - overall_score: 0, - verdict: 'PENDING' -} - -// === Dimension 1: Code Quality (25%) === -const codeQuality = { name: 'code-quality', weight: 0.25, score: 10, issues: [] } -for (const [file, content] of Object.entries(fileContents)) { - if (/:\s*any\b/.test(content)) { - codeQuality.issues.push({ file, severity: 'medium', issue: 'Using `any` type', fix: 'Replace with specific type', 
do: 'Define proper TypeScript types', dont: 'Use `any` to bypass type checking' }) - codeQuality.score -= 1.5 - } - if (/\.tsx$/.test(file) && /export/.test(content) && !/ErrorBoundary/.test(content) && /throw/.test(content)) { - codeQuality.issues.push({ file, severity: 'low', issue: 'No error boundary for component with throw', fix: 'Wrap with ErrorBoundary' }) - codeQuality.score -= 0.5 - } - if (/style=\{?\{/.test(content) && designTokens) { - codeQuality.issues.push({ file, severity: 'medium', issue: 'Inline styles detected', fix: 'Use design tokens or CSS classes', do: 'Use var(--color-*) tokens', dont: 'Hardcode style values inline' }) - codeQuality.score -= 1.5 - } - if (/catch\s*\([^)]*\)\s*\{\s*\}/.test(content)) { - codeQuality.issues.push({ file, severity: 'high', issue: 'Empty catch block', fix: 'Add error handling logic', do: 'Log or handle the error', dont: 'Silently swallow exceptions' }) - codeQuality.score -= 2 - } - if (content.split('\n').length > 300) { - codeQuality.issues.push({ file, severity: 'medium', issue: 'File exceeds 300 lines', fix: 'Split into smaller modules' }) - codeQuality.score -= 1 - } -} -codeQuality.score = Math.max(0, codeQuality.score) -review.dimensions.push(codeQuality) - -// === Dimension 2: Accessibility (25%) === -const a11y = { name: 'accessibility', weight: 0.25, score: 10, issues: [] } -for (const [file, content] of Object.entries(fileContents)) { - if (!/\.(tsx|jsx|vue|svelte|html)$/.test(file)) continue - - if (/<img/.test(content) && !/<img[^>]*alt=/.test(content)) { - a11y.issues.push({ file, severity: 'high', issue: 'Image missing alt attribute', fix: 'Add descriptive alt text', do: 'Always provide alt text', dont: 'Leave alt empty without role="presentation"' }) - a11y.score -= 3 - } - if (/onClick/.test(content) && !/onKeyDown|onKeyPress|onKeyUp|role=.button/.test(content)) { - a11y.issues.push({ file, severity: 'medium', issue: 'Click handler without keyboard equivalent', fix: 'Add onKeyDown or role="button" tabIndex={0}' }) - a11y.score
-= 1.5 - } - if (/<input/.test(content) && !/<label|aria-label/.test(content)) { - a11y.issues.push({ file, severity: 'high', issue: 'Input without associated label', fix: 'Add <label> or aria-label', do: 'Associate every input with a label', dont: 'Use placeholder as sole label' }) - a11y.score -= 2 - } - if (/<button[^>]*>\s*<\/button>/.test(content)) { - a11y.issues.push({ file, severity: 'medium', issue: 'Empty button without accessible name', fix: 'Add text content or aria-label' }) - a11y.score -= 1 - } - // Heading hierarchy - const headings = content.match(/<h[1-6]/g)?.map(h => parseInt(h[2])) || [] - for (let i = 1; i < headings.length; i++) { - if (headings[i] - headings[i-1] > 1) { - a11y.issues.push({ file, severity: 'medium', issue: `Heading level skipped: h${headings[i-1]} → h${headings[i]}`, fix: 'Use sequential heading levels' }) - a11y.score -= 1 - } - } - // Focus-visible styles - if (/<button|<input|<a /.test(content) && !/focus-visible|:focus/.test(content)) { - a11y.issues.push({ file, severity: 'medium', issue: 'Interactive element without visible focus styles', fix: 'Add :focus-visible styles' }) - a11y.score -= 1 - } -} -a11y.score = Math.max(0, a11y.score) -review.dimensions.push(a11y) - -// === Dimension 3: Design Compliance (20%) === -const designCompliance = { name: 'design-compliance', weight: 0.20, score: 10, issues: [] } -for (const [file, content] of Object.entries(fileContents)) { - // Design token usage - if (designTokens && /#[0-9a-fA-F]{3,6}\b/.test(content) && !/tokens/.test(file)) { - designCompliance.issues.push({ file, severity: 'medium', issue: 'Hardcoded color instead of design token', fix: 'Use var(--color-*) tokens' }) - designCompliance.score -= 1 - } - // Industry anti-patterns from design-intelligence.json - for (const ap of antiPatterns) { - const label = typeof ap === 'string' ? ap : ap.name - if (label && content.toLowerCase().includes(label.toLowerCase())) { - designCompliance.issues.push({ file, severity: 'medium', issue: `Industry anti-pattern: ${label}` }) - designCompliance.score -= 1 - } - } -} -designCompliance.score = Math.max(0, designCompliance.score) -review.dimensions.push(designCompliance) - -// === Dimension 4: UX Best Practices (15%) === -const uxPractices = { name: 'ux-best-practices', weight: 0.15, score: 10, issues: [] } -for (const [file, content] of Object.entries(fileContents)) { - // Animation durations (CSS files included) - const ms = parseInt(content.match(/transition[^;{]*?(\d+)ms/)?.[1] || '0') - if (ms > 0 && (ms < 100 || ms > 500)) { - uxPractices.issues.push({ file, severity: 'low', issue: `Transition ${ms}ms outside 150-300ms range`, fix: 'Use 150-300ms for micro-interactions' }) - uxPractices.score -= 0.5 - } - if (!/\.(tsx|jsx|vue|svelte)$/.test(file)) continue - // Loading states - if (/fetch|useQuery|useSWR|axios/.test(content) && !/loading|isLoading|skeleton|spinner/i.test(content)) { - uxPractices.issues.push({ file, severity: 'medium', issue: 'Data fetching without loading state', fix: 'Add loading indicator', do: 'Show skeleton/spinner during fetch', dont: 'Leave blank screen while loading' }) - uxPractices.score -= 1 - } - // Error states - if (/fetch|useQuery|useSWR|axios/.test(content) && !/error|isError|catch/i.test(content)) { - uxPractices.issues.push({ file, severity: 'high', issue: 'Data fetching without error handling', fix: 'Add error state UI', do: 'Show user-friendly error message', dont: 'Silently fail or show raw error' }) - uxPractices.score -= 2 - } - // Empty states - if (/\.map\(/.test(content) && !/empty|no.*data|no.*result|length\s*===?\s*0/i.test(content)) { - uxPractices.issues.push({ file, severity: 'low', issue: 'List rendering without empty state', fix: 'Add empty state message' }) - uxPractices.score -= 0.5 - } - // Responsive breakpoints - if (/className|class=/.test(content) && !/md:|lg:|@media/.test(content)) { - uxPractices.issues.push({ file, severity: 'medium', issue: 'No responsive breakpoints', fix: 'Mobile-first responsive design', do: 
'Mobile-first responsive design', dont: 'Design for desktop only' }) - uxPractices.score -= 1 - } -} -uxPractices.score = Math.max(0, uxPractices.score) -review.dimensions.push(uxPractices) - -// === Dimension 5: Pre-Delivery (15%) === -// Detailed checklist: commands/pre-delivery-checklist.md -const preDelivery = { name: 'pre-delivery', weight: 0.15, score: 10, issues: [] } -const allContent = Object.values(fileContents).join('\n') - -// Per-file checks -for (const [file, content] of Object.entries(fileContents)) { - if (/console\.(log|debug|info)\(/.test(content) && !/test|spec|\.test\./.test(file)) { - preDelivery.issues.push({ file, severity: 'medium', issue: 'console.log in production code', fix: 'Remove or use proper logger' }) - preDelivery.score -= 1 - } - if (/\.(tsx|jsx)$/.test(file) && />\s*[A-Z][a-z]+\s+[a-z]+/.test(content) && !/t\(|intl|i18n|formatMessage/.test(content)) { - preDelivery.issues.push({ file, severity: 'low', issue: 'Hardcoded text — consider i18n', fix: 'Extract to translation keys' }) - preDelivery.score -= 0.5 - } - if (/TODO|FIXME|HACK|XXX/.test(content)) { - preDelivery.issues.push({ file, severity: 'low', issue: 'TODO/FIXME comment found', fix: 'Resolve or create issue' }) - preDelivery.score -= 0.5 - } -} - -// Global checklist items (from pre-delivery-checklist.md) -const checklist = [ - { check: "No emoji as functional icons", test: () => /[\u{1F300}-\u{1F9FF}]/u.test(allContent), severity: 'high' }, - { check: "cursor-pointer on clickable", test: () => /button|onClick/.test(allContent) && !/cursor-pointer/.test(allContent), severity: 'medium' }, - { check: "Focus states visible", test: () => /<button|<input|<a /.test(allContent) && !/focus-visible|:focus/.test(allContent), severity: 'medium' }, - { check: "Reduced motion respected", test: () => /animation|@keyframes/.test(allContent) && !/prefers-reduced-motion/.test(allContent), severity: 'medium' }, - { check: "Responsive breakpoints", test: () => !/md:|lg:|@media.*min-width/.test(allContent), severity: 'medium' }, - { check: "No hardcoded colors", test: () => { const nt = Object.entries(fileContents).filter(([f])
=> f !== 'src/styles/tokens.css'); return nt.some(([,c]) => /#[0-9a-fA-F]{6}/.test(c)) }, severity: 'high' }, - { check: "Dark mode support", test: () => !/prefers-color-scheme|dark:|\.dark/.test(allContent), severity: 'medium' } -] -for (const item of checklist) { - try { - if (item.test()) { - preDelivery.issues.push({ check: item.check, severity: item.severity, issue: `Pre-delivery: ${item.check}` }) - preDelivery.score -= (item.severity === 'high' ? 2 : item.severity === 'medium' ? 1 : 0.5) - } - } catch {} -} - -// Must-have checks from industry config -for (const req of mustHave) { - if (req === 'wcag-aaa' && !/aria-/.test(allContent)) { - preDelivery.issues.push({ severity: 'high', issue: 'WCAG AAA required but no ARIA attributes found' }) - preDelivery.score -= 3 - } - if (req === 'high-contrast' && !/high-contrast|forced-colors/.test(allContent)) { - preDelivery.issues.push({ severity: 'medium', issue: 'High contrast mode not supported' }) - preDelivery.score -= 1 - } -} -preDelivery.score = Math.max(0, preDelivery.score) -review.dimensions.push(preDelivery) - -// === Calculate Overall Score === -review.overall_score = review.dimensions.reduce((sum, d) => sum + d.score * d.weight, 0) -review.issues = review.dimensions.flatMap(d => d.issues) -const criticalCount = review.issues.filter(i => i.severity === 'high').length - -if (review.overall_score >= 8 && criticalCount === 0) { - review.verdict = 'PASS' -} else if (gcRound >= maxGCRounds) { - review.verdict = review.overall_score >= 6 ? 'PASS_WITH_WARNINGS' : 'FAIL' -} else { - review.verdict = 'NEEDS_FIX' -} -``` - -### Phase 4: Package Results + Shared Memory - -```javascript -const outputPath = sessionFolder - ? 
`${sessionFolder}/qa/audit-fe-${task.subject.replace(/[^a-zA-Z0-9-]/g, '-').toLowerCase()}-r${gcRound}.json` - : '.workflow/.tmp/qa-fe-audit.json' - -Bash(`mkdir -p "$(dirname '${outputPath}')"`) -Write(outputPath, JSON.stringify(review, null, 2)) - -// Wisdom contribution -if (sessionFolder && review.issues.length > 0) { - try { - const issuesPath = `${sessionFolder}/wisdom/issues.md` - const existing = Read(issuesPath) - const timestamp = new Date().toISOString().substring(0, 10) - const highIssues = review.issues.filter(i => i.severity === 'high') - if (highIssues.length > 0) { - const entries = highIssues.map(i => `- [${timestamp}] [fe-qa] ${i.issue} in ${i.file || 'global'}`).join('\n') - Write(issuesPath, existing + '\n' + entries) - } - } catch {} -} - -// Update shared memory with QA history -if (sessionFolder) { - try { - sharedMemory.qa_history = sharedMemory.qa_history || [] - sharedMemory.qa_history.push({ - task_subject: task.subject, - gc_round: gcRound, - verdict: review.verdict, - score: review.overall_score, - critical_count: criticalCount, - total_issues: review.issues.length, - timestamp: new Date().toISOString() - }) - Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2)) - } catch {} -} -``` - -### Phase 5: Report to Coordinator - -```javascript -const msgType = review.verdict === 'PASS' || review.verdict === 'PASS_WITH_WARNINGS' - ? 'qa_fe_passed' - : criticalCount > 0 ? 
'fix_required' : 'qa_fe_result' - -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "fe-qa", to: "coordinator", - type: msgType, - summary: `[fe-qa] QA-FE R${gcRound}: ${review.verdict}, score=${review.overall_score.toFixed(1)}, ${criticalCount} critical`, - ref: outputPath -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `[fe-qa] ## Frontend QA Review - -**Task**: ${task.subject} -**Round**: ${gcRound}/${maxGCRounds} -**Verdict**: ${review.verdict} -**Score**: ${review.overall_score.toFixed(1)}/10 -**Strictness**: ${strictness} -**Design Intelligence**: ${designIntel._source || 'not available'} - -### Dimension Scores -${review.dimensions.map(d => `- **${d.name}**: ${d.score.toFixed(1)}/10 (${d.issues.length} issues)`).join('\n')} - -### Critical Issues (${criticalCount}) -${review.issues.filter(i => i.severity === 'high').map(i => `- \`${i.file || i.check}\`: ${i.issue} → ${i.fix || ''}${i.do ? `\n ✅ Do: ${i.do}` : ''}${i.dont ? `\n ❌ Don't: ${i.dont}` : ''}`).join('\n') || 'None'} - -### Medium Issues -${review.issues.filter(i => i.severity === 'medium').slice(0, 5).map(i => `- \`${i.file || i.check}\`: ${i.issue} → ${i.fix || ''}`).join('\n') || 'None'} - -${review.verdict === 'NEEDS_FIX' ? 
`\n### Action Required\nfe-developer must fix ${criticalCount} critical issue(s) and resubmit.` : ''} - -### Output: ${outputPath}`, - summary: `[fe-qa] QA-FE R${gcRound}: ${review.verdict}, ${review.overall_score.toFixed(1)}/10` -}) - -TaskUpdate({ taskId: task.id, status: 'completed' }) -// Check for next QA-FE task → back to Phase 1 -``` - -## Generator-Critic Loop - -The fe-developer ↔ fe-qa loop is orchestrated by the coordinator: - -``` -Round 1: DEV-FE-001 → QA-FE-001 - if QA verdict = NEEDS_FIX: - coordinator creates DEV-FE-002 (fix task, blockedBy QA-FE-001) - coordinator creates QA-FE-002 (re-review, blockedBy DEV-FE-002) -Round 2: DEV-FE-002 → QA-FE-002 - if still NEEDS_FIX: verdict = PASS_WITH_WARNINGS or FAIL (max 2 rounds) -``` - -**Convergence condition**: `overall_score >= 8 && critical_count === 0` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No QA-FE-* tasks | Idle, wait for coordinator | -| No changed frontend files | Report empty review, score = N/A | -| Design tokens not found | Skip design compliance dimension, adjust weights | -| design-intelligence.json not found | Skip industry anti-patterns, use standard strictness | -| Git diff fails | Use Glob to find recent frontend files | -| Max GC rounds exceeded | Force verdict (PASS_WITH_WARNINGS or FAIL) | -| ui-ux-pro-max not installed | Continue without design intelligence, note in report | diff --git a/.claude/skills/team-lifecycle-v2/roles/planner/commands/explore.md b/.claude/skills/team-lifecycle-v2/roles/planner/commands/explore.md deleted file mode 100644 index 4746d851..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/planner/commands/explore.md +++ /dev/null @@ -1,466 +0,0 @@ -# Command: Multi-Angle Exploration - -Phase 2 of planner execution: assess complexity, select exploration angles, and execute parallel exploration. - -## Overview - -This command performs multi-angle codebase exploration based on task complexity. 
Low complexity uses direct semantic search, while Medium/High complexity launches parallel cli-explore-agent subagents for comprehensive analysis. - -## Complexity Assessment - -### assessComplexity Function - -```javascript -function assessComplexity(desc) { - let score = 0 - if (/refactor|architect|restructure|模块|系统/.test(desc)) score += 2 - if (/multiple|多个|across|跨/.test(desc)) score += 2 - if (/integrate|集成|api|database/.test(desc)) score += 1 - if (/security|安全|performance|性能/.test(desc)) score += 1 - return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low' -} - -const complexity = assessComplexity(task.description) -``` - -### Complexity Levels - -| Level | Score | Characteristics | Angle Count | -|-------|-------|----------------|-------------| -| **Low** | 0-1 | Simple feature, single module, clear scope | 1 | -| **Medium** | 2-3 | Multiple modules, integration points, moderate scope | 3 | -| **High** | 4+ | Architecture changes, cross-cutting concerns, complex scope | 4 | - -## Angle Selection - -### ANGLE_PRESETS - -```javascript -const ANGLE_PRESETS = { - architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'], - security: ['security', 'auth-patterns', 'dataflow', 'validation'], - performance: ['performance', 'bottlenecks', 'caching', 'data-access'], - bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'], - feature: ['patterns', 'integration-points', 'testing', 'dependencies'] -} -``` - -### selectAngles Function - -```javascript -function selectAngles(desc, count) { - const text = desc.toLowerCase() - let preset = 'feature' - if (/refactor|architect|restructure|modular/.test(text)) preset = 'architecture' - else if (/security|auth|permission|access/.test(text)) preset = 'security' - else if (/performance|slow|optimi|cache/.test(text)) preset = 'performance' - else if (/fix|bug|error|issue|broken/.test(text)) preset = 'bugfix' - return ANGLE_PRESETS[preset].slice(0, count) -} - -const angleCount = 
complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1) -const selectedAngles = selectAngles(task.description, angleCount) -``` - -### Angle Definitions - -| Angle | Focus | Use Case | -|-------|-------|----------| -| **architecture** | System structure, layer boundaries, design patterns | Refactoring, restructuring | -| **dependencies** | Module dependencies, coupling, external libraries | Integration, modularity | -| **modularity** | Component boundaries, separation of concerns | Architecture changes | -| **integration-points** | API boundaries, data flow between modules | Feature development | -| **security** | Auth/authz, input validation, data protection | Security features | -| **auth-patterns** | Authentication flows, session management | Auth implementation | -| **dataflow** | Data transformation, state propagation | Bug fixes, features | -| **validation** | Input validation, error handling | Security, quality | -| **performance** | Bottlenecks, optimization opportunities | Performance tuning | -| **bottlenecks** | Slow operations, resource contention | Performance issues | -| **caching** | Cache strategies, invalidation patterns | Performance optimization | -| **data-access** | Database queries, data fetching patterns | Performance, features | -| **error-handling** | Error propagation, recovery strategies | Bug fixes | -| **state-management** | State updates, consistency | Bug fixes, features | -| **edge-cases** | Boundary conditions, error scenarios | Bug fixes, testing | -| **patterns** | Code patterns, conventions, best practices | Feature development | -| **testing** | Test coverage, test strategies | Feature development | - -## Exploration Execution - -### Low Complexity: Direct Semantic Search - -```javascript -if (complexity === 'Low') { - // Direct exploration via semantic search - const results = mcp__ace-tool__search_context({ - project_root_path: projectRoot, - query: task.description - }) - - // Transform ACE results to exploration JSON - 
const exploration = { - project_structure: "Analyzed via ACE semantic search", - relevant_files: results.files.map(f => ({ - path: f.path, - rationale: f.relevance_reason || "Semantic match to task description", - role: "modify_target", - discovery_source: "ace-search", - key_symbols: f.symbols || [] - })), - patterns: results.patterns || [], - dependencies: results.dependencies || [], - integration_points: results.integration_points || [], - constraints: [], - clarification_needs: [], - _metadata: { - exploration_angle: selectedAngles[0], - complexity: 'Low', - discovery_method: 'ace-semantic-search' - } - } - - Write(`${planDir}/exploration-${selectedAngles[0]}.json`, JSON.stringify(exploration, null, 2)) -} -``` - -### Medium/High Complexity: Parallel cli-explore-agent - -```javascript -else { - // Launch parallel cli-explore-agent for each angle - selectedAngles.forEach((angle, index) => { - Task({ - subagent_type: "cli-explore-agent", - run_in_background: false, - description: `Explore: ${angle}`, - prompt: ` -## Task Objective -Execute **${angle}** exploration for task planning context. - -## Output Location -**Session Folder**: ${sessionFolder} -**Output File**: ${planDir}/exploration-${angle}.json - -## Assigned Context -- **Exploration Angle**: ${angle} -- **Task Description**: ${task.description} -- **Spec Context**: ${specContext ? 'Available — use spec/requirements, spec/architecture, spec/epics for informed exploration' : 'Not available (impl-only mode)'} -- **Exploration Index**: ${index + 1} of ${selectedAngles.length} - -## MANDATORY FIRST STEPS -1. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files) -2. Execute: cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json (get output schema) -3. Read: .workflow/project-tech.json (if exists - technology stack) - -## Expected Output -Write JSON to: ${planDir}/exploration-${angle}.json -Follow explore-json-schema.json structure with ${angle}-focused findings. 
- -**MANDATORY**: Every file in relevant_files MUST have: -- **rationale** (required): Specific selection basis tied to ${angle} topic (>10 chars, not generic) -- **role** (required): modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only -- **discovery_source** (recommended): bash-scan|cli-analysis|ace-search|dependency-trace|manual -- **key_symbols** (recommended): Key functions/classes/types relevant to task - -## Exploration Focus by Angle - -${getAngleFocusGuide(angle)} - -## Output Schema Structure - -\`\`\`json -{ - "project_structure": "string - high-level architecture overview", - "relevant_files": [ - { - "path": "string - relative file path", - "rationale": "string - WHY this file matters for ${angle} (>10 chars, specific)", - "role": "modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only", - "discovery_source": "bash-scan|cli-analysis|ace-search|dependency-trace|manual", - "key_symbols": ["function/class/type names"] - } - ], - "patterns": ["string - code patterns relevant to ${angle}"], - "dependencies": ["string - module/library dependencies"], - "integration_points": ["string - API/interface boundaries"], - "constraints": ["string - technical constraints"], - "clarification_needs": ["string - questions needing user input"], - "_metadata": { - "exploration_angle": "${angle}", - "complexity": "${complexity}", - "discovery_method": "cli-explore-agent" - } -} -\`\`\` -` - }) - }) -} -``` - -### Angle Focus Guide - -```javascript -function getAngleFocusGuide(angle) { - const guides = { - architecture: ` -**Architecture Focus**: -- Identify layer boundaries (presentation, business, data) -- Map module dependencies and coupling -- Locate design patterns (factory, strategy, observer, etc.) 
-- Find architectural decision records (ADRs) -- Analyze component responsibilities`, - - dependencies: ` -**Dependencies Focus**: -- Map internal module dependencies (import/require statements) -- Identify external library usage (package.json, requirements.txt) -- Trace dependency chains and circular dependencies -- Locate shared utilities and common modules -- Analyze coupling strength between modules`, - - modularity: ` -**Modularity Focus**: -- Identify module boundaries and interfaces -- Analyze separation of concerns -- Locate tightly coupled code -- Find opportunities for extraction/refactoring -- Map public vs private APIs`, - - 'integration-points': ` -**Integration Points Focus**: -- Locate API endpoints and routes -- Identify data flow between modules -- Find event emitters/listeners -- Map external service integrations -- Analyze interface contracts`, - - security: ` -**Security Focus**: -- Locate authentication/authorization logic -- Identify input validation points -- Find sensitive data handling -- Analyze access control mechanisms -- Locate security-related middleware`, - - 'auth-patterns': ` -**Auth Patterns Focus**: -- Identify authentication flows (login, logout, refresh) -- Locate session management code -- Find token generation/validation -- Map user permission checks -- Analyze auth middleware`, - - dataflow: ` -**Dataflow Focus**: -- Trace data transformations -- Identify state propagation paths -- Locate data validation points -- Map data sources and sinks -- Analyze data mutation points`, - - validation: ` -**Validation Focus**: -- Locate input validation logic -- Identify schema definitions -- Find error handling for invalid data -- Map validation middleware -- Analyze sanitization functions`, - - performance: ` -**Performance Focus**: -- Identify computational bottlenecks -- Locate database queries (N+1 problems) -- Find synchronous blocking operations -- Map resource-intensive operations -- Analyze algorithm complexity`, - - bottlenecks: 
` -**Bottlenecks Focus**: -- Locate slow operations (profiling data) -- Identify resource contention points -- Find inefficient algorithms -- Map hot paths in code -- Analyze concurrency issues`, - - caching: ` -**Caching Focus**: -- Locate existing cache implementations -- Identify cacheable operations -- Find cache invalidation logic -- Map cache key strategies -- Analyze cache hit/miss patterns`, - - 'data-access': ` -**Data Access Focus**: -- Locate database query patterns -- Identify ORM/query builder usage -- Find data fetching strategies -- Map data access layers -- Analyze query optimization opportunities`, - - 'error-handling': ` -**Error Handling Focus**: -- Locate try-catch blocks -- Identify error propagation paths -- Find error recovery strategies -- Map error logging points -- Analyze error types and handling`, - - 'state-management': ` -**State Management Focus**: -- Locate state containers (Redux, Vuex, etc.) -- Identify state update patterns -- Find state synchronization logic -- Map state dependencies -- Analyze state consistency mechanisms`, - - 'edge-cases': ` -**Edge Cases Focus**: -- Identify boundary conditions -- Locate null/undefined handling -- Find empty array/object handling -- Map error scenarios -- Analyze exceptional flows`, - - patterns: ` -**Patterns Focus**: -- Identify code patterns and conventions -- Locate design pattern implementations -- Find naming conventions -- Map code organization patterns -- Analyze best practices usage`, - - testing: ` -**Testing Focus**: -- Locate test files and test utilities -- Identify test coverage gaps -- Find test patterns (unit, integration, e2e) -- Map mocking/stubbing strategies -- Analyze test organization` - } - - return guides[angle] || `**${angle} Focus**: Analyze codebase from ${angle} perspective` -} -``` - -## Explorations Manifest - -```javascript -// Build explorations manifest -const explorationManifest = { - session_id: `${taskSlug}-${dateStr}`, - task_description: task.description, 
- complexity: complexity, - exploration_count: selectedAngles.length, - explorations: selectedAngles.map(angle => ({ - angle: angle, - file: `exploration-${angle}.json`, - path: `${planDir}/exploration-${angle}.json` - })) -} -Write(`${planDir}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2)) -``` - -## Output Schema - -### explore-json-schema.json Structure - -```json -{ - "project_structure": "string - high-level architecture overview", - "relevant_files": [ - { - "path": "string - relative file path", - "rationale": "string - specific selection basis (>10 chars)", - "role": "modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only", - "discovery_source": "bash-scan|cli-analysis|ace-search|dependency-trace|manual", - "key_symbols": ["string - function/class/type names"] - } - ], - "patterns": ["string - code patterns relevant to angle"], - "dependencies": ["string - module/library dependencies"], - "integration_points": ["string - API/interface boundaries"], - "constraints": ["string - technical constraints"], - "clarification_needs": ["string - questions needing user input"], - "_metadata": { - "exploration_angle": "string - angle name", - "complexity": "Low|Medium|High", - "discovery_method": "ace-semantic-search|cli-explore-agent" - } -} -``` - -## Integration with Phase 3 - -Phase 3 (Plan Generation) consumes: -1. `explorations-manifest.json` - list of exploration files -2. `exploration-{angle}.json` - per-angle exploration results -3. `specContext` (if available) - requirements, architecture, epics - -These inputs are passed to cli-lite-planning-agent for plan generation. 
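As a rough sketch of that hand-off, the consumed artifacts can be folded into a single planning payload. `collectPlanningInputs` and the payload shape below are illustrative assumptions, not part of the documented planner API; `readFn` stands in for the agent-side `Read` tool:

```javascript
// Hypothetical helper: assemble Phase 3 planning input from the explorations
// manifest. Unreadable or invalid exploration files are skipped, not fatal.
function collectPlanningInputs(manifest, readFn, specContext) {
  const explorations = manifest.explorations
    .map(e => {
      try { return { ...JSON.parse(readFn(e.path)), angle: e.angle } }
      catch { return null } // file missing or not valid JSON
    })
    .filter(Boolean)
  return {
    task_description: manifest.task_description,
    complexity: manifest.complexity,
    explorations,
    spec_context: specContext || null // null in impl-only mode
  }
}
```

Dropping unreadable exploration files rather than failing mirrors the exploration-failure handling used elsewhere in this command.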
- -## Error Handling - -### Exploration Agent Failure - -```javascript -try { - Task({ - subagent_type: "cli-explore-agent", - run_in_background: false, - description: `Explore: ${angle}`, - prompt: `...` - }) -} catch (error) { - // Skip exploration, continue with available explorations - console.error(`[planner] Exploration failed for angle: ${angle}`, error) - // Remove failed angle from manifest - explorationManifest.explorations = explorationManifest.explorations.filter(e => e.angle !== angle) - explorationManifest.exploration_count = explorationManifest.explorations.length -} -``` - -### All Explorations Fail - -```javascript -if (explorationManifest.exploration_count === 0) { - // Fallback: Plan from task description only - console.warn(`[planner] All explorations failed, planning from task description only`) - // Proceed to Phase 3 with empty explorations -} -``` - -### ACE Search Failure (Low Complexity) - -```javascript -try { - const results = mcp__ace-tool__search_context({ - project_root_path: projectRoot, - query: task.description - }) -} catch (error) { - // Fallback: Use ripgrep for basic file discovery - const rgResults = Bash(`rg -l "${task.description}" --type ts`) - const exploration = { - project_structure: "Basic file discovery via ripgrep", - relevant_files: rgResults.split('\n').map(path => ({ - path: path.trim(), - rationale: "Matched task description keywords", - role: "modify_target", - discovery_source: "bash-scan", - key_symbols: [] - })), - patterns: [], - dependencies: [], - integration_points: [], - constraints: [], - clarification_needs: [], - _metadata: { - exploration_angle: selectedAngles[0], - complexity: 'Low', - discovery_method: 'ripgrep-fallback' - } - } - Write(`${planDir}/exploration-${selectedAngles[0]}.json`, JSON.stringify(exploration, null, 2)) -} -``` diff --git a/.claude/skills/team-lifecycle-v2/roles/planner/role.md b/.claude/skills/team-lifecycle-v2/roles/planner/role.md deleted file mode 100644 index 
c658ec3a..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/planner/role.md +++ /dev/null @@ -1,253 +0,0 @@ -# Role: planner - -Multi-angle code exploration and structured implementation planning. Submits plans to the coordinator for approval. - -## Role Identity - -- **Name**: `planner` -- **Task Prefix**: `PLAN-*` -- **Output Tag**: `[planner]` -- **Responsibility**: Code exploration → Implementation planning → Coordinator approval -- **Communication**: SendMessage to coordinator only - -## Role Boundaries - -### MUST -- Only process PLAN-* tasks -- Communicate only with coordinator -- Write plan artifacts to `plan/` folder -- Tag all SendMessage and team_msg calls with `[planner]` -- Assess complexity (Low/Medium/High) -- Execute multi-angle exploration based on complexity -- Generate plan.json + .task/TASK-*.json following schemas -- Submit plan for coordinator approval -- Load spec context in full-lifecycle mode - -### MUST NOT -- Create tasks -- Contact other workers directly -- Implement code -- Modify spec documents -- Skip complexity assessment -- Proceed without exploration (Medium/High complexity) -- Generate plan without schema validation - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `plan_ready` | planner → coordinator | Plan generation complete | With plan.json path and task count summary | -| `plan_revision` | planner → coordinator | Plan revised and resubmitted | Describes changes made | -| `impl_progress` | planner → coordinator | Exploration phase progress | Optional, for long explorations | -| `error` | planner → coordinator | Unrecoverable error | Exploration failure, schema missing, etc. 
| - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Plan ready -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "plan_ready", summary: "[planner] Plan ready: 3 tasks, Medium complexity", ref: `${sessionFolder}/plan/plan.json` }) - -// Plan revision -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "plan_revision", summary: "[planner] Split task-2 into two subtasks per feedback" }) - -// Error report -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "error", summary: "[planner] plan-overview-base-schema.json not found, using default structure" }) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "planner" --to "coordinator" --type "plan_ready" --summary "[planner] Plan ready: 3 tasks" --ref "${sessionFolder}/plan/plan.json" --json`) -``` - -## Toolbox - -### Available Commands -- `commands/explore.md` - Multi-angle codebase exploration (Phase 2) - -### Subagent Capabilities -- **cli-explore-agent**: Per-angle exploration (Medium/High complexity) -- **cli-lite-planning-agent**: Plan generation (Medium/High complexity) - -### CLI Capabilities -None directly (delegates to subagents) - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('PLAN-') && - t.owner === 'planner' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 1.5: Load Spec Context (Full-Lifecycle Mode) - -```javascript -// Extract session folder from task description (set by coordinator) -const sessionMatch = 
task.description.match(/Session:\s*(.+)/) -const sessionFolder = sessionMatch ? sessionMatch[1].trim() : `.workflow/.team/default` -const planDir = `${sessionFolder}/plan` -Bash(`mkdir -p ${planDir}`) - -// Check if spec directory exists (full-lifecycle mode) -const specDir = `${sessionFolder}/spec` -let specContext = null -try { - const reqIndex = Read(`${specDir}/requirements/_index.md`) - const archIndex = Read(`${specDir}/architecture/_index.md`) - const epicsIndex = Read(`${specDir}/epics/_index.md`) - const specConfig = JSON.parse(Read(`${specDir}/spec-config.json`)) - specContext = { reqIndex, archIndex, epicsIndex, specConfig } -} catch { /* impl-only mode has no spec */ } -``` - -### Phase 2: Multi-Angle Exploration - -**Delegate to**: `Read("commands/explore.md")` - -Execute complexity assessment, angle selection, and parallel exploration. See `commands/explore.md` for full implementation. - -### Phase 3: Plan Generation - -```javascript -// Read schema reference -const schema = Bash(`cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json`) - -if (complexity === 'Low') { - // Direct Claude planning - Bash(`mkdir -p ${planDir}/.task`) - // Generate plan.json + .task/TASK-*.json following schemas - - const plan = { - session_id: `${taskSlug}-${dateStr}`, - task_description: task.description, - complexity: 'Low', - approach: "Direct implementation based on semantic search", - task_count: 1, - task_ids: ['TASK-001'], - exploration_refs: [`${planDir}/exploration-patterns.json`] - } - Write(`${planDir}/plan.json`, JSON.stringify(plan, null, 2)) - - const taskDetail = { - id: 'TASK-001', - title: task.subject, - description: task.description, - files: [], - convergence: { criteria: ["Implementation complete", "Tests pass"] }, - depends_on: [] - } - Write(`${planDir}/.task/TASK-001.json`, JSON.stringify(taskDetail, null, 2)) - -} else { - // Use cli-lite-planning-agent for Medium/High - Task({ - subagent_type: "cli-lite-planning-agent", - 
run_in_background: false, - description: "Generate detailed implementation plan", - prompt: `Generate implementation plan. -Output: ${planDir}/plan.json + ${planDir}/.task/TASK-*.json -Schema: cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json -Task Description: ${task.description} -Explorations: ${explorationManifest} -Complexity: ${complexity} -${specContext ? `Spec Context: -- Requirements: ${specContext.reqIndex.substring(0, 500)} -- Architecture: ${specContext.archIndex.substring(0, 500)} -- Epics: ${specContext.epicsIndex.substring(0, 500)} -Reference REQ-* IDs, follow ADR decisions, reuse Epic/Story decomposition.` : ''} -Requirements: 2-7 tasks, each with id, title, files[].change, convergence.criteria, depends_on` - }) -} -``` - -### Phase 4: Submit for Approval - -```javascript -const plan = JSON.parse(Read(`${planDir}/plan.json`)) -const planTasks = plan.task_ids.map(id => JSON.parse(Read(`${planDir}/.task/${id}.json`))) -const taskCount = plan.task_count || plan.task_ids.length - -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "planner", to: "coordinator", - type: "plan_ready", - summary: `[planner] Plan ready: ${taskCount} tasks, ${complexity} complexity`, - ref: `${planDir}/plan.json` -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `[planner] ## Plan Ready for Review - -**Task**: ${task.subject} -**Complexity**: ${complexity} -**Tasks**: ${taskCount} - -### Task Summary -${planTasks.map((t, i) => (i+1) + '. 
' + t.title).join('\n')} - -### Approach -${plan.approach} - -### Plan Location -${planDir}/plan.json -Task Files: ${planDir}/.task/ - -Please review and approve or request revisions.`, - summary: `[planner] Plan ready: ${taskCount} tasks` -}) - -// Wait for coordinator response (approve → mark completed, revision → update and resubmit) -``` - -### Phase 5: After Approval - -```javascript -TaskUpdate({ taskId: task.id, status: 'completed' }) - -// Check for next PLAN task → back to Phase 1 -``` - -## Session Files - -``` -{sessionFolder}/plan/ -├── exploration-{angle}.json -├── explorations-manifest.json -├── planning-context.md -├── plan.json -└── .task/ - └── TASK-*.json -``` - -> **Note**: `sessionFolder` is extracted from task description (`Session: .workflow/.team/TLS-xxx`). Plan outputs go to `plan/` subdirectory. In full-lifecycle mode, spec products are available at `../spec/`. - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No PLAN-* tasks available | Idle, wait for coordinator assignment | -| Exploration agent failure | Skip exploration, plan from task description only | -| Planning agent failure | Fallback to direct Claude planning | -| Plan rejected 3+ times | Notify coordinator with `[planner]` tag, suggest alternative approach | -| Schema file not found | Use basic plan structure without schema validation, log error with `[planner]` tag | -| Spec context load failure | Continue in impl-only mode (no spec context) | -| Session folder not found | Notify coordinator with `[planner]` tag, request session path | -| Unexpected error | Log error via team_msg with `[planner]` tag, report to coordinator | diff --git a/.claude/skills/team-lifecycle-v2/roles/reviewer/commands/code-review.md b/.claude/skills/team-lifecycle-v2/roles/reviewer/commands/code-review.md deleted file mode 100644 index 53e4a635..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/reviewer/commands/code-review.md +++ /dev/null @@ -1,689 +0,0 @@ -# Code 
Review Command - -## Purpose -4-dimension code review analyzing quality, security, architecture, and requirements compliance. - -## Review Dimensions - -### 1. Quality Review - -```javascript -function reviewQuality(files, gitDiff) { - const issues = { - critical: [], - high: [], - medium: [], - low: [] - } - - for (const file of files) { - const content = file.content - const lines = content.split("\n") - - // Check for @ts-ignore / @ts-expect-error - lines.forEach((line, idx) => { - if (line.includes("@ts-ignore") || line.includes("@ts-expect-error")) { - // Justification = any comment text after the suppression directive itself - const justification = line.replace(/.*@ts-(?:ignore|expect-error)/, "") - const hasJustification = justification.trim().length > 10 - - if (!hasJustification) { - issues.high.push({ - file: file.path, - line: idx + 1, - type: "ts-ignore-without-justification", - message: "TypeScript error suppression without explanation", - code: line.trim() - }) - } - } - }) - - // Check for 'any' type usage - const anyMatches = Grep("\\bany\\b", { path: file.path, "-n": true }) - if (anyMatches) { - anyMatches.forEach(match => { - // Exclude comments and type definitions that are intentionally generic - if (!match.line.includes("//") && !match.line.includes("Generic")) { - issues.high.push({ - file: file.path, - line: match.lineNumber, - type: "any-type-usage", - message: "Using 'any' type reduces type safety", - code: match.line.trim() - }) - } - }) - } - - // Check for console.log in production code - const consoleMatches = Grep("console\\.(log|debug|info)", { path: file.path, "-n": true }) - if (consoleMatches && !file.path.includes("test")) { - consoleMatches.forEach(match => { - issues.high.push({ - file: file.path, - line: match.lineNumber, - type: "console-log", - message: "Console statements should be removed from production code", - code: match.line.trim() - }) - }) - } - - // Check for empty catch blocks - const emptyCatchRegex = /catch\s*\([^)]*\)\s*\{\s*\}/g - let match - while ((match = emptyCatchRegex.exec(content)) 
!== null) { - const lineNumber = content.substring(0, match.index).split("\n").length - issues.critical.push({ - file: file.path, - line: lineNumber, - type: "empty-catch", - message: "Empty catch block silently swallows errors", - code: match[0] - }) - } - - // Check for magic numbers - const magicNumberRegex = /(?<![\w.])\d{2,}(?![\w.])/g - while ((match = magicNumberRegex.exec(content)) !== null) { - const numLine = content.substring(0, match.index).split("\n").length - issues.low.push({ - file: file.path, - line: numLine, - type: "magic-number", - message: "Magic number should be extracted to a named constant", - code: match[0] - }) - } - - // Check for duplicate code - const lineHashes = new Map() - lines.forEach((line, idx) => { - const trimmed = line.trim() - if (trimmed.length > 30 && !trimmed.startsWith("//")) { - if (!lineHashes.has(trimmed)) { - lineHashes.set(trimmed, []) - } - lineHashes.get(trimmed).push(idx + 1) - } - }) - - lineHashes.forEach((occurrences, line) => { - if (occurrences.length > 2) { - issues.medium.push({ - file: file.path, - line: occurrences[0], - type: "duplicate-code", - message: `Duplicate code found at lines: ${occurrences.join(", ")}`, - code: line - }) - } - }) - } - - return issues -} -``` - -### 2. Security Review - -```javascript -function reviewSecurity(files) { - const issues = { - critical: [], - high: [], - medium: [], - low: [] - } - - for (const file of files) { - const content = file.content - - // Check for eval/exec usage - const evalMatches = Grep("\\b(eval|exec|Function\\(|setTimeout\\(.*string|setInterval\\(.*string)\\b", { - path: file.path, - "-n": true - }) - if (evalMatches) { - evalMatches.forEach(match => { - issues.high.push({ - file: file.path, - line: match.lineNumber, - type: "dangerous-eval", - message: "eval/exec usage can lead to code injection vulnerabilities", - code: match.line.trim() - }) - }) - } - - // Check for innerHTML/dangerouslySetInnerHTML - const innerHTMLMatches = Grep("(innerHTML|dangerouslySetInnerHTML)", { - path: file.path, - "-n": true - }) - if (innerHTMLMatches) { - innerHTMLMatches.forEach(match => { - issues.high.push({ - file: file.path, - line: match.lineNumber, - type: "xss-risk", - message: "Direct HTML injection can lead to XSS vulnerabilities", - code: match.line.trim() - }) - }) - } - - // Check for hardcoded secrets - const secretPatterns = [ - /api[_-]?key\s*=\s*['"][^'"]{20,}['"]/i, 
- /password\s*=\s*['"][^'"]+['"]/i, - /secret\s*=\s*['"][^'"]{20,}['"]/i, - /token\s*=\s*['"][^'"]{20,}['"]/i, - /aws[_-]?access[_-]?key/i, - /private[_-]?key\s*=\s*['"][^'"]+['"]/i - ] - - secretPatterns.forEach(pattern => { - // Rebuild with "gim" so the patterns keep their case-insensitive flag - const matches = content.match(new RegExp(pattern.source, "gim")) - if (matches) { - matches.forEach(match => { - const lineNumber = content.substring(0, content.indexOf(match)).split("\n").length - issues.critical.push({ - file: file.path, - line: lineNumber, - type: "hardcoded-secret", - message: "Hardcoded secrets should be moved to environment variables", - code: match.replace(/['"][^'"]+['"]/, "'***'") // Redact secret - }) - }) - } - }) - - // Check for SQL injection vectors - const sqlInjectionMatches = Grep("(query|execute)\\s*\\(.*\\+.*\\)", { - path: file.path, - "-n": true - }) - if (sqlInjectionMatches) { - sqlInjectionMatches.forEach(match => { - if (!match.line.includes("//") && !match.line.includes("prepared")) { - issues.critical.push({ - file: file.path, - line: match.lineNumber, - type: "sql-injection", - message: "String concatenation in SQL queries can lead to SQL injection", - code: match.line.trim() - }) - } - }) - } - - // Check for insecure random - const insecureRandomMatches = Grep("Math\\.random\\(\\)", { - path: file.path, - "-n": true - }) - if (insecureRandomMatches) { - insecureRandomMatches.forEach(match => { - // Check if used for security purposes - const context = content.substring( - Math.max(0, content.indexOf(match.line) - 200), - content.indexOf(match.line) + 200 - ) - if (context.match(/token|key|secret|password|session/i)) { - issues.medium.push({ - file: file.path, - line: match.lineNumber, - type: "insecure-random", - message: "Math.random() is not cryptographically secure, use crypto.randomBytes()", - code: match.line.trim() - }) - } - }) - } - - // Check for missing input validation - const functionMatches = Grep("(function|const.*=.*\\(|async.*\\()", { - path: file.path, - "-n": true - }) - if 
(functionMatches) { - functionMatches.forEach(match => { - // Simple heuristic: check if function has parameters but no validation - if (match.line.includes("(") && !match.line.includes("()")) { - const nextLines = content.split("\n").slice(match.lineNumber, match.lineNumber + 5).join("\n") - const hasValidation = nextLines.match(/if\s*\(|throw|assert|validate|check/) - - if (!hasValidation && !match.line.includes("test") && !match.line.includes("mock")) { - issues.low.push({ - file: file.path, - line: match.lineNumber, - type: "missing-validation", - message: "Function parameters should be validated", - code: match.line.trim() - }) - } - } - }) - } - } - - return issues -} -``` - -### 3. Architecture Review - -```javascript -function reviewArchitecture(files) { - const issues = { - critical: [], - high: [], - medium: [], - low: [] - } - - for (const file of files) { - const content = file.content - const lines = content.split("\n") - - // Check for parent directory imports - const importMatches = Grep("from\\s+['\"](\\.\\./)+", { - path: file.path, - "-n": true - }) - if (importMatches) { - importMatches.forEach(match => { - const parentLevels = (match.line.match(/\.\.\//g) || []).length - - if (parentLevels > 2) { - issues.high.push({ - file: file.path, - line: match.lineNumber, - type: "excessive-parent-imports", - message: `Import traverses ${parentLevels} parent directories, consider restructuring`, - code: match.line.trim() - }) - } else if (parentLevels === 2) { - issues.medium.push({ - file: file.path, - line: match.lineNumber, - type: "parent-imports", - message: "Consider using absolute imports or restructuring modules", - code: match.line.trim() - }) - } - }) - } - - // Check for large files - const lineCount = lines.length - if (lineCount > 500) { - issues.medium.push({ - file: file.path, - line: 1, - type: "large-file", - message: `File has ${lineCount} lines, consider splitting into smaller modules`, - code: `Total lines: ${lineCount}` - }) - } - - // 
Check for circular dependencies (simple heuristic) - const imports = lines - .filter(line => line.match(/^import.*from/)) - .map(line => { - const match = line.match(/from\s+['"](.+?)['"]/) - return match ? match[1] : null - }) - .filter(Boolean) - - // Check if any imported file imports this file back - for (const importPath of imports) { - const resolvedPath = resolveImportPath(file.path, importPath) - if (resolvedPath && Bash(`test -f ${resolvedPath}`).exitCode === 0) { - const importedContent = Read(resolvedPath) - const reverseImport = importedContent.includes(file.path.replace(/\.[jt]sx?$/, "")) - - if (reverseImport) { - issues.critical.push({ - file: file.path, - line: 1, - type: "circular-dependency", - message: `Circular dependency detected with ${resolvedPath}`, - code: `${file.path} ↔ ${resolvedPath}` - }) - } - } - } - - // Check for tight coupling (many imports from same module) - const importCounts = {} - imports.forEach(imp => { - const baseModule = imp.split("/")[0] - importCounts[baseModule] = (importCounts[baseModule] || 0) + 1 - }) - - Object.entries(importCounts).forEach(([module, count]) => { - if (count > 5) { - issues.medium.push({ - file: file.path, - line: 1, - type: "tight-coupling", - message: `File imports ${count} items from '${module}', consider facade pattern`, - code: `Imports from ${module}: ${count}` - }) - } - }) - - // Check for missing abstractions (long functions) - const functionRegex = /(function|const.*=.*\(|async.*\()/g - let match - while ((match = functionRegex.exec(content)) !== null) { - const startLine = content.substring(0, match.index).split("\n").length - const functionBody = extractFunctionBody(content, match.index) - const functionLines = functionBody.split("\n").length - - if (functionLines > 50) { - issues.medium.push({ - file: file.path, - line: startLine, - type: "long-function", - message: `Function has ${functionLines} lines, consider extracting smaller functions`, - code: match[0].trim() - }) - } - } - } - 
- return issues -} - -function resolveImportPath(fromFile, importPath) { - if (importPath.startsWith(".")) { - const dir = fromFile.substring(0, fromFile.lastIndexOf("/")) - const resolved = `${dir}/${importPath}`.replace(/\/\.\//g, "/") - - // Try with extensions - for (const ext of [".ts", ".js", ".tsx", ".jsx"]) { - if (Bash(`test -f ${resolved}${ext}`).exitCode === 0) { - return `${resolved}${ext}` - } - } - } - return null -} - -function extractFunctionBody(content, startIndex) { - let braceCount = 0 - let inFunction = false - let body = "" - - for (let i = startIndex; i < content.length; i++) { - const char = content[i] - - if (char === "{") { - braceCount++ - inFunction = true - } else if (char === "}") { - braceCount-- - } - - if (inFunction) { - body += char - } - - if (inFunction && braceCount === 0) { - break - } - } - - return body -} -``` - -### 4. Requirements Verification - -```javascript -function verifyRequirements(plan, files, gitDiff) { - const issues = { - critical: [], - high: [], - medium: [], - low: [] - } - - // Extract acceptance criteria from plan - const acceptanceCriteria = extractAcceptanceCriteria(plan) - - // Verify each criterion - for (const criterion of acceptanceCriteria) { - const verified = verifyCriterion(criterion, files, gitDiff) - - if (!verified.met) { - issues.high.push({ - file: "plan", - line: criterion.lineNumber, - type: "unmet-acceptance-criteria", - message: `Acceptance criterion not met: ${criterion.text}`, - code: criterion.text - }) - } else if (verified.partial) { - issues.medium.push({ - file: "plan", - line: criterion.lineNumber, - type: "partial-acceptance-criteria", - message: `Acceptance criterion partially met: ${criterion.text}`, - code: criterion.text - }) - } - } - - // Check for missing error handling - const errorHandlingRequired = plan.match(/error handling|exception|validation/i) - if (errorHandlingRequired) { - const hasErrorHandling = files.some(file => - 
file.content.match(/try\s*\{|catch\s*\(|throw\s+new|\.catch\(/) - ) - - if (!hasErrorHandling) { - issues.high.push({ - file: "implementation", - line: 1, - type: "missing-error-handling", - message: "Plan requires error handling but none found in implementation", - code: "No try-catch or error handling detected" - }) - } - } - - // Check for missing tests - const testingRequired = plan.match(/test|testing|coverage/i) - if (testingRequired) { - const hasTests = files.some(file => - file.path.match(/\.(test|spec)\.[jt]sx?$/) - ) - - if (!hasTests) { - issues.medium.push({ - file: "implementation", - line: 1, - type: "missing-tests", - message: "Plan requires tests but no test files found", - code: "No test files detected" - }) - } - } - - return issues -} - -function extractAcceptanceCriteria(plan) { - const criteria = [] - const lines = plan.split("\n") - - let inAcceptanceSection = false - lines.forEach((line, idx) => { - if (line.match(/acceptance criteria/i)) { - inAcceptanceSection = true - } else if (line.match(/^##/)) { - inAcceptanceSection = false - } else if (inAcceptanceSection && line.match(/^[-*]\s+/)) { - criteria.push({ - text: line.replace(/^[-*]\s+/, "").trim(), - lineNumber: idx + 1 - }) - } - }) - - return criteria -} - -function verifyCriterion(criterion, files, gitDiff) { - // Extract keywords from criterion - const keywords = criterion.text.toLowerCase().match(/\b\w{4,}\b/g) || [] - - // Check if keywords appear in implementation - let matchCount = 0 - for (const file of files) { - const content = file.content.toLowerCase() - for (const keyword of keywords) { - if (content.includes(keyword)) { - matchCount++ - } - } - } - - const matchRatio = matchCount / keywords.length - - return { - met: matchRatio >= 0.7, - partial: matchRatio >= 0.4 && matchRatio < 0.7, - matchRatio: matchRatio - } -} -``` - -## Verdict Determination - -```javascript -function determineVerdict(qualityIssues, securityIssues, architectureIssues, requirementIssues) { - const 
allIssues = { - critical: [ - ...qualityIssues.critical, - ...securityIssues.critical, - ...architectureIssues.critical, - ...requirementIssues.critical - ], - high: [ - ...qualityIssues.high, - ...securityIssues.high, - ...architectureIssues.high, - ...requirementIssues.high - ], - medium: [ - ...qualityIssues.medium, - ...securityIssues.medium, - ...architectureIssues.medium, - ...requirementIssues.medium - ], - low: [ - ...qualityIssues.low, - ...securityIssues.low, - ...architectureIssues.low, - ...requirementIssues.low - ] - } - - // BLOCK: Any critical issues - if (allIssues.critical.length > 0) { - return { - verdict: "BLOCK", - reason: `${allIssues.critical.length} critical issue(s) must be fixed`, - blocking_issues: allIssues.critical - } - } - - // CONDITIONAL: High or medium issues - if (allIssues.high.length > 0 || allIssues.medium.length > 0) { - return { - verdict: "CONDITIONAL", - reason: `${allIssues.high.length} high and ${allIssues.medium.length} medium issue(s) should be addressed`, - blocking_issues: [] - } - } - - // APPROVE: Only low issues or none - return { - verdict: "APPROVE", - reason: allIssues.low.length > 0 - ? 
`${allIssues.low.length} low-priority issue(s) noted` - : "No issues found", - blocking_issues: [] - } -} -``` - -## Report Formatting - -```javascript -function formatCodeReviewReport(report) { - const { verdict, dimensions, recommendations, blocking_issues } = report - - let markdown = `# Code Review Report\n\n` - markdown += `**Verdict**: ${verdict}\n\n` - - if (blocking_issues.length > 0) { - markdown += `## Blocking Issues\n\n` - blocking_issues.forEach(issue => { - markdown += `- **${issue.type}** (${issue.file}:${issue.line})\n` - markdown += ` ${issue.message}\n` - markdown += ` \`\`\`\n ${issue.code}\n \`\`\`\n\n` - }) - } - - markdown += `## Review Dimensions\n\n` - - markdown += `### Quality Issues\n` - markdown += formatIssuesByDimension(dimensions.quality) - - markdown += `### Security Issues\n` - markdown += formatIssuesByDimension(dimensions.security) - - markdown += `### Architecture Issues\n` - markdown += formatIssuesByDimension(dimensions.architecture) - - markdown += `### Requirements Issues\n` - markdown += formatIssuesByDimension(dimensions.requirements) - - if (recommendations.length > 0) { - markdown += `## Recommendations\n\n` - recommendations.forEach((rec, i) => { - markdown += `${i + 1}. 
${rec}\n` - }) - } - - return markdown -} - -function formatIssuesByDimension(issues) { - let markdown = "" - - const severities = ["critical", "high", "medium", "low"] - severities.forEach(severity => { - if (issues[severity].length > 0) { - markdown += `\n**${severity.toUpperCase()}** (${issues[severity].length})\n\n` - issues[severity].forEach(issue => { - markdown += `- ${issue.message} (${issue.file}:${issue.line})\n` - markdown += ` \`${issue.code}\`\n\n` - }) - } - }) - - return markdown || "No issues found.\n\n" -} -``` diff --git a/.claude/skills/team-lifecycle-v2/roles/reviewer/commands/spec-quality.md b/.claude/skills/team-lifecycle-v2/roles/reviewer/commands/spec-quality.md deleted file mode 100644 index 63aa6f74..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/reviewer/commands/spec-quality.md +++ /dev/null @@ -1,845 +0,0 @@ -# Spec Quality Command - -## Purpose -5-dimension spec quality check with readiness report generation and quality gate determination. - -## Quality Dimensions - -### 1. 
Completeness (Weight: 25%) - -```javascript -function scoreCompleteness(specDocs) { - const requiredSections = { - "product-brief": [ - "Vision Statement", - "Problem Statement", - "Target Audience", - "Success Metrics", - "Constraints" - ], - "prd": [ - "Goals", - "Requirements", - "User Stories", - "Acceptance Criteria", - "Non-Functional Requirements" - ], - "architecture": [ - "System Overview", - "Component Design", - "Data Models", - "API Specifications", - "Technology Stack" - ], - "user-stories": [ - "Story List", - "Acceptance Criteria", - "Priority", - "Estimation" - ], - "implementation-plan": [ - "Task Breakdown", - "Dependencies", - "Timeline", - "Resource Allocation" - ], - "test-strategy": [ - "Test Scope", - "Test Cases", - "Coverage Goals", - "Test Environment" - ] - } - - let totalScore = 0 - let totalWeight = 0 - const details = [] - - for (const doc of specDocs) { - const phase = doc.phase - const expectedSections = requiredSections[phase] || [] - - if (expectedSections.length === 0) continue - - let presentCount = 0 - let substantialCount = 0 - - for (const section of expectedSections) { - const sectionRegex = new RegExp(`##\\s+${section}`, "i") - const sectionMatch = doc.content.match(sectionRegex) - - if (sectionMatch) { - presentCount++ - - // Check if section has substantial content (not just header) - const sectionIndex = doc.content.indexOf(sectionMatch[0]) - const nextSectionIndex = doc.content.indexOf("\n##", sectionIndex + 1) - const sectionContent = nextSectionIndex > -1 - ? 
doc.content.substring(sectionIndex, nextSectionIndex) - : doc.content.substring(sectionIndex) - - // Substantial = more than 100 chars excluding header - const contentWithoutHeader = sectionContent.replace(sectionRegex, "").trim() - if (contentWithoutHeader.length > 100) { - substantialCount++ - } - } - } - - const presentRatio = presentCount / expectedSections.length - const substantialRatio = substantialCount / expectedSections.length - - // Score: 50% for presence, 50% for substance - const docScore = (presentRatio * 50) + (substantialRatio * 50) - - totalScore += docScore - totalWeight += 100 - - details.push({ - phase: phase, - score: docScore, - present: presentCount, - substantial: substantialCount, - expected: expectedSections.length, - missing: expectedSections.filter(s => !doc.content.match(new RegExp(`##\\s+${s}`, "i"))) - }) - } - - const overallScore = totalWeight > 0 ? (totalScore / totalWeight) * 100 : 0 - - return { - score: overallScore, - weight: 25, - weighted_score: overallScore * 0.25, - details: details - } -} -``` - -### 2. Consistency (Weight: 20%) - -```javascript -function scoreConsistency(specDocs) { - const issues = [] - - // 1. Terminology consistency - const terminologyMap = new Map() - - for (const doc of specDocs) { - // Extract key terms (capitalized phrases, technical terms) - const terms = doc.content.match(/\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b/g) || [] - - terms.forEach(term => { - const normalized = term.toLowerCase() - if (!terminologyMap.has(normalized)) { - terminologyMap.set(normalized, new Set()) - } - terminologyMap.get(normalized).add(term) - }) - } - - // Find inconsistent terminology (same concept, different casing/spelling) - terminologyMap.forEach((variants, normalized) => { - if (variants.size > 1) { - issues.push({ - type: "terminology", - severity: "medium", - message: `Inconsistent terminology: ${[...variants].join(", ")}`, - suggestion: `Standardize to one variant` - }) - } - }) - - // 2. 
Format consistency - const headerStyles = new Map() - for (const doc of specDocs) { - const headers = doc.content.match(/^#{1,6}\s+.+$/gm) || [] - headers.forEach(header => { - const level = header.match(/^#+/)[0].length - const style = header.includes("**") ? "bold" : "plain" - const key = `level-${level}` - - if (!headerStyles.has(key)) { - headerStyles.set(key, new Set()) - } - headerStyles.get(key).add(style) - }) - } - - headerStyles.forEach((styles, level) => { - if (styles.size > 1) { - issues.push({ - type: "format", - severity: "low", - message: `Inconsistent header style at ${level}: ${[...styles].join(", ")}`, - suggestion: "Use consistent header formatting" - }) - } - }) - - // 3. Reference consistency - const references = new Map() - for (const doc of specDocs) { - // Extract references to other documents/sections - const refs = doc.content.match(/\[.*?\]\(.*?\)/g) || [] - refs.forEach(ref => { - const linkMatch = ref.match(/\((.*?)\)/) - if (linkMatch) { - const link = linkMatch[1] - if (!references.has(link)) { - references.set(link, []) - } - references.get(link).push(doc.phase) - } - }) - } - - // Check for broken references - references.forEach((sources, link) => { - if (link.startsWith("./") || link.startsWith("../")) { - // Check if file exists - const exists = Bash(`test -f ${link}`).exitCode === 0 - if (!exists) { - issues.push({ - type: "reference", - severity: "high", - message: `Broken reference: ${link} (referenced in ${sources.join(", ")})`, - suggestion: "Fix or remove broken reference" - }) - } - } - }) - - // 4. 
Naming convention consistency - const namingPatterns = { - camelCase: /\b[a-z]+(?:[A-Z][a-z]+)+\b/g, - PascalCase: /\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\b/g, - snake_case: /\b[a-z]+(?:_[a-z]+)+\b/g, - kebab_case: /\b[a-z]+(?:-[a-z]+)+\b/g - } - - const namingCounts = {} - for (const doc of specDocs) { - Object.entries(namingPatterns).forEach(([pattern, regex]) => { - const matches = doc.content.match(regex) || [] - namingCounts[pattern] = (namingCounts[pattern] || 0) + matches.length - }) - } - - const dominantPattern = Object.entries(namingCounts) - .sort((a, b) => b[1] - a[1])[0]?.[0] - - Object.entries(namingCounts).forEach(([pattern, count]) => { - if (pattern !== dominantPattern && count > 10) { - issues.push({ - type: "naming", - severity: "low", - message: `Mixed naming conventions: ${pattern} (${count} occurrences) vs ${dominantPattern}`, - suggestion: `Standardize to ${dominantPattern}` - }) - } - }) - - // Calculate score based on issues - const severityWeights = { high: 10, medium: 5, low: 2 } - const totalPenalty = issues.reduce((sum, issue) => sum + severityWeights[issue.severity], 0) - const maxPenalty = 100 // Arbitrary max for normalization - - const score = Math.max(0, 100 - (totalPenalty / maxPenalty) * 100) - - return { - score: score, - weight: 20, - weighted_score: score * 0.20, - issues: issues, - details: { - terminology_issues: issues.filter(i => i.type === "terminology").length, - format_issues: issues.filter(i => i.type === "format").length, - reference_issues: issues.filter(i => i.type === "reference").length, - naming_issues: issues.filter(i => i.type === "naming").length - } - } -} -``` - -### 3. 
Traceability (Weight: 25%) - -```javascript -function scoreTraceability(specDocs) { - const chains = [] - - // Extract traceability elements - const goals = extractElements(specDocs, "product-brief", /^[-*]\s+Goal:\s*(.+)$/gm) - const requirements = extractElements(specDocs, "prd", /^[-*]\s+(?:REQ-\d+|Requirement):\s*(.+)$/gm) - const components = extractElements(specDocs, "architecture", /^[-*]\s+(?:Component|Module):\s*(.+)$/gm) - const stories = extractElements(specDocs, "user-stories", /^[-*]\s+(?:US-\d+|Story):\s*(.+)$/gm) - - // Build traceability chains: Goals → Requirements → Components → Stories - for (const goal of goals) { - const chain = { - goal: goal.text, - requirements: [], - components: [], - stories: [], - complete: false - } - - // Find requirements that reference this goal - const goalKeywords = extractKeywords(goal.text) - for (const req of requirements) { - if (hasKeywordOverlap(req.text, goalKeywords, 0.3)) { - chain.requirements.push(req.text) - - // Find components that implement this requirement - const reqKeywords = extractKeywords(req.text) - for (const comp of components) { - if (hasKeywordOverlap(comp.text, reqKeywords, 0.3)) { - chain.components.push(comp.text) - } - } - - // Find stories that implement this requirement - for (const story of stories) { - if (hasKeywordOverlap(story.text, reqKeywords, 0.3)) { - chain.stories.push(story.text) - } - } - } - } - - // Check if chain is complete - chain.complete = chain.requirements.length > 0 && - chain.components.length > 0 && - chain.stories.length > 0 - - chains.push(chain) - } - - // Calculate score - const completeChains = chains.filter(c => c.complete).length - const totalChains = chains.length - const score = totalChains > 0 ? 
(completeChains / totalChains) * 100 : 0 - - // Identify weak links - const weakLinks = [] - chains.forEach((chain, idx) => { - if (!chain.complete) { - if (chain.requirements.length === 0) { - weakLinks.push(`Goal ${idx + 1} has no linked requirements`) - } - if (chain.components.length === 0) { - weakLinks.push(`Goal ${idx + 1} has no linked components`) - } - if (chain.stories.length === 0) { - weakLinks.push(`Goal ${idx + 1} has no linked stories`) - } - } - }) - - return { - score: score, - weight: 25, - weighted_score: score * 0.25, - details: { - total_chains: totalChains, - complete_chains: completeChains, - weak_links: weakLinks - }, - chains: chains - } -} - -function extractElements(specDocs, phase, regex) { - const elements = [] - const doc = specDocs.find(d => d.phase === phase) - - if (doc) { - let match - while ((match = regex.exec(doc.content)) !== null) { - elements.push({ - text: match[1].trim(), - phase: phase - }) - } - } - - return elements -} - -function extractKeywords(text) { - // Extract meaningful words (4+ chars, not common words) - const commonWords = new Set(["that", "this", "with", "from", "have", "will", "should", "must", "can"]) - const words = text.toLowerCase().match(/\b\w{4,}\b/g) || [] - return words.filter(w => !commonWords.has(w)) -} - -function hasKeywordOverlap(text, keywords, threshold) { - const textLower = text.toLowerCase() - const matchCount = keywords.filter(kw => textLower.includes(kw)).length - return matchCount / keywords.length >= threshold -} -``` - -### 4. Depth (Weight: 20%) - -```javascript -function scoreDepth(specDocs) { - const dimensions = [] - - // 1. 
Acceptance Criteria Testability - const acDoc = specDocs.find(d => d.phase === "prd" || d.phase === "user-stories") - if (acDoc) { - const acMatches = acDoc.content.match(/Acceptance Criteria:[\s\S]*?(?=\n##|\n\n[-*]|$)/gi) || [] - let testableCount = 0 - let totalCount = 0 - - acMatches.forEach(section => { - const criteria = section.match(/^[-*]\s+(.+)$/gm) || [] - totalCount += criteria.length - - criteria.forEach(criterion => { - // Testable if contains measurable verbs or specific conditions - const testablePatterns = [ - /\b(should|must|will)\s+(display|show|return|validate|check|verify|calculate|send|receive)\b/i, - /\b(when|if|given)\b.*\b(then|should|must)\b/i, - /\b\d+\b/, // Contains numbers (measurable) - /\b(success|error|fail|pass)\b/i - ] - - const isTestable = testablePatterns.some(pattern => pattern.test(criterion)) - if (isTestable) testableCount++ - }) - }) - - const acScore = totalCount > 0 ? (testableCount / totalCount) * 100 : 0 - dimensions.push({ - name: "Acceptance Criteria Testability", - score: acScore, - testable: testableCount, - total: totalCount - }) - } - - // 2. ADR Justification - const archDoc = specDocs.find(d => d.phase === "architecture") - if (archDoc) { - const adrMatches = archDoc.content.match(/##\s+(?:ADR|Decision)[\s\S]*?(?=\n##|$)/gi) || [] - let justifiedCount = 0 - let totalCount = adrMatches.length - - adrMatches.forEach(adr => { - // Justified if contains rationale, alternatives, or consequences - const hasJustification = adr.match(/\b(rationale|reason|because|alternative|consequence|trade-?off)\b/i) - if (hasJustification) justifiedCount++ - }) - - const adrScore = totalCount > 0 ? (justifiedCount / totalCount) * 100 : 100 // Default 100 if no ADRs - dimensions.push({ - name: "ADR Justification", - score: adrScore, - justified: justifiedCount, - total: totalCount - }) - } - - // 3. 
User Stories Estimability - const storiesDoc = specDocs.find(d => d.phase === "user-stories") - if (storiesDoc) { - const storyMatches = storiesDoc.content.match(/^[-*]\s+(?:US-\d+|Story)[\s\S]*?(?=\n[-*]|$)/gim) || [] - let estimableCount = 0 - let totalCount = storyMatches.length - - storyMatches.forEach(story => { - // Estimable if has clear scope, AC, and no ambiguity - const hasScope = story.match(/\b(as a|I want|so that)\b/i) - const hasAC = story.match(/acceptance criteria/i) - const hasEstimate = story.match(/\b(points?|hours?|days?|estimate)\b/i) - - if ((hasScope && hasAC) || hasEstimate) estimableCount++ - }) - - const storiesScore = totalCount > 0 ? (estimableCount / totalCount) * 100 : 0 - dimensions.push({ - name: "User Stories Estimability", - score: storiesScore, - estimable: estimableCount, - total: totalCount - }) - } - - // 4. Technical Detail Sufficiency - const techDocs = specDocs.filter(d => d.phase === "architecture" || d.phase === "implementation-plan") - let detailScore = 0 - - if (techDocs.length > 0) { - const detailIndicators = [ - /```[\s\S]*?```/, // Code blocks - /\b(API|endpoint|schema|model|interface|class|function)\b/i, - /\b(GET|POST|PUT|DELETE|PATCH)\b/, // HTTP methods - /\b(database|table|collection|index)\b/i, - /\b(authentication|authorization|security)\b/i - ] - - let indicatorCount = 0 - techDocs.forEach(doc => { - detailIndicators.forEach(pattern => { - if (pattern.test(doc.content)) indicatorCount++ - }) - }) - - detailScore = Math.min(100, (indicatorCount / (detailIndicators.length * techDocs.length)) * 100) - dimensions.push({ - name: "Technical Detail Sufficiency", - score: detailScore, - indicators_found: indicatorCount, - indicators_expected: detailIndicators.length * techDocs.length - }) - } - - // Calculate overall depth score - const overallScore = dimensions.reduce((sum, d) => sum + d.score, 0) / dimensions.length - - return { - score: overallScore, - weight: 20, - weighted_score: overallScore * 0.20, - 
dimensions: dimensions - } -} -``` - -### 5. Requirement Coverage (Weight: 10%) - -```javascript -function scoreRequirementCoverage(specDocs, originalRequirements) { - // Extract original requirements from task description or initial brief - const originalReqs = originalRequirements || extractOriginalRequirements(specDocs) - - if (originalReqs.length === 0) { - return { - score: 100, // No requirements to cover - weight: 10, - weighted_score: 10, - details: { - total: 0, - covered: 0, - uncovered: [] - } - } - } - - // Extract all requirements from spec documents - const specReqs = [] - for (const doc of specDocs) { - const reqMatches = doc.content.match(/^[-*]\s+(?:REQ-\d+|Requirement|Feature):\s*(.+)$/gm) || [] - reqMatches.forEach(match => { - specReqs.push(match.replace(/^[-*]\s+(?:REQ-\d+|Requirement|Feature):\s*/, "").trim()) - }) - } - - // Map original requirements to spec requirements - const coverage = [] - for (const origReq of originalReqs) { - const keywords = extractKeywords(origReq) - const covered = specReqs.some(specReq => hasKeywordOverlap(specReq, keywords, 0.4)) - - coverage.push({ - requirement: origReq, - covered: covered - }) - } - - const coveredCount = coverage.filter(c => c.covered).length - const score = (coveredCount / originalReqs.length) * 100 - - return { - score: score, - weight: 10, - weighted_score: score * 0.10, - details: { - total: originalReqs.length, - covered: coveredCount, - uncovered: coverage.filter(c => !c.covered).map(c => c.requirement) - } - } -} - -function extractOriginalRequirements(specDocs) { - // Try to find original requirements in product brief - const briefDoc = specDocs.find(d => d.phase === "product-brief") - if (!briefDoc) return [] - - const reqSection = briefDoc.content.match(/##\s+(?:Requirements|Objectives)[\s\S]*?(?=\n##|$)/i) - if (!reqSection) return [] - - const reqs = reqSection[0].match(/^[-*]\s+(.+)$/gm) || [] - return reqs.map(r => r.replace(/^[-*]\s+/, "").trim()) -} -``` - -## Quality Gate 
Determination - -```javascript -function determineQualityGate(overallScore, coverageScore) { - // PASS: Score ≥80% AND coverage ≥70% - if (overallScore >= 80 && coverageScore >= 70) { - return { - gate: "PASS", - message: "Specification meets quality standards and is ready for implementation", - action: "Proceed to implementation phase" - } - } - - // FAIL: Score <60% OR coverage <50% - if (overallScore < 60 || coverageScore < 50) { - return { - gate: "FAIL", - message: "Specification requires major revisions before implementation", - action: "Address critical gaps and resubmit for review" - } - } - - // REVIEW: Between PASS and FAIL - return { - gate: "REVIEW", - message: "Specification needs improvements but may proceed with caution", - action: "Address recommendations and consider re-review" - } -} -``` - -## Readiness Report Generation - -```javascript -function formatReadinessReport(report, specDocs) { - const { overall_score, quality_gate, dimensions, phase_gates } = report - - let markdown = `# Specification Readiness Report\n\n` - markdown += `**Generated**: ${new Date().toISOString()}\n\n` - markdown += `**Overall Score**: ${overall_score.toFixed(1)}%\n\n` - markdown += `**Quality Gate**: ${quality_gate.gate} - ${quality_gate.message}\n\n` - markdown += `**Recommended Action**: ${quality_gate.action}\n\n` - - markdown += `---\n\n` - - markdown += `## Dimension Scores\n\n` - markdown += `| Dimension | Score | Weight | Weighted Score |\n` - markdown += `|-----------|-------|--------|----------------|\n` - - Object.entries(dimensions).forEach(([name, data]) => { - markdown += `| ${name} | ${data.score.toFixed(1)}% | ${data.weight}% | ${data.weighted_score.toFixed(1)}% |\n` - }) - - markdown += `\n---\n\n` - - // Completeness Details - markdown += `## Completeness Analysis\n\n` - dimensions.completeness.details.forEach(detail => { - markdown += `### ${detail.phase}\n` - markdown += `- Score: ${detail.score.toFixed(1)}%\n` - markdown += `- Sections Present: 
${detail.present}/${detail.expected}\n` - markdown += `- Substantial Content: ${detail.substantial}/${detail.expected}\n` - if (detail.missing.length > 0) { - markdown += `- Missing: ${detail.missing.join(", ")}\n` - } - markdown += `\n` - }) - - // Consistency Details - markdown += `## Consistency Analysis\n\n` - if (dimensions.consistency.issues.length > 0) { - markdown += `**Issues Found**: ${dimensions.consistency.issues.length}\n\n` - dimensions.consistency.issues.forEach(issue => { - markdown += `- **${issue.severity.toUpperCase()}**: ${issue.message}\n` - markdown += ` *Suggestion*: ${issue.suggestion}\n\n` - }) - } else { - markdown += `No consistency issues found.\n\n` - } - - // Traceability Details - markdown += `## Traceability Analysis\n\n` - markdown += `- Complete Chains: ${dimensions.traceability.details.complete_chains}/${dimensions.traceability.details.total_chains}\n\n` - if (dimensions.traceability.details.weak_links.length > 0) { - markdown += `**Weak Links**:\n` - dimensions.traceability.details.weak_links.forEach(link => { - markdown += `- ${link}\n` - }) - markdown += `\n` - } - - // Depth Details - markdown += `## Depth Analysis\n\n` - dimensions.depth.dimensions.forEach(dim => { - markdown += `### ${dim.name}\n` - markdown += `- Score: ${dim.score.toFixed(1)}%\n` - if (dim.testable !== undefined) { - markdown += `- Testable: ${dim.testable}/${dim.total}\n` - } - if (dim.justified !== undefined) { - markdown += `- Justified: ${dim.justified}/${dim.total}\n` - } - if (dim.estimable !== undefined) { - markdown += `- Estimable: ${dim.estimable}/${dim.total}\n` - } - markdown += `\n` - }) - - // Coverage Details - markdown += `## Requirement Coverage\n\n` - markdown += `- Covered: ${dimensions.coverage.details.covered}/${dimensions.coverage.details.total}\n` - if (dimensions.coverage.details.uncovered.length > 0) { - markdown += `\n**Uncovered Requirements**:\n` - dimensions.coverage.details.uncovered.forEach(req => { - markdown += `- ${req}\n` 
- }) - } - markdown += `\n` - - // Phase Gates - if (phase_gates) { - markdown += `---\n\n` - markdown += `## Phase-Level Quality Gates\n\n` - Object.entries(phase_gates).forEach(([phase, gate]) => { - markdown += `### ${phase}\n` - markdown += `- Gate: ${gate.status}\n` - markdown += `- Score: ${gate.score.toFixed(1)}%\n` - if (gate.issues.length > 0) { - markdown += `- Issues: ${gate.issues.join(", ")}\n` - } - markdown += `\n` - }) - } - - return markdown -} -``` - -## Spec Summary Generation - -```javascript -function formatSpecSummary(specDocs, report) { - let markdown = `# Specification Summary\n\n` - - markdown += `**Overall Quality Score**: ${report.overall_score.toFixed(1)}%\n` - markdown += `**Quality Gate**: ${report.quality_gate.gate}\n\n` - - markdown += `---\n\n` - - // Document Overview - markdown += `## Documents Reviewed\n\n` - specDocs.forEach(doc => { - markdown += `### ${doc.phase}\n` - markdown += `- Path: ${doc.path}\n` - markdown += `- Size: ${doc.content.length} characters\n` - - // Extract key sections - const sections = doc.content.match(/^##\s+(.+)$/gm) || [] - if (sections.length > 0) { - markdown += `- Sections: ${sections.map(s => s.replace(/^##\s+/, "")).join(", ")}\n` - } - markdown += `\n` - }) - - markdown += `---\n\n` - - // Key Findings - markdown += `## Key Findings\n\n` - - // Strengths - const strengths = [] - Object.entries(report.dimensions).forEach(([name, data]) => { - if (data.score >= 80) { - strengths.push(`${name}: ${data.score.toFixed(1)}%`) - } - }) - - if (strengths.length > 0) { - markdown += `### Strengths\n` - strengths.forEach(s => markdown += `- ${s}\n`) - markdown += `\n` - } - - // Areas for Improvement - const improvements = [] - Object.entries(report.dimensions).forEach(([name, data]) => { - if (data.score < 70) { - improvements.push(`${name}: ${data.score.toFixed(1)}%`) - } - }) - - if (improvements.length > 0) { - markdown += `### Areas for Improvement\n` - improvements.forEach(i => markdown += `- 
${i}\n`) - markdown += `\n` - } - - // Recommendations - if (report.recommendations && report.recommendations.length > 0) { - markdown += `### Recommendations\n` - report.recommendations.forEach((rec, i) => { - markdown += `${i + 1}. ${rec}\n` - }) - markdown += `\n` - } - - return markdown -} -``` - -## Phase-Level Quality Gates - -```javascript -function calculatePhaseGates(specDocs) { - const gates = {} - - for (const doc of specDocs) { - const phase = doc.phase - const issues = [] - let score = 100 - - // Check minimum content threshold - if (doc.content.length < 500) { - issues.push("Insufficient content") - score -= 30 - } - - // Check for required sections (phase-specific) - const requiredSections = getRequiredSections(phase) - const missingSections = requiredSections.filter(section => - !doc.content.match(new RegExp(`##\\s+${section}`, "i")) - ) - - if (missingSections.length > 0) { - issues.push(`Missing sections: ${missingSections.join(", ")}`) - score -= missingSections.length * 15 - } - - // Determine gate status - let status = "PASS" - if (score < 60) status = "FAIL" - else if (score < 80) status = "REVIEW" - - gates[phase] = { - status: status, - score: Math.max(0, score), - issues: issues - } - } - - return gates -} - -function getRequiredSections(phase) { - const sectionMap = { - "product-brief": ["Vision", "Problem", "Target Audience"], - "prd": ["Goals", "Requirements", "User Stories"], - "architecture": ["Overview", "Components", "Data Models"], - "user-stories": ["Stories", "Acceptance Criteria"], - "implementation-plan": ["Tasks", "Dependencies"], - "test-strategy": ["Test Cases", "Coverage"] - } - - return sectionMap[phase] || [] -} -``` diff --git a/.claude/skills/team-lifecycle-v2/roles/reviewer/role.md b/.claude/skills/team-lifecycle-v2/roles/reviewer/role.md deleted file mode 100644 index 61f9702a..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/reviewer/role.md +++ /dev/null @@ -1,429 +0,0 @@ -# Reviewer Role - -## 1. 
Role Identity - -- **Name**: reviewer -- **Task Prefix**: REVIEW-* + QUALITY-* -- **Output Tag**: `[reviewer]` -- **Responsibility**: Discover Task → Branch by Prefix → Review/Score → Report - -## 2. Role Boundaries - -### MUST -- Only process REVIEW-* and QUALITY-* tasks -- Communicate only with coordinator -- Generate readiness-report.md for QUALITY tasks -- Tag all outputs with `[reviewer]` - -### MUST NOT -- Create tasks -- Contact other workers directly -- Modify source code -- Skip quality dimensions -- Approve without verification - -## 3. Message Types - -| Type | Direction | Purpose | Format | -|------|-----------|---------|--------| -| `task_request` | FROM coordinator | Receive REVIEW-*/QUALITY-* task assignment | `{ type: "task_request", task_id, description, review_mode }` | -| `task_complete` | TO coordinator | Report review success | `{ type: "task_complete", task_id, status: "success", verdict, score, issues }` | -| `task_failed` | TO coordinator | Report review failure | `{ type: "task_failed", task_id, error }` | - -## 4. Message Bus - -**Primary**: Use `team_msg` for all coordinator communication with `[reviewer]` tag: -```javascript -// Code review completion -team_msg({ - to: "coordinator", - type: "task_complete", - task_id: "REVIEW-001", - status: "success", - verdict: "APPROVE", - issues: { critical: 0, high: 2, medium: 5, low: 3 }, - recommendations: ["Fix console.log statements", "Add error handling"] -}, "[reviewer]") - -// Spec quality completion -team_msg({ - to: "coordinator", - type: "task_complete", - task_id: "QUALITY-001", - status: "success", - overall_score: 85.5, - quality_gate: "PASS", - dimensions: { - completeness: 90, - consistency: 85, - traceability: 80, - depth: 88, - coverage: 82 - } -}, "[reviewer]") -``` - -**CLI Fallback**: When message bus unavailable, write to `.workflow/.team/messages/reviewer-{timestamp}.json` - -## 5. 
Toolbox - -### Available Commands -- `commands/code-review.md` - 4-dimension code review (quality, security, architecture, requirements) -- `commands/spec-quality.md` - 5-dimension spec quality check (completeness, consistency, traceability, depth, coverage) - -### CLI Capabilities -- None (uses Grep-based analysis) - -## 6. Execution (5-Phase) - Dual-Prefix - -### Phase 1: Task Discovery - -**Dual Prefix Filter**: -```javascript -const tasks = Glob(".workflow/.team/tasks/{REVIEW,QUALITY}-*.json") - .filter(task => task.status === "pending" && task.assigned_to === "reviewer") - -// Determine review mode -const reviewMode = task.task_id.startsWith("REVIEW-") ? "code" : "spec" -``` - -### Phase 2: Context Loading (Branch by Mode) - -**Code Review Context (REVIEW-*)**: -```javascript -if (reviewMode === "code") { - // Load plan - const planPath = task.metadata?.plan_path || ".workflow/plan.md" - const plan = Read(planPath) - - // Get git diff - const implTaskId = task.metadata?.impl_task_id - const gitDiff = Bash("git diff HEAD").stdout - - // Load modified files - const modifiedFiles = Bash("git diff --name-only HEAD").stdout.split("\n").filter(Boolean) - const fileContents = modifiedFiles.map(f => ({ - path: f, - content: Read(f) - })) - - // Load test results if available - const testTaskId = task.metadata?.test_task_id - const testResults = testTaskId ? 
Read(`.workflow/.team/tasks/${testTaskId}.json`) : null -} -``` - -**Spec Quality Context (QUALITY-*)**: -```javascript -if (reviewMode === "spec") { - // Load session folder - const sessionFolder = task.metadata?.session_folder || ".workflow/.sessions/latest" - - // Load quality gates - const qualityGates = task.metadata?.quality_gates || { - pass_threshold: 80, - fail_threshold: 60, - coverage_threshold: 70 - } - - // Load all spec documents - const specDocs = Glob(`${sessionFolder}/**/*.md`).map(path => ({ - path: path, - content: Read(path), - phase: extractPhase(path) - })) -} -``` - -### Phase 3: Review Execution (Delegate by Mode) - -**Code Review**: -```javascript -if (reviewMode === "code") { - const codeReviewCommand = Read("commands/code-review.md") - // Command handles: - // - reviewQuality (ts-ignore, any, console.log, empty catch) - // - reviewSecurity (eval/exec, secrets, SQL injection, XSS) - // - reviewArchitecture (parent imports, large files) - // - verifyRequirements (plan acceptance criteria vs implementation) - // - Verdict determination (BLOCK/CONDITIONAL/APPROVE) -} -``` - -**Spec Quality**: -```javascript -if (reviewMode === "spec") { - const specQualityCommand = Read("commands/spec-quality.md") - // Command handles: - // - scoreCompleteness (section content checks) - // - scoreConsistency (terminology, format, references) - // - scoreTraceability (goals → reqs → arch → stories chain) - // - scoreDepth (AC testable, ADRs justified, stories estimable) - // - scoreRequirementCoverage (original requirements → document mapping) - // - Quality gate determination (PASS ≥80%, FAIL <60%, else REVIEW) - // - readiness-report.md generation - // - spec-summary.md generation -} -``` - -### Phase 4: Report Generation (Branch by Mode) - -**Code Review Report**: -```javascript -if (reviewMode === "code") { - const report = { - verdict: verdict, // BLOCK | CONDITIONAL | APPROVE - dimensions: { - quality: qualityIssues, - security: securityIssues, - 
architecture: architectureIssues, - requirements: requirementIssues - }, - recommendations: recommendations, - blocking_issues: blockingIssues - } - - // Write review report - Write(`.workflow/.team/reviews/${task.task_id}-report.md`, formatCodeReviewReport(report)) -} -``` - -**Spec Quality Report**: -```javascript -if (reviewMode === "spec") { - const report = { - overall_score: overallScore, - quality_gate: qualityGate, // PASS | REVIEW | FAIL - dimensions: { - completeness: completenessScore, - consistency: consistencyScore, - traceability: traceabilityScore, - depth: depthScore, - coverage: coverageScore - }, - phase_gates: phaseGates, - recommendations: recommendations - } - - // Write readiness report - Write(`${sessionFolder}/readiness-report.md`, formatReadinessReport(report)) - - // Write spec summary - Write(`${sessionFolder}/spec-summary.md`, formatSpecSummary(specDocs, report)) -} -``` - -### Phase 5: Report to Coordinator (Branch by Mode) - -**Code Review Completion**: -```javascript -if (reviewMode === "code") { - team_msg({ - to: "coordinator", - type: "task_complete", - task_id: task.task_id, - status: "success", - verdict: verdict, - issues: { - critical: blockingIssues.length, - high: highIssues.length, - medium: mediumIssues.length, - low: lowIssues.length - }, - recommendations: recommendations, - report_path: `.workflow/.team/reviews/${task.task_id}-report.md`, - timestamp: new Date().toISOString() - }, "[reviewer]") -} -``` - -**Spec Quality Completion**: -```javascript -if (reviewMode === "spec") { - team_msg({ - to: "coordinator", - type: "task_complete", - task_id: task.task_id, - status: "success", - overall_score: overallScore, - quality_gate: qualityGate, - dimensions: { - completeness: completenessScore, - consistency: consistencyScore, - traceability: traceabilityScore, - depth: depthScore, - coverage: coverageScore - }, - report_path: `${sessionFolder}/readiness-report.md`, - summary_path: `${sessionFolder}/spec-summary.md`, - 
timestamp: new Date().toISOString() - }, "[reviewer]") -} -``` - -## 7. Code Review Dimensions - -### Quality Dimension - -**Anti-patterns**: -- `@ts-ignore` / `@ts-expect-error` without justification -- `any` type usage -- `console.log` in production code -- Empty catch blocks -- Magic numbers -- Duplicate code - -**Severity**: -- Critical: Empty catch, any in public APIs -- High: @ts-ignore without comment, console.log -- Medium: Magic numbers, duplicate code -- Low: Minor style issues - -### Security Dimension - -**Vulnerabilities**: -- `eval()` / `exec()` usage -- `innerHTML` / `dangerouslySetInnerHTML` -- Hardcoded secrets (API keys, passwords) -- SQL injection vectors -- XSS vulnerabilities -- Insecure dependencies - -**Severity**: -- Critical: Hardcoded secrets, SQL injection -- High: eval/exec, innerHTML -- Medium: Insecure dependencies -- Low: Missing input validation - -### Architecture Dimension - -**Issues**: -- Parent directory imports (`../../../`) -- Large files (>500 lines) -- Circular dependencies -- Missing abstractions -- Tight coupling - -**Severity**: -- Critical: Circular dependencies -- High: Excessive parent imports (>2 levels) -- Medium: Large files, tight coupling -- Low: Minor structure issues - -### Requirements Dimension - -**Verification**: -- Acceptance criteria coverage -- Feature completeness -- Edge case handling -- Error handling - -**Severity**: -- Critical: Missing core functionality -- High: Incomplete acceptance criteria -- Medium: Missing edge cases -- Low: Minor feature gaps - -## 8. 
Spec Quality Dimensions - -### Completeness (Weight: 25%) - -**Checks**: -- All required sections present -- Section content depth (not just headers) -- Cross-phase coverage -- Artifact completeness - -**Scoring**: -- 100%: All sections with substantial content -- 75%: All sections present, some thin -- 50%: Missing 1-2 sections -- 25%: Missing 3+ sections -- 0%: Critical sections missing - -### Consistency (Weight: 20%) - -**Checks**: -- Terminology consistency -- Format consistency -- Reference consistency -- Naming conventions - -**Scoring**: -- 100%: Fully consistent -- 75%: Minor inconsistencies (1-2) -- 50%: Moderate inconsistencies (3-5) -- 25%: Major inconsistencies (6+) -- 0%: Chaotic inconsistency - -### Traceability (Weight: 25%) - -**Checks**: -- Goals → Requirements chain -- Requirements → Architecture chain -- Architecture → User Stories chain -- Bidirectional references - -**Scoring**: -- 100%: Full traceability chain -- 75%: 1 weak link -- 50%: 2 weak links -- 25%: 3+ weak links -- 0%: No traceability - -### Depth (Weight: 20%) - -**Checks**: -- Acceptance criteria testable -- ADRs justified -- User stories estimable -- Technical details sufficient - -**Scoring**: -- 100%: All items detailed -- 75%: 1-2 shallow items -- 50%: 3-5 shallow items -- 25%: 6+ shallow items -- 0%: All items shallow - -### Coverage (Weight: 10%) - -**Checks**: -- Original requirements mapped -- All features documented -- All constraints addressed -- All stakeholders considered - -**Scoring**: -- 100%: Full coverage (100%) -- 75%: High coverage (80-99%) -- 50%: Moderate coverage (60-79%) -- 25%: Low coverage (40-59%) -- 0%: Minimal coverage (<40%) - -## 9. 
Verdict/Gate Determination - -### Code Review Verdicts - -| Verdict | Criteria | Action | -|---------|----------|--------| -| **BLOCK** | Critical issues present | Must fix before merge | -| **CONDITIONAL** | High/medium issues only | Fix recommended, merge allowed | -| **APPROVE** | Low issues or none | Ready to merge | - -### Spec Quality Gates - -| Gate | Criteria | Action | -|------|----------|--------| -| **PASS** | Score ≥80% AND coverage ≥70% | Ready for implementation | -| **REVIEW** | Score 60-79% OR coverage 50-69% | Revisions recommended | -| **FAIL** | Score <60% OR coverage <50% | Major revisions required | - -## 10. Error Handling - -| Error Type | Recovery Strategy | Escalation | -|------------|-------------------|------------| -| Missing context | Request from coordinator | Immediate escalation | -| Invalid review mode | Abort with error | Report to coordinator | -| Analysis failure | Retry with verbose logging | Report after 2 failures | -| Report generation failure | Use fallback template | Report with partial results | diff --git a/.claude/skills/team-lifecycle-v2/roles/tester/commands/validate.md b/.claude/skills/team-lifecycle-v2/roles/tester/commands/validate.md deleted file mode 100644 index 05ec30c4..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/tester/commands/validate.md +++ /dev/null @@ -1,538 +0,0 @@ -# Validate Command - -## Purpose -Test-fix cycle with strategy engine for automated test failure resolution. 
- -## Configuration - -```javascript -const MAX_ITERATIONS = 10 -const PASS_RATE_TARGET = 95 // percentage -``` - -## Main Iteration Loop - -```javascript -function runTestFixCycle(task, framework, affectedTests, modifiedFiles) { - let iteration = 0 - let bestPassRate = 0 - let bestResults = null - - while (iteration < MAX_ITERATIONS) { - iteration++ - - // Phase 1: Run Tests - const testCommand = buildTestCommand(framework, affectedTests, iteration === 1) - const testOutput = Bash(testCommand, { timeout: 120000 }) - const results = parseTestResults(testOutput.stdout + testOutput.stderr, framework) - - const passRate = results.total > 0 ? (results.passed / results.total * 100) : 0 - - // Track best result - if (passRate > bestPassRate) { - bestPassRate = passRate - bestResults = results - } - - // Progress update for long cycles - if (iteration > 5) { - team_msg({ - to: "coordinator", - type: "progress_update", - task_id: task.task_id, - iteration: iteration, - pass_rate: passRate.toFixed(1), - tests_passed: results.passed, - tests_failed: results.failed, - message: `Test-fix cycle iteration ${iteration}/${MAX_ITERATIONS}` - }, "[tester]") - } - - // Phase 2: Check Success - if (passRate >= PASS_RATE_TARGET) { - // Quality gate: Run full suite if only affected tests passed - if (affectedTests.length > 0 && iteration === 1) { - team_msg({ - to: "coordinator", - type: "progress_update", - task_id: task.task_id, - message: "Affected tests passed, running full suite..." - }, "[tester]") - - const fullSuiteCommand = buildTestCommand(framework, [], false) - const fullOutput = Bash(fullSuiteCommand, { timeout: 300000 }) - const fullResults = parseTestResults(fullOutput.stdout + fullOutput.stderr, framework) - const fullPassRate = fullResults.total > 0 ? 
(fullResults.passed / fullResults.total * 100) : 0 - - if (fullPassRate >= PASS_RATE_TARGET) { - return { - success: true, - results: fullResults, - iterations: iteration, - full_suite_run: true - } - } else { - // Full suite failed, continue fixing - results = fullResults - passRate = fullPassRate - } - } else { - return { - success: true, - results: results, - iterations: iteration, - full_suite_run: affectedTests.length === 0 - } - } - } - - // Phase 3: Analyze Failures - if (results.failures.length === 0) { - break // No failures to fix - } - - const classified = classifyFailures(results.failures) - - // Phase 4: Select Strategy - const strategy = selectStrategy(iteration, passRate, results.failures) - - team_msg({ - to: "coordinator", - type: "progress_update", - task_id: task.task_id, - iteration: iteration, - strategy: strategy, - failures: { - critical: classified.critical.length, - high: classified.high.length, - medium: classified.medium.length, - low: classified.low.length - } - }, "[tester]") - - // Phase 5: Apply Fixes - const fixResult = applyFixes(strategy, results.failures, framework, modifiedFiles) - - if (!fixResult.success) { - // Fix application failed, try next iteration with different strategy - continue - } - } - - // Max iterations reached - return { - success: false, - results: bestResults, - iterations: MAX_ITERATIONS, - best_pass_rate: bestPassRate, - error: "Max iterations reached without achieving target pass rate" - } -} -``` - -## Strategy Selection - -```javascript -function selectStrategy(iteration, passRate, failures) { - const classified = classifyFailures(failures) - - // Conservative: Early iterations or high pass rate - if (iteration <= 3 || passRate >= 80) { - return "conservative" - } - - // Surgical: Specific failure patterns - if (classified.critical.length > 0 && classified.critical.length < 5) { - return "surgical" - } - - // Aggressive: Low pass rate or many iterations - if (passRate < 50 || iteration > 7) { - return 
"aggressive" - } - - return "conservative" -} -``` - -## Fix Application - -### Conservative Strategy - -```javascript -function applyConservativeFixes(failures, framework, modifiedFiles) { - const classified = classifyFailures(failures) - - // Fix only the first critical failure - if (classified.critical.length > 0) { - const failure = classified.critical[0] - return fixSingleFailure(failure, framework, modifiedFiles) - } - - // If no critical, fix first high priority - if (classified.high.length > 0) { - const failure = classified.high[0] - return fixSingleFailure(failure, framework, modifiedFiles) - } - - return { success: false, error: "No fixable failures found" } -} -``` - -### Surgical Strategy - -```javascript -function applySurgicalFixes(failures, framework, modifiedFiles) { - // Identify common pattern - const pattern = identifyCommonPattern(failures) - - if (!pattern) { - return { success: false, error: "No common pattern identified" } - } - - // Apply pattern-based fix across all occurrences - const fixes = [] - - for (const failure of failures) { - if (matchesPattern(failure, pattern)) { - const fix = generatePatternFix(failure, pattern, framework) - fixes.push(fix) - } - } - - // Apply all fixes in batch - for (const fix of fixes) { - applyFix(fix, modifiedFiles) - } - - return { - success: true, - fixes_applied: fixes.length, - pattern: pattern - } -} - -function identifyCommonPattern(failures) { - // Group failures by error type - const errorTypes = {} - - for (const failure of failures) { - const errorType = extractErrorType(failure.error) - if (!errorTypes[errorType]) { - errorTypes[errorType] = [] - } - errorTypes[errorType].push(failure) - } - - // Find most common error type - let maxCount = 0 - let commonPattern = null - - for (const [errorType, instances] of Object.entries(errorTypes)) { - if (instances.length > maxCount) { - maxCount = instances.length - commonPattern = { - type: errorType, - instances: instances, - count: instances.length - 
} - } - } - - return maxCount >= 3 ? commonPattern : null -} - -function extractErrorType(error) { - const errorLower = error.toLowerCase() - - if (errorLower.includes("cannot find module")) return "missing_import" - if (errorLower.includes("is not defined")) return "undefined_variable" - if (errorLower.includes("expected") && errorLower.includes("received")) return "assertion_mismatch" - if (errorLower.includes("timeout")) return "timeout" - if (errorLower.includes("syntaxerror")) return "syntax_error" - - return "unknown" -} -``` - -### Aggressive Strategy - -```javascript -function applyAggressiveFixes(failures, framework, modifiedFiles) { - const classified = classifyFailures(failures) - const fixes = [] - - // Fix all critical failures - for (const failure of classified.critical) { - const fix = generateFix(failure, framework, modifiedFiles) - if (fix) { - fixes.push(fix) - } - } - - // Fix all high priority failures - for (const failure of classified.high) { - const fix = generateFix(failure, framework, modifiedFiles) - if (fix) { - fixes.push(fix) - } - } - - // Apply all fixes - for (const fix of fixes) { - applyFix(fix, modifiedFiles) - } - - return { - success: fixes.length > 0, - fixes_applied: fixes.length - } -} -``` - -### Fix Generation - -```javascript -function generateFix(failure, framework, modifiedFiles) { - const errorType = extractErrorType(failure.error) - - switch (errorType) { - case "missing_import": - return generateImportFix(failure, modifiedFiles) - - case "undefined_variable": - return generateVariableFix(failure, modifiedFiles) - - case "assertion_mismatch": - return generateAssertionFix(failure, framework) - - case "timeout": - return generateTimeoutFix(failure, framework) - - case "syntax_error": - return generateSyntaxFix(failure, modifiedFiles) - - default: - return null - } -} - -function generateImportFix(failure, modifiedFiles) { - // Extract module name from error - const moduleMatch = failure.error.match(/Cannot find module 
['"](.+?)['"]/) - if (!moduleMatch) return null - - const moduleName = moduleMatch[1] - - // Find test file - const testFile = extractTestFile(failure.test) - if (!testFile) return null - - // Check if module exists in modified files - const sourceFile = modifiedFiles.find(f => - f.includes(moduleName) || f.endsWith(`${moduleName}.ts`) || f.endsWith(`${moduleName}.js`) - ) - - if (!sourceFile) return null - - // Generate import statement - const relativePath = calculateRelativePath(testFile, sourceFile) - const importStatement = `import { } from '${relativePath}'` - - return { - file: testFile, - type: "add_import", - content: importStatement, - line: 1 // Add at top of file - } -} - -function generateAssertionFix(failure, framework) { - // Extract expected vs received values - const expectedMatch = failure.error.match(/Expected:\s*(.+?)(?:\n|$)/) - const receivedMatch = failure.error.match(/Received:\s*(.+?)(?:\n|$)/) - - if (!expectedMatch || !receivedMatch) return null - - const expected = expectedMatch[1].trim() - const received = receivedMatch[1].trim() - - // Find test file and line - const testFile = extractTestFile(failure.test) - const testLine = extractTestLine(failure.error) - - if (!testFile || !testLine) return null - - return { - file: testFile, - type: "update_assertion", - line: testLine, - old_value: expected, - new_value: received, - note: "Auto-updated assertion based on actual behavior" - } -} -``` - -## Test Result Parsing - -```javascript -function parseTestResults(output, framework) { - const results = { - total: 0, - passed: 0, - failed: 0, - skipped: 0, - failures: [] - } - - if (framework === "jest" || framework === "vitest") { - // Parse summary line - const summaryMatch = output.match(/Tests:\s+(?:(\d+)\s+failed,\s+)?(?:(\d+)\s+passed,\s+)?(\d+)\s+total/) - if (summaryMatch) { - results.failed = summaryMatch[1] ? parseInt(summaryMatch[1]) : 0 - results.passed = summaryMatch[2] ? 
parseInt(summaryMatch[2]) : 0 - results.total = parseInt(summaryMatch[3]) - } - - // Alternative format - if (results.total === 0) { - const altMatch = output.match(/(\d+)\s+passed.*?(\d+)\s+total/) - if (altMatch) { - results.passed = parseInt(altMatch[1]) - results.total = parseInt(altMatch[2]) - results.failed = results.total - results.passed - } - } - - // Extract failure details - const failureRegex = /●\s+(.*?)\n\n([\s\S]*?)(?=\n\n●|\n\nTest Suites:|\n\n$)/g - let match - while ((match = failureRegex.exec(output)) !== null) { - results.failures.push({ - test: match[1].trim(), - error: match[2].trim() - }) - } - - } else if (framework === "pytest") { - // Parse pytest summary - const summaryMatch = output.match(/=+\s+(?:(\d+)\s+failed,?\s+)?(?:(\d+)\s+passed)?/) - if (summaryMatch) { - results.failed = summaryMatch[1] ? parseInt(summaryMatch[1]) : 0 - results.passed = summaryMatch[2] ? parseInt(summaryMatch[2]) : 0 - results.total = results.failed + results.passed - } - - // Extract failure details - const failureRegex = /FAILED\s+(.*?)\s+-\s+([\s\S]*?)(?=\n_+|FAILED|=+\s+\d+)/g - let match - while ((match = failureRegex.exec(output)) !== null) { - results.failures.push({ - test: match[1].trim(), - error: match[2].trim() - }) - } - } - - return results -} -``` - -## Test Command Building - -```javascript -function buildTestCommand(framework, affectedTests, isFirstRun) { - const testFiles = affectedTests.length > 0 ? affectedTests.join(" ") : "" - - switch (framework) { - case "vitest": - return testFiles - ? `vitest run ${testFiles} --reporter=verbose` - : `vitest run --reporter=verbose` - - case "jest": - return testFiles - ? `jest ${testFiles} --no-coverage --verbose` - : `jest --no-coverage --verbose` - - case "mocha": - return testFiles - ? `mocha ${testFiles} --reporter spec` - : `mocha --reporter spec` - - case "pytest": - return testFiles - ? 
`pytest ${testFiles} -v --tb=short` - : `pytest -v --tb=short` - - default: - throw new Error(`Unsupported test framework: ${framework}`) - } -} -``` - -## Utility Functions - -### Extract Test File - -```javascript -function extractTestFile(testName) { - // Extract file path from test name - // Format: "path/to/file.test.ts > describe block > test name" - const fileMatch = testName.match(/^(.*?\.(?:test|spec)\.[jt]sx?)/) - return fileMatch ? fileMatch[1] : null -} -``` - -### Extract Test Line - -```javascript -function extractTestLine(error) { - // Extract line number from error stack - const lineMatch = error.match(/:(\d+):\d+/) - return lineMatch ? parseInt(lineMatch[1]) : null -} -``` - -### Calculate Relative Path - -```javascript -function calculateRelativePath(fromFile, toFile) { - const fromParts = fromFile.split("/") - const toParts = toFile.split("/") - - // Remove filename - fromParts.pop() - - // Find common base - let commonLength = 0 - while (commonLength < fromParts.length && - commonLength < toParts.length && - fromParts[commonLength] === toParts[commonLength]) { - commonLength++ - } - - // Build relative path - const upLevels = fromParts.length - commonLength - const downPath = toParts.slice(commonLength) - - const relativeParts = [] - for (let i = 0; i < upLevels; i++) { - relativeParts.push("..") - } - relativeParts.push(...downPath) - - let path = relativeParts.join("/") - - // Remove file extension - path = path.replace(/\.[jt]sx?$/, "") - - // Ensure starts with ./ - if (!path.startsWith(".")) { - path = "./" + path - } - - return path -} -``` diff --git a/.claude/skills/team-lifecycle-v2/roles/tester/role.md b/.claude/skills/team-lifecycle-v2/roles/tester/role.md deleted file mode 100644 index 83283ea0..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/tester/role.md +++ /dev/null @@ -1,385 +0,0 @@ -# Tester Role - -## 1. 
Role Identity - -- **Name**: tester -- **Task Prefix**: TEST-* -- **Output Tag**: `[tester]` -- **Responsibility**: Detect Framework → Run Tests → Fix Cycle → Report Results - -## 2. Role Boundaries - -### MUST -- Only process TEST-* tasks -- Communicate only with coordinator -- Use detected test framework -- Run affected tests before full suite -- Tag all outputs with `[tester]` - -### MUST NOT -- Create tasks -- Contact other workers directly -- Modify production code beyond test fixes -- Skip framework detection -- Run full suite without affected tests first - -## 3. Message Types - -| Type | Direction | Purpose | Format | -|------|-----------|---------|--------| -| `task_request` | FROM coordinator | Receive TEST-* task assignment | `{ type: "task_request", task_id, description, impl_task_id }` | -| `task_complete` | TO coordinator | Report test success | `{ type: "task_complete", task_id, status: "success", pass_rate, tests_run, iterations }` | -| `task_failed` | TO coordinator | Report test failure | `{ type: "task_failed", task_id, error, failures, pass_rate }` | -| `progress_update` | TO coordinator | Report fix cycle progress | `{ type: "progress_update", task_id, iteration, pass_rate, strategy }` | - -## 4. Message Bus - -**Primary**: Use `team_msg` for all coordinator communication with `[tester]` tag: -```javascript -team_msg({ - to: "coordinator", - type: "task_complete", - task_id: "TEST-001", - status: "success", - pass_rate: 98.5, - tests_run: 45, - iterations: 3, - framework: "vitest" -}, "[tester]") -``` - -**CLI Fallback**: When message bus unavailable, write to `.workflow/.team/messages/tester-{timestamp}.json` - -## 5. Toolbox - -### Available Commands -- `commands/validate.md` - Test-fix cycle with strategy engine - -### CLI Capabilities -- None (uses project's test framework directly via Bash) - -## 6. 
Execution (5-Phase) - - ### Phase 1: Task Discovery - - **Task Loading**: -```javascript -const tasks = Glob(".workflow/.team/tasks/TEST-*.json") - .map(path => JSON.parse(Read(path))) // Glob returns file paths; load each task file - .filter(task => task.status === "pending" && task.assigned_to === "tester") -``` - -**Implementation Task Linking**: -```javascript -const implTaskId = task.metadata?.impl_task_id -const implTask = implTaskId ? Read(`.workflow/.team/tasks/${implTaskId}.json`) : null -const modifiedFiles = implTask?.metadata?.files_modified || [] -``` - -### Phase 2: Test Framework Detection - -**Framework Detection**: -```javascript -function detectTestFramework() { - // Check package.json for JS test frameworks - // (pytest is a Python package and never appears here; it is detected via scripts or pytest.ini below) - const packageJson = Read("package.json") - const pkg = JSON.parse(packageJson) - - // Priority 1: Check dependencies - if (pkg.devDependencies?.vitest || pkg.dependencies?.vitest) { - return "vitest" - } - if (pkg.devDependencies?.jest || pkg.dependencies?.jest) { - return "jest" - } - if (pkg.devDependencies?.mocha || pkg.dependencies?.mocha) { - return "mocha" - } - - // Priority 2: Check test scripts - const testScript = pkg.scripts?.test || "" - if (testScript.includes("vitest")) return "vitest" - if (testScript.includes("jest")) return "jest" - if (testScript.includes("mocha")) return "mocha" - if (testScript.includes("pytest")) return "pytest" - - // Priority 3: Check config files - const configFiles = Glob("{vitest,jest,mocha}.config.{js,ts,json}") - if (configFiles.some(f => f.includes("vitest"))) return "vitest" - if (configFiles.some(f => f.includes("jest"))) return "jest" - if (configFiles.some(f => f.includes("mocha"))) return "mocha" - - if (Bash("test -f pytest.ini").exitCode === 0) return "pytest" - - return "unknown" -} -``` - -**Affected Test Discovery**: -```javascript -function findAffectedTests(modifiedFiles) { - const testFiles = [] - - for (const file of modifiedFiles) { - const baseName = 
file.replace(/\.(ts|js|tsx|jsx|py)$/, "") - const dir = file.substring(0, file.lastIndexOf("/")) - - const testVariants = [ - // Same directory variants - `${baseName}.test.ts`, - `${baseName}.test.js`, - `${baseName}.spec.ts`, - `${baseName}.spec.js`, - `${baseName}_test.py`, - `test_${baseName.split("/").pop()}.py`, - - // Test directory variants - `${file.replace(/^src\//, "tests/")}`, - `${file.replace(/^src\//, "__tests__/")}`, - `${file.replace(/^src\//, "test/")}`, - `${dir}/__tests__/${file.split("/").pop().replace(/\.(ts|js|tsx|jsx)$/, ".test.ts")}`, - - // Python variants - `${file.replace(/^src\//, "tests/").replace(/\.py$/, "_test.py")}`, - `${file.replace(/^src\//, "tests/test_")}` - ] - - for (const variant of testVariants) { - if (Bash(`test -f ${variant}`).exitCode === 0) { - testFiles.push(variant) - } - } - } - - return [...new Set(testFiles)] // Deduplicate -} -``` - -### Phase 3: Test Execution & Fix Cycle - -**Delegate to Command**: -```javascript -const validateCommand = Read("commands/validate.md") -// Command handles: -// - MAX_ITERATIONS=10, PASS_RATE_TARGET=95 -// - Main iteration loop with strategy selection -// - Quality gate check (affected tests → full suite) -// - applyFixes by strategy (conservative/aggressive/surgical) -// - Progress updates for long cycles (iteration > 5) -``` - -### Phase 4: Result Analysis - -**Test Result Parsing**: -```javascript -function parseTestResults(output, framework) { - const results = { - total: 0, - passed: 0, - failed: 0, - skipped: 0, - failures: [] - } - - if (framework === "jest" || framework === "vitest") { - // Parse Jest/Vitest output - const totalMatch = output.match(/Tests:\s+(\d+)\s+total/) - const passedMatch = output.match(/(\d+)\s+passed/) - const failedMatch = output.match(/(\d+)\s+failed/) - const skippedMatch = output.match(/(\d+)\s+skipped/) - - results.total = totalMatch ? parseInt(totalMatch[1]) : 0 - results.passed = passedMatch ? 
parseInt(passedMatch[1]) : 0 - results.failed = failedMatch ? parseInt(failedMatch[1]) : 0 - results.skipped = skippedMatch ? parseInt(skippedMatch[1]) : 0 - - // Extract failure details - const failureRegex = /●\s+(.*?)\n\n\s+(.*?)(?=\n\n●|\n\nTest Suites:)/gs - let match - while ((match = failureRegex.exec(output)) !== null) { - results.failures.push({ - test: match[1].trim(), - error: match[2].trim() - }) - } - } else if (framework === "pytest") { - // Parse pytest output - const summaryMatch = output.match(/=+\s+(\d+)\s+failed,\s+(\d+)\s+passed/) - if (summaryMatch) { - results.failed = parseInt(summaryMatch[1]) - results.passed = parseInt(summaryMatch[2]) - results.total = results.failed + results.passed - } - - // Extract failure details - const failureRegex = /FAILED\s+(.*?)\s+-\s+(.*?)(?=\n_+|\nFAILED|$)/gs - let match - while ((match = failureRegex.exec(output)) !== null) { - results.failures.push({ - test: match[1].trim(), - error: match[2].trim() - }) - } - } - - return results -} -``` - -**Failure Classification**: -```javascript -function classifyFailures(failures) { - const classified = { - critical: [], // Syntax errors, missing imports - high: [], // Assertion failures, logic errors - medium: [], // Timeout, flaky tests - low: [] // Warnings, deprecations - } - - for (const failure of failures) { - // error is lowercased, so all matched substrings below must be lowercase - const error = failure.error.toLowerCase() - - if (error.includes("syntaxerror") || - error.includes("cannot find module") || - error.includes("is not defined")) { - classified.critical.push(failure) - } else if (error.includes("expected") || - error.includes("assertion") || - error.includes("tobe") || - error.includes("toequal")) { - classified.high.push(failure) - } else if (error.includes("timeout") || - error.includes("async")) { - classified.medium.push(failure) - } else { - classified.low.push(failure) - } - } - - return classified -} -``` - -### Phase 5: Report to Coordinator - -**Success Report**: -```javascript -team_msg({ - to: "coordinator", - 
type: "task_complete", - task_id: task.task_id, - status: "success", - pass_rate: (results.passed / results.total * 100).toFixed(1), - tests_run: results.total, - tests_passed: results.passed, - tests_failed: results.failed, - iterations: iterationCount, - framework: framework, - affected_tests: affectedTests.length, - full_suite_run: fullSuiteRun, - timestamp: new Date().toISOString() -}, "[tester]") -``` - -**Failure Report**: -```javascript -const classified = classifyFailures(results.failures) - -team_msg({ - to: "coordinator", - type: "task_failed", - task_id: task.task_id, - error: "Test failures exceeded threshold", - pass_rate: (results.passed / results.total * 100).toFixed(1), - tests_run: results.total, - failures: { - critical: classified.critical.length, - high: classified.high.length, - medium: classified.medium.length, - low: classified.low.length - }, - failure_details: classified, - iterations: iterationCount, - framework: framework, - timestamp: new Date().toISOString() -}, "[tester]") -``` - -## 7. 
Strategy Engine - -### Strategy Selection - -```javascript -function selectStrategy(iteration, passRate, failures) { - const classified = classifyFailures(failures) - - // Conservative: Early iterations or high pass rate - if (iteration <= 3 || passRate >= 80) { - return "conservative" - } - - // Surgical: Specific failure patterns - if (classified.critical.length > 0 && classified.critical.length < 5) { - return "surgical" - } - - // Aggressive: Low pass rate or many iterations - if (passRate < 50 || iteration > 7) { - return "aggressive" - } - - return "conservative" -} -``` - -### Fix Application - -```javascript -function applyFixes(strategy, failures, framework) { - if (strategy === "conservative") { - // Fix only critical failures one at a time - const critical = classifyFailures(failures).critical - if (critical.length > 0) { - return fixFailure(critical[0], framework) - } - } else if (strategy === "surgical") { - // Fix specific pattern across all occurrences - const pattern = identifyCommonPattern(failures) - return fixPattern(pattern, framework) - } else if (strategy === "aggressive") { - // Fix all failures in batch - return fixAllFailures(failures, framework) - } -} -``` - -## 8. Error Handling - -| Error Type | Recovery Strategy | Escalation | -|------------|-------------------|------------| -| Framework not detected | Prompt user for framework | Immediate escalation | -| No tests found | Report to coordinator | Manual intervention | -| Test command fails | Retry with verbose output | Report after 2 failures | -| Infinite fix loop | Abort after MAX_ITERATIONS | Report iteration history | -| Pass rate below target | Report best attempt | Include failure classification | - -## 9. 
Configuration - -| Parameter | Default | Description | -|-----------|---------|-------------| -| MAX_ITERATIONS | 10 | Maximum fix-test cycles | -| PASS_RATE_TARGET | 95 | Target pass rate (%) | -| AFFECTED_TESTS_FIRST | true | Run affected tests before full suite | -| PARALLEL_TESTS | true | Enable parallel test execution | -| TIMEOUT_PER_TEST | 30000 | Timeout per test (ms) | - -## 10. Test Framework Commands - -| Framework | Affected Tests Command | Full Suite Command | -|-----------|------------------------|-------------------| -| vitest | `vitest run ${files.join(" ")}` | `vitest run` | -| jest | `jest ${files.join(" ")} --no-coverage` | `jest --no-coverage` | -| mocha | `mocha ${files.join(" ")}` | `mocha` | -| pytest | `pytest ${files.join(" ")} -v` | `pytest -v` | diff --git a/.claude/skills/team-lifecycle-v2/roles/writer/commands/generate-doc.md b/.claude/skills/team-lifecycle-v2/roles/writer/commands/generate-doc.md deleted file mode 100644 index 67c89708..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/writer/commands/generate-doc.md +++ /dev/null @@ -1,698 +0,0 @@ -# Command: Generate Document - -Multi-CLI document generation for 4 document types: Product Brief, Requirements/PRD, Architecture, Epics & Stories. - -## Pre-Steps (All Document Types) - -```javascript -// 1. Load document standards -const docStandards = Read('../../specs/document-standards.md') - -// 2. Load appropriate template -const templateMap = { - 'product-brief': '../../templates/product-brief.md', - 'requirements': '../../templates/requirements-prd.md', - 'architecture': '../../templates/architecture-doc.md', - 'epics': '../../templates/epics-template.md' -} -const template = Read(templateMap[docType]) - -// 3. Build shared context -const seedAnalysis = specConfig?.seed_analysis || - (priorDocs.discoveryContext ? 
JSON.parse(priorDocs.discoveryContext).seed_analysis : {}) - -const sharedContext = ` -SEED: ${specConfig?.topic || ''} -PROBLEM: ${seedAnalysis.problem_statement || ''} -TARGET USERS: ${(seedAnalysis.target_users || []).join(', ')} -DOMAIN: ${seedAnalysis.domain || ''} -CONSTRAINTS: ${(seedAnalysis.constraints || []).join(', ')} -FOCUS AREAS: ${(specConfig?.focus_areas || []).join(', ')} -${priorDocs.discoveryContext ? ` -CODEBASE CONTEXT: -- Existing patterns: ${JSON.parse(priorDocs.discoveryContext).existing_patterns?.slice(0,5).join(', ') || 'none'} -- Tech stack: ${JSON.stringify(JSON.parse(priorDocs.discoveryContext).tech_stack || {})} -` : ''}` - -// 4. Route to specific document type -``` - -## DRAFT-001: Product Brief - -3-way parallel CLI analysis (product/technical/user perspectives), then synthesize. - -```javascript -if (docType === 'product-brief') { - // === Parallel CLI Analysis === - - // Product Perspective (Gemini) - Bash({ - command: `ccw cli -p "PURPOSE: Product analysis for specification - identify market fit, user value, and success criteria. -Success: Clear vision, measurable goals, competitive positioning. - -${sharedContext} - -TASK: -- Define product vision (1-3 sentences, aspirational) -- Analyze market/competitive landscape -- Define 3-5 measurable success metrics -- Identify scope boundaries (in-scope vs out-of-scope) -- Assess user value proposition -- List assumptions that need validation - -MODE: analysis -EXPECTED: Structured product analysis with: vision, goals with metrics, scope, competitive positioning, assumptions -CONSTRAINTS: Focus on 'what' and 'why', not 'how' -" --tool gemini --mode analysis`, - run_in_background: true - }) - - // Technical Perspective (Codex) - Bash({ - command: `ccw cli -p "PURPOSE: Technical feasibility analysis for specification - assess implementation viability and constraints. -Success: Clear technical constraints, integration complexity, technology recommendations. 
- -${sharedContext} - -TASK: -- Assess technical feasibility of the core concept -- Identify technical constraints and blockers -- Evaluate integration complexity with existing systems -- Recommend technology approach (high-level) -- Identify technical risks and dependencies -- Estimate complexity: simple/moderate/complex - -MODE: analysis -EXPECTED: Technical analysis with: feasibility assessment, constraints, integration complexity, tech recommendations, risks -CONSTRAINTS: Focus on feasibility and constraints, not detailed architecture -" --tool codex --mode analysis`, - run_in_background: true - }) - - // User Perspective (Claude) - Bash({ - command: `ccw cli -p "PURPOSE: User experience analysis for specification - understand user journeys, pain points, and UX considerations. -Success: Clear user personas, journey maps, UX requirements. - -${sharedContext} - -TASK: -- Elaborate user personas with goals and frustrations -- Map primary user journey (happy path) -- Identify key pain points in current experience -- Define UX success criteria -- List accessibility and usability considerations -- Suggest interaction patterns - -MODE: analysis -EXPECTED: User analysis with: personas, journey map, pain points, UX criteria, interaction recommendations -CONSTRAINTS: Focus on user needs and experience, not implementation -" --tool claude --mode analysis`, - run_in_background: true - }) - - // STOP: Wait for all 3 CLI results - - // === Synthesize Three Perspectives === - const synthesis = { - convergent_themes: [], // Themes consistent across all three perspectives - conflicts: [], // Conflicting viewpoints - product_insights: [], // Unique product perspective insights - technical_insights: [], // Unique technical perspective insights - user_insights: [] // Unique user perspective insights - } - - // Parse CLI outputs and identify: - // - Common themes mentioned by 2+ perspectives - // - Conflicts (e.g., product wants feature X, technical says infeasible) - // - Unique 
insights from each perspective - - // === Integrate Discussion Feedback === - if (discussionFeedback) { - // Extract consensus and adjustments from discuss-001-scope.md - // Merge discussion conclusions into synthesis - } - - // === Generate Document from Template === - const frontmatter = `--- -session_id: ${specConfig?.session_id || 'unknown'} -phase: 2 -document_type: product-brief -status: draft -generated_at: ${new Date().toISOString()} -version: 1 -dependencies: - - spec-config.json - - discovery-context.json ----` - - // Fill template sections: - // - Vision (from product perspective + synthesis) - // - Problem Statement (from seed analysis + user perspective) - // - Target Users (from user perspective + personas) - // - Goals (from product perspective + metrics) - // - Scope (from product perspective + technical constraints) - // - Success Criteria (from all three perspectives) - // - Assumptions (from product + technical perspectives) - - const filledContent = fillTemplate(template, { - vision: productPerspective.vision, - problem: seedAnalysis.problem_statement, - users: userPerspective.personas, - goals: productPerspective.goals, - scope: synthesis.scope_boundaries, - success_criteria: synthesis.convergent_themes, - assumptions: [...productPerspective.assumptions, ...technicalPerspective.assumptions] - }) - - Write(`${sessionFolder}/spec/product-brief.md`, `${frontmatter}\n\n${filledContent}`) - - return { - outputPath: 'spec/product-brief.md', - documentSummary: `Product Brief generated with ${synthesis.convergent_themes.length} convergent themes, ${synthesis.conflicts.length} conflicts resolved` - } -} -``` - -## DRAFT-002: Requirements/PRD - -Gemini CLI expansion to generate REQ-NNN and NFR-{type}-NNN files. - -```javascript -if (docType === 'requirements') { - // === Requirements Expansion CLI === - Bash({ - command: `ccw cli -p "PURPOSE: Generate detailed functional and non-functional requirements from product brief. 
-Success: Complete PRD with testable acceptance criteria for every requirement. - -PRODUCT BRIEF CONTEXT: -${priorDocs.productBrief?.slice(0, 3000) || ''} - -${sharedContext} - -TASK: -- For each goal in the product brief, generate 3-7 functional requirements -- Each requirement must have: - - Unique ID: REQ-NNN (zero-padded) - - Clear title - - Detailed description - - User story: As a [persona], I want [action] so that [benefit] - - 2-4 specific, testable acceptance criteria -- Generate non-functional requirements: - - Performance (response times, throughput) - - Security (authentication, authorization, data protection) - - Scalability (user load, data volume) - - Usability (accessibility, learnability) -- Assign MoSCoW priority: Must/Should/Could/Won't -- Output structure per requirement: ID, title, description, user_story, acceptance_criteria[], priority, traces - -MODE: analysis -EXPECTED: Structured requirements with: ID, title, description, user story, acceptance criteria, priority, traceability to goals -CONSTRAINTS: Every requirement must be specific enough to estimate and test. No vague requirements. 
-" --tool gemini --mode analysis`, - run_in_background: true - }) - - // Wait for CLI result - - // === Integrate Discussion Feedback === - if (discussionFeedback) { - // Extract requirement adjustments from discuss-002-brief.md - // Merge new/modified/deleted requirements - } - - // === Generate requirements/ Directory === - Bash(`mkdir -p "${sessionFolder}/spec/requirements"`) - - const timestamp = new Date().toISOString() - - // Parse CLI output → funcReqs[], nfReqs[] - const funcReqs = parseFunctionalRequirements(cliOutput) - const nfReqs = parseNonFunctionalRequirements(cliOutput) - - // Write individual REQ-*.md files (one per functional requirement) - funcReqs.forEach(req => { - const reqFrontmatter = `--- -id: REQ-${req.id} -title: "${req.title}" -priority: ${req.priority} -status: draft -traces: - - product-brief.md ----` - const reqContent = `${reqFrontmatter} - -# REQ-${req.id}: ${req.title} - -## Description -${req.description} - -## User Story -${req.user_story} - -## Acceptance Criteria -${req.acceptance_criteria.map((ac, i) => `${i+1}. 
${ac}`).join('\n')} -` - Write(`${sessionFolder}/spec/requirements/REQ-${req.id}-${req.slug}.md`, reqContent) - }) - - // Write individual NFR-*.md files - nfReqs.forEach(nfr => { - const nfrFrontmatter = `--- -id: NFR-${nfr.type}-${nfr.id} -type: ${nfr.type} -title: "${nfr.title}" -status: draft -traces: - - product-brief.md ----` - const nfrContent = `${nfrFrontmatter} - -# NFR-${nfr.type}-${nfr.id}: ${nfr.title} - -## Requirement -${nfr.requirement} - -## Metric & Target -${nfr.metric} — Target: ${nfr.target} -` - Write(`${sessionFolder}/spec/requirements/NFR-${nfr.type}-${nfr.id}-${nfr.slug}.md`, nfrContent) - }) - - // Write _index.md (summary + links) - const indexFrontmatter = `--- -session_id: ${specConfig?.session_id || 'unknown'} -phase: 3 -document_type: requirements-index -status: draft -generated_at: ${timestamp} -version: 1 -dependencies: - - product-brief.md ----` - const indexContent = `${indexFrontmatter} - -# Requirements (PRD) - -## Summary -Total: ${funcReqs.length} functional + ${nfReqs.length} non-functional requirements - -## Functional Requirements -| ID | Title | Priority | Status | -|----|-------|----------|--------| -${funcReqs.map(r => `| [REQ-${r.id}](REQ-${r.id}-${r.slug}.md) | ${r.title} | ${r.priority} | draft |`).join('\n')} - -## Non-Functional Requirements -| ID | Type | Title | -|----|------|-------| -${nfReqs.map(n => `| [NFR-${n.type}-${n.id}](NFR-${n.type}-${n.id}-${n.slug}.md) | ${n.type} | ${n.title} |`).join('\n')} - -## MoSCoW Summary -- **Must**: ${funcReqs.filter(r => r.priority === 'Must').length} -- **Should**: ${funcReqs.filter(r => r.priority === 'Should').length} -- **Could**: ${funcReqs.filter(r => r.priority === 'Could').length} -- **Won't**: ${funcReqs.filter(r => r.priority === "Won't").length} -` - Write(`${sessionFolder}/spec/requirements/_index.md`, indexContent) - - return { - outputPath: 'spec/requirements/_index.md', - documentSummary: `Requirements generated: ${funcReqs.length} functional, 
${nfReqs.length} non-functional` - } -} -``` - -## DRAFT-003: Architecture - -Two-stage CLI: Gemini architecture design + Codex architecture review. - -```javascript -if (docType === 'architecture') { - // === Stage 1: Architecture Design (Gemini) === - Bash({ - command: `ccw cli -p "PURPOSE: Generate technical architecture for the specified requirements. -Success: Complete component architecture, tech stack, and ADRs with justified decisions. - -PRODUCT BRIEF (summary): -${priorDocs.productBrief?.slice(0, 3000) || ''} - -REQUIREMENTS: -${priorDocs.requirementsIndex?.slice(0, 5000) || ''} - -${sharedContext} - -TASK: -- Define system architecture style (monolith, microservices, serverless, etc.) with justification -- Identify core components and their responsibilities -- Create component interaction diagram (Mermaid graph TD format) -- Specify technology stack: languages, frameworks, databases, infrastructure -- Generate 2-4 Architecture Decision Records (ADRs): - - Each ADR: context, decision, 2-3 alternatives with pros/cons, consequences - - Focus on: data storage, API design, authentication, key technical choices -- Define data model: key entities and relationships (Mermaid erDiagram format) -- Identify security architecture: auth, authorization, data protection -- List API endpoints (high-level) - -MODE: analysis -EXPECTED: Complete architecture with: style justification, component diagram, tech stack table, ADRs, data model, security controls, API overview -CONSTRAINTS: Architecture must support all Must-have requirements. Prefer proven technologies. -" --tool gemini --mode analysis`, - run_in_background: true - }) - - // Wait for Gemini result - - // === Stage 2: Architecture Review (Codex) === - Bash({ - command: `ccw cli -p "PURPOSE: Critical review of proposed architecture - identify weaknesses and risks. -Success: Actionable feedback with specific concerns and improvement suggestions. 
- -PROPOSED ARCHITECTURE: -${geminiArchitectureOutput.slice(0, 5000)} - -REQUIREMENTS CONTEXT: -${priorDocs.requirementsIndex?.slice(0, 2000) || ''} - -TASK: -- Challenge each ADR: are the alternatives truly the best options? -- Identify scalability bottlenecks in the component design -- Assess security gaps: authentication, authorization, data protection -- Evaluate technology choices: maturity, community support, fit -- Check for over-engineering or under-engineering -- Verify architecture covers all Must-have requirements -- Rate overall architecture quality: 1-5 with justification - -MODE: analysis -EXPECTED: Architecture review with: per-ADR feedback, scalability concerns, security gaps, technology risks, quality rating -CONSTRAINTS: Be genuinely critical, not just validating. Focus on actionable improvements. -" --tool codex --mode analysis`, - run_in_background: true - }) - - // Wait for Codex result - - // === Integrate Discussion Feedback === - if (discussionFeedback) { - // Extract architecture feedback from discuss-003-requirements.md - // Merge into architecture design - } - - // === Codebase Integration Mapping (conditional) === - let integrationMapping = null - if (priorDocs.discoveryContext) { - const dc = JSON.parse(priorDocs.discoveryContext) - if (dc.relevant_files) { - integrationMapping = dc.relevant_files.map(f => ({ - new_component: '...', - existing_module: f.path, - integration_type: 'Extend|Replace|New', - notes: f.rationale - })) - } - } - - // === Generate architecture/ Directory === - Bash(`mkdir -p "${sessionFolder}/spec/architecture"`) - - const timestamp = new Date().toISOString() - const adrs = parseADRs(geminiArchitectureOutput, codexReviewOutput) - - // Write individual ADR-*.md files - adrs.forEach(adr => { - const adrFrontmatter = `--- -id: ADR-${adr.id} -title: "${adr.title}" -status: draft -traces: - - ../requirements/_index.md ----` - const adrContent = `${adrFrontmatter} - -# ADR-${adr.id}: ${adr.title} - -## Context 
-${adr.context} - -## Decision -${adr.decision} - -## Alternatives -${adr.alternatives.map((alt, i) => `### Option ${i+1}: ${alt.name}\n- **Pros**: ${alt.pros.join(', ')}\n- **Cons**: ${alt.cons.join(', ')}`).join('\n\n')} - -## Consequences -${adr.consequences} - -## Review Feedback -${adr.reviewFeedback || 'N/A'} -` - Write(`${sessionFolder}/spec/architecture/ADR-${adr.id}-${adr.slug}.md`, adrContent) - }) - - // Write _index.md (with Mermaid component diagram + ER diagram + links) - const archIndexFrontmatter = `--- -session_id: ${specConfig?.session_id || 'unknown'} -phase: 4 -document_type: architecture-index -status: draft -generated_at: ${timestamp} -version: 1 -dependencies: - - ../product-brief.md - - ../requirements/_index.md ----` - - const archIndexContent = `${archIndexFrontmatter} - -# Architecture Document - -## System Overview -${geminiArchitectureOutput.system_overview} - -## Component Diagram -\`\`\`mermaid -${geminiArchitectureOutput.component_diagram} -\`\`\` - -## Technology Stack -${geminiArchitectureOutput.tech_stack_table} - -## Architecture Decision Records -| ID | Title | Status | -|----|-------|--------| -${adrs.map(a => `| [ADR-${a.id}](ADR-${a.id}-${a.slug}.md) | ${a.title} | draft |`).join('\n')} - -## Data Model -\`\`\`mermaid -${geminiArchitectureOutput.data_model_diagram} -\`\`\` - -## API Design -${geminiArchitectureOutput.api_overview} - -## Security Controls -${geminiArchitectureOutput.security_controls} - -## Review Summary -${codexReviewOutput.summary} -Quality Rating: ${codexReviewOutput.quality_rating}/5 -` - - Write(`${sessionFolder}/spec/architecture/_index.md`, archIndexContent) - - return { - outputPath: 'spec/architecture/_index.md', - documentSummary: `Architecture generated with ${adrs.length} ADRs, quality rating ${codexReviewOutput.quality_rating}/5` - } -} -``` - -## DRAFT-004: Epics & Stories - -Gemini CLI decomposition to generate EPIC-*.md files. 
- -```javascript -if (docType === 'epics') { - // === Epic Decomposition CLI === - Bash({ - command: `ccw cli -p "PURPOSE: Decompose requirements into executable Epics and Stories for implementation planning. -Success: 3-7 Epics with prioritized Stories, dependency map, and MVP subset clearly defined. - -PRODUCT BRIEF (summary): -${priorDocs.productBrief?.slice(0, 2000) || ''} - -REQUIREMENTS: -${priorDocs.requirementsIndex?.slice(0, 5000) || ''} - -ARCHITECTURE (summary): -${priorDocs.architectureIndex?.slice(0, 3000) || ''} - -TASK: -- Group requirements into 3-7 logical Epics: - - Each Epic: EPIC-NNN ID, title, description, priority (Must/Should/Could) - - Group by functional domain or user journey stage - - Tag MVP Epics (minimum set for initial release) -- For each Epic, generate 2-5 Stories: - - Each Story: STORY-{EPIC}-NNN ID, title - - User story format: As a [persona], I want [action] so that [benefit] - - 2-4 acceptance criteria per story (testable) - - Relative size estimate: S/M/L/XL - - Trace to source requirement(s): REQ-NNN -- Create dependency map: - - Cross-Epic dependencies (which Epics block others) - - Mermaid graph LR format - - Recommended execution order with rationale -- Define MVP: - - Which Epics are in MVP - - MVP definition of done (3-5 criteria) - - What is explicitly deferred post-MVP - -MODE: analysis -EXPECTED: Structured output with: Epic list (ID, title, priority, MVP flag), Stories per Epic (ID, user story, AC, size, trace), dependency Mermaid diagram, execution order, MVP definition -CONSTRAINTS: Every Must-have requirement must appear in at least one Story. Stories must be small enough to implement independently. Dependencies should be minimized across Epics. 
-" --tool gemini --mode analysis`, - run_in_background: true - }) - - // Wait for CLI result - - // === Integrate Discussion Feedback === - if (discussionFeedback) { - // Extract execution feedback from discuss-004-architecture.md - // Adjust Epic granularity, MVP scope - } - - // === Generate epics/ Directory === - Bash(`mkdir -p "${sessionFolder}/spec/epics"`) - - const timestamp = new Date().toISOString() - const epicsList = parseEpics(cliOutput) - - // Write individual EPIC-*.md files (with stories) - epicsList.forEach(epic => { - const epicFrontmatter = `--- -id: EPIC-${epic.id} -title: "${epic.title}" -priority: ${epic.priority} -mvp: ${epic.mvp} -size: ${epic.size} -requirements: -${epic.reqs.map(r => ` - ${r}`).join('\n')} -architecture: -${epic.adrs.map(a => ` - ${a}`).join('\n')} -dependencies: -${epic.deps.map(d => ` - ${d}`).join('\n')} -status: draft ----` - const storiesContent = epic.stories.map(s => `### ${s.id}: ${s.title} - -**User Story**: ${s.user_story} -**Size**: ${s.size} -**Traces**: ${s.traces.join(', ')} - -**Acceptance Criteria**: -${s.acceptance_criteria.map((ac, i) => `${i+1}. 
${ac}`).join('\n')} -`).join('\n') - - const epicContent = `${epicFrontmatter} - -# EPIC-${epic.id}: ${epic.title} - -## Description -${epic.description} - -## Stories -${storiesContent} - -## Requirements -${epic.reqs.map(r => `- [${r}](../requirements/${r}.md)`).join('\n')} - -## Architecture -${epic.adrs.map(a => `- [${a}](../architecture/${a}.md)`).join('\n')} -` - Write(`${sessionFolder}/spec/epics/EPIC-${epic.id}-${epic.slug}.md`, epicContent) - }) - - // Write _index.md (with Mermaid dependency diagram + MVP + links) - const epicsIndexFrontmatter = `--- -session_id: ${specConfig?.session_id || 'unknown'} -phase: 5 -document_type: epics-index -status: draft -generated_at: ${timestamp} -version: 1 -dependencies: - - ../requirements/_index.md - - ../architecture/_index.md ----` - - const epicsIndexContent = `${epicsIndexFrontmatter} - -# Epics & Stories - -## Epic Overview -| ID | Title | Priority | MVP | Size | Status | -|----|-------|----------|-----|------|--------| -${epicsList.map(e => `| [EPIC-${e.id}](EPIC-${e.id}-${e.slug}.md) | ${e.title} | ${e.priority} | ${e.mvp ? 
'✓' : ''} | ${e.size} | draft |`).join('\n')} - -## Dependency Map -\`\`\`mermaid -${cliOutput.dependency_diagram} -\`\`\` - -## Execution Order -${cliOutput.execution_order} - -## MVP Scope -${cliOutput.mvp_definition} - -### MVP Epics -${epicsList.filter(e => e.mvp).map(e => `- EPIC-${e.id}: ${e.title}`).join('\n')} - -### Post-MVP -${epicsList.filter(e => !e.mvp).map(e => `- EPIC-${e.id}: ${e.title}`).join('\n')} - -## Traceability Matrix -${generateTraceabilityMatrix(epicsList, funcReqs)} -` - - Write(`${sessionFolder}/spec/epics/_index.md`, epicsIndexContent) - - return { - outputPath: 'spec/epics/_index.md', - documentSummary: `Epics generated: ${epicsList.length} total, ${epicsList.filter(e => e.mvp).length} in MVP` - } -} -``` - -## Helper Functions - -```javascript -function parseFunctionalRequirements(cliOutput) { - // Parse CLI JSON output to extract functional requirements - // Returns: [{ id, title, description, user_story, acceptance_criteria[], priority, slug }] -} - -function parseNonFunctionalRequirements(cliOutput) { - // Parse CLI JSON output to extract non-functional requirements - // Returns: [{ id, type, title, requirement, metric, target, slug }] -} - -function parseADRs(geminiOutput, codexOutput) { - // Parse architecture outputs to extract ADRs with review feedback - // Returns: [{ id, title, context, decision, alternatives[], consequences, reviewFeedback, slug }] -} - -function parseEpics(cliOutput) { - // Parse CLI JSON output to extract Epics and Stories - // Returns: [{ id, title, description, priority, mvp, size, stories[], reqs[], adrs[], deps[], slug }] -} - -function fillTemplate(template, data) { - // Fill template placeholders with data - // Apply document-standards.md formatting rules -} - -function generateTraceabilityMatrix(epics, requirements) { - // Generate traceability matrix showing Epic → Requirement mappings -} -``` diff --git a/.claude/skills/team-lifecycle-v2/roles/writer/role.md 
b/.claude/skills/team-lifecycle-v2/roles/writer/role.md deleted file mode 100644 index f86cff3a..00000000 --- a/.claude/skills/team-lifecycle-v2/roles/writer/role.md +++ /dev/null @@ -1,257 +0,0 @@ -# Role: writer - -Product Brief, Requirements/PRD, Architecture, and Epics & Stories document generation. Maps to spec-generator Phases 2-5. - -## Role Identity - -- **Name**: `writer` -- **Task Prefix**: `DRAFT-*` -- **Output Tag**: `[writer]` -- **Responsibility**: Load Context → Generate Document → Incorporate Feedback → Report -- **Communication**: SendMessage to coordinator only - -## Role Boundaries - -### MUST -- Only process DRAFT-* tasks -- Read templates before generating documents -- Follow document-standards.md formatting rules -- Integrate discussion feedback when available -- Generate proper frontmatter for all documents - -### MUST NOT -- Create tasks for other roles -- Contact other workers directly -- Skip template loading -- Modify discussion records -- Generate documents without loading prior dependencies - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `draft_ready` | writer → coordinator | Document writing complete | With document path and type | -| `draft_revision` | writer → coordinator | Document revised and resubmitted | Describes changes made | -| `impl_progress` | writer → coordinator | Long writing progress | Multi-document stage progress | -| `error` | writer → coordinator | Unrecoverable error | Template missing, insufficient context, etc. 
| - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Document ready -mcp__ccw-tools__team_msg({ - operation: "log", - team: teamName, - from: "writer", - to: "coordinator", - type: "draft_ready", - summary: "[writer] Product Brief complete", - ref: `${sessionFolder}/product-brief.md` -}) - -// Document revision -mcp__ccw-tools__team_msg({ - operation: "log", - team: teamName, - from: "writer", - to: "coordinator", - type: "draft_revision", - summary: "[writer] Requirements revised per discussion feedback" -}) - -// Error report -mcp__ccw-tools__team_msg({ - operation: "log", - team: teamName, - from: "writer", - to: "coordinator", - type: "error", - summary: "[writer] Input artifact missing, cannot generate document" -}) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```bash -ccw team log --team "${teamName}" --from "writer" --to "coordinator" --type "draft_ready" --summary "[writer] Brief complete" --ref "${sessionFolder}/product-brief.md" --json -``` - -## Toolbox - -### Available Commands -- `commands/generate-doc.md` - Multi-CLI document generation for 4 doc types - -### Subagent Capabilities -- None - -### CLI Capabilities -- `gemini`, `codex`, `claude` for multi-perspective analysis - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('DRAFT-') && - t.owner === 'writer' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Context & Discussion Loading - -```javascript -// Extract session folder from task description -const sessionMatch = task.description.match(/Session:\s*(.+)/) -const sessionFolder = sessionMatch ? 
sessionMatch[1].trim() : '' - -// Load session config -let specConfig = null -try { specConfig = JSON.parse(Read(`${sessionFolder}/spec/spec-config.json`)) } catch {} - -// Determine document type from task subject -const docType = task.subject.includes('Product Brief') ? 'product-brief' - : task.subject.includes('Requirements') || task.subject.includes('PRD') ? 'requirements' - : task.subject.includes('Architecture') ? 'architecture' - : task.subject.includes('Epics') ? 'epics' - : 'unknown' - -// Load discussion feedback (from preceding DISCUSS task) -const discussionFiles = { - 'product-brief': 'discussions/discuss-001-scope.md', - 'requirements': 'discussions/discuss-002-brief.md', - 'architecture': 'discussions/discuss-003-requirements.md', - 'epics': 'discussions/discuss-004-architecture.md' -} -let discussionFeedback = null -try { discussionFeedback = Read(`${sessionFolder}/${discussionFiles[docType]}`) } catch {} - -// Load prior documents progressively -const priorDocs = {} -if (docType !== 'product-brief') { - try { priorDocs.discoveryContext = Read(`${sessionFolder}/spec/discovery-context.json`) } catch {} -} -if (['requirements', 'architecture', 'epics'].includes(docType)) { - try { priorDocs.productBrief = Read(`${sessionFolder}/spec/product-brief.md`) } catch {} -} -if (['architecture', 'epics'].includes(docType)) { - try { priorDocs.requirementsIndex = Read(`${sessionFolder}/spec/requirements/_index.md`) } catch {} -} -if (docType === 'epics') { - try { priorDocs.architectureIndex = Read(`${sessionFolder}/spec/architecture/_index.md`) } catch {} -} -``` - -### Phase 3: Document Generation - -**Delegate to command file**: - -```javascript -// Load and execute document generation command -const generateDocCommand = Read('commands/generate-doc.md') - -// Execute command with context: -// - docType -// - sessionFolder -// - specConfig -// - discussionFeedback -// - priorDocs -// - task - -// Command will handle: -// - Loading document standards -// - 
Loading appropriate template -// - Building shared context -// - Routing to type-specific generation (DRAFT-001/002/003/004) -// - Integrating discussion feedback -// - Writing output files - -// Returns: { outputPath, documentSummary } -``` - -### Phase 4: Self-Validation - -```javascript -const docContent = Read(`${sessionFolder}/${outputPath}`) - -const validationChecks = { - has_frontmatter: /^---\n[\s\S]+?\n---/.test(docContent), - sections_complete: /* verify all required sections present */, - cross_references: docContent.includes('session_id'), - discussion_integrated: !discussionFeedback || docContent.includes('Discussion') -} - -const allValid = Object.values(validationChecks).every(v => v) -``` - -### Phase 5: Report to Coordinator - -```javascript -const docTypeLabel = { - 'product-brief': 'Product Brief', - 'requirements': 'Requirements/PRD', - 'architecture': 'Architecture Document', - 'epics': 'Epics & Stories' -} - -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "writer", to: "coordinator", - type: "draft_ready", - summary: `[writer] ${docTypeLabel[docType]} complete: ${allValid ? 'validation passed' : 'partial validation failure'}`, - ref: `${sessionFolder}/${outputPath}` -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `[writer] ## Document Writing Result - -**Task**: ${task.subject} -**Document Type**: ${docTypeLabel[docType]} -**Validation Status**: ${allValid ? 'PASS' : 'PARTIAL'} - -### Document Summary -${documentSummary} - -### Discussion Feedback Integration -${discussionFeedback ? 'Prior discussion feedback integrated' : 'Initial draft'} - -### Self-Validation Results -${Object.entries(validationChecks).map(([k, v]) => '- ' + k + ': ' + (v ?
'PASS' : 'FAIL')).join('\n')} - -### Output Location -${sessionFolder}/${outputPath} - -Document is ready; the discussion round can begin.`, - summary: `[writer] ${docTypeLabel[docType]} ready` -}) - -TaskUpdate({ taskId: task.id, status: 'completed' }) - -// Check for next DRAFT task → back to Phase 1 -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No DRAFT-* tasks available | Idle, wait for coordinator assignment | -| Prior document not found | Notify coordinator, request prerequisite | -| CLI analysis failure | Retry with fallback tool, then direct generation | -| Template sections incomplete | Generate best-effort, note gaps in report | -| Discussion feedback contradicts prior docs | Note conflict in document, flag for next discussion | -| Session folder missing | Notify coordinator, request session path | -| Unexpected error | Log error via team_msg, report to coordinator | diff --git a/.claude/skills/team-lifecycle-v2/specs/document-standards.md b/.claude/skills/team-lifecycle-v2/specs/document-standards.md deleted file mode 100644 index 2820cd98..00000000 --- a/.claude/skills/team-lifecycle-v2/specs/document-standards.md +++ /dev/null @@ -1,192 +0,0 @@ -# Document Standards - -Defines format conventions, YAML frontmatter schema, naming rules, and content structure for all spec-generator outputs.
- -## When to Use - -| Phase | Usage | Section | -|-------|-------|---------| -| All Phases | Frontmatter format | YAML Frontmatter Schema | -| All Phases | File naming | Naming Conventions | -| Phase 2-5 | Document structure | Content Structure | -| Phase 6 | Validation reference | All sections | - ---- - -## YAML Frontmatter Schema - -Every generated document MUST begin with YAML frontmatter: - -```yaml ---- -session_id: SPEC-{slug}-{YYYY-MM-DD} -phase: {1-6} -document_type: {product-brief|requirements|architecture|epics|readiness-report|spec-summary} -status: draft|review|complete -generated_at: {ISO8601 timestamp} -stepsCompleted: [] -version: 1 -dependencies: - - {list of input documents used} ---- -``` - -### Field Definitions - -| Field | Type | Required | Description | -|-------|------|----------|-------------| -| `session_id` | string | Yes | Session identifier matching spec-config.json | -| `phase` | number | Yes | Phase number that generated this document (1-6) | -| `document_type` | string | Yes | One of: product-brief, requirements, architecture, epics, readiness-report, spec-summary | -| `status` | enum | Yes | draft (initial), review (user reviewed), complete (finalized) | -| `generated_at` | string | Yes | ISO8601 timestamp of generation | -| `stepsCompleted` | array | Yes | List of step IDs completed during generation | -| `version` | number | Yes | Document version, incremented on re-generation | -| `dependencies` | array | No | List of input files this document depends on | - -### Status Transitions - -``` -draft -> review -> complete - | ^ - +-------------------+ (direct promotion in auto mode) -``` - -- **draft**: Initial generation, not yet user-reviewed -- **review**: User has reviewed and provided feedback -- **complete**: Finalized, ready for downstream consumption - -In auto mode (`-y`), documents are promoted directly from `draft` to `complete`. 
- ---- - -## Naming Conventions - -### Session ID Format - -``` -SPEC-{slug}-{YYYY-MM-DD} -``` - -- **slug**: Lowercase, alphanumeric + Chinese characters, hyphens as separators, max 40 chars -- **date**: UTC+8 date in YYYY-MM-DD format - -Examples: -- `SPEC-task-management-system-2026-02-11` -- `SPEC-user-auth-oauth-2026-02-11` - -### Output Files - -| File | Phase | Description | -|------|-------|-------------| -| `spec-config.json` | 1 | Session configuration and state | -| `discovery-context.json` | 1 | Codebase exploration results (optional) | -| `product-brief.md` | 2 | Product brief document | -| `requirements.md` | 3 | PRD document | -| `architecture.md` | 4 | Architecture decisions document | -| `epics.md` | 5 | Epic/Story breakdown document | -| `readiness-report.md` | 6 | Quality validation report | -| `spec-summary.md` | 6 | One-page executive summary | - -### Output Directory - -``` -.workflow/.spec/{session-id}/ -``` - ---- - -## Content Structure - -### Heading Hierarchy - -- `#` (H1): Document title only (one per document) -- `##` (H2): Major sections -- `###` (H3): Subsections -- `####` (H4): Detail items (use sparingly) - -Maximum depth: 4 levels. Prefer flat structures. - -### Section Ordering - -Every document follows this general pattern: - -1. **YAML Frontmatter** (mandatory) -2. **Title** (H1) -3. **Executive Summary** (2-3 sentences) -4. **Core Content Sections** (H2, document-specific) -5. **Open Questions / Risks** (if applicable) -6. 
**References / Traceability** (links to upstream/downstream docs) - -### Formatting Rules - -| Element | Format | Example | -|---------|--------|---------| -| Requirements | `REQ-{NNN}` prefix | REQ-001: User login | -| Acceptance criteria | Checkbox list | `- [ ] User can log in with email` | -| Architecture decisions | `ADR-{NNN}` prefix | ADR-001: Use PostgreSQL | -| Epics | `EPIC-{NNN}` prefix | EPIC-001: Authentication | -| Stories | `STORY-{EPIC}-{NNN}` prefix | STORY-001-001: Login form | -| Priority tags | MoSCoW labels | `[Must]`, `[Should]`, `[Could]`, `[Won't]` | -| Mermaid diagrams | Fenced code blocks | ````mermaid ... ``` `` | -| Code examples | Language-tagged blocks | ````typescript ... ``` `` | - -### Cross-Reference Format - -Use relative references between documents: - -```markdown -See [Product Brief](product-brief.md#section-name) for details. -Derived from [REQ-001](requirements.md#req-001). -``` - -### Language - -- Document body: Follow user's input language (Chinese or English) -- Technical identifiers: Always English (REQ-001, ADR-001, EPIC-001) -- YAML frontmatter keys: Always English - ---- - -## spec-config.json Schema - -```json -{ - "session_id": "string (required)", - "seed_input": "string (required) - original user input", - "input_type": "text|file (required)", - "timestamp": "ISO8601 (required)", - "mode": "interactive|auto (required)", - "complexity": "simple|moderate|complex (required)", - "depth": "light|standard|comprehensive (required)", - "focus_areas": ["string array"], - "seed_analysis": { - "problem_statement": "string", - "target_users": ["string array"], - "domain": "string", - "constraints": ["string array"], - "dimensions": ["string array - 3-5 exploration dimensions"] - }, - "has_codebase": "boolean", - "phasesCompleted": [ - { - "phase": "number (1-6)", - "name": "string (phase name)", - "output_file": "string (primary output file)", - "completed_at": "ISO8601" - } - ] -} -``` - ---- - -## Validation Checklist - -- 
[ ] Every document starts with valid YAML frontmatter -- [ ] `session_id` matches across all documents in a session -- [ ] `status` field reflects current document state -- [ ] All cross-references resolve to valid targets -- [ ] Heading hierarchy is correct (no skipped levels) -- [ ] Technical identifiers use correct prefixes -- [ ] Output files are in the correct directory diff --git a/.claude/skills/team-lifecycle-v2/specs/quality-gates.md b/.claude/skills/team-lifecycle-v2/specs/quality-gates.md deleted file mode 100644 index ae968436..00000000 --- a/.claude/skills/team-lifecycle-v2/specs/quality-gates.md +++ /dev/null @@ -1,207 +0,0 @@ -# Quality Gates - -Per-phase quality gate criteria and scoring dimensions for spec-generator outputs. - -## When to Use - -| Phase | Usage | Section | -|-------|-------|---------| -| Phase 2-5 | Post-generation self-check | Per-Phase Gates | -| Phase 6 | Cross-document validation | Cross-Document Validation | -| Phase 6 | Final scoring | Scoring Dimensions | - ---- - -## Quality Thresholds - -| Gate | Score | Action | -|------|-------|--------| -| **Pass** | >= 80% | Continue to next phase | -| **Review** | 60-79% | Log warnings, continue with caveats | -| **Fail** | < 60% | Must address issues before continuing | - -In auto mode (`-y`), Review-level issues are logged but do not block progress. - ---- - -## Scoring Dimensions - -### 1. Completeness (25%) - -All required sections present with substantive content. - -| Score | Criteria | -|-------|----------| -| 100% | All template sections filled with detailed content | -| 75% | All sections present, some lack detail | -| 50% | Major sections present but minor sections missing | -| 25% | Multiple major sections missing or empty | -| 0% | Document is a skeleton only | - -### 2. Consistency (25%) - -Terminology, formatting, and references are uniform across documents. 
- -| Score | Criteria | -|-------|----------| -| 100% | All terms consistent, all references valid, formatting uniform | -| 75% | Minor terminology variations, all references valid | -| 50% | Some inconsistent terms, 1-2 broken references | -| 25% | Frequent inconsistencies, multiple broken references | -| 0% | Documents contradict each other | - -### 3. Traceability (25%) - -Requirements, architecture decisions, and stories trace back to goals. - -| Score | Criteria | -|-------|----------| -| 100% | Every story traces to a requirement, every requirement traces to a goal | -| 75% | Most items traceable, few orphans | -| 50% | Partial traceability, some disconnected items | -| 25% | Weak traceability, many orphan items | -| 0% | No traceability between documents | - -### 4. Depth (25%) - -Content provides sufficient detail for execution teams. - -| Score | Criteria | -|-------|----------| -| 100% | Acceptance criteria specific and testable, architecture decisions justified, stories estimable | -| 75% | Most items detailed enough, few vague areas | -| 50% | Mix of detailed and vague content | -| 25% | Mostly high-level, lacking actionable detail | -| 0% | Too abstract for execution | - ---- - -## Per-Phase Quality Gates - -### Phase 1: Discovery - -| Check | Criteria | Severity | -|-------|----------|----------| -| Session ID valid | Matches `SPEC-{slug}-{date}` format | Error | -| Problem statement exists | Non-empty, >= 20 characters | Error | -| Target users identified | >= 1 user group | Error | -| Dimensions generated | 3-5 exploration dimensions | Warning | -| Constraints listed | >= 0 (can be empty with justification) | Info | - -### Phase 2: Product Brief - -| Check | Criteria | Severity | -|-------|----------|----------| -| Vision statement | Clear, 1-3 sentences | Error | -| Problem statement | Specific and measurable | Error | -| Target users | >= 1 persona with needs described | Error | -| Goals defined | >= 2 measurable goals | Error | -| Success metrics 
| >= 2 quantifiable metrics | Warning | -| Scope boundaries | In-scope and out-of-scope listed | Warning | -| Multi-perspective | >= 2 CLI perspectives synthesized | Info | - -### Phase 3: Requirements (PRD) - -| Check | Criteria | Severity | -|-------|----------|----------| -| Functional requirements | >= 3 with REQ-NNN IDs | Error | -| Acceptance criteria | Every requirement has >= 1 criterion | Error | -| MoSCoW priority | Every requirement tagged | Error | -| Non-functional requirements | >= 1 (performance, security, etc.) | Warning | -| User stories | >= 1 per Must-have requirement | Warning | -| Traceability | Requirements trace to product brief goals | Warning | - -### Phase 4: Architecture - -| Check | Criteria | Severity | -|-------|----------|----------| -| Component diagram | Present (Mermaid or ASCII) | Error | -| Tech stack specified | Languages, frameworks, key libraries | Error | -| ADR present | >= 1 Architecture Decision Record | Error | -| ADR has alternatives | Each ADR lists >= 2 options considered | Warning | -| Integration points | External systems/APIs identified | Warning | -| Data model | Key entities and relationships described | Warning | -| Codebase mapping | Mapped to existing code (if has_codebase) | Info | - -### Phase 5: Epics & Stories - -| Check | Criteria | Severity | -|-------|----------|----------| -| Epics defined | 3-7 epics with EPIC-NNN IDs | Error | -| MVP subset | >= 1 epic tagged as MVP | Error | -| Stories per epic | 2-5 stories per epic | Error | -| Story format | "As a...I want...So that..." 
pattern | Warning | -| Dependency map | Cross-epic dependencies documented | Warning | -| Estimation hints | Relative sizing (S/M/L/XL) per story | Info | -| Traceability | Stories trace to requirements | Warning | - -### Phase 6: Readiness Check - -| Check | Criteria | Severity | -|-------|----------|----------| -| All documents exist | product-brief, requirements, architecture, epics | Error | -| Frontmatter valid | All YAML frontmatter parseable and correct | Error | -| Cross-references valid | All document links resolve | Error | -| Overall score >= 60% | Weighted average across 4 dimensions | Error | -| No unresolved Errors | All Error-severity issues addressed | Error | -| Summary generated | spec-summary.md created | Warning | - ---- - -## Cross-Document Validation - -Checks performed during Phase 6 across all documents: - -### Completeness Matrix - -``` -Product Brief goals -> Requirements (each goal has >= 1 requirement) -Requirements -> Architecture (each Must requirement has design coverage) -Requirements -> Epics (each Must requirement appears in >= 1 story) -Architecture ADRs -> Epics (tech choices reflected in implementation stories) -``` - -### Consistency Checks - -| Check | Documents | Rule | -|-------|-----------|------| -| Terminology | All | Same term used consistently (no synonyms for same concept) | -| User personas | Brief + PRD + Epics | Same user names/roles throughout | -| Scope | Brief + PRD | PRD scope does not exceed brief scope | -| Tech stack | Architecture + Epics | Stories reference correct technologies | - -### Traceability Matrix Format - -```markdown -| Goal | Requirements | Architecture | Epics | -|------|-------------|--------------|-------| -| G-001: ... | REQ-001, REQ-002 | ADR-001 | EPIC-001 | -| G-002: ... 
| REQ-003 | ADR-002 | EPIC-002, EPIC-003 | -``` - ---- - -## Issue Classification - -### Error (Must Fix) - -- Missing required document or section -- Broken cross-references -- Contradictory information between documents -- Empty acceptance criteria on Must-have requirements -- No MVP subset defined in epics - -### Warning (Should Fix) - -- Vague acceptance criteria -- Missing non-functional requirements -- No success metrics defined -- Incomplete traceability -- Missing architecture review notes - -### Info (Nice to Have) - -- Could add more detailed personas -- Consider additional ADR alternatives -- Story estimation hints missing -- Mermaid diagrams could be more detailed diff --git a/.claude/skills/team-lifecycle-v2/specs/team-config.json b/.claude/skills/team-lifecycle-v2/specs/team-config.json deleted file mode 100644 index 306db7c0..00000000 --- a/.claude/skills/team-lifecycle-v2/specs/team-config.json +++ /dev/null @@ -1,156 +0,0 @@ -{ - "team_name": "team-lifecycle", - "team_display_name": "Team Lifecycle", - "description": "Unified team skill covering spec-to-dev-to-test full lifecycle", - "version": "2.0.0", - "architecture": "folder-based", - "role_structure": "roles/{name}/role.md + roles/{name}/commands/*.md", - - "roles": { - "coordinator": { - "task_prefix": null, - "responsibility": "Pipeline orchestration, requirement clarification, task chain creation, message dispatch", - "message_types": ["plan_approved", "plan_revision", "task_unblocked", "fix_required", "error", "shutdown"] - }, - "analyst": { - "task_prefix": "RESEARCH", - "responsibility": "Seed analysis, codebase exploration, multi-dimensional context gathering", - "message_types": ["research_ready", "research_progress", "error"] - }, - "writer": { - "task_prefix": "DRAFT", - "responsibility": "Product Brief / PRD / Architecture / Epics document generation", - "message_types": ["draft_ready", "draft_revision", "impl_progress", "error"] - }, - "discussant": { - "task_prefix": "DISCUSS", - 
"responsibility": "Multi-perspective critique, consensus building, conflict escalation", - "message_types": ["discussion_ready", "discussion_blocked", "impl_progress", "error"] - }, - "planner": { - "task_prefix": "PLAN", - "responsibility": "Multi-angle code exploration, structured implementation planning", - "message_types": ["plan_ready", "plan_revision", "impl_progress", "error"] - }, - "executor": { - "task_prefix": "IMPL", - "responsibility": "Code implementation following approved plans", - "message_types": ["impl_complete", "impl_progress", "error"] - }, - "tester": { - "task_prefix": "TEST", - "responsibility": "Adaptive test-fix cycles, progressive testing, quality gates", - "message_types": ["test_result", "impl_progress", "fix_required", "error"] - }, - "reviewer": { - "task_prefix": "REVIEW", - "additional_prefixes": ["QUALITY"], - "responsibility": "Code review (REVIEW-*) + Spec quality validation (QUALITY-*)", - "message_types": ["review_result", "quality_result", "fix_required", "error"] - }, - "explorer": { - "task_prefix": "EXPLORE", - "responsibility": "Code search, pattern discovery, dependency tracing. Service role — on-demand by coordinator", - "role_type": "service", - "message_types": ["explore_ready", "explore_progress", "task_failed"] - }, - "architect": { - "task_prefix": "ARCH", - "responsibility": "Architecture assessment, tech feasibility, design pattern review. 
Consulting role — on-demand by coordinator", - "role_type": "consulting", - "consultation_modes": ["spec-review", "plan-review", "code-review", "consult", "feasibility"], - "message_types": ["arch_ready", "arch_concern", "arch_progress", "error"] - }, - "fe-developer": { - "task_prefix": "DEV-FE", - "responsibility": "Frontend component/page implementation, design token consumption, responsive UI", - "role_type": "frontend-pipeline", - "message_types": ["dev_fe_complete", "dev_fe_progress", "error"] - }, - "fe-qa": { - "task_prefix": "QA-FE", - "responsibility": "5-dimension frontend review (quality, a11y, design compliance, UX, pre-delivery), GC loop", - "role_type": "frontend-pipeline", - "message_types": ["qa_fe_passed", "qa_fe_result", "fix_required", "error"] - } - }, - - "pipelines": { - "spec-only": { - "description": "Specification pipeline: research → discuss → draft → quality", - "task_chain": [ - "RESEARCH-001", - "DISCUSS-001", "DRAFT-001", "DISCUSS-002", - "DRAFT-002", "DISCUSS-003", "DRAFT-003", "DISCUSS-004", - "DRAFT-004", "DISCUSS-005", "QUALITY-001", "DISCUSS-006" - ] - }, - "impl-only": { - "description": "Implementation pipeline: plan → implement → test + review", - "task_chain": ["PLAN-001", "IMPL-001", "TEST-001", "REVIEW-001"] - }, - "full-lifecycle": { - "description": "Full lifecycle: spec pipeline → implementation pipeline", - "task_chain": "spec-only + impl-only (PLAN-001 blockedBy DISCUSS-006)" - }, - "fe-only": { - "description": "Frontend-only pipeline: plan → frontend dev → frontend QA", - "task_chain": ["PLAN-001", "DEV-FE-001", "QA-FE-001"], - "gc_loop": { "max_rounds": 2, "convergence": "score >= 8 && critical === 0" } - }, - "fullstack": { - "description": "Fullstack pipeline: plan → backend + frontend parallel → test + QA", - "task_chain": ["PLAN-001", "IMPL-001||DEV-FE-001", "TEST-001||QA-FE-001", "REVIEW-001"], - "sync_points": ["REVIEW-001"] - }, - "full-lifecycle-fe": { - "description": "Full lifecycle with frontend: spec → 
plan → backend + frontend → test + QA", - "task_chain": "spec-only + fullstack (PLAN-001 blockedBy DISCUSS-006)" - } - }, - - "frontend_detection": { - "keywords": ["component", "page", "UI", "前端", "frontend", "CSS", "HTML", "React", "Vue", "Tailwind", "组件", "页面", "样式", "layout", "responsive", "Svelte", "Next.js", "Nuxt", "shadcn", "设计系统", "design system"], - "file_patterns": ["*.tsx", "*.jsx", "*.vue", "*.svelte", "*.css", "*.scss", "*.html"], - "routing_rules": { - "frontend_only": "All tasks match frontend keywords, no backend/API mentions", - "fullstack": "Mix of frontend and backend tasks", - "backend_only": "No frontend keywords detected (default impl-only)" - } - }, - - "ui_ux_pro_max": { - "skill_name": "ui-ux-pro-max", - "install_command": "/plugin install ui-ux-pro-max@ui-ux-pro-max-skill", - "invocation": "Skill(skill=\"ui-ux-pro-max\", args=\"...\")", - "domains": ["product", "style", "typography", "color", "landing", "chart", "ux", "web"], - "stacks": ["html-tailwind", "react", "nextjs", "vue", "svelte", "shadcn", "swiftui", "react-native", "flutter"], - "fallback": "llm-general-knowledge", - "design_intelligence_chain": ["analyst → design-intelligence.json", "architect → design-tokens.json", "fe-developer → tokens.css", "fe-qa → anti-pattern audit"] - }, - - "shared_memory": { - "file": "shared-memory.json", - "schema": { - "design_intelligence": "From analyst via ui-ux-pro-max", - "design_token_registry": "From architect, consumed by fe-developer/fe-qa", - "component_inventory": "From fe-developer, list of implemented components", - "style_decisions": "Accumulated design decisions", - "qa_history": "From fe-qa, audit trail", - "industry_context": "Industry + strictness config" - } - }, - - "collaboration_patterns": ["CP-1", "CP-2", "CP-4", "CP-5", "CP-6", "CP-10"], - - "session_dirs": { - "base": ".workflow/.team/TLS-{slug}-{YYYY-MM-DD}/", - "spec": "spec/", - "discussions": "discussions/", - "plan": "plan/", - "explorations": "explorations/", - 
"architecture": "architecture/", - "wisdom": "wisdom/", - "messages": ".workflow/.team-msg/{team-name}/" - } -} diff --git a/.claude/skills/team-lifecycle-v2/templates/architecture-doc.md b/.claude/skills/team-lifecycle-v2/templates/architecture-doc.md deleted file mode 100644 index 5106de03..00000000 --- a/.claude/skills/team-lifecycle-v2/templates/architecture-doc.md +++ /dev/null @@ -1,254 +0,0 @@ -# Architecture Document Template (Directory Structure) - -Template for generating architecture decision documents as a directory of individual ADR files in Phase 4. - -## Usage Context - -| Phase | Usage | -|-------|-------| -| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis | -| Output Location | `{workDir}/architecture/` | - -## Output Structure - -``` -{workDir}/architecture/ -├── _index.md # Overview, components, tech stack, data model, security -├── ADR-001-{slug}.md # Individual Architecture Decision Record -├── ADR-002-{slug}.md -└── ... -``` - ---- - -## Template: _index.md - -```markdown ---- -session_id: {session_id} -phase: 4 -document_type: architecture-index -status: draft -generated_at: {timestamp} -version: 1 -dependencies: - - ../spec-config.json - - ../product-brief.md - - ../requirements/_index.md ---- - -# Architecture: {product_name} - -{executive_summary - high-level architecture approach and key decisions} - -## System Overview - -### Architecture Style -{description of chosen architecture style: microservices, monolith, serverless, etc.} - -### System Context Diagram - -```mermaid -C4Context - title System Context Diagram - Person(user, "User", "Primary user") - System(system, "{product_name}", "Core system") - System_Ext(ext1, "{external_system}", "{description}") - Rel(user, system, "Uses") - Rel(system, ext1, "Integrates with") -``` - -## Component Architecture - -### Component Diagram - -```mermaid -graph TD - subgraph "{product_name}" - A[Component A] --> B[Component B] - B --> C[Component C] - A --> 
D[Component D] - end - B --> E[External Service] -``` - -### Component Descriptions - -| Component | Responsibility | Technology | Dependencies | -|-----------|---------------|------------|--------------| -| {component_name} | {what it does} | {tech stack} | {depends on} | - -## Technology Stack - -### Core Technologies - -| Layer | Technology | Version | Rationale | -|-------|-----------|---------|-----------| -| Frontend | {technology} | {version} | {why chosen} | -| Backend | {technology} | {version} | {why chosen} | -| Database | {technology} | {version} | {why chosen} | -| Infrastructure | {technology} | {version} | {why chosen} | - -### Key Libraries & Frameworks - -| Library | Purpose | License | -|---------|---------|---------| -| {library_name} | {purpose} | {license} | - -## Architecture Decision Records - -| ADR | Title | Status | Key Choice | -|-----|-------|--------|------------| -| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} | -| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} | -| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} | - -## Data Architecture - -### Data Model - -```mermaid -erDiagram - ENTITY_A ||--o{ ENTITY_B : "has many" - ENTITY_A { - string id PK - string name - datetime created_at - } - ENTITY_B { - string id PK - string entity_a_id FK - string value - } -``` - -### Data Storage Strategy - -| Data Type | Storage | Retention | Backup | -|-----------|---------|-----------|--------| -| {type} | {storage solution} | {retention policy} | {backup strategy} | - -## API Design - -### API Overview - -| Endpoint | Method | Purpose | Auth | -|----------|--------|---------|------| -| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} | - -## Security Architecture - -### Security Controls - -| Control | Implementation | Requirement | -|---------|---------------|-------------| -| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) | -| 
Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) | -| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) | - -## Infrastructure & Deployment - -### Deployment Architecture - -{description of deployment model: containers, serverless, VMs, etc.} - -### Environment Strategy - -| Environment | Purpose | Configuration | -|-------------|---------|---------------| -| Development | Local development | {config} | -| Staging | Pre-production testing | {config} | -| Production | Live system | {config} | - -## Codebase Integration - -{if has_codebase is true:} - -### Existing Code Mapping - -| New Component | Existing Module | Integration Type | Notes | -|--------------|----------------|------------------|-------| -| {component} | {existing module path} | Extend/Replace/New | {notes} | - -### Migration Notes -{any migration considerations for existing code} - -## Quality Attributes - -| Attribute | Target | Measurement | ADR Reference | -|-----------|--------|-------------|---------------| -| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) | -| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) | -| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) | - -## Risks & Mitigations - -| Risk | Impact | Probability | Mitigation | -|------|--------|-------------|------------| -| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} | - -## Open Questions - -- [ ] {architectural question 1} -- [ ] {architectural question 2} - -## References - -- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md) -- Next: [Epics & Stories](../epics/_index.md) -``` - ---- - -## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record) - -```markdown ---- -id: ADR-{NNN} -status: Accepted -traces_to: [{REQ-NNN}, {NFR-X-NNN}] -date: {timestamp} ---- - -# ADR-{NNN}: {decision_title} - -## Context - 
-{what is the situation that motivates this decision} - -## Decision - -{what is the chosen approach} - -## Alternatives Considered - -| Option | Pros | Cons | -|--------|------|------| -| {option_1 - chosen} | {pros} | {cons} | -| {option_2} | {pros} | {cons} | -| {option_3} | {pros} | {cons} | - -## Consequences - -- **Positive**: {positive outcomes} -- **Negative**: {tradeoffs accepted} -- **Risks**: {risks to monitor} - -## Traces - -- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md) -- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5) -``` - ---- - -## Variable Descriptions - -| Variable | Source | Description | -|----------|--------|-------------| -| `{session_id}` | spec-config.json | Session identifier | -| `{timestamp}` | Runtime | ISO8601 generation timestamp | -| `{product_name}` | product-brief.md | Product/feature name | -| `{NNN}` | Auto-increment | ADR/requirement number | -| `{slug}` | Auto-generated | Kebab-case from decision title | -| `{has_codebase}` | spec-config.json | Whether existing codebase exists | diff --git a/.claude/skills/team-lifecycle-v2/templates/epics-template.md b/.claude/skills/team-lifecycle-v2/templates/epics-template.md deleted file mode 100644 index 939d933c..00000000 --- a/.claude/skills/team-lifecycle-v2/templates/epics-template.md +++ /dev/null @@ -1,196 +0,0 @@ -# Epics & Stories Template (Directory Structure) - -Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5. - -## Usage Context - -| Phase | Usage | -|-------|-------| -| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition | -| Output Location | `{workDir}/epics/` | - -## Output Structure - -``` -{workDir}/epics/ -├── _index.md # Overview table + dependency map + MVP scope + execution order -├── EPIC-001-{slug}.md # Individual Epic with its Stories -├── EPIC-002-{slug}.md -└── ... 
-``` - ---- - -## Template: _index.md - -```markdown ---- -session_id: {session_id} -phase: 5 -document_type: epics-index -status: draft -generated_at: {timestamp} -version: 1 -dependencies: - - ../spec-config.json - - ../product-brief.md - - ../requirements/_index.md - - ../architecture/_index.md ---- - -# Epics & Stories: {product_name} - -{executive_summary - overview of epic structure and MVP scope} - -## Epic Overview - -| Epic ID | Title | Priority | MVP | Stories | Est. Size | -|---------|-------|----------|-----|---------|-----------| -| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} | -| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} | -| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} | - -## Dependency Map - -```mermaid -graph LR - EPIC-001 --> EPIC-002 - EPIC-001 --> EPIC-003 - EPIC-002 --> EPIC-004 - EPIC-003 --> EPIC-005 -``` - -### Dependency Notes -{explanation of why these dependencies exist and suggested execution order} - -### Recommended Execution Order -1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational} -2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1} -3. ... 
- -## MVP Scope - -### MVP Epics -{list of epics included in MVP with justification, linking to each} - -### MVP Definition of Done -- [ ] {MVP completion criterion 1} -- [ ] {MVP completion criterion 2} -- [ ] {MVP completion criterion 3} - -## Traceability Matrix - -| Requirement | Epic | Stories | Architecture | -|-------------|------|---------|--------------| -| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) | -| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B | -| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) | - -## Estimation Summary - -| Size | Meaning | Count | -|------|---------|-------| -| S | Small - well-understood, minimal risk | {n} | -| M | Medium - some complexity, moderate risk | {n} | -| L | Large - significant complexity, should consider splitting | {n} | -| XL | Extra Large - high complexity, must split before implementation | {n} | - -## Risks & Considerations - -| Risk | Affected Epics | Mitigation | -|------|---------------|------------| -| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} | - -## Open Questions - -- [ ] {question about scope or implementation 1} -- [ ] {question about scope or implementation 2} - -## References - -- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md) -- Handoff to: execution workflows (lite-plan, plan, req-plan) -``` - ---- - -## Template: EPIC-NNN-{slug}.md (Individual Epic) - -```markdown ---- -id: EPIC-{NNN} -priority: {Must|Should|Could} -mvp: {true|false} -size: {S|M|L|XL} -requirements: [REQ-{NNN}] -architecture: [ADR-{NNN}] -dependencies: [EPIC-{NNN}] -status: draft ---- - -# EPIC-{NNN}: {epic_title} - -**Priority**: {Must|Should|Could} -**MVP**: {Yes|No} -**Estimated 
Size**: {S|M|L|XL} - -## Description - -{detailed epic description} - -## Requirements - -- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title} -- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title} - -## Architecture - -- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title} -- Component: {component_name} - -## Dependencies - -- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason} -- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason} - -## Stories - -### STORY-{EPIC}-001: {story_title} - -**User Story**: As a {persona}, I want to {action} so that {benefit}. - -**Acceptance Criteria**: -- [ ] {criterion 1} -- [ ] {criterion 2} -- [ ] {criterion 3} - -**Size**: {S|M|L|XL} -**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md) - ---- - -### STORY-{EPIC}-002: {story_title} - -**User Story**: As a {persona}, I want to {action} so that {benefit}. - -**Acceptance Criteria**: -- [ ] {criterion 1} -- [ ] {criterion 2} - -**Size**: {S|M|L|XL} -**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md) -``` - ---- - -## Variable Descriptions - -| Variable | Source | Description | -|----------|--------|-------------| -| `{session_id}` | spec-config.json | Session identifier | -| `{timestamp}` | Runtime | ISO8601 generation timestamp | -| `{product_name}` | product-brief.md | Product/feature name | -| `{EPIC}` | Auto-increment | Epic number (3 digits) | -| `{NNN}` | Auto-increment | Story/requirement number | -| `{slug}` | Auto-generated | Kebab-case from epic/story title | -| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate | diff --git a/.claude/skills/team-lifecycle-v2/templates/product-brief.md b/.claude/skills/team-lifecycle-v2/templates/product-brief.md deleted file mode 100644 index ffbdf437..00000000 --- a/.claude/skills/team-lifecycle-v2/templates/product-brief.md +++ /dev/null @@ -1,133 +0,0 @@ -# Product Brief Template - -Template for generating product brief documents in Phase 2. 
- -## Usage Context - -| Phase | Usage | -|-------|-------| -| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis | -| Output Location | `{workDir}/product-brief.md` | - ---- - -## Template - -```markdown ---- -session_id: {session_id} -phase: 2 -document_type: product-brief -status: draft -generated_at: {timestamp} -stepsCompleted: [] -version: 1 -dependencies: - - spec-config.json ---- - -# Product Brief: {product_name} - -{executive_summary - 2-3 sentences capturing the essence of the product/feature} - -## Vision - -{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like} - -## Problem Statement - -### Current Situation -{description of the current state and pain points} - -### Impact -{quantified impact of the problem - who is affected, how much, how often} - -## Target Users - -{for each user persona:} - -### {Persona Name} -- **Role**: {user's role/context} -- **Needs**: {primary needs related to this product} -- **Pain Points**: {current frustrations} -- **Success Criteria**: {what success looks like for this user} - -## Goals & Success Metrics - -| Goal ID | Goal | Success Metric | Target | -|---------|------|----------------|--------| -| G-001 | {goal description} | {measurable metric} | {specific target} | -| G-002 | {goal description} | {measurable metric} | {specific target} | - -## Scope - -### In Scope -- {feature/capability 1} -- {feature/capability 2} -- {feature/capability 3} - -### Out of Scope -- {explicitly excluded item 1} -- {explicitly excluded item 2} - -### Assumptions -- {key assumption 1} -- {key assumption 2} - -## Competitive Landscape - -| Aspect | Current State | Proposed Solution | Advantage | -|--------|--------------|-------------------|-----------| -| {aspect} | {how it's done now} | {our approach} | {differentiator} | - -## Constraints & Dependencies - -### Technical Constraints -- {constraint 1} -- {constraint 2} - -### Business Constraints -- {constraint 1} - -### 
Dependencies -- {external dependency 1} -- {external dependency 2} - -## Multi-Perspective Synthesis - -### Product Perspective -{summary of product/market analysis findings} - -### Technical Perspective -{summary of technical feasibility and constraints} - -### User Perspective -{summary of user journey and UX considerations} - -### Convergent Themes -{themes where all perspectives agree} - -### Conflicting Views -{areas where perspectives differ, with notes on resolution approach} - -## Open Questions - -- [ ] {unresolved question 1} -- [ ] {unresolved question 2} - -## References - -- Derived from: [spec-config.json](spec-config.json) -- Next: [Requirements PRD](requirements.md) -``` - -## Variable Descriptions - -| Variable | Source | Description | -|----------|--------|-------------| -| `{session_id}` | spec-config.json | Session identifier | -| `{timestamp}` | Runtime | ISO8601 generation timestamp | -| `{product_name}` | Seed analysis | Product/feature name | -| `{executive_summary}` | CLI synthesis | 2-3 sentence summary | -| `{vision_statement}` | CLI product perspective | Aspirational vision | -| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis | diff --git a/.claude/skills/team-lifecycle-v2/templates/requirements-prd.md b/.claude/skills/team-lifecycle-v2/templates/requirements-prd.md deleted file mode 100644 index 0b1dbf28..00000000 --- a/.claude/skills/team-lifecycle-v2/templates/requirements-prd.md +++ /dev/null @@ -1,224 +0,0 @@ -# Requirements PRD Template (Directory Structure) - -Template for generating Product Requirements Document as a directory of individual requirement files in Phase 3. 
- -## Usage Context - -| Phase | Usage | -|-------|-------| -| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion | -| Output Location | `{workDir}/requirements/` | - -## Output Structure - -``` -{workDir}/requirements/ -├── _index.md # Summary + MoSCoW table + traceability matrix + links -├── REQ-001-{slug}.md # Individual functional requirement -├── REQ-002-{slug}.md -├── NFR-P-001-{slug}.md # Non-functional: Performance -├── NFR-S-001-{slug}.md # Non-functional: Security -├── NFR-SC-001-{slug}.md # Non-functional: Scalability -├── NFR-U-001-{slug}.md # Non-functional: Usability -└── ... -``` - ---- - -## Template: _index.md - -```markdown ---- -session_id: {session_id} -phase: 3 -document_type: requirements-index -status: draft -generated_at: {timestamp} -version: 1 -dependencies: - - ../spec-config.json - - ../product-brief.md ---- - -# Requirements: {product_name} - -{executive_summary - brief overview of what this PRD covers and key decisions} - -## Requirement Summary - -| Priority | Count | Coverage | -|----------|-------|----------| -| Must Have | {n} | {description of must-have scope} | -| Should Have | {n} | {description of should-have scope} | -| Could Have | {n} | {description of could-have scope} | -| Won't Have | {n} | {description of explicitly excluded} | - -## Functional Requirements - -| ID | Title | Priority | Traces To | -|----|-------|----------|-----------| -| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) | -| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) | -| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) | - -## Non-Functional Requirements - -### Performance - -| ID | Title | Target | -|----|-------|--------| -| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} | - -### Security - -| ID | Title | Standard | -|----|-------|----------| 
-| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} | - -### Scalability - -| ID | Title | Target | -|----|-------|--------| -| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} | - -### Usability - -| ID | Title | Target | -|----|-------|--------| -| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} | - -## Data Requirements - -### Data Entities - -| Entity | Description | Key Attributes | -|--------|-------------|----------------| -| {entity_name} | {description} | {attr1, attr2, attr3} | - -### Data Flows - -{description of key data flows, optionally with Mermaid diagram} - -## Integration Requirements - -| System | Direction | Protocol | Data Format | Notes | -|--------|-----------|----------|-------------|-------| -| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} | - -## Constraints & Assumptions - -### Constraints -- {technical or business constraint 1} -- {technical or business constraint 2} - -### Assumptions -- {assumption 1 - must be validated} -- {assumption 2 - must be validated} - -## Priority Rationale - -{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries} - -## Traceability Matrix - -| Goal | Requirements | -|------|-------------| -| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) | -| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) | - -## Open Questions - -- [ ] {unresolved question 1} -- [ ] {unresolved question 2} - -## References - -- Derived from: [Product Brief](../product-brief.md) -- Next: [Architecture](../architecture/_index.md) -``` - ---- - -## Template: REQ-NNN-{slug}.md (Individual Functional Requirement) - -```markdown ---- -id: REQ-{NNN} -type: functional -priority: {Must|Should|Could|Won't} -traces_to: [G-{NNN}] -status: draft ---- - -# REQ-{NNN}: {requirement_title} - -**Priority**: {Must|Should|Could|Won't} - -## Description - -{detailed 
requirement description} - -## User Story - -As a {persona}, I want to {action} so that {benefit}. - -## Acceptance Criteria - -- [ ] {specific, testable criterion 1} -- [ ] {specific, testable criterion 2} -- [ ] {specific, testable criterion 3} - -## Traces - -- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics) -- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable) -- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5) -``` - ---- - -## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement) - -```markdown ---- -id: NFR-{type}-{NNN} -type: non-functional -category: {Performance|Security|Scalability|Usability} -priority: {Must|Should|Could} -status: draft ---- - -# NFR-{type}-{NNN}: {requirement_title} - -**Category**: {Performance|Security|Scalability|Usability} -**Priority**: {Must|Should|Could} - -## Requirement - -{detailed requirement description} - -## Metric & Target - -| Metric | Target | Measurement Method | -|--------|--------|--------------------| -| {metric} | {target value} | {how measured} | - -## Traces - -- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics) -- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable) -``` - ---- - -## Variable Descriptions - -| Variable | Source | Description | -|----------|--------|-------------| -| `{session_id}` | spec-config.json | Session identifier | -| `{timestamp}` | Runtime | ISO8601 generation timestamp | -| `{product_name}` | product-brief.md | Product/feature name | -| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) | -| `{slug}` | Auto-generated | Kebab-case from requirement title | -| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) | -| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag | diff --git a/.claude/skills/team-lifecycle/SKILL.md b/.claude/skills/team-lifecycle/SKILL.md deleted file 
mode 100644 index 2807dfc2..00000000 --- a/.claude/skills/team-lifecycle/SKILL.md +++ /dev/null @@ -1,410 +0,0 @@ ---- -name: team-lifecycle -description: Unified team skill for full lifecycle - spec/impl/test. All roles invoke this skill with --role arg for role-specific execution. -allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*) ---- - -# Team Lifecycle - -Unified team skill covering specification, implementation, testing, and review. All team members invoke this skill with `--role=xxx` to route to role-specific execution. - -## Architecture Overview - -``` -┌─────────────────────────────────────────────────┐ -│ Skill(skill="team-lifecycle", args="--role=xxx") │ -└───────────────────┬─────────────────────────────┘ - │ Role Router - ┌───────┬───────┼───────┬───────┬───────┬───────┬───────┐ - ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ -┌─────────┐┌───────┐┌──────┐┌──────────┐┌───────┐┌────────┐┌──────┐┌────────┐ -│coordinator││analyst││writer││discussant││planner││executor││tester││reviewer│ -│ roles/ ││roles/ ││roles/││ roles/ ││roles/ ││ roles/ ││roles/││ roles/ │ -└─────────┘└───────┘└──────┘└──────────┘└───────┘└────────┘└──────┘└────────┘ -``` - -## Role Router - -### Input Parsing - -Parse `$ARGUMENTS` to extract `--role`: - -```javascript -const args = "$ARGUMENTS" -const roleMatch = args.match(/--role[=\s]+(\w+)/) - -if (!roleMatch) { - throw new Error("Missing --role argument. 
Available roles: coordinator, analyst, writer, discussant, planner, executor, tester, reviewer") -} - -const role = roleMatch[1] -const teamName = args.match(/--team[=\s]+([\w-]+)/)?.[1] || "lifecycle" -``` - -### Role Dispatch - -```javascript -const VALID_ROLES = { - "coordinator": { file: "roles/coordinator.md", prefix: null }, - "analyst": { file: "roles/analyst.md", prefix: "RESEARCH" }, - "writer": { file: "roles/writer.md", prefix: "DRAFT" }, - "discussant": { file: "roles/discussant.md", prefix: "DISCUSS" }, - "planner": { file: "roles/planner.md", prefix: "PLAN" }, - "executor": { file: "roles/executor.md", prefix: "IMPL" }, - "tester": { file: "roles/tester.md", prefix: "TEST" }, - "reviewer": { file: "roles/reviewer.md", prefix: ["REVIEW", "QUALITY"] } -} - -if (!VALID_ROLES[role]) { - throw new Error(`Unknown role: ${role}. Available: ${Object.keys(VALID_ROLES).join(', ')}`) -} - -// Read and execute role-specific logic -Read(VALID_ROLES[role].file) -// → Execute the 5-phase process defined in that file -``` - -### Available Roles - -| Role | Task Prefix | Responsibility | Role File | -|------|-------------|----------------|-----------| -| `coordinator` | N/A | Pipeline orchestration, requirement clarification, task dispatch | [roles/coordinator.md](roles/coordinator.md) | -| `analyst` | RESEARCH-* | Seed analysis, codebase exploration, context gathering | [roles/analyst.md](roles/analyst.md) | -| `writer` | DRAFT-* | Product Brief / PRD / Architecture / Epics generation | [roles/writer.md](roles/writer.md) | -| `discussant` | DISCUSS-* | Multi-perspective critique, consensus building | [roles/discussant.md](roles/discussant.md) | -| `planner` | PLAN-* | Multi-angle exploration, structured planning | [roles/planner.md](roles/planner.md) | -| `executor` | IMPL-* | Code implementation following plans | [roles/executor.md](roles/executor.md) | -| `tester` | TEST-* | Adaptive test-fix cycles, quality gates | [roles/tester.md](roles/tester.md) | -| 
`reviewer` | `REVIEW-*` + `QUALITY-*` | Code review + Spec quality validation (auto-switch by prefix) | [roles/reviewer.md](roles/reviewer.md) | - -## Shared Infrastructure - -### Message Bus (All Roles) - -Every SendMessage **before**, must call `mcp__ccw-tools__team_msg` to log: - -```javascript -mcp__ccw-tools__team_msg({ - operation: "log", - team: teamName, - from: role, - to: "coordinator", - type: "", - summary: "", - ref: "" -}) -``` - -**Message types by role**: - -| Role | Types | -|------|-------| -| coordinator | `plan_approved`, `plan_revision`, `task_unblocked`, `fix_required`, `error`, `shutdown` | -| analyst | `research_ready`, `research_progress`, `error` | -| writer | `draft_ready`, `draft_revision`, `impl_progress`, `error` | -| discussant | `discussion_ready`, `discussion_blocked`, `impl_progress`, `error` | -| planner | `plan_ready`, `plan_revision`, `impl_progress`, `error` | -| executor | `impl_complete`, `impl_progress`, `error` | -| tester | `test_result`, `impl_progress`, `fix_required`, `error` | -| reviewer | `review_result`, `quality_result`, `fix_required`, `error` | - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "${role}" --to "coordinator" --type "" --summary "" --json`) -Bash(`ccw team list --team "${teamName}" --last 10 --json`) -Bash(`ccw team status --team "${teamName}" --json`) -``` - -### Task Lifecycle (All Worker Roles) - -```javascript -// Standard task lifecycle every worker role follows -// Phase 1: Discovery -const tasks = TaskList() -const prefixes = Array.isArray(VALID_ROLES[role].prefix) ? 
VALID_ROLES[role].prefix : [VALID_ROLES[role].prefix]
-const myTasks = tasks.filter(t =>
-  prefixes.some(p => t.subject.startsWith(`${p}-`)) &&
-  t.owner === role &&
-  t.status === 'pending' &&
-  t.blockedBy.length === 0
-)
-if (myTasks.length === 0) return // idle
-const task = TaskGet({ taskId: myTasks[0].id })
-TaskUpdate({ taskId: task.id, status: 'in_progress' })
-
-// Phase 1.5: Resume Artifact Check (avoid duplicate output)
-// When a session resumes from a pause, the coordinator has already reset in_progress tasks to pending.
-// Before starting work, the worker MUST check whether the task's output artifact already exists.
-// If the artifact exists and is complete:
-//   → skip directly to Phase 5 and report completion (avoid overwriting earlier results)
-// If the artifact exists but is incomplete (e.g. an empty file or missing key sections):
-//   → run Phases 2-4 normally (continue from the existing artifact rather than starting over)
-// If the artifact does not exist:
-//   → run Phases 2-4 normally
-//
-// Each role checks its own output path:
-// analyst → sessionFolder/spec/discovery-context.json
-// writer → sessionFolder/spec/{product-brief.md | requirements/ | architecture/ | epics/}
-// discussant → sessionFolder/discussions/discuss-NNN-*.md
-// planner → sessionFolder/plan/plan.json
-// executor → git diff (committed code changes)
-// tester → test pass rate
-// reviewer → sessionFolder/spec/readiness-report.md (quality) or review findings (code)
-
-// Phase 2-4: Role-specific (see roles/{role}.md)
-
-// Phase 5: Report + Loop
-mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: role, to: "coordinator", type: "...", summary: "..." })
-SendMessage({ type: "message", recipient: "coordinator", content: "...", summary: "..." 
})
-TaskUpdate({ taskId: task.id, status: 'completed' })
-// Check for next task → back to Phase 1
-```
-
-## Three-Mode Pipeline
-
-```
-Spec-only:
-  RESEARCH-001 → DISCUSS-001 → DRAFT-001 → DISCUSS-002
-  → DRAFT-002 → DISCUSS-003 → DRAFT-003 → DISCUSS-004
-  → DRAFT-004 → DISCUSS-005 → QUALITY-001 → DISCUSS-006
-
-Impl-only:
-  PLAN-001 → IMPL-001 → TEST-001 + REVIEW-001
-
-Full-lifecycle:
-  [Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006) → IMPL-001 → TEST-001 + REVIEW-001
-```
-
-## Unified Session Directory
-
-All session artifacts are stored under a single session folder:
-
-```
-.workflow/.team/TLS-{slug}-{YYYY-MM-DD}/
-├── team-session.json        # Session state (status, progress, completed_tasks)
-├── spec/                    # Spec artifacts (analyst, writer, reviewer output)
-│   ├── spec-config.json
-│   ├── discovery-context.json
-│   ├── product-brief.md
-│   ├── requirements/        # _index.md + REQ-*.md + NFR-*.md
-│   ├── architecture/        # _index.md + ADR-*.md
-│   ├── epics/               # _index.md + EPIC-*.md
-│   ├── readiness-report.md
-│   └── spec-summary.md
-├── discussions/             # Discussion records (discussant output)
-│   └── discuss-001..006.md
-└── plan/                    # Plan artifacts (planner output)
-    ├── exploration-{angle}.json
-    ├── explorations-manifest.json
-    ├── plan.json
-    └── .task/
-        └── TASK-*.json
-```
-
-Messages remain at `.workflow/.team-msg/{team-name}/` (unchanged).
-
-## Session Resume
-
-Coordinator supports `--resume` / `--continue` flags to resume interrupted sessions:
-
-1. Scans `.workflow/.team/TLS-*/team-session.json` for `status: "active"` or `"paused"`
-2. Multiple matches → `AskUserQuestion` for user selection
-3. **Audit TaskList** — fetch the true status of every current task
-4. **Reconcile** — two-way sync between session.completed_tasks and TaskList state:
-   - completed in session but not marked in TaskList → correct the TaskList entry to completed
-   - completed in TaskList but not recorded in session → backfill into the session
-   - in_progress (interrupted by a pause) → reset to pending
-5. Determines remaining pipeline from reconciled state
-6. Rebuilds team (`TeamCreate` + worker spawns for needed roles only)
-7. 
Creates missing tasks with correct `blockedBy` dependency chain (uses `TASK_METADATA` lookup)
-8. Verifies dependency chain integrity for existing tasks
-9. Updates session file with reconciled state + current_phase
-10. **Kick** — sends a `task_unblocked` message to the worker owning the first actionable task, breaking the resume deadlock
-11. Jumps to Phase 4 coordination loop
-
-## Coordinator Spawn Template
-
-When coordinator creates teammates:
-
-```javascript
-TeamCreate({ team_name: teamName })
-
-// Analyst (spec-only / full)
-Task({
-  subagent_type: "general-purpose",
-  description: `Spawn analyst worker`,
-  team_name: teamName,
-  name: "analyst",
-  prompt: `You are the ANALYST of team "${teamName}".
-When you receive a RESEARCH-* task, execute it with Skill(skill="team-lifecycle", args="--role=analyst").
-Current requirement: ${taskDescription}
-Constraints: ${constraints}
-Session: ${sessionFolder}
-
-## Message Bus (mandatory)
-Before every SendMessage, first call mcp__ccw-tools__team_msg to log the message.
-
-Workflow:
-1. TaskList → find the RESEARCH-* task
-2. Execute with Skill(skill="team-lifecycle", args="--role=analyst")
-3. team_msg log + SendMessage the result to coordinator
-4. TaskUpdate completed → check for the next task`
-})
-
-// Writer (spec-only / full)
-Task({
-  subagent_type: "general-purpose",
-  description: `Spawn writer worker`,
-  team_name: teamName,
-  name: "writer",
-  prompt: `You are the WRITER of team "${teamName}".
-When you receive a DRAFT-* task, execute it with Skill(skill="team-lifecycle", args="--role=writer").
-Current requirement: ${taskDescription}
-Session: ${sessionFolder}
-
-## Message Bus (mandatory)
-Before every SendMessage, first call mcp__ccw-tools__team_msg to log the message.
-
-Workflow:
-1. TaskList → find the DRAFT-* task
-2. Execute with Skill(skill="team-lifecycle", args="--role=writer")
-3. team_msg log + SendMessage the result to coordinator
-4. 
TaskUpdate completed → check for the next task`
-})
-
-// Discussant (spec-only / full)
-Task({
-  subagent_type: "general-purpose",
-  description: `Spawn discussant worker`,
-  team_name: teamName,
-  name: "discussant",
-  prompt: `You are the DISCUSSANT of team "${teamName}".
-When you receive a DISCUSS-* task, execute it with Skill(skill="team-lifecycle", args="--role=discussant").
-Current requirement: ${taskDescription}
-Session: ${sessionFolder}
-Discussion depth: ${discussionDepth}
-
-## Message Bus (mandatory)
-Before every SendMessage, first call mcp__ccw-tools__team_msg to log the message.
-
-Workflow:
-1. TaskList → find the DISCUSS-* task
-2. Execute with Skill(skill="team-lifecycle", args="--role=discussant")
-3. team_msg log + SendMessage the result to coordinator
-4. TaskUpdate completed → check for the next task`
-})
-
-// Planner (impl-only / full)
-Task({
-  subagent_type: "general-purpose",
-  description: `Spawn planner worker`,
-  team_name: teamName,
-  name: "planner",
-  prompt: `You are the PLANNER of team "${teamName}".
-When you receive a PLAN-* task, execute it with Skill(skill="team-lifecycle", args="--role=planner").
-Current requirement: ${taskDescription}
-Constraints: ${constraints}
-Session: ${sessionFolder}
-
-## Message Bus (mandatory)
-Before every SendMessage, first call mcp__ccw-tools__team_msg to log the message.
-
-Workflow:
-1. TaskList → find the PLAN-* task
-2. Execute with Skill(skill="team-lifecycle", args="--role=planner")
-3. team_msg log + SendMessage the result to coordinator
-4. TaskUpdate completed → check for the next task`
-})
-
-// Executor (impl-only / full)
-Task({
-  subagent_type: "general-purpose",
-  description: `Spawn executor worker`,
-  team_name: teamName,
-  name: "executor",
-  prompt: `You are the EXECUTOR of team "${teamName}".
-When you receive an IMPL-* task, execute it with Skill(skill="team-lifecycle", args="--role=executor").
-Current requirement: ${taskDescription}
-Constraints: ${constraints}
-
-## Message Bus (mandatory)
-Before every SendMessage, first call mcp__ccw-tools__team_msg to log the message.
-
-Workflow:
-1. TaskList → find the IMPL-* task
-2. Execute with Skill(skill="team-lifecycle", args="--role=executor")
-3. team_msg log + SendMessage the result to coordinator
-4. 
TaskUpdate completed → check for the next task`
-})
-
-// Tester (impl-only / full)
-Task({
-  subagent_type: "general-purpose",
-  description: `Spawn tester worker`,
-  team_name: teamName,
-  name: "tester",
-  prompt: `You are the TESTER of team "${teamName}".
-When you receive a TEST-* task, execute it with Skill(skill="team-lifecycle", args="--role=tester").
-Current requirement: ${taskDescription}
-Constraints: ${constraints}
-
-## Message Bus (mandatory)
-Before every SendMessage, first call mcp__ccw-tools__team_msg to log the message.
-
-Workflow:
-1. TaskList → find the TEST-* task
-2. Execute with Skill(skill="team-lifecycle", args="--role=tester")
-3. team_msg log + SendMessage the result to coordinator
-4. TaskUpdate completed → check for the next task`
-})
-
-// Reviewer (all modes)
-Task({
-  subagent_type: "general-purpose",
-  description: `Spawn reviewer worker`,
-  team_name: teamName,
-  name: "reviewer",
-  prompt: `You are the REVIEWER of team "${teamName}".
-When you receive a REVIEW-* or QUALITY-* task, execute it with Skill(skill="team-lifecycle", args="--role=reviewer").
-- REVIEW-* → code review logic
-- QUALITY-* → spec quality check logic
-Current requirement: ${taskDescription}
-
-## Message Bus (mandatory)
-Before every SendMessage, first call mcp__ccw-tools__team_msg to log the message.
-
-Workflow:
-1. TaskList → find the REVIEW-* or QUALITY-* task
-2. Execute with Skill(skill="team-lifecycle", args="--role=reviewer")
-3. team_msg log + SendMessage the result to coordinator
-4. 
TaskUpdate completed → check for the next task`
-})
-```
-
-## Shared Spec Resources
-
-In spec mode, the Writer and Reviewer roles use the standards and templates bundled with this skill (copied from spec-generator, maintained independently):
-
-| Resource | Path | Usage |
-|----------|------|-------|
-| Document Standards | `specs/document-standards.md` | YAML frontmatter, naming conventions, content structure |
-| Quality Gates | `specs/quality-gates.md` | Per-phase quality gates, scoring rubrics |
-| Product Brief Template | `templates/product-brief.md` | DRAFT-001 document generation |
-| Requirements Template | `templates/requirements-prd.md` | DRAFT-002 document generation |
-| Architecture Template | `templates/architecture-doc.md` | DRAFT-003 document generation |
-| Epics Template | `templates/epics-template.md` | DRAFT-004 document generation |
-
-> Before executing each DRAFT-* task, the Writer **must first Read** the corresponding template file and document-standards.md.
-> When referenced from the `roles/` subdirectory, the paths are `../specs/` and `../templates/`.
-
-## Error Handling
-
-| Scenario | Resolution |
-|----------|------------|
-| Unknown --role value | Error with available role list |
-| Missing --role arg | Error with usage hint |
-| Role file not found | Error with expected path |
-| Task prefix conflict | Log warning, proceed |
diff --git a/.claude/skills/team-lifecycle/roles/analyst.md b/.claude/skills/team-lifecycle/roles/analyst.md
deleted file mode 100644
index ebc6b1ed..00000000
--- a/.claude/skills/team-lifecycle/roles/analyst.md
+++ /dev/null
@@ -1,215 +0,0 @@
-# Role: analyst
-
-Seed analysis, codebase exploration, and multi-dimensional context gathering. Maps to spec-generator Phase 1 (Discovery).
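
The prefix-based task discovery that the analyst (and every other worker role) performs can be sketched as a small standalone function. The task shape `{ subject, owner, status, blockedBy }` mirrors the snippets in this skill; the function name itself is illustrative, not part of the skill API:

```javascript
// Illustrative sketch of the worker task-discovery filter used by every role.
// The task shape { subject, owner, status, blockedBy } mirrors the snippets
// in this skill; the function name is an assumption, not part of the skill API.
function findActionableTasks(tasks, role, prefix) {
  const prefixes = Array.isArray(prefix) ? prefix : [prefix]
  return tasks.filter(t =>
    prefixes.some(p => t.subject.startsWith(`${p}-`)) && // matches e.g. "RESEARCH-001: ..."
    t.owner === role &&
    t.status === 'pending' &&
    t.blockedBy.length === 0                             // only unblocked tasks are actionable
  )
}
```

The reviewer role passes an array prefix (`['REVIEW', 'QUALITY']`), which is why the `Array.isArray` normalization is needed.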
- -## Role Identity - -- **Name**: `analyst` -- **Task Prefix**: `RESEARCH-*` -- **Responsibility**: Seed Analysis → Codebase Exploration → Context Packaging → Report -- **Communication**: SendMessage to coordinator only - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `research_ready` | analyst → coordinator | Research complete | With discovery-context.json path and dimension summary | -| `research_progress` | analyst → coordinator | Long research progress | Intermediate progress update | -| `error` | analyst → coordinator | Unrecoverable error | Codebase access failure, CLI timeout, etc. | - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Research complete -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "analyst", to: "coordinator", type: "research_ready", summary: "Research done: 5 exploration dimensions", ref: `${sessionFolder}/spec/discovery-context.json` }) - -// Error report -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "analyst", to: "coordinator", type: "error", summary: "Codebase access failed" }) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "analyst" --to "coordinator" --type "research_ready" --summary "Research done" --ref "${sessionFolder}/discovery-context.json" --json`) -``` - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('RESEARCH-') && - t.owner === 'analyst' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Seed Analysis - -```javascript -// Extract session folder from task description 
-const sessionMatch = task.description.match(/Session:\s*(.+)/)
-const sessionFolder = sessionMatch ? sessionMatch[1].trim() : '.workflow/.team/default'
-
-// Parse topic from task description
-const topicLines = task.description.split('\n').filter(l => !l.startsWith('Session:') && !l.startsWith('Output:') && l.trim())
-const rawTopic = topicLines[0] || task.subject.replace('RESEARCH-001: ', '')
-
-// Support file-reference input (consistent with spec-generator Phase 1)
-const topic = (rawTopic.startsWith('@') || rawTopic.endsWith('.md') || rawTopic.endsWith('.txt'))
-  ? Read(rawTopic.replace(/^@/, ''))
-  : rawTopic
-
-// Use Gemini CLI for seed analysis
-Bash({
-  command: `ccw cli -p "PURPOSE: Analyze the following topic/idea and extract structured seed information for specification generation.
-TASK:
-• Extract problem statement (what problem does this solve)
-• Identify target users and their pain points
-• Determine domain and industry context
-• List constraints and assumptions
-• Identify 3-5 exploration dimensions for deeper research
-• Assess complexity (simple/moderate/complex)
-
-TOPIC: ${topic}
-
-MODE: analysis
-CONTEXT: @**/*
-EXPECTED: JSON output with fields: problem_statement, target_users[], domain, constraints[], exploration_dimensions[], complexity_assessment
-CONSTRAINTS: Output as valid JSON" --tool gemini --mode analysis --rule analysis-analyze-technical-document`,
-  run_in_background: true
-})
-// Wait for CLI result, then parse
-```
-
-### Phase 3: Codebase Exploration (conditional)
-
-```javascript
-// Check if there's an existing codebase to explore
-const hasProject = Bash(`test -f package.json || test -f Cargo.toml || test -f pyproject.toml || test -f go.mod; echo $?`)
-
-if (hasProject === '0') {
-  mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "analyst", to: "coordinator", type: "research_progress", summary: "Seed analysis complete, starting codebase exploration" })
-
-  // Explore codebase using ACE search
-  const archSearch = mcp__ace-tool__search_context({
-    project_root_path: 
projectRoot,
-    query: `Architecture patterns, main modules, entry points for: ${topic}`
-  })
-
-  // Detect tech stack from package files
-  // Explore existing patterns and integration points
-
-  var codebaseContext = { tech_stack, architecture_patterns, existing_conventions, integration_points, constraints_from_codebase: [] }
-} else {
-  var codebaseContext = null
-}
-```
-
-### Phase 4: Context Packaging
-
-```javascript
-// Generate spec-config.json
-const specConfig = {
-  session_id: `SPEC-${topicSlug}-${dateStr}`,
-  topic: topic,
-  status: "research_complete",
-  complexity: seedAnalysis.complexity_assessment || "moderate",
-  depth: task.description.match(/Discussion depth:\s*(.+)/)?.[1] || "standard",
-  focus_areas: seedAnalysis.exploration_dimensions || [],
-  mode: "interactive", // team mode is always interactive
-  phases_completed: ["discovery"],
-  created_at: new Date().toISOString(),
-  session_folder: sessionFolder,
-  discussion_depth: task.description.match(/Discussion depth:\s*(.+)/)?.[1] || "standard"
-}
-Write(`${sessionFolder}/spec/spec-config.json`, JSON.stringify(specConfig, null, 2))
-
-// Generate discovery-context.json
-const discoveryContext = {
-  session_id: specConfig.session_id,
-  phase: 1,
-  document_type: "discovery-context",
-  status: "complete",
-  generated_at: new Date().toISOString(),
-  seed_analysis: {
-    problem_statement: seedAnalysis.problem_statement,
-    target_users: seedAnalysis.target_users,
-    domain: seedAnalysis.domain,
-    constraints: seedAnalysis.constraints,
-    exploration_dimensions: seedAnalysis.exploration_dimensions,
-    complexity: seedAnalysis.complexity_assessment
-  },
-  codebase_context: codebaseContext,
-  recommendations: { focus_areas: [], risks: [], open_questions: [] }
-}
-Write(`${sessionFolder}/spec/discovery-context.json`, JSON.stringify(discoveryContext, null, 2))
-```
-
-### Phase 5: Report to Coordinator
-
-```javascript
-const dimensionCount = discoveryContext.seed_analysis.exploration_dimensions?.length || 0
-const hasCodebase = codebaseContext !== null 
-
-mcp__ccw-tools__team_msg({
-  operation: "log", team: teamName,
-  from: "analyst", to: "coordinator",
-  type: "research_ready",
-  summary: `Research complete: ${dimensionCount} exploration dimensions, codebase context ${hasCodebase ? 'present' : 'absent'}, complexity=${specConfig.complexity}`,
-  ref: `${sessionFolder}/spec/discovery-context.json`
-})
-
-SendMessage({
-  type: "message",
-  recipient: "coordinator",
-  content: `## Research Analysis Results
-
-**Task**: ${task.subject}
-**Complexity**: ${specConfig.complexity}
-**Codebase**: ${hasCodebase ? 'existing project detected' : 'greenfield project'}
-
-### Problem Statement
-${discoveryContext.seed_analysis.problem_statement}
-
-### Target Users
-${(discoveryContext.seed_analysis.target_users || []).map(u => '- ' + u).join('\n')}
-
-### Exploration Dimensions
-${(discoveryContext.seed_analysis.exploration_dimensions || []).map((d, i) => (i+1) + '. ' + d).join('\n')}
-
-### Output Locations
-- Config: ${sessionFolder}/spec/spec-config.json
-- Context: ${sessionFolder}/spec/discovery-context.json
-
-Research is ready; discussion round DISCUSS-001 can begin.`,
-  summary: `Research ready: ${dimensionCount} dimensions, ${specConfig.complexity}`
-})
-
-TaskUpdate({ taskId: task.id, status: 'completed' })
-
-// Check for next RESEARCH task → back to Phase 1
-```
-
-## Error Handling
-
-| Scenario | Resolution |
-|----------|------------|
-| No RESEARCH-* tasks available | Idle, wait for coordinator assignment |
-| Gemini CLI analysis failure | Fallback to direct Claude analysis without CLI |
-| Codebase detection failed | Continue as new project (no codebase context) |
-| Session folder cannot be created | Notify coordinator, request alternative path |
-| Topic too vague for analysis | Report to coordinator with clarification questions |
-| Unexpected error | Log error via team_msg, report to coordinator |
diff --git a/.claude/skills/team-lifecycle/roles/coordinator.md b/.claude/skills/team-lifecycle/roles/coordinator.md
deleted file mode 100644
index 1fb64d9a..00000000
--- a/.claude/skills/team-lifecycle/roles/coordinator.md
+++ /dev/null
@@ -1,925 +0,0 @@
-# Role: coordinator
-
-Team lifecycle coordinator. 
Orchestrates the full pipeline across three modes: spec-only, impl-only, and full-lifecycle. Handles requirement clarification, team creation, task chain management, cross-phase coordination, and result reporting.
-
-## Role Identity
-
-- **Name**: `coordinator`
-- **Task Prefix**: N/A (coordinator creates tasks, doesn't receive them)
-- **Responsibility**: Orchestration
-- **Communication**: SendMessage to all teammates
-
-## Message Types
-
-| Type | Direction | Trigger | Description |
-|------|-----------|---------|-------------|
-| `plan_approved` | coordinator → planner | Plan reviewed and accepted | Planner can mark task completed |
-| `plan_revision` | coordinator → planner | Plan needs changes | Feedback with required changes |
-| `task_unblocked` | coordinator → any | Dependency resolved | Notify worker of available task |
-| `fix_required` | coordinator → executor/writer | Review/Quality found issues | Create fix task |
-| `error` | coordinator → all | Critical system error | Escalation to user |
-| `shutdown` | coordinator → all | Team being dissolved | Clean shutdown signal |
-
-## Execution
-
-### Phase 0: Session Resume Check
-
-Before any new session setup, check if resuming an existing session:
-
-```javascript
-const args = "$ARGUMENTS"
-const isResume = /--resume|--continue/.test(args)
-
-if (isResume) {
-  // Scan for active/paused sessions
-  const sessionDirs = Glob({ pattern: '.workflow/.team/TLS-*/team-session.json' })
-  const resumable = sessionDirs.map(f => {
-    try {
-      const session = JSON.parse(Read(f))
-      if (session.status === 'active' || session.status === 'paused') return session
-    } catch {}
-    return null
-  }).filter(Boolean)
-
-  if (resumable.length === 0) {
-    // No resumable sessions → fall through to Phase 1
-  } else if (resumable.length === 1) {
-    var resumedSession = resumable[0]
-  } else {
-    // Multiple matches → user selects
-    AskUserQuestion({
-      questions: [{
-        question: "Multiple resumable sessions detected. Select one:",
-        header: "Resume",
-        multiSelect: 
false,
-        options: resumable.slice(0, 4).map(s => ({
-          label: s.session_id,
-          description: `${s.topic} (${s.current_phase}, ${s.status})`
-        }))
-      }]
-    })
-    var resumedSession = resumable.find(s => s.session_id === userChoice)
-  }
-
-  if (resumedSession) {
-    // Restore session state
-    const teamName = resumedSession.team_name
-    const mode = resumedSession.mode
-    const sessionFolder = `.workflow/.team/${resumedSession.session_id}`
-    const taskDescription = resumedSession.topic
-    const executionMethod = resumedSession.user_preferences?.execution_method || 'Auto'
-    const codeReviewTool = resumedSession.user_preferences?.code_review || 'Skip'
-
-    // ============================================================
-    // Pipeline Constants
-    // ============================================================
-    const SPEC_CHAIN = [
-      'RESEARCH-001', 'DISCUSS-001', 'DRAFT-001', 'DISCUSS-002',
-      'DRAFT-002', 'DISCUSS-003', 'DRAFT-003', 'DISCUSS-004',
-      'DRAFT-004', 'DISCUSS-005', 'QUALITY-001', 'DISCUSS-006'
-    ]
-    const IMPL_CHAIN = ['PLAN-001', 'IMPL-001', 'TEST-001', 'REVIEW-001']
-
-    // Task metadata: prefix → { subject, owner, description template, activeForm }
-    const TASK_METADATA = {
-      'RESEARCH-001': { owner: 'analyst', subject: 'RESEARCH-001: Topic discovery and context research', activeForm: 'Researching',
-        desc: () => `${taskDescription}\n\nSession: ${sessionFolder}\nOutput: ${sessionFolder}/spec/spec-config.json + spec/discovery-context.json` },
-      'DISCUSS-001': { owner: 'discussant', subject: 'DISCUSS-001: Research findings discussion - scope confirmation and direction adjustment', activeForm: 'Discussing scope',
-        desc: () => `Discuss the findings of RESEARCH-001\n\nSession: ${sessionFolder}\nInput: ${sessionFolder}/spec/discovery-context.json\nOutput: ${sessionFolder}/discussions/discuss-001-scope.md` },
-      'DRAFT-001': { owner: 'writer', subject: 'DRAFT-001: Write the Product Brief', activeForm: 'Writing Brief',
-        desc: () => `Write the product brief based on research and discussion consensus\n\nSession: ${sessionFolder}\nInput: discovery-context.json + discuss-001-scope.md\nOutput: ${sessionFolder}/spec/product-brief.md` },
-      'DISCUSS-002': { owner: 
'discussant', subject: 'DISCUSS-002: Multi-perspective review of the Product Brief', activeForm: 'Reviewing Brief',
-        desc: () => `Review the Product Brief document\n\nSession: ${sessionFolder}\nInput: ${sessionFolder}/spec/product-brief.md\nOutput: ${sessionFolder}/discussions/discuss-002-brief.md` },
-      'DRAFT-002': { owner: 'writer', subject: 'DRAFT-002: Write the Requirements/PRD', activeForm: 'Writing PRD',
-        desc: () => `Write the requirements document based on the Brief and discussion feedback\n\nSession: ${sessionFolder}\nInput: product-brief.md + discuss-002-brief.md\nOutput: ${sessionFolder}/spec/requirements/` },
-      'DISCUSS-003': { owner: 'discussant', subject: 'DISCUSS-003: Requirements completeness and priority discussion', activeForm: 'Discussing requirements',
-        desc: () => `Discuss PRD requirement completeness\n\nSession: ${sessionFolder}\nInput: ${sessionFolder}/spec/requirements/_index.md\nOutput: ${sessionFolder}/discussions/discuss-003-requirements.md` },
-      'DRAFT-003': { owner: 'writer', subject: 'DRAFT-003: Write the Architecture Document', activeForm: 'Writing architecture',
-        desc: () => `Write the architecture document based on the requirements and discussion feedback\n\nSession: ${sessionFolder}\nInput: requirements/ + discuss-003-requirements.md\nOutput: ${sessionFolder}/spec/architecture/` },
-      'DISCUSS-004': { owner: 'discussant', subject: 'DISCUSS-004: Architecture decisions and technical feasibility discussion', activeForm: 'Discussing architecture',
-        desc: () => `Discuss the soundness of the architecture design\n\nSession: ${sessionFolder}\nInput: ${sessionFolder}/spec/architecture/_index.md\nOutput: ${sessionFolder}/discussions/discuss-004-architecture.md` },
-      'DRAFT-004': { owner: 'writer', subject: 'DRAFT-004: Write Epics & Stories', activeForm: 'Writing Epics',
-        desc: () => `Write epics and user stories based on the architecture and discussion feedback\n\nSession: ${sessionFolder}\nInput: architecture/ + discuss-004-architecture.md\nOutput: ${sessionFolder}/spec/epics/` },
-      'DISCUSS-005': { owner: 'discussant', subject: 'DISCUSS-005: Execution plan and MVP scope discussion', activeForm: 'Discussing execution plan',
-        desc: () => `Discuss execution-plan readiness\n\nSession: ${sessionFolder}\nInput: ${sessionFolder}/spec/epics/_index.md\nOutput: ${sessionFolder}/discussions/discuss-005-epics.md` },
-      'QUALITY-001': { owner: 'reviewer', subject: 'QUALITY-001: Spec readiness check', activeForm: 'Checking quality',
-        desc: () => `Cross-validate all documents and score quality\n\nSession: ${sessionFolder}\nInput: all documents\nOutput: 
${sessionFolder}/spec/readiness-report.md + spec/spec-summary.md` },
-      'DISCUSS-006': { owner: 'discussant', subject: 'DISCUSS-006: Final sign-off and delivery confirmation', activeForm: 'Discussing final sign-off',
-        desc: () => `Final discussion and sign-off\n\nSession: ${sessionFolder}\nInput: ${sessionFolder}/spec/readiness-report.md\nOutput: ${sessionFolder}/discussions/discuss-006-final.md` },
-      'PLAN-001': { owner: 'planner', subject: 'PLAN-001: Explore and plan the implementation', activeForm: 'Planning',
-        desc: () => `${taskDescription}\n\nSession: ${sessionFolder}\nWrite to: ${sessionFolder}/plan/` },
-      'IMPL-001': { owner: 'executor', subject: 'IMPL-001: Implement the approved plan', activeForm: 'Implementing',
-        desc: () => `${taskDescription}\n\nSession: ${sessionFolder}\nPlan: ${sessionFolder}/plan/plan.json\nexecution_method: ${executionMethod}\ncode_review: ${codeReviewTool}` },
-      'TEST-001': { owner: 'tester', subject: 'TEST-001: Test-fix loop', activeForm: 'Testing',
-        desc: () => taskDescription },
-      'REVIEW-001': { owner: 'reviewer', subject: 'REVIEW-001: Code review and requirements validation', activeForm: 'Reviewing',
-        desc: () => `${taskDescription}\n\nSession: ${sessionFolder}\nPlan: ${sessionFolder}/plan/plan.json` }
-    }
-
-    // Pipeline dependency: prefix → predecessor prefix (special: TEST-001 & REVIEW-001 both depend on IMPL-001)
-    function getPredecessor(prefix, pipeline) {
-      if (prefix === 'TEST-001' || prefix === 'REVIEW-001') return 'IMPL-001'
-      const idx = pipeline.indexOf(prefix)
-      return idx > 0 ? pipeline[idx - 1] : null
-    }
-
-    // ============================================================
-    // Step 1: Audit TaskList — fetch the real state of the current task list
-    // ============================================================
-    const allTasks = TaskList()
-    const pipeline = mode === 'spec-only' ? SPEC_CHAIN
-      : mode === 'impl-only' ? 
IMPL_CHAIN
-      : [...SPEC_CHAIN, ...IMPL_CHAIN]
-    const sessionCompleted = new Set(resumedSession.completed_tasks || [])
-
-    // Build prefix → task mapping from existing TaskList
-    const existingByPrefix = {}
-    allTasks.forEach(t => {
-      const prefixMatch = t.subject.match(/^([A-Z]+-\d+)/)
-      if (prefixMatch) existingByPrefix[prefixMatch[1]] = t
-    })
-
-    // ============================================================
-    // Step 2: Reconcile — sync session state with TaskList state
-    // ============================================================
-    const reconciledCompleted = new Set(sessionCompleted)
-    const statusFixes = []
-
-    for (const prefix of pipeline) {
-      const existing = existingByPrefix[prefix]
-      if (!existing) continue
-
-      // Case A: recorded completed in session, but TaskList status is not completed → fix TaskList
-      if (sessionCompleted.has(prefix) && existing.status !== 'completed') {
-        TaskUpdate({ taskId: existing.id, status: 'completed' })
-        statusFixes.push(`${prefix}: ${existing.status} → completed (sync from session)`)
-      }
-
-      // Case B: completed in TaskList, but not recorded in session → backfill session
-      if (existing.status === 'completed' && !sessionCompleted.has(prefix)) {
-        reconciledCompleted.add(prefix)
-        statusFixes.push(`${prefix}: completed (sync to session)`)
-      }
-
-      // Case C: TaskList shows in_progress (possibly interrupted by a pause) → reset to pending
-      if (existing.status === 'in_progress' && !sessionCompleted.has(prefix)) {
-        TaskUpdate({ taskId: existing.id, status: 'pending' })
-        statusFixes.push(`${prefix}: in_progress → pending (reset for retry)`)
-      }
-    }
-
-    // Update session with reconciled completed_tasks
-    resumedSession.completed_tasks = [...reconciledCompleted]
-
-    // ============================================================
-    // Step 3: Determine remaining pipeline — order of outstanding tasks
-    // ============================================================
-    const remainingPipeline = pipeline.filter(p => !reconciledCompleted.has(p))
-
-    // ============================================================
-    // Step 4: Rebuild team + 
Spawn workers
-    // ============================================================
-    TeamCreate({ team_name: teamName })
-
-    // Determine which worker roles are needed based on remaining tasks
-    const neededRoles = new Set()
-    remainingPipeline.forEach(prefix => {
-      const meta = TASK_METADATA[prefix]
-      if (meta) neededRoles.add(meta.owner)
-    })
-
-    // Spawn only needed workers using Phase 4 Stop-Wait pattern (see SKILL.md Coordinator Spawn Template)
-    // Workers are spawned per-stage via Task(run_in_background: false) in Phase 4 coordination loop.
-    // neededRoles is used to determine which workers will be spawned on-demand.
-    neededRoles.forEach(role => {
-      // → Worker prompt template in SKILL.md (spawned per-stage in Phase 4, not pre-spawned here)
-    })
-
-    // ============================================================
-    // Step 5: Create missing tasks with correct dependencies
-    // ============================================================
-    // In a new conversation, TaskList is EMPTY — all remaining tasks must be created.
-    // In a same-conversation resume, some tasks may already exist. 
-
-    const missingPrefixes = remainingPipeline.filter(p => !existingByPrefix[p])
-
-    for (const prefix of missingPrefixes) {
-      const meta = TASK_METADATA[prefix]
-      if (!meta) continue
-
-      // Create task
-      const newTask = TaskCreate({
-        subject: meta.subject,
-        description: meta.desc(),
-        activeForm: meta.activeForm
-      })
-      TaskUpdate({ taskId: newTask.id, owner: meta.owner })
-
-      // Register in existingByPrefix for dependency wiring
-      existingByPrefix[prefix] = { id: newTask.id, status: 'pending', blockedBy: [] }
-
-      // Wire dependency: find predecessor
-      const predPrefix = getPredecessor(prefix, pipeline)
-      if (predPrefix && !reconciledCompleted.has(predPrefix)) {
-        const predTask = existingByPrefix[predPrefix]
-        if (predTask) {
-          TaskUpdate({ taskId: newTask.id, addBlockedBy: [predTask.id] })
-        }
-      }
-
-      statusFixes.push(`${prefix}: created (missing in TaskList)`)
-    }
-
-    // ============================================================
-    // Step 6: Verify dependency chain integrity for existing tasks
-    // ============================================================
-    for (const prefix of remainingPipeline) {
-      // Skip tasks we just created (already wired)
-      if (missingPrefixes.includes(prefix)) continue
-      const task = existingByPrefix[prefix]
-      if (!task || task.status === 'completed') continue
-
-      const predPrefix = getPredecessor(prefix, pipeline)
-      if (!predPrefix || reconciledCompleted.has(predPrefix)) continue
-
-      const predTask = existingByPrefix[predPrefix]
-      if (predTask && task.blockedBy && !task.blockedBy.includes(predTask.id)) {
-        TaskUpdate({ taskId: task.id, addBlockedBy: [predTask.id] })
-        statusFixes.push(`${prefix}: added missing blockedBy → ${predPrefix}`)
-      }
-    }
-
-    // ============================================================
-    // Step 7: Update session file — persist resume state
-    // ============================================================
-    resumedSession.status = 'active'
-    resumedSession.resumed_at = new Date().toISOString()
-    
resumedSession.updated_at = new Date().toISOString()
-    if (remainingPipeline.length > 0) {
-      const firstRemaining = remainingPipeline[0]
-      if (/^(RESEARCH|DISCUSS|DRAFT|QUALITY)/.test(firstRemaining)) {
-        resumedSession.current_phase = 'spec'
-      } else if (firstRemaining.startsWith('PLAN')) {
-        resumedSession.current_phase = 'plan'
-      } else {
-        resumedSession.current_phase = 'impl'
-      }
-    }
-    Write(`${sessionFolder}/team-session.json`, JSON.stringify(resumedSession, null, 2))
-
-    // ============================================================
-    // Step 8: Report reconciliation — output the resume summary
-    // ============================================================
-    // Output to user:
-    // - Session: {session_id} resumed
-    // - Completed: {reconciledCompleted.size}/{pipeline.length} tasks
-    // - Remaining: {remainingPipeline.join(' → ')}
-    // - Status fixes: {statusFixes.length} corrections applied
-    // - Next task: {remainingPipeline[0]}
-    // - Workers spawned: {[...neededRoles].join(', ')}
-
-    // ============================================================
-    // Step 9: Kick — notify the worker of the first actionable task
-    // ============================================================
-    // Resolves the post-resume deadlock: coordinator waits for worker messages ↔ workers wait for tasks.
-    // Find the first task that is pending with an empty blockedBy, and send task_unblocked to its owner.
-    const firstActionable = remainingPipeline.find(prefix => {
-      const task = existingByPrefix[prefix]
-      return task && task.status === 'pending' && (!task.blockedBy || task.blockedBy.length === 0)
-    })
-
-    if (firstActionable) {
-      const meta = TASK_METADATA[firstActionable]
-      mcp__ccw-tools__team_msg({
-        operation: "log", team: teamName,
-        from: "coordinator", to: meta.owner,
-        type: "task_unblocked",
-        summary: `Resume: ${firstActionable} is ready for execution`
-      })
-      SendMessage({
-        type: "message",
-        recipient: meta.owner,
-        content: `The session has been resumed. Your task ${firstActionable} is ready; run a TaskList check now and start working.`,
-        summary: `Resume kick: ${firstActionable}`
-      })
-    }
-
-    // → Skip to Phase 4 coordination loop
-  } 
-} -``` - -### Phase 1: Requirement Clarification - -Parse `$ARGUMENTS` to extract `--team-name` and task description. - -```javascript -const args = "$ARGUMENTS" -const teamNameMatch = args.match(/--team-name[=\s]+([\w-]+)/) -const teamName = teamNameMatch ? teamNameMatch[1] : `lifecycle-${Date.now().toString(36)}` -const taskDescription = args.replace(/--team-name[=\s]+[\w-]+/, '').replace(/--role[=\s]+\w+/, '').replace(/--resume|--continue/, '').trim() -``` - -Use AskUserQuestion to collect mode and constraints: - -```javascript -AskUserQuestion({ - questions: [ - { - question: "选择工作模式:", - header: "Mode", - multiSelect: false, - options: [ - { label: "spec-only", description: "仅生成规格文档(研究→讨论→撰写→质量检查)" }, - { label: "impl-only", description: "仅实现代码(规划→实现→测试+审查)" }, - { label: "full-lifecycle", description: "完整生命周期(规格→实现→测试+审查)" } - ] - }, - { - question: "MVP 范围:", - header: "Scope", - multiSelect: false, - options: [ - { label: "最小可行", description: "核心功能优先" }, - { label: "功能完整", description: "覆盖主要用例" }, - { label: "全面实现", description: "包含边缘场景和优化" } - ] - } - ] -}) - -// Spec/Full 模式追加收集 -if (mode === 'spec-only' || mode === 'full-lifecycle') { - AskUserQuestion({ - questions: [ - { - question: "重点领域:", - header: "Focus", - multiSelect: false, - options: [ - { label: "产品定义", description: "聚焦用户需求和产品定位" }, - { label: "技术架构", description: "聚焦技术选型和系统设计" }, - { label: "全面规格", description: "均衡覆盖产品+技术" } - ] - }, - { - question: "讨论深度:", - header: "Depth", - multiSelect: false, - options: [ - { label: "快速共识", description: "每轮讨论简短聚焦,快速推进" }, - { label: "深度讨论", description: "每轮多视角深入分析" }, - { label: "全面辩论", description: "4个维度全覆盖,严格共识门控" } - ] - } - ] - }) -} -``` - -Simple tasks can skip clarification. 
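The flag parsing above can be exercised as a standalone helper. This is a minimal sketch under the assumption that `$ARGUMENTS` arrives as a single string; the `parseLifecycleArgs` name and the sample argument string are illustrative, not part of the skill:

```javascript
// Sketch of the Phase 1 argument parsing: extract --team-name,
// then strip all recognized flags to leave the task description.
function parseLifecycleArgs(args) {
  const teamNameMatch = args.match(/--team-name[=\s]+([\w-]+)/)
  const teamName = teamNameMatch
    ? teamNameMatch[1]
    : `lifecycle-${Date.now().toString(36)}` // fallback: generated team name
  const taskDescription = args
    .replace(/--team-name[=\s]+[\w-]+/, '')
    .replace(/--role[=\s]+\w+/, '')
    .replace(/--resume|--continue/, '')
    .trim()
  return { teamName, taskDescription }
}

const parsed = parseLifecycleArgs('--team-name=auth-team Implement OAuth login --resume')
console.log(parsed.teamName)        // → "auth-team"
console.log(parsed.taskDescription) // → "Implement OAuth login"
```

When no `--team-name` flag is present, the fallback generates a unique name from the current timestamp, matching the snippet above.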
- -#### Execution Method Selection (impl/full-lifecycle modes) - -When mode includes implementation, select execution backend before team creation: - -```javascript -if (mode === 'impl-only' || mode === 'full-lifecycle') { - const execSelection = AskUserQuestion({ - questions: [ - { - question: "选择代码执行方式:", - header: "Execution", - multiSelect: false, - options: [ - { label: "Agent", description: "code-developer agent(同步,适合简单任务)" }, - { label: "Codex", description: "Codex CLI(后台,适合复杂任务)" }, - { label: "Gemini", description: "Gemini CLI(后台,适合分析类任务)" }, - { label: "Auto", description: "根据任务复杂度自动选择(默认)" } - ] - }, - { - question: "实现后是否进行代码审查?", - header: "Code Review", - multiSelect: false, - options: [ - { label: "Skip", description: "不审查(Reviewer 角色独立负责)" }, - { label: "Gemini Review", description: "Gemini CLI 审查" }, - { label: "Codex Review", description: "Git-aware review(--uncommitted)" } - ] - } - ] - }) - - var executionMethod = execSelection.Execution || 'Auto' - var codeReviewTool = execSelection['Code Review'] || 'Skip' -} -``` - -### Phase 2: Create Team + Initialize Session - -```javascript -TeamCreate({ team_name: teamName }) - -// Unified session setup -const topicSlug = taskDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40) -const dateStr = new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().substring(0, 10) -const sessionId = `TLS-${topicSlug}-${dateStr}` -const sessionFolder = `.workflow/.team/${sessionId}` - -// Create unified directory structure -if (mode === 'spec-only' || mode === 'full-lifecycle') { - Bash(`mkdir -p "${sessionFolder}/spec" "${sessionFolder}/discussions"`) -} -if (mode === 'impl-only' || mode === 'full-lifecycle') { - Bash(`mkdir -p "${sessionFolder}/plan"`) -} - -// Create team-session.json -const teamSession = { - session_id: sessionId, - team_name: teamName, - topic: taskDescription, - mode: mode, - status: "active", - created_at: new Date().toISOString(), - updated_at: new Date().toISOString(), - 
paused_at: null, - resumed_at: null, - completed_at: null, - current_phase: mode === 'impl-only' ? 'plan' : 'spec', - completed_tasks: [], - pipeline_progress: { - spec: mode !== 'impl-only' ? { total: 12, completed: 0 } : null, - impl: mode !== 'spec-only' ? { total: 4, completed: 0 } : null - }, - user_preferences: { scope: scope || '', focus: focus || '', discussion_depth: discussionDepth || '' }, - messages_team: teamName -} -Write(`${sessionFolder}/team-session.json`, JSON.stringify(teamSession, null, 2)) -``` - -**Workers are NOT pre-spawned here.** Workers are spawned per-stage in Phase 4 via Stop-Wait `Task(run_in_background: false)`. See SKILL.md Coordinator Spawn Template for worker prompt templates. - -Worker roles by mode (spawned on-demand): - -| Mode | Worker Roles | -|------|-----------------| -| spec-only | analyst, writer, discussant, reviewer (4) | -| impl-only | planner, executor, tester, reviewer (4) | -| full-lifecycle | analyst, writer, discussant, planner, executor, tester, reviewer (7) | - -Each worker receives a prompt that tells it to invoke `Skill(skill="team-lifecycle", args="--role={role}")` with its own role name when receiving tasks. - -### Phase 3: Create Task Chain - -Task chain creation depends on the selected mode. 
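Every chain below follows the same pattern: each task is created, assigned an owner, and blocked by its predecessor, so completing a task auto-unblocks the next stage. A minimal in-memory sketch of that wiring (the `TaskStore` class is illustrative; the real `TaskCreate`/`TaskUpdate` tools are provided by the runtime):

```javascript
// Illustrative model of the blockedBy dependency chain used in Phase 3.
class TaskStore {
  constructor() { this.tasks = new Map(); this.nextId = 1 }
  create(subject, owner, blockedBy = []) {
    const id = `T-${this.nextId++}`
    this.tasks.set(id, { id, subject, owner, status: 'pending', blockedBy: [...blockedBy] })
    return id
  }
  complete(id) {
    this.tasks.get(id).status = 'completed'
    // Auto-unblock: drop this id from every other task's blockedBy list
    for (const t of this.tasks.values()) {
      t.blockedBy = t.blockedBy.filter(b => b !== id)
    }
  }
  // Tasks that are pending with no remaining blockers
  actionable() {
    return [...this.tasks.values()]
      .filter(t => t.status === 'pending' && t.blockedBy.length === 0)
      .map(t => t.subject)
  }
}

const store = new TaskStore()
const plan = store.create('PLAN-001', 'planner')
const impl = store.create('IMPL-001', 'executor', [plan])
store.create('TEST-001', 'tester', [impl])
store.create('REVIEW-001', 'reviewer', [impl])

console.log(store.actionable())  // → ['PLAN-001']
store.complete(plan)
console.log(store.actionable())  // → ['IMPL-001']
store.complete(impl)
console.log(store.actionable())  // → ['TEST-001', 'REVIEW-001'] (parallel after IMPL)
```

Note how completing IMPL-001 unblocks TEST-001 and REVIEW-001 together, which is how the impl chain runs testing and review in parallel.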
- -#### Spec-only Task Chain - -```javascript -// RESEARCH Phase -TaskCreate({ subject: "RESEARCH-001: 主题发现与上下文研究", description: `${taskDescription}\n\nSession: ${sessionFolder}\n输出: ${sessionFolder}/spec/spec-config.json + spec/discovery-context.json`, activeForm: "研究中" }) -TaskUpdate({ taskId: researchId, owner: "analyst" }) - -// DISCUSS-001: 范围讨论 (blockedBy RESEARCH-001) -TaskCreate({ subject: "DISCUSS-001: 研究结果讨论 - 范围确认与方向调整", description: `讨论 RESEARCH-001 的发现结果\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/discovery-context.json\n输出: ${sessionFolder}/discussions/discuss-001-scope.md\n\n讨论维度: 范围确认、方向调整、风险预判、探索缺口`, activeForm: "讨论范围中" }) -TaskUpdate({ taskId: discuss1Id, owner: "discussant", addBlockedBy: [researchId] }) - -// DRAFT-001: Product Brief (blockedBy DISCUSS-001) -TaskCreate({ subject: "DRAFT-001: 撰写 Product Brief", description: `基于研究和讨论共识撰写产品简报\n\nSession: ${sessionFolder}\n输入: discovery-context.json + discuss-001-scope.md\n输出: ${sessionFolder}/spec/product-brief.md\n\n使用多视角分析: 产品/技术/用户`, activeForm: "撰写 Brief 中" }) -TaskUpdate({ taskId: draft1Id, owner: "writer", addBlockedBy: [discuss1Id] }) - -// DISCUSS-002: Brief 评审 (blockedBy DRAFT-001) -TaskCreate({ subject: "DISCUSS-002: Product Brief 多视角评审", description: `评审 Product Brief 文档\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/product-brief.md\n输出: ${sessionFolder}/discussions/discuss-002-brief.md\n\n讨论维度: 产品定位、目标用户、成功指标、竞品差异`, activeForm: "评审 Brief 中" }) -TaskUpdate({ taskId: discuss2Id, owner: "discussant", addBlockedBy: [draft1Id] }) - -// DRAFT-002: Requirements/PRD (blockedBy DISCUSS-002) -TaskCreate({ subject: "DRAFT-002: 撰写 Requirements/PRD", description: `基于 Brief 和讨论反馈撰写需求文档\n\nSession: ${sessionFolder}\n输入: product-brief.md + discuss-002-brief.md\n输出: ${sessionFolder}/spec/requirements/\n\n包含: 功能需求(REQ-*) + 非功能需求(NFR-*) + MoSCoW 优先级`, activeForm: "撰写 PRD 中" }) -TaskUpdate({ taskId: draft2Id, owner: "writer", addBlockedBy: [discuss2Id] }) - -// DISCUSS-003: 需求完整性 (blockedBy 
DRAFT-002) -TaskCreate({ subject: "DISCUSS-003: 需求完整性与优先级讨论", description: `讨论 PRD 需求完整性\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/requirements/_index.md\n输出: ${sessionFolder}/discussions/discuss-003-requirements.md\n\n讨论维度: 需求遗漏、MoSCoW合理性、验收标准可测性、非功能需求充分性`, activeForm: "讨论需求中" }) -TaskUpdate({ taskId: discuss3Id, owner: "discussant", addBlockedBy: [draft2Id] }) - -// DRAFT-003: Architecture (blockedBy DISCUSS-003) -TaskCreate({ subject: "DRAFT-003: 撰写 Architecture Document", description: `基于需求和讨论反馈撰写架构文档\n\nSession: ${sessionFolder}\n输入: requirements/ + discuss-003-requirements.md\n输出: ${sessionFolder}/spec/architecture/\n\n包含: 架构风格 + 组件图 + 技术选型 + ADR-* + 数据模型`, activeForm: "撰写架构中" }) -TaskUpdate({ taskId: draft3Id, owner: "writer", addBlockedBy: [discuss3Id] }) - -// DISCUSS-004: 技术可行性 (blockedBy DRAFT-003) -TaskCreate({ subject: "DISCUSS-004: 架构决策与技术可行性讨论", description: `讨论架构设计合理性\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/architecture/_index.md\n输出: ${sessionFolder}/discussions/discuss-004-architecture.md\n\n讨论维度: 技术选型风险、可扩展性、安全架构、ADR替代方案`, activeForm: "讨论架构中" }) -TaskUpdate({ taskId: discuss4Id, owner: "discussant", addBlockedBy: [draft3Id] }) - -// DRAFT-004: Epics & Stories (blockedBy DISCUSS-004) -TaskCreate({ subject: "DRAFT-004: 撰写 Epics & Stories", description: `基于架构和讨论反馈撰写史诗和用户故事\n\nSession: ${sessionFolder}\n输入: architecture/ + discuss-004-architecture.md\n输出: ${sessionFolder}/spec/epics/\n\n包含: EPIC-* + STORY-* + 依赖图 + MVP定义 + 执行顺序`, activeForm: "撰写 Epics 中" }) -TaskUpdate({ taskId: draft4Id, owner: "writer", addBlockedBy: [discuss4Id] }) - -// DISCUSS-005: 执行就绪 (blockedBy DRAFT-004) -TaskCreate({ subject: "DISCUSS-005: 执行计划与MVP范围讨论", description: `讨论执行计划就绪性\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/epics/_index.md\n输出: ${sessionFolder}/discussions/discuss-005-epics.md\n\n讨论维度: Epic粒度、故事估算、MVP范围、执行顺序、依赖风险`, activeForm: "讨论执行计划中" }) -TaskUpdate({ taskId: discuss5Id, owner: "discussant", addBlockedBy: [draft4Id] }) - -// 
QUALITY-001: Readiness Check (blockedBy DISCUSS-005) -TaskCreate({ subject: "QUALITY-001: 规格就绪度检查", description: `全文档交叉验证和质量评分\n\nSession: ${sessionFolder}\n输入: 全部文档\n输出: ${sessionFolder}/spec/readiness-report.md + spec/spec-summary.md\n\n评分维度: 完整性(20%) + 一致性(20%) + 可追溯性(20%) + 深度(20%) + 需求覆盖率(20%)`, activeForm: "质量检查中" }) -TaskUpdate({ taskId: qualityId, owner: "reviewer", addBlockedBy: [discuss5Id] }) - -// DISCUSS-006: 最终签收 (blockedBy QUALITY-001) -TaskCreate({ subject: "DISCUSS-006: 最终签收与交付确认", description: `最终讨论和签收\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/readiness-report.md\n输出: ${sessionFolder}/discussions/discuss-006-final.md\n\n讨论维度: 质量报告审查、遗留问题处理、交付确认、下一步建议`, activeForm: "最终签收讨论中" }) -TaskUpdate({ taskId: discuss6Id, owner: "discussant", addBlockedBy: [qualityId] }) -``` - -#### Impl-only Task Chain - -```javascript -// PLAN-001 -TaskCreate({ subject: "PLAN-001: 探索和规划实现", description: `${taskDescription}\n\nSession: ${sessionFolder}\n写入: ${sessionFolder}/plan/`, activeForm: "规划中" }) -TaskUpdate({ taskId: planId, owner: "planner" }) - -// IMPL-001 (blockedBy PLAN-001) -TaskCreate({ subject: "IMPL-001: 实现已批准的计划", description: `${taskDescription}\n\nSession: ${sessionFolder}\nPlan: ${sessionFolder}/plan/plan.json\nexecution_method: ${executionMethod || 'Auto'}\ncode_review: ${codeReviewTool || 'Skip'}`, activeForm: "实现中" }) -TaskUpdate({ taskId: implId, owner: "executor", addBlockedBy: [planId] }) - -// TEST-001 (blockedBy IMPL-001) -TaskCreate({ subject: "TEST-001: 测试修复循环", description: `${taskDescription}`, activeForm: "测试中" }) -TaskUpdate({ taskId: testId, owner: "tester", addBlockedBy: [implId] }) - -// REVIEW-001 (blockedBy IMPL-001, parallel with TEST-001) -TaskCreate({ subject: "REVIEW-001: 代码审查与需求验证", description: `${taskDescription}\n\nSession: ${sessionFolder}\nPlan: ${sessionFolder}/plan/plan.json`, activeForm: "审查中" }) -TaskUpdate({ taskId: reviewId, owner: "reviewer", addBlockedBy: [implId] }) -``` - -#### Full-lifecycle Task 
Chain - -Create both spec and impl chains, with PLAN-001 blockedBy DISCUSS-006: - -```javascript -// [All spec-only tasks as above] -// Then: -TaskCreate({ subject: "PLAN-001: 探索和规划实现", description: `${taskDescription}\n\nSession: ${sessionFolder}\n写入: ${sessionFolder}/plan/`, activeForm: "规划中" }) -TaskUpdate({ taskId: planId, owner: "planner", addBlockedBy: [discuss6Id] }) -// [Rest of impl-only tasks as above] -``` - -### Phase 4: Coordination Loop - -> **Design principle (Stop-Wait)**: model execution has no notion of elapsed time, so polling-style waiting of any kind is forbidden. -> - ❌ Forbidden: `while` loop + `sleep` + status checks -> - ✅ Use: synchronous `Task(run_in_background: false)` calls; a worker's return is the stage-completion signal -> -> Spawn workers stage by stage for synchronous execution, following the task-chain order created in Phase 3. -> Worker prompts use the SKILL.md Coordinator Spawn Template. - -Receive teammate messages and make dispatch decisions. **Before each decision: `team_msg list` to review recent messages. After each decision: `team_msg log` to record.** - -#### Spec Messages - -| Received Message | Action | -|-----------------|--------| -| Analyst: research_ready | Read discovery-context.json → **user confirmation checkpoint** → team_msg log → TaskUpdate RESEARCH completed (auto-unblocks DISCUSS-001) | -| Discussant: discussion_ready | Read discussion.md → judge if revision needed → unblock next DRAFT task | -| Discussant: discussion_blocked | Intervene → AskUserQuestion for user decision → write decision to discussion record → manually unblock | -| Writer: draft_ready | Read document summary → team_msg log → TaskUpdate DRAFT completed (auto-unblocks next DISCUSS) | -| Writer: draft_revision | Update dependencies → unblock related discussion tasks | -| Reviewer: quality_result (PASS ≥80%) | team_msg log → TaskUpdate QUALITY completed (auto-unblocks DISCUSS-006) | -| Reviewer: quality_result (REVIEW 60-79%) | team_msg log → notify writer of improvement suggestions | -| Reviewer: fix_required (FAIL <60%) | Create DRAFT-fix task → assign writer | - -#### Impl Messages - -| Received Message | Action | -|-----------------|--------| -| Planner: plan_ready | Read 
plan → approve/request revision → team_msg log(plan_approved/plan_revision) → TaskUpdate + SendMessage | -| Executor: impl_complete | team_msg log(task_unblocked) → TaskUpdate IMPL completed (auto-unblocks TEST + REVIEW) | -| Tester: test_result ≥ 95% | team_msg log → TaskUpdate TEST completed | -| Tester: test_result < 95% + iterations > 5 | team_msg log(error) → escalate to user | -| Reviewer: review_result (no critical) | team_msg log → TaskUpdate REVIEW completed | -| Reviewer: review_result (has critical) | team_msg log(fix_required) → TaskCreate IMPL-fix → assign executor | -| All tasks completed | → Phase 5 | - -#### Full-lifecycle Handoff - -When DISCUSS-006 completes in full-lifecycle mode, PLAN-001 is auto-unblocked via the dependency chain. - -#### Research Confirmation Checkpoint - -When receiving `research_ready` from analyst, confirm extracted requirements with user before unblocking: - -```javascript -if (msgType === 'research_ready') { - const discoveryContext = JSON.parse(Read(`${sessionFolder}/spec/discovery-context.json`)) - const dimensions = discoveryContext.seed_analysis?.exploration_dimensions || [] - const constraints = discoveryContext.seed_analysis?.constraints || [] - const problemStatement = discoveryContext.seed_analysis?.problem_statement || '' - - // Present extracted requirements for user confirmation - AskUserQuestion({ - questions: [{ - question: `研究阶段提取到以下需求,请确认是否完整:\n\n**问题定义**: ${problemStatement}\n**探索维度**: ${dimensions.join('、')}\n**约束条件**: ${constraints.join('、')}\n\n是否有遗漏?`, - header: "需求确认", - multiSelect: false, - options: [ - { label: "确认完整", description: "提取的需求已覆盖所有关键点,继续推进" }, - { label: "需要补充", description: "有遗漏的需求,我来补充" }, - { label: "需要重新研究", description: "提取方向有偏差,重新执行研究" } - ] - }] - }) - - if (userChoice === '需要补充') { - // User provides additional requirements via free text - // Merge into discovery-context.json, then unblock DISCUSS-001 - discoveryContext.seed_analysis.user_supplements = userInput - 
Write(`${sessionFolder}/spec/discovery-context.json`, JSON.stringify(discoveryContext, null, 2)) - } else if (userChoice === '需要重新研究') { - // Reset RESEARCH-001 to pending, notify analyst - TaskUpdate({ taskId: researchId, status: 'pending' }) - team_msg({ type: 'fix_required', summary: 'User requests re-research with revised scope' }) - return // Do not unblock DISCUSS-001 - } - // '确认完整' → proceed normally: TaskUpdate RESEARCH completed -} -``` - -#### Discussion Blocked Handling - -```javascript -if (msgType === 'discussion_blocked') { - const blockReason = msg.data.reason - const options = msg.data.options - - AskUserQuestion({ - questions: [{ - question: `讨论 ${msg.ref} 遇到分歧: ${blockReason}\n请选择方向:`, - header: "Decision", - multiSelect: false, - options: options.map(opt => ({ label: opt.label, description: opt.description })) - }] - }) - // Write user decision to discussion record, then unblock next task -} -``` - -### Phase 5: Report + Persistent Loop - -Summarize results based on mode: -- **spec-only**: Document inventory, quality scores, discussion rounds -- **impl-only**: Changed files, test pass rate, review verdict -- **full-lifecycle**: Both spec summary + impl summary - -```javascript -AskUserQuestion({ - questions: [{ - question: "当前需求已完成。下一步:", - header: "Next", - multiSelect: false, - options: [ - { label: "新需求", description: "提交新需求给当前团队" }, - { label: "交付执行", description: "将规格交给执行 workflow(仅 spec 模式)" }, - { label: "关闭团队", description: "关闭所有 teammate 并清理" } - ] - }] -}) - -// === 新需求 → 回到 Phase 1(复用 team,新建任务链)=== - -// === 交付执行 → Handoff 逻辑 === -if (userChoice === '交付执行') { - AskUserQuestion({ - questions: [{ - question: "选择交付方式:", - header: "Handoff", - multiSelect: false, - options: [ - { label: "lite-plan", description: "逐 Epic 轻量执行" }, - { label: "full-plan", description: "完整规划(创建 WFS session + .brainstorming/ 桥接)" }, - { label: "req-plan", description: "需求级路线图规划" }, - { label: "create-issues", description: "每个 Epic 创建 issue" } - ] - }] - }) - 
- // 读取 spec 文档 - const specConfig = JSON.parse(Read(`${sessionFolder}/spec/spec-config.json`)) - const specSummary = Read(`${sessionFolder}/spec/spec-summary.md`) - const productBrief = Read(`${sessionFolder}/spec/product-brief.md`) - const requirementsIndex = Read(`${sessionFolder}/spec/requirements/_index.md`) - const architectureIndex = Read(`${sessionFolder}/spec/architecture/_index.md`) - const epicsIndex = Read(`${sessionFolder}/spec/epics/_index.md`) - const epicFiles = Glob(`${sessionFolder}/spec/epics/EPIC-*.md`) - - if (handoffChoice === 'lite-plan') { - // 读取首个 MVP Epic → 调用 lite-plan - const firstMvpFile = epicFiles.find(f => { - const content = Read(f) - return content.includes('mvp: true') - }) - const epicContent = Read(firstMvpFile) - const title = epicContent.match(/^#\s+(.+)/m)?.[1] || '' - const description = epicContent.match(/## Description\n([\s\S]*?)(?=\n## )/)?.[1]?.trim() || '' - Skill({ skill: "workflow:lite-plan", args: `"${title}: ${description}"` }) - } - - if (handoffChoice === 'full-plan' || handoffChoice === 'req-plan') { - // === 桥接: 构建 .brainstorming/ 兼容结构 === - // 从 spec-generator Phase 6 Step 6 适配 - - // Step A: 构建结构化描述 - const structuredDesc = `GOAL: ${specConfig.seed_analysis?.problem_statement || specConfig.topic} -SCOPE: ${specConfig.complexity} complexity -CONTEXT: Generated from spec team session ${specConfig.session_id}. 
Source: ${sessionFolder}/` - - // Step B: 创建 WFS session - Skill({ skill: "workflow:session:start", args: `--auto "${structuredDesc}"` }) - // → 产出 sessionId (WFS-xxx) 和 session 目录 - - // Step C: 创建 .brainstorming/ 桥接文件 - const brainstormDir = `.workflow/active/${sessionId}/.brainstorming` - Bash(`mkdir -p "${brainstormDir}/feature-specs"`) - - // C.1: guidance-specification.md(action-planning-agent 最高优先读取) - Write(`${brainstormDir}/guidance-specification.md`, ` -# ${specConfig.seed_analysis?.problem_statement || specConfig.topic} - Confirmed Guidance Specification - -**Source**: spec-team session ${specConfig.session_id} -**Generated**: ${new Date().toISOString()} -**Spec Directory**: ${sessionFolder} - -## 1. Project Positioning & Goals -${extractSection(productBrief, "Vision")} -${extractSection(productBrief, "Goals")} - -## 2. Requirements Summary -${extractSection(requirementsIndex, "Functional Requirements")} - -## 3. Architecture Decisions -${extractSection(architectureIndex, "Architecture Decision Records")} -${extractSection(architectureIndex, "Technology Stack")} - -## 4. 
Implementation Scope -${extractSection(epicsIndex, "Epic Overview")} -${extractSection(epicsIndex, "MVP Scope")} - -## Feature Decomposition -${extractSection(epicsIndex, "Traceability Matrix")} - -## Appendix: Source Documents -| Document | Path | Description | -|----------|------|-------------| -| Product Brief | ${sessionFolder}/spec/product-brief.md | Vision, goals, scope | -| Requirements | ${sessionFolder}/spec/requirements/ | _index.md + REQ-*.md + NFR-*.md | -| Architecture | ${sessionFolder}/spec/architecture/ | _index.md + ADR-*.md | -| Epics | ${sessionFolder}/spec/epics/ | _index.md + EPIC-*.md | -| Readiness Report | ${sessionFolder}/spec/readiness-report.md | Quality validation | -`) - - // C.2: feature-index.json(EPIC → Feature 映射) - const features = epicFiles.map(epicFile => { - const content = Read(epicFile) - const fmMatch = content.match(/^---\n([\s\S]*?)\n---/) - const fm = fmMatch ? parseYAML(fmMatch[1]) : {} - const basename = epicFile.replace(/.*[/\\]/, '').replace('.md', '') - const epicNum = (fm.id || '').replace('EPIC-', '') - const slug = basename.replace(/^EPIC-\d+-/, '') - return { - id: `F-${epicNum}`, slug, name: content.match(/^#\s+(.+)/m)?.[1] || '', - priority: fm.mvp ? "High" : "Medium", - spec_path: `${brainstormDir}/feature-specs/F-${epicNum}-${slug}.md`, - source_epic: fm.id, source_file: epicFile - } - }) - Write(`${brainstormDir}/feature-specs/feature-index.json`, JSON.stringify({ - version: "1.0", source: "spec-team", - spec_session: specConfig.session_id, features, cross_cutting_specs: [] - }, null, 2)) - - // C.3: Feature-spec 文件(EPIC → F-*.md 转换) - features.forEach(feature => { - const epicContent = Read(feature.source_file) - Write(feature.spec_path, ` -# Feature Spec: ${feature.source_epic} - ${feature.name} - -**Source**: ${feature.source_file} -**Priority**: ${feature.priority === "High" ? 
"MVP" : "Post-MVP"} - -## Description -${extractSection(epicContent, "Description")} - -## Stories -${extractSection(epicContent, "Stories")} - -## Requirements -${extractSection(epicContent, "Requirements")} - -## Architecture -${extractSection(epicContent, "Architecture")} -`) - }) - - // Step D: 调用下游 workflow - if (handoffChoice === 'full-plan') { - Skill({ skill: "workflow:plan", args: `"${structuredDesc}"` }) - } else { - Skill({ skill: "workflow:req-plan-with-file", args: `"${specConfig.seed_analysis?.problem_statement || specConfig.topic}"` }) - } - } - - if (handoffChoice === 'create-issues') { - // 逐 EPIC 文件创建 issue - epicFiles.forEach(epicFile => { - const content = Read(epicFile) - const title = content.match(/^#\s+(.+)/m)?.[1] || '' - const description = content.match(/## Description\n([\s\S]*?)(?=\n## )/)?.[1]?.trim() || '' - Skill({ skill: "issue:new", args: `"${title}: ${description}"` }) - }) - } -} - -// === 关闭 → shutdown 给每个 teammate → TeamDelete() === -``` - -#### Helper Functions Reference (pseudocode) - -```javascript -// Extract a named ## section from a markdown document -function extractSection(markdown, sectionName) { - // Return content between ## {sectionName} and next ## heading - const regex = new RegExp(`## ${sectionName}\\n([\\s\\S]*?)(?=\\n## |$)`) - return markdown.match(regex)?.[1]?.trim() || '' -} - -// Parse YAML frontmatter string into object -function parseYAML(yamlStr) { - // Simple key-value parsing from YAML frontmatter - const result = {} - yamlStr.split('\n').forEach(line => { - const match = line.match(/^(\w+):\s*(.+)/) - if (match) result[match[1]] = match[2].replace(/^["']|["']$/g, '') - }) - return result -} -``` - -## Session State Tracking - -At each key transition, update `team-session.json`: - -```javascript -// Helper: update session state -function updateSession(sessionFolder, updates) { - const session = JSON.parse(Read(`${sessionFolder}/team-session.json`)) - Object.assign(session, updates, { updated_at: new 
Date().toISOString() }) - Write(`${sessionFolder}/team-session.json`, JSON.stringify(session, null, 2)) -} - -// On task completion: -updateSession(sessionFolder, { - completed_tasks: [...session.completed_tasks, taskPrefix], - pipeline_progress: { ...session.pipeline_progress, - [phase]: { ...session.pipeline_progress[phase], completed: session.pipeline_progress[phase].completed + 1 } - } -}) - -// On phase transition (spec → plan): -updateSession(sessionFolder, { current_phase: 'plan' }) - -// On completion (or when the user closes the team): -updateSession(sessionFolder, { status: 'completed', completed_at: new Date().toISOString() }) -``` - -## Session File Structure - -``` -.workflow/.team/TLS-{slug}-{YYYY-MM-DD}/ -├── team-session.json # Session state (resume support) -├── spec/ # Spec artifacts -│ ├── spec-config.json -│ ├── discovery-context.json -│ ├── product-brief.md -│ ├── requirements/ # _index.md + REQ-*.md + NFR-*.md -│ ├── architecture/ # _index.md + ADR-*.md -│ ├── epics/ # _index.md + EPIC-*.md -│ ├── readiness-report.md -│ └── spec-summary.md -├── discussions/ # Discussion records -│ └── discuss-001..006.md -└── plan/ # Plan artifacts - ├── exploration-{angle}.json - ├── explorations-manifest.json - ├── plan.json - └── .task/ - └── TASK-*.json -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| Teammate unresponsive | Send a follow-up message; re-spawn after 2 non-responses | -| Plan rejected 3+ times | Coordinator does the planning itself | -| Tests stuck below 80% for 5+ iterations | Escalate to user | -| Review finds critical issues | Create IMPL-fix task for executor | -| Discussion cannot reach consensus | Coordinator intervenes → AskUserQuestion | -| Document quality <60% | Create DRAFT-fix task for writer | -| Writer revised 3+ times | Escalate to user; suggest adjusting scope | -| Research cannot complete | Degrade to simplified mode | diff --git a/.claude/skills/team-lifecycle/roles/discussant.md b/.claude/skills/team-lifecycle/roles/discussant.md deleted file mode 100644 index 1cc84fd5..00000000 --- a/.claude/skills/team-lifecycle/roles/discussant.md +++ /dev/null 
@@ -1,236 +0,0 @@ -# Role: discussant - -Multi-perspective critique, consensus building, and conflict escalation. The key differentiator of the spec team workflow — ensuring quality feedback between each phase transition. - -## Role Identity - -- **Name**: `discussant` -- **Task Prefix**: `DISCUSS-*` -- **Responsibility**: Load Artifact → Multi-Perspective Critique → Synthesize Consensus → Report -- **Communication**: SendMessage to coordinator only - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `discussion_ready` | discussant → coordinator | Discussion complete, consensus reached | With discussion record path and decision summary | -| `discussion_blocked` | discussant → coordinator | Cannot reach consensus | With divergence points and options, needs coordinator | -| `impl_progress` | discussant → coordinator | Long discussion progress | Multi-perspective analysis progress | -| `error` | discussant → coordinator | Discussion cannot proceed | Input artifact missing, etc. | - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Discussion complete -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "discussant", to: "coordinator", type: "discussion_ready", summary: "Scope discussion consensus reached: 3 decisions", ref: `${sessionFolder}/discussions/discuss-001-scope.md` }) - -// Discussion blocked -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "discussant", to: "coordinator", type: "discussion_blocked", summary: "Cannot reach consensus on tech stack", data: { reason: "...", options: [...] 
} }) - -// Error report -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "discussant", to: "coordinator", type: "error", summary: "Input artifact missing" }) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "discussant" --to "coordinator" --type "discussion_ready" --summary "Discussion complete" --ref "${sessionFolder}/discussions/discuss-001-scope.md" --json`) -``` - -## Discussion Dimension Model - -Each discussion round analyzes the artifact from a subset of these five perspectives: - -| Perspective | Focus | Representative | -|-------------|-------|----------------| -| **Product** | Market fit, user value, business viability, competitive differentiation | Product Manager | -| **Technical** | Feasibility, tech debt, performance, security, maintainability | Tech Lead | -| **Quality** | Completeness, testability, consistency, standards compliance | QA Lead | -| **Risk** | Risk identification, dependency analysis, assumption validation, failure modes | Risk Analyst | -| **Coverage** | Requirement completeness vs original intent, scope drift, gap detection | Requirements Analyst | - -## Discussion Round Configuration - -| Round | Artifact | Key Perspectives | Focus | -|-------|----------|-----------------|-------| -| DISCUSS-001 | discovery-context | product + risk + **coverage** | Scope confirmation, direction, initial coverage check | -| DISCUSS-002 | product-brief | product + technical + quality + **coverage** | Positioning, feasibility, requirement coverage | -| DISCUSS-003 | requirements | quality + product + **coverage** | Completeness, priority, gap detection | -| DISCUSS-004 | architecture | technical + risk | Tech choices, security | -| DISCUSS-005 | epics | product + technical + quality + **coverage** | MVP scope, estimation, requirement tracing | -| DISCUSS-006 | readiness-report | all 5 perspectives | Final sign-off | - -## Execution (5-Phase) - -### Phase 1: Task Discovery 
- -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('DISCUSS-') && - t.owner === 'discussant' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Artifact Loading - -```javascript -const sessionMatch = task.description.match(/Session:\s*(.+)/) -const sessionFolder = sessionMatch ? sessionMatch[1].trim() : '' -const roundMatch = task.subject.match(/DISCUSS-(\d+)/) -const roundNumber = roundMatch ? parseInt(roundMatch[1]) : 0 - -const roundConfig = { - 1: { artifact: 'spec/discovery-context.json', type: 'json', outputFile: 'discuss-001-scope.md', perspectives: ['product', 'risk', 'coverage'], label: '范围讨论' }, - 2: { artifact: 'spec/product-brief.md', type: 'md', outputFile: 'discuss-002-brief.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Brief评审' }, - 3: { artifact: 'spec/requirements/_index.md', type: 'md', outputFile: 'discuss-003-requirements.md', perspectives: ['quality', 'product', 'coverage'], label: '需求讨论' }, - 4: { artifact: 'spec/architecture/_index.md', type: 'md', outputFile: 'discuss-004-architecture.md', perspectives: ['technical', 'risk'], label: '架构讨论' }, - 5: { artifact: 'spec/epics/_index.md', type: 'md', outputFile: 'discuss-005-epics.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Epics讨论' }, - 6: { artifact: 'spec/readiness-report.md', type: 'md', outputFile: 'discuss-006-final.md', perspectives: ['product', 'technical', 'quality', 'risk', 'coverage'], label: '最终签收' } -} - -const config = roundConfig[roundNumber] -// Load target artifact and prior discussion records for continuity -Bash(`mkdir -p ${sessionFolder}/discussions`) -``` - -### Phase 3: Multi-Perspective Critique - -Launch parallel CLI analyses for each required perspective: - -- **Product Perspective** 
(gemini): Market fit, user value, business viability, competitive differentiation. Rate 1-5 with improvement suggestions. -- **Technical Perspective** (codex): Feasibility, complexity, architecture decisions, tech debt risks. Rate 1-5. -- **Quality Perspective** (claude): Completeness, testability, consistency, ambiguity detection. Rate 1-5. -- **Risk Perspective** (gemini): Risk identification, dependency analysis, assumption validation, failure modes. Rate risk level. -- **Coverage Perspective** (gemini): Compare current artifact against original requirements in discovery-context.json. Identify covered_requirements[], partial_requirements[], missing_requirements[], scope_creep[]. Rate coverage 1-5. **If missing_requirements is non-empty, flag as critical divergence.** - -Each CLI call produces structured critique with: strengths[], weaknesses[], suggestions[], rating. Coverage perspective additionally outputs: covered_requirements[], missing_requirements[], scope_creep[]. - -### Phase 4: Consensus Synthesis - -```javascript -const synthesis = { - convergent_themes: [], - divergent_views: [], - action_items: [], - open_questions: [], - decisions: [], - risk_flags: [], - overall_sentiment: '', // positive/neutral/concerns/critical - consensus_reached: true // false if major unresolvable conflicts -} - -// Extract convergent themes (items mentioned positively by 2+ perspectives) -// Extract divergent views (items where perspectives conflict) -// Check coverage gaps from coverage perspective (if present) -const coverageResult = perspectiveResults.find(p => p.perspective === 'coverage') -if (coverageResult?.missing_requirements?.length > 0) { - synthesis.coverage_gaps = coverageResult.missing_requirements - synthesis.divergent_views.push({ - topic: 'requirement_coverage_gap', - description: `${coverageResult.missing_requirements.length} requirements from discovery-context not covered: ${coverageResult.missing_requirements.join(', ')}`, - severity: 'high', - source: 
'coverage' - }) -} -// Check for unresolvable conflicts -const criticalDivergences = synthesis.divergent_views.filter(d => d.severity === 'high') -if (criticalDivergences.length > 0) synthesis.consensus_reached = false - -// Determine overall sentiment from average rating -// Generate discussion record markdown with all perspectives, convergence, divergence, action items - -Write(`${sessionFolder}/discussions/${config.outputFile}`, discussionRecord) -``` - -### Phase 5: Report to Coordinator - -```javascript -if (synthesis.consensus_reached) { - mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "discussant", to: "coordinator", - type: "discussion_ready", - summary: `${config.label}讨论完成: ${synthesis.action_items.length}个行动项, ${synthesis.open_questions.length}个开放问题, 总体${synthesis.overall_sentiment}`, - ref: `${sessionFolder}/discussions/${config.outputFile}` - }) - - SendMessage({ - type: "message", - recipient: "coordinator", - content: `## 讨论结果: ${config.label} - -**Task**: ${task.subject} -**共识**: 已达成 -**总体评价**: ${synthesis.overall_sentiment} - -### 行动项 (${synthesis.action_items.length}) -${synthesis.action_items.map((item, i) => (i+1) + '. ' + item).join('\n') || '无'} - -### 开放问题 (${synthesis.open_questions.length}) -${synthesis.open_questions.map((q, i) => (i+1) + '. 
' + q).join('\n') || '无'} - -### 讨论记录 -${sessionFolder}/discussions/${config.outputFile} - -共识已达成,可推进至下一阶段。`, - summary: `${config.label}共识达成: ${synthesis.action_items.length}行动项` - }) - - TaskUpdate({ taskId: task.id, status: 'completed' }) -} else { - // Consensus blocked - escalate to coordinator - mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "discussant", to: "coordinator", - type: "discussion_blocked", - summary: `${config.label}讨论阻塞: ${criticalDivergences.length}个关键分歧需决策`, - data: { - reason: criticalDivergences.map(d => d.description).join('; '), - options: criticalDivergences.map(d => ({ label: d.topic, description: d.options?.join(' vs ') || d.description })) - } - }) - - SendMessage({ - type: "message", - recipient: "coordinator", - content: `## 讨论阻塞: ${config.label} - -**Task**: ${task.subject} -**状态**: 无法达成共识,需要 coordinator 介入 - -### 关键分歧 -${criticalDivergences.map((d, i) => (i+1) + '. **' + d.topic + '**: ' + d.description).join('\n\n')} - -请通过 AskUserQuestion 收集用户对分歧点的决策。`, - summary: `${config.label}阻塞: ${criticalDivergences.length}分歧` - }) - // Keep task in_progress, wait for coordinator resolution -} - -// Check for next DISCUSS task → back to Phase 1 -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No DISCUSS-* tasks available | Idle, wait for coordinator assignment | -| Target artifact not found | Notify coordinator, request prerequisite completion | -| CLI perspective analysis failure | Fallback to direct Claude analysis for that perspective | -| All CLI analyses fail | Generate basic discussion from direct reading | -| Consensus timeout (all perspectives diverge) | Escalate as discussion_blocked | -| Prior discussion records missing | Continue without continuity context | -| Session folder not found | Notify coordinator, request session path | -| Unexpected error | Log error via team_msg, report to coordinator | diff --git a/.claude/skills/team-lifecycle/roles/executor.md 
b/.claude/skills/team-lifecycle/roles/executor.md deleted file mode 100644 index a8dc4d36..00000000 --- a/.claude/skills/team-lifecycle/roles/executor.md +++ /dev/null @@ -1,312 +0,0 @@ -# Role: executor - -Code implementation following approved plans. Reads plan files, routes to selected execution backend (Agent/Codex/Gemini), self-validates, and reports completion. - -## Role Identity - -- **Name**: `executor` -- **Task Prefix**: `IMPL-*` -- **Responsibility**: Load plan → Route to backend → Implement code → Self-validate → Report completion -- **Communication**: SendMessage to coordinator only - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `impl_complete` | executor → coordinator | All implementation complete | With changed files list and acceptance status | -| `impl_progress` | executor → coordinator | Batch/subtask completed | Progress percentage and completed subtask | -| `error` | executor → coordinator | Blocking problem | Plan file missing, file conflict, sub-agent failure | - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Progress update -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "executor", to: "coordinator", type: "impl_progress", summary: "Batch 1/3 done: auth middleware implemented", data: { batch: 1, total: 3, files: ["src/middleware/auth.ts"] } }) - -// Implementation complete -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "executor", to: "coordinator", type: "impl_complete", summary: "IMPL-001 complete: 5 files changed, all acceptance met", data: { changedFiles: 5, syntaxClean: true } }) - -// Error report -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "executor", to: "coordinator", type: "error", summary: "Invalid plan.json path, cannot load implementation plan" }) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - 
-```javascript
-Bash(`ccw team log --team "${teamName}" --from "executor" --to "coordinator" --type "impl_complete" --summary "IMPL-001 complete: 5 files changed" --json`)
-```
-
-## Execution Backends
-
-| Backend | Tool | Invocation | Mode |
-|---------|------|------------|------|
-| `agent` | code-developer subagent | `Task({ subagent_type: "code-developer" })` | Synchronous |
-| `codex` | Codex CLI | `ccw cli --tool codex --mode write` | Background |
-| `gemini` | Gemini CLI | `ccw cli --tool gemini --mode write` | Background |
-
-## Execution Method Resolution
-
-Parse the execution method from the IMPL-* task description (the coordinator writes it when creating the task):
-
-```javascript
-function resolveExecutor(taskDesc, taskCount) {
-  const methodMatch = taskDesc.match(/execution_method:\s*(Agent|Codex|Gemini|Auto)/i)
-  const method = methodMatch ? methodMatch[1] : 'Auto'
-
-  if (method.toLowerCase() === 'auto') {
-    return taskCount <= 3 ? 'agent' : 'codex'
-  }
-  return method.toLowerCase() // 'agent' | 'codex' | 'gemini'
-}
-
-function resolveCodeReview(taskDesc) {
-  const reviewMatch = taskDesc.match(/code_review:\s*(\S+)/i)
-  return reviewMatch ? reviewMatch[1] : 'Skip'
-}
-```
-
-## Execution (5-Phase)
-
-### Phase 1: Task & Plan Loading
-
-```javascript
-const tasks = TaskList()
-const myTasks = tasks.filter(t =>
-  t.subject.startsWith('IMPL-') &&
-  t.owner === 'executor' &&
-  t.status === 'pending' &&
-  t.blockedBy.length === 0
-)
-
-if (myTasks.length === 0) return // idle
-
-const task = TaskGet({ taskId: myTasks[0].id })
-TaskUpdate({ taskId: task.id, status: 'in_progress' })
-
-// Extract plan path from task description
-const planPathMatch = task.description.match(/\.workflow\/\.team\/[^\s]+\/plan\/plan\.json/)
-const planPath = planPathMatch ?
planPathMatch[0] : null - -if (!planPath) { - mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "executor", to: "coordinator", type: "error", summary: "plan.json路径无效" }) - SendMessage({ type: "message", recipient: "coordinator", content: `Cannot find plan.json in ${task.subject}`, summary: "Plan path not found" }) - return -} - -const plan = JSON.parse(Read(planPath)) - -// Resolve execution method -const executor = resolveExecutor(task.description, plan.task_count || plan.task_ids?.length || 0) -const codeReview = resolveCodeReview(task.description) -``` - -### Phase 2: Task Grouping - -```javascript -// Extract dependencies and group into parallel/sequential batches -function createBatches(planTasks) { - const processed = new Set() - const batches = [] - - // Phase 1: Independent tasks → single parallel batch - const independent = planTasks.filter(t => (t.depends_on || []).length === 0) - if (independent.length > 0) { - independent.forEach(t => processed.add(t.id)) - batches.push({ type: 'parallel', tasks: independent }) - } - - // Phase 2+: Dependent tasks in topological order - let remaining = planTasks.filter(t => !processed.has(t.id)) - while (remaining.length > 0) { - const ready = remaining.filter(t => (t.depends_on || []).every(d => processed.has(d))) - if (ready.length === 0) break // circular dependency guard - ready.forEach(t => processed.add(t.id)) - batches.push({ type: ready.length > 1 ? 
'parallel' : 'sequential', tasks: ready }) - remaining = remaining.filter(t => !processed.has(t.id)) - } - return batches -} - -// Load task files from .task/ directory -const planTasks = plan.task_ids.map(id => JSON.parse(Read(`${planPath.replace('plan.json', '')}.task/${id}.json`))) -const batches = createBatches(planTasks) -``` - -### Phase 3: Code Implementation (Multi-Backend Routing) - -```javascript -// Unified Task Prompt Builder -function buildExecutionPrompt(planTask) { - return ` -## ${planTask.title} - -**Scope**: \`${planTask.scope}\` | **Action**: ${planTask.action || 'implement'} - -### Files -${(planTask.files || []).map(f => `- **${f.path}** → \`${f.target}\`: ${f.change}`).join('\n')} - -### How to do it -${planTask.description} - -${(planTask.implementation || []).map(step => `- ${step}`).join('\n')} - -### Reference -- Pattern: ${planTask.reference?.pattern || 'N/A'} -- Files: ${planTask.reference?.files?.join(', ') || 'N/A'} - -### Done when -${(planTask.convergence?.criteria || []).map(c => `- [ ] ${c}`).join('\n')} -` -} - -function buildBatchPrompt(batch) { - const taskPrompts = batch.tasks.map(buildExecutionPrompt).join('\n\n---\n') - return `## Goal\n${plan.summary}\n\n## Tasks\n${taskPrompts}\n\n## Context\n### Project Guidelines\n@.workflow/project-guidelines.json\n\nComplete each task according to its "Done when" checklist.` -} - -const changedFiles = [] -const sessionId = task.description.match(/TLS-[\w-]+/)?.[0] || 'lifecycle' - -for (const batch of batches) { - const batchPrompt = buildBatchPrompt(batch) - const batchId = `${sessionId}-B${batches.indexOf(batch) + 1}` - - if (batch.tasks.length === 1 && isSimpleTask(batch.tasks[0]) && executor === 'agent') { - // Simple task + Agent mode: direct file editing - const t = batch.tasks[0] - for (const f of (t.files || [])) { - const content = Read(f.path) - Edit({ file_path: f.path, old_string: "...", new_string: "..." 
 })
-      changedFiles.push(f.path)
-    }
-  } else if (executor === 'agent') {
-    // Agent execution (synchronous)
-    Task({
-      subagent_type: "code-developer",
-      run_in_background: false,
-      description: batch.tasks.map(t => t.title).join(' | '),
-      prompt: batchPrompt
-    })
-    batch.tasks.forEach(t => (t.files || []).forEach(f => changedFiles.push(f.path)))
-  } else if (executor === 'codex') {
-    // Codex CLI execution (background)
-    Bash(
-      `ccw cli -p "${batchPrompt}" --tool codex --mode write --id ${batchId}`,
-      { run_in_background: true }
-    )
-    // STOP - CLI runs in the background; wait for the task hook callback
-    batch.tasks.forEach(t => (t.files || []).forEach(f => changedFiles.push(f.path)))
-  } else if (executor === 'gemini') {
-    // Gemini CLI execution (background)
-    Bash(
-      `ccw cli -p "${batchPrompt}" --tool gemini --mode write --id ${batchId}`,
-      { run_in_background: true }
-    )
-    // STOP - CLI runs in the background; wait for the task hook callback
-    batch.tasks.forEach(t => (t.files || []).forEach(f => changedFiles.push(f.path)))
-  }
-
-  // Progress update
-  mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "executor", to: "coordinator", type: "impl_progress", summary: `Batch完成 (${executor}): ${changedFiles.length}个文件已变更` })
-}
-
-function isSimpleTask(task) {
-  return (task.files || []).length <= 2 && (task.risks || []).length === 0
-}
-```
-
-### Phase 4: Self-Validation
-
-```javascript
-// Syntax check
-const syntaxResult = Bash(`tsc --noEmit 2>&1 || true`)
-const hasSyntaxErrors = syntaxResult.includes('error TS')
-if (hasSyntaxErrors) { /* attempt auto-fix */ }
-
-// Verify acceptance criteria
-const acceptanceStatus = planTasks.map(t => ({
-  title: t.title,
-  criteria: (t.convergence?.criteria || []).map(c => ({ criterion: c, met: true }))
-}))
-
-// Run affected tests (if identifiable)
-const testFiles = changedFiles
-  .map(f => f.replace(/\/src\//, '/tests/').replace(/\.(ts|js)$/, '.test.$1'))
-  .filter(f => Bash(`test -f ${f} && echo exists || true`).includes('exists'))
-if (testFiles.length >
0) Bash(`npx jest ${testFiles.join(' ')} --passWithNoTests 2>&1 || true`) - -// Optional: Code review (if configured by coordinator) -if (codeReview !== 'Skip') { - if (codeReview === 'Gemini Review' || codeReview === 'Gemini') { - Bash(`ccw cli -p "PURPOSE: Code review for IMPL changes against plan convergence criteria -TASK: • Verify convergence criteria • Check test coverage • Analyze code quality -MODE: analysis -CONTEXT: @**/* | Memory: Review lifecycle IMPL execution -EXPECTED: Quality assessment with issue identification -CONSTRAINTS: analysis=READ-ONLY" --tool gemini --mode analysis --id ${sessionId}-review`, - { run_in_background: true }) - } else if (codeReview === 'Codex Review' || codeReview === 'Codex') { - Bash(`ccw cli --tool codex --mode review --uncommitted`, - { run_in_background: true }) - } -} -``` - -### Phase 5: Report to Coordinator - -```javascript -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "executor", to: "coordinator", - type: "impl_complete", - summary: `IMPL完成 (${executor}): ${[...new Set(changedFiles)].length}个文件变更, syntax=${hasSyntaxErrors ? 'errors' : 'clean'}` -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `## Implementation Complete - -**Task**: ${task.subject} -**Executor**: ${executor} -**Code Review**: ${codeReview} - -### Changed Files -${[...new Set(changedFiles)].map(f => '- ' + f).join('\n')} - -### Acceptance Criteria -${acceptanceStatus.map(t => '**' + t.title + '**: ' + (t.criteria.every(c => c.met) ? 'All met' : 'Partial')).join('\n')} - -### Validation -- Syntax: ${hasSyntaxErrors ? 'Has errors (attempted fix)' : 'Clean'} -- Tests: ${testFiles.length > 0 ? 'Ran' : 'N/A'} -${executor !== 'agent' ? 
`- CLI Resume ID: ${sessionId}-B*` : ''} - -Implementation is ready for testing and review.`, - summary: `IMPL complete (${executor}): ${[...new Set(changedFiles)].length} files changed` -}) - -TaskUpdate({ taskId: task.id, status: 'completed' }) - -// Check for next IMPL task → back to Phase 1 -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No IMPL-* tasks available | Idle, wait for coordinator assignment | -| Plan file not found | Notify coordinator, request plan location | -| Unknown execution_method | Fallback to `agent` with warning | -| Syntax errors after implementation | Attempt auto-fix, report remaining errors | -| Agent (code-developer) failure | Retry once, then attempt direct implementation | -| CLI (Codex/Gemini) failure | Provide resume command with fixed ID, report error | -| CLI timeout | Use fixed ID `${sessionId}-B*` for resume | -| File conflict / merge issue | Notify coordinator, request guidance | -| Test failures in self-validation | Report in completion message, let tester handle | -| Circular dependencies in plan | Execute in plan order, ignore dependency chain | -| Unexpected error | Log error via team_msg, report to coordinator | diff --git a/.claude/skills/team-lifecycle/roles/planner.md b/.claude/skills/team-lifecycle/roles/planner.md deleted file mode 100644 index d13b2c79..00000000 --- a/.claude/skills/team-lifecycle/roles/planner.md +++ /dev/null @@ -1,298 +0,0 @@ -# Role: planner - -Multi-angle code exploration and structured implementation planning. Submits plans to the coordinator for approval. 
- -## Role Identity - -- **Name**: `planner` -- **Task Prefix**: `PLAN-*` -- **Responsibility**: Code exploration → Implementation planning → Coordinator approval -- **Communication**: SendMessage to coordinator only - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `plan_ready` | planner → coordinator | Plan generation complete | With plan.json path and task count summary | -| `plan_revision` | planner → coordinator | Plan revised and resubmitted | Describes changes made | -| `impl_progress` | planner → coordinator | Exploration phase progress | Optional, for long explorations | -| `error` | planner → coordinator | Unrecoverable error | Exploration failure, schema missing, etc. | - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Plan ready -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "plan_ready", summary: "Plan ready: 3 tasks, Medium complexity", ref: `${sessionFolder}/plan/plan.json` }) - -// Plan revision -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "plan_revision", summary: "Split task-2 into two subtasks per feedback" }) - -// Error report -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "error", summary: "plan-overview-base-schema.json not found, using default structure" }) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "planner" --to "coordinator" --type "plan_ready" --summary "Plan ready: 3 tasks" --ref "${sessionFolder}/plan/plan.json" --json`) -``` - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('PLAN-') && - t.owner === 'planner' && - t.status === 
'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 1.5: Load Spec Context (Full-Lifecycle Mode) - -```javascript -// Extract session folder from task description (set by coordinator) -const sessionMatch = task.description.match(/Session:\s*(.+)/) -const sessionFolder = sessionMatch ? sessionMatch[1].trim() : `.workflow/.team/default` -const planDir = `${sessionFolder}/plan` -Bash(`mkdir -p ${planDir}`) - -// Check if spec directory exists (full-lifecycle mode) -const specDir = `${sessionFolder}/spec` -let specContext = null -try { - const reqIndex = Read(`${specDir}/requirements/_index.md`) - const archIndex = Read(`${specDir}/architecture/_index.md`) - const epicsIndex = Read(`${specDir}/epics/_index.md`) - const specConfig = JSON.parse(Read(`${specDir}/spec-config.json`)) - specContext = { reqIndex, archIndex, epicsIndex, specConfig } -} catch { /* impl-only mode has no spec */ } -``` - -### Phase 2: Multi-Angle Exploration - -```javascript - -// Complexity assessment -function assessComplexity(desc) { - let score = 0 - if (/refactor|architect|restructure|模块|系统/.test(desc)) score += 2 - if (/multiple|多个|across|跨/.test(desc)) score += 2 - if (/integrate|集成|api|database/.test(desc)) score += 1 - if (/security|安全|performance|性能/.test(desc)) score += 1 - return score >= 4 ? 'High' : score >= 2 ? 
'Medium' : 'Low' -} - -const complexity = assessComplexity(task.description) - -// Angle selection based on task type -const ANGLE_PRESETS = { - architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'], - security: ['security', 'auth-patterns', 'dataflow', 'validation'], - performance: ['performance', 'bottlenecks', 'caching', 'data-access'], - bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'], - feature: ['patterns', 'integration-points', 'testing', 'dependencies'] -} - -function selectAngles(desc, count) { - const text = desc.toLowerCase() - let preset = 'feature' - if (/refactor|architect|restructure|modular/.test(text)) preset = 'architecture' - else if (/security|auth|permission|access/.test(text)) preset = 'security' - else if (/performance|slow|optimi|cache/.test(text)) preset = 'performance' - else if (/fix|bug|error|issue|broken/.test(text)) preset = 'bugfix' - return ANGLE_PRESETS[preset].slice(0, count) -} - -const angleCount = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1) -const selectedAngles = selectAngles(task.description, angleCount) - -// Execute exploration -if (complexity === 'Low') { - // Direct exploration via semantic search - const results = mcp__ace-tool__search_context({ - project_root_path: projectRoot, - query: task.description - }) - Write(`${planDir}/exploration-${selectedAngles[0]}.json`, JSON.stringify({ - project_structure: "...", - relevant_files: [], - patterns: [], - dependencies: [], - integration_points: [], - constraints: [], - clarification_needs: [], - _metadata: { exploration_angle: selectedAngles[0] } - }, null, 2)) -} else { - // Launch parallel cli-explore-agent for each angle - selectedAngles.forEach((angle, index) => { - Task({ - subagent_type: "cli-explore-agent", - run_in_background: false, - description: `Explore: ${angle}`, - prompt: ` -## Task Objective -Execute **${angle}** exploration for task planning context. 
- -## Output Location -**Session Folder**: ${sessionFolder} -**Output File**: ${planDir}/exploration-${angle}.json - -## Assigned Context -- **Exploration Angle**: ${angle} -- **Task Description**: ${task.description} -- **Spec Context**: ${specContext ? 'Available — use spec/requirements, spec/architecture, spec/epics for informed exploration' : 'Not available (impl-only mode)'} -- **Exploration Index**: ${index + 1} of ${selectedAngles.length} - -## MANDATORY FIRST STEPS -1. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files) -2. Execute: cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json (get output schema) -3. Read: .workflow/project-tech.json (if exists - technology stack) - -## Expected Output -Write JSON to: ${planDir}/exploration-${angle}.json -Follow explore-json-schema.json structure with ${angle}-focused findings. - -**MANDATORY**: Every file in relevant_files MUST have: -- **rationale** (required): Specific selection basis tied to ${angle} topic (>10 chars, not generic) -- **role** (required): modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only -- **discovery_source** (recommended): bash-scan|cli-analysis|ace-search|dependency-trace|manual -- **key_symbols** (recommended): Key functions/classes/types relevant to task -` - }) - }) -} - -// Build explorations manifest -const explorationManifest = { - session_id: `${taskSlug}-${dateStr}`, - task_description: task.description, - complexity: complexity, - exploration_count: selectedAngles.length, - explorations: selectedAngles.map(angle => ({ - angle: angle, - file: `exploration-${angle}.json`, - path: `${planDir}/exploration-${angle}.json` - })) -} -Write(`${planDir}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2)) -``` - -### Phase 3: Plan Generation - -```javascript -// Read schema reference -const schema = Bash(`cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json`) - -if 
(complexity === 'Low') { - // Direct Claude planning - Bash(`mkdir -p ${planDir}/.task`) - // Generate plan.json + .task/TASK-*.json following schemas -} else { - // Use cli-lite-planning-agent for Medium/High - Task({ - subagent_type: "cli-lite-planning-agent", - run_in_background: false, - description: "Generate detailed implementation plan", - prompt: `Generate implementation plan. -Output: ${planDir}/plan.json + ${planDir}/.task/TASK-*.json -Schema: cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json -Task Description: ${task.description} -Explorations: ${explorationManifest} -Complexity: ${complexity} -${specContext ? `Spec Context: -- Requirements: ${specContext.reqIndex.substring(0, 500)} -- Architecture: ${specContext.archIndex.substring(0, 500)} -- Epics: ${specContext.epicsIndex.substring(0, 500)} -Reference REQ-* IDs, follow ADR decisions, reuse Epic/Story decomposition.` : ''} -Requirements: 2-7 tasks, each with id, title, files[].change, convergence.criteria, depends_on` - }) -} -``` - -### Phase 4: Submit for Approval - -```javascript -const plan = JSON.parse(Read(`${planDir}/plan.json`)) -const planTasks = plan.task_ids.map(id => JSON.parse(Read(`${planDir}/.task/${id}.json`))) -const taskCount = plan.task_count || plan.task_ids.length - -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "planner", to: "coordinator", - type: "plan_ready", - summary: `Plan就绪: ${taskCount}个task, ${complexity}复杂度`, - ref: `${planDir}/plan.json` -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `## Plan Ready for Review - -**Task**: ${task.subject} -**Complexity**: ${complexity} -**Tasks**: ${taskCount} - -### Task Summary -${planTasks.map((t, i) => (i+1) + '. 
' + t.title).join('\n')} - -### Approach -${plan.approach} - -### Plan Location -${planDir}/plan.json -Task Files: ${planDir}/.task/ - -Please review and approve or request revisions.`, - summary: `Plan ready: ${taskCount} tasks` -}) - -// Wait for coordinator response (approve → mark completed, revision → update and resubmit) -``` - -### Phase 5: After Approval - -```javascript -TaskUpdate({ taskId: task.id, status: 'completed' }) - -// Check for next PLAN task → back to Phase 1 -``` - -## Session Files - -``` -{sessionFolder}/plan/ -├── exploration-{angle}.json -├── explorations-manifest.json -├── planning-context.md -├── plan.json -└── .task/ - └── TASK-*.json -``` - -> **Note**: `sessionFolder` is extracted from task description (`Session: .workflow/.team/TLS-xxx`). Plan outputs go to `plan/` subdirectory. In full-lifecycle mode, spec products are available at `../spec/`. - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No PLAN-* tasks available | Idle, wait for coordinator assignment | -| Exploration agent failure | Skip exploration, plan from task description only | -| Planning agent failure | Fallback to direct Claude planning | -| Plan rejected 3+ times | Notify coordinator, suggest alternative approach | -| Schema file not found | Use basic plan structure without schema validation | -| Unexpected error | Log error via team_msg, report to coordinator | diff --git a/.claude/skills/team-lifecycle/roles/reviewer.md b/.claude/skills/team-lifecycle/roles/reviewer.md deleted file mode 100644 index f81cbbfc..00000000 --- a/.claude/skills/team-lifecycle/roles/reviewer.md +++ /dev/null @@ -1,622 +0,0 @@ -# Role: reviewer - -Unified review role handling both code review (REVIEW-*) and specification quality checks (QUALITY-*). Auto-switches behavior based on task prefix. 
- -## Role Identity - -- **Name**: `reviewer` -- **Task Prefix**: `REVIEW-*` + `QUALITY-*` -- **Responsibility**: Discover Task → Branch by Prefix → Review/Score → Report -- **Communication**: SendMessage to coordinator only - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `review_result` | reviewer → coordinator | Code review complete | With verdict (APPROVE/CONDITIONAL/BLOCK) and findings | -| `quality_result` | reviewer → coordinator | Spec quality check complete | With score and gate decision (PASS/REVIEW/FAIL) | -| `fix_required` | reviewer → coordinator | Critical issues found | Needs IMPL-fix or DRAFT-fix tasks | -| `error` | reviewer → coordinator | Review cannot proceed | Plan missing, documents missing, etc. | - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Code review result -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "reviewer", to: "coordinator", type: "review_result", summary: "REVIEW APPROVE: 8 findings (critical=0, high=2)", data: { verdict: "APPROVE", critical: 0, high: 2 } }) - -// Spec quality result -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "reviewer", to: "coordinator", type: "quality_result", summary: "Quality check PASS: 85.0 score", data: { gate: "PASS", score: 85.0 } }) - -// Fix required -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "reviewer", to: "coordinator", type: "fix_required", summary: "Critical security issues found, IMPL-fix needed", data: { critical: 2 } }) - -// Error report -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "reviewer", to: "coordinator", type: "error", summary: "plan.json not found, cannot verify requirements" }) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "reviewer" --to "coordinator" 
--type "review_result" --summary "REVIEW APPROVE: 8 findings" --data '{"verdict":"APPROVE","critical":0}' --json`) -``` - -## Execution (5-Phase) - -### Phase 1: Task Discovery (Dual-Prefix) - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - (t.subject.startsWith('REVIEW-') || t.subject.startsWith('QUALITY-')) && - t.owner === 'reviewer' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) - -// Determine review mode -const reviewMode = task.subject.startsWith('REVIEW-') ? 'code' : 'spec' -``` - -### Phase 2: Context Loading (Branch by Mode) - -**Code Review Mode (REVIEW-*):** - -```javascript -if (reviewMode === 'code') { - // Load plan for acceptance criteria - const planPathMatch = task.description.match(/\.workflow\/\.team\/[^\s]+\/plan\/plan\.json/) - let plan = null - if (planPathMatch) { - try { plan = JSON.parse(Read(planPathMatch[0])) } catch {} - } - - // Get changed files via git - const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`) - .split('\n').filter(f => f.trim() && !f.startsWith('.')) - - // Read changed file contents (limit to 20 files) - const fileContents = {} - for (const file of changedFiles.slice(0, 20)) { - try { fileContents[file] = Read(file) } catch {} - } - - // Load test results if available - const testSummary = tasks.find(t => t.subject.startsWith('TEST-') && t.status === 'completed') -} -``` - -**Spec Quality Mode (QUALITY-*):** - -```javascript -if (reviewMode === 'spec') { - const sessionMatch = task.description.match(/Session:\s*(.+)/) - const sessionFolder = sessionMatch ? 
sessionMatch[1].trim() : ''
-
-  // Load quality gate criteria (references spec-generator shared resources)
-  let qualityGates = null
-  try { qualityGates = Read('../specs/quality-gates.md') } catch {}
-
-  // Load all spec documents
-  const documents = {
-    config: null, discoveryContext: null, productBrief: null,
-    requirementsIndex: null, requirements: [], architectureIndex: null,
-    adrs: [], epicsIndex: null, epics: [], discussions: []
-  }
-
-  try { documents.config = JSON.parse(Read(`${sessionFolder}/spec/spec-config.json`)) } catch {}
-  try { documents.discoveryContext = JSON.parse(Read(`${sessionFolder}/spec/discovery-context.json`)) } catch {}
-  try { documents.productBrief = Read(`${sessionFolder}/spec/product-brief.md`) } catch {}
-  try { documents.requirementsIndex = Read(`${sessionFolder}/spec/requirements/_index.md`) } catch {}
-  try { documents.architectureIndex = Read(`${sessionFolder}/spec/architecture/_index.md`) } catch {}
-  try { documents.epicsIndex = Read(`${sessionFolder}/spec/epics/_index.md`) } catch {}
-
-  // Load individual documents
-  Glob({ pattern: `${sessionFolder}/spec/requirements/REQ-*.md` }).forEach(f => { try { documents.requirements.push(Read(f)) } catch {} })
-  Glob({ pattern: `${sessionFolder}/spec/requirements/NFR-*.md` }).forEach(f => { try { documents.requirements.push(Read(f)) } catch {} })
-  Glob({ pattern: `${sessionFolder}/spec/architecture/ADR-*.md` }).forEach(f => { try { documents.adrs.push(Read(f)) } catch {} })
-  Glob({ pattern: `${sessionFolder}/spec/epics/EPIC-*.md` }).forEach(f => { try { documents.epics.push(Read(f)) } catch {} })
-  Glob({ pattern: `${sessionFolder}/discussions/discuss-*.md` }).forEach(f => { try { documents.discussions.push(Read(f)) } catch {} })
-
-  const docInventory = {
-    config: !!documents.config, discoveryContext: !!documents.discoveryContext,
-    productBrief: !!documents.productBrief, requirements: documents.requirements.length > 0,
-    architecture: documents.adrs.length > 0, epics: documents.epics.length > 0,
-    discussions:
documents.discussions.length - } -} -``` - -### Phase 3: Review Execution (Branch by Mode) - -**Code Review — 4-Dimension Analysis:** - -```javascript -if (reviewMode === 'code') { - const findings = { critical: [], high: [], medium: [], low: [] } - - // Quality: @ts-ignore, any, console.log, empty catch - const qualityIssues = reviewQuality(changedFiles) - - // Security: eval/exec/innerHTML, hardcoded secrets, SQL injection, XSS - const securityIssues = reviewSecurity(changedFiles) - - // Architecture: circular deps, large files, layering violations - const architectureIssues = reviewArchitecture(changedFiles, fileContents) - - // Requirement Verification: plan acceptance criteria vs implementation - const requirementIssues = plan ? verifyRequirements(plan, fileContents) : [] - - const allIssues = [...qualityIssues, ...securityIssues, ...architectureIssues, ...requirementIssues] - allIssues.forEach(issue => findings[issue.severity].push(issue)) - - // Verdict determination - const hasCritical = findings.critical.length > 0 - const verdict = hasCritical ? 'BLOCK' : findings.high.length > 3 ? 
'CONDITIONAL' : 'APPROVE' -} -``` - -Review dimension functions: - -```javascript -function reviewQuality(files) { - const issues = [] - const tsIgnore = Grep({ pattern: '@ts-ignore|@ts-expect-error', glob: '*.{ts,tsx}', output_mode: 'content' }) - if (tsIgnore) issues.push({ type: 'quality', detail: '@ts-ignore/@ts-expect-error usage', severity: 'medium' }) - const anyType = Grep({ pattern: ': any[^A-Z]|as any', glob: '*.{ts,tsx}', output_mode: 'content' }) - if (anyType) issues.push({ type: 'quality', detail: 'Untyped `any` usage', severity: 'medium' }) - const consoleLogs = Grep({ pattern: 'console\\.log', glob: '*.{ts,tsx,js,jsx}', path: 'src/', output_mode: 'content' }) - if (consoleLogs) issues.push({ type: 'quality', detail: 'console.log in source code', severity: 'low' }) - const emptyCatch = Grep({ pattern: 'catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}', glob: '*.{ts,tsx,js,jsx}', output_mode: 'content', multiline: true }) - if (emptyCatch) issues.push({ type: 'quality', detail: 'Empty catch blocks', severity: 'high' }) - return issues -} - -function reviewSecurity(files) { - const issues = [] - const dangerousFns = Grep({ pattern: '\\beval\\b|\\bexec\\b|innerHTML|dangerouslySetInnerHTML', glob: '*.{ts,tsx,js,jsx}', output_mode: 'content' }) - if (dangerousFns) issues.push({ type: 'security', detail: 'Dangerous function: eval/exec/innerHTML', severity: 'critical' }) - const secrets = Grep({ pattern: 'password\\s*=\\s*["\']|secret\\s*=\\s*["\']|api_key\\s*=\\s*["\']', glob: '*.{ts,tsx,js,jsx,py}', output_mode: 'content', '-i': true }) - if (secrets) issues.push({ type: 'security', detail: 'Hardcoded secrets/passwords', severity: 'critical' }) - const sqlInjection = Grep({ pattern: 'query\\s*\\(\\s*`|execute\\s*\\(\\s*`', glob: '*.{ts,js,py}', output_mode: 'content', '-i': true }) - if (sqlInjection) issues.push({ type: 'security', detail: 'Potential SQL injection via template literals', severity: 'critical' }) - const xssRisk = Grep({ pattern: 
'document\\.write|window\\.location\\s*=', glob: '*.{ts,tsx,js,jsx}', output_mode: 'content' }) - if (xssRisk) issues.push({ type: 'security', detail: 'Potential XSS vectors', severity: 'high' }) - return issues -} - -function reviewArchitecture(files, fileContents) { - const issues = [] - for (const [file, content] of Object.entries(fileContents)) { - const imports = content.match(/from\s+['"]([^'"]+)['"]/g) || [] - if (imports.filter(i => i.includes('../..')).length > 2) { - issues.push({ type: 'architecture', detail: `${file}: excessive parent imports (layering violation)`, severity: 'medium' }) - } - if (content.split('\n').length > 500) { - issues.push({ type: 'architecture', detail: `${file}: ${content.split('\n').length} lines - consider splitting`, severity: 'low' }) - } - } - return issues -} - -function verifyRequirements(plan, fileContents) { - const issues = [] - for (const planTask of (plan.tasks || [])) { - for (const criterion of (planTask.acceptance || [])) { - const keywords = criterion.toLowerCase().split(/\s+/).filter(w => w.length > 4) - const hasEvidence = keywords.some(kw => Object.values(fileContents).some(c => c.toLowerCase().includes(kw))) - if (!hasEvidence) { - issues.push({ type: 'requirement', detail: `Acceptance criterion may not be met: "${criterion}" (${planTask.title})`, severity: 'high' }) - } - } - } - return issues -} -``` - -**Spec Quality — 5-Dimension Scoring:** - -```javascript -if (reviewMode === 'spec') { - const scores = { completeness: 0, consistency: 0, traceability: 0, depth: 0, requirementCoverage: 0 } - - // Completeness (20%): all sections present with content - function scoreCompleteness(docs) { - let score = 0 - const issues = [] - const checks = [ - { name: 'spec-config.json', present: !!docs.config, weight: 5 }, - { name: 'discovery-context.json', present: !!docs.discoveryContext, weight: 10 }, - { name: 'product-brief.md', present: !!docs.productBrief, weight: 20 }, - { name: 'requirements/_index.md', present:
!!docs.requirementsIndex, weight: 15 }, - { name: 'REQ-* files', present: docs.requirements.length > 0, weight: 10 }, - { name: 'architecture/_index.md', present: !!docs.architectureIndex, weight: 15 }, - { name: 'ADR-* files', present: docs.adrs.length > 0, weight: 10 }, - { name: 'epics/_index.md', present: !!docs.epicsIndex, weight: 10 }, - { name: 'EPIC-* files', present: docs.epics.length > 0, weight: 5 } - ] - checks.forEach(c => { if (c.present) score += c.weight; else issues.push(`Missing: ${c.name}`) }) - - // Enhancement: section content check (verify that key sections carry substantive content, not merely that the files exist) - if (docs.productBrief) { - const briefSections = ['## Vision', '## Problem Statement', '## Target Users', '## Goals', '## Scope'] - const missingSections = briefSections.filter(s => !docs.productBrief.includes(s)) - if (missingSections.length > 0) { - score -= missingSections.length * 3 - issues.push(`Product Brief missing sections: ${missingSections.join(', ')}`) - } - } - - if (docs.requirementsIndex) { - const reqSections = ['## Functional Requirements', '## Non-Functional Requirements', '## MoSCoW Summary'] - const missingReqSections = reqSections.filter(s => !docs.requirementsIndex.includes(s)) - if (missingReqSections.length > 0) { - score -= missingReqSections.length * 3 - issues.push(`Requirements index missing sections: ${missingReqSections.join(', ')}`) - } - } - - if (docs.architectureIndex) { - const archSections = ['## Architecture Decision Records', '## Technology Stack'] - const missingArchSections = archSections.filter(s => !docs.architectureIndex.includes(s)) - if (missingArchSections.length > 0) { - score -= missingArchSections.length * 3 - issues.push(`Architecture index missing sections: ${missingArchSections.join(', ')}`) - } - if (!docs.architectureIndex.includes('```mermaid')) { - score -= 5 - issues.push('Architecture index missing Mermaid component diagram') - } - } - - if (docs.epicsIndex) { - const epicsSections = ['## Epic Overview', '## MVP Scope'] - const
missingEpicsSections = epicsSections.filter(s => !docs.epicsIndex.includes(s)) - if (missingEpicsSections.length > 0) { - score -= missingEpicsSections.length * 3 - issues.push(`Epics index missing sections: ${missingEpicsSections.join(', ')}`) - } - if (!docs.epicsIndex.includes('```mermaid')) { - score -= 5 - issues.push('Epics index missing Mermaid dependency diagram') - } - } - - return { score: Math.max(0, score), issues } - } - - // Consistency (20%): terminology, format, references - function scoreConsistency(docs) { - let score = 100 - const issues = [] - const sessionId = docs.config?.session_id - if (sessionId && docs.productBrief && !docs.productBrief.includes(sessionId)) { - score -= 15; issues.push('Product Brief missing session_id reference') - } - const docsWithFM = [docs.productBrief, docs.requirementsIndex, docs.architectureIndex, docs.epicsIndex].filter(Boolean) - const hasFM = docsWithFM.map(d => /^---\n[\s\S]+?\n---/.test(d)) - if (!hasFM.every(v => v === hasFM[0])) { - score -= 20; issues.push('Inconsistent YAML frontmatter across documents') - } - return { score: Math.max(0, score), issues } - } - - // Traceability (20%): goals → reqs → arch → stories chain - function scoreTraceability(docs) { - let score = 0 - const issues = [] - if (docs.productBrief && docs.requirementsIndex) { - if (docs.requirements.some(r => /goal|brief|vision/i.test(r))) score += 25 - else issues.push('Requirements lack references to Product Brief goals') - } - if (docs.requirementsIndex && docs.architectureIndex) { - if (docs.adrs.some(a => /REQ-|requirement/i.test(a))) score += 25 - else issues.push('Architecture ADRs lack requirement references') - } - if (docs.requirementsIndex && docs.epicsIndex) { - if (docs.epics.some(e => /REQ-|requirement/i.test(e))) score += 25 - else issues.push('Epics/Stories lack requirement tracing') - } - if (score >= 50) score += 25 - return { score: Math.min(100, score), issues } - } - - // Depth (20%): AC testable, ADRs justified,
stories estimable - function scoreDepth(docs) { - let score = 100 - const issues = [] - if (!docs.requirements.some(r => /acceptance|criteria|验收/i.test(r) && r.length > 200)) { - score -= 25; issues.push('Acceptance criteria may lack specificity') - } - if (docs.adrs.length > 0 && !docs.adrs.some(a => /alternative|替代|pros|cons/i.test(a))) { - score -= 25; issues.push('ADRs lack alternatives analysis') - } - if (docs.epics.length > 0 && !docs.epics.some(e => /\b[SMLX]{1,2}\b|Small|Medium|Large/.test(e))) { - score -= 25; issues.push('Stories lack size estimates') - } - if (![docs.architectureIndex, docs.epicsIndex].some(d => d && /```mermaid/.test(d))) { - score -= 10; issues.push('Missing Mermaid diagrams') - } - return { score: Math.max(0, score), issues } - } - - // Requirement Coverage (20%): original requirements → document mapping - function scoreRequirementCoverage(docs) { - let score = 100 - const issues = [] - if (!docs.discoveryContext) { - return { score: 0, issues: ['discovery-context.json missing, cannot verify requirement coverage'] } - } - const context = typeof docs.discoveryContext === 'string' ? 
JSON.parse(docs.discoveryContext) : docs.discoveryContext - const dimensions = context.seed_analysis?.exploration_dimensions || [] - const constraints = context.seed_analysis?.constraints || [] - const userSupplements = context.seed_analysis?.user_supplements || '' - const allRequirements = [...dimensions, ...constraints] - if (userSupplements) allRequirements.push(userSupplements) - - if (allRequirements.length === 0) { - return { score: 100, issues: [] } // No requirements to check - } - - const allDocContent = [docs.productBrief, docs.requirementsIndex, docs.architectureIndex, docs.epicsIndex, - ...docs.requirements, ...docs.adrs, ...docs.epics].filter(Boolean).join('\n').toLowerCase() - - let covered = 0 - for (const req of allRequirements) { - const keywords = req.toLowerCase().split(/[\s,;]+/).filter(w => w.length > 2) - const isCovered = keywords.some(kw => allDocContent.includes(kw)) - if (isCovered) { covered++ } - else { issues.push(`Requirement not covered in documents: "${req}"`) } - } - - score = Math.round((covered / allRequirements.length) * 100) - return { score, issues } - } - - const completenessResult = scoreCompleteness(documents) - const consistencyResult = scoreConsistency(documents) - const traceabilityResult = scoreTraceability(documents) - const depthResult = scoreDepth(documents) - const coverageResult = scoreRequirementCoverage(documents) - - scores.completeness = completenessResult.score - scores.consistency = consistencyResult.score - scores.traceability = traceabilityResult.score - scores.depth = depthResult.score - scores.requirementCoverage = coverageResult.score - - const overallScore = (scores.completeness + scores.consistency + scores.traceability + scores.depth + scores.requirementCoverage) / 5 - const qualityGate = (overallScore >= 80 && scores.requirementCoverage >= 70) ? 'PASS' : - (overallScore < 60 || scores.requirementCoverage < 50) ? 
'FAIL' : 'REVIEW' - const allSpecIssues = [...completenessResult.issues, ...consistencyResult.issues, ...traceabilityResult.issues, ...depthResult.issues, ...coverageResult.issues] -} -``` - -### Phase 4: Report Generation (Branch by Mode) - -**Code Review — Generate Recommendations:** - -```javascript -if (reviewMode === 'code') { - const totalIssues = Object.values(findings).flat().length - const recommendations = [] - if (hasCritical) recommendations.push('Fix all critical security issues before merging') - if (findings.high.length > 0) recommendations.push('Address high severity issues in a follow-up') - if (findings.medium.length > 3) recommendations.push('Consider refactoring to reduce medium severity issues') -} -``` - -**Spec Quality — Generate Reports:** - -```javascript -if (reviewMode === 'spec') { - // Generate readiness-report.md - const readinessReport = `--- -session_id: ${documents.config?.session_id || 'unknown'} -phase: 6 -document_type: readiness-report -status: complete -generated_at: ${new Date().toISOString()} -version: 1 ---- - -# Readiness Report - -## Quality Scores -| Dimension | Score | Weight | -|-----------|-------|--------| -| Completeness | ${scores.completeness}% | 20% | -| Consistency | ${scores.consistency}% | 20% | -| Traceability | ${scores.traceability}% | 20% | -| Depth | ${scores.depth}% | 20% | -| Requirement Coverage | ${scores.requirementCoverage}% | 20% | -| **Overall** | **${overallScore.toFixed(1)}%** | **100%** | - -## Quality Gate: ${qualityGate} - -## Per-Phase Quality Gates -${qualityGates ? `_(Applied from ../specs/quality-gates.md)_ - -### Phase 2 (Product Brief) -- Vision statement: ${documents.productBrief?.includes('## Vision') ? 'PASS' : 'MISSING'} -- Problem statement specificity: ${documents.productBrief?.match(/## Problem/)?.length ? 'PASS' : 'MISSING'} -- Target users >= 1: ${documents.productBrief?.includes('## Target Users') ? 'PASS' : 'MISSING'} -- Measurable goals >= 2: ${documents.productBrief?.includes('## Goals') ? 'PASS' : 'MISSING'} - -### Phase 3 (Requirements) -- Functional requirements >= 3: ${documents.requirements.length >= 3 ? 'PASS' : 'FAIL (' + documents.requirements.length + ')'} -- Acceptance criteria present: ${documents.requirements.some(r => /acceptance|criteria/i.test(r)) ? 'PASS' : 'MISSING'} -- MoSCoW priority tags: ${documents.requirementsIndex?.includes('Must') ? 'PASS' : 'MISSING'} - -### Phase 4 (Architecture) -- Component diagram: ${documents.architectureIndex?.includes('mermaid') ? 'PASS' : 'MISSING'} -- ADR with alternatives: ${documents.adrs.some(a => /alternative|option/i.test(a)) ? 'PASS' : 'MISSING'} -- Tech stack specified: ${documents.architectureIndex?.includes('Technology') ? 'PASS' : 'MISSING'} - -### Phase 5 (Epics) -- MVP subset tagged: ${documents.epics.some(e => /mvp:\s*true/i.test(e)) ? 'PASS' : 'MISSING'} -- Dependency map: ${documents.epicsIndex?.includes('mermaid') ? 'PASS' : 'MISSING'} -- Story sizing: ${documents.epics.some(e => /\b[SMLX]{1,2}\b|Small|Medium|Large/.test(e)) ? 'PASS' : 'MISSING'} -` : '_(quality-gates.md not loaded)_'} - -## Issues Found -${allSpecIssues.map(i => '- ' + i).join('\n') || 'None'} - -## Document Inventory -${Object.entries(docInventory).map(([k, v]) => '- ' + k + ': ' + (v === true ? '✓' : v === false ? '✗' : v)).join('\n')} -` - Write(`${sessionFolder}/spec/readiness-report.md`, readinessReport) - - // Generate spec-summary.md - const specSummary = `--- -session_id: ${documents.config?.session_id || 'unknown'} -phase: 6 -document_type: spec-summary -status: complete -generated_at: ${new Date().toISOString()} -version: 1 ---- - -# Specification Summary - -**Topic**: ${documents.config?.topic || 'N/A'} -**Complexity**: ${documents.config?.complexity || 'N/A'} -**Quality Score**: ${overallScore.toFixed(1)}% (${qualityGate}) -**Discussion Rounds**: ${documents.discussions.length} - -## Key Deliverables -- Product Brief: ${docInventory.productBrief ? '✓' : '✗'} -- Requirements (PRD): ${docInventory.requirements ?
'✓ (' + documents.requirements.length + ' items)' : '✗'} -- Architecture: ${docInventory.architecture ? '✓ (' + documents.adrs.length + ' ADRs)' : '✗'} -- Epics & Stories: ${docInventory.epics ? '✓ (' + documents.epics.length + ' epics)' : '✗'} - -## Next Steps -${qualityGate === 'PASS' ? '- Ready for handoff to execution workflows' : - qualityGate === 'REVIEW' ? '- Address review items, then proceed to execution' : - '- Fix critical issues before proceeding'} -` - Write(`${sessionFolder}/spec/spec-summary.md`, specSummary) -} -``` - -### Phase 5: Report to Coordinator (Branch by Mode) - -**Code Review Report:** - -```javascript -if (reviewMode === 'code') { - mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "reviewer", to: "coordinator", - type: hasCritical ? "fix_required" : "review_result", - summary: `REVIEW ${verdict}: ${totalIssues} findings (critical=${findings.critical.length}, high=${findings.high.length})`, - data: { verdict, critical: findings.critical.length, high: findings.high.length, medium: findings.medium.length, low: findings.low.length } - }) - - SendMessage({ - type: "message", - recipient: "coordinator", - content: `## Code Review Report - -**Task**: ${task.subject} -**Verdict**: ${verdict} -**Files Reviewed**: ${changedFiles.length} -**Total Findings**: ${totalIssues} - -### Finding Summary -- Critical: ${findings.critical.length} -- High: ${findings.high.length} -- Medium: ${findings.medium.length} -- Low: ${findings.low.length} - -${findings.critical.length > 0 ? '### Critical Issues\n' + findings.critical.map(f => '- [' + f.type.toUpperCase() + '] ' + f.detail).join('\n') + '\n' : ''} -${findings.high.length > 0 ? '### High Severity\n' + findings.high.map(f => '- [' + f.type.toUpperCase() + '] ' + f.detail).join('\n') + '\n' : ''} -### Recommendations -${recommendations.map(r => '- ' + r).join('\n')} - -${plan ?
'### Requirement Verification\n' + (plan.tasks || []).map(t => '- **' + t.title + '**: ' + (requirementIssues.filter(i => i.detail.includes(t.title)).length === 0 ? 'Met' : 'Needs verification')).join('\n') : ''}`, - summary: `Review: ${verdict} (${totalIssues} findings)` - }) - - if (!hasCritical) { - TaskUpdate({ taskId: task.id, status: 'completed' }) - } - // If critical, keep in_progress for coordinator to create fix tasks -} -``` - -**Spec Quality Report:** - -```javascript -if (reviewMode === 'spec') { - mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "reviewer", to: "coordinator", - type: qualityGate === 'FAIL' ? "fix_required" : "quality_result", - summary: `Quality check ${qualityGate}: score ${overallScore.toFixed(1)} (completeness ${scores.completeness}/consistency ${scores.consistency}/traceability ${scores.traceability}/depth ${scores.depth}/coverage ${scores.requirementCoverage})`, - data: { gate: qualityGate, score: overallScore, issues: allSpecIssues } - }) - - SendMessage({ - type: "message", - recipient: "coordinator", - content: `## Spec Quality Review Report - -**Task**: ${task.subject} -**Overall Score**: ${overallScore.toFixed(1)}% -**Gate**: ${qualityGate} - -### Score Details -| Dimension | Score | -|-----------|-------| -| Completeness | ${scores.completeness}% | -| Consistency | ${scores.consistency}% | -| Traceability | ${scores.traceability}% | -| Depth | ${scores.depth}% | -| Requirement Coverage | ${scores.requirementCoverage}% | - -### Issues (${allSpecIssues.length}) -${allSpecIssues.map(i => '- ' + i).join('\n') || 'None'} - -### Document Inventory -${Object.entries(docInventory).map(([k, v]) => '- ' + k + ': ' + (typeof v === 'boolean' ? (v ? '✓' : '✗') : v)).join('\n')} - -### Output Locations -- Readiness report: ${sessionFolder}/spec/readiness-report.md -- Spec summary: ${sessionFolder}/spec/spec-summary.md - -${qualityGate === 'PASS' ? 'Quality gate passed; ready to proceed to final discussion round DISCUSS-006.' : - qualityGate === 'REVIEW' ?
'Quality is broadly acceptable with room for improvement; recommend reviewing it in the discussion round.' : - 'Quality gate failed; recommend creating DRAFT-fix tasks to repair critical issues.'}`, - summary: `Quality ${qualityGate}: score ${overallScore.toFixed(1)}` - }) - - if (qualityGate !== 'FAIL') { - TaskUpdate({ taskId: task.id, status: 'completed' }) - } -} - -// Check for next REVIEW-* or QUALITY-* task → back to Phase 1 -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No REVIEW-*/QUALITY-* tasks available | Idle, wait for coordinator assignment | -| Plan file not found (code review) | Review without requirement verification, note in report | -| No changed files detected | Report to coordinator, may need manual file list | -| Documents missing (spec quality) | Score as 0 for completeness, report to coordinator | -| Cannot parse YAML frontmatter | Skip consistency check for that document | -| Grep pattern errors | Skip specific check, continue with remaining | -| CLI analysis timeout | Report partial results, note incomplete analysis | -| Session folder not found | Notify coordinator, request session path | -| Unexpected error | Log error via team_msg, report to coordinator | diff --git a/.claude/skills/team-lifecycle/roles/tester.md b/.claude/skills/team-lifecycle/roles/tester.md deleted file mode 100644 index c00903c2..00000000 --- a/.claude/skills/team-lifecycle/roles/tester.md +++ /dev/null @@ -1,294 +0,0 @@ -# Role: tester - -Adaptive test-fix cycle with progressive testing strategy. Detects test framework, applies multi-strategy fixes, and reports results to coordinator.
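The adaptive test-fix cycle just described can be condensed into a skeleton loop. This is a minimal sketch only: the injected `runTests` and `applyFix` callbacks, the option names, and the return shape are illustrative stand-ins for the framework detection, scoped reruns, and strategy selection that the tester's phases spell out in full below.

```javascript
// Sketch of the adaptive test-fix loop: run tests, stop when the pass-rate
// target is met, otherwise attempt a fix and iterate (bounded by maxIterations).
function adaptiveTestCycle(runTests, applyFix, { maxIterations = 10, passRateTarget = 95 } = {}) {
  const history = []
  for (let iteration = 1; iteration <= maxIterations; iteration++) {
    const { passRate, failedTests } = runTests()          // execute the (scoped) suite
    history.push({ iteration, passRate, failedTests })
    if (passRate >= passRateTarget) break                 // quality gate met
    if (iteration < maxIterations) applyFix(failedTests)  // attempt repairs, then re-run
  }
  const final = history[history.length - 1]
  return { success: final.passRate >= passRateTarget, iterations: history.length, history }
}
```

With a stubbed runner whose pass rate improves 80 → 90 → 100 across runs, the loop exits after the third iteration with `success: true`.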
- -## Role Identity - -- **Name**: `tester` -- **Task Prefix**: `TEST-*` -- **Responsibility**: Detect Framework → Run Tests → Fix Cycle → Report Results -- **Communication**: SendMessage to coordinator only - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `test_result` | tester → coordinator | Test cycle ends (pass or max iterations) | With pass rate, iteration count, remaining failures | -| `impl_progress` | tester → coordinator | Fix cycle intermediate progress | Optional, for long fix cycles (iteration > 5) | -| `fix_required` | tester → coordinator | Found issues beyond tester scope | Architecture/design problems needing executor | -| `error` | tester → coordinator | Framework unavailable or crash | Command not found, timeout, environment issues | - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Test result -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "tester", to: "coordinator", type: "test_result", summary: "TEST passed: 98% pass rate, 3 iterations", data: { passRate: 98, iterations: 3, total: 50, passed: 49 } }) - -// Progress update (long fix cycles) -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "tester", to: "coordinator", type: "impl_progress", summary: "Fix iteration 6: 85% pass rate" }) - -// Error report -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "tester", to: "coordinator", type: "error", summary: "vitest command not found, falling back to npm test" }) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "tester" --to "coordinator" --type "test_result" --summary "TEST passed: 98% pass rate" --data '{"passRate":98,"iterations":3}' --json`) -``` - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = 
tasks.filter(t => - t.subject.startsWith('TEST-') && - t.owner === 'tester' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - -if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Test Framework Detection - -```javascript -function detectTestFramework() { - // Check package.json - try { - const pkg = JSON.parse(Read('package.json')) - const deps = { ...pkg.dependencies, ...pkg.devDependencies } - if (deps.vitest) return { framework: 'vitest', command: 'npx vitest run' } - if (deps.jest) return { framework: 'jest', command: 'npx jest' } - if (deps.mocha) return { framework: 'mocha', command: 'npx mocha' } - } catch {} - - // Check pyproject.toml / pytest - try { - const pyproject = Read('pyproject.toml') - if (pyproject.includes('pytest')) return { framework: 'pytest', command: 'pytest' } - } catch {} - - return { framework: 'unknown', command: 'npm test' } -} - -const testConfig = detectTestFramework() - -// Locate affected test files from changed files -function findAffectedTests(changedFiles) { - const testFiles = [] - for (const file of changedFiles) { - const testVariants = [ - file.replace(/\/src\//, '/tests/').replace(/\.(ts|js|tsx|jsx)$/, '.test.$1'), - file.replace(/\/src\//, '/__tests__/').replace(/\.(ts|js|tsx|jsx)$/, '.test.$1'), - file.replace(/\.(ts|js|tsx|jsx)$/, '.test.$1'), - file.replace(/\.(ts|js|tsx|jsx)$/, '.spec.$1') - ] - for (const variant of testVariants) { - const exists = Bash(`test -f "${variant}" && echo exists || true`) - if (exists.includes('exists')) testFiles.push(variant) - } - } - return [...new Set(testFiles)] -} - -const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`).split('\n').filter(Boolean) -const affectedTests = findAffectedTests(changedFiles) -``` - -### Phase 3: Test Execution & Fix Cycle - -```javascript -const MAX_ITERATIONS = 10 -const 
PASS_RATE_TARGET = 95 - -let currentPassRate = 0 -let previousPassRate = 0 -const iterationHistory = [] - -for (let iteration = 1; iteration <= MAX_ITERATIONS; iteration++) { - // Strategy selection - const strategy = selectStrategy(iteration, currentPassRate, previousPassRate, iterationHistory) - - // Determine test scope - const isFullSuite = iteration === MAX_ITERATIONS || currentPassRate >= PASS_RATE_TARGET - const testCommand = isFullSuite - ? testConfig.command - : `${testConfig.command} ${affectedTests.join(' ')}` - - // Run tests - const testOutput = Bash(`${testCommand} 2>&1 || true`, { timeout: 300000 }) - - // Parse results - const results = parseTestResults(testOutput, testConfig.framework) - previousPassRate = currentPassRate - currentPassRate = results.passRate - - iterationHistory.push({ - iteration, pass_rate: currentPassRate, strategy, - failed_tests: results.failedTests, total: results.total, passed: results.passed - }) - - // Quality gate check - if (currentPassRate >= PASS_RATE_TARGET) { - if (!isFullSuite) { - const fullOutput = Bash(`${testConfig.command} 2>&1 || true`, { timeout: 300000 }) - const fullResults = parseTestResults(fullOutput, testConfig.framework) - currentPassRate = fullResults.passRate - if (currentPassRate >= PASS_RATE_TARGET) break - } else { - break - } - } - - if (iteration >= MAX_ITERATIONS) break - - // Apply fixes based on strategy - applyFixes(results.failedTests, strategy, testOutput) - - // Progress update for long cycles - if (iteration > 5) { - mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "tester", to: "coordinator", type: "impl_progress", summary: `Fix iteration ${iteration}: ${currentPassRate}% pass rate` }) - } -} - -// Strategy Engine -function selectStrategy(iteration, passRate, prevPassRate, history) { - // Regression detection - if (prevPassRate > 0 && passRate < prevPassRate - 10) return 'surgical' - // Iteration-based default - if (iteration <= 2) return 'conservative' - // Pattern-based
upgrade - if (passRate > 80) { - const recentFailures = history.slice(-2).flatMap(h => h.failed_tests) - const uniqueFailures = [...new Set(recentFailures)] - if (uniqueFailures.length <= recentFailures.length * 0.6) return 'aggressive' - } - return 'conservative' -} - -// Fix application -function applyFixes(failedTests, strategy, testOutput) { - switch (strategy) { - case 'conservative': - // Fix one failure at a time - read failing test, understand error, apply targeted fix - break - case 'aggressive': - // Batch fix similar failures - group by error pattern, apply fixes to all related - break - case 'surgical': - // Minimal changes, consider rollback - fix most critical failure only - break - } -} - -// Test result parser -function parseTestResults(output, framework) { - let passed = 0, failed = 0, total = 0, failedTests = [] - if (framework === 'jest' || framework === 'vitest') { - const passMatch = output.match(/(\d+) passed/) - const failMatch = output.match(/(\d+) failed/) - passed = passMatch ? parseInt(passMatch[1]) : 0 - failed = failMatch ? parseInt(failMatch[1]) : 0 - total = passed + failed - const failPattern = /FAIL\s+(.+)/g - let m - while ((m = failPattern.exec(output)) !== null) failedTests.push(m[1].trim()) - } else if (framework === 'pytest') { - const summaryMatch = output.match(/(\d+) passed.*?(\d+) failed/) - if (summaryMatch) { passed = parseInt(summaryMatch[1]); failed = parseInt(summaryMatch[2]) } - total = passed + failed - } - return { passed, failed, total, passRate: total > 0 ? 
Math.round((passed / total) * 100) : 100, failedTests } -} -``` - -### Phase 4: Result Analysis - -```javascript -function classifyFailures(failedTests) { - return failedTests.map(test => { - const testLower = test.toLowerCase() - let severity = 'low' - if (/auth|security|permission|login|password/.test(testLower)) severity = 'critical' - else if (/core|main|primary|data|state/.test(testLower)) severity = 'high' - else if (/edge|flaky|timeout|env/.test(testLower)) severity = 'low' - else severity = 'medium' - return { test, severity } - }) -} - -const classifiedFailures = classifyFailures(iterationHistory[iterationHistory.length - 1]?.failed_tests || []) - const hasCriticalFailures = classifiedFailures.some(f => f.severity === 'critical') -``` - -### Phase 5: Report to Coordinator - -```javascript -const finalIteration = iterationHistory[iterationHistory.length - 1] -const success = currentPassRate >= PASS_RATE_TARGET - -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "tester", to: "coordinator", - type: "test_result", - summary: `TEST ${success ? 'passed' : 'below target'}: ${currentPassRate}% pass rate, ${iterationHistory.length} iterations`, - data: { passRate: currentPassRate, iterations: iterationHistory.length, total: finalIteration?.total || 0, passed: finalIteration?.passed || 0 } -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `## Test Results - -**Task**: ${task.subject} -**Status**: ${success ? 'PASSED' : 'NEEDS ATTENTION'} - -### Summary -- **Pass Rate**: ${currentPassRate}% (target: ${PASS_RATE_TARGET}%) -- **Iterations**: ${iterationHistory.length}/${MAX_ITERATIONS} -- **Total Tests**: ${finalIteration?.total || 0} -- **Passed**: ${finalIteration?.passed || 0} -- **Failed**: ${(finalIteration?.total || 0) - (finalIteration?.passed || 0)} - -### Strategy History -${iterationHistory.map(h => `- Iteration ${h.iteration}: ${h.strategy} → ${h.pass_rate}%`).join('\n')} - -${!success ?
`### Remaining Failures -${classifiedFailures.map(f => `- [${f.severity.toUpperCase()}] ${f.test}`).join('\n')} - -${hasCriticalFailures ? '**CRITICAL failures detected - immediate attention required**' : ''}` : '### All tests passing'}`, - summary: `Tests: ${currentPassRate}% pass rate (${iterationHistory.length} iterations)` -}) - -if (success) { - TaskUpdate({ taskId: task.id, status: 'completed' }) -} else { - // Keep in_progress, coordinator decides next steps -} - -// Check for next TEST task → back to Phase 1 -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No TEST-* tasks available | Idle, wait for coordinator assignment | -| Test command not found | Detect framework, try alternatives (npm test, pytest, etc.) | -| Test execution timeout | Reduce test scope, retry with affected tests only | -| Regression detected (pass rate drops > 10%) | Switch to surgical strategy, consider rollback | -| Stuck tests (same failure 3+ iterations) | Report to coordinator, suggest different approach | -| Max iterations reached < 95% | Report failure details, let coordinator decide | -| No test files found | Report to coordinator, suggest test generation needed | -| Unexpected error | Log error via team_msg, report to coordinator | diff --git a/.claude/skills/team-lifecycle/roles/writer.md b/.claude/skills/team-lifecycle/roles/writer.md deleted file mode 100644 index 3349aba5..00000000 --- a/.claude/skills/team-lifecycle/roles/writer.md +++ /dev/null @@ -1,739 +0,0 @@ -# Role: writer - -Product Brief, Requirements/PRD, Architecture, and Epics & Stories document generation. Maps to spec-generator Phases 2-5. 
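The Phases 2-5 mapping above can be sketched as a lookup from DRAFT task type to its spec-generator phase and primary output document. This is a hypothetical sketch: the `DRAFT-*` key names and output paths are illustrative assumptions for this example, not values taken from the actual team configuration.

```javascript
// Hypothetical mapping of DRAFT-* task types to spec-generator phases.
// Phase numbers follow the Phases 2-5 mapping stated above; the key names
// and output file paths are assumptions, for illustration only.
const DRAFT_PHASES = {
  'DRAFT-brief':        { phase: 2, output: 'product-brief.md' },
  'DRAFT-requirements': { phase: 3, output: 'requirements/_index.md' },
  'DRAFT-architecture': { phase: 4, output: 'architecture/_index.md' },
  'DRAFT-epics':        { phase: 5, output: 'epics/_index.md' }
}

// Resolve a task subject such as "DRAFT-architecture-001" to its phase entry,
// or null when the subject is not a recognized DRAFT task.
function resolveDraftPhase(subject) {
  const key = Object.keys(DRAFT_PHASES).find(k => subject.startsWith(k))
  return key ? DRAFT_PHASES[key] : null
}
```

For example, `resolveDraftPhase('DRAFT-brief-001')` resolves to the Phase 2 entry, while a non-DRAFT subject like `TEST-001` yields `null`.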
- -## Role Identity - -- **Name**: `writer` -- **Task Prefix**: `DRAFT-*` -- **Responsibility**: Load Context → Generate Document → Incorporate Feedback → Report -- **Communication**: SendMessage to coordinator only - -## Message Types - -| Type | Direction | Trigger | Description | -|------|-----------|---------|-------------| -| `draft_ready` | writer → coordinator | Document writing complete | With document path and type | -| `draft_revision` | writer → coordinator | Document revised and resubmitted | Describes changes made | -| `impl_progress` | writer → coordinator | Long writing progress | Multi-document stage progress | -| `error` | writer → coordinator | Unrecoverable error | Template missing, insufficient context, etc. | - -## Message Bus - -Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log: - -```javascript -// Document ready -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "writer", to: "coordinator", type: "draft_ready", summary: "Product Brief complete", ref: `${sessionFolder}/product-brief.md` }) - -// Document revision -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "writer", to: "coordinator", type: "draft_revision", summary: "Requirements revised per discussion feedback" }) - -// Error report -mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "writer", to: "coordinator", type: "error", summary: "Input artifact missing, cannot generate document" }) -``` - -### CLI Fallback - -When `mcp__ccw-tools__team_msg` MCP is unavailable: - -```javascript -Bash(`ccw team log --team "${teamName}" --from "writer" --to "coordinator" --type "draft_ready" --summary "Brief complete" --ref "${sessionFolder}/product-brief.md" --json`) -``` - -## Execution (5-Phase) - -### Phase 1: Task Discovery - -```javascript -const tasks = TaskList() -const myTasks = tasks.filter(t => - t.subject.startsWith('DRAFT-') && - t.owner === 'writer' && - t.status === 'pending' && - t.blockedBy.length === 0 -) - 
-if (myTasks.length === 0) return // idle - -const task = TaskGet({ taskId: myTasks[0].id }) -TaskUpdate({ taskId: task.id, status: 'in_progress' }) -``` - -### Phase 2: Context & Discussion Loading - -```javascript -// Extract session folder from task description -const sessionMatch = task.description.match(/Session:\s*(.+)/) -const sessionFolder = sessionMatch ? sessionMatch[1].trim() : '' - -// Load session config -let specConfig = null -try { specConfig = JSON.parse(Read(`${sessionFolder}/spec/spec-config.json`)) } catch {} - -// Determine document type from task subject -const docType = task.subject.includes('Product Brief') ? 'product-brief' - : task.subject.includes('Requirements') || task.subject.includes('PRD') ? 'requirements' - : task.subject.includes('Architecture') ? 'architecture' - : task.subject.includes('Epics') ? 'epics' - : 'unknown' - -// Load discussion feedback (from preceding DISCUSS task) -const discussionFiles = { - 'product-brief': 'discussions/discuss-001-scope.md', - 'requirements': 'discussions/discuss-002-brief.md', - 'architecture': 'discussions/discuss-003-requirements.md', - 'epics': 'discussions/discuss-004-architecture.md' -} -let discussionFeedback = null -try { discussionFeedback = Read(`${sessionFolder}/${discussionFiles[docType]}`) } catch {} - -// Load prior documents progressively -const priorDocs = {} -if (docType !== 'product-brief') { - try { priorDocs.discoveryContext = Read(`${sessionFolder}/spec/discovery-context.json`) } catch {} -} -if (['requirements', 'architecture', 'epics'].includes(docType)) { - try { priorDocs.productBrief = Read(`${sessionFolder}/spec/product-brief.md`) } catch {} -} -if (['architecture', 'epics'].includes(docType)) { - try { priorDocs.requirementsIndex = Read(`${sessionFolder}/spec/requirements/_index.md`) } catch {} -} -if (docType === 'epics') { - try { priorDocs.architectureIndex = Read(`${sessionFolder}/spec/architecture/_index.md`) } catch {} -} -``` - -### Phase 3: Document Generation 
(type-specific) - -**Preamble (shared by all document types)**: - -```javascript -// 1. Load the formatting standards -const docStandards = Read('../specs/document-standards.md') - -// 2. Load the matching template file (paths listed under SKILL.md Shared Spec Resources) -const templateMap = { - 'product-brief': '../templates/product-brief.md', - 'requirements': '../templates/requirements-prd.md', - 'architecture': '../templates/architecture-doc.md', - 'epics': '../templates/epics-template.md' -} -const template = Read(templateMap[docType]) - -// 3. Build sharedContext -const seedAnalysis = specConfig?.seed_analysis || discoveryContext?.seed_analysis || {} -const sharedContext = ` -SEED: ${specConfig?.topic || ''} -PROBLEM: ${seedAnalysis.problem_statement || ''} -TARGET USERS: ${(seedAnalysis.target_users || []).join(', ')} -DOMAIN: ${seedAnalysis.domain || ''} -CONSTRAINTS: ${(seedAnalysis.constraints || []).join(', ')} -FOCUS AREAS: ${(specConfig?.focus_areas || []).join(', ')} -${priorDocs.discoveryContext ? ` -CODEBASE CONTEXT: -- Existing patterns: ${JSON.parse(priorDocs.discoveryContext).existing_patterns?.slice(0,5).join(', ') || 'none'} -- Tech stack: ${JSON.stringify(JSON.parse(priorDocs.discoveryContext).tech_stack || {})} -` : ''}` - -// 4. Route to the type-specific branch -``` - -#### DRAFT-001: Product Brief - -Three parallel CLI analyses (product / technical / user perspectives), synthesized into product-brief.md. - -```javascript -if (docType === 'product-brief') { - // === Parallel CLI analyses === - - // Product perspective (Gemini) - Bash({ - command: `ccw cli -p "PURPOSE: Product analysis for specification - identify market fit, user value, and success criteria. -Success: Clear vision, measurable goals, competitive positioning.
- -${sharedContext} - -TASK: -- Define product vision (1-3 sentences, aspirational) -- Analyze market/competitive landscape -- Define 3-5 measurable success metrics -- Identify scope boundaries (in-scope vs out-of-scope) -- Assess user value proposition -- List assumptions that need validation - -MODE: analysis -EXPECTED: Structured product analysis with: vision, goals with metrics, scope, competitive positioning, assumptions -CONSTRAINTS: Focus on 'what' and 'why', not 'how' -" --tool gemini --mode analysis`, - run_in_background: true - }) - - // Technical perspective (Codex) - Bash({ - command: `ccw cli -p "PURPOSE: Technical feasibility analysis for specification - assess implementation viability and constraints. -Success: Clear technical constraints, integration complexity, technology recommendations. - -${sharedContext} - -TASK: -- Assess technical feasibility of the core concept -- Identify technical constraints and blockers -- Evaluate integration complexity with existing systems -- Recommend technology approach (high-level) -- Identify technical risks and dependencies -- Estimate complexity: simple/moderate/complex - -MODE: analysis -EXPECTED: Technical analysis with: feasibility assessment, constraints, integration complexity, tech recommendations, risks -CONSTRAINTS: Focus on feasibility and constraints, not detailed architecture -" --tool codex --mode analysis`, - run_in_background: true - }) - - // User perspective (Claude) - Bash({ - command: `ccw cli -p "PURPOSE: User experience analysis for specification - understand user journeys, pain points, and UX considerations. -Success: Clear user personas, journey maps, UX requirements.
- -${sharedContext} - -TASK: -- Elaborate user personas with goals and frustrations -- Map primary user journey (happy path) -- Identify key pain points in current experience -- Define UX success criteria -- List accessibility and usability considerations -- Suggest interaction patterns - -MODE: analysis -EXPECTED: User analysis with: personas, journey map, pain points, UX criteria, interaction recommendations -CONSTRAINTS: Focus on user needs and experience, not implementation -" --tool claude --mode analysis`, - run_in_background: true - }) - - // STOP: Wait for all 3 CLI results - - // === Synthesize the three perspectives === - const synthesis = { - convergent_themes: [], // themes all three perspectives agree on - conflicts: [], // points where the perspectives conflict - product_insights: [], // insights unique to the product perspective - technical_insights: [], // insights unique to the technical perspective - user_insights: [] // insights unique to the user perspective - } - - // === Incorporate discussion feedback === - if (discussionFeedback) { - // Extract consensus and adjustment suggestions from discuss-001-scope.md - // Fold the discussion conclusions into synthesis - } - - // === Generate the document from the template === - const frontmatter = `--- -session_id: ${specConfig?.session_id || 'unknown'} -phase: 2 -document_type: product-brief -status: draft -generated_at: ${new Date().toISOString()} -version: 1 -dependencies: - - spec-config.json - - discovery-context.json ----` - - // Fill every template section: Vision, Problem Statement, Target Users, Goals, Scope - // Apply the document-standards.md formatting rules - - Write(`${sessionFolder}/spec/product-brief.md`, `${frontmatter}\n\n${filledContent}`) - outputPath = 'spec/product-brief.md' -} -``` - -#### DRAFT-002: Requirements/PRD - -Expands requirements via the Gemini CLI, generating REQ-NNN + NFR-{type}-NNN files. - -```javascript -if (docType === 'requirements') { - // === Requirements-expansion CLI === - Bash({ - command: `ccw cli -p "PURPOSE: Generate detailed functional and non-functional requirements from product brief. -Success: Complete PRD with testable acceptance criteria for every requirement.
- -PRODUCT BRIEF CONTEXT: -${priorDocs.productBrief?.slice(0, 3000) || ''} - -${sharedContext} - -TASK: -- For each goal in the product brief, generate 3-7 functional requirements -- Each requirement must have: - - Unique ID: REQ-NNN (zero-padded) - - Clear title - - Detailed description - - User story: As a [persona], I want [action] so that [benefit] - - 2-4 specific, testable acceptance criteria -- Generate non-functional requirements: - - Performance (response times, throughput) - - Security (authentication, authorization, data protection) - - Scalability (user load, data volume) - - Usability (accessibility, learnability) -- Assign MoSCoW priority: Must/Should/Could/Won't -- Output structure per requirement: ID, title, description, user_story, acceptance_criteria[], priority, traces - -MODE: analysis -EXPECTED: Structured requirements with: ID, title, description, user story, acceptance criteria, priority, traceability to goals -CONSTRAINTS: Every requirement must be specific enough to estimate and test. No vague requirements. -" --tool gemini --mode analysis`, - run_in_background: true - }) - - // Wait for CLI result - - // === Incorporate discussion feedback === - if (discussionFeedback) { - // Extract requirement adjustment suggestions from discuss-002-brief.md - // Merge added/modified/removed requirements - } - - // === Generate the requirements/ directory === - Bash(`mkdir -p "${sessionFolder}/spec/requirements"`) - - const timestamp = new Date().toISOString() - - // Parse CLI output → funcReqs[], nfReqs[] - const funcReqs = parseFunctionalRequirements(cliOutput) - const nfReqs = parseNonFunctionalRequirements(cliOutput) - - // Write standalone REQ-*.md files (one file per functional requirement) - funcReqs.forEach(req => { - const reqFrontmatter = `--- -id: REQ-${req.id} -title: "${req.title}" -priority: ${req.priority} -status: draft -traces: - - product-brief.md ----` - const reqContent = `${reqFrontmatter} - -# REQ-${req.id}: ${req.title} - -## Description -${req.description} - -## User Story -${req.user_story} - -## Acceptance Criteria -${req.acceptance_criteria.map((ac, i) => `${i+1}. 
${ac}`).join('\n')} -` - Write(`${sessionFolder}/spec/requirements/REQ-${req.id}-${req.slug}.md`, reqContent) - }) - - // Write standalone NFR-*.md files - nfReqs.forEach(nfr => { - const nfrFrontmatter = `--- -id: NFR-${nfr.type}-${nfr.id} -type: ${nfr.type} -title: "${nfr.title}" -status: draft -traces: - - product-brief.md ----` - const nfrContent = `${nfrFrontmatter} - -# NFR-${nfr.type}-${nfr.id}: ${nfr.title} - -## Requirement -${nfr.requirement} - -## Metric & Target -${nfr.metric} — Target: ${nfr.target} -` - Write(`${sessionFolder}/spec/requirements/NFR-${nfr.type}-${nfr.id}-${nfr.slug}.md`, nfrContent) - }) - - // Write _index.md (summary + links) - const indexFrontmatter = `--- -session_id: ${specConfig?.session_id || 'unknown'} -phase: 3 -document_type: requirements-index -status: draft -generated_at: ${timestamp} -version: 1 -dependencies: - - product-brief.md ----` - const indexContent = `${indexFrontmatter} - -# Requirements (PRD) - -## Summary -Total: ${funcReqs.length} functional + ${nfReqs.length} non-functional requirements - -## Functional Requirements -| ID | Title | Priority | Status | -|----|-------|----------|--------| -${funcReqs.map(r => `| [REQ-${r.id}](REQ-${r.id}-${r.slug}.md) | ${r.title} | ${r.priority} | draft |`).join('\n')} - -## Non-Functional Requirements -| ID | Type | Title | -|----|------|-------| -${nfReqs.map(n => `| [NFR-${n.type}-${n.id}](NFR-${n.type}-${n.id}-${n.slug}.md) | ${n.type} | ${n.title} |`).join('\n')} - -## MoSCoW Summary -- **Must**: ${funcReqs.filter(r => r.priority === 'Must').length} -- **Should**: ${funcReqs.filter(r => r.priority === 'Should').length} -- **Could**: ${funcReqs.filter(r => r.priority === 'Could').length} -- **Won't**: ${funcReqs.filter(r => r.priority === "Won't").length} -` - Write(`${sessionFolder}/spec/requirements/_index.md`, indexContent) - outputPath = 'spec/requirements/_index.md' -} -``` - -#### DRAFT-003: Architecture - -Two-stage CLI: Gemini designs the architecture, then Codex challenges/reviews it. - -```javascript -if (docType === 'architecture') 
{ - // === Stage 1: Architecture design (Gemini) === - Bash({ - command: `ccw cli -p "PURPOSE: Generate technical architecture for the specified requirements. -Success: Complete component architecture, tech stack, and ADRs with justified decisions. - -PRODUCT BRIEF (summary): -${priorDocs.productBrief?.slice(0, 3000) || ''} - -REQUIREMENTS: -${priorDocs.requirementsIndex?.slice(0, 5000) || ''} - -${sharedContext} - -TASK: -- Define system architecture style (monolith, microservices, serverless, etc.) with justification -- Identify core components and their responsibilities -- Create component interaction diagram (Mermaid graph TD format) -- Specify technology stack: languages, frameworks, databases, infrastructure -- Generate 2-4 Architecture Decision Records (ADRs): - - Each ADR: context, decision, 2-3 alternatives with pros/cons, consequences - - Focus on: data storage, API design, authentication, key technical choices -- Define data model: key entities and relationships (Mermaid erDiagram format) -- Identify security architecture: auth, authorization, data protection -- List API endpoints (high-level) - -MODE: analysis -EXPECTED: Complete architecture with: style justification, component diagram, tech stack table, ADRs, data model, security controls, API overview -CONSTRAINTS: Architecture must support all Must-have requirements. Prefer proven technologies. -" --tool gemini --mode analysis`, - run_in_background: true - }) - - // Wait for Gemini result - - // === Stage 2: Architecture review (Codex) === - Bash({ - command: `ccw cli -p "PURPOSE: Critical review of proposed architecture - identify weaknesses and risks. -Success: Actionable feedback with specific concerns and improvement suggestions. - -PROPOSED ARCHITECTURE: -${geminiArchitectureOutput.slice(0, 5000)} - -REQUIREMENTS CONTEXT: -${priorDocs.requirementsIndex?.slice(0, 2000) || ''} - -TASK: -- Challenge each ADR: are the alternatives truly the best options? 
-- Identify scalability bottlenecks in the component design -- Assess security gaps: authentication, authorization, data protection -- Evaluate technology choices: maturity, community support, fit -- Check for over-engineering or under-engineering -- Verify architecture covers all Must-have requirements -- Rate overall architecture quality: 1-5 with justification - -MODE: analysis -EXPECTED: Architecture review with: per-ADR feedback, scalability concerns, security gaps, technology risks, quality rating -CONSTRAINTS: Be genuinely critical, not just validating. Focus on actionable improvements. -" --tool codex --mode analysis`, - run_in_background: true - }) - - // Wait for Codex result - - // === Incorporate discussion feedback === - if (discussionFeedback) { - // Extract architecture-related feedback from discuss-003-requirements.md - // Merge it into the architecture design - } - - // === Codebase integration mapping (conditional) === - let integrationMapping = null - if (priorDocs.discoveryContext) { - const dc = JSON.parse(priorDocs.discoveryContext) - if (dc.relevant_files) { - integrationMapping = dc.relevant_files.map(f => ({ - new_component: '...', - existing_module: f.path, - integration_type: 'Extend|Replace|New', - notes: f.rationale - })) - } - } - - // === Generate the architecture/ directory === - Bash(`mkdir -p "${sessionFolder}/spec/architecture"`) - - const timestamp = new Date().toISOString() - const adrs = parseADRs(geminiArchitectureOutput, codexReviewOutput) - - // Write standalone ADR-*.md files - adrs.forEach(adr => { - const adrFrontmatter = `--- -id: ADR-${adr.id} -title: "${adr.title}" -status: draft -traces: - - ../requirements/_index.md ----` - const adrContent = `${adrFrontmatter} - -# ADR-${adr.id}: ${adr.title} - -## Context -${adr.context} - -## Decision -${adr.decision} - -## Alternatives -${adr.alternatives.map((alt, i) => `### Option ${i+1}: ${alt.name}\n- **Pros**: ${alt.pros.join(', ')}\n- **Cons**: ${alt.cons.join(', ')}`).join('\n\n')} - -## Consequences -${adr.consequences} - -## Review Feedback -${adr.reviewFeedback || 'N/A'} -` - 
Write(`${sessionFolder}/spec/architecture/ADR-${adr.id}-${adr.slug}.md`, adrContent) - }) - - // Write _index.md (with Mermaid component diagram + ER diagram + links) - const archIndexFrontmatter = `--- -session_id: ${specConfig?.session_id || 'unknown'} -phase: 4 -document_type: architecture-index -status: draft -generated_at: ${timestamp} -version: 1 -dependencies: - - ../product-brief.md - - ../requirements/_index.md ----` - // Contains: system overview, component diagram (Mermaid), tech stack table, - // ADR links table, data model (Mermaid erDiagram), API design, security controls - Write(`${sessionFolder}/spec/architecture/_index.md`, archIndexContent) - outputPath = 'spec/architecture/_index.md' -} -``` - -#### DRAFT-004: Epics & Stories - -Decomposes the spec into Epics via the Gemini CLI, generating EPIC-*.md files. - -```javascript -if (docType === 'epics') { - // === Epic decomposition CLI === - Bash({ - command: `ccw cli -p "PURPOSE: Decompose requirements into executable Epics and Stories for implementation planning. -Success: 3-7 Epics with prioritized Stories, dependency map, and MVP subset clearly defined.
- -PRODUCT BRIEF (summary): -${priorDocs.productBrief?.slice(0, 2000) || ''} - -REQUIREMENTS: -${priorDocs.requirementsIndex?.slice(0, 5000) || ''} - -ARCHITECTURE (summary): -${priorDocs.architectureIndex?.slice(0, 3000) || ''} - -TASK: -- Group requirements into 3-7 logical Epics: - - Each Epic: EPIC-NNN ID, title, description, priority (Must/Should/Could) - - Group by functional domain or user journey stage - - Tag MVP Epics (minimum set for initial release) -- For each Epic, generate 2-5 Stories: - - Each Story: STORY-{EPIC}-NNN ID, title - - User story format: As a [persona], I want [action] so that [benefit] - - 2-4 acceptance criteria per story (testable) - - Relative size estimate: S/M/L/XL - - Trace to source requirement(s): REQ-NNN -- Create dependency map: - - Cross-Epic dependencies (which Epics block others) - - Mermaid graph LR format - - Recommended execution order with rationale -- Define MVP: - - Which Epics are in MVP - - MVP definition of done (3-5 criteria) - - What is explicitly deferred post-MVP - -MODE: analysis -EXPECTED: Structured output with: Epic list (ID, title, priority, MVP flag), Stories per Epic (ID, user story, AC, size, trace), dependency Mermaid diagram, execution order, MVP definition -CONSTRAINTS: Every Must-have requirement must appear in at least one Story. Stories must be small enough to implement independently. Dependencies should be minimized across Epics. 
-" --tool gemini --mode analysis`, - run_in_background: true - }) - - // Wait for CLI result - - // === Incorporate discussion feedback === - if (discussionFeedback) { - // Extract execution-related feedback from discuss-004-architecture.md - // Adjust Epic granularity and MVP scope - } - - // === Generate the epics/ directory === - Bash(`mkdir -p "${sessionFolder}/spec/epics"`) - - const timestamp = new Date().toISOString() - const epicsList = parseEpics(cliOutput) - - // Write standalone EPIC-*.md files (stories included) - epicsList.forEach(epic => { - const epicFrontmatter = `--- -id: EPIC-${epic.id} -title: "${epic.title}" -priority: ${epic.priority} -mvp: ${epic.mvp} -size: ${epic.size} -requirements: -${epic.reqs.map(r => ` - ${r}`).join('\n')} -architecture: -${epic.adrs.map(a => ` - ${a}`).join('\n')} -dependencies: -${epic.deps.map(d => ` - ${d}`).join('\n')} -status: draft ----` - const storiesContent = epic.stories.map(s => `### ${s.id}: ${s.title} - -**User Story**: ${s.user_story} -**Size**: ${s.size} -**Traces**: ${s.traces.join(', ')} - -**Acceptance Criteria**: -${s.acceptance_criteria.map((ac, i) => `${i+1}. 
${ac}`).join('\n')} -`).join('\n') - - const epicContent = `${epicFrontmatter} - -# EPIC-${epic.id}: ${epic.title} - -## Description -${epic.description} - -## Stories -${storiesContent} - -## Requirements -${epic.reqs.map(r => `- [${r}](../requirements/${r}.md)`).join('\n')} - -## Architecture -${epic.adrs.map(a => `- [${a}](../architecture/${a}.md)`).join('\n')} -` - Write(`${sessionFolder}/spec/epics/EPIC-${epic.id}-${epic.slug}.md`, epicContent) - }) - - // Write _index.md (with Mermaid dependency diagram + MVP + links) - const epicsIndexFrontmatter = `--- -session_id: ${specConfig?.session_id || 'unknown'} -phase: 5 -document_type: epics-index -status: draft -generated_at: ${timestamp} -version: 1 -dependencies: - - ../requirements/_index.md - - ../architecture/_index.md ----` - // Contains: Epic overview table (with links), dependency Mermaid diagram, - // execution order, MVP scope, traceability matrix - Write(`${sessionFolder}/spec/epics/_index.md`, epicsIndexContent) - outputPath = 'spec/epics/_index.md' -} -``` - -### Phase 4: Self-Validation - -```javascript -const validationChecks = { - has_frontmatter: /^---\n[\s\S]+?\n---/.test(docContent), - sections_complete: true, // TODO: verify all required sections present - cross_references: docContent.includes('session_id'), - discussion_integrated: !discussionFeedback || docContent.includes('Discussion') -} - -const allValid = Object.values(validationChecks).every(v => v) -``` - -### Phase 5: Report to Coordinator - -```javascript -const docTypeLabel = { - 'product-brief': 'Product Brief', - 'requirements': 'Requirements/PRD', - 'architecture': 'Architecture Document', - 'epics': 'Epics & Stories' -} - -mcp__ccw-tools__team_msg({ - operation: "log", team: teamName, - from: "writer", to: "coordinator", - type: "draft_ready", - summary: `${docTypeLabel[docType]} complete: ${allValid ? 
'validation passed' : 'partial validation failure'}`, - ref: `${sessionFolder}/${outputPath}` -}) - -SendMessage({ - type: "message", - recipient: "coordinator", - content: `## Document Writing Result - -**Task**: ${task.subject} -**Document Type**: ${docTypeLabel[docType]} -**Validation Status**: ${allValid ? 'PASS' : 'PARTIAL'} - -### Document Summary -${documentSummary} - -### Discussion Feedback Integration -${discussionFeedback ? 'Prior discussion feedback integrated' : 'First draft'} - -### Self-Validation Results -${Object.entries(validationChecks).map(([k, v]) => '- ' + k + ': ' + (v ? 'PASS' : 'FAIL')).join('\n')} - -### Output Location -${sessionFolder}/${outputPath} - -Document is ready; the discussion round can begin.`, - summary: `${docTypeLabel[docType]} ready` -}) - -TaskUpdate({ taskId: task.id, status: 'completed' }) - -// Check for next DRAFT task → back to Phase 1 -``` - -## Error Handling - -| Scenario | Resolution | -|----------|------------| -| No DRAFT-* tasks available | Idle, wait for coordinator assignment | -| Prior document not found | Notify coordinator, request prerequisite | -| CLI analysis failure | Retry with fallback tool, then direct generation | -| Template sections incomplete | Generate best-effort, note gaps in report | -| Discussion feedback contradicts prior docs | Note conflict in document, flag for next discussion | -| Session folder missing | Notify coordinator, request session path | -| Unexpected error | Log error via team_msg, report to coordinator | diff --git a/.claude/skills/team-lifecycle/specs/document-standards.md b/.claude/skills/team-lifecycle/specs/document-standards.md deleted file mode 100644 index 2820cd98..00000000 --- a/.claude/skills/team-lifecycle/specs/document-standards.md +++ /dev/null @@ -1,192 +0,0 @@ -# Document Standards - -Defines format conventions, YAML frontmatter schema, naming rules, and content structure for all spec-generator outputs. 
- -## When to Use - -| Phase | Usage | Section | -|-------|-------|---------| -| All Phases | Frontmatter format | YAML Frontmatter Schema | -| All Phases | File naming | Naming Conventions | -| Phase 2-5 | Document structure | Content Structure | -| Phase 6 | Validation reference | All sections | - ---- - -## YAML Frontmatter Schema - -Every generated document MUST begin with YAML frontmatter: - -```yaml ---- -session_id: SPEC-{slug}-{YYYY-MM-DD} -phase: {1-6} -document_type: {product-brief|requirements|architecture|epics|readiness-report|spec-summary} -status: draft|review|complete -generated_at: {ISO8601 timestamp} -stepsCompleted: [] -version: 1 -dependencies: - - {list of input documents used} ---- -``` - -### Field Definitions - -| Field | Type | Required | Description | -|-------|------|----------|-------------| -| `session_id` | string | Yes | Session identifier matching spec-config.json | -| `phase` | number | Yes | Phase number that generated this document (1-6) | -| `document_type` | string | Yes | One of: product-brief, requirements, architecture, epics, readiness-report, spec-summary | -| `status` | enum | Yes | draft (initial), review (user reviewed), complete (finalized) | -| `generated_at` | string | Yes | ISO8601 timestamp of generation | -| `stepsCompleted` | array | Yes | List of step IDs completed during generation | -| `version` | number | Yes | Document version, incremented on re-generation | -| `dependencies` | array | No | List of input files this document depends on | - -### Status Transitions - -``` -draft -> review -> complete - | ^ - +-------------------+ (direct promotion in auto mode) -``` - -- **draft**: Initial generation, not yet user-reviewed -- **review**: User has reviewed and provided feedback -- **complete**: Finalized, ready for downstream consumption - -In auto mode (`-y`), documents are promoted directly from `draft` to `complete`. 
- ---- - -## Naming Conventions - -### Session ID Format - -``` -SPEC-{slug}-{YYYY-MM-DD} -``` - -- **slug**: Lowercase, alphanumeric + Chinese characters, hyphens as separators, max 40 chars -- **date**: UTC+8 date in YYYY-MM-DD format - -Examples: -- `SPEC-task-management-system-2026-02-11` -- `SPEC-user-auth-oauth-2026-02-11` - -### Output Files - -| File | Phase | Description | -|------|-------|-------------| -| `spec-config.json` | 1 | Session configuration and state | -| `discovery-context.json` | 1 | Codebase exploration results (optional) | -| `product-brief.md` | 2 | Product brief document | -| `requirements.md` | 3 | PRD document | -| `architecture.md` | 4 | Architecture decisions document | -| `epics.md` | 5 | Epic/Story breakdown document | -| `readiness-report.md` | 6 | Quality validation report | -| `spec-summary.md` | 6 | One-page executive summary | - -### Output Directory - -``` -.workflow/.spec/{session-id}/ -``` - ---- - -## Content Structure - -### Heading Hierarchy - -- `#` (H1): Document title only (one per document) -- `##` (H2): Major sections -- `###` (H3): Subsections -- `####` (H4): Detail items (use sparingly) - -Maximum depth: 4 levels. Prefer flat structures. - -### Section Ordering - -Every document follows this general pattern: - -1. **YAML Frontmatter** (mandatory) -2. **Title** (H1) -3. **Executive Summary** (2-3 sentences) -4. **Core Content Sections** (H2, document-specific) -5. **Open Questions / Risks** (if applicable) -6. 
**References / Traceability** (links to upstream/downstream docs) - -### Formatting Rules - -| Element | Format | Example | -|---------|--------|---------| -| Requirements | `REQ-{NNN}` prefix | REQ-001: User login | -| Acceptance criteria | Checkbox list | `- [ ] User can log in with email` | -| Architecture decisions | `ADR-{NNN}` prefix | ADR-001: Use PostgreSQL | -| Epics | `EPIC-{NNN}` prefix | EPIC-001: Authentication | -| Stories | `STORY-{EPIC}-{NNN}` prefix | STORY-001-001: Login form | -| Priority tags | MoSCoW labels | `[Must]`, `[Should]`, `[Could]`, `[Won't]` | -| Mermaid diagrams | Fenced code blocks | `` ```mermaid ... ``` `` | -| Code examples | Language-tagged blocks | `` ```typescript ... ``` `` | - -### Cross-Reference Format - -Use relative references between documents: - -```markdown -See [Product Brief](product-brief.md#section-name) for details. -Derived from [REQ-001](requirements.md#req-001). -``` - -### Language - -- Document body: Follow user's input language (Chinese or English) -- Technical identifiers: Always English (REQ-001, ADR-001, EPIC-001) -- YAML frontmatter keys: Always English - ---- - -## spec-config.json Schema - -```json -{ - "session_id": "string (required)", - "seed_input": "string (required) - original user input", - "input_type": "text|file (required)", - "timestamp": "ISO8601 (required)", - "mode": "interactive|auto (required)", - "complexity": "simple|moderate|complex (required)", - "depth": "light|standard|comprehensive (required)", - "focus_areas": ["string array"], - "seed_analysis": { - "problem_statement": "string", - "target_users": ["string array"], - "domain": "string", - "constraints": ["string array"], - "dimensions": ["string array - 3-5 exploration dimensions"] - }, - "has_codebase": "boolean", - "phasesCompleted": [ - { - "phase": "number (1-6)", - "name": "string (phase name)", - "output_file": "string (primary output file)", - "completed_at": "ISO8601" - } - ] -} -``` - ---- - -## Validation Checklist - -- 
[ ] Every document starts with valid YAML frontmatter -- [ ] `session_id` matches across all documents in a session -- [ ] `status` field reflects current document state -- [ ] All cross-references resolve to valid targets -- [ ] Heading hierarchy is correct (no skipped levels) -- [ ] Technical identifiers use correct prefixes -- [ ] Output files are in the correct directory diff --git a/.claude/skills/team-lifecycle/specs/quality-gates.md b/.claude/skills/team-lifecycle/specs/quality-gates.md deleted file mode 100644 index ae968436..00000000 --- a/.claude/skills/team-lifecycle/specs/quality-gates.md +++ /dev/null @@ -1,207 +0,0 @@ -# Quality Gates - -Per-phase quality gate criteria and scoring dimensions for spec-generator outputs. - -## When to Use - -| Phase | Usage | Section | -|-------|-------|---------| -| Phase 2-5 | Post-generation self-check | Per-Phase Gates | -| Phase 6 | Cross-document validation | Cross-Document Validation | -| Phase 6 | Final scoring | Scoring Dimensions | - ---- - -## Quality Thresholds - -| Gate | Score | Action | -|------|-------|--------| -| **Pass** | >= 80% | Continue to next phase | -| **Review** | 60-79% | Log warnings, continue with caveats | -| **Fail** | < 60% | Must address issues before continuing | - -In auto mode (`-y`), Review-level issues are logged but do not block progress. - ---- - -## Scoring Dimensions - -### 1. Completeness (25%) - -All required sections present with substantive content. - -| Score | Criteria | -|-------|----------| -| 100% | All template sections filled with detailed content | -| 75% | All sections present, some lack detail | -| 50% | Major sections present but minor sections missing | -| 25% | Multiple major sections missing or empty | -| 0% | Document is a skeleton only | - -### 2. Consistency (25%) - -Terminology, formatting, and references are uniform across documents. 
-
-| Score | Criteria |
-|-------|----------|
-| 100% | All terms consistent, all references valid, formatting uniform |
-| 75% | Minor terminology variations, all references valid |
-| 50% | Some inconsistent terms, 1-2 broken references |
-| 25% | Frequent inconsistencies, multiple broken references |
-| 0% | Documents contradict each other |
-
-### 3. Traceability (25%)
-
-Requirements, architecture decisions, and stories trace back to goals.
-
-| Score | Criteria |
-|-------|----------|
-| 100% | Every story traces to a requirement, every requirement traces to a goal |
-| 75% | Most items traceable, few orphans |
-| 50% | Partial traceability, some disconnected items |
-| 25% | Weak traceability, many orphan items |
-| 0% | No traceability between documents |
-
-### 4. Depth (25%)
-
-Content provides sufficient detail for execution teams.
-
-| Score | Criteria |
-|-------|----------|
-| 100% | Acceptance criteria specific and testable, architecture decisions justified, stories estimable |
-| 75% | Most items detailed enough, few vague areas |
-| 50% | Mix of detailed and vague content |
-| 25% | Mostly high-level, lacking actionable detail |
-| 0% | Too abstract for execution |
-
----
-
-## Per-Phase Quality Gates
-
-### Phase 1: Discovery
-
-| Check | Criteria | Severity |
-|-------|----------|----------|
-| Session ID valid | Matches `SPEC-{slug}-{date}` format | Error |
-| Problem statement exists | Non-empty, >= 20 characters | Error |
-| Target users identified | >= 1 user group | Error |
-| Dimensions generated | 3-5 exploration dimensions | Warning |
-| Constraints listed | >= 0 (can be empty with justification) | Info |
-
-### Phase 2: Product Brief
-
-| Check | Criteria | Severity |
-|-------|----------|----------|
-| Vision statement | Clear, 1-3 sentences | Error |
-| Problem statement | Specific and measurable | Error |
-| Target users | >= 1 persona with needs described | Error |
-| Goals defined | >= 2 measurable goals | Error |
-| Success metrics | >= 2 quantifiable metrics | Warning |
-| Scope boundaries | In-scope and out-of-scope listed | Warning |
-| Multi-perspective | >= 2 CLI perspectives synthesized | Info |
-
-### Phase 3: Requirements (PRD)
-
-| Check | Criteria | Severity |
-|-------|----------|----------|
-| Functional requirements | >= 3 with REQ-NNN IDs | Error |
-| Acceptance criteria | Every requirement has >= 1 criterion | Error |
-| MoSCoW priority | Every requirement tagged | Error |
-| Non-functional requirements | >= 1 (performance, security, etc.) | Warning |
-| User stories | >= 1 per Must-have requirement | Warning |
-| Traceability | Requirements trace to product brief goals | Warning |
-
-### Phase 4: Architecture
-
-| Check | Criteria | Severity |
-|-------|----------|----------|
-| Component diagram | Present (Mermaid or ASCII) | Error |
-| Tech stack specified | Languages, frameworks, key libraries | Error |
-| ADR present | >= 1 Architecture Decision Record | Error |
-| ADR has alternatives | Each ADR lists >= 2 options considered | Warning |
-| Integration points | External systems/APIs identified | Warning |
-| Data model | Key entities and relationships described | Warning |
-| Codebase mapping | Mapped to existing code (if has_codebase) | Info |
-
-### Phase 5: Epics & Stories
-
-| Check | Criteria | Severity |
-|-------|----------|----------|
-| Epics defined | 3-7 epics with EPIC-NNN IDs | Error |
-| MVP subset | >= 1 epic tagged as MVP | Error |
-| Stories per epic | 2-5 stories per epic | Error |
-| Story format | "As a...I want...So that..." pattern | Warning |
-| Dependency map | Cross-epic dependencies documented | Warning |
-| Estimation hints | Relative sizing (S/M/L/XL) per story | Info |
-| Traceability | Stories trace to requirements | Warning |
-
-### Phase 6: Readiness Check
-
-| Check | Criteria | Severity |
-|-------|----------|----------|
-| All documents exist | product-brief, requirements, architecture, epics | Error |
-| Frontmatter valid | All YAML frontmatter parseable and correct | Error |
-| Cross-references valid | All document links resolve | Error |
-| Overall score >= 60% | Weighted average across 4 dimensions | Error |
-| No unresolved Errors | All Error-severity issues addressed | Error |
-| Summary generated | spec-summary.md created | Warning |
-
----
-
-## Cross-Document Validation
-
-Checks performed during Phase 6 across all documents:
-
-### Completeness Matrix
-
-```
-Product Brief goals -> Requirements (each goal has >= 1 requirement)
-Requirements -> Architecture (each Must requirement has design coverage)
-Requirements -> Epics (each Must requirement appears in >= 1 story)
-Architecture ADRs -> Epics (tech choices reflected in implementation stories)
-```
-
-### Consistency Checks
-
-| Check | Documents | Rule |
-|-------|-----------|------|
-| Terminology | All | Same term used consistently (no synonyms for same concept) |
-| User personas | Brief + PRD + Epics | Same user names/roles throughout |
-| Scope | Brief + PRD | PRD scope does not exceed brief scope |
-| Tech stack | Architecture + Epics | Stories reference correct technologies |
-
-### Traceability Matrix Format
-
-```markdown
-| Goal | Requirements | Architecture | Epics |
-|------|-------------|--------------|-------|
-| G-001: ... | REQ-001, REQ-002 | ADR-001 | EPIC-001 |
-| G-002: ... | REQ-003 | ADR-002 | EPIC-002, EPIC-003 |
-```
-
----
-
-## Issue Classification
-
-### Error (Must Fix)
-
-- Missing required document or section
-- Broken cross-references
-- Contradictory information between documents
-- Empty acceptance criteria on Must-have requirements
-- No MVP subset defined in epics
-
-### Warning (Should Fix)
-
-- Vague acceptance criteria
-- Missing non-functional requirements
-- No success metrics defined
-- Incomplete traceability
-- Missing architecture review notes
-
-### Info (Nice to Have)
-
-- Could add more detailed personas
-- Consider additional ADR alternatives
-- Story estimation hints missing
-- Mermaid diagrams could be more detailed
diff --git a/.claude/skills/team-lifecycle/specs/team-config.json b/.claude/skills/team-lifecycle/specs/team-config.json
deleted file mode 100644
index 0ecf0ec6..00000000
--- a/.claude/skills/team-lifecycle/specs/team-config.json
+++ /dev/null
@@ -1,80 +0,0 @@
-{
-  "team_name": "team-lifecycle",
-  "team_display_name": "Team Lifecycle",
-  "description": "Unified team skill covering spec-to-dev-to-test full lifecycle",
-  "version": "1.0.0",
-
-  "roles": {
-    "coordinator": {
-      "task_prefix": null,
-      "responsibility": "Pipeline orchestration, requirement clarification, task chain creation, message dispatch",
-      "message_types": ["plan_approved", "plan_revision", "task_unblocked", "fix_required", "error", "shutdown"]
-    },
-    "analyst": {
-      "task_prefix": "RESEARCH",
-      "responsibility": "Seed analysis, codebase exploration, multi-dimensional context gathering",
-      "message_types": ["research_ready", "research_progress", "error"]
-    },
-    "writer": {
-      "task_prefix": "DRAFT",
-      "responsibility": "Product Brief / PRD / Architecture / Epics document generation",
-      "message_types": ["draft_ready", "draft_revision", "impl_progress", "error"]
-    },
-    "discussant": {
-      "task_prefix": "DISCUSS",
-      "responsibility": "Multi-perspective critique, consensus building, conflict escalation",
-      "message_types": ["discussion_ready", "discussion_blocked", "impl_progress", "error"]
-    },
-    "planner": {
-      "task_prefix": "PLAN",
-      "responsibility": "Multi-angle code exploration, structured implementation planning",
-      "message_types": ["plan_ready", "plan_revision", "impl_progress", "error"]
-    },
-    "executor": {
-      "task_prefix": "IMPL",
-      "responsibility": "Code implementation following approved plans",
-      "message_types": ["impl_complete", "impl_progress", "error"]
-    },
-    "tester": {
-      "task_prefix": "TEST",
-      "responsibility": "Adaptive test-fix cycles, progressive testing, quality gates",
-      "message_types": ["test_result", "impl_progress", "fix_required", "error"]
-    },
-    "reviewer": {
-      "task_prefix": "REVIEW",
-      "additional_prefixes": ["QUALITY"],
-      "responsibility": "Code review (REVIEW-*) + Spec quality validation (QUALITY-*)",
-      "message_types": ["review_result", "quality_result", "fix_required", "error"]
-    }
-  },
-
-  "pipelines": {
-    "spec-only": {
-      "description": "Specification pipeline: research → discuss → draft → quality",
-      "task_chain": [
-        "RESEARCH-001",
-        "DISCUSS-001", "DRAFT-001", "DISCUSS-002",
-        "DRAFT-002", "DISCUSS-003", "DRAFT-003", "DISCUSS-004",
-        "DRAFT-004", "DISCUSS-005", "QUALITY-001", "DISCUSS-006"
-      ]
-    },
-    "impl-only": {
-      "description": "Implementation pipeline: plan → implement → test + review",
-      "task_chain": ["PLAN-001", "IMPL-001", "TEST-001", "REVIEW-001"]
-    },
-    "full-lifecycle": {
-      "description": "Full lifecycle: spec pipeline → implementation pipeline",
-      "task_chain": "spec-only + impl-only (PLAN-001 blockedBy DISCUSS-006)"
-    }
-  },
-
-  "collaboration_patterns": ["CP-1", "CP-2", "CP-4", "CP-5", "CP-6", "CP-10"],
-
-  "session_dirs": {
-    "base": ".workflow/.team/TLS-{slug}-{YYYY-MM-DD}/",
-    "spec": "spec/",
-    "discussions": "discussions/",
-    "plan": "plan/",
-    "messages": ".workflow/.team-msg/{team-name}/"
-  }
-}
diff --git a/.claude/skills/team-lifecycle/templates/architecture-doc.md b/.claude/skills/team-lifecycle/templates/architecture-doc.md
deleted file mode 100644
index 5106de03..00000000
--- a/.claude/skills/team-lifecycle/templates/architecture-doc.md
+++ /dev/null
@@ -1,254 +0,0 @@
-# Architecture Document Template (Directory Structure)
-
-Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.
-
-## Usage Context
-
-| Phase | Usage |
-|-------|-------|
-| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
-| Output Location | `{workDir}/architecture/` |
-
-## Output Structure
-
-```
-{workDir}/architecture/
-├── _index.md          # Overview, components, tech stack, data model, security
-├── ADR-001-{slug}.md  # Individual Architecture Decision Record
-├── ADR-002-{slug}.md
-└── ...
-```
-
----
-
-## Template: _index.md
-
-```markdown
----
-session_id: {session_id}
-phase: 4
-document_type: architecture-index
-status: draft
-generated_at: {timestamp}
-version: 1
-dependencies:
-  - ../spec-config.json
-  - ../product-brief.md
-  - ../requirements/_index.md
----
-
-# Architecture: {product_name}
-
-{executive_summary - high-level architecture approach and key decisions}
-
-## System Overview
-
-### Architecture Style
-{description of chosen architecture style: microservices, monolith, serverless, etc.}
-
-### System Context Diagram
-
-```mermaid
-C4Context
-    title System Context Diagram
-    Person(user, "User", "Primary user")
-    System(system, "{product_name}", "Core system")
-    System_Ext(ext1, "{external_system}", "{description}")
-    Rel(user, system, "Uses")
-    Rel(system, ext1, "Integrates with")
-```
-
-## Component Architecture
-
-### Component Diagram
-
-```mermaid
-graph TD
-    subgraph "{product_name}"
-        A[Component A] --> B[Component B]
-        B --> C[Component C]
-        A --> D[Component D]
-    end
-    B --> E[External Service]
-```
-
-### Component Descriptions
-
-| Component | Responsibility | Technology | Dependencies |
-|-----------|---------------|------------|--------------|
-| {component_name} | {what it does} | {tech stack} | {depends on} |
-
-## Technology Stack
-
-### Core Technologies
-
-| Layer | Technology | Version | Rationale |
-|-------|-----------|---------|-----------|
-| Frontend | {technology} | {version} | {why chosen} |
-| Backend | {technology} | {version} | {why chosen} |
-| Database | {technology} | {version} | {why chosen} |
-| Infrastructure | {technology} | {version} | {why chosen} |
-
-### Key Libraries & Frameworks
-
-| Library | Purpose | License |
-|---------|---------|---------|
-| {library_name} | {purpose} | {license} |
-
-## Architecture Decision Records
-
-| ADR | Title | Status | Key Choice |
-|-----|-------|--------|------------|
-| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
-| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
-| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |
-
-## Data Architecture
-
-### Data Model
-
-```mermaid
-erDiagram
-    ENTITY_A ||--o{ ENTITY_B : "has many"
-    ENTITY_A {
-        string id PK
-        string name
-        datetime created_at
-    }
-    ENTITY_B {
-        string id PK
-        string entity_a_id FK
-        string value
-    }
-```
-
-### Data Storage Strategy
-
-| Data Type | Storage | Retention | Backup |
-|-----------|---------|-----------|--------|
-| {type} | {storage solution} | {retention policy} | {backup strategy} |
-
-## API Design
-
-### API Overview
-
-| Endpoint | Method | Purpose | Auth |
-|----------|--------|---------|------|
-| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |
-
-## Security Architecture
-
-### Security Controls
-
-| Control | Implementation | Requirement |
-|---------|---------------|-------------|
-| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
-| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
-| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
-
-## Infrastructure & Deployment
-
-### Deployment Architecture
-
-{description of deployment model: containers, serverless, VMs, etc.}
-
-### Environment Strategy
-
-| Environment | Purpose | Configuration |
-|-------------|---------|---------------|
-| Development | Local development | {config} |
-| Staging | Pre-production testing | {config} |
-| Production | Live system | {config} |
-
-## Codebase Integration
-
-{if has_codebase is true:}
-
-### Existing Code Mapping
-
-| New Component | Existing Module | Integration Type | Notes |
-|--------------|----------------|------------------|-------|
-| {component} | {existing module path} | Extend/Replace/New | {notes} |
-
-### Migration Notes
-{any migration considerations for existing code}
-
-## Quality Attributes
-
-| Attribute | Target | Measurement | ADR Reference |
-|-----------|--------|-------------|---------------|
-| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
-| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
-| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
-
-## Risks & Mitigations
-
-| Risk | Impact | Probability | Mitigation |
-|------|--------|-------------|------------|
-| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |
-
-## Open Questions
-
-- [ ] {architectural question 1}
-- [ ] {architectural question 2}
-
-## References
-
-- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
-- Next: [Epics & Stories](../epics/_index.md)
-```
-
----
-
-## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)
-
-```markdown
----
-id: ADR-{NNN}
-status: Accepted
-traces_to: [{REQ-NNN}, {NFR-X-NNN}]
-date: {timestamp}
----
-
-# ADR-{NNN}: {decision_title}
-
-## Context
-
-{what is the situation that motivates this decision}
-
-## Decision
-
-{what is the chosen approach}
-
-## Alternatives Considered
-
-| Option | Pros | Cons |
-|--------|------|------|
-| {option_1 - chosen} | {pros} | {cons} |
-| {option_2} | {pros} | {cons} |
-| {option_3} | {pros} | {cons} |
-
-## Consequences
-
-- **Positive**: {positive outcomes}
-- **Negative**: {tradeoffs accepted}
-- **Risks**: {risks to monitor}
-
-## Traces
-
-- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
-- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
-```
-
----
-
-## Variable Descriptions
-
-| Variable | Source | Description |
-|----------|--------|-------------|
-| `{session_id}` | spec-config.json | Session identifier |
-| `{timestamp}` | Runtime | ISO8601 generation timestamp |
-| `{product_name}` | product-brief.md | Product/feature name |
-| `{NNN}` | Auto-increment | ADR/requirement number |
-| `{slug}` | Auto-generated | Kebab-case from decision title |
-| `{has_codebase}` | spec-config.json | Whether existing codebase exists |
diff --git a/.claude/skills/team-lifecycle/templates/epics-template.md b/.claude/skills/team-lifecycle/templates/epics-template.md
deleted file mode 100644
index 939d933c..00000000
--- a/.claude/skills/team-lifecycle/templates/epics-template.md
+++ /dev/null
@@ -1,196 +0,0 @@
-# Epics & Stories Template (Directory Structure)
-
-Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5.
-
-## Usage Context
-
-| Phase | Usage |
-|-------|-------|
-| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
-| Output Location | `{workDir}/epics/` |
-
-## Output Structure
-
-```
-{workDir}/epics/
-├── _index.md           # Overview table + dependency map + MVP scope + execution order
-├── EPIC-001-{slug}.md  # Individual Epic with its Stories
-├── EPIC-002-{slug}.md
-└── ...
-```
-
----
-
-## Template: _index.md
-
-```markdown
----
-session_id: {session_id}
-phase: 5
-document_type: epics-index
-status: draft
-generated_at: {timestamp}
-version: 1
-dependencies:
-  - ../spec-config.json
-  - ../product-brief.md
-  - ../requirements/_index.md
-  - ../architecture/_index.md
----
-
-# Epics & Stories: {product_name}
-
-{executive_summary - overview of epic structure and MVP scope}
-
-## Epic Overview
-
-| Epic ID | Title | Priority | MVP | Stories | Est. Size |
-|---------|-------|----------|-----|---------|-----------|
-| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
-| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
-| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |
-
-## Dependency Map
-
-```mermaid
-graph LR
-    EPIC-001 --> EPIC-002
-    EPIC-001 --> EPIC-003
-    EPIC-002 --> EPIC-004
-    EPIC-003 --> EPIC-005
-```
-
-### Dependency Notes
-{explanation of why these dependencies exist and suggested execution order}
-
-### Recommended Execution Order
-1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
-2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
-3. ...
-
-## MVP Scope
-
-### MVP Epics
-{list of epics included in MVP with justification, linking to each}
-
-### MVP Definition of Done
-- [ ] {MVP completion criterion 1}
-- [ ] {MVP completion criterion 2}
-- [ ] {MVP completion criterion 3}
-
-## Traceability Matrix
-
-| Requirement | Epic | Stories | Architecture |
-|-------------|------|---------|--------------|
-| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
-| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
-| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |
-
-## Estimation Summary
-
-| Size | Meaning | Count |
-|------|---------|-------|
-| S | Small - well-understood, minimal risk | {n} |
-| M | Medium - some complexity, moderate risk | {n} |
-| L | Large - significant complexity, should consider splitting | {n} |
-| XL | Extra Large - high complexity, must split before implementation | {n} |
-
-## Risks & Considerations
-
-| Risk | Affected Epics | Mitigation |
-|------|---------------|------------|
-| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |
-
-## Open Questions
-
-- [ ] {question about scope or implementation 1}
-- [ ] {question about scope or implementation 2}
-
-## References
-
-- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
-- Handoff to: execution workflows (lite-plan, plan, req-plan)
-```
-
----
-
-## Template: EPIC-NNN-{slug}.md (Individual Epic)
-
-```markdown
----
-id: EPIC-{NNN}
-priority: {Must|Should|Could}
-mvp: {true|false}
-size: {S|M|L|XL}
-requirements: [REQ-{NNN}]
-architecture: [ADR-{NNN}]
-dependencies: [EPIC-{NNN}]
-status: draft
----
-
-# EPIC-{NNN}: {epic_title}
-
-**Priority**: {Must|Should|Could}
-**MVP**: {Yes|No}
-**Estimated Size**: {S|M|L|XL}
-
-## Description
-
-{detailed epic description}
-
-## Requirements
-
-- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
-- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
-
-## Architecture
-
-- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
-- Component: {component_name}
-
-## Dependencies
-
-- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
-- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}
-
-## Stories
-
-### STORY-{EPIC}-001: {story_title}
-
-**User Story**: As a {persona}, I want to {action} so that {benefit}.
-
-**Acceptance Criteria**:
-- [ ] {criterion 1}
-- [ ] {criterion 2}
-- [ ] {criterion 3}
-
-**Size**: {S|M|L|XL}
-**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
-
----
-
-### STORY-{EPIC}-002: {story_title}
-
-**User Story**: As a {persona}, I want to {action} so that {benefit}.
-
-**Acceptance Criteria**:
-- [ ] {criterion 1}
-- [ ] {criterion 2}
-
-**Size**: {S|M|L|XL}
-**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
-```
-
----
-
-## Variable Descriptions
-
-| Variable | Source | Description |
-|----------|--------|-------------|
-| `{session_id}` | spec-config.json | Session identifier |
-| `{timestamp}` | Runtime | ISO8601 generation timestamp |
-| `{product_name}` | product-brief.md | Product/feature name |
-| `{EPIC}` | Auto-increment | Epic number (3 digits) |
-| `{NNN}` | Auto-increment | Story/requirement number |
-| `{slug}` | Auto-generated | Kebab-case from epic/story title |
-| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |
diff --git a/.claude/skills/team-lifecycle/templates/product-brief.md b/.claude/skills/team-lifecycle/templates/product-brief.md
deleted file mode 100644
index ffbdf437..00000000
--- a/.claude/skills/team-lifecycle/templates/product-brief.md
+++ /dev/null
@@ -1,133 +0,0 @@
-# Product Brief Template
-
-Template for generating product brief documents in Phase 2.
-
-## Usage Context
-
-| Phase | Usage |
-|-------|-------|
-| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis |
-| Output Location | `{workDir}/product-brief.md` |
-
----
-
-## Template
-
-```markdown
----
-session_id: {session_id}
-phase: 2
-document_type: product-brief
-status: draft
-generated_at: {timestamp}
-stepsCompleted: []
-version: 1
-dependencies:
-  - spec-config.json
----
-
-# Product Brief: {product_name}
-
-{executive_summary - 2-3 sentences capturing the essence of the product/feature}
-
-## Vision
-
-{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like}
-
-## Problem Statement
-
-### Current Situation
-{description of the current state and pain points}
-
-### Impact
-{quantified impact of the problem - who is affected, how much, how often}
-
-## Target Users
-
-{for each user persona:}
-
-### {Persona Name}
-- **Role**: {user's role/context}
-- **Needs**: {primary needs related to this product}
-- **Pain Points**: {current frustrations}
-- **Success Criteria**: {what success looks like for this user}
-
-## Goals & Success Metrics
-
-| Goal ID | Goal | Success Metric | Target |
-|---------|------|----------------|--------|
-| G-001 | {goal description} | {measurable metric} | {specific target} |
-| G-002 | {goal description} | {measurable metric} | {specific target} |
-
-## Scope
-
-### In Scope
-- {feature/capability 1}
-- {feature/capability 2}
-- {feature/capability 3}
-
-### Out of Scope
-- {explicitly excluded item 1}
-- {explicitly excluded item 2}
-
-### Assumptions
-- {key assumption 1}
-- {key assumption 2}
-
-## Competitive Landscape
-
-| Aspect | Current State | Proposed Solution | Advantage |
-|--------|--------------|-------------------|-----------|
-| {aspect} | {how it's done now} | {our approach} | {differentiator} |
-
-## Constraints & Dependencies
-
-### Technical Constraints
-- {constraint 1}
-- {constraint 2}
-
-### Business Constraints
-- {constraint 1}
-
-### Dependencies
-- {external dependency 1}
-- {external dependency 2}
-
-## Multi-Perspective Synthesis
-
-### Product Perspective
-{summary of product/market analysis findings}
-
-### Technical Perspective
-{summary of technical feasibility and constraints}
-
-### User Perspective
-{summary of user journey and UX considerations}
-
-### Convergent Themes
-{themes where all perspectives agree}
-
-### Conflicting Views
-{areas where perspectives differ, with notes on resolution approach}
-
-## Open Questions
-
-- [ ] {unresolved question 1}
-- [ ] {unresolved question 2}
-
-## References
-
-- Derived from: [spec-config.json](spec-config.json)
-- Next: [Requirements PRD](requirements.md)
-```
-
-## Variable Descriptions
-
-| Variable | Source | Description |
-|----------|--------|-------------|
-| `{session_id}` | spec-config.json | Session identifier |
-| `{timestamp}` | Runtime | ISO8601 generation timestamp |
-| `{product_name}` | Seed analysis | Product/feature name |
-| `{executive_summary}` | CLI synthesis | 2-3 sentence summary |
-| `{vision_statement}` | CLI product perspective | Aspirational vision |
-| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis |
diff --git a/.claude/skills/team-lifecycle/templates/requirements-prd.md b/.claude/skills/team-lifecycle/templates/requirements-prd.md
deleted file mode 100644
index 0b1dbf28..00000000
--- a/.claude/skills/team-lifecycle/templates/requirements-prd.md
+++ /dev/null
@@ -1,224 +0,0 @@
-# Requirements PRD Template (Directory Structure)
-
-Template for generating Product Requirements Document as a directory of individual requirement files in Phase 3.
-
-## Usage Context
-
-| Phase | Usage |
-|-------|-------|
-| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
-| Output Location | `{workDir}/requirements/` |
-
-## Output Structure
-
-```
-{workDir}/requirements/
-├── _index.md            # Summary + MoSCoW table + traceability matrix + links
-├── REQ-001-{slug}.md    # Individual functional requirement
-├── REQ-002-{slug}.md
-├── NFR-P-001-{slug}.md  # Non-functional: Performance
-├── NFR-S-001-{slug}.md  # Non-functional: Security
-├── NFR-SC-001-{slug}.md # Non-functional: Scalability
-├── NFR-U-001-{slug}.md  # Non-functional: Usability
-└── ...
-```
-
----
-
-## Template: _index.md
-
-```markdown
----
-session_id: {session_id}
-phase: 3
-document_type: requirements-index
-status: draft
-generated_at: {timestamp}
-version: 1
-dependencies:
-  - ../spec-config.json
-  - ../product-brief.md
----
-
-# Requirements: {product_name}
-
-{executive_summary - brief overview of what this PRD covers and key decisions}
-
-## Requirement Summary
-
-| Priority | Count | Coverage |
-|----------|-------|----------|
-| Must Have | {n} | {description of must-have scope} |
-| Should Have | {n} | {description of should-have scope} |
-| Could Have | {n} | {description of could-have scope} |
-| Won't Have | {n} | {description of explicitly excluded} |
-
-## Functional Requirements
-
-| ID | Title | Priority | Traces To |
-|----|-------|----------|-----------|
-| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
-| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
-| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |
-
-## Non-Functional Requirements
-
-### Performance
-
-| ID | Title | Target |
-|----|-------|--------|
-| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |
-
-### Security
-
-| ID | Title | Standard |
-|----|-------|----------|
-| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |
-
-### Scalability
-
-| ID | Title | Target |
-|----|-------|--------|
-| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |
-
-### Usability
-
-| ID | Title | Target |
-|----|-------|--------|
-| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |
-
-## Data Requirements
-
-### Data Entities
-
-| Entity | Description | Key Attributes |
-|--------|-------------|----------------|
-| {entity_name} | {description} | {attr1, attr2, attr3} |
-
-### Data Flows
-
-{description of key data flows, optionally with Mermaid diagram}
-
-## Integration Requirements
-
-| System | Direction | Protocol | Data Format | Notes |
-|--------|-----------|----------|-------------|-------|
-| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |
-
-## Constraints & Assumptions
-
-### Constraints
-- {technical or business constraint 1}
-- {technical or business constraint 2}
-
-### Assumptions
-- {assumption 1 - must be validated}
-- {assumption 2 - must be validated}
-
-## Priority Rationale
-
-{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}
-
-## Traceability Matrix
-
-| Goal | Requirements |
-|------|-------------|
-| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
-| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |
-
-## Open Questions
-
-- [ ] {unresolved question 1}
-- [ ] {unresolved question 2}
-
-## References
-
-- Derived from: [Product Brief](../product-brief.md)
-- Next: [Architecture](../architecture/_index.md)
-```
-
----
-
-## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)
-
-```markdown
----
-id: REQ-{NNN}
-type: functional
-priority: {Must|Should|Could|Won't}
-traces_to: [G-{NNN}]
-status: draft
----
-
-# REQ-{NNN}: {requirement_title}
-
-**Priority**: {Must|Should|Could|Won't}
-
-## Description
-
-{detailed requirement description}
-
-## User Story
-
-As a {persona}, I want to {action} so that {benefit}.
-
-## Acceptance Criteria
-
-- [ ] {specific, testable criterion 1}
-- [ ] {specific, testable criterion 2}
-- [ ] {specific, testable criterion 3}
-
-## Traces
-
-- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
-- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
-- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
-```
-
----
-
-## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)
-
-```markdown
----
-id: NFR-{type}-{NNN}
-type: non-functional
-category: {Performance|Security|Scalability|Usability}
-priority: {Must|Should|Could}
-status: draft
----
-
-# NFR-{type}-{NNN}: {requirement_title}
-
-**Category**: {Performance|Security|Scalability|Usability}
-**Priority**: {Must|Should|Could}
-
-## Requirement
-
-{detailed requirement description}
-
-## Metric & Target
-
-| Metric | Target | Measurement Method |
-|--------|--------|--------------------|
-| {metric} | {target value} | {how measured} |
-
-## Traces
-
-- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
-- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
-```
-
----
-
-## Variable Descriptions
-
-| Variable | Source | Description |
-|----------|--------|-------------|
-| `{session_id}` | spec-config.json | Session identifier |
-| `{timestamp}` | Runtime | ISO8601 generation timestamp |
-| `{product_name}` | product-brief.md | Product/feature name |
-| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
-| `{slug}` | Auto-generated | Kebab-case from requirement title |
-| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
-| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |