Add integration verification and validation phases, role templates, and static graph tests

- Implement Phase 4: Integration Verification to ensure skill package consistency.
- Implement Phase 5: Validation to verify quality and deliver the final skill package.
- Create role-template.md for generating per-role execution detail files.
- Create skill-router-template.md for generating SKILL.md with role-based routing.
- Add tests for static graph relationship writing during index build in test_static_graph_integration.py.
This commit is contained in:
catlog22
2026-02-13 12:35:31 +08:00
parent 6054a01b8f
commit a512564b5a
14 changed files with 2897 additions and 51 deletions


@@ -452,6 +452,7 @@ When designing a new team command, verify:
### Infrastructure Patterns
- [ ] YAML front matter with `group: team`
- [ ] Message bus section with `team_msg` logging
- [ ] CLI fallback section with `ccw team` CLI examples and parameter mapping
- [ ] Role-specific message types defined
- [ ] Task lifecycle: TaskList -> TaskGet -> TaskUpdate flow
- [ ] Unique task prefix (no collision with existing PLAN/IMPL/TEST/REVIEW, scan `team/**/*.md`)


@@ -0,0 +1,285 @@
---
name: team-skill-designer
description: Design and generate unified team skills with role-based routing. All team members invoke ONE skill, SKILL.md routes to role-specific execution via --role arg. Triggers on "design team skill", "create team skill", "team skill designer".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep
---
# Team Skill Designer
Meta-skill for creating unified team skills where all team members invoke ONE skill with role-based routing. Generates a complete skill package with SKILL.md as role router and `roles/` folder for per-role execution detail.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Team Skill Designer (this meta-skill) │
│ → Collect requirements → Analyze patterns → Generate skill pkg │
└───────────────┬─────────────────────────────────────────────────┘
┌───────────┼───────────┬───────────┬───────────┐
↓ ↓ ↓ ↓ ↓
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │ Phase 4 │ │ Phase 5 │
│ Require │ │ Pattern │ │ Skill │ │ Integ │ │ Valid │
│ Collect │ │ Analyze │ │ Gen │ │ Verify │ │ │
└─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘
↓ ↓ ↓ ↓ ↓
team- patterns SKILL.md + report validated
config.json .json roles/*.md .json skill pkg
```
## Key Innovation: Unified Skill + Role Router
**Before (command approach)**:
```
.claude/commands/team/
├── coordinate.md → /team:coordinate
├── plan.md → /team:plan
├── execute.md → /team:execute
├── test.md → /team:test
└── review.md → /team:review
```
→ 5 separate command files, 5 separate skill paths
**After (unified skill approach)**:
```
.claude/skills/team-{name}/
├── SKILL.md → Skill(skill="team-{name}", args="--role=xxx")
├── roles/
│ ├── coordinator.md
│ ├── planner.md
│ ├── executor.md
│ ├── tester.md
│ └── reviewer.md
└── specs/
└── team-config.json
```
→ 1 skill entry point, --role arg routes to per-role execution
**Coordinator spawns teammates with**:
```javascript
Task({
  prompt: `...invoke Skill(skill="team-{name}", args="--role=planner") to execute planning...`
})
```
## Target Output Structure
```
.claude/skills/team-{name}/
├── SKILL.md # Role router + shared infrastructure
│ ├─ Frontmatter
│ ├─ Architecture Overview (role routing diagram)
│ ├─ Role Router (parse --role → Read roles/{role}.md → execute)
│ ├─ Shared Infrastructure (message bus, task lifecycle)
│ ├─ Coordinator Spawn Template
│ └─ Error Handling
├── roles/ # Role-specific execution detail
│ ├── coordinator.md # Orchestration logic
│ ├── {role-1}.md # First worker role
│ ├── {role-2}.md # Second worker role
│ └── ...
└── specs/ # [Optional] Team-specific config
└── team-config.json
```
## Core Design Patterns
### Pattern 1: Role Router (Unified Entry Point)
SKILL.md parses `$ARGUMENTS` to extract `--role`:
```
Input: Skill(skill="team-{name}", args="--role=planner")
↓ Parse --role=planner
↓ Read roles/planner.md
↓ Execute planner-specific 5-phase logic
```
No --role → error (role is required, set by coordinator spawn).
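The routing above can be sketched as a small executable parser. `VALID_ROLES` here is a hypothetical role set for illustration; the real map is generated per team in Phase 3:

```javascript
// Hypothetical role map for illustration; the real one is generated per team.
const VALID_ROLES = {
  coordinator: { file: "roles/coordinator.md" },
  planner: { file: "roles/planner.md" },
  executor: { file: "roles/executor.md" }
};

function parseRole(args) {
  // Accept both "--role=planner" and "--role planner"
  const match = args.match(/--role[=\s]+(\w+)/);
  if (!match) {
    throw new Error(`Missing --role argument. Available roles: ${Object.keys(VALID_ROLES).join(', ')}`);
  }
  const role = match[1];
  if (!VALID_ROLES[role]) {
    throw new Error(`Unknown role: ${role}. Available: ${Object.keys(VALID_ROLES).join(', ')}`);
  }
  return role;
}
```

The `[=\s]+` separator accepts both `--role=planner` and `--role planner`; any missing or unknown role fails fast, matching the "role is required" rule above.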
### Pattern 2: Shared Infrastructure in SKILL.md
SKILL.md defines ONCE, all roles inherit:
- Message bus pattern (team_msg + CLI fallback)
- Task lifecycle (TaskList → TaskGet → TaskUpdate)
- Team name and session directory conventions
- Error handling and escalation rules
### Pattern 3: Role Files = Full Execution Detail
Each `roles/{role}.md` contains:
- Role-specific 5-phase implementation
- Per-role message types
- Per-role task prefix
- Complete code (no `Ref:` back to SKILL.md)
### Pattern 4: Batch Role Generation
Phase 1 collects ALL roles at once (not one at a time):
- Team name + all role definitions in one pass
- Coordinator is always generated
- Worker roles collected as a batch
### Pattern 5: Spec Reference (No Duplication)
Design pattern specs are referenced from team-command-designer:
```
specs → ../team-command-designer/specs/team-design-patterns.md
specs → ../team-command-designer/specs/collaboration-patterns.md
specs → ../team-command-designer/specs/quality-standards.md
```
---
## Mandatory Prerequisites
> **Do NOT skip**: Read these before any execution.
### Specification Documents (Required Reading)
| Document | Purpose | When |
|----------|---------|------|
| [../team-command-designer/specs/team-design-patterns.md](../team-command-designer/specs/team-design-patterns.md) | Infrastructure patterns (8) + collaboration index | **Must read** |
| [../team-command-designer/specs/collaboration-patterns.md](../team-command-designer/specs/collaboration-patterns.md) | 10 collaboration patterns with convergence control | **Must read** |
| [../team-command-designer/specs/quality-standards.md](../team-command-designer/specs/quality-standards.md) | Quality criteria | Must read before generation |
### Template Files (Must read before generation)
| Document | Purpose |
|----------|---------|
| [templates/skill-router-template.md](templates/skill-router-template.md) | Generated SKILL.md template with role router |
| [templates/role-template.md](templates/role-template.md) | Generated role file template |
### Existing Reference
| Document | Purpose |
|----------|---------|
| `.claude/commands/team/coordinate.md` | Coordinator spawn patterns |
| `.claude/commands/team/plan.md` | Planner role reference |
| `.claude/commands/team/execute.md` | Executor role reference |
| `.claude/commands/team/test.md` | Tester role reference |
| `.claude/commands/team/review.md` | Reviewer role reference |
---
## Execution Flow
```
Phase 0: Specification Study (MANDATORY)
-> Read: ../team-command-designer/specs/team-design-patterns.md
-> Read: ../team-command-designer/specs/collaboration-patterns.md
-> Read: templates/skill-router-template.md + templates/role-template.md
-> Read: 1-2 existing team commands for reference
-> Output: Internalized requirements (in-memory)
Phase 1: Requirements Collection
-> Ref: phases/01-requirements-collection.md
- Collect team name and ALL role definitions (batch)
- For each role: name, responsibility, task prefix, capabilities
- Pipeline definition (task chain order)
- Output: team-config.json (team-level + per-role config)
Phase 2: Pattern Analysis
-> Ref: phases/02-pattern-analysis.md
- Per-role: find most similar existing command
- Per-role: select infrastructure + collaboration patterns
- Per-role: map 5-phase structure
- Output: pattern-analysis.json
Phase 3: Skill Package Generation
-> Ref: phases/03-skill-generation.md
- Generate SKILL.md (role router + shared infrastructure)
- Generate roles/*.md (per-role execution detail)
- Generate specs/team-config.json
- Output: .claude/skills/team-{name}/ complete package
Phase 4: Integration Verification
-> Ref: phases/04-integration-verification.md
- Verify role router references match role files
- Verify task prefixes are unique across roles
- Verify message type compatibility
- Output: integration-report.json
Phase 5: Validation
-> Ref: phases/05-validation.md
- Structural completeness per role file
- Pattern compliance per role file
- Quality scoring and delivery
- Output: validation-report.json + delivered skill package
```
**Phase Reference Documents** (read on-demand):
| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-requirements-collection.md](phases/01-requirements-collection.md) | Batch collect team + all role definitions |
| 2 | [phases/02-pattern-analysis.md](phases/02-pattern-analysis.md) | Per-role pattern matching and phase mapping |
| 3 | [phases/03-skill-generation.md](phases/03-skill-generation.md) | Generate unified skill package |
| 4 | [phases/04-integration-verification.md](phases/04-integration-verification.md) | Verify internal consistency |
| 5 | [phases/05-validation.md](phases/05-validation.md) | Quality gate and delivery |
## Directory Setup
```javascript
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/team-skill-${timestamp}`;
Bash(`mkdir -p "${workDir}"`);
```
## Output Structure
```
.workflow/.scratchpad/team-skill-{timestamp}/
├── team-config.json # Phase 1 output (team + all roles)
├── pattern-analysis.json # Phase 2 output (per-role patterns)
├── integration-report.json # Phase 4 output
├── validation-report.json # Phase 5 output
└── preview/ # Phase 3 output (preview before delivery)
├── SKILL.md
├── roles/
│ ├── coordinator.md
│ └── {role-N}.md
└── specs/
└── team-config.json
Final delivery:
.claude/skills/team-{name}/
├── SKILL.md
├── roles/
│ ├── coordinator.md
│ └── ...
└── specs/
└── team-config.json
```
## Comparison: Command Designer vs Skill Designer
| Aspect | team-command-designer | team-skill-designer |
|--------|----------------------|---------------------|
| Output | N separate .md command files | 1 skill package (SKILL.md + roles/) |
| Entry point | N skill paths (/team:xxx) | 1 skill path + --role arg |
| Shared infra | Duplicated in each command | Defined once in SKILL.md |
| Role isolation | Complete (separate files) | Complete (roles/ directory) |
| Coordinator spawn | `Skill(skill="team:plan")` | `Skill(skill="team-{name}", args="--role=planner")` |
| Role generation | One role at a time | All roles in batch |
| Template | command-template.md | skill-router-template.md + role-template.md |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Specs not found | Fall back to inline pattern knowledge |
| Role name conflicts | AskUserQuestion for rename |
| Task prefix conflicts | Suggest alternative prefix |
| Template variable unresolved | FAIL with specific variable name |
| Quality score < 60% | Re-run Phase 3 with additional context |
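The "Template variable unresolved" failure can be sketched as a scan of generated output for leftover placeholders. The `{{variable}}` syntax is an assumption for illustration; adjust the regex to the real template syntax:

```javascript
// Scan generated text for placeholders the generator failed to substitute.
// Assumes {{variable}} placeholder syntax (an illustration, not the confirmed syntax).
function findUnresolvedVariables(text) {
  const matches = text.match(/\{\{\s*([\w.]+)\s*\}\}/g) || [];
  return [...new Set(matches.map(m => m.replace(/[{}\s]/g, '')))];
}

// FAIL with the specific variable names, per the error-handling table.
function assertResolved(fileName, text) {
  const leftover = findUnresolvedVariables(text);
  if (leftover.length > 0) {
    throw new Error(`${fileName}: unresolved template variables: ${leftover.join(', ')}`);
  }
}
```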
## Debugging
| Issue | Solution |
|-------|----------|
| Generated SKILL.md missing router | Check templates/skill-router-template.md |
| Role file missing message bus | Check templates/role-template.md |
| Integration check fails | Review phases/04-integration-verification.md |
| Quality score below threshold | Review specs/quality-standards.md |


@@ -0,0 +1,283 @@
# Phase 1: Requirements Collection (Batch Mode)
Collect team definition and ALL role definitions in one pass.
## Objective
- Determine team name and display name
- Collect ALL roles (coordinator + workers) in batch
- For each role: name, responsibility, task prefix, capabilities
- Define pipeline (task chain order)
- Generate team-config.json
## Input
- User request (`$ARGUMENTS` or interactive input)
- Specification: `../team-command-designer/specs/team-design-patterns.md` (read in Phase 0)
## Execution Steps
### Step 1: Team Basic Information
```javascript
const teamInfo = await AskUserQuestion({
questions: [
{
question: "What is the team name? (lowercase, used as the skill folder name: .claude/skills/team-{name}/)",
header: "Team Name",
multiSelect: false,
options: [
{ label: "Custom", description: "Enter a custom team name" },
{ label: "dev", description: "Development team (plan/execute/test/review)" },
{ label: "spec", description: "Spec documentation team (analyst/writer/reviewer/discuss)" },
{ label: "security", description: "Security audit team" }
]
},
{
question: "Which pipeline model does the team use?",
header: "Pipeline",
multiSelect: false,
options: [
{ label: "Standard (Recommended)", description: "PLAN → IMPL → TEST + REVIEW (standard development pipeline)" },
{ label: "Document Chain", description: "RESEARCH → DRAFT → DISCUSS → REVIEW (document workflow)" },
{ label: "Custom", description: "Define a custom pipeline" }
]
}
]
})
```
### Step 2: Role Definitions (Batch)
```javascript
// Always include coordinator
const roles = [{
name: "coordinator",
responsibility_type: "Orchestration",
task_prefix: null, // coordinator creates tasks, doesn't receive them
description: "Pipeline orchestration, team lifecycle, cross-stage coordination"
}]
// Collect worker roles based on pipeline model
const pipelineType = teamInfo["Pipeline"]
if (pipelineType.includes("Standard")) {
// Pre-fill standard development roles
roles.push(
{ name: "planner", responsibility_type: "Orchestration", task_prefix: "PLAN", description: "Code exploration and implementation planning" },
{ name: "executor", responsibility_type: "Code generation", task_prefix: "IMPL", description: "Code implementation following approved plan" },
{ name: "tester", responsibility_type: "Validation", task_prefix: "TEST", description: "Test execution and fix cycles" },
{ name: "reviewer", responsibility_type: "Read-only analysis", task_prefix: "REVIEW", description: "Multi-dimensional code review" }
)
} else if (pipelineType.includes("Document")) {
roles.push(
{ name: "analyst", responsibility_type: "Orchestration", task_prefix: "RESEARCH", description: "Seed analysis, codebase exploration, context collection" },
{ name: "writer", responsibility_type: "Code generation", task_prefix: "DRAFT", description: "Document drafting following templates" },
{ name: "reviewer", responsibility_type: "Read-only analysis", task_prefix: "QUALITY", description: "Cross-document quality verification" },
{ name: "discuss", responsibility_type: "Orchestration", task_prefix: "DISCUSS", description: "Structured team discussion and consensus building" }
)
} else {
// Custom: ask user for each role
}
```
### Step 3: Role Customization (Interactive)
```javascript
// Allow user to customize pre-filled roles
const customization = await AskUserQuestion({
questions: [
{
question: "Do you want to customize the roles? (Defaults are pre-filled from the pipeline)",
header: "Customize",
multiSelect: false,
options: [
{ label: "Use defaults (Recommended)", description: "Use the pre-filled role definitions as-is" },
{ label: "Add roles", description: "Add new roles on top of the defaults" },
{ label: "Modify roles", description: "Edit the default role definitions" },
{ label: "Start from scratch", description: "Clear the defaults and define each role individually" }
]
}
]
})
if (customization["Customize"].includes("Add roles")) {
const newRole = await AskUserQuestion({
questions: [
{
question: "New role name? (lowercase)",
header: "Role Name",
multiSelect: false,
options: [
{ label: "Custom", description: "Enter a custom role name" },
{ label: "deployer", description: "Deployment and release management" },
{ label: "documenter", description: "Documentation generation" },
{ label: "monitor", description: "Monitoring and alerting" }
]
},
{
question: "Role responsibility type?",
header: "Type",
multiSelect: false,
options: [
{ label: "Read-only analysis", description: "Analyze/review/report (no file modification)" },
{ label: "Code generation", description: "Write/modify code files" },
{ label: "Orchestration", description: "Coordinate sub-tasks and agents" },
{ label: "Validation", description: "Testing/verification/audit" }
]
]
}
]
})
// Add to roles array
}
```
### Step 4: Capability Selection (Per Role)
```javascript
// For each worker role, determine capabilities
for (const role of roles.filter(r => r.name !== 'coordinator')) {
// Infer capabilities from responsibility type
const baseTools = ["SendMessage(*)", "TaskUpdate(*)", "TaskList(*)", "TaskGet(*)", "TodoWrite(*)", "Read(*)", "Bash(*)", "Glob(*)", "Grep(*)"]
if (role.responsibility_type === "Code generation") {
role.allowed_tools = [...baseTools, "Write(*)", "Edit(*)", "Task(*)"]
role.adaptive_routing = true
} else if (role.responsibility_type === "Orchestration") {
role.allowed_tools = [...baseTools, "Write(*)", "Task(*)"]
role.adaptive_routing = true
} else if (role.responsibility_type === "Validation") {
role.allowed_tools = [...baseTools, "Write(*)", "Edit(*)", "Task(*)"]
role.adaptive_routing = false
} else {
// Read-only analysis
role.allowed_tools = [...baseTools, "Task(*)"]
role.adaptive_routing = false
}
// Infer message types
const roleMsgTypes = {
"Read-only analysis": [
{ type: `${role.name}_result`, trigger: "Analysis complete" },
{ type: "error", trigger: "Blocking error" }
],
"Code generation": [
{ type: `${role.name}_complete`, trigger: "Generation complete" },
{ type: `${role.name}_progress`, trigger: "Batch progress" },
{ type: "error", trigger: "Blocking error" }
],
"Orchestration": [
{ type: `${role.name}_ready`, trigger: "Results ready" },
{ type: `${role.name}_progress`, trigger: "Progress update" },
{ type: "error", trigger: "Blocking error" }
],
"Validation": [
{ type: `${role.name}_result`, trigger: "Validation complete" },
{ type: "fix_required", trigger: "Critical issues found" },
{ type: "error", trigger: "Blocking error" }
]
}
role.message_types = roleMsgTypes[role.responsibility_type] || []
}
// Coordinator special config
roles[0].allowed_tools = [
"TeamCreate(*)", "TeamDelete(*)", "SendMessage(*)",
"TaskCreate(*)", "TaskUpdate(*)", "TaskList(*)", "TaskGet(*)",
"Task(*)", "AskUserQuestion(*)", "TodoWrite(*)",
"Read(*)", "Bash(*)", "Glob(*)", "Grep(*)"
]
roles[0].message_types = [
{ type: "plan_approved", trigger: "Plan approved" },
{ type: "plan_revision", trigger: "Revision requested" },
{ type: "task_unblocked", trigger: "Task unblocked" },
{ type: "shutdown", trigger: "Team shutdown" },
{ type: "error", trigger: "Coordination error" }
]
```
### Step 5: Pipeline Definition
```javascript
// Build pipeline from roles and their task chain positions
function buildPipeline(roles, pipelineType) {
if (pipelineType.includes("Standard")) {
return {
stages: [
{ name: "PLAN", role: "planner", blockedBy: [] },
{ name: "IMPL", role: "executor", blockedBy: ["PLAN"] },
{ name: "TEST", role: "tester", blockedBy: ["IMPL"] },
{ name: "REVIEW", role: "reviewer", blockedBy: ["IMPL"] }
],
      diagram: "Requirement → [PLAN: planner] → coordinator approval → [IMPL: executor] → [TEST + REVIEW: tester/reviewer] → report"
}
}
if (pipelineType.includes("Document")) {
return {
stages: [
{ name: "RESEARCH", role: "analyst", blockedBy: [] },
{ name: "DISCUSS-scope", role: "discuss", blockedBy: ["RESEARCH"] },
{ name: "DRAFT", role: "writer", blockedBy: ["DISCUSS-scope"] },
{ name: "DISCUSS-eval", role: "discuss", blockedBy: ["DRAFT"] },
{ name: "QUALITY", role: "reviewer", blockedBy: ["DRAFT"] }
],
diagram: "RESEARCH → DISCUSS → DRAFT → DISCUSS → QUALITY → Deliver"
}
}
// Custom pipeline
return { stages: [], diagram: "Custom pipeline" }
}
const pipeline = buildPipeline(roles, pipelineType)
```
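As a quick sanity check on the result, the sketch below verifies that each stage's `blockedBy` entries reference stages defined earlier in the pipeline, so the task chain can be created front-to-back without dangling dependencies (the sample mirrors the Standard pipeline above):

```javascript
// Each blockedBy entry must name a stage that appears earlier in the pipeline.
function validateStageOrder(pipeline) {
  const seen = new Set();
  for (const stage of pipeline.stages) {
    if (stage.blockedBy.some(dep => !seen.has(dep))) return false; // forward/unknown ref
    seen.add(stage.name);
  }
  return true;
}

const standardPipeline = {
  stages: [
    { name: "PLAN", role: "planner", blockedBy: [] },
    { name: "IMPL", role: "executor", blockedBy: ["PLAN"] },
    { name: "TEST", role: "tester", blockedBy: ["IMPL"] },
    { name: "REVIEW", role: "reviewer", blockedBy: ["IMPL"] }
  ]
};
```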
### Step 6: Generate Configuration
```javascript
const teamName = teamInfo["Team Name"] === "Custom"
  ? teamInfo["Team Name_other"]
  : teamInfo["Team Name"]
const config = {
team_name: teamName,
team_display_name: teamName.charAt(0).toUpperCase() + teamName.slice(1),
skill_name: `team-${teamName}`,
skill_path: `.claude/skills/team-${teamName}/`,
pipeline_type: pipelineType,
pipeline: pipeline,
roles: roles.map(r => ({
...r,
display_name: `${teamName} ${r.name}`,
name_upper: r.name.toUpperCase()
})),
worker_roles: roles.filter(r => r.name !== 'coordinator').map(r => ({
...r,
display_name: `${teamName} ${r.name}`,
name_upper: r.name.toUpperCase()
})),
all_roles_tools_union: [...new Set(roles.flatMap(r => r.allowed_tools))].join(', '),
role_list: roles.map(r => r.name).join(', ')
}
Write(`${workDir}/team-config.json`, JSON.stringify(config, null, 2))
```
## Output
- **File**: `team-config.json`
- **Format**: JSON
- **Location**: `{workDir}/team-config.json`
## Quality Checklist
- [ ] Team name is lowercase, valid as folder/skill name
- [ ] Coordinator is always included
- [ ] At least 2 worker roles defined
- [ ] Task prefixes are UPPERCASE and unique across roles
- [ ] Pipeline stages reference valid roles
- [ ] All roles have message types defined
- [ ] Allowed tools include minimum set per role
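The machine-checkable items above can be sketched as a validator over `team-config.json`. Field names follow the Step 6 structure; the return value is a list of violations (empty list = config passes):

```javascript
// Validate the machine-checkable items of the Phase 1 quality checklist.
function validateTeamConfig(config) {
  const errors = [];
  if (!/^[a-z][a-z0-9-]*$/.test(config.team_name)) errors.push("team name must be lowercase");
  if (!config.roles.some(r => r.name === "coordinator")) errors.push("coordinator missing");
  const workers = config.roles.filter(r => r.name !== "coordinator");
  if (workers.length < 2) errors.push("need at least 2 worker roles");
  const prefixes = workers.map(r => r.task_prefix);
  if (prefixes.some(p => !p || p !== p.toUpperCase())) errors.push("task prefixes must be UPPERCASE");
  if (new Set(prefixes).size !== prefixes.length) errors.push("task prefixes must be unique");
  for (const stage of config.pipeline.stages) {
    if (!config.roles.some(r => r.name === stage.role)) {
      errors.push(`stage ${stage.name} references unknown role ${stage.role}`);
    }
  }
  return errors;
}
```

Message-type and allowed-tools coverage are better checked in Phase 4, where role files exist to compare against.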
## Next Phase
-> [Phase 2: Pattern Analysis](02-pattern-analysis.md)


@@ -0,0 +1,217 @@
# Phase 2: Pattern Analysis
Analyze applicable patterns for each role in the team.
## Objective
- Per-role: find most similar existing command
- Per-role: select infrastructure + collaboration patterns
- Per-role: map 5-phase structure to role responsibilities
- Generate pattern-analysis.json
## Input
- Dependency: `team-config.json` (Phase 1)
- Specification: `../team-command-designer/specs/team-design-patterns.md` (read in Phase 0)
## Execution Steps
### Step 1: Load Configuration
```javascript
const config = JSON.parse(Read(`${workDir}/team-config.json`))
```
### Step 2: Per-Role Similarity Mapping
```javascript
const similarityMap = {
"Read-only analysis": {
primary: "review", secondary: "plan",
reason: "Both analyze code and report findings with severity classification"
},
"Code generation": {
primary: "execute", secondary: "test",
reason: "Both write/modify code and self-validate"
},
"Orchestration": {
primary: "plan", secondary: "coordinate",
reason: "Both coordinate sub-tasks and produce structured output"
},
"Validation": {
primary: "test", secondary: "review",
reason: "Both validate quality with structured criteria"
}
}
const roleAnalysis = config.worker_roles.map(role => {
const similarity = similarityMap[role.responsibility_type]
return {
role_name: role.name,
similar_to: similarity,
reference_command: `.claude/commands/team/${similarity.primary}.md`
}
})
```
### Step 3: Per-Role Phase Mapping
```javascript
const phaseMapping = {
"Read-only analysis": {
phase2: "Context Loading",
phase3: "Analysis Execution",
phase4: "Finding Summary"
},
"Code generation": {
phase2: "Task & Plan Loading",
phase3: "Code Implementation",
phase4: "Self-Validation"
},
"Orchestration": {
phase2: "Context & Complexity Assessment",
phase3: "Orchestrated Execution",
phase4: "Result Aggregation"
},
"Validation": {
phase2: "Environment Detection",
phase3: "Execution & Fix Cycle",
phase4: "Result Analysis"
}
}
roleAnalysis.forEach(ra => {
const role = config.worker_roles.find(r => r.name === ra.role_name)
ra.phase_structure = {
phase1: "Task Discovery",
...phaseMapping[role.responsibility_type],
phase5: "Report to Coordinator"
}
})
```
### Step 4: Per-Role Infrastructure Patterns
```javascript
roleAnalysis.forEach(ra => {
const role = config.worker_roles.find(r => r.name === ra.role_name)
// Core patterns (mandatory for all)
ra.core_patterns = [
"pattern-1-message-bus",
"pattern-2-yaml-front-matter", // Adapted: no YAML in skill role files
"pattern-3-task-lifecycle",
"pattern-4-five-phase",
"pattern-6-coordinator-spawn",
"pattern-7-error-handling"
]
// Conditional patterns
ra.conditional_patterns = []
if (role.adaptive_routing) {
ra.conditional_patterns.push("pattern-5-complexity-adaptive")
}
if (role.responsibility_type === "Code generation" || role.responsibility_type === "Orchestration") {
ra.conditional_patterns.push("pattern-8-session-files")
}
})
```
### Step 5: Collaboration Pattern Selection
```javascript
// Team-level collaboration patterns
function selectTeamPatterns(config) {
const patterns = ['CP-1'] // Linear Pipeline is always base
const hasValidation = config.worker_roles.some(r =>
r.responsibility_type === 'Validation' || r.responsibility_type === 'Read-only analysis'
)
if (hasValidation) patterns.push('CP-2') // Review-Fix Cycle
const hasOrchestration = config.worker_roles.some(r =>
r.responsibility_type === 'Orchestration'
)
if (hasOrchestration) patterns.push('CP-3') // Fan-out/Fan-in
if (config.worker_roles.length >= 4) patterns.push('CP-6') // Incremental Delivery
patterns.push('CP-5') // Escalation Chain (always available)
patterns.push('CP-10') // Post-Mortem (always at team level)
return [...new Set(patterns)]
}
const collaborationPatterns = selectTeamPatterns(config)
// Convergence defaults
const convergenceConfig = collaborationPatterns.map(cp => {
const defaults = {
'CP-1': { max_iterations: 1, success_gate: 'all_stages_completed' },
'CP-2': { max_iterations: 5, success_gate: 'verdict_approve_or_conditional' },
'CP-3': { max_iterations: 1, success_gate: 'quorum_100_percent' },
'CP-5': { max_iterations: null, success_gate: 'issue_resolved_at_any_level' },
'CP-6': { max_iterations: 3, success_gate: 'all_increments_validated' },
'CP-10': { max_iterations: 1, success_gate: 'report_generated' }
}
return { pattern: cp, convergence: defaults[cp] || {} }
})
```
### Step 6: Read Reference Commands
```javascript
// Read the most referenced commands for extraction
const referencedCommands = [...new Set(roleAnalysis.map(ra => ra.similar_to.primary))]
const referenceContent = {}
for (const cmdName of referencedCommands) {
try {
referenceContent[cmdName] = Read(`.claude/commands/team/${cmdName}.md`)
} catch {
referenceContent[cmdName] = null
}
}
```
### Step 7: Generate Analysis Document
```javascript
const analysis = {
team_name: config.team_name,
role_count: config.roles.length,
worker_count: config.worker_roles.length,
role_analysis: roleAnalysis,
collaboration_patterns: collaborationPatterns,
convergence_config: convergenceConfig,
referenced_commands: referencedCommands,
pipeline: config.pipeline,
// Skill-specific patterns
skill_patterns: {
role_router: "Parse --role from $ARGUMENTS → dispatch to roles/{role}.md",
shared_infrastructure: "Message bus + task lifecycle defined once in SKILL.md",
progressive_loading: "Only read roles/{role}.md when that role executes"
}
}
Write(`${workDir}/pattern-analysis.json`, JSON.stringify(analysis, null, 2))
```
## Output
- **File**: `pattern-analysis.json`
- **Format**: JSON
- **Location**: `{workDir}/pattern-analysis.json`
## Quality Checklist
- [ ] Every worker role has similarity mapping
- [ ] Every worker role has 5-phase structure
- [ ] Infrastructure patterns include all mandatory patterns
- [ ] Collaboration patterns selected at team level
- [ ] Referenced commands are readable
- [ ] Skill-specific patterns documented
## Next Phase
-> [Phase 3: Skill Package Generation](03-skill-generation.md)


@@ -0,0 +1,668 @@
# Phase 3: Skill Package Generation
Generate the unified team skill package: SKILL.md (role router) + roles/*.md (per-role execution).
## Objective
- Generate SKILL.md with role router and shared infrastructure
- Generate roles/coordinator.md
- Generate roles/{worker-role}.md for each worker role
- Generate specs/team-config.json
- All files written to preview directory first
## Input
- Dependency: `team-config.json` (Phase 1), `pattern-analysis.json` (Phase 2)
- Templates: `templates/skill-router-template.md`, `templates/role-template.md`
- Reference: existing team commands (read in Phase 0)
## Execution Steps
### Step 1: Load Inputs
```javascript
const config = JSON.parse(Read(`${workDir}/team-config.json`))
const analysis = JSON.parse(Read(`${workDir}/pattern-analysis.json`))
const routerTemplate = Read(`${skillDir}/templates/skill-router-template.md`)
const roleTemplate = Read(`${skillDir}/templates/role-template.md`)
// Create preview directory
const previewDir = `${workDir}/preview`
Bash(`mkdir -p "${previewDir}/roles" "${previewDir}/specs"`)
```
### Step 2: Generate SKILL.md (Role Router)
This is the unified entry point. All roles invoke this skill with `--role=xxx`.
```javascript
const rolesTable = config.roles.map(r =>
`| \`${r.name}\` | ${r.task_prefix || 'N/A'} | ${r.description} | [roles/${r.name}.md](roles/${r.name}.md) |`
).join('\n')
const roleDispatchEntries = config.roles.map(r =>
` "${r.name}": { file: "roles/${r.name}.md", prefix: "${r.task_prefix || 'N/A'}" }`
).join(',\n')
const messageBusTable = config.worker_roles.map(r =>
`| ${r.name} | ${r.message_types.map(mt => '\`' + mt.type + '\`').join(', ')} |`
).join('\n')
const spawnBlocks = config.worker_roles.map(r => `
// ${r.display_name}
Task({
subagent_type: "general-purpose",
team_name: teamName,
name: "${r.name}",
  prompt: \`You are the ${r.name_upper} of team "\${teamName}".
When you receive a ${r.task_prefix}-* task, invoke Skill(skill="${config.skill_name}", args="--role=${r.name}") to execute it.
Current requirement: \${taskDescription}
Constraints: \${constraints}
## Message Bus (Required)
Before every SendMessage, first log via mcp__ccw-tools__team_msg:
mcp__ccw-tools__team_msg({ operation: "log", team: "\${teamName}", from: "${r.name}", to: "coordinator", type: "<type>", summary: "<summary>" })
Workflow:
1. TaskList → find ${r.task_prefix}-* tasks
2. Skill(skill="${config.skill_name}", args="--role=${r.name}") to execute
3. team_msg log + SendMessage the result to coordinator
4. TaskUpdate completed → check for the next task\`
})`).join('\n')
const skillMd = `---
name: ${config.skill_name}
description: Unified team skill for ${config.team_name} team. All roles invoke this skill with --role arg. Triggers on "team ${config.team_name}".
allowed-tools: ${config.all_roles_tools_union}
---
# Team ${config.team_display_name}
Unified team skill. All team members invoke this skill with \`--role=xxx\` for role-specific execution.
## Architecture Overview
\`\`\`
┌───────────────────────────────────────────┐
│ Skill(skill="${config.skill_name}") │
│ args="--role=xxx" │
└───────────────┬───────────────────────────┘
│ Role Router
┌───────────┼${'───────────┬'.repeat(Math.max(config.roles.length - 2, 0))}───────────┐
${config.roles.map(() => '     ↓     ').join(' ')}
${config.roles.map(() => '┌─────────┐').join(' ')}
${config.roles.map(r => `│${r.name.slice(0, 9).padEnd(9)}│`).join(' ')}
${config.roles.map(() => '└─────────┘').join(' ')}
\`\`\`
## Role Router
### Input Parsing
Parse \`$ARGUMENTS\` to extract \`--role\`:
\`\`\`javascript
const args = "$ARGUMENTS"
const roleMatch = args.match(/--role[=\\s]+(\\w+)/)
if (!roleMatch) {
throw new Error("Missing --role argument. Available roles: ${config.role_list}")
}
const role = roleMatch[1]
const teamName = "${config.team_name}"
\`\`\`
### Role Dispatch
\`\`\`javascript
const VALID_ROLES = {
${roleDispatchEntries}
}
if (!VALID_ROLES[role]) {
  throw new Error(\`Unknown role: \${role}. Available: \${Object.keys(VALID_ROLES).join(', ')}\`)
}
// Read and execute role-specific logic
Read(VALID_ROLES[role].file)
// → Execute the 5-phase process defined in that file
\`\`\`
### Available Roles
| Role | Task Prefix | Responsibility | Role File |
|------|-------------|----------------|-----------|
${rolesTable}
## Shared Infrastructure
### Team Configuration
\`\`\`javascript
const TEAM_CONFIG = {
name: "${config.team_name}",
sessionDir: ".workflow/.team-plan/${config.team_name}/",
msgDir: ".workflow/.team-msg/${config.team_name}/"
}
\`\`\`
### Message Bus (All Roles)
**Before** every SendMessage, call \`mcp__ccw-tools__team_msg\`:
\`\`\`javascript
mcp__ccw-tools__team_msg({
operation: "log",
team: "${config.team_name}",
from: role,
to: "coordinator",
type: "<type>",
summary: "<summary>"
})
\`\`\`
**Message types by role**:
| Role | Types |
|------|-------|
${messageBusTable}
### CLI Fallback
\`\`\`javascript
Bash(\`ccw team log --team "${config.team_name}" --from "\${role}" --to "coordinator" --type "<type>" --summary "<summary>" --json\`)
\`\`\`
### Task Lifecycle (All Roles)
\`\`\`javascript
// Phase 1: Discovery
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith(\`\${VALID_ROLES[role].prefix}-\`) &&
t.owner === role &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
// Phase 2-4: Role-specific (see roles/{role}.md)
// Phase 5: Report + Loop
TaskUpdate({ taskId: task.id, status: 'completed' })
\`\`\`
## Pipeline
\`\`\`
${config.pipeline.diagram}
\`\`\`
## Coordinator Spawn Template
\`\`\`javascript
TeamCreate({ team_name: "${config.team_name}" })
${spawnBlocks}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Error with usage hint |
| Role file not found | Error with expected path |
`
Write(`${previewDir}/SKILL.md`, skillMd)
```
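The parse-and-dispatch logic written into SKILL.md above can be exercised as a standalone sketch; the `VALID_ROLES` table here is an illustrative placeholder, not the generated set:

```javascript
// Standalone sketch of the generated router's parse-and-dispatch logic.
// VALID_ROLES below is a placeholder; the real table is generated per team.
const VALID_ROLES = {
  coordinator: { file: "roles/coordinator.md", prefix: null },
  planner: { file: "roles/planner.md", prefix: "PLAN" },
};

function resolveRole(args) {
  // Accept both "--role=planner" and "--role planner"
  const roleMatch = args.match(/--role[=\s]+(\w+)/);
  if (!roleMatch) {
    throw new Error(`Missing --role argument. Available roles: ${Object.keys(VALID_ROLES).join(", ")}`);
  }
  const role = roleMatch[1];
  if (!VALID_ROLES[role]) {
    throw new Error(`Unknown role: ${role}. Available: ${Object.keys(VALID_ROLES).join(", ")}`);
  }
  return { role, ...VALID_ROLES[role] };
}
```

Both failure modes (missing and unknown `--role`) surface the available role list, matching the Error Handling table the generator emits.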
### Step 3: Generate Coordinator Role File
```javascript
const taskChainCode = config.pipeline.stages.map((stage, i) => {
const blockedByIds = stage.blockedBy.map(dep => {
const depIdx = config.pipeline.stages.findIndex(s => s.name === dep)
return `\${task${depIdx}Id}`
})
return `TaskCreate({ subject: "${stage.name}-001: ${stage.role} work", description: \`\${taskDescription}\`, activeForm: "${stage.name} in progress" })
TaskUpdate({ taskId: task${i}Id, owner: "${stage.role}"${blockedByIds.length > 0 ? `, addBlockedBy: [${blockedByIds.join(', ')}]` : ''} })`
}).join('\n\n')
const coordinationHandlers = config.worker_roles.map(r => {
const resultType = r.message_types.find(mt => !mt.type.includes('error') && !mt.type.includes('progress'))
return `| ${r.name_upper}: ${resultType?.trigger || 'work complete'} | team_msg log → TaskUpdate ${r.task_prefix} completed → check next |`
}).join('\n')
const coordinatorMd = `# Role: coordinator
Team coordinator. Orchestrates pipeline: requirement clarification → team creation → task chain → dispatch → monitoring → reporting.
## Role Identity
- **Name**: \`coordinator\`
- **Task Prefix**: N/A (creates tasks, doesn't receive them)
- **Responsibility**: Orchestration
- **Communication**: SendMessage to all teammates
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| \`plan_approved\` | coordinator → planner | Plan approved |
| \`plan_revision\` | coordinator → planner | Revision requested |
| \`task_unblocked\` | coordinator → worker | Task dependency met |
| \`shutdown\` | coordinator → all | Team shutdown |
| \`error\` | coordinator → user | Coordination error |
## Execution
### Phase 1: Requirement Clarification
Parse \`$ARGUMENTS\` for task description. Use AskUserQuestion for:
- MVP scope (minimal / full / comprehensive)
- Key constraints (backward compatible / follow patterns / test coverage)
Simple tasks can skip clarification.
### Phase 2: Create Team + Spawn Teammates
\`\`\`javascript
TeamCreate({ team_name: "${config.team_name}" })
${spawnBlocks}
\`\`\`
### Phase 3: Create Task Chain
\`\`\`javascript
${taskChainCode}
\`\`\`
### Phase 4: Coordination Loop
Receive teammate messages, dispatch based on content.
**Before each decision**: \`team_msg list\` to check recent messages.
**After each decision**: \`team_msg log\` to record.
| Received Message | Action |
|-----------------|--------|
${coordinationHandlers}
| Worker: error | Assess severity → retry or escalate to user |
| All tasks completed | → Phase 5 |
### Phase 5: Report + Persist
Summarize changes, test results, review findings.
\`\`\`javascript
AskUserQuestion({
questions: [{
question: "Current requirement is complete. Next step:",
header: "Next",
multiSelect: false,
options: [
{ label: "New requirement", description: "Submit a new requirement to the current team" },
{ label: "Shut down team", description: "Shut down all teammates and clean up" }
]
}]
})
// New requirement → back to Phase 1
// Shut down → send shutdown → TeamDelete()
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Teammate unresponsive | Send follow-up, 2x → respawn |
| Plan rejected 3+ times | Coordinator self-plans |
| Test stuck >5 iterations | Escalate to user |
| Review finds critical | Create fix task for executor |
`
Write(`${previewDir}/roles/coordinator.md`, coordinatorMd)
```
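The dependency wiring in `taskChainCode` hinges on mapping each stage's `blockedBy` names back to the indices of earlier stages. That resolution can be sketched on its own (stage names here are illustrative):

```javascript
// Resolve blockedBy stage names to stage indices, as taskChainCode does
// when emitting addBlockedBy references to earlier tasks.
function resolveBlockedBy(stages) {
  return stages.map((stage) =>
    stage.blockedBy.map((dep) => stages.findIndex((s) => s.name === dep))
  );
}

const stages = [
  { name: "PLAN", blockedBy: [] },
  { name: "IMPL", blockedBy: ["PLAN"] },
  { name: "TEST", blockedBy: ["IMPL"] },
];
// resolveBlockedBy(stages) yields [[], [0], [1]]
```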
### Step 4: Generate Worker Role Files
For each worker role, generate a complete role file with 5-phase execution.
```javascript
for (const role of config.worker_roles) {
const ra = analysis.role_analysis.find(r => r.role_name === role.name)
// Phase 2 content based on responsibility type
const phase2Content = {
"Read-only analysis": `\`\`\`javascript
// Load plan for criteria reference
const planPathMatch = task.description.match(/\\.workflow\\/\\.team-plan\\/[^\\s]+\\/plan\\.json/)
let plan = null
if (planPathMatch) {
try { plan = JSON.parse(Read(planPathMatch[0])) } catch {}
}
// Get changed files
const changedFiles = Bash(\`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached\`)
.split('\\n').filter(Boolean)
// Read file contents for analysis
const fileContents = {}
for (const file of changedFiles.slice(0, 20)) {
try { fileContents[file] = Read(file) } catch {}
}
\`\`\``,
"Code generation": `\`\`\`javascript
// Extract plan path from task description
const planPathMatch = task.description.match(/\\.workflow\\/\\.team-plan\\/[^\\s]+\\/plan\\.json/)
if (!planPathMatch) {
mcp__ccw-tools__team_msg({ operation: "log", team: "${config.team_name}", from: "${role.name}", to: "coordinator", type: "error", summary: "invalid plan.json path" })
SendMessage({ type: "message", recipient: "coordinator", content: \`Cannot find plan.json in \${task.subject}\`, summary: "Plan not found" })
return
}
const plan = JSON.parse(Read(planPathMatch[0]))
const planTasks = plan.task_ids.map(id =>
JSON.parse(Read(\`\${planPathMatch[0].replace('plan.json', '')}.task/\${id}.json\`))
)
\`\`\``,
"Orchestration": `\`\`\`javascript
function assessComplexity(desc) {
let score = 0
if (/refactor|architect|restructure|module|system/.test(desc)) score += 2
if (/multiple|across|cross/.test(desc)) score += 2
if (/integrate|api|database/.test(desc)) score += 1
if (/security|performance/.test(desc)) score += 1
return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
const complexity = assessComplexity(task.description)
\`\`\``,
"Validation": `\`\`\`javascript
// Detect changed files for validation scope
const changedFiles = Bash(\`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached\`)
.split('\\n').filter(Boolean)
\`\`\``
}
// Phase 3 content based on responsibility type
const phase3Content = {
"Read-only analysis": `\`\`\`javascript
// Core analysis logic
// Reference: .claude/commands/team/${ra.similar_to.primary}.md Phase 3
// Analyze each file
for (const [file, content] of Object.entries(fileContents)) {
// Domain-specific analysis
}
\`\`\``,
"Code generation": `\`\`\`javascript
// Reference: .claude/commands/team/${ra.similar_to.primary}.md Phase 3
${role.adaptive_routing ? `// Complexity-adaptive execution
if (planTasks.length <= 2) {
// Direct file editing
for (const pt of planTasks) {
for (const f of (pt.files || [])) {
const content = Read(f.path)
Edit({ file_path: f.path, old_string: "...", new_string: "..." })
}
}
} else {
// Delegate to code-developer sub-agent
Task({
subagent_type: "code-developer",
run_in_background: false,
description: \`Implement \${planTasks.length} tasks\`,
prompt: \`## Goal
\${plan.summary}
## Tasks
\${planTasks.map(t => \`### \${t.title}\\n\${t.description}\`).join('\\n\\n')}
Complete each task according to its convergence criteria.\`
})
}` : `// Direct execution
for (const pt of planTasks) {
for (const f of (pt.files || [])) {
const content = Read(f.path)
Edit({ file_path: f.path, old_string: "...", new_string: "..." })
}
}`}
\`\`\``,
"Orchestration": `\`\`\`javascript
// Reference: .claude/commands/team/${ra.similar_to.primary}.md Phase 3
${role.adaptive_routing ? `if (complexity === 'Low') {
// Direct execution with mcp__ace-tool__search_context + Grep/Glob
} else {
// Launch sub-agents for complex work
Task({
subagent_type: "universal-executor",
run_in_background: false,
description: "${role.name} orchestration",
prompt: \`Execute ${role.name} work for: \${task.description}\`
})
}` : `// Direct orchestration`}
\`\`\``,
"Validation": `\`\`\`javascript
// Reference: .claude/commands/team/${ra.similar_to.primary}.md Phase 3
let iteration = 0
const MAX_ITERATIONS = 5
while (iteration < MAX_ITERATIONS) {
// Run validation
const result = Bash(\`npm test 2>&1 || true\`)
const passed = !result.includes('FAIL')
if (passed) break
// Attempt fix
iteration++
if (iteration < MAX_ITERATIONS) {
// Auto-fix or delegate
}
}
\`\`\``
}
// Phase 4 content
const phase4Content = {
"Read-only analysis": `\`\`\`javascript
// Classify findings by severity
const findings = { critical: [], high: [], medium: [], low: [] }
// ... populate findings from Phase 3 analysis
\`\`\``,
"Code generation": `\`\`\`javascript
// Self-validation
const syntaxResult = Bash(\`tsc --noEmit 2>&1 || true\`)
const hasSyntaxErrors = syntaxResult.includes('error TS')
if (hasSyntaxErrors) {
// Attempt auto-fix
}
\`\`\``,
"Orchestration": `\`\`\`javascript
// Aggregate results from sub-agents
const aggregated = {
// Merge findings, results, outputs
}
\`\`\``,
"Validation": `\`\`\`javascript
// Analyze results
const resultSummary = {
iterations: iteration,
passed: iteration < MAX_ITERATIONS,
// Coverage, pass rate, etc.
}
\`\`\``
}
const msgTypesTable = role.message_types.map(mt =>
`| \`${mt.type}\` | ${role.name} → coordinator | ${mt.trigger} |`
).join('\n')
const primaryMsgType = role.message_types.find(mt => !mt.type.includes('error') && !mt.type.includes('progress'))?.type || `${role.name}_complete`
const roleMd = `# Role: ${role.name}
${role.description}
## Role Identity
- **Name**: \`${role.name}\`
- **Task Prefix**: \`${role.task_prefix}-*\`
- **Responsibility**: ${role.responsibility_type}
- **Communication**: SendMessage to coordinator only
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
${msgTypesTable}
## Execution (5-Phase)
### Phase 1: Task Discovery
\`\`\`javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('${role.task_prefix}-') &&
t.owner === '${role.name}' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
\`\`\`
### Phase 2: ${ra.phase_structure.phase2}
${phase2Content[role.responsibility_type]}
### Phase 3: ${ra.phase_structure.phase3}
${phase3Content[role.responsibility_type]}
### Phase 4: ${ra.phase_structure.phase4}
${phase4Content[role.responsibility_type]}
### Phase 5: Report to Coordinator
\`\`\`javascript
mcp__ccw-tools__team_msg({
operation: "log",
team: "${config.team_name}",
from: "${role.name}",
to: "coordinator",
type: "${primaryMsgType}",
summary: \`${role.task_prefix} complete: \${task.subject}\`
})
SendMessage({
type: "message",
recipient: "coordinator",
content: \`## ${role.display_name} Results
**Task**: \${task.subject}
**Status**: \${resultStatus}
### Summary
\${resultSummary}
### Details
\${resultDetails}\`,
summary: \`${role.task_prefix} complete\`
})
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('${role.task_prefix}-') &&
t.owner === '${role.name}' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task → back to Phase 1
}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No ${role.task_prefix}-* tasks available | Idle, wait for coordinator |
| Context/Plan file not found | Notify coordinator |
${role.adaptive_routing ? '| Sub-agent failure | Retry once, fallback to direct |\n' : ''}| Critical issue beyond scope | SendMessage fix_required |
| Unexpected error | Log via team_msg, report |
`
Write(`${previewDir}/roles/${role.name}.md`, roleMd)
}
```
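The `assessComplexity` heuristic embedded in the Orchestration Phase 2 template is a plain keyword scorer and can be tested in isolation:

```javascript
// Keyword-scoring heuristic from the Orchestration Phase 2 template:
// two points for structural keywords, two for breadth keywords, one each
// for integration and security/performance concerns.
function assessComplexity(desc) {
  let score = 0;
  if (/refactor|architect|restructure|module|system/.test(desc)) score += 2;
  if (/multiple|across|cross/.test(desc)) score += 2;
  if (/integrate|api|database/.test(desc)) score += 1;
  if (/security|performance/.test(desc)) score += 1;
  return score >= 4 ? "High" : score >= 2 ? "Medium" : "Low";
}
```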
### Step 5: Generate specs/team-config.json
```javascript
Write(`${previewDir}/specs/team-config.json`, JSON.stringify({
team_name: config.team_name,
skill_name: config.skill_name,
pipeline_type: config.pipeline_type,
pipeline: config.pipeline,
roles: config.roles.map(r => ({
name: r.name,
task_prefix: r.task_prefix,
responsibility_type: r.responsibility_type,
description: r.description
})),
collaboration_patterns: analysis.collaboration_patterns,
generated_at: new Date().toISOString()
}, null, 2))
```
## Output
- **Directory**: `{workDir}/preview/`
- **Files**:
- `preview/SKILL.md` - Role router + shared infrastructure
- `preview/roles/coordinator.md` - Coordinator execution
- `preview/roles/{role}.md` - Per-worker role execution
- `preview/specs/team-config.json` - Team configuration
## Quality Checklist
- [ ] SKILL.md contains role router with all roles
- [ ] SKILL.md contains shared infrastructure (message bus, task lifecycle)
- [ ] SKILL.md contains coordinator spawn template
- [ ] Every role has a file in roles/
- [ ] Every role file has 5-phase execution
- [ ] Every role file has message types table
- [ ] Every role file has error handling
- [ ] team-config.json is valid JSON
## Next Phase
-> [Phase 4: Integration Verification](04-integration-verification.md)


@@ -0,0 +1,178 @@
# Phase 4: Integration Verification
Verify the generated skill package is internally consistent.
## Objective
- Verify SKILL.md role router references match actual role files
- Verify task prefixes are unique across all roles
- Verify message types are consistent
- Verify coordinator spawn template uses correct skill invocation
- Generate integration-report.json
## Input
- Dependency: `{workDir}/preview/` directory (Phase 3)
- Reference: `team-config.json` (Phase 1)
## Execution Steps
### Step 1: Load Generated Files
```javascript
const config = JSON.parse(Read(`${workDir}/team-config.json`))
const previewDir = `${workDir}/preview`
const skillMd = Read(`${previewDir}/SKILL.md`)
const roleFiles = {}
for (const role of config.roles) {
try {
roleFiles[role.name] = Read(`${previewDir}/roles/${role.name}.md`)
} catch {
roleFiles[role.name] = null
}
}
```
### Step 2: Role Router Consistency
```javascript
const routerChecks = config.roles.map(role => {
const hasRouterEntry = skillMd.includes(`"${role.name}"`)
const hasRoleFile = roleFiles[role.name] !== null
const hasRoleLink = skillMd.includes(`roles/${role.name}.md`)
return {
role: role.name,
router_entry: hasRouterEntry,
file_exists: hasRoleFile,
link_valid: hasRoleLink,
status: (hasRouterEntry && hasRoleFile && hasRoleLink) ? 'PASS' : 'FAIL'
}
})
```
### Step 3: Task Prefix Uniqueness
```javascript
const prefixes = config.worker_roles.map(r => r.task_prefix)
const uniquePrefixes = [...new Set(prefixes)]
const prefixCheck = {
prefixes: prefixes,
unique: uniquePrefixes,
duplicates: prefixes.filter((p, i) => prefixes.indexOf(p) !== i),
status: prefixes.length === uniquePrefixes.length ? 'PASS' : 'FAIL'
}
```
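The uniqueness check reduces to a duplicate scan over the prefix list; isolated as a pure function (prefix values are illustrative):

```javascript
// Duplicate detection identical to prefixCheck: an element whose first
// occurrence index differs from its own position is a repeat.
function findDuplicatePrefixes(prefixes) {
  return prefixes.filter((p, i) => prefixes.indexOf(p) !== i);
}
```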
### Step 4: Message Type Consistency
```javascript
const msgChecks = config.worker_roles.map(role => {
const roleFile = roleFiles[role.name] || ''
const typesInConfig = role.message_types.map(mt => mt.type)
const typesInFile = typesInConfig.filter(t => roleFile.includes(t))
return {
role: role.name,
configured: typesInConfig,
present_in_file: typesInFile,
missing: typesInConfig.filter(t => !typesInFile.includes(t)),
status: typesInFile.length === typesInConfig.length ? 'PASS' : 'WARN'
}
})
```
### Step 5: Spawn Template Verification
```javascript
const spawnChecks = config.worker_roles.map(role => {
const hasSpawn = skillMd.includes(`name: "${role.name}"`)
const hasSkillCall = skillMd.includes(`Skill(skill="${config.skill_name}", args="--role=${role.name}")`)
const hasTaskPrefix = skillMd.includes(`${role.task_prefix}-*`)
return {
role: role.name,
spawn_present: hasSpawn,
skill_call_correct: hasSkillCall,
prefix_in_prompt: hasTaskPrefix,
status: (hasSpawn && hasSkillCall && hasTaskPrefix) ? 'PASS' : 'FAIL'
}
})
```
### Step 6: Role File Pattern Compliance
```javascript
const patternChecks = Object.entries(roleFiles).map(([name, content]) => {
if (!content) return { role: name, status: 'MISSING' }
const checks = {
has_role_identity: /## Role Identity/.test(content),
has_5_phases: /Phase 1/.test(content) && /Phase 5/.test(content),
has_task_lifecycle: /TaskList/.test(content) && /TaskGet/.test(content) && /TaskUpdate/.test(content),
has_message_bus: /team_msg/.test(content),
has_send_message: /SendMessage/.test(content),
has_error_handling: /## Error Handling/.test(content)
}
const passCount = Object.values(checks).filter(Boolean).length
return {
role: name,
checks: checks,
pass_count: passCount,
total: Object.keys(checks).length,
status: passCount === Object.keys(checks).length ? 'PASS' : 'PARTIAL'
}
})
```
### Step 7: Generate Report
```javascript
const overallStatus = [
...routerChecks.map(c => c.status),
prefixCheck.status,
...spawnChecks.map(c => c.status),
...patternChecks.map(c => c.status)
].every(s => s === 'PASS') ? 'PASS' : 'NEEDS_ATTENTION'
const report = {
team_name: config.team_name,
skill_name: config.skill_name,
checks: {
router_consistency: routerChecks,
prefix_uniqueness: prefixCheck,
message_types: msgChecks,
spawn_template: spawnChecks,
pattern_compliance: patternChecks
},
overall: overallStatus,
file_count: {
skill_md: 1,
role_files: Object.keys(roleFiles).length,
total: 1 + Object.keys(roleFiles).length + 1 // SKILL.md + roles + config
}
}
Write(`${workDir}/integration-report.json`, JSON.stringify(report, null, 2))
```
## Output
- **File**: `integration-report.json`
- **Format**: JSON
- **Location**: `{workDir}/integration-report.json`
## Quality Checklist
- [ ] Every role in config has a router entry in SKILL.md
- [ ] Every role has a file in roles/
- [ ] Task prefixes are unique
- [ ] Spawn template uses correct `Skill(skill="...", args="--role=...")`
- [ ] All role files have 5-phase structure
- [ ] All role files have message bus integration
## Next Phase
-> [Phase 5: Validation](05-validation.md)


@@ -0,0 +1,203 @@
# Phase 5: Validation
Verify quality and deliver the final skill package.
## Objective
- Per-role structural completeness check
- Per-role pattern compliance check
- Quality scoring
- Deliver final skill package to `.claude/skills/team-{name}/`
## Input
- Dependency: `{workDir}/preview/` (Phase 3), `integration-report.json` (Phase 4)
- Specification: `../team-command-designer/specs/quality-standards.md`
## Execution Steps
### Step 1: Load Files
```javascript
const config = JSON.parse(Read(`${workDir}/team-config.json`))
const integration = JSON.parse(Read(`${workDir}/integration-report.json`))
const previewDir = `${workDir}/preview`
const skillMd = Read(`${previewDir}/SKILL.md`)
const roleContents = {}
for (const role of config.roles) {
try {
roleContents[role.name] = Read(`${previewDir}/roles/${role.name}.md`)
} catch {
roleContents[role.name] = null
}
}
```
### Step 2: SKILL.md Structural Check
```javascript
const skillChecks = [
{ name: "Frontmatter", pattern: /^---\n[\s\S]+?\n---/ },
{ name: "Architecture Overview", pattern: /## Architecture Overview/ },
{ name: "Role Router", pattern: /## Role Router/ },
{ name: "Role Dispatch Code", pattern: /VALID_ROLES/ },
{ name: "Available Roles Table", pattern: /\| Role \| Task Prefix/ },
{ name: "Shared Infrastructure", pattern: /## Shared Infrastructure/ },
{ name: "Message Bus Section", pattern: /Message Bus/ },
{ name: "team_msg Example", pattern: /team_msg/ },
{ name: "CLI Fallback", pattern: /ccw team log/ },
{ name: "Task Lifecycle", pattern: /Task Lifecycle/ },
{ name: "Pipeline Diagram", pattern: /## Pipeline/ },
{ name: "Coordinator Spawn Template", pattern: /Coordinator Spawn/ },
{ name: "Error Handling", pattern: /## Error Handling/ }
]
const skillResults = skillChecks.map(c => ({
check: c.name,
status: c.pattern.test(skillMd) ? 'PASS' : 'FAIL'
}))
const skillScore = skillResults.filter(r => r.status === 'PASS').length / skillResults.length * 100
```
### Step 3: Per-Role Structural Check
```javascript
const roleChecks = [
{ name: "Role Identity", pattern: /## Role Identity/ },
{ name: "Message Types Table", pattern: /## Message Types/ },
{ name: "5-Phase Execution", pattern: /## Execution/ },
{ name: "Phase 1 Task Discovery", pattern: /Phase 1.*Task Discovery/i },
{ name: "TaskList Usage", pattern: /TaskList/ },
{ name: "TaskGet Usage", pattern: /TaskGet/ },
{ name: "TaskUpdate Usage", pattern: /TaskUpdate/ },
{ name: "team_msg Before SendMessage", pattern: /team_msg/ },
{ name: "SendMessage to Coordinator", pattern: /SendMessage/ },
{ name: "Error Handling", pattern: /## Error Handling/ }
]
const roleResults = {}
for (const [name, content] of Object.entries(roleContents)) {
if (!content) {
roleResults[name] = { status: 'MISSING', checks: [], score: 0 }
continue
}
const checks = roleChecks.map(c => ({
check: c.name,
status: c.pattern.test(content) ? 'PASS' : 'FAIL'
}))
const score = checks.filter(c => c.status === 'PASS').length / checks.length * 100
roleResults[name] = { status: score >= 80 ? 'PASS' : 'PARTIAL', checks, score }
}
```
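The per-role scoring above is a pass-ratio with an 80% gate; as a pure function:

```javascript
// Pass-ratio scoring matching Step 3: percentage of PASS checks,
// with 80% as the PASS/PARTIAL boundary.
function scoreRole(checks) {
  const score = (checks.filter((c) => c.status === "PASS").length / checks.length) * 100;
  return { score, status: score >= 80 ? "PASS" : "PARTIAL" };
}
```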
### Step 4: Quality Scoring
```javascript
const scores = {
skill_md: skillScore,
roles_avg: Object.values(roleResults).reduce((sum, r) => sum + r.score, 0) / Object.keys(roleResults).length,
integration: integration.overall === 'PASS' ? 100 : 50,
consistency: checkConsistency()
}
function checkConsistency() {
let score = 100
// Check skill name in SKILL.md matches config
if (!skillMd.includes(config.skill_name)) score -= 20
// Check team name consistency
if (!skillMd.includes(config.team_name)) score -= 20
// Check all roles referenced in SKILL.md
for (const role of config.roles) {
if (!skillMd.includes(role.name)) score -= 10
}
return Math.max(0, score)
}
const overallScore = Object.values(scores).reduce((a, b) => a + b, 0) / Object.keys(scores).length
const qualityGate = overallScore >= 80 ? 'PASS' : overallScore >= 60 ? 'REVIEW' : 'FAIL'
```
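The equal-weight average and the 80/60 gate in Step 4 can also be isolated for testing (the score keys are illustrative):

```javascript
// Equal-weight average of all score dimensions, then the 80/60 gate.
function computeQualityGate(scores) {
  const values = Object.values(scores);
  const overall = values.reduce((a, b) => a + b, 0) / values.length;
  return { overall, gate: overall >= 80 ? "PASS" : overall >= 60 ? "REVIEW" : "FAIL" };
}
```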
### Step 5: Generate Validation Report
```javascript
const report = {
team_name: config.team_name,
skill_name: config.skill_name,
timestamp: new Date().toISOString(),
scores: scores,
overall_score: overallScore,
quality_gate: qualityGate,
skill_md_checks: skillResults,
role_results: roleResults,
integration_status: integration.overall,
delivery: {
source: previewDir,
destination: `.claude/skills/${config.skill_name}/`,
ready: qualityGate !== 'FAIL'
}
}
Write(`${workDir}/validation-report.json`, JSON.stringify(report, null, 2))
```
### Step 6: Deliver Final Package
```javascript
if (report.delivery.ready) {
const destDir = `.claude/skills/${config.skill_name}`
// Create directory structure
Bash(`mkdir -p "${destDir}/roles" "${destDir}/specs"`)
// Copy all files
Write(`${destDir}/SKILL.md`, skillMd)
for (const [name, content] of Object.entries(roleContents)) {
if (content) {
Write(`${destDir}/roles/${name}.md`, content)
}
}
// Copy team config
const teamConfig = Read(`${previewDir}/specs/team-config.json`)
Write(`${destDir}/specs/team-config.json`, teamConfig)
// Report
console.log(`\nTeam skill delivered to: ${destDir}/`)
console.log(`Skill name: ${config.skill_name}`)
console.log(`Quality score: ${overallScore.toFixed(1)}% (${qualityGate})`)
console.log(`Roles: ${config.role_list}`)
console.log(`\nUsage:`)
console.log(` Skill(skill="${config.skill_name}", args="--role=planner")`)
console.log(` Skill(skill="${config.skill_name}", args="--role=executor")`)
console.log(`\nFile structure:`)
Bash(`find "${destDir}" -type f | sort`)
} else {
console.log(`Validation FAILED (score: ${overallScore.toFixed(1)}%)`)
console.log('Fix issues and re-run Phase 3-5')
}
```
## Output
- **File**: `validation-report.json`
- **Format**: JSON
- **Location**: `{workDir}/validation-report.json`
- **Delivery**: `.claude/skills/team-{name}/` (if validation passes)
## Quality Checklist
- [ ] SKILL.md passes all 13 structural checks
- [ ] All role files pass structural checks (>= 80%)
- [ ] Integration report is PASS
- [ ] Overall score >= 80%
- [ ] Final package delivered to `.claude/skills/team-{name}/`
- [ ] Usage instructions provided
## Completion
This is the final phase. The unified team skill is ready for use.


@@ -0,0 +1,333 @@
# Role File Template
Template for generating per-role execution detail files in `roles/{role-name}.md`.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand role file structure |
| Phase 3 | Apply with role-specific content |
---
## Template
```markdown
# Role: {{role_name}}
{{role_description}}
## Role Identity
- **Name**: `{{role_name}}`
- **Task Prefix**: `{{task_prefix}}-*`
- **Responsibility**: {{responsibility_type}}
- **Communication**: SendMessage to coordinator only
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
{{#each message_types}}
| `{{this.type}}` | {{../role_name}} → coordinator | {{this.trigger}} | {{this.description}} |
{{/each}}
## Execution (5-Phase)
### Phase 1: Task Discovery
\`\`\`javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('{{task_prefix}}-') &&
t.owner === '{{role_name}}' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
\`\`\`
### Phase 2: {{phase2_name}}
{{phase2_content}}
### Phase 3: {{phase3_name}}
{{phase3_content}}
### Phase 4: {{phase4_name}}
{{phase4_content}}
### Phase 5: Report to Coordinator
\`\`\`javascript
// Log message before SendMessage
mcp__ccw-tools__team_msg({
operation: "log",
team: teamName,
from: "{{role_name}}",
to: "coordinator",
type: "{{primary_message_type}}",
summary: \`{{task_prefix}} complete: \${task.subject}\`
})
SendMessage({
type: "message",
recipient: "coordinator",
content: \`## {{display_name}} Results
**Task**: \${task.subject}
**Status**: \${resultStatus}
### Summary
\${resultSummary}
### Details
\${resultDetails}\`,
summary: \`{{task_prefix}} complete\`
})
// Mark task completed
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('{{task_prefix}}-') &&
t.owner === '{{role_name}}' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task → back to Phase 1
}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No {{task_prefix}}-* tasks available | Idle, wait for coordinator assignment |
| Context/Plan file not found | Notify coordinator, request location |
{{#if adaptive_routing}}
| Sub-agent failure | Retry once, then fallback to direct execution |
{{/if}}
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |
```
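The Phase 1 discovery filter used throughout this template reduces to a pure claimability predicate. The task shape (`subject`, `owner`, `status`, `blockedBy`) is assumed from the snippets above, not a verified TaskList schema:

```javascript
// Claimability predicate mirroring Phase 1: right prefix, right owner,
// pending status, and no blocking dependencies.
function isClaimable(task, roleName, taskPrefix) {
  return (
    task.subject.startsWith(`${taskPrefix}-`) &&
    task.owner === roleName &&
    task.status === "pending" &&
    task.blockedBy.length === 0
  );
}
```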
---
## Template Sections by Responsibility Type
### Read-only analysis
**Phase 2: Context Loading**
```javascript
// Load plan for criteria reference
const planPathMatch = task.description.match(/\.workflow\/\.team-plan\/[^\s]+\/plan\.json/)
let plan = null
if (planPathMatch) {
try { plan = JSON.parse(Read(planPathMatch[0])) } catch {}
}
// Get changed files
const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`)
.split('\n').filter(Boolean)
// Read file contents for analysis
const fileContents = {}
for (const file of changedFiles.slice(0, 20)) {
try { fileContents[file] = Read(file) } catch {}
}
```
**Phase 3: Analysis Execution**
```javascript
// Core analysis logic
// Customize per specific analysis domain
```
**Phase 4: Finding Summary**
```javascript
// Classify findings by severity
const findings = {
critical: [],
high: [],
medium: [],
low: []
}
```
### Code generation
**Phase 2: Task & Plan Loading**
```javascript
const planPathMatch = task.description.match(/\.workflow\/\.team-plan\/[^\s]+\/plan\.json/)
if (!planPathMatch) {
SendMessage({ type: "message", recipient: "coordinator",
content: `Cannot find plan.json in ${task.subject}`, summary: "Plan not found" })
return
}
const plan = JSON.parse(Read(planPathMatch[0]))
const planTasks = plan.task_ids.map(id =>
JSON.parse(Read(`${planPathMatch[0].replace('plan.json', '')}.task/${id}.json`))
)
```
**Phase 3: Code Implementation**
```javascript
// Complexity-adaptive execution
if (complexity === 'Low') {
// Direct file editing
} else {
Task({
subagent_type: "code-developer",
run_in_background: false,
description: "Implement plan tasks",
prompt: `...`
})
}
```
**Phase 4: Self-Validation**
```javascript
const syntaxResult = Bash(`tsc --noEmit 2>&1 || true`)
const hasSyntaxErrors = syntaxResult.includes('error TS')
```
### Orchestration
**Phase 2: Context & Complexity Assessment**
```javascript
function assessComplexity(desc) {
let score = 0
if (/refactor|architect|restructure|module|system/.test(desc)) score += 2
if (/multiple|across|cross/.test(desc)) score += 2
if (/integrate|api|database/.test(desc)) score += 1
if (/security|performance/.test(desc)) score += 1
return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
const complexity = assessComplexity(task.description)
```
**Phase 3: Orchestrated Execution**
```javascript
// Launch parallel sub-agents or sequential stages
```
**Phase 4: Result Aggregation**
```javascript
// Merge and summarize sub-agent results
```
### Validation
**Phase 2: Environment Detection**
```javascript
const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`)
.split('\n').filter(Boolean)
```
**Phase 3: Execution & Fix Cycle**
```javascript
// Run validation, collect failures, attempt fixes, re-validate
let iteration = 0
const MAX_ITERATIONS = 5
while (iteration < MAX_ITERATIONS) {
const result = runValidation()
if (result.passRate >= 0.95) break
applyFixes(result.failures)
iteration++
}
```
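The fix cycle above can be captured as a bounded loop; `runValidation` and `applyFixes` here are stand-ins supplied by the caller, not real APIs:

```javascript
// Bounded validate-fix-revalidate loop mirroring the Validation Phase 3
// template: stop when the pass rate reaches 95% or iterations run out.
function fixUntilPass(runValidation, applyFixes, maxIterations = 5) {
  let iteration = 0;
  let result = runValidation();
  while (result.passRate < 0.95 && iteration < maxIterations) {
    applyFixes(result.failures);
    result = runValidation();
    iteration++;
  }
  return { iterations: iteration, passed: result.passRate >= 0.95 };
}
```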
**Phase 4: Result Analysis**
```javascript
// Analyze pass/fail patterns, coverage gaps
```
---
## Coordinator Role Template
The coordinator role is special and always generated. Its template differs from worker roles:
```markdown
# Role: coordinator
Team coordinator. Orchestrates the pipeline: requirement clarification → task chain creation → dispatch → monitoring → reporting.
## Role Identity
- **Name**: `coordinator`
- **Task Prefix**: N/A (coordinator creates tasks, doesn't receive them)
- **Responsibility**: Orchestration
- **Communication**: SendMessage to all teammates
## Execution
### Phase 1: Requirement Clarification
Parse $ARGUMENTS, use AskUserQuestion for MVP scope and constraints.
### Phase 2: Create Team + Spawn Teammates
\`\`\`javascript
TeamCreate({ team_name: teamName })
// Spawn each worker role
{{#each worker_roles}}
Task({
subagent_type: "general-purpose",
team_name: teamName,
name: "{{this.name}}",
prompt: \`...Skill(skill="team-{{team_name}}", args="--role={{this.name}}")...\`
})
{{/each}}
\`\`\`
### Phase 3: Create Task Chain
\`\`\`javascript
{{task_chain_creation_code}}
\`\`\`
### Phase 4: Coordination Loop
| Received Message | Action |
|-----------------|--------|
{{#each coordination_handlers}}
| {{this.trigger}} | {{this.action}} |
{{/each}}
### Phase 5: Report + Persist
Summarize results. AskUserQuestion for next requirement or shutdown.
```
---
## Variable Reference
| Variable | Source | Description |
|----------|--------|-------------|
| `{{role_name}}` | config.role_name | Role identifier |
| `{{task_prefix}}` | config.task_prefix | UPPERCASE task prefix |
| `{{responsibility_type}}` | config.responsibility_type | Role type |
| `{{display_name}}` | config.display_name | Human-readable |
| `{{phase2_name}}` | patterns.phase_structure.phase2 | Phase 2 label |
| `{{phase3_name}}` | patterns.phase_structure.phase3 | Phase 3 label |
| `{{phase4_name}}` | patterns.phase_structure.phase4 | Phase 4 label |
| `{{phase2_content}}` | Generated from responsibility template | Phase 2 code |
| `{{phase3_content}}` | Generated from responsibility template | Phase 3 code |
| `{{phase4_content}}` | Generated from responsibility template | Phase 4 code |
| `{{message_types}}` | config.message_types | Array of message types |
| `{{primary_message_type}}` | config.message_types[0].type | Primary type |
| `{{adaptive_routing}}` | config.adaptive_routing | Boolean |


@@ -0,0 +1,224 @@
# Skill Router Template
Template for the generated SKILL.md with role-based routing.
## Purpose
| Phase | Usage |
|-------|-------|
| Phase 0 | Read to understand generated SKILL.md structure |
| Phase 3 | Apply with team-specific content |
---
## Template
```markdown
---
name: team-{{team_name}}
description: Unified team skill for {{team_name}} team. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team {{team_name}}".
allowed-tools: {{all_roles_tools_union}}
---
# Team {{team_display_name}}
Unified team skill. All team members invoke this skill with `--role=xxx` to route to role-specific execution.
## Architecture Overview
\`\`\`
┌───────────────────────────────────────────┐
│ Skill(skill="team-{{team_name}}") │
│ args="--role=xxx" │
└───────────────┬───────────────────────────┘
│ Role Router
┌───────────┼───────────┬───────────┐
↓ ↓ ↓ ↓
┌───────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│coordinator│ │{{role_1}}│ │{{role_2}}│ │{{role_3}}│
│  roles/   │ │  roles/  │ │  roles/  │ │  roles/  │
└───────────┘ └──────────┘ └──────────┘ └──────────┘
\`\`\`
## Role Router
### Input Parsing
Parse `$ARGUMENTS` to extract `--role`:
\`\`\`javascript
const args = "$ARGUMENTS"
const roleMatch = args.match(/--role[=\s]+(\w+)/)
if (!roleMatch) {
// ERROR: --role is required
// This skill must be invoked with: Skill(skill="team-{{team_name}}", args="--role=xxx")
throw new Error("Missing --role argument. Available roles: {{role_list}}")
}
const role = roleMatch[1]
const teamName = "{{team_name}}"
\`\`\`
### Role Dispatch
\`\`\`javascript
const VALID_ROLES = {
{{#each roles}}
"{{this.name}}": { file: "roles/{{this.name}}.md", prefix: "{{this.task_prefix}}" },
{{/each}}
}
if (!VALID_ROLES[role]) {
throw new Error(\`Unknown role: \${role}. Available: \${Object.keys(VALID_ROLES).join(', ')}\`)
}
// Read and execute role-specific logic
Read(VALID_ROLES[role].file)
// → Execute the 5-phase process defined in that file
\`\`\`
### Available Roles
| Role | Task Prefix | Responsibility | Role File |
|------|-------------|----------------|-----------|
{{#each roles}}
| `{{this.name}}` | {{this.task_prefix}}-* | {{this.responsibility}} | [roles/{{this.name}}.md](roles/{{this.name}}.md) |
{{/each}}
## Shared Infrastructure
### Team Configuration
\`\`\`javascript
const TEAM_CONFIG = {
name: "{{team_name}}",
sessionDir: ".workflow/.team-plan/{{team_name}}/",
msgDir: ".workflow/.team-msg/{{team_name}}/",
roles: {{roles_json}}
}
\`\`\`
### Message Bus (All Roles)
**Before** every `SendMessage`, call `mcp__ccw-tools__team_msg` to log the message:
\`\`\`javascript
mcp__ccw-tools__team_msg({
operation: "log",
team: "{{team_name}}",
from: role, // current role name
to: "coordinator",
type: "<type>",
summary: "<summary>",
ref: "<file_path>" // optional
})
\`\`\`
**Message types by role**:
| Role | Types |
|------|-------|
{{#each roles}}
| {{this.name}} | {{this.message_types_list}} |
{{/each}}
### CLI Fallback
When the `mcp__ccw-tools__team_msg` MCP tool is unavailable:
\`\`\`javascript
Bash(\`ccw team log --team "{{team_name}}" --from "\${role}" --to "coordinator" --type "<type>" --summary "<summary>" --json\`)
\`\`\`
### Task Lifecycle (All Roles)
\`\`\`javascript
// Standard task lifecycle every role follows
// Phase 1: Discovery
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith(\`\${VALID_ROLES[role].prefix}-\`) &&
t.owner === role &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
// Phase 2-4: Role-specific (see roles/{role}.md)
// Phase 5: Report + Loop
mcp__ccw-tools__team_msg({ operation: "log", team: "{{team_name}}", from: role, to: "coordinator", type: "...", summary: "..." })
SendMessage({ type: "message", recipient: "coordinator", content: "...", summary: "..." })
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task → back to Phase 1
\`\`\`
## Pipeline
\`\`\`
{{pipeline_diagram}}
\`\`\`
## Coordinator Spawn Template
When coordinator creates teammates, use this pattern:
\`\`\`javascript
TeamCreate({ team_name: "{{team_name}}" })
{{#each worker_roles}}
// {{this.display_name}}
Task({
subagent_type: "general-purpose",
team_name: "{{../team_name}}",
name: "{{this.name}}",
prompt: \`You are the {{this.name_upper}} of team "{{../team_name}}".
When you receive a {{this.task_prefix}}-* task, execute it via Skill(skill="team-{{../team_name}}", args="--role={{this.name}}").
Current requirement: \${taskDescription}
Constraints: \${constraints}
## Message Bus (required)
Before every SendMessage, first call mcp__ccw-tools__team_msg to log the message.
Workflow:
1. TaskList → find {{this.task_prefix}}-* tasks
2. Execute via Skill(skill="team-{{../team_name}}", args="--role={{this.name}}")
3. team_msg log + SendMessage the result to coordinator
4. TaskUpdate completed → check for the next task\`
})
{{/each}}
\`\`\`
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Error with usage hint |
| Role file not found | Error with expected path |
| Task prefix conflict | Log warning, proceed |
```
---
## Variable Reference
| Variable | Source | Description |
|----------|--------|-------------|
| `{{team_name}}` | config.team_name | Team identifier (lowercase) |
| `{{team_display_name}}` | config.team_display_name | Human-readable team name |
| `{{all_roles_tools_union}}` | Union of all roles' allowed-tools | Combined tool list |
| `{{roles}}` | config.roles[] | Array of role definitions |
| `{{role_list}}` | Role names joined by comma | e.g., "coordinator, planner, executor" |
| `{{roles_json}}` | JSON.stringify(roles) | Roles as JSON |
| `{{pipeline_diagram}}` | Generated from task chain | ASCII pipeline |
| `{{worker_roles}}` | config.roles excluding coordinator | Non-coordinator roles |
| `{{role.name}}` | Per-role name | e.g., "planner" |
| `{{role.task_prefix}}` | Per-role task prefix | e.g., "PLAN" |
| `{{role.responsibility}}` | Per-role responsibility | e.g., "Code exploration and planning" |
| `{{role.message_types_list}}` | Per-role message types | e.g., "`plan_ready`, `error`" |
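To make the derivation of the composite variables concrete, here is a sketch under an assumed config shape (the real generator's schema may differ; field names mirror the table above):

```python
import json

# Hypothetical config; roles and prefixes are illustrative.
config = {
    "team_name": "review",
    "roles": [
        {"name": "coordinator", "task_prefix": "COORD"},
        {"name": "planner", "task_prefix": "PLAN"},
        {"name": "executor", "task_prefix": "EXEC"},
    ],
}

# {{role_list}}: role names joined by comma
role_list = ", ".join(r["name"] for r in config["roles"])
# {{roles_json}}: JSON.stringify(roles) equivalent
roles_json = json.dumps(config["roles"])
# {{worker_roles}}: all roles except the coordinator
worker_roles = [r for r in config["roles"] if r["name"] != "coordinator"]
```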

View File

@@ -10,47 +10,20 @@ import { Flowchart } from './Flowchart';
 import { Badge } from '../ui/Badge';
 import { Button } from '../ui/Button';
 import { Tabs, TabsList, TabsTrigger, TabsContent } from '../ui/Tabs';
-import type { LiteTask, FlowControl } from '@/lib/api';
+import type { NormalizedTask } from '@/lib/api';
+import { buildFlowControl } from '@/lib/api';
 import type { TaskData } from '@/types/store';

 // ========== Types ==========

 export interface TaskDrawerProps {
-  task: LiteTask | TaskData | null;
+  task: NormalizedTask | TaskData | null;
   isOpen: boolean;
   onClose: () => void;
 }

 type TabValue = 'overview' | 'flowchart' | 'files';

-// ========== Helper: Unified Task Access ==========
-
-/**
- * Normalize task data to common interface
- */
-function getTaskId(task: LiteTask | TaskData): string {
-  if ('task_id' in task && task.task_id) return task.task_id;
-  if ('id' in task) return task.id;
-  return 'N/A';
-}
-
-function getTaskTitle(task: LiteTask | TaskData): string {
-  return task.title || 'Untitled Task';
-}
-
-function getTaskDescription(task: LiteTask | TaskData): string | undefined {
-  return task.description;
-}
-
-function getTaskStatus(task: LiteTask | TaskData): string {
-  return task.status;
-}
-
-function getFlowControl(task: LiteTask | TaskData): FlowControl | undefined {
-  if ('flow_control' in task) return task.flow_control;
-  return undefined;
-}
-
 // Status configuration
 const taskStatusConfig: Record<string, { label: string; variant: 'default' | 'secondary' | 'destructive' | 'outline' | 'success' | 'warning' | 'info' | null; icon: React.ComponentType<{ className?: string }> }> = {
   pending: {
@@ -113,17 +86,28 @@ export function TaskDrawer({ task, isOpen, onClose }: TaskDrawerProps) {
     return null;
   }

-  const taskId = getTaskId(task);
-  const taskTitle = getTaskTitle(task);
-  const taskDescription = getTaskDescription(task);
-  const taskStatus = getTaskStatus(task);
-  const flowControl = getFlowControl(task);
+  // Use NormalizedTask fields (works for both old nested and new flat formats)
+  const nt = task as NormalizedTask;
+  const taskId = nt.task_id || 'N/A';
+  const taskTitle = nt.title || 'Untitled Task';
+  const taskDescription = nt.description;
+  const taskStatus = nt.status;
+  const flowControl = buildFlowControl(nt);
+
+  // Normalized flat fields
+  const acceptanceCriteria = nt.convergence?.criteria || [];
+  const focusPaths = nt.focus_paths || [];
+  const dependsOn = nt.depends_on || [];
+  const preAnalysis = nt.pre_analysis || flowControl?.pre_analysis || [];
+  const implSteps = nt.implementation || flowControl?.implementation_approach || [];
+  const taskFiles = nt.files || flowControl?.target_files || [];
+  const taskScope = nt.scope;

   const statusConfig = taskStatusConfig[taskStatus] || taskStatusConfig.pending;
   const StatusIcon = statusConfig.icon;

-  const hasFlowchart = !!flowControl?.implementation_approach && flowControl.implementation_approach.length > 0;
-  const hasFiles = !!flowControl?.target_files && flowControl.target_files.length > 0;
+  const hasFlowchart = implSteps.length > 0;
+  const hasFiles = taskFiles.length > 0;

   return (
     <>
@@ -205,27 +189,27 @@ export function TaskDrawer({ task, isOpen, onClose }: TaskDrawerProps) {
       )}

       {/* Scope Section */}
-      {(task as LiteTask).meta?.scope && (
+      {taskScope && (
         <div className="p-4 bg-card rounded-lg border border-border">
           <h3 className="text-sm font-semibold text-foreground mb-2 flex items-center gap-2">
             <span>📁</span>
             Scope
           </h3>
           <div className="pl-3 border-l-2 border-primary">
-            <code className="text-sm text-foreground">{(task as LiteTask).meta?.scope}</code>
+            <code className="text-sm text-foreground">{taskScope}</code>
           </div>
         </div>
       )}

-      {/* Acceptance Criteria Section */}
-      {(task as LiteTask).context?.acceptance && (task as LiteTask).context!.acceptance!.length > 0 && (
+      {/* Acceptance / Convergence Criteria Section */}
+      {acceptanceCriteria.length > 0 && (
         <div className="p-4 bg-card rounded-lg border border-border">
           <h3 className="text-sm font-semibold text-foreground mb-3 flex items-center gap-2">
             <span></span>
             {formatMessage({ id: 'liteTasks.acceptanceCriteria' })}
           </h3>
           <div className="space-y-2">
-            {(task as LiteTask).context!.acceptance!.map((criterion, i) => (
+            {acceptanceCriteria.map((criterion, i) => (
               <div key={i} className="flex items-start gap-2">
                 <span className="text-muted-foreground mt-0.5"></span>
                 <span className="text-sm text-foreground">{criterion}</span>
@@ -236,14 +220,14 @@ export function TaskDrawer({ task, isOpen, onClose }: TaskDrawerProps) {
       )}

       {/* Focus Paths / Reference Section */}
-      {(task as LiteTask).context?.focus_paths && (task as LiteTask).context!.focus_paths!.length > 0 && (
+      {focusPaths.length > 0 && (
         <div className="p-4 bg-card rounded-lg border border-border">
           <h3 className="text-sm font-semibold text-foreground mb-3 flex items-center gap-2">
             <span>📚</span>
             {formatMessage({ id: 'liteTasks.focusPaths' })}
           </h3>
           <div className="space-y-1">
-            {(task as LiteTask).context!.focus_paths!.map((path, i) => (
+            {focusPaths.map((path, i) => (
               <code key={i} className="block text-xs bg-muted px-3 py-1.5 rounded text-foreground font-mono">
                 {path}
               </code>
@@ -253,14 +237,14 @@ export function TaskDrawer({ task, isOpen, onClose }: TaskDrawerProps) {
       )}

       {/* Dependencies Section */}
-      {(task as LiteTask).context?.depends_on && (task as LiteTask).context!.depends_on!.length > 0 && (
+      {dependsOn.length > 0 && (
         <div className="p-4 bg-card rounded-lg border border-border">
           <h3 className="text-sm font-semibold text-foreground mb-3 flex items-center gap-2">
             <span>🔗</span>
             {formatMessage({ id: 'liteTasks.dependsOn' })}
           </h3>
           <div className="flex flex-wrap gap-2">
-            {(task as LiteTask).context!.depends_on!.map((dep, i) => (
+            {dependsOn.map((dep, i) => (
               <Badge key={i} variant="secondary">{dep}</Badge>
             ))}
           </div>
@@ -268,14 +252,14 @@ export function TaskDrawer({ task, isOpen, onClose }: TaskDrawerProps) {
       )}

       {/* Pre-analysis Steps */}
-      {flowControl?.pre_analysis && flowControl.pre_analysis.length > 0 && (
+      {preAnalysis.length > 0 && (
         <div className="p-4 bg-card rounded-lg border border-border">
           <h3 className="text-sm font-semibold text-foreground mb-3 flex items-center gap-2">
             <span>🔍</span>
             {formatMessage({ id: 'sessionDetail.taskDrawer.overview.preAnalysis' })}
           </h3>
           <div className="space-y-3">
-            {flowControl.pre_analysis.map((step, index) => (
+            {preAnalysis.map((step, index) => (
               <div key={index} className="flex items-start gap-3">
                 <span className="flex-shrink-0 flex items-center justify-center w-6 h-6 rounded-full bg-primary text-primary-foreground text-xs font-medium">
                   {index + 1}

View File

@@ -312,7 +312,8 @@ function transformBackendSession(
     has_review: backendData.hasReview,
     review,
     summaries: (backendSession as unknown as { summaries?: SessionMetadata['summaries'] }).summaries,
-    tasks: (backendSession as unknown as { tasks?: TaskData[] }).tasks,
+    tasks: ((backendSession as unknown as { tasks?: TaskData[] }).tasks || [])
+      .map(t => normalizeTask(t as unknown as Record<string, unknown>)),
   };
 }
@@ -1986,6 +1987,139 @@ export interface LiteTask {
   updated_at?: string;
 }
// ========== Normalized Task (Unified Flat Format) ==========
/**
* Normalized task type that unifies both old 6-field nested format
* and new unified flat format into a single interface.
*
* Old format paths → New flat paths:
* - context.acceptance[] → convergence.criteria[]
* - context.focus_paths[] → focus_paths[]
* - context.depends_on[] → depends_on[]
* - context.requirements[] → description
* - flow_control.pre_analysis[] → pre_analysis[]
* - flow_control.implementation_approach[] → implementation[]
* - flow_control.target_files[] → files[]
*/
export interface NormalizedTask extends TaskData {
// Promoted from context
focus_paths?: string[];
convergence?: {
criteria?: string[];
verification?: string;
definition_of_done?: string;
};
// Promoted from flow_control
pre_analysis?: PreAnalysisStep[];
implementation?: (ImplementationStep | string)[];
files?: Array<{ path: string; name?: string }>;
// Promoted from meta
type?: string;
scope?: string;
action?: string;
// Original nested objects (preserved for long-term compat)
flow_control?: FlowControl;
context?: {
focus_paths?: string[];
acceptance?: string[];
depends_on?: string[];
requirements?: string[];
};
meta?: {
type?: string;
scope?: string;
[key: string]: unknown;
};
// Raw data reference for JSON viewer / debugging
_raw?: unknown;
}
/**
* Normalize a raw task object (old 6-field or new unified flat) into NormalizedTask.
* Reads new flat fields first, falls back to old nested paths.
* Long-term compatible: handles both formats permanently.
*/
export function normalizeTask(raw: Record<string, unknown>): NormalizedTask {
if (!raw || typeof raw !== 'object') {
return { task_id: 'N/A', status: 'pending', _raw: raw } as NormalizedTask;
}
// Type-safe access helpers
const rawContext = raw.context as LiteTask['context'] | undefined;
const rawFlowControl = raw.flow_control as FlowControl | undefined;
const rawMeta = raw.meta as LiteTask['meta'] | undefined;
const rawConvergence = raw.convergence as NormalizedTask['convergence'] | undefined;
// Description: new flat field first, then join old context.requirements
const rawRequirements = rawContext?.requirements;
const description = (raw.description as string | undefined)
|| (Array.isArray(rawRequirements) && rawRequirements.length > 0
? rawRequirements.join('; ')
: undefined);
return {
// Identity
task_id: (raw.task_id as string) || (raw.id as string) || 'N/A',
title: raw.title as string | undefined,
description,
status: (raw.status as NormalizedTask['status']) || 'pending',
priority: raw.priority as NormalizedTask['priority'],
created_at: raw.created_at as string | undefined,
updated_at: raw.updated_at as string | undefined,
has_summary: raw.has_summary as boolean | undefined,
estimated_complexity: raw.estimated_complexity as string | undefined,
// Promoted from context (new first, old fallback)
depends_on: (raw.depends_on as string[]) || rawContext?.depends_on || [],
focus_paths: (raw.focus_paths as string[]) || rawContext?.focus_paths || [],
convergence: rawConvergence || (rawContext?.acceptance?.length
? { criteria: rawContext.acceptance }
: undefined),
// Promoted from flow_control (new first, old fallback)
pre_analysis: (raw.pre_analysis as PreAnalysisStep[]) || rawFlowControl?.pre_analysis,
implementation: (raw.implementation as (ImplementationStep | string)[]) || rawFlowControl?.implementation_approach,
files: (raw.files as Array<{ path: string; name?: string }>) || rawFlowControl?.target_files,
// Promoted from meta (new first, old fallback)
type: (raw.type as string) || rawMeta?.type,
scope: (raw.scope as string) || rawMeta?.scope,
action: (raw.action as string) || (rawMeta as Record<string, unknown> | undefined)?.action as string | undefined,
// Preserve original nested objects for backward compat
flow_control: rawFlowControl,
context: rawContext,
meta: rawMeta,
// Raw reference
_raw: raw,
};
}
/**
* Build a FlowControl object from NormalizedTask for backward-compatible components (e.g. Flowchart).
*/
export function buildFlowControl(task: NormalizedTask): FlowControl | undefined {
const preAnalysis = task.pre_analysis;
const implementation = task.implementation;
const files = task.files;
if (!preAnalysis?.length && !implementation?.length && !files?.length) {
return task.flow_control; // Fall back to original if no flat fields
}
return {
pre_analysis: preAnalysis || task.flow_control?.pre_analysis,
implementation_approach: implementation || task.flow_control?.implementation_approach,
target_files: files || task.flow_control?.target_files,
};
}
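The "new flat field first, old nested fallback" promotion that `normalizeTask` applies can be stated generically. A Python sketch of the same pattern (hypothetical helper; field names mirror the TS mapping above):

```python
def promote(raw: dict) -> dict:
    """Read new flat fields first, fall back to the old nested paths."""
    context = raw.get("context") or {}
    flow = raw.get("flow_control") or {}
    return {
        "focus_paths": raw.get("focus_paths") or context.get("focus_paths") or [],
        "depends_on": raw.get("depends_on") or context.get("depends_on") or [],
        "pre_analysis": raw.get("pre_analysis") or flow.get("pre_analysis"),
        "implementation": raw.get("implementation") or flow.get("implementation_approach"),
        "files": raw.get("files") or flow.get("target_files"),
    }

old_task = {"context": {"focus_paths": ["src/"]}}
new_task = {"focus_paths": ["lib/"], "context": {"focus_paths": ["src/"]}}
# promote(old_task)["focus_paths"] == ["src/"]   (nested fallback)
# promote(new_task)["focus_paths"] == ["lib/"]   (flat field wins)
```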
 export interface LiteTaskSession {
   id: string;
   session_id?: string;

View File

@@ -145,7 +145,7 @@ class Config:
     # Staged cascade search configuration (4-stage pipeline)
     staged_coarse_k: int = 200  # Number of coarse candidates from Stage 1 binary search
     staged_lsp_depth: int = 2  # LSP relationship expansion depth in Stage 2
-    staged_stage2_mode: str = "precomputed"  # "precomputed" (graph_neighbors) | "realtime" (LSP)
+    staged_stage2_mode: str = "precomputed"  # "precomputed" (graph_neighbors) | "realtime" (LSP) | "static_global_graph" (global_relationships)

     # Static graph configuration (write relationships to global index during build)
     static_graph_enabled: bool = False
@@ -627,7 +627,7 @@ class Config:
         staged_stage2_mode = get_env("STAGED_STAGE2_MODE")
         if staged_stage2_mode:
             mode = staged_stage2_mode.strip().lower()
-            if mode in {"precomputed", "realtime"}:
+            if mode in {"precomputed", "realtime", "static_global_graph"}:
                 self.staged_stage2_mode = mode
                 log.debug("Overriding staged_stage2_mode from .env: %s", self.staged_stage2_mode)
             elif mode in {"live"}:
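The override logic reduces to a small pure function. A standalone sketch (not the real `Config` method, which also mutates state, logs, and handles a legacy `live` alias whose branch body is cut off in this hunk):

```python
VALID_STAGE2_MODES = {"precomputed", "realtime", "static_global_graph"}

def resolve_stage2_mode(raw, default="precomputed"):
    """Normalize an env value; unknown or empty values keep the default."""
    if not raw:
        return default
    mode = raw.strip().lower()
    return mode if mode in VALID_STAGE2_MODES else default

# resolve_stage2_mode("Static_Global_Graph") == "static_global_graph"
# resolve_stage2_mode("bogus") == "precomputed"
```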

View File

@@ -1293,6 +1293,9 @@ class ChainSearchEngine:
                 query=query,
             )

+        if mode == "static_global_graph":
+            return self._stage2_static_global_graph_expand(coarse_results, index_root=index_root)
+
         return self._stage2_precomputed_graph_expand(coarse_results, index_root=index_root)
     except ImportError as exc:
@@ -1343,6 +1346,50 @@ class ChainSearchEngine:
         return self._combine_stage2_results(coarse_results, related_results)
def _stage2_static_global_graph_expand(
self,
coarse_results: List[SearchResult],
*,
index_root: Path,
) -> List[SearchResult]:
"""Stage 2 (static_global_graph): expand using GlobalGraphExpander over global_relationships."""
from codexlens.search.global_graph_expander import GlobalGraphExpander
global_db_path = index_root / GlobalSymbolIndex.DEFAULT_DB_NAME
if not global_db_path.exists():
self.logger.debug("Global symbol DB not found at %s, skipping static graph expansion", global_db_path)
return coarse_results
project_id = 1
try:
for p in self.registry.list_projects():
if p.index_root.resolve() == index_root.resolve():
project_id = p.id
break
except Exception:
pass
global_index = GlobalSymbolIndex(global_db_path, project_id=project_id)
global_index.initialize()
try:
expander = GlobalGraphExpander(global_index, config=self._config)
related_results = expander.expand(
coarse_results,
top_n=min(10, len(coarse_results)),
max_related=50,
)
if related_results:
self.logger.debug(
"Stage 2 (static_global_graph) expanded %d base results to %d related symbols",
len(coarse_results), len(related_results),
)
return self._combine_stage2_results(coarse_results, related_results)
finally:
global_index.close()
     def _stage2_realtime_lsp_expand(
         self,
         coarse_results: List[SearchResult],
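`GlobalGraphExpander` itself is not shown in this diff; the expand-then-combine pattern it is used for can be sketched independently (a simplified stand-in: plain strings instead of `SearchResult` objects, and a lookup callable instead of the `global_relationships` table):

```python
def expand_with_static_graph(coarse, neighbors_of, top_n=10, max_related=50):
    """Take the top-N coarse hits, collect their precomputed graph
    neighbors, and cap the number of related symbols returned."""
    related = []
    seen = set(coarse)
    for result in coarse[:top_n]:
        for neighbor in neighbors_of(result):
            if neighbor in seen:
                continue
            seen.add(neighbor)
            related.append(neighbor)
            if len(related) >= max_related:
                return related
    return related

graph = {"A": ["B", "C"], "B": ["C", "D"]}
related = expand_with_static_graph(["A", "B"], lambda s: graph.get(s, []))
# related == ["C", "D"]
```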

View File

@@ -0,0 +1,289 @@
"""Tests for static graph relationship writing during index build (T2).
Verifies that IndexTreeBuilder._build_single_dir and _build_dir_worker
correctly write relationships to GlobalSymbolIndex when
config.static_graph_enabled is True.
"""
import tempfile
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from codexlens.config import Config
from codexlens.entities import (
CodeRelationship,
IndexedFile,
RelationshipType,
Symbol,
)
from codexlens.storage.global_index import GlobalSymbolIndex
@pytest.fixture()
def temp_dir():
tmpdir = tempfile.TemporaryDirectory(ignore_cleanup_errors=True)
yield Path(tmpdir.name)
try:
tmpdir.cleanup()
except (PermissionError, OSError):
pass
def _make_indexed_file(file_path: str) -> IndexedFile:
"""Create a test IndexedFile with symbols and relationships."""
return IndexedFile(
path=file_path,
language="python",
symbols=[
Symbol(name="MyClass", kind="class", range=(1, 20)),
Symbol(name="helper", kind="function", range=(22, 30)),
],
relationships=[
CodeRelationship(
source_symbol="MyClass",
target_symbol="BaseClass",
relationship_type=RelationshipType.INHERITS,
source_file=file_path,
target_file="other/base.py",
source_line=1,
),
CodeRelationship(
source_symbol="MyClass",
target_symbol="os",
relationship_type=RelationshipType.IMPORTS,
source_file=file_path,
source_line=2,
),
CodeRelationship(
source_symbol="helper",
target_symbol="external_func",
relationship_type=RelationshipType.CALL,
source_file=file_path,
source_line=25,
),
],
)
def test_build_single_dir_writes_global_relationships_when_enabled(temp_dir: Path) -> None:
"""When static_graph_enabled=True, relationships should be written to global index."""
from codexlens.storage.index_tree import IndexTreeBuilder
config = Config(
data_dir=temp_dir / "data",
static_graph_enabled=True,
static_graph_relationship_types=["imports", "inherits"],
global_symbol_index_enabled=True,
)
# Set up real GlobalSymbolIndex
global_db_path = temp_dir / "global_symbols.db"
global_index = GlobalSymbolIndex(global_db_path, project_id=1)
global_index.initialize()
# Create a source file
src_dir = temp_dir / "src"
src_dir.mkdir()
test_file = src_dir / "module.py"
test_file.write_text("class MyClass(BaseClass):\n pass\n", encoding="utf-8")
indexed_file = _make_indexed_file(str(test_file))
# Mock parser to return our test IndexedFile
mock_parser = MagicMock()
mock_parser.parse.return_value = indexed_file
mock_mapper = MagicMock()
mock_mapper.source_to_index_db.return_value = temp_dir / "index" / "_index.db"
mock_registry = MagicMock()
builder = IndexTreeBuilder(mock_registry, mock_mapper, config=config, incremental=False)
builder.parser_factory = MagicMock()
builder.parser_factory.get_parser.return_value = mock_parser
result = builder._build_single_dir(
src_dir,
languages=None,
project_id=1,
global_index_db_path=global_db_path,
)
assert result.error is None
assert result.files_count == 1
# Verify relationships were written to global index
# Only IMPORTS and INHERITS should be written (not CALL)
rels = global_index.query_by_target("BaseClass", prefix_mode=True)
rels += global_index.query_by_target("os", prefix_mode=True)
assert len(rels) >= 1, "Expected at least 1 relationship written to global index"
# CALL relationship for external_func should NOT be present
call_rels = global_index.query_by_target("external_func", prefix_mode=True)
assert len(call_rels) == 0, "CALL relationships should not be written"
global_index.close()
def test_build_single_dir_skips_relationships_when_disabled(temp_dir: Path) -> None:
"""When static_graph_enabled=False, no relationships should be written."""
from codexlens.storage.index_tree import IndexTreeBuilder
config = Config(
data_dir=temp_dir / "data",
static_graph_enabled=False,
global_symbol_index_enabled=True,
)
global_db_path = temp_dir / "global_symbols.db"
global_index = GlobalSymbolIndex(global_db_path, project_id=1)
global_index.initialize()
src_dir = temp_dir / "src"
src_dir.mkdir()
test_file = src_dir / "module.py"
test_file.write_text("import os\n", encoding="utf-8")
indexed_file = _make_indexed_file(str(test_file))
mock_parser = MagicMock()
mock_parser.parse.return_value = indexed_file
mock_mapper = MagicMock()
mock_mapper.source_to_index_db.return_value = temp_dir / "index" / "_index.db"
mock_registry = MagicMock()
builder = IndexTreeBuilder(mock_registry, mock_mapper, config=config, incremental=False)
builder.parser_factory = MagicMock()
builder.parser_factory.get_parser.return_value = mock_parser
result = builder._build_single_dir(
src_dir,
languages=None,
project_id=1,
global_index_db_path=global_db_path,
)
assert result.error is None
# No relationships should be in global index
conn = global_index._get_connection()
count = conn.execute("SELECT COUNT(*) FROM global_relationships").fetchone()[0]
assert count == 0, "No relationships should be written when static_graph_enabled=False"
global_index.close()
def test_relationship_write_failure_does_not_block_indexing(temp_dir: Path) -> None:
"""If global_index.update_file_relationships raises, file indexing continues."""
from codexlens.storage.index_tree import IndexTreeBuilder
config = Config(
data_dir=temp_dir / "data",
static_graph_enabled=True,
static_graph_relationship_types=["imports", "inherits"],
global_symbol_index_enabled=True,
)
src_dir = temp_dir / "src"
src_dir.mkdir()
test_file = src_dir / "module.py"
test_file.write_text("import os\n", encoding="utf-8")
indexed_file = _make_indexed_file(str(test_file))
mock_parser = MagicMock()
mock_parser.parse.return_value = indexed_file
mock_mapper = MagicMock()
mock_mapper.source_to_index_db.return_value = temp_dir / "index" / "_index.db"
mock_registry = MagicMock()
# Create a mock GlobalSymbolIndex that fails on update_file_relationships
mock_global_db_path = temp_dir / "global_symbols.db"
builder = IndexTreeBuilder(mock_registry, mock_mapper, config=config, incremental=False)
builder.parser_factory = MagicMock()
builder.parser_factory.get_parser.return_value = mock_parser
# Patch GlobalSymbolIndex so update_file_relationships raises
with patch("codexlens.storage.index_tree.GlobalSymbolIndex") as MockGSI:
mock_gsi_instance = MagicMock()
mock_gsi_instance.update_file_relationships.side_effect = RuntimeError("DB locked")
MockGSI.return_value = mock_gsi_instance
result = builder._build_single_dir(
src_dir,
languages=None,
project_id=1,
global_index_db_path=mock_global_db_path,
)
# File should still be indexed despite relationship write failure
assert result.error is None
assert result.files_count == 1
def test_only_configured_relationship_types_written(temp_dir: Path) -> None:
"""Only relationship types in static_graph_relationship_types should be written."""
from codexlens.storage.index_tree import IndexTreeBuilder
# Only allow 'imports' (not 'inherits')
config = Config(
data_dir=temp_dir / "data",
static_graph_enabled=True,
static_graph_relationship_types=["imports"],
global_symbol_index_enabled=True,
)
global_db_path = temp_dir / "global_symbols.db"
global_index = GlobalSymbolIndex(global_db_path, project_id=1)
global_index.initialize()
src_dir = temp_dir / "src"
src_dir.mkdir()
test_file = src_dir / "module.py"
test_file.write_text("import os\nclass Foo(Bar): pass\n", encoding="utf-8")
indexed_file = _make_indexed_file(str(test_file))
mock_parser = MagicMock()
mock_parser.parse.return_value = indexed_file
mock_mapper = MagicMock()
mock_mapper.source_to_index_db.return_value = temp_dir / "index" / "_index.db"
mock_registry = MagicMock()
builder = IndexTreeBuilder(mock_registry, mock_mapper, config=config, incremental=False)
builder.parser_factory = MagicMock()
builder.parser_factory.get_parser.return_value = mock_parser
result = builder._build_single_dir(
src_dir,
languages=None,
project_id=1,
global_index_db_path=global_db_path,
)
assert result.error is None
# Only IMPORTS should be written
conn = global_index._get_connection()
rows = conn.execute(
"SELECT relationship_type FROM global_relationships"
).fetchall()
rel_types = {row[0] for row in rows}
assert "imports" in rel_types or len(rows) == 0 or rel_types == {"imports"}, \
f"Expected only 'imports', got {rel_types}"
# INHERITS should NOT be present
assert "inherits" not in rel_types, "inherits should not be written when not in config"
# CALL should NOT be present
assert "calls" not in rel_types, "calls should not be written"
global_index.close()