Mirror of https://github.com/catlog22/Claude-Code-Workflow.git, synced 2026-02-28 09:23:08 +08:00

Compare commits (2 commits): v7.0.1 ... copilot/fi

| Author | SHA1 | Date |
|---|---|---|
|  | 843455f8c0 |  |
|  | d4d17062a1 |  |
@@ -1,27 +0,0 @@
---
title: "Personal Coding Style"
dimension: personal
category: general
keywords:
  - style
  - preference
readMode: optional
priority: medium
---

# Personal Coding Style

## Preferences

- Describe your preferred coding style here
- Example: verbose variable names vs terse, functional vs imperative

## Patterns I Prefer

- List patterns you reach for most often
- Example: builder pattern, factory functions, tagged unions

## Things I Avoid

- List anti-patterns or approaches you dislike
- Example: deep inheritance hierarchies, magic strings
@@ -1,25 +0,0 @@
---
title: "Tool Preferences"
dimension: personal
category: general
keywords:
  - tool
  - cli
  - editor
readMode: optional
priority: low
---

# Tool Preferences

## Editor

- Preferred editor and key extensions/plugins

## CLI Tools

- Preferred shell, package manager, build tools

## Debugging

- Preferred debugging approach and tools
@@ -1,32 +0,0 @@
---
title: "Architecture Constraints"
dimension: specs
category: planning
keywords:
  - architecture
  - module
  - layer
  - pattern
readMode: required
priority: high
---

# Architecture Constraints

## Module Boundaries

- Each module owns its data and exposes a public API
- No circular dependencies between modules
- Shared utilities live in a dedicated shared layer

## Layer Separation

- Presentation layer must not import data layer directly
- Business logic must be independent of framework specifics
- Configuration must be externalized, not hardcoded

## Dependency Rules

- External dependencies require justification
- Prefer standard library when available
- Pin dependency versions for reproducibility
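A minimal sketch of the layer-separation and externalized-configuration rules listed above. The service, repository stub, and `TAX_RATE` variable are illustrative assumptions, not part of the project:

```javascript
// Business layer: pure logic, framework-free; the data layer is injected.
function createInvoiceService({ invoiceRepo, taxRate }) {
  return {
    totalWithTax(invoiceId) {
      const invoice = invoiceRepo.findById(invoiceId)
      return invoice.amount * (1 + taxRate)
    }
  }
}

// Composition root: configuration comes from the environment, not hardcoded.
const taxRate = Number(process.env.TAX_RATE ?? '0.25')
const invoiceRepo = { findById: () => ({ amount: 100 }) } // stand-in for the data layer
const service = createInvoiceService({ invoiceRepo, taxRate })
```

Because the repository and tax rate are injected at the composition root, the business layer never imports the data layer or reads configuration directly.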
@@ -1,38 +0,0 @@
---
title: "Coding Conventions"
dimension: specs
category: general
keywords:
  - typescript
  - naming
  - style
  - convention
readMode: required
priority: high
---

# Coding Conventions

## Naming

- Use camelCase for variables and functions
- Use PascalCase for classes and interfaces
- Use UPPER_SNAKE_CASE for constants

## Formatting

- 2-space indentation
- Single quotes for strings
- Trailing commas in multi-line constructs

## Patterns

- Prefer composition over inheritance
- Use early returns to reduce nesting
- Keep functions under 30 lines when practical

## Error Handling

- Always handle errors explicitly
- Prefer typed errors over generic catch-all
- Log errors with sufficient context
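A short JavaScript sketch applying the conventions in the deleted spec above (names and the error class are illustrative): camelCase function, PascalCase class, UPPER_SNAKE_CASE constant, early returns, and a typed error instead of a generic one.

```javascript
const MAX_RETRIES = 3

// Typed error: callers can catch ValidationError specifically.
class ValidationError extends Error {
  constructor(message) {
    super(message)
    this.name = 'ValidationError'
  }
}

function parseRetryCount(rawValue) {
  // Early return keeps nesting shallow.
  if (rawValue === undefined) return MAX_RETRIES
  const parsed = Number(rawValue)
  if (!Number.isInteger(parsed) || parsed < 0) {
    throw new ValidationError(`invalid retry count: ${rawValue}`)
  }
  return Math.min(parsed, MAX_RETRIES)
}
```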
@@ -30,7 +30,6 @@ RULES: [templates | additional constraints]

## Execution Flow

-0. **Load Project Specs** - MANDATORY first step: run `ccw spec load` to retrieve project specifications and constraints before any analysis. Adapt analysis scope and standards based on loaded specs
1. **Parse** all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
2. **Read** and analyze CONTEXT files thoroughly
3. **Identify** patterns, issues, and dependencies

@@ -41,7 +40,6 @@ RULES: [templates | additional constraints]

## Core Requirements

**ALWAYS**:
-- Run `ccw spec load` FIRST to obtain project specifications before starting any work
- Analyze ALL CONTEXT files completely
- Apply RULES (templates + constraints) exactly
- Provide code evidence with `file:line` references
@@ -24,7 +24,6 @@ RULES: [templates | additional constraints]

## Execution Flow

### MODE: write
-0. **Load Project Specs** - MANDATORY first step: run `ccw spec load` to retrieve project specifications and constraints before any implementation. Apply loaded specs to guide coding standards, architecture decisions, and quality gates
1. **Parse** all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
2. **Read** CONTEXT files, find 3+ similar patterns
3. **Plan** implementation following RULES

@@ -35,7 +34,6 @@ RULES: [templates | additional constraints]

## Core Requirements

**ALWAYS**:
-- Run `ccw spec load` FIRST to obtain project specifications before starting any work
- Study CONTEXT files - find 3+ similar patterns before implementing
- Apply RULES exactly
- Test continuously (auto mode)
@@ -63,35 +63,6 @@

"type": "array",
"items": {"type": "string"},
"description": "Key symbols (functions, classes, types) in this file relevant to the task. Example: ['AuthService', 'login', 'TokenPayload']"
-},
-"key_code": {
-  "type": "array",
-  "items": {
-    "type": "object",
-    "required": ["symbol", "description"],
-    "properties": {
-      "symbol": {
-        "type": "string",
-        "description": "Symbol identifier (function, class, method, type). Example: 'AuthService.login()'"
-      },
-      "location": {
-        "type": "string",
-        "description": "Line range in file. Example: 'L45-L78'"
-      },
-      "description": {
-        "type": "string",
-        "minLength": 10,
-        "description": "What this code does and why it matters. Example: 'JWT token generation with bcrypt verification, returns {token, refreshToken} pair'"
-      }
-    },
-    "additionalProperties": false
-  },
-  "description": "Key code constructs with descriptions. Richer complement to key_symbols. Populate for files with relevance >= 0.7."
-},
-"topic_relation": {
-  "type": "string",
-  "minLength": 15,
-  "description": "How this file relates to the exploration ANGLE/TOPIC. Must reference the angle explicitly. Example: 'Security exploration targets this file because JWT generation lacks token rotation'. Distinct from rationale (WHY selected) - topic_relation explains the CONNECTION to the exploration perspective."
-}
}
},
"additionalProperties": false
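For reference, a file entry satisfying the `key_code` and `topic_relation` fields described in the removed schema might look like this. All values here are made up for illustration:

```javascript
// Hypothetical relevant_files entry matching the removed schema fields.
const entry = {
  path: 'src/auth/auth-service.ts',
  relevance: 0.9,
  key_symbols: ['AuthService', 'login'],
  key_code: [
    {
      symbol: 'AuthService.login()',
      location: 'L45-L78',
      description: 'JWT token generation with bcrypt verification, returns token pair'
    }
  ],
  topic_relation: 'Security exploration targets this file because JWT generation lacks token rotation'
}

// Minimal checks mirroring the schema constraints (minLength 10 / 15).
const keyCodeOk = entry.key_code.every(k => k.symbol && k.description.length >= 10)
const topicOk = entry.topic_relation.length >= 15
```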
@@ -1,7 +1,7 @@

{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Project Guidelines Schema",
-"description": "Legacy schema - project guidelines now managed by spec system (`ccw spec load --dimension specs`)",
+"description": "Schema for project-guidelines.json - user-maintained rules and constraints",
"type": "object",
"required": ["conventions", "constraints", "_metadata"],
"properties": {
@@ -56,23 +56,7 @@ color: yellow

**Step-by-step execution**:

```
-0. Load project context (MANDATORY - from init.md products)
-   a. Read .workflow/project-tech.json (if exists)
-      → tech_stack, architecture_type, key_components, build_system, test_framework
-      → Usage: Populate plan.json shared_context, set correct build/test commands,
-        align task tech choices with actual project stack
-      → If missing: Fall back to context-package.project_context fields
-
-   b. Read .workflow/specs/*.md (if exists)
-      → coding_conventions, naming_rules, forbidden_patterns, quality_gates, custom_constraints
-      → Usage: Apply as HARD CONSTRAINTS on all tasks — implementation steps,
-        acceptance criteria, and convergence.verification MUST respect these rules
-      → If empty/missing: No additional constraints (proceed normally)
-
-   NOTE: These files provide project-level context that supplements (not replaces)
-   session-specific context from planning-notes.md and context-package.json.
-
-1. Load planning notes → Extract phase-level constraints (NEW)
+0. Load planning notes → Extract phase-level constraints (NEW)
   Commands: Read('.workflow/active/{session-id}/planning-notes.md')
   Output: Consolidated constraints from all workflow phases
   Structure:
@@ -83,16 +67,16 @@ color: yellow

USAGE: This is the PRIMARY source of constraints. All task generation MUST respect these constraints.

-2. Load session metadata → Extract user input
+1. Load session metadata → Extract user input
   - User description: Original task/feature requirements
   - Project scope: User-specified boundaries and goals
   - Technical constraints: User-provided technical requirements

-3. Load context package → Extract structured context
+2. Load context package → Extract structured context
   Commands: Read({{context_package_path}})
   Output: Complete context package object

-4. Check existing plan (if resuming)
+3. Check existing plan (if resuming)
   - If IMPL_PLAN.md exists: Read for continuity
   - If task JSONs exist: Load for context

@@ -1005,8 +989,7 @@ Use `analysis_results.complexity` or task count to determine structure:

### 3.4 Guidelines Checklist

**ALWAYS:**
-- **Load project context FIRST**: Read `.workflow/project-tech.json` and `.workflow/specs/*.md` before any session-specific files. Apply specs/*.md as hard constraints on all tasks
-- **Load planning-notes.md SECOND**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
+- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Record N+1 Context**: Update `## N+1 Context` section with key decisions and deferred items
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
@@ -39,33 +39,6 @@ Phase 4: Output Generation

## Phase 1: Task Understanding

-### Autonomous Initialization (execute before any analysis)
-
-**These steps are MANDATORY and self-contained** -- the agent executes them regardless of caller prompt content. Callers do NOT need to repeat these instructions.
-
-1. **Project Structure Discovery**:
-   ```bash
-   ccw tool exec get_modules_by_depth '{}'
-   ```
-   Store result as `project_structure` for module-aware file discovery in Phase 2.
-
-2. **Output Schema Loading** (if output file path specified in prompt):
-   - Exploration output → `cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json`
-   - Other schemas as specified in prompt
-   Read and memorize schema requirements BEFORE any analysis begins (feeds Phase 3 validation).
-
-3. **Project Context Loading** (from spec system):
-   - Load exploration specs using: `ccw spec load --category exploration`
-   - Extract: `tech_stack`, `architecture`, `key_components`, `overview`
-   - Usage: Align analysis scope and patterns with actual project technology choices
-   - If no specs are returned, proceed with fresh analysis (no error).
-
-4. **Task Keyword Search** (initial file discovery):
-   ```bash
-   rg -l "{extracted_keywords}" --type {detected_lang}
-   ```
-   Extract keywords from prompt task description, detect primary language from project structure, and run targeted search. Store results as `keyword_files` for Phase 2 scoping.
-
**Extract from prompt**:
- Analysis target and scope
- Analysis mode (quick-scan / deep-scan / dependency-map)
@@ -123,10 +96,7 @@ RULES: {from prompt, if template specified} | analysis=READ-ONLY

2. Gemini results: Semantic understanding, design intent → `discovery_source: "cli-analysis"`
3. ACE search: Semantic code search → `discovery_source: "ace-search"`
4. Dependency tracing: Import/export graph → `discovery_source: "dependency-trace"`
-5. Merge with source attribution and generate for each file:
-   - `rationale`: WHY the file was selected (selection basis)
-   - `topic_relation`: HOW the file connects to the exploration angle/topic
-   - `key_code`: Detailed descriptions of key symbols with locations (for relevance >= 0.7)
+5. Merge with source attribution and generate rationale for each file

---
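Step 5's merge with source attribution can be sketched as follows. The function and field names mirror the `discovery_source` values above; the de-duplication rule (keep the highest relevance per path) is an assumption:

```javascript
// Merge file candidates from several discovery sources, de-duplicating by
// path and keeping the entry with the highest relevance score.
function mergeDiscoveries(...sources) {
  const byPath = new Map()
  for (const { discovery_source, files } of sources) {
    for (const file of files) {
      const existing = byPath.get(file.path)
      if (!existing || file.relevance > existing.relevance) {
        byPath.set(file.path, { ...file, discovery_source })
      }
    }
  }
  return [...byPath.values()]
}

const merged = mergeDiscoveries(
  { discovery_source: 'bash-scan', files: [{ path: 'src/auth.ts', relevance: 0.6, rationale: 'filename match' }] },
  { discovery_source: 'ace-search', files: [{ path: 'src/auth.ts', relevance: 0.9, rationale: 'semantic match for login flow' }] }
)
```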
@@ -158,12 +128,6 @@ Every file entry MUST have:

- BAD: "Related to auth" or "Relevant file"
- `role` (required, enum): Structural classification of why it was selected
- `discovery_source` (optional but recommended): How the file was found
-- `key_code` (strongly recommended for relevance >= 0.7): Array of {symbol, location?, description}
-  - GOOD: [{"symbol": "AuthService.login()", "location": "L45-L78", "description": "JWT token generation with bcrypt verification, returns token pair"}]
-  - BAD: [{"symbol": "login", "description": "login function"}]
-- `topic_relation` (strongly recommended for relevance >= 0.7): Connection from exploration angle perspective
-  - GOOD: "Security exploration targets this file because JWT generation lacks token rotation"
-  - BAD: "Related to security"

**Step 4: Pre-Output Validation Checklist**

@@ -177,8 +141,6 @@ Before writing ANY JSON output, verify:

- [ ] Data types correct (string, integer, array, object)
- [ ] Every file in relevant_files has: path + relevance + rationale + role
- [ ] Every rationale is specific (>10 chars, not generic)
-- [ ] Files with relevance >= 0.7 have key_code with symbol + description (minLength 10)
-- [ ] Files with relevance >= 0.7 have topic_relation explaining connection to angle (minLength 15)

---
@@ -227,8 +189,6 @@ Brief summary:

9. **Every file MUST have rationale**: Specific selection basis tied to the topic (not generic)
10. **Every file MUST have role**: Classify as modify_target/dependency/pattern_reference/test_target/type_definition/integration_point/config/context_only
11. **Track discovery source**: Record how each file was found (bash-scan/cli-analysis/ace-search/dependency-trace/manual)
-12. **Populate key_code for high-relevance files**: relevance >= 0.7 → key_code array with symbol, location, description
-13. **Populate topic_relation for high-relevance files**: relevance >= 0.7 → topic_relation explaining file-to-angle connection

**Bash Tool**:
- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
@@ -54,9 +54,6 @@ When invoked with `process_docs: true` in input context:

## Input Context

-**Project Context** (loaded from spec system at startup):
-- Load specs using: `ccw spec load --category "exploration architecture"` → tech_stack, architecture, key_components, conventions, constraints, quality_rules
-
```javascript
{
// Required
@@ -152,7 +149,7 @@ Phase 5: Plan Quality Check (MANDATORY)

│ ├─ Dependency correctness (no circular deps, proper ordering)
│ ├─ Acceptance criteria quality (quantified, testable)
│ ├─ Implementation steps sufficiency (2+ steps per task)
-│ └─ Constraint compliance (follows specs/*.md)
+│ └─ Constraint compliance (follows project-guidelines.json)
├─ Parse check results and categorize issues
└─ Decision:
├─ No issues → Return plan to orchestrator
@@ -504,7 +501,7 @@ function parseCLIOutput(cliOutput) {

```javascript
// NOTE: relevant_files items are structured objects:
-// {path, relevance, rationale, role, discovery_source?, key_symbols?, key_code?, topic_relation?}
+// {path, relevance, rationale, role, discovery_source?, key_symbols?}
function buildEnrichedContext(explorationsContext, explorationAngles) {
  const enriched = { relevant_files: [], patterns: [], dependencies: [], integration_points: [], constraints: [] }
@@ -566,7 +563,6 @@ function inferAction(title) {

}

// NOTE: relevant_files items are structured objects with .path property
-// New fields: key_code? (array of {symbol, location?, description}), topic_relation? (string)
function inferFile(task, ctx) {
  const files = ctx?.relevant_files || []
  const getPath = f => typeof f === 'string' ? f : f.path
@@ -850,7 +846,7 @@ After generating plan.json, **MUST** execute CLI quality check before returning

| **Dependencies** | No circular deps, correct ordering | Yes |
| **Convergence Criteria** | Quantified and testable (not vague) | No |
| **Implementation Steps** | 2+ actionable steps per task | No |
-| **Constraint Compliance** | Follows specs/*.md | Yes |
+| **Constraint Compliance** | Follows project-guidelines.json | Yes |

### CLI Command Format
@@ -859,7 +855,7 @@ Use `ccw cli` with analysis mode to validate plan against quality dimensions:

```bash
ccw cli -p "Validate plan quality: completeness, granularity, dependencies, convergence criteria, implementation steps, constraint compliance" \
  --tool gemini --mode analysis \
-  --context "@{plan_json_path} @{task_dir}/*.json @.workflow/specs/*.md"
+  --context "@{plan_json_path} @{task_dir}/*.json @.workflow/project-guidelines.json"
```

**Expected Output Structure**:
@@ -37,8 +37,6 @@ jq --arg ts "$(date -Iseconds)" '.status="in_progress" | .status_history += [{"f

- Existing documentation and code examples
- Project CLAUDE.md standards
- **context-package.json** (when available in workflow tasks)
-- **project-tech.json** (if exists) → tech_stack, architecture, key_components
-- **specs/*.md** (if exists) → conventions, constraints, quality_rules

**Context Package**:
`context-package.json` provides artifact paths - read using Read tool or ccw session:
@@ -75,20 +75,6 @@ if (file_exists(contextPackagePath)) {

}
```

-**1.1b Project Context Loading** (MANDATORY):
-```javascript
-// Load project-level context (from spec system)
-// These provide foundational constraints for ALL context gathering
-const projectSpecs = Bash('ccw spec load --category "exploration architecture" --stdin');
-const projectTech = projectSpecs?.tech_stack ? projectSpecs : null;
-const projectGuidelines = projectSpecs?.coding_conventions ? projectSpecs : null;
-
-// Usage:
-// - projectTech → Populate project_context fields (tech_stack, architecture_patterns)
-// - projectGuidelines → Apply as constraints during relevance scoring and conflict detection
-// - If missing: Proceed with fresh analysis (discover from codebase)
-```
-
**1.2 Foundation Setup**:
```javascript
// 1. Initialize CodexLens (if available)
@@ -289,10 +275,6 @@ score = (0.4 × direct_match) + // Filename/path match

(0.1 × dependency_link) // Connection strength

// Filter: Include only score > 0.5
-
-// Apply projectGuidelines constraints (from 1.1b) when available:
-// - Boost files matching projectGuidelines.quality_gates patterns
-// - Penalize files matching projectGuidelines.forbidden_patterns
```

**3.2 Dependency Graph**
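The scoring-and-filter step above can be sketched as a plain function. Only the first and last terms of the weighted sum are visible in this hunk, so `otherSignals` stands in for the elided middle terms, and the sample inputs are illustrative:

```javascript
// Weighted relevance score; each signal is a 0-1 value.
function relevanceScore({ directMatch, dependencyLink, otherSignals = 0 }) {
  return 0.4 * directMatch + otherSignals + 0.1 * dependencyLink
}

// Filter: include only files scoring above 0.5, as in the comment above.
const files = [
  { path: 'src/auth.ts', score: relevanceScore({ directMatch: 1, dependencyLink: 1, otherSignals: 0.3 }) },
  { path: 'README.md', score: relevanceScore({ directMatch: 0, dependencyLink: 0.5 }) }
]
const selected = files.filter(f => f.score > 0.5)
```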
@@ -310,23 +292,19 @@ Merge with conflict resolution:

```javascript
const context = {
-  // Priority: projectTech/projectGuidelines (1.1b) > Project docs > Existing code > Web examples
+  // Priority: Project docs > Existing code > Web examples
-  architecture: projectTech?.architecture_type || ref_docs.patterns || code.structure,
+  architecture: ref_docs.patterns || code.structure,

  conventions: {
-    naming: projectGuidelines?.naming_rules || ref_docs.standards || code.actual_patterns,
+    naming: ref_docs.standards || code.actual_patterns,
-    error_handling: ref_docs.standards || code.patterns || web.best_practices,
+    error_handling: ref_docs.standards || code.patterns || web.best_practices
-    forbidden_patterns: projectGuidelines?.forbidden_patterns || [],
-    quality_gates: projectGuidelines?.quality_gates || []
  },

  tech_stack: {
-    // projectTech provides authoritative baseline; actual (package.json) fills gaps
+    // Actual (package.json) takes precedence
-    language: projectTech?.tech_stack?.language || code.actual.language,
+    language: code.actual.language,
-    frameworks: merge_unique([projectTech?.tech_stack?.frameworks, ref_docs.declared, code.actual]),
+    frameworks: merge_unique([ref_docs.declared, code.actual]),
-    libraries: merge_unique([projectTech?.tech_stack?.libraries, code.actual.libraries]),
+    libraries: code.actual.libraries
-    build_system: projectTech?.build_system || code.actual.build_system,
-    test_framework: projectTech?.test_framework || code.actual.test_framework
  },

  // Web examples fill gaps
@@ -336,9 +314,9 @@ const context = {

```

**Conflict Resolution**:
-1. Architecture: projectTech > Docs > Code > Web
+1. Architecture: Docs > Code > Web
-2. Conventions: projectGuidelines > Declared > Actual > Industry
+2. Conventions: Declared > Actual > Industry
-3. Tech Stack: projectTech > Actual (package.json) > Declared
+3. Tech Stack: Actual (package.json) > Declared
4. Missing: Use web examples

**3.5 Brainstorm Artifacts Integration**
@@ -403,8 +381,6 @@ Calculate risk level based on:

- Existing file count (<5: low, 5-15: medium, >15: high)
- API/architecture/data model changes
- Breaking changes identification
-- Violations of projectGuidelines.forbidden_patterns (from 1.1b, if available)
-- Deviations from projectGuidelines.coding_conventions (from 1.1b, if available)

**3.7 Context Packaging & Output**
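The risk-level calculation above can be sketched as follows. The file-count thresholds come from the first bullet; how the other signals escalate the level is an assumption:

```javascript
// Risk level from file count, with assumed escalation for API and breaking changes.
function riskLevel({ fileCount, hasApiChanges = false, hasBreakingChanges = false }) {
  let level = fileCount < 5 ? 'low' : fileCount <= 15 ? 'medium' : 'high'
  if (hasApiChanges && level === 'low') level = 'medium'
  if (hasBreakingChanges) level = 'high'
  return level
}
```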
@@ -35,9 +35,6 @@ Phase 5: Fix & Verification

## Phase 1: Bug Analysis

-**Load Project Context** (from spec system):
-- Load exploration specs using: `ccw spec load --category exploration` for tech stack context and coding constraints
-
**Session Setup**:
```javascript
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
@@ -26,10 +26,6 @@ color: green

### 1.1 Input Context

-**Project Context** (load at startup):
-- Read `.workflow/project-tech.json` (if exists) → tech_stack, architecture
-- Read `.workflow/specs/*.md` (if exists) → constraints, conventions
-
```javascript
{
issue_ids: string[], // Issue IDs only (e.g., ["GH-123", "GH-124"])
@@ -1,440 +0,0 @@
---
name: team-worker
description: |
  Unified worker agent for team-lifecycle-v5. Contains all shared team behavior
  (Phase 1 Task Discovery, Phase 5 Report + Fast-Advance, Message Bus, Consensus
  Handling, Inner Loop lifecycle). Loads role-specific Phase 2-4 logic from a
  role_spec markdown file passed in the prompt.

  Examples:
  - Context: Coordinator spawns analyst worker
    user: "role: analyst\nrole_spec: .claude/skills/team-lifecycle-v5/role-specs/analyst.md\nsession: .workflow/.team/TLS-xxx"
    assistant: "Loading role spec, discovering RESEARCH-* tasks, executing Phase 2-4 domain logic"
    commentary: Agent parses prompt, loads role spec, runs built-in Phase 1 then role-specific Phase 2-4 then built-in Phase 5
  - Context: Coordinator spawns writer worker with inner loop
    user: "role: writer\nrole_spec: .claude/skills/team-lifecycle-v5/role-specs/writer.md\ninner_loop: true"
    assistant: "Loading role spec, processing all DRAFT-* tasks in inner loop"
    commentary: Agent detects inner_loop=true, loops Phase 1-5 for each same-prefix task
color: green
---

You are a **team-lifecycle-v5 worker agent**. You execute a specific role within a team pipeline. Your behavior is split into:

- **Built-in phases** (Phase 1, Phase 5): Task discovery, reporting, fast-advance, inner loop — defined below.
- **Role-specific phases** (Phase 2-4): Loaded from a role_spec markdown file.

---

## Prompt Input Parsing

Parse the following fields from your prompt:

| Field | Required | Description |
|-------|----------|-------------|
| `role` | Yes | Role name (analyst, writer, planner, executor, tester, reviewer, architect, fe-developer, fe-qa) |
| `role_spec` | Yes | Path to role-spec .md file containing Phase 2-4 instructions |
| `session` | Yes | Session folder path (e.g., `.workflow/.team/TLS-xxx-2026-02-27`) |
| `session_id` | Yes | Session ID (folder name, e.g., `TLS-xxx-2026-02-27`) |
| `team_name` | Yes | Team name for SendMessage |
| `requirement` | Yes | Original task/requirement description |
| `inner_loop` | Yes | `true` or `false` — whether to loop through same-prefix tasks |

---

## Role Spec Loading

1. `Read` the file at `role_spec` path
2. Parse **frontmatter** (YAML between `---` markers) to extract metadata:
   - `prefix`: Task prefix to filter (e.g., `RESEARCH`, `DRAFT`, `IMPL`)
   - `inner_loop`: Override from frontmatter if present
   - `discuss_rounds`: Array of discuss round IDs this role handles
   - `subagents`: Array of subagent types this role may call
   - `message_types`: Success/error/fix message type mappings
3. Parse **body** (content after frontmatter) to get Phase 2-4 execution instructions
4. Store parsed metadata and instructions for use in execution phases

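The frontmatter/body split described in steps 1-3 can be sketched roughly as follows (a hypothetical helper, not part of the removed file; real YAML parsing would use a proper library):

```javascript
// Split a role-spec file into YAML frontmatter (between the first pair of
// "---" markers) and the Phase 2-4 instruction body that follows it.
function splitRoleSpec(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/)
  if (!match) return { frontmatter: '', body: text }
  return { frontmatter: match[1], body: match[2] }
}

const spec = '---\nprefix: DRAFT\ninner_loop: true\n---\n## Phase 2\nDo work.'
const { frontmatter, body } = splitRoleSpec(spec)
```

A spec without a frontmatter block degrades gracefully: the whole text is treated as body.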
---

## Main Execution Loop

```
Entry:
  Parse prompt → extract role, role_spec, session, session_id, team_name, inner_loop
  Read role_spec → parse frontmatter (prefix, discuss_rounds, etc.)
  Read role_spec body → store Phase 2-4 instructions
  Load wisdom files from <session>/wisdom/ (if exist)

Main Loop:
  Phase 1: Task Discovery [built-in]
  Phase 2-4: Execute Role Spec [from .md]
  Phase 5: Report [built-in]
  inner_loop AND more same-prefix tasks? → Phase 5-L → back to Phase 1
  no more tasks? → Phase 5-F → STOP
```

---

## Phase 1: Task Discovery (Built-in)

Execute on every loop iteration:

1. Call `TaskList()` to get all tasks
2. **Filter** tasks matching ALL criteria:
   - Subject starts with this role's `prefix` + `-` (e.g., `DRAFT-`, `IMPL-`)
   - Status is `pending`
   - `blockedBy` list is empty (all dependencies resolved)
   - If role has `additional_prefixes` (e.g., reviewer handles REVIEW-* + QUALITY-* + IMPROVE-*), check all prefixes
3. **No matching tasks?**
   - If first iteration → report idle, SendMessage "No tasks found for [role]", STOP
   - If inner loop continuation → proceed to Phase 5-F (all done)
4. **Has matching tasks** → pick first by ID order
5. `TaskGet(taskId)` → read full task details
6. `TaskUpdate(taskId, status="in_progress")` → claim the task

### Resume Artifact Check

After claiming a task, check if output artifacts already exist (indicates resume after crash):

- Parse expected artifact path from task description or role_spec conventions
- Artifact exists AND appears complete → skip to Phase 5 (mark completed)
- Artifact missing or incomplete → proceed to Phase 2

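The Phase 1 filter (steps 1-4) amounts to a prefix/status/blockedBy predicate; a minimal sketch, with the `TaskList()` output stubbed as a plain array (field names mirror the criteria above):

```javascript
// A task is ready for this worker when its subject carries one of the role's
// prefixes, it is still pending, and every dependency has been resolved.
const isReady = (task, prefixes) =>
  prefixes.some(p => task.subject.startsWith(p + '-')) &&
  task.status === 'pending' &&
  task.blockedBy.length === 0

const tasks = [
  { id: 'T1', subject: 'DRAFT-overview', status: 'pending', blockedBy: [] },
  { id: 'T2', subject: 'IMPL-api', status: 'pending', blockedBy: [] },
  { id: 'T3', subject: 'DRAFT-details', status: 'pending', blockedBy: ['T1'] },
]

// Pick the first matching task by ID order (step 4).
const next = tasks.filter(t => isReady(t, ['DRAFT'])).sort((a, b) => (a.id < b.id ? -1 : 1))[0]
```

Roles with `additional_prefixes` simply pass a longer prefix list to the same predicate.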
---

## Phase 2-4: Role-Specific Execution

**Execute the instructions loaded from role_spec body.**

The role_spec contains Phase 2, Phase 3, and Phase 4 sections with domain-specific logic. Follow those instructions exactly. Key integration points with built-in infrastructure:

### Subagent Delegation

When role_spec instructs to call a subagent, use these templates:

**Discuss subagent** (for inline discuss rounds):

```
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: "Discuss <round-id>",
  prompt: `## Multi-Perspective Critique: <round-id>

### Input
- Artifact: <artifact-path>
- Round: <round-id>
- Perspectives: <perspective-list-from-role-spec>
- Session: <session-folder>
- Discovery Context: <session-folder>/spec/discovery-context.json

### Perspective Routing

| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Product | gemini | Product Manager | Market fit, user value, business viability |
| Technical | codex | Tech Lead | Feasibility, tech debt, performance, security |
| Quality | claude | QA Lead | Completeness, testability, consistency |
| Risk | gemini | Risk Analyst | Risk identification, dependencies, failure modes |
| Coverage | gemini | Requirements Analyst | Requirement completeness vs discovery-context |

### Execution Steps
1. Read artifact from <artifact-path>
2. For each perspective, launch CLI analysis in background
3. Wait for all CLI results
4. Divergence detection + consensus determination
5. Synthesize convergent/divergent themes + action items
6. Write discussion record to: <session-folder>/discussions/<round-id>-discussion.md

### Return Value
JSON with: verdict (consensus_reached|consensus_blocked), severity (HIGH|MEDIUM|LOW), average_rating, divergences, action_items, recommendation, discussion_path`
})
```

**Explore subagent** (for codebase exploration):

```
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore <angle>",
  prompt: `Explore codebase for: <query>

Focus angle: <angle>
Keywords: <keyword-list>
Session folder: <session-folder>

## Cache Check
1. Read <session-folder>/explorations/cache-index.json (if exists)
2. Look for entry with matching angle
3. If found AND file exists -> read cached result, return summary
4. If not found -> proceed to exploration

## Output
Write JSON to: <session-folder>/explorations/explore-<angle>.json
Update cache-index.json with new entry

Return summary: file count, pattern count, top 5 files, output path`
})
```

**Doc-generation subagent** (for writer document generation):

```
Task({
  subagent_type: "universal-executor",
  run_in_background: false,
  description: "Generate <doc-type>",
  prompt: `## Document Generation: <doc-type>

### Session
- Folder: <session-folder>
- Spec config: <spec-config-path>

### Document Config
- Type: <doc-type>
- Template: <template-path>
- Output: <output-path>
- Prior discussion: <discussion-file or "none">

### Writer Accumulator (prior decisions)
<JSON array of prior task summaries from context_accumulator>

### Output Requirements
1. Write document to <output-path>
2. Return JSON: { artifact_path, summary, key_decisions[], sections_generated[], warnings[] }`
})
```

### Consensus Handling

After a discuss subagent returns, handle the verdict:

| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | Phase 5 SendMessage includes structured format (see below). Do NOT self-revise. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |

**consensus_blocked SendMessage format**:

```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <artifact-path>
Discussion: <session-folder>/discussions/<round-id>-discussion.md
```

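The verdict table above reduces to a small decision function; a sketch (the action names are illustrative, not part of the original spec):

```javascript
// Map a discuss verdict + severity to the worker's follow-up action.
function consensusAction(verdict, severity) {
  if (verdict === 'consensus_reached') return 'proceed'
  if (severity === 'HIGH') return 'report-blocked' // structured SendMessage, no self-revision
  if (severity === 'MEDIUM') return 'proceed-with-warning' // warn in the Phase 5 report
  return 'proceed' // LOW: treat as consensus_reached, keep the notes
}
```

Keeping the mapping in one place makes the "do NOT self-revise on HIGH" rule hard to bypass accidentally.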
---

## Phase 5: Report + Fast-Advance (Built-in)

After Phase 4 completes, determine Phase 5 variant:

### Phase 5-L: Loop Completion (when inner_loop=true AND more same-prefix tasks pending)

1. **TaskUpdate**: Mark current task `completed`
2. **Message Bus**: Log completion
   ```
   mcp__ccw-tools__team_msg(
     operation="log",
     team=<session_id>,
     from=<role>,
     to="coordinator",
     type=<message_types.success>,
     summary="[<role>] <task-id> complete. <brief-summary>",
     ref=<artifact-path>
   )
   ```
   **CLI fallback**: `ccw team log --team <session_id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
3. **Accumulate summary** to context_accumulator (in-memory):
   ```
   context_accumulator.append({
     task: "<task-id>",
     artifact: "<output-path>",
     key_decisions: <from Phase 4>,
     discuss_verdict: <from Phase 4 or "none">,
     discuss_rating: <from Phase 4 or null>,
     summary: "<brief summary>"
   })
   ```
4. **Interrupt check**:
   - consensus_blocked HIGH → SendMessage to coordinator → STOP
   - Cumulative errors >= 3 → SendMessage to coordinator → STOP
5. **Loop**: Return to Phase 1 to find next same-prefix task

**Phase 5-L does NOT**: SendMessage to coordinator, Fast-Advance, spawn successors.

### Phase 5-F: Final Report (when no more same-prefix tasks OR inner_loop=false)

1. **TaskUpdate**: Mark current task `completed`
2. **Message Bus**: Log completion (same as Phase 5-L step 2)
3. **Compile final report**: All task summaries + discuss results + artifact paths
4. **Fast-Advance Check**:
   - Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
   - Apply fast-advance rules (see table below)
5. **SendMessage** to coordinator OR **spawn successor** directly

### Fast-Advance Rules

| Condition | Action |
|-----------|--------|
| Same-prefix successor (inner loop role) | Do NOT spawn — main agent handles via inner loop |
| 1 ready task, simple linear successor, different prefix | Spawn directly via `Task(run_in_background: true)` |
| Multiple ready tasks (parallel window) | SendMessage to coordinator (needs orchestration) |
| No ready tasks + others running | SendMessage to coordinator (status update) |
| No ready tasks + nothing running | SendMessage to coordinator (pipeline may be complete) |
| Checkpoint task (e.g., spec->impl transition) | SendMessage to coordinator (needs user confirmation) |

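The rule table can be read as a decision procedure over the set of ready tasks; a rough sketch (field names such as `checkpoint` are assumed for illustration and are not defined by the original spec):

```javascript
// Decide what to do after Phase 5-F, following the Fast-Advance Rules table.
function fastAdvanceDecision(readyTasks, myPrefix) {
  if (readyTasks.length === 1) {
    const task = readyTasks[0]
    if (task.subject.startsWith(myPrefix + '-')) return 'inner-loop' // same prefix: no spawn
    if (task.checkpoint) return 'message-coordinator' // needs user confirmation
    return 'spawn-successor' // simple linear successor, different prefix
  }
  // Parallel window, status update, or possible pipeline completion: all go to coordinator.
  return 'message-coordinator'
}

const decision = fastAdvanceDecision([{ subject: 'TEST-unit', checkpoint: false }], 'IMPL')
```

Every ambiguous case deliberately falls through to the coordinator, so a worker never orchestrates more than one successor.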
### Fast-Advance Spawn Template

When fast-advancing to a different-prefix successor:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <successor-role> worker",
  team_name: <team_name>,
  name: "<successor-role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <successor-role>
role_spec: <derive from SKILL path>/role-specs/<successor-role>.md
session: <session>
session_id: <session_id>
team_name: <team_name>
requirement: <requirement>
inner_loop: <true|false based on successor role>`
})
```

### SendMessage Format

```
SendMessage(team_name=<team_name>, recipient="coordinator", message="[<role>] <final-report>")
```

**Final report contents**:
- Tasks completed (count + list)
- Artifacts produced (paths)
- Discuss results (verdicts + ratings)
- Key decisions (from context_accumulator)
- Any warnings or issues

---

## Inner Loop Framework

When `inner_loop=true`, the agent processes ALL same-prefix tasks sequentially in a single agent instance:

```
context_accumulator = []

Phase 1: Find first <prefix>-* task
Phase 2-4: Execute role spec
Phase 5-L: Mark done, log, accumulate, check interrupts
More <prefix>-* tasks? → Phase 1 (loop)
No more? → Phase 5-F (final report)
```

**context_accumulator**: Maintained in-memory across loop iterations. Each entry contains task summary + key decisions + discuss results. Passed to subagents as context for knowledge continuity.

**Phase 5-L vs Phase 5-F**:

| Step | Phase 5-L (loop) | Phase 5-F (final) |
|------|-----------------|------------------|
| TaskUpdate completed | YES | YES |
| team_msg log | YES | YES |
| Accumulate summary | YES | - |
| SendMessage to coordinator | NO | YES (all tasks) |
| Fast-Advance check | - | YES |

**Interrupt conditions** (break inner loop immediately):
- consensus_blocked HIGH → SendMessage → STOP
- Cumulative errors >= 3 → SendMessage → STOP

**Crash recovery**: If agent crashes mid-loop, completed tasks are safe (TaskUpdate + artifacts on disk). Coordinator detects orphaned in_progress task on resume, resets to pending, re-spawns. New agent resumes from the interrupted task via Resume Artifact Check.

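The per-iteration accumulator entry has the shape listed in Phase 5-L step 3; sketched as a helper (hypothetical wrapper, mirroring the listed fields):

```javascript
// Append one Phase 5-L summary entry; verdict/rating take defaults when no discuss round ran.
const contextAccumulator = []
function accumulate({ task, artifact, keyDecisions, verdict, rating, summary }) {
  contextAccumulator.push({
    task,
    artifact,
    key_decisions: keyDecisions,
    discuss_verdict: verdict ?? 'none',
    discuss_rating: rating ?? null,
    summary,
  })
}

accumulate({
  task: 'DRAFT-001',
  artifact: 'docs/overview.md',
  keyDecisions: ['use REST'],
  summary: 'Drafted overview',
})
```

The accumulated array is what Phase 5-F compiles into the final report and what later subagent prompts receive as prior context.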
---

## Wisdom Accumulation

### Load (Phase 2)

Extract session folder from prompt. Read wisdom files if they exist:

```
<session>/wisdom/learnings.md
<session>/wisdom/decisions.md
<session>/wisdom/conventions.md
<session>/wisdom/issues.md
```

Use wisdom context to inform Phase 2-4 execution.

### Contribute (Phase 4/5)

Write discoveries to corresponding wisdom files:
- New patterns → `learnings.md`
- Architecture/design decisions → `decisions.md`
- Codebase conventions → `conventions.md`
- Risks and known issues → `issues.md`

---

## Message Bus Protocol

Always use `mcp__ccw-tools__team_msg` for logging. Parameters:

| Param | Value |
|-------|-------|
| operation | "log" |
| team | `<session_id>` (NOT team_name) |
| from | `<role>` |
| to | "coordinator" |
| type | From role_spec message_types |
| summary | `[<role>] <message>` |
| ref | artifact path (optional) |

**Critical**: `team` param must be session ID (e.g., `TLS-my-project-2026-02-27`), not team name.

**CLI fallback** (if MCP tool unavailable):
```
ccw team log --team <session_id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json
```

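The parameter table, including the session-ID gotcha, can be captured in a small builder (a sketch only; the actual MCP call shape may differ):

```javascript
// Build team_msg parameters; `team` takes the session ID, never the team name.
function teamMsgParams(sessionId, role, type, message, ref) {
  const params = {
    operation: 'log',
    team: sessionId, // e.g. "TLS-my-project-2026-02-27"
    from: role,
    to: 'coordinator',
    type,
    summary: `[${role}] ${message}`,
  }
  if (ref) params.ref = ref // artifact path is optional
  return params
}

const msg = teamMsgParams('TLS-my-project-2026-02-27', 'writer', 'success', 'DRAFT-001 complete')
```

Centralizing the builder also enforces the `[<role>]` summary prefix that the Output Tag rule requires.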
---

## Role Isolation Rules

| Allowed | Prohibited |
|---------|-----------|
| Process own prefix tasks | Process other role's prefix tasks |
| SendMessage to coordinator | Directly communicate with other workers |
| Use declared subagents (discuss, explore, doc-gen) | Create tasks for other roles |
| Fast-advance simple successors | Spawn parallel worker batches |
| Write to own artifacts + wisdom | Modify resources outside own scope |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Role spec file not found | Report error via SendMessage, STOP |
| Subagent failure | Retry once with alternative subagent_type. Still fails → log warning, continue if possible |
| Discuss subagent failure | Skip discuss, log warning in report. Proceed without discuss verdict |
| Explore subagent failure | Continue without codebase context |
| Cumulative errors >= 3 | SendMessage to coordinator with error summary, STOP |
| No tasks found | SendMessage idle status, STOP |
| Context missing (prior doc, template) | Request from coordinator via SendMessage |
| Agent crash mid-loop | Self-healing: coordinator resets orphaned task, re-spawns |

---

## Output Tag

All output lines must be prefixed with `[<role>]` tag for coordinator message routing.

@@ -834,7 +834,7 @@ todos = [
| `workflow:unified-execute-with-file` | Universal execution engine - consumes plan output from collaborative-plan, req-plan, brainstorm |
| `workflow:clean` | Intelligent code cleanup - mainline detection, stale artifact removal |
| `workflow:init` | Initialize `.workflow/project-tech.json` with project analysis |
-| `workflow:init-guidelines` | Interactive wizard to fill `specs/*.md` |
+| `workflow:init-guidelines` | Interactive wizard to fill `project-guidelines.json` |

---

@@ -160,7 +160,7 @@ ${issueList}

### Project Context (MANDATORY)
1. Read: .workflow/project-tech.json (technology stack, architecture)
-2. Read: .workflow/specs/*.md (constraints and conventions)
+2. Read: .workflow/project-guidelines.json (constraints and conventions)

### Workflow
1. Fetch issue details: ccw issue status <id> --json
@@ -451,20 +451,7 @@ CONSTRAINTS: ${perspective.constraints}
   - Corrected assumptions
   - New insights

-5. **📌 Intent Drift Check** (every round ≥ 2)
-   - Re-read "User Intent" from discussion.md header
-   - For each original intent item, check: addressed / in-progress / not yet discussed / implicitly absorbed
-   - If any item is "implicitly absorbed" (addressed by a different solution than originally envisioned), explicitly note this in discussion.md:
-     ```markdown
-     #### Intent Coverage Check
-     - ✅ Intent 1: [addressed in Round N]
-     - 🔄 Intent 2: [in-progress, current focus]
-     - ⚠️ Intent 3: [implicitly absorbed by X — needs explicit confirmation]
-     - ❌ Intent 4: [not yet discussed]
-     ```
-   - If any item is ❌ or ⚠️ after 3+ rounds, surface it to the user in the next round's presentation
-
-6. **Repeat or Converge**
+5. **Repeat or Converge**
   - Continue loop (max 5 rounds) or exit to Phase 4

**Discussion Actions**:
@@ -495,28 +482,7 @@ CONSTRAINTS: ${perspective.constraints}

**Workflow Steps**:

-1. **📌 Intent Coverage Verification** (MANDATORY before synthesis)
-   - Re-read all original "User Intent" items from discussion.md header
-   - For EACH intent item, determine coverage status:
-     - **✅ Addressed**: Explicitly discussed and concluded with clear design/recommendation
-     - **🔀 Transformed**: Original intent evolved into a different solution — document the transformation chain
-     - **⚠️ Absorbed**: Implicitly covered by a broader solution — flag for explicit confirmation
-     - **❌ Missed**: Not discussed — MUST be either addressed now or explicitly listed as out-of-scope with reason
-   - Write "Intent Coverage Matrix" to discussion.md:
-     ```markdown
-     ### Intent Coverage Matrix
-     | # | Original Intent | Status | Where Addressed | Notes |
-     |---|----------------|--------|-----------------|-------|
-     | 1 | [intent text] | ✅ Addressed | Round N, Conclusion #M | |
-     | 2 | [intent text] | 🔀 Transformed | Round N → Round M | Original: X → Final: Y |
-     | 3 | [intent text] | ❌ Missed | — | Reason for omission |
-     ```
-   - **Gate**: If any item is ❌ Missed, MUST either:
-     - (a) Add a dedicated discussion round to address it before continuing, OR
-     - (b) Explicitly confirm with user that it is intentionally deferred
-   - Add `intent_coverage[]` to conclusions.json
-
-2. **Consolidate Insights**
+1. **Consolidate Insights**
   - Extract all findings from discussion timeline
   - **📌 Compile Decision Trail**: Aggregate all Decision Records from Phases 1-3 into a consolidated decision log
   - **Key conclusions**: Main points with evidence and confidence levels (high/medium/low)
@@ -542,53 +508,11 @@ CONSTRAINTS: ${perspective.constraints}
   - **Trade-offs Made**: Key trade-offs and why certain paths were chosen over others
   - Add session statistics: rounds, duration, sources, artifacts, **decision count**

-3. **Post-Completion Options**
-   ```javascript
-   const hasActionableRecs = conclusions.recommendations?.some(r => r.priority === 'high' || r.priority === 'medium')
-
-   const nextStep = AskUserQuestion({
-     questions: [{
-       question: "Analysis complete. What's next?",
-       header: "Next Step",
-       multiSelect: false,
-       options: [
-         { label: hasActionableRecs ? "生成任务 (Recommended)" : "生成任务", description: "Launch workflow-lite-plan with analysis context" },
-         { label: "创建Issue", description: "Launch issue-discover with conclusions" },
-         { label: "导出报告", description: "Generate standalone analysis report" },
-         { label: "完成", description: "No further action" }
-       ]
-     }]
-   })
-   ```
-
-   **Handle "生成任务"**:
-   ```javascript
-   if (nextStep.includes("生成任务")) {
-     // 1. Build task description from high/medium priority recommendations
-     const taskDescription = conclusions.recommendations
-       .filter(r => r.priority === 'high' || r.priority === 'medium')
-       .map(r => r.action)
-       .join('\n') || conclusions.summary
-
-     // 2. Assemble compact analysis context as inline memory block
-     const contextLines = [
-       `## Prior Analysis (${sessionId})`,
-       `**Summary**: ${conclusions.summary}`
-     ]
-     const codebasePath = `${sessionFolder}/exploration-codebase.json`
-     if (file_exists(codebasePath)) {
-       const data = JSON.parse(Read(codebasePath))
-       const files = (data.relevant_files || []).slice(0, 8).map(f => f.path || f.file || f).filter(Boolean)
-       const findings = (data.key_findings || []).slice(0, 5)
-       if (files.length) contextLines.push(`**Key Files**: ${files.join(', ')}`)
-       if (findings.length) contextLines.push(`**Key Findings**:\n${findings.map(f => `- ${f}`).join('\n')}`)
-     }
-
-     // 3. Call lite-plan with enriched task description (no special flags)
-     Skill(skill="workflow-lite-plan", args=`"${taskDescription}\n\n${contextLines.join('\n')}"`)
-   }
-   ```
+3. **Post-Completion Options** (AskUserQuestion)
+   - **创建Issue**: Launch issue-discover with conclusions
+   - **生成任务**: Launch workflow-lite-plan for implementation
+   - **导出报告**: Generate standalone analysis report
+   - **完成**: No further action

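The recommendation filter that the removed handler used can be checked in isolation; a sketch with sample `conclusions` data (values are illustrative):

```javascript
// Keep high/medium priority actions; fall back to the summary when none remain.
const conclusions = {
  summary: 'Refactor session storage',
  recommendations: [
    { priority: 'high', action: 'Extract storage interface' },
    { priority: 'low', action: 'Rename internal helpers' },
    { priority: 'medium', action: 'Add migration tests' },
  ],
}

const taskDescription = conclusions.recommendations
  .filter(r => r.priority === 'high' || r.priority === 'medium')
  .map(r => r.action)
  .join('\n') || conclusions.summary
```

The `|| conclusions.summary` fallback fires because `[].join('\n')` is the empty string, which is falsy.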
**conclusions.json Schema**:
- `session_id`: Session identifier
@@ -601,12 +525,10 @@ CONSTRAINTS: ${perspective.constraints}
- `open_questions[]`: Unresolved questions
- `follow_up_suggestions[]`: {type, summary}
- `decision_trail[]`: {round, decision, context, options_considered, chosen, reason, impact}
-- `intent_coverage[]`: {intent, status, where_addressed, notes}

**Success Criteria**:
- conclusions.json created with final synthesis
- discussion.md finalized with conclusions and decision trail
-- **📌 Intent Coverage Matrix** verified — all original intents accounted for (no ❌ Missed without explicit user deferral)
- User offered next step options
- Session complete
- **📌 Complete decision trail** documented and traceable from initial scoping to final conclusions
@@ -769,8 +691,6 @@ User agrees with current direction, wants deeper code analysis
- Need simple task breakdown
- Focus on quick execution planning

-> **Note**: Phase 4「生成任务」assembles analysis context as inline `## Prior Analysis` block in task description, allowing lite-plan to skip redundant exploration automatically.
-
---

**Now execute analyze-with-file for**: $ARGUMENTS

@@ -475,43 +475,21 @@ if (selectedCategories.includes('Sessions')) {
  }
}

-// Update project-tech.json: remove development_index entries referencing deleted sessions
+// Update project-tech.json if features referenced deleted sessions
const projectPath = '.workflow/project-tech.json'
if (fileExists(projectPath)) {
  const project = JSON.parse(Read(projectPath))
-  const deletedSessionIds = results.deleted
-    .filter(p => p.match(/WFS-|lite-plan/))
-    .map(p => p.split('/').pop())
-
-  if (project.development_index) {
-    for (const category of Object.keys(project.development_index)) {
-      project.development_index[category] = project.development_index[category].filter(entry =>
-        !deletedSessionIds.includes(entry.session_id)
-      )
-    }
-  }
+  const deletedPaths = new Set(results.deleted)
+  project.features = project.features.filter(f =>
+    !deletedPaths.has(f.traceability?.archive_path)
+  )

-  project._metadata.last_updated = getUtc8ISOString()
+  project.statistics.total_features = project.features.length
+  project.statistics.last_updated = getUtc8ISOString()

  Write(projectPath, JSON.stringify(project, null, 2))
}

-// Update specs/*.md: remove learnings referencing deleted sessions
-const guidelinesPath = '.workflow/specs/*.md'
-if (fileExists(guidelinesPath)) {
-  const guidelines = JSON.parse(Read(guidelinesPath))
-  const deletedSessionIds = results.deleted
-    .filter(p => p.match(/WFS-|lite-plan/))
-    .map(p => p.split('/').pop())
-
-  if (guidelines.learnings) {
-    guidelines.learnings = guidelines.learnings.filter(l =>
-      !deletedSessionIds.includes(l.session_id)
-    )
-  }
-
-  guidelines._metadata.updated_at = getUtc8ISOString()
-  Write(guidelinesPath, JSON.stringify(guidelines, null, 2))
-}
```
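The replacement cleanup logic on the `+` side hinges on optional chaining into `traceability`; a standalone sketch with sample data (paths are illustrative):

```javascript
// Drop features whose archived session folder was just deleted; features with
// no traceability record survive because undefined is never in the set.
const deletedPaths = new Set(['.workflow/archive/WFS-old-session'])

let features = [
  { name: 'auth', traceability: { archive_path: '.workflow/archive/WFS-old-session' } },
  { name: 'billing', traceability: { archive_path: '.workflow/archive/WFS-kept-session' } },
  { name: 'legacy' }, // no traceability block at all
]

features = features.filter(f => !deletedPaths.has(f.traceability?.archive_path))
```

Without the `?.`, the bare-feature entry would throw; with it, `f.traceability?.archive_path` evaluates to `undefined` and the feature is kept.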

**Step 4.4: Report Results**
@@ -563,10 +541,8 @@ Cleanup manifest archived to: ${sessionFolder}/cleanup-manifest.json
| Manifest parse error | Regenerate from filesystem scan |
| Empty discovery | Report "codebase is clean" |

## Related Commands

-- `/workflow:session:sync` - Sync session work to specs/*.md + project-tech (forward write)
- `/workflow:session:complete` - Properly archive active sessions
- `memory-capture` skill - Save session memory before cleanup
- `workflow-execute` skill - View current workflow state
@@ -205,11 +205,6 @@ Task(
1. **Prioritize Latest Documentation**: Search for and reference latest README, design docs, architecture guides when available
2. **Handle Ambiguities**: When requirement ambiguities exist, ask user for clarification (use AskUserQuestion) instead of assuming interpretations

-### Project Context (MANDATORY)
-Read and incorporate:
-- \`.workflow/project-tech.json\` (if exists): Technology stack, architecture
-- \`.workflow/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS on sub-domain splitting and plan structure
-
### Input Requirements
${taskDescription}
@@ -354,19 +349,21 @@ subDomains.map(sub =>
**TASK ID Range**: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
**Session**: ${sessionId}

-### Project Context (MANDATORY)
-Read and incorporate:
-- \`.workflow/project-tech.json\` (if exists): Technology stack, architecture
-- \`.workflow/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS
-
## Dual Output Tasks

### Task 1: Generate Two-Layer Plan Output
-Output: ${sessionFolder}/agents/${sub.focus_area}/plan.json
-Output: ${sessionFolder}/agents/${sub.focus_area}/.task/TASK-*.json
+Output: ${sessionFolder}/agents/${sub.focus_area}/plan.json (overview with task_ids[])
+Output: ${sessionFolder}/agents/${sub.focus_area}/.task/TASK-*.json (independent task files)
Schema (plan): ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json
Schema (tasks): ~/.ccw/workflows/cli-templates/schemas/task-schema.json

+**Two-Layer Output Format**:
+- plan.json: Overview with task_ids[] referencing .task/ files (NO tasks[] array)
+- .task/TASK-*.json: Independent task files following task-schema.json
+- plan.json required: summary, approach, task_ids, task_count, _metadata (with plan_type)
+- Task files required: id, title, description, depends_on, convergence (with criteria[])
+- Task fields: files[].change (not modification_points), convergence.criteria (not acceptance), test (not verification)

### Task 2: Sync Summary to plan-note.md

**Locate Your Sections**:
@@ -630,8 +630,6 @@ Why is config value None during update?

## Post-Completion Expansion

-**Auto-sync**: Run `/workflow:session:sync -y "{summary}"` to update specs/*.md + project-tech.
-
After completion, ask the user whether to expand into issues (test/enhance/refactor/doc); for each selected item call `/issue:new "{summary} - {dimension}"`

---
@@ -1,6 +1,6 @@
---
name: init-guidelines
-description: Interactive wizard to fill specs/*.md based on project analysis
+description: Interactive wizard to fill project-guidelines.json based on project analysis
argument-hint: "[--reset]"
examples:
  - /workflow:init-guidelines
@@ -11,7 +11,7 @@ examples:

## Overview

-Interactive multi-round wizard that analyzes the current project (via `project-tech.json`) and asks targeted questions to populate `.workflow/specs/*.md` with coding conventions, constraints, and quality rules.
+Interactive multi-round wizard that analyzes the current project (via `project-tech.json`) and asks targeted questions to populate `.workflow/project-guidelines.json` with coding conventions, constraints, and quality rules.

**Design Principle**: Questions are dynamically generated based on the project's tech stack, architecture, and patterns — not generic boilerplate.
@@ -31,7 +31,7 @@ Input Parsing:

Step 1: Check Prerequisites
├─ project-tech.json must exist (run /workflow:init first)
-├─ specs/*.md: check if populated or scaffold-only
+├─ project-guidelines.json: check if populated or scaffold-only
└─ If populated + no --reset → Ask: "Guidelines already exist. Overwrite or append?"

Step 2: Load Project Context
@@ -44,7 +44,7 @@ Step 3: Multi-Round Interactive Questionnaire
├─ Round 4: Performance & Security Constraints (performance, security)
└─ Round 5: Quality Rules (quality_rules)

-Step 4: Write specs/*.md
+Step 4: Write project-guidelines.json

Step 5: Display Summary
```
@@ -55,7 +55,7 @@ Step 5: Display Summary

```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
-bash(test -f .workflow/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
+bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```

**If TECH_NOT_FOUND**: Exit with message
@@ -71,10 +71,12 @@ const reset = $ARGUMENTS.includes('--reset')
**If GUIDELINES_EXISTS and not --reset**: Check if guidelines are populated (not just scaffold)

```javascript
-// Check if specs already have content via ccw spec list
-const specsList = Bash('ccw spec list --json 2>/dev/null || echo "{}"')
-const specsData = JSON.parse(specsList)
-const isPopulated = (specsData.total || 0) > 5 // More than seed docs
+const guidelines = JSON.parse(Read('.workflow/project-guidelines.json'))
+const isPopulated =
+  guidelines.conventions.coding_style.length > 0 ||
+  guidelines.conventions.naming_patterns.length > 0 ||
+  guidelines.constraints.architecture.length > 0 ||
+  guidelines.constraints.tech_stack.length > 0

if (isPopulated) {
  AskUserQuestion({
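The populated-vs-scaffold check can be sketched standalone; the `guidelines` shape here is an assumption taken from the fields referenced in the diff:

```javascript
// A scaffold as /workflow:init might create it (assumed shape).
const guidelines = {
  conventions: { coding_style: [], naming_patterns: [] },
  constraints: { architecture: [], tech_stack: [] }
}

// Populated means at least one rule exists in any checked array.
function isPopulated(g) {
  return g.conventions.coding_style.length > 0 ||
    g.conventions.naming_patterns.length > 0 ||
    g.constraints.architecture.length > 0 ||
    g.constraints.tech_stack.length > 0
}

const before = isPopulated(guidelines)
guidelines.constraints.architecture.push('No direct DB access from controllers')
const after = isPopulated(guidelines)
```

An empty scaffold is reported as unpopulated; a single rule anywhere flips the check.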
@@ -98,18 +100,22 @@ if (isPopulated) {
### Step 2: Load Project Context

```javascript
-// Load project context via ccw spec load for planning context
-const projectContext = Bash('ccw spec load --category planning 2>/dev/null || echo "{}"')
-const specData = JSON.parse(projectContext)
+const projectTech = JSON.parse(Read('.workflow/project-tech.json'))

-// Extract key info from loaded specs for generating smart questions
-const languages = specData.overview?.technology_stack?.languages || []
+// Extract key info for generating smart questions
+const languages = projectTech.technology_analysis?.technology_stack?.languages
+  || projectTech.overview?.technology_stack?.languages || []
const primaryLang = languages.find(l => l.primary)?.name || languages[0]?.name || 'Unknown'
-const frameworks = specData.overview?.technology_stack?.frameworks || []
-const testFrameworks = specData.overview?.technology_stack?.test_frameworks || []
-const archStyle = specData.overview?.architecture?.style || 'Unknown'
-const archPatterns = specData.overview?.architecture?.patterns || []
-const buildTools = specData.overview?.technology_stack?.build_tools || []
+const frameworks = projectTech.technology_analysis?.technology_stack?.frameworks
+  || projectTech.overview?.technology_stack?.frameworks || []
+const testFrameworks = projectTech.technology_analysis?.technology_stack?.test_frameworks
+  || projectTech.overview?.technology_stack?.test_frameworks || []
+const archStyle = projectTech.technology_analysis?.architecture?.style
+  || projectTech.overview?.architecture?.style || 'Unknown'
+const archPatterns = projectTech.technology_analysis?.architecture?.patterns
+  || projectTech.overview?.architecture?.patterns || []
+const buildTools = projectTech.technology_analysis?.technology_stack?.build_tools
+  || projectTech.overview?.technology_stack?.build_tools || []
```

### Step 3: Multi-Round Interactive Questionnaire
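The fallback chain above (prefer `technology_analysis`, fall back to `overview`, default to empty) can be demonstrated with a minimal object; the sample data is hypothetical:

```javascript
// project-tech.json may carry either a newer `technology_analysis` block or
// an older `overview` block; here only the older one is present.
const projectTech = {
  overview: {
    technology_stack: {
      languages: [{ name: 'TypeScript', primary: true }, { name: 'Python' }]
    }
  }
}

// Optional chaining returns undefined for the missing block, so || falls
// through to the overview block, then to an empty array.
const languages = projectTech.technology_analysis?.technology_stack?.languages
  || projectTech.overview?.technology_stack?.languages || []
const primaryLang = languages.find(l => l.primary)?.name || languages[0]?.name || 'Unknown'
```

If neither block exists, `languages` is `[]` and `primaryLang` resolves to `'Unknown'`.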
@@ -320,108 +326,64 @@ AskUserQuestion({

**Process Round 5 answers** → add to `quality_rules` array as `{ rule, scope, enforced_by }` objects.

-### Step 4: Write specs/*.md
+### Step 4: Write project-guidelines.json

-For each category of collected answers, append rules to the corresponding spec MD file. Each spec file uses YAML frontmatter with `readMode`, `priority`, `category`, and `keywords`.
-
-**Category Assignment**: Based on the round and question type:
-- Round 1-2 (conventions): `category: general` (applies to all stages)
-- Round 3 (architecture/tech): `category: planning` (planning phase)
-- Round 4 (performance/security): `category: execution` (implementation phase)
-- Round 5 (quality): `category: execution` (testing phase)
-
```javascript
-// Helper: append rules to a spec MD file with category support
-function appendRulesToSpecFile(filePath, rules, defaultCategory = 'general') {
-  if (rules.length === 0) return
-
-  // Check if file exists
-  if (!file_exists(filePath)) {
-    // Create file with frontmatter including category
-    const frontmatter = `---
-title: ${filePath.includes('conventions') ? 'Coding Conventions' : filePath.includes('constraints') ? 'Architecture Constraints' : 'Quality Rules'}
-readMode: optional
-priority: medium
-category: ${defaultCategory}
-scope: project
-dimension: specs
-keywords: [${defaultCategory}, ${filePath.includes('conventions') ? 'convention' : filePath.includes('constraints') ? 'constraint' : 'quality'}]
----
-
-# ${filePath.includes('conventions') ? 'Coding Conventions' : filePath.includes('constraints') ? 'Architecture Constraints' : 'Quality Rules'}
-
-`
-    Write(filePath, frontmatter)
-  }
-
-  const existing = Read(filePath)
-  // Append new rules as markdown list items after existing content
-  const newContent = existing.trimEnd() + '\n' + rules.map(r => `- ${r}`).join('\n') + '\n'
-  Write(filePath, newContent)
-}
-
-// Write conventions (general category)
-appendRulesToSpecFile('.workflow/specs/coding-conventions.md',
-  [...newCodingStyle, ...newNamingPatterns, ...newFileStructure, ...newDocumentation],
-  'general')
-
-// Write constraints (planning category)
-appendRulesToSpecFile('.workflow/specs/architecture-constraints.md',
-  [...newArchitecture, ...newTechStack, ...newPerformance, ...newSecurity],
-  'planning')
-
-// Write quality rules (execution category)
-if (newQualityRules.length > 0) {
-  const qualityPath = '.workflow/specs/quality-rules.md'
-  if (!file_exists(qualityPath)) {
-    Write(qualityPath, `---
-title: Quality Rules
-readMode: required
-priority: high
-category: execution
-scope: project
-dimension: specs
-keywords: [execution, quality, testing, coverage, lint]
----
-
-# Quality Rules
-
-`)
-  }
-  appendRulesToSpecFile(qualityPath,
-    newQualityRules.map(q => `${q.rule} (scope: ${q.scope}, enforced by: ${q.enforced_by})`),
-    'execution')
-}
-
-// Rebuild spec index after writing
-Bash('ccw spec rebuild')
+// Build the final guidelines object
+const finalGuidelines = {
+  conventions: {
+    coding_style: existingCodingStyle.concat(newCodingStyle),
+    naming_patterns: existingNamingPatterns.concat(newNamingPatterns),
+    file_structure: existingFileStructure.concat(newFileStructure),
+    documentation: existingDocumentation.concat(newDocumentation)
+  },
+  constraints: {
+    architecture: existingArchitecture.concat(newArchitecture),
+    tech_stack: existingTechStack.concat(newTechStack),
+    performance: existingPerformance.concat(newPerformance),
+    security: existingSecurity.concat(newSecurity)
+  },
+  quality_rules: existingQualityRules.concat(newQualityRules),
+  learnings: existingLearnings, // Preserve existing learnings
+  _metadata: {
+    created_at: existingMetadata?.created_at || new Date().toISOString(),
+    version: "1.0.0",
+    last_updated: new Date().toISOString(),
+    updated_by: "workflow:init-guidelines"
+  }
+}
+
+Write('.workflow/project-guidelines.json', JSON.stringify(finalGuidelines, null, 2))
```

### Step 5: Display Summary

```javascript
-const countConventions = newCodingStyle.length + newNamingPatterns.length
-  + newFileStructure.length + newDocumentation.length
-const countConstraints = newArchitecture.length + newTechStack.length
-  + newPerformance.length + newSecurity.length
-const countQuality = newQualityRules.length
-
-// Get updated spec list
-const specsList = Bash('ccw spec list --json 2>/dev/null || echo "{}"')
+const countConventions = finalGuidelines.conventions.coding_style.length
+  + finalGuidelines.conventions.naming_patterns.length
+  + finalGuidelines.conventions.file_structure.length
+  + finalGuidelines.conventions.documentation.length
+
+const countConstraints = finalGuidelines.constraints.architecture.length
+  + finalGuidelines.constraints.tech_stack.length
+  + finalGuidelines.constraints.performance.length
+  + finalGuidelines.constraints.security.length
+
+const countQuality = finalGuidelines.quality_rules.length

console.log(`
✓ Project guidelines configured

## Summary
-- Conventions: ${countConventions} rules added to coding-conventions.md
-- Constraints: ${countConstraints} rules added to architecture-constraints.md
-- Quality rules: ${countQuality} rules added to quality-rules.md
+- Conventions: ${countConventions} rules (coding: ${cs}, naming: ${np}, files: ${fs}, docs: ${doc})
+- Constraints: ${countConstraints} rules (arch: ${ar}, tech: ${ts}, perf: ${pf}, security: ${sc})
+- Quality rules: ${countQuality}

-Spec index rebuilt. Use \`ccw spec list\` to view all specs.
+File: .workflow/project-guidelines.json

Next steps:
- Use /workflow:session:solidify to add individual rules later
-- Specs are auto-loaded via hook on each prompt
+- Guidelines will be auto-loaded by /workflow:plan for task generation
`)
```
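The merge-then-write pattern in Step 4 can be sketched in miniature; the `existing*` arrays and field names are assumptions mirroring the code in this hunk:

```javascript
// Previously saved guidelines (assumed shape) plus newly collected answers.
const existing = {
  conventions: { coding_style: ['2-space indent'] },
  quality_rules: [{ rule: 'coverage >= 80%', scope: 'all', enforced_by: 'ci' }]
}
const newCodingStyle = ['Prefer named exports']
const newQualityRules = [] // a round may produce no new rules

// concat never mutates the existing arrays, so prior entries are preserved
// even when the wizard is re-run in append mode.
const finalGuidelines = {
  conventions: {
    coding_style: existing.conventions.coding_style.concat(newCodingStyle)
  },
  quality_rules: existing.quality_rules.concat(newQualityRules),
  _metadata: {
    version: '1.0.0',
    last_updated: new Date().toISOString(),
    updated_by: 'workflow:init-guidelines'
  }
}
```

Serializing `finalGuidelines` with `JSON.stringify(..., null, 2)` then yields the file content.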
@@ -443,5 +405,4 @@ When converting user selections to guideline entries:
## Related Commands

- `/workflow:init` - Creates scaffold; optionally calls this command
-- `/workflow:init-specs` - Interactive wizard to create individual specs with scope selection
- `/workflow:session:solidify` - Add individual rules one at a time
@@ -1,380 +0,0 @@
|
|||||||
---
|
|
||||||
name: init-specs
|
|
||||||
description: Interactive wizard to create individual specs or personal constraints with scope selection
|
|
||||||
argument-hint: "[--scope <global|project>] [--dimension <specs|personal>] [--category <general|exploration|planning|execution>]"
|
|
||||||
examples:
|
|
||||||
- /workflow:init-specs
|
|
||||||
- /workflow:init-specs --scope global --dimension personal
|
|
||||||
- /workflow:init-specs --scope project --dimension specs
|
|
||||||
---
|
|
||||||
|
|
||||||
# Workflow Init Specs Command (/workflow:init-specs)
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
Interactive wizard for creating individual specs or personal constraints with scope selection. This command provides a guided experience for adding new rules to the spec system.
|
|
||||||
|
|
||||||
**Key Features**:
|
|
||||||
- Supports both project specs and personal specs
|
|
||||||
- Scope selection (global vs project) for personal specs
|
|
||||||
- Category-based organization for workflow stages
|
|
||||||
- Interactive mode with smart defaults
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
```bash
|
|
||||||
/workflow:init-specs # Interactive mode (all prompts)
|
|
||||||
/workflow:init-specs --scope global # Create global personal spec
|
|
||||||
/workflow:init-specs --scope project # Create project spec (default)
|
|
||||||
/workflow:init-specs --dimension specs # Project conventions/constraints
|
|
||||||
/workflow:init-specs --dimension personal # Personal preferences
|
|
||||||
/workflow:init-specs --category exploration # Workflow stage category
|
|
||||||
```
|
|
||||||
|
|
||||||
## Parameters
|
|
||||||
|
|
||||||
| Parameter | Values | Default | Description |
|
|
||||||
|-----------|--------|---------|-------------|
|
|
||||||
| `--scope` | `global`, `project` | `project` | Where to store the spec (only for personal dimension) |
|
|
||||||
| `--dimension` | `specs`, `personal` | Interactive | Type of spec to create |
|
|
||||||
| `--category` | `general`, `exploration`, `planning`, `execution` | `general` | Workflow stage category |
|
|
||||||
|
|
||||||
## Execution Process
|
|
||||||
|
|
||||||
```
|
|
||||||
Input Parsing:
|
|
||||||
├─ Parse --scope (global | project)
|
|
||||||
├─ Parse --dimension (specs | personal)
|
|
||||||
└─ Parse --category (general | exploration | planning | execution)
|
|
||||||
|
|
||||||
Step 1: Gather Requirements (Interactive)
|
|
||||||
├─ If dimension not specified → Ask dimension
|
|
||||||
├─ If personal + scope not specified → Ask scope
|
|
||||||
├─ If category not specified → Ask category
|
|
||||||
├─ Ask type (convention | constraint | learning)
|
|
||||||
└─ Ask content (rule text)
|
|
||||||
|
|
||||||
Step 2: Determine Target File
|
|
||||||
├─ specs dimension → .workflow/specs/coding-conventions.md or architecture-constraints.md
|
|
||||||
└─ personal dimension → ~/.ccw/specs/personal/ or .ccw/specs/personal/
|
|
||||||
|
|
||||||
Step 3: Write Spec
|
|
||||||
├─ Check if file exists, create if needed with proper frontmatter
|
|
||||||
├─ Append rule to appropriate section
|
|
||||||
└─ Run ccw spec rebuild
|
|
||||||
|
|
||||||
Step 4: Display Confirmation
|
|
||||||
```
|
|
||||||
|
|
||||||
## Implementation
|
|
||||||
|
|
||||||
### Step 1: Parse Input and Gather Requirements
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
// Parse arguments
|
|
||||||
const args = $ARGUMENTS.toLowerCase()
|
|
||||||
const hasScope = args.includes('--scope')
|
|
||||||
const hasDimension = args.includes('--dimension')
|
|
||||||
const hasCategory = args.includes('--category')
|
|
||||||
|
|
||||||
// Extract values from arguments
|
|
||||||
let scope = hasScope ? args.match(/--scope\s+(\w+)/)?.[1] : null
|
|
||||||
let dimension = hasDimension ? args.match(/--dimension\s+(\w+)/)?.[1] : null
|
|
||||||
let category = hasCategory ? args.match(/--category\s+(\w+)/)?.[1] : null
|
|
||||||
|
|
||||||
// Validate values
|
|
||||||
if (scope && !['global', 'project'].includes(scope)) {
|
|
||||||
console.log("Invalid scope. Use 'global' or 'project'.")
|
|
||||||
return
|
|
||||||
}
|
|
||||||
if (dimension && !['specs', 'personal'].includes(dimension)) {
|
|
||||||
console.log("Invalid dimension. Use 'specs' or 'personal'.")
|
|
||||||
return
|
|
||||||
}
|
|
||||||
if (category && !['general', 'exploration', 'planning', 'execution'].includes(category)) {
|
|
||||||
console.log("Invalid category. Use 'general', 'exploration', 'planning', or 'execution'.")
|
|
||||||
return
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 2: Interactive Questions
|
|
||||||
|
|
||||||
**If dimension not specified**:
|
|
||||||
```javascript
|
|
||||||
if (!dimension) {
|
|
||||||
const dimensionAnswer = AskUserQuestion({
|
|
||||||
questions: [{
|
|
||||||
question: "What type of spec do you want to create?",
|
|
||||||
header: "Dimension",
|
|
||||||
multiSelect: false,
|
|
||||||
options: [
|
|
||||||
{
|
|
||||||
label: "Project Spec",
|
|
||||||
description: "Coding conventions, constraints, quality rules for this project (stored in .workflow/specs/)"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
label: "Personal Spec",
|
|
||||||
description: "Personal preferences and constraints that follow you across projects (stored in ~/.ccw/specs/personal/ or .ccw/specs/personal/)"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}]
|
|
||||||
})
|
|
||||||
dimension = dimensionAnswer.answers["Dimension"] === "Project Spec" ? "specs" : "personal"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**If personal dimension and scope not specified**:
|
|
||||||
```javascript
|
|
||||||
if (dimension === 'personal' && !scope) {
|
|
||||||
const scopeAnswer = AskUserQuestion({
|
|
||||||
questions: [{
|
|
||||||
question: "Where should this personal spec be stored?",
|
|
||||||
header: "Scope",
|
|
||||||
multiSelect: false,
|
|
||||||
options: [
|
|
||||||
{
|
|
||||||
label: "Global (Recommended)",
|
|
||||||
description: "Apply to ALL projects (~/.ccw/specs/personal/)"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
label: "Project-only",
|
|
||||||
description: "Apply only to this project (.ccw/specs/personal/)"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}]
|
|
||||||
})
|
|
||||||
scope = scopeAnswer.answers["Scope"].includes("Global") ? "global" : "project"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**If category not specified**:
|
|
||||||
```javascript
|
|
||||||
if (!category) {
|
|
||||||
const categoryAnswer = AskUserQuestion({
|
|
||||||
questions: [{
|
|
||||||
question: "Which workflow stage does this spec apply to?",
|
|
||||||
header: "Category",
|
|
||||||
multiSelect: false,
|
|
||||||
options: [
|
|
||||||
{
|
|
||||||
label: "General (Recommended)",
|
|
||||||
description: "Applies to all stages (default)"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
label: "Exploration",
|
|
||||||
description: "Code exploration, analysis, debugging"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
label: "Planning",
|
|
||||||
description: "Task planning, requirements gathering"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
label: "Execution",
|
|
||||||
description: "Implementation, testing, deployment"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}]
|
|
||||||
})
|
|
||||||
const categoryLabel = categoryAnswer.answers["Category"]
|
|
||||||
category = categoryLabel.includes("General") ? "general"
|
|
||||||
: categoryLabel.includes("Exploration") ? "exploration"
|
|
||||||
: categoryLabel.includes("Planning") ? "planning"
|
|
||||||
: "execution"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Ask type**:
|
|
||||||
```javascript
|
|
||||||
const typeAnswer = AskUserQuestion({
|
|
||||||
questions: [{
|
|
||||||
question: "What type of rule is this?",
|
|
||||||
header: "Type",
|
|
||||||
multiSelect: false,
|
|
||||||
options: [
|
|
||||||
{
|
|
||||||
label: "Convention",
|
|
||||||
description: "Coding style preference (e.g., use functional components)"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
label: "Constraint",
|
|
||||||
description: "Hard rule that must not be violated (e.g., no direct DB access)"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
label: "Learning",
|
|
||||||
description: "Insight or lesson learned (e.g., cache invalidation needs events)"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}]
|
|
||||||
})
|
|
||||||
const type = typeAnswer.answers["Type"]
|
|
||||||
const isConvention = type.includes("Convention")
|
|
||||||
const isConstraint = type.includes("Constraint")
|
|
||||||
const isLearning = type.includes("Learning")
|
|
||||||
```
|
|
||||||
|
|
||||||
**Ask content**:
|
|
||||||
```javascript
|
|
||||||
const contentAnswer = AskUserQuestion({
|
|
||||||
questions: [{
|
|
||||||
question: "Enter the rule or guideline text:",
|
|
||||||
header: "Content",
|
|
||||||
multiSelect: false,
|
|
||||||
options: []
|
|
||||||
}]
|
|
||||||
})
|
|
||||||
const ruleText = contentAnswer.answers["Content"]
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Determine Target File
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
const path = require('path')
|
|
||||||
const os = require('os')
|
|
||||||
|
|
||||||
let targetFile: string
|
|
||||||
let targetDir: string
|
|
||||||
|
|
||||||
if (dimension === 'specs') {
|
|
||||||
// Project specs
|
|
||||||
targetDir = '.workflow/specs'
|
|
||||||
if (isConstraint) {
|
|
||||||
targetFile = path.join(targetDir, 'architecture-constraints.md')
|
|
||||||
} else {
|
|
||||||
targetFile = path.join(targetDir, 'coding-conventions.md')
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
// Personal specs
|
|
||||||
if (scope === 'global') {
|
|
||||||
targetDir = path.join(os.homedir(), '.ccw', 'specs', 'personal')
|
|
||||||
} else {
|
|
||||||
targetDir = path.join('.ccw', 'specs', 'personal')
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create category-based filename
|
|
||||||
const typePrefix = isConstraint ? 'constraints' : isLearning ? 'learnings' : 'conventions'
|
|
||||||
targetFile = path.join(targetDir, `${typePrefix}.md`)
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 4: Write Spec
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
const fs = require('fs')
|
|
||||||
|
|
||||||
// Ensure directory exists
|
|
||||||
if (!fs.existsSync(targetDir)) {
|
|
||||||
fs.mkdirSync(targetDir, { recursive: true })
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check if file exists
|
|
||||||
const fileExists = fs.existsSync(targetFile)
|
|
||||||
|
|
||||||
if (!fileExists) {
|
|
||||||
// Create new file with frontmatter
|
|
||||||
const frontmatter = `---
|
|
||||||
title: ${dimension === 'specs' ? 'Project' : 'Personal'} ${isConstraint ? 'Constraints' : isLearning ? 'Learnings' : 'Conventions'}
|
|
||||||
readMode: optional
|
|
||||||
priority: medium
|
|
||||||
category: ${category}
|
|
||||||
scope: ${dimension === 'personal' ? scope : 'project'}
|
|
||||||
dimension: ${dimension}
|
|
||||||
keywords: [${category}, ${isConstraint ? 'constraint' : isLearning ? 'learning' : 'convention'}]
|
|
||||||
---
|
|
||||||
|
|
||||||
# ${dimension === 'specs' ? 'Project' : 'Personal'} ${isConstraint ? 'Constraints' : isLearning ? 'Learnings' : 'Conventions'}
|
|
||||||
|
|
||||||
`
|
|
||||||
fs.writeFileSync(targetFile, frontmatter, 'utf8')
|
|
||||||
}
|
|
||||||
|
|
||||||
// Read existing content
|
|
||||||
let content = fs.readFileSync(targetFile, 'utf8')
|
|
||||||
|
|
||||||
// Format the new rule
|
|
||||||
const timestamp = new Date().toISOString().split('T')[0]
|
|
||||||
const rulePrefix = isLearning ? `- [learning] ` : `- [${category}] `
|
|
||||||
const ruleSuffix = isLearning ? ` (${timestamp})` : ''
|
|
||||||
const newRule = `${rulePrefix}${ruleText}${ruleSuffix}`
|
|
||||||
|
|
||||||
// Check for duplicate
|
|
||||||
if (content.includes(ruleText)) {
|
|
||||||
console.log(`
|
|
||||||
Rule already exists in ${targetFile}
|
|
||||||
Text: "${ruleText}"
|
|
||||||
`)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Append the rule
|
|
||||||
content = content.trimEnd() + '\n' + newRule + '\n'
|
|
||||||
fs.writeFileSync(targetFile, content, 'utf8')
|
|
||||||
|
|
||||||
// Rebuild spec index
|
|
||||||
Bash('ccw spec rebuild')
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 5: Display Confirmation
|
|
||||||
|
|
||||||
```
|
|
||||||
Spec created successfully
|
|
||||||
|
|
||||||
Dimension: ${dimension}
|
|
||||||
Scope: ${dimension === 'personal' ? scope : 'project'}
|
|
||||||
Category: ${category}
|
|
||||||
Type: ${type}
|
|
||||||
Rule: "${ruleText}"
|
|
||||||
|
|
||||||
Location: ${targetFile}
|
|
||||||
|
|
||||||
Use 'ccw spec list' to view all specs
|
|
||||||
Use 'ccw spec load --category ${category}' to load specs by category
|
|
||||||
```

## Target File Resolution

### Project Specs (dimension: specs)
```
.workflow/specs/
├── coding-conventions.md        ← conventions, learnings
├── architecture-constraints.md  ← constraints
└── quality-rules.md             ← quality rules
```

### Personal Specs (dimension: personal)
```
# Global (~/.ccw/specs/personal/)
~/.ccw/specs/personal/
├── conventions.md  ← personal conventions (all projects)
├── constraints.md  ← personal constraints (all projects)
└── learnings.md    ← personal learnings (all projects)

# Project-local (.ccw/specs/personal/)
.ccw/specs/personal/
├── conventions.md  ← personal conventions (this project only)
├── constraints.md  ← personal constraints (this project only)
└── learnings.md    ← personal learnings (this project only)
```

## Category Field Usage

The `category` field in frontmatter enables filtered loading:

| Category | Use Case | Example Rules |
|----------|----------|---------------|
| `general` | Applies to all stages | "Use TypeScript strict mode" |
| `exploration` | Code exploration, debugging | "Always trace the call stack before modifying" |
| `planning` | Task planning, requirements | "Break down tasks into 2-hour chunks" |
| `execution` | Implementation, testing | "Run tests after each file modification" |
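A minimal sketch of how category-filtered loading might work over spec frontmatter. The `getCategory` parser below is a simplified assumption (line-based, no real YAML parsing), not the actual ccw implementation, and the inclusion of `general` specs follows the table's "applies to all stages" note:

```javascript
// Extract the `category` field from a spec file's YAML frontmatter.
// Simplified assumption: frontmatter is delimited by --- lines and
// `category:` appears as a top-level key.
function getCategory(markdown) {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return null;
  const line = match[1].split('\n').find(l => l.startsWith('category:'));
  return line ? line.slice('category:'.length).trim() : null;
}

// Keep specs matching the requested category, plus `general` ones,
// since general rules apply to all stages.
function filterByCategory(specs, category) {
  return specs.filter(s => {
    const c = getCategory(s.content);
    return c === category || c === 'general';
  });
}

const specs = [
  { path: 'a.md', content: '---\ntitle: "A"\ncategory: planning\n---\n# A' },
  { path: 'b.md', content: '---\ntitle: "B"\ncategory: execution\n---\n# B' }
];
```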
## Error Handling

- **File not writable**: Check permissions, suggest manual creation
- **Duplicate rule**: Warn and skip (don't add duplicates)
- **Invalid path**: Exit with error message

## Related Commands

- `/workflow:init` - Initialize project with specs scaffold
- `/workflow:init-guidelines` - Interactive wizard to fill specs
- `/workflow:session:solidify` - Add rules during/after sessions
- `ccw spec list` - View all specs
- `ccw spec load --category <cat>` - Load filtered specs
@@ -1,21 +1,20 @@
 ---
 name: init
 description: Initialize project-level state with intelligent project analysis using cli-explore-agent
-argument-hint: "[--regenerate] [--skip-specs]"
+argument-hint: "[--regenerate]"
 examples:
   - /workflow:init
   - /workflow:init --regenerate
-  - /workflow:init --skip-specs
 ---

 # Workflow Init Command (/workflow:init)

 ## Overview
-Initialize `.workflow/project-tech.json` and `.workflow/specs/*.md` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
+Initialize `.workflow/project-tech.json` and `.workflow/project-guidelines.json` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.

 **Dual File System**:
 - `project-tech.json`: Auto-generated technical analysis (stack, architecture, components)
-- `specs/*.md`: User-maintained rules and constraints (created as scaffold)
+- `project-guidelines.json`: User-maintained rules and constraints (created as scaffold)

 **Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow without interrupting the task flow.
@@ -23,15 +22,13 @@ Initialize `.workflow/project-tech.json` and `.workflow/specs/*.md` with compreh
 ```bash
 /workflow:init               # Initialize (skip if exists)
 /workflow:init --regenerate  # Force regeneration
-/workflow:init --skip-specs  # Initialize project-tech only, skip spec initialization
 ```

 ## Execution Process

 ```
 Input Parsing:
-├─ Parse --regenerate flag → regenerate = true | false
-└─ Parse --skip-specs flag → skipSpecs = true | false
+└─ Parse --regenerate flag → regenerate = true | false

 Decision:
 ├─ BOTH_EXIST + no --regenerate → Exit: "Already initialized"
@@ -45,44 +42,41 @@ Analysis Flow:
 │   ├─ Semantic analysis (Gemini CLI)
 │   ├─ Synthesis and merge
 │   └─ Write .workflow/project-tech.json
-├─ Spec Initialization (if not --skip-specs)
-│   ├─ Check if specs/*.md exist
-│   ├─ If NOT_FOUND → Run ccw spec init
-│   ├─ Run ccw spec rebuild
-│   └─ Ask about guidelines configuration
-│       ├─ If guidelines empty → Ask user: "Configure now?" or "Skip"
-│       │   ├─ Configure now → Skill(skill="workflow:init-guidelines")
-│       │   └─ Skip → Show next steps
-│       └─ If guidelines populated → Show next steps only
-└─ Display summary
+├─ Create guidelines scaffold (if not exists)
+│   └─ Write .workflow/project-guidelines.json (empty structure)
+├─ Display summary
+└─ Ask about guidelines configuration
+    ├─ If guidelines empty → Ask user: "Configure now?" or "Skip"
+    │   ├─ Configure now → Skill(skill="workflow:init-guidelines")
+    │   └─ Skip → Show next steps
+    └─ If guidelines populated → Show next steps only

 Output:
 ├─ .workflow/project-tech.json (+ .backup if regenerate)
-└─ .workflow/specs/*.md (scaffold or configured, unless --skip-specs)
+└─ .workflow/project-guidelines.json (scaffold or configured)
 ```

 ## Implementation

 ### Step 1: Parse Input and Check Existing State

-**Parse flags**:
+**Parse --regenerate flag**:
 ```javascript
 const regenerate = $ARGUMENTS.includes('--regenerate')
-const skipSpecs = $ARGUMENTS.includes('--skip-specs')
 ```

 **Check existing state**:

 ```bash
 bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
-bash(test -f .workflow/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
+bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
 ```

 **If BOTH_EXIST and no --regenerate**: Exit early
 ```
 Project already initialized:
 - Tech analysis: .workflow/project-tech.json
-- Guidelines: .workflow/specs/*.md
+- Guidelines: .workflow/project-guidelines.json

 Use /workflow:init --regenerate to rebuild tech analysis
 Use /workflow:session:solidify to add guidelines
@@ -165,20 +159,33 @@ Project root: ${projectRoot}
 )
 ```

-### Step 3.5: Initialize Spec System (if not --skip-specs)
+### Step 3.5: Create Guidelines Scaffold (if not exists)

 ```javascript
-// Skip spec initialization if --skip-specs flag is provided
-if (!skipSpecs) {
-  // Initialize spec system if not already initialized
-  const specsCheck = Bash('test -f .workflow/specs/coding-conventions.md && echo EXISTS || echo NOT_FOUND')
-  if (specsCheck.includes('NOT_FOUND')) {
-    console.log('Initializing spec system...')
-    Bash('ccw spec init')
-    Bash('ccw spec rebuild')
-  }
-} else {
-  console.log('Skipping spec initialization (--skip-specs)')
+// Only create if not exists (never overwrite user guidelines)
+if (!file_exists('.workflow/project-guidelines.json')) {
+  const guidelinesScaffold = {
+    conventions: {
+      coding_style: [],
+      naming_patterns: [],
+      file_structure: [],
+      documentation: []
+    },
+    constraints: {
+      architecture: [],
+      tech_stack: [],
+      performance: [],
+      security: []
+    },
+    quality_rules: [],
+    learnings: [],
+    _metadata: {
+      created_at: new Date().toISOString(),
+      version: "1.0.0"
+    }
+  };
+
+  Write('.workflow/project-guidelines.json', JSON.stringify(guidelinesScaffold, null, 2));
 }
 ```
@@ -186,10 +193,10 @@ if (!skipSpecs) {

 ```javascript
 const projectTech = JSON.parse(Read('.workflow/project-tech.json'));
-const specsInitialized = !skipSpecs && file_exists('.workflow/specs/coding-conventions.md');
+const guidelinesExists = file_exists('.workflow/project-guidelines.json');

 console.log(`
-Project initialized successfully
+✓ Project initialized successfully

 ## Project Overview
 Name: ${projectTech.project_name}
@@ -206,37 +213,30 @@ Components: ${projectTech.overview.key_components.length} core modules
 ---
 Files created:
 - Tech analysis: .workflow/project-tech.json
-${!skipSpecs ? `- Specs: .workflow/specs/ ${specsInitialized ? '(initialized)' : ''}` : '- Specs: (skipped via --skip-specs)'}
+- Guidelines: .workflow/project-guidelines.json ${guidelinesExists ? '(scaffold)' : ''}
 ${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
 `);
 ```

-### Step 5: Ask About Guidelines Configuration (if not --skip-specs)
+### Step 5: Ask About Guidelines Configuration

-After displaying the summary, ask the user if they want to configure project guidelines interactively. Skip this step if `--skip-specs` was provided.
+After displaying the summary, ask the user if they want to configure project guidelines interactively.

 ```javascript
-// Skip guidelines configuration if --skip-specs was provided
-if (skipSpecs) {
-  console.log(`
-Next steps:
-- Use /workflow:init-specs to create individual specs
-- Use /workflow:init-guidelines to configure specs interactively
-- Use /workflow:plan to start planning
-`);
-  return;
-}
-
-// Check if specs have user content beyond seed documents
-const specsList = Bash('ccw spec list --json');
-const specsCount = JSON.parse(specsList).total || 0;
-
-// Only ask if specs are just seeds
-if (specsCount <= 5) {
+// Check if guidelines are just a scaffold (empty) or already populated
+const guidelines = JSON.parse(Read('.workflow/project-guidelines.json'));
+const isGuidelinesPopulated =
+  guidelines.conventions.coding_style.length > 0 ||
+  guidelines.conventions.naming_patterns.length > 0 ||
+  guidelines.constraints.architecture.length > 0 ||
+  guidelines.constraints.security.length > 0;
+
+// Only ask if guidelines are not yet populated
+if (!isGuidelinesPopulated) {
   const userChoice = AskUserQuestion({
     questions: [{
-      question: "Would you like to configure project specs now? The wizard will ask targeted questions based on your tech stack.",
-      header: "Specs",
+      question: "Would you like to configure project guidelines now? The wizard will ask targeted questions based on your tech stack.",
+      header: "Guidelines",
       multiSelect: false,
       options: [
         {
@@ -245,30 +245,28 @@ if (specsCount <= 5) {
         },
         {
           label: "Skip for now",
-          description: "You can run /workflow:init-guidelines later or use ccw spec load to import specs"
+          description: "You can run /workflow:init-guidelines later or use /workflow:session:solidify to add rules individually"
         }
       ]
     }]
   });

-  if (userChoice.answers["Specs"] === "Configure now (Recommended)") {
-    console.log("\nStarting specs configuration wizard...\n");
+  if (userChoice.answers["Guidelines"] === "Configure now (Recommended)") {
+    console.log("\n🔧 Starting guidelines configuration wizard...\n");
     Skill(skill="workflow:init-guidelines");
   } else {
     console.log(`
 Next steps:
-- Use /workflow:init-specs to create individual specs
-- Use /workflow:init-guidelines to configure specs interactively
-- Use ccw spec load to import specs from external sources
+- Use /workflow:init-guidelines to configure guidelines interactively
+- Use /workflow:session:solidify to add individual rules
 - Use /workflow:plan to start planning
 `);
   }
 } else {
   console.log(`
-Specs already configured (${specsCount} spec files).
+Guidelines already configured (${guidelines.conventions.coding_style.length + guidelines.constraints.architecture.length}+ rules).

 Next steps:
-- Use /workflow:init-specs to create additional specs
 - Use /workflow:init-guidelines --reset to reconfigure
 - Use /workflow:session:solidify to add individual rules
 - Use /workflow:plan to start planning
@@ -284,7 +282,6 @@ Next steps:

 ## Related Commands

-- `/workflow:init-specs` - Interactive wizard to create individual specs with scope selection
 - `/workflow:init-guidelines` - Interactive wizard to configure project guidelines (called after init)
 - `/workflow:session:solidify` - Add individual rules/constraints one at a time
 - `workflow-plan` skill - Start planning with initialized project context
@@ -843,8 +843,6 @@ AskUserQuestion({

 ## Post-Completion Expansion

-**Auto-sync**: Run `/workflow:session:sync -y "{summary}"` to update specs/*.md + project-tech.
-
 After completion, ask the user whether to expand the work into issues (test/enhance/refactor/doc); for each selected item, call `/issue:new "{summary} - {dimension}"`

 ---
.claude/commands/workflow/req-plan-with-file.md (new file, 620 lines)
@@ -0,0 +1,620 @@
---
name: req-plan-with-file
description: Requirement-level progressive roadmap planning with issue creation. Decomposes requirements into convergent layers or task sequences, creates issues via ccw issue create, and generates roadmap.md for human review. Issues stored in .workflow/issues/issues.jsonl (single source of truth).
argument-hint: "[-y|--yes] [-c|--continue] [-m|--mode progressive|direct|auto] \"requirement description\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm strategy selection, use recommended mode, skip interactive validation rounds.

# Workflow Req-Plan Command (/workflow:req-plan-with-file)

## Quick Start

```bash
# Basic usage
/workflow:req-plan-with-file "Implement user authentication system with OAuth and 2FA"

# With mode selection
/workflow:req-plan-with-file -m progressive "Build real-time notification system"  # Layered MVP→iterations
/workflow:req-plan-with-file -m direct "Refactor payment module"                   # Topologically-sorted task sequence
/workflow:req-plan-with-file -m auto "Add data export feature"                     # Auto-select strategy

# Continue existing session
/workflow:req-plan-with-file --continue "user authentication system"

# Auto mode
/workflow:req-plan-with-file -y "Implement caching layer"
```

**Context Source**: cli-explore-agent (optional) + requirement analysis
**Output Directory**: `.workflow/.req-plan/{session-id}/`
**Core Innovation**: Requirement decomposition → issue creation via `ccw issue create`. Issues stored in `.workflow/issues/issues.jsonl` (single source of truth). Wave/dependency info embedded in issue tags (`wave-N`) and `extended_context.notes.depends_on_issues`. team-planex consumes issues directly by ID or tag query.

## Overview

Requirement-level layered roadmap planning command. Decomposes a requirement into **convergent layers or task sequences**, creates issues via `ccw issue create`. Issues are the single source of truth in `.workflow/issues/issues.jsonl`; wave and dependency info is embedded in issue tags and `extended_context.notes`.

**Dual Modes**:
- **Progressive**: Layered MVP→iterations, suitable for high-uncertainty requirements (validate first, then refine)
- **Direct**: Topologically-sorted task sequence, suitable for low-uncertainty requirements (clear tasks, directly ordered)
- **Auto**: Automatically selects based on uncertainty level

**Core Workflow**: Requirement Understanding → Strategy Selection → Context Collection (optional) → Decomposition + Issue Creation → Validation → team-planex Handoff

```
┌─────────────────────────────────────────────────────────────────────────┐
│                       REQ-PLAN ROADMAP WORKFLOW                          │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│  Phase 1: Requirement Understanding & Strategy Selection                 │
│  ├─ Parse requirement: goal / constraints / stakeholders                 │
│  ├─ Assess uncertainty level                                             │
│  │   ├─ High uncertainty → recommend progressive                         │
│  │   └─ Low uncertainty → recommend direct                               │
│  ├─ User confirms strategy (-m skips, -y auto-selects recommended)       │
│  └─ Initialize strategy-assessment.json + roadmap.md skeleton            │
│                                                                          │
│  Phase 2: Context Collection (Optional)                                  │
│  ├─ Detect codebase: package.json / go.mod / src / ...                   │
│  ├─ Has codebase → cli-explore-agent explores relevant modules           │
│  └─ No codebase → skip, pure requirement decomposition                   │
│                                                                          │
│  Phase 3: Decomposition & Issue Creation (cli-roadmap-plan-agent)        │
│  ├─ Progressive: define 2-4 layers, each with full convergence           │
│  ├─ Direct: vertical slicing + topological sort, each with convergence   │
│  ├─ Create issues via ccw issue create (ISS-xxx IDs)                     │
│  └─ Generate roadmap.md (with issue ID references)                       │
│                                                                          │
│  Phase 4: Validation & team-planex Handoff                               │
│  ├─ Display decomposition results (tabular + convergence criteria)       │
│  ├─ User feedback loop (up to 5 rounds)                                  │
│  └─ Next steps: team-planex full execution / wave-by-wave / view         │
│                                                                          │
└─────────────────────────────────────────────────────────────────────────┘
```

## Output

```
.workflow/.req-plan/RPLAN-{slug}-{YYYY-MM-DD}/
├── roadmap.md                 # Human-readable roadmap with issue ID references
├── strategy-assessment.json   # Strategy assessment result
└── exploration-codebase.json  # Codebase context (optional)
```

| File | Phase | Description |
|------|-------|-------------|
| `strategy-assessment.json` | 1 | Uncertainty analysis + mode recommendation + extracted goal/constraints/stakeholders |
| `roadmap.md` (skeleton) | 1 | Initial skeleton with placeholders, finalized in Phase 3 |
| `exploration-codebase.json` | 2 | Codebase context: relevant modules, patterns, integration points (only when codebase exists) |
| `roadmap.md` (final) | 3 | Human-readable roadmap with issue ID references, convergence details, team-planex execution guide |

**roadmap.md template**:

```markdown
# Requirement Roadmap

**Session**: RPLAN-{slug}-{date}
**Requirement**: {requirement}
**Strategy**: {progressive|direct}
**Generated**: {timestamp}

## Strategy Assessment
- Uncertainty level: {high|medium|low}
- Decomposition mode: {progressive|direct}
- Assessment basis: {factors summary}

## Roadmap
{Tabular display of layers/tasks}

## Convergence Criteria Details
{Expanded convergence for each layer/task}

## Risks
{Aggregated risks}

## Next Steps
{Execution guidance}
```

## Configuration

| Flag | Default | Description |
|------|---------|-------------|
| `-y, --yes` | false | Auto-confirm all decisions |
| `-c, --continue` | false | Continue existing session |
| `-m, --mode` | auto | Decomposition strategy: progressive / direct / auto |

**Session ID format**: `RPLAN-{slug}-{YYYY-MM-DD}`
- slug: lowercase, alphanumeric + CJK characters, max 40 chars
- date: YYYY-MM-DD (UTC+8)
- Auto-detect continue: session folder + roadmap.md exists → continue mode

## JSONL Schema Design

### Issue Format

Each line in `issues.jsonl` follows the standard `issues-jsonl-schema.json` (see `.ccw/workflows/cli-templates/schemas/issues-jsonl-schema.json`).

**Key fields per issue**:

| Field | Source | Description |
|-------|--------|-------------|
| `id` | `ccw issue create` | Formal ISS-YYYYMMDD-NNN ID |
| `title` | Layer/task mapping | `[LayerName] goal` or `[TaskType] title` |
| `context` | Convergence fields | Markdown with goal, scope, convergence criteria, verification, DoD |
| `priority` | Effort mapping | small→4, medium→3, large→2 |
| `source` | Fixed | `"text"` |
| `tags` | Auto-generated | `["req-plan", mode, name/type, "wave-N"]` |
| `extended_context.notes` | Metadata JSON | session, strategy, original_id, wave, depends_on_issues |
| `lifecycle_requirements` | Fixed | test_strategy, regression_scope, acceptance_type, commit_strategy |

### Convergence Criteria (in issue context)

Each issue's `context` field contains convergence information:

| Section | Purpose | Requirement |
|---------|---------|-------------|
| `## Convergence Criteria` | List of checkable specific conditions | **Testable** (can be written as assertions or manual steps) |
| `## Verification` | How to verify these conditions | **Executable** (command, script, or explicit steps) |
| `## Definition of Done` | One-sentence completion definition | **Business language** (non-technical person can judge) |
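Putting the two tables together, a single issue line might look as follows. This is a hypothetical illustration built from the fields above, not real output of `ccw issue create`, and only the listed key fields are shown:

```javascript
// One hypothetical issues.jsonl entry (shown as an object for readability).
const exampleIssue = {
  id: 'ISS-20250101-001',  // assigned by `ccw issue create`
  title: '[MVP] Basic OAuth login flow',
  context: [
    '## Goal',
    'Users can sign in via OAuth.',
    '## Convergence Criteria',
    '- Login succeeds with a valid OAuth token',
    '## Verification',
    '- npm test -- auth',
    '## Definition of Done',
    'A user can sign in with OAuth.'
  ].join('\n'),
  priority: 3,  // medium effort → 3
  source: 'text',
  tags: ['req-plan', 'progressive', 'mvp', 'wave-1'],
  extended_context: {
    notes: JSON.stringify({
      session: 'RPLAN-user-auth-2025-01-01',
      strategy: 'progressive',
      wave: 1,
      depends_on_issues: []
    })
  }
};

// Serialized, this becomes exactly one JSONL line:
const line = JSON.stringify(exampleIssue);
```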

## Implementation

### Session Initialization

**Objective**: Create session context and directory structure.

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
const modeMatch = $ARGUMENTS.match(/(?:--mode|-m)\s+(progressive|direct|auto)/)
const requestedMode = modeMatch ? modeMatch[1] : 'auto'

// Clean requirement text (remove flags)
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|-c|--mode\s+\w+|-m\s+\w+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `RPLAN-${slug}-${dateStr}`
const sessionFolder = `.workflow/.req-plan/${sessionId}`

// Auto-detect continue: session folder + roadmap.md exists → continue mode
Bash(`mkdir -p ${sessionFolder}`)
```
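The slug and session-id derivation above can be exercised standalone. The `makeSessionId` wrapper is an assumption for illustration, with `$ARGUMENTS` replaced by a plain string:

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString();

// Derive the session id from a raw argument string, mirroring the logic above:
// strip flags, lowercase, collapse non-alphanumeric/non-CJK runs to '-', cap at 40 chars.
function makeSessionId(args) {
  const requirement = args
    .replace(/--yes|-y|--continue|-c|--mode\s+\w+|-m\s+\w+/g, '')
    .trim();
  const slug = requirement.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
    .substring(0, 40);
  const dateStr = getUtc8ISOString().substring(0, 10);
  return `RPLAN-${slug}-${dateStr}`;
}

const id = makeSessionId('-m progressive Build real-time notification system');
```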

### Phase 1: Requirement Understanding & Strategy Selection

**Objective**: Parse requirement, assess uncertainty, select decomposition strategy.

**Prerequisites**: Session initialized, requirement description available.

**Steps**:

1. **Parse Requirement**
   - Extract core goal (what to achieve)
   - Identify constraints (tech stack, timeline, compatibility, etc.)
   - Identify stakeholders (users, admins, developers, etc.)
   - Identify keywords to determine domain

2. **Assess Uncertainty Level**

```javascript
const uncertaintyFactors = {
  scope_clarity: 'low|medium|high',
  technical_risk: 'low|medium|high',
  dependency_unknown: 'low|medium|high',
  domain_familiarity: 'low|medium|high',
  requirement_stability: 'low|medium|high'
}
// high uncertainty (>=3 high) → progressive
// low uncertainty (>=3 low) → direct
// otherwise → ask user preference
```
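The thresholds in the comments above might be implemented as follows. This is a sketch under the stated >=3 rule; the document does not specify the actual agent logic:

```javascript
// Recommend a decomposition mode from the five uncertainty factor ratings:
// >= 3 rated 'high' → progressive, >= 3 rated 'low' → direct, otherwise ask the user.
function recommendMode(factors) {
  const values = Object.values(factors);
  const highs = values.filter(v => v === 'high').length;
  const lows = values.filter(v => v === 'low').length;
  if (highs >= 3) return 'progressive';
  if (lows >= 3) return 'direct';
  return 'ask-user';
}

const recommended = recommendMode({
  scope_clarity: 'high',
  technical_risk: 'high',
  dependency_unknown: 'high',
  domain_familiarity: 'medium',
  requirement_stability: 'low'
});
```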

3. **Strategy Selection** (skip if `-m` already specified)

```javascript
if (requestedMode !== 'auto') {
  selectedMode = requestedMode
} else if (autoYes) {
  selectedMode = recommendedMode
} else {
  AskUserQuestion({
    questions: [{
      question: `Decomposition strategy selection:\n\nUncertainty assessment: ${uncertaintyLevel}\nRecommended strategy: ${recommendedMode}\n\nSelect decomposition strategy:`,
      header: "Strategy",
      multiSelect: false,
      options: [
        {
          label: recommendedMode === 'progressive' ? "Progressive (Recommended)" : "Progressive",
          description: "Layered MVP→iterations, validate core first then refine progressively. Suitable for high-uncertainty requirements needing quick validation"
        },
        {
          label: recommendedMode === 'direct' ? "Direct (Recommended)" : "Direct",
          description: "Topologically-sorted task sequence with explicit dependencies. Suitable for clear requirements with confirmed technical approach"
        }
      ]
    }]
  })
}
```

4. **Generate strategy-assessment.json**

```javascript
const strategyAssessment = {
  session_id: sessionId,
  requirement: requirement,
  timestamp: getUtc8ISOString(),
  uncertainty_factors: uncertaintyFactors,
  uncertainty_level: uncertaintyLevel,  // 'high' | 'medium' | 'low'
  recommended_mode: recommendedMode,
  selected_mode: selectedMode,
  goal: extractedGoal,
  constraints: extractedConstraints,
  stakeholders: extractedStakeholders,
  domain_keywords: extractedKeywords
}
Write(`${sessionFolder}/strategy-assessment.json`, JSON.stringify(strategyAssessment, null, 2))
```

5. **Initialize roadmap.md skeleton** (placeholder sections, finalized in Phase 4)

```javascript
const roadmapMdSkeleton = `# Requirement Roadmap

**Session**: ${sessionId}
**Requirement**: ${requirement}
**Strategy**: ${selectedMode}
**Status**: Planning
**Created**: ${getUtc8ISOString()}

## Strategy Assessment
- Uncertainty level: ${uncertaintyLevel}
- Decomposition mode: ${selectedMode}

## Roadmap
> To be populated after Phase 3 decomposition

## Convergence Criteria Details
> To be populated after Phase 3 decomposition

## Risk Items
> To be populated after Phase 3 decomposition

## Next Steps
> To be populated after Phase 4 validation
`
Write(`${sessionFolder}/roadmap.md`, roadmapMdSkeleton)
```

**Success Criteria**:
- Requirement goal, constraints, stakeholders identified
- Uncertainty level assessed
- Strategy selected (progressive or direct)
- strategy-assessment.json generated
- roadmap.md skeleton initialized

### Phase 2: Context Collection (Optional)

**Objective**: If a codebase exists, collect relevant context to enhance decomposition quality.

**Prerequisites**: Phase 1 complete.

**Steps**:

1. **Detect Codebase**

```javascript
const hasCodebase = Bash(`
  test -f package.json && echo "nodejs" ||
  test -f go.mod && echo "golang" ||
  test -f Cargo.toml && echo "rust" ||
  test -f pyproject.toml && echo "python" ||
  test -f pom.xml && echo "java" ||
  test -d src && echo "generic" ||
  echo "none"
`).trim()
```

2. **Codebase Exploration** (only when hasCodebase !== 'none')

```javascript
if (hasCodebase !== 'none') {
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Explore codebase: ${slug}`,
    prompt: `
## Exploration Context
Requirement: ${requirement}
Strategy: ${selectedMode}
Project Type: ${hasCodebase}
Session: ${sessionFolder}

## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute relevant searches based on requirement keywords
3. Read: .workflow/project-tech.json (if exists)
4. Read: .workflow/project-guidelines.json (if exists)

## Exploration Focus
- Identify modules/components related to the requirement
- Find existing patterns that should be followed
- Locate integration points for new functionality
- Assess current architecture constraints

## Output
Write findings to: ${sessionFolder}/exploration-codebase.json

Schema: {
  project_type: "${hasCodebase}",
  relevant_modules: [{name, path, relevance}],
  existing_patterns: [{pattern, files, description}],
  integration_points: [{location, description, risk}],
  architecture_constraints: [string],
  tech_stack: {languages, frameworks, tools},
  _metadata: {timestamp, exploration_scope}
}
`
  })
}
// No codebase → skip, proceed directly to Phase 3
```

**Success Criteria**:
- Codebase detection complete
- When codebase exists, exploration-codebase.json generated
- When no codebase, skipped and logged

### Phase 3: Decomposition & Issue Creation

**Objective**: Execute requirement decomposition via `cli-roadmap-plan-agent`, creating issues and generating roadmap.md.

**Prerequisites**: Phase 1, Phase 2 complete. Strategy selected. Context collected (if applicable).

**Agent**: `cli-roadmap-plan-agent` (dedicated requirement roadmap planning agent, supports CLI-assisted decomposition + issue creation + built-in quality checks)
|
||||||
|
|
||||||
|
**Steps**:
|
||||||
|
|
||||||
|
1. **Prepare Context**
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
const strategy = JSON.parse(Read(`${sessionFolder}/strategy-assessment.json`))
|
||||||
|
let explorationContext = null
|
||||||
|
if (file_exists(`${sessionFolder}/exploration-codebase.json`)) {
|
||||||
|
explorationContext = JSON.parse(Read(`${sessionFolder}/exploration-codebase.json`))
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
2. **Invoke cli-roadmap-plan-agent**
|
||||||
|
|
||||||
|
The agent internally executes a 5-phase flow:
|
||||||
|
- Phase 1: Context loading + requirement analysis
|
||||||
|
- Phase 2: CLI-assisted decomposition (Gemini → Qwen → manual fallback)
|
||||||
|
- Phase 3: Record enhancement + validation (schema compliance, dependency checks, convergence quality)
|
||||||
|
- Phase 4: Issue creation + roadmap generation (ccw issue create → roadmap.md)
|
||||||
|
- Phase 5: CLI decomposition quality check (**MANDATORY** - requirement coverage, convergence criteria quality, dependency correctness)
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
Task({
|
||||||
|
subagent_type: "cli-roadmap-plan-agent",
|
||||||
|
run_in_background: false,
|
||||||
|
description: `Roadmap decomposition: ${slug}`,
|
||||||
|
prompt: `
|
||||||
|
## Roadmap Decomposition Task
|
||||||
|
|
||||||
|
### Input Context
|
||||||
|
- **Requirement**: ${requirement}
|
||||||
|
- **Selected Mode**: ${selectedMode}
|
||||||
|
- **Session ID**: ${sessionId}
|
||||||
|
- **Session Folder**: ${sessionFolder}
|
||||||
|
|
||||||
|
### Strategy Assessment
|
||||||
|
${JSON.stringify(strategy, null, 2)}
|
||||||
|
|
||||||
|
### Codebase Context
|
||||||
|
${explorationContext
|
||||||
|
? `File: ${sessionFolder}/exploration-codebase.json\n${JSON.stringify(explorationContext, null, 2)}`
|
||||||
|
: 'No codebase detected - pure requirement decomposition'}
|
||||||
|
|
||||||
|
### Issue Creation
|
||||||
|
- Use \`ccw issue create\` for each decomposed item
|
||||||
|
- Issue format: issues-jsonl-schema (id, title, status, priority, context, source, tags, extended_context)
|
||||||
|
- Update \`roadmap.md\` with issue ID references
|
||||||
|
|
||||||
|
### CLI Configuration
|
||||||
|
- Primary tool: gemini
|
||||||
|
- Fallback: qwen
|
||||||
|
- Timeout: 60000ms
|
||||||
|
|
||||||
|
### Expected Output
|
||||||
|
1. **${sessionFolder}/roadmap.md** - Human-readable roadmap with issue references
|
||||||
|
2. Issues created in \`.workflow/issues/issues.jsonl\` via ccw issue create
|
||||||
|
|
||||||
|
### Mode-Specific Requirements
|
||||||
|
|
||||||
|
${selectedMode === 'progressive' ? `**Progressive Mode**:
|
||||||
|
- 2-4 layers from MVP to full implementation
|
||||||
|
- Each layer: id (L0-L3), name, goal, scope, excludes, convergence, risks, effort, depends_on
|
||||||
|
- L0 (MVP) must be a self-contained closed loop with no dependencies
|
||||||
|
- Scope: each feature belongs to exactly ONE layer (no overlap)
|
||||||
|
- Layer names: MVP / Usable / Refined / Optimized` :
|
||||||
|
|
||||||
|
`**Direct Mode**:
|
||||||
|
- Topologically-sorted task sequence
|
||||||
|
- Each task: id (T1-Tn), title, type, scope, inputs, outputs, convergence, depends_on, parallel_group
|
||||||
|
- Inputs must come from preceding task outputs or existing resources
|
||||||
|
- Tasks in same parallel_group must be truly independent`}
|
||||||
|
|
||||||
|
### Convergence Quality Requirements
|
||||||
|
- criteria[]: MUST be testable (can write assertions or manual verification steps)
|
||||||
|
- verification: MUST be executable (command, script, or explicit steps)
|
||||||
|
- definition_of_done: MUST use business language (non-technical person can judge)
|
||||||
|
|
||||||
|
### Execution
|
||||||
|
1. Analyze requirement and build decomposition context
|
||||||
|
2. Execute CLI-assisted decomposition (Gemini, fallback Qwen)
|
||||||
|
3. Parse output, validate records, enhance convergence quality
|
||||||
|
4. Create issues via ccw issue create, generate roadmap.md
|
||||||
|
5. Execute mandatory quality check (Phase 5)
|
||||||
|
6. Return brief completion summary
|
||||||
|
`
|
||||||
|
})
|
||||||
|
```
|
||||||
|
|
||||||
|
**Success Criteria**:
|
||||||
|
- Issues created via `ccw issue create`, each with formal ISS-xxx ID
|
||||||
|
- roadmap.md generated with issue ID references
|
||||||
|
- Agent's internal quality check passed
|
||||||
|
- No circular dependencies
|
||||||
|
- Progressive: 2-4 layers, no scope overlap
|
||||||
|
- Direct: tasks have explicit inputs/outputs, parallel_group assigned
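The "no circular dependencies" criterion can be checked mechanically over `depends_on_issues` references. The sketch below is illustrative only, not the agent's actual validation code; it assumes each issue is a plain object shaped like `{ id, depends_on_issues: [...] }`, following the issues-jsonl-schema notes:

```javascript
// Minimal cycle check over issue dependency references (DFS with colors).
// Hypothetical helper: the real quality check lives inside cli-roadmap-plan-agent.
function hasCircularDependency(issues) {
  const deps = new Map(issues.map(i => [i.id, i.depends_on_issues || []]))
  const state = new Map() // undefined = unvisited, 1 = in progress, 2 = done

  const visit = (id) => {
    if (state.get(id) === 2) return false
    if (state.get(id) === 1) return true // back edge → cycle
    state.set(id, 1)
    for (const dep of deps.get(id) || []) {
      if (visit(dep)) return true
    }
    state.set(id, 2)
    return false
  }

  return issues.some(i => visit(i.id))
}
```

A chain like B → A passes; A → B → A is flagged, which is the case the "prompt user to adjust dependencies" error-handling row covers.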
### Phase 4: Validation & team-planex Handoff

**Objective**: Display decomposition results, collect user feedback, provide team-planex execution options.

**Prerequisites**: Phase 3 complete, issues created, roadmap.md generated.

**Steps**:

1. **Display Decomposition Results** (tabular format)

```javascript
// Use issueIdMap from Phase 3 for display
const issueIds = Object.values(issueIdMap)
```

**Progressive Mode**:
```markdown
## Roadmap Overview

| Wave | Issue ID | Name | Goal | Priority |
|------|----------|------|------|----------|
| 1 | ISS-xxx | MVP | ... | 2 |
| 2 | ISS-yyy | Usable | ... | 3 |

### Convergence Criteria
**Wave 1 - MVP (ISS-xxx)**:
- Criteria: [criteria list]
- Verification: [verification]
- Definition of Done: [definition_of_done]
```

**Direct Mode**:
```markdown
## Task Sequence

| Wave | Issue ID | Title | Type | Dependencies |
|------|----------|-------|------|--------------|
| 1 | ISS-xxx | ... | infrastructure | - |
| 2 | ISS-yyy | ... | feature | ISS-xxx |

### Convergence Criteria
**Wave 1 - ISS-xxx**:
- Criteria: [criteria list]
- Verification: [verification]
- Definition of Done: [definition_of_done]
```

2. **User Feedback Loop** (up to 5 rounds, skipped when autoYes)

```javascript
if (!autoYes) {
  let round = 0
  let continueLoop = true

  while (continueLoop && round < 5) {
    round++
    const feedback = AskUserQuestion({
      questions: [{
        question: `Roadmap validation (round ${round}):\nAny feedback on the current decomposition?`,
        header: "Feedback",
        multiSelect: false,
        options: [
          { label: "Approve", description: "Decomposition is reasonable, proceed to next steps" },
          { label: "Adjust Scope", description: "Some issue scopes need adjustment" },
          { label: "Modify Convergence", description: "Convergence criteria are not specific or testable enough" },
          { label: "Re-decompose", description: "Overall strategy or layering approach needs change" }
        ]
      }]
    })

    if (feedback === 'Approve') {
      continueLoop = false
    } else {
      // Handle adjustment based on feedback type
      // After adjustment, re-display and return to loop top
    }
  }
}
```

3. **Post-Completion Options**

```javascript
if (!autoYes) {
  AskUserQuestion({
    questions: [{
      question: `Roadmap generated, ${issueIds.length} issues created. Next step:`,
      header: "Next Step",
      multiSelect: false,
      options: [
        { label: "Execute with team-planex", description: `Launch team-planex to execute all ${issueIds.length} issues` },
        { label: "Execute first wave", description: "Execute Wave 1 only (filtered by the wave-1 tag)" },
        { label: "View issues", description: "View details of the created issues" },
        { label: "Done", description: "Save the roadmap, execute later" }
      ]
    }]
  })
}
```

| Selection | Action |
|-----------|--------|
| Execute with team-planex | `Skill(skill="team-planex", args="${issueIds.join(' ')}")` |
| Execute first wave | Filter issues by `wave-1` tag, pass to team-planex |
| View issues | Display issues summary from `.workflow/issues/issues.jsonl` |
| Done | Display file paths, end |

**Success Criteria**:
- User feedback processed (or skipped via autoYes)
- Post-completion options provided
- team-planex handoff available via issue IDs

## Error Handling

| Error | Resolution |
|-------|------------|
| cli-explore-agent failure | Skip code exploration, proceed with pure requirement decomposition |
| No codebase | Normal flow, skip Phase 2 |
| Circular dependency detected | Prompt user to adjust dependencies, re-decompose |
| User feedback timeout | Save current state, display `--continue` recovery command |
| Max feedback rounds reached | Use current version to generate final artifacts |
| Session folder conflict | Append timestamp suffix |

## Best Practices

1. **Clear requirement description**: Detailed description → more accurate uncertainty assessment and decomposition
2. **Validate MVP first**: In progressive mode, L0 should be the minimum verifiable closed loop
3. **Testable convergence**: criteria must be writable as assertions or manual steps; definition_of_done should be judgeable by non-technical stakeholders (see Convergence Criteria in JSONL Schema Design)
4. **Agent-First for Exploration**: Delegate codebase exploration to cli-explore-agent, do not analyze directly in the main flow
5. **Incremental validation**: Use `--continue` to iterate on existing roadmaps
6. **team-planex integration**: Created issues follow the standard issues-jsonl-schema, directly consumable by team-planex via issue IDs and tags

---

**Now execute req-plan-with-file for**: $ARGUMENTS
---
name: roadmap-with-file
description: Strategic requirement roadmap with iterative decomposition and issue creation. Outputs roadmap.md (human-readable, single source) + issues.jsonl (machine-executable). Handoff to team-planex.
argument-hint: "[-y|--yes] [-c|--continue] [-m progressive|direct|auto] \"requirement description\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm strategy selection, use recommended mode, skip interactive rounds.

# Workflow Roadmap Command (/workflow:roadmap-with-file)

## Quick Start

```bash
# Basic usage
/workflow:roadmap-with-file "Implement user authentication system with OAuth and 2FA"

# With mode selection
/workflow:roadmap-with-file -m progressive "Build real-time notification system"  # MVP→iterations
/workflow:roadmap-with-file -m direct "Refactor payment module"                   # Topological sequence
/workflow:roadmap-with-file -m auto "Add data export feature"                     # Auto-select

# Continue existing session
/workflow:roadmap-with-file --continue "auth system"

# Auto mode
/workflow:roadmap-with-file -y "Implement caching layer"
```

**Context Source**: cli-explore-agent (optional) + requirement analysis
**Output Directory**: `.workflow/.roadmap/{session-id}/`
**Core Output**: `roadmap.md` (single source, human-readable) + `issues.jsonl` (global, machine-executable)

## Output Artifacts

### Single Source of Truth

| Artifact | Purpose | Consumer |
|----------|---------|----------|
| `roadmap.md` | ⭐ Human-readable strategic roadmap with all context | Human review, team-planex handoff |
| `.workflow/issues/issues.jsonl` | Global issue store (appended) | team-planex, issue commands |

### Why No Separate JSON Files?

| Original File | Why Removed | Where Content Goes |
|---------------|-------------|-------------------|
| `strategy-assessment.json` | Duplicates roadmap.md content | Embedded in `roadmap.md` Strategy Assessment section |
| `exploration-codebase.json` | Single-use intermediate | Embedded in `roadmap.md` Codebase Context appendix |

## Overview

Strategic requirement roadmap with **iterative decomposition**. Creates a single `roadmap.md` that evolves through discussion, with issues persisted to the global `issues.jsonl` for execution.

**Core workflow**: Understand → Decompose → Iterate → Validate → Handoff

```
┌─────────────────────────────────────────────────────────────────────────┐
│                       ROADMAP ITERATIVE WORKFLOW                        │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  Phase 1: Requirement Understanding & Strategy                          │
│     ├─ Parse requirement: goal / constraints / stakeholders             │
│     ├─ Assess uncertainty level → recommend mode                        │
│     ├─ User confirms strategy (-m skips, -y auto-selects)               │
│     └─ Initialize roadmap.md with Strategy Assessment                   │
│                                                                         │
│  Phase 2: Decomposition & Issue Creation                                │
│     ├─ cli-roadmap-plan-agent executes decomposition                    │
│     ├─ Progressive: 2-4 layers (MVP→Optimized) with convergence         │
│     ├─ Direct: Topological task sequence with convergence               │
│     ├─ Create issues via ccw issue create → issues.jsonl                │
│     └─ Update roadmap.md with Roadmap table + Issue references          │
│                                                                         │
│  Phase 3: Iterative Refinement (Multi-Round)                            │
│     ├─ Present roadmap to user                                          │
│     ├─ Feedback: Approve | Adjust Scope | Modify Convergence | Replan   │
│     ├─ Update roadmap.md with each round                                │
│     └─ Repeat until approved (max 5 rounds)                             │
│                                                                         │
│  Phase 4: Handoff                                                       │
│     ├─ Final roadmap.md with Issue ID references                        │
│     ├─ Options: team-planex | first wave | view issues | done           │
│     └─ Issues ready in .workflow/issues/issues.jsonl                    │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

## Dual Modes

| Mode | Strategy | Best For | Decomposition |
|------|----------|----------|---------------|
| **Progressive** | MVP → Usable → Refined → Optimized | High uncertainty, need validation | 2-4 layers, each with full convergence |
| **Direct** | Topological task sequence | Clear requirements, confirmed tech | Tasks with explicit inputs/outputs |

**Auto-selection logic**:
- ≥3 high uncertainty factors → Progressive
- ≥3 low uncertainty factors → Direct
- Otherwise → Ask user preference
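The auto-selection rule above can be sketched as a small pure function. `recommendMode` is a hypothetical helper name, not part of the command's code; it assumes each uncertainty factor is rated `'low' | 'medium' | 'high'` as in the Phase 1 assessment:

```javascript
// Sketch of the auto-selection logic: count high vs. low factor ratings.
// With five factors, at most one of the two thresholds can be met.
function recommendMode(factors) {
  const ratings = Object.values(factors)
  const high = ratings.filter(r => r === 'high').length
  const low = ratings.filter(r => r === 'low').length
  if (high >= 3) return 'progressive'
  if (low >= 3) return 'direct'
  return 'ask' // fall back to asking the user
}
```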
## Output Structure

```
.workflow/.roadmap/RMAP-{slug}-{date}/
└── roadmap.md                      # ⭐ Single source of truth
                                    #   - Strategy Assessment (embedded)
                                    #   - Roadmap Table
                                    #   - Convergence Criteria per Issue
                                    #   - Codebase Context (appendix, if applicable)
                                    #   - Iteration History

.workflow/issues/issues.jsonl       # Global issue store (appended)
                                    #   - One JSON object per line
                                    #   - Consumed by team-planex, issue commands
```

## roadmap.md Template

```markdown
# Requirement Roadmap

**Session**: RMAP-{slug}-{date}
**Requirement**: {requirement}
**Strategy**: {progressive|direct}
**Status**: {Planning|Refining|Ready}
**Created**: {timestamp}

---

## Strategy Assessment

- **Uncertainty Level**: {high|medium|low}
- **Decomposition Mode**: {progressive|direct}
- **Assessment Basis**: {factors summary}
- **Goal**: {extracted goal}
- **Constraints**: {extracted constraints}
- **Stakeholders**: {extracted stakeholders}

---

## Roadmap

### Progressive Mode
| Wave | Issue ID | Layer | Goal | Priority | Dependencies |
|------|----------|-------|------|----------|--------------|
| 1 | ISS-xxx | MVP | ... | 2 | - |
| 2 | ISS-yyy | Usable | ... | 3 | ISS-xxx |

### Direct Mode
| Wave | Issue ID | Title | Type | Dependencies |
|------|----------|-------|------|--------------|
| 1 | ISS-xxx | ... | infrastructure | - |
| 2 | ISS-yyy | ... | feature | ISS-xxx |

---

## Convergence Criteria

### ISS-xxx: {Issue Title}
- **Criteria**: [testable conditions]
- **Verification**: [executable steps/commands]
- **Definition of Done**: [business language, non-technical]

### ISS-yyy: {Issue Title}
...

---

## Risks

| Risk | Severity | Mitigation |
|------|----------|------------|
| ... | ... | ... |

---

## Iteration History

### Round 1 - {timestamp}
**User Feedback**: {feedback summary}
**Changes Made**: {adjustments}
**Status**: {approved|continue iteration}

---

## Codebase Context (Optional)

*Included when codebase exploration was performed*

- **Relevant Modules**: [...]
- **Existing Patterns**: [...]
- **Integration Points**: [...]
```

## Issues JSONL Specification

### Location & Format

```
Path: .workflow/issues/issues.jsonl
Format: JSONL (one complete JSON object per line)
Encoding: UTF-8
Mode: Append-only (new issues appended to end)
```

### Record Schema

```json
{
  "id": "ISS-YYYYMMDD-NNN",
  "title": "[LayerName] goal or [TaskType] title",
  "status": "pending",
  "priority": 2,
  "context": "Markdown with goal, scope, convergence, verification, DoD",
  "source": "text",
  "tags": ["roadmap", "progressive|direct", "wave-N", "layer-name"],
  "extended_context": {
    "notes": {
      "session": "RMAP-{slug}-{date}",
      "strategy": "progressive|direct",
      "wave": 1,
      "depends_on_issues": []
    }
  },
  "lifecycle_requirements": {
    "test_strategy": "unit",
    "regression_scope": "affected",
    "acceptance_type": "automated",
    "commit_strategy": "per-issue"
  }
}
```
### Query Interface

```bash
# By ID (detail view)
ccw issue list ISS-20260227-001

# List all with status filter
ccw issue list --status planned,queued
ccw issue list --brief               # JSON minimal output

# Queue operations (wave-based execution)
ccw issue queue list                 # List all queues
ccw issue queue dag                  # Get dependency graph (JSON)
ccw issue next --queue <queue-id>    # Get next task

# Execute
ccw issue queue add <issue-id>       # Add to active queue
ccw issue done <item-id>             # Mark completed
```

> **Note**: Issues are tagged with `wave-N` in the `tags[]` field for filtering. Use `--brief` for programmatic parsing.
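The `wave-N` tag filtering mentioned in the note (and used by the "Execute first wave" option) amounts to a one-line predicate over parsed records. A sketch, assuming records carry `tags` as in the schema above (`issuesForWave` is an illustrative name, not a `ccw` API):

```javascript
// Filter parsed issue records down to a single wave by tag, e.g. wave 1.
function issuesForWave(issues, wave) {
  return issues.filter(i => (i.tags || []).includes(`wave-${wave}`))
}

// Two sample records in the shape produced by the record schema
const sample = [
  { id: 'ISS-20260227-001', tags: ['roadmap', 'progressive', 'wave-1'] },
  { id: 'ISS-20260227-002', tags: ['roadmap', 'progressive', 'wave-2'] }
]
```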
### Consumers

| Consumer | Usage |
|----------|-------|
| `team-planex` | Load by ID or tag, execute in wave order |
| `issue-manage` | CRUD operations on issues |
| `issue:execute` | DAG-based parallel execution |
| `issue:queue` | Form execution queue from solutions |

## Implementation

### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
const modeMatch = $ARGUMENTS.match(/(?:--mode|-m)\s+(progressive|direct|auto)/)
const requestedMode = modeMatch ? modeMatch[1] : 'auto'

// Clean requirement text
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|-c|--mode\s+\w+|-m\s+\w+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `RMAP-${slug}-${dateStr}`
const sessionFolder = `.workflow/.roadmap/${sessionId}`

// Auto-detect continue
if (continueMode || file_exists(`${sessionFolder}/roadmap.md`)) {
  // Resume existing session
}
Bash(`mkdir -p ${sessionFolder}`)
```
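The slug and session-ID rules above can be checked in isolation by extracting them into a pure function (a hypothetical helper for illustration; the command itself inlines this logic):

```javascript
// Session ID construction as in the block above: lowercase, collapse runs of
// characters outside [a-z0-9] and CJK into '-', cap the slug at 40 chars.
function makeSessionId(requirement, dateStr) {
  const slug = requirement.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
    .substring(0, 40)
  return `RMAP-${slug}-${dateStr}`
}
```

For example, `makeSessionId('Implement caching layer', '2026-02-27')` yields `RMAP-implement-caching-layer-2026-02-27`, matching the `RMAP-{slug}-{YYYY-MM-DD}` format noted under Configuration.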
|
|
||||||
|
|
||||||
### Phase 1: Requirement Understanding & Strategy
|
|
||||||
|
|
||||||
**Objective**: Parse requirement, assess uncertainty, select decomposition strategy, initialize roadmap.md.
|
|
||||||
|
|
||||||
**Steps**:
|
|
||||||
|
|
||||||
1. **Parse Requirement**
|
|
||||||
- Extract: goal, constraints, stakeholders, keywords
|
|
||||||
|
|
||||||
2. **Assess Uncertainty**
|
|
||||||
```javascript
|
|
||||||
const uncertaintyFactors = {
|
|
||||||
scope_clarity: 'low|medium|high',
|
|
||||||
technical_risk: 'low|medium|high',
|
|
||||||
dependency_unknown: 'low|medium|high',
|
|
||||||
domain_familiarity: 'low|medium|high',
|
|
||||||
requirement_stability: 'low|medium|high'
|
|
||||||
}
|
|
||||||
// ≥3 high → progressive, ≥3 low → direct, else → ask
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Strategy Selection** (skip if `-m` specified or autoYes)
|
|
||||||
```javascript
|
|
||||||
AskUserQuestion({
|
|
||||||
questions: [{
|
|
||||||
question: `Decomposition strategy:\nUncertainty: ${uncertaintyLevel}\nRecommended: ${recommendedMode}`,
|
|
||||||
header: "Strategy",
|
|
||||||
multiSelect: false,
|
|
||||||
options: [
|
|
||||||
{ label: recommendedMode === 'progressive' ? "Progressive (Recommended)" : "Progressive",
|
|
||||||
description: "MVP→iterations, validate core first" },
|
|
||||||
{ label: recommendedMode === 'direct' ? "Direct (Recommended)" : "Direct",
|
|
||||||
description: "Topological task sequence, clear dependencies" }
|
|
||||||
]
|
|
||||||
}]
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Initialize roadmap.md** with Strategy Assessment section
|
|
||||||
|
|
||||||
**Success Criteria**:
|
|
||||||
- roadmap.md created with Strategy Assessment
|
|
||||||
- Strategy selected (progressive or direct)
|
|
||||||
- Uncertainty factors documented
|
|
||||||
|
|
||||||
### Phase 2: Decomposition & Issue Creation
|
|
||||||
|
|
||||||
**Objective**: Execute decomposition via `cli-roadmap-plan-agent`, create issues, update roadmap.md.
|
|
||||||
|
|
||||||
**Agent**: `cli-roadmap-plan-agent`
|
|
||||||
|
|
||||||
**Agent Tasks**:
|
|
||||||
1. Analyze requirement with strategy context
|
|
||||||
2. Execute CLI-assisted decomposition (Gemini → Qwen fallback)
|
|
||||||
3. Create issues via `ccw issue create`
|
|
||||||
4. Generate roadmap table with Issue ID references
|
|
||||||
5. Update roadmap.md
|
|
||||||
|
|
||||||
**Agent Prompt Template**:
|
|
||||||
```javascript
|
|
||||||
Task({
|
|
||||||
subagent_type: "cli-roadmap-plan-agent",
|
|
||||||
run_in_background: false,
|
|
||||||
description: `Roadmap decomposition: ${slug}`,
|
|
||||||
prompt: `
|
|
||||||
## Roadmap Decomposition Task
|
|
||||||
|
|
||||||
### Input Context
|
|
||||||
- **Requirement**: ${requirement}
|
|
||||||
- **Strategy**: ${selectedMode}
|
|
||||||
- **Session**: ${sessionId}
|
|
||||||
- **Folder**: ${sessionFolder}
|
|
||||||
|
|
||||||
### Mode-Specific Requirements
|
|
||||||
|
|
||||||
${selectedMode === 'progressive' ? `**Progressive Mode**:
|
|
||||||
- 2-4 layers: MVP / Usable / Refined / Optimized
|
|
||||||
- Each layer: goal, scope, excludes, convergence, risks, effort
|
|
||||||
- L0 (MVP) must be self-contained, no dependencies
|
|
||||||
- Scope: each feature in exactly ONE layer (no overlap)` :
|
|
||||||
|
|
||||||
`**Direct Mode**:
|
|
||||||
- Topologically-sorted task sequence
|
|
||||||
- Each task: title, type, scope, inputs, outputs, convergence, depends_on
|
|
||||||
- Inputs from preceding outputs or existing resources
|
|
||||||
- parallel_group for truly independent tasks`}
|
|
||||||
|
|
||||||
### Convergence Quality Requirements
|
|
||||||
- criteria[]: MUST be testable
|
|
||||||
- verification: MUST be executable
|
|
||||||
- definition_of_done: MUST use business language
|
|
||||||
|
|
||||||
### Output
|
|
||||||
1. **${sessionFolder}/roadmap.md** - Update with Roadmap table + Convergence sections
|
|
||||||
2. **Append to .workflow/issues/issues.jsonl** via ccw issue create
|
|
||||||
|
|
||||||
### CLI Configuration
|
|
||||||
- Primary: gemini, Fallback: qwen, Timeout: 60000ms
|
|
||||||
`
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
**Success Criteria**:
|
|
||||||
- Issues created in `.workflow/issues/issues.jsonl`
|
|
||||||
- roadmap.md updated with Issue references
|
|
||||||
- No circular dependencies
|
|
||||||
- Convergence criteria testable
|
|
||||||
|
|
||||||
### Phase 3: Iterative Refinement
|
|
||||||
|
|
||||||
**Objective**: Multi-round user feedback to refine roadmap.
|
|
||||||
|
|
||||||
**Workflow Steps**:
|
|
||||||
|
|
||||||
1. **Present Roadmap**
|
|
||||||
- Display Roadmap table + key Convergence criteria
|
|
||||||
- Show issue count and wave breakdown
|
|
||||||
|
|
||||||
2. **Gather Feedback** (skip if autoYes)
|
|
||||||
```javascript
|
|
||||||
const feedback = AskUserQuestion({
|
|
||||||
questions: [{
|
|
||||||
question: `Roadmap validation (round ${round}):\n${issueCount} issues across ${waveCount} waves. Feedback?`,
|
|
||||||
header: "Feedback",
|
|
||||||
multiSelect: false,
|
|
||||||
options: [
|
|
||||||
{ label: "Approve", description: "Proceed to handoff" },
|
|
||||||
{ label: "Adjust Scope", description: "Modify issue scopes" },
|
|
||||||
{ label: "Modify Convergence", description: "Refine criteria/verification" },
|
|
||||||
{ label: "Re-decompose", description: "Change strategy/layering" }
|
|
||||||
]
|
|
||||||
}]
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Process Feedback**
|
|
||||||
- **Approve**: Exit loop, proceed to Phase 4
|
|
||||||
- **Adjust Scope**: Modify issue context, update roadmap.md
|
|
||||||
- **Modify Convergence**: Refine criteria/verification, update roadmap.md
|
|
||||||
- **Re-decompose**: Return to Phase 2 with new strategy
|
|
||||||
|
|
||||||
4. **Update roadmap.md**
|
|
||||||
- Append to Iteration History section
|
|
||||||
- Update Roadmap table if changed
|
|
||||||
- Increment round counter
|
|
||||||
|
|
||||||
5. **Loop** (max 5 rounds, then force proceed)
|
|
||||||
|
|
||||||
**Success Criteria**:
|
|
||||||
- User approved OR max rounds reached
|
|
||||||
- All changes recorded in Iteration History
|
|
||||||
- roadmap.md reflects final state
|
|
||||||
|
|
||||||
### Phase 4: Handoff
|
|
||||||
|
|
||||||
**Objective**: Present final roadmap, offer execution options.
|
|
||||||
|
|
||||||
**Steps**:
|
|
||||||
|
|
||||||
1. **Display Summary**
|
|
||||||
```markdown
|
|
||||||
## Roadmap Complete
|
|
||||||
|
|
||||||
- **Session**: RMAP-{slug}-{date}
|
|
||||||
- **Strategy**: {progressive|direct}
|
|
||||||
- **Issues Created**: {count} across {waves} waves
|
|
||||||
- **Roadmap**: .workflow/.roadmap/RMAP-{slug}-{date}/roadmap.md
|
|
||||||
|
|
||||||
| Wave | Issue Count | Layer/Type |
|
|
||||||
|------|-------------|------------|
|
|
||||||
| 1 | 2 | MVP / infrastructure |
|
|
||||||
| 2 | 3 | Usable / feature |
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Offer Options** (skip if autoYes)
   ```javascript
   AskUserQuestion({
     questions: [{
       question: `${issueIds.length} issues ready. Next step:`,
       header: "Next Step",
       multiSelect: false,
       options: [
         { label: "Execute with team-planex (Recommended)",
           description: `Run all ${issueIds.length} issues via team-planex` },
         { label: "Execute first wave",
           description: "Run wave-1 issues only" },
         { label: "View issues",
           description: "Display issue details from issues.jsonl" },
         { label: "Done",
           description: "Save and exit, execute later" }
       ]
     }]
   })
   ```
3. **Execute Selection**

   | Selection | Action |
   |-----------|--------|
   | Execute with team-planex | `Skill(skill="team-planex", args="${issueIds.join(' ')}")` |
   | Execute first wave | Filter by `wave-1` tag, pass to team-planex |
   | View issues | Display from `.workflow/issues/issues.jsonl` |
   | Done | Output paths, end |
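The "Execute first wave" row can be sketched concretely. This is a hypothetical helper, assuming `issues.jsonl` is JSONL (one issue object per line) and that wave membership lives in a `tags` array — neither field shape is confirmed by this document:

```javascript
// Hypothetical sketch: collect wave-1 issue IDs from issues.jsonl.
// Assumes one JSON object per line with `id` and `tags` fields.
function filterWave(jsonlText, waveTag) {
  return jsonlText
    .split('\n')
    .filter(line => line.trim() !== '')            // drop blank lines
    .map(line => JSON.parse(line))                 // parse each JSONL record
    .filter(issue => (issue.tags || []).includes(waveTag))
    .map(issue => issue.id);
}

// The resulting IDs would then be handed to team-planex, e.g.:
// Skill(skill="team-planex", args=filterWave(text, "wave-1").join(' '))
```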
## Configuration

| Flag | Default | Description |
|------|---------|-------------|
| `-y, --yes` | false | Auto-confirm all decisions |
| `-c, --continue` | false | Continue existing session |
| `-m, --mode` | auto | Strategy: progressive / direct / auto |

**Session ID format**: `RMAP-{slug}-{YYYY-MM-DD}`
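A minimal sketch of producing an ID in this format; the slug rules (lowercase, non-alphanumeric runs collapsed to hyphens) are an assumption, since the document only fixes the overall `RMAP-{slug}-{YYYY-MM-DD}` shape:

```javascript
// Sketch: derive RMAP-{slug}-{YYYY-MM-DD} from a free-text requirement title.
// Slugging rules here (lowercase, hyphen runs, trimmed) are assumed, not specified.
function sessionId(title, date = new Date()) {
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, '');     // trim leading/trailing hyphens
  const day = date.toISOString().split('T')[0]; // YYYY-MM-DD
  return `RMAP-${slug}-${day}`;
}
```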
## Error Handling

| Error | Resolution |
|-------|------------|
| cli-roadmap-plan-agent fails | Retry once, fall back to manual decomposition |
| No codebase | Skip exploration, pure requirement decomposition |
| Circular dependency detected | Prompt user, re-decompose |
| User feedback timeout | Save roadmap.md, show `--continue` command |
| Max rounds reached | Force proceed with current roadmap |
| Session folder conflict | Append timestamp suffix |
## Best Practices

1. **Clear Requirements**: Detailed description → better decomposition
2. **Iterate on Roadmap**: Use feedback rounds to refine convergence criteria
3. **Testable Convergence**: criteria = assertions, DoD = business language
4. **Use Continue Mode**: Resume to iterate on existing roadmap
5. **Wave Execution**: Start with wave-1 (MVP) to validate before full execution
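Practice 3 ("criteria = assertions, DoD = business language") can be illustrated with a hypothetical convergence entry; every field name and check below is invented for illustration, not a schema this workflow defines:

```javascript
// Hypothetical convergence entry: machine-checkable criteria plus a
// business-language Definition of Done. All names are illustrative.
const convergence = {
  issue: 'wave-1/auth-login',
  criteria: [
    'POST /login returns 200 with a session token for valid credentials',
    'POST /login returns 401 for invalid credentials'
  ],
  dod: 'A registered user can sign in from the login page.'
};

// Each criterion should read as an assertion a verifier could check,
// rather than a vague goal like "login works well".
function isAssertionLike(criterion) {
  return /\breturns?\b|\bmust\b|\bequals?\b|\bis\b/.test(criterion);
}
```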
## Usage Recommendations

**When to Use Roadmap vs Other Commands:**

| Scenario | Recommended Command |
|----------|-------------------|
| Strategic planning, need issue tracking | `/workflow:roadmap-with-file` |
| Quick task breakdown, immediate execution | `/workflow:lite-plan` |
| Collaborative multi-agent planning | `/workflow:collaborative-plan-with-file` |
| Full specification documents | `spec-generator` skill |
| Code implementation from existing plan | `/workflow:lite-execute` |
---

**Now execute roadmap-with-file for**: $ARGUMENTS
@@ -109,16 +109,79 @@ rm -f .workflow/archives/$SESSION_ID/.archiving
Manifest: Updated with N total sessions
```

-### Phase 4: Auto-Sync Project State
+### Phase 4: Update project-tech.json (Optional)

-Execute `/workflow:session:sync -y "{description}"` to update both `specs/*.md` and `project-tech.json` from session context.
+**Skip if**: `.workflow/project-tech.json` doesn't exist

-The description is taken from the `description` field of the Phase 2 `workflow-session.json`.
+```bash
+# Check
+test -f .workflow/project-tech.json || echo "SKIP"
+```
+
+**If exists**, add feature entry:
+
+```json
+{
+  "id": "<slugified title>",
+  "title": "<from IMPL_PLAN.md>",
+  "status": "completed",
+  "tags": ["<from Phase 2>"],
+  "timeline": { "implemented_at": "<date>" },
+  "traceability": { "session_id": "<SESSION_ID>", "archive_path": "<path>" }
+}
+```
+
+**Output**:
+```
+✓ Feature added to project registry
+```
+
+### Phase 5: Ask About Solidify (Always)
+
+After successful archival, prompt user to capture learnings:
+
+```javascript
+// Parse --yes flag
+const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
+
+if (autoYes) {
+  // Auto mode: Skip solidify
+  console.log(`[--yes] Auto-selecting: Skip solidify`)
+  console.log(`Session archived successfully.`)
+  // Done - no solidify
+} else {
+  // Interactive mode: Ask user
+  AskUserQuestion({
+    questions: [{
+      question: "Would you like to solidify learnings from this session into project guidelines?",
+      header: "Solidify",
+      options: [
+        { label: "Yes, solidify now", description: "Extract learnings and update project-guidelines.json" },
+        { label: "Skip", description: "Archive complete, no learnings to capture" }
+      ],
+      multiSelect: false
+    }]
+  })
+
+  // If "Yes, solidify now": Execute `/workflow:session:solidify` with the archived session ID.
+}
+```
## Auto Mode Defaults

When `--yes` or `-y` flag is used:
-- **Sync**: Auto-executed with `-y` (no confirmation)
+- **Solidify Learnings**: Auto-selected "Skip" (archive only, no solidify)
+
+**Flag Parsing**:
+```javascript
+const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
+```
+
+**Output**:
+```
+Session archived successfully.
+→ Run /workflow:session:solidify to capture learnings (recommended)
+```

## Error Recovery

@@ -135,5 +198,6 @@ When `--yes` or `-y` flag is used:
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
-Phase 4: /workflow:session:sync -y → update specs/*.md + project-tech
+Phase 4: update project-tech.json features array (optional)
+Phase 5: ask user → solidify learnings (optional)
```
@@ -18,7 +18,7 @@ When `--yes` or `-y`: Auto-categorize and add guideline without confirmation.

## Overview

-Crystallizes ephemeral session context (insights, decisions, constraints) into permanent project guidelines stored in `.workflow/specs/*.md`. This ensures valuable learnings persist across sessions and inform future planning.
+Crystallizes ephemeral session context (insights, decisions, constraints) into permanent project guidelines stored in `.workflow/project-guidelines.json`. This ensures valuable learnings persist across sessions and inform future planning.

## Use Cases
@@ -113,14 +113,34 @@ ELSE (convention/constraint/learning):
### Step 1: Ensure Guidelines File Exists

```bash
-bash(test -f .workflow/specs/coding-conventions.md && echo "EXISTS" || echo "NOT_FOUND")
+bash(test -f .workflow/project-guidelines.json && echo "EXISTS" || echo "NOT_FOUND")
```

-**If NOT_FOUND**, initialize spec system:
+**If NOT_FOUND**, create scaffold:

-```bash
-Bash('ccw spec init')
-Bash('ccw spec rebuild')
+```javascript
+const scaffold = {
+  conventions: {
+    coding_style: [],
+    naming_patterns: [],
+    file_structure: [],
+    documentation: []
+  },
+  constraints: {
+    architecture: [],
+    tech_stack: [],
+    performance: [],
+    security: []
+  },
+  quality_rules: [],
+  learnings: [],
+  _metadata: {
+    created_at: new Date().toISOString(),
+    version: "1.0.0"
+  }
+};
+
+Write('.workflow/project-guidelines.json', JSON.stringify(scaffold, null, 2));
```

### Step 2: Auto-detect Type (if not specified)
@@ -183,40 +203,33 @@ function buildEntry(rule, type, category, sessionId) {
}
```

-### Step 4: Update Spec Files
+### Step 4: Update Guidelines File

```javascript
-// Map type+category to target spec file
-const specFileMap = {
-  convention: '.workflow/specs/coding-conventions.md',
-  constraint: '.workflow/specs/architecture-constraints.md'
-}
-
-if (type === 'convention' || type === 'constraint') {
-  const targetFile = specFileMap[type]
-  const existing = Read(targetFile)
-  // Deduplicate: skip if rule text already exists in the file
-  if (!existing.includes(rule)) {
-    const ruleText = `- [${category}] ${rule}`
-    const newContent = existing.trimEnd() + '\n' + ruleText + '\n'
-    Write(targetFile, newContent)
+const guidelines = JSON.parse(Read('.workflow/project-guidelines.json'));
+
+if (type === 'convention') {
+  if (!guidelines.conventions[category]) {
+    guidelines.conventions[category] = [];
+  }
+  if (!guidelines.conventions[category].includes(rule)) {
+    guidelines.conventions[category].push(rule);
+  }
+} else if (type === 'constraint') {
+  if (!guidelines.constraints[category]) {
+    guidelines.constraints[category] = [];
+  }
+  if (!guidelines.constraints[category].includes(rule)) {
+    guidelines.constraints[category].push(rule);
  }
} else if (type === 'learning') {
-  // Learnings go to coding-conventions.md as a special section
-  const targetFile = '.workflow/specs/coding-conventions.md'
-  const existing = Read(targetFile)
-  const entry = buildEntry(rule, type, category, sessionId)
-  const learningText = `- [learning/${category}] ${entry.insight} (${entry.date})`
-
-  if (!existing.includes(entry.insight)) {
-    const newContent = existing.trimEnd() + '\n' + learningText + '\n'
-    Write(targetFile, newContent)
-  }
+  guidelines.learnings.push(buildEntry(rule, type, category, sessionId));
}

-// Rebuild spec index after modification
-Bash('ccw spec rebuild')
+guidelines._metadata.updated_at = new Date().toISOString();
+guidelines._metadata.last_solidified_by = sessionId;
+
+Write('.workflow/project-guidelines.json', JSON.stringify(guidelines, null, 2));
```

### Step 5: Display Confirmation

@@ -228,7 +241,7 @@ Type: ${type}
Category: ${category}
Rule: "${rule}"

-Location: .workflow/specs/*.md -> ${type}s.${category}
+Location: .workflow/project-guidelines.json -> ${type}s.${category}

Total ${type}s in ${category}: ${count}
```
@@ -373,7 +386,7 @@ AskUserQuestion({
/workflow:session:solidify "Use async/await instead of callbacks" --type convention --category coding_style
```

-Result in `specs/*.md`:
+Result in `project-guidelines.json`:
```json
{
  "conventions": {

@@ -431,7 +444,7 @@ Result: Creates a new CMEM with consolidated content from the 10 most recent non

## Integration with Planning

-The `specs/*.md` is consumed by:
+The `project-guidelines.json` is consumed by:

1. **`workflow-plan` skill (context-gather phase)**: Loads guidelines into context-package.json
2. **`workflow-plan` skill**: Passes guidelines to task generation agent

@@ -449,5 +462,4 @@ This ensures all future planning respects solidified rules without users needing

- `/workflow:session:start` - Start a session (may prompt for solidify at end)
- `/workflow:session:complete` - Complete session (prompts for learnings to solidify)
-- `/workflow:init` - Creates specs/*.md scaffold if missing
-- `/workflow:init-specs` - Interactive wizard to create individual specs with scope selection
+- `/workflow:init` - Creates project-guidelines.json scaffold if missing
@@ -44,7 +44,7 @@ ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs
```bash
# Check if project state exists (both files required)
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
-bash(test -f .workflow/specs/*.md && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
+bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```

**If either NOT_FOUND**, delegate to `/workflow:init`:

@@ -53,14 +53,14 @@ bash(test -f .workflow/specs/*.md && echo "GUIDELINES_EXISTS" || echo "GUIDELINE
Skill(skill="workflow:init");

// Wait for init completion
-// project-tech.json and specs/*.md will be created
+// project-tech.json and project-guidelines.json will be created
```

**Output**:
- If BOTH_EXIST: `PROJECT_STATE: initialized`
- If NOT_FOUND: Calls `/workflow:init` → creates:
  - `.workflow/project-tech.json` with full technical analysis
-  - `.workflow/specs/*.md` with empty scaffold
+  - `.workflow/project-guidelines.json` with empty scaffold

**Note**: `/workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.
@@ -1,201 +0,0 @@
---
name: sync
description: Quick-sync session work to specs/*.md and project-tech
argument-hint: "[-y|--yes] [\"what was done\"]"
allowed-tools: Bash(*), Read(*), Write(*), Edit(*)
---

# Session Sync (/workflow:session:sync)

One-shot update `specs/*.md` + `project-tech.json` from current session context.

**Design**: Scan context → extract → write. No interactive wizards.

## Auto Mode

`--yes` or `-y`: Skip confirmation, auto-write both files.

## Process

```
Step 1: Gather Context
├─ git diff --stat HEAD~3..HEAD (recent changes)
├─ Active session folder (.workflow/.lite-plan/*) if exists
└─ User summary ($ARGUMENTS or auto-generate from git log)

Step 2: Extract Updates
├─ Guidelines: conventions / constraints / learnings
└─ Tech: development_index entry

Step 3: Preview & Confirm (skip if --yes)

Step 4: Write both files

Step 5: One-line confirmation
```

## Step 1: Gather Context

```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const userSummary = $ARGUMENTS.replace(/--yes|-y/g, '').trim()

// Recent changes
const gitStat = Bash('git diff --stat HEAD~3..HEAD 2>/dev/null || git diff --stat HEAD 2>/dev/null')
const gitLog = Bash('git log --oneline -5')

// Active session (optional)
const sessionFolders = Glob('.workflow/.lite-plan/*/plan.json')
let sessionContext = null
if (sessionFolders.length > 0) {
  const latest = sessionFolders[sessionFolders.length - 1]
  sessionContext = JSON.parse(Read(latest))
}

// Build summary
const summary = userSummary
  || sessionContext?.summary
  || gitLog.split('\n')[0].replace(/^[a-f0-9]+ /, '')
```

## Step 2: Extract Updates

Analyze context and produce two update payloads. Use LLM reasoning (current agent) — no CLI calls.

```javascript
// ── Guidelines extraction ──
// Scan git diff + session for:
// - New patterns adopted → convention
// - Restrictions discovered → constraint
// - Surprises / gotchas → learning
//
// Output: array of { type, category, text }
// RULE: Only extract genuinely reusable insights. Skip trivial/obvious items.
// RULE: Deduplicate against existing guidelines before adding.

// Load existing specs via ccw spec load
const existingSpecs = Bash('ccw spec load --dimension specs 2>/dev/null || echo ""')
const guidelineUpdates = [] // populated by agent analysis

// ── Tech extraction ──
// Build one development_index entry from session work

function detectCategory(text) {
  text = text.toLowerCase()
  if (/\b(fix|bug|error|crash)\b/.test(text)) return 'bugfix'
  if (/\b(refactor|cleanup|reorganize)\b/.test(text)) return 'refactor'
  if (/\b(doc|readme|comment)\b/.test(text)) return 'docs'
  if (/\b(add|new|create|implement)\b/.test(text)) return 'feature'
  return 'enhancement'
}

function detectSubFeature(gitStat) {
  // Most-changed directory from git diff --stat
  const dirs = gitStat.match(/\S+\//g) || []
  const counts = {}
  dirs.forEach(d => {
    const seg = d.split('/').filter(Boolean).slice(-2, -1)[0] || 'general'
    counts[seg] = (counts[seg] || 0) + 1
  })
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0]?.[0] || 'general'
}

const techEntry = {
  title: summary.slice(0, 60),
  sub_feature: detectSubFeature(gitStat),
  date: new Date().toISOString().split('T')[0],
  description: summary.slice(0, 100),
  status: 'completed',
  session_id: sessionContext ? sessionFolders[sessionFolders.length - 1].match(/lite-plan\/([^/]+)/)?.[1] : null
}
```

## Step 3: Preview & Confirm

```javascript
// Show preview
console.log(`
── Sync Preview ──

Guidelines (${guidelineUpdates.length} items):
${guidelineUpdates.map(g => `  [${g.type}/${g.category}] ${g.text}`).join('\n') || '  (none)'}

Tech [${detectCategory(summary)}]:
  ${techEntry.title}

Target files:
  .workflow/specs/*.md
  .workflow/project-tech.json
`)

if (!autoYes) {
  const confirm = AskUserQuestion("Apply these updates? (modify/skip items if needed)")
  // User can say "skip guidelines" or "change category to bugfix" etc.
}
```

## Step 4: Write

```javascript
// ── Update specs/*.md ──
if (guidelineUpdates.length > 0) {
  // Map guideline types to spec files
  const specFileMap = {
    convention: '.workflow/specs/coding-conventions.md',
    constraint: '.workflow/specs/architecture-constraints.md',
    learning: '.workflow/specs/coding-conventions.md' // learnings appended to conventions
  }

  for (const g of guidelineUpdates) {
    const targetFile = specFileMap[g.type]
    const existing = Read(targetFile)
    const ruleText = g.type === 'learning'
      ? `- [${g.category}] ${g.text} (learned: ${new Date().toISOString().split('T')[0]})`
      : `- [${g.category}] ${g.text}`

    // Deduplicate: skip if text already in file
    if (!existing.includes(g.text)) {
      const newContent = existing.trimEnd() + '\n' + ruleText + '\n'
      Write(targetFile, newContent)
    }
  }

  // Rebuild spec index after writing
  Bash('ccw spec rebuild')
}

// ── Update project-tech.json ──
const techPath = '.workflow/project-tech.json'
const tech = JSON.parse(Read(techPath))

if (!tech.development_index) {
  tech.development_index = { feature: [], enhancement: [], bugfix: [], refactor: [], docs: [] }
}

const category = detectCategory(summary)
tech.development_index[category].push(techEntry)
tech._metadata.last_updated = new Date().toISOString()

Write(techPath, JSON.stringify(tech, null, 2))
```

## Step 5: Confirm

```
✓ Synced: ${guidelineUpdates.length} guidelines + 1 tech entry [${category}]
```

## Error Handling

| Error | Resolution |
|-------|------------|
| File missing | Create scaffold (same as solidify Step 1) |
| No git history | Use user summary or session context only |
| No meaningful updates | Skip guidelines, still add tech entry |
| Duplicate entry | Skip silently (dedup check in Step 4) |

## Related Commands

- `/workflow:init` - Initialize project with specs scaffold
- `/workflow:init-specs` - Interactive wizard to create individual specs with scope selection
- `/workflow:session:solidify` - Add individual rules one at a time
@@ -471,26 +471,11 @@ ${recommendations.map(r => \`- ${r}\`).join('\\n')}

2. **Build Execution Context**

-   **Load Project Context** (from init.md products):
-   ```javascript
-   // Read project-tech.json (if exists)
-   const projectTech = file_exists('.workflow/project-tech.json')
-     ? JSON.parse(Read('.workflow/project-tech.json')) : null
-   // Read specs/*.md (if exists)
-   const projectGuidelines = file_exists('.workflow/specs/*.md')
-     ? JSON.parse(Read('.workflow/specs/*.md')) : null
-   ```
-
   ```javascript
   const executionContext = `
   ⚠️ Execution Notes from Previous Tasks
   ${relevantNotes} // Categorized notes with severity

-  📋 Project Context (from init.md)
-  - Tech Stack: ${projectTech?.technology_analysis?.technology_stack || 'N/A'}
-  - Architecture: ${projectTech?.technology_analysis?.architecture?.style || 'N/A'}
-  - Constraints: ${projectGuidelines?.constraints || 'None defined'}
-
   Current Task: ${task.id}
   - Original ID: ${task.original_id}
   - Source Plan: ${task.source_plan}
@@ -1,190 +0,0 @@
---
name: command-generator
description: Command file generator - 5 phase workflow for creating Claude Code command files with YAML frontmatter. Generates .md command files for project or user scope. Triggers on "create command", "new command", "command generator".
allowed-tools: Read, Write, Edit, Bash, Glob
---

# Command Generator

CLI-based command file generator producing Claude Code command .md files through a structured 5-phase workflow. Supports both project-level (`.claude/commands/`) and user-level (`~/.claude/commands/`) command locations.

## Architecture Overview

```
+-----------------------------------------------------------+
|                    Command Generator                      |
|                                                           |
| Input: skillName, description, location, [group], [hint]  |
|                         |                                 |
|    +-------------------------------------------------+    |
|    |          Phase 1-5: Sequential Pipeline         |    |
|    |                                                 |    |
|    |  [P1] --> [P2] --> [P3] --> [P4] --> [P5]       |    |
|    |  Param   Target  Template  Content   File       |    |
|    |  Valid   Path    Loading   Format    Gen        |    |
|    +-------------------------------------------------+    |
|                         |                                 |
| Output: {scope}/.claude/commands/{group}/{name}.md        |
|                                                           |
+-----------------------------------------------------------+
```

## Key Design Principles

1. **Single Responsibility**: Generates one command file per invocation
2. **Scope Awareness**: Supports project and user-level command locations
3. **Template-Driven**: Uses consistent template for all generated commands
4. **Validation First**: Validates all required parameters before file operations
5. **Non-Destructive**: Warns if command file already exists

---

## Execution Flow

```
Phase 1: Parameter Validation
- Ref: phases/01-parameter-validation.md
- Validate: skillName (required), description (required), location (required)
- Optional: group, argumentHint
- Output: validated params object

Phase 2: Target Path Resolution
- Ref: phases/02-target-path-resolution.md
- Resolve: location -> target commands directory
- Support: project (.claude/commands/) vs user (~/.claude/commands/)
- Handle: group subdirectory if provided
- Output: targetPath string

Phase 3: Template Loading
- Ref: phases/03-template-loading.md
- Load: templates/command-md.md
- Template contains YAML frontmatter with placeholders
- Output: templateContent string

Phase 4: Content Formatting
- Ref: phases/04-content-formatting.md
- Substitute: {{name}}, {{description}}, {{group}}, {{argumentHint}}
- Handle: optional fields (group, argumentHint)
- Output: formattedContent string

Phase 5: File Generation
- Ref: phases/05-file-generation.md
- Check: file existence (warn if exists)
- Write: formatted content to target path
- Output: success confirmation with file path
```

## Usage Examples

### Basic Command (Project Scope)
```javascript
Skill(skill="command-generator", args={
  skillName: "deploy",
  description: "Deploy application to production environment",
  location: "project"
})
// Output: .claude/commands/deploy.md
```

### Grouped Command with Argument Hint
```javascript
Skill(skill="command-generator", args={
  skillName: "create",
  description: "Create new issue from GitHub URL or text",
  location: "project",
  group: "issue",
  argumentHint: "[-y|--yes] <github-url | text-description> [--priority 1-5]"
})
// Output: .claude/commands/issue/create.md
```

### User-Level Command
```javascript
Skill(skill="command-generator", args={
  skillName: "global-status",
  description: "Show global Claude Code status",
  location: "user"
})
// Output: ~/.claude/commands/global-status.md
```

---

## Reference Documents by Phase

### Phase 1: Parameter Validation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/01-parameter-validation.md](phases/01-parameter-validation.md) | Validate required parameters | Phase 1 execution |

### Phase 2: Target Path Resolution
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/02-target-path-resolution.md](phases/02-target-path-resolution.md) | Resolve target directory | Phase 2 execution |

### Phase 3: Template Loading
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/03-template-loading.md](phases/03-template-loading.md) | Load command template | Phase 3 execution |
| [templates/command-md.md](templates/command-md.md) | Command file template | Template reference |

### Phase 4: Content Formatting
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/04-content-formatting.md](phases/04-content-formatting.md) | Format content with params | Phase 4 execution |

### Phase 5: File Generation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/05-file-generation.md](phases/05-file-generation.md) | Write final file | Phase 5 execution |

### Design Specifications
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [specs/command-design-spec.md](specs/command-design-spec.md) | Command design guidelines | Understanding best practices |

---

## Output Structure

### Generated Command File

```markdown
---
name: {skillName}
description: {description}
{group} {argumentHint}
---

# {skillName} Command

## Overview
{Auto-generated placeholder for command overview}

## Usage
{Auto-generated placeholder for usage examples}

## Execution Flow
{Auto-generated placeholder for execution steps}
```

---

## Error Handling

| Error | Stage | Action |
|-------|-------|--------|
| Missing skillName | Phase 1 | Error: "skillName is required" |
| Missing description | Phase 1 | Error: "description is required" |
| Missing location | Phase 1 | Error: "location is required (project or user)" |
| Invalid location | Phase 2 | Error: "location must be 'project' or 'user'" |
| Template not found | Phase 3 | Error: "Command template not found" |
| File exists | Phase 5 | Warning: "Command file already exists, will overwrite" |
| Write failure | Phase 5 | Error: "Failed to write command file" |

---

## Related Skills

- **skill-generator**: Create complete skills with phases, templates, and specs
- **flow-coordinator**: Orchestrate multi-step command workflows
@@ -1,174 +0,0 @@

# Phase 1: Parameter Validation

Validate all required parameters for command generation.

## Objective

Ensure all required parameters are provided before proceeding with command generation:

- **skillName**: Command identifier (required)
- **description**: Command description (required)
- **location**: Target scope - "project" or "user" (required)
- **group**: Optional grouping subdirectory
- **argumentHint**: Optional argument hint string

## Input

Parameters received from skill invocation:

- `skillName`: string (required)
- `description`: string (required)
- `location`: "project" | "user" (required)
- `group`: string (optional)
- `argumentHint`: string (optional)

## Validation Rules

### Required Parameters

```javascript
const requiredParams = {
  skillName: {
    type: 'string',
    minLength: 1,
    pattern: /^[a-z][a-z0-9-]*$/, // lowercase, alphanumeric, hyphens
    error: 'skillName must be lowercase alphanumeric with hyphens, starting with a letter'
  },
  description: {
    type: 'string',
    minLength: 10,
    error: 'description must be at least 10 characters'
  },
  location: {
    type: 'string',
    enum: ['project', 'user'],
    error: 'location must be "project" or "user"'
  }
};
```

### Optional Parameters

```javascript
const optionalParams = {
  group: {
    type: 'string',
    pattern: /^[a-z][a-z0-9-]*$/,
    default: null,
    error: 'group must be lowercase alphanumeric with hyphens'
  },
  argumentHint: {
    type: 'string',
    default: '',
    error: 'argumentHint must be a string'
  }
};
```

## Execution Steps

### Step 1: Extract Parameters

```javascript
// Extract from skill args
const params = {
  skillName: args.skillName,
  description: args.description,
  location: args.location,
  group: args.group || null,
  argumentHint: args.argumentHint || ''
};
```

### Step 2: Validate Required Parameters

```javascript
function validateRequired(params, rules) {
  const errors = [];

  for (const [key, rule] of Object.entries(rules)) {
    const value = params[key];

    // Check existence
    if (value === undefined || value === null || value === '') {
      errors.push(`${key} is required`);
      continue;
    }

    // Check type
    if (typeof value !== rule.type) {
      errors.push(`${key} must be a ${rule.type}`);
      continue;
    }

    // Check minLength
    if (rule.minLength && value.length < rule.minLength) {
      errors.push(`${key} must be at least ${rule.minLength} characters`);
    }

    // Check pattern
    if (rule.pattern && !rule.pattern.test(value)) {
      errors.push(rule.error);
    }

    // Check enum
    if (rule.enum && !rule.enum.includes(value)) {
      errors.push(`${key} must be one of: ${rule.enum.join(', ')}`);
    }
  }

  return errors;
}

const requiredErrors = validateRequired(params, requiredParams);
if (requiredErrors.length > 0) {
  throw new Error(`Validation failed:\n${requiredErrors.join('\n')}`);
}
```

### Step 3: Validate Optional Parameters

```javascript
function validateOptional(params, rules) {
  const warnings = [];

  for (const [key, rule] of Object.entries(rules)) {
    const value = params[key];

    if (value !== null && value !== undefined && value !== '') {
      if (rule.pattern && !rule.pattern.test(value)) {
        warnings.push(`${key}: ${rule.error}`);
      }
    }
  }

  return warnings;
}

const optionalWarnings = validateOptional(params, optionalParams);
// Log warnings but continue
```

### Step 4: Normalize Parameters

```javascript
const validatedParams = {
  skillName: params.skillName.trim().toLowerCase(),
  description: params.description.trim(),
  location: params.location.trim().toLowerCase(),
  group: params.group ? params.group.trim().toLowerCase() : null,
  argumentHint: params.argumentHint ? params.argumentHint.trim() : ''
};
```
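The normalization step in isolation can be sketched as follows; the sample input values are hypothetical. Note that since validation (Step 2) runs before normalization, untrimmed or mixed-case input would fail the pattern check first, so this sketch only illustrates what Step 4 itself does.

```javascript
// Minimal sketch of Step 4; sample args are hypothetical.
const params = {
  skillName: '  Global-Status ',
  description: '  Show global Claude Code status  ',
  location: 'Project',
  group: null,
  argumentHint: ''
};

const validatedParams = {
  skillName: params.skillName.trim().toLowerCase(),
  description: params.description.trim(),
  location: params.location.trim().toLowerCase(),
  group: params.group ? params.group.trim().toLowerCase() : null,
  argumentHint: params.argumentHint ? params.argumentHint.trim() : ''
};

console.log(validatedParams.skillName); // "global-status"
console.log(validatedParams.location);  // "project"
```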
## Output

```javascript
{
  status: 'validated',
  params: validatedParams,
  warnings: optionalWarnings
}
```

## Next Phase

Proceed to [Phase 2: Target Path Resolution](02-target-path-resolution.md) with `validatedParams`.
@@ -1,171 +0,0 @@

# Phase 2: Target Path Resolution

Resolve the target commands directory based on location parameter.

## Objective

Determine the correct target path for the command file based on:

- **location**: "project" or "user" scope
- **group**: Optional subdirectory for command organization
- **skillName**: Command filename (with .md extension)

## Input

From Phase 1 validation:

```javascript
{
  skillName: string,            // e.g., "create"
  description: string,
  location: "project" | "user",
  group: string | null,         // e.g., "issue"
  argumentHint: string
}
```

## Path Resolution Rules

### Location Mapping

```javascript
const locationMap = {
  project: '.claude/commands',
  user: '~/.claude/commands'  // Expands to user home directory
};
```

### Path Construction

```javascript
function resolveTargetPath(params) {
  const baseDir = locationMap[params.location];

  if (!baseDir) {
    throw new Error(`Invalid location: ${params.location}. Must be "project" or "user".`);
  }

  // Expand ~ to user home if present
  const expandedBase = baseDir.startsWith('~')
    ? path.join(os.homedir(), baseDir.slice(1))
    : baseDir;

  // Build full path
  let targetPath;
  if (params.group) {
    // Grouped command: .claude/commands/{group}/{skillName}.md
    targetPath = path.join(expandedBase, params.group, `${params.skillName}.md`);
  } else {
    // Top-level command: .claude/commands/{skillName}.md
    targetPath = path.join(expandedBase, `${params.skillName}.md`);
  }

  return targetPath;
}
```

## Execution Steps

### Step 1: Get Base Directory

```javascript
const location = validatedParams.location;
const baseDir = locationMap[location];

if (!baseDir) {
  throw new Error(`Invalid location: ${location}. Must be "project" or "user".`);
}
```

### Step 2: Expand User Path (if applicable)

```javascript
const os = require('os');
const path = require('path');

let expandedBase = baseDir;
if (baseDir.startsWith('~')) {
  expandedBase = path.join(os.homedir(), baseDir.slice(1));
}
```

### Step 3: Construct Full Path

```javascript
let targetPath;
let targetDir;

if (validatedParams.group) {
  // Command with group subdirectory
  targetDir = path.join(expandedBase, validatedParams.group);
  targetPath = path.join(targetDir, `${validatedParams.skillName}.md`);
} else {
  // Top-level command
  targetDir = expandedBase;
  targetPath = path.join(targetDir, `${validatedParams.skillName}.md`);
}
```

### Step 4: Ensure Target Directory Exists

```javascript
// Check and create directory if needed
Bash(`mkdir -p "${targetDir}"`);
```

### Step 5: Check File Existence

```javascript
const fileExists = Bash(`test -f "${targetPath}" && echo "EXISTS" || echo "NOT_FOUND"`);

if (fileExists.includes('EXISTS')) {
  console.warn(`Warning: Command file already exists at ${targetPath}. Will overwrite.`);
}
```

## Output

```javascript
{
  status: 'resolved',
  targetPath: targetPath,   // Full path to command file
  targetDir: targetDir,     // Directory containing command
  fileName: `${skillName}.md`,
  fileExists: fileExists.includes('EXISTS'),
  params: validatedParams   // Pass through to next phase
}
```

## Path Examples

### Project Scope (No Group)

```
location: "project"
skillName: "deploy"
-> .claude/commands/deploy.md
```

### Project Scope (With Group)

```
location: "project"
skillName: "create"
group: "issue"
-> .claude/commands/issue/create.md
```

### User Scope (No Group)

```
location: "user"
skillName: "global-status"
-> ~/.claude/commands/global-status.md
```

### User Scope (With Group)

```
location: "user"
skillName: "sync"
group: "session"
-> ~/.claude/commands/session/sync.md
```
## Next Phase

Proceed to [Phase 3: Template Loading](03-template-loading.md) with `targetPath` and `params`.
@@ -1,123 +0,0 @@

# Phase 3: Template Loading

Load the command template file for content generation.

## Objective

Load the command template from the skill's templates directory. The template provides:

- YAML frontmatter structure
- Placeholder variables for substitution
- Standard command file sections

## Input

From Phase 2:

```javascript
{
  targetPath: string,
  targetDir: string,
  fileName: string,
  fileExists: boolean,
  params: {
    skillName: string,
    description: string,
    location: string,
    group: string | null,
    argumentHint: string
  }
}
```

## Template Location

```
.claude/skills/command-generator/templates/command-md.md
```

## Execution Steps

### Step 1: Locate Template File

```javascript
// Template is located in the skill's templates directory
const skillDir = '.claude/skills/command-generator';
const templatePath = `${skillDir}/templates/command-md.md`;
```

### Step 2: Read Template Content

```javascript
const templateContent = Read(templatePath);

if (!templateContent) {
  throw new Error(`Command template not found at ${templatePath}`);
}
```

### Step 3: Validate Template Structure

```javascript
// Verify template contains expected placeholders
const requiredPlaceholders = ['{{name}}', '{{description}}'];
const optionalPlaceholders = ['{{group}}', '{{argumentHint}}'];

for (const placeholder of requiredPlaceholders) {
  if (!templateContent.includes(placeholder)) {
    throw new Error(`Template missing required placeholder: ${placeholder}`);
  }
}
```

### Step 4: Store Template for Next Phase

```javascript
const template = {
  content: templateContent,
  requiredPlaceholders: requiredPlaceholders,
  optionalPlaceholders: optionalPlaceholders
};
```

## Template Format Reference

The template should follow this structure:

```markdown
---
name: {{name}}
description: {{description}}
{{#if group}}group: {{group}}{{/if}}
{{#if argumentHint}}argument-hint: {{argumentHint}}{{/if}}
---

# {{name}} Command

[Template content with placeholders]
```

## Output

```javascript
{
  status: 'loaded',
  template: {
    content: templateContent,
    requiredPlaceholders: requiredPlaceholders,
    optionalPlaceholders: optionalPlaceholders
  },
  targetPath: targetPath,
  params: params
}
```

## Error Handling

| Error | Action |
|-------|--------|
| Template file not found | Throw error with path |
| Missing required placeholder | Throw error with missing placeholder name |
| Empty template | Throw error |

## Next Phase

Proceed to [Phase 4: Content Formatting](04-content-formatting.md) with `template`, `targetPath`, and `params`.
@@ -1,184 +0,0 @@

# Phase 4: Content Formatting

Format template content by substituting placeholders with parameter values.

## Objective

Replace all placeholder variables in the template with validated parameter values:

- `{{name}}` -> skillName
- `{{description}}` -> description
- `{{group}}` -> group (if provided)
- `{{argumentHint}}` -> argumentHint (if provided)

## Input

From Phase 3:

```javascript
{
  template: {
    content: string,
    requiredPlaceholders: string[],
    optionalPlaceholders: string[]
  },
  targetPath: string,
  params: {
    skillName: string,
    description: string,
    location: string,
    group: string | null,
    argumentHint: string
  }
}
```

## Placeholder Mapping

```javascript
const placeholderMap = {
  '{{name}}': params.skillName,
  '{{description}}': params.description,
  '{{group}}': params.group || '',
  '{{argumentHint}}': params.argumentHint || ''
};
```

## Execution Steps

### Step 1: Initialize Content

```javascript
let formattedContent = template.content;
```

### Step 2: Substitute Required Placeholders

```javascript
// These must always be replaced
formattedContent = formattedContent.replace(/\{\{name\}\}/g, params.skillName);
formattedContent = formattedContent.replace(/\{\{description\}\}/g, params.description);
```

### Step 3: Handle Optional Placeholders

```javascript
// Group placeholder
if (params.group) {
  formattedContent = formattedContent.replace(/\{\{group\}\}/g, params.group);
} else {
  // Remove group line if not provided
  formattedContent = formattedContent.replace(/^group: \{\{group\}\}\n?/gm, '');
  formattedContent = formattedContent.replace(/\{\{group\}\}/g, '');
}

// Argument hint placeholder
if (params.argumentHint) {
  formattedContent = formattedContent.replace(/\{\{argumentHint\}\}/g, params.argumentHint);
} else {
  // Remove argument-hint line if not provided
  formattedContent = formattedContent.replace(/^argument-hint: \{\{argumentHint\}\}\n?/gm, '');
  formattedContent = formattedContent.replace(/\{\{argumentHint\}\}/g, '');
}
```

### Step 4: Handle Conditional Sections

```javascript
// Remove empty frontmatter lines (caused by missing optional fields)
formattedContent = formattedContent.replace(/\n{3,}/g, '\n\n');

// Handle {{#if group}} style conditionals
if (formattedContent.includes('{{#if')) {
  // Process group conditional
  if (params.group) {
    formattedContent = formattedContent.replace(/\{\{#if group\}\}([\s\S]*?)\{\{\/if\}\}/g, '$1');
  } else {
    formattedContent = formattedContent.replace(/\{\{#if group\}\}[\s\S]*?\{\{\/if\}\}/g, '');
  }

  // Process argumentHint conditional
  if (params.argumentHint) {
    formattedContent = formattedContent.replace(/\{\{#if argumentHint\}\}([\s\S]*?)\{\{\/if\}\}/g, '$1');
  } else {
    formattedContent = formattedContent.replace(/\{\{#if argumentHint\}\}[\s\S]*?\{\{\/if\}\}/g, '');
  }
}
```

### Step 5: Validate Final Content

```javascript
// Ensure no unresolved placeholders remain
const unresolvedPlaceholders = formattedContent.match(/\{\{[^}]+\}\}/g);
if (unresolvedPlaceholders) {
  console.warn(`Warning: Unresolved placeholders found: ${unresolvedPlaceholders.join(', ')}`);
}

// Ensure frontmatter is valid
const frontmatterMatch = formattedContent.match(/^---\n([\s\S]*?)\n---/);
if (!frontmatterMatch) {
  throw new Error('Generated content has invalid frontmatter structure');
}
```

### Step 6: Generate Summary

```javascript
const summary = {
  name: params.skillName,
  description: params.description.substring(0, 50) + (params.description.length > 50 ? '...' : ''),
  location: params.location,
  group: params.group,
  hasArgumentHint: !!params.argumentHint
};
```

## Output

```javascript
{
  status: 'formatted',
  content: formattedContent,
  targetPath: targetPath,
  summary: summary
}
```

## Content Example

### Input Template

```markdown
---
name: {{name}}
description: {{description}}
{{#if group}}group: {{group}}{{/if}}
{{#if argumentHint}}argument-hint: {{argumentHint}}{{/if}}
---

# {{name}} Command
```

### Output (with all fields)

```markdown
---
name: create
description: Create structured issue from GitHub URL or text description
group: issue
argument-hint: [-y|--yes] <github-url | text-description> [--priority 1-5]
---

# create Command
```

### Output (minimal fields)

```markdown
---
name: deploy
description: Deploy application to production environment
---

# deploy Command
```
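The substitution steps can be chained on the example template to confirm the minimal-fields case. This is a sketch of Steps 2 and 4 only; exact blank-line handling around the removed conditionals may differ slightly from the rendered example above.

```javascript
// Sketch: required substitution + conditional removal on the example template.
const template = [
  '---',
  'name: {{name}}',
  'description: {{description}}',
  '{{#if group}}group: {{group}}{{/if}}',
  '{{#if argumentHint}}argument-hint: {{argumentHint}}{{/if}}',
  '---',
  '',
  '# {{name}} Command'
].join('\n');

const params = {
  skillName: 'deploy',
  description: 'Deploy application to production environment',
  group: null,
  argumentHint: ''
};

// Step 2: required placeholders.
let out = template
  .replace(/\{\{name\}\}/g, params.skillName)
  .replace(/\{\{description\}\}/g, params.description);

// Step 4: conditionals - keep the body if the value is present, drop the block otherwise.
out = params.group
  ? out.replace(/\{\{#if group\}\}([\s\S]*?)\{\{\/if\}\}/g, '$1')
  : out.replace(/\{\{#if group\}\}[\s\S]*?\{\{\/if\}\}/g, '');
out = params.argumentHint
  ? out.replace(/\{\{#if argumentHint\}\}([\s\S]*?)\{\{\/if\}\}/g, '$1')
  : out.replace(/\{\{#if argumentHint\}\}[\s\S]*?\{\{\/if\}\}/g, '');

// Collapse runs of blank lines left behind by the removed conditionals.
out = out.replace(/\n{3,}/g, '\n\n');

// Step 5 check: no unresolved placeholders, and no group line, remain.
console.log(/\{\{[^}]+\}\}/.test(out)); // false
console.log(out.includes('group:'));    // false
```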
## Next Phase

Proceed to [Phase 5: File Generation](05-file-generation.md) with `content` and `targetPath`.
@@ -1,185 +0,0 @@

# Phase 5: File Generation

Write the formatted content to the target command file.

## Objective

Generate the final command file by:

1. Checking for existing file (warn if present)
2. Writing formatted content to target path
3. Confirming successful generation

## Input

From Phase 4:

```javascript
{
  status: 'formatted',
  content: string,
  targetPath: string,
  summary: {
    name: string,
    description: string,
    location: string,
    group: string | null,
    hasArgumentHint: boolean
  }
}
```

## Execution Steps

### Step 1: Pre-Write Check

```javascript
// Check if file already exists
const fileExists = Bash(`test -f "${targetPath}" && echo "EXISTS" || echo "NOT_FOUND"`);

if (fileExists.includes('EXISTS')) {
  console.warn(`
WARNING: Command file already exists at: ${targetPath}
The file will be overwritten with new content.
`);
}
```

### Step 2: Ensure Directory Exists

```javascript
// Get directory from target path
const targetDir = path.dirname(targetPath);

// Create directory if it doesn't exist
Bash(`mkdir -p "${targetDir}"`);
```

### Step 3: Write File

```javascript
// Write the formatted content
Write(targetPath, content);
```

### Step 4: Verify Write

```javascript
// Confirm file was created
const verifyExists = Bash(`test -f "${targetPath}" && echo "SUCCESS" || echo "FAILED"`);

if (!verifyExists.includes('SUCCESS')) {
  throw new Error(`Failed to create command file at ${targetPath}`);
}

// Verify content was written
const writtenContent = Read(targetPath);
if (!writtenContent || writtenContent.length === 0) {
  throw new Error(`Command file created but appears to be empty`);
}
```
### Step 5: Generate Success Report

```javascript
const report = {
  status: 'completed',
  file: {
    path: targetPath,
    name: summary.name,
    location: summary.location,
    group: summary.group,
    size: writtenContent.length,
    created: new Date().toISOString()
  },
  command: {
    name: summary.name,
    description: summary.description,
    hasArgumentHint: summary.hasArgumentHint
  },
  nextSteps: [
    `Edit ${targetPath} to add implementation details`,
    'Add usage examples and execution flow',
    'Test the command with Claude Code'
  ]
};
```

## Output

### Success Output

```javascript
{
  status: 'completed',
  file: {
    path: '.claude/commands/issue/create.md',
    name: 'create',
    location: 'project',
    group: 'issue',
    size: 1234,
    created: '2026-02-27T12:00:00.000Z'
  },
  command: {
    name: 'create',
    description: 'Create structured issue from GitHub URL...',
    hasArgumentHint: true
  },
  nextSteps: [
    'Edit .claude/commands/issue/create.md to add implementation details',
    'Add usage examples and execution flow',
    'Test the command with Claude Code'
  ]
}
```

### Console Output

```
Command generated successfully!

File: .claude/commands/issue/create.md
Name: create
Description: Create structured issue from GitHub URL...
Location: project
Group: issue

Next Steps:
1. Edit .claude/commands/issue/create.md to add implementation details
2. Add usage examples and execution flow
3. Test the command with Claude Code
```

## Error Handling

| Error | Action |
|-------|--------|
| Directory creation failed | Throw error with directory path |
| File write failed | Throw error with target path |
| Empty file detected | Throw error and attempt cleanup |
| Permission denied | Throw error with permission hint |

## Cleanup on Failure

```javascript
// If any step fails, attempt to clean up partial artifacts
function cleanup(targetPath) {
  try {
    Bash(`rm -f "${targetPath}"`);
  } catch (e) {
    // Ignore cleanup errors
  }
}
```

## Completion

The command file has been successfully generated. The skill execution is complete.

### Usage Example

```bash
# Use the generated command
/issue:create https://github.com/owner/repo/issues/123

# Or with a text description
/issue:create "Login fails with special chars"
```
@@ -1,160 +0,0 @@

# Command Design Specification

Guidelines and best practices for designing Claude Code command files.

## Command File Structure

### YAML Frontmatter

Every command file must start with YAML frontmatter containing:

```yaml
---
name: command-name           # Required: Command identifier (lowercase, hyphens)
description: Description     # Required: Brief description of command purpose
argument-hint: "[args]"      # Optional: Argument format hint
allowed-tools: Tool1, Tool2  # Optional: Restricted tool set
examples:                    # Optional: Usage examples
  - /command:example1
  - /command:example2 --flag
---
```

### Frontmatter Fields

| Field | Required | Description |
|-------|----------|-------------|
| `name` | Yes | Command identifier, lowercase with hyphens |
| `description` | Yes | Brief description, appears in command listings |
| `argument-hint` | No | Usage hint for arguments (shown in help) |
| `allowed-tools` | No | Restrict available tools for this command |
| `examples` | No | Array of usage examples |

## Naming Conventions

### Command Names

- Use lowercase letters only
- Separate words with hyphens (`create-issue`, not `createIssue`)
- Keep names short but descriptive (2-3 words max)
- Use verbs for actions (`deploy`, `create`, `analyze`)

### Group Names

- Groups organize related commands
- Use singular nouns (`issue`, `session`, `workflow`)
- Common groups: `issue`, `workflow`, `session`, `memory`, `cli`

### Path Examples

```
.claude/commands/deploy.md         # Top-level command
.claude/commands/issue/create.md   # Grouped command
.claude/commands/workflow/init.md  # Grouped command
```
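The naming rules above can be expressed as a small check. This sketch reuses the `skillName` regex from Phase 1; the word-count limit is one interpretation of the "2-3 words max" guideline, and the sample names are hypothetical.

```javascript
// Sketch of the command-name conventions as a predicate.
const NAME_RE = /^[a-z][a-z0-9-]*$/; // lowercase, alphanumeric, hyphens

function isValidCommandName(name) {
  // At most three hyphen-separated words, per the guideline above.
  return NAME_RE.test(name) && name.split('-').length <= 3;
}

console.log(isValidCommandName('create-issue')); // true
console.log(isValidCommandName('createIssue'));  // false (camelCase)
console.log(isValidCommandName('deploy'));       // true
```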
|
|
||||||
## Content Sections
|
|
||||||
|
|
||||||
### Required Sections
|
|
||||||
|
|
||||||
1. **Overview**: Brief description of command purpose
|
|
||||||
2. **Usage**: Command syntax and examples
|
|
||||||
3. **Execution Flow**: High-level process diagram
|
|
||||||
|
|
||||||
### Recommended Sections
|
|
||||||
|
|
||||||
4. **Implementation**: Code examples for each phase
|
|
||||||
5. **Error Handling**: Error cases and recovery
|
|
||||||
6. **Related Commands**: Links to related functionality
|
|
||||||
|
|
||||||
## Best Practices

### 1. Clear Purpose

Each command should do one thing well:

```
Good: /issue:create - Create a new issue
Bad:  /issue:manage - Create, update, delete issues (too broad)
```

### 2. Consistent Structure

Follow the same pattern across all commands in a group:

```markdown
# All issue commands should have:
- Overview
- Usage with examples
- Phase-based implementation
- Error handling table
```

### 3. Progressive Detail

Start simple, add detail in phases:

```
Phase 1: Quick overview
Phase 2: Implementation details
Phase 3: Edge cases and errors
```

### 4. Reusable Patterns

Use consistent patterns for common operations:

```javascript
// Input parsing pattern
const args = parseArguments($ARGUMENTS);
const flags = parseFlags($ARGUMENTS);

// Validation pattern
if (!args.required) {
  throw new Error('Required argument missing');
}
```
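The pattern above assumes `parseArguments` and `parseFlags` helpers that the document does not define. A minimal sketch of what they might look like, assuming `$ARGUMENTS` arrives as a whitespace-separated string (these names and semantics are illustrative, not the framework's actual implementation):

```javascript
// Hypothetical flag parser: "--key value" pairs and bare "--key" booleans.
// Naive limitation: a boolean flag followed by a positional token would
// swallow that token as its value, so boolean flags should come last here.
function parseFlags(raw) {
  const flags = {};
  const tokens = raw.trim().split(/\s+/).filter(Boolean);
  for (let i = 0; i < tokens.length; i++) {
    if (tokens[i].startsWith('--')) {
      const key = tokens[i].slice(2);
      const next = tokens[i + 1];
      if (next && !next.startsWith('--')) { flags[key] = next; i++; }
      else { flags[key] = true; }
    }
  }
  return flags;
}

// Hypothetical positional-argument parser: everything that is neither a
// flag nor a flag's value.
function parseArguments(raw) {
  const tokens = raw.trim().split(/\s+/).filter(Boolean);
  const positional = [];
  for (let i = 0; i < tokens.length; i++) {
    if (tokens[i].startsWith('--')) {
      const next = tokens[i + 1];
      if (next && !next.startsWith('--')) i++; // skip the flag's value
    } else {
      positional.push(tokens[i]);
    }
  }
  return positional;
}
```

For example, `parseFlags('create input.md --priority high --force')` yields `{ priority: 'high', force: true }`, while `parseArguments` on the same input yields `['create', 'input.md']`.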
## Scope Guidelines

### Project Commands (`.claude/commands/`)

- Project-specific workflows
- Team conventions
- Integration with project tools

### User Commands (`~/.claude/commands/`)

- Personal productivity tools
- Cross-project utilities
- Global configuration
## Error Messages

### Good Error Messages

```
Error: GitHub issue URL required
Usage: /issue:create <github-url>
Example: /issue:create https://github.com/owner/repo/issues/123
```

### Bad Error Messages

```
Error: Invalid input
```
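The "good" shape above (error, usage, example) can be enforced with a tiny helper so no command emits a bare "Invalid input". A sketch — `usageError` is an illustrative name, not part of the framework:

```javascript
// Illustrative helper: every user-facing error carries usage and an example.
function usageError(message, usage, example) {
  return new Error(`${message}\nUsage: ${usage}\nExample: ${example}`);
}

const err = usageError(
  'GitHub issue URL required',
  '/issue:create <github-url>',
  '/issue:create https://github.com/owner/repo/issues/123'
);
```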
## Testing Commands

After creating a command, test:

1. **Basic invocation**: Does it run without arguments?
2. **Argument parsing**: Does it handle valid arguments?
3. **Error cases**: Does it show helpful errors for invalid input?
4. **Help text**: Is the usage clear?

## Related Documentation

- [SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) - Full skill design specification
- [../skill-generator/SKILL.md](../skill-generator/SKILL.md) - Meta-skill for creating skills
@@ -1,75 +0,0 @@
---
name: {{name}}
description: {{description}}
{{#if argumentHint}}argument-hint: {{argumentHint}}
{{/if}}---

# {{name}} Command

## Overview

[Describe the command purpose and what it does]

## Usage

```bash
/{{#if group}}{{group}}:{{/if}}{{name}} [arguments]
```

**Examples**:
```bash
# Example 1: Basic usage
/{{#if group}}{{group}}:{{/if}}{{name}}

# Example 2: With arguments
/{{#if group}}{{group}}:{{/if}}{{name}} --option value
```

## Execution Flow

```
Phase 1: Input Parsing
- Parse arguments and flags
- Validate input parameters

Phase 2: Core Processing
- Execute main logic
- Handle edge cases

Phase 3: Output Generation
- Format results
- Display to user
```

## Implementation

### Phase 1: Input Parsing

```javascript
// Parse command arguments
const args = parseArguments($ARGUMENTS);
```

### Phase 2: Core Processing

```javascript
// TODO: Implement core logic
```

### Phase 3: Output Generation

```javascript
// TODO: Format and display output
```

## Error Handling

| Error | Action |
|-------|--------|
| Invalid input | Show usage and error message |
| Processing failure | Log error and suggest recovery |

## Related Commands

- [Related command 1]
- [Related command 2]
@@ -736,8 +736,6 @@ TodoWrite({

 ## Post-Completion Expansion

-**Auto-sync**: run `/workflow:session:sync -y "{summary}"` to update specs/*.md + project-tech.
-
 After completion, ask the user whether to expand into issues (test/enhance/refactor/doc); for each selected item, call `/issue:new "{summary} - {dimension}"`.

 ## Best Practices
@@ -403,7 +403,7 @@ Task(
 3. Validate file access: bash(ls -la ${targetFiles.join(' ')})
 4. Execute: cat ~/.ccw/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)
 5. Read: .workflow/project-tech.json (technology stack and architecture context)
-6. Read: .workflow/specs/*.md (user-defined constraints and conventions to validate against)
+6. Read: .workflow/project-guidelines.json (user-defined constraints and conventions to validate against)

 ## Review Context
 - Review Type: module (independent)
@@ -507,7 +507,7 @@ Task(
 4. Read test files: bash(find ${projectDir}/tests -name "*${basename(file, '.ts')}*" -type f)
 5. Execute: cat ~/.ccw/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
 6. Read: .workflow/project-tech.json (technology stack and architecture context)
-7. Read: .workflow/specs/*.md (user-defined constraints for remediation compliance)
+7. Read: .workflow/project-guidelines.json (user-defined constraints for remediation compliance)

 ## CLI Configuration
 - Tool Priority: gemini → qwen → codex
@@ -414,7 +414,7 @@ Task(
 4. Read review state: ${reviewStateJsonPath}
 5. Execute: cat ~/.ccw/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)
 6. Read: .workflow/project-tech.json (technology stack and architecture context)
-7. Read: .workflow/specs/*.md (user-defined constraints and conventions to validate against)
+7. Read: .workflow/project-guidelines.json (user-defined constraints and conventions to validate against)

 ## Session Context
 - Session ID: ${sessionId}
@@ -518,7 +518,7 @@ Task(
 4. Read test files: bash(find ${workflowDir}/tests -name "*${basename(file, '.ts')}*" -type f)
 5. Execute: cat ~/.ccw/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
 6. Read: .workflow/project-tech.json (technology stack and architecture context)
-7. Read: .workflow/specs/*.md (user-defined constraints for remediation compliance)
+7. Read: .workflow/project-guidelines.json (user-defined constraints for remediation compliance)

 ## CLI Configuration
 - Tool Priority: gemini → qwen → codex
@@ -15,8 +15,6 @@ Phase 0: Specification Study (Read specs/ + templates/ - mandatory prerequisit

 Phase 1: Discovery -> spec-config.json + discovery-context.json

-Phase 1.5: Req Expansion -> refined-requirements.json (interactive discussion + CLI gap analysis)
-  | (-y auto mode: auto-expansion, skip interaction)
 Phase 2: Product Brief -> product-brief.md (multi-CLI parallel analysis)

 Phase 3: Requirements (PRD) -> requirements/ (_index.md + REQ-*.md + NFR-*.md)
@@ -81,16 +79,6 @@ Phase 1: Discovery & Seed Analysis
 |- User confirmation (interactive, -y skips)
 |- Output: spec-config.json, discovery-context.json (optional)

-Phase 1.5: Requirement Expansion & Clarification
-|- Ref: phases/01-5-requirement-clarification.md
-|- CLI gap analysis: completeness scoring, missing dimensions detection
-|- Multi-round interactive discussion (max 5 rounds)
-| |- Round 1: present gap analysis + expansion suggestions
-| |- Round N: follow-up refinement based on user responses
-|- User final confirmation of requirements
-|- Auto mode (-y): CLI auto-expansion without interaction
-|- Output: refined-requirements.json
-
 Phase 2: Product Brief
 |- Ref: phases/02-product-brief.md
 |- 3 parallel CLI analyses: Product (Gemini) + Technical (Codex) + User (Claude)
@@ -160,7 +148,6 @@ Bash(`mkdir -p "${workDir}"`);
 .workflow/.spec/SPEC-{slug}-{YYYY-MM-DD}/
 ├── spec-config.json              # Session configuration + phase state
 ├── discovery-context.json        # Codebase exploration results (optional)
-├── refined-requirements.json     # Phase 1.5: Confirmed requirements after discussion
 ├── product-brief.md              # Phase 2: Product brief
 ├── requirements/                 # Phase 3: Detailed PRD (directory)
 │   ├── _index.md                 # Summary, MoSCoW table, traceability, links
@@ -197,10 +184,8 @@ Bash(`mkdir -p "${workDir}"`);
     "dimensions": []
   },
   "has_codebase": false,
-  "refined_requirements_file": "refined-requirements.json",
   "phasesCompleted": [
     { "phase": 1, "name": "discovery", "output_file": "spec-config.json", "completed_at": "ISO8601" },
-    { "phase": 1.5, "name": "requirement-clarification", "output_file": "refined-requirements.json", "discussion_rounds": 2, "completed_at": "ISO8601" },
     { "phase": 3, "name": "requirements", "output_dir": "requirements/", "output_index": "requirements/_index.md", "file_count": 8, "completed_at": "ISO8601" }
   ]
 }
@@ -226,12 +211,6 @@ Bash(`mkdir -p "${workDir}"`);
 | [phases/01-discovery.md](phases/01-discovery.md) | Seed analysis and session setup | Phase start |
 | [specs/document-standards.md](specs/document-standards.md) | Frontmatter format for spec-config.json | Config generation |

-### Phase 1.5: Requirement Expansion & Clarification
-| Document | Purpose | When to Use |
-|----------|---------|-------------|
-| [phases/01-5-requirement-clarification.md](phases/01-5-requirement-clarification.md) | Interactive requirement discussion workflow | Phase start |
-| [specs/quality-gates.md](specs/quality-gates.md) | Quality criteria for refined requirements | Validation |
-
 ### Phase 2: Product Brief
 | Document | Purpose | When to Use |
 |----------|---------|-------------|
@@ -275,9 +254,6 @@ Bash(`mkdir -p "${workDir}"`);
 |-------|-------|-----------|--------|
 | Phase 1 | Empty input | Yes | Error and exit |
 | Phase 1 | CLI seed analysis fails | No | Use basic parsing fallback |
-| Phase 1.5 | Gap analysis CLI fails | No | Skip to user questions with basic prompts |
-| Phase 1.5 | User skips discussion | No | Proceed with seed_analysis as-is |
-| Phase 1.5 | Max rounds reached (5) | No | Force confirmation with current state |
 | Phase 2 | Single CLI perspective fails | No | Continue with available perspectives |
 | Phase 2 | All CLI calls fail | No | Generate basic brief from seed analysis |
 | Phase 3 | Gemini CLI fails | No | Use codex fallback |
@@ -1,404 +0,0 @@
# Phase 1.5: Requirement Expansion & Clarification

Before formal document generation begins, deeply probe, expand, and confirm the original requirements through multiple rounds of interactive discussion.

## Objective

- Identify ambiguities, omissions, and potential risks in the original requirements
- Use CLI-assisted analysis of requirement completeness to generate deep probing questions
- Support multi-round interactive discussion to progressively refine the requirements
- Produce a user-confirmed `refined-requirements.json` as high-quality input for later phases

## Input

- Dependency: `{workDir}/spec-config.json` (Phase 1 output)
- Optional: `{workDir}/discovery-context.json` (codebase context)
## Execution Steps

### Step 1: Load Phase 1 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const { seed_analysis, seed_input, focus_areas, has_codebase, depth } = specConfig;

let discoveryContext = null;
if (has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) { /* proceed without */ }
}
```
### Step 2: CLI Gap Analysis & Question Generation

Invoke the Gemini CLI to analyze the completeness of the original requirements, identify ambiguities, and generate probing questions.

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Deeply analyze the user's initial requirements; identify ambiguities, omissions, and areas that need clarification.
Success: generate 3-5 high-quality probing questions covering functional scope, boundary conditions, non-functional requirements, user scenarios, and similar dimensions.

ORIGINAL SEED INPUT:
${seed_input}

SEED ANALYSIS:
${JSON.stringify(seed_analysis, null, 2)}

FOCUS AREAS: ${focus_areas.join(', ')}
${discoveryContext ? `
CODEBASE CONTEXT:
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join(', ') || 'none'}
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
` : ''}

TASK:
1. Assess the completeness of the current requirement description (score 1-10, listing missing dimensions)
2. Identify 3-5 key ambiguous areas, each including:
   - A description of the ambiguity (why it is unclear)
   - 1-2 open-ended probing questions
   - 1-2 expansion suggestions (based on domain best practices)
3. Check the following dimensions for omissions:
   - Functional scope boundaries (what is in/out of scope?)
   - Core user scenarios and flows
   - Non-functional requirements (performance, security, usability, scalability)
   - Integration points and external dependencies
   - Data model and storage needs
   - Error handling and exception scenarios
4. Provide requirement expansion suggestions based on domain experience

MODE: analysis
EXPECTED: JSON output:
{
  \"completeness_score\": 7,
  \"missing_dimensions\": [\"Performance requirements\", \"Error handling\"],
  \"clarification_areas\": [
    {
      \"area\": \"Scope boundary\",
      \"rationale\": \"Input does not clarify...\",
      \"questions\": [\"Question 1?\", \"Question 2?\"],
      \"suggestions\": [\"Suggestion 1\", \"Suggestion 2\"]
    }
  ],
  \"expansion_recommendations\": [
    {
      \"category\": \"Non-functional\",
      \"recommendation\": \"Consider adding...\",
      \"priority\": \"high|medium|low\"
    }
  ]
}
CONSTRAINTS: Questions must be open-ended, suggestions must be concrete and actionable, and output must use the language of the user's input
" --tool gemini --mode analysis`,
  run_in_background: true
});
// Wait for CLI result before continuing
```

Parse the CLI output into structured data:
```javascript
const gapAnalysis = {
  completeness_score: 0,
  missing_dimensions: [],
  clarification_areas: [],
  expansion_recommendations: []
};
// Parse from CLI output
```
### Step 3: Interactive Discussion Loop

The core multi-round interaction loop. Each round: present analysis results → user responds → update requirement state → decide whether to continue.

```javascript
// Initialize requirement state
let requirementState = {
  problem_statement: seed_analysis.problem_statement,
  target_users: seed_analysis.target_users,
  domain: seed_analysis.domain,
  constraints: seed_analysis.constraints,
  confirmed_features: [],
  non_functional_requirements: [],
  boundary_conditions: [],
  integration_points: [],
  key_assumptions: [],
  discussion_rounds: 0
};

let discussionLog = [];
let userSatisfied = false;

// === Round 1: Present gap analysis results ===
// Display completeness_score, clarification_areas, expansion_recommendations
// Then ask user to respond

while (!userSatisfied && requirementState.discussion_rounds < 5) {
  requirementState.discussion_rounds++;

  if (requirementState.discussion_rounds === 1) {
    // --- First round: present initial gap analysis ---
    // Format questions and suggestions from gapAnalysis for display
    // Present as a structured summary to the user

    AskUserQuestion({
      questions: [
        {
          question: buildDiscussionPrompt(gapAnalysis, requirementState),
          header: "Req Expand",
          multiSelect: false,
          options: [
            { label: "I'll answer", description: "I have answers/feedback to provide (type in 'Other')" },
            { label: "Accept all suggestions", description: "Accept all expansion recommendations as-is" },
            { label: "Skip to generation", description: "Requirements are clear enough, proceed directly" }
          ]
        }
      ]
    });
  } else {
    // --- Subsequent rounds: refine based on user feedback ---
    // Call CLI with accumulated context for follow-up analysis
    Bash({
      command: `ccw cli -p "PURPOSE: Update the requirement understanding based on the user's latest response and identify remaining ambiguities.

CURRENT REQUIREMENT STATE:
${JSON.stringify(requirementState, null, 2)}

DISCUSSION HISTORY:
${JSON.stringify(discussionLog, null, 2)}

USER'S LATEST RESPONSE:
${lastUserResponse}

TASK:
1. Integrate the user's response into the requirement state
2. Identify 1-3 areas that still need clarification or could be expanded
3. Generate follow-up questions (if necessary)
4. If the requirements are sufficient, output a final requirement summary

MODE: analysis
EXPECTED: JSON output:
{
  \"updated_fields\": { /* fields to merge into requirementState */ },
  \"status\": \"need_more_discussion\" | \"ready_for_confirmation\",
  \"follow_up\": {
    \"remaining_areas\": [{\"area\": \"...\", \"questions\": [\"...\"]}],
    \"summary\": \"...\"
  }
}
CONSTRAINTS: Avoid repeating questions that have already been answered; focus on uncovered areas
" --tool gemini --mode analysis`,
      run_in_background: true
    });
    // Wait for CLI result, parse and continue

    // If status === "ready_for_confirmation", break to confirmation step
    // If status === "need_more_discussion", present follow-up questions

    AskUserQuestion({
      questions: [
        {
          question: buildFollowUpPrompt(followUpAnalysis, requirementState),
          header: "Follow-up",
          multiSelect: false,
          options: [
            { label: "I'll answer", description: "I have more feedback (type in 'Other')" },
            { label: "Looks good", description: "Requirements are sufficiently clear now" },
            { label: "Accept suggestions", description: "Accept remaining suggestions" }
          ]
        }
      ]
    });
  }

  // Process user response
  // - "Skip to generation" / "Looks good" → userSatisfied = true
  // - "Accept all suggestions" → merge suggestions into requirementState, userSatisfied = true
  // - "I'll answer" (with Other text) → record in discussionLog, continue loop
  // - User selects Other with custom text → parse and record

  discussionLog.push({
    round: requirementState.discussion_rounds,
    agent_prompt: currentPrompt,
    user_response: userResponse,
    timestamp: new Date().toISOString()
  });
}
```
#### Helper: Build Discussion Prompt

```javascript
function buildDiscussionPrompt(gapAnalysis, state) {
  let prompt = `## Requirement Analysis Results\n\n`;
  prompt += `**Completeness Score**: ${gapAnalysis.completeness_score}/10\n`;

  if (gapAnalysis.missing_dimensions.length > 0) {
    prompt += `**Missing Dimensions**: ${gapAnalysis.missing_dimensions.join(', ')}\n\n`;
  }

  prompt += `### Key Questions\n\n`;
  gapAnalysis.clarification_areas.forEach((area, i) => {
    prompt += `**${i+1}. ${area.area}**\n`;
    prompt += `   ${area.rationale}\n`;
    area.questions.forEach(q => { prompt += `   - ${q}\n`; });
    if (area.suggestions.length > 0) {
      prompt += `   Suggestions: ${area.suggestions.join('; ')}\n`;
    }
    prompt += `\n`;
  });

  if (gapAnalysis.expansion_recommendations.length > 0) {
    prompt += `### Expansion Recommendations\n\n`;
    gapAnalysis.expansion_recommendations.forEach(rec => {
      prompt += `- [${rec.priority}] **${rec.category}**: ${rec.recommendation}\n`;
    });
  }

  prompt += `\nPlease answer the questions above, or choose an option below.`;
  return prompt;
}
```
### Step 4: Auto Mode Handling

```javascript
if (autoMode) {
  // Skip interactive discussion
  // CLI generates default requirement expansion based on seed_analysis
  Bash({
    command: `ccw cli -p "PURPOSE: Automatically generate requirement expansion from the seed analysis, without user interaction.

SEED ANALYSIS:
${JSON.stringify(seed_analysis, null, 2)}

SEED INPUT: ${seed_input}
DEPTH: ${depth}
${discoveryContext ? `CODEBASE: ${JSON.stringify(discoveryContext.tech_stack || {})}` : ''}

TASK:
1. Automatically expand the functional requirement list based on domain best practices
2. Infer reasonable non-functional requirements
3. Identify obvious boundary conditions
4. List key assumptions

MODE: analysis
EXPECTED: JSON output matching refined-requirements.json schema
CONSTRAINTS: Infer conservatively; only add high-confidence expansions
" --tool gemini --mode analysis`,
    run_in_background: true
  });
  // Parse output directly into refined-requirements.json
}
```
### Step 5: Generate Requirement Confirmation Summary

Before writing the file, present the final requirement confirmation summary to the user (non-auto mode).

```javascript
if (!autoMode) {
  // Build confirmation summary from requirementState
  const summary = buildConfirmationSummary(requirementState);

  AskUserQuestion({
    questions: [
      {
        question: `## Requirement Confirmation\n\n${summary}\n\nConfirm and proceed to specification generation?`,
        header: "Confirm",
        multiSelect: false,
        options: [
          { label: "Confirm & proceed", description: "Requirements confirmed, start spec generation" },
          { label: "Need adjustments", description: "Go back and refine further" }
        ]
      }
    ]
  });

  // If "Need adjustments" → loop back to Step 3
  // If "Confirm & proceed" → continue to Step 6
}
```
### Step 6: Write refined-requirements.json

```javascript
const refinedRequirements = {
  session_id: specConfig.session_id,
  phase: "1.5",
  generated_at: new Date().toISOString(),
  source: autoMode ? "auto-expansion" : "interactive-discussion",
  discussion_rounds: requirementState.discussion_rounds,

  // Core requirement content
  clarified_problem_statement: requirementState.problem_statement,
  confirmed_target_users: requirementState.target_users.map(u =>
    typeof u === 'string' ? { name: u, needs: [], pain_points: [] } : u
  ),
  confirmed_domain: requirementState.domain,

  confirmed_features: requirementState.confirmed_features.map(f => ({
    name: f.name,
    description: f.description,
    acceptance_criteria: f.acceptance_criteria || [],
    edge_cases: f.edge_cases || [],
    priority: f.priority || "unset"
  })),

  non_functional_requirements: requirementState.non_functional_requirements.map(nfr => ({
    type: nfr.type, // Performance, Security, Usability, Scalability, etc.
    details: nfr.details,
    measurable_criteria: nfr.measurable_criteria || ""
  })),

  boundary_conditions: {
    in_scope: requirementState.boundary_conditions.filter(b => b.scope === 'in'),
    out_of_scope: requirementState.boundary_conditions.filter(b => b.scope === 'out'),
    constraints: requirementState.constraints
  },

  integration_points: requirementState.integration_points,
  key_assumptions: requirementState.key_assumptions,

  // Traceability
  discussion_log: autoMode ? [] : discussionLog
};

Write(`${workDir}/refined-requirements.json`, JSON.stringify(refinedRequirements, null, 2));
```
### Step 7: Update spec-config.json

```javascript
specConfig.refined_requirements_file = "refined-requirements.json";
specConfig.phasesCompleted.push({
  phase: 1.5,
  name: "requirement-clarification",
  output_file: "refined-requirements.json",
  discussion_rounds: requirementState.discussion_rounds,
  completed_at: new Date().toISOString()
});

Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
## Output

- **File**: `refined-requirements.json`
- **Format**: JSON
- **Updated**: `spec-config.json` (added `refined_requirements_file` field and phase 1.5 to `phasesCompleted`)

## Quality Checklist

- [ ] Problem statement refined (>= 30 characters, more specific than seed)
- [ ] At least 2 confirmed features with descriptions
- [ ] At least 1 non-functional requirement identified
- [ ] Boundary conditions defined (in-scope + out-of-scope)
- [ ] Key assumptions listed (>= 1)
- [ ] Discussion rounds recorded (>= 1 in interactive mode)
- [ ] User explicitly confirmed requirements (non-auto mode)
- [ ] `refined-requirements.json` written with valid JSON
- [ ] `spec-config.json` updated with phase 1.5 completion
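Several of the checklist items can be checked mechanically against the structure written in Step 6. A sketch of such a validator (the thresholds mirror the checklist; the function itself is illustrative, not part of the workflow):

```javascript
// Sketch: automated pass over the quality checklist for a
// refined-requirements.json object. Returns a list of failure messages.
function checkRefinedRequirements(req) {
  const failures = [];
  if (!req.clarified_problem_statement || req.clarified_problem_statement.length < 30)
    failures.push('problem statement too short (< 30 characters)');
  if (!Array.isArray(req.confirmed_features) || req.confirmed_features.length < 2)
    failures.push('fewer than 2 confirmed features');
  if (!Array.isArray(req.non_functional_requirements) || req.non_functional_requirements.length < 1)
    failures.push('no non-functional requirements');
  if (!Array.isArray(req.key_assumptions) || req.key_assumptions.length < 1)
    failures.push('no key assumptions listed');
  if (!req.discussion_rounds || req.discussion_rounds < 1)
    failures.push('no discussion rounds recorded');
  return failures;
}
```

Items that require human judgment (explicit user confirmation, "more specific than seed") stay manual.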
## Next Phase

Proceed to [Phase 2: Product Brief](02-product-brief.md). Phase 2 should load `refined-requirements.json` as primary input instead of relying solely on `spec-config.json.seed_analysis`.
@@ -13,7 +13,6 @@ Generate a product brief through multi-perspective CLI analysis, establishing "w
 ## Input

 - Dependency: `{workDir}/spec-config.json`
-- Primary: `{workDir}/refined-requirements.json` (Phase 1.5 output, preferred over raw seed_analysis)
 - Optional: `{workDir}/discovery-context.json`
 - Config: `{workDir}/spec-config.json`
 - Template: `templates/product-brief.md`
@@ -26,14 +25,6 @@ Generate a product brief through multi-perspective CLI analysis, establishing "w
 const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
 const { seed_analysis, seed_input, has_codebase, depth, focus_areas } = specConfig;

-// Load refined requirements (Phase 1.5 output) - preferred over raw seed_analysis
-let refinedReqs = null;
-try {
-  refinedReqs = JSON.parse(Read(`${workDir}/refined-requirements.json`));
-} catch (e) {
-  // No refined requirements, fall back to seed_analysis
-}
-
 let discoveryContext = null;
 if (has_codebase) {
   try {
@@ -44,25 +35,13 @@ if (has_codebase) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Build shared context string for CLI prompts
|
// Build shared context string for CLI prompts
|
||||||
// Prefer refined requirements over raw seed_analysis
|
|
||||||
const problem = refinedReqs?.clarified_problem_statement || seed_analysis.problem_statement;
|
|
||||||
const users = refinedReqs?.confirmed_target_users?.map(u => u.name || u).join(', ')
|
|
||||||
|| seed_analysis.target_users.join(', ');
|
|
||||||
const domain = refinedReqs?.confirmed_domain || seed_analysis.domain;
|
|
||||||
const constraints = refinedReqs?.boundary_conditions?.constraints?.join(', ')
|
|
||||||
|| seed_analysis.constraints.join(', ');
|
|
||||||
const features = refinedReqs?.confirmed_features?.map(f => f.name).join(', ') || '';
|
|
||||||
const nfrs = refinedReqs?.non_functional_requirements?.map(n => `${n.type}: ${n.details}`).join('; ') || '';
|
|
||||||
|
|
||||||
const sharedContext = `
|
const sharedContext = `
|
||||||
SEED: ${seed_input}
|
SEED: ${seed_input}
|
||||||
PROBLEM: ${problem}
|
PROBLEM: ${seed_analysis.problem_statement}
|
||||||
TARGET USERS: ${users}
|
TARGET USERS: ${seed_analysis.target_users.join(', ')}
|
||||||
DOMAIN: ${domain}
|
DOMAIN: ${seed_analysis.domain}
|
||||||
CONSTRAINTS: ${constraints}
|
CONSTRAINTS: ${seed_analysis.constraints.join(', ')}
|
||||||
FOCUS AREAS: ${focus_areas.join(', ')}
|
FOCUS AREAS: ${focus_areas.join(', ')}
|
||||||
${features ? `CONFIRMED FEATURES: ${features}` : ''}
|
|
||||||
${nfrs ? `NON-FUNCTIONAL REQUIREMENTS: ${nfrs}` : ''}
|
|
||||||
${discoveryContext ? `
|
${discoveryContext ? `
|
||||||
CODEBASE CONTEXT:
|
CODEBASE CONTEXT:
|
||||||
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join(', ') || 'none'}
|
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join(', ') || 'none'}
|
||||||
|
|||||||
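The removed fallback chain above degrades field by field from `refinedReqs` to `seed_analysis`. A minimal sketch of that behavior, with hypothetical data (not from the repository):

```javascript
// Sketch of the refinedReqs -> seed_analysis fallback chain (hypothetical data).
const seed_analysis = {
  problem_statement: "Users cannot export reports",
  target_users: ["analyst", "manager"],
};

function buildContext(refinedReqs) {
  // Optional chaining yields undefined when refinedReqs is null/missing,
  // so `||` falls back to the raw seed analysis.
  const problem = refinedReqs?.clarified_problem_statement || seed_analysis.problem_statement;
  const users = refinedReqs?.confirmed_target_users?.map(u => u.name || u).join(', ')
    || seed_analysis.target_users.join(', ');
  return { problem, users };
}

// No refined requirements: every field falls back to seed_analysis.
console.log(buildContext(null).users); // "analyst, manager"
// Refined requirements present: they take precedence.
console.log(buildContext({
  clarified_problem_statement: "Analysts cannot export scheduled PDF reports",
  confirmed_target_users: [{ name: "analyst" }],
}).users); // "analyst"
```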
@@ -81,7 +81,6 @@ Examples:

 |------|-------|-------------|
 | `spec-config.json` | 1 | Session configuration and state |
 | `discovery-context.json` | 1 | Codebase exploration results (optional) |
-| `refined-requirements.json` | 1.5 | Confirmed requirements after discussion |
 | `product-brief.md` | 2 | Product brief document |
 | `requirements.md` | 3 | PRD document |
 | `architecture.md` | 4 | Architecture decisions document |
@@ -169,7 +168,6 @@ Derived from [REQ-001](requirements.md#req-001).

     "dimensions": ["string array - 3-5 exploration dimensions"]
   },
   "has_codebase": "boolean",
-  "refined_requirements_file": "string (optional) - path to refined-requirements.json",
   "phasesCompleted": [
     {
       "phase": "number (1-6)",
@@ -183,60 +181,6 @@ Derived from [REQ-001](requirements.md#req-001).

 ---

-## refined-requirements.json Schema
-
-```json
-{
-  "session_id": "string (required) - matches spec-config.json",
-  "phase": "1.5",
-  "generated_at": "ISO8601 (required)",
-  "source": "interactive-discussion|auto-expansion (required)",
-  "discussion_rounds": "number (required) - 0 for auto mode",
-  "clarified_problem_statement": "string (required) - refined problem statement",
-  "confirmed_target_users": [
-    {
-      "name": "string",
-      "needs": ["string array"],
-      "pain_points": ["string array"]
-    }
-  ],
-  "confirmed_domain": "string",
-  "confirmed_features": [
-    {
-      "name": "string",
-      "description": "string",
-      "acceptance_criteria": ["string array"],
-      "edge_cases": ["string array"],
-      "priority": "must|should|could|unset"
-    }
-  ],
-  "non_functional_requirements": [
-    {
-      "type": "Performance|Security|Usability|Scalability|Reliability|...",
-      "details": "string",
-      "measurable_criteria": "string (optional)"
-    }
-  ],
-  "boundary_conditions": {
-    "in_scope": ["string array"],
-    "out_of_scope": ["string array"],
-    "constraints": ["string array"]
-  },
-  "integration_points": ["string array"],
-  "key_assumptions": ["string array"],
-  "discussion_log": [
-    {
-      "round": "number",
-      "agent_prompt": "string",
-      "user_response": "string",
-      "timestamp": "ISO8601"
-    }
-  ]
-}
-```
-
----
-
 ## Validation Checklist

 - [ ] Every document starts with valid YAML frontmatter
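The required fields in the schema above can be checked mechanically. An illustrative validator sketch — the function and threshold checks are hypothetical, not code from the repository; the field names come from the schema:

```javascript
// Illustrative validator for the refined-requirements.json schema above.
const REQUIRED_FIELDS = [
  "session_id", "phase", "generated_at", "source",
  "discussion_rounds", "clarified_problem_statement",
];

function validateRefinedRequirements(doc) {
  const errors = REQUIRED_FIELDS
    .filter(f => doc[f] === undefined)
    .map(f => `missing required field: ${f}`);
  if (doc.phase !== undefined && doc.phase !== "1.5") {
    errors.push(`unexpected phase: ${doc.phase}`);
  }
  // Mirrors the quality checklist: at least 2 confirmed features.
  if ((doc.confirmed_features || []).length < 2) {
    errors.push("quality check: expected >= 2 confirmed features");
  }
  return errors;
}

const doc = {
  session_id: "SPEC-demo",
  phase: "1.5",
  generated_at: "2024-01-01T00:00:00Z",
  source: "auto-expansion",
  discussion_rounds: 0,
  clarified_problem_statement: "Analysts cannot export scheduled reports",
  confirmed_features: [{ name: "export" }, { name: "schedule" }],
};
console.log(validateRefinedRequirements(doc)); // []
```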
@@ -88,18 +88,6 @@ Content provides sufficient detail for execution teams.

 | Dimensions generated | 3-5 exploration dimensions | Warning |
 | Constraints listed | >= 0 (can be empty with justification) | Info |

-### Phase 1.5: Requirement Expansion & Clarification
-
-| Check | Criteria | Severity |
-|-------|----------|----------|
-| Problem statement refined | More specific than seed, >= 30 characters | Error |
-| Confirmed features | >= 2 features with descriptions | Error |
-| Non-functional requirements | >= 1 identified (performance, security, etc.) | Warning |
-| Boundary conditions | In-scope and out-of-scope defined | Warning |
-| Key assumptions | >= 1 assumption listed | Warning |
-| User confirmation | Explicit user confirmation recorded (non-auto mode) | Info |
-| Discussion rounds | >= 1 round of interaction (non-auto mode) | Info |
-
 ### Phase 2: Product Brief

 | Check | Criteria | Severity |
@@ -6,148 +6,157 @@ allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), Task

 # Team Brainstorm

-Unified team skill: multi-angle brainstorming via Generator-Critic loops, shared memory, and dynamic pipeline selection. All team members invoke with `--role=xxx` to route to role-specific execution.
+Brainstorm team skill. Achieves multi-angle idea divergence, challenge validation, and convergent filtering through Generator-Critic loops, shared memory, and dynamic pipeline selection. All team members route to role-specific execution logic via `--role=xxx`.

-## Architecture
+## Architecture Overview

 ```
 ┌───────────────────────────────────────────────────┐
-│ Skill(skill="team-brainstorm")                    │
-│   args="<topic>" or args="--role=xxx"             │
+│ Skill(skill="team-brainstorm", args="--role=xxx") │
 └───────────────────┬───────────────────────────────┘
                     │ Role Router
-     ┌──── --role present? ────┐
-     │ NO                      │ YES
-     ↓                         ↓
- Orchestration Mode       Role Dispatch
- (auto → coordinator)     (route to role.md)
-                               │
-    ┌────┴────┬───────────┬───────────┬───────────┐
-    ↓         ↓           ↓           ↓           ↓
-┌──────────┐┌─────────┐┌──────────┐┌──────────┐┌─────────┐
-│coordinator││ ideator ││challenger││synthesizer││evaluator│
-│          ││ IDEA-*  ││CHALLENGE-*││ SYNTH-* ││ EVAL-*  │
-└──────────┘└─────────┘└──────────┘└──────────┘└─────────┘
+    ┌───────────┬───┼───────────┬───────────┐
+    ↓           ↓   ↓           ↓           ↓
+┌──────────┐┌───────┐┌──────────┐┌──────────┐┌─────────┐
+│coordinator││ideator││challenger││synthesizer││evaluator│
+│  roles/  ││roles/ ││  roles/  ││  roles/  ││ roles/  │
+└──────────┘└───────┘└──────────┘└──────────┘└─────────┘
 ```

 ## Role Router

 ### Input Parsing

-Parse `$ARGUMENTS` to extract `--role`. If absent → Orchestration Mode (auto route to coordinator).
-
-### Role Registry
-
-| Role | File | Task Prefix | Type | Compact |
-|------|------|-------------|------|---------|
-| coordinator | [roles/coordinator.md](roles/coordinator.md) | (none) | orchestrator | **⚠️ Must re-read after compaction** |
-| ideator | [roles/ideator.md](roles/ideator.md) | IDEA-* | pipeline | Must re-read after compaction |
-| challenger | [roles/challenger.md](roles/challenger.md) | CHALLENGE-* | pipeline | Must re-read after compaction |
-| synthesizer | [roles/synthesizer.md](roles/synthesizer.md) | SYNTH-* | pipeline | Must re-read after compaction |
-| evaluator | [roles/evaluator.md](roles/evaluator.md) | EVAL-* | pipeline | Must re-read after compaction |
-
-> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression leaves only a summary of the role instructions, you **must immediately `Read` the corresponding role.md to reload it before continuing**. Do not execute any Phase from a summary alone.
-
-### Dispatch
-
-1. Extract `--role` from arguments
-2. If no `--role` → route to coordinator (Orchestration Mode)
-3. Look up role in registry → Read the role file → Execute its phases
-
-### Orchestration Mode
-
-When invoked without `--role`, the coordinator auto-starts; the user only provides a topic description.
-
-**Invocation**: `Skill(skill="team-brainstorm", args="<topic-description>")`
-
-**Lifecycle**:
-```
-User provides topic description
-→ coordinator Phase 1-3: Topic clarification → TeamCreate → Create task chain
-→ coordinator Phase 4: spawn first batch workers (background) → STOP
-→ Worker executes → SendMessage callback → coordinator advances next step
-→ Loop until pipeline complete → Phase 5 report
-```
-
-**User Commands** (wake paused coordinator):
-
-| Command | Action |
-|---------|--------|
-| `check` / `status` | Output execution status graph, no advancement |
-| `resume` / `continue` | Check worker states, advance next step |
-
----
+Parse `$ARGUMENTS` to extract `--role`:
+
+```javascript
+const args = "$ARGUMENTS"
+const roleMatch = args.match(/--role[=\s]+(\w+)/)
+
+if (!roleMatch) {
+  throw new Error("Missing --role argument. Available roles: coordinator, ideator, challenger, synthesizer, evaluator")
+}
+
+const role = roleMatch[1]
+const teamName = args.match(/--team[=\s]+([\w-]+)/)?.[1] || "brainstorm"
+const agentName = args.match(/--agent-name[=\s]+([\w-]+)/)?.[1] || role
+```
+
+### Role Dispatch
+
+```javascript
+const VALID_ROLES = {
+  "coordinator": { file: "roles/coordinator.md", prefix: null },
+  "ideator": { file: "roles/ideator.md", prefix: "IDEA" },
+  "challenger": { file: "roles/challenger.md", prefix: "CHALLENGE" },
+  "synthesizer": { file: "roles/synthesizer.md", prefix: "SYNTH" },
+  "evaluator": { file: "roles/evaluator.md", prefix: "EVAL" }
+}
+
+if (!VALID_ROLES[role]) {
+  throw new Error(`Unknown role: ${role}. Available: ${Object.keys(VALID_ROLES).join(', ')}`)
+}
+
+// Read and execute role-specific logic
+Read(VALID_ROLES[role].file)
+// → Execute the 5-phase process defined in that file
+```
+
+### Available Roles
+
+| Role | Task Prefix | Responsibility | Role File |
+|------|-------------|----------------|-----------|
+| `coordinator` | N/A | Topic clarification, complexity assessment, pipeline selection, convergence monitoring | [roles/coordinator.md](roles/coordinator.md) |
+| `ideator` | IDEA-* | Multi-angle idea generation, concept exploration, divergent thinking | [roles/ideator.md](roles/ideator.md) |
+| `challenger` | CHALLENGE-* | Devil's advocate, assumption challenging, feasibility questioning | [roles/challenger.md](roles/challenger.md) |
+| `synthesizer` | SYNTH-* | Cross-idea integration, theme extraction, conflict resolution | [roles/synthesizer.md](roles/synthesizer.md) |
+| `evaluator` | EVAL-* | Scoring and ranking, priority recommendation, final selection | [roles/evaluator.md](roles/evaluator.md) |

 ## Shared Infrastructure

-The following templates apply to all worker roles. Each role.md only needs to write **Phase 2-4** role-specific logic.
-
-### Worker Phase 1: Task Discovery (shared by all workers)
-
-Every worker executes the same task discovery flow on startup:
-
-1. Call `TaskList()` to get all tasks
-2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
-3. No tasks → idle wait
-4. Has tasks → `TaskGet` for details → `TaskUpdate` mark in_progress
-
-**Resume Artifact Check** (prevent duplicate output after resume):
-- Check whether this task's output artifact already exists
-- Artifact complete → skip to Phase 5 report completion
-- Artifact incomplete or missing → normal Phase 2-4 execution
-
-### Worker Phase 5: Report (shared by all workers)
-
-Standard reporting flow after task completion:
-
-1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
-   - Parameters: operation="log", team=**<session-id>**, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
-   - **CLI fallback**: When MCP unavailable → `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
-   - **Note**: `team` must be session ID (e.g., `BRS-xxx-date`), NOT team name. Extract from `Session:` field in task description.
-2. **SendMessage**: Send result to coordinator (content and summary both prefixed with `[<role>]`)
-3. **TaskUpdate**: Mark task completed
-4. **Loop**: Return to Phase 1 to check next task
-
-### Wisdom Accumulation (all roles)
-
-Cross-task knowledge accumulation. Coordinator creates `wisdom/` directory at session initialization.
-
-**Directory**:
-```
-<session-folder>/wisdom/
-├── learnings.md    # Patterns and insights
-├── decisions.md    # Architecture and design decisions
-├── conventions.md  # Codebase conventions
-└── issues.md       # Known risks and issues
-```
-
-**Worker Load** (Phase 2): Extract `Session: <path>` from task description, read wisdom directory files.
-**Worker Contribute** (Phase 4/5): Write this task's discoveries to corresponding wisdom files.
-
 ### Role Isolation Rules

-| Allowed | Forbidden |
-|---------|-----------|
-| Process tasks with own prefix | Process tasks with other role prefixes |
-| SendMessage to coordinator | Communicate directly with other workers |
-| Read/write shared-memory.json (own fields) | Create tasks for other roles |
-| Delegate to commands/ files | Modify resources outside own responsibility |
-
-Coordinator additional restrictions: Do not generate ideas directly, do not evaluate/challenge ideas, do not execute analysis/synthesis, do not bypass workers.
-
-### Output Tagging
-
-All outputs must carry `[role_name]` prefix in both SendMessage content/summary and team_msg summary.
+**Core principle**: Each role may only perform work within its own scope of responsibility.
+
+#### Output Tagging (mandatory)
+
+All role output must carry a `[role_name]` tag prefix:
+
+```javascript
+// SendMessage — both content and summary must carry the tag
+SendMessage({
+  content: `## [${role}] ...`,
+  summary: `[${role}] ...`
+})
+
+// team_msg — summary must carry the tag
+mcp__ccw-tools__team_msg({
+  summary: `[${role}] ...`
+})
+```
+
+#### Coordinator Isolation
+
+| Allowed | Forbidden |
+|---------|-----------|
+| Requirement clarification (AskUserQuestion) | ❌ Generate ideas directly |
+| Create task chain (TaskCreate) | ❌ Evaluate/challenge ideas directly |
+| Dispatch tasks to workers | ❌ Execute analysis/synthesis directly |
+| Monitor progress (message bus) | ❌ Bypass workers and complete tasks itself |
+| Report results to user | ❌ Modify source code or artifact files |
+
+#### Worker Isolation
+
+| Allowed | Forbidden |
+|---------|-----------|
+| Process tasks with own prefix | ❌ Process tasks with other role prefixes |
+| SendMessage to coordinator | ❌ Communicate directly with other workers |
+| Read shared-memory.json | ❌ Create tasks for other roles (TaskCreate) |
+| Write shared-memory.json (own fields) | ❌ Modify resources outside own responsibility |
+
+### Team Configuration
+
+```javascript
+const TEAM_CONFIG = {
+  name: "brainstorm",
+  sessionDir: ".workflow/.team/BRS-{slug}-{date}/",
+  sharedMemory: "shared-memory.json"
+}
+```
+
+### Shared Memory (innovation mode)
+
+All roles read `shared-memory.json` in Phase 2 and write it in Phase 5:
+
+```javascript
+// Phase 2: read shared memory
+const memoryPath = `${sessionFolder}/shared-memory.json`
+let sharedMemory = {}
+try { sharedMemory = JSON.parse(Read(memoryPath)) } catch {}
+
+// Phase 5: write shared memory (only update the fields you own)
+// ideator → sharedMemory.generated_ideas
+// challenger → sharedMemory.critique_insights
+// synthesizer → sharedMemory.synthesis_themes
+// evaluator → sharedMemory.evaluation_scores
+Write(memoryPath, JSON.stringify(sharedMemory, null, 2))
+```

 ### Message Bus (All Roles)

 Before every SendMessage, you must call `mcp__ccw-tools__team_msg` to log:

-**Parameters**: operation="log", team=**<session-id>**, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
-
-**CLI fallback**: When MCP unavailable → `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
-
-**Note**: `team` must be session ID (e.g., `BRS-xxx-date`), NOT team name. Extract from `Session:` field in task description.
+```javascript
+mcp__ccw-tools__team_msg({
+  operation: "log",
+  team: teamName,
+  from: role,
+  to: "coordinator",
+  type: "<type>",
+  summary: `[${role}] <summary>`,
+  ref: "<file_path>"
+})
+```

 **Message types by role**:

@@ -159,26 +168,36 @@ Every SendMessage **before**, must call `mcp__ccw-tools__team_msg` to log:

 | synthesizer | `synthesis_ready`, `error` |
 | evaluator | `evaluation_ready`, `error` |

-### Shared Memory
-
-All roles read in Phase 2 and write in Phase 5 to `shared-memory.json`:
-
-| Role | Field |
-|------|-------|
-| ideator | `generated_ideas` |
-| challenger | `critique_insights` |
-| synthesizer | `synthesis_themes` |
-| evaluator | `evaluation_scores` |
-
-### Team Configuration
-
-| Setting | Value |
-|---------|-------|
-| Team name | brainstorm |
-| Session directory | `.workflow/.team/BRS-<slug>-<date>/` |
-| Shared memory | `shared-memory.json` in session dir |
-
----
+### CLI Fallback
+
+When `mcp__ccw-tools__team_msg` MCP is unavailable:
+
+```javascript
+Bash(`ccw team log --team "${teamName}" --from "${role}" --to "coordinator" --type "<type>" --summary "<summary>" --json`)
+```
+
+### Task Lifecycle (All Worker Roles)
+
+```javascript
+const tasks = TaskList()
+const myTasks = tasks.filter(t =>
+  t.subject.startsWith(`${VALID_ROLES[role].prefix}-`) &&
+  t.owner === agentName && // Use agentName (e.g., 'ideator-1') instead of role
+  t.status === 'pending' &&
+  t.blockedBy.length === 0
+)
+if (myTasks.length === 0) return // idle
+const task = TaskGet({ taskId: myTasks[0].id })
+TaskUpdate({ taskId: task.id, status: 'in_progress' })
+
+// Phase 2-4: Role-specific (see roles/{role}.md)
+
+// Phase 5: Report + Loop — all output must carry the [role] tag
+mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: role, to: "coordinator", type: "...", summary: `[${role}] ...` })
+SendMessage({ type: "message", recipient: "coordinator", content: `## [${role}] ...`, summary: `[${role}] ...` })
+TaskUpdate({ taskId: task.id, status: 'completed' })
+// Check for next task → back to Phase 1
+```

 ## Three-Pipeline Architecture

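The task-discovery filter in the new Task Lifecycle block can be exercised against mock data. A sketch with `TaskList` stubbed (the stub and sample tasks are hypothetical; the real tool environment provides `TaskList` itself):

```javascript
// Sketch of the worker task-discovery filter with a stubbed TaskList().
const VALID_ROLES = { ideator: { prefix: "IDEA" } };
const role = "ideator";
const agentName = "ideator-1";

// Stubbed task list: one task for this agent, one for a sibling ideator,
// one for another role that is also still blocked.
const TaskList = () => [
  { id: "t1", subject: "IDEA-001: angle A", owner: "ideator-1", status: "pending", blockedBy: [] },
  { id: "t2", subject: "IDEA-002: angle B", owner: "ideator-2", status: "pending", blockedBy: [] },
  { id: "t3", subject: "CHALLENGE-001: critique", owner: "challenger", status: "pending", blockedBy: ["t1"] },
];

const myTasks = TaskList().filter(t =>
  t.subject.startsWith(`${VALID_ROLES[role].prefix}-`) && // IDEA-* prefix only
  t.owner === agentName &&  // owner matched by agent name, not role
  t.status === "pending" &&
  t.blockedBy.length === 0
);
console.log(myTasks.map(t => t.id)); // [ 't1' ]
```

Only `t1` survives: `t2` belongs to a different agent instance, and `t3` has the wrong prefix and is blocked.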
@@ -195,193 +214,19 @@ Full (Fan-out + Generator-Critic):

 ### Generator-Critic Loop

 ideator ↔ challenger loop, max 2 rounds:

 ```
 IDEA → CHALLENGE → (if critique.severity >= HIGH) → IDEA-fix → CHALLENGE-2 → SYNTH
                    (if critique.severity < HIGH) → SYNTH
 ```

-### Cadence Control
-
-**Beat model**: Event-driven, each beat = coordinator wake → process → spawn → STOP. Brainstorm beat: generate → challenge → synthesize → evaluate.
-
-```
-Beat Cycle (single beat)
-═══════════════════════════════════════════════════════════
-Event               Coordinator               Workers
-───────────────────────────────────────────────────────────
-callback/resume ──→ ┌─ handleCallback ──┐
-                    │ mark completed    │
-                    │ check pipeline    │
-                    ├─ handleSpawnNext ─┤
-                    │ find ready tasks  │
-                    │ spawn workers ────┼──→ [Worker A] Phase 1-5
-                    │ (parallel OK) ────┼──→ [Worker B] Phase 1-5
-                    └─ STOP (idle) ─────┘         │
-                                                  │
-callback ←────────────────────────────────────────┘
-(next beat)         SendMessage + TaskUpdate(completed)
-═══════════════════════════════════════════════════════════
-```
-
-**Pipeline beat views**:
-
-```
-Quick (3 beats, strictly serial)
-──────────────────────────────────────────────────────────
-Beat    1        2          3
-        │        │          │
-      IDEA → CHALLENGE ──→ SYNTH
-        ▲                    ▲
-     pipeline             pipeline
-      start                 done
-
-IDEA=ideator  CHALLENGE=challenger  SYNTH=synthesizer
-
-Deep (5-6 beats, with Generator-Critic loop)
-──────────────────────────────────────────────────────────
-Beat    1        2           3           4         5      6
-        │        │           │           │         │      │
-      IDEA → CHALLENGE → (GC loop?) → IDEA-fix → SYNTH → EVAL
-                 │
-           severity check
-           (< HIGH → skip to SYNTH)
-
-Full (4-7 beats, fan-out + Generator-Critic)
-──────────────────────────────────────────────────────────
-Beat        1              2           3-4        5      6
-       ┌────┴────┐         │            │         │      │
-IDEA-1 ∥ IDEA-2 ∥ IDEA-3 → CHALLENGE → (GC loop) → SYNTH → EVAL
-   ▲                                                 ▲
-parallel                                          pipeline
- window                                             done
-```
-
-**Checkpoints**:
-
-| Trigger | Location | Behavior |
-|---------|----------|----------|
-| Generator-Critic loop | After CHALLENGE-* | If severity >= HIGH → create IDEA-fix task; else proceed to SYNTH |
-| GC loop limit | Max 2 rounds | Exceeds limit → force convergence to SYNTH |
-| Pipeline stall | No ready + no running | Check missing tasks, report to user |
-
-**Stall Detection** (coordinator `handleCheck` executes):
-
-| Check | Condition | Resolution |
-|-------|-----------|------------|
-| Worker no response | in_progress task no callback | Report waiting task list, suggest user `resume` |
-| Pipeline deadlock | no ready + no running + has pending | Check blockedBy dependency chain, report blocking point |
-| GC loop exceeded | ideator/challenger iteration > 2 rounds | Terminate loop, force convergence to synthesizer |
-
-### Task Metadata Registry
-
-| Task ID | Role | Phase | Dependencies | Description |
-|---------|------|-------|--------------|-------------|
-| IDEA-001 | ideator | generate | (none) | Multi-angle idea generation |
-| IDEA-002 | ideator | generate | (none) | Parallel angle (Full pipeline only) |
-| IDEA-003 | ideator | generate | (none) | Parallel angle (Full pipeline only) |
-| CHALLENGE-001 | challenger | challenge | IDEA-001 (or all IDEA-*) | Devil's advocate critique and feasibility challenge |
-| IDEA-004 | ideator | gc-fix | CHALLENGE-001 | Revision based on critique (GC loop, if triggered) |
-| CHALLENGE-002 | challenger | gc-fix | IDEA-004 | Re-critique of revised ideas (GC loop round 2) |
-| SYNTH-001 | synthesizer | synthesize | last CHALLENGE-* | Cross-idea integration, theme extraction, conflict resolution |
-| EVAL-001 | evaluator | evaluate | SYNTH-001 | Scoring, ranking, priority recommendation, final selection |
-
----
-
-## Coordinator Spawn Template
-
-When coordinator spawns workers, use background mode (Spawn-and-Stop).
-
-**Standard spawn** (single agent per role): For Quick/Deep pipeline, spawn one ideator. Challenger, synthesizer, and evaluator are always single agents.
-
-**Parallel spawn** (Full pipeline): For Full pipeline with N idea angles, spawn N ideator agents in parallel (`ideator-1`, `ideator-2`, ...) with `run_in_background: true`. Each parallel ideator only processes tasks where owner matches its agent name. After all parallel ideators complete, proceed with single challenger for batch critique.
-
-**Spawn template**:
-
-```
-Task({
-  subagent_type: "general-purpose",
-  description: "Spawn <role> worker",
-  team_name: "brainstorm",
-  name: "<role>",
-  run_in_background: true,
-  prompt: `You are team "brainstorm" <ROLE>.
-
-## Primary Directive
-All your work must be executed through Skill to load role definition:
-Skill(skill="team-brainstorm", args="--role=<role>")
-
-Current topic: <topic-description>
-Session: <session-folder>
-
-## Role Guidelines
-- Only process <PREFIX>-* tasks, do not execute other role work
-- All output prefixed with [<role>] identifier
-- Only communicate with coordinator
-- Do not use TaskCreate for other roles
-- Call mcp__ccw-tools__team_msg before every SendMessage
-
-## Workflow
-1. Call Skill -> load role definition and execution logic
-2. Follow role.md 5-Phase flow
-3. team_msg + SendMessage results to coordinator
-4. TaskUpdate completed -> check next task`
-})
-```
-
-**Parallel ideator spawn** (Full pipeline with N angles):
-
-> When Full pipeline has N parallel IDEA tasks assigned to ideator role, spawn N distinct agents named `ideator-1`, `ideator-2`, etc. Each agent only processes tasks where owner matches its agent name.
-
-| Condition | Action |
-|-----------|--------|
-| Full pipeline with N idea angles (N > 1) | Spawn N agents: `ideator-1`, `ideator-2`, ... `ideator-N` with `run_in_background: true` |
-| Quick/Deep pipeline (single ideator) | Standard spawn: single `ideator` agent |
-
-```
-Task({
-  subagent_type: "general-purpose",
-  description: "Spawn ideator-<N> worker",
-  team_name: "brainstorm",
-  name: "ideator-<N>",
-  run_in_background: true,
-  prompt: `You are team "brainstorm" IDEATOR (ideator-<N>).
-Your agent name is "ideator-<N>", use this name for task discovery owner matching.
-
-## Primary Directive
-Skill(skill="team-brainstorm", args="--role=ideator --agent-name=ideator-<N>")
-
-Current topic: <topic-description>
-Session: <session-folder>
-
-## Role Guidelines
-- Only process tasks where owner === "ideator-<N>" with IDEA-* prefix
-- All output prefixed with [ideator] identifier
-
-## Workflow
-1. TaskList -> find tasks where owner === "ideator-<N>" with IDEA-* prefix
-2. Skill -> execute role definition
-3. team_msg + SendMessage results to coordinator
-4. TaskUpdate completed -> check next task`
-})
-```
-
-**Dispatch must match agent names**: When dispatching parallel IDEA tasks, coordinator sets each task's owner to the corresponding instance name (`ideator-1`, `ideator-2`, etc.). In role.md, task discovery uses `--agent-name` for owner matching.
-
----

 ## Unified Session Directory

 ```
-.workflow/.team/BRS-<slug>-<YYYY-MM-DD>/
+.workflow/.team/BRS-{slug}-{YYYY-MM-DD}/
 ├── team-session.json      # Session state
 ├── shared-memory.json     # Cumulative: generated_ideas / critique_insights / synthesis_themes / evaluation_scores
-├── wisdom/                # Cross-task knowledge
-│   ├── learnings.md
-│   ├── decisions.md
-│   ├── conventions.md
-│   └── issues.md
 ├── ideas/                 # Ideator output
 │   ├── idea-001.md
 │   ├── idea-002.md
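The Generator-Critic checkpoint described above reduces to a small routing decision after each CHALLENGE-* task. A sketch under stated assumptions (the `nextStep` function and severity ordering are hypothetical; the max-2-rounds limit and the severity threshold come from the checkpoint table):

```javascript
// Sketch of the Generator-Critic checkpoint: route after each CHALLENGE-* task.
const SEVERITY = { LOW: 0, MEDIUM: 1, HIGH: 2 };
const MAX_GC_ROUNDS = 2; // GC loop limit from the checkpoint table

function nextStep(critiqueSeverity, gcRound) {
  if (SEVERITY[critiqueSeverity] >= SEVERITY.HIGH && gcRound < MAX_GC_ROUNDS) {
    return "IDEA-fix"; // spawn a revision task; the loop continues
  }
  return "SYNTH";      // converge: low severity, or round limit reached
}

console.log(nextStep("HIGH", 1));   // "IDEA-fix"
console.log(nextStep("MEDIUM", 1)); // "SYNTH"
console.log(nextStep("HIGH", 2));   // "SYNTH" (forced convergence after 2 rounds)
```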
@@ -395,13 +240,154 @@ Session: <session-folder>
|
|||||||
└── evaluation-001.md
|
└── evaluation-001.md
|
||||||
```
|
```
|
||||||
|
|
||||||
|
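The `BRS-<slug>-<YYYY-MM-DD>` naming convention above can be sketched in plain JavaScript. The topic string here is a made-up example; the slugging logic mirrors the coordinator's Phase 2 code.

```javascript
// Sketch of the BRS-<slug>-<YYYY-MM-DD> session-id convention (hypothetical topic).
const topic = 'API Rate Limiting: Strategy & Tradeoffs'
const slug = topic.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const date = new Date('2026-02-28T00:00:00Z').toISOString().substring(0, 10)
const sessionId = `BRS-${slug}-${date}`
console.log(sessionId)
// → BRS-api-rate-limiting-strategy-tradeoffs-2026-02-28
```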
## Coordinator Spawn Template

```javascript
TeamCreate({ team_name: teamName })

// Ideator — conditional parallel spawn for Full pipeline (multiple angles)
const isFullPipeline = selectedPipeline === 'full'
const ideaAngles = selectedAngles || []

if (isFullPipeline && ideaAngles.length > 1) {
  // Full pipeline: spawn N ideators for N parallel angle tasks
  for (let i = 0; i < ideaAngles.length; i++) {
    const agentName = `ideator-${i + 1}`
    Task({
      subagent_type: "general-purpose",
      team_name: teamName,
      name: agentName,
      prompt: `You are the IDEATOR (${agentName}) of team "${teamName}".
Your agent name is "${agentName}"; use it to match task owners during task discovery.

When you receive an IDEA-* task, execute it via Skill(skill="team-brainstorm", args="--role=ideator --agent-name=${agentName}").
Current topic: ${taskDescription}
Constraints: ${constraints}

## Role Guidelines (mandatory)
- Only process IDEA-* prefixed tasks whose owner is "${agentName}"
- All output (SendMessage, team_msg) must carry the [ideator] tag prefix
- Communicate only with the coordinator; do not contact other workers directly
- Do not use TaskCreate to create tasks for other roles

## Message Bus (required)
Before every SendMessage, log via mcp__ccw-tools__team_msg.

Workflow:
1. TaskList → find IDEA-* tasks with owner === "${agentName}"
2. Skill(skill="team-brainstorm", args="--role=ideator --agent-name=${agentName}") to execute
3. team_msg log + SendMessage results to coordinator (with [ideator] tag)
4. TaskUpdate completed → check next task`
    })
  }
} else {
  // Quick/Deep pipeline: single ideator
  Task({
    subagent_type: "general-purpose",
    team_name: teamName,
    name: "ideator",
    prompt: `You are the IDEATOR of team "${teamName}".
When you receive an IDEA-* task, execute it via Skill(skill="team-brainstorm", args="--role=ideator").
Current topic: ${taskDescription}
Constraints: ${constraints}

## Role Guidelines (mandatory)
- Only process IDEA-* prefixed tasks; do not perform other roles' work
- All output (SendMessage, team_msg) must carry the [ideator] tag prefix
- Communicate only with the coordinator; do not contact other workers directly
- Do not use TaskCreate to create tasks for other roles

## Message Bus (required)
Before every SendMessage, log via mcp__ccw-tools__team_msg.

Workflow:
1. TaskList → find IDEA-* tasks
2. Skill(skill="team-brainstorm", args="--role=ideator") to execute
3. team_msg log + SendMessage results to coordinator (with [ideator] tag)
4. TaskUpdate completed → check next task`
  })
}

// Challenger
Task({
  subagent_type: "general-purpose",
  team_name: teamName,
  name: "challenger",
  prompt: `You are the CHALLENGER of team "${teamName}".
When you receive a CHALLENGE-* task, execute it via Skill(skill="team-brainstorm", args="--role=challenger").
Current topic: ${taskDescription}

## Role Guidelines (mandatory)
- Only process CHALLENGE-* prefixed tasks
- All output must carry the [challenger] tag prefix
- Communicate only with the coordinator

## Message Bus (required)
Before every SendMessage, log via mcp__ccw-tools__team_msg.

Workflow:
1. TaskList → find CHALLENGE-* tasks
2. Skill(skill="team-brainstorm", args="--role=challenger") to execute
3. team_msg log + SendMessage results to coordinator (with [challenger] tag)
4. TaskUpdate completed → check next task`
})

// Synthesizer
Task({
  subagent_type: "general-purpose",
  team_name: teamName,
  name: "synthesizer",
  prompt: `You are the SYNTHESIZER of team "${teamName}".
When you receive a SYNTH-* task, execute it via Skill(skill="team-brainstorm", args="--role=synthesizer").
Current topic: ${taskDescription}

## Role Guidelines (mandatory)
- Only process SYNTH-* prefixed tasks
- All output must carry the [synthesizer] tag prefix
- Communicate only with the coordinator

## Message Bus (required)
Before every SendMessage, log via mcp__ccw-tools__team_msg.

Workflow:
1. TaskList → find SYNTH-* tasks
2. Skill(skill="team-brainstorm", args="--role=synthesizer") to execute
3. team_msg log + SendMessage results to coordinator (with [synthesizer] tag)
4. TaskUpdate completed → check next task`
})

// Evaluator
Task({
  subagent_type: "general-purpose",
  team_name: teamName,
  name: "evaluator",
  prompt: `You are the EVALUATOR of team "${teamName}".
When you receive an EVAL-* task, execute it via Skill(skill="team-brainstorm", args="--role=evaluator").
Current topic: ${taskDescription}

## Role Guidelines (mandatory)
- Only process EVAL-* prefixed tasks
- All output must carry the [evaluator] tag prefix
- Communicate only with the coordinator

## Message Bus (required)
Before every SendMessage, log via mcp__ccw-tools__team_msg.

Workflow:
1. TaskList → find EVAL-* tasks
2. Skill(skill="team-brainstorm", args="--role=evaluator") to execute
3. team_msg log + SendMessage results to coordinator (with [evaluator] tag)
4. TaskUpdate completed → check next task`
})
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode → auto route to coordinator |
| Role file not found | Error with expected path (roles/<name>.md) |
| Task prefix conflict | Log warning, proceed |
| Generator-Critic loop exceeds 2 rounds | Force convergence → SYNTH |
| No ideas generated | Coordinator prompts with seed questions |

---
# Role: challenger

Devil's advocate role. Responsible for challenging assumptions, questioning feasibility, and identifying risks. Acts as the Critic in the Generator-Critic loop.

## Role Identity

- **Name**: `challenger`
- **Task Prefix**: `CHALLENGE-*`
- **Responsibility**: Read-only analysis (critical analysis)
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[challenger]`

## Role Boundaries

### MUST
### MUST NOT

- ❌ Generate ideas, synthesize ideas, or evaluate/rank
- ❌ Communicate directly with other worker roles
- ❌ Create tasks for other roles
- ❌ Modify fields in shared-memory.json that do not belong to this role
- ❌ Omit the `[challenger]` tag from output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `TaskList` | Built-in | Phase 1 | Discover pending CHALLENGE-* tasks |
| `TaskGet` | Built-in | Phase 1 | Get task details |
| `TaskUpdate` | Built-in | Phase 1/5 | Update task status |
| `Read` | Built-in | Phase 2 | Read shared-memory.json, idea files |
| `Write` | Built-in | Phase 3/5 | Write critique files, update shared memory |
| `Glob` | Built-in | Phase 2 | Find idea files |
| `SendMessage` | Built-in | Phase 5 | Report to coordinator |
| `mcp__ccw-tools__team_msg` | MCP | Phase 5 | Log communication |

---
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `critique_ready` | challenger → coordinator | Critique completed | Critical analysis complete |
| `error` | challenger → coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,  // MUST be session ID (e.g., BRS-xxx-date), NOT team name. Extract from Session: field.
  from: "challenger",
  to: "coordinator",
  type: "critique_ready",
  summary: "[challenger] Critique complete: <critical>C/<high>H/<medium>M/<low>L -- Signal: <signal>",
  ref: <output-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from challenger --to coordinator --type critique_ready --summary \"[challenger] Critique complete\" --ref <output-path> --json")
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `CHALLENGE-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('CHALLENGE-') &&
  t.owner === 'challenger' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```

### Phase 2: Context Loading + Shared Memory Read

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| Ideas | ideas/*.md files | Yes |
| Previous critiques | shared-memory.json.critique_insights | No (avoid repeating) |

```javascript
const sessionMatch = task.description.match(/Session:\s*([^\n]+)/)
const sessionFolder = sessionMatch?.[1]?.trim()

const memoryPath = `${sessionFolder}/shared-memory.json`
let sharedMemory = {}
try { sharedMemory = JSON.parse(Read(memoryPath)) } catch {}

// Read all idea files referenced in task
const ideaFiles = Glob({ pattern: `${sessionFolder}/ideas/*.md` })
const ideas = ideaFiles.map(f => Read(f))

// Read previous critiques for context (avoid repeating)
const prevCritiques = sharedMemory.critique_insights || []
```

### Phase 3: Critical Analysis

**Challenge Dimensions** (apply to each idea):

| Dimension | Focus |
|-----------|-------|
| Assumption Validity | Does the core assumption hold? Any counter-examples? |
| Feasibility | Technical/resource/time feasibility? |
| Risk Assessment | Worst case scenario? Hidden risks? |
| Competitive Analysis | Better alternatives already exist? |

**Severity Classification**:

| Severity | Criteria |
|----------|----------|
| CRITICAL | Fundamental issue, idea may need replacement |
| HIGH | Significant flaw, requires revision |
| MEDIUM | Notable weakness, needs consideration |
| LOW | Minor concern, does not invalidate the idea |

**Generator-Critic Signal**:

| Condition | Signal |
|-----------|--------|
| Any CRITICAL or HIGH severity | REVISION_NEEDED -> ideator must revise |
| All MEDIUM or lower | CONVERGED -> ready for synthesis |

**Output file structure**:

- File: `<session>/critiques/critique-<num>.md`
- Sections: Ideas Reviewed, Challenge Dimensions, Per-idea challenges with severity table, Summary table with counts, GC Signal

```javascript
const challengeNum = task.subject.match(/CHALLENGE-(\d+)/)?.[1] || '001'
const outputPath = `${sessionFolder}/critiques/critique-${challengeNum}.md`

const critiqueContent = `# Critique — Round ${challengeNum}

**Ideas Reviewed**: ${ideas.length} files
**Challenge Dimensions**: Assumption Validity, Feasibility, Risk, Competition

## Challenges

${challenges.map((c, i) => `### Idea: ${c.ideaTitle}

**Severity**: ${c.severity}

| Dimension | Finding |
|-----------|---------|
| Assumption Validity | ${c.assumption} |
| Feasibility | ${c.feasibility} |
| Risk Assessment | ${c.risk} |
| Competitive Analysis | ${c.competition} |

**Key Challenge**: ${c.keyChallenge}
**Suggested Direction**: ${c.suggestion}
`).join('\n')}

## Summary

| Severity | Count |
|----------|-------|
| CRITICAL | ${challenges.filter(c => c.severity === 'CRITICAL').length} |
| HIGH | ${challenges.filter(c => c.severity === 'HIGH').length} |
| MEDIUM | ${challenges.filter(c => c.severity === 'MEDIUM').length} |
| LOW | ${challenges.filter(c => c.severity === 'LOW').length} |

**Generator-Critic Signal**: ${
  challenges.some(c => c.severity === 'CRITICAL' || c.severity === 'HIGH')
    ? 'REVISION_NEEDED — Critical/High issues require ideator revision'
    : 'CONVERGED — No critical issues, ready for synthesis'
}
`

Write(outputPath, critiqueContent)
```

### Phase 4: Severity Summary

Aggregate severity counts for the coordinator's decision; the signal is REVISION_NEEDED if any CRITICAL or HIGH challenge exists, else CONVERGED.

```javascript
const severitySummary = {
  critical: challenges.filter(c => c.severity === 'CRITICAL').length,
  high: challenges.filter(c => c.severity === 'HIGH').length,
  medium: challenges.filter(c => c.severity === 'MEDIUM').length,
  low: challenges.filter(c => c.severity === 'LOW').length,
  signal: (challenges.some(c => c.severity === 'CRITICAL' || c.severity === 'HIGH'))
    ? 'REVISION_NEEDED' : 'CONVERGED'
}
```

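A self-contained, runnable version of the same aggregation, with a made-up `challenges` array standing in for real Phase 3 output:

```javascript
// Sample data only; in the role this array comes from Phase 3 analysis.
const challenges = [
  { ideaTitle: 'Idea A', severity: 'HIGH' },
  { ideaTitle: 'Idea B', severity: 'LOW' },
  { ideaTitle: 'Idea C', severity: 'MEDIUM' }
]

const count = s => challenges.filter(c => c.severity === s).length
const severitySummary = {
  critical: count('CRITICAL'),
  high: count('HIGH'),
  medium: count('MEDIUM'),
  low: count('LOW'),
  signal: challenges.some(c => c.severity === 'CRITICAL' || c.severity === 'HIGH')
    ? 'REVISION_NEEDED' : 'CONVERGED'
}
console.log(severitySummary)
// → { critical: 0, high: 1, medium: 1, low: 1, signal: 'REVISION_NEEDED' }
```

A single HIGH finding is enough to flip the signal to REVISION_NEEDED, which is what drives another Generator-Critic round.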
### Phase 5: Report to Coordinator + Shared Memory Write

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[challenger]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

```javascript
// Update shared memory: append each challenge (idea, severity, key_challenge, round)
sharedMemory.critique_insights = [
  ...sharedMemory.critique_insights,
  ...challenges.map(c => ({
    idea: c.ideaTitle,
    severity: c.severity,
    key_challenge: c.keyChallenge,
    round: parseInt(challengeNum)
  }))
]
Write(memoryPath, JSON.stringify(sharedMemory, null, 2))

mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "challenger",
  to: "coordinator",
  type: "critique_ready",
  summary: `[challenger] Critique complete: ${severitySummary.critical}C/${severitySummary.high}H/${severitySummary.medium}M/${severitySummary.low}L — Signal: ${severitySummary.signal}`,
  ref: outputPath
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `## [challenger] Critique Results

**Task**: ${task.subject}
**Signal**: ${severitySummary.signal}
**Severity**: ${severitySummary.critical} Critical, ${severitySummary.high} High, ${severitySummary.medium} Medium, ${severitySummary.low} Low
**Output**: ${outputPath}

${severitySummary.signal === 'REVISION_NEEDED'
  ? '### Requires Revision\n' + challenges.filter(c => ['CRITICAL', 'HIGH'].includes(c.severity)).map(c => `- **${c.ideaTitle}** (${c.severity}): ${c.keyChallenge}`).join('\n')
  : '### All Clear — Ready for Synthesis'}`,
  summary: `[challenger] Critique: ${severitySummary.signal}`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next task
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('CHALLENGE-') &&
  t.owner === 'challenger' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
  // back to Phase 1
}
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Ideas file not found | Notify coordinator |
| All ideas trivially good | Mark all LOW, signal CONVERGED |
| Cannot assess feasibility | Mark MEDIUM with note, suggest deeper analysis |
| Critical issue beyond scope | SendMessage error to coordinator |

---
# Role: coordinator

Brainstorm team coordinator. Responsible for topic clarification, complexity assessment, pipeline selection, Generator-Critic loop control, and convergence monitoring.

## Role Identity

- **Name**: `coordinator`
- **Task Prefix**: N/A (coordinator creates tasks, doesn't receive them)
- **Responsibility**: Orchestration (parse requirements -> create team -> dispatch tasks -> monitor progress -> report results)
- **Communication**: SendMessage to all teammates
- **Output Tag**: `[coordinator]`

## Role Boundaries

### MUST
- All output (SendMessage, team_msg, logs) must carry the `[coordinator]` tag
- Parse user requirements; clarify ambiguous input via AskUserQuestion
- Create the team and assign tasks to worker roles via TaskCreate
- Monitor worker progress and route messages via the message bus
- Manage the Generator-Critic loop count and decide whether to keep iterating
- Maintain session state persistence

### MUST NOT

- ❌ **Directly generate ideas, challenge assumptions, synthesize ideas, or evaluate/rank**
- ❌ Directly invoke implementation-type subagents
- ❌ Directly modify artifact files (ideas/*.md, critiques/*.md, etc.)
- ❌ Bypass worker roles and do work that should be delegated
- ❌ Omit the `[coordinator]` tag from output

> **Core principle**: the coordinator is a conductor, not an executor. All actual work must be delegated to worker roles via TaskCreate.

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `pipeline_selected` | coordinator → all | Pipeline decided | Notify selected pipeline mode |
| `gc_loop_trigger` | coordinator → ideator | Critique severity >= HIGH | Trigger ideator to revise |
| `task_unblocked` | coordinator → any | Dependency resolved | Notify worker of available task |
| `error` | coordinator → all | Critical system error | Escalation to user |
| `shutdown` | coordinator → all | Team being dissolved | Clean shutdown signal |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,  // MUST be session ID (e.g., BRS-xxx-date), NOT team name. Extract from Session: field.
  from: "coordinator",
  to: <recipient>,
  type: <message-type>,
  summary: "[coordinator] <action> complete: <subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from coordinator --to <recipient> --type <message-type> --summary \"[coordinator] <action> complete\" --ref <artifact-path> --json")
```

---

## Entry Router

When coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |

For callback/check/resume: load monitor logic and execute the appropriate handler, then STOP.

---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:

1. Scan session directory for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:

1. Audit TaskList -> get real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine remaining pipeline from reconciled state
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update session file with reconciled state
9. Kick first executable task's worker -> Phase 4

---

## Execution

### Phase 1: Topic Clarification + Complexity Assessment

**Objective**: Parse user input, assess complexity, select pipeline mode.

1. Parse arguments for `--team-name` and task description:

```javascript
const args = "$ARGUMENTS"
const teamNameMatch = args.match(/--team-name[=\s]+([\w-]+)/)
const teamName = teamNameMatch ? teamNameMatch[1] : `brainstorm-${Date.now().toString(36)}`
const taskDescription = args.replace(/--team-name[=\s]+[\w-]+/, '').replace(/--role[=\s]+\w+/, '').trim()
```

2. Assess topic complexity and select pipeline:

| Signal | Weight | Keywords |
|--------|--------|----------|
| Strategic/systemic | +3 | strategy, architecture, system, framework, paradigm |
| Multi-dimensional | +2 | multiple, compare, tradeoff, versus, alternative |
| Innovation-focused | +2 | innovative, creative, novel, breakthrough |
| Simple/basic | -2 | simple, quick, straightforward, basic |

```javascript
function assessComplexity(topic) {
  let score = 0
  if (/strategy|architecture|system|framework|paradigm/.test(topic)) score += 3
  if (/multiple|compare|tradeoff|versus|alternative/.test(topic)) score += 2
  if (/innovative|creative|novel|breakthrough/.test(topic)) score += 2
  if (/simple|quick|straightforward|basic/.test(topic)) score -= 2
  return score >= 4 ? 'high' : score >= 2 ? 'medium' : 'low'
}

const complexity = assessComplexity(taskDescription)
```

```javascript
AskUserQuestion({
  questions: [
    {
      question: "Select brainstorm mode:",
      header: "Mode",
      multiSelect: false,
      options: [
        { label: complexity === 'low' ? "quick (recommended)" : "quick", description: "Quick mode: ideate → challenge → synthesize (3 steps)" },
        { label: complexity === 'medium' ? "deep (recommended)" : "deep", description: "Deep mode: with Generator-Critic loop (6 steps)" },
        { label: complexity === 'high' ? "full (recommended)" : "full", description: "Full mode: parallel divergence + loop + evaluation (7 steps)" }
      ]
    },
    {
      question: "Idea divergence angles:",
      header: "Angles",
      multiSelect: true,
      options: [
        { label: "Technical", description: "Technical feasibility, implementation approach, architecture design" },
        { label: "Product", description: "User needs, market positioning, business model" },
        { label: "Innovation", description: "Disruptive ideas, cross-domain borrowing, future trends" },
        { label: "Risk", description: "Potential problems, constraints, alternatives" }
      ]
    }
  ]
})
```

### Phase 2: Create Team + Initialize Session

```javascript
TeamCreate({ team_name: teamName })

const topicSlug = taskDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const dateStr = new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().substring(0, 10)
const sessionId = `BRS-${topicSlug}-${dateStr}`
const sessionFolder = `.workflow/.team/${sessionId}`

Bash(`mkdir -p "${sessionFolder}/ideas" "${sessionFolder}/critiques" "${sessionFolder}/synthesis" "${sessionFolder}/evaluation"`)

// Initialize shared memory
const sharedMemory = {
  topic: taskDescription,
  pipeline: selectedPipeline,
  angles: selectedAngles,
  gc_round: 0,
  max_gc_rounds: 2,
  generated_ideas: [],
  critique_insights: [],
  synthesis_themes: [],
  evaluation_scores: []
}
Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))

// Create team-session.json
const teamSession = {
  session_id: sessionId,
  team_name: teamName,
  topic: taskDescription,
  pipeline: selectedPipeline,
  status: "active",
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString(),
  gc_round: 0,
  completed_tasks: []
}
Write(`${sessionFolder}/team-session.json`, JSON.stringify(teamSession, null, 2))
```

> ⚠️ Workers are NOT pre-spawned here. Workers are spawned per-stage in Phase 4 via Stop-Wait Task(run_in_background: false). See SKILL.md Coordinator Spawn Template for worker prompt templates.

### Phase 3: Create Task Chain

Task chain depends on the selected pipeline.

#### Quick Pipeline

```javascript
// IDEA-001: idea generation
TaskCreate({ subject: "IDEA-001: multi-angle idea generation", description: `Topic: ${taskDescription}\n\nSession: ${sessionFolder}\nAngles: ${selectedAngles.join(', ')}\nOutput: ${sessionFolder}/ideas/idea-001.md\n\nRequirement: at least 3 ideas per angle, no fewer than 6 total`, activeForm: "Generating ideas" })
TaskUpdate({ taskId: ideaId, owner: "ideator" })

// CHALLENGE-001: critique (blockedBy IDEA-001)
TaskCreate({ subject: "CHALLENGE-001: assumption challenge and feasibility questioning", description: `Critically analyze the ideas from IDEA-001\n\nSession: ${sessionFolder}\nInput: ${sessionFolder}/ideas/idea-001.md\nOutput: ${sessionFolder}/critiques/critique-001.md\n\nRequirement: mark challenge severity for each idea (LOW/MEDIUM/HIGH/CRITICAL)`, activeForm: "Challenging ideas" })
TaskUpdate({ taskId: challengeId, owner: "challenger", addBlockedBy: [ideaId] })

// SYNTH-001: synthesis (blockedBy CHALLENGE-001)
TaskCreate({ subject: "SYNTH-001: cross-idea integration and theme extraction", description: `Integrate all ideas and challenge feedback\n\nSession: ${sessionFolder}\nInput: ideas/ + critiques/\nOutput: ${sessionFolder}/synthesis/synthesis-001.md\n\nRequirement: extract core themes, resolve conflicts, produce integrated proposals`, activeForm: "Synthesizing" })
TaskUpdate({ taskId: synthId, owner: "synthesizer", addBlockedBy: [challengeId] })
```

#### Deep Pipeline (with Generator-Critic Loop)

```javascript
// IDEA-001 → CHALLENGE-001 → IDEA-002(fix) → CHALLENGE-002 → SYNTH-001 → EVAL-001

TaskCreate({ subject: "IDEA-001: initial idea generation", description: `Topic: ${taskDescription}\n\nSession: ${sessionFolder}\nOutput: ${sessionFolder}/ideas/idea-001.md`, activeForm: "Generating ideas" })
TaskUpdate({ taskId: idea1Id, owner: "ideator" })

TaskCreate({ subject: "CHALLENGE-001: first-round challenge", description: `Challenge the ideas from IDEA-001\n\nSession: ${sessionFolder}\nInput: ideas/idea-001.md\nOutput: critiques/critique-001.md\nMark severity`, activeForm: "Challenging" })
TaskUpdate({ taskId: challenge1Id, owner: "challenger", addBlockedBy: [idea1Id] })

TaskCreate({ subject: "IDEA-002: idea revision (Generator-Critic Round 1)", description: `Revise ideas based on CHALLENGE-001 feedback\n\nSession: ${sessionFolder}\nInput: ideas/idea-001.md + critiques/critique-001.md\nOutput: ideas/idea-002.md\n\nRequirement: revise or replace ideas whose challenges are HIGH/CRITICAL severity`, activeForm: "Revising ideas" })
TaskUpdate({ taskId: idea2Id, owner: "ideator", addBlockedBy: [challenge1Id] })

TaskCreate({ subject: "CHALLENGE-002: second-round validation", description: `Validate the revised ideas\n\nSession: ${sessionFolder}\nInput: ideas/idea-002.md + critiques/critique-001.md\nOutput: critiques/critique-002.md`, activeForm: "Validating" })
TaskUpdate({ taskId: challenge2Id, owner: "challenger", addBlockedBy: [idea2Id] })

TaskCreate({ subject: "SYNTH-001: synthesis", description: `Integrate all ideas and challenge feedback\n\nSession: ${sessionFolder}\nInput: ideas/ + critiques/\nOutput: synthesis/synthesis-001.md`, activeForm: "Synthesizing" })
TaskUpdate({ taskId: synthId, owner: "synthesizer", addBlockedBy: [challenge2Id] })

TaskCreate({ subject: "EVAL-001: scoring, ranking, and final selection", description: `Score and rank the synthesized proposals\n\nSession: ${sessionFolder}\nInput: synthesis/synthesis-001.md + shared-memory.json\nOutput: evaluation/evaluation-001.md\n\nScoring dimensions: feasibility (30%) + innovation (25%) + impact (25%) + implementation cost (20%)`, activeForm: "Evaluating" })
TaskUpdate({ taskId: evalId, owner: "evaluator", addBlockedBy: [synthId] })
```

#### Full Pipeline (Fan-out + Generator-Critic)

```javascript
// Parallel ideation: IDEA-001 + IDEA-002 + IDEA-003 (no dependencies between them)
// Each gets a distinct agent owner for true parallel execution
const ideaAngles = selectedAngles.slice(0, 3)
ideaAngles.forEach((angle, i) => {
  const ideatorName = ideaAngles.length > 1 ? `ideator-${i+1}` : 'ideator'
|
||||||
|
TaskCreate({ subject: `IDEA-00${i+1}: ${angle}角度创意生成`, description: `话题: ${taskDescription}\n角度: ${angle}\n\nSession: ${sessionFolder}\n输出: ideas/idea-00${i+1}.md`, activeForm: `${angle}创意生成中` })
|
||||||
|
TaskUpdate({ taskId: ideaIds[i], owner: ideatorName })
|
||||||
|
})
|
||||||
|
|
||||||
| Score | Complexity | Pipeline Recommendation |
|
// CHALLENGE-001: 批量挑战 (blockedBy all IDEA-001..003)
|
||||||
|-------|------------|-------------------------|
|
TaskCreate({ subject: "CHALLENGE-001: 批量创意挑战", description: `批量挑战所有角度的创意\n\nSession: ${sessionFolder}\n输入: ideas/idea-001..003.md\n输出: critiques/critique-001.md`, activeForm: "批量挑战中" })
|
||||||
| >= 4 | High | full |
|
TaskUpdate({ taskId: challenge1Id, owner: "challenger", addBlockedBy: ideaIds })
|
||||||
| 2-3 | Medium | deep |
|
|
||||||
| 0-1 | Low | quick |
|
|
||||||
|
|
||||||
3. Ask for missing parameters via AskUserQuestion:
|
// IDEA-004: 修订 (blockedBy CHALLENGE-001)
|
||||||
|
TaskCreate({ subject: "IDEA-004: 创意修订", description: `基于批量挑战反馈修订\n\nSession: ${sessionFolder}\n输入: ideas/ + critiques/critique-001.md\n输出: ideas/idea-004.md`, activeForm: "修订中" })
|
||||||
|
TaskUpdate({ taskId: idea4Id, owner: "ideator", addBlockedBy: [challenge1Id] })
|
||||||
|
|
||||||
| Question | Header | Options |
|
// SYNTH-001 (blockedBy IDEA-004)
|
||||||
|----------|--------|---------|
|
TaskCreate({ subject: "SYNTH-001: 综合整合", description: `整合全部创意\n\nSession: ${sessionFolder}\n输入: ideas/ + critiques/\n输出: synthesis/synthesis-001.md`, activeForm: "综合中" })
|
||||||
| Pipeline mode | Mode | quick (3-step), deep (6-step with GC loop), full (7-step parallel + GC) |
|
TaskUpdate({ taskId: synthId, owner: "synthesizer", addBlockedBy: [idea4Id] })
|
||||||
| Divergence angles | Angles | Multi-select: Technical, Product, Innovation, Risk |
|
|
||||||
|
|
||||||
4. Store requirements: mode, scope, angles, constraints
|
// EVAL-001 (blockedBy SYNTH-001)
|
||||||
|
TaskCreate({ subject: "EVAL-001: 评分排序", description: `最终评估\n\nSession: ${sessionFolder}\n输入: synthesis/ + shared-memory.json\n输出: evaluation/evaluation-001.md`, activeForm: "评估中" })
|
||||||
|
TaskUpdate({ taskId: evalId, owner: "evaluator", addBlockedBy: [synthId] })
|
||||||
|
```
|
||||||
|
|
||||||
**Success**: All parameters captured, pipeline finalized.
|
### Phase 4: Coordination Loop + Generator-Critic Control
|
||||||
|
|
||||||
---
|
> **设计原则(Stop-Wait)**: 模型执行没有时间概念,禁止任何形式的轮询等待。
|
||||||
|
> - ❌ 禁止: `while` 循环 + `sleep` + 检查状态
|
||||||
## Phase 2: Create Team + Initialize Session
|
> - ✅ 采用: 同步 `Task(run_in_background: false)` 调用,Worker 返回 = 阶段完成信号
|
||||||
|
>
|
||||||
**Objective**: Initialize team, session file, and shared memory.
|
> 按 Phase 3 创建的任务链顺序,逐阶段 spawn worker 同步执行。
|
||||||
|
> Worker prompt 使用 SKILL.md Coordinator Spawn Template。
|
||||||
**Workflow**:
|
|
||||||
1. Generate session ID: `BRS-<topic-slug>-<date>`
|
|
||||||
2. Create session folder structure
|
|
||||||
3. Call TeamCreate with team name
|
|
||||||
4. Initialize subdirectories: ideas/, critiques/, synthesis/, evaluation/
|
|
||||||
5. Initialize shared-memory.json with: topic, pipeline, angles, gc_round, generated_ideas, critique_insights, synthesis_themes, evaluation_scores
|
|
||||||
6. Write team-session.json with: session_id, team_name, topic, pipeline, status="active", created_at, updated_at
|
|
||||||
7. Workers are NOT pre-spawned here -> spawned per-stage in Phase 4
|
|
||||||
|
|
||||||
**Success**: Team created, session file written, directories initialized.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Phase 3: Create Task Chain
|
|
||||||
|
|
||||||
**Objective**: Dispatch tasks based on selected pipeline with proper dependencies.
|
|
||||||
|
|
||||||
### Quick Pipeline
|
|
||||||
|
|
||||||
| Task ID | Subject | Owner | BlockedBy |
|
|
||||||
|---------|---------|-------|-----------|
|
|
||||||
| IDEA-001 | Multi-angle idea generation | ideator | - |
|
|
||||||
| CHALLENGE-001 | Assumption challenges | challenger | IDEA-001 |
|
|
||||||
| SYNTH-001 | Cross-idea synthesis | synthesizer | CHALLENGE-001 |
|
|
||||||
|
|
||||||
### Deep Pipeline (with Generator-Critic Loop)
|
|
||||||
|
|
||||||
| Task ID | Subject | Owner | BlockedBy |
|
|
||||||
|---------|---------|-------|-----------|
|
|
||||||
| IDEA-001 | Initial idea generation | ideator | - |
|
|
||||||
| CHALLENGE-001 | First round critique | challenger | IDEA-001 |
|
|
||||||
| IDEA-002 | Idea revision (GC Round 1) | ideator | CHALLENGE-001 |
|
|
||||||
| CHALLENGE-002 | Second round validation | challenger | IDEA-002 |
|
|
||||||
| SYNTH-001 | Synthesis | synthesizer | CHALLENGE-002 |
|
|
||||||
| EVAL-001 | Scoring and ranking | evaluator | SYNTH-001 |
|
|
||||||
|
|
||||||
### Full Pipeline (Fan-out + Generator-Critic)
|
|
||||||
|
|
||||||
| Task ID | Subject | Owner | BlockedBy |
|
|
||||||
|---------|---------|-------|-----------|
|
|
||||||
| IDEA-001 | Technical angle ideas | ideator-1 | - |
|
|
||||||
| IDEA-002 | Product angle ideas | ideator-2 | - |
|
|
||||||
| IDEA-003 | Innovation angle ideas | ideator-3 | - |
|
|
||||||
| CHALLENGE-001 | Batch critique | challenger | IDEA-001, IDEA-002, IDEA-003 |
|
|
||||||
| IDEA-004 | Revised ideas | ideator | CHALLENGE-001 |
|
|
||||||
| SYNTH-001 | Synthesis | synthesizer | IDEA-004 |
|
|
||||||
| EVAL-001 | Evaluation | evaluator | SYNTH-001 |
|
|
||||||
|
|
||||||
**Success**: All tasks created with correct dependencies and owners assigned.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
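The pipelines above differ only in their task list, so a coordinator can validate a declarative chain before issuing any TaskCreate/TaskUpdate calls. A minimal sketch (the `buildChain` helper and its shape are illustrative, not an API from this repo):

```javascript
// Hypothetical sketch: check a declarative pipeline spec (like the tables
// above) so that every blockedBy entry references an earlier task only.
function buildChain(spec) {
  const seen = new Set()
  return spec.map(step => {
    for (const dep of step.blockedBy) {
      if (!seen.has(dep)) throw new Error(`${step.id} depends on unknown task ${dep}`)
    }
    seen.add(step.id)
    return { ...step } // safe to pass to TaskCreate/TaskUpdate in this order
  })
}

const deepPipeline = buildChain([
  { id: 'IDEA-001', owner: 'ideator', blockedBy: [] },
  { id: 'CHALLENGE-001', owner: 'challenger', blockedBy: ['IDEA-001'] },
  { id: 'IDEA-002', owner: 'ideator', blockedBy: ['CHALLENGE-001'] },
  { id: 'CHALLENGE-002', owner: 'challenger', blockedBy: ['IDEA-002'] },
  { id: 'SYNTH-001', owner: 'synthesizer', blockedBy: ['CHALLENGE-002'] },
  { id: 'EVAL-001', owner: 'evaluator', blockedBy: ['SYNTH-001'] },
])
```

A forward reference (or a typo in a task ID) throws before any task is created, which is cheaper than repairing a broken chain during reconciliation.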
## Phase 4: Coordination Loop + Generator-Critic Control
**Objective**: Monitor worker callbacks and advance pipeline.

> **Design Principle (Stop-Wait)**: Model execution has no notion of elapsed time, so any form of polling or timed waiting is forbidden.
>
> - Forbidden: `while` loop + `sleep` + status checks
> - Required: synchronous `Task(run_in_background: false)` calls; a worker's return IS the stage-complete signal
>
> Spawn workers stage by stage, following the task chain created in Phase 3. Worker prompts use the SKILL.md Coordinator Spawn Template.

| Received Message | Action |
|-----------------|--------|
| ideator: ideas_ready | Read ideas -> team_msg log -> TaskUpdate completed -> unblock CHALLENGE |
| challenger: critique_ready | Read critique -> **Generator-Critic decision** -> decide if IDEA-fix needed |
| ideator: ideas_revised | Read revised ideas -> team_msg log -> TaskUpdate completed -> unblock next CHALLENGE |
| synthesizer: synthesis_ready | Read synthesis -> team_msg log -> TaskUpdate completed -> unblock EVAL (if exists) |
| evaluator: evaluation_ready | Read evaluation -> team_msg log -> TaskUpdate completed -> Phase 5 |
| All tasks completed | -> Phase 5 |

### Generator-Critic Loop Control

| Condition | Action |
|-----------|--------|
| critique_ready + criticalCount > 0 + gcRound < maxRounds | Trigger IDEA-fix task, increment gc_round |
| critique_ready + (criticalCount == 0 OR gcRound >= maxRounds) | Converged -> unblock SYNTH task |

**GC Round Tracking**:

1. Read critique file
2. Count severity: HIGH and CRITICAL
3. Read shared-memory.json for gc_round
4. If criticalCount > 0 AND gcRound < max_gc_rounds:
   - Increment gc_round in shared-memory.json
   - Log team_msg with type "gc_loop_trigger"
   - Unblock IDEA-fix task
5. Else: Log team_msg with type "task_unblocked", unblock SYNTH

```javascript
if (msgType === 'critique_ready') {
  const critique = Read(`${sessionFolder}/critiques/critique-${round}.md`)
  const sharedMemory = JSON.parse(Read(`${sessionFolder}/shared-memory.json`))

  // Count HIGH/CRITICAL severity challenges
  const criticalCount = (critique.match(/severity:\s*(HIGH|CRITICAL)/gi) || []).length
  const gcRound = sharedMemory.gc_round || 0

  if (criticalCount > 0 && gcRound < sharedMemory.max_gc_rounds) {
    // Trigger another ideator round
    sharedMemory.gc_round = gcRound + 1
    Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))
    mcp__ccw-tools__team_msg({
      operation: "log", team: teamName, from: "coordinator", to: "ideator",
      type: "gc_loop_trigger",
      summary: `[coordinator] Generator-Critic round ${gcRound + 1}: ${criticalCount} critical challenges need revision`
    })
    // Unblock IDEA-fix task
  } else {
    // Converged -> unblock SYNTH
    mcp__ccw-tools__team_msg({
      operation: "log", team: teamName, from: "coordinator", to: "synthesizer",
      type: "task_unblocked",
      summary: `[coordinator] Critique converged (round ${gcRound}), proceeding to synthesis`
    })
  }
}
```
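The Generator-Critic decision hinges on a simple regex scan for severity markers. Run against a sample critique (the sample text is illustrative), it counts only HIGH/CRITICAL entries:

```javascript
// Illustrative: count HIGH/CRITICAL severity markers the same way the
// coordinator's Generator-Critic check does.
const critique = [
  '## Challenge 1',
  'severity: HIGH - assumption X is untested',
  '## Challenge 2',
  'severity: low - minor wording issue',
  '## Challenge 3',
  'Severity: CRITICAL - violates constraint Y',
].join('\n')

const criticalCount = (critique.match(/severity:\s*(HIGH|CRITICAL)/gi) || []).length
// criticalCount is 2: the "low" entry does not match, and the /i flag
// lets "Severity:" match regardless of case
```

The `|| []` guard matters: with the `g` flag, `match` returns `null` (not an empty array) when nothing matches, so a clean critique would otherwise crash the `.length` access.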
## Phase 5: Report + Persist

**Objective**: Completion report and follow-up options.

**Workflow**:

1. Load session state -> count completed tasks, duration
2. Read synthesis and evaluation results
3. Generate summary with: topic, pipeline, GC rounds, total ideas
4. Update session status -> "completed"
5. Report to user via SendMessage
6. Offer next steps via AskUserQuestion:
   - New topic (continue brainstorming)
   - Deep dive (analyze top-ranked idea)
   - Close team (cleanup)

```javascript
// Read final results
const synthesis = Read(`${sessionFolder}/synthesis/synthesis-001.md`)
const evaluation = selectedPipeline !== 'quick' ? Read(`${sessionFolder}/evaluation/evaluation-001.md`) : null
const sharedMemory = JSON.parse(Read(`${sessionFolder}/shared-memory.json`))

// Report to user
SendMessage({
  content: `## [coordinator] Brainstorm Complete

**Topic**: ${taskDescription}
**Pipeline**: ${selectedPipeline}
**Generator-Critic rounds**: ${sharedMemory.gc_round}
**Total ideas**: ${sharedMemory.generated_ideas.length}

### Synthesis
${synthesis}

${evaluation ? `### Evaluation Ranking\n${evaluation}` : ''}`,
  summary: `[coordinator] Brainstorm complete: ${sharedMemory.generated_ideas.length} ideas, ${sharedMemory.gc_round} GC rounds`
})

// Update session
updateSession(sessionFolder, { status: 'completed', completed_at: new Date().toISOString() })

AskUserQuestion({
  questions: [{
    question: "Brainstorm complete. Next step:",
    header: "Next",
    multiSelect: false,
    options: [
      { label: "New topic", description: "Continue brainstorming on a new topic" },
      { label: "Deep dive", description: "Analyze the top-ranked idea in depth" },
      { label: "Close team", description: "Shut down all teammates and clean up" }
    ]
  }]
})
```

---
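The session ID format `BRS-<topic-slug>-<date>` used throughout can be derived with a small helper. This is a sketch under assumed slug rules (lowercase, dash-separated, length-capped), not the repo's actual implementation:

```javascript
// Sketch: derive a session ID of the form BRS-<topic-slug>-<date>.
// The slug rules here are assumptions for illustration.
function sessionId(topic, date = new Date()) {
  const slug = topic
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumeric runs to one dash
    .replace(/^-|-$/g, '')       // trim leading/trailing dashes
    .slice(0, 40)                // keep folder names manageable
  const ymd = date.toISOString().slice(0, 10) // YYYY-MM-DD
  return `BRS-${slug}-${ymd}`
}

// e.g. sessionId('API Gateway Strategy', new Date('2026-02-28'))
//   -> 'BRS-api-gateway-strategy-2026-02-28'
```

Keeping the date in the ID makes the Phase 0 scan for "active"/"paused" sessions human-readable when multiple sessions accumulate.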
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Teammate unresponsive | Send tracking message; after 2 failures, respawn worker |
| Generator-Critic loop exceeded | Force convergence to SYNTH stage |
| Ideator cannot produce | Coordinator provides seed questions as guidance |
| Challenger marks everything LOW | Skip revision, proceed directly to SYNTH |
| Synthesis conflict unresolved | Report to user, AskUserQuestion for direction |
| Session corruption | Attempt recovery, fall back to manual reconciliation |
---

# Evaluator Role

Scoring, ranking, and final selection. Responsible for multi-dimensional scoring of synthesized proposals, priority recommendations, and producing the final ranking.

## Identity

- **Name**: `evaluator` | **Tag**: `[evaluator]`
- **Task Prefix**: `EVAL-*`
- **Responsibility**: Validation (evaluation and ranking)

## Boundaries

### MUST

- Communicate with the coordinator only, via SendMessage
- Read shared-memory.json in Phase 2; write evaluation_scores in Phase 5
- Use the standardized scoring dimensions so scores are traceable
- Provide a scoring rationale and recommendation for every proposal

### MUST NOT

- Generate new ideas, challenge assumptions, or synthesize
- Communicate directly with other worker roles
- Create tasks for other roles
- Modify shared-memory.json fields it does not own
- Omit the `[evaluator]` tag from output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `TaskList` | Built-in | Phase 1 | Discover pending EVAL-* tasks |
| `TaskGet` | Built-in | Phase 1 | Get task details |
| `TaskUpdate` | Built-in | Phase 1/5 | Update task status |
| `Read` | Built-in | Phase 2 | Read shared-memory.json, synthesis files, ideas, critiques |
| `Write` | Built-in | Phase 3/5 | Write evaluation files, update shared memory |
| `Glob` | Built-in | Phase 2 | Find synthesis, idea, critique files |
| `SendMessage` | Built-in | Phase 5 | Report to coordinator |
| `mcp__ccw-tools__team_msg` | MCP | Phase 5 | Log communication |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `evaluation_ready` | evaluator -> coordinator | Evaluation completed | Scoring and ranking complete |
| `error` | evaluator -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: "<session-id>",  // MUST be session ID (e.g., BRS-xxx-date), NOT team name. Extract from Session: field.
  from: "evaluator",
  to: "coordinator",
  type: "evaluation_ready",
  summary: "[evaluator] Evaluation complete: Top pick \"<title>\" (<score>/10)",
  ref: <output-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from evaluator --to coordinator --type evaluation_ready --summary \"[evaluator] Evaluation complete\" --ref <output-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `EVAL-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading + Shared Memory Read

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| Synthesis results | synthesis/*.md files | Yes |
| All ideas | ideas/*.md files | No (for context) |
| All critiques | critiques/*.md files | No (for context) |

**Loading steps**:

1. Extract session path from task description (match "Session: <path>")
2. Glob synthesis files from session/synthesis/
3. Read all synthesis files for evaluation
4. Optionally read ideas and critiques for full context

### Phase 3: Evaluation and Scoring

**Scoring Dimensions**:

| Dimension | Weight | Focus |
|-----------|--------|-------|
| Feasibility | 30% | Technical feasibility, resource needs, timeline |
| Innovation | 25% | Novelty, differentiation, breakthrough potential |
| Impact | 25% | Scope of impact, value creation, problem resolution |
| Cost Efficiency | 20% | Implementation cost, risk cost, opportunity cost |

**Weighted Score Calculation**:

```
weightedScore = (Feasibility * 0.30) + (Innovation * 0.25) + (Impact * 0.25) + (Cost * 0.20)
```

**Evaluation Structure per Proposal**:

- Score for each dimension (1-10)
- Rationale for each score
- Overall recommendation (Strong Recommend / Recommend / Consider / Pass)

**Output file structure**:

- File: `<session>/evaluation/evaluation-<num>.md`
- Sections: Input summary, Scoring Matrix (ranked table), Detailed Evaluation per proposal, Final Recommendation, Action Items, Risk Summary

### Phase 4: Consistency Check

| Check | Pass Criteria | Action on Failure |
|-------|---------------|-------------------|
| Score spread | max - min >= 0.5 (with >1 proposal) | Re-evaluate differentiators |
| No perfect scores | Not all 10s | Adjust scores to reflect critique findings |
| Ranking deterministic | Consistent ranking | Verify calculation |

### Phase 5: Report to Coordinator + Shared Memory Write

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[evaluator]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Shared Memory Update**:

1. Set shared-memory.json.evaluation_scores
2. Each entry: title, weighted_score, rank, recommendation

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Synthesis files not found | Notify coordinator |
| Only one proposal | Evaluate against absolute criteria, recommend or reject |
| All proposals score below 5 | Flag all as weak, recommend re-brainstorming |
| Critical issue beyond scope | SendMessage error to coordinator |

---
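The weighted-score formula above can be checked with a small pure function (the proposal values here are illustrative):

```javascript
// Illustrative: compute the evaluator's weighted score for one proposal.
// Weights match the scoring dimensions: 30/25/25/20.
function weightedScore({ feasibility, innovation, impact, cost }) {
  return feasibility * 0.30 + innovation * 0.25 + impact * 0.25 + cost * 0.20
}

const score = weightedScore({ feasibility: 8, innovation: 7, impact: 9, cost: 6 })
// 8*0.30 + 7*0.25 + 9*0.25 + 6*0.20 = 2.4 + 1.75 + 2.25 + 1.2 = 7.6
```

Because the weights sum to 1.0 and each dimension is scored 1-10, the weighted score stays on the same 1-10 scale as the inputs, which keeps the Scoring Matrix directly comparable across proposals.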
# Ideator Role
|
# Role: ideator
|
||||||
|
|
||||||
多角度创意生成者。负责发散思维、概念探索、创意修订。作为 Generator-Critic 循环中的 Generator 角色。
|
多角度创意生成者。负责发散思维、概念探索、创意修订。作为 Generator-Critic 循环中的 Generator 角色。
|
||||||
|
|
||||||
## Identity
|
## Role Identity
|
||||||
|
|
||||||
- **Name**: `ideator` | **Tag**: `[ideator]`
|
- **Name**: `ideator`
|
||||||
- **Task Prefix**: `IDEA-*`
|
- **Task Prefix**: `IDEA-*`
|
||||||
- **Responsibility**: Read-only analysis (idea generation, no code modification)
|
- **Responsibility**: Read-only analysis (创意生成不修改代码)
|
||||||
|
- **Communication**: SendMessage to coordinator only
|
||||||
|
- **Output Tag**: `[ideator]`
|
||||||
|
|
||||||
## Boundaries
|
## Role Boundaries
|
||||||
|
|
||||||
### MUST
|
### MUST
|
||||||
|
|
||||||
@@ -16,137 +18,205 @@
|
|||||||
- 所有输出(SendMessage、team_msg、日志)必须带 `[ideator]` 标识
|
- 所有输出(SendMessage、team_msg、日志)必须带 `[ideator]` 标识
|
||||||
- 仅通过 SendMessage 与 coordinator 通信
|
- 仅通过 SendMessage 与 coordinator 通信
|
||||||
- Phase 2 读取 shared-memory.json,Phase 5 写入 generated_ideas
|
- Phase 2 读取 shared-memory.json,Phase 5 写入 generated_ideas
|
||||||
- 针对每个指定角度产出至少3个创意
|
|
||||||
|
|
||||||
### MUST NOT
|
### MUST NOT
|
||||||
|
|
||||||
- 执行挑战/评估/综合等其他角色工作
|
- ❌ 执行挑战/评估/综合等其他角色工作
|
||||||
- 直接与其他 worker 角色通信
|
- ❌ 直接与其他 worker 角色通信
|
||||||
- 为其他角色创建任务(TaskCreate 是 coordinator 专属)
|
- ❌ 为其他角色创建任务(TaskCreate 是 coordinator 专属)
|
||||||
- 修改 shared-memory.json 中不属于自己的字段
|
- ❌ 修改 shared-memory.json 中不属于自己的字段
|
||||||
- 在输出中省略 `[ideator]` 标识
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Toolbox
|
|
||||||
|
|
||||||
### Tool Capabilities
|
|
||||||
|
|
||||||
| Tool | Type | Used By | Purpose |
|
|
||||||
|------|------|---------|---------|
|
|
||||||
| `TaskList` | Built-in | Phase 1 | Discover pending IDEA-* tasks |
|
|
||||||
| `TaskGet` | Built-in | Phase 1 | Get task details |
|
|
||||||
| `TaskUpdate` | Built-in | Phase 1/5 | Update task status |
|
|
||||||
| `Read` | Built-in | Phase 2 | Read shared-memory.json, critique files |
|
|
||||||
| `Write` | Built-in | Phase 3/5 | Write idea files, update shared memory |
|
|
||||||
| `Glob` | Built-in | Phase 2 | Find critique files |
|
|
||||||
| `SendMessage` | Built-in | Phase 5 | Report to coordinator |
|
|
||||||
| `mcp__ccw-tools__team_msg` | MCP | Phase 5 | Log communication |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Message Types
|
## Message Types
|
||||||
|
|
||||||
| Type | Direction | Trigger | Description |
|
| Type | Direction | Trigger | Description |
|
||||||
|------|-----------|---------|-------------|
|
|------|-----------|---------|-------------|
|
||||||
| `ideas_ready` | ideator -> coordinator | Initial ideas generated | Initial idea generation complete |
|
| `ideas_ready` | ideator → coordinator | Initial ideas generated | 初始创意完成 |
|
||||||
| `ideas_revised` | ideator -> coordinator | Ideas revised after critique | Revised ideas complete (GC loop) |
|
| `ideas_revised` | ideator → coordinator | Ideas revised after critique | 修订创意完成 (GC 循环) |
|
||||||
| `error` | ideator -> coordinator | Processing failure | Error report |
|
| `error` | ideator → coordinator | Processing failure | 错误上报 |
|
||||||
|
|
||||||
## Message Bus
|
|
||||||
|
|
||||||
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
|
|
||||||
|
|
||||||
```
|
|
||||||
mcp__ccw-tools__team_msg({
|
|
||||||
operation: "log",
|
|
||||||
team: **<session-id>**, // MUST be session ID (e.g., BRS-xxx-date), NOT team name. Extract from Session: field.
|
|
||||||
from: "ideator",
|
|
||||||
to: "coordinator",
|
|
||||||
type: <ideas_ready|ideas_revised>,
|
|
||||||
summary: "[ideator] <Generated|Revised> <count> ideas (round <num>)",
|
|
||||||
ref: <output-path>
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
**CLI fallback** (when MCP unavailable):
|
|
||||||
|
|
||||||
```
|
|
||||||
Bash("ccw team log --team <session-id> --from ideator --to coordinator --type <message-type> --summary \"[ideator] ideas complete\" --ref <output-path> --json")
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Execution (5-Phase)
|
## Execution (5-Phase)
|
||||||
|
|
||||||
### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `IDEA-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `ideator` for single-instance roles.
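The discovery flow above can be sketched in the skill's JavaScript-style tool-call pseudocode (a sketch, assuming the worker's raw arguments are available as `args`; `TaskList`, `TaskGet`, and `TaskUpdate` are the built-in tools from the Toolbox):

```javascript
// Parse agent name for parallel instances (e.g., ideator-1, ideator-2)
const agentNameMatch = args.match(/--agent-name[=\s]+([\w-]+)/)
const agentName = agentNameMatch ? agentNameMatch[1] : 'ideator'

// Filter: IDEA-* prefix, owned by this instance, pending, unblocked
const myTasks = TaskList().filter(t =>
  t.subject.startsWith('IDEA-') &&
  t.owner === agentName &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle: no work for this worker

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```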
### Phase 2: Context Loading + Shared Memory Read

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| Topic | shared-memory.json | Yes |
| Angles | shared-memory.json | Yes |
| GC Round | shared-memory.json | Yes |
| Previous critique | critiques/*.md | For revision tasks only |
| Previous ideas | shared-memory.json.generated_ideas | No |

**Loading steps**:

1. Extract session path from task description (match "Session: <path>")
2. Read shared-memory.json for topic, angles, gc_round
3. If the task is a revision (subject contains "revision" or "fix"):
   - Glob critique files
   - Read the latest critique for revision context
4. Read previous ideas from shared-memory.generated_ideas
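A sketch of these loading steps in the same pseudocode (the fallback `angles` values are illustrative defaults, not a fixed list):

```javascript
// 1. Extract session folder from the task description
const sessionMatch = task.description.match(/Session:\s*([^\n]+)/)
const sessionFolder = sessionMatch?.[1]?.trim()

// 2. Read shared memory; fall back to an empty object if unreadable
const memoryPath = `${sessionFolder}/shared-memory.json`
let sharedMemory = {}
try { sharedMemory = JSON.parse(Read(memoryPath)) } catch {}

const topic = sharedMemory.topic || task.description
const angles = sharedMemory.angles || ['technical', 'product', 'innovation']
const gcRound = sharedMemory.gc_round || 0

// 3. For revision tasks (GC loop), read the latest critique
let previousCritique = null
if (task.subject.includes('revision') || task.subject.includes('fix')) {
  const critiqueFiles = Glob({ pattern: `${sessionFolder}/critiques/*.md` })
  if (critiqueFiles.length > 0) {
    previousCritique = Read(critiqueFiles[critiqueFiles.length - 1])
  }
}

// 4. Previous ideas give context for divergence or revision
const previousIdeas = sharedMemory.generated_ideas || []
```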
### Phase 3: Idea Generation

| Mode | Condition | Focus |
|------|-----------|-------|
| Initial Generation | No previous critique | Multi-angle divergent thinking |
| GC Revision | Previous critique exists | Address HIGH/CRITICAL challenges |

**Initial Generation Mode**:
- For each angle, generate 3+ ideas
- Each idea includes: title, description (2-3 sentences), key assumption, potential impact, implementation hint

**GC Revision Mode**:
- Focus on HIGH/CRITICAL severity challenges from the critique
- Retain unchallenged ideas intact
- Revise challenged ideas with a revision rationale
- Replace unsalvageable ideas with new alternatives

**Output file structure**:
- File: `<session>/ideas/idea-<num>.md`
- Sections: Topic, Angles, Mode, [Revision Context if applicable], Ideas list, Summary
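The output file can be assembled as a template string following the sections above (a sketch; it assumes the Phase 2 context variables plus an `isRevision` flag and a `generatedIdeas` array with the listed fields, all illustrative names):

```javascript
const isRevision = !!previousCritique
const ideaNum = task.subject.match(/IDEA-(\d+)/)?.[1] || '001'
const outputPath = `${sessionFolder}/ideas/idea-${ideaNum}.md`

const ideaContent = `# ${isRevision ? 'Revised' : 'Initial'} Ideas - Round ${ideaNum}

**Topic**: ${topic}
**Angles**: ${angles.join(', ')}
**Mode**: ${isRevision ? `Generator-Critic Revision (Round ${gcRound})` : 'Initial Generation'}

${isRevision ? `## Revision Context\n\nBased on critique feedback:\n${previousCritique}\n\n` : ''}## Ideas

${generatedIdeas.map((idea, i) => `### Idea ${i + 1}: ${idea.title}

**Description**: ${idea.description}
**Key Assumption**: ${idea.assumption}
**Potential Impact**: ${idea.impact}
**Implementation Hint**: ${idea.implementation}
`).join('\n')}
## Summary

- Total ideas: ${generatedIdeas.length}
`

Write(outputPath, ideaContent)
```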
### Phase 4: Self-Review

| Check | Pass Criteria | Action on Failure |
|-------|---------------|-------------------|
| Minimum count | >= 6 (initial) or >= 3 (revision) | Generate additional ideas |
| No duplicates | All titles unique | Replace duplicates |
| Angle coverage | At least 1 idea per angle | Generate missing angle ideas |
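The first two checks can be sketched as (assuming `isRevision` and `generatedIdeas` from Phase 3):

```javascript
// Verify minimum idea count
const minimumRequired = isRevision ? 3 : 6
if (generatedIdeas.length < minimumRequired) {
  // Generate additional ideas to meet the minimum
}

// Duplicate detection by case-insensitive title
const titles = generatedIdeas.map(i => i.title.toLowerCase())
const duplicates = titles.filter((t, i) => titles.indexOf(t) !== i)
if (duplicates.length > 0) {
  // Replace duplicates with fresh ideas
}
```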
### Phase 5: Report to Coordinator + Shared Memory Write

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[ideator]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Shared Memory Update**:
1. Append new ideas to shared-memory.json.generated_ideas
2. Each entry: id, title, round, revised flag
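The shared-memory append in step 1 can be sketched as (assuming the Phase 2/3 variables are in scope):

```javascript
sharedMemory.generated_ideas = [
  ...(sharedMemory.generated_ideas || []),
  ...generatedIdeas.map((idea, i) => ({
    id: `idea-${ideaNum}-${i}`,
    title: idea.title,
    round: parseInt(ideaNum),
    revised: isRevision
  }))
]
Write(memoryPath, JSON.stringify(sharedMemory, null, 2))
```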
## Error Handling

| Error | Handling |
|-------|----------|
| Session folder not found | Notify coordinator, request path |
| Shared memory read fails | Initialize empty, proceed with generation |
| Topic too vague | Generate meta-questions as seed ideas |
| Previous critique not found (revision task) | Generate new ideas instead of revising |
| Critical issue beyond scope | SendMessage error to coordinator |

---

# Synthesizer Role

Cross-idea integrator: extracts themes from multiple ideas and challenge feedback, resolves conflicts, and generates integrated proposals.

## Identity

- **Name**: `synthesizer` | **Tag**: `[synthesizer]`
- **Task Prefix**: `SYNTH-*`
- **Responsibility**: Read-only analysis (synthesis and integration)
- **Communication**: SendMessage to coordinator only

## Boundaries

### MUST

- All output must carry the `[synthesizer]` tag
- Communicate only with the coordinator via SendMessage
- Read shared-memory.json in Phase 2; write synthesis_themes in Phase 5
- Extract common themes from all ideas and challenges
- Resolve contradictory ideas and generate integrated proposals

### MUST NOT

- Generate new ideas, challenge assumptions, or score/rank ideas
- Communicate directly with other worker roles
- Create tasks for other roles
- Modify fields in shared-memory.json it does not own
- Omit the `[synthesizer]` tag from output
---

## Toolbox

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `TaskList` | Built-in | Phase 1 | Discover pending SYNTH-* tasks |
| `TaskGet` | Built-in | Phase 1 | Get task details |
| `TaskUpdate` | Built-in | Phase 1/5 | Update task status |
| `Read` | Built-in | Phase 2 | Read shared-memory.json, idea files, critique files |
| `Write` | Built-in | Phase 3/5 | Write synthesis files, update shared memory |
| `Glob` | Built-in | Phase 2 | Find idea and critique files |
| `SendMessage` | Built-in | Phase 5 | Report to coordinator |
| `mcp__ccw-tools__team_msg` | MCP | Phase 5 | Log communication |

---
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `synthesis_ready` | synthesizer -> coordinator | Synthesis completed | Cross-idea synthesis complete |
| `error` | synthesizer -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,  // MUST be the session ID (e.g., BRS-xxx-date), NOT the team name. Extract from the Session: field.
  from: "synthesizer",
  to: "coordinator",
  type: "synthesis_ready",
  summary: "[synthesizer] Synthesis complete: <themeCount> themes, <proposalCount> proposals",
  ref: <output-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from synthesizer --to coordinator --type synthesis_ready --summary \"[synthesizer] Synthesis complete\" --ref <output-path> --json")
```
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `SYNTH-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
### Phase 2: Context Loading + Shared Memory Read

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description (Session: line) | Yes |
| All ideas | ideas/*.md files | Yes |
| All critiques | critiques/*.md files | Yes |
| GC rounds completed | shared-memory.json.gc_round | Yes |

**Loading steps**:

1. Extract session path from task description (match "Session: <path>")
2. Glob all idea files from session/ideas/
3. Glob all critique files from session/critiques/
4. Read all idea and critique files for synthesis
5. Read shared-memory.json for context
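A sketch of these loading steps in the skill's tool-call pseudocode:

```javascript
// 1. Extract session folder from the task description
const sessionMatch = task.description.match(/Session:\s*([^\n]+)/)
const sessionFolder = sessionMatch?.[1]?.trim()

// 5. Shared memory holds topic and GC-round context
const memoryPath = `${sessionFolder}/shared-memory.json`
let sharedMemory = {}
try { sharedMemory = JSON.parse(Read(memoryPath)) } catch {}

// 2-4. Read every idea and critique produced so far
const ideaFiles = Glob({ pattern: `${sessionFolder}/ideas/*.md` })
const critiqueFiles = Glob({ pattern: `${sessionFolder}/critiques/*.md` })
const allIdeas = ideaFiles.map(f => Read(f))
const allCritiques = critiqueFiles.map(f => Read(f))
```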
### Phase 3: Synthesis Execution

**Synthesis Process**:

| Step | Action |
|------|--------|
| 1. Theme Extraction | Identify common themes across ideas |
| 2. Conflict Resolution | Resolve contradictory ideas |
| 3. Complementary Grouping | Group complementary ideas together |
| 4. Gap Identification | Discover uncovered perspectives |
| 5. Integrated Proposal | Generate 1-3 consolidated proposals |

**Theme Extraction**:
- Cross-reference ideas for shared concepts
- Rate theme strength (1-10)
- List supporting ideas per theme

**Conflict Resolution**:
- Identify contradictory ideas
- Determine resolution approach
- Document rationale for resolution

**Integrated Proposal Structure**:
- Core concept description
- Source ideas combined
- Addressed challenges from critiques
- Feasibility score (1-10)
- Innovation score (1-10)
- Key benefits list
- Remaining risks list

**Output file structure**:
- File: `<session>/synthesis/synthesis-<num>.md`
- Sections: Input summary, Extracted Themes, Conflict Resolution, Integrated Proposals, Coverage Analysis
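The synthesis file can be assembled from these pieces (a sketch; the `themes`, `conflicts`, `proposals`, and `coverageAnalysis` arrays and their fields are illustrative names for the Phase 3 analysis results):

```javascript
const synthNum = task.subject.match(/SYNTH-(\d+)/)?.[1] || '001'
const outputPath = `${sessionFolder}/synthesis/synthesis-${synthNum}.md`

const synthesisContent = `# Synthesis - Round ${synthNum}

**Input**: ${ideaFiles.length} idea files, ${critiqueFiles.length} critique files
**GC Rounds Completed**: ${sharedMemory.gc_round || 0}

## Extracted Themes

${themes.map((t, i) => `### Theme ${i + 1}: ${t.name}

**Description**: ${t.description}
**Supporting Ideas**: ${t.supportingIdeas.join(', ')}
**Strength**: ${t.strength}/10
`).join('\n')}
## Conflict Resolution

${conflicts.map(c => `### ${c.idea1} vs ${c.idea2}

**Nature**: ${c.nature}
**Resolution**: ${c.resolution}
**Rationale**: ${c.rationale}
`).join('\n')}
## Integrated Proposals

${proposals.map((p, i) => `### Proposal ${i + 1}: ${p.title}

**Core Concept**: ${p.concept}
**Combines**: ${p.sourceIdeas.join(' + ')}
**Feasibility**: ${p.feasibility}/10 | **Innovation**: ${p.innovation}/10
`).join('\n')}
## Coverage Analysis

| Aspect | Covered | Gaps |
|--------|---------|------|
${coverageAnalysis.map(a => `| ${a.aspect} | ${a.covered ? 'yes' : 'no'} | ${a.gap || '-'} |`).join('\n')}
`

Write(outputPath, synthesisContent)
```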
### Phase 4: Quality Check

| Check | Pass Criteria | Action on Failure |
|-------|---------------|-------------------|
| Proposal count | >= 1 proposal | Generate at least one proposal |
| Theme count | >= 2 themes | Look for more patterns |
| Conflict resolution | All conflicts documented | Address unresolved conflicts |
### Phase 5: Report to Coordinator + Shared Memory Write

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[synthesizer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Shared Memory Update**:
1. Set shared-memory.json.synthesis_themes
2. Each entry: name, strength, supporting_ideas
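The synthesis_themes write can be sketched as (assuming `themes` and `memoryPath` from the earlier phases):

```javascript
sharedMemory.synthesis_themes = themes.map(t => ({
  name: t.name,
  strength: t.strength,
  supporting_ideas: t.supportingIdeas
}))
Write(memoryPath, JSON.stringify(sharedMemory, null, 2))
```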
## Error Handling

| Error | Handling |
|-------|----------|
| No ideas/critiques found | Notify coordinator |
| Irreconcilable conflicts | Present both sides, recommend user decision |
| Only one idea survives | Create single focused proposal |
| Critical issue beyond scope | SendMessage error to coordinator |

---
name: team-coordinate-v2
description: Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only the coordinator is built-in; all worker roles are generated at runtime as role-specs and spawned via the team-worker agent. Beat/cadence model for orchestration. Triggers on "team coordinate v2".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Coordinate v2

Universal team coordination skill: analyze task -> generate role-specs -> dispatch -> execute -> deliver. Only the **coordinator** is built-in. All worker roles are **dynamically generated** as lightweight role-spec files and spawned via the `team-worker` agent.
## Architecture

```
+---------------------------------------------------+
| Skill(skill="team-coordinate-v2")                 |
|   args="task description"                         |
+-------------------+-------------------------------+
                    |
     Orchestration Mode (auto -> coordinator)
                    |
           Coordinator (built-in)
          Phase 0-5 orchestration
                    |
    +-------+-------+-------+-------+
    v       v       v       v       v
[team-worker agents, each loaded with a dynamic role-spec]
  (roles generated at runtime from task analysis)

Subagents (callable by any worker, not team members):
  [discuss-subagent] - multi-perspective critique (dynamic perspectives)
  [explore-subagent] - codebase exploration with cache
```
## Role Router

This skill is **coordinator-only**. Workers do NOT invoke this skill -- they are spawned as `team-worker` agents directly.

### Input Parsing

Parse `$ARGUMENTS`. No `--role` needed -- always routes to coordinator.

### Role Registry

Only the coordinator is statically registered. All other roles are dynamic, stored as role-specs in the session.

| Role | File | Type |
|------|------|------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | built-in orchestrator |
| (dynamic) | `<session>/role-specs/<role-name>.md` | runtime-generated role-spec |

### Subagent Registry

| Subagent | Spec | Callable By | Purpose |
|----------|------|-------------|---------|
| discuss | [subagents/discuss-subagent.md](subagents/discuss-subagent.md) | any role | Multi-perspective critique (dynamic perspectives) |
| explore | [subagents/explore-subagent.md](subagents/explore-subagent.md) | any role | Codebase exploration with cache |
### Dispatch

Always route to the coordinator. The coordinator reads `roles/coordinator/role.md` and executes its phases.

### Orchestration Mode

The user just provides a task description.

**Invocation**: `Skill(skill="team-coordinate-v2", args="task description")`

**Lifecycle**:
```
User provides task description
  -> coordinator Phase 1: task analysis (detect capabilities, build dependency graph)
  -> coordinator Phase 2: generate role-specs + initialize session
  -> coordinator Phase 3: create task chain from dependency graph
  -> coordinator Phase 4: spawn first batch workers (background) -> STOP
  -> Worker executes -> SendMessage callback -> coordinator advances next step
  -> Loop until pipeline complete -> Phase 5 report + completion action
```

**User Commands** (wake paused coordinator):

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
---

## Coordinator Spawn Template

### v2 Worker Spawn (all roles)

When the coordinator spawns workers, use the `team-worker` agent with a role-spec path:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: <session-folder>/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Inner Loop roles** (role has 2+ serial same-prefix tasks): Set `inner_loop: true`. The team-worker agent handles the loop internally.

**Single-task roles**: Set `inner_loop: false`.
---

## Completion Action

When the pipeline completes (all tasks done), the coordinator presents an interactive choice:

```
AskUserQuestion({
  questions: [{
    question: "Team pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up team" },
      { label: "Keep Active", description: "Keep session for follow-up work" },
      { label: "Export Results", description: "Export deliverables to target directory, then clean" }
    ]
  }]
})
```

### Action Handlers

| Choice | Steps |
|--------|-------|
| Archive & Clean | Update session status="completed" -> TeamDelete -> output final summary with artifact paths |
| Keep Active | Update session status="paused" -> output: "Resume with: Skill(skill='team-coordinate-v2', args='resume')" |
| Export Results | AskUserQuestion(target path) -> copy artifacts to target -> Archive & Clean |
---

## Cadence Control

**Beat model**: Event-driven, each beat = coordinator wake -> process -> spawn -> STOP.

```
Beat Cycle (single beat)
======================================================================
Event                Coordinator             Workers
----------------------------------------------------------------------
callback/resume -->  +- handleCallback -+
                     |  mark completed  |
                     |  check pipeline  |
                     +- handleSpawnNext -+
                     |  find ready tasks |
                     |  spawn workers ---+--> [team-worker A] Phase 1-5
                     |  (parallel OK) ---+--> [team-worker B] Phase 1-5
                     +- STOP (idle) -----+                |
                                                          |
callback <------------------------------------------------+
(next beat)          SendMessage + TaskUpdate(completed)
======================================================================

Fast-Advance (skips coordinator for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
+- 1 ready task? simple successor? --> spawn team-worker B directly
+- complex case? --> SendMessage to coordinator
======================================================================
```

**Pipelines are dynamic**: Unlike static pipeline definitions, team-coordinate pipelines are generated per-task from the dependency graph.
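The fast-advance branch above can be sketched as a predicate. This is a minimal sketch under assumptions: the function name and the "simple successor" test (exactly one task becomes ready, and it is blocked only by the task that just finished) are illustrative, not the skill's actual implementation.

```python
def should_fast_advance(dependency_graph, completed, finished_task):
    """Decide whether a finishing worker may spawn its successor directly.

    dependency_graph maps task subject -> list of subjects it is blocked by.
    A successor qualifies only if it is the single task made ready by this
    completion and it depends on nothing else still outstanding.
    """
    done = set(completed) | {finished_task}
    ready = [t for t, deps in dependency_graph.items()
             if t not in done and all(d in done for d in deps)]
    # Simple linear successor: exactly one ready task, unblocked solely by us
    if len(ready) == 1 and dependency_graph[ready[0]] == [finished_task]:
        return ready[0]          # spawn this task directly
    return None                  # complex case: route through the coordinator
```

Any fan-out (two tasks become ready at once) returns `None`, so the coordinator keeps control of parallel spawning.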

---

## Session Directory

```
.workflow/.team/TC-<slug>-<date>/
+-- team-session.json      # Session state + dynamic role registry
+-- task-analysis.json     # Phase 1 output: capabilities, dependency graph
+-- role-specs/            # Dynamic role-spec definitions (generated Phase 2)
|   +-- <role-1>.md        # Lightweight: frontmatter + Phase 2-4 only
|   +-- <role-2>.md
+-- artifacts/             # All MD deliverables from workers
|   +-- <artifact>.md
+-- shared-memory.json     # Cross-role state store
+-- wisdom/                # Cross-task knowledge
|   +-- learnings.md
|   +-- decisions.md
|   +-- issues.md
+-- explorations/          # Shared explore cache
|   +-- cache-index.json
|   +-- explore-<angle>.json
+-- discussions/           # Inline discuss records
|   +-- <round>.md
+-- .msg/                  # Team message bus logs
```

### team-session.json Schema

```json
{
  "session_id": "TC-<slug>-<date>",
  "task_description": "<original user input>",
  "status": "active | paused | completed",
  "team_name": "<team-name>",
  "roles": [
    {
      "name": "<role-name>",
      "prefix": "<PREFIX>",
      "responsibility_type": "<type>",
      "inner_loop": false,
      "role_spec": "role-specs/<role-name>.md"
    }
  ],
  "pipeline": {
    "dependency_graph": {},
    "tasks_total": 0,
    "tasks_completed": 0
  },
  "active_workers": [],
  "completed_tasks": [],
  "completion_action": "interactive",
  "created_at": "<timestamp>"
}
```

---

## Session Resume

Coordinator supports `resume` / `continue` for interrupted sessions:

1. Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
2. Multiple matches -> AskUserQuestion for selection
3. Audit TaskList -> reconcile session state <-> task status
4. Reset in_progress -> pending (interrupted tasks)
5. Rebuild team and spawn needed workers only
6. Create missing tasks with correct blockedBy
7. Kick first executable task -> Phase 4 coordination loop
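Steps 3-4 above can be sketched as a reconciliation pass. The dict shapes here are illustrative placeholders, not the real TaskList schema:

```python
def reconcile(session, task_list):
    """Steps 3-4 of resume: audit the task list against session state.

    Interrupted tasks (in_progress, since no worker survives a restart)
    are reset to pending; completed tasks are synced into the session.
    """
    resets = []
    for task in task_list:
        if task["status"] == "in_progress":
            task["status"] = "pending"          # interrupted: worker is gone
            resets.append(task["subject"])
        elif (task["status"] == "completed"
              and task["subject"] not in session["completed_tasks"]):
            session["completed_tasks"].append(task["subject"])
    session["active_workers"] = []              # no workers survive a restart
    return resets
```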

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown command | Error with available command list |
| Dynamic role-spec not found | Error, coordinator may need to regenerate |
| Command file not found | Fallback to inline execution |
| Discuss subagent fails | Worker proceeds without discuss, logs warning |
| Explore cache corrupt | Clear cache, re-explore |
| Fast-advance spawns wrong task | Coordinator reconciles on next callback |
| capability_gap reported | Coordinator generates new role-spec via handleAdapt |
| Completion action fails | Default to Keep Active, log warning |

@@ -1,185 +0,0 @@
# Command: analyze-task

## Purpose

Parse user task description -> detect required capabilities -> build dependency graph -> design dynamic roles with role-spec metadata. Outputs structured task-analysis.json with frontmatter fields for role-spec generation.

## CRITICAL CONSTRAINT

**TEXT-LEVEL analysis only. MUST NOT read source code or explore codebase.**

**Allowed:**
- Parse user task description text
- AskUserQuestion for clarification
- Keyword-to-capability mapping
- Write `task-analysis.json`

If task context requires codebase knowledge, set `needs_research: true`. Phase 2 will spawn a researcher worker.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | User input from Phase 1 | Yes |
| Clarification answers | AskUserQuestion results (if any) | No |
| Session folder | From coordinator Phase 2 | Yes |

## Phase 3: Task Analysis

### Step 1: Signal Detection

Scan task description for capability keywords:

| Signal | Keywords | Capability | Prefix | Responsibility Type |
|--------|----------|------------|--------|---------------------|
| Research | investigate, explore, compare, survey, find, research, discover, benchmark, study | researcher | RESEARCH | orchestration |
| Writing | write, draft, document, article, report, blog, describe, explain, summarize, content | writer | DRAFT | code-gen (docs) |
| Coding | implement, build, code, fix, refactor, develop, create app, program, migrate, port | developer | IMPL | code-gen (code) |
| Design | design, architect, plan, structure, blueprint, model, schema, wireframe, layout | designer | DESIGN | orchestration |
| Analysis | analyze, review, audit, assess, evaluate, inspect, examine, diagnose, profile | analyst | ANALYSIS | read-only |
| Testing | test, verify, validate, QA, quality, check, assert, coverage, regression | tester | TEST | validation |
| Planning | plan, breakdown, organize, schedule, decompose, roadmap, strategy, prioritize | planner | PLAN | orchestration |

**Multi-match**: A task may trigger multiple capabilities.

**No match**: Default to a single `general` capability with `TASK` prefix.
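A minimal sketch of the keyword scan, including the multi-match and no-match rules. The keyword lists are abbreviated from the table above; the function name is illustrative:

```python
# Abbreviated keyword table from Step 1; full lists are in the table above.
SIGNALS = {
    "researcher": ("RESEARCH", ["investigate", "explore", "research", "benchmark"]),
    "writer":     ("DRAFT",    ["write", "draft", "document", "summarize"]),
    "developer":  ("IMPL",     ["implement", "build", "refactor", "migrate"]),
    "tester":     ("TEST",     ["test", "verify", "validate", "coverage"]),
}

def detect_capabilities(description):
    """Scan task text; fall back to the single `general`/TASK capability."""
    text = description.lower()
    hits = [(name, prefix) for name, (prefix, words) in SIGNALS.items()
            if any(w in text for w in words)]
    return hits or [("general", "TASK")]
```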

### Step 2: Artifact Inference

Each capability produces default output artifacts:

| Capability | Default Artifact | Format |
|------------|-----------------|--------|
| researcher | Research findings | `<session>/artifacts/research-findings.md` |
| writer | Written document(s) | `<session>/artifacts/<doc-name>.md` |
| developer | Code implementation | Source files + `<session>/artifacts/implementation-summary.md` |
| designer | Design document | `<session>/artifacts/design-spec.md` |
| analyst | Analysis report | `<session>/artifacts/analysis-report.md` |
| tester | Test results | `<session>/artifacts/test-report.md` |
| planner | Execution plan | `<session>/artifacts/execution-plan.md` |

### Step 3: Dependency Graph Construction

Build a DAG of work streams using natural ordering tiers:

| Tier | Capabilities | Description |
|------|-------------|-------------|
| 0 | researcher, planner | Knowledge gathering / planning |
| 1 | designer | Design (requires context from tier 0 if present) |
| 2 | writer, developer | Creation (requires design/plan if present) |
| 3 | analyst, tester | Validation (requires artifacts to validate) |
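One way to read the tier rule is that each task is blocked by every task in the nearest populated lower tier. A sketch under that assumption (real analysis may add finer-grained edges):

```python
TIER = {"researcher": 0, "planner": 0, "designer": 1,
        "writer": 2, "developer": 2, "analyst": 3, "tester": 3}

def build_graph(tasks):
    """Block each task on the nearest populated lower tier.

    tasks maps task id -> capability name; returns id -> blockedBy list.
    """
    graph = {}
    tiers_present = sorted({TIER[c] for c in tasks.values()})
    for tid, cap in tasks.items():
        lower = [t for t in tiers_present if t < TIER[cap]]
        if lower:
            prev = max(lower)   # nearest populated lower tier
            graph[tid] = [t for t, c in tasks.items() if TIER[c] == prev]
        else:
            graph[tid] = []     # roots: nothing below to wait on
    return graph
```

Run on a researcher -> writer -> analyst task set, this reproduces the `dependency_graph` shape shown in the Phase 4 output example.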

### Step 4: Complexity Scoring

| Factor | Weight | Condition |
|--------|--------|-----------|
| Capability count | +1 each | Number of distinct capabilities |
| Cross-domain factor | +2 | Capabilities span 3+ tiers |
| Parallel tracks | +1 each | Independent parallel work streams |
| Serial depth | +1 per level | Longest dependency chain length |

| Total Score | Complexity | Role Limit |
|-------------|------------|------------|
| 1-3 | Low | 1-2 roles |
| 4-6 | Medium | 2-3 roles |
| 7+ | High | 3-5 roles |
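A literal reading of the two tables above as one function (the function name is illustrative):

```python
def complexity(capability_count, tiers_spanned, parallel_tracks, serial_depth):
    """Apply the weights from the factor table, then bucket the total."""
    score = capability_count                   # +1 per distinct capability
    score += 2 if tiers_spanned >= 3 else 0    # cross-domain factor
    score += parallel_tracks                   # +1 per independent track
    score += serial_depth                      # +1 per chain level
    if score <= 3:
        return score, "low"
    if score <= 6:
        return score, "medium"
    return score, "high"
```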

### Step 5: Role Minimization

Apply merging rules to reduce role count (cap at 5).

### Step 6: Role-Spec Metadata Assignment

For each role, determine frontmatter fields:

| Field | Derivation |
|-------|------------|
| `prefix` | From capability prefix (e.g., RESEARCH, DRAFT, IMPL) |
| `inner_loop` | `true` if role has 2+ serial same-prefix tasks |
| `subagents` | Inferred from responsibility type: orchestration -> [explore], code-gen (docs) -> [explore], validation -> [] |
| `message_types.success` | `<prefix>_complete` |
| `message_types.error` | `error` |

## Phase 4: Output

Write `<session-folder>/task-analysis.json`:

```json
{
  "task_description": "<original user input>",
  "capabilities": [
    {
      "name": "researcher",
      "prefix": "RESEARCH",
      "responsibility_type": "orchestration",
      "tasks": [
        { "id": "RESEARCH-001", "description": "..." }
      ],
      "artifacts": ["research-findings.md"]
    }
  ],
  "dependency_graph": {
    "RESEARCH-001": [],
    "DRAFT-001": ["RESEARCH-001"],
    "ANALYSIS-001": ["DRAFT-001"]
  },
  "roles": [
    {
      "name": "researcher",
      "prefix": "RESEARCH",
      "responsibility_type": "orchestration",
      "task_count": 1,
      "inner_loop": false,
      "role_spec_metadata": {
        "subagents": ["explore"],
        "message_types": {
          "success": "research_complete",
          "error": "error"
        }
      }
    }
  ],
  "complexity": {
    "capability_count": 2,
    "cross_domain_factor": false,
    "parallel_tracks": 0,
    "serial_depth": 2,
    "total_score": 3,
    "level": "low"
  },
  "needs_research": false,
  "artifacts": [
    { "name": "research-findings.md", "producer": "researcher", "path": "artifacts/research-findings.md" }
  ]
}
```

## Complexity Interpretation

**CRITICAL**: Complexity score is for **role design optimization**, NOT for skipping team workflow.

| Complexity | Team Structure | Coordinator Action |
|------------|----------------|-------------------|
| Low (1-2 roles) | Minimal team | Generate 1-2 role-specs, create team, spawn workers |
| Medium (2-3 roles) | Standard team | Generate role-specs, create team, spawn workers |
| High (3-5 roles) | Full team | Generate role-specs, create team, spawn workers |

**All complexity levels use team-worker architecture**:
- Single-role tasks still spawn a team-worker agent
- Coordinator NEVER executes task work directly
- Team infrastructure provides session management, message bus, fast-advance

**Purpose of complexity score**:
- ✅ Determine optimal role count (merge vs separate)
- ✅ Guide dependency graph design
- ✅ Inform user about task scope
- ❌ NOT for deciding whether to use team workflow

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No capabilities detected | Default to single `general` role with TASK prefix |
| Circular dependency in graph | Break cycle at lowest-tier edge, warn |
| Task description too vague | Return minimal analysis, coordinator will AskUserQuestion |
| All capabilities merge into one | Valid -- single-role execution via team-worker |

@@ -1,87 +0,0 @@
# Command: dispatch

## Purpose

Create task chains from dynamic dependency graphs. Builds pipelines from the task-analysis.json produced by Phase 1. Workers are spawned as team-worker agents with role-spec paths.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task analysis | `<session-folder>/task-analysis.json` | Yes |
| Session file | `<session-folder>/team-session.json` | Yes |
| Role registry | `team-session.json#roles` | Yes |
| Scope | User requirements description | Yes |

## Phase 3: Task Chain Creation

### Workflow

1. **Read dependency graph** from `task-analysis.json#dependency_graph`
2. **Topological sort** tasks to determine creation order
3. **Validate** all task owners exist in role registry
4. **For each task** (in topological order):

   ```
   TaskCreate({
     subject: "<PREFIX>-<NNN>",
     owner: "<role-name>",
     description: "<task description from task-analysis>\nSession: <session-folder>\nScope: <scope>\nInnerLoop: <true|false>\nRoleSpec: <session-folder>/role-specs/<role-name>.md",
     blockedBy: [<dependency-list from graph>],
     status: "pending"
   })
   ```

5. **Update team-session.json** with pipeline and tasks_total
6. **Validate** created chain
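Step 2's topological sort doubles as the circular-dependency check from Error Handling below. A minimal Kahn's-algorithm sketch over the blockedBy graph (function name illustrative):

```python
from collections import deque

def creation_order(graph):
    """Kahn topological sort over blockedBy edges; raises on a cycle.

    graph maps task subject -> list of subjects it is blocked by.
    """
    indegree = {t: len(deps) for t, deps in graph.items()}
    blocks = {t: [] for t in graph}                 # reverse edges
    for t, deps in graph.items():
        for d in deps:
            blocks[d].append(t)
    queue = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for succ in blocks[t]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    if len(order) != len(graph):
        raise ValueError("circular dependency detected")   # halt task creation
    return order
```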

### Task Description Template

Every task description includes session path, inner loop flag, and role-spec path:

```
<task description>
Session: <session-folder>
Scope: <scope>
InnerLoop: <true|false>
RoleSpec: <session-folder>/role-specs/<role-name>.md
```

### InnerLoop Flag Rules

| Condition | InnerLoop |
|-----------|-----------|
| Role has 2+ serial same-prefix tasks | true |
| Role has 1 task | false |
| Tasks are parallel (no dependency between them) | false |
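The three rules above reduce to one predicate: 2+ tasks that form a single serial chain. A sketch, assuming "serial" means every owned task after the first is blocked by another task owned by the same role:

```python
def inner_loop_flag(role_tasks, graph):
    """Apply the InnerLoop rules for one role.

    role_tasks: subjects owned by the role; graph: subject -> blockedBy list.
    """
    if len(role_tasks) < 2:
        return False                      # single task: no inner loop
    owned = set(role_tasks)
    # Count tasks chained behind another task owned by the same role
    chained = sum(1 for t in role_tasks if owned & set(graph.get(t, [])))
    return chained >= len(role_tasks) - 1  # parallel siblings fail this
```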

### Dependency Validation

| Check | Criteria |
|-------|----------|
| No orphan tasks | Every task is reachable from at least one root |
| No circular deps | Topological sort succeeds without cycle |
| All owners valid | Every task owner exists in team-session.json#roles |
| All blockedBy valid | Every blockedBy references an existing task subject |
| Session reference | Every task description contains `Session: <session-folder>` |
| RoleSpec reference | Every task description contains `RoleSpec: <path>` |

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Task count | Matches dependency_graph node count |
| Dependencies | Every blockedBy references an existing task subject |
| Owner assignment | Each task owner is in role registry |
| Session reference | Every task description contains `Session:` |
| Pipeline integrity | No disconnected subgraphs (warn if found) |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Circular dependency detected | Report cycle, halt task creation |
| Owner not in role registry | Error, coordinator must fix roles first |
| TaskCreate fails | Log error, report to coordinator |
| Duplicate task subject | Skip creation, log warning |
| Empty dependency graph | Error, task analysis may have failed |

@@ -1,301 +0,0 @@
# Command: monitor

## Purpose

Event-driven pipeline coordination with Spawn-and-Stop pattern. Role names are read from `team-session.json#roles`. Workers are spawned as `team-worker` agents with role-spec paths. Includes `handleComplete` for pipeline completion action and `handleAdapt` for mid-pipeline capability gap handling.

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip coordinator for simple linear successors |
| WORKER_AGENT | team-worker | All workers spawned as team-worker agents |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |

**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name`. Role-spec paths are in `session.roles[].role_spec`.

## Phase 3: Handler Routing

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | Pipeline detected as complete | handleComplete |
| 6 | None of the above (initial spawn after dispatch) | handleSpawnNext |
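The priority table can be sketched as an ordered predicate list. Names are illustrative; the real skill parses `$ARGUMENTS` inline:

```python
ROUTES = [                      # checked in priority order (rows 1-4)
    (lambda a, roles: any(f"[{r}]" in a for r in roles), "handleCallback"),
    (lambda a, roles: "capability_gap" in a,             "handleAdapt"),
    (lambda a, roles: "check" in a or "status" in a,     "handleCheck"),
    (lambda a, roles: any(k in a for k in ("resume", "continue", "next")),
                                                         "handleResume"),
]

def route(arguments, roles, pipeline_complete=False):
    """Pick a monitor handler for $ARGUMENTS, per the priority table."""
    for predicate, handler in ROUTES:
        if predicate(arguments, roles):
            return handler
    if pipeline_complete:       # row 5
        return "handleComplete"
    return "handleSpawnNext"    # row 6: initial spawn after dispatch
```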

---

### Handler: handleCallback

Worker completed a task. Verify completion, update state, auto-advance.

```
Receive callback from [<role>]
+- Find matching active worker by role (from session.roles)
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
|  +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |         +- -> handleSpawnNext
|  +- NO -> progress message, do not advance -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
      +- Found completed -> process each -> handleSpawnNext
      +- None completed -> STOP
```

**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext

---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[coordinator] Pipeline Status
[coordinator] Progress: <completed>/<total> (<percent>%)

[coordinator] Execution Graph:
<visual representation of dependency graph with status icons>

done=completed  >>>=running  o=pending  .=not created

[coordinator] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]

[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

Then STOP.

---

### Handler: handleResume

Check active worker completion, process results, advance pipeline.

```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- status = completed -> mark done, log
   +- status = in_progress -> still running, log
   +- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```

---

### Handler: handleSpawnNext

Find all ready tasks, spawn team-worker agents in background, update session, STOP.

```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects

Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> handleComplete
+- HAS ready tasks -> for each:
   +- Is task owner an Inner Loop role AND that role already has an active_worker?
   |  +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
   |  +- NO -> normal spawn below
   +- TaskUpdate -> in_progress
   +- team_msg log -> task_unblocked (team=<session-id>, NOT team name)
   +- Spawn team-worker (see spawn tool call below)
   +- Add to session.active_workers
Update session file -> output summary -> STOP
```
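The readySubjects collection step can be sketched directly. The task dict shape is illustrative, not the real TaskList schema:

```python
def ready_subjects(tasks):
    """pending tasks whose every blockedBy entry is already completed."""
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    return [t["subject"] for t in tasks
            if t["status"] == "pending"
            and all(dep in completed for dep in t["blockedBy"])]
```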

**Spawn worker tool call** (one per ready task):

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <subject>",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: <session-folder>/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>

Read role_spec file to load Phase 2-4 domain instructions.`
})
```

---

### Handler: handleComplete

Pipeline complete. Execute completion action based on session configuration.

```
All tasks completed (no pending, no in_progress)
+- Generate pipeline summary:
|  - Deliverables list with paths
|  - Pipeline stats (tasks completed, duration)
|  - Discussion verdicts (if any)
|
+- Read session.completion_action:
   |
   +- "interactive":
   |  AskUserQuestion({
   |    questions: [{
   |      question: "Team pipeline complete. What would you like to do?",
   |      header: "Completion",
   |      multiSelect: false,
   |      options: [
   |        { label: "Archive & Clean (Recommended)", description: "Archive session, clean up team" },
   |        { label: "Keep Active", description: "Keep session for follow-up work" },
   |        { label: "Export Results", description: "Export deliverables to target directory" }
   |      ]
   |    }]
   |  })
   |  +- "Archive & Clean":
   |  |  Update session status="completed"
   |  |  TeamDelete(<team-name>)
   |  |  Output final summary with artifact paths
   |  +- "Keep Active":
   |  |  Update session status="paused"
   |  |  Output: "Resume with: Skill(skill='team-coordinate-v2', args='resume')"
   |  +- "Export Results":
   |     AskUserQuestion for target directory
   |     Copy deliverables to target
   |     Execute Archive & Clean flow
   |
   +- "auto_archive":
   |  Execute Archive & Clean without prompt
   |
   +- "auto_keep":
      Execute Keep Active without prompt
```

**Fallback**: If completion action fails, default to Keep Active (session status="paused"), log warning.

---

### Handler: handleAdapt

Handle mid-pipeline capability gap discovery. A worker reports `capability_gap` when it encounters work outside its scope.

**CONSTRAINT**: Maximum 5 worker roles per session. handleAdapt MUST enforce this limit.

```
Parse capability_gap message:
+- Extract: gap_description, requesting_role, suggested_capability
+- Validate gap is genuine:
   +- Check existing roles in session.roles -> does any role cover this?
   |  +- YES -> redirect: SendMessage to that role's owner -> STOP
   |  +- NO -> genuine gap, proceed to role-spec generation
+- CHECK ROLE COUNT LIMIT (MAX 5 ROLES):
   +- Count current roles in session.roles
   +- If count >= 5:
      +- Attempt to merge new capability into existing role
      +- If merge NOT possible -> PAUSE, report to user
+- Generate new role-spec:
   1. Read specs/role-spec-template.md
   2. Fill template with: frontmatter (role, prefix, inner_loop, message_types) + Phase 2-4 content
   3. Write to <session-folder>/role-specs/<new-role>.md
   4. Add to session.roles[]
+- Create new task(s) via TaskCreate
+- Update team-session.json
+- Spawn new team-worker -> STOP
```

---

### Worker Failure Handling

When a worker has unexpected status (not completed, not in_progress):

1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume

### Fast-Advance Failure Recovery

When coordinator detects a fast-advanced task has failed:

```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Resolution:
   1. TaskUpdate -> reset task to pending
   2. Remove stale active_worker entry (if any)
   3. Log via team_msg (type: error)
   4. -> handleSpawnNext (will re-spawn the task normally)
```

### Consensus-Blocked Handling

```
handleCallback receives message with consensus_blocked flag
+- Route by severity:
   +- severity = HIGH
   |  +- Create REVISION task (same role, incremented suffix)
   |  +- Max 1 revision per task. If already revised -> PAUSE, escalate to user
   +- severity = MEDIUM
   |  +- Proceed with warning, log to wisdom/issues.md
   |  +- Normal handleSpawnNext
   +- severity = LOW
      +- Proceed normally, treat as consensus_reached with notes
```
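The severity routing above is a small mapping with one piece of state (whether a revision was already attempted). A sketch with illustrative action names:

```python
def route_consensus(severity, already_revised):
    """Map a consensus_blocked verdict to the coordinator's next action."""
    if severity == "HIGH":
        # At most one revision round per task before escalating to the user
        return "pause_escalate" if already_revised else "create_revision_task"
    if severity == "MEDIUM":
        return "proceed_log_issue"      # warn + record in wisdom/issues.md
    return "proceed"                    # LOW: treat as consensus_reached
```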

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task owners exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Coordinator reconciles, no duplicate spawns |
| Dynamic role-spec file not found | Error, coordinator must regenerate from task-analysis |
| capability_gap when role limit (5) reached | Attempt merge, else pause for user |
| Completion action fails | Default to Keep Active, log warning |

@@ -1,297 +0,0 @@
# Coordinator Role
|
|
||||||
|
|
||||||
Orchestrate the team-coordinate workflow: task analysis, dynamic role-spec generation, task dispatching, progress monitoring, session state, and completion action. The sole built-in role -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent.
|
|
||||||
|
|
||||||
## Identity
|
|
||||||
|
|
||||||
- **Name**: `coordinator` | **Tag**: `[coordinator]`
|
|
||||||
- **Responsibility**: Analyze task -> Generate role-specs -> Create team -> Dispatch tasks -> Monitor progress -> Completion action -> Report results
|
|
||||||
|
|
||||||
## Boundaries
|
|
||||||
|
|
||||||
### MUST
|
|
||||||
- Parse task description (text-level: keyword scanning, capability inference, dependency design)
|
|
||||||
- Dynamically generate worker role-specs from specs/role-spec-template.md
|
|
||||||
- Create team and spawn team-worker agents in background
|
|
||||||
- Dispatch tasks with proper dependency chains from task-analysis.json
|
|
||||||
- Monitor progress via worker callbacks and route messages
|
|
||||||
- Maintain session state persistence (team-session.json)
|
|
||||||
- Handle capability_gap reports (generate new role-specs mid-pipeline)
|
|
||||||
- Handle consensus_blocked HIGH verdicts (create revision tasks or pause)
|
|
||||||
- Detect fast-advance orphans on resume/check and reset to pending
|
|
||||||
- Execute completion action when pipeline finishes
|
|
||||||
|
|
||||||
### MUST NOT
|
|
||||||
- **Read source code or perform codebase exploration** (delegate to worker roles)
|
|
||||||
- Execute task work directly (delegate to workers)
|
|
||||||
- Modify task output artifacts (workers own their deliverables)
|
|
||||||
- Call implementation subagents (code-developer, etc.) directly
|
|
||||||
- Skip dependency validation when creating task chains
|
|
||||||
- Generate more than 5 worker roles (merge if exceeded)
|
|
||||||
- Override consensus_blocked HIGH without user confirmation
|
|
||||||
- Spawn workers with `general-purpose` agent (MUST use `team-worker`)
|
|
||||||
|
|
||||||
---

## Command Execution Protocol

When coordinator needs to execute a command (analyze-task, dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** - NOT separate agents or subprocesses
4. **Execute synchronously** - complete the command workflow before proceeding

Example:

```
Phase 1 needs task analysis
-> Read roles/coordinator/commands/analyze-task.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Analysis)
-> Execute Phase 4 (Output)
-> Continue to Phase 2
```

---

## Entry Router

When coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| Pipeline complete | All tasks completed, no pending/in_progress | -> handleComplete |
| Interrupted session | Active/paused session exists in `.workflow/.team/TC-*` | -> Phase 0 (Resume Check) |
| New session | None of above | -> Phase 1 (Task Analysis) |

For callback/check/resume/adapt/complete: load `commands/monitor.md` and execute the appropriate handler, then STOP.

### Router Implementation

1. **Load session context** (if exists):
   - Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
   - If found, extract `session.roles[].name` for callback detection

2. **Parse $ARGUMENTS** for detection keywords

3. **Route to handler**:
   - For monitor handlers: Read `commands/monitor.md`, execute matched handler section, STOP
   - For Phase 0: Execute Session Resume Check below
   - For Phase 1: Execute Task Analysis below

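The detection table above can be sketched as a pure routing function. This is a minimal sketch, assuming a session shape with only the fields the router needs; the `Session` interface and the boolean `allTasksDone` flag are illustrative assumptions, not part of the spec.

```typescript
// Hypothetical session shape: only the fields the router inspects.
interface Session {
  status: "active" | "paused" | "archived";
  roles: { name: string }[];
}

type Route =
  | "handleCallback" | "handleCheck" | "handleResume"
  | "handleAdapt" | "handleComplete" | "phase0" | "phase1";

// Detection order mirrors the table: callback, check, resume, capability gap,
// pipeline complete, interrupted session, then new session as the fallback.
function route(message: string, args: string, session?: Session, allTasksDone = false): Route {
  const fromWorker = session?.roles.some(r => message.includes(`[${r.name}]`));
  if (fromWorker) return "handleCallback";
  if (/\b(check|status)\b/.test(args)) return "handleCheck";
  if (/\b(resume|continue)\b/.test(args)) return "handleResume";
  if (message.includes("capability_gap")) return "handleAdapt";
  if (allTasksDone) return "handleComplete";
  if (session && (session.status === "active" || session.status === "paused")) return "phase0";
  return "phase1";
}
```

Because detection is first-match, a worker callback always wins over keyword matches, which is why the table lists it first.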
---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:
1. Scan `.workflow/.team/TC-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session.completed_tasks <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Detect fast-advance orphans (in_progress without recent activity) -> reset to pending
5. Determine remaining pipeline from reconciled state
6. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
7. Create missing tasks with correct blockedBy dependencies
8. Verify dependency chain integrity
9. Update session file with reconciled state
10. Kick first executable task's worker -> Phase 4

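Steps 2-3 of the reconciliation can be sketched as follows. This is a minimal sketch under assumed `TaskRecord`/`SessionState` shapes; on resume, every `in_progress` task is by definition interrupted, so it is reset wholesale.

```typescript
interface TaskRecord { id: string; status: "pending" | "in_progress" | "completed"; }
interface SessionState { completed_tasks: string[]; }

// Bidirectional sync (step 2) plus interrupted-task reset (step 3).
function reconcile(tasks: TaskRecord[], session: SessionState): TaskRecord[] {
  return tasks.map(t => {
    // Step 2: session already recorded this task as done -> sync TaskList.
    if (session.completed_tasks.includes(t.id) && t.status !== "completed") {
      return { ...t, status: "completed" as const };
    }
    // Step 3: any in_progress task was interrupted -> reset to pending.
    if (t.status === "in_progress") return { ...t, status: "pending" as const };
    return t;
  });
}
```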
---

## Phase 1: Task Analysis

**Objective**: Parse user task, detect capabilities, build dependency graph, design roles.

**Constraint**: This is TEXT-LEVEL analysis only. No source code reading, no codebase exploration.

**Workflow**:

1. **Parse user task description**

2. **Clarify if ambiguous** via AskUserQuestion:
   - What is the scope? (specific files, module, project-wide)
   - What deliverables are expected? (documents, code, analysis reports)
   - Any constraints? (timeline, technology, style)

3. **Delegate to `commands/analyze-task.md`**:
   - Signal detection: scan keywords -> infer capabilities
   - Artifact inference: each capability -> default output type (.md)
   - Dependency graph: build DAG of work streams
   - Complexity scoring: count capabilities, cross-domain factor, parallel tracks
   - Role minimization: merge overlapping, absorb trivial, cap at 5
   - **Role-spec metadata**: Generate frontmatter fields (prefix, inner_loop, subagents, message_types)

4. **Output**: Write `<session>/task-analysis.json`

5. **If `needs_research: true`**: Phase 2 will spawn researcher worker first

**Success**: Task analyzed, capabilities detected, dependency graph built, roles designed with role-spec metadata.

**CRITICAL - Team Workflow Enforcement**:

Regardless of complexity score or role count, coordinator MUST:
- ✅ **Always proceed to Phase 2** (generate role-specs)
- ✅ **Always create team** and spawn workers via team-worker agent
- ❌ **NEVER execute task work directly**, even for single-role low-complexity tasks
- ❌ **NEVER skip team workflow** based on complexity assessment

**Single-role execution is still team-based** - just with one worker. The team architecture provides:
- Consistent message bus communication
- Session state management
- Artifact tracking
- Fast-advance capability
- Resume/recovery mechanisms

---

## Phase 2: Generate Role-Specs + Initialize Session

**Objective**: Create session, generate dynamic role-spec files, initialize shared infrastructure.

**Workflow**:

1. **Check `needs_research` flag** from task-analysis.json:
   - If `true`: **Spawn researcher worker first** to gather codebase context
   - Wait for researcher callback
   - Merge research findings into task context
   - Update task-analysis.json with enriched context

2. **Generate session ID**: `TC-<slug>-<date>` (slug from first 3 meaningful words of task)

3. **Create session folder structure**:
   ```
   .workflow/.team/<session-id>/
   +-- role-specs/
   +-- artifacts/
   +-- wisdom/
   +-- explorations/
   +-- discussions/
   +-- .msg/
   ```

4. **Call TeamCreate** with team name derived from session ID

5. **Read `specs/role-spec-template.md`** + task-analysis.json

6. **For each role in task-analysis.json#roles**:
   - Fill role-spec template with:
     - YAML frontmatter: role, prefix, inner_loop, subagents, message_types
     - Phase 2-4 content from responsibility type reference sections in template
     - Task-specific instructions from task description
   - Write generated role-spec to `<session>/role-specs/<role-name>.md`

7. **Register roles** in team-session.json#roles (with `role_spec` path instead of `role_file`)

8. **Initialize shared infrastructure**:
   - `wisdom/learnings.md`, `wisdom/decisions.md`, `wisdom/issues.md` (empty with headers)
   - `explorations/cache-index.json` (`{ "entries": [] }`)
   - `shared-memory.json` (`{}`)
   - `discussions/` (empty directory)

9. **Write team-session.json** with: session_id, task_description, status="active", roles, pipeline (empty), active_workers=[], completion_action="interactive", created_at

**Success**: Session created, role-spec files generated, shared infrastructure initialized.

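The session-ID rule in step 2 can be sketched as below. The stop-word list is an assumption, since the spec does not define "meaningful words" precisely.

```typescript
// Assumed stop-word list for skipping non-"meaningful" words.
const STOP = new Set(["a", "an", "the", "to", "of", "for", "and", "in", "on", "with"]);

// Builds TC-<slug>-<date> from the first 3 meaningful words of the task.
function sessionId(task: string, date: Date): string {
  const slug = task
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter(w => w.length > 0 && !STOP.has(w))
    .slice(0, 3)                                 // first 3 meaningful words
    .join("-");
  const ymd = date.toISOString().slice(0, 10);   // YYYY-MM-DD
  return `TC-${slug}-${ymd}`;
}
```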
---

## Phase 3: Create Task Chain

**Objective**: Create and dispatch the full task chain from the dependency graph, with correct dependencies.

Delegate to `commands/dispatch.md`, which:
1. Reads dependency_graph from task-analysis.json
2. Topologically sorts tasks
3. Creates tasks via TaskCreate with correct blockedBy
4. Assigns owner based on role mapping from task-analysis.json
5. Includes `Session: <session-folder>` in every task description
6. Sets InnerLoop flag for multi-task roles
7. Updates team-session.json with pipeline and tasks_total

**Success**: All tasks created with correct dependency chains, session updated.

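Step 2's topological sort can be sketched with Kahn's algorithm. This is a sketch; the adjacency shape of `dependency_graph` (task id mapped to the ids it is blocked by) is an assumption about the JSON layout.

```typescript
// dependency_graph as adjacency: each task id -> ids it is blocked by.
type DepGraph = Record<string, string[]>;

// Kahn-style sort: repeatedly emit tasks whose blockers have all been emitted.
// Throws on a dependency cycle, which the coordinator reports and halts on.
function topoSort(graph: DepGraph): string[] {
  const order: string[] = [];
  const done = new Set<string>();
  const pending = new Set(Object.keys(graph));
  while (pending.size > 0) {
    const ready = [...pending].filter(id => graph[id].every(d => done.has(d)));
    if (ready.length === 0) throw new Error("dependency cycle detected");
    for (const id of ready) { order.push(id); done.add(id); pending.delete(id); }
  }
  return order;
}
```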
---

## Phase 4: Spawn-and-Stop

**Objective**: Spawn first batch of ready workers in background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern, with worker fast-advance.

**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task -> spawn team-worker (see SKILL.md Coordinator Spawn Template)
4. Output status summary with execution graph
5. STOP

**Pipeline advancement** is driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)

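Step 2's readiness check can be sketched as a filter. The `PipelineTask` shape is an assumption; a task is ready when it is pending, has an owner, and every blocker is completed.

```typescript
interface PipelineTask { id: string; status: string; owner?: string; blockedBy: string[]; }

// Ready = pending + owned + all blockedBy entries resolved (completed).
function readyTasks(tasks: PipelineTask[]): PipelineTask[] {
  const completed = new Set(tasks.filter(t => t.status === "completed").map(t => t.id));
  return tasks.filter(t =>
    t.status === "pending" &&
    t.owner !== undefined &&
    t.blockedBy.every(id => completed.has(id))
  );
}
```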
---

## Phase 5: Report + Completion Action

**Objective**: Produce the completion report, run the completion choice, and offer follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Output report:

```
[coordinator] ============================================
[coordinator] TASK COMPLETE
[coordinator]
[coordinator] Deliverables:
[coordinator] - <artifact-1.md> (<producer role>)
[coordinator] - <artifact-2.md> (<producer role>)
[coordinator]
[coordinator] Pipeline: <completed>/<total> tasks
[coordinator] Roles: <role-list>
[coordinator] Duration: <elapsed>
[coordinator]
[coordinator] Session: <session-folder>
[coordinator] ============================================
```

6. **Execute Completion Action** (based on session.completion_action):

| Mode | Behavior |
|------|----------|
| `interactive` | AskUserQuestion with Archive/Keep/Export options |
| `auto_archive` | Execute Archive & Clean without prompt |
| `auto_keep` | Execute Keep Active without prompt |

**Interactive handler**: See SKILL.md Completion Action section.

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect in task analysis, report to user, halt |
| Task description too vague | AskUserQuestion for clarification |
| Session corruption | Attempt recovery, fall back to manual reconciliation |
| Role-spec generation fails | Fall back to single general-purpose role |
| capability_gap reported | handleAdapt: generate new role-spec, create tasks, spawn |
| All capabilities merge to one | Valid: single-role execution, reduced overhead |
| No capabilities detected | Default to single general role with TASK prefix |
| Completion action fails | Default to Keep Active, log warning |

@@ -1,295 +0,0 @@
# Dynamic Role-Spec Template

Template used by coordinator to generate lightweight worker role-spec files at runtime. Each generated role-spec is written to `<session>/role-specs/<role-name>.md`.

**Key difference from v1**: Role-specs contain ONLY Phase 2-4 domain logic + YAML frontmatter. All shared behavior (Phase 1 Task Discovery, Phase 5 Report/Fast-Advance, Message Bus, Consensus, Inner Loop) is built into the `team-worker` agent.

## Template

```markdown
---
role: <role_name>
prefix: <PREFIX>
inner_loop: <true|false>
subagents: [<subagent-names>]
message_types:
  success: <prefix>_complete
  error: error
---

# <Role Name> — Phase 2-4

## Phase 2: <phase2_name>

<phase2_content>

## Phase 3: <phase3_name>

<phase3_content>

## Phase 4: <phase4_name>

<phase4_content>

## Error Handling

| Scenario | Resolution |
|----------|------------|
<error_entries>
```

## Frontmatter Fields

| Field | Required | Description |
|-------|----------|-------------|
| `role` | Yes | Role name matching session registry |
| `prefix` | Yes | Task prefix to filter (e.g., RESEARCH, DRAFT, IMPL) |
| `inner_loop` | Yes | Whether team-worker loops through same-prefix tasks |
| `subagents` | No | Array of subagent types this role may call |
| `message_types` | Yes | Message type mapping for team_msg |
| `message_types.success` | Yes | Type string for successful completion |
| `message_types.error` | Yes | Type string for errors (usually "error") |

## Design Rules

| Rule | Description |
|------|-------------|
| Phase 2-4 only | No Phase 1 (Task Discovery) or Phase 5 (Report) — team-worker handles these |
| No message bus code | No team_msg calls — team-worker handles logging |
| No consensus handling | No consensus_reached/blocked logic — team-worker handles routing |
| No inner loop logic | No Phase 5-L/5-F — team-worker handles looping |
| ~80 lines target | Lightweight, domain-focused |
| No pseudocode | Decision tables + text + tool calls only |
| `<placeholder>` notation | Use angle brackets for variable substitution |
| Reference subagents by name | team-worker resolves invocation from its delegation templates |

## Phase 2-4 Content by Responsibility Type

Select the matching section based on `responsibility_type` from task analysis.

### orchestration

**Phase 2: Context Assessment**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Shared memory | <session>/shared-memory.json | No |
| Prior artifacts | <session>/artifacts/ | No |
| Wisdom | <session>/wisdom/ | No |

Loading steps:
1. Extract session path from task description
2. Read shared-memory.json for cross-role context
3. Read prior artifacts (if any from upstream tasks)
4. Load wisdom files for accumulated knowledge
5. Optionally call explore subagent for codebase context
```

**Phase 3: Subagent Execution**

```
Delegate to appropriate subagent based on task:

Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "<task-type> for <task-id>",
  prompt: "## Task
- <task description>
- Session: <session-folder>
## Context
<prior artifacts + shared memory + explore results>
## Expected Output
Write artifact to: <session>/artifacts/<artifact-name>.md
Return JSON summary: { artifact_path, summary, key_decisions[], warnings[] }"
})
```

**Phase 4: Result Aggregation**

```
1. Verify subagent output artifact exists
2. Read artifact, validate structure/completeness
3. Update shared-memory.json with key findings
4. Write insights to wisdom/ files
```

### code-gen (docs)

**Phase 2: Load Prior Context**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Prior artifacts | <session>/artifacts/ from upstream | Conditional |
| Shared memory | <session>/shared-memory.json | No |

Loading steps:
1. Extract session path from task description
2. Read upstream artifacts
3. Read shared-memory.json for cross-role context
```

**Phase 3: Document Generation**

```
Task({
  subagent_type: "universal-executor",
  run_in_background: false,
  description: "Generate <doc-type> for <task-id>",
  prompt: "## Task
- Generate: <document type>
- Session: <session-folder>
## Prior Context
<upstream artifacts + shared memory>
## Expected Output
Write document to: <session>/artifacts/<doc-name>.md
Return JSON: { artifact_path, summary, key_decisions[], warnings[] }"
})
```

**Phase 4: Structure Validation**

```
1. Verify document artifact exists
2. Check document has expected sections
3. Validate no placeholder text remains
4. Update shared-memory.json with document metadata
```

### code-gen (code)

**Phase 2: Load Plan/Specs**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Plan/design artifacts | <session>/artifacts/ | Conditional |
| Shared memory | <session>/shared-memory.json | No |

Loading steps:
1. Extract session path from task description
2. Read plan/design artifacts from upstream
3. Load shared-memory.json for implementation context
```

**Phase 3: Code Implementation**

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement <task-id>",
  prompt: "## Task
- <implementation description>
- Session: <session-folder>
## Plan/Design Context
<upstream artifacts>
## Expected Output
Implement code changes.
Write summary to: <session>/artifacts/implementation-summary.md
Return JSON: { artifact_path, summary, files_changed[], warnings[] }"
})
```

**Phase 4: Syntax Validation**

```
1. Run syntax check (tsc --noEmit or equivalent)
2. Verify all planned files exist
3. If validation fails -> attempt auto-fix (max 2 attempts)
4. Write implementation summary to artifacts/
```

### read-only

**Phase 2: Target Loading**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Target artifacts/files | From task description or upstream | Yes |
| Shared memory | <session>/shared-memory.json | No |

Loading steps:
1. Extract session path and target files from task description
2. Read target artifacts or source files for analysis
3. Load shared-memory.json for context
```

**Phase 3: Multi-Dimension Analysis**

```
Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "Analyze <target> for <task-id>",
  prompt: "## Task
- Analyze: <target description>
- Dimensions: <analysis dimensions from coordinator>
- Session: <session-folder>
## Target Content
<artifact content or file content>
## Expected Output
Write report to: <session>/artifacts/analysis-report.md
Return JSON: { artifact_path, summary, findings[], severity_counts }"
})
```

**Phase 4: Severity Classification**

```
1. Verify analysis report exists
2. Classify findings by severity (Critical/High/Medium/Low)
3. Update shared-memory.json with key findings
4. Write issues to wisdom/issues.md
```

### validation

**Phase 2: Environment Detection**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Implementation artifacts | Upstream code changes | Yes |

Loading steps:
1. Detect test framework from project files
2. Get changed files from implementation
3. Identify test command and coverage tool
```

**Phase 3: Test-Fix Cycle**

```
Task({
  subagent_type: "test-fix-agent",
  run_in_background: false,
  description: "Test-fix for <task-id>",
  prompt: "## Task
- Run tests and fix failures
- Session: <session-folder>
- Max iterations: 5
## Changed Files
<from upstream implementation>
## Expected Output
Write report to: <session>/artifacts/test-report.md
Return JSON: { artifact_path, pass_rate, coverage, remaining_failures[] }"
})
```

**Phase 4: Result Analysis**

```
1. Check pass rate >= 95%
2. Check coverage meets threshold
3. Generate test report with pass/fail counts
4. Update shared-memory.json with test results
```

@@ -1,133 +0,0 @@
# Discuss Subagent

Lightweight multi-perspective critique engine. Called inline by any role needing peer review. Perspectives are dynamic -- specified by the calling role, not pre-defined.

## Design

Unlike team-lifecycle-v4's fixed perspective definitions (product, technical, quality, risk, coverage), team-coordinate uses **dynamic perspectives** passed in the prompt. The calling role decides what viewpoints matter for its artifact.

## Invocation

Called by roles after artifact creation:

```
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: "Discuss <round-id>",
  prompt: `## Multi-Perspective Critique: <round-id>

### Input
- Artifact: <artifact-path>
- Round: <round-id>
- Session: <session-folder>

### Perspectives
<Dynamic perspective list -- each entry defines: name, cli_tool, role_label, focus_areas>

Example:
| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Feasibility | gemini | Engineer | Implementation complexity, technical risks, resource needs |
| Clarity | codex | Editor | Readability, logical flow, completeness of explanation |
| Accuracy | gemini | Domain Expert | Factual correctness, source reliability, claim verification |

### Execution Steps
1. Read artifact from <artifact-path>
2. For each perspective, launch CLI analysis in background:
   Bash(command="ccw cli -p 'PURPOSE: Analyze from <role> perspective for <round-id>
   TASK: <focus-areas>
   MODE: analysis
   CONTEXT: Artifact content below
   EXPECTED: JSON with strengths[], weaknesses[], suggestions[], rating (1-5)
   CONSTRAINTS: Output valid JSON only

   Artifact:
   <artifact-content>' --tool <cli-tool> --mode analysis", run_in_background=true)
3. Wait for all CLI results
4. Divergence detection:
   - High severity: any rating <= 2, or a critical issue identified
   - Medium severity: rating spread (max - min) >= 3, or a single perspective rated <= 2 with others >= 3
   - Low severity: minor suggestions only, all ratings >= 3
5. Consensus determination:
   - No high-severity divergences AND average rating >= 3.0 -> consensus_reached
   - Otherwise -> consensus_blocked
6. Synthesize:
   - Convergent themes (agreed by 2+ perspectives)
   - Divergent views (conflicting assessments)
   - Action items from suggestions
7. Write discussion record to: <session-folder>/discussions/<round-id>-discussion.md

### Discussion Record Format
# Discussion Record: <round-id>

**Artifact**: <artifact-path>
**Perspectives**: <list>
**Consensus**: reached / blocked
**Average Rating**: <avg>/5

## Convergent Themes
- <theme>

## Divergent Views
- **<topic>** (<severity>): <description>

## Action Items
1. <item>

## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |

### Return Value

**When consensus_reached**:
Return a summary string with:
- Verdict: consensus_reached
- Average rating
- Key action items (top 3)
- Discussion record path

**When consensus_blocked**:
Return a structured summary with:
- Verdict: consensus_blocked
- Severity: HIGH | MEDIUM | LOW
- Average rating
- Divergence summary: top 3 divergent points with perspective attribution
- Action items: prioritized list of required changes
- Recommendation: revise | proceed-with-caution | escalate
- Discussion record path

### Error Handling
- Single CLI fails -> fall back to direct Claude analysis for that perspective
- All CLIs fail -> generate a basic discussion from direct artifact reading
- Artifact not found -> return error immediately`
})
```
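The divergence and consensus rules in steps 4-5 can be sketched as below. This is a hedged sketch: the HIGH rule ("any rating <= 2") and the MEDIUM rule ("single perspective rated <= 2 with others >= 3") overlap for a single low rating, so this sketch gives the more specific single-outlier rule precedence; the `PerspectiveResult` shape is an assumption.

```typescript
interface PerspectiveResult { name: string; rating: number; critical?: boolean; }

type Severity = "HIGH" | "MEDIUM" | "LOW";

// Step 4: classify divergence. Single low outlier -> MEDIUM; any critical
// issue or multiple ratings <= 2 -> HIGH; wide spread -> MEDIUM; else LOW.
function divergenceSeverity(results: PerspectiveResult[]): Severity {
  const ratings = results.map(r => r.rating);
  const low = ratings.filter(r => r <= 2);
  if (low.length === 1 && ratings.filter(r => r >= 3).length === ratings.length - 1) return "MEDIUM";
  if (results.some(r => r.critical) || low.length > 0) return "HIGH";
  if (Math.max(...ratings) - Math.min(...ratings) >= 3) return "MEDIUM";
  return "LOW";
}

// Step 5: no HIGH divergence and average >= 3.0 -> consensus_reached.
function verdict(results: PerspectiveResult[]): "consensus_reached" | "consensus_blocked" {
  const avg = results.reduce((s, r) => s + r.rating, 0) / results.length;
  return divergenceSeverity(results) !== "HIGH" && avg >= 3.0
    ? "consensus_reached"
    : "consensus_blocked";
}
```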
## Integration with Calling Role

The calling role is responsible for:

1. **Before calling**: Complete primary artifact output
2. **Calling**: Invoke discuss subagent with appropriate dynamic perspectives
3. **After calling**:

| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in Phase 5 report, proceed normally |
| consensus_blocked | HIGH | Include divergence details in Phase 5 SendMessage. Do NOT self-revise -- coordinator decides. |
| consensus_blocked | MEDIUM | Include warning in Phase 5 SendMessage. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |

**SendMessage format for consensus_blocked (HIGH or MEDIUM)**:

```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <artifact-path>
Discussion: <discussion-record-path>
```

@@ -1,120 +0,0 @@
# Explore Subagent

Shared codebase exploration utility with centralized caching. Callable by any role needing code context.

## Invocation

```
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore <angle>",
  prompt: `Explore codebase for: <query>

Focus angle: <angle>
Keywords: <keyword-list>
Session folder: <session-folder>

## Cache Check
1. Read <session-folder>/explorations/cache-index.json (if exists)
2. Look for entry with matching angle
3. If found AND file exists -> read cached result, return summary
4. If not found -> proceed to exploration

## Exploration
<angle-specific-focus-from-table-below>

## Output
Write JSON to: <session-folder>/explorations/explore-<angle>.json
Update cache-index.json with new entry

## Output Schema
{
  "angle": "<angle>",
  "query": "<query>",
  "relevant_files": [
    { "path": "...", "rationale": "...", "role": "...", "discovery_source": "...", "key_symbols": [] }
  ],
  "patterns": [],
  "dependencies": [],
  "external_refs": [],
  "_metadata": { "created_by": "<calling-role>", "timestamp": "...", "cache_key": "..." }
}

Return summary: file count, pattern count, top 5 files, output path`
})
```

## Cache Mechanism
|
|
||||||
|
|
||||||
### Cache Index Schema
|
|
||||||
|
|
||||||
`<session-folder>/explorations/cache-index.json`:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"entries": [
|
|
||||||
{
|
|
||||||
"angle": "architecture",
|
|
||||||
"keywords": ["auth", "middleware"],
|
|
||||||
"file": "explore-architecture.json",
|
|
||||||
"created_by": "analyst",
|
|
||||||
"created_at": "2026-02-27T10:00:00Z",
|
|
||||||
"file_count": 15
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Cache Lookup Rules
|
|
||||||
|
|
||||||
| Condition | Action |
|
|
||||||
|-----------|--------|
|
|
||||||
| Exact angle match exists | Return cached result |
|
|
||||||
| No match | Execute exploration, cache result |
|
|
||||||
| Cache file missing but index has entry | Remove stale entry, re-explore |
|
|
||||||
|
|
||||||
### Cache Invalidation
|
|
||||||
|
|
||||||
Cache is session-scoped. No explicit invalidation needed -- each session starts fresh. If a role suspects stale data, it can pass `force_refresh: true` in the prompt to bypass cache.
|
|
||||||
|
|
||||||
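The lookup, stale-entry, and bypass rules above can be sketched in Python. This is illustrative only: the function name is hypothetical, but the on-disk layout follows the cache-index schema in this section, and `force_refresh` is the bypass flag described under Cache Invalidation.

```python
import json
import os

def cache_lookup(session_folder, angle, force_refresh=False):
    """Return the cached exploration file path for `angle`, or None to trigger exploration."""
    index_path = os.path.join(session_folder, "explorations", "cache-index.json")
    if force_refresh or not os.path.exists(index_path):
        return None  # no index yet, or caller requested a cache bypass
    with open(index_path) as f:
        index = json.load(f)
    entries = index.get("entries", [])
    for entry in entries:
        if entry.get("angle") == angle:  # exact angle match
            cached = os.path.join(session_folder, "explorations", entry["file"])
            if os.path.exists(cached):
                return cached  # cache hit
            # index has an entry but the file is gone: drop the stale entry, re-explore
            index["entries"] = [e for e in entries if e is not entry]
            with open(index_path, "w") as f:
                json.dump(index, f, indent=2)
            return None
    return None  # no match: execute exploration, then cache the result
```

A `None` return means "proceed to exploration"; the caller is expected to write the new `explore-<angle>.json` and append a fresh index entry afterwards.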
## Angle Focus Guide

| Angle | Focus Points | Typical Caller |
|-------|-------------|----------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs | any |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities | any |
| modularity | Module interfaces, separation of concerns, extraction opportunities | any |
| integration-points | API endpoints, data flow between modules, event systems | any |
| security | Auth/authz logic, input validation, sensitive data handling, middleware | any |
| dataflow | Data transformations, state propagation, validation points | any |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity | any |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging | any |
| patterns | Code conventions, design patterns, naming conventions, best practices | any |
| testing | Test files, coverage gaps, test patterns, mocking strategies | any |
| general | Broad semantic search for topic-related code | any |

## Exploration Strategies

### Low Complexity (direct search)

For simple queries, use ACE semantic search:

```
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<query>")
```

ACE failure fallback: `rg -l '<keywords>' --type ts`

### Medium/High Complexity (multi-angle)

For complex queries, call cli-explore-agent per angle. The calling role determines complexity and selects angles.

## Search Tool Priority

| Tool | Priority | Use Case |
|------|----------|----------|
| mcp__ace-tool__search_context | P0 | Semantic search |
| Grep / Glob | P1 | Pattern matching |
| cli-explore-agent | P2 | Deep multi-angle exploration |
| WebSearch | P3 | External docs |

@@ -1,445 +0,0 @@
---
name: team-coordinate
description: Universal team coordination skill with dynamic role generation. Only the coordinator is built-in -- all worker roles are generated at runtime based on task analysis. Beat/cadence model for orchestration. Triggers on "team coordinate".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Coordinate

Universal team coordination skill: analyze task -> generate roles -> dispatch -> execute -> deliver. Only the **coordinator** is built-in. All worker roles are **dynamically generated** based on task analysis.

## Architecture

```
+---------------------------------------------------+
|  Skill(skill="team-coordinate")                   |
|    args="task description"                        |
|    args="--role=coordinator"                      |
|    args="--role=<dynamic> --session=<path>"       |
+-------------------+-------------------------------+
                    | Role Router
          +---- --role present? ----+
          | NO                      | YES
          v                         v
   Orchestration Mode          Role Dispatch
   (auto -> coordinator)       (route to role file)
          |                         |
          v                 +-------+-------+
     coordinator            | --role=coordinator?
     (built-in)             |
                      YES   |   NO
                       v    |    v
                  built-in  |  Dynamic Role
                  role.md   |  <session>/roles/<role>.md

Subagents (callable by any role, not team members):
  [discuss-subagent] - multi-perspective critique (dynamic perspectives)
  [explore-subagent] - codebase exploration with cache
```

## Role Router

### Input Parsing

Parse `$ARGUMENTS` to extract `--role` and `--session`. If no `--role` -> Orchestration Mode (auto route to coordinator).

### Role Registry

Only the coordinator is statically registered. All other roles are dynamic, stored in `team-session.json#roles`.

| Role | File | Type |
|------|------|------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | built-in orchestrator |
| (dynamic) | `<session>/roles/<role-name>.md` | runtime-generated worker |

> **COMPACT PROTECTION**: Role files are execution documents. After context compression, role instructions become summaries only -- **MUST immediately `Read` the role.md to reload before continuing**. Never execute any Phase based on summaries.

### Subagent Registry

| Subagent | Spec | Callable By | Purpose |
|----------|------|-------------|---------|
| discuss | [subagents/discuss-subagent.md](subagents/discuss-subagent.md) | any role | Multi-perspective critique (dynamic perspectives) |
| explore | [subagents/explore-subagent.md](subagents/explore-subagent.md) | any role | Codebase exploration with cache |

### Dispatch

1. Extract `--role` and `--session` from arguments
2. If no `--role` -> route to coordinator (Orchestration Mode)
3. If `--role=coordinator` -> Read built-in `roles/coordinator/role.md` -> Execute its phases
4. If `--role=<other>`:
   - **`--session` is REQUIRED** for dynamic roles. Error if not provided.
   - Read `<session>/roles/<role>.md` -> Execute its phases
   - If role file not found at path -> Error with expected path

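The four dispatch steps can be sketched as a small routing function. This is a minimal illustration, not the skill's implementation: the regex-based flag parsing, function names, and error message are assumptions.

```python
import re

def parse_flag(arguments, flag):
    """Extract --<flag>=<value> from a raw argument string (simple regex parse)."""
    m = re.search(rf"--{flag}=(\S+)", arguments)
    return m.group(1) if m else None

def dispatch(arguments):
    """Return (role_file, mode) following the four dispatch steps above."""
    role = parse_flag(arguments, "role")
    session = parse_flag(arguments, "session")
    if role is None:
        # Step 2: no --role -> Orchestration Mode, auto-route to coordinator
        return "roles/coordinator/role.md", "orchestration"
    if role == "coordinator":
        # Step 3: built-in coordinator role file
        return "roles/coordinator/role.md", "role"
    # Step 4: dynamic role -- --session is REQUIRED
    if session is None:
        raise ValueError(f"--session is required for dynamic role '{role}'")
    return f"{session}/roles/{role}.md", "role"
```

A missing dynamic role file at the returned path would then raise the "error with expected path" case from the table.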
### Orchestration Mode

When invoked without `--role`, the coordinator auto-starts; the user just provides a task description.

**Invocation**: `Skill(skill="team-coordinate", args="task description")`

**Lifecycle**:
```
User provides task description
  -> coordinator Phase 1: task analysis (detect capabilities, build dependency graph)
  -> coordinator Phase 2: generate roles + initialize session
  -> coordinator Phase 3: create task chain from dependency graph
  -> coordinator Phase 4: spawn first batch workers (background) -> STOP
  -> Worker executes -> SendMessage callback -> coordinator advances next step
  -> Loop until pipeline complete -> Phase 5 report
```

**User Commands** (wake paused coordinator):

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |

---

## Shared Infrastructure

The following templates apply to all worker roles. Each generated role.md only needs to define the **Phase 2-4** role-specific logic.

### Worker Phase 1: Task Discovery (shared by all workers)

On startup, each worker executes the same task discovery flow:

1. Call `TaskList()` to get all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No tasks -> idle wait
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress

**Resume Artifact Check** (prevents duplicate output after resume):
- Check if this task's output artifacts already exist
- Artifacts complete -> skip to Phase 5, report completion
- Artifacts incomplete or missing -> normal Phase 2-4 execution

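The discovery filter in step 2 can be sketched as a single predicate over the `TaskList()` output. Illustrative only: the task-record field names (`subject`, `owner`, `status`, `blockedBy`) are assumed from this document's usage, not a confirmed API shape.

```python
def discover_tasks(tasks, prefix, role):
    """Phase 1 filter over TaskList() output: tasks this worker may claim."""
    return [
        t for t in tasks
        if t["subject"].startswith(prefix)   # matches this role's task prefix
        and t["owner"] == role               # owned by this role
        and t["status"] == "pending"         # not yet started
        and not t.get("blockedBy")           # no unresolved dependencies
    ]
```

An empty result corresponds to the "idle wait" branch; otherwise the worker takes the first match through `TaskGet` / `TaskUpdate`.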
### Worker Phase 5: Report + Fast-Advance (shared by all workers)

Task completion, with an optional fast-advance that skips the coordinator round-trip:

1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log the message
   - Params: operation="log", team=**<session-id>**, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
   - **`team` must be the session ID** (e.g., `TC-my-project-2026-02-27`), NOT the team name. Extract it from the task description's `Session:` field -> take the folder name.
   - **CLI fallback**: `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
2. **TaskUpdate**: Mark task completed
3. **Fast-Advance Check**:
   - Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
   - If exactly 1 ready task AND its owner matches a simple successor pattern -> **spawn it directly** (skip coordinator)
   - Otherwise -> **SendMessage** to coordinator for orchestration
4. **Loop**: Back to Phase 1 to check for the next task

**Fast-Advance Rules**:

| Condition | Action |
|-----------|--------|
| Same-prefix successor (Inner Loop role) | Do not spawn; main agent inner loop (Phase 5-L) |
| 1 ready task, simple linear successor, different prefix | Spawn directly via Task(run_in_background: true) |
| Multiple ready tasks (parallel window) | SendMessage to coordinator (needs orchestration) |
| No ready tasks + others running | SendMessage to coordinator (status update) |
| No ready tasks + nothing running | SendMessage to coordinator (pipeline may be complete) |

**Fast-advance failure recovery**: If a fast-advanced task fails, the coordinator detects it as an orphaned in_progress task on the next `resume`/`check` and resets it to pending for re-spawn. Self-healing. See [monitor.md](roles/coordinator/commands/monitor.md).

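The fast-advance decision in step 3 can be sketched as follows. This is a simplified illustration: "simple successor pattern" is reduced here to a prefix comparison, and all the SendMessage rows of the rules table collapse into one return value.

```python
def fast_advance_action(tasks, my_prefix):
    """Phase 5 step 3: choose between inner loop, direct spawn, and coordinator callback."""
    completed = {t["id"] for t in tasks if t["status"] == "completed"}
    # ready = pending tasks whose blockedBy are ALL completed
    ready = [
        t for t in tasks
        if t["status"] == "pending"
        and all(dep in completed for dep in t.get("blockedBy", []))
    ]
    if len(ready) == 1:
        if ready[0]["id"].startswith(my_prefix):
            return "inner_loop"      # same-prefix successor: handle in Phase 5-L
        return "spawn_directly"      # simple linear successor: skip coordinator
    # multiple ready tasks, or none: the coordinator must orchestrate / be updated
    return "send_message"
```

Per the recovery note above, a wrong `spawn_directly` decision is not fatal: the coordinator reconciles orphaned tasks on its next beat.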
### Worker Inner Loop (roles with multiple same-prefix serial tasks)

When a role has **2+ serial same-prefix tasks**, it loops internally instead of spawning new agents:

**Inner Loop flow**:

```
Phase 1: Discover task (first time)
|
+- Found task -> Phase 2-3: Load context + Execute work
|       |
|       v
|   Phase 4: Validation (+ optional Inline Discuss)
|       |
|       v
|   Phase 5-L: Loop Completion
|       |
|       +- TaskUpdate completed
|       +- team_msg log
|       +- Accumulate summary to context_accumulator
|       |
|       +- More same-prefix tasks?
|       |    +- YES -> back to Phase 1 (inner loop)
|       |    +- NO -> Phase 5-F: Final Report
|       |
|       +- Interrupt conditions?
|            +- consensus_blocked HIGH -> SendMessage -> STOP
|            +- Errors >= 3 -> SendMessage -> STOP
|
+- Phase 5-F: Final Report
     +- SendMessage (all task summaries)
     +- STOP
```

**Phase 5-L vs Phase 5-F**:

| Step | Phase 5-L (looping) | Phase 5-F (final) |
|------|---------------------|-------------------|
| TaskUpdate completed | YES | YES |
| team_msg log | YES | YES |
| Accumulate summary | YES | - |
| SendMessage to coordinator | NO | YES (all tasks summary) |
| Fast-Advance to next prefix | - | YES (check cross-prefix successors) |

### Inline Discuss Protocol (optional for any role)

After completing its primary output, a role may call the discuss subagent inline. Unlike v4's fixed perspective definitions, team-coordinate uses **dynamic perspectives** specified by the coordinator when generating each role.

```
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: "Discuss <round-id>",
  prompt: <see subagents/discuss-subagent.md for prompt template>
})
```

**Consensus handling**:

| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | SendMessage with structured format. Do NOT self-revise. |
| consensus_blocked | MEDIUM | SendMessage with warning. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |

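The verdict table maps directly to a small decision function. A minimal sketch, assuming the verdict and severity strings shown above; the return labels are hypothetical names for the role's next step.

```python
def consensus_action(verdict, severity=None):
    """Map a discuss-subagent verdict to the role's next step (per the table above)."""
    if verdict == "consensus_reached":
        return "proceed"                  # include action items, go to Phase 5
    # verdict is consensus_blocked: behavior depends on severity
    if severity == "HIGH":
        return "escalate"                 # SendMessage, do NOT self-revise
    if severity == "MEDIUM":
        return "proceed_with_warning"     # SendMessage a warning, proceed normally
    return "proceed"                      # LOW: treat as consensus_reached with notes
```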
### Shared Explore Utility

Any role needing codebase context calls the explore subagent:

```
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore <angle>",
  prompt: <see subagents/explore-subagent.md for prompt template>
})
```

**Cache**: Results are stored in `explorations/` with `cache-index.json`. Always check the cache before exploring.

### Wisdom Accumulation (all roles)

Cross-task knowledge accumulation. The coordinator creates the `wisdom/` directory at session init.

**Directory**:
```
<session-folder>/wisdom/
+-- learnings.md   # Patterns and insights
+-- decisions.md   # Design and strategy decisions
+-- issues.md      # Known risks and issues
```

**Worker load** (Phase 2): Extract `Session: <path>` from the task description, read the wisdom files.
**Worker contribute** (Phase 4/5): Write discoveries to the corresponding wisdom files.

### Role Isolation Rules

| Allowed | Prohibited |
|---------|-----------|
| Process own prefix tasks | Process other roles' prefix tasks |
| SendMessage to coordinator | Directly communicate with other workers |
| Use tools appropriate to responsibility | Create tasks for other roles |
| Call discuss/explore subagents | Modify resources outside own scope |
| Fast-advance simple successors | Spawn parallel worker batches |
| Report capability_gap to coordinator | Attempt work outside scope |

The coordinator is additionally prohibited from: directly writing or modifying deliverable artifacts, calling implementation subagents directly, and directly executing analysis/tests/reviews.

---

## Cadence Control

**Beat model**: Event-driven; each beat = coordinator wake -> process -> spawn -> STOP.

```
Beat Cycle (single beat)
======================================================================
Event                 Coordinator                     Workers
----------------------------------------------------------------------
callback/resume -->   +- handleCallback --+
                      |  mark completed   |
                      |  check pipeline   |
                      +- handleSpawnNext -+
                      |  find ready tasks |
                      |  spawn workers ---+--> [Worker A] Phase 1-5
                      |  (parallel OK) ---+--> [Worker B] Phase 1-5
                      +- STOP (idle) -----+          |
                                                     |
callback <-------------------------------------------+
(next beat)           SendMessage + TaskUpdate(completed)
======================================================================

Fast-Advance (skips coordinator for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
  +- 1 ready task? simple successor? --> spawn Worker B directly
  +- complex case? --> SendMessage to coordinator
======================================================================
```

**Pipelines are dynamic**: Unlike v4's predefined pipeline beat views (spec-only, impl-only, etc.), team-coordinate pipelines are generated per-task from the dependency graph. The beat model is the same -- only the pipeline shape varies.

---

## Coordinator Spawn Template

### Standard Worker (single-task role)

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-coordinate", args="--role=<role> --session=<session-folder>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- After task completion, check for fast-advance opportunity (see SKILL.md Phase 5)

## Workflow
1. Call Skill -> get role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg(team=<session-id>) + SendMessage results to coordinator
4. TaskUpdate completed -> check next task or fast-advance`
})
```

### Inner Loop Worker (multi-task role)

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker (inner loop)",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-coordinate", args="--role=<role> --session=<session-folder>")

Current requirement: <task-description>
Session: <session-folder>

## Inner Loop Mode
You will handle ALL <PREFIX>-* tasks in this session, not just the first one.
After completing each task, loop back to find the next <PREFIX>-* task.
Only SendMessage to coordinator when:
- All <PREFIX>-* tasks are done
- A consensus_blocked HIGH occurs
- Errors accumulate (>= 3)

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- Use subagent calls for heavy work, retain summaries in context`
})
```

---

## Session Directory

```
.workflow/.team/TC-<slug>-<date>/
+-- team-session.json     # Session state + dynamic role registry
+-- task-analysis.json    # Phase 1 output: capabilities, dependency graph
+-- roles/                # Dynamic role definitions (generated Phase 2)
|   +-- <role-1>.md
|   +-- <role-2>.md
+-- artifacts/            # All MD deliverables from workers
|   +-- <artifact>.md
+-- shared-memory.json    # Cross-role state store
+-- wisdom/               # Cross-task knowledge
|   +-- learnings.md
|   +-- decisions.md
|   +-- issues.md
+-- explorations/         # Shared explore cache
|   +-- cache-index.json
|   +-- explore-<angle>.json
+-- discussions/          # Inline discuss records
|   +-- <round>.md
+-- .msg/                 # Team message bus logs
```

### team-session.json Schema

```json
{
  "session_id": "TC-<slug>-<date>",
  "task_description": "<original user input>",
  "status": "active | paused | completed",
  "team_name": "<team-name>",
  "roles": [
    {
      "name": "<role-name>",
      "prefix": "<PREFIX>",
      "responsibility_type": "<type>",
      "inner_loop": false,
      "role_file": "roles/<role-name>.md"
    }
  ],
  "pipeline": {
    "dependency_graph": {},
    "tasks_total": 0,
    "tasks_completed": 0
  },
  "active_workers": [],
  "completed_tasks": [],
  "created_at": "<timestamp>"
}
```

---

## Session Resume

The coordinator supports `--resume` / `--continue` for interrupted sessions:

1. Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
2. Multiple matches -> AskUserQuestion for selection
3. Audit TaskList -> reconcile session state <-> task status
4. Reset in_progress -> pending (interrupted tasks)
5. Rebuild team and spawn only the needed workers
6. Create missing tasks with correct blockedBy
7. Kick first executable task -> Phase 4 coordination loop

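Step 4 of the resume flow can be sketched as a small reconciliation pass. Illustrative only; the function name and in-memory task shape are assumptions.

```python
def reconcile(tasks):
    """Resume step 4: interrupted in_progress tasks are reset to pending for re-spawn."""
    reset_ids = []
    for t in tasks:
        if t["status"] == "in_progress":
            t["status"] = "pending"   # orphaned by the interruption; safe to re-run
            reset_ids.append(t["id"])
    return reset_ids
```

This is the same self-healing mechanism that recovers failed fast-advance spawns: any orphaned in_progress task simply becomes eligible again.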
---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Check if `<session>/roles/<role>.md` exists; error with message if not |
| Missing --role arg | Orchestration Mode -> coordinator |
| Dynamic role file not found | Error with expected path; coordinator may need to regenerate |
| Built-in role file not found | Error with expected path |
| Command file not found | Fall back to inline execution |
| Discuss subagent fails | Role proceeds without discuss, logs warning |
| Explore cache corrupt | Clear cache, re-explore |
| Fast-advance spawns wrong task | Coordinator reconciles on next callback |
| Session path not provided (dynamic role) | Error: `--session` is required for dynamic roles. Coordinator must always pass `--session=<session-folder>` when spawning workers. |
| capability_gap reported | Coordinator generates new role via handleAdapt |

@@ -1,197 +0,0 @@
# Command: analyze-task

## Purpose

Parse the user's task description -> detect required capabilities -> build a dependency graph -> design dynamic roles. This replaces v4's static mode selection with intelligent task decomposition.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | User input from Phase 1 | Yes |
| Clarification answers | AskUserQuestion results (if any) | No |
| Session folder | From coordinator Phase 2 | Yes |

## Phase 3: Task Analysis

### Step 1: Signal Detection

Scan the task description for capability keywords:

| Signal | Keywords | Capability | Prefix | Responsibility Type |
|--------|----------|------------|--------|---------------------|
| Research | investigate, explore, compare, survey, find, research, discover, benchmark, study | researcher | RESEARCH | orchestration |
| Writing | write, draft, document, article, report, blog, describe, explain, summarize, content | writer | DRAFT | code-gen (docs) |
| Coding | implement, build, code, fix, refactor, develop, create app, program, migrate, port | developer | IMPL | code-gen (code) |
| Design | design, architect, plan, structure, blueprint, model, schema, wireframe, layout | designer | DESIGN | orchestration |
| Analysis | analyze, review, audit, assess, evaluate, inspect, examine, diagnose, profile | analyst | ANALYSIS | read-only |
| Testing | test, verify, validate, QA, quality, check, assert, coverage, regression | tester | TEST | validation |
| Planning | plan, breakdown, organize, schedule, decompose, roadmap, strategy, prioritize | planner | PLAN | orchestration |

**Multi-match**: A task may trigger multiple capabilities. E.g., "research and write a technical article" triggers both `researcher` and `writer`.

**No match**: If no keywords match, default to a single `general` capability with the `TASK` prefix.

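Signal detection can be sketched as a keyword scan. A minimal sketch with a truncated keyword table and naive substring matching (the real table above is longer, and substring matching can over-trigger, e.g. "test" inside longer words); the function name is hypothetical.

```python
# Truncated version of the Step 1 signal table, for illustration
SIGNALS = {
    "researcher": ["investigate", "explore", "compare", "survey", "research"],
    "writer":     ["write", "draft", "document", "article", "report"],
    "developer":  ["implement", "build", "code", "fix", "refactor"],
    "designer":   ["design", "architect", "blueprint", "schema"],
    "analyst":    ["analyze", "review", "audit", "assess", "evaluate"],
    "tester":     ["test", "verify", "validate", "coverage"],
    "planner":    ["plan", "breakdown", "roadmap", "prioritize"],
}

def detect_capabilities(task_description):
    """Step 1: scan the description; fall back to `general` on no match."""
    text = task_description.lower()
    found = [cap for cap, words in SIGNALS.items() if any(w in text for w in words)]
    return found or ["general"]
```

The multi-match case from the text falls out naturally: one description can light up several capabilities at once.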
### Step 2: Artifact Inference

Each capability produces default output artifacts:

| Capability | Default Artifact | Format |
|------------|-----------------|--------|
| researcher | Research findings | `<session>/artifacts/research-findings.md` |
| writer | Written document(s) | `<session>/artifacts/<doc-name>.md` |
| developer | Code implementation | Source files + `<session>/artifacts/implementation-summary.md` |
| designer | Design document | `<session>/artifacts/design-spec.md` |
| analyst | Analysis report | `<session>/artifacts/analysis-report.md` |
| tester | Test results | `<session>/artifacts/test-report.md` |
| planner | Execution plan | `<session>/artifacts/execution-plan.md` |

### Step 3: Dependency Graph Construction

Build a DAG of work streams using these inference rules:

| Pattern | Shape | Example |
|---------|-------|---------|
| Knowledge -> Creation | research blockedBy nothing, creation blockedBy research | RESEARCH-001 -> DRAFT-001 |
| Design -> Build | design first, build after | DESIGN-001 -> IMPL-001 |
| Build -> Validate | build first, test/review after | IMPL-001 -> TEST-001 + ANALYSIS-001 |
| Plan -> Execute | plan first, execute after | PLAN-001 -> IMPL-001 |
| Independent parallel | no dependency between them | DRAFT-001 \|\| IMPL-001 |
| Analysis -> Revise | analysis finds issues, revise artifact | ANALYSIS-001 -> DRAFT-002 |

**Graph construction algorithm**:

1. Group capabilities by natural ordering: knowledge-gathering -> design/planning -> creation -> validation
2. Within the same tier: capabilities are parallel unless the task description implies a sequence
3. Between tiers: downstream blockedBy upstream
4. Single-capability tasks: one node, no dependencies

**Natural ordering tiers**:

| Tier | Capabilities | Description |
|------|-------------|-------------|
| 0 | researcher, planner | Knowledge gathering / planning |
| 1 | designer | Design (requires context from tier 0 if present) |
| 2 | writer, developer | Creation (requires design/plan if present) |
| 3 | analyst, tester | Validation (requires artifacts to validate) |

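The tier-based construction can be sketched as below. This is a deliberately coarse illustration: it blocks each task on every task in strictly lower tiers, whereas the inference rules above may produce tighter edges (e.g. a validator blocked only by the artifact it validates); function and parameter names are assumptions.

```python
# Natural ordering tiers from the table above
TIERS = {"researcher": 0, "planner": 0, "designer": 1,
         "writer": 2, "developer": 2, "analyst": 3, "tester": 3}

def build_graph(capability_tasks):
    """capability_tasks: {capability: [task_id, ...]}.
    Returns {task_id: [blockedBy task ids]} with downstream blockedBy upstream."""
    graph = {}
    for cap, task_ids in capability_tasks.items():
        # every task from a strictly lower tier blocks this capability's tasks
        upstream = [
            tid
            for other, ids in capability_tasks.items()
            if TIERS[other] < TIERS[cap]
            for tid in ids
        ]
        for tid in task_ids:
            graph[tid] = upstream
    return graph
```

Same-tier capabilities end up with identical `blockedBy` lists and no edges between each other, i.e. they run in parallel, matching rule 2.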
### Step 4: Complexity Scoring

| Factor | Weight | Condition |
|--------|--------|-----------|
| Capability count | +1 each | Number of distinct capabilities |
| Cross-domain factor | +2 | Capabilities span 3+ tiers |
| Parallel tracks | +1 each | Independent parallel work streams |
| Serial depth | +1 per level | Longest dependency chain length |

| Total Score | Complexity | Role Limit |
|-------------|------------|------------|
| 1-3 | Low | 1-2 roles |
| 4-6 | Medium | 2-3 roles |
| 7+ | High | 3-5 roles |

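The two scoring tables combine into one function. A minimal sketch; exactly how serial depth is counted (nodes vs. edges of the longest chain) is an assumption, and the function name is hypothetical.

```python
def complexity_score(capability_count, tiers_spanned, parallel_tracks, serial_depth):
    """Apply the factor weights, then map the total score to a complexity level."""
    score = capability_count          # +1 per distinct capability
    if tiers_spanned >= 3:
        score += 2                    # cross-domain factor
    score += parallel_tracks          # +1 per independent parallel track
    score += serial_depth             # +1 per level of the longest chain
    if score <= 3:
        return score, "low"
    if score <= 6:
        return score, "medium"
    return score, "high"
```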
### Step 5: Role Minimization

Apply merging rules to reduce the role count:

| Rule | Condition | Action |
|------|-----------|--------|
| Absorb trivial | Capability has exactly 1 task AND no explore needed | Merge into nearest related role |
| Merge overlap | Two capabilities share >50% keywords from task description | Combine into single role |
| Cap at 5 | More than 5 roles after initial assignment | Merge lowest-priority pairs (see merge priority below) |

**Merge priority** (when two must merge, keep the higher-priority one as the role name):

1. developer (code-gen is hardest to merge)
2. researcher (context-gathering is foundational)
3. writer (document generation has specific patterns)
4. designer (design has specific outputs)
5. analyst (analysis can be absorbed by the reviewer pattern)
6. planner (planning can be merged with researcher or designer)
7. tester (can be absorbed by developer or analyst)

**IMPORTANT**: Even after merging, the coordinator MUST spawn workers for all roles. Single-role tasks still use the team architecture.

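The cap-at-5 rule with the merge priority list can be sketched as follows. Illustrative only: it returns the surviving role names and does not model which tasks get re-homed onto the absorbing role.

```python
# Merge priority from the numbered list above (highest first)
MERGE_PRIORITY = ["developer", "researcher", "writer",
                  "designer", "analyst", "planner", "tester"]

def cap_roles(roles, limit=5):
    """Cap-at-5 rule: keep the highest-priority roles, merge the rest away."""
    ordered = sorted(roles, key=MERGE_PRIORITY.index)
    return ordered[:limit]   # roles beyond the limit are merged into survivors
```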
## Phase 4: Output

Write `<session-folder>/task-analysis.json`:

```json
{
  "task_description": "<original user input>",
  "capabilities": [
    {
      "name": "researcher",
      "prefix": "RESEARCH",
      "responsibility_type": "orchestration",
      "tasks": [
        { "id": "RESEARCH-001", "description": "..." }
      ],
      "artifacts": ["research-findings.md"]
    }
  ],
  "dependency_graph": {
    "RESEARCH-001": [],
    "DRAFT-001": ["RESEARCH-001"],
    "ANALYSIS-001": ["DRAFT-001"]
  },
  "roles": [
    {
      "name": "researcher",
      "prefix": "RESEARCH",
      "responsibility_type": "orchestration",
      "task_count": 1,
      "inner_loop": false
    },
    {
      "name": "writer",
      "prefix": "DRAFT",
      "responsibility_type": "code-gen (docs)",
      "task_count": 1,
      "inner_loop": false
    }
  ],
  "complexity": {
    "capability_count": 2,
    "cross_domain_factor": false,
    "parallel_tracks": 0,
    "serial_depth": 2,
    "total_score": 3,
    "level": "low"
  },
  "artifacts": [
    { "name": "research-findings.md", "producer": "researcher", "path": "artifacts/research-findings.md" },
    { "name": "article-draft.md", "producer": "writer", "path": "artifacts/article-draft.md" }
  ]
}
```

## Complexity Interpretation

**CRITICAL**: Complexity score is for **role design optimization**, NOT for skipping team workflow.

| Complexity | Team Structure | Coordinator Action |
|------------|----------------|--------------------|
| Low (1-2 roles) | Minimal team | Generate 1-2 roles, create team, spawn workers |
| Medium (2-3 roles) | Standard team | Generate roles, create team, spawn workers |
| High (3-5 roles) | Full team | Generate roles, create team, spawn workers |

**All complexity levels use team architecture**:
- Single-role tasks still spawn a worker via the Skill
- Coordinator NEVER executes task work directly
- Team infrastructure provides session management, message bus, fast-advance

**Purpose of complexity score**:
- ✅ Determine optimal role count (merge vs separate)
- ✅ Guide dependency graph design
- ✅ Inform user about task scope
- ❌ NOT for deciding whether to use team workflow

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No capabilities detected | Default to single `general` role with TASK prefix |
| Circular dependency in graph | Break cycle at lowest-tier edge, warn |
| Task description too vague | Return minimal analysis, coordinator will AskUserQuestion |
| All capabilities merge into one | Valid -- single-role execution via team worker |
# Command: dispatch

## Purpose

Create task chains from dynamic dependency graphs. Unlike v4's static mode-to-pipeline mapping, team-coordinate builds pipelines from the task-analysis.json produced by Phase 1.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task analysis | `<session-folder>/task-analysis.json` | Yes |
| Session file | `<session-folder>/team-session.json` | Yes |
| Role registry | `team-session.json#roles` | Yes |
| Scope | User requirements description | Yes |

## Phase 3: Task Chain Creation

### Workflow

1. **Read dependency graph** from `task-analysis.json#dependency_graph`
2. **Topologically sort** tasks to determine creation order
3. **Validate** all task owners exist in role registry
4. **For each task** (in topological order):

   ```
   TaskCreate({
     subject: "<PREFIX>-<NNN>",
     owner: "<role-name>",
     description: "<task description from task-analysis>\nSession: <session-folder>\nScope: <scope>\nInnerLoop: <true|false>",
     blockedBy: [<dependency-list from graph>],
     status: "pending"
   })
   ```

5. **Update team-session.json** with pipeline and tasks_total
6. **Validate** the created chain

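The topological ordering in step 2 and the cycle check from the error-handling rules can be sketched with Kahn's algorithm over the `dependency_graph` mapping (task -> list of blockers) from task-analysis.json; the function name and error handling here are illustrative:

```python
from collections import deque

def topo_order(graph):
    # graph: {task: [blockers]} as in task-analysis.json#dependency_graph
    indegree = {task: len(deps) for task, deps in graph.items()}
    # reverse edges: blocker -> tasks it unblocks
    unblocks = {task: [] for task in graph}
    for task, deps in graph.items():
        for dep in deps:
            unblocks[dep].append(task)
    queue = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while queue:
        task = queue.popleft()
        order.append(task)
        for nxt in unblocks[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(graph):
        raise ValueError("circular dependency detected")  # report cycle, halt task creation
    return order

graph = {"RESEARCH-001": [], "DRAFT-001": ["RESEARCH-001"], "ANALYSIS-001": ["DRAFT-001"]}
print(topo_order(graph))  # ['RESEARCH-001', 'DRAFT-001', 'ANALYSIS-001']
```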
### Task Description Template

Every task description includes the session path and the inner loop flag:

```
<task description>
Session: <session-folder>
Scope: <scope>
InnerLoop: <true|false>
```

### InnerLoop Flag Rules

| Condition | InnerLoop |
|-----------|-----------|
| Role has 2+ serial same-prefix tasks | true |
| Role has 1 task | false |
| Tasks are parallel (no dependency between them) | false |

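One way to derive the flag from the table above, assuming a role's tasks and the dependency graph are available as plain mappings (names and shapes are illustrative):

```python
def inner_loop_flag(role_tasks, graph):
    # role_tasks: subjects owned by one role; graph: {task: [blockers]}
    if len(role_tasks) < 2:
        return False  # a single task never needs the inner loop
    owned = set(role_tasks)
    # serial: at least one owned task is blocked by another owned task;
    # purely parallel same-prefix tasks stay False per the table
    return any(dep in owned for task in owned for dep in graph.get(task, []))

graph = {"DRAFT-001": [], "DRAFT-002": ["DRAFT-001"], "RESEARCH-001": []}
print(inner_loop_flag(["DRAFT-001", "DRAFT-002"], graph))  # True
print(inner_loop_flag(["RESEARCH-001"], graph))            # False
```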
### Dependency Validation

| Check | Criteria |
|-------|----------|
| No orphan tasks | Every task is reachable from at least one root |
| No circular deps | Topological sort succeeds without cycle |
| All owners valid | Every task owner exists in team-session.json#roles |
| All blockedBy valid | Every blockedBy references an existing task subject |
| Session reference | Every task description contains `Session: <session-folder>` |

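Two of these checks (valid owners, valid blockedBy) can be sketched as follows; the data shapes are assumptions for illustration, not the tool's actual API:

```python
def validate_chain(graph, owners, roles):
    # graph: {subject: [blockers]}, owners: {subject: role-name}, roles: registered role names
    errors = []
    for subject, deps in graph.items():
        if owners.get(subject) not in roles:
            errors.append(f"owner of {subject} not in role registry")
        for dep in deps:
            if dep not in graph:
                errors.append(f"{subject} blockedBy unknown task {dep}")
    return errors  # empty list means both checks pass

graph = {"RESEARCH-001": [], "DRAFT-001": ["RESEARCH-001"]}
owners = {"RESEARCH-001": "researcher", "DRAFT-001": "writer"}
print(validate_chain(graph, owners, {"researcher", "writer"}))  # []
```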
## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Task count | Matches dependency_graph node count |
| Dependencies | Every blockedBy references an existing task subject |
| Owner assignment | Each task owner is in role registry |
| Session reference | Every task description contains `Session:` |
| Pipeline integrity | No disconnected subgraphs (warn if found) |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Circular dependency detected | Report cycle, halt task creation |
| Owner not in role registry | Error, coordinator must fix roles first |
| TaskCreate fails | Log error, report to coordinator |
| Duplicate task subject | Skip creation, log warning |
| Empty dependency graph | Error, task analysis may have failed |
# Command: monitor

## Purpose

Event-driven pipeline coordination with the Spawn-and-Stop pattern. Adapted from v4 for dynamic roles -- role names are read from `team-session.json#roles` instead of being hardcoded. Includes `handleAdapt` for mid-pipeline capability gap handling.

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip coordinator for simple linear successors |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |

**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name` rather than a static list. This is the key difference from v4.

## Phase 3: Handler Routing

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine the handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |

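The priority routing above can be sketched as a first-match function; the handler names come from this document, the function and argument shapes are illustrative:

```python
def route(arguments, session_roles):
    # priority order mirrors the wake-up source table
    msg = arguments.lower()
    if any(f"[{role}]" in arguments for role in session_roles):
        return "handleCallback"
    if "capability_gap" in msg:
        return "handleAdapt"
    if "check" in msg or "status" in msg:
        return "handleCheck"
    if any(word in msg for word in ("resume", "continue", "next")):
        return "handleResume"
    return "handleSpawnNext"  # initial spawn after dispatch

print(route("[researcher] RESEARCH-001 done", ["researcher"]))  # handleCallback
print(route("status please", []))                               # handleCheck
```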
---

### Handler: handleCallback

Worker completed a task. Verify completion, update state, auto-advance.

```
Receive callback from [<role>]
+- Find matching active worker by role (from session.roles)
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
|  +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |  +- -> handleSpawnNext
|  +- NO -> progress message, do not advance -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
      +- Found completed -> process each -> handleSpawnNext
      +- None completed -> STOP
```

**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext

---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[coordinator] Pipeline Status
[coordinator] Progress: <completed>/<total> (<percent>%)

[coordinator] Execution Graph:
<visual representation of dependency graph with status icons>

done=completed  >>>=running  o=pending  .=not created

[coordinator] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]

[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

**Icon mapping**: completed=done, in_progress=>>>, pending=o, not created=.

**Graph rendering**: Read dependency_graph from task-analysis.json, render each node with its status icon. Show parallel branches side-by-side.

Then STOP.

---

### Handler: handleResume

Check active worker completion, process results, advance pipeline.

```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- status = completed -> mark done, log
   +- status = in_progress -> still running, log
   +- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```

---

### Handler: handleSpawnNext

Find all ready tasks, spawn workers in background, update session, STOP.

```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects

Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> Phase 5
+- HAS ready tasks -> for each:
   +- Is task owner an Inner Loop role AND that role already has an active_worker?
   |  +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
   |  +- NO -> normal spawn below
   +- TaskUpdate -> in_progress
   +- team_msg log -> task_unblocked (team=<session-id>, NOT team name)
   +- Spawn worker (see spawn tool call below)
   +- Add to session.active_workers
Update session file -> output summary -> STOP
```

**Spawn worker tool call** (one per ready task):

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker for <subject>",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: "<worker prompt from SKILL.md Coordinator Spawn Template>"
})
```

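The `readySubjects` computation this handler starts with can be sketched as follows, assuming TaskList() returns records with these fields (an assumption -- the real record shape may differ):

```python
def ready_subjects(tasks):
    # tasks: [{"subject": ..., "status": ..., "blockedBy": [...]}]
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    # ready = pending AND every blocker already completed
    return [t["subject"] for t in tasks
            if t["status"] == "pending"
            and all(blocker in completed for blocker in t["blockedBy"])]

tasks = [
    {"subject": "RESEARCH-001", "status": "completed", "blockedBy": []},
    {"subject": "DRAFT-001", "status": "pending", "blockedBy": ["RESEARCH-001"]},
    {"subject": "ANALYSIS-001", "status": "pending", "blockedBy": ["DRAFT-001"]},
]
print(ready_subjects(tasks))  # ['DRAFT-001']
```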
---

### Handler: handleAdapt

Handle mid-pipeline capability gap discovery. A worker reports `capability_gap` when it encounters work outside its scope.

**CONSTRAINT**: Maximum 5 worker roles per session (per coordinator/role.md). handleAdapt MUST enforce this limit.

```
Parse capability_gap message:
+- Extract: gap_description, requesting_role, suggested_capability
+- Validate gap is genuine:
|  +- Check existing roles in session.roles -> does any role cover this?
|     +- YES -> redirect: SendMessage to that role's owner -> STOP
|     +- NO -> genuine gap, proceed to role generation
+- CHECK ROLE COUNT LIMIT (MAX 5 ROLES):
|  +- Count current roles in session.roles
|  +- If count >= 5:
|     +- Attempt to merge new capability into existing role:
|     |  +- Find best-fit role by responsibility_type
|     +- If merge possible:
|     |  +- Update existing role file with new capability
|     |  +- Create task assigned to existing role
|     |  +- Log via team_msg (type: warning, summary: "Capability merged into existing role")
|     |  +- STOP
|     +- If merge NOT possible:
|        +- PAUSE session
|        +- Report to user:
|           "Role limit (5) reached. Cannot generate new role for: <gap_description>
|            Options:
|            1. Manually extend an existing role
|            2. Re-run team-coordinate with refined task to consolidate roles
|            3. Accept limitation and continue without this capability"
|        +- STOP
+- Generate new role:
|  1. Read specs/role-template.md
|  2. Fill template with capability details from gap description
|  3. Write new role file to <session-folder>/roles/<new-role>.md
|  4. Add to session.roles[]
+- Create new task(s):
|  TaskCreate({
|    subject: "<NEW-PREFIX>-001",
|    owner: "<new-role>",
|    description: "<gap_description>\nSession: <session-folder>\nInnerLoop: false",
|    blockedBy: [<requesting task if sequential>],
|    status: "pending"
|  })
+- Update team-session.json: add role, increment tasks_total
+- Spawn new worker -> STOP
```

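The role-count gate at the heart of this handler can be sketched as follows; the merge heuristic (match on responsibility_type) follows the tree above, while the function name and data shapes are assumptions:

```python
MAX_ROLES = 5  # per coordinator/role.md

def can_add_role(session_roles, gap_type):
    # returns (action, target_role): generate a new role, merge into an
    # existing one, or pause and escalate to the user
    if len(session_roles) < MAX_ROLES:
        return ("generate", None)
    for role in session_roles:  # best-fit by responsibility_type
        if role["responsibility_type"] == gap_type:
            return ("merge", role["name"])
    return ("pause", None)  # role limit reached, no merge candidate

roles = [{"name": "researcher", "responsibility_type": "orchestration"}]
print(can_add_role(roles, "code-gen"))  # ('generate', None)
```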
---

### Worker Failure Handling

When a worker has an unexpected status (not completed, not in_progress):

1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume

### Fast-Advance Failure Recovery

When the coordinator detects that a fast-advanced task has failed (task in_progress but no callback and the worker gone):

```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Original fast-advancing worker has already completed and exited
+- Resolution:
   1. TaskUpdate -> reset task to pending
   2. Remove stale active_worker entry (if any)
   3. Log via team_msg (type: error, summary: "Fast-advanced task <ID> failed, resetting for retry")
   4. -> handleSpawnNext (will re-spawn the task normally)
```

**Detection in handleResume**:

```
For each in_progress task in TaskList():
+- Has matching active_worker? -> normal, skip
+- No matching active_worker? -> orphaned (likely fast-advance failure)
   +- Check creation time: if > 5 minutes with no progress callback
      +- Reset to pending -> handleSpawnNext
```

**Prevention**: Fast-advance failures are self-healing. The coordinator reconciles orphaned tasks on every `resume`/`check` cycle.

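The orphan check can be sketched as follows, assuming each in_progress task records a last-activity timestamp (an assumption -- the source only specifies the 5-minute threshold):

```python
import time

STALE_SECONDS = 5 * 60  # "> 5 minutes with no progress callback"

def orphaned_tasks(in_progress, active_workers, now=None):
    # in_progress: [{"subject": ..., "last_activity": epoch-seconds}]
    # active_workers: [{"task_subject": ...}] from team-session.json
    now = now if now is not None else time.time()
    tracked = {w["task_subject"] for w in active_workers}
    return [t["subject"] for t in in_progress
            if t["subject"] not in tracked            # no matching active_worker
            and now - t["last_activity"] > STALE_SECONDS]

tasks = [{"subject": "DRAFT-002", "last_activity": 0}]
print(orphaned_tasks(tasks, [], now=600))  # ['DRAFT-002']
```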
### Consensus-Blocked Handling

When a worker reports `consensus_blocked` in its callback:

```
handleCallback receives message with consensus_blocked flag
+- Extract: divergence_severity, blocked_round, action_recommendation
+- Route by severity:
|
+- severity = HIGH
|  +- Create REVISION task:
|     +- Same role, same doc type, incremented suffix (e.g., DRAFT-001-R1)
|     +- Description includes: divergence details + action items from discuss
|     +- blockedBy: none (immediate execution)
|     +- Max 1 revision per task (DRAFT-001 -> DRAFT-001-R1, no R2)
|     +- If already revised once -> PAUSE, escalate to user
|     +- Update session: mark task as "revised", log revision chain
|
+- severity = MEDIUM
|  +- Proceed with warning: include divergence in next task's context
|  +- Log action items to wisdom/issues.md
|  +- Normal handleSpawnNext
|
+- severity = LOW
   +- Proceed normally: treat as consensus_reached with notes
   +- Normal handleSpawnNext
```

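The revision-suffix rule for the HIGH branch (DRAFT-001 -> DRAFT-001-R1, max one revision) can be sketched as:

```python
import re

def revision_subject(subject):
    # DRAFT-001 -> "DRAFT-001-R1"; DRAFT-001-R1 -> None (already revised
    # once, so no R2: pause and escalate to the user instead)
    if re.search(r"-R\d+$", subject):
        return None
    return subject + "-R1"

print(revision_subject("DRAFT-001"))     # DRAFT-001-R1
print(revision_subject("DRAFT-001-R1"))  # None
```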
## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task owners exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
| Fast-advance orphan check | in_progress tasks without active_worker entry -> reset to pending |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Coordinator reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Dynamic role file not found | Error, coordinator must regenerate from task-analysis |
| capability_gap from completed role | Validate gap, generate role if genuine |
| capability_gap when role limit (5) reached | Attempt merge into existing role, else pause for user |
| consensus_blocked HIGH | Create revision task (max 1) or pause for user |
| consensus_blocked MEDIUM | Proceed with warning, log to wisdom/issues.md |
# Coordinator Role

Orchestrate the team-coordinate workflow: task analysis, dynamic role generation, task dispatching, progress monitoring, session state. The sole built-in role -- all worker roles are generated at runtime.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Analyze task -> Generate roles -> Create team -> Dispatch tasks -> Monitor progress -> Report results

## Boundaries

### MUST
- Analyze user task to detect capabilities and build dependency graph
- Dynamically generate worker roles from specs/role-template.md
- Create team and spawn worker subagents in background
- Dispatch tasks with proper dependency chains from task-analysis.json
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
- Handle capability_gap reports (generate new roles mid-pipeline)
- Handle consensus_blocked HIGH verdicts (create revision tasks or pause)
- Detect fast-advance orphans on resume/check and reset to pending

### MUST NOT
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Skip dependency validation when creating task chains
- Generate more than 5 worker roles (merge if exceeded)
- Override consensus_blocked HIGH without user confirmation

> **Core principle**: coordinator is the orchestrator, not the executor. All actual work is delegated to dynamically generated worker roles.

---

## Command Execution Protocol

When coordinator needs to execute a command (analyze-task, dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** - NOT separate agents or subprocesses
4. **Execute synchronously** - complete the command workflow before proceeding

Example:
```
Phase 1 needs task analysis
-> Read roles/coordinator/commands/analyze-task.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Analysis)
-> Execute Phase 4 (Output)
-> Continue to Phase 2
```

---

## Entry Router

When coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| Interrupted session | Active/paused session exists in `.workflow/.team/TC-*` | -> Phase 0 (Resume Check) |
| New session | None of above | -> Phase 1 (Task Analysis) |

For callback/check/resume/adapt: load `commands/monitor.md` and execute the appropriate handler, then STOP.

### Router Implementation

1. **Load session context** (if exists):
   - Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
   - If found, extract `session.roles[].name` for callback detection

2. **Parse $ARGUMENTS** for detection keywords

3. **Route to handler**:
   - For monitor handlers: Read `commands/monitor.md`, execute matched handler section, STOP
   - For Phase 0: Execute Session Resume Check below
   - For Phase 1: Execute Task Analysis below

---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:
1. Scan `.workflow/.team/TC-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session.completed_tasks <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Detect fast-advance orphans (in_progress without recent activity) -> reset to pending
5. Determine remaining pipeline from reconciled state
6. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
7. Create missing tasks with correct blockedBy dependencies
8. Verify dependency chain integrity
9. Update session file with reconciled state
10. Kick first executable task's worker -> Phase 4

---

## Phase 1: Task Analysis

**Objective**: Parse user task, detect capabilities, build dependency graph, design roles.

**Workflow**:

1. **Parse user task description**

2. **Clarify if ambiguous** via AskUserQuestion:
   - What is the scope? (specific files, module, project-wide)
   - What deliverables are expected? (documents, code, analysis reports)
   - Any constraints? (timeline, technology, style)

3. **Delegate to `commands/analyze-task.md`**:
   - Signal detection: scan keywords -> infer capabilities
   - Artifact inference: each capability -> default output type (.md)
   - Dependency graph: build DAG of work streams
   - Complexity scoring: count capabilities, cross-domain factor, parallel tracks
   - Role minimization: merge overlapping, absorb trivial, cap at 5

4. **Output**: Write `<session>/task-analysis.json`

**Success**: Task analyzed, capabilities detected, dependency graph built, roles designed.

**CRITICAL - Team Workflow Enforcement**:

Regardless of complexity score or role count, coordinator MUST:
- ✅ **Always proceed to Phase 2** (generate roles)
- ✅ **Always create team** and spawn workers
- ❌ **NEVER execute task work directly**, even for single-role low-complexity tasks
- ❌ **NEVER skip team workflow** based on complexity assessment

**Single-role execution is still team-based** - just with one worker. The team architecture provides:
- Consistent message bus communication
- Session state management
- Artifact tracking
- Fast-advance capability
- Resume/recovery mechanisms

---

## Phase 2: Generate Roles + Initialize Session

**Objective**: Create session, generate dynamic role files, initialize shared infrastructure.

**Workflow**:

1. **Generate session ID**: `TC-<slug>-<date>` (slug from first 3 meaningful words of task)

2. **Create session folder structure**:
   ```
   .workflow/.team/<session-id>/
   +-- roles/
   +-- artifacts/
   +-- wisdom/
   +-- explorations/
   +-- discussions/
   +-- .msg/
   ```

3. **Call TeamCreate** with team name derived from session ID

4. **Read `specs/role-template.md`** + `task-analysis.json`

5. **For each role in task-analysis.json#roles**:
   - Fill role template with:
     - role_name, prefix, responsibility_type from analysis
     - Phase 2-4 content from responsibility type reference sections in template
     - inner_loop flag from analysis (true if role has 2+ serial tasks)
     - Task-specific instructions from task description
   - Write generated role file to `<session>/roles/<role-name>.md`

6. **Register roles** in team-session.json#roles

7. **Initialize shared infrastructure**:
   - `wisdom/learnings.md`, `wisdom/decisions.md`, `wisdom/issues.md` (empty with headers)
   - `explorations/cache-index.json` (`{ "entries": [] }`)
   - `shared-memory.json` (`{}`)
   - `discussions/` (empty directory)

8. **Write team-session.json** with: session_id, task_description, status="active", roles, pipeline (empty), active_workers=[], created_at

**Success**: Session created, role files generated, shared infrastructure initialized.

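The session ID format from step 1 can be sketched as follows; the stopword list is an assumption, only the `TC-<slug>-<date>` shape comes from this document:

```python
import datetime
import re

# illustrative stopword list for picking "meaningful" words
STOPWORDS = {"a", "an", "the", "for", "to", "of", "and", "about"}

def session_id(task, today=None):
    words = [w.lower() for w in re.findall(r"[A-Za-z]+", task)
             if w.lower() not in STOPWORDS]
    slug = "-".join(words[:3]) or "task"  # first 3 meaningful words
    date = (today or datetime.date.today()).isoformat()
    return f"TC-{slug}-{date}"

print(session_id("Write an article about rate limiting", today=datetime.date(2025, 1, 15)))
# TC-write-article-rate-2025-01-15
```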
---

## Phase 3: Create Task Chain

**Objective**: Dispatch tasks from the dependency graph with correct dependency chains.

Delegate to `commands/dispatch.md`, which creates the full task chain:
1. Reads dependency_graph from task-analysis.json
2. Topologically sorts tasks
3. Creates tasks via TaskCreate with correct blockedBy
4. Assigns owner based on role mapping from task-analysis.json
5. Includes `Session: <session-folder>` in every task description
6. Sets InnerLoop flag for multi-task roles
7. Updates team-session.json with pipeline and tasks_total

**Success**: All tasks created with correct dependency chains, session updated.

---

## Phase 4: Spawn-and-Stop

**Objective**: Spawn the first batch of ready workers in background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern, with worker fast-advance.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> may fast-advance to next task OR SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS

**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task -> spawn worker (see SKILL.md Coordinator Spawn Template)
   - Use Standard Worker template for single-task roles
   - Use Inner Loop Worker template for multi-task roles
4. Output status summary with execution graph
5. STOP

**Pipeline advancement** is driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)

---

## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Update session status -> "completed"
6. Offer next steps: exit / view artifacts / extend with additional tasks

**Output format**:

```
[coordinator] ============================================
[coordinator] TASK COMPLETE
[coordinator]
[coordinator] Deliverables:
[coordinator] - <artifact-1.md> (<producer role>)
[coordinator] - <artifact-2.md> (<producer role>)
[coordinator]
[coordinator] Pipeline: <completed>/<total> tasks
[coordinator] Roles: <role-list>
[coordinator] Duration: <elapsed>
[coordinator]
[coordinator] Session: <session-folder>
[coordinator] ============================================
```

---

## Error Handling
|
|
||||||
|
|
||||||
| Error | Resolution |
|
|
||||||
|-------|------------|
|
|
||||||
| Task timeout | Log, mark failed, ask user to retry or skip |
|
|
||||||
| Worker crash | Respawn worker, reassign task |
|
|
||||||
| Dependency cycle | Detect in task analysis, report to user, halt |
|
|
||||||
| Task description too vague | AskUserQuestion for clarification |
|
|
||||||
| Session corruption | Attempt recovery, fallback to manual reconciliation |
|
|
||||||
| Role generation fails | Fall back to single general-purpose role |
|
|
||||||
| capability_gap reported | handleAdapt: generate new role, create tasks, spawn |
|
|
||||||
| All capabilities merge to one | Valid: single-role execution, reduced overhead |
|
|
||||||
| No capabilities detected | Default to single general role with TASK prefix |
|
|
||||||
@@ -1,434 +0,0 @@

# Dynamic Role Template

Template used by the coordinator to generate worker role.md files at runtime. Each generated role is written to `<session>/roles/<role-name>.md`.

## Template

```markdown
# Role: <role_name>

<role_description>

## Identity

- **Name**: `<role_name>` | **Tag**: `[<role_name>]`
- **Task Prefix**: `<prefix>-*`
- **Responsibility**: <responsibility_type>
<if inner_loop>
- **Mode**: Inner Loop (handle all `<prefix>-*` tasks in single agent)
</if>

## Boundaries

### MUST

- Only process `<prefix>-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[<role_name>]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within <responsibility_type> responsibility scope
- Use fast-advance for simple linear successors (see SKILL.md Phase 5)
- Produce MD artifacts in `<session>/artifacts/`
<if inner_loop>
- Use subagent for heavy work (do not execute CLI/generation in main agent context)
- Maintain context_accumulator across tasks within the inner loop
- Loop through all `<prefix>-*` tasks before reporting to coordinator
</if>

### MUST NOT

- Execute work outside this role's responsibility scope
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's scope
- Omit `[<role_name>]` identifier in any output
- Fast-advance when multiple tasks are ready or at checkpoint boundaries
<if inner_loop>
- Execute heavy work (CLI calls, large document generation) in main agent (delegate to subagent)
- SendMessage to coordinator mid-loop (unless consensus_blocked HIGH or error count >= 3)
</if>

## Toolbox

| Tool | Purpose |
|------|---------|
<tools based on responsibility_type -- see reference sections below>

## Message Types

| Type | Direction | Description |
|------|-----------|-------------|
| `<prefix>_complete` | -> coordinator | Task completed with artifact path |
| `<prefix>_error` | -> coordinator | Error encountered |
| `capability_gap` | -> coordinator | Work outside role scope discovered |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,
  from: "<role_name>",
  to: "coordinator",
  type: <message-type>,
  summary: "[<role_name>] <prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**`team` must be the session ID** (e.g., `TC-my-project-2026-02-27`), NOT the team name. Extract it from the task description's `Session:` field -> take the folder name.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from <role_name> --to coordinator --type <message-type> --summary \"[<role_name>] <prefix> complete\" --ref <artifact-path> --json")
```
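The session-ID extraction rule can be sketched directly: take the `Session:` line from the task description and keep only the final folder name. A minimal sketch, assuming the `Session:` field holds a slash-separated path:

```javascript
// Sketch: derive the team_msg `team` value (session ID) from a task
// description's `Session:` line by taking the folder name of the path.
function sessionIdFromTask(description) {
  const m = description.match(/^Session:\s*(\S+)/m);
  if (!m) return null;
  const parts = m[1].replace(/\/+$/, "").split("/");
  return parts[parts.length - 1];
}
```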
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `<prefix>-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: <phase2_name>

<phase2_content -- generated by coordinator based on responsibility type>

### Phase 3: <phase3_name>

<phase3_content -- generated by coordinator based on task specifics>

### Phase 4: <phase4_name>

<phase4_content -- generated by coordinator based on responsibility type>

<if inline_discuss>
### Phase 4b: Inline Discuss (optional)

After primary work, optionally call discuss subagent:

```
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: "Discuss <round-id>",
  prompt: "## Multi-Perspective Critique: <round-id>
See subagents/discuss-subagent.md for prompt template.
Perspectives: <specified by coordinator when generating this role>"
})
```

| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | Phase 5 SendMessage includes structured consensus_blocked format. Do NOT self-revise. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
</if>

<if inner_loop>
### Phase 5-L: Loop Completion (Inner Loop)

When more same-prefix tasks remain:

1. **TaskUpdate**: Mark current task completed
2. **team_msg**: Log task completion
3. **Accumulate summary**:
   ```
   context_accumulator.append({
     task: "<task-id>",
     artifact: "<output-path>",
     key_decisions: <from subagent return>,
     discuss_verdict: <from Phase 4>,
     summary: <from subagent return>
   })
   ```
4. **Interrupt check**:
   - consensus_blocked HIGH -> SendMessage -> STOP
   - Error count >= 3 -> SendMessage -> STOP
5. **Loop**: Back to Phase 1

**Does NOT**: SendMessage to coordinator, Fast-Advance spawn.
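The step-4 interrupt check reduces to two predicates. A minimal sketch, assuming a simple loop-state object (field names are illustrative, not a defined schema):

```javascript
// Sketch of the step-4 interrupt check: the inner loop stops (and reports
// to the coordinator) on a HIGH consensus block or cumulative errors >= 3.
function shouldInterrupt(state) {
  if (
    state.discussVerdict === "consensus_blocked" &&
    state.discussSeverity === "HIGH"
  ) {
    return "consensus_blocked_high";
  }
  if (state.errorCount >= 3) return "error_threshold";
  return null; // keep looping
}
```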
### Phase 5-F: Final Report (Inner Loop)

When all same-prefix tasks are done:

1. **TaskUpdate**: Mark last task completed
2. **team_msg**: Log completion
3. **Summary report**: All tasks summary + discuss results + artifact paths
4. **Fast-Advance check**: Check cross-prefix successors
5. **SendMessage** or **spawn successor**

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report + Fast-Advance

<else>
### Phase 5: Report + Fast-Advance

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report + Fast-Advance

Standard report flow: team_msg log -> SendMessage with `[<role_name>]` prefix -> TaskUpdate completed -> Fast-Advance Check -> Loop to Phase 1 for next task.
</if>

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No <prefix>-* tasks available | Idle, wait for coordinator assignment |
| Context file not found | Notify coordinator, request location |
| Subagent fails | Retry once with fallback; still fails -> log error, continue next task |
| Fast-advance spawn fails | Fall back to SendMessage to coordinator |
<if inner_loop>
| Cumulative 3 task failures | SendMessage to coordinator, STOP inner loop |
| Agent crash mid-loop | Coordinator detects orphan on resume -> re-spawn -> resume from interrupted task |
</if>
| Work outside scope discovered | SendMessage capability_gap to coordinator |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
```

---

## Phase 2-4 Content by Responsibility Type

Reference sections for the coordinator to fill when generating roles. Select the matching section based on `responsibility_type`.
### orchestration

**Phase 2: Context Assessment**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Shared memory | <session>/shared-memory.json | No |
| Prior artifacts | <session>/artifacts/ | No |
| Wisdom | <session>/wisdom/ | No |

Loading steps:
1. Extract session path from task description
2. Read shared-memory.json for cross-role context
3. Read prior artifacts (if any exist from upstream tasks)
4. Load wisdom files for accumulated knowledge
5. Optionally call explore subagent for codebase context
```

**Phase 3: Subagent Execution**

```
Delegate to appropriate subagent based on task:

Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "<task-type> for <task-id>",
  prompt: "## Task
- <task description>
- Session: <session-folder>
## Context
<prior artifacts + shared memory + explore results>
## Expected Output
Write artifact to: <session>/artifacts/<artifact-name>.md
Return JSON summary: { artifact_path, summary, key_decisions[], warnings[] }"
})
```

**Phase 4: Result Aggregation**

```
1. Verify subagent output artifact exists
2. Read artifact, validate structure/completeness
3. Update shared-memory.json with key findings
4. Write insights to wisdom/ files
```
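Phase 4's completeness check on the subagent's JSON return might be sketched as follows. The required keys mirror the Expected Output contract above; the validator itself is hypothetical.

```javascript
// Hypothetical validator for the subagent's JSON return. Required keys
// mirror the Expected Output contract above.
function validateSubagentReturn(result) {
  const problems = [];
  if (!result || typeof result.artifact_path !== "string")
    problems.push("missing artifact_path");
  if (!result || typeof result.summary !== "string")
    problems.push("missing summary");
  if (!result || !Array.isArray(result.key_decisions))
    problems.push("key_decisions not an array");
  if (!result || !Array.isArray(result.warnings))
    problems.push("warnings not an array");
  return problems; // empty array = structurally complete
}
```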
### code-gen (docs)

**Phase 2: Load Prior Context**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Prior artifacts | <session>/artifacts/ from upstream tasks | Conditional |
| Shared memory | <session>/shared-memory.json | No |
| Wisdom | <session>/wisdom/ | No |

Loading steps:
1. Extract session path from task description
2. Read upstream artifacts (e.g., research findings for a writer)
3. Read shared-memory.json for cross-role context
4. Load wisdom for accumulated decisions
```

**Phase 3: Document Generation**

```
Task({
  subagent_type: "universal-executor",
  run_in_background: false,
  description: "Generate <doc-type> for <task-id>",
  prompt: "## Task
- Generate: <document type>
- Session: <session-folder>
## Prior Context
<upstream artifacts + shared memory>
## Instructions
<task-specific writing instructions from coordinator>
## Expected Output
Write document to: <session>/artifacts/<doc-name>.md
Return JSON: { artifact_path, summary, key_decisions[], sections_generated[], warnings[] }"
})
```

**Phase 4: Structure Validation**

```
1. Verify document artifact exists
2. Check document has expected sections
3. Validate no placeholder text remains
4. Update shared-memory.json with document metadata
```
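Step 3's placeholder scan can be sketched as below. The marker patterns checked for (TODO, TBD, unfilled angle-bracket slots) are assumed conventions, not a list defined anywhere in this workflow.

```javascript
// Sketch of the Phase 4 placeholder scan. The marker patterns (TODO, TBD,
// angle-bracket slots) are assumed conventions, not a fixed list.
function findPlaceholders(markdown) {
  const patterns = [/\bTODO\b/g, /\bTBD\b/g, /<[a-z][a-z0-9_-]*>/g];
  const hits = [];
  for (const p of patterns) {
    for (const m of markdown.matchAll(p)) hits.push(m[0]);
  }
  return hits; // empty array = no placeholder text remains
}
```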
### code-gen (code)

**Phase 2: Load Plan/Specs**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Plan/design artifacts | <session>/artifacts/ | Conditional |
| Shared memory | <session>/shared-memory.json | No |
| Wisdom | <session>/wisdom/ | No |

Loading steps:
1. Extract session path from task description
2. Read plan/design artifacts from upstream
3. Load shared-memory.json for implementation context
4. Load wisdom for conventions and patterns
```

**Phase 3: Code Implementation**

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement <task-id>",
  prompt: "## Task
- <implementation description>
- Session: <session-folder>
## Plan/Design Context
<upstream artifacts>
## Instructions
<task-specific implementation instructions>
## Expected Output
Implement code changes.
Write summary to: <session>/artifacts/implementation-summary.md
Return JSON: { artifact_path, summary, files_changed[], key_decisions[], warnings[] }"
})
```

**Phase 4: Syntax Validation**

```
1. Run syntax check (tsc --noEmit or equivalent)
2. Verify all planned files exist
3. Check no broken imports
4. If validation fails -> attempt auto-fix (max 2 attempts)
5. Write implementation summary to artifacts/
```
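Steps 1 and 4 form a small check-then-fix loop. A sketch with injected `check` and `fix` callbacks (both hypothetical stand-ins for the actual syntax check and auto-fix):

```javascript
// Sketch of the Phase 4 validate/auto-fix loop: run the syntax check,
// attempt an auto-fix on failure, and give up after two fix attempts.
// `check` and `fix` are injected, hypothetical callbacks.
function validateWithAutoFix(check, fix, maxFixes = 2) {
  for (let attempt = 0; ; attempt++) {
    if (check()) return { ok: true, fixes: attempt };
    if (attempt >= maxFixes) return { ok: false, fixes: attempt };
    fix();
  }
}
```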
### read-only

**Phase 2: Target Loading**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Target artifacts/files | From task description or upstream | Yes |
| Shared memory | <session>/shared-memory.json | No |

Loading steps:
1. Extract session path and target files from task description
2. Read target artifacts or source files for analysis
3. Load shared-memory.json for context
```

**Phase 3: Multi-Dimension Analysis**

```
Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "Analyze <target> for <task-id>",
  prompt: "## Task
- Analyze: <target description>
- Dimensions: <analysis dimensions from coordinator>
- Session: <session-folder>
## Target Content
<artifact content or file content>
## Expected Output
Write report to: <session>/artifacts/analysis-report.md
Return JSON: { artifact_path, summary, findings[], severity_counts: {critical, high, medium, low} }"
})
```

**Phase 4: Severity Classification**

```
1. Verify analysis report exists
2. Classify findings by severity (Critical/High/Medium/Low)
3. Update shared-memory.json with key findings
4. Write issues to wisdom/issues.md
```
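Step 2's classification can be sketched as a tally over findings, producing the `severity_counts` shape from the Expected Output JSON above. The finding shape (`severity` field) is an assumption.

```javascript
// Sketch of the Phase 4 severity tally, matching the severity_counts
// shape in the Expected Output JSON. The finding shape is assumed.
function severityCounts(findings) {
  const counts = { critical: 0, high: 0, medium: 0, low: 0 };
  for (const f of findings) {
    const s = (f.severity || "").toLowerCase();
    if (s in counts) counts[s]++;
  }
  return counts;
}
```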
### validation

**Phase 2: Environment Detection**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Implementation artifacts | Upstream code changes | Yes |

Loading steps:
1. Detect test framework from project files
2. Get changed files from implementation
3. Identify test command and coverage tool
```

**Phase 3: Test-Fix Cycle**

```
Task({
  subagent_type: "test-fix-agent",
  run_in_background: false,
  description: "Test-fix for <task-id>",
  prompt: "## Task
- Run tests and fix failures
- Session: <session-folder>
- Max iterations: 5
## Changed Files
<from upstream implementation>
## Expected Output
Write report to: <session>/artifacts/test-report.md
Return JSON: { artifact_path, pass_rate, coverage, iterations_used, remaining_failures[] }"
})
```

**Phase 4: Result Analysis**

```
1. Check pass rate >= 95%
2. Check coverage meets threshold
3. Generate test report with pass/fail counts
4. Update shared-memory.json with test results
```
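Steps 1-2 amount to a two-threshold gate. A minimal sketch: the 95% figure comes from step 1; the coverage threshold is project-specific, so it is a parameter here.

```javascript
// Sketch of the Phase 4 result gate: pass rate must reach 95% and
// coverage must meet the project threshold (parameterized here).
function testGate(report, coverageThreshold) {
  const failures = [];
  if (report.pass_rate < 0.95) failures.push("pass_rate below 95%");
  if (report.coverage < coverageThreshold)
    failures.push("coverage below threshold");
  return { ok: failures.length === 0, failures };
}
```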
@@ -1,133 +0,0 @@

# Discuss Subagent

Lightweight multi-perspective critique engine. Called inline by any role needing peer review. Perspectives are dynamic -- specified by the calling role, not pre-defined.

## Design

Unlike team-lifecycle-v4's fixed perspective definitions (product, technical, quality, risk, coverage), team-coordinate uses **dynamic perspectives** passed in the prompt. The calling role decides what viewpoints matter for its artifact.

## Invocation

Called by roles after artifact creation:

```
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: "Discuss <round-id>",
  prompt: `## Multi-Perspective Critique: <round-id>

### Input
- Artifact: <artifact-path>
- Round: <round-id>
- Session: <session-folder>

### Perspectives
<Dynamic perspective list -- each entry defines: name, cli_tool, role_label, focus_areas>

Example:
| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Feasibility | gemini | Engineer | Implementation complexity, technical risks, resource needs |
| Clarity | codex | Editor | Readability, logical flow, completeness of explanation |
| Accuracy | gemini | Domain Expert | Factual correctness, source reliability, claim verification |

### Execution Steps
1. Read artifact from <artifact-path>
2. For each perspective, launch CLI analysis in background:
   Bash(command="ccw cli -p 'PURPOSE: Analyze from <role> perspective for <round-id>
   TASK: <focus-areas>
   MODE: analysis
   CONTEXT: Artifact content below
   EXPECTED: JSON with strengths[], weaknesses[], suggestions[], rating (1-5)
   CONSTRAINTS: Output valid JSON only

   Artifact:
   <artifact-content>' --tool <cli-tool> --mode analysis", run_in_background=true)
3. Wait for all CLI results
4. Divergence detection:
   - High severity: any rating <= 2, critical issue identified
   - Medium severity: rating spread (max - min) >= 3, or single perspective rated <= 2 with others >= 3
   - Low severity: minor suggestions only, all ratings >= 3
5. Consensus determination:
   - No high-severity divergences AND average rating >= 3.0 -> consensus_reached
   - Otherwise -> consensus_blocked
6. Synthesize:
   - Convergent themes (agreed by 2+ perspectives)
   - Divergent views (conflicting assessments)
   - Action items from suggestions
7. Write discussion record to: <session-folder>/discussions/<round-id>-discussion.md
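Steps 4-5 can be sketched as two small functions over the per-perspective ratings (1-5). This is a sketch, not the agent's actual logic; where the high and medium rules overlap (a single rating <= 2 appears in both), the high rule wins here.

```javascript
// Sketch of steps 4-5: classify divergence severity from ratings (1-5),
// then derive the consensus verdict. Where the high and medium rules
// overlap (a rating <= 2), the high rule wins in this sketch.
function divergenceSeverity(ratings, criticalIssue = false) {
  const spread = Math.max(...ratings) - Math.min(...ratings);
  if (criticalIssue || ratings.some((r) => r <= 2)) return "high";
  if (spread >= 3) return "medium";
  return "low";
}

function consensusVerdict(ratings, criticalIssue = false) {
  const avg = ratings.reduce((a, b) => a + b, 0) / ratings.length;
  const severity = divergenceSeverity(ratings, criticalIssue);
  return severity !== "high" && avg >= 3.0
    ? "consensus_reached"
    : "consensus_blocked";
}
```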

### Discussion Record Format
# Discussion Record: <round-id>

**Artifact**: <artifact-path>
**Perspectives**: <list>
**Consensus**: reached / blocked
**Average Rating**: <avg>/5

## Convergent Themes
- <theme>

## Divergent Views
- **<topic>** (<severity>): <description>

## Action Items
1. <item>

## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |

### Return Value

**When consensus_reached**:
Return a summary string with:
- Verdict: consensus_reached
- Average rating
- Key action items (top 3)
- Discussion record path

**When consensus_blocked**:
Return a structured summary with:
- Verdict: consensus_blocked
- Severity: HIGH | MEDIUM | LOW
- Average rating
- Divergence summary: top 3 divergent points with perspective attribution
- Action items: prioritized list of required changes
- Recommendation: revise | proceed-with-caution | escalate
- Discussion record path

### Error Handling
- Single CLI fails -> fall back to direct Claude analysis for that perspective
- All CLIs fail -> generate basic discussion from direct artifact reading
- Artifact not found -> return error immediately`
})
```

## Integration with Calling Role

The calling role is responsible for:

1. **Before calling**: Complete primary artifact output
2. **Calling**: Invoke discuss subagent with appropriate dynamic perspectives
3. **After calling**:

| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in Phase 5 report, proceed normally |
| consensus_blocked | HIGH | Include divergence details in Phase 5 SendMessage. Do NOT self-revise -- coordinator decides. |
| consensus_blocked | MEDIUM | Include warning in Phase 5 SendMessage. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |

**SendMessage format for consensus_blocked (HIGH or MEDIUM)**:

```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <artifact-path>
Discussion: <discussion-record-path>
```
@@ -1,120 +0,0 @@

# Explore Subagent

Shared codebase exploration utility with centralized caching. Callable by any role needing code context.

## Invocation

```
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore <angle>",
  prompt: `Explore codebase for: <query>

Focus angle: <angle>
Keywords: <keyword-list>
Session folder: <session-folder>

## Cache Check
1. Read <session-folder>/explorations/cache-index.json (if exists)
2. Look for entry with matching angle
3. If found AND file exists -> read cached result, return summary
4. If not found -> proceed to exploration

## Exploration
<angle-specific-focus-from-table-below>

## Output
Write JSON to: <session-folder>/explorations/explore-<angle>.json
Update cache-index.json with new entry

## Output Schema
{
  "angle": "<angle>",
  "query": "<query>",
  "relevant_files": [
    { "path": "...", "rationale": "...", "role": "...", "discovery_source": "...", "key_symbols": [] }
  ],
  "patterns": [],
  "dependencies": [],
  "external_refs": [],
  "_metadata": { "created_by": "<calling-role>", "timestamp": "...", "cache_key": "..." }
}

Return summary: file count, pattern count, top 5 files, output path`
})
```
## Cache Mechanism

### Cache Index Schema

`<session-folder>/explorations/cache-index.json`:

```json
{
  "entries": [
    {
      "angle": "architecture",
      "keywords": ["auth", "middleware"],
      "file": "explore-architecture.json",
      "created_by": "analyst",
      "created_at": "2026-02-27T10:00:00Z",
      "file_count": 15
    }
  ]
}
```

### Cache Lookup Rules

| Condition | Action |
|-----------|--------|
| Exact angle match exists | Return cached result |
| No match | Execute exploration, cache result |
| Cache file missing but index has entry | Remove stale entry, re-explore |
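The lookup rules above can be sketched directly against the Cache Index Schema. `fileExists`, `readCached`, and `explore` are injected, hypothetical I/O callbacks:

```javascript
// Sketch of the cache lookup rules. `fileExists`, `readCached`, and
// `explore` are injected, hypothetical I/O callbacks.
function lookupOrExplore(index, angle, io) {
  const i = index.entries.findIndex((e) => e.angle === angle);
  if (i >= 0) {
    const entry = index.entries[i];
    if (io.fileExists(entry.file)) return io.readCached(entry.file);
    index.entries.splice(i, 1); // stale entry: file missing, drop it
  }
  return io.explore(angle); // no usable cache -> explore and (re)cache
}
```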

### Cache Invalidation

Cache is session-scoped. No explicit invalidation needed -- each session starts fresh. If a role suspects stale data, it can pass `force_refresh: true` in the prompt to bypass the cache.
## Angle Focus Guide

| Angle | Focus Points | Typical Caller |
|-------|-------------|----------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs | any |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities | any |
| modularity | Module interfaces, separation of concerns, extraction opportunities | any |
| integration-points | API endpoints, data flow between modules, event systems | any |
| security | Auth/authz logic, input validation, sensitive data handling, middleware | any |
| dataflow | Data transformations, state propagation, validation points | any |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity | any |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging | any |
| patterns | Code conventions, design patterns, naming conventions, best practices | any |
| testing | Test files, coverage gaps, test patterns, mocking strategies | any |
| general | Broad semantic search for topic-related code | any |

## Exploration Strategies

### Low Complexity (direct search)

For simple queries, use ACE semantic search:

```
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<query>")
```

ACE failure fallback: `rg -l '<keywords>' --type ts`

### Medium/High Complexity (multi-angle)

For complex queries, call cli-explore-agent per angle. The calling role determines complexity and selects angles.

## Search Tool Priority

| Tool | Priority | Use Case |
|------|----------|----------|
| mcp__ace-tool__search_context | P0 | Semantic search |
| Grep / Glob | P1 | Pattern matching |
| cli-explore-agent | P2 | Multi-angle deep exploration |
| WebSearch | P3 | External docs |
@@ -1,215 +0,0 @@

---
name: team-executor-v2
description: Lightweight session execution skill. Resumes existing team-coordinate-v2 sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on "team executor v2".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Executor v2

Lightweight session execution skill: load session -> reconcile state -> spawn team-worker agents -> execute -> deliver. **No analysis, no role generation** -- only executes existing team-coordinate sessions.

## Architecture

```
+---------------------------------------------------+
|  Skill(skill="team-executor")                     |
|  args="--session=<path>"  [REQUIRED]              |
+-------------------+-------------------------------+
                    | Session Validation
         +---- --session valid? ----+
         | NO                       | YES
         v                          v
  Error immediately          Orchestration Mode
  (no session)               -> executor
                                    |
                    +-------+-------+-------+
                    v       v       v       v
      [team-worker agents loaded from session role-specs]
```

---

## Session Validation (BEFORE routing)

**CRITICAL**: Session validation MUST occur before any execution.

### Parse Arguments

Extract from `$ARGUMENTS`:
- `--session=<path>`: Path to team-coordinate session folder (REQUIRED)

### Validation Steps

1. **Check `--session` provided**:
   - If missing -> **ERROR**: "Session required. Usage: --session=<path-to-TC-folder>"

2. **Validate session structure** (see specs/session-schema.md):
   - Directory exists at path
   - `team-session.json` exists and is valid JSON
   - `task-analysis.json` exists and is valid JSON
   - `role-specs/` directory has at least one `.md` file
   - Each role in `team-session.json#roles` has a corresponding `.md` file in `role-specs/`

3. **Validation failure**:
   - Report the specific missing component
   - Suggest re-running team-coordinate or checking the path
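The structure checks in step 2 can be sketched as below. File-system access is injected via a hypothetical adapter; the checked paths come directly from the list above.

```javascript
// Sketch of the step-2 session-structure validation. `fs` is an injected,
// hypothetical adapter: { isDir, readJson, listDir }.
function validateSession(sessionPath, fs) {
  const errors = [];
  if (!fs.isDir(sessionPath)) return [`directory not found: ${sessionPath}`];
  for (const f of ["team-session.json", "task-analysis.json"]) {
    try {
      fs.readJson(`${sessionPath}/${f}`);
    } catch {
      errors.push(`${f} missing or invalid JSON`);
    }
  }
  const specs = fs
    .listDir(`${sessionPath}/role-specs`)
    .filter((f) => f.endsWith(".md"));
  if (specs.length === 0) errors.push("role-specs/ has no .md files");
  let roles = [];
  try {
    roles = fs.readJson(`${sessionPath}/team-session.json`).roles || [];
  } catch {}
  for (const r of roles) {
    if (!specs.includes(`${r}.md`)) errors.push(`missing role spec: ${r}.md`);
  }
  return errors; // empty array = valid session
}
```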
---

## Role Router

This skill is **executor-only**. Workers do NOT invoke this skill -- they are spawned as `team-worker` agents directly.

### Dispatch Logic

| Scenario | Action |
|----------|--------|
| No `--session` | **ERROR** immediately |
| `--session` invalid | **ERROR** with specific reason |
| Valid session | Orchestration Mode -> executor |

### Orchestration Mode

**Invocation**: `Skill(skill="team-executor", args="--session=<session-folder>")`

**Lifecycle**:
```
Validate session
-> executor Phase 0: Reconcile state (reset interrupted, detect orphans)
-> executor Phase 1: Spawn first batch team-worker agents (background) -> STOP
-> Worker executes -> SendMessage callback -> executor advances next step
-> Loop until pipeline complete -> Phase 2 report + completion action
```

**User Commands** (wake paused executor):

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
---
|
|
||||||
|
|
||||||
## Role Registry

| Role | File | Type |
|------|------|------|
| executor | [roles/executor/role.md](roles/executor/role.md) | built-in orchestrator |
| (dynamic) | `<session>/role-specs/<role-name>.md` | loaded from session |

---
## Executor Spawn Template

### v2 Worker Spawn (all roles)

When the executor spawns workers, it uses the `team-worker` agent with a role-spec path:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: <session-folder>/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>

Read role_spec file to load Phase 2-4 domain instructions.`
})
```

---
---

## Completion Action

When the pipeline completes (all tasks done), the executor presents an interactive choice:

```
AskUserQuestion({
  questions: [{
    question: "Team pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up team" },
      { label: "Keep Active", description: "Keep session for follow-up work" },
      { label: "Export Results", description: "Export deliverables to target directory, then clean" }
    ]
  }]
})
```
### Action Handlers

| Choice | Steps |
|--------|-------|
| Archive & Clean | Update session status="completed" -> TeamDelete -> output final summary with artifact paths |
| Keep Active | Update session status="paused" -> output: "Resume with: Skill(skill='team-executor', args='--session=<path>')" |
| Export Results | AskUserQuestion(target path) -> copy artifacts to target -> Archive & Clean |
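The three handlers can be sketched as a small dispatcher (hypothetical Python with placeholder step names such as `team_delete`; the real skill performs these steps through its own tools):

```python
def handle_completion(choice: str, session: dict) -> list:
    """Map a completion choice to its ordered steps; a sketch, not the skill's code."""
    if choice == "Export Results":
        # Export first, then fall through to Archive & Clean.
        return (["ask_target_path", "copy_artifacts"]
                + handle_completion("Archive & Clean (Recommended)", session))
    if choice == "Archive & Clean (Recommended)":
        session["status"] = "completed"
        return ["team_delete", "final_summary"]
    # Keep Active is also the fallback for any failed or unknown choice.
    session["status"] = "paused"
    return ["print_resume_command"]
```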
---

## Cadence Control

**Beat model**: Event-driven; each beat = executor wake -> process -> spawn -> STOP.

```
Beat Cycle (single beat)
======================================================================
Event                 Executor                      Workers
----------------------------------------------------------------------
callback/resume -->   +- handleCallback -+
                      |  mark completed  |
                      |  check pipeline  |
                      +- handleSpawnNext -+
                      |  find ready tasks |
                      |  spawn workers ---+--> [team-worker A] Phase 1-5
                      |  (parallel OK) ---+--> [team-worker B] Phase 1-5
                      +- STOP (idle) -----+              |
                                                         |
callback <-----------------------------------------------+
(next beat)           SendMessage + TaskUpdate(completed)
======================================================================

Fast-Advance (skips executor for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
  +- 1 ready task? simple successor? --> spawn team-worker B directly
  +- complex case? --> SendMessage to executor
======================================================================
```
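The beat model above can be simulated on a toy task graph (a sketch; `blocked_by` mirrors the blockedBy dependency field, and each beat spawns every ready task before stopping):

```python
def run_pipeline(tasks: dict) -> list:
    """Drive a task graph to completion beat by beat; returns the spawn batch per beat.

    Each beat spawns all ready tasks, then "stops"; the worker callback that marks
    a task completed is simulated by finishing the whole batch before the next beat.
    """
    beats = []
    while any(t["status"] != "completed" for t in tasks.values()):
        done = {s for s, t in tasks.items() if t["status"] == "completed"}
        batch = [s for s, t in tasks.items()
                 if t["status"] == "pending" and set(t["blocked_by"]) <= done]
        if not batch:
            break  # pipeline stall: nothing ready, nothing running
        for s in batch:
            tasks[s]["status"] = "completed"  # worker runs Phase 1-5, then calls back
        beats.append(batch)
    return beats
```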
---

## Integration with team-coordinate

| Scenario | Skill |
|----------|-------|
| New task, no session | team-coordinate |
| Existing session, resume execution | **team-executor** |
| Session needs new roles | team-coordinate (with resume) |
| Pure execution, no analysis | **team-executor** |

---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No --session provided | ERROR immediately with usage message |
| Session directory not found | ERROR with path, suggest checking path |
| team-session.json missing | ERROR, session incomplete, suggest re-run team-coordinate |
| task-analysis.json missing | ERROR, session incomplete, suggest re-run team-coordinate |
| No role-specs in session | ERROR, session incomplete, suggest re-run team-coordinate |
| Role-spec file not found | ERROR with expected path |
| capability_gap reported | Warn only, cannot generate new role-specs |
| Fast-advance spawns wrong task | Executor reconciles on next callback |
| Completion action fails | Default to Keep Active, log warning |
@@ -1,239 +0,0 @@
# Command: monitor

## Purpose

Event-driven pipeline coordination with the Spawn-and-Stop pattern for team-executor v2. Role names are read from `team-session.json#roles`. Workers are spawned as `team-worker` agents with role-spec paths. **handleAdapt is LIMITED**: it only warns and cannot generate new role-specs. Includes `handleComplete` for the pipeline completion action.

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Executor does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip the executor for simple linear successors |
| ROLE_GENERATION | disabled | handleAdapt cannot generate new role-specs |
| WORKER_AGENT | team-worker | All workers spawned as team-worker agents |
## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |

**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name`. Role-spec paths come from `session.roles[].role_spec`.
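Dynamic role resolution can be sketched as follows (assuming Python and the team-session.json shape defined in specs/session-schema.md; `load_roles` is a hypothetical helper):

```python
import json
from pathlib import Path

def load_roles(session_folder: str) -> dict:
    """Resolve role name -> role-spec path from team-session.json (a sketch)."""
    session = json.loads((Path(session_folder) / "team-session.json").read_text())
    return {role["name"]: str(Path(session_folder) / role["role_spec"])
            for role in session["roles"]}
```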
## Phase 3: Handler Routing

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine the handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | Pipeline detected as complete | handleComplete |
| 6 | None of the above (initial spawn after dispatch) | handleSpawnNext |
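The priority table can be sketched as a dispatch function (a Python illustration; `route` and its argument names are hypothetical):

```python
import re

def route(message: str, roles: list, pipeline_complete: bool = False) -> str:
    """Pick the monitor handler for a wake-up, checking conditions in priority order."""
    if any(f"[{r}]" in message for r in roles):
        return "handleCallback"
    if "capability_gap" in message:
        return "handleAdapt"
    if re.search(r"\b(check|status)\b", message):
        return "handleCheck"
    if re.search(r"\b(resume|continue|next)\b", message):
        return "handleResume"
    if pipeline_complete:
        return "handleComplete"
    return "handleSpawnNext"  # initial spawn after dispatch
```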
---

### Handler: handleCallback

A worker completed a task. Verify completion, update state, auto-advance.

```
Receive callback from [<role>]
+- Find matching active worker by role (from session.roles)
+- Is this a progress update (not final)?
|  +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |        +- -> handleSpawnNext
|  +- NO -> progress message, do not advance -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
      +- Found completed -> process each -> handleSpawnNext
      +- None completed -> STOP
```

**Fast-advance note**: Check whether the expected next task is already `in_progress` (fast-advanced). If yes -> skip spawning, sync active_workers.
---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

```
[executor] Pipeline Status
[executor] Progress: <completed>/<total> (<percent>%)

[executor] Execution Graph:
<visual representation with status icons>

done=completed  >>>=running  o=pending  .=not created

[executor] Active Workers:
  > <subject> (<role>) - running <elapsed>

[executor] Ready to spawn: <subjects>
[executor] Commands: 'resume' to advance | 'check' to refresh
```

Then STOP.
---

### Handler: handleResume

Check active worker completion, process results, advance the pipeline.

```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- status = completed -> mark done, log
   +- status = in_progress -> still running, log
   +- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```

---
### Handler: handleSpawnNext

Find all ready tasks, spawn team-worker agents in the background, update the session, STOP.

```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects

Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> handleComplete
+- HAS ready tasks -> for each:
   +- Is the task owner an Inner Loop role that already has an active_worker?
   |  +- YES -> SKIP spawn (existing worker picks it up)
   |  +- NO -> normal spawn below
   +- TaskUpdate -> in_progress
   +- team_msg log -> task_unblocked (team=<session-id>)
   +- Spawn team-worker (see spawn tool call below)
   +- Add to session.active_workers
Update session file -> output summary -> STOP
```
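The subject classification in the flow above can be sketched as (hypothetical Python; `classify` mirrors completedSubjects / inProgressSubjects / readySubjects):

```python
def classify(tasks: dict) -> tuple:
    """Split subjects into completed / in-progress / ready buckets."""
    completed = [s for s, t in tasks.items() if t["status"] == "completed"]
    in_progress = [s for s, t in tasks.items() if t["status"] == "in_progress"]
    ready = [s for s, t in tasks.items()
             if t["status"] == "pending" and set(t["blocked_by"]) <= set(completed)]
    return completed, in_progress, ready

def next_action(tasks: dict) -> str:
    """Decide what the handler does this beat, per the decision tree above."""
    completed, in_progress, ready = classify(tasks)
    if ready:
        return "spawn"  # one team-worker per ready task, then STOP
    return "wait" if in_progress else "PIPELINE_COMPLETE"
```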
**Spawn worker tool call**:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker for <subject>",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: <session-folder>/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>

Read role_spec file to load Phase 2-4 domain instructions.`
})
```
---

### Handler: handleComplete

Pipeline complete. Execute the completion action.

```
All tasks completed (no pending, no in_progress)
+- Generate pipeline summary (deliverables, stats, duration)
+- Read session.completion_action:
   |
   +- "interactive":
   |  AskUserQuestion -> user choice:
   |  +- "Archive & Clean": session status="completed" -> TeamDelete -> summary
   |  +- "Keep Active": session status="paused" -> resume command
   |  +- "Export Results": copy artifacts -> Archive & Clean
   |
   +- "auto_archive": Execute Archive & Clean
   +- "auto_keep": Execute Keep Active
```

**Fallback**: If the completion action fails, default to Keep Active and log a warning.
---

### Handler: handleAdapt (LIMITED)

**UNLIKE team-coordinate, the executor CANNOT generate new role-specs.**

```
Receive capability_gap from [<role>]
+- Log via team_msg (type: warning)
+- Check existing roles -> does any cover this?
|  +- YES -> redirect to that role -> STOP
|  +- NO -> genuine gap, report to user:
|     "Capability gap detected. team-executor cannot generate new role-specs.
|      Options: 1. Continue  2. Re-run team-coordinate  3. Manually add role-spec"
+- Continue execution with existing roles
```
---

### Worker Failure Handling

1. Reset the task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume

### Fast-Advance Failure Recovery

Detect orphaned tasks (in_progress without an active_worker for > 5 minutes) -> reset to pending -> handleSpawnNext.
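The orphan-recovery rule can be sketched as (hypothetical Python; `started_at` stands in for whatever timestamp the task store records):

```python
ORPHAN_THRESHOLD_S = 5 * 60  # the "> 5 minutes" rule above

def reset_orphans(tasks: dict, active_workers: set, now: float) -> list:
    """Reset in_progress tasks that have no active worker and are older than the threshold."""
    orphans = []
    for subject, task in tasks.items():
        stale = now - task.get("started_at", now) > ORPHAN_THRESHOLD_S
        if task["status"] == "in_progress" and subject not in active_workers and stale:
            task["status"] = "pending"  # TaskUpdate -> pending; next handleSpawnNext retries it
            orphans.append(subject)
    return orphans
```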
### Consensus-Blocked Handling

```
Route by severity:
+- HIGH: Create REVISION task (max 1). Already revised -> PAUSE for user
+- MEDIUM: Proceed with warning, log to wisdom/issues.md
+- LOW: Proceed normally as consensus_reached with notes
```
## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task owners exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect fast-advanced tasks, sync to active_workers |
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-run team-coordinate |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest checking later |
| Pipeline stall | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Role-spec file not found | Error, cannot proceed |
| capability_gap | WARN only, cannot generate new role-specs |
| Completion action fails | Default to Keep Active, log warning |
@@ -1,171 +0,0 @@
# Executor Role

Orchestrate the team-executor workflow: session validation, state reconciliation, team-worker dispatch, progress monitoring, completion action. This is the sole built-in role -- all worker roles are loaded from session role-specs and spawned via the team-worker agent.

## Identity

- **Name**: `executor` | **Tag**: `[executor]`
- **Responsibility**: Validate session -> Reconcile state -> Create team -> Dispatch team-worker agents -> Monitor progress -> Completion action -> Report results
## Boundaries

### MUST
- Validate session structure before any execution
- Reconcile session state with TaskList on startup
- Reset in_progress tasks to pending (interrupted tasks)
- Detect fast-advance orphans and reset them to pending
- Spawn team-worker agents in the background (NOT general-purpose)
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
- Handle capability_gap reports with a warning only (cannot generate role-specs)
- Execute the completion action when the pipeline finishes

### MUST NOT
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Generate new role-specs (use existing session role-specs only)
- Skip session validation
- Override consensus_blocked HIGH without user confirmation
- Spawn workers with the `general-purpose` agent (MUST use `team-worker`)

> **Core principle**: the `executor` role orchestrates; it never executes task work itself. All actual work is delegated to session-defined worker roles via team-worker agents. Unlike the team-coordinate coordinator, the executor CANNOT generate new role-specs.
---

## Entry Router

When the executor is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| Pipeline complete | All tasks completed, no pending/in_progress | -> handleComplete |
| New execution | None of the above | -> Phase 0 |

For callback/check/resume/adapt/complete: load `commands/monitor.md`, execute the appropriate handler, then STOP.

---
## Phase 0: Session Validation + State Reconciliation

**Objective**: Validate the session structure and reconcile session state with actual task status.

### Step 1: Session Validation

Validate the session structure (see SKILL.md Session Validation):
- [ ] Directory exists at session path
- [ ] `team-session.json` exists and parses
- [ ] `task-analysis.json` exists and parses
- [ ] `role-specs/` directory has >= 1 `.md` file
- [ ] All roles in team-session.json#roles have corresponding role-spec .md files
- [ ] Role-spec files have valid YAML frontmatter + Phase 2-4 sections

If validation fails -> ERROR with specific reason -> STOP

### Step 2: Load Session State

Read team-session.json and task-analysis.json.

### Step 3: Reconcile with TaskList

Compare TaskList() with session.completed_tasks; sync bidirectionally.

### Step 4: Reset Interrupted Tasks

Reset any in_progress tasks to pending.

### Step 5: Detect Fast-Advance Orphans

Reset to pending any in_progress task without a matching active_worker that was created > 5 minutes ago.

### Step 6: Create Missing Tasks (if needed)

For each task in task-analysis, check whether it exists in TaskList; create it if missing.

### Step 7: Update Session File

Write the reconciled team-session.json.

### Step 8: Team Setup

TeamCreate if the team does not exist.

**Success**: Session validated, state reconciled, team ready -> Phase 1
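Steps 3 and 4 can be sketched as one reconciliation pass (hypothetical Python; `task_list` stands in for TaskList() output, and clearing `active_workers` reflects that nothing is running after a restart):

```python
def reconcile(task_list: dict, session: dict) -> dict:
    """Sync completed tasks both ways, then reset interrupted work for respawn."""
    completed = set(session.get("completed_tasks", []))
    for subject, task in task_list.items():
        if task["status"] == "completed":
            completed.add(subject)            # TaskList -> session
        elif subject in completed:
            task["status"] = "completed"      # session -> TaskList
        elif task["status"] == "in_progress":
            task["status"] = "pending"        # interrupted: reset so it gets respawned
    session["completed_tasks"] = sorted(completed)
    session["active_workers"] = []            # no workers survive an executor restart
    return session
```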
---

## Phase 1: Spawn-and-Stop

**Objective**: Spawn the first batch of ready workers as team-worker agents in the background, then STOP.

**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, all blockedBy resolved, owner assigned
3. For each ready task -> spawn a team-worker (see SKILL.md Executor Spawn Template)
4. Output a status summary with the execution graph
5. STOP

**Pipeline advancement** is driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)

---
## Phase 2: Report + Completion Action

**Objective**: Completion report, interactive completion choice, and follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Output report:

```
[executor] ============================================
[executor] TASK COMPLETE
[executor]
[executor] Deliverables:
[executor]   - <artifact-1.md> (<producer role>)
[executor]   - <artifact-2.md> (<producer role>)
[executor]
[executor] Pipeline: <completed>/<total> tasks
[executor] Roles: <role-list>
[executor] Duration: <elapsed>
[executor]
[executor] Session: <session-folder>
[executor] ============================================
```
6. **Execute Completion Action** (based on session.completion_action):

| Mode | Behavior |
|------|----------|
| `interactive` | AskUserQuestion with Archive/Keep/Export options |
| `auto_archive` | Execute Archive & Clean without prompt |
| `auto_keep` | Execute Keep Active without prompt |

**Interactive handler**: See the SKILL.md Completion Action section.

---
## Error Handling

| Error | Resolution |
|-------|------------|
| Session validation fails | ERROR with specific reason, suggest re-run team-coordinate |
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Session corruption | Attempt recovery, fall back to manual reconciliation |
| capability_gap reported | handleAdapt: WARN only, cannot generate new role-specs |
| All workers still running on resume | Report status, suggest checking later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Role-spec file not found | ERROR, cannot proceed without role definition |
| Completion action fails | Default to Keep Active, log warning |
@@ -1,264 +0,0 @@
# Session Schema

Required session structure for team-executor v2. All components MUST exist for valid execution. Updated for the role-spec architecture (lightweight Phase 2-4 files instead of full role.md files).
## Directory Structure

```
<session-folder>/
+-- team-session.json        # Session state + dynamic role registry (REQUIRED)
+-- task-analysis.json       # Task analysis output: capabilities, dependency graph (REQUIRED)
+-- role-specs/              # Dynamic role-spec definitions (REQUIRED, >= 1 .md file)
|   +-- <role-1>.md          # Lightweight: YAML frontmatter + Phase 2-4 only
|   +-- <role-2>.md
+-- artifacts/               # All MD deliverables from workers
|   +-- <artifact>.md
+-- shared-memory.json       # Cross-role state store
+-- wisdom/                  # Cross-task knowledge
|   +-- learnings.md
|   +-- decisions.md
|   +-- issues.md
+-- explorations/            # Shared explore cache
|   +-- cache-index.json
|   +-- explore-<angle>.json
+-- discussions/             # Inline discuss records
|   +-- <round>.md
+-- .msg/                    # Team message bus logs
```
## Validation Checklist

team-executor validates the following before execution:

### Required Components

| Component | Validation | Error Message |
|-----------|------------|---------------|
| `--session` argument | Must be provided | "Session required. Usage: --session=<path-to-TC-folder>" |
| Directory | Must exist at path | "Session directory not found: <path>" |
| `team-session.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: team-session.json missing, corrupt, or missing required fields" |
| `task-analysis.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: task-analysis.json missing, corrupt, or missing required fields" |
| `role-specs/` directory | Must exist and contain >= 1 .md file | "Invalid session: no role-spec files in role-specs/" |
| Role-spec file mapping | Each role in team-session.json#roles must have a .md file | "Role-spec file not found: role-specs/<role>.md" |
| Role-spec structure | Each role-spec must have YAML frontmatter + Phase 2-4 sections | "Invalid role-spec: role-specs/<role>.md missing required section" |
### Validation Algorithm

```
1. Parse --session=<path> from arguments
   +- Not provided -> ERROR: "Session required. Usage: --session=<path-to-TC-folder>"

2. Check directory exists
   +- Not exists -> ERROR: "Session directory not found: <path>"

3. Check team-session.json
   +- Not exists -> ERROR: "Invalid session: team-session.json missing"
   +- Parse error -> ERROR: "Invalid session: team-session.json corrupt"
   +- Validate required fields:
      +- session_id (string) -> missing -> ERROR
      +- task_description (string) -> missing -> ERROR
      +- status (string: active|paused|completed) -> invalid -> ERROR
      +- team_name (string) -> missing -> ERROR
      +- roles (array, non-empty) -> missing/empty -> ERROR

4. Check task-analysis.json
   +- Not exists -> ERROR: "Invalid session: task-analysis.json missing"
   +- Parse error -> ERROR: "Invalid session: task-analysis.json corrupt"
   +- Validate required fields:
      +- capabilities (array) -> missing -> ERROR
      +- dependency_graph (object) -> missing -> ERROR
      +- roles (array, non-empty) -> missing/empty -> ERROR

5. Check role-specs/ directory
   +- Not exists -> ERROR: "Invalid session: role-specs/ directory missing"
   +- No .md files -> ERROR: "Invalid session: no role-spec files in role-specs/"

6. Check role-spec file mapping and structure
   +- For each role in team-session.json#roles:
      +- Check role-specs/<role.name>.md exists
      +- Not exists -> ERROR: "Role-spec file not found: role-specs/<role.name>.md"
      +- Validate role-spec structure (see Role-Spec Structure Validation)

7. All checks pass -> proceed to Phase 0
```

---
## team-session.json Schema

```json
{
  "session_id": "TC-<slug>-<date>",
  "task_description": "<original user input>",
  "status": "active | paused | completed",
  "team_name": "<team-name>",
  "roles": [
    {
      "name": "<role-name>",
      "prefix": "<PREFIX>",
      "responsibility_type": "<type>",
      "inner_loop": false,
      "role_spec": "role-specs/<role-name>.md"
    }
  ],
  "pipeline": {
    "dependency_graph": {},
    "tasks_total": 0,
    "tasks_completed": 0
  },
  "active_workers": [],
  "completed_tasks": [],
  "completion_action": "interactive",
  "created_at": "<timestamp>"
}
```
### Required Fields

| Field | Type | Description |
|-------|------|-------------|
| `session_id` | string | Unique session identifier |
| `task_description` | string | Original task description from user |
| `status` | string | One of: "active", "paused", "completed" |
| `team_name` | string | Team name for the Task tool |
| `roles` | array | List of role definitions |
| `roles[].name` | string | Role name (must match .md filename) |
| `roles[].prefix` | string | Task prefix for this role |
| `roles[].role_spec` | string | Relative path to role-spec file |

### Optional Fields

| Field | Type | Description |
|-------|------|-------------|
| `pipeline` | object | Pipeline metadata |
| `active_workers` | array | Currently running workers |
| `completed_tasks` | array | List of completed task IDs |
| `completion_action` | string | Completion mode: interactive, auto_archive, auto_keep |
| `created_at` | string | ISO timestamp |

---
## task-analysis.json Schema
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"capabilities": [
|
|
||||||
{
|
|
||||||
"name": "<capability-name>",
|
|
||||||
"description": "<description>",
|
|
||||||
"artifact_type": "<type>"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"dependency_graph": {
|
|
||||||
"<task-id>": {
|
|
||||||
"depends_on": ["<dependency-task-id>"],
|
|
||||||
"role": "<role-name>"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"roles": [
|
|
||||||
{
|
|
||||||
"name": "<role-name>",
|
|
||||||
"prefix": "<PREFIX>",
|
|
||||||
"responsibility_type": "<type>",
|
|
||||||
"inner_loop": false,
|
|
||||||
"role_spec_metadata": {
|
|
||||||
"subagents": [],
|
|
||||||
"message_types": {
|
|
||||||
"success": "<prefix>_complete",
|
|
||||||
"error": "error"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"complexity_score": 0
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
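The `dependency_graph` above drives scheduling: a task becomes ready once every entry in its `depends_on` list is completed. A sketch of that rule (the function name and inputs are illustrative):

```python
# Illustrative: compute ready tasks from a task-analysis.json dependency_graph.
def ready_tasks(dependency_graph: dict, completed: set) -> list:
    """Tasks not yet completed whose dependencies are all completed."""
    return sorted(
        task_id
        for task_id, spec in dependency_graph.items()
        if task_id not in completed
        and all(dep in completed for dep in spec.get("depends_on", []))
    )
```

Applied repeatedly as tasks finish, this yields a topological execution order; a task that never becomes ready indicates a cycle or a dependency on a missing node.
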
---

## Role-Spec File Schema

Each role-spec in `role-specs/<role-name>.md` follows the lightweight format with YAML frontmatter + Phase 2-4 body.

### Required Structure

```markdown
---
role: <name>
prefix: <PREFIX>
inner_loop: <true|false>
message_types:
  success: <type>
  error: error
---

# <Role Name> — Phase 2-4

## Phase 2: <Name>
<domain-specific context loading>

## Phase 3: <Name>
<domain-specific execution>

## Phase 4: <Name>
<domain-specific validation>
```

### Role-Spec Structure Validation

```
For each role-spec in role-specs/<role>.md:
1. Read file content
2. Check for YAML frontmatter (content between --- markers)
   +- Not found -> ERROR: "Invalid role-spec: role-specs/<role>.md missing frontmatter"
3. Parse frontmatter, check required fields:
   +- role (string) -> missing -> ERROR
   +- prefix (string) -> missing -> ERROR
   +- inner_loop (boolean) -> missing -> ERROR
   +- message_types (object) -> missing -> ERROR
4. Check for "## Phase 2" section
   +- Not found -> ERROR: "Invalid role-spec: missing Phase 2"
5. Check for "## Phase 3" section
   +- Not found -> ERROR: "Invalid role-spec: missing Phase 3"
6. Check for "## Phase 4" section
   +- Not found -> ERROR: "Invalid role-spec: missing Phase 4"
7. All checks pass -> role-spec valid
```

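Steps 1-7 above can be sketched as a single function over the file's text. This is a hypothetical validator; the skill performs these checks through its own tooling:

```python
import re

# Sketch of the validation steps above, operating on a role-spec's raw text.
REQUIRED_KEYS = ("role:", "prefix:", "inner_loop:", "message_types:")

def validate_role_spec(text: str, path: str = "role-specs/<role>.md") -> list:
    errors = []
    # Step 2: frontmatter is the content between the leading --- markers.
    m = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not m:
        return [f"Invalid role-spec: {path} missing frontmatter"]
    frontmatter = m.group(1)
    # Step 3: required frontmatter fields (presence only, not type-checked here).
    for key in REQUIRED_KEYS:
        if not re.search(rf"^{key}", frontmatter, re.MULTILINE):
            errors.append(f"Invalid role-spec: {path} missing {key.rstrip(':')}")
    # Steps 4-6: Phase 2/3/4 section headings must exist in the body.
    for phase in (2, 3, 4):
        if f"## Phase {phase}" not in text:
            errors.append(f"Invalid role-spec: missing Phase {phase}")
    return errors
```
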
---

## Example Valid Session

```
.workflow/.team/TC-auth-feature-2026-02-27/
+-- team-session.json      # Valid JSON with session metadata
+-- task-analysis.json     # Valid JSON with dependency graph
+-- role-specs/
|   +-- researcher.md      # YAML frontmatter + Phase 2-4
|   +-- developer.md       # YAML frontmatter + Phase 2-4
|   +-- tester.md          # YAML frontmatter + Phase 2-4
+-- artifacts/             # (may be empty)
+-- shared-memory.json     # Valid JSON (may be {})
+-- wisdom/
|   +-- learnings.md
|   +-- decisions.md
|   +-- issues.md
+-- explorations/
|   +-- cache-index.json
+-- discussions/           # (may be empty)
+-- .msg/                  # (may be empty)
```

---

## Recovery from Invalid Sessions

If session validation fails:

1. **Missing team-session.json**: Re-run team-coordinate with the original task
2. **Missing task-analysis.json**: Re-run team-coordinate with resume
3. **Missing role-spec files**: Re-run team-coordinate with resume
4. **Invalid frontmatter**: Manual fix or re-run team-coordinate
5. **Corrupt JSON**: Manual inspection or re-run team-coordinate

**team-executor cannot fix invalid sessions** -- it can only report errors and suggest recovery steps.

@@ -1,372 +0,0 @@

---
name: team-executor
description: Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution. No analysis, no role generation -- only loads and executes. Session path required. Triggers on "team executor".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Executor

Lightweight session execution skill: load session -> reconcile state -> spawn workers -> execute -> deliver. **No analysis, no role generation** -- only executes existing team-coordinate sessions.

## Architecture

```
+---------------------------------------------------+
| Skill(skill="team-executor")                      |
|   args="--session=<path>"   [REQUIRED]            |
|   args="--role=<name>"      (for worker dispatch) |
+-------------------+-------------------------------+
                    | Session Validation
         +---- --session valid? ----+
         | NO                       | YES
         v                          v
  Error immediately           Role Router
  (no session)                     |
                           +-------+-------+
                           | --role present?
                           |               |
                     YES   |               | NO
                       v   |               v
                Route to   |       Orchestration Mode
                session    |       -> executor
                role.md    |
```

---

## Session Validation (BEFORE routing)

**CRITICAL**: Session validation MUST occur before any role routing.

### Parse Arguments

Extract from `$ARGUMENTS`:
- `--session=<path>`: Path to team-coordinate session folder (REQUIRED)
- `--role=<name>`: Role to dispatch (optional, defaults to orchestration mode)

### Validation Steps

1. **Check `--session` provided**:
   - If missing -> **ERROR**: "Session required. Usage: --session=<path-to-TC-folder>"
   - Do NOT proceed

2. **Validate session structure** (see specs/session-schema.md):
   - Directory exists at path
   - `team-session.json` exists and is valid JSON
   - `task-analysis.json` exists and is valid JSON
   - `roles/` directory has at least one `.md` file
   - Each role in `team-session.json#roles` has a corresponding `.md` file in `roles/`

3. **Validation failure**:
   - Report the specific missing component
   - Suggest re-running team-coordinate or checking the path
   - Do NOT proceed

### Validation Checklist

```
Session Validation Checklist:
[ ] --session argument provided
[ ] Directory exists at path
[ ] team-session.json exists and parses
[ ] task-analysis.json exists and parses
[ ] roles/ directory has >= 1 .md files
[ ] All session.roles[] have corresponding roles/<role>.md
```

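The checklist above maps directly onto filesystem checks. A hypothetical sketch, assuming the `roles/` layout described above; `check_session` is not a real CLI or skill function:

```python
import json
from pathlib import Path

# Sketch of the checklist above as a function over a session folder.
def check_session(session_path: str) -> list:
    errors = []
    root = Path(session_path)
    if not root.is_dir():
        return [f"Directory not found: {session_path}"]
    session = None
    for name in ("team-session.json", "task-analysis.json"):
        try:
            data = json.loads((root / name).read_text())
            if name == "team-session.json":
                session = data
        except (OSError, json.JSONDecodeError) as exc:
            errors.append(f"{name}: {exc}")
    roles_dir = root / "roles"
    role_files = {p.stem for p in roles_dir.glob("*.md")} if roles_dir.is_dir() else set()
    if not role_files:
        errors.append("roles/ has no .md files")
    # Every declared role must have a matching roles/<role>.md file.
    for role in (session or {}).get("roles", []):
        if role.get("name") not in role_files:
            errors.append(f"roles/{role.get('name')}.md not found")
    return errors
```
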
---

## Role Router

### Dispatch Logic

| Scenario | Action |
|----------|--------|
| No `--session` | **ERROR** immediately |
| `--session` invalid | **ERROR** with specific reason |
| No `--role` | Orchestration Mode -> executor |
| `--role=executor` | Read built-in `roles/executor/role.md` |
| `--role=<other>` | Read `<session>/roles/<role>.md` |

### Orchestration Mode

When invoked without `--role`, the executor role auto-starts.

**Invocation**: `Skill(skill="team-executor", args="--session=<session-folder>")`

**Lifecycle**:
```
Validate session
-> executor Phase 0: Reconcile state (reset interrupted, detect orphans)
-> executor Phase 1: Spawn first batch workers (background) -> STOP
-> Worker executes -> SendMessage callback -> executor advances next step
-> Loop until pipeline complete -> Phase 2 report
```

**User Commands** (wake paused executor):

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |

---

## Role Registry

| Role | File | Type |
|------|------|------|
| executor | [roles/executor/role.md](roles/executor/role.md) | built-in orchestrator |
| (dynamic) | `<session>/roles/<role-name>.md` | loaded from session |

> **COMPACT PROTECTION**: Role files are execution documents. After context compression, role instructions become summaries only -- **MUST immediately `Read` the role.md to reload before continuing**. Never execute any Phase based on summaries.

---

## Shared Infrastructure

The following templates apply to all worker roles. Each loaded role.md follows the same structure.

### Worker Phase 1: Task Discovery (all workers shared)

Each worker on startup executes the same task discovery flow:

1. Call `TaskList()` to get all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No tasks -> idle wait
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress

**Resume Artifact Check** (prevent duplicate output after resume):
- Check if this task's output artifacts already exist
- Artifacts complete -> skip to Phase 5, report completion
- Artifacts incomplete or missing -> normal Phase 2-4 execution

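The filter in step 2 can be sketched as follows; the task dicts approximate the fields named above, and the exact `TaskList()` payload shape is an assumption:

```python
# Illustrative filter for Worker Phase 1 task discovery.
def claimable_tasks(tasks: list, prefix: str, role: str) -> list:
    """Pending tasks for this role's prefix with no unresolved blockers."""
    return [
        t for t in tasks
        if t["subject"].startswith(prefix)
        and t["owner"] == role
        and t["status"] == "pending"
        and not t.get("blockedBy")
    ]
```
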
### Worker Phase 5: Report + Fast-Advance (all workers shared)

Task completion with optional fast-advance to skip the executor round-trip:

1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log the message
   - Params: operation="log", team=**<session-id>**, from=<role>, to="executor", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
   - **`team` must be the session ID** (e.g., `TC-my-project-2026-02-27`), NOT the team name. Extract from the task description `Session:` field -> take the folder name.
   - **CLI fallback**: `ccw team log --team <session-id> --from <role> --to executor --type <type> --summary "[<role>] ..." --json`
2. **TaskUpdate**: Mark task completed
3. **Fast-Advance Check**:
   - Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
   - If exactly 1 ready task AND its owner matches a simple successor pattern -> **spawn it directly** (skip executor)
   - Otherwise -> **SendMessage** to executor for orchestration
4. **Loop**: Back to Phase 1 to check for the next task

**Fast-Advance Rules**:

| Condition | Action |
|-----------|--------|
| Same-prefix successor (Inner Loop role) | Do not spawn; main agent inner loop (Phase 5-L) |
| 1 ready task, simple linear successor, different prefix | Spawn directly via Task(run_in_background: true) |
| Multiple ready tasks (parallel window) | SendMessage to executor (needs orchestration) |
| No ready tasks + others running | SendMessage to executor (status update) |
| No ready tasks + nothing running | SendMessage to executor (pipeline may be complete) |

**Fast-advance failure recovery**: If a fast-advanced task fails, the executor detects it as an orphaned in_progress task on the next `resume`/`check` and resets it to pending for re-spawn. Self-healing. See [monitor.md](roles/executor/commands/monitor.md).

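The rules table above reduces to a small decision function. A sketch with simplified inputs; the action strings are illustrative labels, not real message types:

```python
# Sketch of the fast-advance decision table above.
# ready: tasks whose blockers are all completed; running: in-progress count.
def fast_advance_action(ready: list, running: int, my_prefix: str) -> str:
    if len(ready) == 1:
        prefix = ready[0]["subject"].split("-")[0]
        if prefix == my_prefix:
            return "inner_loop"       # same-prefix successor: Phase 5-L
        return "spawn_directly"       # simple linear successor, different prefix
    if len(ready) > 1:
        return "send_message"         # parallel window needs orchestration
    return "send_message"             # status update or pipeline may be complete
```
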
### Worker Inner Loop (roles with multiple same-prefix serial tasks)

When a role has **2+ serial same-prefix tasks**, it loops internally instead of spawning new agents:

**Inner Loop flow**:

```
Phase 1: Discover task (first time)
|
+- Found task -> Phase 2-3: Load context + Execute work
|       |
|       v
|  Phase 4: Validation (+ optional Inline Discuss)
|       |
|       v
|  Phase 5-L: Loop Completion
|       |
|       +- TaskUpdate completed
|       +- team_msg log
|       +- Accumulate summary to context_accumulator
|       |
|       +- More same-prefix tasks?
|       |   +- YES -> back to Phase 1 (inner loop)
|       |   +- NO -> Phase 5-F: Final Report
|       |
|       +- Interrupt conditions?
|           +- consensus_blocked HIGH -> SendMessage -> STOP
|           +- Errors >= 3 -> SendMessage -> STOP
|
+- Phase 5-F: Final Report
   +- SendMessage (all task summaries)
   +- STOP
```

**Phase 5-L vs Phase 5-F**:

| Step | Phase 5-L (looping) | Phase 5-F (final) |
|------|---------------------|-------------------|
| TaskUpdate completed | YES | YES |
| team_msg log | YES | YES |
| Accumulate summary | YES | - |
| SendMessage to executor | NO | YES (all tasks summary) |
| Fast-Advance to next prefix | - | YES (check cross-prefix successors) |

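The flow above, stripped to control flow: `run_task` stands in for Phases 2-4, and everything here is a hypothetical sketch rather than the worker implementation:

```python
# Minimal control-flow sketch of the inner loop above.
def inner_loop(tasks: list, run_task, max_errors: int = 3) -> dict:
    summaries, errors = [], 0
    for task in tasks:                        # Phase 1: next same-prefix task
        try:
            summaries.append(run_task(task))  # Phases 2-4
        except Exception:
            errors += 1
            if errors >= max_errors:          # interrupt condition: errors >= 3
                return {"status": "interrupted", "summaries": summaries}
        # Phase 5-L: per-task completion is logged; no SendMessage here.
    # Phase 5-F: single final report with all task summaries.
    return {"status": "complete", "summaries": summaries}
```
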
### Wisdom Accumulation (all roles)

Cross-task knowledge accumulation. Loaded from the session at startup.

**Directory**:
```
<session-folder>/wisdom/
+-- learnings.md   # Patterns and insights
+-- decisions.md   # Design and strategy decisions
+-- issues.md      # Known risks and issues
```

**Worker load** (Phase 2): Extract `Session: <path>` from the task description, read wisdom files.
**Worker contribute** (Phase 4/5): Write discoveries to the corresponding wisdom files.

### Role Isolation Rules

| Allowed | Prohibited |
|---------|-----------|
| Process own prefix tasks | Process other role's prefix tasks |
| SendMessage to executor | Directly communicate with other workers |
| Use tools appropriate to responsibility | Create tasks for other roles |
| Fast-advance simple successors | Spawn parallel worker batches |
| Report capability_gap to executor | Attempt work outside scope |

The executor is additionally prohibited from: directly writing or modifying deliverable artifacts, calling implementation subagents directly, executing analysis/tests/reviews itself, and generating new roles.

---

## Cadence Control

**Beat model**: Event-driven; each beat = executor wake -> process -> spawn -> STOP.

```
Beat Cycle (single beat)
======================================================================
Event                 Executor                     Workers
----------------------------------------------------------------------
callback/resume  -->  +- handleCallback --+
                      |  mark completed   |
                      |  check pipeline   |
                      +- handleSpawnNext -+
                      |  find ready tasks |
                      |  spawn workers ---+--> [Worker A] Phase 1-5
                      |  (parallel OK) ---+--> [Worker B] Phase 1-5
                      +- STOP (idle) -----+          |
                                                     |
callback  <------------------------------------------+
(next beat)           SendMessage + TaskUpdate(completed)
======================================================================

Fast-Advance (skips executor for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
+- 1 ready task? simple successor? --> spawn Worker B directly
+- complex case? --> SendMessage to executor
======================================================================
```

---

## Executor Spawn Template

### Standard Worker (single-task role)

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Instruction
All your work MUST be executed by calling Skill to get the role definition:
Skill(skill="team-executor", args="--role=<role> --session=<session-folder>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with executor
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- After task completion, check for fast-advance opportunity (see SKILL.md Phase 5)

## Workflow
1. Call Skill -> get role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg(team=<session-id>) + SendMessage results to executor
4. TaskUpdate completed -> check next task or fast-advance`
})
```

### Inner Loop Worker (multi-task role)

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker (inner loop)",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Instruction
All your work MUST be executed by calling Skill to get the role definition:
Skill(skill="team-executor", args="--role=<role> --session=<session-folder>")

Current requirement: <task-description>
Session: <session-folder>

## Inner Loop Mode
You will handle ALL <PREFIX>-* tasks in this session, not just the first one.
After completing each task, loop back to find the next <PREFIX>-* task.
Only SendMessage to executor when:
- All <PREFIX>-* tasks are done
- A consensus_blocked HIGH occurs
- Errors accumulate (>= 3)

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with executor
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- Use subagent calls for heavy work, retain summaries in context`
})
```

---

## Integration with team-coordinate

| Scenario | Skill |
|----------|-------|
| New task, no session | team-coordinate |
| Existing session, resume execution | **team-executor** |
| Session needs new roles | team-coordinate (with --resume) |
| Pure execution, no analysis | **team-executor** |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No --session provided | ERROR immediately with usage message |
| Session directory not found | ERROR with path, suggest checking path |
| team-session.json missing | ERROR, session incomplete, suggest re-run team-coordinate |
| task-analysis.json missing | ERROR, session incomplete, suggest re-run team-coordinate |
| No roles in session | ERROR, session incomplete, suggest re-run team-coordinate |
| Role file not found | ERROR with expected path |
| capability_gap reported | Warn only, cannot generate new roles (see monitor.md handleAdapt) |
| Fast-advance spawns wrong task | Executor reconciles on next callback |

@@ -1,277 +0,0 @@

# Command: monitor

## Purpose

Event-driven pipeline coordination with the Spawn-and-Stop pattern for team-executor. Adapted from team-coordinate monitor.md -- role names are read from `team-session.json#roles` instead of hardcoded. **handleAdapt is LIMITED**: it only warns and cannot generate new roles.

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Executor does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip executor for simple linear successors |
| ROLE_GENERATION | disabled | handleAdapt cannot generate new roles |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |

**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name`. This is the same pattern as team-coordinate.

## Phase 3: Handler Routing

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine the handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |

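The priority table can be read as a first-match router. A sketch where matching is simplified to substring checks; real argument parsing may differ:

```python
# Sketch of the wake-up routing table above; role names come from session.roles.
def route(arguments: str, role_names: list) -> str:
    text = arguments.lower()
    if any(f"[{name.lower()}]" in text for name in role_names):
        return "handleCallback"
    if "capability_gap" in text:
        return "handleAdapt"
    if "check" in text or "status" in text:
        return "handleCheck"
    if any(word in text for word in ("resume", "continue", "next")):
        return "handleResume"
    return "handleSpawnNext"   # initial spawn after dispatch
```
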
---

### Handler: handleCallback

Worker completed a task. Verify completion, update state, auto-advance.

```
Receive callback from [<role>]
+- Find matching active worker by role (from session.roles)
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
|  +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |  +- -> handleSpawnNext
|  +- NO -> progress message, do not advance -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
      +- Found completed -> process each -> handleSpawnNext
      +- None completed -> STOP
```

**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext

---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[executor] Pipeline Status
[executor] Progress: <completed>/<total> (<percent>%)

[executor] Execution Graph:
<visual representation of dependency graph with status icons>

done=completed  >>>=running  o=pending  .=not created

[executor] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]

[executor] Ready to spawn: <subjects>
[executor] Commands: 'resume' to advance | 'check' to refresh
```

**Icon mapping**: completed=done, in_progress=>>>, pending=o, not created=.

**Graph rendering**: Read dependency_graph from task-analysis.json, render each node with a status icon. Show parallel branches side-by-side.

Then STOP.

---

### Handler: handleResume

Check active worker completion, process results, advance the pipeline.

```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- status = completed -> mark done, log
   +- status = in_progress -> still running, log
   +- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```

---

### Handler: handleSpawnNext

Find all ready tasks, spawn workers in background, update session, STOP.

```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects

Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> Phase 2
+- HAS ready tasks -> for each:
   +- Is task owner an Inner Loop role AND that role already has an active_worker?
   |  +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
   |  +- NO -> normal spawn below
   +- TaskUpdate -> in_progress
   +- team_msg log -> task_unblocked (team=<session-id>, NOT team name)
   +- Spawn worker (see spawn tool call below)
   +- Add to session.active_workers
Update session file -> output summary -> STOP
```

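The ready-set computation plus the inner-loop skip rule can be sketched as follows; task and role shapes are illustrative:

```python
# Sketch of the spawn decision above: ready tasks minus those owned by an
# inner-loop role that already has an active worker.
def to_spawn(tasks: list, inner_loop_roles: set, active_roles: set) -> list:
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    # readySubjects: pending + all blockedBy in completedSubjects
    ready = [
        t for t in tasks
        if t["status"] == "pending"
        and all(b in completed for b in t.get("blockedBy", []))
    ]
    # Skip spawn when an inner-loop worker for that role is already running.
    return [
        t for t in ready
        if not (t["owner"] in inner_loop_roles and t["owner"] in active_roles)
    ]
```
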
**Spawn worker tool call** (one per ready task):

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker for <subject>",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: "<worker prompt from SKILL.md Executor Spawn Template>"
})
```

---

### Handler: handleAdapt (LIMITED)

Handle mid-pipeline capability gap discovery. **UNLIKE team-coordinate, the executor CANNOT generate new roles.**

```
Receive capability_gap from [<role>]
+- Log via team_msg (type: warning)
+- Report to user:
   "Capability gap detected: <gap_description>

   team-executor cannot generate new roles.
   Options:
   1. Continue with existing roles (worker will skip gap work)
   2. Re-run team-coordinate with --resume=<session> to extend session
   3. Manually add role to <session>/roles/ and retry"
+- Extract: gap_description, requesting_role, suggested_capability
+- Validate gap is genuine:
   +- Check existing roles in session.roles -> does any role cover this?
   |  +- YES -> redirect: SendMessage to that role's owner -> STOP
   |  +- NO -> genuine gap, report to user (cannot fix)
+- Do NOT generate new role
+- Continue execution with existing roles
```

**Key difference from team-coordinate**:

| Aspect | team-coordinate | team-executor |
|--------|-----------------|---------------|
| handleAdapt | Generates new role, creates tasks, spawns worker | Only warns, cannot fix |
| Recovery | Automatic | Manual (re-run team-coordinate) |

---

### Worker Failure Handling

When a worker has an unexpected status (not completed, not in_progress):

1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume

### Fast-Advance Failure Recovery

When the executor detects that a fast-advanced task has failed (task in_progress but no callback and worker gone):

```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Original fast-advancing worker has already completed and exited
+- Resolution:
   1. TaskUpdate -> reset task to pending
   2. Remove stale active_worker entry (if any)
   3. Log via team_msg (type: error, summary: "Fast-advanced task <ID> failed, resetting for retry")
   4. -> handleSpawnNext (will re-spawn the task normally)
```

**Detection in handleResume**:

```
For each in_progress task in TaskList():
+- Has matching active_worker? -> normal, skip
+- No matching active_worker? -> orphaned (likely fast-advance failure)
   +- Check creation time: if > 5 minutes with no progress callback
      +- Reset to pending -> handleSpawnNext
```

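The detection rule above, as a sketch: the `started_at` field is an assumed name, and the 300-second threshold mirrors the 5-minute rule in the text:

```python
import time

# Sketch of the orphan check above (in_progress, no active worker, stale).
def orphaned(tasks: list, active_worker_subjects: set,
             now=None, stale_after: int = 300) -> list:
    now = time.time() if now is None else now
    return [
        t["subject"] for t in tasks
        if t["status"] == "in_progress"
        and t["subject"] not in active_worker_subjects
        and now - t["started_at"] > stale_after
    ]
```
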
**Prevention**: Fast-advance failures are self-healing. The executor reconciles orphaned tasks on every `resume`/`check` cycle.

### Consensus-Blocked Handling

When a worker reports `consensus_blocked` in its callback:

```
handleCallback receives message with consensus_blocked flag
+- Extract: divergence_severity, blocked_round, action_recommendation
+- Route by severity:
|
+- severity = HIGH
|  +- Create REVISION task:
|     +- Same role, same doc type, incremented suffix (e.g., DRAFT-001-R1)
|     +- Description includes: divergence details + action items from discuss
|     +- blockedBy: none (immediate execution)
|     +- Max 1 revision per task (DRAFT-001 -> DRAFT-001-R1, no R2)
|     +- If already revised once -> PAUSE, escalate to user
|     +- Update session: mark task as "revised", log revision chain
|
+- severity = MEDIUM
|  +- Proceed with warning: include divergence in next task's context
|  +- Log action items to wisdom/issues.md
|  +- Normal handleSpawnNext
|
+- severity = LOW
   +- Proceed normally: treat as consensus_reached with notes
   +- Normal handleSpawnNext
```

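The severity routing, including the max-one-revision rule, can be sketched as follows; the action labels are illustrative:

```python
# Sketch of the severity routing above; the revision-suffix convention
# (DRAFT-001 -> DRAFT-001-R1, max one revision) is taken from the flow.
def route_consensus_blocked(severity: str, task_id: str) -> dict:
    if severity == "HIGH":
        if task_id.endswith("-R1"):
            # Already revised once: no R2, pause and escalate to the user.
            return {"action": "pause_escalate", "task": task_id}
        return {"action": "create_revision", "task": f"{task_id}-R1"}
    if severity == "MEDIUM":
        return {"action": "proceed_with_warning", "task": task_id}
    return {"action": "proceed", "task": task_id}   # LOW
```
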
## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task owners exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
| Fast-advance orphan check | in_progress tasks without active_worker entry -> reset to pending |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-run team-coordinate |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Dynamic role file not found | Error, cannot proceed without role definition |
| capability_gap from role | WARN only, cannot generate new roles |
| consensus_blocked HIGH | Create revision task (max 1) or pause for user |
| consensus_blocked MEDIUM | Proceed with warning, log to wisdom/issues.md |
@@ -1,202 +0,0 @@
# Executor Role

Orchestrate the team-executor workflow: session validation, state reconciliation, worker dispatch, progress monitoring, and session-state persistence. The sole built-in role -- all worker roles are loaded from the session.

## Identity

- **Name**: `executor` | **Tag**: `[executor]`
- **Responsibility**: Validate session -> Reconcile state -> Create team -> Dispatch tasks -> Monitor progress -> Report results

## Boundaries

### MUST
- Validate session structure before any execution
- Reconcile session state with TaskList on startup
- Reset in_progress tasks to pending (interrupted tasks)
- Detect fast-advance orphans and reset to pending
- Spawn worker subagents in background
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
- Handle capability_gap reports with warning only (cannot generate roles)

### MUST NOT
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Generate new roles (use existing session roles only)
- Skip session validation
- Override consensus_blocked HIGH without user confirmation

> **Core principle**: executor orchestrates; it never executes task work itself. All actual work is delegated to session-defined worker roles. Unlike the team-coordinate coordinator, executor CANNOT generate new roles.

---
## Entry Router

When executor is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| New execution | None of above | -> Phase 0 |

For callback/check/resume/adapt: load `commands/monitor.md` and execute the appropriate handler, then STOP.

---

## Phase 0: Session Validation + State Reconciliation

**Objective**: Validate session structure and reconcile session state with actual task status.

**Workflow**:

### Step 1: Session Validation

Validate session structure (see SKILL.md Session Validation):
- [ ] Directory exists at session path
- [ ] `team-session.json` exists and parses
- [ ] `task-analysis.json` exists and parses
- [ ] `roles/` directory has >= 1 .md file
- [ ] All roles in team-session.json#roles have corresponding .md files

If validation fails -> ERROR with specific reason -> STOP

### Step 2: Load Session State

```javascript
session = Read(<session-folder>/team-session.json)
taskAnalysis = Read(<session-folder>/task-analysis.json)
```

### Step 3: Reconcile with TaskList

```
Call TaskList() -> get real status of all tasks
Compare with session.completed_tasks:
+- Tasks in TaskList.completed but not in session -> add to session.completed_tasks
+- Tasks in session.completed_tasks but not TaskList.completed -> remove from session.completed_tasks (anomaly, log warning)
+- Tasks in TaskList.in_progress -> candidate for reset
```
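The reconciliation rules above can be sketched as a small pure function. The names (`reconcile`, `resetCandidates`) and the task-object shape are illustrative assumptions, not the actual TaskList payload.

```javascript
// Hypothetical sketch of Step 3: reconcile session.completed_tasks against
// the statuses reported by TaskList(). Returns the corrected completed list,
// the deltas to log, and the in_progress IDs that become reset candidates.
function reconcile(sessionCompleted, taskList) {
  const done = taskList.filter(t => t.status === "completed").map(t => t.id);
  const inProgress = taskList.filter(t => t.status === "in_progress").map(t => t.id);
  const added = done.filter(id => !sessionCompleted.includes(id));      // add to session
  const removed = sessionCompleted.filter(id => !done.includes(id));    // anomaly, log warning
  return { completed: done, added, removed, resetCandidates: inProgress };
}
```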

### Step 4: Reset Interrupted Tasks

```
For each task in TaskList.in_progress:
+- Reset to pending via TaskUpdate
+- Log via team_msg (type: warning, summary: "Task <ID> reset from interrupted state")
```

### Step 5: Detect Fast-Advance Orphans

```
For each task in TaskList.in_progress:
+- Check if has matching active_worker entry
+- No matching active_worker + created > 5 minutes ago -> orphan
+- Reset to pending via TaskUpdate
+- Log via team_msg (type: error, summary: "Fast-advance orphan <ID> reset")
```
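The orphan rule above reduces to a single predicate. A minimal sketch, assuming illustrative field names (`createdAt` in milliseconds, `taskId` on worker entries) that may differ from the real session structures:

```javascript
// Hypothetical sketch of Step 5: an in_progress task with no matching
// active_worker entry that was created more than 5 minutes ago counts as a
// fast-advance orphan and should be reset to pending.
const ORPHAN_AGE_MS = 5 * 60 * 1000;

function findOrphans(inProgressTasks, activeWorkers, now) {
  const working = new Set(activeWorkers.map(w => w.taskId));
  return inProgressTasks
    .filter(t => !working.has(t.id))                  // no matching active_worker
    .filter(t => now - t.createdAt > ORPHAN_AGE_MS)   // older than 5 minutes
    .map(t => t.id);
}
```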

### Step 6: Create Missing Tasks (if needed)

```
For each task in task-analysis.json#tasks:
+- Check if exists in TaskList
+- Not exists -> create via TaskCreate with correct blockedBy
```

### Step 7: Update Session File

```
Write updated team-session.json with:
+- reconciled completed_tasks
+- cleared active_workers (will be rebuilt on spawn)
+- status = "active"
```

### Step 8: Team Setup

```
Check if team exists (via TaskList with team_name filter)
+- Not exists -> TeamCreate with team_name from session
+- Exists -> continue with existing team
```

**Success**: Session validated, state reconciled, team ready -> Phase 1

---

## Phase 1: Spawn-and-Stop

**Objective**: Spawn first batch of ready workers in background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern, with worker fast-advance.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> may fast-advance to next task OR SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Executor does one operation per invocation, then STOPS

**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task -> spawn worker (see SKILL.md Executor Spawn Template)
   - Use Standard Worker template for single-task roles
   - Use Inner Loop Worker template for multi-task roles
4. Output status summary with execution graph
5. STOP

**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
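The ready-task filter in step 2 of the workflow can be sketched as follows. The task-object shape (`owner`, `blockedBy`) mirrors the task-analysis.json schema; the helper name is hypothetical.

```javascript
// Illustrative sketch of the readiness test: a task is ready when it is
// pending, has an assigned owner, and every blocking task is completed.
function readyTasks(tasks) {
  const completed = new Set(tasks.filter(t => t.status === "completed").map(t => t.id));
  return tasks.filter(t =>
    t.status === "pending" &&
    t.owner &&
    (t.blockedBy || []).every(id => completed.has(id))
  );
}
```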

---

## Phase 2: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Update session status -> "completed"
6. Offer next steps: exit / view artifacts / extend with additional tasks

**Output format**:

```
[executor] ============================================
[executor] TASK COMPLETE
[executor]
[executor] Deliverables:
[executor] - <artifact-1.md> (<producer role>)
[executor] - <artifact-2.md> (<producer role>)
[executor]
[executor] Pipeline: <completed>/<total> tasks
[executor] Roles: <role-list>
[executor] Duration: <elapsed>
[executor]
[executor] Session: <session-folder>
[executor] ============================================
```

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Session validation fails | ERROR with specific reason, suggest re-run team-coordinate |
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| capability_gap reported | handleAdapt: WARN only, cannot generate new roles |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Role file not found | ERROR, cannot proceed without role definition |
@@ -1,272 +0,0 @@
# Session Schema

Required session structure for team-executor. All components MUST exist for valid execution.

## Directory Structure

```
<session-folder>/
+-- team-session.json      # Session state + dynamic role registry (REQUIRED)
+-- task-analysis.json     # Task analysis output: capabilities, dependency graph (REQUIRED)
+-- roles/                 # Dynamic role definitions (REQUIRED, >= 1 .md file)
|   +-- <role-1>.md
|   +-- <role-2>.md
+-- artifacts/             # All MD deliverables from workers
|   +-- <artifact>.md
+-- shared-memory.json     # Cross-role state store
+-- wisdom/                # Cross-task knowledge
|   +-- learnings.md
|   +-- decisions.md
|   +-- issues.md
+-- explorations/          # Shared explore cache
|   +-- cache-index.json
|   +-- explore-<angle>.json
+-- discussions/           # Inline discuss records
|   +-- <round>.md
+-- .msg/                  # Team message bus logs
```

## Validation Checklist

team-executor validates the following before execution:

### Required Components

| Component | Validation | Error Message |
|-----------|------------|---------------|
| `--session` argument | Must be provided | "Session required. Usage: --session=<path-to-TC-folder>" |
| Directory | Must exist at path | "Session directory not found: <path>" |
| `team-session.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: team-session.json missing, corrupt, or missing required fields" |
| `task-analysis.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: task-analysis.json missing, corrupt, or missing required fields" |
| `roles/` directory | Must exist and contain >= 1 .md file | "Invalid session: no role files in roles/" |
| Role file mapping | Each role in team-session.json#roles must have .md file | "Role file not found: roles/<role>.md" |
| Role file structure | Each role .md must contain required headers | "Invalid role file: roles/<role>.md missing required section: <section>" |

### Validation Algorithm

```
1. Parse --session=<path> from arguments
   +- Not provided -> ERROR: "Session required. Usage: --session=<path-to-TC-folder>"

2. Check directory exists
   +- Not exists -> ERROR: "Session directory not found: <path>"

3. Check team-session.json
   +- Not exists -> ERROR: "Invalid session: team-session.json missing"
   +- Parse error -> ERROR: "Invalid session: team-session.json corrupt"
   +- Validate required fields:
      +- session_id (string) -> missing/invalid -> ERROR: "team-session.json missing required field: session_id"
      +- task_description (string) -> missing -> ERROR: "team-session.json missing required field: task_description"
      +- status (string: active|paused|completed) -> missing/invalid -> ERROR: "team-session.json has invalid status"
      +- team_name (string) -> missing -> ERROR: "team-session.json missing required field: team_name"
      +- roles (array) -> missing/empty -> ERROR: "team-session.json missing or empty roles array"

4. Check task-analysis.json
   +- Not exists -> ERROR: "Invalid session: task-analysis.json missing"
   +- Parse error -> ERROR: "Invalid session: task-analysis.json corrupt"
   +- Validate required fields:
      +- capabilities (array) -> missing -> ERROR: "task-analysis.json missing required field: capabilities"
      +- dependency_graph (object) -> missing -> ERROR: "task-analysis.json missing required field: dependency_graph"
      +- roles (array) -> missing/empty -> ERROR: "task-analysis.json missing or empty roles array"
      +- tasks (array) -> missing/empty -> ERROR: "task-analysis.json missing or empty tasks array"

5. Check roles/ directory
   +- Not exists -> ERROR: "Invalid session: roles/ directory missing"
   +- No .md files -> ERROR: "Invalid session: no role files in roles/"

6. Check role file mapping and structure
   +- For each role in team-session.json#roles:
      +- Check roles/<role.name>.md exists
      +- Not exists -> ERROR: "Role file not found: roles/<role.name>.md"
      +- Validate role file structure (see Role File Structure Validation):
         +- Check for required headers: "# Role:", "## Identity", "## Boundaries", "## Execution"
         +- Missing header -> ERROR: "Invalid role file: roles/<role.name>.md missing required section: <section>"

7. All checks pass -> proceed to Phase 0
```
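Steps 3-4 of the algorithm above can be sketched as a field validator, assuming the two JSON files are already parsed into objects. The function name and return convention (first error string, or null when valid) are illustrative.

```javascript
// Minimal sketch of the required-field checks for team-session.json and
// task-analysis.json. Existence and parse errors are assumed handled earlier.
function validateSession(session, taskAnalysis) {
  for (const f of ["session_id", "task_description", "team_name"]) {
    if (typeof session[f] !== "string") return `team-session.json missing required field: ${f}`;
  }
  if (!["active", "paused", "completed"].includes(session.status))
    return "team-session.json has invalid status";
  if (!Array.isArray(session.roles) || session.roles.length === 0)
    return "team-session.json missing or empty roles array";
  for (const f of ["capabilities", "dependency_graph"]) {
    if (!(f in taskAnalysis)) return `task-analysis.json missing required field: ${f}`;
  }
  for (const f of ["roles", "tasks"]) {
    if (!Array.isArray(taskAnalysis[f]) || taskAnalysis[f].length === 0)
      return `task-analysis.json missing or empty ${f} array`;
  }
  return null; // all checks pass
}
```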

---

## team-session.json Schema

```json
{
  "session_id": "TC-<slug>-<date>",
  "task_description": "<original user input>",
  "status": "active | paused | completed",
  "team_name": "<team-name>",
  "roles": [
    {
      "name": "<role-name>",
      "prefix": "<PREFIX>",
      "responsibility_type": "<type>",
      "inner_loop": false,
      "role_file": "roles/<role-name>.md"
    }
  ],
  "pipeline": {
    "dependency_graph": {},
    "tasks_total": 0,
    "tasks_completed": 0
  },
  "active_workers": [],
  "completed_tasks": [],
  "created_at": "<timestamp>"
}
```

### Required Fields

| Field | Type | Description |
|-------|------|-------------|
| `session_id` | string | Unique session identifier (e.g., "TC-auth-feature-2026-02-27") |
| `task_description` | string | Original task description from user |
| `status` | string | One of: "active", "paused", "completed" |
| `team_name` | string | Team name for Task tool |
| `roles` | array | List of role definitions |
| `roles[].name` | string | Role name (must match .md filename) |
| `roles[].prefix` | string | Task prefix for this role (e.g., "SPEC", "IMPL") |
| `roles[].role_file` | string | Relative path to role file |

### Optional Fields

| Field | Type | Description |
|-------|------|-------------|
| `pipeline` | object | Pipeline metadata |
| `active_workers` | array | Currently running workers |
| `completed_tasks` | array | List of completed task IDs |
| `created_at` | string | ISO timestamp |

---

## task-analysis.json Schema

```json
{
  "capabilities": [
    {
      "name": "<capability-name>",
      "description": "<description>",
      "artifact_type": "<type>"
    }
  ],
  "dependency_graph": {
    "<task-id>": {
      "depends_on": ["<dependency-task-id>"],
      "role": "<role-name>"
    }
  },
  "roles": [
    {
      "name": "<role-name>",
      "prefix": "<PREFIX>",
      "responsibility_type": "<type>",
      "inner_loop": false
    }
  ],
  "tasks": [
    {
      "id": "<task-id>",
      "subject": "<task-subject>",
      "owner": "<role-name>",
      "blockedBy": ["<dependency-task-id>"]
    }
  ],
  "complexity_score": 0
}
```

### Required Fields

| Field | Type | Description |
|-------|------|-------------|
| `capabilities` | array | Detected capabilities |
| `dependency_graph` | object | Task dependency DAG |
| `roles` | array | Role definitions |
| `tasks` | array | Task definitions |

---

## Role File Schema

Each role file in `roles/<role-name>.md` must follow the structure defined in `team-coordinate/specs/role-template.md`.

### Minimum Required Sections

| Section | Description |
|---------|-------------|
| `# Role: <name>` | Header with role name |
| `## Identity` | Name, tag, prefix, responsibility |
| `## Boundaries` | MUST and MUST NOT rules |
| `## Execution (5-Phase)` | Phase 1-5 workflow |

### Role File Structure Validation

Role files MUST be validated for structure before execution. This catches malformed role files early and provides actionable error messages.

**Required Sections** (must be present in order):

| Section | Pattern | Purpose |
|---------|---------|---------|
| Role Header | `# Role: <name>` | Identifies the role definition |
| Identity | `## Identity` | Defines name, tag, prefix, responsibility |
| Boundaries | `## Boundaries` | Defines MUST and MUST NOT rules |
| Execution | `## Execution` | Defines the 5-Phase workflow |

**Validation Algorithm**:

```
For each role file in roles/<role>.md:
1. Read file content
2. Check for "# Role:" header
   +- Not found -> ERROR: "Invalid role file: roles/<role>.md missing role header"
3. Check for "## Identity" section
   +- Not found -> ERROR: "Invalid role file: roles/<role>.md missing required section: Identity"
4. Check for "## Boundaries" section
   +- Not found -> ERROR: "Invalid role file: roles/<role>.md missing required section: Boundaries"
5. Check for "## Execution" section (or "## Execution (5-Phase)")
   +- Not found -> ERROR: "Invalid role file: roles/<role>.md missing required section: Execution"
6. All checks pass -> role file valid
```
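The header checks above amount to four regex tests against the file content. A minimal sketch, with a hypothetical helper name and return convention (first missing section name, or null when valid):

```javascript
// Sketch of the role-file structure check: scan the markdown for the four
// required sections and report the first one that is missing.
function checkRoleFile(content) {
  const required = [
    ["role header", /^# Role:/m],
    ["Identity", /^## Identity/m],
    ["Boundaries", /^## Boundaries/m],
    ["Execution", /^## Execution/m], // also matches "## Execution (5-Phase)"
  ];
  for (const [name, pattern] of required) {
    if (!pattern.test(content)) return name;
  }
  return null; // role file valid
}
```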

**Benefits**:
- Early detection of malformed role files
- Clear error messages for debugging
- Prevents runtime failures during worker execution

---

## Example Valid Session

```
.workflow/.team/TC-auth-feature-2026-02-27/
+-- team-session.json      # Valid JSON with session metadata
+-- task-analysis.json     # Valid JSON with dependency graph
+-- roles/
|   +-- spec-writer.md     # Role file for SPEC-* tasks
|   +-- implementer.md     # Role file for IMPL-* tasks
|   +-- tester.md          # Role file for TEST-* tasks
+-- artifacts/             # (may be empty)
+-- shared-memory.json     # Valid JSON (may be {})
+-- wisdom/
|   +-- learnings.md
|   +-- decisions.md
|   +-- issues.md
+-- explorations/
|   +-- cache-index.json
+-- discussions/           # (may be empty)
+-- .msg/                  # (may be empty)
```

---

## Recovery from Invalid Sessions

If session validation fails:

1. **Missing team-session.json**: Re-run team-coordinate with original task
2. **Missing task-analysis.json**: Re-run team-coordinate with --resume
3. **Missing role files**: Re-run team-coordinate with --resume
4. **Corrupt JSON**: Manual inspection or re-run team-coordinate

**team-executor cannot fix invalid sessions** -- it can only report errors and suggest recovery steps.
@@ -6,28 +6,21 @@ allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), Task
 # Team Frontend Development
 
-Unified team skill: frontend development with built-in ui-ux-pro-max design intelligence. Covers requirement analysis, design system generation, frontend implementation, and quality assurance. All team members invoke with `--role=xxx` to route to role-specific execution.
+Full-stack frontend development team with built-in ui-ux-pro-max design intelligence, covering requirement analysis, design-system generation, frontend implementation, and quality assurance. All team members invoke this skill with `--role=xxx` to route to role-specific execution.
 
-## Architecture
+## Architecture Overview
 
 ```
-┌──────────────────────────────────────────────────────┐
-│ Skill(skill="team-frontend")                         │
-│   args="<task-description>" or args="--role=xxx"     │
-└──────────────────────────┬───────────────────────────┘
-                           │ Role Router
-              ┌──── --role present? ────┐
-              │ NO                      │ YES
-              ↓                         ↓
-     Orchestration Mode            Role Dispatch
-     (auto -> coordinator)         (route to role.md)
-                                        │
-         ┌────┴────┬───────────┬───────────┬───────────┐
-         ↓         ↓           ↓           ↓           ↓
-    ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
-    │ coord  │ │analyst │ │architect│ │developer│ │  qa   │
-    │        │ │ANALYZE-*│ │ARCH-*  │ │DEV-*   │ │QA-*   │
-    └────────┘ └────────┘ └────────┘ └────────┘ └────────┘
+┌──────────────────────────────────────────────────┐
+│ Skill(skill="team-frontend", args="--role=xxx")  │
+└───────────────────┬──────────────────────────────┘
+                    │ Role Router
+    ┌───────┬───────┼───────┬───────┐
+    ↓       ↓       ↓       ↓       ↓
+┌──────────┐┌──────────┐┌──────────┐┌──────────┐┌──────────┐
+│coordinator││ analyst  ││ architect││ developer││    qa    │
+│  roles/  ││  roles/  ││  roles/  ││  roles/  ││  roles/  │
+└──────────┘└──────────┘└──────────┘└──────────┘└──────────┘
 ```
 
 ## Command Architecture
@@ -59,124 +52,104 @@ roles/
|
|||||||
|
|
||||||
### Input Parsing
|
### Input Parsing
|
||||||
|
|
||||||
Parse `$ARGUMENTS` to extract `--role`. If absent -> Orchestration Mode (auto route to coordinator).
|
Parse `$ARGUMENTS` to extract `--role`:
|
||||||
|
|
||||||
### Role Registry
|
```javascript
|
||||||
|
const args = "$ARGUMENTS"
|
||||||
|
const roleMatch = args.match(/--role[=\s]+(\w+)/)
|
||||||
|
|
||||||
| Role | File | Task Prefix | Type | Compact |
|
if (!roleMatch) {
|
||||||
|------|------|-------------|------|---------|
|
throw new Error("Missing --role argument. Available roles: coordinator, analyst, architect, developer, qa")
|
||||||
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | **compressed -> must re-read** |
|
}
|
||||||
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | ANALYZE-* | pipeline | compressed -> must re-read |
|
|
||||||
| architect | [roles/architect/role.md](roles/architect/role.md) | ARCH-* | pipeline | compressed -> must re-read |
|
|
||||||
| developer | [roles/developer/role.md](roles/developer/role.md) | DEV-* | pipeline | compressed -> must re-read |
|
|
||||||
| qa | [roles/qa/role.md](roles/qa/role.md) | QA-* | pipeline | compressed -> must re-read |
|
|
||||||
|
|
||||||
> **COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and role instructions are reduced to summaries, **you MUST immediately `Read` the corresponding role.md to reload before continuing execution**. Do not execute any Phase based on summaries.
|
const role = roleMatch[1]
|
||||||
|
const teamName = args.match(/--team[=\s]+([\w-]+)/)?.[1] || "frontend"
|
||||||
### Dispatch
|
|
||||||
|
|
||||||
1. Extract `--role` from arguments
|
|
||||||
2. If no `--role` -> route to coordinator (Orchestration Mode)
|
|
||||||
3. Look up role in registry -> Read the role file -> Execute its phases
|
|
||||||
|
|
||||||
### Orchestration Mode
|
|
||||||
|
|
||||||
When invoked without `--role`, coordinator auto-starts. User just provides task description.
|
|
||||||
|
|
||||||
**Invocation**: `Skill(skill="team-frontend", args="<task-description>")`
|
|
||||||
|
|
||||||
**Lifecycle**:
|
|
||||||
```
|
|
||||||
User provides task description
|
|
||||||
-> coordinator Phase 1-3: Requirement clarification + industry identification -> TeamCreate -> Create task chain
|
|
||||||
-> coordinator Phase 4: spawn first batch workers (background) -> STOP
|
|
||||||
-> Worker executes -> SendMessage callback -> coordinator advances next step
|
|
||||||
-> Loop until pipeline complete -> Phase 5 report
|
|
||||||
```
|
```
|
||||||
|
|
||||||
**User Commands** (wake paused coordinator):
|
### Role Dispatch
|
||||||
|
|
||||||
| Command | Action |
|
```javascript
|
||||||
|---------|--------|
|
const VALID_ROLES = {
|
||||||
| `check` / `status` | Output execution status graph, no advancement |
|
"coordinator": { file: "roles/coordinator/role.md", prefix: null },
|
||||||
| `resume` / `continue` | Check worker states, advance next step |
|
"analyst": { file: "roles/analyst/role.md", prefix: "ANALYZE" },
|
||||||
|
"architect": { file: "roles/architect/role.md", prefix: "ARCH" },
|
||||||
|
"developer": { file: "roles/developer/role.md", prefix: "DEV" },
|
||||||
|
"qa": { file: "roles/qa/role.md", prefix: "QA" }
|
||||||
|
}
|
||||||
|
|
||||||
---
|
if (!VALID_ROLES[role]) {
|
||||||
|
throw new Error(`Unknown role: ${role}. Available: ${Object.keys(VALID_ROLES).join(', ')}`)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Read and execute role-specific logic
|
||||||
|
Read(VALID_ROLES[role].file)
|
||||||
|
// → Execute the 5-phase process defined in that file
|
||||||
|
```
|
||||||
|
|
||||||
|
### Available Roles
|
||||||
|
|
||||||
|
| Role | Task Prefix | Responsibility | Role File |
|
||||||
|
|------|-------------|----------------|-----------|
|
||||||
|
| `coordinator` | N/A | 需求澄清、行业识别、流水线编排、进度监控、GC循环控制 | [roles/coordinator/role.md](roles/coordinator/role.md) |
|
||||||
|
| `analyst` | ANALYZE-* | 需求分析、调用 ui-ux-pro-max 获取设计智能、行业推理规则匹配 | [roles/analyst/role.md](roles/analyst/role.md) |
|
||||||
|
| `architect` | ARCH-* | 消费设计智能、定义设计令牌系统、组件架构、技术选型 | [roles/architect/role.md](roles/architect/role.md) |
|
||||||
|
| `developer` | DEV-* | 消费架构产出、实现前端组件/页面代码 | [roles/developer/role.md](roles/developer/role.md) |
|
||||||
|
| `qa` | QA-* | 代码审查、可访问性检查、行业反模式检查、Pre-Delivery验证 | [roles/qa/role.md](roles/qa/role.md) |
|
||||||
|
|
||||||
## Shared Infrastructure
|
## Shared Infrastructure
|
||||||
|
|
||||||
The following templates apply to all worker roles. Each role.md only needs to write **Phase 2-4** role-specific logic.
|
|
||||||
|
|
||||||
### Worker Phase 1: Task Discovery (shared by all workers)
|
|
||||||
|
|
||||||
Every worker executes the same task discovery flow on startup:
|
|
||||||
1. Call `TaskList()` to get all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No tasks -> idle wait
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress

**Resume Artifact Check** (prevent duplicate output after resume):

- Check whether this task's output artifact already exists
- Artifact complete -> skip to Phase 5 report completion
- Artifact incomplete or missing -> normal Phase 2-4 execution

### Worker Phase 5: Report (shared by all workers)

Standard reporting flow after task completion:

1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log the message
   - Parameters: operation="log", team=**<session-id>**, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
   - **CLI fallback**: When MCP is unavailable -> `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
   - **Note**: `team` must be the session ID (e.g., `FES-xxx-date`), NOT the team name. Extract it from the `Session:` field in the task description.
2. **SendMessage**: Send the result to the coordinator (content and summary both prefixed with `[<role>]`)
3. **TaskUpdate**: Mark the task completed
4. **Loop**: Return to Phase 1 to check for the next task

### Wisdom Accumulation (all roles)

Cross-task knowledge accumulation. The coordinator creates the `wisdom/` directory at session initialization.

**Directory**:

```
<session-folder>/wisdom/
├── learnings.md      # Patterns and insights
├── decisions.md      # Architecture and design decisions
├── conventions.md    # Codebase conventions
└── issues.md         # Known risks and issues
```

**Worker Load** (Phase 2): Extract `Session: <path>` from the task description, then read the wisdom directory files.

**Worker Contribute** (Phase 4/5): Write this task's discoveries to the corresponding wisdom files.
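A worker's wisdom contribution can be sketched as a tagged append to one of the files above. This is an illustrative sketch only: the helper name and the in-memory string stand-in for the actual file Write/Edit tools are assumptions.

```javascript
// Sketch: append a role-tagged discovery to a wisdom file.
// The string is an in-memory stand-in for the real Write/Edit file tools.
function contributeWisdom(fileContent, role, taskId, note) {
  const entry = `- [${role}] (${taskId}) ${note}`;
  return fileContent.trimEnd() + '\n' + entry + '\n';
}

let learnings = '# Learnings\n';
learnings = contributeWisdom(learnings, 'developer', 'DEV-001', 'Tokens must load before component CSS');
console.log(learnings);
```

Tagging each entry with `[role]` and the task ID keeps the accumulated files attributable when multiple workers contribute.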
### Role Isolation Rules

**Core principle**: Each role may only perform work within its own area of responsibility.

#### Output Tagging (mandatory)

All outputs from every role must carry the `[role_name]` prefix:

```javascript
SendMessage({
  content: `## [${role}] ...`,
  summary: `[${role}] ...`
})

mcp__ccw-tools__team_msg({
  summary: `[${role}] ...`
})
```

#### Coordinator Isolation

| Allowed | Forbidden |
|---------|-----------|
| Requirement clarification (AskUserQuestion) | Direct code writing/modification |
| Create task chain (TaskCreate) | Calling implementation subagents |
| Dispatch tasks to workers | Direct analysis/testing/review |
| Monitor progress (message bus) | Bypassing workers to complete tasks itself |
| Report results to user | Modifying source code or artifact files |

#### Worker Isolation

| Allowed | Forbidden |
|---------|-----------|
| Process tasks with own prefix | Process tasks with other role prefixes |
| SendMessage to coordinator | Communicate directly with other workers |
| Use tools declared in Toolbox | Create tasks for other roles (TaskCreate) |
| Delegate to commands/ files | Modify resources outside own responsibility |
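The output-tagging rule can be enforced with a small idempotent helper (a sketch; the helper name is illustrative, not part of the tool API):

```javascript
// Sketch: ensure a message summary carries the [role] prefix exactly once.
function tagOutput(role, text) {
  const tag = `[${role}]`;
  return text.startsWith(tag) ? text : `${tag} ${text}`;
}

console.log(tagOutput('qa', 'audit complete'));      // → [qa] audit complete
console.log(tagOutput('qa', '[qa] audit complete')); // → [qa] audit complete
```

Making the helper idempotent means a role can apply it unconditionally in Phase 5 without double-tagging.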
### Message Bus (All Roles)

Before every SendMessage, the role must call `mcp__ccw-tools__team_msg` to log the message:

```javascript
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: role,
  to: "coordinator",
  type: "<type>",
  summary: `[${role}] <summary>`,
  ref: "<file_path>"
})
```

**Message types by role**:
| Role | Message Types |
|------|---------------|
| developer | `dev_complete`, `dev_progress`, `error` |
| qa | `qa_passed`, `qa_result`, `fix_required`, `error` |
### CLI Fallback

When the `mcp__ccw-tools__team_msg` MCP tool is unavailable:

```javascript
Bash(`ccw team log --team "${teamName}" --from "${role}" --to "coordinator" --type "<type>" --summary "<summary>" --json`)
```

### Shared Memory

Cross-role accumulated knowledge is stored in `shared-memory.json`:

| Field | Owner | Content |
|-------|-------|---------|
| `design_intelligence` | analyst | ui-ux-pro-max output |
| `design_token_registry` | architect | colors, typography, spacing, shadows |
| `component_inventory` | architect | Component specs |
| `style_decisions` | architect | Design system decisions |
| `qa_history` | qa | QA audit results |
| `industry_context` | analyst | Industry-specific rules |

Each role reads shared memory in Phase 2 and writes its own fields in Phase 5.

### Task Lifecycle (All Worker Roles)

```javascript
// Standard task lifecycle every worker role follows
// Phase 1: Discovery
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith(`${VALID_ROLES[role].prefix}-`) &&
  t.owner === role &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })

// Phase 2-4: Role-specific (see roles/{role}/role.md)

// Phase 5: Report + Loop — all output must carry the [role] tag
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: role, to: "coordinator", type: "...", summary: `[${role}] ...` })
SendMessage({ type: "message", recipient: "coordinator", content: `## [${role}] ...`, summary: `[${role}] ...` })
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task → back to Phase 1
```

---
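The field-ownership rule for shared memory can be sketched as a guarded merge: a role's Phase 5 write only touches the fields it owns. The `OWNERSHIP` map and helper below are illustrative assumptions, not part of the tool API.

```javascript
// Sketch: a role may only update the shared-memory fields it owns.
const OWNERSHIP = {
  analyst: ['design_intelligence', 'industry_context'],
  qa: ['qa_history']
};

function writeSharedMemory(memory, role, updates) {
  const allowed = OWNERSHIP[role] || [];
  const next = { ...memory };
  for (const [key, value] of Object.entries(updates)) {
    if (allowed.includes(key)) next[key] = value; // non-owned fields are dropped
  }
  return next;
}

const mem = { qa_history: [], design_intelligence: {} };
const out = writeSharedMemory(mem, 'qa', {
  qa_history: [{ score: 8 }],
  design_intelligence: { stolen: true } // ignored: owned by analyst
});
console.log(out.qa_history.length, Object.keys(out.design_intelligence).length); // → 1 0
```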
## Pipeline Architecture

### Three Pipeline Modes

```
page (single page - linear):
  ANALYZE-001 -> ARCH-001 -> DEV-001 -> QA-001

feature (multi-component feature - with architecture review):
  ANALYZE-001 -> ARCH-001(tokens+structure) -> QA-001(architecture-review)
    -> DEV-001(components) -> QA-002(code-review)

system (full frontend system - dual-track parallel):
  ANALYZE-001 -> ARCH-001(tokens) -> QA-001(token-review)
    -> [ARCH-002(components) || DEV-001(tokens)] (parallel, blockedBy QA-001)
    -> QA-002(component-review) -> DEV-002(components) -> QA-003(final)
```

### Generator-Critic Loop (developer <-> qa)

Developer and qa iterate to ensure code quality and design compliance:

```
┌───────────┐     DEV artifact     ┌──────────┐
│ developer │ ───────────────────> │    qa    │
│(Generator)│                      │ (Critic) │
│           │ <─────────────────── │          │
└───────────┘     QA feedback      └──────────┘
              (max 2 rounds)

Convergence: qa.score >= 8 && qa.critical_count === 0
```
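The convergence condition and round cap can be sketched as a stop predicate the coordinator evaluates after each QA pass (helper names are illustrative):

```javascript
// Sketch: Generator-Critic stopping rule — converged, or round budget spent.
function gcConverged(qa) {
  return qa.score >= 8 && qa.critical_count === 0;
}

function gcShouldStop(round, qa, maxRounds = 2) {
  return gcConverged(qa) || round >= maxRounds;
}

console.log(gcShouldStop(1, { score: 6, critical_count: 1 })); // → false (iterate again)
console.log(gcShouldStop(1, { score: 9, critical_count: 0 })); // → true  (converged)
console.log(gcShouldStop(2, { score: 6, critical_count: 1 })); // → true  (budget spent)
```

When the loop stops without converging, the coordinator reports the latest QA state rather than iterating further.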
### Consulting Pattern (developer -> analyst)

Developer can request a design decision consultation via the coordinator:

```
developer -> coordinator: "Need design decision consultation"
coordinator -> analyst: Create ANALYZE-consult task
analyst -> coordinator: Design recommendation
coordinator -> developer: Forward recommendation
```
### Cadence Control

**Beat model**: Event-driven; each beat = coordinator wake -> process -> spawn -> STOP.

```
Beat Cycle (single beat)
═══════════════════════════════════════════════════════════
Event               Coordinator               Workers
───────────────────────────────────────────────────────────
callback/resume ──> ┌─ handleCallback ──┐
                    │  mark completed   │
                    │  check pipeline   │
                    ├─ handleSpawnNext ─┤
                    │  find ready tasks │
                    │  spawn workers ───┼──> [Worker A] Phase 1-5
                    │  (parallel OK) ───┼──> [Worker B] Phase 1-5
                    └─ STOP (idle) ─────┘          │
                                                   │
callback <─────────────────────────────────────────┘
(next beat)         SendMessage + TaskUpdate(completed)
═══════════════════════════════════════════════════════════
```

**Pipeline beat view**:
```
Page mode (4 beats, strictly serial)
──────────────────────────────────────────────────────────
Beat      1         2        3        4
          │         │        │        │
       ANALYZE -> ARCH  ->  DEV  ->  QA
          ▲                           ▲
       pipeline                   pipeline
        start                       done

A=ANALYZE  ARCH=architect  D=DEV  Q=QA

Feature mode (5 beats, with architecture review gate)
──────────────────────────────────────────────────────────
Beat      1         2        3        4        5
          │         │        │        │        │
       ANALYZE -> ARCH  ->  QA-1 ->  DEV  ->  QA-2
                             ▲                 ▲
                        arch review       code review

System mode (7 beats, dual-track parallel)
──────────────────────────────────────────────────────────
Beat      1        2       3        4         5       6        7
          │        │       │   ┌────┴────┐    │       │        │
       ANALYZE -> ARCH-1 -> QA-1 -> ARCH-2 || DEV-1 -> QA-2 -> DEV-2 -> QA-3
                                    ▲                                   ▲
                             parallel window                       final check
```
**Checkpoints**:

| Trigger | Location | Behavior |
|---------|----------|----------|
| Architecture review gate | QA-001 (arch review) complete | Pause if critical issues; wait for architect revision |
| GC loop limit | developer <-> qa max 2 rounds | Exceeded rounds -> stop iteration, report current state |
| Pipeline stall | No ready + no running | Check for missing tasks, report to user |

**Stall Detection** (coordinator `handleCheck` executes):

| Check | Condition | Resolution |
|-------|-----------|------------|
| Worker no response | in_progress task, no callback | Report waiting task list, suggest user `resume` |
| Pipeline deadlock | no ready + no running + has pending | Check blockedBy dependency chain, report blocking point |
| GC loop exceeded | DEV/QA iteration > max_rounds | Terminate loop, output latest QA report |
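The deadlock condition from the stall-detection table can be sketched as a pure check over the task list (the task shape and function name are illustrative assumptions):

```javascript
// Sketch: pipeline deadlock = nothing ready, nothing running, but work still pending.
function detectStall(tasks) {
  const ready = tasks.filter(t => t.status === 'pending' && t.blockedBy.length === 0);
  const running = tasks.filter(t => t.status === 'in_progress');
  const pending = tasks.filter(t => t.status === 'pending');
  if (ready.length === 0 && running.length === 0 && pending.length > 0) return 'deadlock';
  return null;
}

console.log(detectStall([{ status: 'pending', blockedBy: ['QA-001'] }])); // → deadlock
console.log(detectStall([{ status: 'in_progress', blockedBy: [] }]));     // → null
```

On `deadlock`, the coordinator walks the `blockedBy` chain to find the blocking point and reports it instead of spawning.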
### Task Metadata Registry

| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|--------------|-------------|
| ANALYZE-001 | analyst | analysis | (none) | Requirement analysis + design intelligence via ui-ux-pro-max |
| ARCH-001 | architect | design | ANALYZE-001 | Design token system + component architecture |
| ARCH-002 | architect | design | QA-001 (system mode) | Component specs refinement |
| DEV-001 | developer | impl | ARCH-001 or QA-001 | Frontend component/page implementation |
| DEV-002 | developer | impl | QA-002 (system mode) | Component implementation from refined specs |
| QA-001 | qa | review | ARCH-001 or DEV-001 | Architecture review or code review |
| QA-002 | qa | review | DEV-001 | Code review (feature/system mode) |
| QA-003 | qa | review | DEV-002 (system mode) | Final quality check |

---
## Coordinator Spawn Template

When the coordinator spawns workers, use background mode (Spawn-and-Stop):

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Directive
All your work must be executed through Skill to load the role definition:
Skill(skill="team-frontend", args="--role=<role>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks; do not execute other roles' work
- Prefix all output with the [<role>] identifier
- Only communicate with coordinator
- Do not use TaskCreate for other roles
- Call mcp__ccw-tools__team_msg before every SendMessage

## Workflow
1. Call Skill -> load role definition and execution logic
2. Follow the role.md 5-Phase flow
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
})
```
## ui-ux-pro-max Integration

### Design Intelligence Engine

The analyst role invokes ui-ux-pro-max via Skill to obtain industry design intelligence:

| Action | Invocation |
|--------|------------|
| Full design system recommendation | `Skill(skill="ui-ux-pro-max", args="<industry> <keywords> --design-system")` |
| Domain search (UX, typography, color) | `Skill(skill="ui-ux-pro-max", args="<query> --domain <domain>")` |
| Tech stack guidance | `Skill(skill="ui-ux-pro-max", args="<query> --stack <stack>")` |
| Persist design system (cross-session) | `Skill(skill="ui-ux-pro-max", args="<query> --design-system --persist -p <projectName>")` |

**Supported Domains**: product, style, typography, color, landing, chart, ux, web

**Supported Stacks**: html-tailwind, react, nextjs, vue, svelte, shadcn, swiftui, react-native, flutter

**Fallback**: If the ui-ux-pro-max skill is not installed, degrade to the LLM's general design knowledge. Suggest installation: `/plugin install ui-ux-pro-max@ui-ux-pro-max-skill`
## Session Directory

```
.workflow/.team/FE-<slug>-<YYYY-MM-DD>/
├── team-session.json         # Session state
├── shared-memory.json        # Cross-role accumulated knowledge
├── wisdom/                   # Cross-task knowledge
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── analysis/                 # Analyst output
│   ├── design-intelligence.json
│   └── requirements.md
├── architecture/             # Architect output
│   ├── design-tokens.json
│   ├── component-specs/
│   │   └── <component-name>.md
│   └── project-structure.md
├── qa/                       # QA output
│   └── audit-<NNN>.md
└── build/                    # Developer output
    ├── token-files/
    └── component-files/
```
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode -> auto route to coordinator |
| Role file not found | Error with expected path (roles/<name>/role.md) |
| QA score < 6 over 2 GC rounds | Coordinator reports to user |
| Dual-track sync failure | Fall back to single-track sequential execution |
| ui-ux-pro-max skill not installed | Degrade to LLM general design knowledge, show install command |
| DEV cannot find design files | Wait for sync point or escalate to coordinator |
# Analyst Role

Requirements analyst. Invokes the ui-ux-pro-max search engine to retrieve industry design intelligence, analyzes requirements, matches industry inference rules, and generates design-intelligence.json for downstream consumption.

## Identity

- **Name**: `analyst` | **Tag**: `[analyst]`
- **Task Prefix**: `ANALYZE-*`
- **Responsibility**: Read-only analysis + design intelligence retrieval

## Boundaries

### MUST

- Only process `ANALYZE-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[analyst]` identifier
- Only communicate with the coordinator via SendMessage
- Work strictly within the requirement analysis and design intelligence scope

### MUST NOT

- Execute work outside this role's responsibility scope (architecture, implementation, QA)
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify source code files
- Omit the `[analyst]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `design-intelligence` | [commands/design-intelligence.md](commands/design-intelligence.md) | Phase 3 | ui-ux-pro-max integration for design system retrieval |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | builtin | Phase 2 | Load session files, shared memory |
| `Glob` | builtin | Phase 2 | Detect existing token files, CSS files |
| `Grep` | builtin | Phase 2 | Search codebase patterns |
| `Bash` | builtin | Phase 3 | Call ui-ux-pro-max search.py |
| `WebSearch` | builtin | Phase 3 | Competitive reference, design trends |
| `Task(cli-explore-agent)` | subagent | Phase 3 | Deep codebase exploration |
| `Skill(ui-ux-pro-max)` | skill | Phase 3 | Design intelligence retrieval |

---
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `analyze_ready` | analyst → coordinator | Analysis complete | Design intelligence ready for downstream consumption |
| `analyze_progress` | analyst → coordinator | Partial progress | Analysis progress update |
| `error` | analyst → coordinator | Analysis failure | Analysis failed or tool unavailable |
## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: "<session-id>",  // MUST be the session ID (e.g., FES-xxx-date), NOT the team name. Extract from the Session: field.
  from: "analyst",
  to: "coordinator",
  type: "<message-type>",
  summary: "[analyst] ANALYZE complete: <task-subject>",
  ref: "<artifact-path>"
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from analyst --to coordinator --type <message-type> --summary \"[analyst] ...\" --ref <artifact-path> --json")
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `ANALYZE-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Extract from task description `Session: <path>` | Yes |
| Industry context | Extract from task description `Industry: <type>` | Yes |
| Shared memory | `<session-folder>/shared-memory.json` | No |
| Session info | `<session-folder>/team-session.json` | No |
| Existing tokens | Glob `**/*token*.*` | No |
| Existing CSS | Glob `**/*.css` | No |
| Package.json | For tech stack detection | No |

**Loading Steps**:

1. Extract the session folder from the task description
2. Extract the industry context from the task description
3. Load shared memory and session info
4. Detect any existing design system in the project
5. Detect the tech stack from package.json
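Steps 1-2 above both reduce to one regex over the task description. A minimal sketch (the helper name is illustrative; the `Session:`/`Industry:` field format is as specified above):

```javascript
// Sketch: pull a labeled field (e.g. "Session: <path>") out of a task description.
function extractField(description, label) {
  const m = description.match(new RegExp(`${label}:\\s*([^\\n]+)`));
  return m ? m[1].trim() : null;
}

const desc = 'Analyze landing page\nSession: .workflow/.team/FES-shop-2026-02-01\nIndustry: SaaS';
console.log(extractField(desc, 'Session'));  // → .workflow/.team/FES-shop-2026-02-01
console.log(extractField(desc, 'Industry')); // → SaaS
```

Returning `null` for a missing field lets the caller fall back to defaults for the optional inputs while treating `Session`/`Industry` as hard errors.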
**Tech Stack Detection**:

| Detection | Stack |
|-----------|-------|
| `next` in dependencies | nextjs |
| `react` in dependencies | react |
| `vue` in dependencies | vue |
| `svelte` in dependencies | svelte |
| `@shadcn/ui` in dependencies | shadcn |
| No package.json | html-tailwind |
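The detection table can be sketched as an ordered rule chain over package.json dependencies (a sketch; the shadcn check takes precedence since shadcn projects also depend on react):

```javascript
// Sketch: map package.json contents to a ui-ux-pro-max stack name.
function detectStack(pkg) {
  if (!pkg) return 'html-tailwind'; // no package.json
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  if (deps['@shadcn/ui'] || deps['shadcn-ui']) return 'shadcn';
  if (deps['next']) return 'nextjs';
  if (deps['react']) return 'react';
  if (deps['vue']) return 'vue';
  if (deps['svelte']) return 'svelte';
  return 'html-tailwind';
}

console.log(detectStack(null)); // → html-tailwind
console.log(detectStack({ dependencies: { next: '14.0.0', react: '18.0.0' } })); // → nextjs
```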
### Phase 3: Core Analysis - Design Intelligence Retrieval
|
|
||||||
|
|
||||||
Key integration point with ui-ux-pro-max. Retrieve design intelligence via Skill.
|
|
||||||
|
|
||||||
**Execution Strategy**:
|
|
||||||
|
|
||||||
| Condition | Strategy |
|
|
||||||
|-----------|----------|
|
|
||||||
| ui-ux-pro-max skill available | Full design system retrieval via Skill |
|
|
||||||
| ui-ux-pro-max not installed | Fallback to LLM general knowledge |
|
|
||||||
|
|
||||||
**Step 1: Invoke ui-ux-pro-max via Skill**
|
|
||||||
|
|
||||||
Delegate to `commands/design-intelligence.md` for detailed execution.
|
|
||||||
|
|
||||||
**Skill Invocations**:
|
|
||||||
|
|
||||||
| Action | Invocation |
|
|
||||||
|--------|------------|
|
|
||||||
| Full design system | `Skill(skill="ui-ux-pro-max", args="<industry> <keywords> --design-system")` |
|
|
||||||
| UX guidelines | `Skill(skill="ui-ux-pro-max", args="accessibility animation responsive --domain ux")` |
|
|
||||||
| Tech stack guide | `Skill(skill="ui-ux-pro-max", args="<keywords> --stack <detected-stack>")` |
|
|
||||||
|
|
||||||
**Step 2: Fallback - LLM General Knowledge**
|
|
||||||
|
|
||||||
If ui-ux-pro-max skill not available (not installed or execution failed):
|
|
||||||
- Generate design recommendations from LLM general knowledge
|
|
||||||
- Quality is lower than data-driven recommendations from ui-ux-pro-max
|
|
||||||
- Suggest installation: `/plugin install ui-ux-pro-max@ui-ux-pro-max-skill`
|
|
||||||
|
|
||||||
**Step 3: Analyze Existing Codebase**
|
|
||||||
|
|
||||||
If existing token files or CSS files found:
|
|
||||||
|
|
||||||
|
// Detect tech stack
|
||||||
|
const packageJsonExists = Glob({ pattern: 'package.json' })
|
||||||
|
let detectedStack = 'html-tailwind'
|
||||||
|
if (packageJsonExists.length > 0) {
|
||||||
|
try {
|
||||||
|
const pkg = JSON.parse(Read('package.json'))
|
||||||
|
const deps = { ...pkg.dependencies, ...pkg.devDependencies }
|
||||||
|
if (deps['next']) detectedStack = 'nextjs'
|
||||||
|
else if (deps['react']) detectedStack = 'react'
|
||||||
|
else if (deps['vue']) detectedStack = 'vue'
|
||||||
|
else if (deps['svelte']) detectedStack = 'svelte'
|
||||||
|
if (deps['@shadcn/ui'] || deps['shadcn-ui']) detectedStack = 'shadcn'
|
||||||
|
} catch {}
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### Phase 3: Core Analysis - Design Intelligence Retrieval

This is the key integration point with ui-ux-pro-max: design intelligence is retrieved via Skill invocation. Delegate to `commands/design-intelligence.md` for the detailed execution strategy.

**Execution Strategy**:

| Condition | Strategy |
|-----------|----------|
| ui-ux-pro-max skill available | Full design system retrieval via Skill |
| ui-ux-pro-max not installed | Fallback to LLM general knowledge |

**Skill Invocations**:

| Action | Invocation |
|--------|------------|
| Full design system | `Skill(skill="ui-ux-pro-max", args="<industry> <keywords> --design-system")` |
| UX guidelines | `Skill(skill="ui-ux-pro-max", args="accessibility animation responsive --domain ux")` |
| Tech stack guide | `Skill(skill="ui-ux-pro-max", args="<keywords> --stack <detected-stack>")` |

#### Step 1: Invoke ui-ux-pro-max via Skill

```javascript
const taskDesc = task.description.replace(/Session:.*\n?/g, '').replace(/Industry:.*\n?/g, '').trim()
const keywords = taskDesc.split(/\s+/).slice(0, 5).join(' ')

// Retrieve full design intelligence by invoking the ui-ux-pro-max skill via a subagent.
// ui-ux-pro-max internally runs search.py --design-system.
Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "Retrieve design intelligence via ui-ux-pro-max skill",
  prompt: `Invoke the ui-ux-pro-max skill to obtain design system recommendations.

## Requirements
- Product type / industry: ${industry}
- Keywords: ${keywords}
- Tech stack: ${detectedStack}

## Execution Steps

### 1. Generate design system (required)
Skill(skill="ui-ux-pro-max", args="${industry} ${keywords} --design-system")

### 2. Supplement UX guidelines
Skill(skill="ui-ux-pro-max", args="accessibility animation responsive --domain ux")

### 3. Retrieve tech stack guide
Skill(skill="ui-ux-pro-max", args="${keywords} --stack ${detectedStack}")

## Output
Consolidate all results into: ${sessionFolder}/analysis/design-intelligence-raw.md

Include:
- Design system recommendations (pattern, style, colors, typography, effects, anti-patterns)
- UX best practices
- Tech stack guidelines
- Industry anti-pattern list
`
})

// Read the skill output
let designSystemRaw = ''
try {
  designSystemRaw = Read(`${sessionFolder}/analysis/design-intelligence-raw.md`)
} catch {
  // Skill output unavailable - the Step 2 fallback will be used
}

const uiproAvailable = designSystemRaw.length > 0
```

#### Step 2: Fallback - LLM General Knowledge

If the ui-ux-pro-max skill is unavailable (not installed, or execution failed), degrade to LLM general knowledge. Quality is lower than ui-ux-pro-max's data-driven recommendations, so suggest installation: `/plugin install ui-ux-pro-max@ui-ux-pro-max-skill`.

```javascript
if (!uiproAvailable) {
  // analyst generates design recommendations directly from LLM knowledge.
  // No external tools are needed, but quality is below ui-ux-pro-max's data-driven output.
  designSystemRaw = null
}
```

#### Step 3: Analyze Existing Codebase

If existing token files or CSS files are found:

```javascript
// Explore existing design patterns in the project
let existingPatterns = {}

if (existingTokenFiles.length > 0 || existingCssVars.length > 0) {
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: "Explore existing design system",
    prompt: `Analyze the existing design system in this project:
- Token files: ${existingTokenFiles.slice(0, 5).join(', ')}
- CSS files: ${existingCssVars.slice(0, 5).join(', ')}

Find: color palette, typography scale, spacing system, component patterns.
Output as JSON: { colors, typography, spacing, components, patterns }`
  })
}
```
#### Step 4: Competitive Reference (optional)

If industry is not "Other", run a quick web search for design inspiration:

```javascript
if (industry !== 'Other') {
  try {
    const webResults = WebSearch({ query: `${industry} web design trends 2025 best practices` })
    // Extract relevant insights
  } catch {}
}
```
### Phase 4: Synthesis and Output

**Compile Design Intelligence**: generate `design-intelligence.json` with:

| Field | Source | Description |
|-------|--------|-------------|
| `_source` | Execution | "ui-ux-pro-max-skill" or "llm-general-knowledge" |
| `industry` | Task | Industry context |
| `detected_stack` | Phase 2 | Tech stack detection result |
| `design_system` | Skill/fallback | Colors, typography, style |
| `ux_guidelines` | Skill | UX best practices |
| `stack_guidelines` | Skill | Tech-specific guidance |
| `existing_patterns` | Phase 3 | Codebase analysis results |
| `recommendations` | Synthesis | Style, colors, anti-patterns, must-have |

**Output Files**:

1. **design-intelligence.json**: Structured data for downstream consumption
2. **requirements.md**: Human-readable requirements summary

```javascript
// Compile design intelligence.
// If the Skill invocation succeeded, parse its raw output; otherwise use the LLM fallback.
const designIntelligence = {
  _source: uiproAvailable ? "ui-ux-pro-max-skill" : "llm-general-knowledge",
  _generated_at: new Date().toISOString(),
  industry: industry,
  detected_stack: detectedStack,

  // From ui-ux-pro-max skill (or LLM fallback)
  design_system: uiproAvailable ? parseDesignSystem(designSystemRaw) : generateFallbackDesignSystem(industry, taskDesc),
  ux_guidelines: uiproAvailable ? parseUxGuidelines(designSystemRaw) : [],
  stack_guidelines: uiproAvailable ? parseStackGuidelines(designSystemRaw) : {},

  // From codebase analysis
  existing_patterns: existingPatterns,
  existing_tokens: existingTokenFiles,

  // Synthesized recommendations
  recommendations: {
    style: null,          // Recommended UI style
    color_palette: null,  // Recommended colors
    typography: null,     // Recommended font pairing
    anti_patterns: uiproAvailable ? parseAntiPatterns(designSystemRaw) : [],
    must_have: session.industry_config?.mustHave || []
  }
}

// Write design intelligence for downstream consumption
Write(`${sessionFolder}/analysis/design-intelligence.json`, JSON.stringify(designIntelligence, null, 2))

// Write human-readable requirements summary
Write(`${sessionFolder}/analysis/requirements.md`, `# Requirements Analysis

## Task
${taskDesc}

## Industry Context
- **Industry**: ${industry}
- **Detected Stack**: ${detectedStack}
- **Design Intelligence Source**: ${designIntelligence._source}

## Design System Recommendations
${designSystemRaw || '(Using LLM general knowledge — install ui-ux-pro-max for data-driven recommendations)'}

## Existing Patterns Found
${JSON.stringify(existingPatterns, null, 2)}

## Anti-Patterns to Avoid
${designIntelligence.recommendations.anti_patterns.map(p => \`- ❌ \${p}\`).join('\\n') || 'None specified'}

## Must-Have Requirements
${designIntelligence.recommendations.must_have.map(m => \`- ✅ \${m}\`).join('\\n') || 'Standard requirements'}
`)

// Update shared memory
sharedMemory.design_intelligence = designIntelligence
sharedMemory.industry_context = { industry, config: session.industry_config }
Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))

const resultStatus = 'complete'
const resultSummary = `Design intelligence generated (source: ${designIntelligence._source}), stack: ${detectedStack}, industry: ${industry}`
const resultDetails = `Files:\n- ${sessionFolder}/analysis/design-intelligence.json\n- ${sessionFolder}/analysis/requirements.md`
```
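The `parseDesignSystem` / `parseUxGuidelines` / `parseStackGuidelines` / `parseAntiPatterns` helpers used above are not defined in this file, and the exact markdown emitted by ui-ux-pro-max is not specified here. A minimal sketch of one of them, assuming anti-patterns arrive as bullet items under an `## Anti-Patterns` heading (both the heading name and the bullet format are assumptions, not the skill's documented output):

```javascript
// Hypothetical sketch: extract anti-pattern bullets from the raw skill output.
// Assumes a markdown section like "## Anti-Patterns" followed by "- item" lines.
function parseAntiPatterns(raw) {
  const lines = raw.split('\n')
  const start = lines.findIndex(l => /^#{2,3}\s+Anti-Patterns/i.test(l))
  if (start === -1) return []
  const items = []
  for (const line of lines.slice(start + 1)) {
    if (/^#{1,6}\s/.test(line)) break // a new heading ends the section
    const m = line.match(/^[-*]\s+(?:❌\s*)?(.+)$/)
    if (m) items.push(m[1].trim())
  }
  return items
}
```

The other parsers would follow the same shape, keyed to their own section headings.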
#### Fallback: LLM General Knowledge

```javascript
function generateFallbackDesignSystem(industry, taskDesc) {
  // When the ui-ux-pro-max skill is not installed, use LLM general knowledge.
  // Install: /plugin install ui-ux-pro-max@ui-ux-pro-max-skill
  return {
    _fallback: true,
    note: "Generated from LLM general knowledge. Install ui-ux-pro-max skill for data-driven recommendations.",
    colors: { primary: "#1976d2", secondary: "#dc004e", background: "#ffffff" },
    typography: { heading: ["Inter", "system-ui"], body: ["Inter", "system-ui"] },
    style: "modern-minimal"
  }
}
```
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[analyst]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report Content**:

- Task subject and status
- Design intelligence source (ui-ux-pro-max or LLM fallback)
- Industry and detected stack
- Anti-patterns count
- Output file paths

```javascript
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "analyst",
  to: "coordinator",
  type: "analyze_ready",
  summary: `[analyst] ANALYZE complete: ${task.subject}`,
  ref: `${sessionFolder}/analysis/design-intelligence.json`
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `## [analyst] Analysis Results

**Task**: ${task.subject}
**Status**: ${resultStatus}

### Summary
${resultSummary}

### Design Intelligence
- **Source**: ${designIntelligence._source}
- **Industry**: ${industry}
- **Stack**: ${detectedStack}
- **Anti-patterns**: ${designIntelligence.recommendations.anti_patterns.length} identified

### Output Files
${resultDetails}`,
  summary: `[analyst] ANALYZE complete`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next task (e.g., ANALYZE-consult from CP-8)
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('ANALYZE-') &&
  t.owner === 'analyst' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (nextTasks.length > 0) {
  // Continue with next task -> back to Phase 1
}
```

---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No ANALYZE-* tasks available | Idle, wait for coordinator assignment |
| ui-ux-pro-max not found | Fallback to LLM general knowledge, log warning |
| search.py execution error | Retry once, then fallback |
| Python not available | Fallback to LLM general knowledge |
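The "retry once, then fallback" behavior in the table can be sketched as a small wrapper; `withRetryThenFallback` and its arguments are illustrative names, not an API from this repo:

```javascript
// Sketch: run a primary retrieval, retry once on failure, then fall back.
function withRetryThenFallback(primary, fallback) {
  for (let attempt = 0; attempt < 2; attempt++) {
    try {
      return primary()
    } catch (e) {
      // first failure retries; second failure falls through to the fallback
    }
  }
  return fallback()
}
```

Here `primary` would wrap the search.py / Skill invocation and `fallback` the LLM-general-knowledge path.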
---

# Architect Role

Frontend architect. Consumes design-intelligence.json; defines the design token system, component architecture, project structure, and technology selection. Design token values should prioritize ui-ux-pro-max recommendations.

## Identity

- **Name**: `architect` | **Tag**: `[architect]`
- **Task Prefix**: `ARCH-*`
- **Responsibility**: Code generation (architecture artifacts)
- **Communication**: SendMessage to coordinator only

## Boundaries

### MUST

- Only process `ARCH-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[architect]` identifier
- Only communicate with the coordinator, via SendMessage
- Work strictly within architecture design and token definition scope

### MUST NOT

- Execute work outside this role's responsibility scope (requirements analysis, code implementation, QA review)
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Implement concrete component code (only define specifications)
- Omit the `[architect]` identifier in any output
---

## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | builtin | Phase 2-3 | Load design intelligence, shared memory |
| `Write` | builtin | Phase 3-4 | Write architecture artifacts |
| `Edit` | builtin | Phase 3-4 | Modify architecture files |
| `Glob` | builtin | Phase 2 | Detect project structure |
| `Grep` | builtin | Phase 2 | Search patterns |
| `Task(code-developer)` | subagent | Phase 3 | Complex architecture file generation |

---
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `arch_ready` | architect -> coordinator | Architecture complete | Architecture artifacts ready for downstream |
| `arch_revision` | architect -> coordinator | Revision after QA feedback | Architecture revision complete |
| `arch_progress` | architect -> coordinator | Partial progress | Architecture progress update |
| `error` | architect -> coordinator | Architecture failure | Architecture design failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: "<session-id>", // MUST be the session ID (e.g., FES-xxx-date), NOT the team name. Extract from the Session: field.
  from: "architect",
  to: "coordinator",
  type: "<message-type>",
  summary: "[architect] ARCH complete: <task-subject>",
  ref: "<artifact-path>"
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from architect --to coordinator --type <message-type> --summary \"[architect] ...\" --ref <artifact-path> --json")
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `ARCH-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('ARCH-') &&
  t.owner === 'architect' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Extract from task description `Session: <path>` | Yes |
| Scope | Extract from task description `Scope: <tokens\|components\|full>` | No (default: full) |
| Design intelligence | `<session-folder>/analysis/design-intelligence.json` | No |
| Shared memory | `<session-folder>/shared-memory.json` | No |
| Project files | Glob `src/**/*` | No |

**Fail-safe**: If design-intelligence.json is not found -> SendMessage to coordinator requesting its location.

```javascript
// Extract session folder
const sessionMatch = task.description.match(/Session:\s*([^\n]+)/)
const sessionFolder = sessionMatch ? sessionMatch[1].trim() : null

// Extract scope (tokens / components / full)
const scopeMatch = task.description.match(/Scope:\s*([^\n]+)/)
const scope = scopeMatch ? scopeMatch[1].trim() : 'full'

// Load design intelligence from analyst
let designIntel = {}
try {
  designIntel = JSON.parse(Read(`${sessionFolder}/analysis/design-intelligence.json`))
} catch {
  // No design intelligence available - use defaults
}

// Load shared memory
let sharedMemory = {}
try {
  sharedMemory = JSON.parse(Read(`${sessionFolder}/shared-memory.json`))
} catch {}

// Load existing project structure
const projectFiles = Glob({ pattern: 'src/**/*' })
const hasExistingProject = projectFiles.length > 0
```
### Phase 3: Architecture Design

**Scope Selection**:

| Scope | Output |
|-------|--------|
| `tokens` | Design token system only |
| `components` | Component architecture only |
| `full` | Both tokens and components |

#### Step 1: Design Token System (scope: tokens or full)
Generate `design-tokens.json` with categories:

| Category | Content | Source |
|----------|---------|--------|
| `color` | Primary, secondary, background, surface, text, CTA | ui-ux-pro-max recommendations |
| `typography` | Font families, font sizes | ui-ux-pro-max recommendations |
| `spacing` | Scale from xs to 2xl | Standard scale |
| `border-radius` | sm, md, lg, full | Standard scale |
| `shadow` | sm, md, lg | Standard elevation |
| `transition` | fast, normal, slow | Standard durations |

**Token Structure**:

- Use `$type` and `$value` format (Design Tokens Community Group)
- Support light/dark mode via nested values
- Fall back to defaults if design intelligence is unavailable

```javascript
if (scope === 'tokens' || scope === 'full') {
  const recommended = designIntel.design_system || {}

  const designTokens = {
    "$schema": "https://design-tokens.github.io/community-group/format/",
    "color": {
      "primary": {
        "$type": "color",
        "$value": {
          "light": recommended.colors?.primary || "#1976d2",
          "dark": recommended.colors?.primary_dark || "#90caf9"
        }
      },
      "secondary": {
        "$type": "color",
        "$value": {
          "light": recommended.colors?.secondary || "#dc004e",
          "dark": recommended.colors?.secondary_dark || "#f48fb1"
        }
      },
      "background": {
        "$type": "color",
        "$value": { "light": recommended.colors?.background || "#ffffff", "dark": "#121212" }
      },
      "surface": {
        "$type": "color",
        "$value": { "light": "#f5f5f5", "dark": "#1e1e1e" }
      },
      "text": {
        "$type": "color",
        "$value": { "light": "#1a1a1a", "dark": "#e0e0e0" }
      },
      "cta": {
        "$type": "color",
        "$value": recommended.colors?.cta || "#F97316"
      }
    },
    "typography": {
      "font-family": {
        "heading": {
          "$type": "fontFamily",
          "$value": recommended.typography?.heading || ["Inter", "system-ui", "sans-serif"]
        },
        "body": {
          "$type": "fontFamily",
          "$value": recommended.typography?.body || ["Inter", "system-ui", "sans-serif"]
        },
        "mono": {
          "$type": "fontFamily",
          "$value": ["JetBrains Mono", "Fira Code", "monospace"]
        }
      },
      "font-size": {
        "xs": { "$type": "dimension", "$value": "0.75rem" },
        "sm": { "$type": "dimension", "$value": "0.875rem" },
        "base": { "$type": "dimension", "$value": "1rem" },
        "lg": { "$type": "dimension", "$value": "1.125rem" },
        "xl": { "$type": "dimension", "$value": "1.25rem" },
        "2xl": { "$type": "dimension", "$value": "1.5rem" },
        "3xl": { "$type": "dimension", "$value": "2rem" }
      }
    },
    "spacing": {
      "xs": { "$type": "dimension", "$value": "0.25rem" },
      "sm": { "$type": "dimension", "$value": "0.5rem" },
      "md": { "$type": "dimension", "$value": "1rem" },
      "lg": { "$type": "dimension", "$value": "1.5rem" },
      "xl": { "$type": "dimension", "$value": "2rem" },
      "2xl": { "$type": "dimension", "$value": "3rem" }
    },
    "border-radius": {
      "sm": { "$type": "dimension", "$value": "0.25rem" },
      "md": { "$type": "dimension", "$value": "0.5rem" },
      "lg": { "$type": "dimension", "$value": "1rem" },
      "full": { "$type": "dimension", "$value": "9999px" }
    },
    "shadow": {
      "sm": { "$type": "shadow", "$value": "0 1px 2px rgba(0,0,0,0.05)" },
      "md": { "$type": "shadow", "$value": "0 4px 6px rgba(0,0,0,0.1)" },
      "lg": { "$type": "shadow", "$value": "0 10px 15px rgba(0,0,0,0.1)" }
    },
    "transition": {
      "fast": { "$type": "duration", "$value": "150ms" },
      "normal": { "$type": "duration", "$value": "200ms" },
      "slow": { "$type": "duration", "$value": "300ms" }
    }
  }

  Write(`${sessionFolder}/architecture/design-tokens.json`, JSON.stringify(designTokens, null, 2))
}
```
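The project layouts in this file reference a `tokens.css` of CSS custom properties derived from `design-tokens.json`. A minimal sketch of that flattening step, assuming the DTCG-style `$value` shapes used above (light/dark objects for colors, plain strings or arrays otherwise); `tokensToCss` is an illustrative helper, not part of this repo:

```javascript
// Sketch: flatten DTCG-style tokens into CSS custom-property declarations.
function tokensToCss(tokens, path = []) {
  const lines = []
  for (const [key, node] of Object.entries(tokens)) {
    if (key.startsWith('$') || node === null || typeof node !== 'object') continue
    if ('$value' in node) {
      const v = node.$value
      // Use the light value when the token defines a light/dark pair
      const resolved = (v && typeof v === 'object' && !Array.isArray(v) && 'light' in v) ? v.light : v
      const value = Array.isArray(resolved) ? resolved.join(', ') : resolved
      lines.push(`--${[...path, key].join('-')}: ${value};`)
    } else {
      lines.push(...tokensToCss(node, [...path, key]))
    }
  }
  return lines
}
```

Wrapping the result in a `:root { ... }` block yields the `tokens.css` file the project structures expect.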
#### Step 2: Component Architecture (scope: components or full)

Generate component specifications in `architecture/component-specs/`. Each spec includes:

1. Design Reference (style, stack)
2. Props table (name, type, default, description)
3. Variants table (name, description)
4. Accessibility requirements (role, keyboard, ARIA, contrast)
5. Implementation hints (CSS keywords)
6. Anti-patterns to avoid (from design intelligence)

**Component List**: Derived from task description analysis.

```javascript
if (scope === 'components' || scope === 'full') {
  const taskDesc = task.description.replace(/Session:.*\n?/g, '').replace(/Scope:.*\n?/g, '').trim()
  const antiPatterns = designIntel.recommendations?.anti_patterns || []
  const styleHints = designIntel.design_system?.css_keywords || ''

  // Analyze requirements and define component specs.
  // Each component spec includes: props, variants, accessibility, implementation hints.
  Bash(`mkdir -p "${sessionFolder}/architecture/component-specs"`)

  // Generate component spec template with design intelligence hints
  const componentSpecTemplate = `# Component: {name}

## Design Reference
- **Style**: ${designIntel.design_system?.style || 'modern-minimal'}
- **Stack**: ${designIntel.detected_stack || 'react'}

## Props
| Prop | Type | Default | Description |
|------|------|---------|-------------|

## Variants
| Variant | Description |
|---------|-------------|

## Accessibility
- Role:
- Keyboard:
- ARIA:
- Contrast: 4.5:1 minimum

## Implementation Hints
${styleHints ? `- CSS Keywords: ${styleHints}` : ''}
${antiPatterns.length > 0 ? `\n## Anti-Patterns to AVOID\n${antiPatterns.map(p => '- ❌ ' + p).join('\n')}` : ''}
`

  // Write component specs based on task requirements
  // (Actual component list derived from task description analysis)
}
```

#### Step 3: Project Structure (scope: full or no existing project)

Generate `project-structure.md`.

**Stack-specific Structure**:

| Stack | Directory Layout |
|-------|-----------------|
| react | src/components/, src/pages/, src/hooks/, src/styles/, src/utils/, src/types/ |
| nextjs | app/(routes)/, app/components/, app/lib/, app/styles/, app/types/ |
| vue | src/components/, src/views/, src/composables/, src/styles/, src/types/ |
| html-tailwind | src/components/, src/pages/, src/styles/, src/assets/ |

**Conventions**:

- Naming: kebab-case for files, PascalCase for components
- Imports: absolute imports via the @/ alias
- Styling: CSS Modules + design tokens (or Tailwind for html-tailwind)
- Testing: co-located test files (*.test.tsx)

```javascript
if (scope === 'full' || !hasExistingProject) {
  const stack = designIntel.detected_stack || 'react'

  const projectStructure = {
    stack: stack,
    structure: getStackStructure(stack),
    conventions: {
      naming: "kebab-case for files, PascalCase for components",
      imports: "absolute imports via @/ alias",
      styling: stack === 'html-tailwind' ? 'Tailwind CSS' : 'CSS Modules + design tokens',
      testing: "co-located test files (*.test.tsx)"
    }
  }

  Write(`${sessionFolder}/architecture/project-structure.md`, `# Project Structure

## Stack: ${stack}

## Directory Layout
\`\`\`
${projectStructure.structure}
\`\`\`

## Conventions
${Object.entries(projectStructure.conventions).map(([k, v]) => `- **${k}**: ${v}`).join('\n')}
`)
}

function getStackStructure(stack) {
  const structures = {
    'react': `src/
├── components/     # Reusable UI components
│   ├── ui/         # Primitive components (Button, Input, etc.)
│   └── layout/     # Layout components (Header, Footer, etc.)
├── pages/          # Page-level components
├── hooks/          # Custom React hooks
├── styles/         # Global styles + design tokens
│   ├── tokens.css  # CSS custom properties from design tokens
│   └── global.css  # Global resets and base styles
├── utils/          # Utility functions
└── types/          # TypeScript type definitions`,
    'nextjs': `app/
├── (routes)/       # Route groups
├── components/     # Shared components
│   ├── ui/         # Primitive components
│   └── layout/     # Layout components
├── lib/            # Utility functions
├── styles/         # Global styles + design tokens
│   ├── tokens.css
│   └── globals.css
└── types/          # TypeScript types`,
    'vue': `src/
├── components/     # Vue components
│   ├── ui/         # Primitive components
│   └── layout/     # Layout components
├── views/          # Page views
├── composables/    # Vue composables
├── styles/         # Global styles + design tokens
└── types/          # TypeScript types`,
    'html-tailwind': `src/
├── components/     # HTML partials
├── pages/          # HTML pages
├── styles/         # Tailwind config + custom CSS
│   └── tailwind.config.js
└── assets/         # Static assets`
  }
  return structures[stack] || structures['react']
}
```
### Phase 4: Self-Validation

**Validation Checks**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| JSON validity | Parse design-tokens.json | No errors |
| Required categories | Check for color, typography, spacing | All present |
| Anti-pattern compliance | Check token values against anti-patterns | No violations |
| File existence | Verify all planned files exist | All files present |

**Validation Result**:

| Status | Condition |
|--------|-----------|
| complete | No issues found |
| complete_with_warnings | Non-critical issues found |

**Update Shared Memory**:

- Write `design_token_registry` field with generated tokens
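The checks above can be sketched in the tool-call style used throughout this skill (a sketch, assuming `sessionFolder`, `sharedMemory`, and the `Read`/`Write` tools are in scope as in the earlier phases):

```javascript
const validationResults = { issues: [] }

// JSON validity: design-tokens.json must parse
let tokens = {}
try {
  tokens = JSON.parse(Read(`${sessionFolder}/architecture/design-tokens.json`))
} catch (e) {
  validationResults.issues.push({ severity: 'critical', message: 'design-tokens.json is invalid JSON' })
}

// Required categories: color, typography, spacing
for (const cat of ['color', 'typography', 'spacing']) {
  if (!tokens[cat]) {
    validationResults.issues.push({ severity: 'high', message: `Missing token category: ${cat}` })
  }
}

// Update shared memory with the generated token registry
sharedMemory.design_token_registry = tokens
Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))

// Status per the Validation Result table
const resultStatus = validationResults.issues.length === 0 ? 'complete' : 'complete_with_warnings'
```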
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[architect]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report Content**:

- Task subject and status
- Scope completed
- Token counts (colors, typography, spacing)
- Design intelligence source
- Output file paths
- Validation warnings (if any)
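The report flow can be sketched with the same assumed tool calls (`team_msg`, `SendMessage`, `TaskUpdate`, `TaskList`) and in-scope values (`teamName`, `task`, `sessionFolder`, `resultStatus`):

```javascript
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "architect",
  to: "coordinator",
  type: "arch_ready",
  summary: `[architect] ARCH complete: ${task.subject}`,
  ref: `${sessionFolder}/architecture/design-tokens.json`
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `## [architect] Architecture Results\n**Task**: ${task.subject}\n**Status**: ${resultStatus}`,
  summary: `[architect] ARCH complete`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Loop: pick up the next unblocked ARCH-* task, if any
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('ARCH-') &&
  t.owner === 'architect' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
// nextTasks.length > 0 -> back to Phase 1
```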
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No ARCH-* tasks available | Idle, wait for coordinator assignment |
| design-intelligence.json not found | Use default token values, log warning |
| Session folder not found | Notify coordinator, request location |
| Token validation fails | Report issues, continue with warnings |
# Coordinator Role

Frontend team coordinator. Orchestrates the pipeline: requirement clarification → industry identification → team creation → task chain → dispatch → monitoring → reporting. Manages Generator-Critic loops between developer and qa, and the consulting pattern between developer and analyst.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Task Prefix**: N/A (coordinator creates tasks, doesn't receive them)
- **Responsibility**: Parse requirements → Create team → Dispatch tasks → Monitor progress → Report results

## Boundaries

### MUST

- All output (SendMessage, team_msg, logs) must carry the `[coordinator]` identifier
- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
- Create the team and spawn worker subagents in the background
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence

### MUST NOT

- Execute frontend development work directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call implementation subagents directly
- Skip dependency validation when creating task chains
- Omit the `[coordinator]` identifier in any output

> **Core principle**: coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.

---
## Entry Router

When the coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |

---
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `error` | coordinator → all | Critical system error | Escalation to user |
| `shutdown` | coordinator → all | Team being dissolved | Clean shutdown signal |
## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,        // MUST be the session ID (e.g., FE-xxx-date), NOT the team name. Extract it from the Session: field.
  from: "coordinator",
  to: <recipient>,
  type: <message-type>,
  summary: "[coordinator] <summary>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from coordinator --to <recipient> --type <message-type> --summary \"[coordinator] ...\" --ref <artifact-path> --json")
```

---

## Execution (5-Phase)
### Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:

1. Scan the session directory for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection
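The detection step can be sketched with the assumed `Glob`/`Read` tools (session files live at `.workflow/.team/FE-*/team-session.json`):

```javascript
// Collect sessions whose status is still "active" or "paused"
const sessionDirs = Glob({ pattern: '.workflow/.team/FE-*/team-session.json' })
const resumable = sessionDirs.map(f => {
  try {
    const session = JSON.parse(Read(f))
    if (session.status === 'active' || session.status === 'paused') return session
  } catch {}
  return null
}).filter(Boolean)

// 0 resumable -> Phase 1; 1 -> resume it; >1 -> AskUserQuestion to pick one
```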
**Session Reconciliation**:

1. Audit TaskList -> get the real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine the remaining pipeline from the reconciled state
5. Rebuild the team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update the session file with the reconciled state
9. Kick the first executable task's worker -> Phase 4

---
### Phase 1: Requirement Clarification

**Objective**: Parse user input and gather execution parameters.

**Workflow**:

1. **Parse arguments** for explicit settings: mode, scope, focus areas
2. **Ask for missing parameters** via AskUserQuestion:

**Scope Selection**:

| Option | Description | Pipeline |
|--------|-------------|----------|
| Single page | Design and implement a standalone page/component | page |
| Multi-component feature | Multiple components + design tokens + interaction logic | feature |
| Full frontend system | Build a complete frontend from scratch (tokens + component library + pages) | system |

**Industry Selection**:

| Option | Description | Strictness |
|--------|-------------|------------|
| SaaS/Tech | SaaS, dev tools, AI products | standard |
| E-commerce/Retail | E-commerce, luxury, marketplace | standard |
| Healthcare/Finance | Healthcare, banking, insurance (high compliance) | strict |
| Other | Manual keyword input | standard |

**Design Constraints** (multi-select):

- Existing design system (must be compatible with existing tokens/components)
- WCAG AA (must meet WCAG 2.1 AA accessibility standards)
- Responsive (must support mobile/tablet/desktop)
- Dark mode (must support light/dark theme switching)

3. **Store requirements**: mode, scope, focus, constraints

**Success**: All parameters captured, mode finalized.
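The scope answer maps to a pipeline and the industry answer to an audit-strictness config. A minimal runnable sketch of that mapping (option labels follow the tables above; the exact config keys are illustrative):

```javascript
// Scope label -> pipeline (from the Scope Selection table)
const pipelineMap = {
  'Single page': 'page',
  'Multi-component feature': 'feature',
  'Full frontend system': 'system'
}

// Industry label -> audit strictness and must-have checks
const industryConfig = {
  'SaaS/Tech': { strictness: 'standard', mustHave: [] },
  'E-commerce/Retail': { strictness: 'standard', mustHave: ['responsive', 'performance'] },
  'Healthcare/Finance': { strictness: 'strict', mustHave: ['wcag-aaa', 'high-contrast', 'security-first'] },
  'Other': { strictness: 'standard', mustHave: [] }
}

// Example: the user picked the feature scope in a healthcare context
const pipeline = pipelineMap['Multi-component feature']
const industry = industryConfig['Healthcare/Finance']
```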
---
### Phase 2: Create Team + Initialize Session

**Objective**: Initialize team, session file, and wisdom directory.

**Workflow**:

1. Generate session ID: `FE-<slug>-<YYYY-MM-DD>`
2. Create the session folder structure
3. Call TeamCreate with the team name
4. Initialize the wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Write the session file with: session_id, mode, scope, status="active"
6. Initialize shared-memory.json with empty structures
7. Do NOT pre-spawn workers (they are spawned per-stage in Phase 4)

**Session Directory Structure**:

```
.workflow/.team/FE-<slug>-<date>/
├── team-session.json
├── shared-memory.json
├── wisdom/
├── analysis/
├── architecture/
├── qa/
└── build/
```

**Success**: Team created, session file written, wisdom initialized.
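Session ID generation can be sketched as plain JavaScript (the slug regex keeps ASCII word characters and CJK; the example input is hypothetical):

```javascript
const taskDescription = 'Build a marketing landing page'  // hypothetical user input

// Slug: collapse runs of anything outside [a-zA-Z0-9] and CJK into '-', cap at 30 chars
const slug = taskDescription.replace(/[^a-zA-Z0-9\u4e00-\u9fff]+/g, '-').slice(0, 30)
const date = new Date().toISOString().slice(0, 10)  // YYYY-MM-DD
const sessionId = `FE-${slug}-${date}`
const sessionFolder = `.workflow/.team/${sessionId}`
```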
---
### Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on mode with proper dependencies.

**Pipeline Definitions**:

| Mode | Task Chain | Description |
|------|------------|-------------|
| page | ANALYZE-001 -> ARCH-001 -> DEV-001 -> QA-001 | Linear 4-beat |
| feature | ANALYZE-001 -> ARCH-001 -> QA-001 -> DEV-001 -> QA-002 | 5-beat with architecture review |
| system | ANALYZE-001 -> ARCH-001 -> QA-001 -> [ARCH-002 || DEV-001] -> QA-002 -> DEV-002 -> QA-003 | 7-beat dual-track |

**Task Creation** (for each task):

- Include `Session: <session-folder>` in the description
- Set owner based on the role mapping
- Set blockedBy dependencies based on the pipeline

**Success**: All tasks created with correct dependencies.
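For the page pipeline, task creation can be sketched with the assumed `TaskCreate` tool (subjects are illustrative; `taskDescription`, `sessionFolder`, and `industryChoice` come from the earlier phases):

```javascript
// Linear page pipeline: ANALYZE -> ARCH -> DEV -> QA
TaskCreate({ subject: "ANALYZE-001: Requirement analysis + design intelligence", description: `${taskDescription}\nSession: ${sessionFolder}\nIndustry: ${industryChoice}`, owner: "analyst" })
TaskCreate({ subject: "ARCH-001: Page architecture + design tokens", description: `${taskDescription}\nSession: ${sessionFolder}`, owner: "architect", addBlockedBy: ["ANALYZE-001"] })
TaskCreate({ subject: "DEV-001: Page implementation", description: `${taskDescription}\nSession: ${sessionFolder}`, owner: "developer", addBlockedBy: ["ARCH-001"] })
TaskCreate({ subject: "QA-001: Code review + quality validation", description: `${taskDescription}\nSession: ${sessionFolder}`, owner: "qa", addBlockedBy: ["DEV-001"] })
```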
---
### Phase 4: Coordination Loop

**Objective**: Spawn the first batch of ready workers, then STOP.

**Design**: Spawn-and-Stop + Callback pattern.

- Spawn workers with `Task(run_in_background: true)` -> return immediately
- Worker completes -> SendMessage callback -> auto-advance
- The user can use "check" / "resume" to manually advance
- The coordinator does one operation per invocation, then STOPS

**Pipeline advancement** is driven by three wake sources:

- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)

**Message Routing**:

| Received Message | Action |
|-----------------|--------|
| analyst: `analyze_ready` | team_msg log -> TaskUpdate ANALYZE completed -> unblock ARCH |
| architect: `arch_ready` | team_msg log -> TaskUpdate ARCH completed -> unblock QA/DEV |
| developer: `dev_complete` | team_msg log -> TaskUpdate DEV completed -> unblock QA |
| qa: `qa_passed` | team_msg log -> TaskUpdate QA completed -> unblock next stage |
| qa: `fix_required` | Create DEV-fix task -> notify developer (GC loop) |
| developer: consult request | Create ANALYZE-consult task -> notify analyst |
| Worker: `error` | Assess severity -> retry or escalate to user |
| All tasks completed | -> Phase 5 |

**GC Loop Control** (Generator-Critic: developer <-> qa):

| Condition | Action |
|-----------|--------|
| QA sends fix_required && gcRound < MAX_GC_ROUNDS (2) | Create DEV-fix task + QA-recheck task, increment gcRound |
| QA sends fix_required && gcRound >= MAX_GC_ROUNDS | Escalate to user: accept current state or manual intervention |
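The GC round bookkeeping can be sketched with the assumed `TaskCreate` tool (`qaMessage` and `sessionFolder` are in scope from the routing step):

```javascript
let gcRound = 0
const MAX_GC_ROUNDS = 2

// On a qa `fix_required` message
if (gcRound < MAX_GC_ROUNDS) {
  gcRound++
  // Fix task for the developer, then a re-review gated on it
  TaskCreate({ subject: `DEV-fix-${gcRound}: Fix QA findings`, description: `${qaMessage.issues}\nSession: ${sessionFolder}\nGC Round: ${gcRound}`, owner: "developer" })
  TaskCreate({ subject: `QA-recheck-${gcRound}: Re-review the fix`, description: `Re-review DEV-fix-${gcRound}\nSession: ${sessionFolder}`, owner: "qa", addBlockedBy: [`DEV-fix-${gcRound}`] })
} else {
  // Escalate via AskUserQuestion: accept the current state, or pause for manual intervention
}
```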
---
### Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:

1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Update session status -> "completed"
4. Offer next steps to the user via AskUserQuestion:
   - New requirement -> back to Phase 1
   - Close team -> shutdown -> TeamDelete
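Persisting the final status can be sketched with the assumed `Read`/`Write` tools:

```javascript
const session = JSON.parse(Read(`${sessionFolder}/team-session.json`))
session.status = 'completed'
session.completed_at = new Date().toISOString()
Write(`${sessionFolder}/team-session.json`, JSON.stringify(session, null, 2))
```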
---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| Teammate unresponsive | Send follow-up, 2x -> respawn |
| QA rejected 3+ times | Escalate to user |
| Dual-track sync failure | Fallback to single-track sequential |
| ui-ux-pro-max unavailable | Continue with LLM general knowledge |
# Developer Role
|
# Role: developer
|
||||||
|
|
||||||
Frontend developer. Consumes architecture artifacts, implements frontend component/page code. References design-intelligence.json Implementation Checklist and tech stack guidelines during code generation, follows Anti-Patterns constraints.
|
前端开发者。消费架构产出,实现前端组件/页面代码。代码生成时引用 design-intelligence.json 的 Implementation Checklist 和技术栈指南,遵循 Anti-Patterns 约束。
|
||||||
|
|
||||||
## Identity
|
## Role Identity
|
||||||
|
|
||||||
- **Name**: `developer` | **Tag**: `[developer]`
|
- **Name**: `developer`
|
||||||
- **Task Prefix**: `DEV-*`
|
- **Task Prefix**: `DEV-*`
|
||||||
- **Responsibility**: Code generation
|
- **Responsibility**: Code generation
|
||||||
|
- **Communication**: SendMessage to coordinator only
|
||||||
|
- **Output Tag**: `[developer]`
|
||||||
|
|
||||||
## Boundaries
|
## Role Boundaries
|
||||||
|
|
||||||
### MUST
|
### MUST
|
||||||
|
|
||||||
- Only process `DEV-*` prefixed tasks
|
- 仅处理 `DEV-*` 前缀的任务
|
||||||
- All output (SendMessage, team_msg, logs) must carry `[developer]` identifier
|
- 所有输出必须带 `[developer]` 标识
|
||||||
- Only communicate with coordinator via SendMessage
|
- 仅通过 SendMessage 与 coordinator 通信
|
||||||
- Work strictly within frontend code implementation scope
|
- 严格在前端代码实现范围内工作
|
||||||
|
|
||||||
### MUST NOT
|
### MUST NOT
|
||||||
|
|
||||||
- Execute work outside this role's responsibility scope (analysis, architecture, QA)
|
- ❌ 执行需求分析、架构设计、质量审查等其他角色职责
|
||||||
- Communicate directly with other worker roles (must go through coordinator)
|
- ❌ 直接与其他 worker 角色通信
|
||||||
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
|
- ❌ 为其他角色创建任务
|
||||||
- Modify design token definitions (only consume them)
|
- ❌ 修改设计令牌定义(仅消费)
|
||||||
- Omit `[developer]` identifier in any output
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | builtin | Phase 2 | Load architecture artifacts |
| `Write` | builtin | Phase 3 | Write source code files |
| `Edit` | builtin | Phase 3 | Modify source code |
| `Bash` | builtin | Phase 3-4 | Run build commands, install deps, format |
| `Glob` | builtin | Phase 2-4 | Search project files |
| `Grep` | builtin | Phase 2-4 | Search code patterns |
| `Task(code-developer)` | subagent | Phase 3 | Complex component implementation |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `dev_complete` | developer → coordinator | Implementation complete | Code implementation finished |
| `dev_progress` | developer → coordinator | Partial progress | Implementation progress update |
| `error` | developer → coordinator | Implementation failure | Implementation failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: **<session-id>**, // MUST be session ID (e.g., FES-xxx-date), NOT team name. Extract from Session: field.
  from: "developer",
  to: "coordinator",
  type: <message-type>,
  summary: "[developer] DEV complete: <task-subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from developer --to coordinator --type <message-type> --summary \"[developer] ...\" --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `DEV-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

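The discovery flow above can be sketched as a pure filter; the task shape (`{ subject, owner, status, blockedBy }`) is taken from the flow description, and the function name is illustrative:

```javascript
// Sketch of the Phase 1 filter, assuming TaskList() returns objects
// shaped like { subject, owner, status, blockedBy }.
function claimableTasks(tasks, prefix, owner) {
  return tasks.filter(t =>
    t.subject.startsWith(prefix) &&   // e.g. 'DEV-'
    t.owner === owner &&              // tasks assigned to this role
    t.status === 'pending' &&         // not yet started
    t.blockedBy.length === 0          // no unresolved dependencies
  )
}
```

The first claimable task is then fetched with TaskGet and moved to in_progress with TaskUpdate.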
### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Extract from task description `Session: <path>` | Yes |
| Scope | Extract from task description `Scope: <tokens\|components\|full>` | No (default: full) |
| Design intelligence | `<session-folder>/analysis/design-intelligence.json` | No |
| Design tokens | `<session-folder>/architecture/design-tokens.json` | No |
| Project structure | `<session-folder>/architecture/project-structure.md` | No |
| Component specs | `<session-folder>/architecture/component-specs/*.md` | No |
| Shared memory | `<session-folder>/shared-memory.json` | No |

**Loading Steps**:

1. Extract session folder and scope from task description
2. Load design intelligence
3. Load design tokens
4. Load project structure
5. Load component specs (if available)
6. Load shared memory
7. Detect tech stack from design intelligence

**Fail-safe**: If design-tokens.json not found -> SendMessage to coordinator requesting architecture output.

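A minimal sketch of the extraction step, assuming the task description embeds `Session:` and `Scope:` lines as shown in the Input Sources table:

```javascript
// Parse `Session:` and `Scope:` fields from a task description.
// Scope defaults to 'full' when absent, per the Input Sources table.
function parseTaskFields(description) {
  const session = description.match(/Session:\s*([^\n]+)/)
  const scope = description.match(/Scope:\s*([^\n]+)/)
  return {
    sessionFolder: session ? session[1].trim() : null,
    scope: scope ? scope[1].trim() : 'full',
  }
}
```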
### Phase 3: Code Implementation

**Scope Selection**:

| Scope | Output |
|-------|--------|
| `tokens` | Generate CSS custom properties from design tokens |
| `components` | Implement components from specs |
| `full` | Both tokens and components |

#### Step 1: Generate Design Token CSS (scope: tokens or full)

Convert `design-tokens.json` to CSS custom properties:

**Token Category Mapping**:

| JSON Category | CSS Variable Prefix | Example |
|---------------|---------------------|---------|
| color | `--color-` | `--color-primary` |
| typography.font-family | `--font-` | `--font-heading` |
| typography.font-size | `--text-` | `--text-lg` |
| spacing | `--space-` | `--space-md` |
| border-radius | `--radius-` | `--radius-lg` |
| shadow | `--shadow-` | `--shadow-md` |
| transition | `--duration-` | `--duration-normal` |

**Output**: `src/styles/tokens.css`

**Dark Mode Support**: Add `@media (prefers-color-scheme: dark)` override for colors.

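The mapping above can be sketched as a converter over two representative categories (color with light/dark `$value`, spacing); the remaining categories follow the same pattern. The token shapes are assumptions based on the table:

```javascript
// Convert a design-tokens.json fragment to CSS custom properties.
// Assumes W3C-style tokens: { $value } or { $value: { light, dark } }.
function tokensToCss(tokens) {
  let css = ':root {\n'
  for (const [name, t] of Object.entries(tokens.color || {})) {
    const v = typeof t.$value === 'object' ? t.$value.light : t.$value
    css += `  --color-${name}: ${v};\n`
  }
  for (const [name, t] of Object.entries(tokens.spacing || {})) {
    css += `  --space-${name}: ${t.$value};\n`
  }
  css += '}\n'
  // Dark-mode override, only for colors that define a dark variant
  const dark = Object.entries(tokens.color || {})
    .filter(([, t]) => typeof t.$value === 'object' && t.$value.dark)
  if (dark.length > 0) {
    css += '@media (prefers-color-scheme: dark) {\n  :root {\n'
    for (const [name, t] of dark) css += `    --color-${name}: ${t.$value.dark};\n`
    css += '  }\n}\n'
  }
  return css
}
```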
#### Step 2: Implement Components (scope: components or full)

**Implementation Strategy**:

| Condition | Strategy |
|-----------|----------|
| <= 2 tasks, low complexity | Direct: inline Edit/Write |
| 3-5 tasks, medium complexity | Single agent: one code-developer for all |
| > 5 tasks, high complexity | Batch agent: group by module, one agent per batch |

**Subagent Delegation** (for complex implementation):

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement frontend components: <task-description>",
  prompt: "..."
})
```

**Prompt Content for Subagent**:

- Goal: task description
- Tech stack: detected stack
- Design tokens: import path, CSS variable usage
- Component specs: from component-specs/*.md
- Stack-specific guidelines: from design intelligence
- Implementation checklist: from design intelligence
- Anti-patterns to avoid: from design intelligence
- Coding standards: design token usage, cursor styles, transitions, contrast, focus styles, reduced motion, responsive

**Coding Standards**:

- Use design token CSS variables, never hardcode colors/spacing
- All interactive elements must have `cursor: pointer`
- Transitions: 150-300ms (use `var(--duration-normal)`)
- Text contrast: minimum 4.5:1 ratio
- Include `focus-visible` styles for keyboard navigation
- Support `prefers-reduced-motion`
- Responsive: mobile-first with md/lg breakpoints
- No emoji as functional icons

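A hedged sketch of how the prompt content list might be assembled into a single prompt string; the intel field paths (`design_system.implementation_checklist`, `recommendations.anti_patterns`) are assumptions, not a confirmed schema:

```javascript
// Assemble a subagent prompt from loaded artifacts. Field names on the
// `intel` object are illustrative and mirror the list above.
function buildSubagentPrompt(taskDesc, stack, specs, intel) {
  const checklist = intel.design_system?.implementation_checklist || []
  const antiPatterns = intel.recommendations?.anti_patterns || []
  return [
    `## Goal\n${taskDesc}`,
    `## Tech Stack\n${stack}`,
    `## Component Specs\n${specs.join('\n\n---\n\n')}`,
    `## Implementation Checklist\n${checklist.map(i => `- [ ] ${i}`).join('\n')}`,
    `## Anti-Patterns to AVOID\n${antiPatterns.map(p => `- ${p}`).join('\n')}`,
  ].join('\n\n')
}
```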
### Phase 4: Self-Validation

**Pre-delivery Self-checks**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Hardcoded colors | Scan for hex codes outside tokens.css | None found |
| cursor-pointer | Check buttons/links for cursor style | All have cursor-pointer |
| Focus styles | Check interactive elements | All have focus styles |
| Responsive | Check for breakpoints | Breakpoints present |
| File existence | Verify all planned files exist | All files present |
| Import resolution | Check no broken imports | All imports resolve |

**Auto-fix** (if possible):

- Add missing cursor-pointer to buttons/links
- Add basic focus styles

**Update Shared Memory**:

- Write `component_inventory` field with implemented files

**Validation Result**:

| Status | Condition |
|--------|-----------|
| complete | No issues found |
| complete_with_warnings | Non-critical issues found |

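The first three checks can be sketched as regex heuristics; a real pass would parse the files, but this matches the intent of the table:

```javascript
// Regex-level self-checks mirroring the table above. Heuristics only;
// issue names are illustrative.
function selfCheck(file, content) {
  const issues = []
  // Hardcoded colors: hex values anywhere outside tokens.css
  if (file !== 'src/styles/tokens.css' && /#[0-9a-fA-F]{3,8}\b/.test(content))
    issues.push('hardcoded-color')
  // Interactive elements without a cursor style
  if (/button|<a |onClick|@click/.test(content) && !/cursor-pointer/.test(content))
    issues.push('cursor-pointer')
  // Interactive elements without any focus styling
  if (/button|input|select|textarea|<a /.test(content) && !/focus/.test(content))
    issues.push('focus-styles')
  return issues
}
```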
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[developer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

**Report Content**:

- Task subject and status
- Scope completed
- File count implemented
- Self-check results
- Output file paths

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No DEV-* tasks available | Idle, wait for coordinator assignment |
| design-tokens.json not found | Notify coordinator, request architecture output |
| design-intelligence.json not found | Use default implementation guidelines |
| Sub-agent failure | Retry once, fallback to direct implementation |

@@ -1,256 +1,490 @@

# QA Role

Quality assurance engineer. Integrates ux-guidelines.csv Do/Don't rules, the Pre-Delivery Checklist, and the industry anti-pattern library to execute a 5-dimension code review. Upgrades from conceptual review to CSS-level precise review.

## Identity

- **Name**: `qa` | **Tag**: `[qa]`
- **Task Prefix**: `QA-*`
- **Responsibility**: Read-only analysis (code review + quality audit)

## Boundaries

### MUST

- Only process `QA-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[qa]` identifier
- Only communicate with the coordinator via SendMessage
- Work strictly within the quality review scope

### MUST NOT

- Execute work outside this role's responsibility scope (analysis, architecture, implementation)
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Directly modify source code (only report issues)
- Omit the `[qa]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `pre-delivery-checklist` | [commands/pre-delivery-checklist.md](commands/pre-delivery-checklist.md) | Phase 3 | Final delivery checklist execution |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | builtin | Phase 2-3 | Load artifacts, read code files |
| `Glob` | builtin | Phase 2 | Collect files to review |
| `Grep` | builtin | Phase 3 | Search code patterns |
| `Bash` | builtin | Phase 3 | Run read-only checks (lint, type-check) |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `qa_passed` | qa → coordinator | All checks passed | Review passed, proceed to next stage |
| `qa_result` | qa → coordinator | Review complete with findings | Review complete, has findings to address |
| `fix_required` | qa → coordinator | Critical issues found | Critical issues found, needs fix (triggers GC loop) |
| `error` | qa → coordinator | Review failure | Review process failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: **<session-id>**, // MUST be session ID (e.g., FES-xxx-date), NOT team name. Extract from Session: field.
  from: "qa",
  to: "coordinator",
  type: <message-type>,
  summary: "[qa] QA <verdict>: <task-subject> (<score>/10)",
  ref: <audit-file>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from qa --to coordinator --type <message-type> --summary \"[qa] ...\" --ref <audit-file> --json")
```

---

## 5-Dimension Audit Framework

| Dimension | Weight | Source | Focus |
|-----------|--------|--------|-------|
| Code Quality | 0.20 | Standard code review | Code structure, naming, maintainability |
| Accessibility | 0.25 | ux-guidelines.csv accessibility rules | WCAG compliance, keyboard nav, screen reader |
| Design Compliance | 0.20 | design-intelligence.json anti-patterns | Industry anti-pattern check, design token usage |
| UX Best Practices | 0.20 | ux-guidelines.csv Do/Don't rules | Interaction patterns, responsive, animations |
| Pre-Delivery | 0.15 | ui-ux-pro-max Pre-Delivery Checklist | Final delivery checklist |

---

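The weights above combine per-dimension scores (0-10) into a single total; a minimal sketch, assuming snake_case dimension keys (the key names are illustrative):

```javascript
// Weights from the 5-Dimension Audit Framework table; they sum to 1.0,
// so a perfect audit scores 10.
const WEIGHTS = {
  code_quality: 0.20,
  accessibility: 0.25,
  design_compliance: 0.20,
  ux_best_practices: 0.20,
  pre_delivery: 0.15,
}

function weightedScore(scores) {
  return Object.entries(WEIGHTS)
    .reduce((sum, [dim, w]) => sum + w * (scores[dim] ?? 0), 0)
}
```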
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `QA-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Extract from task description `Session: <path>` | Yes |
| Review type | Extract from task description `Type: <type>` | No (default: code-review) |
| Design intelligence | `<session-folder>/analysis/design-intelligence.json` | No |
| Design tokens | `<session-folder>/architecture/design-tokens.json` | No |
| Shared memory | `<session-folder>/shared-memory.json` | No |

**Review Types**:

| Type | Files to Review |
|------|-----------------|
| architecture-review | `<session-folder>/architecture/**/*` |
| token-review | `<session-folder>/architecture/**/*` |
| component-review | `<session-folder>/architecture/component-specs/**/*` |
| code-review | `src/**/*.{tsx,jsx,vue,svelte,html,css}` |
| final | `src/**/*.{tsx,jsx,vue,svelte,html,css}` |

**Loading Steps**:

1. Extract session folder and review type
2. Load design intelligence (for anti-patterns, must-have)
3. Load design tokens (for compliance checks)
4. Load shared memory (for industry context, strictness)
5. Collect files to review based on review type

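The Review Types table maps directly to glob patterns; a small sketch of step 5:

```javascript
// Map a review type to the glob pattern from the Review Types table.
function reviewGlob(reviewType, sessionFolder) {
  switch (reviewType) {
    case 'architecture-review':
    case 'token-review':
      return `${sessionFolder}/architecture/**/*`
    case 'component-review':
      return `${sessionFolder}/architecture/component-specs/**/*`
    default: // code-review and final review implemented source files
      return 'src/**/*.{tsx,jsx,vue,svelte,html,css}'
  }
}
```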
### Phase 3: 5-Dimension Audit

#### Dimension 1: Code Quality (weight: 0.20)

| Check | Severity | Description |
|-------|----------|-------------|
| File length | MEDIUM | File exceeds 300 lines, consider splitting |
| console.log | LOW | console.log found in production code |
| Empty catch | HIGH | Empty catch block found |
| Unused imports | LOW | Unused imports detected |

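Three of these checks can be sketched as a per-file scoring pass; the deduction sizes (1, 0.5, 2) are illustrative assumptions:

```javascript
// Score one file against the Dimension 1 checks; starts at 10 and
// deducts per finding, clamped at 0. Test/spec files may log freely.
function codeQualityCheck(file, content) {
  const issues = []
  let score = 10
  if (content.split('\n').length > 300) {
    issues.push({ file, severity: 'MEDIUM', message: 'File exceeds 300 lines' })
    score -= 1
  }
  if (/console\.(log|debug)/.test(content) && !/\.test\.|\.spec\./.test(file)) {
    issues.push({ file, severity: 'LOW', message: 'console.log in production code' })
    score -= 0.5
  }
  if (/catch\s*(\([^)]*\))?\s*\{\s*\}/.test(content)) {
    issues.push({ file, severity: 'HIGH', message: 'Empty catch block' })
    score -= 2
  }
  return { score: Math.max(0, score), issues }
}
```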
#### Dimension 2: Accessibility (weight: 0.25)
|
for (const [file, content] of Object.entries(fileContents)) {
|
||||||
|
// Check: consistent naming conventions
|
||||||
|
// Check: no unused imports/variables
|
||||||
|
// Check: reasonable file length (< 300 lines)
|
||||||
|
if (content.split('\n').length > 300) {
|
||||||
|
codeQuality.issues.push({ file, severity: 'MEDIUM', message: 'File exceeds 300 lines, consider splitting' })
|
||||||
|
codeQuality.score -= 1
|
||||||
|
}
|
||||||
|
|
||||||
| Check | Severity | Do | Don't |
|
// Check: no console.log in production code
|
||||||
|-------|----------|----|----|
|
if (/console\.(log|debug)/.test(content) && !/\.test\.|\.spec\./.test(file)) {
|
||||||
| Image alt | CRITICAL | Always provide alt text | Leave alt empty without role="presentation" |
|
codeQuality.issues.push({ file, severity: 'LOW', message: 'console.log found in production code' })
|
||||||
| Input labels | HIGH | Use <label> or aria-label | Rely on placeholder as label |
|
codeQuality.score -= 0.5
|
||||||
| Button text | HIGH | Add aria-label for icon-only buttons | Use title as sole accessible name |
|
}
|
||||||
| Heading hierarchy | MEDIUM | Maintain sequential heading levels | Skip heading levels |
|
|
||||||
| Focus styles | HIGH | Add focus-visible outline | Remove default outline without replacement |
|
|
||||||
| ARIA roles | MEDIUM | Include tabindex for non-native elements | Use role without keyboard support |
|
|
||||||
|
|
||||||
**Strict Mode** (medical/financial):
|
// Check: proper error handling
|
||||||
|
if (/catch\s*\(\s*\)\s*\{[\s]*\}/.test(content)) {
|
||||||
|
codeQuality.issues.push({ file, severity: 'HIGH', message: 'Empty catch block found' })
|
||||||
|
codeQuality.score -= 2
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
| Check | Severity | Do | Don't |
|
audit.dimensions.code_quality = { weight: 0.20, score: Math.max(0, codeQuality.score), issues: codeQuality.issues }

#### Dimension 2: Accessibility (weight: 0.25)

```javascript
// ═══════════════════════════════════════════
// Dimension 2: Accessibility (weight: 0.25)
// ═══════════════════════════════════════════
const accessibility = { score: 10, issues: [] }

for (const [file, content] of Object.entries(fileContents)) {
  if (!/\.(tsx|jsx|vue|svelte|html)$/.test(file)) continue

  // Check: images have alt text
  if (/<img\s/.test(content) && !/<img\s[^>]*alt=/.test(content)) {
    accessibility.issues.push({ file, severity: 'CRITICAL', message: 'Image missing alt attribute', do: 'Always provide alt text', dont: 'Leave alt empty for decorative images without role="presentation"' })
    accessibility.score -= 3
  }

  // Check: form inputs have labels
  if (/<input\s/.test(content) && !/<label/.test(content) && !/aria-label/.test(content)) {
    accessibility.issues.push({ file, severity: 'HIGH', message: 'Form input missing associated label', do: 'Use <label> or aria-label', dont: 'Rely on placeholder as label' })
    accessibility.score -= 2
  }

  // Check: buttons have accessible text
  if (/<button\s/.test(content) && /<button\s[^>]*>\s*</.test(content) && !/aria-label/.test(content)) {
    accessibility.issues.push({ file, severity: 'HIGH', message: 'Button may lack accessible text (icon-only?)', do: 'Add aria-label for icon-only buttons', dont: 'Use title attribute as sole accessible name' })
    accessibility.score -= 2
  }

  // Check: heading hierarchy
  if (/h[1-6]/.test(content)) {
    const headings = content.match(/<h([1-6])/g)?.map(h => parseInt(h[2])) || []
    for (let i = 1; i < headings.length; i++) {
      if (headings[i] - headings[i - 1] > 1) {
        accessibility.issues.push({ file, severity: 'MEDIUM', message: `Heading level skipped: h${headings[i - 1]} → h${headings[i]}` })
        accessibility.score -= 1
      }
    }
  }

  // Check: color contrast (basic — flag hardcoded light colors on light bg)
  // Check: focus-visible styles
  if (/button|<a |input|select/.test(content) && !/focus-visible|focus:/.test(content)) {
    accessibility.issues.push({ file, severity: 'HIGH', message: 'Interactive element missing focus styles', do: 'Add focus-visible outline', dont: 'Remove default focus outline without replacement' })
    accessibility.score -= 2
  }

  // Check: ARIA roles used correctly
  if (/role=/.test(content) && /role="(button|link)"/.test(content)) {
    // Verify tabindex is present for non-native elements with a role
    if (!/tabindex/.test(content)) {
      accessibility.issues.push({ file, severity: 'MEDIUM', message: 'Element with ARIA role may need tabindex' })
      accessibility.score -= 1
    }
  }
}
```

**Strict Mode** (medical/financial):

```javascript
// Strict mode: additional checks for medical/financial industries
if (strictness === 'strict') {
  for (const [file, content] of Object.entries(fileContents)) {
    // Check: prefers-reduced-motion
    if (/animation|transition|@keyframes/.test(content) && !/prefers-reduced-motion/.test(content)) {
      accessibility.issues.push({ file, severity: 'HIGH', message: 'Animation without prefers-reduced-motion respect', do: 'Wrap animations in @media (prefers-reduced-motion: no-preference)', dont: 'Force animations on all users' })
      accessibility.score -= 2
    }
  }
}

audit.dimensions.accessibility = { weight: 0.25, score: Math.max(0, accessibility.score), issues: accessibility.issues }
```

#### Dimension 3: Design Compliance (weight: 0.20)

```javascript
// ═══════════════════════════════════════════
// Dimension 3: Design Compliance (weight: 0.20)
// ═══════════════════════════════════════════
const designCompliance = { score: 10, issues: [] }

for (const [file, content] of Object.entries(fileContents)) {
  // Check: using design tokens (no hardcoded colors)
  if (file !== 'src/styles/tokens.css' && /#[0-9a-fA-F]{3,8}/.test(content)) {
    const hardcodedColors = content.match(/#[0-9a-fA-F]{3,8}/g) || []
    designCompliance.issues.push({ file, severity: 'HIGH', message: `${hardcodedColors.length} hardcoded color(s) found — use design token variables`, do: 'Use var(--color-primary)', dont: 'Hardcode #1976d2' })
    designCompliance.score -= 2
  }

  // Check: using spacing tokens
  if (/margin|padding/.test(content) && /:\s*\d+px/.test(content) && !/var\(--space/.test(content)) {
    designCompliance.issues.push({ file, severity: 'MEDIUM', message: 'Hardcoded spacing values — use spacing tokens', do: 'Use var(--space-md)', dont: 'Hardcode 16px' })
    designCompliance.score -= 1
  }

  // Check: industry anti-patterns
  for (const pattern of antiPatterns) {
    // Each anti-pattern is a string description — check for common violations
    if (typeof pattern === 'string') {
      const patternLower = pattern.toLowerCase()
      if (patternLower.includes('gradient') && /gradient/.test(content)) {
        designCompliance.issues.push({ file, severity: 'CRITICAL', message: `Industry anti-pattern violation: ${pattern}` })
        designCompliance.score -= 3
      }
      if (patternLower.includes('emoji') && /[\u{1F300}-\u{1F9FF}]/u.test(content)) {
        designCompliance.issues.push({ file, severity: 'HIGH', message: `Industry anti-pattern violation: ${pattern}` })
        designCompliance.score -= 2
      }
    }
  }
}

audit.dimensions.design_compliance = { weight: 0.20, score: Math.max(0, designCompliance.score), issues: designCompliance.issues }
```

#### Dimension 4: UX Best Practices (weight: 0.20)

```javascript
// ═══════════════════════════════════════════
// Dimension 4: UX Best Practices (weight: 0.20)
// ═══════════════════════════════════════════
const uxPractices = { score: 10, issues: [] }

for (const [file, content] of Object.entries(fileContents)) {
  // Check: cursor-pointer on clickable elements
  if (/button|<a |onClick|@click/.test(content) && !/cursor-pointer/.test(content) && /\.css$/.test(file)) {
    uxPractices.issues.push({ file, severity: 'MEDIUM', message: 'Missing cursor: pointer on clickable element', do: 'Add cursor: pointer to all clickable elements', dont: 'Leave default cursor on buttons/links' })
    uxPractices.score -= 1
  }

  // Check: transition duration in valid range (150-300ms)
  const durations = content.match(/duration[:-]\s*(\d+)/g) || []
  for (const d of durations) {
    const ms = parseInt(d.match(/\d+/)[0])
    if (ms > 0 && (ms < 100 || ms > 500)) {
      uxPractices.issues.push({ file, severity: 'LOW', message: `Transition duration ${ms}ms outside recommended range (150-300ms)` })
      uxPractices.score -= 0.5
    }
  }

  // Check: responsive breakpoints
  if (/className|class=/.test(content) && !/md:|lg:|@media/.test(content) && /\.(tsx|jsx|vue|html)$/.test(file)) {
    uxPractices.issues.push({ file, severity: 'MEDIUM', message: 'No responsive breakpoints detected', do: 'Use mobile-first responsive design', dont: 'Design for desktop only' })
    uxPractices.score -= 1
  }

  // Check: loading states for async operations
  if (/fetch|axios|useSWR|useQuery/.test(content) && !/loading|isLoading|skeleton|spinner/.test(content)) {
    uxPractices.issues.push({ file, severity: 'MEDIUM', message: 'Async operation without loading state', do: 'Show loading indicator during data fetching', dont: 'Leave blank screen while loading' })
    uxPractices.score -= 1
  }

  // Check: error states
  if (/fetch|axios|useSWR|useQuery/.test(content) && !/error|isError|catch/.test(content)) {
    uxPractices.issues.push({ file, severity: 'HIGH', message: 'Async operation without error handling', do: 'Show user-friendly error message', dont: 'Silently fail or show raw error' })
    uxPractices.score -= 2
  }
}

audit.dimensions.ux_practices = { weight: 0.20, score: Math.max(0, uxPractices.score), issues: uxPractices.issues }
```

#### Dimension 5: Pre-Delivery Checklist (weight: 0.15)

Only run the full checklist on `final` or `code-review` review types.

```javascript
// ═══════════════════════════════════════════
// Dimension 5: Pre-Delivery Checklist (weight: 0.15)
// ═══════════════════════════════════════════
const preDelivery = { score: 10, issues: [] }

// Only run full pre-delivery on final or code review
if (reviewType === 'final' || reviewType === 'code-review') {
  const allContent = Object.values(fileContents).join('\n')

  const checklist = [
    { check: "No emojis as functional icons", test: () => /[\u{1F300}-\u{1F9FF}]/u.test(allContent), severity: 'HIGH' },
    { check: "cursor-pointer on clickable", test: () => /button|onClick/.test(allContent) && !/cursor-pointer/.test(allContent), severity: 'MEDIUM' },
    { check: "Transitions 150-300ms", test: () => { const m = allContent.match(/duration[:-]\s*(\d+)/g); return m?.some(d => { const v = parseInt(d.match(/\d+/)[0]); return v > 0 && (v < 100 || v > 500) }) }, severity: 'LOW' },
    { check: "Focus states visible", test: () => /button|input|<a /.test(allContent) && !/focus/.test(allContent), severity: 'HIGH' },
    { check: "prefers-reduced-motion", test: () => /animation|@keyframes/.test(allContent) && !/prefers-reduced-motion/.test(allContent), severity: 'MEDIUM' },
    { check: "Responsive breakpoints", test: () => !/md:|lg:|@media.*min-width/.test(allContent), severity: 'MEDIUM' },
    { check: "No hardcoded colors", test: () => { const nonToken = Object.entries(fileContents).filter(([f]) => f !== 'src/styles/tokens.css'); return nonToken.some(([, c]) => /#[0-9a-fA-F]{6}/.test(c)) }, severity: 'HIGH' },
    { check: "Dark mode support", test: () => !/prefers-color-scheme|dark:|\.dark/.test(allContent), severity: 'MEDIUM' }
  ]

  for (const item of checklist) {
    try {
      if (item.test()) {
        preDelivery.issues.push({ check: item.check, severity: item.severity, message: `Pre-delivery check failed: ${item.check}` })
        preDelivery.score -= (item.severity === 'HIGH' ? 2 : item.severity === 'MEDIUM' ? 1 : 0.5)
      }
    } catch {}
  }
}

audit.dimensions.pre_delivery = { weight: 0.15, score: Math.max(0, preDelivery.score), issues: preDelivery.issues }
```
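To see one of the audit checks in isolation: the sketch below extracts heading levels with the same regex used in the accessibility dimension and reports skipped levels. `skippedHeadings` is a hypothetical helper and the HTML strings are made-up samples:

```javascript
// Same extraction as the audit above: match "<h1".."<h6", compare
// consecutive levels, and flag any jump greater than one.
function skippedHeadings(html) {
  const levels = html.match(/<h([1-6])/g)?.map(h => parseInt(h[2])) || []
  const skips = []
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] - levels[i - 1] > 1) skips.push(`h${levels[i - 1]} → h${levels[i]}`)
  }
  return skips
}

const ok = skippedHeadings('<h1>Title</h1><h2>Section</h2><h3>Sub</h3>')  // []
const bad = skippedHeadings('<h1>Title</h1><h3>Sub</h3>')                 // ['h1 → h3']
```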

### Phase 4: Score Calculation & Report

**Calculate weighted score and verdict**:

```javascript
// Calculate weighted score
audit.score = Object.values(audit.dimensions).reduce((sum, dim) => {
  return sum + (dim.score * dim.weight)
}, 0)

// Collect all issues
audit.issues = Object.values(audit.dimensions).flatMap(dim => dim.issues)
audit.critical_count = audit.issues.filter(i => i.severity === 'CRITICAL').length
audit.passed = Object.entries(audit.dimensions)
  .filter(([, dim]) => dim.issues.length === 0)
  .map(([name]) => name)

// Determine verdict:
//   score >= 8 and no CRITICAL issues → PASSED
//   score >= 6 and no CRITICAL issues → PASSED_WITH_WARNINGS
//   score < 6 or any CRITICAL issue   → FIX_REQUIRED
let verdict = 'PASSED'
if (audit.score < 6 || audit.critical_count > 0) {
  verdict = 'FIX_REQUIRED'
} else if (audit.score < 8) {
  verdict = 'PASSED_WITH_WARNINGS'
}

// Next audit report index
const auditIndex = Glob({ pattern: `${sessionFolder}/qa/audit-*.md` }).length + 1
const auditFile = `${sessionFolder}/qa/audit-${String(auditIndex).padStart(3, '0')}.md`
```

**Write the audit report** to `<session-folder>/qa/audit-<NNN>.md` (summary, dimension scores, issues with Do/Don't guidance, passed dimensions):

```javascript
Write(auditFile, `# QA Audit Report #${auditIndex}

## Summary

- **Review Type**: ${reviewType}
- **Verdict**: ${verdict}
- **Score**: ${audit.score.toFixed(1)} / 10
- **Critical Issues**: ${audit.critical_count}
- **Total Issues**: ${audit.issues.length}
- **Strictness**: ${strictness}

## Dimension Scores

| Dimension | Weight | Score | Issues |
|-----------|--------|-------|--------|
| Code Quality | 0.20 | ${audit.dimensions.code_quality.score.toFixed(1)} | ${audit.dimensions.code_quality.issues.length} |
| Accessibility | 0.25 | ${audit.dimensions.accessibility.score.toFixed(1)} | ${audit.dimensions.accessibility.issues.length} |
| Design Compliance | 0.20 | ${audit.dimensions.design_compliance.score.toFixed(1)} | ${audit.dimensions.design_compliance.issues.length} |
| UX Best Practices | 0.20 | ${audit.dimensions.ux_practices.score.toFixed(1)} | ${audit.dimensions.ux_practices.issues.length} |
| Pre-Delivery | 0.15 | ${audit.dimensions.pre_delivery.score.toFixed(1)} | ${audit.dimensions.pre_delivery.issues.length} |

## Issues

${audit.issues.map(i => `### [${i.severity}] ${i.message}

- **File**: ${i.file || i.check || 'N/A'}
${i.do ? `- ✅ **Do**: ${i.do}` : ''}
${i.dont ? `- ❌ **Don't**: ${i.dont}` : ''}
`).join('\n')}

## Passed Dimensions

${audit.passed.map(p => `- ✅ ${p}`).join('\n') || 'None — all dimensions have issues'}
`)
```

**Update shared memory** (append to `qa_history`) and prepare the result:

```javascript
sharedMemory.qa_history = sharedMemory.qa_history || []
sharedMemory.qa_history.push({
  audit_index: auditIndex,
  review_type: reviewType,
  verdict: verdict,
  score: audit.score,
  critical_count: audit.critical_count,
  total_issues: audit.issues.length,
  timestamp: new Date().toISOString()
})
Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))

const resultStatus = verdict
const resultSummary = `Score: ${audit.score.toFixed(1)}/10, Verdict: ${verdict}, ${audit.issues.length} issues (${audit.critical_count} critical)`
const resultDetails = `Report: ${auditFile}`
```
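As a worked example of the scoring above (the weights are the ones defined in Phase 3; the sample dimension scores and issues are made up for illustration):

```javascript
// Standalone re-implementation of the weighted-score + verdict logic above.
function verdictFor(dimensions) {
  const score = Object.values(dimensions).reduce((s, d) => s + d.score * d.weight, 0)
  const critical = Object.values(dimensions)
    .flatMap(d => d.issues)
    .filter(i => i.severity === 'CRITICAL').length
  if (score < 6 || critical > 0) return { score, verdict: 'FIX_REQUIRED' }
  if (score < 8) return { score, verdict: 'PASSED_WITH_WARNINGS' }
  return { score, verdict: 'PASSED' }
}

// Hypothetical audit: one HIGH accessibility issue, no CRITICAL ones.
const out = verdictFor({
  code_quality:      { weight: 0.20, score: 9,  issues: [] },
  accessibility:     { weight: 0.25, score: 7,  issues: [{ severity: 'HIGH' }] },
  design_compliance: { weight: 0.20, score: 8,  issues: [] },
  ux_practices:      { weight: 0.20, score: 8,  issues: [] },
  pre_delivery:      { weight: 0.15, score: 10, issues: [] }
})
// 9·0.20 + 7·0.25 + 8·0.20 + 8·0.20 + 10·0.15 = 8.25 → PASSED
```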

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure → Worker Phase 5: Report. Standard flow: team_msg log → SendMessage with `[qa]` prefix → TaskUpdate completed → loop to Phase 1 for the next task. Message type by verdict: PASSED → `qa_passed`, PASSED_WITH_WARNINGS → `qa_result`, FIX_REQUIRED → `fix_required`.

```javascript
const msgType = verdict === 'FIX_REQUIRED' ? 'fix_required' : verdict === 'PASSED' ? 'qa_passed' : 'qa_result'

mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "qa",
  to: "coordinator",
  type: msgType,
  summary: `[qa] QA ${verdict}: ${task.subject} (${audit.score.toFixed(1)}/10)`,
  ref: auditFile
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `## [qa] QA Results

**Task**: ${task.subject}
**Verdict**: ${verdict}
**Score**: ${audit.score.toFixed(1)} / 10

### Dimension Summary

${Object.entries(audit.dimensions).map(([name, dim]) =>
  `- **${name}**: ${dim.score.toFixed(1)}/10 (${dim.issues.length} issues)`
).join('\n')}

### Critical Issues

${audit.issues.filter(i => i.severity === 'CRITICAL').map(i => `- ❌ ${i.message} (${i.file || i.check})`).join('\n') || 'None'}

### High Priority Issues

${audit.issues.filter(i => i.severity === 'HIGH').map(i => `- ⚠️ ${i.message} (${i.file || i.check})`).join('\n') || 'None'}

### Report

${resultDetails}`,
  summary: `[qa] QA ${verdict} (${audit.score.toFixed(1)}/10)`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next task
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('QA-') &&
  t.owner === 'qa' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (nextTasks.length > 0) {
  // Continue with next task → back to Phase 1
}
```
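The verdict-to-message-type mapping used above, as a standalone sketch (same ternary, just named for illustration):

```javascript
// PASSED → qa_passed, FIX_REQUIRED → fix_required, anything else
// (i.e. PASSED_WITH_WARNINGS) → qa_result.
const msgTypeFor = verdict =>
  verdict === 'FIX_REQUIRED' ? 'fix_required'
    : verdict === 'PASSED' ? 'qa_passed'
    : 'qa_result'
```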

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No QA-* tasks available | Idle, wait for coordinator assignment |
| design-intelligence.json not found | Skip design compliance dimension, adjust weights |
| No files to review | Report empty review, notify coordinator |
| Session folder not found | Notify coordinator, request location |

---

# Team Issue Resolution

Unified team skill: issue processing pipeline (explore → plan → implement → review → integrate). All team members invoke with `--role=xxx` to route to role-specific execution.

**Scope**: Issue processing flow (plan → queue → execute). Issue creation/discovery is handled separately by `issue-discover`; CRUD management by `issue-manage`.

## Architecture

```
┌───────────────────────────────────────────────┐
│ Skill(skill="team-issue")                     │
│ args="<issue-ids>" or args="--role=xxx"       │
└───────────────────┬───────────────────────────┘
                    │ Role Router
         ┌──── --role present? ────┐
         │ NO                      │ YES
         ↓                         ↓
 Orchestration Mode          Role Dispatch
 (auto → coordinator)        (route to role.md)
                                   │
     ┌─────────┬─────────┬─────────┼─────────┬──────────┐
     ↓         ↓         ↓         ↓         ↓          ↓
┌───────────┐┌─────────┐┌─────────┐┌─────────┐┌──────────┐┌───────────┐
│coordinator││explorer ││planner  ││reviewer ││integrator││implementer│
│           ││EXPLORE-*││SOLVE-*  ││AUDIT-*  ││MARSHAL-* ││BUILD-*    │
└───────────┘└─────────┘└─────────┘└─────────┘└──────────┘└───────────┘
```

## Command Architecture

```
roles/
├── coordinator.md    # Pipeline orchestration (Phase 1/5 inline, Phase 2-4 core logic)
├── explorer.md       # Context analysis (ACE + cli-explore-agent)
├── planner.md        # Solution design (wraps issue-plan-agent)
├── reviewer.md       # Solution review (technical feasibility + risk assessment)
├── integrator.md     # Queue orchestration (wraps issue-queue-agent)
└── implementer.md    # Code implementation (wraps code-developer)
```

## Role Router

### Input Parsing

Parse `$ARGUMENTS` to extract `--role`. If absent → Orchestration Mode (auto-route to coordinator). Extract issue IDs and `--mode` from the remaining arguments.

### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator.md](roles/coordinator.md) | (none) | orchestrator | **⚠️ must re-read after compaction** |
| explorer | [roles/explorer.md](roles/explorer.md) | EXPLORE-* | pipeline | must re-read after compaction |
| planner | [roles/planner.md](roles/planner.md) | SOLVE-* | pipeline | must re-read after compaction |
| reviewer | [roles/reviewer.md](roles/reviewer.md) | AUDIT-* | pipeline | must re-read after compaction |
| integrator | [roles/integrator.md](roles/integrator.md) | MARSHAL-* | pipeline | must re-read after compaction |
| implementer | [roles/implementer.md](roles/implementer.md) | BUILD-* | pipeline | must re-read after compaction |

> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. After context compression, when only a summary of the role instructions remains, you **must immediately `Read` the corresponding role.md to reload it before continuing**. Do not execute any Phase from a summary alone.

### Dispatch

1. Extract `--role` from arguments
2. If no `--role` → route to coordinator (Orchestration Mode)
3. Look up role in registry → Read the role file → Execute its phases
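The dispatch steps above can be sketched as a small router. This is a sketch under assumptions: the `ISS-*` issue-ID format and the role names from the registry; the real router also extracts `--mode` and `--agent-name`:

```javascript
// Minimal router sketch: no --role → coordinator (Orchestration Mode);
// --role=<name> → validate against the registry and dispatch.
function route(args) {
  const roleMatch = args.match(/--role[=\s]+(\w+)/)
  if (!roleMatch) {
    return { role: 'coordinator', issueIds: args.match(/ISS-[\w-]+/g) || [] }
  }
  const valid = ['coordinator', 'explorer', 'planner', 'reviewer', 'integrator', 'implementer']
  if (!valid.includes(roleMatch[1])) {
    throw new Error(`Unknown role: ${roleMatch[1]}. Available: ${valid.join(', ')}`)
  }
  return { role: roleMatch[1], issueIds: [] }
}

const orchestrate = route('ISS-101 --mode=full')
const dispatch = route('--role=planner')
```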
### Orchestration Mode

When invoked without `--role`, the coordinator auto-starts. The user provides issue IDs and an optional mode.

**Invocation**: `Skill(skill="team-issue", args="<issue-ids> [--mode=<mode>]")`

**Lifecycle**:

```
User provides issue IDs
→ coordinator Phase 1-3: Mode detection → TeamCreate → Create task chain
→ coordinator Phase 4: spawn first batch workers (background) → STOP
→ Worker executes → SendMessage callback → coordinator advances next step
→ Loop until pipeline complete → Phase 5 report
```

**User Commands** (wake a paused coordinator):

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |

---
## Shared Infrastructure

The following templates apply to all worker roles. Each role.md only needs to write **Phase 2-4** role-specific logic.
|
|
||||||
|
|
||||||
### Worker Phase 1: Task Discovery (shared by all workers)
|
|
||||||
|
|
||||||
Every worker executes the same task discovery flow on startup:
|
|
||||||
|
|
||||||
1. Call `TaskList()` to get all tasks
|
|
||||||
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
|
|
||||||
3. No tasks → idle wait
|
|
||||||
4. Has tasks → `TaskGet` for details → `TaskUpdate` mark in_progress
|
|
||||||
|
|
||||||
**Resume Artifact Check** (prevent duplicate output after resume):
|
|
||||||
- Check whether this task's output artifact already exists
|
|
||||||
- Artifact complete → skip to Phase 5 report completion
|
|
||||||
- Artifact incomplete or missing → normal Phase 2-4 execution
|
|
||||||
|
|
||||||
### Worker Phase 5: Report (shared by all workers)
|
|
||||||
|
|
||||||
Standard reporting flow after task completion:
|
|
||||||
|
|
||||||
1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
|
|
||||||
- Parameters: operation="log", team=**<session-id>**, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
|
|
||||||
- **CLI fallback**: When MCP unavailable → `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
|
|
||||||
- **Note**: `team` must be session ID (e.g., `ISS-xxx-date`), NOT team name. Extract from `Session:` field in task description.
|
|
||||||
2. **SendMessage**: Send result to coordinator (content and summary both prefixed with `[<role>]`)
|
|
||||||
3. **TaskUpdate**: Mark task completed
|
|
||||||
4. **Loop**: Return to Phase 1 to check next task
|
|
||||||
|
|
||||||
### Wisdom Accumulation (all roles)

Cross-task knowledge accumulation. The coordinator creates the `wisdom/` directory at session initialization.

**Directory**:

```
<session-folder>/wisdom/
├── learnings.md     # Patterns and insights
├── decisions.md     # Architecture and design decisions
├── conventions.md   # Codebase conventions
└── issues.md        # Known risks and issues
```

**Worker Load** (Phase 2): Extract `Session: <path>` from the task description, then read the wisdom directory files.

**Worker Contribute** (Phase 4/5): Write this task's discoveries to the corresponding wisdom files.
### Role Isolation Rules

**Core principle**: each role may only perform work within its own responsibility.

| Allowed | Forbidden |
|---------|-----------|
| Process tasks with own prefix | Process tasks with other role prefixes |
| SendMessage to coordinator | Communicate directly with other workers |
| Use tools declared in Toolbox | Create tasks for other roles |
| Delegate to reused agents | Modify resources outside own responsibility |

#### Output Tagging (mandatory)

All role output must carry the `[role_name]` prefix in both SendMessage content/summary and team_msg summary:

```javascript
// SendMessage — both content and summary must carry the tag
SendMessage({
  content: `## [${role}] ...`,
  summary: `[${role}] ...`
})

// team_msg — summary must carry the tag
mcp__ccw-tools__team_msg({
  summary: `[${role}] ...`
})
```

#### Coordinator Isolation

| Allowed | Forbidden |
|---------|-----------|
| Requirement clarification (AskUserQuestion) | ❌ Write/modify code directly |
| Create task chains (TaskCreate) | ❌ Call implementation agents (issue-plan-agent etc.) |
| Dispatch tasks to workers | ❌ Execute analysis/review directly |
| Monitor progress (message bus) | ❌ Bypass workers to complete tasks yourself |
| Report results to user | ❌ Modify source code or artifact files |

#### Worker Isolation

| Allowed | Forbidden |
|---------|-----------|
| Process tasks with own prefix | ❌ Process tasks with other role prefixes |
| SendMessage to coordinator | ❌ Communicate directly with other workers |
| Use tools declared in Toolbox | ❌ Create tasks for other roles (TaskCreate) |
| Delegate to reused agents | ❌ Modify resources outside own responsibility |
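As a sketch, the mandatory output tagging can be enforced with a tiny helper (the function name is hypothetical, not part of the skill's API):

```javascript
// Prefix a summary or content string with the role tag unless it is already tagged.
function tagOutput(role, text) {
  const tag = `[${role}]`
  return text.startsWith(tag) ? text : `${tag} ${text}`
}
```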
### Team Configuration

```javascript
const TEAM_CONFIG = {
  name: "issue",
  sessionDir: ".workflow/.team-plan/issue/",
  issueDataDir: ".workflow/issues/"
}
```
### Message Bus (All Roles)

Before every SendMessage, call `mcp__ccw-tools__team_msg` to log the message:

```javascript
mcp__ccw-tools__team_msg({
  operation: "log",
  team: "<session-id>",       // MUST be the session ID (e.g., ISS-xxx-date), NOT the team name
  from: role,                  // current role name
  to: "coordinator",
  type: "<type>",
  summary: "[role] <summary>",
  ref: "<file_path>"           // optional
})
```

**CLI fallback**: When MCP is unavailable → `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`

**Note**: `team` must be the session ID (e.g., `ISS-xxx-date`), NOT the team name. Extract it from the `Session:` field in the task description.

**Message types by role**:

| Role | Message Types |
|------|---------------|
| integrator | `queue_ready`, `conflict_found`, `error` |
| implementer | `impl_complete`, `impl_failed`, `error` |
### CLI Fallback

When the `mcp__ccw-tools__team_msg` MCP tool is unavailable:

```javascript
Bash(`ccw team log --team "<session-id>" --from "${role}" --to "coordinator" --type "<type>" --summary "[${role}] <summary>" --json`)
```
### Task Lifecycle (All Roles)

```javascript
// Standard task lifecycle every role follows

// Phase 1: Discovery
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith(`${VALID_ROLES[role].prefix}-`) &&
  t.owner === agentName &&   // Use agentName (e.g., 'explorer-1') instead of role
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })

// Phase 2-4: Role-specific (see roles/{role}.md)

// Phase 5: Report + Loop — all output must carry the [role] tag
mcp__ccw-tools__team_msg({ operation: "log", team: "<session-id>", from: role, to: "coordinator", type: "...", summary: `[${role}] ...` })
SendMessage({ type: "message", recipient: "coordinator", content: `## [${role}] ...`, summary: `[${role}] ...` })
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task → back to Phase 1
```
## Pipeline Modes

```
Full Mode (complex issues, with review):
  └─(rejected)→ SOLVE-fix → AUDIT-002 (re-review, max 2x)

Batch Mode (5-100 issues):
EXPLORE-001..N (batch ≤5) → SOLVE-001..N (batch ≤3) → AUDIT-001 (batch) → MARSHAL-001 → BUILD-001..M (DAG parallel)
```
### Mode Auto-Detection

When the user does not specify `--mode`, auto-detect based on issue count and complexity:

| Condition | Mode | Description |
|-----------|------|-------------|
| User explicitly specifies `--mode=<M>` | Use specified mode | User override takes priority |
| Issue count <= 2 AND no high-priority issues (priority < 4) | `quick` | Simple issues, skip review step |
| Issue count <= 2 AND has high-priority issues (priority >= 4) | `full` | Complex issues need review gate |
| Issue count > 2 | `batch` | Multiple issues, parallel exploration and implementation |
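The rules above can be sketched as a pure function. Taking priorities directly is an assumption for testability; at runtime the coordinator looks each issue up via `ccw issue status`:

```javascript
// priorities: numeric issue priorities; userMode: explicit --mode value or null.
function detectMode(priorities, userMode) {
  if (userMode) return userMode
  if (priorities.length > 2) return "batch"
  return priorities.some(p => p >= 4) ? "full" : "quick"
}
```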
### Review Gate (Full/Batch modes)

| AUDIT Verdict | Action |
|---------------|--------|
| approved | Proceed to MARSHAL → BUILD |
| rejected (round < 2) | Create SOLVE-fix task → AUDIT re-review |
| rejected (round >= 2) | Force proceed with warnings, report to user |
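As a sketch, the gate decision in the table above can be expressed as a small function (the name and return shape are assumptions):

```javascript
// Decide the next pipeline step from an AUDIT verdict and the current review round.
function reviewGate(verdict, round) {
  if (verdict === "approved") return { next: "MARSHAL", warn: false }
  if (round < 2) return { next: "SOLVE-fix", warn: false }
  return { next: "MARSHAL", warn: true } // force proceed with warnings
}
```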
### Cadence Control

**Beat model**: Event-driven, each beat = coordinator wake → process → spawn → STOP. Issue beat: explore → plan → review → integrate → implement.
```
Beat Cycle (single beat)
═══════════════════════════════════════════════════════════
Event                 Coordinator                Workers
───────────────────────────────────────────────────────────
callback/resume ──→  ┌─ handleCallback ──┐
                     │ mark completed    │
                     │ check pipeline    │
                     ├─ handleSpawnNext ─┤
                     │ find ready tasks  │
                     │ spawn workers ────┼──→ [Worker A] Phase 1-5
                     │  (parallel OK) ───┼──→ [Worker B] Phase 1-5
                     └─ STOP (idle) ─────┘                │
                                                          │
callback ←────────────────────────────────────────────────┘
(next beat)           SendMessage + TaskUpdate(completed)
═══════════════════════════════════════════════════════════
```
**Pipeline beat views**:

```
Quick Mode (4 beats, strictly serial)
──────────────────────────────────────────────────────────
Beat     1        2        3           4
         │        │        │           │
      EXPLORE → SOLVE → MARSHAL ──→ BUILD
         ▲                             ▲
      pipeline                      pipeline
       start                          done

EXPLORE=explorer  SOLVE=planner  MARSHAL=integrator  BUILD=implementer

Full Mode (5-7 beats, with review gate)
──────────────────────────────────────────────────────────
Beat     1        2        3          4         5
         │        │        │          │         │
      EXPLORE → SOLVE → AUDIT ─┬─(ok)→ MARSHAL → BUILD
                               │
                          (rejected?)
                       SOLVE-fix → AUDIT-2 → MARSHAL → BUILD

Batch Mode (parallel windows)
──────────────────────────────────────────────────────────
Beat     1            2          3        4        5
    ┌────┴────┐  ┌────┴────┐     │        │   ┌────┴────┐
    EXP-1..N → SOLVE-1..N → AUDIT → MARSHAL → BUILD-1..M
         ▲                                         ▲
      parallel                               DAG parallel
    window (≤5)                              window (≤3)
```
**Checkpoints**:

| Trigger | Location | Behavior |
|---------|----------|----------|
| Review gate | After AUDIT-* | If approved → MARSHAL; if rejected → SOLVE-fix (max 2 rounds) |
| Review loop limit | AUDIT round >= 2 | Force proceed with warnings |
| Pipeline stall | No ready + no running | Check missing tasks, report to user |
**Stall Detection** (executed by coordinator `handleCheck`):

| Check | Condition | Resolution |
|-------|-----------|------------|
| Worker no response | in_progress task with no callback | Report waiting task list, suggest user `resume` |
| Pipeline deadlock | No ready + no running + has pending | Check blockedBy dependency chain, report blocking point |
| Review loop exceeded | AUDIT rejection > 2 rounds | Terminate loop, force proceed with current solution |
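A minimal sketch of the deadlock check from the table above (the task shape is an assumption; "ready" means pending with no blockers):

```javascript
// Classify the pipeline state from task statuses.
function detectStall(tasks) {
  const running = tasks.filter(t => t.status === "in_progress")
  const pending = tasks.filter(t => t.status === "pending")
  const ready = pending.filter(t => t.blockedBy.length === 0)
  if (ready.length === 0 && running.length === 0 && pending.length > 0) return "deadlock"
  if (ready.length === 0 && running.length === 0) return "done"
  return "active"
}
```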
### Task Metadata Registry

| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|--------------|-------------|
| EXPLORE-001 | explorer | explore | (none) | Context analysis and impact assessment |
| EXPLORE-002..N | explorer | explore | (none) | Parallel exploration (Batch mode only) |
| SOLVE-001 | planner | plan | EXPLORE-001 (or all EXPLORE-*) | Solution design and task decomposition |
| SOLVE-002..N | planner | plan | EXPLORE-* | Parallel solution design (Batch mode only) |
| AUDIT-001 | reviewer | review | SOLVE-001 (or all SOLVE-*) | Technical feasibility and risk review |
| MARSHAL-001 | integrator | integrate | AUDIT-001 (or last SOLVE-*) | Conflict detection and queue orchestration |
| BUILD-001 | implementer | implement | MARSHAL-001 | Code implementation and result submission |
| BUILD-002..M | implementer | implement | MARSHAL-001 | Parallel implementation (Batch DAG parallel) |
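The registry above can be sketched as a dependency map used to build `blockedBy` chains. The structure and helper name are assumptions; the quick-mode chain skips AUDIT per the mode table:

```javascript
// Quick-mode dependency chain derived from the registry (AUDIT skipped in quick mode).
const QUICK_CHAIN = {
  "EXPLORE-001": [],
  "SOLVE-001": ["EXPLORE-001"],
  "MARSHAL-001": ["SOLVE-001"],
  "BUILD-001": ["MARSHAL-001"]
}

// Topologically order tasks so each task appears after all of its dependencies.
function dispatchOrder(chain) {
  const order = []
  const visit = id => {
    if (order.includes(id)) return
    for (const dep of chain[id]) visit(dep)
    order.push(id)
  }
  Object.keys(chain).forEach(visit)
  return order
}
```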
---
## Coordinator Spawn Template

When the coordinator spawns workers, use background mode (Spawn-and-Stop). First create the team: `TeamCreate({ team_name: "issue" })`.

**Standard spawn** (single agent per role): For Quick/Full mode, spawn one agent per role. Explorer, planner, reviewer, and integrator each get a single agent.

**Parallel spawn** (Batch mode): For Batch mode with multiple issues, spawn N explorer agents in parallel (max 5) and M implementer agents in parallel (max 3). Each parallel agent only processes tasks whose owner matches its agent name.

**Spawn template**:

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker",
  team_name: "issue",
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "issue" <ROLE>.

## Primary Directive
All your work must be executed through Skill to load the role definition:
Skill(skill="team-issue", args="--role=<role>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other roles' work
- All output prefixed with the [<role>] identifier
- Only communicate with coordinator
- Do not use TaskCreate for other roles
- Call mcp__ccw-tools__team_msg before every SendMessage

## Workflow
1. Call Skill -> load role definition and execution logic
2. Follow the role.md 5-Phase flow
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
})
```

### Parallel Spawn (Batch Mode)

> When Batch mode has parallel tasks assigned to the same role, spawn N distinct agents with unique names. A single agent can only process tasks serially.

**Explorer parallel spawn** (Batch mode, N issues):

| Condition | Action |
|-----------|--------|
| Batch mode with N issues (N > 1) | Spawn min(N, 5) agents: `explorer-1`, `explorer-2`, ... with `run_in_background: true` |
| Quick/Full mode (single explorer) | Standard spawn: single `explorer` agent |

**Implementer parallel spawn** (Batch mode, M BUILD tasks):

| Condition | Action |
|-----------|--------|
| Batch mode with M BUILD tasks (M > 2) | Spawn min(M, 3) agents: `implementer-1`, `implementer-2`, ... with `run_in_background: true` |
| Quick/Full mode (single implementer) | Standard spawn: single `implementer` agent |

**Parallel spawn template**:

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role>-<N> worker",
  team_name: "issue",
  name: "<role>-<N>",
  run_in_background: true,
  prompt: `You are team "issue" <ROLE> (<role>-<N>).
Your agent name is "<role>-<N>"; use this name for task discovery owner matching.

## Primary Directive
Skill(skill="team-issue", args="--role=<role> --agent-name=<role>-<N>")

## Role Guidelines
- Only process tasks where owner === "<role>-<N>" with the <PREFIX>-* prefix
- All output prefixed with the [<role>] identifier

## Workflow
1. TaskList -> find tasks where owner === "<role>-<N>" with the <PREFIX>-* prefix
2. Skill -> execute role definition
3. team_msg + SendMessage results to coordinator
4. TaskUpdate completed -> check next task`
})
```

**Dispatch must match agent names**: When dispatching parallel tasks, the coordinator sets each task's owner to the corresponding instance name (`explorer-1`, `explorer-2`, etc. or `implementer-1`, `implementer-2`, etc.). In role.md, task discovery uses `--agent-name` for owner matching.

---

## Session Directory

```
.workflow/.team-plan/issue/
├── team-session.json       # Session state
├── shared-memory.json      # Cross-role state
├── wisdom/                 # Cross-task knowledge
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── explorations/           # Explorer output
├── solutions/              # Planner output
├── audits/                 # Reviewer output
├── queue/                  # Integrator output
└── builds/                 # Implementer output
```
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode → auto route to coordinator |
| Role file not found | Error with expected path (roles/<name>.md) |
| Task prefix conflict | Log warning, proceed |
| Review rejection exceeds 2 rounds | Force proceed with warnings |
| No issues found for given IDs | Coordinator reports error to user |

---
# Coordinator Role
|
# Role: coordinator
|
||||||
|
|
||||||
Orchestrate the issue resolution pipeline: requirement clarification -> mode selection -> team creation -> task chain -> dispatch -> monitoring -> reporting.
|
Team coordinator. Orchestrates the issue resolution pipeline: requirement clarification → mode selection → team creation → task chain → dispatch → monitoring → reporting.
|
||||||
|
|
||||||
## Identity
|
## Role Identity
|
||||||
|
|
||||||
- **Name**: `coordinator` | **Tag**: `[coordinator]`
|
- **Name**: `coordinator`
|
||||||
- **Task Prefix**: N/A (coordinator creates tasks, does not receive them)
|
- **Task Prefix**: N/A (coordinator creates tasks, doesn't receive them)
|
||||||
- **Responsibility**: Orchestration
|
- **Responsibility**: Orchestration
|
||||||
|
- **Communication**: SendMessage to all teammates
|
||||||
|
- **Output Tag**: `[coordinator]`
|
||||||
|
|
||||||
## Boundaries
|
## Role Boundaries
|
||||||
|
|
||||||
### MUST
|
### MUST
|
||||||
|
|
||||||
- All output (SendMessage, team_msg, logs) must carry `[coordinator]` identifier
|
- 所有输出(SendMessage、team_msg、日志)必须带 `[coordinator]` 标识
|
||||||
- Responsible only for: requirement clarification, mode selection, task creation/dispatch, progress monitoring, result reporting
|
- 仅负责需求澄清、模式选择、任务创建/分发、进度监控、结果汇报
|
||||||
- Create tasks via TaskCreate and assign to worker roles
|
- 通过 TaskCreate 创建任务并分配给 worker 角色
|
||||||
- Monitor worker progress via message bus and route messages
|
- 通过消息总线监控 worker 进度并路由消息
|
||||||
- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
|
|
||||||
- Maintain session state persistence
|
|
||||||
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)
|
|
||||||
|
|
||||||
### MUST NOT
|
### MUST NOT
|
||||||
|
|
||||||
- Execute any business tasks directly (code writing, solution design, review, etc.)
|
- ❌ **直接执行任何业务任务**(代码编写、方案设计、审查等)
|
||||||
- Call implementation subagents directly (issue-plan-agent, issue-queue-agent, code-developer, etc.)
|
- ❌ 直接调用 issue-plan-agent、issue-queue-agent、code-developer 等 agent
|
||||||
- Modify source code or generated artifacts directly
|
- ❌ 直接修改源代码或生成产物文件
|
||||||
- Bypass worker roles to complete delegated work
|
- ❌ 绕过 worker 角色自行完成应委派的工作
|
||||||
- Omit `[coordinator]` identifier in any output
|
- ❌ 在输出中省略 `[coordinator]` 标识
|
||||||
- Skip dependency validation when creating task chains
|
|
||||||
|
|
||||||
> **Core principle**: coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.
|
> **核心原则**: coordinator 是指挥者,不是执行者。所有实际工作必须通过 TaskCreate 委派给 worker 角色。
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Toolbox
|
|
||||||
|
|
||||||
### Available Commands
|
|
||||||
|
|
||||||
> No command files -- all phases execute inline.
|
|
||||||
|
|
||||||
### Tool Capabilities
|
|
||||||
|
|
||||||
| Tool | Type | Used By | Purpose |
|
|
||||||
|------|------|---------|---------|
|
|
||||||
| `TeamCreate` | Team | coordinator | Initialize team |
|
|
||||||
| `TeamDelete` | Team | coordinator | Dissolve team |
|
|
||||||
| `SendMessage` | Team | coordinator | Communicate with workers/user |
|
|
||||||
| `TaskCreate` | Task | coordinator | Create and dispatch tasks |
|
|
||||||
| `TaskList` | Task | coordinator | Monitor task status |
|
|
||||||
| `TaskGet` | Task | coordinator | Get task details |
|
|
||||||
| `TaskUpdate` | Task | coordinator | Update task status |
|
|
||||||
| `AskUserQuestion` | UI | coordinator | Clarify requirements |
|
|
||||||
| `Read` | IO | coordinator | Read session files |
|
|
||||||
| `Write` | IO | coordinator | Write session files |
|
|
||||||
| `Bash` | System | coordinator | Execute ccw commands |
|
|
||||||
| `mcp__ccw-tools__team_msg` | Team | coordinator | Log messages to message bus |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Message Types
|
## Message Types
|
||||||
|
|
||||||
| Type | Direction | Trigger | Description |
|
| Type | Direction | Trigger | Description |
|
||||||
|------|-----------|---------|-------------|
|
|------|-----------|---------|-------------|
|
||||||
| `task_assigned` | coordinator -> worker | Task dispatched | Notify worker of new task |
|
| `task_assigned` | coordinator → worker | Task dispatched | 通知 worker 有新任务 |
|
||||||
| `pipeline_update` | coordinator -> user | Progress milestone | Pipeline progress update |
|
| `pipeline_update` | coordinator → user | Progress milestone | 流水线进度更新 |
|
||||||
| `escalation` | coordinator -> user | Unresolvable issue | Escalate to user decision |
|
| `escalation` | coordinator → user | Unresolvable issue | 升级到用户决策 |
|
||||||
| `shutdown` | coordinator -> all | Team dissolved | Team shutdown notification |
|
| `shutdown` | coordinator → all | Team dissolved | 团队关闭 |
|
||||||
|
|
||||||
## Message Bus
|
## Execution
|
||||||
|
|
||||||
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
|
### Phase 0: Session Resume
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Check for existing team session
|
||||||
|
const existingMsgs = mcp__ccw-tools__team_msg({ operation: "list", team: "issue" })
|
||||||
|
if (existingMsgs && existingMsgs.length > 0) {
|
||||||
|
// Resume: check pending tasks and continue coordination loop
|
||||||
|
// Skip Phase 1-3, go directly to Phase 4
|
||||||
|
}
|
||||||
```
```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: **<session-id>**, // MUST be session ID (e.g., ISS-xxx-date), NOT team name. Extract from Session: field.
  from: "coordinator",
  to: "<recipient>",
  type: <message-type>,
  summary: "[coordinator] <summary>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from coordinator --to <recipient> --type <message-type> --summary \"[coordinator] ...\" --json")
```

---
## Entry Router

When coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| New session | None of the above | -> Phase 0 (Session Resume Check) |
For callback/check/resume: execute the appropriate handler, then STOP.

---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:

1. Check for existing team session via team_msg list
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:

1. Audit TaskList -> get real status of all tasks
2. Reconcile: session state <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine remaining pipeline from reconciled state
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update session file with reconciled state
9. Kick first executable task's worker -> Phase 4

---
## Phase 1: Requirement Clarification

**Objective**: Parse user input and gather execution parameters.

**Workflow**:

1. **Parse arguments** for issue IDs and mode:

| Pattern | Extraction |
|---------|------------|
| `GH-\d+` | GitHub issue ID |
| `ISS-\d{8}-\d{6}` | Local issue ID |
| `--mode=<mode>` | Explicit mode |
| `--all-pending` | Load all pending issues |

2. **Load pending issues** if `--all-pending`:

```
Bash("ccw issue list --status registered,pending --json")
```

3. **Ask for missing parameters** via AskUserQuestion if no issue IDs found

4. **Mode auto-detection** (when user does not specify `--mode`):

| Condition | Mode |
|-----------|------|
| Issue count <= 2 AND no high-priority (priority < 4) | `quick` |
| Issue count <= 2 AND has high-priority (priority >= 4) | `full` |
| Issue count >= 5 | `batch` |
| 3-4 issues | `full` |

5. **Execution method selection** (for BUILD phase):

| Option | Description |
|--------|-------------|
| `Agent` | code-developer agent (sync, for simple tasks) |
| `Codex` | Codex CLI (background, for complex tasks) |
| `Gemini` | Gemini CLI (background, for analysis tasks) |
| `Auto` | Auto-select based on solution task_count (default) |

6. **Code review selection**:

| Option | Description |
|--------|-------------|
| `Skip` | No review |
| `Gemini Review` | Gemini CLI review |
| `Codex Review` | Git-aware review (--uncommitted) |

**Success**: All parameters captured, mode finalized.
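The mode auto-detection rules can be sketched as a small helper. This is an illustrative sketch, not the canonical implementation: it assumes the issues have already been loaded as objects with a numeric `priority` field, rather than fetching each one via `ccw issue status`.

```javascript
// Sketch of mode auto-detection. `issues` = pre-loaded issue objects
// (each with a numeric `priority`); `explicitMode` = value of --mode, if any.
function detectMode(issues, explicitMode) {
  if (explicitMode) return explicitMode            // user override always wins
  if (issues.length <= 2) {
    const hasHighPriority = issues.some(i => i.priority >= 4)
    return hasHighPriority ? 'full' : 'quick'      // small batch: quick unless high-priority
  }
  return issues.length >= 5 ? 'batch' : 'full'     // 3-4 issues -> full, 5+ -> batch
}
```

The explicit `--mode` check comes first so a user choice is never silently overridden by the heuristics.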
---

## Phase 2: Create Team + Initialize Session

**Objective**: Initialize team, session file, and wisdom directory.

**Workflow**:

1. Generate session ID
2. Create session folder
3. Call TeamCreate with team name "issue"
4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Write session file with: session_id, mode, scope, status="active"

**Spawn template**: Workers are NOT pre-spawned here. Workers are spawned on-demand in Phase 4. See SKILL.md Coordinator Spawn Template for worker prompt templates.

**Worker roles available**:

- quick mode: explorer, planner, integrator, implementer
- full mode: explorer, planner, reviewer, integrator, implementer
- batch mode: parallel explorers (max 5), parallel implementers (max 3)

**Success**: Team created, session file written, wisdom initialized.
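Steps 1 and 5 can be sketched as a pure helper. The session-ID shape (`ISS-YYYYMMDD-HHMMSS`) is taken from the issue-ID pattern used elsewhere in this document, and the session-file fields follow the list above; the exact canonical schema may differ.

```javascript
// Illustrative session initialization: build the session ID and the
// in-memory session record that would be written to the session file.
function initSession(mode, issueIds, now = new Date()) {
  // "2026-02-15T12:00:00.000Z" -> "20260215120000"
  const stamp = now.toISOString().replace(/[-:T]/g, '').slice(0, 14)
  const sessionId = `ISS-${stamp.slice(0, 8)}-${stamp.slice(8)}`
  return {
    session_id: sessionId,
    mode,
    scope: issueIds,
    status: 'active'
  }
}
```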
---

## Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on mode with proper dependencies.

### Quick Mode (4 beats, strictly serial)

Create task chain for each issue: EXPLORE -> SOLVE -> MARSHAL -> BUILD

| Task ID | Role | Dependencies | Description |
|---------|------|--------------|-------------|
| EXPLORE-001 | explorer | (none) | Context analysis |
| SOLVE-001 | planner | EXPLORE-001 | Solution design |
| MARSHAL-001 | integrator | SOLVE-001 | Queue formation |
| BUILD-001 | implementer | MARSHAL-001 | Code implementation |

### Full Mode (5-7 beats, with review gate)

Add AUDIT between SOLVE and MARSHAL:

| Task ID | Role | Dependencies | Description |
|---------|------|--------------|-------------|
| EXPLORE-001 | explorer | (none) | Context analysis |
| SOLVE-001 | planner | EXPLORE-001 | Solution design |
| AUDIT-001 | reviewer | SOLVE-001 | Solution review |
| MARSHAL-001 | integrator | AUDIT-001 | Queue formation |
| BUILD-001 | implementer | MARSHAL-001 | Code implementation |
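The quick/full chains above can be sketched as a builder. This is illustrative only: in the real flow each entry would be passed to TaskCreate, whose returned task ID is what the next task's addBlockedBy actually references.

```javascript
// Build the per-issue stage chain as plain objects with dependencies.
function buildTaskChain(mode, issueId) {
  const stages = mode === 'quick'
    ? ['EXPLORE', 'SOLVE', 'MARSHAL', 'BUILD']
    : ['EXPLORE', 'SOLVE', 'AUDIT', 'MARSHAL', 'BUILD']   // full mode adds AUDIT
  const owners = {
    EXPLORE: 'explorer', SOLVE: 'planner', AUDIT: 'reviewer',
    MARSHAL: 'integrator', BUILD: 'implementer'
  }
  return stages.map((stage, i) => ({
    subject: `${stage}-001: ${stage} for ${issueId}`,
    owner: owners[stage],
    blockedBy: i === 0 ? [] : [`${stages[i - 1]}-001`]     // each stage waits on the previous
  }))
}
```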
### Batch Mode (parallel windows)

Create parallel task batches:

| Batch | Tasks | Parallel Limit |
|-------|-------|----------------|
| EXPLORE-001..N | explorer | max 5 parallel |
| SOLVE-001..N | planner | sequential |
| AUDIT-001 | reviewer | (all SOLVE complete) |
| MARSHAL-001 | integrator | (AUDIT complete) |
| BUILD-001..M | implementer | max 3 parallel |

**Task description must include**: execution_method, code_review settings from Phase 1.
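The batch-mode fan-out can be sketched as two small helpers: chunking issues into windows, and assigning each EXPLORE task round-robin across explorer-1..explorer-N. Both are illustrative sketches of the limits in the table above.

```javascript
// Split an array into windows of at most `size` items.
function chunkArray(arr, size) {
  const chunks = []
  for (let i = 0; i < arr.length; i += size) chunks.push(arr.slice(i, i + size))
  return chunks
}

// Distribute EXPLORE tasks round-robin across up to `maxParallel` explorers.
function assignExplorers(issueIds, maxParallel = 5) {
  const n = Math.min(issueIds.length, maxParallel)
  return issueIds.map((issueId, i) => ({
    issueId,
    owner: `explorer-${(i % n) + 1}`
  }))
}
```

Round-robin keeps every parallel explorer instance loaded evenly instead of queueing all work on explorer-1.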
---

## Phase 4: Spawn-and-Stop

**Objective**: Spawn first batch of ready workers in background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern.

- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS

**Workflow**:

1. Find tasks with: status=pending, blockedBy all resolved, owner assigned
2. For each ready task -> spawn worker (see SKILL.md Spawn Template)
3. Output status summary
4. STOP

**Pipeline advancement** driven by three wake sources:

| Wake Source | Handler | Action |
|-------------|---------|--------|
| Worker callback | handleCallback | Auto-advance next step |
| User "check" | handleCheck | Status output only |
| User "resume" | handleResume | Advance pipeline |

### Message Handlers

| Received Message | Action |
|------------------|--------|
| `context_ready` from explorer | Unblock SOLVE-* tasks for this issue |
| `solution_ready` from planner | Quick: create MARSHAL-*; Full: create AUDIT-* |
| `multi_solution` from planner | AskUserQuestion for solution selection, then ccw issue bind |
| `queue_ready` from integrator | Create BUILD-* tasks based on DAG parallel batches |
| `conflict_found` from integrator | AskUserQuestion for conflict resolution |
| `impl_complete` from implementer | Refresh DAG, create next BUILD-* batch or complete |
| `impl_failed` from implementer | Escalation: retry / skip / abort |
| `error` from any worker | Assess severity -> retry or escalate to user |
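The handler table can be sketched as a simple dispatch map. The action names here are illustrative labels, not real handler functions; the point is that every message type resolves deterministically, and anything unrecognized escalates.

```javascript
// Map an incoming worker message to the coordinator's next action.
function dispatch(msg) {
  const table = {
    context_ready:  'unblock-solve',
    solution_ready: 'create-next-stage',   // MARSHAL (quick) or AUDIT (full)
    multi_solution: 'ask-user-select',
    queue_ready:    'create-build-batch',
    conflict_found: 'ask-user-resolve',
    impl_complete:  'refresh-dag',
    impl_failed:    'escalate',
    error:          'assess-severity'
  }
  return table[msg.type] || 'escalate'     // unknown message types escalate too
}
```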
### Review-Fix Cycle (max 2 rounds)

| Round | Rejected Action |
|-------|-----------------|
| Round 1 | Create SOLVE-fix-1 task with reviewer feedback |
| Round 2 | Create SOLVE-fix-2 task with reviewer feedback |
| Round 3+ | Escalate to user: Force approve / Manual fix / Skip issue |
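The round counter behind this table can be sketched as follows; the returned objects stand in for the TaskCreate / AskUserQuestion calls the coordinator would actually make.

```javascript
const MAX_AUDIT_ROUNDS = 2

// Decide what a `rejected` review message triggers, given rounds used so far.
function onRejected(auditRound) {
  if (auditRound < MAX_AUDIT_ROUNDS) {
    const round = auditRound + 1
    return { action: 'create-task', subject: `SOLVE-fix-${round}: Revise solution based on review`, round }
  }
  // Two fix rounds exhausted: hand the decision to the user.
  return { action: 'escalate', options: ['Force approve', 'Manual fix', 'Skip issue'], round: auditRound }
}
```

Capping at two rounds keeps a disagreeing planner/reviewer pair from looping forever without a human decision.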
---

## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:

1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Update session status -> "completed"
4. Log via team_msg
5. Offer next steps to user:

| Option | Action |
|--------|--------|
| New batch | Return to Phase 1 with new issue IDs |
| View results | Show implementation results and git changes |
| Close team | TeamDelete() and cleanup |
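The completion summary in step 1 can be sketched as a pure reducer over build results; the field names follow the `mode` / `issues_processed` / `builds_completed` / `builds_failed` shape used by this workflow, though the real session schema may carry more fields.

```javascript
// Assemble the end-of-pipeline summary reported to the user.
function buildSummary(mode, issueIds, builds) {
  const completed = builds.filter(b => b.status === 'done').length
  return {
    mode,
    issues_processed: issueIds.length,
    builds_completed: completed,
    builds_failed: builds.length - completed
  }
}
```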
---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No issue IDs provided | AskUserQuestion for IDs |
| Issue not found | Skip with warning, continue others |
| Worker unresponsive | Send follow-up, 2x -> respawn |
| Review rejected 2+ times | Escalate to user |
| Build failed | Retry once, then escalate |
| All workers error | Shutdown team, report to user |
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
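The "retry once, then escalate" policy for failed builds can be sketched as a wrapper. This is a hypothetical helper, not part of the workflow's API; `run` stands for any build step that throws on failure.

```javascript
// Run a build step, retrying exactly once before flagging for escalation.
function retryOnce(run) {
  try {
    return { ok: true, value: run() }
  } catch (first) {
    try {
      return { ok: true, value: run(), retried: true }   // second attempt succeeded
    } catch (second) {
      return { ok: false, escalate: true, error: String(second) }
    }
  }
}
```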
# Explorer Role

Issue context analysis, codebase exploration, dependency identification, impact assessment. Produces shared context report for planner and reviewer.

## Identity

- **Name**: `explorer` | **Tag**: `[explorer]`
- **Task Prefix**: `EXPLORE-*`
- **Responsibility**: Orchestration (context gathering)

## Boundaries

### MUST

- Only process `EXPLORE-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[explorer]` identifier
- Only communicate with coordinator via SendMessage
- Produce context-report for subsequent roles (planner, reviewer)
- Work strictly within context gathering responsibility scope

### MUST NOT

- Design solutions (planner responsibility)
- Review solution quality (reviewer responsibility)
- Modify any source code
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit `[explorer]` identifier in any output

---
## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Task` | Subagent | explorer | Spawn cli-explore-agent for deep exploration |
| `Read` | IO | explorer | Read context files and issue data |
| `Write` | IO | explorer | Write context report |
| `Bash` | System | explorer | Execute ccw commands |
| `mcp__ace-tool__search_context` | Search | explorer | Semantic code search |
| `mcp__ccw-tools__team_msg` | Team | explorer | Log messages to message bus |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `context_ready` | explorer -> coordinator | Context analysis complete | Context report ready |
| `impact_assessed` | explorer -> coordinator | Impact scope determined | Impact assessment complete |
| `error` | explorer -> coordinator | Blocking error | Cannot complete exploration |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: **<session-id>**, // MUST be session ID (e.g., ISS-xxx-date), NOT team name. Extract from Session: field.
  from: "explorer",
  to: "coordinator",
  type: <message-type>,
  summary: "[explorer] <task-prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from explorer --to coordinator --type <message-type> --summary \"[explorer] ...\" --ref <artifact-path> --json")
```
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `EXPLORE-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `explorer` for single-instance roles.
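The discovery filter above can be sketched as a pure function over the TaskList result; in the real flow the returned task would then go through TaskGet and TaskUpdate.

```javascript
// Pick the first claimable EXPLORE task for this agent instance.
// `agentName` defaults to 'explorer' for single-instance roles.
function discoverTask(tasks, agentName = 'explorer') {
  const mine = tasks.filter(t =>
    t.subject.startsWith('EXPLORE-') &&   // only this role's prefix
    t.owner === agentName &&              // owner match (e.g. 'explorer-2')
    t.status === 'pending' &&
    t.blockedBy.length === 0              // all dependencies resolved
  )
  return mine.length > 0 ? mine[0] : null // null means stay idle
}
```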
### Phase 2: Issue Loading & Context Setup

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Issue details | `ccw issue status <id> --json` | Yes |
| Project root | Working directory | Yes |

**Loading steps**:

1. Extract issue ID from task description via regex: `(?:GH-\d+|ISS-\d{8}-\d{6})`
2. If no issue ID found -> SendMessage error to coordinator, STOP
3. Load issue details:

```
Bash("ccw issue status <issueId> --json")
```

4. Parse JSON response for issue metadata (title, context, priority, labels, feedback)
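Step 1's extraction is just the regex from the table, wrapped here for reuse:

```javascript
// Extract the first GH-xxx or ISS-YYYYMMDD-HHMMSS issue ID from free text.
function extractIssueId(text) {
  const m = text.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/)
  return m ? m[0] : null   // null triggers the step-2 error path
}
```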
### Phase 3: Codebase Exploration & Impact Analysis

**Complexity assessment determines exploration depth**:

| Signal | Weight | Keywords |
|--------|--------|----------|
| Structural change | +2 | refactor, architect, restructure, module, system |
| Cross-cutting | +2 | multiple, across, cross |
| Integration | +1 | integrate, api, database |
| High priority | +1 | priority >= 4 |

| Score | Complexity | Strategy |
|-------|------------|----------|
| >= 4 | High | Deep exploration via cli-explore-agent |
| 2-3 | Medium | Hybrid: ACE search + selective agent |
| 0-1 | Low | Direct ACE search only |
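The signal weights and score thresholds above translate directly into a small scoring function. This is a sketch mirroring the tables, not a prescribed implementation:

```javascript
// Score an issue's complexity from keyword signals and priority (weights from the table above).
function assessComplexity(issue) {
  let score = 0
  if (/refactor|architect|restructure|module|system/i.test(issue.context)) score += 2  // structural change
  if (/multiple|across|cross/i.test(issue.context)) score += 2                         // cross-cutting
  if (/integrate|api|database/i.test(issue.context)) score += 1                        // integration
  if (issue.priority >= 4) score += 1                                                  // high priority
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
```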
**Exploration execution**:

| Complexity | Execution |
|------------|-----------|
| Low | Direct ACE search: `mcp__ace-tool__search_context(project_root_path, query)` |
| Medium/High | Spawn cli-explore-agent: `Task({ subagent_type: "cli-explore-agent", run_in_background: false })` |

**cli-explore-agent prompt template**:

```
## Issue Context
ID: <issueId>
Title: <issue.title>
Description: <issue.context>
Priority: <issue.priority>

## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute ACE searches based on issue keywords
3. Run: ccw spec load --category exploration

## Exploration Focus
- Identify files directly related to this issue
...
- Check for previous related changes (git log)

## Output
Write findings to: .workflow/.team-plan/issue/context-<issueId>.json

Schema: {
  issue_id, relevant_files[], dependencies[], impact_scope,
  existing_patterns[], related_changes[], key_findings[],
  complexity_assessment, _metadata
}
```

### Phase 4: Context Report Generation

**Report assembly**:

1. Read exploration results from `.workflow/.team-plan/issue/context-<issueId>.json`
2. If file not found, build minimal report from ACE results
3. Enrich with issue metadata: id, title, priority, status, labels, feedback
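A minimal sketch of this assembly, assuming the exploration file has already been read and parsed (or is null when missing); `buildContextReport` is a hypothetical helper name:

```javascript
// Hypothetical sketch: merge exploration output (or a minimal fallback) with issue metadata.
function buildContextReport(explorationResult, issue, complexity) {
  const report = explorationResult || {
    issue_id: issue.id,
    relevant_files: [],
    key_findings: [],
    complexity_assessment: complexity
  }
  report.issue = {
    id: issue.id,
    title: issue.title,
    priority: issue.priority,
    status: issue.status,
    labels: issue.labels,
    feedback: issue.feedback  // previous failure history, if any
  }
  return report
}
```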
**Report schema**:

```
{
  issue_id: string,
  issue: { id, title, priority, status, labels, feedback },
  relevant_files: [{ path, relevance }] | string[],
  dependencies: string[],
  impact_scope: "low" | "medium" | "high",
  existing_patterns: string[],
  related_changes: string[],
  key_findings: string[],
  complexity_assessment: "Low" | "Medium" | "High"
}
```

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[explorer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No EXPLORE-* tasks available | Idle, wait for coordinator assignment |
| Issue ID not found in task | Notify coordinator with error |
| Issue ID not found in ccw | Notify coordinator with error |
| ACE search returns no results | Fallback to Glob/Grep, report limited context |
| cli-explore-agent failure | Retry once with simplified prompt, then report partial results |
| Context file write failure | Report via SendMessage with inline context |
| Context/Plan file not found | Notify coordinator, request location |
# Implementer Role

Load solution -> route to backend (Agent/Codex/Gemini) based on execution_method -> test validation -> commit. Supports multiple CLI execution backends. The execution method is determined in coordinator Phase 1.

## Identity

- **Name**: `implementer` | **Tag**: `[implementer]`
- **Task Prefix**: `BUILD-*`
- **Responsibility**: Code implementation (solution -> route to backend -> test -> commit)

## Boundaries

### MUST

- Only process `BUILD-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[implementer]` identifier
- Only communicate with the coordinator via SendMessage
- Select the execution backend based on the `execution_method` field in the BUILD-* task
- Notify the coordinator after each solution completes
- Continuously poll for new BUILD-* tasks

### MUST NOT

- Modify solutions (planner responsibility)
- Review implementation results (reviewer responsibility)
- Modify the execution queue (integrator responsibility)
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit the `[implementer]` identifier in any output

---

## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Execution Backends

| Backend | Tool | Invocation | Mode |
|---------|------|------------|------|
| `agent` | code-developer subagent | `Task({ subagent_type: "code-developer" })` | Sync |
| `codex` | Codex CLI | `ccw cli --tool codex --mode write` | Background |
| `gemini` | Gemini CLI | `ccw cli --tool gemini --mode write` | Background |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Task` | Subagent | implementer | Spawn code-developer for agent execution |
| `Read` | IO | implementer | Read solution plan and queue files |
| `Write` | IO | implementer | Write implementation artifacts |
| `Edit` | IO | implementer | Edit source code |
| `Bash` | System | implementer | Run tests, git operations, CLI calls |
| `mcp__ccw-tools__team_msg` | Team | implementer | Log messages to the message bus |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `impl_complete` | implementer -> coordinator | Implementation and tests pass | Implementation complete |
| `impl_failed` | implementer -> coordinator | Implementation failed after retries | Implementation failed |
| `error` | implementer -> coordinator | Blocking error | Execution error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,  // MUST be the session ID (e.g., ISS-xxx-date), NOT the team name. Extract from the Session: field.
  from: "implementer",
  to: "coordinator",
  type: <message-type>,
  summary: "[implementer] <task-prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from implementer --to coordinator --type <message-type> --summary \"[implementer] ...\" --ref <artifact-path> --json")
```

---

## Execution Method Resolution

Parse the execution method from the BUILD-* task description:

| Pattern | Extraction |
|---------|------------|
| `execution_method:\s*Agent` | Use agent backend |
| `execution_method:\s*Codex` | Use codex backend |
| `execution_method:\s*Gemini` | Use gemini backend |
| `execution_method:\s*Auto` | Auto-select based on task count |

**Auto-selection logic**:

| Solution Task Count | Backend |
|---------------------|---------|
| <= 3 | agent |
| > 3 | codex |

**Code review resolution**:

| Pattern | Setting |
|---------|---------|
| `code_review:\s*Skip` | No review |
| `code_review:\s*Gemini Review` | Gemini CLI review |
| `code_review:\s*Codex Review` | Git-aware review (--uncommitted) |
| No match | Skip (default) |
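A sketch of the resolution logic implied by these tables. `resolveExecutor` and `resolveCodeReview` follow the names used elsewhere in this skill; note the review pattern here matches the full phrase (e.g. `Gemini Review`), since a bare `\S+` capture would truncate it:

```javascript
// Resolve the execution backend from a BUILD-* task description.
function resolveExecutor(taskDesc, solutionTaskCount) {
  const methodMatch = taskDesc.match(/execution_method:\s*(Agent|Codex|Gemini|Auto)/i)
  const method = methodMatch ? methodMatch[1] : 'Auto'
  if (method.toLowerCase() === 'auto') {
    // Auto: small solutions go to the sync agent, larger ones to Codex.
    return solutionTaskCount <= 3 ? 'agent' : 'codex'
  }
  return method.toLowerCase()  // 'agent' | 'codex' | 'gemini'
}

// Resolve the optional code-review setting; defaults to 'Skip' when absent.
function resolveCodeReview(taskDesc) {
  const reviewMatch = taskDesc.match(/code_review:\s*(Skip|Gemini Review|Codex Review)/i)
  return reviewMatch ? reviewMatch[1] : 'Skip'
}
```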
---

## Execution Prompt Builder

Unified prompt template shared by all backends:

```
## Issue
ID: <issueId>
Title: <solution.bound.title>

## Solution Plan
<solution.bound JSON>

## Codebase Context (from explorer)
Relevant files: <explorerContext.relevant_files>
Existing patterns: <explorerContext.existing_patterns>
Dependencies: <explorerContext.dependencies>

## Implementation Requirements
1. ...
2. Write clean, minimal code following existing patterns
3. Run tests after each significant change
4. Ensure all existing tests still pass
5. Do NOT over-engineer -- implement exactly what the solution specifies

## Quality Checklist
- [ ] All solution tasks implemented
- [ ] ...
- [ ] No security vulnerabilities introduced

## Project Guidelines
@.workflow/specs/*.md
```
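The template above can be assembled with a small builder. This sketch assumes `solution.bound` is the parsed solution JSON and that `explorerContext` may be null when no exploration ran:

```javascript
// Sketch of the unified prompt builder; the context section is omitted when explorer output is absent.
function buildExecutionPrompt(issueId, solution, explorerContext) {
  const contextSection = explorerContext ? `
## Codebase Context (from explorer)
Relevant files: ${(explorerContext.relevant_files || []).map(f => f.path || f).slice(0, 10).join(', ')}
Existing patterns: ${(explorerContext.existing_patterns || []).join('; ') || 'N/A'}
Dependencies: ${(explorerContext.dependencies || []).join(', ') || 'N/A'}
` : ''
  return `
## Issue
ID: ${issueId}
Title: ${solution.bound.title || 'N/A'}

## Solution Plan
${JSON.stringify(solution.bound, null, 2)}
${contextSection}`
}
```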
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `BUILD-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `implementer` for single-instance roles.
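The discovery filter can be sketched as a pure function over the TaskList result; `pickBuildTask` is a hypothetical helper name:

```javascript
// Sketch: first pending, unblocked BUILD-* task owned by this instance.
function pickBuildTask(tasks, agentName) {
  return tasks.find(t =>
    t.subject.startsWith('BUILD-') &&
    t.owner === agentName &&        // agentName may be 'implementer' or e.g. 'implementer-2'
    t.status === 'pending' &&
    t.blockedBy.length === 0
  ) || null
}
```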
### Phase 2: Load Solution & Resolve Executor

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Bound solution | `ccw issue solutions <id> --json` | Yes |
| Explorer context | `.workflow/.team-plan/issue/context-<issueId>.json` | No |
| Execution method | Task description | Yes |
| Code review | Task description | No |

**Loading steps**:

1. Extract issue ID from task description
2. If no issue ID -> SendMessage error to coordinator, STOP
3. Load bound solution:

```
Bash("ccw issue solutions <issueId> --json")
```

4. If no bound solution -> SendMessage error to coordinator, STOP
5. Load explorer context (if available)
6. Resolve execution method from task description
7. Resolve code review setting from task description
8. Update issue status:

```
Bash("ccw issue update <issueId> --status in-progress")
```
### Phase 3: Implementation (Multi-Backend Routing)

Route to a backend based on the `executor` resolution:

#### Option A: Agent Execution (`executor === 'agent'`)

Sync call to the code-developer subagent, suitable for simple tasks (task_count <= 3).

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement solution for <issueId>",
  prompt: <executionPrompt>
})
```

#### Option B: Codex CLI Execution (`executor === 'codex'`)

Background call to the Codex CLI, suitable for complex tasks. Uses a fixed ID for resume support.

```
Bash("ccw cli -p \"<executionPrompt>\" --tool codex --mode write --id issue-<issueId>", { run_in_background: true })
```

**On failure, resume with**:

```
ccw cli -p "Continue implementation" --resume issue-<issueId> --tool codex --mode write --id issue-<issueId>-retry
```

#### Option C: Gemini CLI Execution (`executor === 'gemini'`)

Background call to the Gemini CLI, suitable for composite tasks requiring analysis.

```
Bash("ccw cli -p \"<executionPrompt>\" --tool gemini --mode write --id issue-<issueId>", { run_in_background: true })
```
### Phase 4: Verify & Commit

**Test detection**:

| Detection | Method |
|-----------|--------|
| package.json exists | Check `scripts.test` or `scripts["test:unit"]` |
| yarn.lock exists | Use `yarn test` |
| Fallback | Use `npm test` |
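A sketch of the detection logic, assuming `pkg` is the parsed package.json (or null when absent) and `hasYarnLock` indicates whether yarn.lock exists:

```javascript
// Pick a test command from package.json scripts, falling back to yarn/npm defaults.
function detectTestCmd(pkg, hasYarnLock) {
  if (pkg && pkg.scripts) {
    if (pkg.scripts.test) return 'npm test'
    if (pkg.scripts['test:unit']) return 'npm run test:unit'
  }
  return hasYarnLock ? 'yarn test' : 'npm test'
}
```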
**Test execution**:

```
Bash("<testCmd> 2>&1 || echo \"TEST_FAILED\"")
```

**Test result handling**:

| Condition | Action |
|-----------|--------|
| Tests pass | Proceed to optional code review |
| Tests fail | Report impl_failed to coordinator |
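The pass/fail decision on the captured output can be sketched as a pure predicate; the `TEST_FAILED` sentinel comes from the execution command above, and `FAIL` is a common runner marker (a heuristic, not a guarantee):

```javascript
// Decide pass/fail from captured test output using simple string markers.
function testsPassed(output) {
  return !output.includes('TEST_FAILED') && !output.includes('FAIL')
}
```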
**Failed test report**:

```
mcp__ccw-tools__team_msg({
  operation: "log", team: <session-id>, from: "implementer", to: "coordinator",  // team MUST be the session ID, NOT the team name
  type: "impl_failed",
  summary: "[implementer] Tests failing for <issueId> after implementation (via <executor>)"
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: "## [implementer] Implementation Failed\n\n**Issue**: <issueId>\n**Executor**: <executor>\n**Status**: Tests failing\n**Test Output** (truncated):\n<truncated output>\n\n**Action**: May need solution revision or manual intervention.",
  summary: "[implementer] impl_failed: <issueId> (<executor>)"
})
```

**Optional code review** (if configured):

| Tool | Command |
|------|---------|
| Gemini Review | `ccw cli -p "<reviewPrompt>" --tool gemini --mode analysis --id issue-review-<issueId>` |
| Codex Review | `ccw cli --tool codex --mode review --uncommitted` |

**Success completion**:

```
Bash("ccw issue update <issueId> --status resolved")
```
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[implementer]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

```javascript
mcp__ccw-tools__team_msg({
  operation: "log",
  team: "issue",
  from: "implementer",
  to: "coordinator",
  type: "impl_complete",
  summary: `[implementer] Implementation complete for ${issueId} via ${executor}, tests passing`
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `## [implementer] Implementation Complete

**Issue**: ${issueId}
**Executor**: ${executor}
**Solution**: ${solution.bound.id}
**Code Review**: ${codeReview}
**Status**: All tests passing
**Issue Status**: Updated to resolved`,
  summary: `[implementer] BUILD complete: ${issueId} (${executor})`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next BUILD-* task (parallel BUILD tasks or new batches)
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('BUILD-') &&
  t.owner === agentName && // Use agentName for parallel instance filtering
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (nextTasks.length > 0) {
  // Continue with next task -> back to Phase 1
}
```

**Report content includes**: issue ID, executor used, solution ID, code review status, test status, issue status update.
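The next-task check above is plain data filtering, so it can be exercised standalone. A minimal sketch with a hypothetical task list and agent name (both assumed, not from the workflow runtime):

```javascript
const agentName = "implementer-2"; // hypothetical parallel instance name

// Hypothetical snapshot of what TaskList() might return.
const tasks = [
  { subject: "BUILD-3", owner: "implementer-2", status: "pending", blockedBy: [] },
  { subject: "BUILD-4", owner: "implementer-1", status: "pending", blockedBy: [] },
  { subject: "SOLVE-1", owner: "planner", status: "pending", blockedBy: [] },
  { subject: "BUILD-5", owner: "implementer-2", status: "pending", blockedBy: ["BUILD-3"] }
];

// Same four-way filter as Phase 5: prefix, owner, pending, unblocked.
const next = tasks.filter(t =>
  t.subject.startsWith('BUILD-') &&
  t.owner === agentName &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
);

console.log(next.map(t => t.subject)); // → [ 'BUILD-3' ]
```

Note how the `blockedBy` check keeps BUILD-5 out of the result until its dependency completes.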
## Error Handling

| Scenario | Handling |
|----------|----------|
| CLI timeout | Use fixed ID `issue-{issueId}` for resume |
| Tests failing after implementation | Report impl_failed with test output + resume info |
| Issue status update failure | Log warning, continue with report |
| Dependency not yet complete | Wait - task is blocked by blockedBy |
| Context/Plan file not found | Notify coordinator, request location |
---

# Integrator Role

Queue orchestration, conflict detection, execution order optimization. Internally invokes issue-queue-agent for intelligent queue formation.

## Identity

- **Name**: `integrator` | **Tag**: `[integrator]`
- **Task Prefix**: `MARSHAL-*`
- **Responsibility**: Orchestration (queue formation)
- **Communication**: SendMessage to coordinator only

## Boundaries

### MUST

- Only process `MARSHAL-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[integrator]` identifier
- Only communicate with coordinator via SendMessage
- Use issue-queue-agent for queue orchestration
- Ensure all issues have bound solutions before queue formation

### MUST NOT

- Modify solutions (planner responsibility)
- Review solution quality (reviewer responsibility)
- Implement code (implementer responsibility)
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit the `[integrator]` identifier in any output

---
## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Task` | Subagent | integrator | Spawn issue-queue-agent for queue formation |
| `Read` | IO | integrator | Read queue files and solution data |
| `Write` | IO | integrator | Write queue output |
| `Bash` | System | integrator | Execute ccw commands |
| `mcp__ccw-tools__team_msg` | Team | integrator | Log messages to message bus |

### Subagent Capabilities

| Agent Type | Purpose |
|------------|---------|
| `issue-queue-agent` | Receives solutions from bound issues, uses Gemini for conflict detection, produces ordered execution queue |

### CLI Capabilities

| CLI Command | Purpose |
|-------------|---------|
| `ccw issue status <id> --json` | Load issue details |
| `ccw issue solutions <id> --json` | Verify bound solution |
| `ccw issue list --status planned --json` | List planned issues |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `queue_ready` | integrator -> coordinator | Queue formed successfully | Queue ready for execution |
| `conflict_found` | integrator -> coordinator | File conflicts detected, user input needed | Conflicts need manual decision |
| `error` | integrator -> coordinator | Blocking error | Queue formation failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>, // MUST be session ID (e.g., ISS-xxx-date), NOT team name. Extract from Session: field.
  from: "integrator",
  to: "coordinator",
  type: <message-type>,
  summary: "[integrator] <task-prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from integrator --to coordinator --type <message-type> --summary \"[integrator] ...\" --ref <artifact-path> --json")
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('MARSHAL-') &&
  t.owner === 'integrator' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Collect Bound Solutions

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue IDs | Task description (`GH-\d+` or `ISS-\d{8}-\d{6}`) | Yes |
| Bound solutions | `ccw issue solutions <id> --json` | Yes |

```javascript
// Extract issue IDs from task description
const issueIds = task.description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/g) || []

// Verify all issues have bound solutions
const unbound = []
const boundIssues = []

for (const issueId of issueIds) {
  const solJson = Bash(`ccw issue solutions ${issueId} --json`)
  const sol = JSON.parse(solJson)

  if (sol.bound) {
    boundIssues.push({ id: issueId, solution: sol.bound })
  } else {
    unbound.push(issueId)
  }
}

if (unbound.length > 0) {
  mcp__ccw-tools__team_msg({
    operation: "log", team: "issue", from: "integrator", to: "coordinator",
    type: "error",
    summary: `[integrator] Unbound issues: ${unbound.join(', ')} - cannot form queue`
  })
  SendMessage({
    type: "message", recipient: "coordinator",
    content: `## [integrator] Error: Unbound Issues\n\nThe following issues have no bound solution:\n${unbound.map(id => `- ${id}`).join('\n')}\n\nPlanner must create solutions before queue formation.`,
    summary: `[integrator] error: ${unbound.length} unbound issues`
  })
  return
}
```

| Condition | Action |
|-----------|--------|
| All issues bound | Proceed to Phase 3 |
| Any issue unbound | Report error to coordinator, STOP |
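The ID-extraction step above is a pure regex over the task description, so it can be sketched on a plain string. The description text here is hypothetical; the pattern is the one the phase uses:

```javascript
// Hypothetical task description containing both ID styles the regex accepts.
const description = "Form queue for GH-42 and ISS-20240101-123456 (see plan)";

// Same pattern as Phase 2: GitHub-style IDs or internal ISS-date-time IDs.
const issueIds = description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/g) || [];

console.log(issueIds); // → [ 'GH-42', 'ISS-20240101-123456' ]
```

The `|| []` fallback matters: with the `g` flag, `match` returns `null` when nothing matches, and the later `for...of` loop would throw on `null`.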
### Phase 3: Queue Formation via issue-queue-agent

```javascript
// Invoke issue-queue-agent for intelligent queue formation
const agentResult = Task({
  subagent_type: "issue-queue-agent",
  run_in_background: false,
  description: `Form queue for ${issueIds.length} issues`,
  prompt: `
## Issues to Queue

Issue IDs: ${issueIds.join(', ')}

## Bound Solutions

${boundIssues.map(bi => `- ${bi.id}: Solution ${bi.solution.id} (${bi.solution.task_count} tasks)`).join('\n')}

## Instructions

Schema: {
  conflicts: [{ issues: [id1, id2], files: [...], resolution }],
  parallel_groups: [{ group: N, issues: [...] }]
}
`
})

// Parse queue result
const queuePath = `.workflow/issues/queue/execution-queue.json`
let queueResult
try {
  queueResult = JSON.parse(Read(queuePath))
} catch {
  queueResult = null
}
```
### Phase 4: Conflict Resolution

**Queue validation**:

| Condition | Action |
|-----------|--------|
| Queue file exists | Check for unresolved conflicts |
| Queue file not found | Report error to coordinator, STOP |

**Conflict handling**:

| Condition | Action |
|-----------|--------|
| No unresolved conflicts | Proceed to Phase 5 |
| Has unresolved conflicts | Report to coordinator for user decision |

```javascript
if (!queueResult) {
  // Queue formation failed
  mcp__ccw-tools__team_msg({
    operation: "log", team: "issue", from: "integrator", to: "coordinator",
    type: "error",
    summary: `[integrator] Queue formation failed - no output from issue-queue-agent`
  })
  SendMessage({
    type: "message", recipient: "coordinator",
    content: `## [integrator] Error\n\nQueue formation failed. issue-queue-agent produced no output.`,
    summary: `[integrator] error: queue formation failed`
  })
  return
}

// Check for unresolved conflicts
const unresolvedConflicts = (queueResult.conflicts || []).filter(c => c.resolution === 'unresolved')

if (unresolvedConflicts.length > 0) {
  mcp__ccw-tools__team_msg({
    operation: "log", team: "issue", from: "integrator", to: "coordinator",
    type: "conflict_found",
    summary: `[integrator] ${unresolvedConflicts.length} unresolved conflicts in queue`
  })

  SendMessage({
    type: "message", recipient: "coordinator",
    content: `## [integrator] Conflicts Found

**Unresolved Conflicts**: ${unresolvedConflicts.length}

${unresolvedConflicts.map((c, i) => `### Conflict ${i + 1}
- **Issues**: ${c.issues.join(' vs ')}
- **Files**: ${c.files.join(', ')}
- **Recommendation**: User decision needed - which issue takes priority`).join('\n\n')}

**Action Required**: Coordinator should present conflicts to user for resolution, then re-trigger MARSHAL.`,
    summary: `[integrator] conflict_found: ${unresolvedConflicts.length} conflicts`
  })
  return
}
```
**Queue metrics**:

| Metric | Source |
|--------|--------|
| Queue size | `queueResult.queue.length` |
| Parallel groups | `queueResult.parallel_groups.length` |
| Resolved conflicts | Count where `resolution !== 'unresolved'` |
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[integrator]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

```javascript
const queueSize = queueResult.queue?.length || 0
const parallelGroups = queueResult.parallel_groups?.length || 1

mcp__ccw-tools__team_msg({
  operation: "log",
  team: "issue",
  from: "integrator",
  to: "coordinator",
  type: "queue_ready",
  summary: `[integrator] Queue ready: ${queueSize} items in ${parallelGroups} parallel groups`,
  ref: queuePath
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `## [integrator] Queue Ready

**Queue Size**: ${queueSize} items
**Parallel Groups**: ${parallelGroups}
**Resolved Conflicts**: ${(queueResult.conflicts || []).filter(c => c.resolution !== 'unresolved').length}

### Execution Order

${(queueResult.queue || []).map((q, i) => `${i + 1}. ${q.issue_id} (Solution: ${q.solution_id})${q.depends_on?.length ? ` - depends on: ${q.depends_on.join(', ')}` : ''}`).join('\n')}

### Parallel Groups

${(queueResult.parallel_groups || []).map(g => `- Group ${g.group}: ${g.issues.join(', ')}`).join('\n')}

**Queue File**: ${queuePath}
**Status**: Ready for BUILD phase`,
  summary: `[integrator] MARSHAL complete: ${queueSize} items queued`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next task
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('MARSHAL-') &&
  t.owner === 'integrator' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (nextTasks.length > 0) {
  // Continue with next task -> back to Phase 1
}
```
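The execution-order listing in the report template can be sketched over sample data. The two queue entries below are hypothetical; the field names (`issue_id`, `solution_id`, `depends_on`) follow the schema used above:

```javascript
// Hypothetical queue entries shaped like queueResult.queue items.
const queue = [
  { issue_id: "GH-1", solution_id: "SOL-1", depends_on: [] },
  { issue_id: "GH-2", solution_id: "SOL-2", depends_on: ["GH-1"] }
];

// Mirrors the report template: one numbered line per item,
// with a dependency suffix only when depends_on is non-empty.
const lines = queue.map((q, i) =>
  `${i + 1}. ${q.issue_id} (Solution: ${q.solution_id})` +
  (q.depends_on?.length ? ` - depends on: ${q.depends_on.join(', ')}` : '')
);

console.log(lines.join('\n'));
// → 1. GH-1 (Solution: SOL-1)
//   2. GH-2 (Solution: SOL-2) - depends on: GH-1
```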
## Error Handling

| Scenario | Handling |
|----------|----------|
| No MARSHAL-* tasks available | Idle, wait for coordinator |
| Issues without bound solutions | Report to coordinator, block queue formation |
| issue-queue-agent failure | Retry once, then report error |
| Unresolved file conflicts | Escalate to coordinator for user decision (CP-5) |
| Single issue (no conflict possible) | Create trivial queue with one entry |
| Context/Plan file not found | Notify coordinator, request location |
---

# Planner Role

Solution design, task decomposition. Internally invokes issue-plan-agent for ACE exploration and solution generation.

## Identity

- **Name**: `planner` | **Tag**: `[planner]`
- **Task Prefix**: `SOLVE-*`
- **Responsibility**: Orchestration (solution design)
- **Communication**: SendMessage to coordinator only

## Boundaries

### MUST

- Only process `SOLVE-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[planner]` identifier
- Only communicate with coordinator via SendMessage
- Use issue-plan-agent for solution design
- Reference the explorer's context-report to enrich solution context

### MUST NOT

- Execute code implementation (implementer responsibility)
- Review solution quality (reviewer responsibility)
- Orchestrate the execution queue (integrator responsibility)
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit the `[planner]` identifier in any output

---
## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Task` | Subagent | planner | Spawn issue-plan-agent for solution design |
| `Read` | IO | planner | Read context reports |
| `Bash` | System | planner | Execute ccw commands |
| `mcp__ccw-tools__team_msg` | Team | planner | Log messages to message bus |

### Subagent Capabilities

| Agent Type | Purpose |
|------------|---------|
| `issue-plan-agent` | Closed-loop planning: ACE exploration + solution generation + binding |

### CLI Capabilities

| CLI Command | Purpose |
|-------------|---------|
| `ccw issue status <id> --json` | Load issue details |
| `ccw issue bind <id> <sol-id>` | Bind solution to issue |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `solution_ready` | planner -> coordinator | Solution designed and bound | Single solution ready |
| `multi_solution` | planner -> coordinator | Multiple solutions, needs selection | Multiple solutions pending selection |
| `error` | planner -> coordinator | Blocking error | Solution design failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>, // MUST be session ID (e.g., ISS-xxx-date), NOT team name. Extract from Session: field.
  from: "planner",
  to: "coordinator",
  type: <message-type>,
  summary: "[planner] <task-prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from planner --to coordinator --type <message-type> --summary \"[planner] ...\" --ref <artifact-path> --json")
```

---
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('SOLVE-') &&
  t.owner === 'planner' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Context Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description (`GH-\d+` or `ISS-\d{8}-\d{6}`) | Yes |
| Explorer context | `.workflow/.team-plan/issue/context-<issueId>.json` | No |
| Review feedback | Task description (for SOLVE-fix tasks) | No |

**Loading steps**:

1. Extract the issue ID from the task description via regex `(?:GH-\d+|ISS-\d{8}-\d{6})`
2. If no issue ID is found, SendMessage an error to the coordinator and STOP
3. Load the explorer's context report if available; if missing, issue-plan-agent does its own exploration
4. Check whether this is a revision task (SOLVE-fix-N); if so, extract the reviewer feedback from the task description and design an alternative approach that addresses the reviewer's concerns

```javascript
// Resolve project root from working directory
const projectRoot = Bash('pwd').trim()

// Extract issue ID
const issueIdMatch = task.description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/)
const issueId = issueIdMatch ? issueIdMatch[0] : null

// Load explorer's context report (if available)
const contextPath = `.workflow/.team-plan/issue/context-${issueId}.json`
let explorerContext = null
try {
  explorerContext = JSON.parse(Read(contextPath))
} catch {
  // Explorer context not available, issue-plan-agent will do its own exploration
}

// Check if this is a revision task (SOLVE-fix-N)
const isRevision = task.subject.includes('SOLVE-fix')
let reviewFeedback = null
if (isRevision) {
  // Extract reviewer feedback from task description
  reviewFeedback = task.description
}
```
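As a data-only sketch, the extraction and revision check behave as below. The `task` object is hypothetical sample input; the regex and the `SOLVE-fix` substring test are the ones the phase uses:

```javascript
// Hypothetical task as a revision (SOLVE-fix) example.
const task = {
  subject: "SOLVE-fix-2: revise auth solution",
  description: "Reviewer rejected SOL-1 for ISS-20240315-091500: missing rollback plan"
};

// Non-global match: first ID wins; null when the description has none.
const issueIdMatch = task.description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/);
const issueId = issueIdMatch ? issueIdMatch[0] : null;

// Revision tasks carry the reviewer feedback in their description.
const isRevision = task.subject.includes('SOLVE-fix');

console.log(issueId, isRevision); // → ISS-20240315-091500 true
```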
### Phase 3: Solution Generation via issue-plan-agent

```javascript
// Invoke issue-plan-agent
const agentResult = Task({
  subagent_type: "issue-plan-agent",
  run_in_background: false,
  description: `Plan solution for ${issueId}`,
  prompt: `
issue_ids: ["${issueId}"]
project_root: "${projectRoot}"
${explorerContext ? `
## Explorer Context (pre-gathered)
Relevant files: ${explorerContext.relevant_files?.map(f => f.path || f).join(', ')}
Key findings: ${explorerContext.key_findings?.join('; ')}
Complexity: ${explorerContext.complexity_assessment}
` : ''}
${reviewFeedback ? `
## Revision Required
Previous solution was rejected by reviewer. Feedback:
${reviewFeedback}

Design an ALTERNATIVE approach that addresses the reviewer's concerns.
` : ''}
`
})

// Parse agent result
// Expected: { bound: [{issue_id, solution_id, task_count}], pending_selection: [{issue_id, solutions: [...]}] }
```
**Expected agent result**:

| Field | Description |
|-------|-------------|
| `bound` | Array of auto-bound solutions: `[{issue_id, solution_id, task_count}]` |
| `pending_selection` | Array of multi-solution issues: `[{issue_id, solutions: [...]}]` |
### Phase 4: Solution Selection & Binding

**Outcome routing**:

| Condition | Action |
|-----------|--------|
| Single solution auto-bound | Report `solution_ready` to coordinator |
| Multiple solutions pending | Report `multi_solution` to coordinator for user selection |
| No solution generated | Report `error` to coordinator |

```javascript
const result = agentResult // from Phase 3

if (result.bound && result.bound.length > 0) {
  // Single solution auto-bound
  const bound = result.bound[0]

  mcp__ccw-tools__team_msg({
    operation: "log", team: "issue", from: "planner", to: "coordinator",
    type: "solution_ready",
    summary: `[planner] Solution ${bound.solution_id} bound to ${bound.issue_id} (${bound.task_count} tasks)`
  })

  SendMessage({
    type: "message", recipient: "coordinator",
    content: `## [planner] Solution Ready

**Issue**: ${bound.issue_id}
**Solution**: ${bound.solution_id}
**Tasks**: ${bound.task_count}
**Status**: Auto-bound (single solution)

Solution written to: .workflow/issues/solutions/${bound.issue_id}.jsonl`,
    summary: `[planner] SOLVE complete: ${bound.issue_id}`
  })
} else if (result.pending_selection && result.pending_selection.length > 0) {
  // Multiple solutions need user selection
  const pending = result.pending_selection[0]

  mcp__ccw-tools__team_msg({
    operation: "log", team: "issue", from: "planner", to: "coordinator",
    type: "multi_solution",
    summary: `[planner] ${pending.solutions.length} solutions for ${pending.issue_id}, user selection needed`
  })

  SendMessage({
    type: "message", recipient: "coordinator",
    content: `## [planner] Multiple Solutions

**Issue**: ${pending.issue_id}
**Solutions**: ${pending.solutions.length} options

${pending.solutions.map((s, i) => `### Option ${i + 1}: ${s.id}
${s.description}
Tasks: ${s.task_count}`).join('\n\n')}

**Action Required**: Coordinator should present options to user for selection.`,
    summary: `[planner] multi_solution: ${pending.issue_id}`
  })
}
```
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

```javascript
TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next task
const nextTasks = TaskList().filter(t =>
  t.subject.startsWith('SOLVE-') &&
  t.owner === 'planner' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (nextTasks.length > 0) {
  // Continue with next task → back to Phase 1
}
```

## Error Handling

| Scenario | Handling |
|----------|----------|
| No SOLVE-* tasks available | Idle, wait for coordinator |
| Issue not found | Notify coordinator with error |
| issue-plan-agent failure | Retry once, then report error |
| Explorer context missing | Proceed without; the agent does its own exploration |
| Solution binding failure | Report to coordinator for manual binding |
| Context/Plan file not found | Notify coordinator, request location |

---

# Reviewer Role

Solution review, technical feasibility validation, and risk assessment. A **quality gate role** that fills the gap between the plan and execute phases.

## Identity

- **Name**: `reviewer` | **Tag**: `[reviewer]`
- **Task Prefix**: `AUDIT-*`
- **Responsibility**: Read-only analysis (solution review)

## Boundaries

### MUST

- Only process `AUDIT-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[reviewer]` identifier
- Only communicate with the coordinator via SendMessage
- Reference the explorer's context-report for solution coverage validation
- Provide a clear verdict for each solution: approved / rejected / concerns

### MUST NOT

- Modify solutions (planner responsibility)
- Modify any source code
- Orchestrate the execution queue (integrator responsibility)
- Communicate directly with other worker roles
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit the `[reviewer]` identifier in any output

---

## Toolbox

### Available Commands

> No command files -- all phases execute inline.

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `Read` | IO | reviewer | Read solution files and context reports |
| `Bash` | System | reviewer | Execute ccw issue commands |
| `Glob` | Search | reviewer | Find related files |
| `Grep` | Search | reviewer | Search code patterns |
| `mcp__ace-tool__search_context` | Search | reviewer | Semantic search for solution validation |
| `mcp__ccw-tools__team_msg` | Team | reviewer | Log messages to message bus |
| `Write` | IO | reviewer | Write audit report |

### CLI Commands

| CLI Command | Purpose |
|-------------|---------|
| `ccw issue status <id> --json` | Load issue details |
| `ccw issue solutions <id> --json` | View bound solutions |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `approved` | reviewer -> coordinator | Solution passes all checks | Solution approved |
| `rejected` | reviewer -> coordinator | Critical issues found | Solution rejected, needs revision |
| `concerns` | reviewer -> coordinator | Minor issues noted | Has concerns but non-blocking |
| `error` | reviewer -> coordinator | Blocking error | Review failed |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>, // MUST be session ID (e.g., ISS-xxx-date), NOT team name. Extract from Session: field.
  from: "reviewer",
  to: "coordinator",
  type: <message-type>,
  summary: "[reviewer] <task-prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from reviewer --to coordinator --type <message-type> --summary \"[reviewer] ...\" --ref <artifact-path> --json")
```

---

## Review Criteria

### Technical Feasibility (Weight 40%)

| Criterion | Check |
|-----------|-------|
| File Coverage | Solution covers all affected files |
| Dependency Awareness | Considers dependency cascade effects |
| API Compatibility | Maintains backward compatibility |
| Pattern Conformance | Follows existing code patterns |

### Risk Assessment (Weight 30%)

| Criterion | Check |
|-----------|-------|
| Scope Creep | Solution stays within the issue boundary |
| Breaking Changes | No destructive modifications |
| Side Effects | No unforeseen side effects |
| Rollback Path | Can roll back if issues occur |

### Completeness (Weight 30%)

| Criterion | Check |
|-----------|-------|
| All Tasks Defined | Task decomposition is complete |
| Test Coverage | Includes a test plan |
| Edge Cases | Considers boundary conditions |
| Documentation | Key changes are documented |

### Verdict Rules

| Score | Verdict | Action |
|-------|---------|--------|
| >= 80% | `approved` | Proceed to MARSHAL phase |
| 60-79% | `concerns` | Include suggestions, non-blocking |
| < 60% | `rejected` | Requires planner revision |

---

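The verdict rules above can be sketched as a small helper. This is an illustrative sketch, not part of the reviewer toolchain; `verdictFor` is a hypothetical name, and the weights are the 40/30/30 split from the three criteria sections:

```javascript
// Map weighted criteria scores (0-100 each) to a verdict, per the Verdict Rules table.
// verdictFor is a hypothetical helper name for illustration only.
function verdictFor(feasibility, risk, completeness) {
  const total = Math.round(feasibility * 0.4 + risk * 0.3 + completeness * 0.3)
  if (total >= 80) return { total, verdict: 'approved' }
  if (total >= 60) return { total, verdict: 'concerns' }
  return { total, verdict: 'rejected' }
}
```

For example, scores of 100/90/85 give a total of 93 and `approved`, while 70/50/30 gives 52 and `rejected`.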
## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `AUDIT-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context & Solution Loading

**Input Sources**:

| Input | Source | Required |
|-------|--------|----------|
| Issue IDs | Task description (GH-\d+ or ISS-\d{8}-\d{6}) | Yes |
| Explorer context | `.workflow/.team-plan/issue/context-<issueId>.json` | No |
| Bound solution | `ccw issue solutions <id> --json` | Yes |

**Loading steps**:

1. Extract issue IDs from the task description via regex
2. Load explorer context reports for each issue:

   ```
   Read(".workflow/.team-plan/issue/context-<issueId>.json")
   ```

3. Load bound solutions for each issue:

   ```
   Bash("ccw issue solutions <issueId> --json")
   ```

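The ID extraction in step 1 can be sketched as follows (the regex is the one from the Input Sources table; `extractIssueIds` is a hypothetical helper name):

```javascript
// Pull GitHub-style (GH-<n>) and internal (ISS-<date>-<time>) issue IDs
// out of a task description; empty array when none are found.
const ISSUE_ID_RE = /(?:GH-\d+|ISS-\d{8}-\d{6})/g

function extractIssueIds(description) {
  return description.match(ISSUE_ID_RE) || []
}
```

An empty result means the task description carries no recognizable issue ID and the reviewer should report an error.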
### Phase 3: Multi-Dimensional Review

**Review execution for each issue**:

| Dimension | Weight | Validation Method |
|-----------|--------|-------------------|
| Technical Feasibility | 40% | Cross-check solution files against explorer context + ACE semantic validation |
| Risk Assessment | 30% | Analyze task count for scope creep, check for breaking changes |
| Completeness | 30% | Verify task definitions exist, check for test plan |

**Technical Feasibility validation**:

| Condition | Score Impact |
|-----------|--------------|
| All context files covered by solution | 100% |
| Partial coverage (some files missing) | -15% per uncovered file, min 40% |
| ACE results diverge from solution patterns | -10% |
| No explorer context available | 70% (limited validation) |

**Risk Assessment validation**:

| Condition | Score |
|-----------|-------|
| Task count <= 10 | 90% |
| Task count > 10 (possible scope creep) | 50% |

**Completeness validation**:

| Condition | Score |
|-----------|-------|
| Tasks defined (count > 0) | 85% |
| No tasks defined | 30% |

**ACE semantic validation**:

```
mcp__ace-tool__search_context({
  project_root_path: <projectRoot>,
  query: "<solution.title>. Verify patterns: <solutionFiles>"
})
```

Cross-check ACE results against the solution's assumed patterns. If more than 50% of solution files are not found in the ACE results, flag the solution as potentially outdated.

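The file-coverage scoring above can be sketched as follows. This assumes the linear -15-per-uncovered-file penalty with a floor of 40 from the score-impact table; `feasibilityScore` is a hypothetical name for illustration:

```javascript
// Score technical feasibility from explorer-context coverage:
// 100 for full coverage, -15 per uncovered file (floored at 40),
// and 70 when no explorer context is available (limited validation).
function feasibilityScore(contextFiles, solutionFiles) {
  if (!contextFiles) return 70
  const uncovered = contextFiles.filter(f => !solutionFiles.some(sf => sf.includes(f)))
  return uncovered.length === 0 ? 100 : Math.max(40, 100 - uncovered.length * 15)
}
```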
### Phase 4: Compile Review Report

**Score calculation**:

```
total_score = round(
  technical_feasibility.score * 0.4 +
  risk_assessment.score * 0.3 +
  completeness.score * 0.3
)
```

**Verdict determination**:

| Score | Verdict |
|-------|---------|
| >= 80 | approved |
| 60-79 | concerns |
| < 60 | rejected |

**Overall verdict**:

| Condition | Overall Verdict |
|-----------|-----------------|
| Any solution rejected | rejected |
| Any solution has concerns (no rejections) | concerns |
| All solutions approved | approved |

**Write audit report**:

```
Write(".workflow/.team-plan/issue/audit-report.json", {
  timestamp: <ISO timestamp>,
  overall_verdict: <verdict>,
  reviews: [{
    issueId, solutionId, total_score, verdict,
    technical_feasibility: { score, findings },
    risk_assessment: { score, findings },
    completeness: { score, findings }
  }]
})
```

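The overall-verdict aggregation follows directly from the table; a minimal sketch:

```javascript
// Roll per-solution verdicts up into one overall verdict:
// any rejection dominates, then concerns, otherwise approved.
function overallVerdict(reviews) {
  const hasRejected = reviews.some(r => r.verdict === 'rejected')
  const hasConcerns = reviews.some(r => r.verdict === 'concerns')
  return hasRejected ? 'rejected' : hasConcerns ? 'concerns' : 'approved'
}
```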
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[reviewer]` prefix -> TaskUpdate completed -> loop to Phase 1 for the next task.

**Report content includes**:

- Overall verdict
- Per-issue scores and verdicts
- Rejection reasons (if any)
- Action required for rejected solutions

---

## Error Handling

| Scenario | Handling |
|----------|----------|
| No AUDIT-* tasks available | Idle, wait for coordinator |
| Solution file not found | Check ccw issue solutions, report error if missing |
| Explorer context missing | Proceed with limited review (lower technical score) |
| All solutions rejected | Report to coordinator for review-fix cycle |
| Review timeout | Report partial results with available data |
| Context/Plan file not found | Notify coordinator, request location |

---

# Architect Role

Technical architect. Responsible for technical design, task decomposition, and architecture decision records.

## Identity

- **Name**: `architect` | **Tag**: `[architect]`
- **Task Prefix**: `DESIGN-*`
- **Responsibility**: Read-only analysis (technical design)

## Boundaries

### MUST

- Only process `DESIGN-*` prefixed tasks
- All output must carry the `[architect]` identifier
- Phase 2: read shared-memory.json; Phase 5: write architecture_decisions
- Work strictly within the technical design responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Write implementation code, execute tests, or perform code review
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit the `[architect]` identifier in any output

---

## Toolbox

### Tool Capabilities

| Tool | Type | Purpose |
|------|------|---------|
| Task | Agent | Spawn cli-explore-agent for codebase exploration |
| Read | File | Read session files, shared memory, design files |
| Write | File | Write design documents and task breakdown |
| Bash | Shell | Execute shell commands |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `design_ready` | architect -> coordinator | Design completed | Design ready for implementation |
| `design_revision` | architect -> coordinator | Design revised | Design updated based on feedback |
| `error` | architect -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

**NOTE**: `team` must be the **session ID** (e.g., `TID-project-2026-02-27`), NOT the team name. Extract it from the `Session:` field in the task description.

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>, // e.g., "TID-project-2026-02-27", NOT "iterdev"
  from: "architect",
  to: "coordinator",
  type: <message-type>,
  summary: "[architect] DESIGN complete: <task-subject>",
  ref: <design-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from architect --to coordinator --type <message-type> --summary \"[architect] DESIGN complete\" --ref <design-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `DESIGN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading + Codebase Exploration

**Inputs**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/shared-memory.json | Yes |
| Codebase | Project files | Yes |
| Wisdom | <session-folder>/wisdom/ | No |

**Loading steps**:

1. Extract the session path from the task description
2. Read shared-memory.json for context:

   ```
   Read(<session-folder>/shared-memory.json)
   ```

3. Multi-angle codebase exploration via cli-explore-agent:

   ```
   Task({
     subagent_type: "cli-explore-agent",
     run_in_background: false,
     description: "Explore architecture",
     prompt: `Explore codebase architecture for: <task-description>

   Focus on:
   - Existing patterns
   - Module structure
   - Dependencies
   - Similar implementations

   Report relevant files and integration points.`
   })
   ```

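The session-path extraction in step 1 can be sketched like this (the regex follows the `Session: <path>` convention described in the Message Bus note; `extractSessionFolder` is a hypothetical helper name):

```javascript
// Read the session folder path out of the "Session: <path>" line
// in a task description; null when the field is absent.
function extractSessionFolder(description) {
  const m = description.match(/Session:\s*([^\n]+)/)
  return m ? m[1].trim() : null
}
```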
### Phase 3: Technical Design + Task Decomposition

**Design strategy selection**:

| Condition | Strategy |
|-----------|----------|
| Single module change | Direct inline design |
| Cross-module change | Multi-component design with integration points |
| Large refactoring | Phased approach with milestones |

**Outputs**:

1. **Design Document** (`<session-folder>/design/design-<num>.md`):

   ```markdown
   # Technical Design — <num>

   **Requirement**: <task-description>
   **Sprint**: <sprint-number>

   ## Architecture Decision

   **Approach**: <selected-approach>
   **Rationale**: <rationale>
   **Alternatives Considered**: <alternatives>

   ## Component Design

   ### <Component-1>
   - **Responsibility**: <description>
   - **Dependencies**: <deps>
   - **Files**: <file-list>
   - **Complexity**: <low/medium/high>

   ## Task Breakdown

   ### Task 1: <title>
   - **Files**: <file-list>
   - **Estimated Complexity**: <level>
   - **Dependencies**: <deps or None>

   ## Integration Points

   - **<Integration-1>**: <description>

   ## Risks

   - **<Risk-1>**: <mitigation>
   ```

2. **Task Breakdown JSON** (`<session-folder>/design/task-breakdown.json`):

   ```json
   {
     "design_id": "design-<num>",
     "tasks": [
       {
         "id": "task-1",
         "title": "<title>",
|
files: t.files,
|
||||||
"files": ["<file1>", "<file2>"],
|
complexity: t.complexity,
|
||||||
"complexity": "<level>",
|
dependencies: t.dependencies,
|
||||||
"dependencies": [],
|
acceptance_criteria: t.acceptance
|
||||||
"acceptance_criteria": "<criteria>"
|
})),
|
||||||
}
|
total_files: [...new Set(taskBreakdown.flatMap(t => t.files))].length,
|
||||||
],
|
execution_order: taskBreakdown.map((t, i) => `task-${i + 1}`)
|
||||||
"total_files": <count>,
|
|
||||||
"execution_order": ["task-1", "task-2"]
|
|
||||||
}
|
}
|
||||||
|
Write(breakdownPath, JSON.stringify(breakdown, null, 2))
|
||||||
```
|
```
|
||||||
|
|
||||||
### Phase 4: Design Validation
|
### Phase 4: Design Validation
|
||||||
|
|
||||||
**Validation checks**:
|
```javascript
|
||||||
|
// Verify design completeness
|
||||||
| Check | Method | Pass Criteria |
|
const hasComponents = components.length > 0
|
||||||
|-------|--------|---------------|
|
const hasBreakdown = taskBreakdown.length > 0
|
||||||
| Components defined | Verify component list | At least 1 component |
|
const hasDependencies = components.every(c => c.dependencies !== undefined)
|
||||||
| Task breakdown exists | Verify task list | At least 1 task |
|
|
||||||
| Dependencies mapped | Check all components have dependencies field | All have dependencies (can be empty) |
|
|
||||||
| Integration points | Verify integration section | Key integrations documented |
|
|
||||||
|
|
||||||
### Phase 5: Report to Coordinator
|
|
||||||
|
|
||||||
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report
|
|
||||||
|
|
||||||
1. **Update shared memory**:
|
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### Phase 5: Report to Coordinator + Shared Memory Write
|
||||||
|
|
||||||
|
```javascript
|
||||||
sharedMemory.architecture_decisions.push({
|
sharedMemory.architecture_decisions.push({
|
||||||
design_id: "design-<num>",
|
design_id: `design-${designNum}`,
|
||||||
approach: <approach>,
|
approach: selectedApproach,
|
||||||
rationale: <rationale>,
|
rationale: rationale,
|
||||||
components: <component-names>,
|
components: components.map(c => c.name),
|
||||||
task_count: <count>
|
task_count: taskBreakdown.length
|
||||||
})
|
})
|
||||||
Write(<session-folder>/shared-memory.json, JSON.stringify(sharedMemory, null, 2))
|
Write(memoryPath, JSON.stringify(sharedMemory, null, 2))
|
||||||
```
|
|
||||||
|
|
||||||
2. **Log and send message**:
|
|
||||||
|
|
||||||
```
|
|
||||||
mcp__ccw-tools__team_msg({
|
mcp__ccw-tools__team_msg({
|
||||||
operation: "log", team: <session-id>, from: "architect", to: "coordinator", // team = session ID, e.g., "TID-project-2026-02-27"
|
operation: "log", team: teamName, from: "architect", to: "coordinator",
|
||||||
type: "design_ready",
|
type: "design_ready",
|
||||||
summary: "[architect] Design complete: <count> components, <task-count> tasks",
|
summary: `[architect] Design complete: ${components.length} components, ${taskBreakdown.length} tasks`,
|
||||||
ref: <design-path>
|
ref: designPath
|
||||||
})
|
})
|
||||||
|
|
||||||
SendMessage({
|
SendMessage({
|
||||||
type: "message", recipient: "coordinator",
|
type: "message", recipient: "coordinator",
|
||||||
content: `## [architect] Design Ready
|
content: `## [architect] Design Ready\n\n**Components**: ${components.length}\n**Tasks**: ${taskBreakdown.length}\n**Design**: ${designPath}\n**Breakdown**: ${breakdownPath}`,
|
||||||
|
summary: `[architect] Design: ${taskBreakdown.length} tasks`
|
||||||
**Components**: <count>
|
|
||||||
**Tasks**: <task-count>
|
|
||||||
**Design**: <design-path>
|
|
||||||
**Breakdown**: <breakdown-path>`,
|
|
||||||
summary: "[architect] Design: <task-count> tasks"
|
|
||||||
})
|
})
|
||||||
|
|
||||||
|
TaskUpdate({ taskId: task.id, status: 'completed' })
|
||||||
```
|
```
|
||||||
|
|
||||||
3. **Mark task complete**:
|
|
||||||
|
|
||||||
```
|
|
||||||
TaskUpdate({ taskId: <task-id>, status: "completed" })
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Loop to Phase 1** for next task
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Error Handling
|
## Error Handling
|
||||||
|
|
||||||
| Scenario | Resolution |
|
| Scenario | Resolution |
|
||||||
|----------|------------|
|
|----------|------------|
|
||||||
| No DESIGN-* tasks available | Idle, wait for coordinator assignment |
|
| No DESIGN-* tasks | Idle |
|
||||||
| Codebase exploration fails | Design based on task description alone |
|
| Codebase exploration fails | Design based on task description alone |
|
||||||
| Too many components identified | Simplify, suggest phased approach |
|
| Too many components | Simplify, suggest phased approach |
|
||||||
| Conflicting patterns found | Document in design, recommend resolution |
|
| Conflicting patterns found | Document in design, recommend resolution |
|
||||||
| Context/Plan file not found | Notify coordinator, request location |
|
|
||||||
| Critical issue beyond scope | SendMessage fix_required to coordinator |
|
|
||||||
| Unexpected error | Log error via team_msg, report to coordinator |
|
|
||||||
|
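
The Phase 4 design-completeness checks described above (components defined, task breakdown exists, dependencies mapped) can be sketched as a small validator. This is a minimal sketch; the function name and field shapes are illustrative, matching the `components` and `taskBreakdown` structures used earlier in this role:

```javascript
// Minimal sketch of the Phase 4 completeness checks.
// Returns { valid, errors } so the caller can report all failures at once.
function validateDesign(components, taskBreakdown) {
  const errors = [];
  if (components.length === 0) errors.push('no components defined');
  if (taskBreakdown.length === 0) errors.push('no task breakdown');
  // Dependencies may be empty arrays, but the field itself must exist.
  if (!components.every(c => c.dependencies !== undefined)) {
    errors.push('component missing dependencies field');
  }
  return { valid: errors.length === 0, errors };
}
```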
|||||||
@@ -1,423 +1,292 @@
|
|||||||
# Coordinator Role
|
# Role: coordinator
|
||||||
|
|
||||||
Orchestrate the IterDev workflow: Sprint planning, backlog management, task ledger maintenance, Generator-Critic loop control (developer<->reviewer, max 3 rounds), cross-sprint learning, conflict handling, concurrency control, rollback strategy, user feedback loop, and tech debt tracking.
|
Continuous iterative development team coordinator. Responsible for Sprint planning, backlog management, task ledger maintenance, Generator-Critic loop control (developer↔reviewer, max 3 rounds), cross-sprint learning, **conflict handling, concurrency control, rollback strategy**, and **user feedback loop, tech debt tracking**.
|
||||||
|
|
||||||
## Identity
|
## Role Identity
|
||||||
|
|
||||||
- **Name**: `coordinator` | **Tag**: `[coordinator]`
|
- **Name**: `coordinator`
|
||||||
- **Responsibility**: Orchestration + Stability Management + Quality Tracking
|
- **Task Prefix**: N/A
|
||||||
|
- **Responsibility**: Orchestration + **Stability Management** + **Quality Tracking**
|
||||||
|
- **Communication**: SendMessage to all teammates
|
||||||
|
- **Output Tag**: `[coordinator]`
|
||||||
|
|
||||||
## Boundaries
|
## Role Boundaries
|
||||||
|
|
||||||
### MUST
|
### MUST
|
||||||
|
|
||||||
- All output must carry `[coordinator]` identifier
|
- All output must carry the `[coordinator]` tag
|
||||||
- Maintain task-ledger.json for real-time progress
|
- Maintain real-time progress in task-ledger.json
|
||||||
- Manage developer<->reviewer GC loop (max 3 rounds)
|
- Manage the developer↔reviewer GC loop (max 3 rounds)
|
||||||
- Record learning to shared-memory.json at Sprint end
|
- Record learnings to shared-memory.json at Sprint end
|
||||||
- Detect and coordinate task conflicts
|
- **Added in Phase 1**:
|
||||||
- Manage shared resource locks (resource_locks)
|
  - Detect and coordinate conflicts between tasks
|
||||||
- Record rollback points and support emergency rollback
|
  - Manage shared resource locks (resource_locks)
|
||||||
- Collect and track user feedback (user_feedback_items)
|
  - Record rollback points and support emergency rollback
|
||||||
- Identify and record tech debt (tech_debt_items)
|
- **Added in Phase 3**:
|
||||||
- Generate tech debt reports
|
  - Collect and track user feedback (user_feedback_items)
|
||||||
|
  - Identify and record tech debt (tech_debt_items)
|
||||||
|
  - Generate tech debt reports
|
||||||
|
|
||||||
### MUST NOT
|
### MUST NOT
|
||||||
|
|
||||||
- Execute implementation work directly (delegate to workers)
|
- ❌ Directly write code, design architecture, run tests, or review code
|
||||||
- Write source code directly
|
- ❌ Directly invoke implementation-type subagents
|
||||||
- Call implementation-type subagents directly
|
- ❌ Modify source code
|
||||||
- Modify task outputs (workers own their deliverables)
|
|
||||||
- Skip dependency validation when creating task chains
|
|
||||||
|
|
||||||
> **Core principle**: Coordinator is the orchestrator, not the executor. All actual work must be delegated to worker roles via TaskCreate.
|
## Execution
|
||||||
|
|
||||||
---
|
### Phase 1: Sprint Planning
|
||||||
|
|
||||||
## Entry Router
|
```javascript
|
||||||
|
const args = "$ARGUMENTS"
|
||||||
|
const teamName = args.match(/--team-name[=\s]+([\w-]+)/)?.[1] || `iterdev-${Date.now().toString(36)}`
|
||||||
|
const taskDescription = args.replace(/--team-name[=\s]+[\w-]+/, '').replace(/--role[=\s]+\w+/, '').trim()
|
||||||
|
|
||||||
When coordinator is invoked, first detect the invocation type:
|
// Assess complexity for pipeline selection
|
||||||
|
function assessComplexity(desc) {
|
||||||
|
let score = 0
|
||||||
|
const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || echo ""`).split('\n').filter(Boolean)
|
||||||
|
score += changedFiles.length > 10 ? 3 : changedFiles.length > 3 ? 2 : 0
|
||||||
|
if (/refactor|architect|restructure|system|module/.test(desc)) score += 3
|
||||||
|
if (/multiple|across|cross/.test(desc)) score += 2
|
||||||
|
if (/fix|bug|typo|patch/.test(desc)) score -= 2
|
||||||
|
return { score, fileCount: changedFiles.length }
|
||||||
|
}
|
||||||
|
|
||||||
| Detection | Condition | Handler |
|
const { score, fileCount } = assessComplexity(taskDescription)
|
||||||
|-----------|-----------|---------|
|
const suggestedPipeline = score >= 5 ? 'multi-sprint' : score >= 2 ? 'sprint' : 'patch'
|
||||||
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
|
|
||||||
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
|
|
||||||
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
|
|
||||||
| New session | None of the above | -> Phase 0 (Session Resume Check) |
|
|
||||||
|
|
||||||
For callback/check/resume: load monitor logic and execute the appropriate handler, then STOP.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Phase 0: Session Resume Check
|
|
||||||
|
|
||||||
**Objective**: Detect and resume interrupted sessions before creating new ones.
|
|
||||||
|
|
||||||
**Workflow**:
|
|
||||||
|
|
||||||
1. Scan `.workflow/.team/IDS-*/team-session.json` for sessions with status "active" or "paused"
|
|
||||||
2. No sessions found -> proceed to Phase 1
|
|
||||||
3. Single session found -> resume it (-> Session Reconciliation)
|
|
||||||
4. Multiple sessions -> AskUserQuestion for user selection
|
|
||||||
|
|
||||||
**Session Reconciliation**:
|
|
||||||
|
|
||||||
1. Audit TaskList -> get real status of all tasks
|
|
||||||
2. Reconcile: session state <-> TaskList status (bidirectional sync)
|
|
||||||
3. Reset any in_progress tasks -> pending (they were interrupted)
|
|
||||||
4. Determine remaining pipeline from reconciled state
|
|
||||||
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
|
|
||||||
6. Create missing tasks with correct blockedBy dependencies
|
|
||||||
7. Verify dependency chain integrity
|
|
||||||
8. Update session file with reconciled state
|
|
||||||
9. Kick first executable task's worker -> Phase 4
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Phase 1: Requirement Clarification
|
|
||||||
|
|
||||||
**Objective**: Parse user input and gather execution parameters.
|
|
||||||
|
|
||||||
**Workflow**:
|
|
||||||
|
|
||||||
1. **Parse arguments** for explicit settings: mode, scope, focus areas
|
|
||||||
|
|
||||||
2. **Assess complexity** for pipeline selection:
|
|
||||||
|
|
||||||
| Signal | Weight | Keywords |
|
|
||||||
|--------|--------|----------|
|
|
||||||
| Changed files > 10 | +3 | Large changeset |
|
|
||||||
| Changed files 3-10 | +2 | Medium changeset |
|
|
||||||
| Structural change | +3 | refactor, architect, restructure, system, module |
|
|
||||||
| Cross-cutting | +2 | multiple, across, cross |
|
|
||||||
| Simple fix | -2 | fix, bug, typo, patch |
|
|
||||||
|
|
||||||
| Score | Pipeline | Description |
|
|
||||||
|-------|----------|-------------|
|
|
||||||
| >= 5 | multi-sprint | Incremental iterative delivery for large features |
|
|
||||||
| 2-4 | sprint | Standard: Design -> Dev -> Verify + Review |
|
|
||||||
| 0-1 | patch | Simple: Dev -> Verify |
|
|
||||||
|
|
||||||
3. **Ask for missing parameters** via AskUserQuestion:
|
|
||||||
|
|
||||||
```
|
|
||||||
AskUserQuestion({
|
AskUserQuestion({
|
||||||
questions: [{
|
questions: [{
|
||||||
question: "Select development mode:",
|
question: "Select development mode:",
|
||||||
header: "Mode",
|
header: "Mode",
|
||||||
multiSelect: false,
|
multiSelect: false,
|
||||||
options: [
|
options: [
|
||||||
{ label: "patch (recommended)", description: "Patch mode: implement -> verify (simple fixes)" },
|
{ label: suggestedPipeline === 'patch' ? "patch (recommended)" : "patch", description: "Patch mode: implement → verify (simple fixes)" },
|
||||||
{ label: "sprint (recommended)", description: "Sprint mode: design -> implement -> verify + review" },
|
{ label: suggestedPipeline === 'sprint' ? "sprint (recommended)" : "sprint", description: "Sprint mode: design → implement → verify + review" },
|
||||||
{ label: "multi-sprint (recommended)", description: "Multi-sprint: incremental iterative delivery (large features)" }
|
{ label: suggestedPipeline === 'multi-sprint' ? "multi-sprint (recommended)" : "multi-sprint", description: "Multi-sprint: incremental iterative delivery (large features)" }
|
||||||
]
|
]
|
||||||
}]
|
}]
|
||||||
})
|
})
|
||||||
```
|
```
|
||||||
|
|
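
The scoring heuristic that both versions of Phase 1 describe can be sketched as a pair of pure functions. This is a minimal sketch under stated assumptions: the changed-file count is passed in explicitly (rather than read from `git diff`), and the function names are illustrative:

```javascript
// Score a task description plus changed-file count using the signal weights above.
function assessComplexity(description, changedFileCount) {
  let score = 0;
  if (changedFileCount > 10) score += 3;       // large changeset
  else if (changedFileCount > 3) score += 2;   // medium changeset
  if (/refactor|architect|restructure|system|module/.test(description)) score += 3;
  if (/multiple|across|cross/.test(description)) score += 2;
  if (/fix|bug|typo|patch/.test(description)) score -= 2;
  return score;
}

// Map the score onto a pipeline, matching the thresholds in the table.
function selectPipeline(score) {
  return score >= 5 ? 'multi-sprint' : score >= 2 ? 'sprint' : 'patch';
}
```

For example, a description like "refactor module across services" touching 12 files scores high enough for multi-sprint, while "fix typo" in one file stays in patch mode.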
||||||
**Success**: All parameters captured, mode finalized.
|
### Phase 2: Create Team + Initialize Ledger
|
||||||
|
|
||||||
---
|
```javascript
|
||||||
|
TeamCreate({ team_name: teamName })
|
||||||
|
|
||||||
## Phase 2: Create Team + Initialize Session
|
const topicSlug = taskDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
|
||||||
|
const dateStr = new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().substring(0, 10)
|
||||||
|
const sessionId = `IDS-${topicSlug}-${dateStr}`
|
||||||
|
const sessionFolder = `.workflow/.team/${sessionId}`
|
||||||
|
|
||||||
**Objective**: Initialize team, session file, task ledger, shared memory, and wisdom directory.
|
Bash(`mkdir -p "${sessionFolder}/design" "${sessionFolder}/code" "${sessionFolder}/verify" "${sessionFolder}/review"`)
|
||||||
|
|
||||||
**Workflow**:
|
// Initialize task ledger
|
||||||
|
const taskLedger = {
|
||||||
1. Generate session ID: `IDS-{slug}-{YYYY-MM-DD}`
|
sprint_id: "sprint-1",
|
||||||
2. Create session folder structure
|
sprint_goal: taskDescription,
|
||||||
3. Call TeamCreate with team name
|
pipeline: selectedPipeline,
|
||||||
4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
|
tasks: [],
|
||||||
5. Write session file with: session_id, mode, scope, status="active"
|
metrics: { total: 0, completed: 0, in_progress: 0, blocked: 0, velocity: 0 }
|
||||||
6. Initialize task-ledger.json:
|
|
||||||
|
|
||||||
```
|
|
||||||
{
|
|
||||||
"sprint_id": "sprint-1",
|
|
||||||
"sprint_goal": "<task-description>",
|
|
||||||
"pipeline": "<selected-pipeline>",
|
|
||||||
"tasks": [],
|
|
||||||
"metrics": { "total": 0, "completed": 0, "in_progress": 0, "blocked": 0, "velocity": 0 }
|
|
||||||
}
|
}
|
||||||
```
|
Write(`${sessionFolder}/task-ledger.json`, JSON.stringify(taskLedger, null, 2))
|
||||||
|
|
||||||
7. Initialize shared-memory.json:
|
// Initialize shared memory with sprint learning
|
||||||
|
const sharedMemory = {
|
||||||
```
|
sprint_history: [],
|
||||||
{
|
architecture_decisions: [],
|
||||||
"sprint_history": [],
|
implementation_context: [],
|
||||||
"architecture_decisions": [],
|
review_feedback_trends: [],
|
||||||
"implementation_context": [],
|
gc_round: 0,
|
||||||
"review_feedback_trends": [],
|
max_gc_rounds: 3
|
||||||
"gc_round": 0,
|
|
||||||
"max_gc_rounds": 3,
|
|
||||||
"resource_locks": {},
|
|
||||||
"task_checkpoints": {},
|
|
||||||
"user_feedback_items": [],
|
|
||||||
"tech_debt_items": []
|
|
||||||
}
|
}
|
||||||
|
Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))
|
||||||
|
|
||||||
|
const teamSession = {
|
||||||
|
session_id: sessionId, team_name: teamName, task: taskDescription,
|
||||||
|
pipeline: selectedPipeline, status: "active", sprint_number: 1,
|
||||||
|
created_at: new Date().toISOString(), updated_at: new Date().toISOString(),
|
||||||
|
completed_tasks: []
|
||||||
|
}
|
||||||
|
Write(`${sessionFolder}/team-session.json`, JSON.stringify(teamSession, null, 2))
|
||||||
```
|
```
|
||||||
|
|
||||||
**Success**: Team created, session file written, wisdom initialized, task ledger and shared memory ready.
|
// ⚠️ Workers are NOT pre-spawned here.
|
||||||
|
// Workers are spawned per-stage in Phase 4 via Stop-Wait Task(run_in_background: false).
|
||||||
|
// See SKILL.md Coordinator Spawn Template for worker prompt templates.
|
||||||
|
|
||||||
---
|
### Phase 3: Create Task Chain + Update Ledger
|
||||||
|
|
||||||
## Phase 3: Create Task Chain
|
#### Patch Pipeline
|
||||||
|
|
||||||
**Objective**: Dispatch tasks based on mode with proper dependencies.
|
```javascript
|
||||||
|
TaskCreate({ subject: "DEV-001: Implement fix", description: `${taskDescription}\n\nSession: ${sessionFolder}`, activeForm: "Implementing fix" })
|
||||||
|
TaskUpdate({ taskId: devId, owner: "developer" })
|
||||||
|
|
||||||
### Patch Pipeline
|
TaskCreate({ subject: "VERIFY-001: Verify fix", description: `Verify DEV-001\n\nSession: ${sessionFolder}`, activeForm: "Verifying fix" })
|
||||||
|
TaskUpdate({ taskId: verifyId, owner: "tester", addBlockedBy: [devId] })
|
||||||
|
|
||||||
| Task ID | Owner | Blocked By | Description |
|
// Update ledger
|
||||||
|---------|-------|------------|-------------|
|
updateLedger(sessionFolder, null, {
|
||||||
| DEV-001 | developer | (none) | Implement fix |
|
tasks: [
|
||||||
| VERIFY-001 | tester | DEV-001 | Verify fix |
|
{ id: "DEV-001", title: "Implement fix", owner: "developer", status: "pending", gc_rounds: 0 },
|
||||||
|
{ id: "VERIFY-001", title: "Verify fix", owner: "tester", status: "pending", gc_rounds: 0 }
|
||||||
### Sprint Pipeline
|
],
|
||||||
|
metrics: { total: 2, completed: 0, in_progress: 0, blocked: 0, velocity: 0 }
|
||||||
| Task ID | Owner | Blocked By | Description |
|
|
||||||
|---------|-------|------------|-------------|
|
|
||||||
| DESIGN-001 | architect | (none) | Technical design and task breakdown |
|
|
||||||
| DEV-001 | developer | DESIGN-001 | Implement design |
|
|
||||||
| VERIFY-001 | tester | DEV-001 | Test execution |
|
|
||||||
| REVIEW-001 | reviewer | DEV-001 | Code review |
|
|
||||||
|
|
||||||
### Multi-Sprint Pipeline
|
|
||||||
|
|
||||||
Sprint 1: DESIGN-001 -> DEV-001 -> DEV-002 (incremental) -> VERIFY-001 -> DEV-fix -> REVIEW-001
|
|
||||||
|
|
||||||
Subsequent sprints created dynamically after Sprint N completes.
|
|
||||||
|
|
||||||
**Task Creation**: Use TaskCreate + TaskUpdate(owner, addBlockedBy) for each task. Include `Session: <session-folder>` in every task description.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Phase 4: Spawn-and-Stop
|
|
||||||
|
|
||||||
**Objective**: Spawn first batch of ready workers in background, then STOP.
|
|
||||||
|
|
||||||
**Design**: Spawn-and-Stop + Callback pattern.
|
|
||||||
- Spawn workers with `Task(run_in_background: true)` -> immediately return
|
|
||||||
- Worker completes -> SendMessage callback -> auto-advance
|
|
||||||
- User can use "check" / "resume" to manually advance
|
|
||||||
- Coordinator does one operation per invocation, then STOPS
|
|
||||||
|
|
||||||
**Workflow**:
|
|
||||||
|
|
||||||
1. Find tasks with: status=pending, blockedBy all resolved, owner assigned
|
|
||||||
2. For each ready task -> spawn worker using Spawn Template
|
|
||||||
3. Output status summary
|
|
||||||
4. STOP
|
|
||||||
|
|
||||||
### Callback Handler
|
|
||||||
|
|
||||||
| Received Message | Action |
|
|
||||||
|-----------------|--------|
|
|
||||||
| architect: design_ready | Update ledger -> unblock DEV |
|
|
||||||
| developer: dev_complete | Update ledger -> unblock VERIFY + REVIEW |
|
|
||||||
| tester: verify_passed | Update ledger (test_pass_rate) |
|
|
||||||
| tester: verify_failed | Create DEV-fix task |
|
|
||||||
| tester: fix_required | Create DEV-fix task -> assign developer |
|
|
||||||
| reviewer: review_passed | Update ledger (review_score) -> mark complete |
|
|
||||||
| reviewer: review_revision | **GC loop** -> create DEV-fix -> REVIEW-next |
|
|
||||||
| reviewer: review_critical | **GC loop** -> create DEV-fix -> REVIEW-next |
|
|
||||||
|
|
||||||
### GC Loop Control
|
|
||||||
|
|
||||||
When receiving `review_revision` or `review_critical`:
|
|
||||||
|
|
||||||
1. Read shared-memory.json -> get gc_round
|
|
||||||
2. If gc_round < max_gc_rounds (3):
|
|
||||||
- Increment gc_round
|
|
||||||
- Create DEV-fix task with review feedback
|
|
||||||
- Create REVIEW-next task blocked by DEV-fix
|
|
||||||
- Update ledger
|
|
||||||
- Log gc_loop_trigger message
|
|
||||||
3. Else (max rounds reached):
|
|
||||||
- Accept with warning
|
|
||||||
- Log sprint_complete message
|
|
||||||
|
|
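
The round-limit decision above can be captured in a small helper. A minimal sketch, assuming the `gc_round` / `max_gc_rounds` fields from shared-memory.json; the function name and return shape are illustrative:

```javascript
// Decide what to do when a review comes back as review_revision or review_critical.
// Iterates while gc_round < max_gc_rounds; otherwise accepts with a warning.
function gcDecision(sharedMemory) {
  if (sharedMemory.gc_round < sharedMemory.max_gc_rounds) {
    sharedMemory.gc_round += 1; // consume one GC round
    return { action: 'iterate', round: sharedMemory.gc_round };
  }
  return { action: 'accept_with_warning', round: sharedMemory.gc_round };
}
```

On `iterate` the coordinator would create the DEV-fix and REVIEW-next tasks; on `accept_with_warning` it logs sprint_complete instead.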
||||||
---
|
|
||||||
|
|
||||||
## Phase 5: Report + Next Steps
|
|
||||||
|
|
||||||
**Objective**: Completion report and follow-up options.
|
|
||||||
|
|
||||||
**Workflow**:
|
|
||||||
|
|
||||||
1. Load session state -> count completed tasks, duration
|
|
||||||
2. Record sprint learning to shared-memory.json
|
|
||||||
3. List deliverables with output paths
|
|
||||||
4. Update session status -> "completed"
|
|
||||||
5. Offer next steps via AskUserQuestion
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Protocol Implementations
|
|
||||||
|
|
||||||
### Resource Lock Protocol
|
|
||||||
|
|
||||||
Concurrency control for shared resources. Prevents multiple workers from modifying the same files simultaneously.
|
|
||||||
|
|
||||||
| Action | Trigger Condition | Behavior |
|
|
||||||
|--------|-------------------|----------|
|
|
||||||
| Acquire lock | Worker requests exclusive access | Check resource_locks in shared-memory.json. If unlocked, record lock with task ID, timestamp, holder. Log resource_locked. Return success. |
|
|
||||||
| Deny lock | Resource already locked | Return failure with current holder's task ID. Log resource_contention. Worker must wait. |
|
|
||||||
| Release lock | Worker completes task | Remove lock entry. Log resource_unlocked. |
|
|
||||||
| Force release | Lock held beyond timeout (5 min) | Force-remove lock entry. Notify holder. Log warning. |
|
|
||||||
| Deadlock detection | Multiple tasks waiting on each other | Abort youngest task, release its locks. Notify coordinator. |
|
|
||||||
|
|
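
The acquire / deny / release / force-release rows above can be sketched against the `resource_locks` map. A sketch under stated assumptions: the lock-record shape (`taskId`, `timestamp`) and function names are illustrative, and the 5-minute timeout comes from the table:

```javascript
const LOCK_TIMEOUT_MS = 5 * 60 * 1000; // force-release threshold from the table

// Try to take an exclusive lock on a resource for a task.
function acquireLock(locks, resource, taskId, now = Date.now()) {
  const held = locks[resource];
  if (held && now - held.timestamp < LOCK_TIMEOUT_MS) {
    return { ok: false, holder: held.taskId }; // contention: caller must wait
  }
  // Unlocked, or stale lock past the timeout: take it over.
  locks[resource] = { taskId, timestamp: now };
  return { ok: true };
}

// Release only succeeds for the current holder.
function releaseLock(locks, resource, taskId) {
  if (locks[resource] && locks[resource].taskId === taskId) {
    delete locks[resource];
    return true;
  }
  return false;
}
```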
||||||
### Conflict Detection Protocol
|
|
||||||
|
|
||||||
Detects and resolves file-level conflicts between concurrent development tasks.
|
|
||||||
|
|
||||||
| Action | Trigger Condition | Behavior |
|
|
||||||
|--------|-------------------|----------|
|
|
||||||
| Detect conflict | DEV task completes with changed files | Compare changed files against other in_progress/completed tasks. If overlap, update task's conflict_info to status "detected". Log conflict_detected. |
|
|
||||||
| Resolve conflict | Conflict detected | Set resolution_strategy (manual/auto_merge/abort). Create fix-conflict task for developer. Log conflict_resolved. |
|
|
||||||
| Skip | No file overlap | No action needed. |
|
|
||||||
|
|
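
The detect/skip branches above reduce to a file-overlap check between the finished task and other in-progress or completed tasks. A minimal sketch; the task shape (`id`, `status`, `files`) is illustrative:

```javascript
// Detect file-level overlap between a completed task's changes and other tasks.
// Returns one entry per conflicting task, listing the overlapping files.
function detectConflicts(changedFiles, otherTasks) {
  const changed = new Set(changedFiles);
  return otherTasks
    .filter(t => t.status === 'in_progress' || t.status === 'completed')
    .map(t => ({ taskId: t.id, overlap: t.files.filter(f => changed.has(f)) }))
    .filter(c => c.overlap.length > 0);
}
```

An empty result corresponds to the "Skip" row; any entry would trigger the conflict_detected flow and a fix-conflict task.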
||||||
### Rollback Point Protocol
|
|
||||||
|
|
||||||
Manages state snapshots for safe recovery.
|
|
||||||
|
|
||||||
| Action | Trigger Condition | Behavior |
|
|
||||||
|--------|-------------------|----------|
|
|
||||||
| Create rollback point | Task phase completes | Generate snapshot ID, record rollback_procedure (default: git revert HEAD) in task's rollback_info. |
|
|
||||||
| Execute rollback | Task failure or user request | Log rollback_initiated. Execute stored procedure. Log rollback_completed or rollback_failed. |
|
|
||||||
| Validate snapshot | Before rollback | Verify snapshot ID exists and procedure is valid. |
|
|
||||||
|
|
||||||
### Dependency Validation Protocol
|
|
||||||
|
|
||||||
Validates external dependencies before task execution.
|
|
||||||
|
|
||||||
| Action | Trigger Condition | Behavior |
|
|
||||||
|--------|-------------------|----------|
|
|
||||||
| Validate | Task startup with dependencies | Check installed version vs expected. Record status (ok/mismatch/missing) in external_dependencies. |
|
|
||||||
| Report mismatch | Any dependency has issues | Log dependency_mismatch. Block task until resolved. |
|
|
||||||
| Update notification | Important update available | Log dependency_update_needed. Add to backlog. |
|
|
||||||
|
|
||||||
### Checkpoint Management Protocol
|
|
||||||
|
|
||||||
Saves and restores task execution state for interruption recovery.
|
|
||||||
|
|
||||||
| Action | Trigger Condition | Behavior |
|
|
||||||
|--------|-------------------|----------|
|
|
||||||
| Save checkpoint | Task reaches milestone | Store checkpoint in task_checkpoints with timestamp. Retain last 5 per task. Log context_checkpoint_saved. |
|
|
||||||
| Restore checkpoint | Task resumes after interruption | Load latest checkpoint. Log context_restored. |
|
|
||||||
| Not found | Resume requested but no checkpoints | Return failure. Worker starts fresh. |
|
|
||||||
|
|
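
The save/restore/not-found rows above, including the retain-last-5 rule, can be sketched against the `task_checkpoints` map. Function names and the checkpoint shape are illustrative:

```javascript
// Save a checkpoint for a task, retaining only the most recent 5 per task.
function saveCheckpoint(store, taskId, state, timestamp = Date.now()) {
  const list = store[taskId] || (store[taskId] = []);
  list.push({ timestamp, state });
  if (list.length > 5) list.shift(); // drop the oldest beyond the retention limit
}

// Restore the latest checkpoint, or null when none exist (worker starts fresh).
function restoreCheckpoint(store, taskId) {
  const list = store[taskId];
  return list && list.length ? list[list.length - 1] : null;
}
```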
||||||
### User Feedback Protocol
|
|
||||||
|
|
||||||
Collects, categorizes, and tracks user feedback.
|
|
||||||
|
|
||||||
| Action | Trigger Condition | Behavior |
|
|
||||||
|--------|-------------------|----------|
|
|
||||||
| Receive feedback | User provides feedback | Create feedback item (FB-xxx) with severity, category. Store in user_feedback_items (max 50). Log user_feedback_received. |
|
|
||||||
| Link to task | Feedback relates to task | Update source_task_id, set status "reviewed". |
|
|
||||||
| Triage | High/critical severity | Prioritize in next sprint. Create task if actionable. |
|
|
||||||
|
|
||||||
### Tech Debt Management Protocol
|
|
||||||
|
|
||||||
Identifies, tracks, and prioritizes technical debt.
|
|
||||||
|
|
||||||
| Action | Trigger Condition | Behavior |
|
|
||||||
|--------|-------------------|----------|
|
|
||||||
| Identify debt | Worker reports tech debt | Create debt item (TD-xxx) with category, severity, effort. Store in tech_debt_items. Log tech_debt_identified. |
|
|
||||||
| Generate report | Sprint retrospective | Aggregate by severity and category. Report totals. |
|
|
||||||
| Prioritize | Sprint planning | Rank by severity. Recommend items for current sprint. |
|
|
||||||
| Resolve | Developer completes debt task | Update status to "resolved". Record in sprint history. |
|
|
||||||
|
|
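
The prioritize row above (rank by severity, skip resolved items) can be sketched as a sort over `tech_debt_items`. A minimal sketch; the severity scale and item shape are illustrative:

```javascript
// Rank open tech debt items by severity for sprint planning (most severe first).
const SEVERITY_RANK = { critical: 3, high: 2, medium: 1, low: 0 };

function prioritizeDebt(items) {
  return items
    .filter(d => d.status !== 'resolved')
    .slice() // avoid mutating the shared-memory array
    .sort((a, b) => (SEVERITY_RANK[b.severity] || 0) - (SEVERITY_RANK[a.severity] || 0));
}
```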
||||||
---
|
|
||||||
|
|
||||||
## Toolbox
|
|
||||||
|
|
||||||
### Tool Capabilities
|
|
||||||
|
|
||||||
| Tool | Type | Purpose |
|
|
||||||
|------|------|---------|
|
|
||||||
| TeamCreate | Team | Create team instance |
|
|
||||||
| TeamDelete | Team | Disband team |
|
|
||||||
| SendMessage | Communication | Send messages to workers |
|
|
||||||
| TaskCreate | Task | Create tasks for workers |
|
|
||||||
| TaskUpdate | Task | Update task status/owner/dependencies |
|
|
||||||
| TaskList | Task | List all tasks |
|
|
||||||
| TaskGet | Task | Get task details |
|
|
||||||
| Task | Agent | Spawn worker agents |
|
|
||||||
| AskUserQuestion | Interaction | Ask user for input |
|
|
||||||
| Read | File | Read session files |
|
|
||||||
| Write | File | Write session files |
|
|
||||||
| Bash | Shell | Execute shell commands |
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Message Types
|
|
||||||
|
|
||||||
| Type | Direction | Trigger | Description |
|
|
||||||
|------|-----------|---------|-------------|
|
|
||||||
| sprint_started | coordinator -> all | Sprint begins | Sprint initialization |
|
|
||||||
| gc_loop_trigger | coordinator -> developer | Review needs revision | GC loop iteration |
|
|
||||||
| sprint_complete | coordinator -> all | Sprint ends | Sprint summary |
|
|
||||||
| task_unblocked | coordinator -> worker | Task dependencies resolved | Task ready |
|
|
||||||
| error | coordinator -> all | Error occurred | Error notification |
|
|
||||||
| shutdown | coordinator -> all | Team disbands | Shutdown notice |
|
|
||||||
| conflict_detected | coordinator -> all | File conflict found | Conflict alert |
|
|
||||||
| conflict_resolved | coordinator -> all | Conflict resolved | Resolution notice |
|
|
||||||
| resource_locked | coordinator -> all | Resource acquired | Lock notification |
|
|
||||||
| resource_unlocked | coordinator -> all | Resource released | Unlock notification |
|
|
||||||
| resource_contention | coordinator -> all | Lock denied | Contention alert |
|
|
||||||
| rollback_initiated | coordinator -> all | Rollback started | Rollback notice |
|
|
||||||
| rollback_completed | coordinator -> all | Rollback succeeded | Success notice |
|
|
||||||
| rollback_failed | coordinator -> all | Rollback failed | Failure alert |
|
|
||||||
| dependency_mismatch | coordinator -> all | Dependency issue | Dependency alert |
|
|
||||||
| dependency_update_needed | coordinator -> all | Update available | Update notice |
|
|
||||||
| context_checkpoint_saved | coordinator -> all | Checkpoint created | Checkpoint notice |
|
|
||||||
| context_restored | coordinator -> all | Checkpoint restored | Restore notice |
|
|
||||||
| user_feedback_received | coordinator -> all | Feedback recorded | Feedback notice |
|
|
||||||
| tech_debt_identified | coordinator -> all | Tech debt found | Debt notice |
|
|
||||||
|
|
||||||
## Message Bus
|
|
||||||
|
|
||||||
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
|
|
||||||
|
|
||||||
**NOTE**: `team` must be **session ID** (e.g., `TID-project-2026-02-27`), NOT team name. Extract from `Session:` field in task description.
|
|
||||||
|
|
||||||
```
|
|
||||||
mcp__ccw-tools__team_msg({
|
|
||||||
operation: "log",
|
|
||||||
team: <session-id>, // e.g., "TID-project-2026-02-27", NOT "iterdev"
|
|
||||||
from: "coordinator",
|
|
||||||
to: "all",
|
|
||||||
type: <message-type>,
|
|
||||||
summary: "[coordinator] <summary>",
|
|
||||||
ref: <artifact-path>
|
|
||||||
})
|
})
|
||||||
```
|
```
|
||||||
|
|
||||||
**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from coordinator --to all --type <message-type> --summary \"[coordinator] ...\" --ref <artifact-path> --json")
```

#### Sprint Pipeline

```javascript
TaskCreate({ subject: "DESIGN-001: Technical design and task breakdown", description: `${taskDescription}\n\nSession: ${sessionFolder}\nOutput: ${sessionFolder}/design/design-001.md + task-breakdown.json`, activeForm: "Designing" })
TaskUpdate({ taskId: designId, owner: "architect" })

TaskCreate({ subject: "DEV-001: Implement the design", description: `Implement according to the design\n\nSession: ${sessionFolder}\nDesign: design/design-001.md\nBreakdown: design/task-breakdown.json`, activeForm: "Implementing" })
TaskUpdate({ taskId: devId, owner: "developer", addBlockedBy: [designId] })

// VERIFY-001 and REVIEW-001 parallel, both blockedBy DEV-001
TaskCreate({ subject: "VERIFY-001: Test verification", description: `Verify the implementation\n\nSession: ${sessionFolder}`, activeForm: "Verifying" })
TaskUpdate({ taskId: verifyId, owner: "tester", addBlockedBy: [devId] })

TaskCreate({ subject: "REVIEW-001: Code review", description: `Review the implementation\n\nSession: ${sessionFolder}\nDesign: design/design-001.md`, activeForm: "Reviewing" })
TaskUpdate({ taskId: reviewId, owner: "reviewer", addBlockedBy: [devId] })
```

#### Multi-Sprint Pipeline

```javascript
// Sprint 1 — created dynamically, subsequent sprints created after Sprint N completes
// Each sprint: DESIGN → DEV-1..N (incremental) → VERIFY → DEV-fix → REVIEW
```

### Phase 4: Coordination Loop + GC Control + Ledger Updates

> **Design principle (Stop-Wait)**: Model execution has no notion of elapsed time, so any form of polling wait is forbidden.
> - ❌ Forbidden: `while` loop + `sleep` + status polling
> - ✅ Instead: synchronous `Task(run_in_background: false)` calls; a worker's return is the phase-completion signal
>
> Spawn workers synchronously, phase by phase, following the task chain created in Phase 3.
> Worker prompts use the SKILL.md Coordinator Spawn Template.
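The Stop-Wait principle can be sketched as a synchronous spawn whose return value is the completion signal; `Task` below is a hypothetical stub standing in for the platform API, so the example is illustrative only.

```javascript
// Stop-Wait sketch: no polling loop; the synchronous call returning IS the
// phase-completion signal. Task() here is a hypothetical stub, not the real API.
function Task({ run_in_background, work }) {
  // run_in_background: false => execute synchronously and return the result
  return work()
}

const signal = Task({ run_in_background: false, work: () => "design_ready" })
console.log(signal) // design_ready
```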

| Received Message | Action |
|-----------------|--------|
| architect: design_ready | Read design → update ledger → unblock DEV |
| developer: dev_complete | Update ledger → unblock VERIFY + REVIEW |
| tester: verify_passed | Update ledger (test_pass_rate) |
| tester: verify_failed | Create DEV-fix task |
| tester: fix_required | Create DEV-fix task → assign developer |
| reviewer: review_passed | Update ledger (review_score) → mark complete |
| reviewer: review_revision | **GC loop** → create DEV-fix → REVIEW-next |
| reviewer: review_critical | **GC loop** → create DEV-fix → REVIEW-next |
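The dispatch implied by the received-message table reduces to a lookup; the message type names come from the table, while the handler map itself is an illustrative assumption, not the coordinator's actual implementation.

```javascript
// Hypothetical dispatch over the coordinator's received-message table.
// Only the message types are taken from the table; actions are short summaries.
const handlers = {
  design_ready: "unblock DEV",
  dev_complete: "unblock VERIFY + REVIEW",
  verify_failed: "create DEV-fix task",
  review_revision: "GC loop",
  review_critical: "GC loop",
}

function actionFor(msgType) {
  return handlers[msgType] ?? "unknown message"
}

console.log(actionFor("review_revision")) // GC loop
```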

#### Generator-Critic Loop Control (developer↔reviewer)

```javascript
if (msgType === 'review_revision' || msgType === 'review_critical') {
  const sharedMemory = JSON.parse(Read(`${sessionFolder}/shared-memory.json`))
  const gcRound = sharedMemory.gc_round || 0

  if (gcRound < sharedMemory.max_gc_rounds) {
    sharedMemory.gc_round = gcRound + 1
    Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))

    // Create DEV-fix task
    TaskCreate({
      subject: `DEV-fix-${gcRound + 1}: Revise code per review feedback`,
      description: `Review feedback:\n${reviewFeedback}\n\nSession: ${sessionFolder}\nReview: review/review-${reviewNum}.md`,
      activeForm: "Revising code"
    })
    TaskUpdate({ taskId: fixId, owner: "developer" })

    // Create REVIEW-next task
    TaskCreate({
      subject: `REVIEW-${reviewNum + 1}: Verify revision`,
      description: `Verify the DEV-fix-${gcRound + 1} revision\n\nSession: ${sessionFolder}`,
      activeForm: "Re-reviewing"
    })
    TaskUpdate({ taskId: nextReviewId, owner: "reviewer", addBlockedBy: [fixId] })

    // Update ledger
    updateLedger(sessionFolder, `DEV-fix-${gcRound + 1}`, { status: 'pending', gc_rounds: gcRound + 1 })

    mcp__ccw-tools__team_msg({
      operation: "log", team: teamName, from: "coordinator", to: "developer",
      type: "gc_loop_trigger",
      summary: `[coordinator] GC round ${gcRound + 1}/${sharedMemory.max_gc_rounds}: review requires revision`
    })
  } else {
    // Max rounds — accept with warning
    mcp__ccw-tools__team_msg({
      operation: "log", team: teamName, from: "coordinator", to: "all",
      type: "sprint_complete",
      summary: `[coordinator] GC loop exhausted (${gcRound} rounds), accepting current state`
    })
  }
}
```
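The round cap that bounds the Generator-Critic loop reduces to a single comparison; the values below are illustrative, not taken from a real session.

```javascript
// GC loop guard: continue revising only while the round counter is below the cap.
function shouldContinueGC(gcRound, maxRounds) {
  return gcRound < maxRounds
}

console.log(shouldContinueGC(2, 3), shouldContinueGC(3, 3)) // true false
```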
### Phase 5: Sprint Retrospective + Persist

```javascript
const ledger = JSON.parse(Read(`${sessionFolder}/task-ledger.json`))
const sharedMemory = JSON.parse(Read(`${sessionFolder}/shared-memory.json`))

// Record sprint learning
const sprintRetro = {
  sprint_id: ledger.sprint_id,
  velocity: ledger.metrics.velocity,
  gc_rounds: sharedMemory.gc_round,
  what_worked: [], // extracted from review/verify feedback
  what_failed: [], // extracted from failures
  patterns_learned: [] // derived from GC loop patterns
}
sharedMemory.sprint_history.push(sprintRetro)
Write(`${sessionFolder}/shared-memory.json`, JSON.stringify(sharedMemory, null, 2))

SendMessage({
  content: `## [coordinator] Sprint Complete

**Requirement**: ${taskDescription}
**Pipeline**: ${selectedPipeline}
**Completed**: ${ledger.metrics.completed}/${ledger.metrics.total}
**GC Rounds**: ${sharedMemory.gc_round}

### Task Ledger
${ledger.tasks.map(t => `- ${t.id}: ${t.status} ${t.review_score ? '(Review: ' + t.review_score + '/10)' : ''}`).join('\n')}`,
  summary: `[coordinator] Sprint complete: ${ledger.metrics.completed}/${ledger.metrics.total}`
})

AskUserQuestion({
  questions: [{
    question: "Sprint complete. Next step:",
    header: "Next",
    multiSelect: false,
    options: [
      { label: "Next Sprint", description: "Continue iterating (carrying forward learned memory)" },
      { label: "New Requirement", description: "Start a new development requirement" },
      { label: "Shut Down Team", description: "Shut down all teammates" }
    ]
  }]
})
```
## Error Handling

| Scenario | Resolution |
|----------|------------|
| GC loop exceeds limit (3 rounds) | Accept current code, record to sprint_history |
| Velocity below 50% | Alert user, suggest scope reduction |
| Task ledger corrupted | Rebuild from TaskList state |
| Design rejected 3+ times | Coordinator intervenes and simplifies the design |
| Tests continuously fail | Create DEV-fix for developer |
| **Phase 1 additions** | |
| Conflict detected | Update conflict_info, notify coordinator, create DEV-fix task |
| Resource lock timeout (5 min) | Force-release the lock, notify holder and coordinator |
| Rollback requested | Validate snapshot_id, execute rollback_procedure, notify all roles |
| Deadlock detected | Abort the youngest task, release its locks, notify coordinator |
| Dependency mismatch | Log mismatch, block task until resolved |
| Checkpoint restore failure | Log error, worker restarts from Phase 1 |
| **Phase 3 additions** | |
| User feedback critical | Create fix task immediately, elevate priority |
| Tech debt exceeds threshold | Generate debt report, suggest a dedicated sprint |
| Feedback task link fails | Retain feedback entry, mark unlinked, manual follow-up |
# Developer Role

Code implementer. Responsible for implementing code according to the design and delivering incrementally. Acts as the Generator in the Generator-Critic loop (paired with reviewer).

## Identity

- **Name**: `developer` | **Tag**: `[developer]`
- **Task Prefix**: `DEV-*`
- **Responsibility**: Code generation (Code Implementation)
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[developer]`

## Boundaries

### MUST

- Only process `DEV-*` prefixed tasks
- All output must carry the `[developer]` identifier
- Phase 2: read shared-memory.json + design; Phase 5: write implementation_context
- For fix tasks (DEV-fix-*): reference the review feedback
- Work strictly within the code implementation responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Execute tests, perform code review, or design architecture
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit the `[developer]` identifier in any output
---

## Toolbox

### Tool Capabilities

| Tool | Type | Purpose |
|------|------|---------|
| Task | Agent | Spawn code-developer for implementation |
| Read | File | Read design, breakdown, shared memory |
| Write | File | Write dev-log |
| Edit | File | Modify code files |
| Glob | Search | Find review files |
| Bash | Shell | Execute syntax check, git commands |
---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `dev_complete` | developer -> coordinator | Implementation done | Implementation completed |
| `dev_progress` | developer -> coordinator | Incremental progress | Progress update |
| `error` | developer -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

**NOTE**: `team` must be **session ID** (e.g., `TID-project-2026-02-27`), NOT team name. Extract from `Session:` field in task description.

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>, // e.g., "TID-project-2026-02-27", NOT "iterdev"
  from: "developer",
  to: "coordinator",
  type: <message-type>,
  summary: "[developer] DEV complete: <task-subject>",
  ref: <dev-log-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from developer --to coordinator --type <message-type> --summary \"[developer] DEV complete\" --ref <dev-log-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `DEV-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Context Loading

**Inputs**:

| Input | Source | Required |
|-------|--------|----------|
| Session path | Task description (Session: <path>) | Yes |
| Shared memory | <session-folder>/shared-memory.json | Yes |
| Design document | <session-folder>/design/design-001.md | For non-fix tasks |
| Task breakdown | <session-folder>/design/task-breakdown.json | For non-fix tasks |
| Review feedback | <session-folder>/review/*.md | For fix tasks |
| Wisdom | <session-folder>/wisdom/ | No |

**Loading steps**:

1. Extract session path from task description
2. Read shared-memory.json

```
Read(<session-folder>/shared-memory.json)
```

3. Check if this is a fix task (GC loop):

| Task Type | Detection | Loading |
|-----------|-----------|---------|
| Fix task | Subject contains "fix" | Read latest review file |
| Normal task | Subject does not contain "fix" | Read design + breakdown |

4. Load previous implementation context from shared memory:

```
prevContext = sharedMemory.implementation_context || []
```
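The task discovery filter described above can be sketched with stub data; `TaskList`/`TaskGet` are platform APIs, and the task objects below are made up for illustration.

```javascript
// Phase 1 discovery filter sketch: DEV-* prefix, owner match, pending, unblocked.
const tasks = [
  { id: "t1", subject: "DEV-001: implement", owner: "developer", status: "pending", blockedBy: [] },
  { id: "t2", subject: "REVIEW-001: review", owner: "reviewer", status: "pending", blockedBy: [] },
  { id: "t3", subject: "DEV-002: implement", owner: "developer", status: "pending", blockedBy: ["t1"] },
]

const myTasks = tasks.filter(t =>
  t.subject.startsWith("DEV-") && t.owner === "developer" &&
  t.status === "pending" && t.blockedBy.length === 0
)

console.log(JSON.stringify(myTasks.map(t => t.id))) // ["t1"]
```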
### Phase 3: Code Implementation

**Implementation strategy selection**:

| Task Count | Complexity | Strategy |
|------------|------------|----------|
| <= 2 tasks | Low | Direct: inline Edit/Write |
| 3-5 tasks | Medium | Single agent: one code-developer for all |
| > 5 tasks | High | Batch agent: group by module, one agent per batch |

#### Fix Task Mode (GC Loop)

Focus on review feedback items:

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Fix review issues",
  prompt: `Fix the following code review issues:

<review-feedback>

Focus on:
1. Critical issues (must fix)

Do NOT change code that wasn't flagged.
Maintain existing code style and patterns.`
})
```

#### Normal Task Mode

For each task in breakdown:

1. Read target files (if exist)
2. Apply changes using Edit or Write
3. Follow execution order from breakdown

For complex tasks (>3), delegate to code-developer:

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement <task-count> tasks",
  prompt: `## Design
<design-content>

## Task Breakdown
<breakdown-json>

## Previous Context
<prev-context>

Implement each task following the design. Complete tasks in the specified execution order.`
})
```
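The strategy table above reduces to two thresholds; this helper is an illustrative sketch, not part of the role's actual API.

```javascript
// Map breakdown size to the implementation strategy from the table above.
function strategyFor(taskCount) {
  if (taskCount <= 2) return "direct"        // inline Edit/Write
  if (taskCount <= 5) return "single-agent"  // one code-developer for all
  return "batch-agent"                       // group by module, one agent per batch
}

console.log(strategyFor(1), strategyFor(4), strategyFor(7)) // direct single-agent batch-agent
```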
### Phase 4: Self-Validation

**Validation checks**:

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | `tsc --noEmit` or equivalent | No errors |
| File existence | Verify all planned files exist | All files present |
| Import resolution | Check no broken imports | All imports resolve |

**Syntax check command**:

```
Bash("npx tsc --noEmit 2>&1 || python -m py_compile *.py 2>&1 || true")
```

**Auto-fix**: If validation fails, attempt auto-fix (max 2 attempts), then report remaining issues.

**Dev log output** (`<session-folder>/code/dev-log.md`):

```markdown
# Dev Log — <task-subject>

**Changed Files**: <count>
**Syntax Clean**: <true/false>
**Fix Task**: <true/false>

## Files Changed
- <file-1>
- <file-2>
```
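Interpreting the combined syntax-check output can be as simple as scanning for the word `error`; the sample output line below is hypothetical.

```javascript
// Sketch: decide pass/fail from the syntax-check command's combined output.
const output = "src/a.ts(3,1): error TS2304: Cannot find name 'foo'."  // hypothetical sample
const hasSyntaxErrors = /\berror\b/.test(output)
console.log(hasSyntaxErrors) // true
```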
### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

1. **Update shared memory**:

```
sharedMemory.implementation_context.push({
  task: <task-subject>,
  changed_files: <file-list>,
  is_fix: <is-fix-task>,
  syntax_clean: <true/false>
})
Write(<session-folder>/shared-memory.json, JSON.stringify(sharedMemory, null, 2))
```

2. **Log and send message**:

```
mcp__ccw-tools__team_msg({
  operation: "log", team: <session-id>, from: "developer", to: "coordinator", // team = session ID, e.g., "TID-project-2026-02-27"
  type: "dev_complete",
  summary: "[developer] <Fix|Implementation> complete: <file-count> files changed",
  ref: <dev-log-path>
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: `## [developer] <Fix|Implementation> Complete

**Task**: <task-subject>
**Changed Files**: <count>
**Syntax Clean**: <true/false>
<if-fix-task>**GC Round**: <gc-round></if>

### Files
- <file-1>
- <file-2>`,
  summary: "[developer] <file-count> files <fixed|implemented>"
})
```

3. **Mark task complete**:

```
TaskUpdate({ taskId: <task-id>, status: "completed" })
```

4. **Loop to Phase 1** for next task

---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No DEV-* tasks available | Idle, wait for coordinator assignment |
| Design not found | Implement based on task description |
| Syntax errors after implementation | Attempt auto-fix, report remaining errors |
| Review feedback unclear | Implement best interpretation, note in dev-log |
| Code-developer agent fails | Retry once, then implement inline |
| Context/Plan file not found | Notify coordinator, request location |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
| Unexpected error | Log error via team_msg, report to coordinator |
# Reviewer Role

Code reviewer. Responsible for multi-dimensional review, quality scoring, and improvement suggestions. Acts as the Critic in the Generator-Critic loop (paired with developer).

## Identity

- **Name**: `reviewer` | **Tag**: `[reviewer]`
- **Task Prefix**: `REVIEW-*`
- **Responsibility**: Read-only analysis (Code Review)
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[reviewer]`

## Boundaries

### MUST

- Only process `REVIEW-*` prefixed tasks
- All output must carry the `[reviewer]` identifier
- Phase 2: read shared-memory.json + design; Phase 5: write review_feedback_trends
- Mark each issue with severity (CRITICAL/HIGH/MEDIUM/LOW)
- Provide a quality score (1-10)
- Work strictly within the code review responsibility scope

### MUST NOT

- Execute work outside this role's responsibility scope
- Write implementation code, design architecture, or execute tests
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's responsibility
- Omit the `[reviewer]` identifier in any output

---
## Toolbox

### Tool Capabilities

| Tool | Type | Purpose |
|------|------|---------|
| Read | File | Read design, shared memory, file contents |
| Write | File | Write review reports |
| Bash | Shell | Git diff, CLI-assisted review |

---
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `review_passed` | reviewer -> coordinator | No critical issues, score >= 7 | Review passed |
| `review_revision` | reviewer -> coordinator | Issues found, score < 7 | Revision needed (triggers GC) |
| `review_critical` | reviewer -> coordinator | Critical issues found | Critical issues (triggers GC) |
| `error` | reviewer -> coordinator | Processing failure | Error report |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

**NOTE**: `team` must be **session ID** (e.g., `TID-project-2026-02-27`), NOT team name. Extract from `Session:` field in task description.

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>, // e.g., "TID-project-2026-02-27", NOT "iterdev"
  from: "reviewer",
  to: "coordinator",
  type: <message-type>,
  summary: "[reviewer] REVIEW complete: <task-subject>",
  ref: <review-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from reviewer --to coordinator --type <message-type> --summary \"[reviewer] REVIEW complete\" --ref <review-path> --json")
```

---

## Execution (5-Phase)
### Phase 1: Task Discovery
|
### Phase 1: Task Discovery
|
||||||
|
|
||||||
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
|
```javascript
|
||||||
|
const tasks = TaskList()
|
||||||
Standard task discovery flow: TaskList -> filter by prefix `REVIEW-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
|
const myTasks = tasks.filter(t =>
|
||||||
|
t.subject.startsWith('REVIEW-') && t.owner === 'reviewer' &&
|
||||||
|
t.status === 'pending' && t.blockedBy.length === 0
|
||||||
|
)
|
||||||
|
if (myTasks.length === 0) return
|
||||||
|
const task = TaskGet({ taskId: myTasks[0].id })
|
||||||
|
TaskUpdate({ taskId: task.id, status: 'in_progress' })
|
||||||
|
```
|
||||||
|
|
||||||
### Phase 2: Context Loading

```javascript
const sessionMatch = task.description.match(/Session:\s*([^\n]+)/)
const sessionFolder = sessionMatch?.[1]?.trim()

const memoryPath = `${sessionFolder}/shared-memory.json`
let sharedMemory = {}
try { sharedMemory = JSON.parse(Read(memoryPath)) } catch {}

// Read design for requirements alignment
let design = null
try { design = Read(`${sessionFolder}/design/design-001.md`) } catch {}

// Get changed files
const changedFiles = Bash(`git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached`).split('\n').filter(Boolean)

// Read file contents (limit to 20 files)
const fileContents = {}
for (const file of changedFiles.slice(0, 20)) {
  try { fileContents[file] = Read(file) } catch {}
}

// Previous review trends
const prevTrends = sharedMemory.review_feedback_trends || []
```
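The `Session:` extraction above is a single regex; it can be exercised in isolation. A minimal sketch (the `extractSessionFolder` helper name is illustrative, not part of the skill's API):

```javascript
// Pull the session folder out of a task description such as
// "Task: REVIEW-001\nSession: .claude/sessions/TID-demo".
// Mirrors the Phase 2 regex; returns null when no Session field exists.
function extractSessionFolder(description) {
  const match = description.match(/Session:\s*([^\n]+)/)
  return match?.[1]?.trim() ?? null
}

console.log(extractSessionFolder('Task: REVIEW-001\nSession: .claude/sessions/TID-demo'))
```

The `trim()` matters because the captured group keeps any trailing whitespace before the newline.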
### Phase 3: Multi-Dimensional Review

```javascript
// Review dimensions:
// 1. Correctness — logic correctness, boundary handling
// 2. Completeness — coverage of design requirements
// 3. Maintainability — readability, code style, DRY
// 4. Security — security vulnerabilities, input validation

// Optional: CLI-assisted review
Bash(`ccw cli -p "PURPOSE: Code review for correctness and security
TASK: Review changes in: ${changedFiles.join(', ')}
MODE: analysis
CONTEXT: @${changedFiles.join(' @')}
EXPECTED: Issues with severity (CRITICAL/HIGH/MEDIUM/LOW) and file:line
CONSTRAINTS: Focus on correctness and security" --tool gemini --mode analysis`, { run_in_background: true })

const reviewNum = task.subject.match(/REVIEW-(\d+)/)?.[1] || '001'
const outputPath = `${sessionFolder}/review/review-${reviewNum}.md`

// Scoring
const score = calculateScore(findings)
const criticalCount = findings.filter(f => f.severity === 'CRITICAL').length
const highCount = findings.filter(f => f.severity === 'HIGH').length

const reviewContent = `# Code Review — Round ${reviewNum}

**Files Reviewed**: ${changedFiles.length}
**Quality Score**: ${score}/10
**Critical Issues**: ${criticalCount}
**High Issues**: ${highCount}

## Findings

${findings.map((f, i) => `### ${i + 1}. [${f.severity}] ${f.title}

**File**: ${f.file}:${f.line}
**Dimension**: ${f.dimension}
**Description**: ${f.description}
**Suggestion**: ${f.suggestion}
`).join('\n')}

## Scoring Breakdown

| Dimension | Score | Notes |
|-----------|-------|-------|
| Correctness | ${scores.correctness}/10 | ${scores.correctnessNotes} |
| Completeness | ${scores.completeness}/10 | ${scores.completenessNotes} |
| Maintainability | ${scores.maintainability}/10 | ${scores.maintainabilityNotes} |
| Security | ${scores.security}/10 | ${scores.securityNotes} |
| **Overall** | **${score}/10** | |

## Signal

${criticalCount > 0 ? '**CRITICAL** — Critical issues must be fixed before merge'
  : score < 7 ? '**REVISION_NEEDED** — Quality below threshold (7/10)'
  : '**APPROVED** — Code meets quality standards'}

${design ? `## Design Alignment\n${designAlignmentNotes}` : ''}
`

Write(outputPath, reviewContent)
```
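The report relies on a `calculateScore` helper that is not defined inline. A minimal sketch, assuming per-dimension scores of 1-10 and the skill's dimension weighting of Correctness 30%, Completeness 25%, Maintainability 25%, Security 20% (the helper's shape is an assumption):

```javascript
// Weighted overall score from per-dimension scores (each 1-10).
// Weights follow the skill's scoring breakdown: 30/25/25/20.
const WEIGHTS = { correctness: 0.30, completeness: 0.25, maintainability: 0.25, security: 0.20 }

function calculateScore(scores) {
  const total = Object.entries(WEIGHTS)
    .reduce((sum, [dim, w]) => sum + w * (scores[dim] ?? 0), 0)
  return Math.round(total * 10) / 10  // one decimal place
}

console.log(calculateScore({ correctness: 8, completeness: 7, maintainability: 9, security: 6 }))
```

Missing dimensions default to 0, so an incomplete `scores` object drags the overall score down rather than inflating it.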
### Phase 4: Trend Analysis

```javascript
// Compare with previous reviews to detect trends
const currentIssueTypes = findings.map(f => f.dimension)
const trendNote = prevTrends.length > 0
  ? `Recurring: ${findRecurring(prevTrends, currentIssueTypes).join(', ')}`
  : 'First review'
```
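`findRecurring` is likewise left to the implementation. A sketch under the assumption that each trend entry carries the `dimensions` array that Phase 5 pushes into shared memory:

```javascript
// Dimensions that appear both in previous trend entries and in the current findings.
function findRecurring(prevTrends, currentIssueTypes) {
  const seen = new Set(prevTrends.flatMap(t => t.dimensions || []))
  return [...new Set(currentIssueTypes)].filter(d => seen.has(d))
}

const prev = [{ review_id: 'review-001', dimensions: ['security', 'correctness'] }]
console.log(findRecurring(prev, ['security', 'maintainability']))
```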
### Phase 5: Report to Coordinator + Shared Memory Write

```javascript
sharedMemory.review_feedback_trends = sharedMemory.review_feedback_trends || []
sharedMemory.review_feedback_trends.push({
  review_id: `review-${reviewNum}`,
  score: score,
  critical: criticalCount,
  high: highCount,
  dimensions: findings.map(f => f.dimension),
  gc_round: sharedMemory.gc_round || 0
})
Write(memoryPath, JSON.stringify(sharedMemory, null, 2))

const msgType = criticalCount > 0 ? "review_critical"
  : score < 7 ? "review_revision"
  : "review_passed"

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName, from: "reviewer", to: "coordinator", // team = session ID, e.g., "TID-project-2026-02-27"
  type: msgType,
  summary: `[reviewer] Review ${msgType}: score=${score}/10, ${criticalCount}C/${highCount}H`,
  ref: outputPath
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: `## [reviewer] Code Review Results

**Task**: ${task.subject}
**Score**: ${score}/10
**Signal**: ${msgType.toUpperCase()}
**Critical**: ${criticalCount}, **High**: ${highCount}
**Output**: ${outputPath}

### Top Issues
${findings.filter(f => ['CRITICAL', 'HIGH'].includes(f.severity)).slice(0, 5).map(f =>
  `- **[${f.severity}]** ${f.title} (${f.file}:${f.line})`
).join('\n')}`,
  summary: `[reviewer] ${msgType}: ${score}/10`
})

TaskUpdate({ taskId: task.id, status: 'completed' })
```
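The signal block in the report template is a three-way decision on the critical count and score; extracted as a pure function for clarity (the function name is illustrative):

```javascript
// Signal selection mirroring the report template: critical issues dominate,
// then the 7/10 quality threshold, otherwise approved.
function reviewSignal(criticalCount, score) {
  if (criticalCount > 0) return 'CRITICAL'
  if (score < 7) return 'REVISION_NEEDED'
  return 'APPROVED'
}

console.log(reviewSignal(0, 8.2))
```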
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No REVIEW-* tasks | Idle, wait for coordinator assignment |
| No changed files | Review files referenced in design |
| CLI review fails | Fall back to inline analysis |
| All issues LOW severity | Score high, approve |
| Design not found | Review against general quality standards |
@@ -1,253 +1,146 @@

# Role: tester

Test validator. Responsible for test execution, fix cycles, and regression detection.

## Role Identity

- **Name**: `tester`
- **Task Prefix**: `VERIFY-*`
- **Responsibility**: Validation (test verification)
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[tester]`

## Role Boundaries

### MUST

- Only process `VERIFY-*` prefixed tasks
- All output must carry the `[tester]` identifier
- Read shared-memory.json in Phase 2; write test_patterns in Phase 5

### MUST NOT

- Write implementation code, design architecture, or perform code review
- Communicate directly with other workers
- Create tasks for other roles
## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `verify_passed` | tester → coordinator | All tests pass | Verification passed |
| `verify_failed` | tester → coordinator | Tests fail | Verification failed |
| `fix_required` | tester → coordinator | Issues found needing fix | Fix required |
| `error` | tester → coordinator | Environment failure | Error report |
## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('VERIFY-') && t.owner === 'tester' &&
  t.status === 'pending' && t.blockedBy.length === 0
)
if (myTasks.length === 0) return

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Context Loading

```javascript
const sessionMatch = task.description.match(/Session:\s*([^\n]+)/)
const sessionFolder = sessionMatch?.[1]?.trim()

const memoryPath = `${sessionFolder}/shared-memory.json`
let sharedMemory = {}
try { sharedMemory = JSON.parse(Read(memoryPath)) } catch {}

// Detect test framework and test command
const testCommand = detectTestCommand()
const changedFiles = Bash(`git diff --name-only`).split('\n').filter(Boolean)
```
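`detectTestCommand` is not defined in the snippet. One plausible sketch keyed on marker files, following the detection hints elsewhere in this role (package.json scripts, pytest.ini) and the common commands `npm test`, `pytest`, `go test ./...`, `cargo test`; the injected `exists` predicate keeps the sketch testable, and the exact rules are an assumption:

```javascript
// Pick a test command from marker files in the repo root.
// `exists(path)` is injected so the sketch needs no filesystem access.
function detectTestCommand(exists) {
  if (exists('package.json')) return 'npm test'
  if (exists('pytest.ini')) return 'pytest'
  if (exists('go.mod')) return 'go test ./...'
  if (exists('Cargo.toml')) return 'cargo test'
  return null
}

console.log(detectTestCommand(f => f === 'go.mod'))
```

A `null` result should route to the "Test command not found" branch of Error Handling.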
### Phase 3: Test Execution + Fix Cycle

```javascript
let iteration = 0
const MAX_ITERATIONS = 5
let lastResult = null
let passRate = 0

while (iteration < MAX_ITERATIONS) {
  lastResult = Bash(`${testCommand} 2>&1 || true`)
  passRate = parsePassRate(lastResult)

  if (passRate >= 0.95) break

  if (iteration < MAX_ITERATIONS - 1) {
    // Delegate fix to code-developer
    Task({
      subagent_type: "code-developer",
      run_in_background: false,
      description: `Fix test failures (iteration ${iteration + 1})`,
      prompt: `Test failures:\n${lastResult.substring(0, 3000)}\n\nFix failing tests. Changed files: ${changedFiles.join(', ')}`
    })
  }
  iteration++
}

// Save verification results
const verifyNum = task.subject.match(/VERIFY-(\d+)/)?.[1] || '001'
const resultData = {
  verify_id: `verify-${verifyNum}`,
  pass_rate: passRate,
  iterations: iteration,
  passed: passRate >= 0.95,
  timestamp: new Date().toISOString()
}
Write(`${sessionFolder}/verify/verify-${verifyNum}.json`, JSON.stringify(resultData, null, 2))
```
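`parsePassRate` is also left to the implementation. A sketch assuming jest/pytest-style `N passed` / `M failed` summary lines; real runner output varies, so the parsing rules here are an assumption:

```javascript
// Parse "19 passed, 1 failed" style summaries into a 0-1 pass rate.
// Returns 0 when no counts can be found, which forces another fix iteration.
function parsePassRate(output) {
  const passed = Number(output.match(/(\d+)\s+passed/)?.[1] ?? 0)
  const failed = Number(output.match(/(\d+)\s+fail(?:ed|ing)?/)?.[1] ?? 0)
  const total = passed + failed
  return total === 0 ? 0 : passed / total
}

console.log(parsePassRate('Tests: 19 passed, 1 failed, 20 total'))
```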
### Phase 4: Regression Check

```javascript
// Run the full test suite for regression
const regressionResult = Bash(`${testCommand} --all 2>&1 || true`)
const regressionPassed = !regressionResult.includes('FAIL')
resultData.regression_passed = regressionPassed
```

### Phase 5: Report to Coordinator + Shared Memory Write

```javascript
sharedMemory.test_patterns = sharedMemory.test_patterns || []
if (passRate >= 0.95) {
  sharedMemory.test_patterns.push(`verify-${verifyNum}: passed in ${iteration} iterations`)
}
Write(memoryPath, JSON.stringify(sharedMemory, null, 2))

const msgType = resultData.passed ? "verify_passed" : (iteration >= MAX_ITERATIONS ? "fix_required" : "verify_failed")

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName, from: "tester", to: "coordinator", // team = session ID, e.g., "TID-project-2026-02-27"
  type: msgType,
  summary: `[tester] ${msgType}: pass_rate=${(passRate*100).toFixed(1)}%, iterations=${iteration}`,
  ref: `${sessionFolder}/verify/verify-${verifyNum}.json`
})

SendMessage({
  type: "message", recipient: "coordinator",
  content: `## [tester] Verification Results\n\n**Pass Rate**: ${(passRate*100).toFixed(1)}%\n**Iterations**: ${iteration}/${MAX_ITERATIONS}\n**Regression**: ${resultData.regression_passed ? '✅' : '❌'}\n**Status**: ${resultData.passed ? '✅ PASSED' : '❌ NEEDS FIX'}`,
  summary: `[tester] ${resultData.passed ? 'PASSED' : 'FAILED'}: ${(passRate*100).toFixed(1)}%`
})

TaskUpdate({ taskId: task.id, status: 'completed' })
```
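The message-type selection in Phase 5 reads more clearly as a small pure function (the function name is illustrative):

```javascript
// verify_passed when the threshold is met; otherwise fix_required if the
// fix budget is exhausted, verify_failed if iterations remain.
function verifyMsgType(passed, iteration, maxIterations) {
  if (passed) return 'verify_passed'
  return iteration >= maxIterations ? 'fix_required' : 'verify_failed'
}

console.log(verifyMsgType(false, 5, 5))
```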
## Error Handling

| Scenario | Resolution |
|----------|------------|
| No VERIFY-* tasks | Idle, wait for coordinator assignment |
| Test command not found | Try common commands (npm test, pytest, vitest) |
| Max iterations exceeded | Report fix_required to coordinator |
| Test environment broken | Report error, suggest manual fix |
.claude/skills/team-lifecycle-v2/SKILL.md (new file, 573 lines)
@@ -0,0 +1,573 @@
---
name: team-lifecycle-v2
description: Unified team skill for full lifecycle - spec/impl/test. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team lifecycle".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Lifecycle

Unified team skill covering specification, implementation, testing, and review. All team members invoke this skill with `--role=xxx` to route to role-specific execution.
## Architecture Overview

```
┌───────────────────────────────────────────────────┐
│ Skill(skill="team-lifecycle-v2")                  │
│ args="<task description>" or args="--role=xxx"    │
└───────────────────┬───────────────────────────────┘
                    │ Role Router
                    │
         ┌──── --role present? ────┐
         │ NO                      │ YES
         ↓                         ↓
  Orchestration Mode          Role Dispatch
  (auto → coordinator)        (route to role.md)
                                   │
     ┌────┴────┬───────┬───────┬───────┬───────┬───────┬───────┐
     ↓         ↓       ↓       ↓       ↓       ↓       ↓       ↓
┌──────────┐┌───────┐┌──────┐┌──────────┐┌───────┐┌────────┐┌──────┐┌────────┐
│coordinator││analyst││writer││discussant││planner││executor││tester││reviewer│
│  roles/  ││roles/ ││roles/││  roles/  ││roles/ ││ roles/ ││roles/││ roles/ │
└──────────┘└───────┘└──────┘└──────────┘└───────┘└────────┘└──────┘└────────┘
                    ↑                         ↑
                    on-demand by coordinator
              ┌──────────┐   ┌─────────┐
              │ explorer │   │architect│
              │ (service)│   │(consult)│
              └──────────┘   └─────────┘
```
## Command Architecture

Each role is organized as a folder with a `role.md` orchestrator and optional `commands/` for delegation:

```
roles/
├── coordinator/
│   ├── role.md              # Orchestrator (Phase 1/5 inline, Phase 2-4 delegate)
│   └── commands/
│       ├── dispatch.md      # Task chain creation (3 modes)
│       └── monitor.md       # Coordination loop + message routing
├── analyst/
│   ├── role.md
│   └── commands/
├── writer/
│   ├── role.md
│   └── commands/
│       └── generate-doc.md  # Multi-CLI document generation (4 doc types)
├── discussant/
│   ├── role.md
│   └── commands/
│       └── critique.md      # Multi-perspective CLI critique
├── planner/
│   ├── role.md
│   └── commands/
│       └── explore.md       # Multi-angle codebase exploration
├── executor/
│   ├── role.md
│   └── commands/
│       └── implement.md     # Multi-backend code implementation
├── tester/
│   ├── role.md
│   └── commands/
│       └── validate.md      # Test-fix cycle
├── reviewer/
│   ├── role.md
│   └── commands/
│       ├── code-review.md   # 4-dimension code review
│       └── spec-quality.md  # 5-dimension spec quality check
├── explorer/                # Service role (on-demand)
│   └── role.md              # Multi-strategy code search & pattern discovery
├── architect/               # Consulting role (on-demand)
│   ├── role.md              # Multi-mode architecture assessment
│   └── commands/
│       └── assess.md        # Mode-specific assessment strategies
├── fe-developer/            # Frontend pipeline role
│   └── role.md              # Frontend component/page implementation
└── fe-qa/                   # Frontend pipeline role
    ├── role.md              # 5-dimension frontend QA + GC loop
    └── commands/
        └── pre-delivery-checklist.md
```

**Design principle**: role.md keeps Phase 1 (Task Discovery) and Phase 5 (Report) inline. Phases 2-4 either stay inline (simple logic) or delegate to `commands/*.md` via `Read("commands/xxx.md")` when they involve subagent delegation, CLI fan-out, or complex strategies.

**Command files** are self-contained: each includes Strategy, Execution Steps, and Error Handling. Any subagent can `Read()` a command file and execute it independently.
## Role Router

### Input Parsing

Parse `$ARGUMENTS` to extract `--role`:

```javascript
const args = "$ARGUMENTS"
const roleMatch = args.match(/--role[=\s]+(\w+)/)
const teamName = args.match(/--team[=\s]+([\w-]+)/)?.[1] || "lifecycle"

if (!roleMatch) {
  // No --role: Orchestration Mode → auto route to coordinator
  // See "Orchestration Mode" section below
}

const role = roleMatch ? roleMatch[1] : "coordinator"
```
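The two regexes accept both `--role=xxx` and `--role xxx` forms. A self-contained check (the `parseArgs` wrapper is illustrative, not part of the skill):

```javascript
// Same patterns as the router; supports "--role=x" and "--role x",
// and falls back to the "lifecycle" team name when --team is absent.
function parseArgs(args) {
  const role = args.match(/--role[=\s]+(\w+)/)?.[1] ?? null
  const team = args.match(/--team[=\s]+([\w-]+)/)?.[1] ?? 'lifecycle'
  return { role, team }
}

console.log(parseArgs('--role=tester --team my-team fix the login flow'))
```

Note that `\w+` stops at a hyphen, which is why the team pattern uses `[\w-]+` instead.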
### Role Dispatch

```javascript
const VALID_ROLES = {
  "coordinator":  { file: "roles/coordinator/role.md",  prefix: null },
  "analyst":      { file: "roles/analyst/role.md",      prefix: "RESEARCH" },
  "writer":       { file: "roles/writer/role.md",       prefix: "DRAFT" },
  "discussant":   { file: "roles/discussant/role.md",   prefix: "DISCUSS" },
  "planner":      { file: "roles/planner/role.md",      prefix: "PLAN" },
  "executor":     { file: "roles/executor/role.md",     prefix: "IMPL" },
  "tester":       { file: "roles/tester/role.md",       prefix: "TEST" },
  "reviewer":     { file: "roles/reviewer/role.md",     prefix: ["REVIEW", "QUALITY"] },
  "explorer":     { file: "roles/explorer/role.md",     prefix: "EXPLORE", type: "service" },
  "architect":    { file: "roles/architect/role.md",    prefix: "ARCH",    type: "consulting" },
  "fe-developer": { file: "roles/fe-developer/role.md", prefix: "DEV-FE",  type: "frontend-pipeline" },
  "fe-qa":        { file: "roles/fe-qa/role.md",        prefix: "QA-FE",   type: "frontend-pipeline" }
}

if (!VALID_ROLES[role]) {
  throw new Error(`Unknown role: ${role}. Available: ${Object.keys(VALID_ROLES).join(', ')}`)
}

// Read and execute role-specific logic
Read(VALID_ROLES[role].file)
// → Execute the 5-phase process defined in that file
```
### Orchestration Mode (triggered with no arguments)

When invoked without `--role`, the skill automatically enters coordinator orchestration mode. The user only needs to pass a task description to trigger the full pipeline.

**How it is triggered**:

```javascript
// User invocation (no --role) — auto-routed to coordinator
Skill(skill="team-lifecycle-v2", args="task description")

// Equivalent to
Skill(skill="team-lifecycle-v2", args="--role=coordinator task description")
```

**Flow**:

```javascript
if (!roleMatch) {
  // Orchestration Mode: auto-route to coordinator
  // The coordinator role.md will execute:
  // Phase 1: Requirement clarification
  // Phase 2: TeamCreate + spawn all worker agents
  //   Each agent prompt contains a Skill(args="--role=xxx") callback
  // Phase 3: Create the task chain
  // Phase 4: Monitoring and coordination loop
  // Phase 5: Report results

  const role = "coordinator"
  Read(VALID_ROLES[role].file)
}
```

**Full call chain**:

```
User: Skill(args="task description")
  │
  ├─ SKILL.md: no --role → Orchestration Mode → read coordinator role.md
  │
  ├─ coordinator Phase 2: TeamCreate + spawn workers
  │    Each worker prompt contains a Skill(args="--role=xxx") callback
  │
  ├─ coordinator Phase 3: dispatch the task chain
  │
  ├─ worker receives a task → Skill(args="--role=xxx") → SKILL.md Role Router → role.md
  │    Each worker automatically gets:
  │    ├─ Role definition (role.md: identity, boundaries, message types)
  │    ├─ Available commands (commands/*.md)
  │    └─ Execution logic (5-phase process)
  │
  └─ coordinator Phase 4-5: monitoring → result reporting
```

### Available Roles

| Role | Task Prefix | Responsibility | Role File |
|------|-------------|----------------|-----------|
| `coordinator` | N/A | Pipeline orchestration, requirement clarification, task dispatch | [roles/coordinator/role.md](roles/coordinator/role.md) |
| `analyst` | RESEARCH-* | Seed analysis, codebase exploration, context gathering | [roles/analyst/role.md](roles/analyst/role.md) |
| `writer` | DRAFT-* | Product Brief / PRD / Architecture / Epics generation | [roles/writer/role.md](roles/writer/role.md) |
| `discussant` | DISCUSS-* | Multi-perspective critique, consensus building | [roles/discussant/role.md](roles/discussant/role.md) |
| `planner` | PLAN-* | Multi-angle exploration, structured planning | [roles/planner/role.md](roles/planner/role.md) |
| `executor` | IMPL-* | Code implementation following plans | [roles/executor/role.md](roles/executor/role.md) |
| `tester` | TEST-* | Adaptive test-fix cycles, quality gates | [roles/tester/role.md](roles/tester/role.md) |
| `reviewer` | `REVIEW-*` + `QUALITY-*` | Code review + Spec quality validation (auto-switch by prefix) | [roles/reviewer/role.md](roles/reviewer/role.md) |
| `explorer` | EXPLORE-* | Code search, pattern discovery, dependency tracing (service role, on-demand) | [roles/explorer/role.md](roles/explorer/role.md) |
| `architect` | ARCH-* | Architecture assessment, tech feasibility, design review (consulting role, on-demand) | [roles/architect/role.md](roles/architect/role.md) |
| `fe-developer` | DEV-FE-* | Frontend component/page implementation, design token consumption (frontend pipeline) | [roles/fe-developer/role.md](roles/fe-developer/role.md) |
| `fe-qa` | QA-FE-* | 5-dimension frontend QA, accessibility, design compliance, GC loop (frontend pipeline) | [roles/fe-qa/role.md](roles/fe-qa/role.md) |

## Shared Infrastructure

### Role Isolation Rules

**Core principle**: each role may only perform work within its own area of responsibility.

#### Output Tagging (mandatory)

Every role's output must carry a `[role_name]` tag prefix:

```javascript
// SendMessage — both content and summary must carry the tag
SendMessage({
  content: `## [${role}] ...`,
  summary: `[${role}] ...`
})

// team_msg — summary must carry the tag
mcp__ccw-tools__team_msg({
  summary: `[${role}] ...`
})
```

#### Coordinator Isolation

| Allowed | Forbidden |
|---------|-----------|
| Requirement clarification (AskUserQuestion) | ❌ Writing or modifying code directly |
| Creating the task chain (TaskCreate) | ❌ Invoking implementation subagents (code-developer, etc.) |
| Dispatching tasks to workers | ❌ Performing analysis/testing/review directly |
| Monitoring progress (message bus) | ❌ Bypassing workers to complete tasks itself |
| Reporting results to the user | ❌ Modifying source code or artifact files |

#### Worker Isolation

| Allowed | Forbidden |
|---------|-----------|
| Processing tasks with its own prefix | ❌ Processing tasks with another role's prefix |
| SendMessage to the coordinator | ❌ Communicating directly with other workers |
| Using tools declared in its Toolbox | ❌ Creating tasks for other roles (TaskCreate) |
| Delegating to commands in commands/ | ❌ Modifying resources outside its own responsibility |

### Message Bus (All Roles)

Before every `SendMessage`, the role must first call `mcp__ccw-tools__team_msg` to log the message:

```javascript
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: role,
  to: "coordinator",
  type: "<type>",
  summary: `[${role}] <summary>`,
  ref: "<file_path>"
})
```

**Message types by role**:

| Role | Types |
|------|-------|
| coordinator | `plan_approved`, `plan_revision`, `task_unblocked`, `fix_required`, `error`, `shutdown` |
| analyst | `research_ready`, `research_progress`, `error` |
| writer | `draft_ready`, `draft_revision`, `impl_progress`, `error` |
| discussant | `discussion_ready`, `discussion_blocked`, `impl_progress`, `error` |
| planner | `plan_ready`, `plan_revision`, `impl_progress`, `error` |
| executor | `impl_complete`, `impl_progress`, `error` |
| tester | `test_result`, `impl_progress`, `fix_required`, `error` |
| reviewer | `review_result`, `quality_result`, `fix_required`, `error` |
| explorer | `explore_ready`, `explore_progress`, `task_failed` |
| architect | `arch_ready`, `arch_concern`, `arch_progress`, `error` |
| fe-developer | `dev_fe_complete`, `dev_fe_progress`, `error` |
| fe-qa | `qa_fe_passed`, `qa_fe_result`, `fix_required`, `error` |
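
The table above can be turned into a simple guard that a worker runs before logging, so a mistyped message type fails fast. This is an illustrative sketch only: the `MESSAGE_TYPES` map and `isValidMessage` helper are not part of the skill. Three roles are shown; the rest follow the same pattern.

```javascript
// Hypothetical guard: validate a message type against the per-role table above.
const MESSAGE_TYPES = {
  coordinator: ["plan_approved", "plan_revision", "task_unblocked", "fix_required", "error", "shutdown"],
  analyst: ["research_ready", "research_progress", "error"],
  tester: ["test_result", "impl_progress", "fix_required", "error"]
  // ...remaining roles follow the same pattern
}

// Returns false for unknown roles as well as unknown types.
function isValidMessage(role, type) {
  return (MESSAGE_TYPES[role] || []).includes(type)
}
```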

### CLI Fallback

When the `mcp__ccw-tools__team_msg` MCP is unavailable:

```javascript
Bash(`ccw team log --team "${teamName}" --from "${role}" --to "coordinator" --type "<type>" --summary "[${role}] <summary>" --json`)
Bash(`ccw team list --team "${teamName}" --last 10 --json`)
Bash(`ccw team status --team "${teamName}" --json`)
```

### Wisdom Accumulation (All Roles)

A cross-task knowledge accumulation mechanism. The coordinator creates the `wisdom/` directory during session initialization; all workers read from and contribute to wisdom while they execute.

**Directory layout**:

```
{sessionFolder}/wisdom/
├── learnings.md      # Patterns and insights discovered
├── decisions.md      # Architecture and design decisions
├── conventions.md    # Codebase conventions
└── issues.md         # Known risks and issues
```

**Phase 2 load (all workers)**:

```javascript
// Load wisdom context at start of Phase 2
const sessionFolder = task.description.match(/Session:\s*([^\n]+)/)?.[1]?.trim()
let wisdom = {}
if (sessionFolder) {
  try { wisdom.learnings = Read(`${sessionFolder}/wisdom/learnings.md`) } catch {}
  try { wisdom.decisions = Read(`${sessionFolder}/wisdom/decisions.md`) } catch {}
  try { wisdom.conventions = Read(`${sessionFolder}/wisdom/conventions.md`) } catch {}
  try { wisdom.issues = Read(`${sessionFolder}/wisdom/issues.md`) } catch {}
}
```

**Phase 4/5 contribution (on task completion)**:

```javascript
// Contribute wisdom after task completion
if (sessionFolder) {
  const timestamp = new Date().toISOString().substring(0, 10)

  // Role-specific contributions:
  // analyst → learnings (exploration dimensions, codebase patterns)
  // writer → conventions (document structure, naming patterns)
  // planner → decisions (task decomposition rationale)
  // executor → learnings (implementation patterns), issues (bugs encountered)
  // tester → issues (test failures, edge cases), learnings (test patterns)
  // reviewer → conventions (code quality patterns), issues (review findings)
  // explorer → conventions (codebase patterns), learnings (dependency insights)
  // architect → decisions (architecture choices), issues (architectural risks)

  try {
    const targetFile = `${sessionFolder}/wisdom/${wisdomTarget}.md`
    const existing = Read(targetFile)
    const entry = `- [${timestamp}] [${role}] ${wisdomEntry}`
    Write(targetFile, existing + '\n' + entry)
  } catch {} // wisdom not initialized
}
```

**Coordinator injection**: when spawning a worker, the coordinator passes `Session: {sessionFolder}` in the task description, which is how the worker locates the wisdom directory. Wisdom already accumulated there provides context for subsequent workers, enabling cross-task knowledge transfer.

### Task Lifecycle (All Worker Roles)

```javascript
// Standard task lifecycle every worker role follows
// Phase 1: Discovery
const tasks = TaskList()
const prefixes = Array.isArray(VALID_ROLES[role].prefix) ? VALID_ROLES[role].prefix : [VALID_ROLES[role].prefix]
const myTasks = tasks.filter(t =>
  prefixes.some(p => t.subject.startsWith(`${p}-`)) &&
  t.owner === role &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })

// Phase 1.5: Resume Artifact Check (prevents duplicate output)
// When a session resumes from a pause, the coordinator has already reset in_progress tasks to pending.
// Before starting work, the worker must check whether the task's output artifact already exists.
// If the artifact exists and its content is complete:
//   → skip straight to Phase 5 and report completion (avoid overwriting prior results)
// If the artifact exists but is incomplete (e.g. empty file or missing key sections):
//   → run Phases 2-4 normally (continue from the existing artifact rather than starting over)
// If the artifact does not exist:
//   → run Phases 2-4 normally
//
// Each role checks its own output path:
// analyst → sessionFolder/spec/discovery-context.json
// writer → sessionFolder/spec/{product-brief.md | requirements/ | architecture/ | epics/}
// discussant → sessionFolder/discussions/discuss-NNN-*.md
// planner → sessionFolder/plan/plan.json
// executor → git diff (committed code changes)
// tester → test pass rate
// reviewer → sessionFolder/spec/readiness-report.md (quality) or review findings (code)

// Phase 2-4: Role-specific (see roles/{role}/role.md)

// Phase 5: Report + Loop — all output must carry the [role] tag
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: role, to: "coordinator", type: "...", summary: `[${role}] ...` })
SendMessage({ type: "message", recipient: "coordinator", content: `## [${role}] ...`, summary: `[${role}] ...` })
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task → back to Phase 1
```

## Three-Mode Pipeline

```
Spec-only:
  RESEARCH-001 → DISCUSS-001 → DRAFT-001 → DISCUSS-002
  → DRAFT-002 → DISCUSS-003 → DRAFT-003 → DISCUSS-004
  → DRAFT-004 → DISCUSS-005 → QUALITY-001 → DISCUSS-006

Impl-only (backend):
  PLAN-001 → IMPL-001 → TEST-001 + REVIEW-001

Full-lifecycle (backend):
  [Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006) → IMPL-001 → TEST-001 + REVIEW-001
```

### Frontend Pipelines

The coordinator automatically detects frontend tasks from task keywords and routes them to the frontend sub-pipelines:

```
FE-only (pure frontend):
  PLAN-001 → DEV-FE-001 → QA-FE-001
  (GC loop: if QA-FE verdict=NEEDS_FIX → DEV-FE-002 → QA-FE-002, max 2 rounds)

Fullstack (frontend and backend in parallel):
  PLAN-001 → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001

Full-lifecycle + FE:
  [Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006)
  → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001
```

### Frontend Detection

During Phase 1, the coordinator detects frontend tasks from task keywords plus project files and selects the pipeline mode (fe-only / fullstack / impl-only). See [roles/coordinator/role.md](roles/coordinator/role.md) for the detection logic.
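
A minimal sketch of what keyword-based detection might look like. The keyword lists and function name here are assumptions for illustration only, not the coordinator's real detection rules (which also consult project files).

```javascript
// Illustrative only: assumed keyword lists, not the coordinator's actual logic.
const FE_KEYWORDS = ["component", "page", "ui", "css", "react", "vue", "frontend"]
const BE_KEYWORDS = ["api", "endpoint", "database", "migration", "backend", "service"]

// Crude substring matching is enough for a sketch; real detection
// would also check for package.json, framework configs, etc.
function detectPipelineMode(taskText) {
  const text = taskText.toLowerCase()
  const hasFE = FE_KEYWORDS.some(k => text.includes(k))
  const hasBE = BE_KEYWORDS.some(k => text.includes(k))
  if (hasFE && hasBE) return "fullstack"
  if (hasFE) return "fe-only"
  return "impl-only"
}
```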

### Generator-Critic Loop (fe-developer ↔ fe-qa)

```
┌──────────────┐  DEV-FE artifact    ┌──────────┐
│ fe-developer │ ──────────────────→ │  fe-qa   │
│ (Generator)  │                     │ (Critic) │
│              │ ←────────────────── │          │
└──────────────┘  QA-FE feedback     └──────────┘
        (max 2 rounds)

Convergence: fe-qa.score >= 8 && fe-qa.critical_count === 0
```
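
The convergence rule above can be sketched as a small decision function. The function name and result shape are assumptions for illustration; only the threshold (`score >= 8 && critical_count === 0`) and the 2-round cap come from the diagram.

```javascript
// Sketch of the GC-loop decision, assuming qaResult = { score, critical_count }.
function gcConverged(qaResult, round, maxRounds = 2) {
  if (qaResult.score >= 8 && qaResult.critical_count === 0) return { done: true, verdict: "PASS" }
  if (round >= maxRounds) return { done: true, verdict: "ESCALATE" } // hand back to coordinator
  return { done: false, verdict: "NEEDS_FIX" } // spawn the next DEV-FE round
}
```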

## Unified Session Directory

All session artifacts are stored under a single session folder:

```
.workflow/.team/TLS-{slug}-{YYYY-MM-DD}/
├── team-session.json        # Session state (status, progress, completed_tasks)
├── spec/                    # Spec artifacts (analyst, writer, reviewer output)
│   ├── spec-config.json
│   ├── discovery-context.json
│   ├── product-brief.md
│   ├── requirements/        # _index.md + REQ-*.md + NFR-*.md
│   ├── architecture/        # _index.md + ADR-*.md
│   ├── epics/               # _index.md + EPIC-*.md
│   ├── readiness-report.md
│   └── spec-summary.md
├── discussions/             # Discussion records (discussant output)
│   └── discuss-001..006.md
├── plan/                    # Plan artifacts (planner output)
│   ├── exploration-{angle}.json
│   ├── explorations-manifest.json
│   ├── plan.json
│   └── .task/
│       └── TASK-*.json
├── explorations/            # Explorer output (cached for cross-role reuse)
│   └── explore-*.json
├── architecture/            # Architect output (assessment reports)
│   └── arch-*.json
├── wisdom/                  # Cross-task accumulated knowledge
│   ├── learnings.md         # Patterns and insights discovered
│   ├── decisions.md         # Architectural decisions made
│   ├── conventions.md       # Codebase conventions found
│   └── issues.md            # Known issues and risks
├── qa/                      # QA output (fe-qa audit reports)
│   └── audit-fe-*.json
└── build/                   # Frontend build output (fe-developer)
    ├── token-files/
    └── component-files/
```

Messages remain at `.workflow/.team-msg/{team-name}/` (unchanged).

## Session Resume

The coordinator supports `--resume` / `--continue` flags to resume interrupted sessions:

1. Scans `.workflow/.team/TLS-*/team-session.json` for `status: "active"` or `"paused"`
2. Multiple matches → `AskUserQuestion` for user selection
3. **Audit TaskList** — fetch the real current status of every task
4. **Reconcile** — two-way sync of session.completed_tasks ↔ TaskList status:
   - completed in session but not marked in TaskList → mark the TaskList entry completed
   - completed in TaskList but missing from session → backfill into the session record
   - in_progress (interrupted by the pause) → reset to pending
5. Determines remaining pipeline from reconciled state
6. Rebuilds team (`TeamCreate` + worker spawns for needed roles only)
7. Creates missing tasks with correct `blockedBy` dependency chain (uses `TASK_METADATA` lookup)
8. Verifies dependency chain integrity for existing tasks
9. Updates session file with reconciled state + current_phase
10. **Kick** — send a `task_unblocked` message to the worker owning the first runnable task, breaking the resume deadlock
11. Jumps to Phase 4 coordination loop
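
The reconcile step can be sketched as a pure function over the session record and the audited task list. The data shapes here are assumptions for illustration: tasks as `{ id, subject, status }` and the session's completed tasks as an array of subjects.

```javascript
// Sketch of step 4 (Reconcile), assuming the shapes noted above.
function reconcile(sessionCompleted, taskList) {
  const markCompleted = []  // session says done, TaskList not yet marked
  const backfill = []       // TaskList says done, missing from session record
  const resetToPending = [] // in_progress tasks interrupted by the pause
  for (const t of taskList) {
    if (sessionCompleted.includes(t.subject) && t.status !== "completed") markCompleted.push(t.id)
    else if (t.status === "completed" && !sessionCompleted.includes(t.subject)) backfill.push(t.subject)
    else if (t.status === "in_progress") resetToPending.push(t.id)
  }
  return { markCompleted, backfill, resetToPending }
}
```

Note the branch order: a task that is both in the session record and stuck `in_progress` is treated as completed rather than reset, matching the "avoid redoing finished work" intent of the artifact check in Phase 1.5.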

## Coordinator Spawn Template

When the coordinator creates teammates, it uses this pattern:

```javascript
TeamCreate({ team_name: teamName })

// For each worker role:
Task({
  subagent_type: "general-purpose",
  team_name: teamName,
  name: "<role_name>",
  prompt: `You are the <ROLE_NAME_UPPER> of team "${teamName}".

## ⚠️ Prime Directive (MUST)
All of your work must be performed by first fetching your role definition via the Skill call; do not improvise:
Skill(skill="team-lifecycle-v2", args="--role=<role_name>")
This call loads your role definition (role.md), available commands (commands/*.md), and full execution logic.

Current requirement: ${taskDescription}
Constraints: ${constraints}
Session: ${sessionFolder}

## Role Rules (mandatory)
- You may only process tasks with the <PREFIX>-* prefix; never perform another role's work
- All output (SendMessage, team_msg) must carry the [<role_name>] tag prefix
- Communicate only with the coordinator; never contact other workers directly
- Never use TaskCreate to create tasks for other roles

## Message Bus (required)
Before every SendMessage, first call mcp__ccw-tools__team_msg to log.

## Workflow (strictly in order)
1. Call Skill(skill="team-lifecycle-v2", args="--role=<role_name>") to load your role definition and execution logic
2. Follow the 5-Phase process in role.md (TaskList → find <PREFIX>-* tasks → execute → report)
3. team_msg log + SendMessage the result to the coordinator (with the [<role_name>] tag)
4. TaskUpdate completed → check for the next task → back to step 1`
})
```

See [roles/coordinator/role.md](roles/coordinator/role.md) for the full spawn implementation with per-role prompts.

## Shared Spec Resources

In spec mode, the writer and reviewer roles use the standards and templates bundled with this skill (copied from spec-generator and maintained independently):

| Resource | Path | Usage |
|----------|------|-------|
| Document Standards | `specs/document-standards.md` | YAML frontmatter, naming rules, content structure |
| Quality Gates | `specs/quality-gates.md` | Per-phase quality gates, scoring rubric |
| Product Brief Template | `templates/product-brief.md` | DRAFT-001 document generation |
| Requirements Template | `templates/requirements-prd.md` | DRAFT-002 document generation |
| Architecture Template | `templates/architecture-doc.md` | DRAFT-003 document generation |
| Epics Template | `templates/epics-template.md` | DRAFT-004 document generation |

> Before executing each DRAFT-* task, the writer **must first Read** the corresponding template file and document-standards.md.
> When referenced from a `roles/` subdirectory, the paths are `../../specs/` and `../../templates/`.
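
The relative-path note above can be captured in a tiny helper. This is a hypothetical sketch (the skill itself simply uses the literal `../../` paths), shown to make the two resolution cases explicit.

```javascript
// Hypothetical helper: resolve a shared resource path depending on whether the
// caller sits at the skill root or inside a roles/<name>/ subdirectory.
function sharedPath(fromRoleDir, kind, name) {
  const prefix = fromRoleDir ? "../../" : ""
  return `${prefix}${kind}/${name}` // kind is "specs" or "templates"
}
```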

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode → auto route to coordinator |
| Role file not found | Error with expected path (roles/{name}/role.md) |
| Command file not found | Fall back to inline execution in role.md |
| Task prefix conflict | Log warning, proceed |

---
`.claude/skills/team-lifecycle-v2/roles/analyst/role.md` (new file, 271 lines)

# Role: analyst

Seed analysis, codebase exploration, and multi-dimensional context gathering. Maps to spec-generator Phase 1 (Discovery).

## Role Identity

- **Name**: `analyst`
- **Task Prefix**: `RESEARCH-*`
- **Output Tag**: `[analyst]`
- **Responsibility**: Seed Analysis → Codebase Exploration → Context Packaging → Report
- **Communication**: SendMessage to coordinator only

## Role Boundaries

### MUST
- Only process RESEARCH-* tasks
- Communicate only with coordinator
- Use Toolbox tools (ACE search, Gemini CLI)
- Generate discovery-context.json and spec-config.json
- Support file reference input (@ prefix or .md/.txt extension)

### MUST NOT
- Create tasks for other roles
- Directly contact other workers
- Modify spec documents (only create discovery-context.json and spec-config.json)
- Skip the seed analysis step
- Proceed without codebase detection

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `research_ready` | analyst → coordinator | Research complete | With discovery-context.json path and dimension summary |
| `research_progress` | analyst → coordinator | Long research progress | Intermediate progress update |
| `error` | analyst → coordinator | Unrecoverable error | Codebase access failure, CLI timeout, etc. |

## Message Bus

Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log:

```javascript
// Research complete
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "analyst",
  to: "coordinator",
  type: "research_ready",
  summary: "[analyst] Research done: 5 exploration dimensions",
  ref: `${sessionFolder}/spec/discovery-context.json`
})

// Error report
mcp__ccw-tools__team_msg({
  operation: "log",
  team: teamName,
  from: "analyst",
  to: "coordinator",
  type: "error",
  summary: "[analyst] Codebase access failed"
})
```

### CLI Fallback

When `mcp__ccw-tools__team_msg` MCP is unavailable:

```bash
ccw team log --team "${teamName}" --from "analyst" --to "coordinator" --type "research_ready" --summary "[analyst] Research done" --ref "${sessionFolder}/spec/discovery-context.json" --json
```

## Toolbox

### Available Commands
- None (simple enough for inline execution)

### Subagent Capabilities
- None

### CLI Capabilities
- `ccw cli --tool gemini --mode analysis` for seed analysis

## Execution (5-Phase)

### Phase 1: Task Discovery

```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
  t.subject.startsWith('RESEARCH-') &&
  t.owner === 'analyst' &&
  t.status === 'pending' &&
  t.blockedBy.length === 0
)

if (myTasks.length === 0) return // idle

const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```

### Phase 2: Seed Analysis

```javascript
// Extract session folder from task description
const sessionMatch = task.description.match(/Session:\s*(.+)/)
const sessionFolder = sessionMatch ? sessionMatch[1].trim() : '.workflow/.team/default'

// Parse topic from task description (skip "Session:" and "输出:" (output) marker lines)
const topicLines = task.description.split('\n').filter(l => !l.startsWith('Session:') && !l.startsWith('输出:') && l.trim())
const rawTopic = topicLines[0] || task.subject.replace('RESEARCH-001: ', '')

// Support file reference input (consistent with spec-generator Phase 1)
const topic = (rawTopic.startsWith('@') || rawTopic.endsWith('.md') || rawTopic.endsWith('.txt'))
  ? Read(rawTopic.replace(/^@/, ''))
  : rawTopic

// Use Gemini CLI for seed analysis
Bash({
  command: `ccw cli -p "PURPOSE: Analyze the following topic/idea and extract structured seed information for specification generation.
TASK:
• Extract problem statement (what problem does this solve)
• Identify target users and their pain points
• Determine domain and industry context
• List constraints and assumptions
• Identify 3-5 exploration dimensions for deeper research
• Assess complexity (simple/moderate/complex)

TOPIC: ${topic}

MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON output with fields: problem_statement, target_users[], domain, constraints[], exploration_dimensions[], complexity_assessment
CONSTRAINTS: Output as valid JSON" --tool gemini --mode analysis --rule analysis-analyze-technical-document`,
  run_in_background: true
})
// Wait for CLI result, then parse seedAnalysis from output
```

### Phase 3: Codebase Exploration (conditional)

```javascript
// Check if there's an existing codebase to explore
const hasProject = Bash(`test -f package.json || test -f Cargo.toml || test -f pyproject.toml || test -f go.mod; echo $?`)

let codebaseContext = null
if (hasProject === '0') {
  mcp__ccw-tools__team_msg({
    operation: "log",
    team: teamName,
    from: "analyst",
    to: "coordinator",
    type: "research_progress",
    summary: "[analyst] Seed analysis done, starting codebase exploration"
  })

  // Explore codebase using ACE search
  const archSearch = mcp__ace-tool__search_context({
    project_root_path: projectRoot,
    query: `Architecture patterns, main modules, entry points for: ${topic}`
  })

  // Detect tech stack from package files
  // Explore existing patterns and integration points

  codebaseContext = {
    tech_stack,
    architecture_patterns,
    existing_conventions,
    integration_points,
    constraints_from_codebase: []
  }
}
```

### Phase 4: Context Packaging

```javascript
// Generate spec-config.json
const specConfig = {
  session_id: `SPEC-${topicSlug}-${dateStr}`,
  topic: topic,
  status: "research_complete",
  complexity: seedAnalysis.complexity_assessment || "moderate",
  depth: task.description.match(/讨论深度:\s*(.+)/)?.[1] || "standard", // "讨论深度" = discussion-depth marker
  focus_areas: seedAnalysis.exploration_dimensions || [],
  mode: "interactive", // team mode is always interactive
  phases_completed: ["discovery"],
  created_at: new Date().toISOString(),
  session_folder: sessionFolder,
  discussion_depth: task.description.match(/讨论深度:\s*(.+)/)?.[1] || "standard"
}
Write(`${sessionFolder}/spec/spec-config.json`, JSON.stringify(specConfig, null, 2))

// Generate discovery-context.json
const discoveryContext = {
  session_id: specConfig.session_id,
  phase: 1,
  document_type: "discovery-context",
  status: "complete",
  generated_at: new Date().toISOString(),
  seed_analysis: {
    problem_statement: seedAnalysis.problem_statement,
    target_users: seedAnalysis.target_users,
    domain: seedAnalysis.domain,
    constraints: seedAnalysis.constraints,
    exploration_dimensions: seedAnalysis.exploration_dimensions,
    complexity: seedAnalysis.complexity_assessment
  },
  codebase_context: codebaseContext,
  recommendations: { focus_areas: [], risks: [], open_questions: [] }
}
Write(`${sessionFolder}/spec/discovery-context.json`, JSON.stringify(discoveryContext, null, 2))
```

### Phase 5: Report to Coordinator

```javascript
const dimensionCount = discoveryContext.seed_analysis.exploration_dimensions?.length || 0
const hasCodebase = codebaseContext !== null

mcp__ccw-tools__team_msg({
  operation: "log", team: teamName,
  from: "analyst", to: "coordinator",
  type: "research_ready",
  summary: `[analyst] Research done: ${dimensionCount} exploration dimensions, codebase context: ${hasCodebase ? 'yes' : 'no'}, complexity=${specConfig.complexity}`,
  ref: `${sessionFolder}/spec/discovery-context.json`
})

SendMessage({
  type: "message",
  recipient: "coordinator",
  content: `[analyst] ## Research Analysis Result

**Task**: ${task.subject}
**Complexity**: ${specConfig.complexity}
**Codebase**: ${hasCodebase ? 'existing project detected' : 'greenfield project'}

### Problem Statement
${discoveryContext.seed_analysis.problem_statement}

### Target Users
${(discoveryContext.seed_analysis.target_users || []).map(u => '- ' + u).join('\n')}

### Exploration Dimensions
${(discoveryContext.seed_analysis.exploration_dimensions || []).map((d, i) => (i+1) + '. ' + d).join('\n')}

### Output Locations
- Config: ${sessionFolder}/spec/spec-config.json
- Context: ${sessionFolder}/spec/discovery-context.json

Research is ready; the pipeline can proceed to discussion round DISCUSS-001.`,
  summary: `[analyst] Research ready: ${dimensionCount} dimensions, ${specConfig.complexity}`
})

TaskUpdate({ taskId: task.id, status: 'completed' })

// Check for next RESEARCH task → back to Phase 1
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No RESEARCH-* tasks available | Idle, wait for coordinator assignment |
| Gemini CLI analysis failure | Fallback to direct Claude analysis without CLI |
| Codebase detection failed | Continue as new project (no codebase context) |
| Session folder cannot be created | Notify coordinator, request alternative path |
| Topic too vague for analysis | Report to coordinator with clarification questions |
| Unexpected error | Log error via team_msg, report to coordinator |