Refactor and optimize templates and code structure

- Deleted outdated templates for epics, product brief, and requirements PRD.
- Introduced lazy loading for locale messages in i18n module to enhance performance.
- Updated main application bootstrap to parallelize CSRF token fetching and locale loading.
- Implemented code splitting for router configuration to optimize bundle size and loading times.
- Added WebSocket connection limits and rate limiting to improve server performance and prevent abuse.
- Enhanced input validation with compiled regex patterns for better performance and maintainability.
catlog22
2026-03-02 15:57:55 +08:00
parent ce2927b28d
commit 73cc2ef3fa
79 changed files with 306 additions and 14108 deletions


@@ -1,445 +0,0 @@
---
name: team-coordinate
description: Universal team coordination skill with dynamic role generation. Only coordinator is built-in -- all worker roles are generated at runtime based on task analysis. Beat/cadence model for orchestration. Triggers on "team coordinate".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Coordinate
Universal team coordination skill: analyze task -> generate roles -> dispatch -> execute -> deliver. Only the **coordinator** is built-in. All worker roles are **dynamically generated** based on task analysis.
## Architecture
```
+---------------------------------------------------+
| Skill(skill="team-coordinate") |
| args="task description" |
| args="--role=coordinator" |
| args="--role=<dynamic> --session=<path>" |
+-------------------+-------------------------------+
| Role Router
+---- --role present? ----+
| NO | YES
v v
Orchestration Mode Role Dispatch
(auto -> coordinator) (route to role file)
| |
coordinator +-------+-------+
(built-in) | --role=coordinator?
| |
YES | | NO
v | v
built-in | Dynamic Role
role.md | <session>/roles/<role>.md
Subagents (callable by any role, not team members):
[discuss-subagent] - multi-perspective critique (dynamic perspectives)
[explore-subagent] - codebase exploration with cache
```
## Role Router
### Input Parsing
Parse `$ARGUMENTS` to extract `--role` and `--session`. If no `--role` -> Orchestration Mode (auto route to coordinator).
### Role Registry
Only coordinator is statically registered. All other roles are dynamic, stored in `team-session.json#roles`.
| Role | File | Type |
|------|------|------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | built-in orchestrator |
| (dynamic) | `<session>/roles/<role-name>.md` | runtime-generated worker |
> **COMPACT PROTECTION**: Role files are execution documents. After context compression, role instructions become summaries only -- **MUST immediately `Read` the role.md to reload before continuing**. Never execute any Phase based on summaries.
### Subagent Registry
| Subagent | Spec | Callable By | Purpose |
|----------|------|-------------|---------|
| discuss | [subagents/discuss-subagent.md](subagents/discuss-subagent.md) | any role | Multi-perspective critique (dynamic perspectives) |
| explore | [subagents/explore-subagent.md](subagents/explore-subagent.md) | any role | Codebase exploration with cache |
### Dispatch
1. Extract `--role` and `--session` from arguments
2. If no `--role` -> route to coordinator (Orchestration Mode)
3. If `--role=coordinator` -> Read built-in `roles/coordinator/role.md` -> Execute its phases
4. If `--role=<other>`:
- **`--session` is REQUIRED** for dynamic roles. Error if not provided.
- Read `<session>/roles/<role>.md` -> Execute its phases
- If role file not found at path -> Error with expected path
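The dispatch rules above can be sketched as a small router (a minimal illustration; `parse_args` and `route` are hypothetical helpers, not part of the skill):

```python
import re

def parse_args(arguments: str) -> dict:
    """Extract --role and --session from the raw $ARGUMENTS string."""
    role = re.search(r"--role=(\S+)", arguments)
    session = re.search(r"--session=(\S+)", arguments)
    return {"role": role.group(1) if role else None,
            "session": session.group(1) if session else None}

def route(arguments: str) -> str:
    """Apply the dispatch rules: no --role -> coordinator; dynamic roles need --session."""
    args = parse_args(arguments)
    if args["role"] is None or args["role"] == "coordinator":
        return "roles/coordinator/role.md"  # built-in (Orchestration Mode or explicit)
    if args["session"] is None:
        raise ValueError("--session is required for dynamic roles")
    return f"{args['session']}/roles/{args['role']}.md"
```

Note that the error case (step 4) surfaces before any file read is attempted, matching the "Error if not provided" rule.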
### Orchestration Mode
When invoked without `--role`, the coordinator auto-starts; the user just provides a task description.
**Invocation**: `Skill(skill="team-coordinate", args="task description")`
**Lifecycle**:
```
User provides task description
-> coordinator Phase 1: task analysis (detect capabilities, build dependency graph)
-> coordinator Phase 2: generate roles + initialize session
-> coordinator Phase 3: create task chain from dependency graph
-> coordinator Phase 4: spawn first batch workers (background) -> STOP
-> Worker executes -> SendMessage callback -> coordinator advances next step
-> Loop until pipeline complete -> Phase 5 report
```
**User Commands** (wake paused coordinator):
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
---
## Shared Infrastructure
The following templates apply to all worker roles. Each generated role.md only needs to define **Phase 2-4** role-specific logic.
### Worker Phase 1: Task Discovery (all workers shared)
Each worker on startup executes the same task discovery flow:
1. Call `TaskList()` to get all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No tasks -> idle wait
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress
**Resume Artifact Check** (prevent duplicate output after resume):
- Check if this task's output artifacts already exist
- Artifacts complete -> skip to Phase 5 report completion
- Artifacts incomplete or missing -> normal Phase 2-4 execution
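The shared discovery filter, as a sketch (task field names are assumed from the TaskCreate template used elsewhere in this skill):

```python
def discover_tasks(tasks: list, role: str, prefix: str) -> list:
    """Phase 1 filter: own prefix + own owner + pending + empty blockedBy."""
    return [
        t for t in tasks
        if t["subject"].startswith(prefix + "-")
        and t["owner"] == role
        and t["status"] == "pending"
        and not t["blockedBy"]
    ]
```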
### Worker Phase 5: Report + Fast-Advance (all workers shared)
Task completion with optional fast-advance to skip coordinator round-trip:
1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
- Params: operation="log", team=**<session-id>**, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
- **`team` must be session ID** (e.g., `TC-my-project-2026-02-27`), NOT team name. Extract from task description `Session:` field -> take folder name.
- **CLI fallback**: `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
2. **TaskUpdate**: Mark task completed
3. **Fast-Advance Check**:
- Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
- If exactly 1 ready task AND its owner matches a simple successor pattern -> **spawn it directly** (skip coordinator)
- Otherwise -> **SendMessage** to coordinator for orchestration
4. **Loop**: Back to Phase 1 to check for next task
**Fast-Advance Rules**:
| Condition | Action |
|-----------|--------|
| Same-prefix successor (Inner Loop role) | Do not spawn; the main agent handles it via the inner loop (Phase 5-L) |
| 1 ready task, simple linear successor, different prefix | Spawn directly via Task(run_in_background: true) |
| Multiple ready tasks (parallel window) | SendMessage to coordinator (needs orchestration) |
| No ready tasks + others running | SendMessage to coordinator (status update) |
| No ready tasks + nothing running | SendMessage to coordinator (pipeline may be complete) |
**Fast-advance failure recovery**: If a fast-advanced task fails, the coordinator detects it as an orphaned in_progress task on next `resume`/`check` and resets it to pending for re-spawn. Self-healing. See [monitor.md](roles/coordinator/commands/monitor.md).
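The fast-advance rules reduce to a small decision function (a simplification; a real worker would also check the successor against the owner patterns in the session file):

```python
def fast_advance(tasks: list, my_prefix: str) -> tuple:
    """Phase 5 step 3: classify what the finishing worker should do next."""
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    ready = [t for t in tasks
             if t["status"] == "pending"
             and all(dep in completed for dep in t["blockedBy"])]
    if len(ready) == 1:
        nxt = ready[0]["subject"]
        if nxt.startswith(my_prefix + "-"):
            return ("inner_loop", nxt)   # same prefix: Phase 5-L, no spawn
        return ("spawn", nxt)            # simple linear successor: skip coordinator
    return ("send_message", None)        # parallel window, idle, or pipeline complete
```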
### Worker Inner Loop (roles with multiple same-prefix serial tasks)
When a role has **2+ serial same-prefix tasks**, it loops internally instead of spawning new agents:
**Inner Loop flow**:
```
Phase 1: Discover task (first time)
|
+- Found task -> Phase 2-3: Load context + Execute work
| |
| v
| Phase 4: Validation (+ optional Inline Discuss)
| |
| v
| Phase 5-L: Loop Completion
| |
| +- TaskUpdate completed
| +- team_msg log
| +- Accumulate summary to context_accumulator
| |
| +- More same-prefix tasks?
| | +- YES -> back to Phase 1 (inner loop)
| | +- NO -> Phase 5-F: Final Report
| |
| +- Interrupt conditions?
| +- consensus_blocked HIGH -> SendMessage -> STOP
| +- Errors >= 3 -> SendMessage -> STOP
|
+- Phase 5-F: Final Report
+- SendMessage (all task summaries)
+- STOP
```
**Phase 5-L vs Phase 5-F**:
| Step | Phase 5-L (looping) | Phase 5-F (final) |
|------|---------------------|-------------------|
| TaskUpdate completed | YES | YES |
| team_msg log | YES | YES |
| Accumulate summary | YES | - |
| SendMessage to coordinator | NO | YES (all tasks summary) |
| Fast-Advance to next prefix | - | YES (check cross-prefix successors) |
### Inline Discuss Protocol (optional for any role)
After completing primary output, roles may call the discuss subagent inline. Unlike v4's fixed perspective definitions, team-coordinate uses **dynamic perspectives** specified by the coordinator when generating each role.
```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss <round-id>",
prompt: <see subagents/discuss-subagent.md for prompt template>
})
```
**Consensus handling**:
| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | SendMessage with structured format. Do NOT self-revise. |
| consensus_blocked | MEDIUM | SendMessage with warning. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
### Shared Explore Utility
Any role needing codebase context calls the explore subagent:
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore <angle>",
prompt: <see subagents/explore-subagent.md for prompt template>
})
```
**Cache**: Results are stored in `explorations/` with `cache-index.json`. Always check the cache before exploring.
### Wisdom Accumulation (all roles)
Cross-task knowledge accumulation. Coordinator creates `wisdom/` directory at session init.
**Directory**:
```
<session-folder>/wisdom/
+-- learnings.md # Patterns and insights
+-- decisions.md # Design and strategy decisions
+-- issues.md # Known risks and issues
```
**Worker load** (Phase 2): Extract `Session: <path>` from task description, read wisdom files.
**Worker contribute** (Phase 4/5): Write discoveries to corresponding wisdom files.
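Extracting the session path and session ID from a task description (the `Session:` line format comes from the task description template; the helper itself is illustrative):

```python
import re
from pathlib import PurePosixPath

def session_fields(task_description: str) -> tuple:
    """Return (session path, session ID). The ID is the folder name, not the team name."""
    m = re.search(r"^Session:\s*(\S+)", task_description, re.MULTILINE)
    if not m:
        raise ValueError("task description missing 'Session:' field")
    path = m.group(1)
    return path, PurePosixPath(path).name
```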
### Role Isolation Rules
| Allowed | Prohibited |
|---------|-----------|
| Process own prefix tasks | Process other role's prefix tasks |
| SendMessage to coordinator | Directly communicate with other workers |
| Use tools appropriate to responsibility | Create tasks for other roles |
| Call discuss/explore subagents | Modify resources outside own scope |
| Fast-advance simple successors | Spawn parallel worker batches |
| Report capability_gap to coordinator | Attempt work outside scope |
The coordinator is additionally prohibited from: directly writing or modifying deliverable artifacts, calling implementation subagents directly, and directly executing analysis, tests, or reviews.
---
## Cadence Control
**Beat model**: Event-driven, each beat = coordinator wake -> process -> spawn -> STOP.
```
Beat Cycle (single beat)
======================================================================
Event Coordinator Workers
----------------------------------------------------------------------
callback/resume --> +- handleCallback -+
| mark completed |
| check pipeline |
+- handleSpawnNext -+
| find ready tasks |
| spawn workers ---+--> [Worker A] Phase 1-5
| (parallel OK) --+--> [Worker B] Phase 1-5
+- STOP (idle) -----+ |
|
callback <-----------------------------------------+
(next beat) SendMessage + TaskUpdate(completed)
======================================================================
Fast-Advance (skips coordinator for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
+- 1 ready task? simple successor? --> spawn Worker B directly
+- complex case? --> SendMessage to coordinator
======================================================================
```
**Pipelines are dynamic**: Unlike v4's predefined pipeline beat views (spec-only, impl-only, etc.), team-coordinate pipelines are generated per-task from the dependency graph. The beat model is the same -- only the pipeline shape varies.
---
## Coordinator Spawn Template
### Standard Worker (single-task role)
```
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `You are team "<team-name>" <ROLE>.
## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-coordinate", args="--role=<role> --session=<session-folder>")
Current requirement: <task-description>
Session: <session-folder>
## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- After task completion, check for fast-advance opportunity (see SKILL.md Phase 5)
## Workflow
1. Call Skill -> get role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg(team=<session-id>) + SendMessage results to coordinator
4. TaskUpdate completed -> check next task or fast-advance`
})
```
### Inner Loop Worker (multi-task role)
```
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker (inner loop)",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `You are team "<team-name>" <ROLE>.
## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-coordinate", args="--role=<role> --session=<session-folder>")
Current requirement: <task-description>
Session: <session-folder>
## Inner Loop Mode
You will handle ALL <PREFIX>-* tasks in this session, not just the first one.
After completing each task, loop back to find the next <PREFIX>-* task.
Only SendMessage to coordinator when:
- All <PREFIX>-* tasks are done
- A consensus_blocked HIGH occurs
- Errors accumulate (>= 3)
## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- Use subagent calls for heavy work, retain summaries in context`
})
```
---
## Session Directory
```
.workflow/.team/TC-<slug>-<date>/
+-- team-session.json # Session state + dynamic role registry
+-- task-analysis.json # Phase 1 output: capabilities, dependency graph
+-- roles/ # Dynamic role definitions (generated Phase 2)
| +-- <role-1>.md
| +-- <role-2>.md
+-- artifacts/ # All MD deliverables from workers
| +-- <artifact>.md
+-- shared-memory.json # Cross-role state store
+-- wisdom/ # Cross-task knowledge
| +-- learnings.md
| +-- decisions.md
| +-- issues.md
+-- explorations/ # Shared explore cache
| +-- cache-index.json
| +-- explore-<angle>.json
+-- discussions/ # Inline discuss records
| +-- <round>.md
+-- .msg/ # Team message bus logs
```
### team-session.json Schema
```json
{
"session_id": "TC-<slug>-<date>",
"task_description": "<original user input>",
"status": "active | paused | completed",
"team_name": "<team-name>",
"roles": [
{
"name": "<role-name>",
"prefix": "<PREFIX>",
"responsibility_type": "<type>",
"inner_loop": false,
"role_file": "roles/<role-name>.md"
}
],
"pipeline": {
"dependency_graph": {},
"tasks_total": 0,
"tasks_completed": 0
},
"active_workers": [],
"completed_tasks": [],
"created_at": "<timestamp>"
}
```
---
## Session Resume
Coordinator supports `--resume` / `--continue` for interrupted sessions:
1. Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
2. Multiple matches -> AskUserQuestion for selection
3. Audit TaskList -> reconcile session state <-> task status
4. Reset in_progress -> pending (interrupted tasks)
5. Rebuild team and spawn needed workers only
6. Create missing tasks with correct blockedBy
7. Kick first executable task -> Phase 4 coordination loop
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Check if `<session>/roles/<role>.md` exists; error with message if not |
| Missing --role arg | Orchestration Mode -> coordinator |
| Dynamic role file not found | Error with expected path, coordinator may need to regenerate |
| Built-in role file not found | Error with expected path |
| Command file not found | Fallback to inline execution |
| Discuss subagent fails | Role proceeds without discuss, logs warning |
| Explore cache corrupt | Clear cache, re-explore |
| Fast-advance spawns wrong task | Coordinator reconciles on next callback |
| Session path not provided (dynamic role) | Error: `--session` is required for dynamic roles. Coordinator must always pass `--session=<session-folder>` when spawning workers. |
| capability_gap reported | Coordinator generates new role via handleAdapt |


@@ -1,197 +0,0 @@
# Command: analyze-task
## Purpose
Parse user task description -> detect required capabilities -> build dependency graph -> design dynamic roles. This replaces v4's static mode selection with intelligent task decomposition.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | User input from Phase 1 | Yes |
| Clarification answers | AskUserQuestion results (if any) | No |
| Session folder | From coordinator Phase 2 | Yes |
## Phase 3: Task Analysis
### Step 1: Signal Detection
Scan task description for capability keywords:
| Signal | Keywords | Capability | Prefix | Responsibility Type |
|--------|----------|------------|--------|---------------------|
| Research | investigate, explore, compare, survey, find, research, discover, benchmark, study | researcher | RESEARCH | orchestration |
| Writing | write, draft, document, article, report, blog, describe, explain, summarize, content | writer | DRAFT | code-gen (docs) |
| Coding | implement, build, code, fix, refactor, develop, create app, program, migrate, port | developer | IMPL | code-gen (code) |
| Design | design, architect, plan, structure, blueprint, model, schema, wireframe, layout | designer | DESIGN | orchestration |
| Analysis | analyze, review, audit, assess, evaluate, inspect, examine, diagnose, profile | analyst | ANALYSIS | read-only |
| Testing | test, verify, validate, QA, quality, check, assert, coverage, regression | tester | TEST | validation |
| Planning | plan, breakdown, organize, schedule, decompose, roadmap, strategy, prioritize | planner | PLAN | orchestration |
**Multi-match**: A task may trigger multiple capabilities. E.g., "research and write a technical article" triggers both `researcher` and `writer`.
**No match**: If no keywords match, default to a single `general` capability with `TASK` prefix.
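The multi-match scan, sketched with abbreviated keyword lists (the full lists are in the table above; the helper is illustrative):

```python
SIGNALS = {
    "researcher": ["investigate", "explore", "compare", "survey", "research"],
    "writer": ["write", "draft", "document", "article", "report"],
    "developer": ["implement", "build", "code", "fix", "refactor"],
    "analyst": ["analyze", "review", "audit", "assess", "evaluate"],
    "tester": ["test", "verify", "validate", "coverage"],
}

def detect_capabilities(task: str) -> list:
    """Multi-match keyword scan; falls back to 'general' when nothing matches."""
    text = task.lower()
    found = [cap for cap, kws in SIGNALS.items() if any(k in text for k in kws)]
    return found or ["general"]
```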
### Step 2: Artifact Inference
Each capability produces default output artifacts:
| Capability | Default Artifact | Format |
|------------|-----------------|--------|
| researcher | Research findings | `<session>/artifacts/research-findings.md` |
| writer | Written document(s) | `<session>/artifacts/<doc-name>.md` |
| developer | Code implementation | Source files + `<session>/artifacts/implementation-summary.md` |
| designer | Design document | `<session>/artifacts/design-spec.md` |
| analyst | Analysis report | `<session>/artifacts/analysis-report.md` |
| tester | Test results | `<session>/artifacts/test-report.md` |
| planner | Execution plan | `<session>/artifacts/execution-plan.md` |
### Step 3: Dependency Graph Construction
Build a DAG of work streams using these inference rules:
| Pattern | Shape | Example |
|---------|-------|---------|
| Knowledge -> Creation | research blockedBy nothing, creation blockedBy research | RESEARCH-001 -> DRAFT-001 |
| Design -> Build | design first, build after | DESIGN-001 -> IMPL-001 |
| Build -> Validate | build first, test/review after | IMPL-001 -> TEST-001 + ANALYSIS-001 |
| Plan -> Execute | plan first, execute after | PLAN-001 -> IMPL-001 |
| Independent parallel | no dependency between them | DRAFT-001 || IMPL-001 |
| Analysis -> Revise | analysis finds issues, revise artifact | ANALYSIS-001 -> DRAFT-002 |
**Graph construction algorithm**:
1. Group capabilities by natural ordering: knowledge-gathering -> design/planning -> creation -> validation
2. Within same tier: capabilities are parallel unless task description implies sequence
3. Between tiers: downstream blockedBy upstream
4. Single-capability tasks: one node, no dependencies
**Natural ordering tiers**:
| Tier | Capabilities | Description |
|------|-------------|-------------|
| 0 | researcher, planner | Knowledge gathering / planning |
| 1 | designer | Design (requires context from tier 0 if present) |
| 2 | writer, developer | Creation (requires design/plan if present) |
| 3 | analyst, tester | Validation (requires artifacts to validate) |
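The tier rules can be sketched as a small graph builder (simplified: every task in a tier is blocked by all tasks of the nearest lower tier that is present; real analysis also honors sequence hints in the task description):

```python
TIER = {"researcher": 0, "planner": 0, "designer": 1,
        "writer": 2, "developer": 2, "analyst": 3, "tester": 3}

def build_graph(tasks_by_capability: dict) -> dict:
    """tasks_by_capability: e.g. {"researcher": ["RESEARCH-001"], "writer": ["DRAFT-001"]}"""
    tiers_present = sorted({TIER[c] for c in tasks_by_capability})
    graph, prev_tier_tasks = {}, []
    for tier in tiers_present:
        current = [t for c, ts in tasks_by_capability.items()
                   if TIER[c] == tier for t in ts]
        for t in current:
            graph[t] = list(prev_tier_tasks)  # downstream blockedBy upstream
        prev_tier_tasks = current
    return graph
```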
### Step 4: Complexity Scoring
| Factor | Weight | Condition |
|--------|--------|-----------|
| Capability count | +1 each | Number of distinct capabilities |
| Cross-domain factor | +2 | Capabilities span 3+ tiers |
| Parallel tracks | +1 each | Independent parallel work streams |
| Serial depth | +1 per level | Longest dependency chain length |
| Total Score | Complexity | Role Limit |
|-------------|------------|------------|
| 1-3 | Low | 1-2 roles |
| 4-6 | Medium | 2-3 roles |
| 7+ | High | 3-5 roles |
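One reading of the scoring table, sketched below. It assumes "+1 per level" of serial depth counts levels beyond the first, which reconciles the table with the sample `task-analysis.json` output (2 capabilities, serial depth 2, total score 3):

```python
def complexity_score(n_caps: int, tiers_spanned: int,
                     parallel_tracks: int, chain_length: int) -> tuple:
    """Returns (score, level) per the factor weights above."""
    score = (n_caps                                 # +1 per capability
             + (2 if tiers_spanned >= 3 else 0)     # cross-domain factor
             + parallel_tracks                      # +1 per parallel track
             + max(chain_length - 1, 0))            # assumption: extra serial levels
    level = "low" if score <= 3 else "medium" if score <= 6 else "high"
    return score, level
```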
### Step 5: Role Minimization
Apply merging rules to reduce role count:
| Rule | Condition | Action |
|------|-----------|--------|
| Absorb trivial | Capability has exactly 1 task AND no explore needed | Merge into nearest related role |
| Merge overlap | Two capabilities share >50% keywords from task description | Combine into single role |
| Cap at 5 | More than 5 roles after initial assignment | Merge lowest-priority pairs (priority: developer > researcher > writer > designer > analyst > planner > tester) |
**Merge priority** (when two must merge, keep the higher-priority one as the role name):
1. developer (code-gen is hardest to merge)
2. researcher (context-gathering is foundational)
3. writer (document generation has specific patterns)
4. designer (design has specific outputs)
5. analyst (analysis can be absorbed by reviewer pattern)
6. planner (planning can be merged with researcher or designer)
7. tester (can be absorbed by developer or analyst)
**IMPORTANT**: Even after merging, coordinator MUST spawn workers for all roles. Single-role tasks still use team architecture.
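One simplistic policy for the cap rule, as a sketch (the merge-priority order is from the list above; the pairing strategy is an assumption, since the real coordinator may pick absorption targets by responsibility type instead):

```python
MERGE_PRIORITY = ["developer", "researcher", "writer", "designer",
                  "analyst", "planner", "tester"]

def merged_role_name(a: str, b: str) -> str:
    """When two roles must merge, the higher-priority name survives."""
    return min(a, b, key=MERGE_PRIORITY.index)

def enforce_cap(roles: list, limit: int = 5) -> list:
    """Merge lowest-priority pairs until at most `limit` roles remain."""
    roles = sorted(set(roles), key=MERGE_PRIORITY.index)
    while len(roles) > limit:
        b = roles.pop()   # lowest priority
        a = roles.pop()   # next lowest
        roles.append(merged_role_name(a, b))
    return roles
```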
## Phase 4: Output
Write `<session-folder>/task-analysis.json`:
```json
{
"task_description": "<original user input>",
"capabilities": [
{
"name": "researcher",
"prefix": "RESEARCH",
"responsibility_type": "orchestration",
"tasks": [
{ "id": "RESEARCH-001", "description": "..." }
],
"artifacts": ["research-findings.md"]
}
],
"dependency_graph": {
"RESEARCH-001": [],
"DRAFT-001": ["RESEARCH-001"],
"ANALYSIS-001": ["DRAFT-001"]
},
"roles": [
{
"name": "researcher",
"prefix": "RESEARCH",
"responsibility_type": "orchestration",
"task_count": 1,
"inner_loop": false
},
{
"name": "writer",
"prefix": "DRAFT",
"responsibility_type": "code-gen (docs)",
"task_count": 1,
"inner_loop": false
}
],
"complexity": {
"capability_count": 2,
"cross_domain_factor": false,
"parallel_tracks": 0,
"serial_depth": 2,
"total_score": 3,
"level": "low"
},
"artifacts": [
{ "name": "research-findings.md", "producer": "researcher", "path": "artifacts/research-findings.md" },
{ "name": "article-draft.md", "producer": "writer", "path": "artifacts/article-draft.md" }
]
}
```
## Complexity Interpretation
**CRITICAL**: Complexity score is for **role design optimization**, NOT for skipping team workflow.
| Complexity | Team Structure | Coordinator Action |
|------------|----------------|-------------------|
| Low (1-2 roles) | Minimal team | Generate 1-2 roles, create team, spawn workers |
| Medium (2-3 roles) | Standard team | Generate roles, create team, spawn workers |
| High (3-5 roles) | Full team | Generate roles, create team, spawn workers |
**All complexity levels use team architecture**:
- Single-role tasks still spawn worker via Skill
- Coordinator NEVER executes task work directly
- Team infrastructure provides session management, message bus, fast-advance
**Purpose of complexity score**:
- ✅ Determine optimal role count (merge vs separate)
- ✅ Guide dependency graph design
- ✅ Inform user about task scope
- ❌ NOT for deciding whether to use team workflow
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No capabilities detected | Default to single `general` role with TASK prefix |
| Circular dependency in graph | Break cycle at lowest-tier edge, warn |
| Task description too vague | Return minimal analysis, coordinator will AskUserQuestion |
| All capabilities merge into one | Valid -- single-role execution via team worker |


@@ -1,85 +0,0 @@
# Command: dispatch
## Purpose
Create task chains from dynamic dependency graphs. Unlike v4's static mode-to-pipeline mapping, team-coordinate builds pipelines from the task-analysis.json produced by Phase 1.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task analysis | `<session-folder>/task-analysis.json` | Yes |
| Session file | `<session-folder>/team-session.json` | Yes |
| Role registry | `team-session.json#roles` | Yes |
| Scope | User requirements description | Yes |
## Phase 3: Task Chain Creation
### Workflow
1. **Read dependency graph** from `task-analysis.json#dependency_graph`
2. **Topological sort** tasks to determine creation order
3. **Validate** all task owners exist in role registry
4. **For each task** (in topological order):
```
TaskCreate({
subject: "<PREFIX>-<NNN>",
owner: "<role-name>",
description: "<task description from task-analysis>\nSession: <session-folder>\nScope: <scope>\nInnerLoop: <true|false>",
blockedBy: [<dependency-list from graph>],
status: "pending"
})
```
5. **Update team-session.json** with pipeline and tasks_total
6. **Validate** created chain
### Task Description Template
Every task description includes session path and inner loop flag:
```
<task description>
Session: <session-folder>
Scope: <scope>
InnerLoop: <true|false>
```
### InnerLoop Flag Rules
| Condition | InnerLoop |
|-----------|-----------|
| Role has 2+ serial same-prefix tasks | true |
| Role has 1 task | false |
| Tasks are parallel (no dependency between them) | false |
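The flag can be computed from the dependency graph ("serial" here means at least one dependency between the role's own tasks; the helper is illustrative):

```python
def inner_loop_flag(role_tasks: list, graph: dict) -> bool:
    """True only for roles owning 2+ tasks linked by an intra-role dependency."""
    if len(role_tasks) < 2:
        return False
    own = set(role_tasks)
    return any(own & set(graph.get(t, [])) for t in role_tasks)
```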
### Dependency Validation
| Check | Criteria |
|-------|----------|
| No orphan tasks | Every task is reachable from at least one root |
| No circular deps | Topological sort succeeds without cycle |
| All owners valid | Every task owner exists in team-session.json#roles |
| All blockedBy valid | Every blockedBy references an existing task subject |
| Session reference | Every task description contains `Session: <session-folder>` |
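The topological sort (workflow step 2) doubles as the circular-dependency check; a Kahn's-algorithm sketch over the `{task: [blockedBy...]}` graph shape used here:

```python
from collections import deque

def topo_order(graph: dict) -> list:
    """Return a valid creation order, or raise if the graph has a cycle."""
    indeg = {t: len(deps) for t, deps in graph.items()}
    dependents = {t: [] for t in graph}
    for t, deps in graph.items():
        for d in deps:
            dependents[d].append(t)
    queue = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(graph):
        raise ValueError("circular dependency detected")
    return order
```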
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Task count | Matches dependency_graph node count |
| Dependencies | Every blockedBy references an existing task subject |
| Owner assignment | Each task owner is in role registry |
| Session reference | Every task description contains `Session:` |
| Pipeline integrity | No disconnected subgraphs (warn if found) |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Circular dependency detected | Report cycle, halt task creation |
| Owner not in role registry | Error, coordinator must fix roles first |
| TaskCreate fails | Log error, report to coordinator |
| Duplicate task subject | Skip creation, log warning |
| Empty dependency graph | Error, task analysis may have failed |


@@ -1,296 +0,0 @@
# Command: monitor
## Purpose
Event-driven pipeline coordination with Spawn-and-Stop pattern. Adapted from v4 for dynamic roles -- role names are read from `team-session.json#roles` instead of hardcoded. Includes `handleAdapt` for mid-pipeline capability gap handling.
## Constants
| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip coordinator for simple linear successors |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |
**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name` rather than a static list. This is the key difference from v4.
## Phase 3: Handler Routing
### Wake-up Source Detection
Parse `$ARGUMENTS` to determine handler:
| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |
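The priority-ordered routing, as a sketch (`route_wakeup` is illustrative; role names come from `session.roles[].name`):

```python
def route_wakeup(message: str, role_names: list) -> str:
    """Pick the handler for a coordinator wake-up, highest priority first."""
    low = message.lower()
    if any(f"[{r}]" in message for r in role_names):
        return "handleCallback"
    if "capability_gap" in low:
        return "handleAdapt"
    if "check" in low or "status" in low:
        return "handleCheck"
    if any(w in low for w in ("resume", "continue", "next")):
        return "handleResume"
    return "handleSpawnNext"
```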
---
### Handler: handleCallback
Worker completed a task. Verify completion, update state, auto-advance.
```
Receive callback from [<role>]
+- Find matching active worker by role (from session.roles)
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
| +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
| +- YES -> remove from active_workers -> update session
| | +- -> handleSpawnNext
| +- NO -> progress message, do not advance -> STOP
+- No matching worker found
+- Scan all active workers for completed tasks
+- Found completed -> process each -> handleSpawnNext
+- None completed -> STOP
```
**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext
---
### Handler: handleCheck
Read-only status report. No pipeline advancement.
**Output format**:
```
[coordinator] Pipeline Status
[coordinator] Progress: <completed>/<total> (<percent>%)
[coordinator] Execution Graph:
<visual representation of dependency graph with status icons>
done=completed >>>=running o=pending .=not created
[coordinator] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]
[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
**Icon mapping**: completed = `done`, in_progress = `>>>`, pending = `o`, not created = `.`
**Graph rendering**: Read dependency_graph from task-analysis.json, render each node with status icon. Show parallel branches side-by-side.
Then STOP.
---
### Handler: handleResume
Check active worker completion, process results, advance pipeline.
```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
+- status = completed -> mark done, log
+- status = in_progress -> still running, log
+- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
---
### Handler: handleSpawnNext
Find all ready tasks, spawn workers in background, update session, STOP.
```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects
Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> Phase 5
+- HAS ready tasks -> for each:
+- Is task owner an Inner Loop role AND that role already has an active_worker?
| +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
| +- NO -> normal spawn below
+- TaskUpdate -> in_progress
+- team_msg log -> task_unblocked (team=<session-id>, NOT team name)
+- Spawn worker (see spawn tool call below)
+- Add to session.active_workers
Update session file -> output summary -> STOP
```
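The task-state partitioning above can be sketched as follows. The task record shape (`{"subject", "status", "blockedBy"}`) is an illustrative assumption.

```python
# Sketch of handleSpawnNext's partitioning of TaskList() results.
def partition_tasks(tasks):
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    in_progress = [t["subject"] for t in tasks if t["status"] == "in_progress"]
    # Ready: pending tasks whose every blockedBy dependency is completed.
    ready = [
        t["subject"] for t in tasks
        if t["status"] == "pending"
        and all(dep in completed for dep in t.get("blockedBy", []))
    ]
    return completed, in_progress, ready
```

When `ready` and `in_progress` are both empty, the pipeline is complete.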
**Spawn worker tool call** (one per ready task):
```
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker for <subject>",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: "<worker prompt from SKILL.md Coordinator Spawn Template>"
})
```
---
### Handler: handleAdapt
Handle mid-pipeline capability gap discovery. A worker reports `capability_gap` when it encounters work outside its scope.
**CONSTRAINT**: Maximum 5 worker roles per session (per coordinator/role.md). handleAdapt MUST enforce this limit.
```
Parse capability_gap message:
+- Extract: gap_description, requesting_role, suggested_capability
+- Validate gap is genuine:
+- Check existing roles in session.roles -> does any role cover this?
| +- YES -> redirect: SendMessage to that role's owner -> STOP
| +- NO -> genuine gap, proceed to role generation
+- CHECK ROLE COUNT LIMIT (MAX 5 ROLES):
+- Count current roles in session.roles
+- If count >= 5:
+- Attempt to merge new capability into existing role:
+- Find best-fit role by responsibility_type
+- If merge possible:
+- Update existing role file with new capability
+- Create task assigned to existing role
+- Log via team_msg (type: warning, summary: "Capability merged into existing role")
+- STOP
+- If merge NOT possible:
+- PAUSE session
+- Report to user:
"Role limit (5) reached. Cannot generate new role for: <gap_description>
Options:
1. Manually extend an existing role
2. Re-run team-coordinate with refined task to consolidate roles
3. Accept limitation and continue without this capability"
+- STOP
+- Generate new role:
1. Read specs/role-template.md
2. Fill template with capability details from gap description
3. Write new role file to <session-folder>/roles/<new-role>.md
4. Add to session.roles[]
+- Create new task(s):
TaskCreate({
subject: "<NEW-PREFIX>-001",
owner: "<new-role>",
description: "<gap_description>\nSession: <session-folder>\nInnerLoop: false",
blockedBy: [<requesting task if sequential>],
status: "pending"
})
+- Update team-session.json: add role, increment tasks_total
+- Spawn new worker -> STOP
```
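The role-limit branch of handleAdapt can be sketched as a decision function. The role record shape and the "first role" best-fit heuristic are assumptions for illustration; real matching would score fit by responsibility type.

```python
# Sketch of handleAdapt's 5-role cap enforcement.
MAX_ROLES = 5

def plan_gap_response(roles, gap_type, mergeable=True):
    """roles: list of {"name", "responsibility_type"}; gap_type: str."""
    for role in roles:
        if role["responsibility_type"] == gap_type:
            return ("redirect", role["name"])   # an existing role covers it
    if len(roles) < MAX_ROLES:
        return ("generate", None)               # room for a new role
    if mergeable:
        return ("merge", roles[0]["name"])      # naive best-fit: first role
    return ("pause", None)                      # escalate to user
```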
---
### Worker Failure Handling
When a worker has unexpected status (not completed, not in_progress):
1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume
### Fast-Advance Failure Recovery
When coordinator detects a fast-advanced task has failed (task in_progress but no callback and worker gone):
```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Original fast-advancing worker has already completed and exited
+- Resolution:
1. TaskUpdate -> reset task to pending
2. Remove stale active_worker entry (if any)
3. Log via team_msg (type: error, summary: "Fast-advanced task <ID> failed, resetting for retry")
4. -> handleSpawnNext (will re-spawn the task normally)
```
**Detection in handleResume**:
```
For each in_progress task in TaskList():
+- Has matching active_worker? -> normal, skip
+- No matching active_worker? -> orphaned (likely fast-advance failure)
+- Check creation time: if > 5 minutes with no progress callback
+- Reset to pending -> handleSpawnNext
```
**Recovery guarantee**: Fast-advance failures are self-healing. The coordinator reconciles orphaned tasks on every `resume`/`check` cycle.
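The orphan check can be sketched as follows, using the 5-minute staleness threshold from the detection rule above. The task and worker record shapes are assumptions.

```python
import time

# Sketch of fast-advance orphan detection during handleResume.
STALE_AFTER = 5 * 60  # seconds without a progress callback

def find_orphans(in_progress_tasks, active_workers, now=None):
    """Return subjects of in_progress tasks with no matching active worker
    that have been silent longer than the staleness threshold."""
    now = time.time() if now is None else now
    worker_subjects = {w["subject"] for w in active_workers}
    return [
        t["subject"] for t in in_progress_tasks
        if t["subject"] not in worker_subjects
        and now - t["started_at"] > STALE_AFTER
    ]
```

Each returned subject is reset to pending and re-spawned via handleSpawnNext.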
### Consensus-Blocked Handling
When a worker reports `consensus_blocked` in its callback:
```
handleCallback receives message with consensus_blocked flag
+- Extract: divergence_severity, blocked_round, action_recommendation
+- Route by severity:
|
+- severity = HIGH
| +- Create REVISION task:
| +- Same role, same doc type, incremented suffix (e.g., DRAFT-001-R1)
| +- Description includes: divergence details + action items from discuss
| +- blockedBy: none (immediate execution)
| +- Max 1 revision per task (DRAFT-001 -> DRAFT-001-R1, no R2)
| +- If already revised once -> PAUSE, escalate to user
| +- Update session: mark task as "revised", log revision chain
|
+- severity = MEDIUM
| +- Proceed with warning: include divergence in next task's context
| +- Log action items to wisdom/issues.md
| +- Normal handleSpawnNext
|
+- severity = LOW
+- Proceed normally: treat as consensus_reached with notes
+- Normal handleSpawnNext
```
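The severity routing above can be sketched as a small dispatch function. The return labels are illustrative; the revision-suffix naming and max-one-revision rule follow the text.

```python
# Sketch of consensus_blocked routing in handleCallback.
def route_consensus_blocked(subject, severity, already_revised):
    if severity == "HIGH":
        if already_revised:
            return ("pause", None)            # no R2: escalate to user
        return ("revise", f"{subject}-R1")    # immediate revision task
    if severity == "MEDIUM":
        return ("proceed_with_warning", None) # log action items to wisdom/issues.md
    return ("proceed", None)                  # LOW: treat as consensus_reached
```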
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task owners exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
| Fast-advance orphan check | in_progress tasks without active_worker entry -> reset to pending |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Coordinator reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Dynamic role file not found | Error, coordinator must regenerate from task-analysis |
| capability_gap from completed role | Validate gap, generate role if genuine |
| capability_gap when role limit (5) reached | Attempt merge into existing role, else pause for user |
| consensus_blocked HIGH | Create revision task (max 1) or pause for user |
| consensus_blocked MEDIUM | Proceed with warning, log to wisdom/issues.md |


@@ -1,283 +0,0 @@
# Coordinator Role
Orchestrate the team-coordinate workflow: task analysis, dynamic role generation, task dispatching, progress monitoring, and session-state maintenance. The sole built-in role -- all worker roles are generated at runtime.
## Identity
- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Analyze task -> Generate roles -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Analyze user task to detect capabilities and build dependency graph
- Dynamically generate worker roles from specs/role-template.md
- Create team and spawn worker subagents in background
- Dispatch tasks with proper dependency chains from task-analysis.json
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
- Handle capability_gap reports (generate new roles mid-pipeline)
- Handle consensus_blocked HIGH verdicts (create revision tasks or pause)
- Detect fast-advance orphans on resume/check and reset to pending
### MUST NOT
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Skip dependency validation when creating task chains
- Generate more than 5 worker roles (merge if exceeded)
- Override consensus_blocked HIGH without user confirmation
> **Core principle**: coordinator is the orchestrator, not the executor. All actual work is delegated to dynamically generated worker roles.
---
## Command Execution Protocol
When coordinator needs to execute a command (analyze-task, dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** - NOT separate agents or subprocesses
4. **Execute synchronously** - complete the command workflow before proceeding
Example:
```
Phase 1 needs task analysis
-> Read roles/coordinator/commands/analyze-task.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Analysis)
-> Execute Phase 4 (Output)
-> Continue to Phase 2
```
---
## Entry Router
When coordinator is invoked, first detect the invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| Interrupted session | Active/paused session exists in `.workflow/.team/TC-*` | -> Phase 0 (Resume Check) |
| New session | None of above | -> Phase 1 (Task Analysis) |
For callback/check/resume/adapt: load `commands/monitor.md` and execute the appropriate handler, then STOP.
### Router Implementation
1. **Load session context** (if exists):
- Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
- If found, extract `session.roles[].name` for callback detection
2. **Parse $ARGUMENTS** for detection keywords
3. **Route to handler**:
- For monitor handlers: Read `commands/monitor.md`, execute matched handler section, STOP
- For Phase 0: Execute Session Resume Check below
- For Phase 1: Execute Task Analysis below
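The detection table can be sketched as a router function. The keyword sets follow the table; giving `capability_gap` precedence over a plain callback is one reasonable reading, since a gap report also carries a role tag.

```python
# Sketch of the coordinator's entry router.
def route(arguments, message, session_roles, has_open_session):
    msg = message or ""
    args = (arguments or "").lower()
    if "capability_gap" in msg:
        return "handleAdapt"
    if any(f"[{r}]" in msg for r in session_roles):
        return "handleCallback"
    if "check" in args or "status" in args:
        return "handleCheck"
    if "resume" in args or "continue" in args:
        return "handleResume"
    if has_open_session:
        return "phase0_resume_check"
    return "phase1_task_analysis"
```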
---
## Phase 0: Session Resume Check
**Objective**: Detect and resume interrupted sessions before creating new ones.
**Workflow**:
1. Scan `.workflow/.team/TC-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection
**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session.completed_tasks <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Detect fast-advance orphans (in_progress without recent activity) -> reset to pending
5. Determine remaining pipeline from reconciled state
6. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
7. Create missing tasks with correct blockedBy dependencies
8. Verify dependency chain integrity
9. Update session file with reconciled state
10. Kick first executable task's worker -> Phase 4
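The bidirectional sync in step 2, plus the interrupted-task reset in step 3, can be sketched as follows. The record shapes are assumptions; TaskList is treated as the source of truth for live status.

```python
# Sketch of Session Reconciliation steps 2-3.
def reconcile(session_completed, tasklist):
    """Merge completion records from the session file and TaskList,
    and collect interrupted (in_progress) tasks to reset to pending."""
    live_completed = {t["subject"] for t in tasklist if t["status"] == "completed"}
    merged = sorted(set(session_completed) | live_completed)
    to_reset = [t["subject"] for t in tasklist if t["status"] == "in_progress"]
    return merged, to_reset
```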
---
## Phase 1: Task Analysis
**Objective**: Parse user task, detect capabilities, build dependency graph, design roles.
**Workflow**:
1. **Parse user task description**
2. **Clarify if ambiguous** via AskUserQuestion:
- What is the scope? (specific files, module, project-wide)
- What deliverables are expected? (documents, code, analysis reports)
- Any constraints? (timeline, technology, style)
3. **Delegate to `commands/analyze-task.md`**:
- Signal detection: scan keywords -> infer capabilities
- Artifact inference: each capability -> default output type (.md)
- Dependency graph: build DAG of work streams
- Complexity scoring: count capabilities, cross-domain factor, parallel tracks
- Role minimization: merge overlapping, absorb trivial, cap at 5
4. **Output**: Write `<session>/task-analysis.json`
**Success**: Task analyzed, capabilities detected, dependency graph built, roles designed.
**CRITICAL - Team Workflow Enforcement**:
Regardless of complexity score or role count, coordinator MUST:
- **Always proceed to Phase 2** (generate roles)
- **Always create team** and spawn workers
- **NEVER execute task work directly**, even for single-role low-complexity tasks
- **NEVER skip team workflow** based on complexity assessment
**Single-role execution is still team-based** - just with one worker. The team architecture provides:
- Consistent message bus communication
- Session state management
- Artifact tracking
- Fast-advance capability
- Resume/recovery mechanisms
---
## Phase 2: Generate Roles + Initialize Session
**Objective**: Create session, generate dynamic role files, initialize shared infrastructure.
**Workflow**:
1. **Generate session ID**: `TC-<slug>-<date>` (slug from first 3 meaningful words of task)
2. **Create session folder structure**:
```
.workflow/.team/<session-id>/
+-- roles/
+-- artifacts/
+-- wisdom/
+-- explorations/
+-- discussions/
+-- .msg/
```
3. **Call TeamCreate** with team name derived from session ID
4. **Read `specs/role-template.md`** + `task-analysis.json`
5. **For each role in task-analysis.json#roles**:
- Fill role template with:
- role_name, prefix, responsibility_type from analysis
- Phase 2-4 content from responsibility type reference sections in template
- inner_loop flag from analysis (true if role has 2+ serial tasks)
- Task-specific instructions from task description
- Write generated role file to `<session>/roles/<role-name>.md`
6. **Register roles** in team-session.json#roles
7. **Initialize shared infrastructure**:
- `wisdom/learnings.md`, `wisdom/decisions.md`, `wisdom/issues.md` (empty with headers)
- `explorations/cache-index.json` (`{ "entries": [] }`)
- `shared-memory.json` (`{}`)
- `discussions/` (empty directory)
8. **Write team-session.json** with: session_id, task_description, status="active", roles, pipeline (empty), active_workers=[], created_at
**Success**: Session created, role files generated, shared infrastructure initialized.
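The `TC-<slug>-<date>` rule in step 1 can be sketched as follows. The stop-word list deciding which words count as "meaningful" is an illustrative assumption.

```python
import datetime
import re

# Sketch of session-ID generation: TC-<slug>-<date>.
STOP_WORDS = {"a", "an", "the", "for", "to", "of", "and", "in", "on"}

def session_id(task_description, today=None):
    words = re.findall(r"[a-z0-9]+", task_description.lower())
    meaningful = [w for w in words if w not in STOP_WORDS][:3]
    date = (today or datetime.date.today()).isoformat()
    return f"TC-{'-'.join(meaningful)}-{date}"
```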
---
## Phase 3: Create Task Chain
**Objective**: Dispatch tasks based on dependency graph with proper dependencies.
Delegate to `commands/dispatch.md` which creates the full task chain:
1. Reads dependency_graph from task-analysis.json
2. Topological sorts tasks
3. Creates tasks via TaskCreate with correct blockedBy
4. Assigns owner based on role mapping from task-analysis.json
5. Includes `Session: <session-folder>` in every task description
6. Sets InnerLoop flag for multi-task roles
7. Updates team-session.json with pipeline and tasks_total
**Success**: All tasks created with correct dependency chains, session updated.
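The topological sort in step 2 can be sketched with Kahn's algorithm. The graph shape (`{subject: [blockedBy...]}`) is an assumption; a cycle surfaces as an error so the coordinator can halt and report, per the Error Handling table.

```python
from collections import deque

# Sketch of dependency-safe task ordering for dispatch.
def topo_sort(graph):
    """graph: {subject: [blockedBy subjects]}. Returns a creation order
    in which every task appears after all of its dependencies."""
    indegree = {n: len(deps) for n, deps in graph.items()}
    dependents = {n: [] for n in graph}
    for n, deps in graph.items():
        for d in deps:
            dependents[d].append(n)
    queue = deque(sorted(n for n, deg in indegree.items() if deg == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in dependents[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(graph):
        raise ValueError("dependency cycle detected")
    return order
```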
---
## Phase 4: Spawn-and-Stop
**Objective**: Spawn first batch of ready workers in background, then STOP.
**Design**: Spawn-and-Stop + Callback pattern, with worker fast-advance.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> may fast-advance to next task OR SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS
**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task -> spawn worker (see SKILL.md Coordinator Spawn Template)
- Use Standard Worker template for single-task roles
- Use Inner Loop Worker template for multi-task roles
4. Output status summary with execution graph
5. STOP
**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
---
## Phase 5: Report + Next Steps
**Objective**: Completion report and follow-up options.
**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Update session status -> "completed"
6. Offer next steps: exit / view artifacts / extend with additional tasks
**Output format**:
```
[coordinator] ============================================
[coordinator] TASK COMPLETE
[coordinator]
[coordinator] Deliverables:
[coordinator] - <artifact-1.md> (<producer role>)
[coordinator] - <artifact-2.md> (<producer role>)
[coordinator]
[coordinator] Pipeline: <completed>/<total> tasks
[coordinator] Roles: <role-list>
[coordinator] Duration: <elapsed>
[coordinator]
[coordinator] Session: <session-folder>
[coordinator] ============================================
```
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect in task analysis, report to user, halt |
| Task description too vague | AskUserQuestion for clarification |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| Role generation fails | Fall back to single general-purpose role |
| capability_gap reported | handleAdapt: generate new role, create tasks, spawn |
| All capabilities merge to one | Valid: single-role execution, reduced overhead |
| No capabilities detected | Default to single general role with TASK prefix |


@@ -1,434 +0,0 @@
# Dynamic Role Template
Template used by coordinator to generate worker role.md files at runtime. Each generated role is written to `<session>/roles/<role-name>.md`.
## Template
```markdown
# Role: <role_name>
<role_description>
## Identity
- **Name**: `<role_name>` | **Tag**: `[<role_name>]`
- **Task Prefix**: `<prefix>-*`
- **Responsibility**: <responsibility_type>
<if inner_loop>
- **Mode**: Inner Loop (handle all `<prefix>-*` tasks in single agent)
</if>
## Boundaries
### MUST
- Only process `<prefix>-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[<role_name>]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within <responsibility_type> responsibility scope
- Use fast-advance for simple linear successors (see SKILL.md Phase 5)
- Produce MD artifacts in `<session>/artifacts/`
<if inner_loop>
- Use subagent for heavy work (do not execute CLI/generation in main agent context)
- Maintain context_accumulator across tasks within the inner loop
- Loop through all `<prefix>-*` tasks before reporting to coordinator
</if>
### MUST NOT
- Execute work outside this role's responsibility scope
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's scope
- Omit `[<role_name>]` identifier in any output
- Fast-advance when multiple tasks are ready or at checkpoint boundaries
<if inner_loop>
- Execute heavy work (CLI calls, large document generation) in main agent (delegate to subagent)
- SendMessage to coordinator mid-loop (unless consensus_blocked HIGH or error count >= 3)
</if>
## Toolbox
| Tool | Purpose |
|------|---------|
<tools based on responsibility_type -- see reference sections below>
## Message Types
| Type | Direction | Description |
|------|-----------|-------------|
| `<prefix>_complete` | -> coordinator | Task completed with artifact path |
| `<prefix>_error` | -> coordinator | Error encountered |
| `capability_gap` | -> coordinator | Work outside role scope discovered |
## Message Bus
Before every SendMessage, log via `mcp__ccw-tools__team_msg`:
```
mcp__ccw-tools__team_msg({
operation: "log",
team: <session-id>,
from: "<role_name>",
to: "coordinator",
type: <message-type>,
summary: "[<role_name>] <prefix> complete: <task-subject>",
ref: <artifact-path>
})
```
**`team` must be the session ID** (e.g., `TC-my-project-2026-02-27`), NOT the team name. Extract it from the task description's `Session:` field and take the folder name.
**CLI fallback** (when MCP unavailable):
```
Bash("ccw team log --team <session-id> --from <role_name> --to coordinator --type <message-type> --summary \"[<role_name>] <prefix> complete\" --ref <artifact-path> --json")
```
---
## Execution (5-Phase)
### Phase 1: Task Discovery
> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery
Standard task discovery flow: TaskList -> filter by prefix `<prefix>-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.
### Phase 2: <phase2_name>
<phase2_content -- generated by coordinator based on responsibility type>
### Phase 3: <phase3_name>
<phase3_content -- generated by coordinator based on task specifics>
### Phase 4: <phase4_name>
<phase4_content -- generated by coordinator based on responsibility type>
<if inline_discuss>
### Phase 4b: Inline Discuss (optional)
After primary work, optionally call discuss subagent:
```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss <round-id>",
prompt: "## Multi-Perspective Critique: <round-id>
See subagents/discuss-subagent.md for prompt template.
Perspectives: <specified by coordinator when generating this role>"
})
```
| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | Phase 5 SendMessage includes structured consensus_blocked format. Do NOT self-revise. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
</if>
<if inner_loop>
### Phase 5-L: Loop Completion (Inner Loop)
When more same-prefix tasks remain:
1. **TaskUpdate**: Mark current task completed
2. **team_msg**: Log task completion
3. **Accumulate summary**:
```
context_accumulator.append({
task: "<task-id>",
artifact: "<output-path>",
key_decisions: <from subagent return>,
discuss_verdict: <from Phase 4>,
summary: <from subagent return>
})
```
4. **Interrupt check**:
- consensus_blocked HIGH -> SendMessage -> STOP
- Error count >= 3 -> SendMessage -> STOP
5. **Loop**: Back to Phase 1
**Does NOT**: SendMessage to coordinator, Fast-Advance spawn.
### Phase 5-F: Final Report (Inner Loop)
When all same-prefix tasks are done:
1. **TaskUpdate**: Mark last task completed
2. **team_msg**: Log completion
3. **Summary report**: All tasks summary + discuss results + artifact paths
4. **Fast-Advance check**: Check cross-prefix successors
5. **SendMessage** or **spawn successor**
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report + Fast-Advance
<else>
### Phase 5: Report + Fast-Advance
> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report + Fast-Advance
Standard report flow: team_msg log -> SendMessage with `[<role_name>]` prefix -> TaskUpdate completed -> Fast-Advance Check -> Loop to Phase 1 for next task.
</if>
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No <prefix>-* tasks available | Idle, wait for coordinator assignment |
| Context file not found | Notify coordinator, request location |
| Subagent fails | Retry once with fallback; still fails -> log error, continue next task |
| Fast-advance spawn fails | Fall back to SendMessage to coordinator |
<if inner_loop>
| Cumulative 3 task failures | SendMessage to coordinator, STOP inner loop |
| Agent crash mid-loop | Coordinator detects orphan on resume -> re-spawn -> resume from interrupted task |
</if>
| Work outside scope discovered | SendMessage capability_gap to coordinator |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
```
---
## Phase 2-4 Content by Responsibility Type
Reference sections for coordinator to fill when generating roles. Select the matching section based on `responsibility_type`.
### orchestration
**Phase 2: Context Assessment**
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Shared memory | <session>/shared-memory.json | No |
| Prior artifacts | <session>/artifacts/ | No |
| Wisdom | <session>/wisdom/ | No |
Loading steps:
1. Extract session path from task description
2. Read shared-memory.json for cross-role context
3. Read prior artifacts (if any exist from upstream tasks)
4. Load wisdom files for accumulated knowledge
5. Optionally call explore subagent for codebase context
```
**Phase 3: Subagent Execution**
```
Delegate to appropriate subagent based on task:
Task({
subagent_type: "general-purpose",
run_in_background: false,
description: "<task-type> for <task-id>",
prompt: "## Task
- <task description>
- Session: <session-folder>
## Context
<prior artifacts + shared memory + explore results>
## Expected Output
Write artifact to: <session>/artifacts/<artifact-name>.md
Return JSON summary: { artifact_path, summary, key_decisions[], warnings[] }"
})
```
**Phase 4: Result Aggregation**
```
1. Verify subagent output artifact exists
2. Read artifact, validate structure/completeness
3. Update shared-memory.json with key findings
4. Write insights to wisdom/ files
```
### code-gen (docs)
**Phase 2: Load Prior Context**
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Prior artifacts | <session>/artifacts/ from upstream tasks | Conditional |
| Shared memory | <session>/shared-memory.json | No |
| Wisdom | <session>/wisdom/ | No |
Loading steps:
1. Extract session path from task description
2. Read upstream artifacts (e.g., research findings for a writer)
3. Read shared-memory.json for cross-role context
4. Load wisdom for accumulated decisions
```
**Phase 3: Document Generation**
```
Task({
subagent_type: "universal-executor",
run_in_background: false,
description: "Generate <doc-type> for <task-id>",
prompt: "## Task
- Generate: <document type>
- Session: <session-folder>
## Prior Context
<upstream artifacts + shared memory>
## Instructions
<task-specific writing instructions from coordinator>
## Expected Output
Write document to: <session>/artifacts/<doc-name>.md
Return JSON: { artifact_path, summary, key_decisions[], sections_generated[], warnings[] }"
})
```
**Phase 4: Structure Validation**
```
1. Verify document artifact exists
2. Check document has expected sections
3. Validate no placeholder text remains
4. Update shared-memory.json with document metadata
```
### code-gen (code)
**Phase 2: Load Plan/Specs**
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Plan/design artifacts | <session>/artifacts/ | Conditional |
| Shared memory | <session>/shared-memory.json | No |
| Wisdom | <session>/wisdom/ | No |
Loading steps:
1. Extract session path from task description
2. Read plan/design artifacts from upstream
3. Load shared-memory.json for implementation context
4. Load wisdom for conventions and patterns
```
**Phase 3: Code Implementation**
```
Task({
subagent_type: "code-developer",
run_in_background: false,
description: "Implement <task-id>",
prompt: "## Task
- <implementation description>
- Session: <session-folder>
## Plan/Design Context
<upstream artifacts>
## Instructions
<task-specific implementation instructions>
## Expected Output
Implement code changes.
Write summary to: <session>/artifacts/implementation-summary.md
Return JSON: { artifact_path, summary, files_changed[], key_decisions[], warnings[] }"
})
```
**Phase 4: Syntax Validation**
```
1. Run syntax check (tsc --noEmit or equivalent)
2. Verify all planned files exist
3. Check no broken imports
4. If validation fails -> attempt auto-fix (max 2 attempts)
5. Write implementation summary to artifacts/
```
### read-only
**Phase 2: Target Loading**
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Target artifacts/files | From task description or upstream | Yes |
| Shared memory | <session>/shared-memory.json | No |
Loading steps:
1. Extract session path and target files from task description
2. Read target artifacts or source files for analysis
3. Load shared-memory.json for context
```
**Phase 3: Multi-Dimension Analysis**
```
Task({
subagent_type: "general-purpose",
run_in_background: false,
description: "Analyze <target> for <task-id>",
prompt: "## Task
- Analyze: <target description>
- Dimensions: <analysis dimensions from coordinator>
- Session: <session-folder>
## Target Content
<artifact content or file content>
## Expected Output
Write report to: <session>/artifacts/analysis-report.md
Return JSON: { artifact_path, summary, findings[], severity_counts: {critical, high, medium, low} }"
})
```
**Phase 4: Severity Classification**
```
1. Verify analysis report exists
2. Classify findings by severity (Critical/High/Medium/Low)
3. Update shared-memory.json with key findings
4. Write issues to wisdom/issues.md
```
### validation
**Phase 2: Environment Detection**
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Implementation artifacts | Upstream code changes | Yes |
Loading steps:
1. Detect test framework from project files
2. Get changed files from implementation
3. Identify test command and coverage tool
```
**Phase 3: Test-Fix Cycle**
```
Task({
subagent_type: "test-fix-agent",
run_in_background: false,
description: "Test-fix for <task-id>",
prompt: "## Task
- Run tests and fix failures
- Session: <session-folder>
- Max iterations: 5
## Changed Files
<from upstream implementation>
## Expected Output
Write report to: <session>/artifacts/test-report.md
Return JSON: { artifact_path, pass_rate, coverage, iterations_used, remaining_failures[] }"
})
```
**Phase 4: Result Analysis**
```
1. Check pass rate >= 95%
2. Check coverage meets threshold
3. Generate test report with pass/fail counts
4. Update shared-memory.json with test results
```


@@ -1,133 +0,0 @@
# Discuss Subagent
Lightweight multi-perspective critique engine. Called inline by any role needing peer review. Perspectives are dynamic -- specified by the calling role, not pre-defined.
## Design
Unlike team-lifecycle-v4's fixed perspective definitions (product, technical, quality, risk, coverage), team-coordinate uses **dynamic perspectives** passed in the prompt. The calling role decides what viewpoints matter for its artifact.
## Invocation
Called by roles after artifact creation:
```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss <round-id>",
prompt: `## Multi-Perspective Critique: <round-id>
### Input
- Artifact: <artifact-path>
- Round: <round-id>
- Session: <session-folder>
### Perspectives
<Dynamic perspective list -- each entry defines: name, cli_tool, role_label, focus_areas>
Example:
| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Feasibility | gemini | Engineer | Implementation complexity, technical risks, resource needs |
| Clarity | codex | Editor | Readability, logical flow, completeness of explanation |
| Accuracy | gemini | Domain Expert | Factual correctness, source reliability, claim verification |
### Execution Steps
1. Read artifact from <artifact-path>
2. For each perspective, launch CLI analysis in background:
Bash(command="ccw cli -p 'PURPOSE: Analyze from <role> perspective for <round-id>
TASK: <focus-areas>
MODE: analysis
CONTEXT: Artifact content below
EXPECTED: JSON with strengths[], weaknesses[], suggestions[], rating (1-5)
CONSTRAINTS: Output valid JSON only
Artifact:
<artifact-content>' --tool <cli-tool> --mode analysis", run_in_background=true)
3. Wait for all CLI results
4. Divergence detection:
- High severity: a critical issue identified, or two or more ratings <= 2
- Medium severity: rating spread (max - min) >= 3, or a single perspective rated <= 2 while the others are >= 3
- Low severity: minor suggestions only, all ratings >= 3
5. Consensus determination:
- No high-severity divergences AND average rating >= 3.0 -> consensus_reached
- Otherwise -> consensus_blocked
6. Synthesize:
- Convergent themes (agreed by 2+ perspectives)
- Divergent views (conflicting assessments)
- Action items from suggestions
7. Write discussion record to: <session-folder>/discussions/<round-id>-discussion.md
### Discussion Record Format
# Discussion Record: <round-id>
**Artifact**: <artifact-path>
**Perspectives**: <list>
**Consensus**: reached / blocked
**Average Rating**: <avg>/5
## Convergent Themes
- <theme>
## Divergent Views
- **<topic>** (<severity>): <description>
## Action Items
1. <item>
## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |
### Return Value
**When consensus_reached**:
Return a summary string with:
- Verdict: consensus_reached
- Average rating
- Key action items (top 3)
- Discussion record path
**When consensus_blocked**:
Return a structured summary with:
- Verdict: consensus_blocked
- Severity: HIGH | MEDIUM | LOW
- Average rating
- Divergence summary: top 3 divergent points with perspective attribution
- Action items: prioritized list of required changes
- Recommendation: revise | proceed-with-caution | escalate
- Discussion record path
### Error Handling
- Single CLI fails -> fallback to direct Claude analysis for that perspective
- All CLI fail -> generate basic discussion from direct artifact reading
- Artifact not found -> return error immediately`
})
```
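The divergence and consensus rules in steps 4-5 can be sketched as a pure function. This is illustrative only; it assumes each perspective result carries a numeric `rating` and an optional `criticalIssue` flag, which are not names fixed by the prompt template above:

```javascript
// Classify one review round per the severity and consensus rules above.
// Each result: { perspective, rating (1-5), criticalIssue (boolean) }.
function classifyRound(results) {
  const ratings = results.map(r => r.rating);
  const min = Math.min(...ratings);
  const max = Math.max(...ratings);
  const avg = ratings.reduce((a, b) => a + b, 0) / ratings.length;

  // Severity: first matching rule wins. Note the HIGH check already covers
  // "any rating <= 2", so the MEDIUM clause about a single rating <= 2
  // never fires on its own.
  let severity = "LOW";
  if (min <= 2 || results.some(r => r.criticalIssue)) severity = "HIGH";
  else if (max - min >= 3) severity = "MEDIUM";

  const verdict = severity !== "HIGH" && avg >= 3.0
    ? "consensus_reached"
    : "consensus_blocked";
  return { severity, avg, verdict };
}
```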
## Integration with Calling Role
The calling role is responsible for:
1. **Before calling**: Complete primary artifact output
2. **Calling**: Invoke discuss subagent with appropriate dynamic perspectives
3. **After calling**:
| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in Phase 5 report, proceed normally |
| consensus_blocked | HIGH | Include divergence details in Phase 5 SendMessage. Do NOT self-revise -- coordinator decides. |
| consensus_blocked | MEDIUM | Include warning in Phase 5 SendMessage. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |
**SendMessage format for consensus_blocked (HIGH or MEDIUM)**:
```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <artifact-path>
Discussion: <discussion-record-path>
```
@@ -1,120 +0,0 @@
# Explore Subagent
Shared codebase exploration utility with centralized caching. Callable by any role needing code context.
## Invocation
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore <angle>",
prompt: `Explore codebase for: <query>
Focus angle: <angle>
Keywords: <keyword-list>
Session folder: <session-folder>
## Cache Check
1. Read <session-folder>/explorations/cache-index.json (if exists)
2. Look for entry with matching angle
3. If found AND file exists -> read cached result, return summary
4. If not found -> proceed to exploration
## Exploration
<angle-specific-focus-from-table-below>
## Output
Write JSON to: <session-folder>/explorations/explore-<angle>.json
Update cache-index.json with new entry
## Output Schema
{
"angle": "<angle>",
"query": "<query>",
"relevant_files": [
{ "path": "...", "rationale": "...", "role": "...", "discovery_source": "...", "key_symbols": [] }
],
"patterns": [],
"dependencies": [],
"external_refs": [],
"_metadata": { "created_by": "<calling-role>", "timestamp": "...", "cache_key": "..." }
}
Return summary: file count, pattern count, top 5 files, output path`
})
```
## Cache Mechanism
### Cache Index Schema
`<session-folder>/explorations/cache-index.json`:
```json
{
"entries": [
{
"angle": "architecture",
"keywords": ["auth", "middleware"],
"file": "explore-architecture.json",
"created_by": "analyst",
"created_at": "2026-02-27T10:00:00Z",
"file_count": 15
}
]
}
```
### Cache Lookup Rules
| Condition | Action |
|-----------|--------|
| Exact angle match exists | Return cached result |
| No match | Execute exploration, cache result |
| Cache file missing but index has entry | Remove stale entry, re-explore |
### Cache Invalidation
Cache is session-scoped. No explicit invalidation needed -- each session starts fresh. If a role suspects stale data, it can pass `force_refresh: true` in the prompt to bypass cache.
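The lookup rules above amount to a small reconciliation routine. A sketch, with `fileExists` standing in for real filesystem access:

```javascript
// Resolve a cache lookup per the rules table: hit, miss, or stale entry.
function resolveCache(index, angle, fileExists) {
  const entry = (index.entries || []).find(e => e.angle === angle);
  if (!entry) return { action: "explore" };                 // no match
  if (fileExists(entry.file)) return { action: "use-cache", entry };
  // Index references a missing file: drop the stale entry and re-explore.
  index.entries = index.entries.filter(e => e !== entry);
  return { action: "re-explore", removedStale: true };
}
```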
## Angle Focus Guide
| Angle | Focus Points | Typical Caller |
|-------|-------------|----------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs | any |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities | any |
| modularity | Module interfaces, separation of concerns, extraction opportunities | any |
| integration-points | API endpoints, data flow between modules, event systems | any |
| security | Auth/authz logic, input validation, sensitive data handling, middleware | any |
| dataflow | Data transformations, state propagation, validation points | any |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity | any |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging | any |
| patterns | Code conventions, design patterns, naming conventions, best practices | any |
| testing | Test files, coverage gaps, test patterns, mocking strategies | any |
| general | Broad semantic search for topic-related code | any |
## Exploration Strategies
### Low Complexity (direct search)
For simple queries, use ACE semantic search:
```
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<query>")
```
ACE failure fallback: `rg -l '<keywords>' --type ts`
### Medium/High Complexity (multi-angle)
For complex queries, call cli-explore-agent per angle. The calling role determines complexity and selects angles.
## Search Tool Priority
| Tool | Priority | Use Case |
|------|----------|----------|
| mcp__ace-tool__search_context | P0 | Semantic search |
| Grep / Glob | P1 | Pattern matching |
| cli-explore-agent | P2 | Multi-angle deep exploration |
| WebSearch | P3 | External docs |
@@ -1,372 +0,0 @@
---
name: team-executor
description: Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution. No analysis, no role generation -- only loads and executes. Session path required. Triggers on "team executor".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Executor
Lightweight session execution skill: load session -> reconcile state -> spawn workers -> execute -> deliver. **No analysis, no role generation** -- only executes existing team-coordinate sessions.
## Architecture
```
+---------------------------------------------------+
| Skill(skill="team-executor") |
| args="--session=<path>" [REQUIRED] |
| args="--role=<name>" (for worker dispatch) |
+-------------------+-------------------------------+
| Session Validation
+---- --session valid? ----+
| NO | YES
v v
Error immediately Role Router
(no session) |
+-------+-------+
| --role present?
| |
YES | | NO
v | v
Route to | Orchestration Mode
session | -> executor
role.md |
```
---
## Session Validation (BEFORE routing)
**CRITICAL**: Session validation MUST occur before any role routing.
### Parse Arguments
Extract from `$ARGUMENTS`:
- `--session=<path>`: Path to team-coordinate session folder (REQUIRED)
- `--role=<name>`: Role to dispatch (optional, defaults to orchestration mode)
### Validation Steps
1. **Check `--session` provided**:
- If missing -> **ERROR**: "Session required. Usage: --session=<path-to-TC-folder>"
- Do NOT proceed
2. **Validate session structure** (see specs/session-schema.md):
- Directory exists at path
- `team-session.json` exists and valid JSON
- `task-analysis.json` exists and valid JSON
- `roles/` directory has at least one `.md` file
- Each role in `team-session.json#roles` has corresponding `.md` file in `roles/`
3. **Validation failure**:
- Report specific missing component
- Suggest re-running team-coordinate or checking path
- Do NOT proceed
### Validation Checklist
```
Session Validation Checklist:
[ ] --session argument provided
[ ] Directory exists at path
[ ] team-session.json exists and parses
[ ] task-analysis.json exists and parses
[ ] roles/ directory has >= 1 .md files
[ ] All session.roles[] have corresponding roles/<role>.md
```
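The checklist above can be expressed as a validator that reports the first failing check. A sketch; filesystem access is stubbed behind the `exists` and `readJson` callbacks:

```javascript
// Validate a session folder per the checklist; returns null on success,
// or the first failing check's error message.
function validateSession(sessionPath, exists, readJson) {
  if (!sessionPath) return "Session required. Usage: --session=<path-to-TC-folder>";
  if (!exists(sessionPath)) return `Directory not found: ${sessionPath}`;
  let session;
  try { session = readJson(`${sessionPath}/team-session.json`); }
  catch { return "team-session.json missing or invalid"; }
  try { readJson(`${sessionPath}/task-analysis.json`); }
  catch { return "task-analysis.json missing or invalid"; }
  const roles = session.roles || [];
  if (roles.length === 0) return "no roles defined in team-session.json";
  for (const role of roles) {
    if (!exists(`${sessionPath}/roles/${role.name}.md`)) {
      return `role file missing: roles/${role.name}.md`;
    }
  }
  return null; // all checks passed
}
```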
---
## Role Router
### Dispatch Logic
| Scenario | Action |
|----------|--------|
| No `--session` | **ERROR** immediately |
| `--session` invalid | **ERROR** with specific reason |
| No `--role` | Orchestration Mode -> executor |
| `--role=executor` | Read built-in `roles/executor/role.md` |
| `--role=<other>` | Read `<session>/roles/<role>.md` |
### Orchestration Mode
When invoked without `--role`, executor auto-starts.
**Invocation**: `Skill(skill="team-executor", args="--session=<session-folder>")`
**Lifecycle**:
```
Validate session
-> executor Phase 0: Reconcile state (reset interrupted, detect orphans)
-> executor Phase 1: Spawn first batch workers (background) -> STOP
-> Worker executes -> SendMessage callback -> executor advances next step
-> Loop until pipeline complete -> Phase 2 report
```
**User Commands** (wake paused executor):
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
---
## Role Registry
| Role | File | Type |
|------|------|------|
| executor | [roles/executor/role.md](roles/executor/role.md) | built-in orchestrator |
| (dynamic) | `<session>/roles/<role-name>.md` | loaded from session |
> **COMPACT PROTECTION**: Role files are execution documents. After context compression, role instructions become summaries only -- **MUST immediately `Read` the role.md to reload before continuing**. Never execute any Phase based on summaries.
---
## Shared Infrastructure
The following templates apply to all worker roles. Each loaded role.md follows the same structure.
### Worker Phase 1: Task Discovery (all workers shared)
Each worker on startup executes the same task discovery flow:
1. Call `TaskList()` to get all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No tasks -> idle wait
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress
**Resume Artifact Check** (prevent duplicate output after resume):
- Check if this task's output artifacts already exist
- Artifacts complete -> skip to Phase 5 report completion
- Artifacts incomplete or missing -> normal Phase 2-4 execution
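The discovery filter in steps 1-2 can be sketched as follows (illustrative; assumes each task exposes `subject`, `owner`, `status`, and `blockedBy` fields):

```javascript
// Select this role's next actionable task from a TaskList() snapshot.
function discoverTask(tasks, role, prefix) {
  return tasks.find(t =>
    t.subject.startsWith(prefix) &&    // matches this role's prefix
    t.owner === role &&                // owned by this role
    t.status === "pending" &&          // not yet claimed
    (t.blockedBy || []).length === 0   // no unresolved dependencies
  ) || null;                           // null -> idle wait
}
```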
### Worker Phase 5: Report + Fast-Advance (all workers shared)
Task completion with optional fast-advance to skip executor round-trip:
1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
- Params: operation="log", team=**<session-id>**, from=<role>, to="executor", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
- **`team` must be session ID** (e.g., `TC-my-project-2026-02-27`), NOT team name. Extract from task description `Session:` field -> take folder name.
- **CLI fallback**: `ccw team log --team <session-id> --from <role> --to executor --type <type> --summary "[<role>] ..." --json`
2. **TaskUpdate**: Mark task completed
3. **Fast-Advance Check**:
- Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
- If exactly 1 ready task AND its owner matches a simple successor pattern -> **spawn it directly** (skip executor)
- Otherwise -> **SendMessage** to executor for orchestration
4. **Loop**: Back to Phase 1 to check for next task
**Fast-Advance Rules**:
| Condition | Action |
|-----------|--------|
| Same-prefix successor (Inner Loop role) | Do not spawn; the current worker handles it via its inner loop (Phase 5-L) |
| 1 ready task, simple linear successor, different prefix | Spawn directly via Task(run_in_background: true) |
| Multiple ready tasks (parallel window) | SendMessage to executor (needs orchestration) |
| No ready tasks + others running | SendMessage to executor (status update) |
| No ready tasks + nothing running | SendMessage to executor (pipeline may be complete) |
**Fast-advance failure recovery**: If a fast-advanced task fails, the executor detects it as an orphaned in_progress task on next `resume`/`check` and resets it to pending for re-spawn. Self-healing. See [monitor.md](roles/executor/commands/monitor.md).
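The rules table reduces to one decision function. A sketch; the "simple linear successor" qualification is elided for brevity:

```javascript
// Decide a worker's post-completion action, per the fast-advance rules.
function fastAdvanceDecision(readyTasks, runningCount, myPrefix) {
  if (readyTasks.length === 1) {
    const t = readyTasks[0];
    return t.subject.startsWith(myPrefix)
      ? { action: "inner-loop" }              // Phase 5-L, no new spawn
      : { action: "spawn-direct", task: t };  // skip the executor round-trip
  }
  if (readyTasks.length > 1) return { action: "send-message", reason: "parallel window" };
  return runningCount > 0
    ? { action: "send-message", reason: "status update" }
    : { action: "send-message", reason: "pipeline may be complete" };
}
```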
### Worker Inner Loop (roles with multiple same-prefix serial tasks)
When a role has **2+ serial same-prefix tasks**, it loops internally instead of spawning new agents:
**Inner Loop flow**:
```
Phase 1: Discover task (first time)
|
+- Found task -> Phase 2-3: Load context + Execute work
| |
| v
| Phase 4: Validation (+ optional Inline Discuss)
| |
| v
| Phase 5-L: Loop Completion
| |
| +- TaskUpdate completed
| +- team_msg log
| +- Accumulate summary to context_accumulator
| |
| +- More same-prefix tasks?
| | +- YES -> back to Phase 1 (inner loop)
| | +- NO -> Phase 5-F: Final Report
| |
| +- Interrupt conditions?
| +- consensus_blocked HIGH -> SendMessage -> STOP
| +- Errors >= 3 -> SendMessage -> STOP
|
+- Phase 5-F: Final Report
+- SendMessage (all task summaries)
+- STOP
```
**Phase 5-L vs Phase 5-F**:
| Step | Phase 5-L (looping) | Phase 5-F (final) |
|------|---------------------|-------------------|
| TaskUpdate completed | YES | YES |
| team_msg log | YES | YES |
| Accumulate summary | YES | - |
| SendMessage to executor | NO | YES (all tasks summary) |
| Fast-Advance to next prefix | - | YES (check cross-prefix successors) |
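The two completion variants differ only in their final steps, so they can be sketched together (illustrative; `ctx` is an assumed accumulator object, not a name defined by the flow above):

```javascript
// Complete one inner-loop task and pick the next step. Summaries are
// always accumulated so the final report can include all of them.
function completeInnerLoopTask(ctx, task, remainingSamePrefix) {
  // Common to 5-L and 5-F: TaskUpdate(completed) + team_msg log (elided).
  ctx.summaries.push(`${task.subject}: done`);
  if (ctx.errorCount >= 3 || ctx.consensusBlockedHigh) return "interrupt";
  if (remainingSamePrefix > 0) return "loop";   // Phase 5-L: back to Phase 1
  return "final-report";                        // Phase 5-F: SendMessage all summaries
}
```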
### Wisdom Accumulation (all roles)
Cross-task knowledge accumulation. Loaded from session at startup.
**Directory**:
```
<session-folder>/wisdom/
+-- learnings.md # Patterns and insights
+-- decisions.md # Design and strategy decisions
+-- issues.md # Known risks and issues
```
**Worker load** (Phase 2): Extract `Session: <path>` from task description, read wisdom files.
**Worker contribute** (Phase 4/5): Write discoveries to corresponding wisdom files.
### Role Isolation Rules
| Allowed | Prohibited |
|---------|-----------|
| Process own prefix tasks | Process other role's prefix tasks |
| SendMessage to executor | Directly communicate with other workers |
| Use tools appropriate to responsibility | Create tasks for other roles |
| Fast-advance simple successors | Spawn parallel worker batches |
| Report capability_gap to executor | Attempt work outside scope |
The executor is additionally prohibited from: directly writing or modifying deliverable artifacts, calling implementation subagents directly, directly executing analysis/tests/reviews, and generating new roles.
---
## Cadence Control
**Beat model**: Event-driven, each beat = executor wake -> process -> spawn -> STOP.
```
Beat Cycle (single beat)
======================================================================
Event Executor Workers
----------------------------------------------------------------------
callback/resume --> +- handleCallback -+
| mark completed |
| check pipeline |
+- handleSpawnNext -+
| find ready tasks |
| spawn workers ---+--> [Worker A] Phase 1-5
| (parallel OK) --+--> [Worker B] Phase 1-5
+- STOP (idle) -----+ |
|
callback <-----------------------------------------+
(next beat) SendMessage + TaskUpdate(completed)
======================================================================
Fast-Advance (skips executor for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
+- 1 ready task? simple successor? --> spawn Worker B directly
+- complex case? --> SendMessage to executor
======================================================================
```
---
## Executor Spawn Template
### Standard Worker (single-task role)
```
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `You are team "<team-name>" <ROLE>.
## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-executor", args="--role=<role> --session=<session-folder>")
Current requirement: <task-description>
Session: <session-folder>
## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with executor
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- After task completion, check for fast-advance opportunity (see SKILL.md Phase 5)
## Workflow
1. Call Skill -> get role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg(team=<session-id>) + SendMessage results to executor
4. TaskUpdate completed -> check next task or fast-advance`
})
```
### Inner Loop Worker (multi-task role)
```
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker (inner loop)",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `You are team "<team-name>" <ROLE>.
## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-executor", args="--role=<role> --session=<session-folder>")
Current requirement: <task-description>
Session: <session-folder>
## Inner Loop Mode
You will handle ALL <PREFIX>-* tasks in this session, not just the first one.
After completing each task, loop back to find the next <PREFIX>-* task.
Only SendMessage to executor when:
- All <PREFIX>-* tasks are done
- A consensus_blocked HIGH occurs
- Errors accumulate (>= 3)
## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with executor
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- Use subagent calls for heavy work, retain summaries in context`
})
```
---
## Integration with team-coordinate
| Scenario | Skill |
|----------|-------|
| New task, no session | team-coordinate |
| Existing session, resume execution | **team-executor** |
| Session needs new roles | team-coordinate (with --resume) |
| Pure execution, no analysis | **team-executor** |
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No --session provided | ERROR immediately with usage message |
| Session directory not found | ERROR with path, suggest checking path |
| team-session.json missing | ERROR, session incomplete, suggest re-run team-coordinate |
| task-analysis.json missing | ERROR, session incomplete, suggest re-run team-coordinate |
| No roles in session | ERROR, session incomplete, suggest re-run team-coordinate |
| Role file not found | ERROR with expected path |
| capability_gap reported | Warn only, cannot generate new roles (see monitor.md handleAdapt) |
| Fast-advance spawns wrong task | Executor reconciles on next callback |
@@ -1,277 +0,0 @@
# Command: monitor
## Purpose
Event-driven pipeline coordination with the Spawn-and-Stop pattern for team-executor. Adapted from the team-coordinate monitor.md -- role names are read from `team-session.json#roles` instead of being hardcoded. **handleAdapt is LIMITED**: it only warns and cannot generate new roles.
## Constants
| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Executor does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip executor for simple linear successors |
| ROLE_GENERATION | disabled | handleAdapt cannot generate new roles |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |
**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name`. This is the same pattern as team-coordinate.
## Phase 3: Handler Routing
### Wake-up Source Detection
Parse `$ARGUMENTS` to determine handler:
| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |
---
### Handler: handleCallback
Worker completed a task. Verify completion, update state, auto-advance.
```
Receive callback from [<role>]
+- Find matching active worker by role (from session.roles)
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
| +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
| +- YES -> remove from active_workers -> update session
| | +- -> handleSpawnNext
| +- NO -> progress message, do not advance -> STOP
+- No matching worker found
+- Scan all active workers for completed tasks
+- Found completed -> process each -> handleSpawnNext
+- None completed -> STOP
```
**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext
---
### Handler: handleCheck
Read-only status report. No pipeline advancement.
**Output format**:
```
[executor] Pipeline Status
[executor] Progress: <completed>/<total> (<percent>%)
[executor] Execution Graph:
<visual representation of dependency graph with status icons>
done=completed >>>=running o=pending .=not created
[executor] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]
[executor] Ready to spawn: <subjects>
[executor] Commands: 'resume' to advance | 'check' to refresh
```
**Icon mapping**: completed = `done`, in_progress = `>>>`, pending = `o`, not created = `.`
**Graph rendering**: Read dependency_graph from task-analysis.json, render each node with status icon. Show parallel branches side-by-side.
Then STOP.
---
### Handler: handleResume
Check active worker completion, process results, advance pipeline.
```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
+- status = completed -> mark done, log
+- status = in_progress -> still running, log
+- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
---
### Handler: handleSpawnNext
Find all ready tasks, spawn workers in background, update session, STOP.
```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects
Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> Phase 2
+- HAS ready tasks -> for each:
+- Is task owner an Inner Loop role AND that role already has an active_worker?
| +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
| +- NO -> normal spawn below
+- TaskUpdate -> in_progress
+- team_msg log -> task_unblocked (team=<session-id>, NOT team name)
+- Spawn worker (see spawn tool call below)
+- Add to session.active_workers
Update session file -> output summary -> STOP
```
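The ready-set computation at the top of this handler can be sketched as:

```javascript
// Partition a TaskList() snapshot and compute spawnable tasks.
function findReadyTasks(tasks) {
  const completed = new Set(
    tasks.filter(t => t.status === "completed").map(t => t.subject)
  );
  const inProgress = tasks.filter(t => t.status === "in_progress");
  const ready = tasks.filter(t =>
    t.status === "pending" &&
    (t.blockedBy || []).every(dep => completed.has(dep))
  );
  const pipelineComplete = ready.length === 0 && inProgress.length === 0;
  return { ready, inProgress, pipelineComplete };
}
```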
**Spawn worker tool call** (one per ready task):
```
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker for <subject>",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: "<worker prompt from SKILL.md Executor Spawn Template>"
})
```
---
### Handler: handleAdapt (LIMITED)
Handle mid-pipeline capability gap discovery. **UNLIKE team-coordinate, executor CANNOT generate new roles.**
```
Receive capability_gap from [<role>]
+- Log via team_msg (type: warning)
+- Report to user:
"Capability gap detected: <gap_description>
team-executor cannot generate new roles.
Options:
1. Continue with existing roles (worker will skip gap work)
2. Re-run team-coordinate with --resume=<session> to extend session
3. Manually add role to <session>/roles/ and retry"
+- Extract: gap_description, requesting_role, suggested_capability
+- Validate gap is genuine:
+- Check existing roles in session.roles -> does any role cover this?
| +- YES -> redirect: SendMessage to that role's owner -> STOP
| +- NO -> genuine gap, report to user (cannot fix)
+- Do NOT generate new role
+- Continue execution with existing roles
```
**Key difference from team-coordinate**:
| Aspect | team-coordinate | team-executor |
|--------|-----------------|---------------|
| handleAdapt | Generates new role, creates tasks, spawns worker | Only warns, cannot fix |
| Recovery | Automatic | Manual (re-run team-coordinate) |
---
### Worker Failure Handling
When a worker has unexpected status (not completed, not in_progress):
1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume
### Fast-Advance Failure Recovery
When executor detects a fast-advanced task has failed (task in_progress but no callback and worker gone):
```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Original fast-advancing worker has already completed and exited
+- Resolution:
1. TaskUpdate -> reset task to pending
2. Remove stale active_worker entry (if any)
3. Log via team_msg (type: error, summary: "Fast-advanced task <ID> failed, resetting for retry")
4. -> handleSpawnNext (will re-spawn the task normally)
```
**Detection in handleResume**:
```
For each in_progress task in TaskList():
+- Has matching active_worker? -> normal, skip
+- No matching active_worker? -> orphaned (likely fast-advance failure)
+- Check creation time: if > 5 minutes with no progress callback
+- Reset to pending -> handleSpawnNext
```
**Prevention**: Fast-advance failures are self-healing. The executor reconciles orphaned tasks on every `resume`/`check` cycle.
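The detection step can be sketched as a filter over the task snapshot (illustrative; assumes `now` and a per-task `startedAt` field are epoch milliseconds -- the field name is an assumption, not part of the schema above):

```javascript
// Find in_progress tasks with no matching active worker that have been
// silent longer than the threshold (default 5 minutes) -- likely orphans.
function findOrphans(tasks, activeWorkers, now, thresholdMs = 5 * 60 * 1000) {
  const workerSubjects = new Set(activeWorkers.map(w => w.subject));
  return tasks.filter(t =>
    t.status === "in_progress" &&
    !workerSubjects.has(t.subject) &&
    now - t.startedAt > thresholdMs
  );
}
```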
### Consensus-Blocked Handling
When a worker reports `consensus_blocked` in its callback:
```
handleCallback receives message with consensus_blocked flag
+- Extract: divergence_severity, blocked_round, action_recommendation
+- Route by severity:
|
+- severity = HIGH
| +- Create REVISION task:
| +- Same role, same doc type, incremented suffix (e.g., DRAFT-001-R1)
| +- Description includes: divergence details + action items from discuss
| +- blockedBy: none (immediate execution)
| +- Max 1 revision per task (DRAFT-001 -> DRAFT-001-R1, no R2)
| +- If already revised once -> PAUSE, escalate to user
| +- Update session: mark task as "revised", log revision chain
|
+- severity = MEDIUM
| +- Proceed with warning: include divergence in next task's context
| +- Log action items to wisdom/issues.md
| +- Normal handleSpawnNext
|
+- severity = LOW
+- Proceed normally: treat as consensus_reached with notes
+- Normal handleSpawnNext
```
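The severity routing, including the one-revision cap, can be sketched as:

```javascript
// Route a consensus_blocked callback by severity; enforce max 1 revision.
function routeConsensusBlocked(severity, taskId, revisedTasks) {
  if (severity === "HIGH") {
    if (revisedTasks.has(taskId)) return { action: "pause-escalate" }; // already revised once
    revisedTasks.add(taskId);
    return { action: "create-revision", revisionId: `${taskId}-R1` };
  }
  if (severity === "MEDIUM") return { action: "proceed-with-warning" };
  return { action: "proceed" }; // LOW: treat as consensus_reached with notes
}
```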
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task owners exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
| Fast-advance orphan check | in_progress tasks without active_worker entry -> reset to pending |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-run team-coordinate |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Dynamic role file not found | Error, cannot proceed without role definition |
| capability_gap from role | WARN only, cannot generate new roles |
| consensus_blocked HIGH | Create revision task (max 1) or pause for user |
| consensus_blocked MEDIUM | Proceed with warning, log to wisdom/issues.md |
@@ -1,202 +0,0 @@
# Executor Role
Orchestrate the team-executor workflow: session validation, state reconciliation, worker dispatch, progress monitoring, and session state persistence. The sole built-in role -- all worker roles are loaded from the session.
## Identity
- **Name**: `executor` | **Tag**: `[executor]`
- **Responsibility**: Validate session -> Reconcile state -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Validate session structure before any execution
- Reconcile session state with TaskList on startup
- Reset in_progress tasks to pending (interrupted tasks)
- Detect fast-advance orphans and reset to pending
- Spawn worker subagents in background
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
- Handle capability_gap reports with warning only (cannot generate roles)
### MUST NOT
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Generate new roles (use existing session roles only)
- Skip session validation
- Override consensus_blocked HIGH without user confirmation
> **Core principle**: the executor orchestrates; it never performs task work itself. All actual work is delegated to session-defined worker roles. Unlike the team-coordinate coordinator, the executor CANNOT generate new roles.
---
## Entry Router
When executor is invoked, first detect the invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| New execution | None of above | -> Phase 0 |
For callback/check/resume/adapt: load `commands/monitor.md` and execute the appropriate handler, then STOP.
---
## Phase 0: Session Validation + State Reconciliation
**Objective**: Validate session structure and reconcile session state with actual task status.
**Workflow**:
### Step 1: Session Validation
Validate session structure (see SKILL.md Session Validation):
- [ ] Directory exists at session path
- [ ] `team-session.json` exists and parses
- [ ] `task-analysis.json` exists and parses
- [ ] `roles/` directory has >= 1 .md files
- [ ] All roles in team-session.json#roles have corresponding .md files
If validation fails -> ERROR with specific reason -> STOP
### Step 2: Load Session State
```javascript
session = Read(<session-folder>/team-session.json)
taskAnalysis = Read(<session-folder>/task-analysis.json)
```
### Step 3: Reconcile with TaskList
```
Call TaskList() -> get real status of all tasks
Compare with session.completed_tasks:
+- Tasks in TaskList.completed but not in session -> add to session.completed_tasks
+- Tasks in session.completed_tasks but not TaskList.completed -> remove from session.completed_tasks (anomaly, log warning)
+- Tasks in TaskList.in_progress -> candidate for reset
```
### Step 4: Reset Interrupted Tasks
```
For each task in TaskList.in_progress:
+- Reset to pending via TaskUpdate
+- Log via team_msg (type: warning, summary: "Task <ID> reset from interrupted state")
```
### Step 5: Detect Fast-Advance Orphans
```
For each task in TaskList.in_progress:
+- Check if has matching active_worker entry
+- No matching active_worker + created > 5 minutes ago -> orphan
+- Reset to pending via TaskUpdate
+- Log via team_msg (type: error, summary: "Fast-advance orphan <ID> reset")
```
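The orphan rule combines both conditions above. In this sketch, the `active_workers` entry shape and the per-task `created` timestamps are assumptions, since the session schema leaves them open:

```python
from datetime import datetime, timedelta, timezone

ORPHAN_AGE = timedelta(minutes=5)

def find_orphans(in_progress, active_workers, now=None):
    """Return in_progress task IDs with no active worker, older than 5 minutes.

    in_progress: task ID -> created datetime (timezone-aware).
    active_workers: list of {"task_id": ...} entries (hypothetical shape).
    """
    now = now or datetime.now(timezone.utc)
    claimed = {w["task_id"] for w in active_workers}
    return sorted(
        tid for tid, created in in_progress.items()
        if tid not in claimed and now - created > ORPHAN_AGE
    )
```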
### Step 6: Create Missing Tasks (if needed)
```
For each task in task-analysis.json#tasks:
+- Check if exists in TaskList
+- Not exists -> create via TaskCreate with correct blockedBy
```
### Step 7: Update Session File
```
Write updated team-session.json with:
+- reconciled completed_tasks
+- cleared active_workers (will be rebuilt on spawn)
+- status = "active"
```
### Step 8: Team Setup
```
Check if team exists (via TaskList with team_name filter)
+- Not exists -> TeamCreate with team_name from session
+- Exists -> continue with existing team
```
**Success**: Session validated, state reconciled, team ready -> Phase 1
---
## Phase 1: Spawn-and-Stop
**Objective**: Spawn first batch of ready workers in background, then STOP.
**Design**: Spawn-and-Stop + Callback pattern, with worker fast-advance.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> may fast-advance to next task OR SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Executor does one operation per invocation, then STOPS
**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task -> spawn worker (see SKILL.md Executor Spawn Template)
- Use Standard Worker template for single-task roles
- Use Inner Loop Worker template for multi-task roles
4. Output status summary with execution graph
5. STOP
**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
---
## Phase 2: Report + Next Steps
**Objective**: Completion report and follow-up options.
**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Update session status -> "completed"
6. Offer next steps: exit / view artifacts / extend with additional tasks
**Output format**:
```
[executor] ============================================
[executor] TASK COMPLETE
[executor]
[executor] Deliverables:
[executor] - <artifact-1.md> (<producer role>)
[executor] - <artifact-2.md> (<producer role>)
[executor]
[executor] Pipeline: <completed>/<total> tasks
[executor] Roles: <role-list>
[executor] Duration: <elapsed>
[executor]
[executor] Session: <session-folder>
[executor] ============================================
```
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Session validation fails | ERROR with specific reason, suggest re-run team-coordinate |
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| capability_gap reported | handleAdapt: WARN only, cannot generate new roles |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Role file not found | ERROR, cannot proceed without role definition |


@@ -1,272 +0,0 @@
# Session Schema
Required session structure for team-executor. All components MUST exist for valid execution.
## Directory Structure
```
<session-folder>/
+-- team-session.json # Session state + dynamic role registry (REQUIRED)
+-- task-analysis.json # Task analysis output: capabilities, dependency graph (REQUIRED)
+-- roles/ # Dynamic role definitions (REQUIRED, >= 1 .md file)
| +-- <role-1>.md
| +-- <role-2>.md
+-- artifacts/ # All MD deliverables from workers
| +-- <artifact>.md
+-- shared-memory.json # Cross-role state store
+-- wisdom/ # Cross-task knowledge
| +-- learnings.md
| +-- decisions.md
| +-- issues.md
+-- explorations/ # Shared explore cache
| +-- cache-index.json
| +-- explore-<angle>.json
+-- discussions/ # Inline discuss records
| +-- <round>.md
+-- .msg/ # Team message bus logs
```
## Validation Checklist
team-executor validates the following before execution:
### Required Components
| Component | Validation | Error Message |
|-----------|------------|---------------|
| `--session` argument | Must be provided | "Session required. Usage: --session=<path-to-TC-folder>" |
| Directory | Must exist at path | "Session directory not found: <path>" |
| `team-session.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: team-session.json missing, corrupt, or missing required fields" |
| `task-analysis.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: task-analysis.json missing, corrupt, or missing required fields" |
| `roles/` directory | Must exist and contain >= 1 .md file | "Invalid session: no role files in roles/" |
| Role file mapping | Each role in team-session.json#roles must have .md file | "Role file not found: roles/<role>.md" |
| Role file structure | Each role .md must contain required headers | "Invalid role file: roles/<role>.md missing required section: <section>" |
### Validation Algorithm
```
1. Parse --session=<path> from arguments
+- Not provided -> ERROR: "Session required. Usage: --session=<path-to-TC-folder>"
2. Check directory exists
+- Not exists -> ERROR: "Session directory not found: <path>"
3. Check team-session.json
+- Not exists -> ERROR: "Invalid session: team-session.json missing"
+- Parse error -> ERROR: "Invalid session: team-session.json corrupt"
+- Validate required fields:
+- session_id (string) -> missing/invalid -> ERROR: "team-session.json missing required field: session_id"
+- task_description (string) -> missing -> ERROR: "team-session.json missing required field: task_description"
+- status (string: active|paused|completed) -> missing/invalid -> ERROR: "team-session.json has invalid status"
+- team_name (string) -> missing -> ERROR: "team-session.json missing required field: team_name"
+- roles (array) -> missing/empty -> ERROR: "team-session.json missing or empty roles array"
4. Check task-analysis.json
+- Not exists -> ERROR: "Invalid session: task-analysis.json missing"
+- Parse error -> ERROR: "Invalid session: task-analysis.json corrupt"
+- Validate required fields:
+- capabilities (array) -> missing -> ERROR: "task-analysis.json missing required field: capabilities"
+- dependency_graph (object) -> missing -> ERROR: "task-analysis.json missing required field: dependency_graph"
+- roles (array) -> missing/empty -> ERROR: "task-analysis.json missing or empty roles array"
+- tasks (array) -> missing/empty -> ERROR: "task-analysis.json missing or empty tasks array"
5. Check roles/ directory
+- Not exists -> ERROR: "Invalid session: roles/ directory missing"
+- No .md files -> ERROR: "Invalid session: no role files in roles/"
6. Check role file mapping and structure
+- For each role in team-session.json#roles:
+- Check roles/<role.name>.md exists
+- Not exists -> ERROR: "Role file not found: roles/<role.name>.md"
+- Validate role file structure (see Role File Structure Validation):
+- Check for required headers: "# Role:", "## Identity", "## Boundaries", "## Execution"
+- Missing header -> ERROR: "Invalid role file: roles/<role.name>.md missing required section: <section>"
7. All checks pass -> proceed to Phase 0
```
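The step-3 field checks can be sketched as a table-driven validator, assuming team-session.json has already been read and parsed into a dict:

```python
REQUIRED_SESSION_FIELDS = {
    "session_id": str, "task_description": str, "status": str,
    "team_name": str, "roles": list,
}
VALID_STATUS = {"active", "paused", "completed"}

def validate_session(doc):
    """Return a list of error strings for a parsed team-session.json dict."""
    errors = []
    for field, typ in REQUIRED_SESSION_FIELDS.items():
        if not isinstance(doc.get(field), typ):
            errors.append(f"team-session.json missing required field: {field}")
    if doc.get("status") not in VALID_STATUS:
        errors.append("team-session.json has invalid status")
    if isinstance(doc.get("roles"), list) and not doc["roles"]:
        errors.append("team-session.json missing or empty roles array")
    return errors
```

The same table-driven pattern extends to the task-analysis.json checks in step 4.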
---
## team-session.json Schema
```json
{
"session_id": "TC-<slug>-<date>",
"task_description": "<original user input>",
"status": "active | paused | completed",
"team_name": "<team-name>",
"roles": [
{
"name": "<role-name>",
"prefix": "<PREFIX>",
"responsibility_type": "<type>",
"inner_loop": false,
"role_file": "roles/<role-name>.md"
}
],
"pipeline": {
"dependency_graph": {},
"tasks_total": 0,
"tasks_completed": 0
},
"active_workers": [],
"completed_tasks": [],
"created_at": "<timestamp>"
}
```
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `session_id` | string | Unique session identifier (e.g., "TC-auth-feature-2026-02-27") |
| `task_description` | string | Original task description from user |
| `status` | string | One of: "active", "paused", "completed" |
| `team_name` | string | Team name for Task tool |
| `roles` | array | List of role definitions |
| `roles[].name` | string | Role name (must match .md filename) |
| `roles[].prefix` | string | Task prefix for this role (e.g., "SPEC", "IMPL") |
| `roles[].role_file` | string | Relative path to role file |
### Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| `pipeline` | object | Pipeline metadata |
| `active_workers` | array | Currently running workers |
| `completed_tasks` | array | List of completed task IDs |
| `created_at` | string | ISO timestamp |
---
## task-analysis.json Schema
```json
{
"capabilities": [
{
"name": "<capability-name>",
"description": "<description>",
"artifact_type": "<type>"
}
],
"dependency_graph": {
"<task-id>": {
"depends_on": ["<dependency-task-id>"],
"role": "<role-name>"
}
},
"roles": [
{
"name": "<role-name>",
"prefix": "<PREFIX>",
"responsibility_type": "<type>",
"inner_loop": false
}
],
"tasks": [
{
"id": "<task-id>",
"subject": "<task-subject>",
"owner": "<role-name>",
"blockedBy": ["<dependency-task-id>"]
}
],
"complexity_score": 0
}
```
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `capabilities` | array | Detected capabilities |
| `dependency_graph` | object | Task dependency DAG |
| `roles` | array | Role definitions |
| `tasks` | array | Task definitions |
---
## Role File Schema
Each role file in `roles/<role-name>.md` must follow the structure defined in `team-coordinate/specs/role-template.md`.
### Minimum Required Sections
| Section | Description |
|---------|-------------|
| `# Role: <name>` | Header with role name |
| `## Identity` | Name, tag, prefix, responsibility |
| `## Boundaries` | MUST and MUST NOT rules |
| `## Execution (5-Phase)` | Phase 1-5 workflow |
### Role File Structure Validation
Role files MUST be validated for structure before execution. This catches malformed role files early and provides actionable error messages.
**Required Sections** (must be present in order):
| Section | Pattern | Purpose |
|---------|---------|---------|
| Role Header | `# Role: <name>` | Identifies the role definition |
| Identity | `## Identity` | Defines name, tag, prefix, responsibility |
| Boundaries | `## Boundaries` | Defines MUST and MUST NOT rules |
| Execution | `## Execution` | Defines the 5-Phase workflow |
**Validation Algorithm**:
```
For each role file in roles/<role>.md:
1. Read file content
2. Check for "# Role:" header
+- Not found -> ERROR: "Invalid role file: roles/<role>.md missing role header"
3. Check for "## Identity" section
+- Not found -> ERROR: "Invalid role file: roles/<role>.md missing required section: Identity"
4. Check for "## Boundaries" section
+- Not found -> ERROR: "Invalid role file: roles/<role>.md missing required section: Boundaries"
5. Check for "## Execution" section (or "## Execution (5-Phase)")
+- Not found -> ERROR: "Invalid role file: roles/<role>.md missing required section: Execution"
6. All checks pass -> role file valid
```
**Benefits**:
- Early detection of malformed role files
- Clear error messages for debugging
- Prevents runtime failures during worker execution
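A sketch of the section scan, given a role file's content as a string; the multiline regex anchors are an assumption about how strictly headers must match:

```python
import re

REQUIRED_SECTIONS = [
    (r"^# Role:", "role header"),
    (r"^## Identity\b", "Identity"),
    (r"^## Boundaries\b", "Boundaries"),
    (r"^## Execution\b", "Execution"),  # also matches "## Execution (5-Phase)"
]

def validate_role_file(name, content):
    """Return error strings for one roles/<name>.md file's content."""
    errors = []
    for pattern, label in REQUIRED_SECTIONS:
        if re.search(pattern, content, flags=re.MULTILINE):
            continue
        if label == "role header":
            errors.append(f"Invalid role file: roles/{name}.md missing role header")
        else:
            errors.append(
                f"Invalid role file: roles/{name}.md missing required section: {label}"
            )
    return errors
```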
---
## Example Valid Session
```
.workflow/.team/TC-auth-feature-2026-02-27/
+-- team-session.json # Valid JSON with session metadata
+-- task-analysis.json # Valid JSON with dependency graph
+-- roles/
| +-- spec-writer.md # Role file for SPEC-* tasks
| +-- implementer.md # Role file for IMPL-* tasks
| +-- tester.md # Role file for TEST-* tasks
+-- artifacts/ # (may be empty)
+-- shared-memory.json # Valid JSON (may be {})
+-- wisdom/
| +-- learnings.md
| +-- decisions.md
| +-- issues.md
+-- explorations/
| +-- cache-index.json
+-- discussions/ # (may be empty)
+-- .msg/ # (may be empty)
```
---
## Recovery from Invalid Sessions
If session validation fails:
1. **Missing team-session.json**: Re-run team-coordinate with original task
2. **Missing task-analysis.json**: Re-run team-coordinate with --resume
3. **Missing role files**: Re-run team-coordinate with --resume
4. **Corrupt JSON**: Manual inspection or re-run team-coordinate
**team-executor cannot fix invalid sessions** -- it can only report errors and suggest recovery steps.


@@ -1,384 +0,0 @@
---
name: team-lifecycle-v3
description: Unified team skill for full lifecycle - spec/impl/test. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team lifecycle".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Lifecycle v3
Unified team skill: specification → implementation → testing → review. All team members invoke with `--role=xxx` to route to role-specific execution.
## Architecture
```
┌───────────────────────────────────────────────────┐
│ Skill(skill="team-lifecycle-v3") │
│ args="task description" or args="--role=xxx" │
└───────────────────┬───────────────────────────────┘
│ Role Router
┌──── --role present? ────┐
│ NO │ YES
↓ ↓
Orchestration Mode Role Dispatch
(auto → coordinator) (route to role.md)
┌────┴────┬───────┬───────┬───────┬───────┬───────┬───────┐
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
coordinator analyst writer discussant planner executor tester reviewer
↑ ↑
on-demand by coordinator
┌──────────┐ ┌─────────┐
│ explorer │ │architect│
└──────────┘ └─────────┘
┌──────────────┐ ┌──────┐
│ fe-developer │ │fe-qa │
└──────────────┘ └──────┘
```
## Role Router
### Input Parsing
Parse `$ARGUMENTS` to extract `--role`. If absent → Orchestration Mode (auto route to coordinator).
### Role Registry
| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | **⚠️ Must re-read after compaction** |
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | RESEARCH-* | pipeline | Must re-read after compaction |
| writer | [roles/writer/role.md](roles/writer/role.md) | DRAFT-* | pipeline | Must re-read after compaction |
| discussant | [roles/discussant/role.md](roles/discussant/role.md) | DISCUSS-* | pipeline | Must re-read after compaction |
| planner | [roles/planner/role.md](roles/planner/role.md) | PLAN-* | pipeline | Must re-read after compaction |
| executor | [roles/executor/role.md](roles/executor/role.md) | IMPL-* | pipeline | Must re-read after compaction |
| tester | [roles/tester/role.md](roles/tester/role.md) | TEST-* | pipeline | Must re-read after compaction |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REVIEW-* + QUALITY-* | pipeline | Must re-read after compaction |
| explorer | [roles/explorer/role.md](roles/explorer/role.md) | EXPLORE-* | service (on-demand) | Must re-read after compaction |
| architect | [roles/architect/role.md](roles/architect/role.md) | ARCH-* | consulting (on-demand) | Must re-read after compaction |
| fe-developer | [roles/fe-developer/role.md](roles/fe-developer/role.md) | DEV-FE-* | frontend pipeline | Must re-read after compaction |
| fe-qa | [roles/fe-qa/role.md](roles/fe-qa/role.md) | QA-FE-* | frontend pipeline | Must re-read after compaction |
> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression occurs and only a summary of the role instructions remains, you **must immediately `Read` the corresponding role.md to reload it before continuing execution**. Do not execute any Phase based on the summary alone.
### Dispatch
1. Extract `--role` from arguments
2. If no `--role` → route to coordinator (Orchestration Mode)
3. Look up role in registry → Read the role file → Execute its phases
### Orchestration Mode
When invoked without `--role`, coordinator auto-starts. User just provides task description.
**Invocation**: `Skill(skill="team-lifecycle-v3", args="<task description>")`
**Lifecycle**:
```
User provides task description
→ coordinator Phase 1-3: requirement clarification → TeamCreate → create task chain
→ coordinator Phase 4: spawn first batch of workers (background) → STOP
→ Worker executes → SendMessage callback → coordinator advances the next step
→ Loop until pipeline completes → Phase 5 report
```
**User Commands** (wake a paused coordinator):
| Command | Action |
|---------|--------|
| `check` / `status` | Output the execution status graph without advancing |
| `resume` / `continue` | Check worker status and advance to the next step |
---
## Shared Infrastructure
The following templates apply to all worker roles. Each role.md only needs to define the role-specific logic for **Phase 2-4**.
### Worker Phase 1: Task Discovery (shared by all workers)
Each worker runs the same task discovery flow on startup:
1. Call `TaskList()` to fetch all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No task → idle and wait
4. Task found → `TaskGet` for details → `TaskUpdate` to mark in_progress
**Resume Artifact Check** (prevents duplicate output after resume):
- Check whether this task's output artifacts already exist
- Artifacts complete → skip to Phase 5 and report completion
- Artifacts incomplete or missing → execute Phase 2-4 normally
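The step-2 filter can be sketched as a predicate over task records (a hypothetical shape for the TaskList() result):

```python
def discover_task(tasks, prefix, role):
    """Pick the first claimable task for this worker, or None to go idle.

    tasks: list of dicts with subject, owner, status, blockedBy
    (a hypothetical shape for the TaskList() result).
    """
    for task in tasks:
        if (task["subject"].startswith(prefix)
                and task["owner"] == role
                and task["status"] == "pending"
                and not task["blockedBy"]):
            return task
    return None  # no claimable task -> idle and wait
```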
### Worker Phase 5: Report (shared by all workers)
Standard reporting flow after task completion:
1. **Message Bus**: call `mcp__ccw-tools__team_msg` to log the message
   - Arguments: operation="log", team=<session-id>, from=<role>, to="coordinator", type=<message type>, summary="[<role>] <summary>", ref=<artifact path>
   - **Note**: `team` must be the **session ID** (e.g. `TLS-project-2026-02-27`), not the team name. Extract it from the `Session:` field of the task description.
   - **CLI fallback**: when MCP is unavailable → `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
2. **SendMessage**: send the result to coordinator (both content and summary carry the `[<role>]` prefix)
3. **TaskUpdate**: mark the task completed
4. **Loop**: return to Phase 1 and check for the next task
### Wisdom Accumulation (all roles)
Cross-task knowledge accumulation. Coordinator creates the `wisdom/` directory during session initialization.
**Directory**:
```
<session-folder>/wisdom/
├── learnings.md # Patterns and insights
├── decisions.md # Architecture and design decisions
├── conventions.md # Codebase conventions
└── issues.md # Known risks and issues
```
**Worker load** (Phase 2): extract `Session: <path>` from the task description and read each file under the wisdom directory.
**Worker contribute** (Phase 4/5): write this task's findings into the corresponding wisdom file.
### Role Isolation Rules
| Allowed | Forbidden |
|------|------|
| Process tasks with own prefix | Process tasks with other roles' prefixes |
| SendMessage to coordinator | Communicate directly with other workers |
| Use tools declared in the Toolbox | Create tasks for other roles |
| Delegate to commands in commands/ | Modify resources outside own responsibility |
Additionally forbidden for coordinator: writing or modifying code directly, invoking implementation subagents, performing analysis/testing/review directly.
---
## Pipeline Definitions
### Spec-only (12 tasks)
```
RESEARCH-001 → DISCUSS-001 → DRAFT-001 → DISCUSS-002
→ DRAFT-002 → DISCUSS-003 → DRAFT-003 → DISCUSS-004
→ DRAFT-004 → DISCUSS-005 → QUALITY-001 → DISCUSS-006
```
### Impl-only / Backend (4 tasks)
```
PLAN-001 → IMPL-001 → TEST-001 + REVIEW-001
```
### Full-lifecycle (16 tasks)
```
[Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006) → IMPL-001 → TEST-001 + REVIEW-001
```
### Frontend Pipelines
```
FE-only: PLAN-001 → DEV-FE-001 → QA-FE-001
(GC loop: QA-FE verdict=NEEDS_FIX → DEV-FE-002 → QA-FE-002, max 2 rounds)
Fullstack: PLAN-001 → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001
Full + FE: [Spec pipeline] → PLAN-001 → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001
```
### Cadence Control
**Beat model**: event-driven; each beat = coordinator wakes → processes → spawns → STOP.
```
Beat Cycle (单次节拍)
═══════════════════════════════════════════════════════════
Event Coordinator Workers
───────────────────────────────────────────────────────────
callback/resume ──→ ┌─ handleCallback ─┐
│ mark completed │
│ check pipeline │
├─ handleSpawnNext ─┤
│ find ready tasks │
│ spawn workers ───┼──→ [Worker A] Phase 1-5
│ (parallel OK) ──┼──→ [Worker B] Phase 1-5
└─ STOP (idle) ─────┘ │
callback ←─────────────────────────────────────────┘
(next beat) SendMessage + TaskUpdate(completed)
═══════════════════════════════════════════════════════════
```
**Pipeline beat view**:
```
Spec-only (12 beats, strictly serial)
──────────────────────────────────────────────────────────
Beat 1 2 3 4 5 6 7 8 9 10 11 12
│ │ │ │ │ │ │ │ │ │ │ │
R1 → D1 → W1 → D2 → W2 → D3 → W3 → D4 → W4 → D5 → Q1 → D6
▲ ▲
pipeline sign-off
start pause
R=RESEARCH D=DISCUSS W=DRAFT(writer) Q=QUALITY
Impl-only (3 beats, with a parallel window)
──────────────────────────────────────────────────────────
Beat 1 2 3
│ │ ┌────┴────┐
PLAN → IMPL ──→ TEST ∥ REVIEW ← parallel window
└────┬────┘
pipeline
done
Full-lifecycle (15 beats, checkpoint at the spec→impl transition)
──────────────────────────────────────────────────────────
Beat 1-12: [Spec pipeline as above]
Beat 12 (D6 complete): ⏸ CHECKPOINT ── resume after user confirmation
Beat 13 14 15
PLAN → IMPL → TEST ∥ REVIEW
Fullstack (two parallel windows)
──────────────────────────────────────────────────────────
Beat 1 2 3 4
│ ┌────┴────┐ ┌────┴────┐ │
PLAN → IMPL ∥ DEV-FE → TEST ∥ QA-FE → REVIEW
▲ ▲ ▲
parallel window 1 parallel window 2 sync barrier
```
**Checkpoints**:
| Trigger | Location | Behavior |
|----------|------|------|
| Spec→Impl transition | After DISCUSS-006 completes | ⏸ Pause and wait for user `resume` confirmation |
| GC loop limit | QA-FE max 2 rounds | Limit exceeded → stop iterating, report current state |
| Pipeline stall | No ready + no running | Check for missing tasks, report to user |
**Stall detection** (run during coordinator `handleCheck`):
| Check | Condition | Handling |
|--------|------|------|
| Worker unresponsive | in_progress task without callback | Report the list of waiting tasks, suggest user `resume` |
| Pipeline deadlock | No ready + no running + pending exist | Inspect the blockedBy dependency chain, report the blocker |
| GC loop exceeded | DEV-FE / QA-FE iterations > max_rounds | Terminate the loop, output the latest QA report |
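These three stall checks can be sketched as one classifier over task statuses (shapes hypothetical; `qa_rounds` counts completed DEV-FE/QA-FE iterations):

```python
def detect_stall(tasks, max_rounds=2, qa_rounds=0):
    """Classify pipeline state for handleCheck from task status records."""
    ready = [t for t in tasks if t["status"] == "pending" and not t["blockedBy"]]
    running = [t for t in tasks if t["status"] == "in_progress"]
    pending = [t for t in tasks if t["status"] == "pending"]
    if qa_rounds > max_rounds:
        return "gc_loop_exceeded"   # terminate loop, output latest QA report
    if not ready and not running and pending:
        return "deadlock"           # inspect blockedBy chain, report blocker
    if running and not ready:
        return "waiting_on_workers" # report waiting tasks, suggest resume
    return "healthy"
```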
### Task Metadata Registry
| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|-------------|-------------|
| RESEARCH-001 | analyst | spec | (none) | Seed analysis and context gathering |
| DISCUSS-001 | discussant | spec | RESEARCH-001 | Critique research findings |
| DRAFT-001 | writer | spec | DISCUSS-001 | Generate Product Brief |
| DISCUSS-002 | discussant | spec | DRAFT-001 | Critique Product Brief |
| DRAFT-002 | writer | spec | DISCUSS-002 | Generate Requirements/PRD |
| DISCUSS-003 | discussant | spec | DRAFT-002 | Critique Requirements/PRD |
| DRAFT-003 | writer | spec | DISCUSS-003 | Generate Architecture Document |
| DISCUSS-004 | discussant | spec | DRAFT-003 | Critique Architecture Document |
| DRAFT-004 | writer | spec | DISCUSS-004 | Generate Epics & Stories |
| DISCUSS-005 | discussant | spec | DRAFT-004 | Critique Epics |
| QUALITY-001 | reviewer | spec | DISCUSS-005 | 5-dimension spec quality validation |
| DISCUSS-006 | discussant | spec | QUALITY-001 | Final review discussion and sign-off |
| PLAN-001 | planner | impl | (none or DISCUSS-006) | Multi-angle exploration and planning |
| IMPL-001 | executor | impl | PLAN-001 | Code implementation |
| TEST-001 | tester | impl | IMPL-001 | Test-fix cycles |
| REVIEW-001 | reviewer | impl | IMPL-001 | 4-dimension code review |
| DEV-FE-001 | fe-developer | impl | PLAN-001 | Frontend implementation |
| QA-FE-001 | fe-qa | impl | DEV-FE-001 | 5-dimension frontend QA |
## Coordinator Spawn Template
When coordinator spawns workers, use background mode (Spawn-and-Stop):
```
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `You are the <ROLE> of team "<team-name>".
## Prime Directive
All of your work must be executed after loading your role definition via Skill:
Skill(skill="team-lifecycle-v3", args="--role=<role>")
Current requirement: <task-description>
Session: <session-folder>
## Role Guidelines
- Only process <PREFIX>-* tasks; do not perform other roles' work
- Prefix all output with the [<role>] identity tag
- Communicate only with coordinator
- Do not use TaskCreate to create tasks for other roles
- Call mcp__ccw-tools__team_msg to log before every SendMessage
## Workflow
1. Call Skill → obtain role definition and execution logic
2. Execute per the role.md 5-Phase flow
3. team_msg + SendMessage result to coordinator
4. TaskUpdate completed → check the next task`
})
```
## Session Directory
```
.workflow/.team/TLS-<slug>-<date>/
├── team-session.json # Session state
├── spec/ # Spec artifacts
│ ├── spec-config.json
│ ├── discovery-context.json
│ ├── product-brief.md
│ ├── requirements/
│ ├── architecture/
│ ├── epics/
│ ├── readiness-report.md
│ └── spec-summary.md
├── discussions/ # Discussion records
├── plan/ # Plan artifacts
│ ├── plan.json
│ └── .task/TASK-*.json
├── explorations/ # Explorer output (cached)
├── architecture/ # Architect assessments + design-tokens.json
├── analysis/ # analyst design-intelligence.json (UI mode)
├── qa/ # QA audit reports
├── wisdom/ # Cross-task knowledge
│ ├── learnings.md
│ ├── decisions.md
│ ├── conventions.md
│ └── issues.md
├── .msg/ # Team message bus logs (messages.jsonl)
└── shared-memory.json # Cross-role state
```
## Session Resume
Coordinator supports `--resume` / `--continue` for interrupted sessions:
1. Scan `.workflow/.team/TLS-*/team-session.json` for active/paused sessions
2. Multiple matches → AskUserQuestion for selection
3. Audit TaskList → reconcile session state ↔ task status
4. Reset in_progress → pending (interrupted tasks)
5. Rebuild team and spawn needed workers only
6. Create missing tasks with correct blockedBy
7. Kick first executable task → Phase 4 coordination loop
## Shared Spec Resources
| Resource | Path | Usage |
|----------|------|-------|
| Document Standards | [specs/document-standards.md](specs/document-standards.md) | YAML frontmatter, naming, structure |
| Quality Gates | [specs/quality-gates.md](specs/quality-gates.md) | Per-phase quality gates |
| Product Brief Template | [templates/product-brief.md](templates/product-brief.md) | DRAFT-001 |
| Requirements Template | [templates/requirements-prd.md](templates/requirements-prd.md) | DRAFT-002 |
| Architecture Template | [templates/architecture-doc.md](templates/architecture-doc.md) | DRAFT-003 |
| Epics Template | [templates/epics-template.md](templates/epics-template.md) | DRAFT-004 |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode → coordinator |
| Role file not found | Error with expected path |
| Command file not found | Fallback to inline execution |


@@ -1,113 +0,0 @@
# Role: analyst
Seed analysis, codebase exploration, and multi-dimensional context gathering. Maps to spec-generator Phase 1 (Discovery).
## Identity
- **Name**: `analyst` | **Prefix**: `RESEARCH-*` | **Tag**: `[analyst]`
- **Responsibility**: Seed Analysis → Codebase Exploration → Context Packaging → Report
## Boundaries
### MUST
- Only process RESEARCH-* tasks
- Communicate only with coordinator
- Generate discovery-context.json and spec-config.json
- Support file reference input (@ prefix or .md/.txt extension)
### MUST NOT
- Create tasks for other roles
- Directly contact other workers
- Modify spec documents (only create discovery artifacts)
- Skip seed analysis step
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| research_ready | → coordinator | Research complete |
| research_progress | → coordinator | Long research progress update |
| error | → coordinator | Unrecoverable error |
## Toolbox
| Tool | Purpose |
|------|---------|
| ccw cli --tool gemini --mode analysis | Seed analysis |
| mcp__ace-tool__search_context | Codebase semantic search |
---
## Phase 2: Seed Analysis
**Objective**: Extract structured seed information from the topic/idea.
**Workflow**:
1. Extract session folder from task description (`Session: <path>`)
2. Parse topic from task description (first non-metadata line)
3. If topic starts with `@` or ends with `.md`/`.txt` → Read the referenced file as topic content
4. Run Gemini CLI seed analysis:
```
Bash({
command: `ccw cli -p "PURPOSE: Analyze topic and extract structured seed information.
TASK: • Extract problem statement • Identify target users • Determine domain context
• List constraints and assumptions • Identify 3-5 exploration dimensions • Assess complexity
TOPIC: <topic-content>
MODE: analysis
EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], exploration_dimensions[], complexity_assessment" --tool gemini --mode analysis`,
run_in_background: true
})
```
5. Wait for CLI result, parse seed analysis JSON
**Success**: Seed analysis parsed with problem statement, dimensions, complexity.
---
## Phase 3: Codebase Exploration (conditional)
**Objective**: Gather codebase context if an existing project is detected.
| Condition | Action |
|-----------|--------|
| package.json / Cargo.toml / pyproject.toml / go.mod exists | Explore codebase |
| No project files | Skip → codebase context = null |
**When project detected**:
1. Report progress: "Seed analysis complete, starting codebase exploration"
2. ACE semantic search for architecture patterns related to topic
3. Detect tech stack from package files
4. Build codebase context: tech_stack, architecture_patterns, conventions, integration_points
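The project-detection condition in the table above reduces to a marker-file scan; a minimal sketch:

```python
import os

PROJECT_MARKERS = ("package.json", "Cargo.toml", "pyproject.toml", "go.mod")

def detect_project(root="."):
    """Return the first project marker found in `root`, or None for a new project."""
    for marker in PROJECT_MARKERS:
        if os.path.isfile(os.path.join(root, marker)):
            return marker
    return None
```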
---
## Phase 4: Context Packaging
**Objective**: Generate spec-config.json and discovery-context.json.
**spec-config.json** → `<session-folder>/spec/spec-config.json`:
- session_id, topic, status="research_complete", complexity, depth, focus_areas, mode="interactive"
**discovery-context.json** → `<session-folder>/spec/discovery-context.json`:
- session_id, phase=1, seed_analysis (all fields), codebase_context (or null), recommendations
**design-intelligence.json** → `<session-folder>/analysis/design-intelligence.json` (UI mode only):
- Produced when frontend keywords detected (component, page, UI, React, Vue, CSS, 前端) in seed_analysis
- Fields: industry, style_direction, ux_patterns, color_strategy, typography, component_patterns
- Consumed by architect (for design-tokens.json) and fe-developer
**Report**: complexity, codebase presence, problem statement, exploration dimensions, output paths.
**Success**: Both JSON files created; design-intelligence.json created if UI mode.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Gemini CLI failure | Fallback to direct Claude analysis |
| Codebase detection failed | Continue as new project |
| Topic too vague | Report with clarification questions |


@@ -1,193 +0,0 @@
# Command: assess
## Purpose
Multi-mode architecture assessment. Auto-detects consultation mode from task subject prefix, runs mode-specific analysis, and produces a verdict with concerns and recommendations.
## Phase 2: Context Loading
### Common Context (all modes)
| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description `Session:` field | Yes |
| Wisdom | `<session_folder>/wisdom/` (all files) | No |
| Project tech | `.workflow/project-tech.json` | No |
| Explorations | `<session_folder>/explorations/` | No |
### Mode-Specific Context
| Mode | Task Pattern | Additional Context |
|------|-------------|-------------------|
| spec-review | ARCH-SPEC-* | `spec/architecture/_index.md`, `spec/architecture/ADR-*.md` |
| plan-review | ARCH-PLAN-* | `plan/plan.json`, `plan/.task/TASK-*.json` |
| code-review | ARCH-CODE-* | `git diff --name-only`, changed file contents |
| consult | ARCH-CONSULT-* | Question extracted from task description |
| feasibility | ARCH-FEASIBILITY-* | Proposal from task description, codebase search results |
## Phase 3: Mode-Specific Assessment
### Mode: spec-review
Review architecture documents for technical soundness across 4 dimensions.
**Assessment dimensions**:
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Consistency | 25% | ADR decisions align with each other and with architecture index |
| Scalability | 25% | Design supports growth, no single-point bottlenecks |
| Security | 25% | Auth model, data protection, API security addressed |
| Tech fitness | 25% | Technology choices match project-tech.json and problem domain |
**Checks**:
- Read architecture index and all ADR files
- Cross-reference ADR decisions for contradictions
- Verify tech choices align with project-tech.json
- Score each dimension 0-100
---
### Mode: plan-review
Review implementation plan for architectural soundness.
**Checks**:
| Check | What | Severity if Failed |
|-------|------|-------------------|
| Dependency cycles | Build task graph, detect cycles via DFS | High |
| Task granularity | Flag tasks touching >8 files | Medium |
| Convention compliance | Verify plan follows wisdom/conventions.md | Medium |
| Architecture alignment | Verify plan doesn't contradict wisdom/decisions.md | High |
**Dependency cycle detection flow**:
1. Parse all TASK-*.json files -> extract id and depends_on
2. Build directed graph
3. DFS traversal -> flag any node visited twice in same stack
4. Report cycle path if found
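The four-step flow above, sketched with classic three-color DFS; `tasks` maps each task id to its `depends_on` list:

```python
def find_cycle(tasks):
    """Detect a dependency cycle among TASK-*.json entries.

    tasks: dict mapping task id -> list of depends_on ids.
    Returns the cycle path (first node repeated at the end), or None.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {tid: WHITE for tid in tasks}
    stack = []  # current DFS path

    def dfs(tid):
        color[tid] = GRAY
        stack.append(tid)
        for dep in tasks.get(tid, []):
            if color.get(dep, WHITE) == GRAY:  # back edge -> cycle found
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE and dep in tasks:
                found = dfs(dep)
                if found:
                    return found
        stack.pop()
        color[tid] = BLACK
        return None

    for tid in tasks:
        if color[tid] == WHITE:
            found = dfs(tid)
            if found:
                return found
    return None
```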
---
### Mode: code-review
Assess architectural impact of code changes.
**Checks**:
| Check | Method | Severity if Found |
|-------|--------|-------------------|
| Layer violations | Detect upward imports (deeper layer importing shallower) | High |
| New dependencies | Parse package.json diff for added deps | Medium |
| Module boundary changes | Flag index.ts/index.js modifications | Medium |
| Architectural impact | Score based on file count and boundary changes | Info |
**Impact scoring**:
| Condition | Impact Level |
|-----------|-------------|
| Changed files > 10 | High |
| index.ts/index.js or package.json modified | Medium |
| All other cases | Low |
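The scoring table maps directly onto a small function; a sketch, assuming the input is the file list produced by `git diff --name-only`:

```python
def impact_level(changed_files):
    """Map a list of changed file paths to an impact level per the table."""
    if len(changed_files) > 10:
        return "high"
    basenames = {path.rsplit("/", 1)[-1] for path in changed_files}
    if basenames & {"index.ts", "index.js", "package.json"}:
        return "medium"
    return "low"
```

Note the checks are ordered: a change set with more than 10 files is "high" even if it also touches `package.json`.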
**Detection example** (find changed files):
```bash
Bash(command="git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
```
---
### Mode: consult
Answer architecture decision questions. Route by question complexity.
**Complexity detection**:
| Condition | Classification |
|-----------|---------------|
| Question > 200 chars OR matches: architect, design, pattern, refactor, migrate, scalab | Complex |
| All other questions | Simple |
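The detection table can be sketched as a length check plus one keyword regex (note `scalab` is a deliberate prefix so it matches both "scalable" and "scalability"):

```python
import re

# Keyword list from the detection table above.
COMPLEX_PATTERN = re.compile(
    r"architect|design|pattern|refactor|migrate|scalab", re.IGNORECASE)

def classify_question(question):
    """Return 'complex' or 'simple' per the complexity detection table."""
    if len(question) > 200 or COMPLEX_PATTERN.search(question):
        return "complex"
    return "simple"
```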
**Complex questions** -> delegate to CLI exploration:
```bash
Bash(command="ccw cli -p \"PURPOSE: Architecture consultation for: <question_summary>
TASK: Search codebase for relevant patterns, analyze architectural implications, provide options with trade-offs
MODE: analysis
CONTEXT: @**/*
EXPECTED: Options with trade-offs, file references, architectural implications
CONSTRAINTS: Advisory only, provide options not decisions\" --tool gemini --mode analysis --rule analysis-review-architecture", timeout=300000)
```
**Simple questions** -> direct analysis using available context (wisdom, project-tech, codebase search).
---
### Mode: feasibility
Evaluate technical feasibility of a proposal.
**Assessment areas**:
| Area | Method | Output |
|------|--------|--------|
| Tech stack compatibility | Compare proposal needs against project-tech.json | Compatible / Requires additions |
| Codebase readiness | Search for integration points using Grep/Glob | Touch-point count |
| Effort estimation | Based on touch-point count (see table below) | Low / Medium / High |
| Risk assessment | Based on effort + tech compatibility | Risks + mitigations |
**Effort estimation**:
| Touch Points | Effort | Implication |
|-------------|--------|-------------|
| <= 5 | Low | Straightforward implementation |
| 6 - 20 | Medium | Moderate refactoring needed |
| > 20 | High | Significant refactoring, consider phasing |
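The effort bands above reduce to two threshold checks; a minimal sketch:

```python
def estimate_effort(touch_points):
    """Map a touch-point count to an effort level per the table."""
    if touch_points <= 5:
        return "low"
    if touch_points <= 20:
        return "medium"
    return "high"
```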
**Verdict for feasibility**:
| Condition | Verdict |
|-----------|---------|
| Low/medium effort, compatible stack | FEASIBLE |
| High touch-points OR new tech required | RISKY |
| Fundamental incompatibility or unreasonable effort | INFEASIBLE |
---
### Verdict Routing (all modes except feasibility)
| Verdict | Criteria |
|---------|----------|
| BLOCK | >= 2 high-severity concerns |
| CONCERN | >= 1 high-severity OR >= 3 medium-severity concerns |
| APPROVE | All other cases |
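The routing rules can be sketched as a severity count over the collected concerns (representing each concern by its severity string is an assumption, not a mandated data shape):

```python
def route_verdict(concern_severities):
    """concern_severities: list of 'high' / 'medium' / 'low' strings."""
    high = concern_severities.count("high")
    medium = concern_severities.count("medium")
    if high >= 2:
        return "BLOCK"
    if high >= 1 or medium >= 3:
        return "CONCERN"
    return "APPROVE"
```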
## Phase 4: Validation
### Output Format
Write assessment to `<session_folder>/architecture/arch-<slug>.json`.
**Report content sent to coordinator**:
| Field | Description |
|-------|-------------|
| mode | Consultation mode used |
| verdict | APPROVE / CONCERN / BLOCK (or FEASIBLE / RISKY / INFEASIBLE) |
| concern_count | Number of concerns by severity |
| recommendations | Actionable suggestions with trade-offs |
| output_path | Path to full assessment file |
**Wisdom contribution**: Append significant decisions to `<session_folder>/wisdom/decisions.md`.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Architecture docs not found | Assess from available context, note limitation in report |
| Plan file missing | Report to coordinator via arch_concern |
| Git diff fails (no commits) | Use staged changes or skip code-review mode |
| CLI exploration timeout | Provide partial assessment, flag as incomplete |
| Exploration results unparseable | Fall back to direct analysis without exploration |
| Insufficient context | Request explorer assistance via coordinator |

@@ -1,103 +0,0 @@
# Role: architect
Architecture consultant. Advises on decisions, feasibility, and design patterns.
## Identity
- **Name**: `architect` | **Prefix**: `ARCH-*` | **Tag**: `[architect]`
- **Type**: Consulting (on-demand, advisory only)
- **Responsibility**: Context loading → Mode detection → Analysis → Report
## Boundaries
### MUST
- Only process ARCH-* tasks
- Auto-detect mode from task subject prefix
- Provide options with trade-offs (not final decisions)
### MUST NOT
- Modify source code
- Make final decisions (advisory only)
- Execute implementation or testing
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| arch_ready | → coordinator | Assessment complete |
| arch_concern | → coordinator | Significant risk found |
| error | → coordinator | Analysis failure |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/assess.md | Multi-mode assessment |
| cli-explore-agent | Deep architecture exploration |
| ccw cli --tool gemini --mode analysis | Architecture analysis |
---
## Consultation Modes
| Task Pattern | Mode | Focus |
|-------------|------|-------|
| ARCH-SPEC-* | spec-review | Review architecture docs |
| ARCH-PLAN-* | plan-review | Review plan soundness |
| ARCH-CODE-* | code-review | Assess code change impact |
| ARCH-CONSULT-* | consult | Answer architecture questions |
| ARCH-FEASIBILITY-* | feasibility | Technical feasibility |
---
## Phase 2: Context Loading
**Common**: session folder, wisdom, project-tech.json, explorations
**Mode-specific**:
| Mode | Additional Context |
|------|-------------------|
| spec-review | architecture/_index.md, ADR-*.md |
| plan-review | plan/plan.json |
| code-review | git diff, changed files |
| consult | Question from task description |
| feasibility | Requirements + codebase |
---
## Phase 3: Assessment
Delegate to `commands/assess.md`. Output: mode, verdict (APPROVE/CONCERN/BLOCK), dimensions[], concerns[], recommendations[].
For complex questions → Gemini CLI with architecture review rule.
---
## Phase 4: Report
Output to `<session-folder>/architecture/arch-<slug>.json`. Contribute decisions to wisdom/decisions.md.
**Frontend project outputs** (when frontend tech stack detected in shared-memory or discovery-context):
- `<session-folder>/architecture/design-tokens.json` — color, spacing, typography, shadow tokens
- `<session-folder>/architecture/component-specs/*.md` — per-component design spec
**Report**: mode, verdict, concern count, recommendations, output path(s).
---
## Coordinator Integration
| Timing | Task |
|--------|------|
| After DRAFT-003 | ARCH-SPEC-001: Architecture document review |
| After PLAN-001 | ARCH-PLAN-001: Plan architecture review |
| On-demand | ARCH-CONSULT-001: Architecture consultation |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Docs not found | Assess from available context |
| CLI timeout | Partial assessment |
| Insufficient context | Request explorer via coordinator |

@@ -1,142 +0,0 @@
# Command: dispatch
## Purpose
Create task chains based on execution mode. Each mode maps to a predefined pipeline from SKILL.md Task Metadata Registry. Tasks are created with proper dependency chains, owner assignments, and session references.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Mode | Phase 1 requirements (spec-only, impl-only, etc.) | Yes |
| Session folder | `<session-folder>` from Phase 2 | Yes |
| Scope | User requirements description | Yes |
| Spec file | User-provided path (impl-only mode only) | Conditional |
## Phase 3: Task Chain Creation
### Mode-to-Pipeline Routing
| Mode | Tasks | Pipeline | First Task |
|------|-------|----------|------------|
| spec-only | 12 | Spec pipeline | RESEARCH-001 |
| impl-only | 4 | Impl pipeline | PLAN-001 |
| fe-only | 3 | FE pipeline | PLAN-001 |
| fullstack | 6 | Fullstack pipeline | PLAN-001 |
| full-lifecycle | 16 | Spec + Impl | RESEARCH-001 |
| full-lifecycle-fe | 18 | Spec + Fullstack | RESEARCH-001 |
---
### Spec Pipeline (12 tasks)
Used by: spec-only, full-lifecycle, full-lifecycle-fe
| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | RESEARCH-001 | analyst | (none) | Seed analysis and context gathering |
| 2 | DISCUSS-001 | discussant | RESEARCH-001 | Critique research findings |
| 3 | DRAFT-001 | writer | DISCUSS-001 | Generate Product Brief |
| 4 | DISCUSS-002 | discussant | DRAFT-001 | Critique Product Brief |
| 5 | DRAFT-002 | writer | DISCUSS-002 | Generate Requirements/PRD |
| 6 | DISCUSS-003 | discussant | DRAFT-002 | Critique Requirements/PRD |
| 7 | DRAFT-003 | writer | DISCUSS-003 | Generate Architecture Document |
| 8 | DISCUSS-004 | discussant | DRAFT-003 | Critique Architecture Document |
| 9 | DRAFT-004 | writer | DISCUSS-004 | Generate Epics |
| 10 | DISCUSS-005 | discussant | DRAFT-004 | Critique Epics |
| 11 | QUALITY-001 | reviewer | DISCUSS-005 | 5-dimension spec quality validation |
| 12 | DISCUSS-006 | discussant | QUALITY-001 | Final review discussion and sign-off |
### Impl Pipeline (4 tasks)
Used by: impl-only, full-lifecycle (PLAN-001 blockedBy DISCUSS-006)
| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Multi-angle exploration and planning |
| 2 | IMPL-001 | executor | PLAN-001 | Code implementation |
| 3 | TEST-001 | tester | IMPL-001 | Test-fix cycles |
| 4 | REVIEW-001 | reviewer | IMPL-001 | 4-dimension code review |
### FE Pipeline (3 tasks)
Used by: fe-only
| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Planning (frontend focus) |
| 2 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation |
| 3 | QA-FE-001 | fe-qa | DEV-FE-001 | 5-dimension frontend QA |
Review-fix loop (max 2 rounds): if QA-FE verdict=NEEDS_FIX → create DEV-FE-002 + QA-FE-002 dynamically.
### Fullstack Pipeline (6 tasks)
Used by: fullstack, full-lifecycle-fe (PLAN-001 blockedBy DISCUSS-006)
| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Fullstack planning |
| 2 | IMPL-001 | executor | PLAN-001 | Backend implementation |
| 3 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation |
| 4 | TEST-001 | tester | IMPL-001 | Backend test-fix cycles |
| 5 | QA-FE-001 | fe-qa | DEV-FE-001 | Frontend QA |
| 6 | REVIEW-001 | reviewer | TEST-001, QA-FE-001 | Full code review |
### Composite Modes
| Mode | Construction | PLAN-001 BlockedBy |
|------|-------------|-------------------|
| full-lifecycle | Spec (12) + Impl (4) | DISCUSS-006 |
| full-lifecycle-fe | Spec (12) + Fullstack (6) | DISCUSS-006 |
---
### Impl-Only Pre-check
Before creating impl-only tasks, verify specification exists:
```
Spec exists?
├─ YES → read spec path → proceed with task creation
└─ NO → error: "impl-only requires existing spec, use spec-only or full-lifecycle"
```
### Task Description Template
Every task description includes session and scope context:
```
TaskCreate({
subject: "<TASK-ID>",
owner: "<role>",
description: "<task description from pipeline table>\nSession: <session-folder>\nScope: <scope>",
blockedBy: [<dependency-list>],
status: "pending"
})
```
### Execution Method
| Method | Behavior |
|--------|----------|
| sequential | One task active at a time; next activated after predecessor completes |
| parallel | Tasks with all deps met run concurrently (e.g., TEST-001 + REVIEW-001) |
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Task count | Matches mode total from routing table |
| Dependencies | Every blockedBy references an existing task subject |
| Owner assignment | Each task owner matches SKILL.md Role Registry prefix |
| Session reference | Every task description contains `Session: <session-folder>` |
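The first three validation checks can be sketched as a pass over the created task chain (the dict shape mirrors the TaskCreate call above; field names beyond `subject` and `blockedBy` are assumptions):

```python
def validate_chain(tasks):
    """tasks: list of {'subject': str, 'blockedBy': [str], ...}.

    Return a list of human-readable validation errors (empty = valid).
    """
    subjects = {t["subject"] for t in tasks}
    errors = []
    if len(subjects) != len(tasks):
        errors.append("duplicate task subjects")
    for t in tasks:
        for dep in t.get("blockedBy", []):
            if dep not in subjects:
                errors.append(f"{t['subject']}: unknown dependency {dep}")
    return errors
```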
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown mode | Reject with supported mode list |
| Missing spec for impl-only | Error, suggest spec-only or full-lifecycle |
| TaskCreate fails | Log error, report to user |
| Duplicate task subject | Skip creation, log warning |

@@ -1,180 +0,0 @@
# Command: monitor
## Purpose
Event-driven pipeline coordination with Spawn-and-Stop pattern. Three wake-up sources drive pipeline advancement: worker callbacks (auto-advance), user `check` (status report), user `resume` (manual advance).
## Constants
| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Pipeline mode | session.mode | Yes |
## Phase 3: Handler Routing
### Wake-up Source Detection
Parse `$ARGUMENTS` to determine handler:
| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from known worker role | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | None of the above (initial spawn after dispatch) | handleSpawnNext |
Known worker roles: analyst, writer, discussant, planner, executor, tester, reviewer, explorer, architect, fe-developer, fe-qa.
---
### Handler: handleCallback
Worker completed a task. Verify completion, update state, auto-advance.
```
Receive callback from [<role>]
├─ Find matching active worker by role
├─ Task status = completed?
│ ├─ YES → remove from active_workers → update session
│ │ ├─ Handle checkpoints (see below)
│ │ └─ → handleSpawnNext
│ └─ NO → progress message, do not advance → STOP
└─ No matching worker found
├─ Scan all active workers for completed tasks
├─ Found completed → process each → handleSpawnNext
└─ None completed → STOP
```
---
### Handler: handleCheck
Read-only status report. No pipeline advancement.
**Output format**:
```
[coordinator] Pipeline Status
[coordinator] Mode: <mode> | Progress: <completed>/<total> (<percent>%)
[coordinator] Execution Graph:
Spec Phase: (if applicable)
[<icon> RESEARCH-001] → [<icon> DISCUSS-001] → ...
Impl Phase: (if applicable)
[<icon> PLAN-001]
├─ BE: [<icon> IMPL-001] → [<icon> TEST-001] → [<icon> REVIEW-001]
└─ FE: [<icon> DEV-FE-001] → [<icon> QA-FE-001]
done=completed >>>=running o=pending .=not created
[coordinator] Active Workers:
> <subject> (<role>) - running <elapsed>
[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
**Icon mapping**: completed=`done`, in_progress=`>>>`, pending=`o`, not created=`.`
Then STOP.
---
### Handler: handleResume
Check active worker completion, process results, advance pipeline.
```
Load active_workers from session
├─ No active workers → handleSpawnNext
└─ Has active workers → check each:
├─ status = completed → mark done, log
├─ status = in_progress → still running, log
└─ other status → worker failure → reset to pending
After processing:
├─ Some completed → handleSpawnNext
├─ All still running → report status → STOP
└─ All failed → handleSpawnNext (retry)
```
---
### Handler: handleSpawnNext
Find all ready tasks, spawn workers in background, update session, STOP.
```
Collect task states from TaskList()
├─ completedSubjects: status = completed
├─ inProgressSubjects: status = in_progress
└─ readySubjects: pending + all blockedBy in completedSubjects
Ready tasks found?
├─ NONE + work in progress → report waiting → STOP
├─ NONE + nothing in progress → PIPELINE_COMPLETE → Phase 5
└─ HAS ready tasks → for each:
├─ TaskUpdate → in_progress
├─ team_msg log → task_unblocked
├─ Spawn worker (see tool call below)
└─ Add to session.active_workers
Update session file → output summary → STOP
```
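The task-state collection at the top of handleSpawnNext can be sketched as one pass over TaskList output (the dict shape is an assumption based on the task fields used elsewhere in this document):

```python
def partition_tasks(tasks):
    """tasks: list of {'subject', 'status', 'blockedBy'} dicts.

    Return (completed, in_progress, ready) subject sets per the flow above.
    """
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    in_progress = {t["subject"] for t in tasks if t["status"] == "in_progress"}
    ready = {t["subject"] for t in tasks
             if t["status"] == "pending"
             and all(dep in completed for dep in t.get("blockedBy", []))}
    return completed, in_progress, ready
```

PIPELINE_COMPLETE corresponds to both `ready` and `in_progress` coming back empty.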
**Spawn worker tool call** (one per ready task):
```bash
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker for <subject>",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: "<worker prompt from SKILL.md Coordinator Spawn Template>"
})
```
---
### Checkpoints
| Completed Task | Mode Condition | Action |
|---------------|----------------|--------|
| DISCUSS-006 | full-lifecycle or full-lifecycle-fe | Output "SPEC PHASE COMPLETE" checkpoint, pause for user review before impl |
---
### Worker Failure Handling
When a worker has unexpected status (not completed, not in_progress):
1. Reset task → pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Pipeline completeness | All expected tasks exist per mode |
| Completion detection | readySubjects=0 + inProgressSubjects=0 → PIPELINE_COMPLETE |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |

@@ -1,209 +0,0 @@
# Coordinator Role
Orchestrate the team-lifecycle workflow: team creation, task dispatching, progress monitoring, session state.
## Identity
- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Parse requirements → Create team → Dispatch tasks → Monitor progress → Report results
## Boundaries
### MUST
- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
- Create team and spawn worker subagents in background
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
### MUST NOT
- Execute spec/impl/research work directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Skip dependency validation when creating task chains
---
## Command Execution Protocol
When coordinator needs to execute a command (dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** - NOT separate agents or subprocesses
4. **Execute synchronously** - complete the command workflow before proceeding
Example:
```
Phase 3 needs task dispatch
-> Read roles/coordinator/commands/dispatch.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Chain Creation)
-> Execute Phase 4 (Validation)
-> Continue to Phase 4
```
---
## Entry Router
When coordinator is invoked, first detect the invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | → handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | → handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | → handleResume: check worker states, advance pipeline |
| Interrupted session | Active/paused session exists in `.workflow/.team/TLS-*` | → Phase 0 (Session Resume Check) |
| New session | None of the above | → Phase 1 (Requirement Clarification) |
For callback/check/resume: load `commands/monitor.md` and execute the appropriate handler, then STOP.
### Router Implementation
1. **Load session context** (if exists):
- Scan `.workflow/.team/TLS-*/team-session.json` for active/paused sessions
- If found, extract known worker roles from session or SKILL.md Role Registry
2. **Parse $ARGUMENTS** for detection keywords
3. **Route to handler**:
- For monitor handlers: Read `commands/monitor.md`, execute matched handler section, STOP
- For Phase 0: Execute Session Resume Check below
- For Phase 1: Execute Requirement Clarification below
---
## Phase 0: Session Resume Check
**Objective**: Detect and resume interrupted sessions before creating new ones.
**Workflow**:
1. Scan `.workflow/.team/TLS-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found → proceed to Phase 1
3. Single session found → resume it (→ Session Reconciliation)
4. Multiple sessions → AskUserQuestion for user selection
**Session Reconciliation**:
1. Audit TaskList → get real status of all tasks
2. Reconcile: session.completed_tasks ↔ TaskList status (bidirectional sync)
3. Reset any in_progress tasks → pending (they were interrupted)
4. Determine remaining pipeline from reconciled state
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update session file with reconciled state
9. Kick first executable task's worker → Phase 4
---
## Phase 1: Requirement Clarification
**Objective**: Parse user input and gather execution parameters.
**Workflow**:
1. **Parse arguments** for explicit settings: mode, scope, focus areas, depth
2. **Ask for missing parameters** via AskUserQuestion:
- Mode: spec-only / impl-only / full-lifecycle / fe-only / fullstack / full-lifecycle-fe
- Scope: project description
- Execution method: sequential / parallel
3. **Frontend auto-detection** (for impl-only and full-lifecycle modes):
| Signal | Detection | Pipeline Upgrade |
|--------|----------|-----------------|
| FE keywords (component, page, UI, React, Vue, CSS...) | Keyword match in description | impl-only → fe-only or fullstack |
| BE keywords also present (API, database, server...) | Both FE + BE keywords | impl-only → fullstack |
| FE framework in package.json | grep react/vue/svelte/next | full-lifecycle → full-lifecycle-fe |
4. **Store requirements**: mode, scope, focus, depth, executionMethod
**Success**: All parameters captured, mode finalized.
---
## Phase 2: Create Team + Initialize Session
**Objective**: Initialize team, session file, and wisdom directory.
**Workflow**:
1. Generate session ID: `TLS-<slug>-<date>`
2. Create session folder: `.workflow/.team/<session-id>/`
3. Call TeamCreate with team name
4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Write team-session.json with: session_id, mode, scope, status="active", tasks_total, tasks_completed=0
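Step 1's `TLS-<slug>-<date>` id can be sketched as a simple slugify plus ISO date (the slug rules — lowercase, hyphen-separated, length-capped — are assumptions; the spec only fixes the `TLS-` prefix and date suffix):

```python
import datetime
import re

def make_session_id(scope):
    """Build a TLS-<slug>-<date> session id from a scope description."""
    slug = re.sub(r"[^a-z0-9]+", "-", scope.lower()).strip("-")[:40]
    date = datetime.date.today().isoformat()
    return f"TLS-{slug}-{date}"
```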
**Task counts by mode**:
| Mode | Tasks |
|------|-------|
| spec-only | 12 |
| impl-only | 4 |
| fe-only | 3 |
| fullstack | 6 |
| full-lifecycle | 16 |
| full-lifecycle-fe | 18 |
**Success**: Team created, session file written, wisdom initialized.
---
## Phase 3: Create Task Chain
**Objective**: Dispatch tasks based on mode with proper dependencies.
Delegate to `commands/dispatch.md` which creates the full task chain:
1. Reads SKILL.md Task Metadata Registry for task definitions
2. Creates tasks via TaskCreate with correct blockedBy
3. Assigns owner based on role mapping
4. Includes `Session: <session-folder>` in every task description
---
## Phase 4: Spawn-and-Stop
**Objective**: Spawn first batch of ready workers in background, then STOP.
**Design**: Spawn-and-Stop + Callback pattern.
- Spawn workers with `Task(run_in_background: true)` → immediately return
- Worker completes → SendMessage callback → auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS
**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task → spawn worker (see SKILL.md Spawn Template)
4. Output status summary
5. STOP
**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) → Entry Router → handleCallback
- User "check" → handleCheck (status only)
- User "resume" → handleResume (advance)
---
## Phase 5: Report + Next Steps
**Objective**: Completion report and follow-up options.
**Workflow**:
1. Load session state → count completed tasks, duration
2. List deliverables with output paths
3. Update session status → "completed"
4. Offer next steps: Exit / View deliverables / Extend tasks / Generate lite-plan / Create Issue
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fallback to manual reconciliation |

@@ -1,136 +0,0 @@
# Command: critique
## Purpose
Multi-perspective CLI critique: launch parallel analyses from assigned perspectives, collect structured ratings, detect divergences, and synthesize consensus.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Round config | DISCUSS-NNN → look up round in role.md table | Yes |
| Artifact | `<session-folder>/<artifact-path>` from round config | Yes |
| Perspectives | Round config perspectives column | Yes |
| Discovery context | `<session-folder>/spec/discovery-context.json` | For coverage perspective |
| Prior discussions | `<session-folder>/discussions/` | No |
## Phase 3: Multi-Perspective Critique
### Perspective Routing
| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Product | gemini | Product Manager | Market fit, user value, business viability, competitive differentiation |
| Technical | codex | Tech Lead | Feasibility, tech debt, performance, security, maintainability |
| Quality | claude | QA Lead | Completeness, testability, consistency, standards compliance |
| Risk | gemini | Risk Analyst | Risk identification, dependencies, failure modes, mitigation |
| Coverage | gemini | Requirements Analyst | Requirement completeness vs discovery-context, gap detection, scope creep |
### Execution Flow
```
For each perspective in round config:
├─ Build prompt with perspective focus + artifact content
├─ Launch CLI analysis (background)
│ Bash(command="ccw cli -p '<prompt>' --tool <cli-tool> --mode analysis", run_in_background=true)
└─ Collect result via hook callback
```
### CLI Call Template
```bash
Bash(command="ccw cli -p 'PURPOSE: Analyze from <role> perspective for <round-id>
TASK: <focus-areas-from-table>
MODE: analysis
CONTEXT: Artifact content below
EXPECTED: JSON with strengths[], weaknesses[], suggestions[], rating (1-5)
CONSTRAINTS: Output valid JSON only
Artifact:
<artifact-content>' --tool <cli-tool> --mode analysis", run_in_background=true)
```
### Extra Fields by Perspective
| Perspective | Additional Output Fields |
|-------------|------------------------|
| Risk | `risk_level`: low / medium / high / critical |
| Coverage | `covered_requirements[]`, `partial_requirements[]`, `missing_requirements[]`, `scope_creep[]` |
---
### Divergence Detection
After all perspectives return, scan results for critical signals:
| Signal | Condition | Severity |
|--------|-----------|----------|
| Coverage gap | `missing_requirements` non-empty | High |
| High risk | `risk_level` is high or critical | High |
| Low rating | Any perspective rating <= 2 | Medium |
| Rating spread | Max rating - min rating >= 3 | Medium |
### Consensus Determination
| Condition | Verdict |
|-----------|---------|
| No high-severity divergences AND average rating >= 3.0 | consensus_reached |
| Any high-severity divergence OR average rating < 3.0 | consensus_blocked |
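The divergence and consensus tables together can be sketched as one pass over the collected perspective results (the result-dict shape is assumed from the JSON fields listed under "Extra Fields by Perspective"):

```python
def determine_consensus(results):
    """results: {perspective: {'rating': int, ...}}, where coverage results
    may carry 'missing_requirements' and risk results 'risk_level'."""
    ratings = [r["rating"] for r in results.values()]
    avg = sum(ratings) / len(ratings)
    high_severity = any(
        r.get("missing_requirements")                       # coverage gap
        or r.get("risk_level") in ("high", "critical")      # high risk
        for r in results.values())
    if not high_severity and avg >= 3.0:
        return "consensus_reached", avg
    return "consensus_blocked", avg
```

The medium-severity signals (rating <= 2, rating spread >= 3) feed the divergence report but, per the tables, do not by themselves block consensus.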
### Synthesis Process
```
Collect all perspective results
├─ Extract convergent themes (agreed by 2+ perspectives)
├─ Extract divergent views (conflicting assessments)
├─ Check coverage gaps from coverage result
├─ Compile action items from all suggestions
└─ Determine consensus per table above
```
## Phase 4: Validation
### Discussion Record
Write to `<session-folder>/discussions/<round-id>-discussion.md`:
```
# Discussion Record: <round-id>
**Artifact**: <artifact-path>
**Perspectives**: <list>
**Consensus**: reached / blocked
## Convergent Themes
- <theme>
## Divergent Views
- **<topic>** (<severity>): <description>
## Action Items
1. <item>
## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |
**Average**: <avg>/5
```
### Result Routing
| Outcome | Message Type | Content |
|---------|-------------|---------|
| Consensus reached | discussion_ready | Action items, record path, average rating |
| Consensus blocked | discussion_blocked | Divergence points, severity, record path |
| Artifact not found | error | Missing artifact path |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Artifact not found | Report error to coordinator |
| Single CLI perspective fails | Fallback to direct Claude analysis for that perspective |
| All CLI analyses fail | Generate basic discussion from direct artifact reading |
| All perspectives diverge | Report as discussion_blocked with all divergence points |

@@ -1,128 +0,0 @@
# Role: discussant
Multi-perspective critique, consensus building, and conflict escalation. Ensures quality feedback between each phase transition.
## Identity
- **Name**: `discussant` | **Prefix**: `DISCUSS-*` | **Tag**: `[discussant]`
- **Responsibility**: Load Artifact → Multi-Perspective Critique → Synthesize Consensus → Report
## Boundaries
### MUST
- Only process DISCUSS-* tasks
- Execute multi-perspective critique via CLI tools
- Detect coverage gaps from coverage perspective
- Synthesize consensus with convergent/divergent analysis
- Write discussion records to `discussions/` folder
### MUST NOT
- Create tasks
- Contact other workers directly
- Modify spec documents directly
- Skip perspectives defined in round config
- Ignore critical divergences
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| discussion_ready | → coordinator | Consensus reached |
| discussion_blocked | → coordinator | Cannot reach consensus |
| error | → coordinator | Input artifact missing |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/critique.md | Multi-perspective CLI critique |
| gemini CLI | Product, Risk, Coverage perspectives |
| codex CLI | Technical perspective |
| claude CLI | Quality perspective |
---
## Perspective Model
| Perspective | Focus | CLI Tool |
|-------------|-------|----------|
| Product | Market fit, user value, business viability | gemini |
| Technical | Feasibility, tech debt, performance, security | codex |
| Quality | Completeness, testability, consistency | claude |
| Risk | Risk identification, dependency analysis, failure modes | gemini |
| Coverage | Requirement completeness vs original intent, gap detection | gemini |
## Round Configuration
| Round | Artifact | Perspectives | Focus |
|-------|----------|-------------|-------|
| DISCUSS-001 | spec/discovery-context.json | product, risk, coverage | Scope confirmation |
| DISCUSS-002 | spec/product-brief.md | product, technical, quality, coverage | Positioning, feasibility |
| DISCUSS-003 | spec/requirements/_index.md | quality, product, coverage | Completeness, priority |
| DISCUSS-004 | spec/architecture/_index.md | technical, risk | Tech choices, security |
| DISCUSS-005 | spec/epics/_index.md | product, technical, quality, coverage | MVP scope, estimation |
| DISCUSS-006 | spec/readiness-report.md | all 5 | Final sign-off |
---
## Phase 2: Artifact Loading
**Objective**: Load target artifact and determine discussion parameters.
**Workflow**:
1. Extract session folder and round number from task subject (`DISCUSS-<NNN>`)
2. Look up round config from table above
3. Load target artifact from `<session-folder>/<artifact-path>`
4. Create `<session-folder>/discussions/` directory
5. Load prior discussion records for continuity
---
## Phase 3: Multi-Perspective Critique
**Objective**: Run parallel CLI analyses from each required perspective.
Delegate to `commands/critique.md` -- launches parallel CLI calls per perspective with focused prompts and designated tools.
---
## Phase 4: Consensus Synthesis
**Objective**: Synthesize into consensus with actionable outcomes.
**Synthesis process**:
1. Extract convergent themes (agreed by 2+ perspectives)
2. Extract divergent views (conflicting perspectives, with severity)
3. Check coverage gaps from coverage perspective
4. Compile action items and open questions
**Consensus routing**:
| Condition | Status | Report |
|-----------|--------|--------|
| No high-severity divergences | consensus_reached | Action items, open questions, record path |
| Any high-severity divergences | consensus_blocked | Escalate divergence points to coordinator |
Write discussion record to `<session-folder>/discussions/<output-file>`.
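The synthesis step above can be sketched as follows. The `Finding` shape and field names are hypothetical (not a fixed schema), and single-perspective themes are treated as divergent, which is a simplification of "conflicting perspectives":

```typescript
// Hypothetical finding shape; field names are illustrative, not a fixed schema.
interface Finding { theme: string; perspective: string; severity: "low" | "medium" | "high"; }

function synthesize(findings: Finding[]) {
  // Group findings by theme so agreement across perspectives is visible.
  const byTheme = new Map<string, Finding[]>();
  for (const f of findings) {
    const group = byTheme.get(f.theme) ?? [];
    group.push(f);
    byTheme.set(f.theme, group);
  }
  const convergent: string[] = [];
  const divergent: Finding[] = [];
  for (const [theme, group] of byTheme) {
    // Convergent = theme raised by 2+ distinct perspectives.
    if (new Set(group.map((f) => f.perspective)).size >= 2) convergent.push(theme);
    else divergent.push(...group);
  }
  // Any high-severity divergence blocks consensus and escalates to the coordinator.
  const blocked = divergent.some((f) => f.severity === "high");
  return { convergent, divergent, status: blocked ? "consensus_blocked" : "consensus_reached" };
}
```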
**Output file naming convention**:
| Round | Output File |
|-------|------------|
| DISCUSS-001 | discuss-001-scope.md |
| DISCUSS-002 | discuss-002-brief.md |
| DISCUSS-003 | discuss-003-requirements.md |
| DISCUSS-004 | discuss-004-architecture.md |
| DISCUSS-005 | discuss-005-epics.md |
| DISCUSS-006 | discuss-006-signoff.md |
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Target artifact not found | Notify coordinator |
| CLI perspective failure | Fallback to direct Claude analysis |
| All CLI analyses fail | Generate basic discussion from direct reading |
| All perspectives diverge | Escalate as discussion_blocked |


@@ -1,166 +0,0 @@
# Command: implement
## Purpose
Multi-backend code implementation: route tasks to appropriate execution backend (direct edit, subagent, or CLI), build focused prompts, execute with retry and fallback.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Plan | `<session-folder>/plan/plan.json` | Yes |
| Task files | `<session-folder>/plan/.task/TASK-*.json` | Yes |
| Backend | Task metadata / plan default / auto-select | Yes |
| Working directory | task.metadata.working_dir or project root | No |
| Wisdom | `<session-folder>/wisdom/` | No |
## Phase 3: Implementation
### Backend Selection
Priority order (first match wins):
| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | `task.metadata.executor` field |
| 2 | Plan default | "Execution Backend:" line in plan.json |
| 3 | Auto-select | See auto-select table below |
**Auto-select routing**:
| Condition | Backend |
|-----------|---------|
| Description < 200 chars AND no refactor/architecture keywords AND single target file | agent (direct edit) |
| Description < 200 chars AND simple scope | agent (subagent) |
| Complex scope OR architecture keywords | codex |
| Analysis-heavy OR multi-module integration | gemini |
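The auto-select routing can be sketched as a first-match function. The keyword regexes and the check order are illustrative; the real router may weigh more signals:

```typescript
type Backend = "agent-direct" | "agent-subagent" | "codex" | "gemini";

// Illustrative keyword sets -- not an exhaustive list.
const ANALYSIS = /\b(analy[sz]e|multi-module|integration)\b/i;
const COMPLEX = /\b(refactor|architecture|restructure)\b/i;

function autoSelect(description: string, targetFileCount: number): Backend {
  if (ANALYSIS.test(description)) return "gemini";          // analysis-heavy
  if (COMPLEX.test(description) || description.length >= 200) return "codex"; // complex scope
  // Simple scope: direct edit when exactly one target file, otherwise subagent.
  return targetFileCount === 1 ? "agent-direct" : "agent-subagent";
}
```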
### Execution Paths
```
Backend selected
├─ agent (direct edit)
│ └─ Read target file → Edit directly → no subagent overhead
├─ agent (subagent)
│ └─ Task({ subagent_type: "code-developer", run_in_background: false })
├─ codex (CLI)
│ └─ Bash(command="ccw cli ... --tool codex --mode write", run_in_background=true)
└─ gemini (CLI)
└─ Bash(command="ccw cli ... --tool gemini --mode write", run_in_background=true)
```
### Path 1: Direct Edit (agent, simple task)
```bash
Read(file_path="<target-file>")
Edit(file_path="<target-file>", old_string="<old>", new_string="<new>")
```
### Path 2: Subagent (agent, moderate task)
```
Task({
subagent_type: "code-developer",
run_in_background: false,
description: "Implement <task-id>",
prompt: "<execution-prompt>"
})
```
### Path 3: CLI Backend (codex or gemini)
```bash
Bash(command="ccw cli -p '<execution-prompt>' --tool <codex|gemini> --mode write --cd <working-dir>", run_in_background=true)
```
### Execution Prompt Template
All backends receive the same structured prompt:
```
# Implementation Task: <task-id>
## Task Description
<task-description>
## Acceptance Criteria
1. <criterion>
## Context from Plan
<architecture-section>
<technical-stack-section>
<task-context-section>
## Files to Modify
<target-files or "Auto-detect based on task">
## Constraints
- Follow existing code style and patterns
- Preserve backward compatibility
- Add appropriate error handling
- Include inline comments for complex logic
```
### Batch Execution
When multiple IMPL tasks exist, execute in dependency order:
```
Topological sort by task.depends_on
├─ Batch 1: Tasks with no dependencies → execute
├─ Batch 2: Tasks depending on batch 1 → execute
└─ Batch N: Continue until all tasks complete
Progress update per batch (when > 1 batch):
→ team_msg: "Processing batch <N>/<total>: <task-id>"
```
### Retry and Fallback
**Retry** (max 3 attempts per task):
```
Attempt 1 → failure
├─ team_msg: "Retry 1/3 after error: <message>"
└─ Attempt 2 → failure
├─ team_msg: "Retry 2/3 after error: <message>"
└─ Attempt 3 → failure → fallback
```
**Fallback** (when primary backend fails after retries):
| Primary Backend | Fallback |
|----------------|----------|
| codex | agent (subagent) |
| gemini | agent (subagent) |
| agent (subagent) | Report failure to coordinator |
| agent (direct edit) | agent (subagent) |
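The retry-then-fallback control flow can be sketched as below. The fallback map mirrors the table above; the executor callback stands in for the actual backend invocation:

```typescript
// Fallback chain from the table above; null means "report failure to coordinator".
const FALLBACK: Record<string, string | null> = {
  codex: "agent-subagent",
  gemini: "agent-subagent",
  "agent-direct": "agent-subagent",
  "agent-subagent": null,
};

function runWithRetry(backend: string, exec: (b: string) => void, maxAttempts = 3): string {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      exec(backend);
      return backend; // success: report which backend handled the task
    } catch {
      // team_msg: `Retry ${attempt}/${maxAttempts} after error: <message>`
    }
  }
  const fallback = FALLBACK[backend];
  if (!fallback) throw new Error(`All retries and fallbacks exhausted for ${backend}`);
  return runWithRetry(fallback, exec, maxAttempts);
}
```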
## Phase 4: Validation
### Self-Validation Steps
| Step | Method | Pass Criteria |
|------|--------|--------------|
| Syntax check | `Bash(command="tsc --noEmit", timeout=30000)` | Exit code 0 |
| Acceptance match | Check criteria keywords vs modified files | All criteria addressed |
| Test detection | Search for .test.ts/.spec.ts matching modified files | Tests identified |
| File changes | `Bash(command="git diff --name-only HEAD")` | At least 1 file modified |
### Result Routing
| Outcome | Message Type | Content |
|---------|-------------|---------|
| All tasks pass validation | impl_complete | Task ID, files modified, backend used |
| Batch progress | impl_progress | Batch index, total batches, current task |
| Validation failure after retries | error | Task ID, error details, retry count |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Syntax errors after implementation | Retry with error context (max 3) |
| Backend unavailable | Fallback to agent |
| Missing dependencies | Request from coordinator |
| All retries + fallback exhausted | Report failure with full error log |


@@ -1,103 +0,0 @@
# Role: executor
Code implementation following approved plans. Multi-backend execution with self-validation.
## Identity
- **Name**: `executor` | **Prefix**: `IMPL-*` | **Tag**: `[executor]`
- **Responsibility**: Load plan → Route to backend → Implement → Self-validate → Report
## Boundaries
### MUST
- Only process IMPL-* tasks
- Follow approved plan exactly
- Use declared execution backends
- Self-validate all implementations
### MUST NOT
- Create tasks
- Contact other workers directly
- Modify plan files
- Skip self-validation
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| impl_complete | → coordinator | Implementation success |
| impl_progress | → coordinator | Batch progress |
| error | → coordinator | Implementation failure |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/implement.md | Multi-backend implementation |
| code-developer agent | Simple tasks (synchronous) |
| ccw cli --tool codex --mode write | Complex tasks |
| ccw cli --tool gemini --mode write | Alternative backend |
---
## Phase 2: Task & Plan Loading
**Objective**: Load plan and determine execution strategy.
1. Load plan.json and .task/TASK-*.json from `<session-folder>/plan/`
**Backend selection** (priority order):
| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | task.metadata.executor field |
| 2 | Plan default | "Execution Backend:" in plan |
| 3 | Auto-select | Simple (< 200 chars, no refactor) → agent; Complex → codex |
**Code review selection**:
| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | task.metadata.code_review field |
| 2 | Plan default | "Code Review:" in plan |
| 3 | Auto-select | Critical keywords (auth, security, payment) → enabled |
---
## Phase 3: Code Implementation
**Objective**: Execute implementation across batches.
**Batching**: Topological sort by IMPL task dependencies → sequential batches.
Delegate to `commands/implement.md` for prompt building and backend routing:
| Backend | Invocation | Use Case |
|---------|-----------|----------|
| agent | Task({ subagent_type: "code-developer", run_in_background: false }) | Simple, direct edits |
| codex | ccw cli --tool codex --mode write (background) | Complex, architecture |
| gemini | ccw cli --tool gemini --mode write (background) | Analysis-heavy |
---
## Phase 4: Self-Validation
| Step | Method | Pass Criteria |
|------|--------|--------------|
| Syntax check | `tsc --noEmit` (30s) | Exit code 0 |
| Acceptance criteria | Match criteria keywords vs implementation | All addressed |
| Test detection | Find .test.ts/.spec.ts for modified files | Tests identified |
| Code review (optional) | gemini analysis or codex review | No blocking issues |
**Report**: task ID, status, files modified, validation results, backend used.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Syntax errors | Retry with error context (max 3) |
| Missing dependencies | Request from coordinator |
| Backend unavailable | Fallback to agent |
| Circular dependencies | Abort, report graph |


@@ -1,91 +0,0 @@
# Role: explorer
Code search, pattern discovery, dependency tracing. Service role, on-demand.
## Identity
- **Name**: `explorer` | **Prefix**: `EXPLORE-*` | **Tag**: `[explorer]`
- **Type**: Service (on-demand, not on main pipeline)
- **Responsibility**: Parse request → Multi-strategy search → Package results
## Boundaries
### MUST
- Only process EXPLORE-* tasks
- Output structured JSON
- Cache results in `<session>/explorations/`
### MUST NOT
- Create tasks or modify source code
- Execute analysis, planning, or implementation
- Make architectural decisions (only discover patterns)
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| explore_ready | → coordinator | Search complete |
| task_failed | → coordinator | Search failure |
## Search Tools (priority order)
| Tool | Priority | Use Case |
|------|----------|----------|
| mcp__ace-tool__search_context | P0 | Semantic search |
| Grep / Glob | P1 | Pattern matching |
| cli-explore-agent | Deep | Multi-angle exploration |
| WebSearch | P3 | External docs |
---
## Phase 2: Request Parsing
Parse from task description:
| Field | Pattern | Default |
|-------|---------|---------|
| Session | `Session: <path>` | .workflow/.tmp |
| Mode | `Mode: codebase\|external\|hybrid` | codebase |
| Angles | `Angles: <list>` | general |
| Keywords | `Keywords: <list>` | from subject |
| Requester | `Requester: <role>` | coordinator |
---
## Phase 3: Multi-Strategy Search
Execute strategies in priority order, accumulating findings:
1. **ACE (P0)**: Per keyword → semantic search → relevant_files
2. **Grep (P1)**: Per keyword → class/function/export definitions → relevant_files
3. **Dependency trace**: Top 10 files → Read imports → dependencies
4. **Deep exploration** (multi-angle): Per angle → cli-explore-agent → merge
5. **External (P3)** (external/hybrid mode): Top 3 keywords → WebSearch
Deduplicate by path.
---
## Phase 4: Package Results
Write JSON to `<output-dir>/explore-<slug>.json`:
- relevant_files[], patterns[], dependencies[], external_refs[], _metadata
**Report**: file count, pattern count, top files, output path.
---
## Coordinator Integration
| Trigger | Example Task |
|---------|-------------|
| RESEARCH needs context | EXPLORE-001: Codebase search |
| PLAN needs exploration | EXPLORE-002: Implementation code exploration |
| DISCUSS needs practices | EXPLORE-003: External docs |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| ACE unavailable | Fallback to Grep |
| No results | Report empty, suggest broader keywords |


@@ -1,111 +0,0 @@
# Role: fe-developer
Frontend development. Consumes plan/architecture output, implements components, pages, styles.
## Identity
- **Name**: `fe-developer` | **Prefix**: `DEV-FE-*` | **Tag**: `[fe-developer]`
- **Type**: Frontend pipeline worker
- **Responsibility**: Context loading → Design token consumption → Component implementation → Report
## Boundaries
### MUST
- Only process DEV-FE-* tasks
- Follow existing design tokens and component specs (if available)
- Generate accessible frontend code (semantic HTML, ARIA, keyboard nav)
- Follow project's frontend tech stack
### MUST NOT
- Modify backend code or API interfaces
- Contact other workers directly
- Introduce new frontend dependencies without architecture review
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| dev_fe_complete | → coordinator | Implementation done |
| dev_fe_progress | → coordinator | Long task progress |
| error | → coordinator | Implementation failure |
## Toolbox
| Tool | Purpose |
|------|---------|
| code-developer agent | Component implementation |
| ccw cli --tool gemini --mode write | Complex frontend generation |
---
## Phase 2: Context Loading
**Inputs to load**:
- Plan: `<session-folder>/plan/plan.json`
- Design tokens: `<session-folder>/architecture/design-tokens.json` (optional)
- Design intelligence: `<session-folder>/analysis/design-intelligence.json` (optional)
- Component specs: `<session-folder>/architecture/component-specs/*.md` (optional)
- Shared memory, wisdom
**Tech stack detection**:
| Signal | Framework | Styling |
|--------|-----------|---------|
| react/react-dom in deps | react | - |
| vue in deps | vue | - |
| next in deps | nextjs | - |
| tailwindcss in deps | - | tailwind |
| @shadcn/ui in deps | - | shadcn |
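A minimal sketch of the detection, assuming `deps` is the merged `dependencies`/`devDependencies` map from `package.json`. Note `next` is checked before `react`, since a Next.js project also depends on React; the fallbacks (`html`, `css`) match the error-handling default below:

```typescript
function detectStack(deps: Record<string, string>): { framework: string; styling: string } {
  const has = (name: string) => name in deps;
  // next implies react, so it must be checked first.
  const framework = has("next") ? "nextjs" : has("react") ? "react" : has("vue") ? "vue" : "html";
  const styling = has("tailwindcss") ? "tailwind" : has("@shadcn/ui") ? "shadcn" : "css";
  return { framework, styling };
}
```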
---
## Phase 3: Frontend Implementation
**Step 1**: Generate design token CSS (if tokens available)
- Convert design-tokens.json → CSS custom properties (`:root { --color-*, --space-*, --text-* }`)
- Include dark mode overrides via `@media (prefers-color-scheme: dark)`
- Write to `src/styles/tokens.css`
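The token-to-CSS conversion can be sketched as below. The nested token shape (`{ color: { primary: ... }, space: { ... } }`) is an assumption about `design-tokens.json`, not a confirmed schema:

```typescript
// Assumed token shape: { "color": { "primary": "#3b82f6" }, "space": { "sm": "8px" } }
type TokenGroup = Record<string, string>;
type Tokens = Record<string, TokenGroup>;

function tokensToCss(tokens: Tokens, darkOverrides: Tokens = {}): string {
  // Flatten { group: { name: value } } into "--group-name: value;" lines.
  const vars = (groups: Tokens, indent: string) =>
    Object.entries(groups)
      .flatMap(([group, values]) =>
        Object.entries(values).map(([name, value]) => `${indent}--${group}-${name}: ${value};`))
      .join("\n");
  let css = `:root {\n${vars(tokens, "  ")}\n}\n`;
  if (Object.keys(darkOverrides).length > 0) {
    css += `@media (prefers-color-scheme: dark) {\n  :root {\n${vars(darkOverrides, "    ")}\n  }\n}\n`;
  }
  return css;
}
```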
**Step 2**: Implement components
| Task Size | Strategy |
|-----------|----------|
| Simple (≤ 3 files, single component) | code-developer agent (synchronous) |
| Complex (system, multi-component) | ccw cli --tool gemini --mode write (background) |
**Coding standards** (include in agent/CLI prompt):
- Use design token CSS variables, never hardcode colors/spacing
- Interactive elements: cursor: pointer
- Transitions: 150-300ms
- Text contrast: minimum 4.5:1
- Include focus-visible styles
- Support prefers-reduced-motion
- Responsive: mobile-first
- No emoji as functional icons
---
## Phase 4: Self-Validation
| Check | What |
|-------|------|
| hardcoded-color | No #hex outside tokens.css |
| cursor-pointer | Interactive elements have cursor: pointer |
| focus-styles | Interactive elements have focus styles |
| responsive | Has responsive breakpoints |
| reduced-motion | Animations respect prefers-reduced-motion |
| emoji-icon | No emoji as functional icons |
Contribute to wisdom/conventions.md. Update shared-memory.json with component inventory.
**Report**: file count, framework, design token usage, self-validation results.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Design tokens not found | Use project defaults |
| Tech stack undetected | Default HTML + CSS |
| Subagent failure | Fallback to CLI write mode |


@@ -1,152 +0,0 @@
# Command: pre-delivery-checklist
## Purpose
CSS-level pre-delivery checks for frontend files. Validates accessibility, interaction, design compliance, and layout patterns before final delivery.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Changed frontend files | git diff --name-only (filtered to .tsx, .jsx, .css, .scss) | Yes |
| File contents | Read each changed file | Yes |
| Design tokens path | `src/styles/tokens.css` or equivalent | No |
| Session folder | Task description `Session:` field | Yes |
## Phase 3: Checklist Execution
### Category 1: Accessibility (6 items)
| # | Check | Pattern to Detect | Severity |
|---|-------|--------------------|----------|
| 1 | Images have alt text | `<img` without `alt=` | CRITICAL |
| 2 | Form inputs have labels | `<input` without `<label` or `aria-label` | HIGH |
| 3 | Focus states visible | Interactive elements without `:focus` styles | HIGH |
| 4 | Color contrast 4.5:1 | Light text on light background patterns | HIGH |
| 5 | prefers-reduced-motion | Animations without `@media (prefers-reduced-motion)` | MEDIUM |
| 6 | Heading hierarchy | Skipped heading levels (h1 followed by h3) | MEDIUM |
**Do / Don't**:
| # | Do | Don't |
|---|-----|-------|
| 1 | Always provide descriptive alt text | Leave alt empty without `role="presentation"` |
| 2 | Associate every input with a label | Use placeholder as sole label |
| 3 | Add `focus-visible` outline | Remove default focus ring without replacement |
| 4 | Ensure 4.5:1 minimum contrast ratio | Use low-contrast decorative text for content |
| 5 | Wrap in `@media (prefers-reduced-motion: no-preference)` | Force animations on all users |
| 6 | Use sequential heading levels | Skip levels for visual sizing |
---
### Category 2: Interaction (4 items)
| # | Check | Pattern to Detect | Severity |
|---|-------|--------------------|----------|
| 7 | cursor-pointer on clickable | Buttons/links without `cursor: pointer` | MEDIUM |
| 8 | Transitions 150-300ms | Duration outside 150-300ms range | LOW |
| 9 | Loading states | Async operations without loading indicator | MEDIUM |
| 10 | Error states | Async operations without error handling UI | HIGH |
**Do / Don't**:
| # | Do | Don't |
|---|-----|-------|
| 7 | Add `cursor: pointer` to all clickable elements | Leave default cursor on buttons |
| 8 | Use 150-300ms for micro-interactions | Use >500ms or <100ms transitions |
| 9 | Show skeleton/spinner during fetch | Leave blank screen while loading |
| 10 | Show user-friendly error message | Silently fail or show raw error |
---
### Category 3: Design Compliance (4 items)
| # | Check | Pattern to Detect | Severity |
|---|-------|--------------------|----------|
| 11 | No hardcoded colors | Hex values (`#XXXXXX`) outside tokens file | HIGH |
| 12 | No hardcoded spacing | Raw `px` values for margin/padding | MEDIUM |
| 13 | No emoji as icons | Unicode emoji (U+1F300-1F9FF) in UI code | HIGH |
| 14 | Dark mode support | No `prefers-color-scheme` or `.dark` class | MEDIUM |
**Do / Don't**:
| # | Do | Don't |
|---|-----|-------|
| 11 | Use `var(--color-*)` design tokens | Hardcode `#hex` values |
| 12 | Use `var(--space-*)` spacing tokens | Hardcode pixel values |
| 13 | Use proper SVG/icon library | Use emoji for functional icons |
| 14 | Support light/dark themes | Design for light mode only |
---
### Category 4: Layout (2 items)
| # | Check | Pattern to Detect | Severity |
|---|-------|--------------------|----------|
| 15 | Responsive breakpoints | No `md:`/`lg:`/`@media` queries | MEDIUM |
| 16 | No horizontal scroll | Fixed widths greater than viewport | HIGH |
**Do / Don't**:
| # | Do | Don't |
|---|-----|-------|
| 15 | Mobile-first responsive design | Desktop-only layout |
| 16 | Use relative/fluid widths | Set fixed pixel widths on containers |
---
### Check Execution Strategy
| Check Scope | Applies To | Method |
|-------------|-----------|--------|
| Per-file checks | Items 1-4, 7-8, 10-13, 16 | Run against each changed file individually |
| Global checks | Items 5-6, 9, 14-15 | Run against concatenated content of all files |
**Detection example** (check for hardcoded colors):
```bash
Grep(pattern="#[0-9a-fA-F]{6}", path="<file_path>", output_mode="content", "-n"=true)
```
**Detection example** (check for missing alt text; the negative lookahead requires a PCRE2-capable regex engine, e.g. ripgrep's `-P` flag):
```bash
Grep(pattern="<img\\s(?![^>]*alt=)", path="<file_path>", output_mode="content", "-n"=true)
```
## Phase 4: Validation
### Pass/Fail Criteria
| Condition | Result |
|-----------|--------|
| Zero CRITICAL + zero HIGH failures | PASS |
| Zero CRITICAL, some HIGH | CONDITIONAL (list fixes needed) |
| Any CRITICAL failure | FAIL |
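The pass/fail routing reduces to severity counts over the failed items, which can be sketched as:

```typescript
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";
type Verdict = "PASS" | "CONDITIONAL" | "FAIL";

function checklistVerdict(failures: { severity: Severity }[]): Verdict {
  const count = (s: Severity) => failures.filter((f) => f.severity === s).length;
  if (count("CRITICAL") > 0) return "FAIL";
  if (count("HIGH") > 0) return "CONDITIONAL"; // list fixes needed
  return "PASS"; // MEDIUM/LOW failures are reported but do not block delivery
}
```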
### Output Format
```
## Pre-Delivery Checklist Results
- Total checks: <n>
- Passed: <n> / <total>
- Failed: <n>
### Failed Items
- [CRITICAL] #1 Images have alt text -- <file_path>
- [HIGH] #11 No hardcoded colors -- <file_path>:<line>
- [MEDIUM] #7 cursor-pointer on clickable -- <file_path>
### Recommendations
(Do/Don't guidance for each failed item)
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No frontend files to check | Report empty checklist, all checks N/A |
| File read error | Skip file, note in report |
| Regex match error | Skip check, note in report |
| Design tokens file not found | Skip items 11-12, adjust total |


@@ -1,113 +0,0 @@
# Role: fe-qa
Frontend quality assurance. 5-dimension review + Generator-Critic loop.
## Identity
- **Name**: `fe-qa` | **Prefix**: `QA-FE-*` | **Tag**: `[fe-qa]`
- **Type**: Frontend pipeline worker
- **Responsibility**: Context loading → 5-dimension review → GC feedback → Report
## Boundaries
### MUST
- Only process QA-FE-* tasks
- Execute 5-dimension review
- Support Generator-Critic loop (max 2 rounds)
- Provide actionable fix suggestions (Do/Don't format)
### MUST NOT
- Modify source code directly (review only)
- Contact other workers directly
- Mark pass when score below threshold
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| qa_fe_passed | → coordinator | All dimensions pass |
| qa_fe_result | → coordinator | Review complete (may have issues) |
| fix_required | → coordinator | Critical issues found |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/pre-delivery-checklist.md | CSS-level delivery checks |
| ccw cli --tool gemini --mode analysis | Frontend code review |
| ccw cli --tool codex --mode review | Git-aware code review |
---
## Review Dimensions
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Code Quality | 25% | TypeScript types, component structure, error handling |
| Accessibility | 25% | Semantic HTML, ARIA, keyboard nav, contrast, focus-visible |
| Design Compliance | 20% | Token usage, no hardcoded colors, no emoji icons |
| UX Best Practices | 15% | Loading/error/empty states, cursor-pointer, responsive |
| Pre-Delivery | 15% | No console.log, dark mode, i18n readiness |
---
## Phase 2: Context Loading
**Inputs**: design tokens, design intelligence, shared memory, previous QA results (for GC round tracking), changed frontend files via git diff.
Determine GC round from previous QA results count. Max 2 rounds.
---
## Phase 3: 5-Dimension Review
For each changed frontend file, check against all 5 dimensions. Score each dimension 0-10, deducting for issues found.
**Scoring deductions**:
| Severity | Deduction |
|----------|-----------|
| High | -2 to -3 |
| Medium | -1 to -1.5 |
| Low | -0.5 |
**Overall score** = weighted sum of dimension scores.
**Verdict routing**:
| Condition | Verdict |
|-----------|---------|
| Score ≥ 8 AND no critical issues | PASS |
| GC round ≥ max AND score ≥ 6 | PASS_WITH_WARNINGS |
| GC round ≥ max AND score < 6 | FAIL |
| Otherwise | NEEDS_FIX |
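The weighted scoring and verdict routing can be sketched together. The dimension keys are illustrative labels; the weights are the ones from the Review Dimensions table:

```typescript
// Weights from the Review Dimensions table (sum to 1.0).
const WEIGHTS: Record<string, number> = {
  codeQuality: 0.25, accessibility: 0.25, designCompliance: 0.2, ux: 0.15, preDelivery: 0.15,
};

function qaVerdict(
  scores: Record<string, number>, // each dimension scored 0-10 after deductions
  criticalCount: number,
  gcRound: number,
  maxRounds = 2,
): string {
  const overall = Object.entries(WEIGHTS)
    .reduce((sum, [dim, w]) => sum + w * (scores[dim] ?? 0), 0);
  if (overall >= 8 && criticalCount === 0) return "PASS";
  if (gcRound >= maxRounds) return overall >= 6 ? "PASS_WITH_WARNINGS" : "FAIL";
  return "NEEDS_FIX";
}
```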
---
## Phase 4: Report
Write audit to `<session-folder>/qa/audit-fe-<task>-r<round>.json`. Update wisdom and shared memory.
**Report**: round, verdict, overall score, dimension scores, critical issues with Do/Don't format, action required (if NEEDS_FIX).
---
## Generator-Critic Loop
Orchestrated by coordinator:
```
Round 1: DEV-FE-001 → QA-FE-001
if NEEDS_FIX → coordinator creates DEV-FE-002 + QA-FE-002
Round 2: DEV-FE-002 → QA-FE-002
if still NEEDS_FIX → PASS_WITH_WARNINGS or FAIL (max 2)
```
**Convergence**: score ≥ 8 AND critical_count = 0
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No changed files | Report empty, score N/A |
| Design tokens not found | Skip design compliance, adjust weights |
| Max GC rounds exceeded | Force verdict |


@@ -1,154 +0,0 @@
# Command: explore
## Purpose
Complexity-driven codebase exploration: assess task complexity, select exploration angles by category, execute parallel exploration agents, and produce structured exploration results for plan generation.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | PLAN-* task subject/description | Yes |
| Session folder | Task description `Session:` field | Yes |
| Spec context | `<session-folder>/spec/` (if exists) | No |
| Plan directory | `<session-folder>/plan/` | Yes (create if missing) |
| Project tech | `.workflow/project-tech.json` | No |
## Phase 3: Exploration
### Complexity Assessment
Score the task description against keyword indicators:
| Indicator | Keywords | Score |
|-----------|----------|-------|
| Structural change | refactor, architect, restructure, modular | +2 |
| Multi-scope | multiple, across, cross-cutting | +2 |
| Integration | integrate, api, database | +1 |
| Non-functional | security, performance, auth | +1 |
**Complexity routing**:
| Score | Level | Strategy | Angle Count |
|-------|-------|----------|-------------|
| 0-1 | Low | ACE semantic search only | 1 |
| 2-3 | Medium | cli-explore-agent per angle | 2-3 |
| 4+ | High | cli-explore-agent per angle | 3-5 |
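The keyword scoring and routing above can be sketched as below. The regexes are illustrative expansions of the keyword columns (word-boundary matching, case-insensitive), not a definitive list:

```typescript
const INDICATORS: { pattern: RegExp; score: number }[] = [
  { pattern: /\b(refactor|architect\w*|restructure|modular)\b/i, score: 2 }, // structural change
  { pattern: /\b(multiple|across|cross-cutting)\b/i, score: 2 },            // multi-scope
  { pattern: /\b(integrate|api|database)\b/i, score: 1 },                   // integration
  { pattern: /\b(security|performance|auth)\b/i, score: 1 },                // non-functional
];

function assessComplexity(description: string): "Low" | "Medium" | "High" {
  const score = INDICATORS.reduce((s, i) => s + (i.pattern.test(description) ? i.score : 0), 0);
  return score <= 1 ? "Low" : score <= 3 ? "Medium" : "High";
}
```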
### Angle Presets
Select preset by dominant keyword match, then take first N angles per complexity:
| Preset | Trigger Keywords | Angles (priority order) |
|--------|-----------------|------------------------|
| architecture | refactor, architect, restructure, modular | architecture, dependencies, modularity, integration-points |
| security | security, auth, permission, access | security, auth-patterns, dataflow, validation |
| performance | performance, slow, optimize, cache | performance, bottlenecks, caching, data-access |
| bugfix | fix, bug, error, issue, broken | error-handling, dataflow, state-management, edge-cases |
| feature | (default) | patterns, integration-points, testing, dependencies |
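Preset selection then reduces to matching trigger keywords and slicing the first N angles. In this sketch "dominant keyword match" is simplified to first-matching-preset in declaration order, which is an assumption:

```typescript
const PRESETS: { name: string; triggers: RegExp; angles: string[] }[] = [
  { name: "architecture", triggers: /\b(refactor|architect\w*|restructure|modular)\b/i,
    angles: ["architecture", "dependencies", "modularity", "integration-points"] },
  { name: "security", triggers: /\b(security|auth|permission|access)\b/i,
    angles: ["security", "auth-patterns", "dataflow", "validation"] },
  { name: "performance", triggers: /\b(performance|slow|optimize|cache)\b/i,
    angles: ["performance", "bottlenecks", "caching", "data-access"] },
  { name: "bugfix", triggers: /\b(fix|bug|error|issue|broken)\b/i,
    angles: ["error-handling", "dataflow", "state-management", "edge-cases"] },
];
const DEFAULT_ANGLES = ["patterns", "integration-points", "testing", "dependencies"]; // feature preset

function selectAngles(description: string, angleCount: number): string[] {
  const preset = PRESETS.find((p) => p.triggers.test(description));
  return (preset?.angles ?? DEFAULT_ANGLES).slice(0, angleCount);
}
```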
### Low Complexity: Direct Search
```bash
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<task-description>")
```
Transform results into exploration JSON and write to `<plan-dir>/exploration-<angle>.json`.
**ACE failure fallback**:
```bash
Bash(command="rg -l '<keywords>' --type ts", timeout=30000)
```
### Medium/High Complexity: Parallel Exploration
For each selected angle, launch an exploration agent:
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore: <angle>",
prompt: "## Task Objective
Execute <angle> exploration for task planning context.
## Output Location
Output File: <plan-dir>/exploration-<angle>.json
## Assigned Context
- Exploration Angle: <angle>
- Task Description: <task-description>
- Spec Context: <available|not available>
## Mandatory First Steps
1. rg -l '<relevant-keyword>' --type ts
2. cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json
3. Read .workflow/project-tech.json (if exists)
## Exploration Focus
<angle-focus-from-table-below>
## Output
Write JSON to: <plan-dir>/exploration-<angle>.json
Each file in relevant_files MUST have: rationale (>10 chars), role, discovery_source, key_symbols"
})
```
### Angle Focus Guide
| Angle | Focus Points |
|-------|-------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities |
| modularity | Module interfaces, separation of concerns, extraction opportunities |
| integration-points | API endpoints, data flow between modules, event systems, service integrations |
| security | Auth/authz logic, input validation, sensitive data handling, middleware |
| auth-patterns | Auth flows (login/refresh), session management, token validation, permissions |
| dataflow | Data transformations, state propagation, validation points, mutation paths |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging |
| patterns | Code conventions, design patterns, naming conventions, best practices |
| testing | Test files, coverage gaps, test patterns (unit/integration/e2e), mocking |
### Explorations Manifest
After all explorations complete, write manifest to `<plan-dir>/explorations-manifest.json`:
```
{
"task_description": "<description>",
"complexity": "<Low|Medium|High>",
"exploration_count": <N>,
"explorations": [
{ "angle": "<angle>", "file": "exploration-<angle>.json" }
]
}
```
## Phase 4: Validation
### Output Files
```
<session-folder>/plan/
├─ exploration-<angle>.json (per angle)
└─ explorations-manifest.json (summary)
```
### Success Criteria
| Check | Criteria | Required |
|-------|----------|----------|
| At least 1 exploration | Non-empty exploration file exists | Yes |
| Manifest written | explorations-manifest.json exists | Yes |
| File roles assigned | Every relevant_file has role + rationale | Yes |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Single exploration agent fails | Skip angle, remove from manifest, continue |
| All explorations fail | Proceed to plan generation with task description only |
| ACE search fails (Low) | Fallback to ripgrep keyword search |
| Schema file not found | Use inline schema from Output section |


@@ -1,120 +0,0 @@
# Role: planner
Multi-angle code exploration and structured implementation planning. Submits plans to coordinator for approval.
## Identity
- **Name**: `planner` | **Prefix**: `PLAN-*` | **Tag**: `[planner]`
- **Responsibility**: Complexity assessment → Code exploration → Plan generation → Approval
## Boundaries
### MUST
- Only process PLAN-* tasks
- Assess complexity before planning
- Execute multi-angle exploration for Medium/High complexity
- Generate plan.json + .task/TASK-*.json
- Load spec context in full-lifecycle mode
- Submit plan for coordinator approval
### MUST NOT
- Create tasks for other roles
- Implement code
- Modify spec documents
- Skip complexity assessment
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| plan_ready | → coordinator | Plan complete |
| plan_revision | → coordinator | Plan revised per feedback |
| error | → coordinator | Exploration or planning failure |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/explore.md | Multi-angle codebase exploration |
| cli-explore-agent | Per-angle exploration |
| cli-lite-planning-agent | Plan generation |
---
## Phase 1.5: Load Spec Context (Full-Lifecycle)
If `<session-folder>/spec/` exists → load requirements/_index.md, architecture/_index.md, epics/_index.md, spec-config.json. Otherwise → impl-only mode.
---
## Phase 2: Multi-Angle Exploration
**Objective**: Explore codebase to inform planning.
**Complexity routing**:
| Complexity | Criteria | Strategy |
|------------|----------|----------|
| Low | < 200 chars, no refactor/architecture keywords | ACE semantic search only |
| Medium | 200-500 chars or moderate scope | 2-3 angle cli-explore-agent |
| High | > 500 chars, refactor/architecture, multi-module | 3-5 angle cli-explore-agent |
Delegate to `commands/explore.md` for angle selection and parallel execution.
---
## Phase 3: Plan Generation
**Objective**: Generate structured implementation plan.
| Complexity | Strategy |
|------------|----------|
| Low | Direct planning → single TASK-001 with plan.json |
| Medium/High | cli-lite-planning-agent with exploration results |
**Agent call** (Medium/High):
```
Task({
subagent_type: "cli-lite-planning-agent",
run_in_background: false,
description: "Generate implementation plan",
prompt: "Generate plan.
Output: <plan-dir>/plan.json + <plan-dir>/.task/TASK-*.json
Schema: cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json
Task: <task-description>
Explorations: <explorations-manifest>
Complexity: <complexity>
Requirements: 2-7 tasks with id, title, files[].change, convergence.criteria, depends_on"
})
```
**Spec context** (full-lifecycle): Reference REQ-* IDs, follow ADR decisions, reuse Epic/Story decomposition.
---
## Phase 4: Submit for Approval
1. Read plan.json and TASK-*.json
2. Report to coordinator: complexity, task count, task list, approach, plan location
3. Wait for response: approved → complete; revision → update and resubmit
**Session files**:
```
<session-folder>/plan/
├── exploration-<angle>.json
├── explorations-manifest.json
├── plan.json
└── .task/TASK-*.json
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Exploration agent failure | Plan from description only |
| Planning agent failure | Fallback to direct planning |
| Plan rejected 3+ times | Notify coordinator, suggest alternative |
| Schema not found | Use basic structure |


@@ -1,163 +0,0 @@
# Command: code-review
## Purpose
4-dimension code review analyzing quality, security, architecture, and requirements compliance. Produces a verdict (BLOCK/CONDITIONAL/APPROVE) with categorized findings.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Plan file | `<session_folder>/plan/plan.json` | Yes |
| Git diff | `git diff HEAD~1` or `git diff --cached` | Yes |
| Modified files | From git diff --name-only | Yes |
| Test results | Tester output (if available) | No |
| Wisdom | `<session_folder>/wisdom/` | No |
## Phase 3: 4-Dimension Review
### Dimension Overview
| Dimension | Focus | Weight |
|-----------|-------|--------|
| Quality | Code correctness, type safety, clean code | Equal |
| Security | Vulnerability patterns, secret exposure | Equal |
| Architecture | Module structure, coupling, file size | Equal |
| Requirements | Acceptance criteria coverage, completeness | Equal |
---
### Dimension 1: Quality
Scan each modified file for quality anti-patterns.
| Severity | Pattern | What to Detect |
|----------|---------|----------------|
| Critical | Empty catch blocks | `catch(e) {}` with no handling |
| High | @ts-ignore without justification | Suppression comment whose explanation is under 10 chars |
| High | `any` type in public APIs | `any` outside comments and generic definitions |
| High | console.log in production | `console.(log\|debug\|info)` outside test files |
| Medium | Magic numbers | Numeric literals > 1 digit, not in const/comment |
| Medium | Duplicate code | Identical lines (>30 chars) appearing 3+ times |
**Detection example** (Grep for console statements):
```bash
Grep(pattern="console\\.(log|debug|info)", path="<file_path>", output_mode="content", "-n"=true)
```
---
### Dimension 2: Security
Scan for vulnerability patterns across all modified files.
| Severity | Pattern | What to Detect |
|----------|---------|----------------|
| Critical | Hardcoded secrets | `api_key=`, `password=`, `secret=`, `token=` with string values (20+ chars) |
| Critical | SQL injection | String concatenation in `query()`/`execute()` calls |
| High | eval/exec usage | `eval()`, `new Function()`, `setTimeout(string)` |
| High | XSS vectors | `innerHTML`, `dangerouslySetInnerHTML` |
| Medium | Insecure random | `Math.random()` in security context (token/key/password/session) |
| Low | Missing input validation | Functions with parameters but no validation in first 5 lines |
---
### Dimension 3: Architecture
Assess structural health of modified files.
| Severity | Pattern | What to Detect |
|----------|---------|----------------|
| Critical | Circular dependencies | File A imports B, B imports A |
| High | Excessive parent imports | Import traverses >2 parent directories (`../../../`) |
| Medium | Large files | Files exceeding 500 lines |
| Medium | Tight coupling | >5 imports from same base module |
| Medium | Long functions | Functions exceeding 50 lines |
| Medium | Module boundary changes | Modifications to index.ts/index.js files |
**Detection example** (check for deep parent imports):
```bash
Grep(pattern="from\\s+['\"](\\.\\./){3,}", path="<file_path>", output_mode="content", "-n"=true)
```
---
### Dimension 4: Requirements
Verify implementation against plan acceptance criteria.
| Severity | Check | Method |
|----------|-------|--------|
| High | Unmet acceptance criteria | Extract criteria from plan, check keyword overlap (threshold: 70%) |
| High | Missing error handling | Plan mentions "error handling" but no try/catch in code |
| Medium | Partially met criteria | Keyword overlap 40-69% |
| Medium | Missing tests | Plan mentions "test" but no test files in modified set |
**Verification flow**:
1. Read plan file → extract acceptance criteria section
2. For each criterion → extract keywords (4+ char meaningful words)
3. Search modified files for keyword matches
4. Score: >= 70% match = met, 40-69% = partial, < 40% = unmet
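A minimal sketch of this verification flow, assuming simple word-level tokenization (the real tokenizer may differ):

```typescript
// Sketch of the criterion-matching flow above. Thresholds (70% / 40%) come
// from the table; the 4+ char keyword rule comes from step 2.
function keywords(text: string): string[] {
  return text.toLowerCase().match(/[a-z]{4,}/g) ?? [];
}

function criterionStatus(criterion: string, sources: string[]): "met" | "partial" | "unmet" {
  const kws = [...new Set(keywords(criterion))];
  if (kws.length === 0) return "met"; // nothing measurable to verify
  const corpus = keywords(sources.join(" "));
  const hit = kws.filter(k => corpus.includes(k)).length / kws.length;
  if (hit >= 0.7) return "met";
  if (hit >= 0.4) return "partial";
  return "unmet";
}
```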
---
### Verdict Routing
| Verdict | Criteria | Action |
|---------|----------|--------|
| BLOCK | Any critical-severity issues found | Must fix before merge |
| CONDITIONAL | High or medium issues, no critical | Should address, can merge with tracking |
| APPROVE | Only low issues or none | Ready to merge |
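The verdict table reduces to a small severity fold, sketched here under the assumption that findings arrive as a flat severity list:

```typescript
// Sketch of the verdict routing table: any critical blocks, any high or
// medium is conditional, otherwise approve.
type IssueSeverity = "critical" | "high" | "medium" | "low";
type Verdict = "BLOCK" | "CONDITIONAL" | "APPROVE";

function verdict(issues: IssueSeverity[]): Verdict {
  if (issues.includes("critical")) return "BLOCK";
  if (issues.includes("high") || issues.includes("medium")) return "CONDITIONAL";
  return "APPROVE";
}
```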
## Phase 4: Validation
### Report Format
The review report follows this structure:
```
# Code Review Report
**Verdict**: <BLOCK|CONDITIONAL|APPROVE>
## Blocking Issues (if BLOCK)
- **<type>** (<file>:<line>): <message>
## Review Dimensions
### Quality Issues
**CRITICAL** (<count>)
- <message> (<file>:<line>)
### Security Issues
(same format per severity)
### Architecture Issues
(same format per severity)
### Requirements Issues
(same format per severity)
## Recommendations
1. <actionable recommendation>
```
### Summary Counts
| Field | Description |
|-------|-------------|
| Total issues | Sum across all dimensions and severities |
| Critical count | Must be 0 for APPROVE |
| Blocking issues | Listed explicitly in report header |
| Dimensions covered | Must be 4/4 |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Plan file not found | Skip requirements dimension, note in report |
| Git diff empty | Report no changes to review |
| File read fails | Skip file, note in report |
| No modified files | Report empty review |


@@ -1,202 +0,0 @@
# Command: spec-quality
## Purpose
5-dimension spec quality check with weighted scoring, quality gate determination, and readiness report generation.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Spec documents | `<session_folder>/spec/` (all .md files) | Yes |
| Original requirements | Product brief objectives section | Yes |
| Quality gate config | specs/quality-gates.md | No |
| Session folder | Task description `Session:` field | Yes |
**Spec document phases** (matched by filename/directory):
| Phase | Expected Path | Required |
|-------|--------------|---------|
| product-brief | spec/product-brief.md | Yes |
| prd | spec/requirements/*.md | Yes |
| architecture | spec/architecture/_index.md + ADR-*.md | Yes |
| user-stories | spec/epics/*.md | Yes |
| implementation-plan | plan/plan.json | No (present only in impl-only/full-lifecycle modes) |
| test-strategy | spec/test-strategy.md | No (optional, not generated by pipeline) |
## Phase 3: 5-Dimension Scoring
### Dimension Weights
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All required sections present with substance |
| Consistency | 20% | Terminology, format, references, naming |
| Traceability | 25% | Goals -> Reqs -> Components -> Stories chain |
| Depth | 20% | AC testable, ADRs justified, stories estimable |
| Coverage | 10% | Original requirements mapped to spec |
---
### Dimension 1: Completeness (25%)
Check each spec document for required sections.
**Required sections by phase**:
| Phase | Required Sections |
|-------|------------------|
| product-brief | Vision Statement, Problem Statement, Target Audience, Success Metrics, Constraints |
| prd | Goals, Requirements, User Stories, Acceptance Criteria, Non-Functional Requirements |
| architecture | System Overview, Component Design, Data Models, API Specifications, Technology Stack |
| user-stories | Story List, Acceptance Criteria, Priority, Estimation |
| implementation-plan | Task Breakdown, Dependencies, Timeline, Resource Allocation |
> **Note**: `test-strategy` is optional — skip scoring if `spec/test-strategy.md` is absent. Do not penalize completeness score for missing optional phases.
**Scoring formula**:
- Section present: 50% credit
- Section has substantial content (>100 chars beyond header): additional 50% credit
- Per-document score = (present_ratio * 50) + (substantial_ratio * 50)
- Overall = average across all documents
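The per-document scoring formula, sketched directly from the rule above; "substantial" means more than 100 chars of body text beyond the header:

```typescript
// Sketch of the completeness formula: 50% credit for presence, 50% for
// substance, averaged over the document's required sections.
interface SectionCheck { present: boolean; bodyChars: number; }

function documentScore(sections: SectionCheck[]): number {
  if (sections.length === 0) return 0;
  const presentRatio = sections.filter(s => s.present).length / sections.length;
  const substantialRatio = sections.filter(s => s.present && s.bodyChars > 100).length / sections.length;
  return presentRatio * 50 + substantialRatio * 50;
}
```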
---
### Dimension 2: Consistency (20%)
Check cross-document consistency across four areas.
| Area | What to Check | Severity |
|------|--------------|----------|
| Terminology | Same concept with different casing/spelling across docs | Medium |
| Format | Mixed header styles at same level across docs | Low |
| References | Broken links (`./` or `../` paths that don't resolve) | High |
| Naming | Mixed naming conventions (camelCase vs snake_case vs kebab-case) | Low |
**Scoring**:
- Penalty weights: High = 10, Medium = 5, Low = 2
- Score = max(0, 100 - total_penalty), i.e. each penalty point subtracts directly from a 100-point base
---
### Dimension 3: Traceability (25%)
Build and validate traceability chains: Goals -> Requirements -> Components -> Stories.
**Chain building flow**:
1. Extract goals from product-brief (pattern: `- Goal: <text>`)
2. Extract requirements from PRD (pattern: `- REQ-NNN: <text>`)
3. Extract components from architecture (pattern: `- Component: <text>`)
4. Extract stories from user-stories (pattern: `- US-NNN: <text>`)
5. Link by keyword overlap (threshold: 30% keyword match)
**Chain completeness**: A chain is complete when a goal links to at least one requirement, one component, and one story.
**Scoring**: (complete chains / total chains) * 100
**Weak link identification**: For each incomplete chain, report which link is missing (no requirements, no components, or no stories).
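Chain completeness and weak-link reporting can be sketched as follows; the 30% keyword linking is assumed to have already happened upstream:

```typescript
// Sketch of traceability validation: a chain is complete only when the goal
// links to at least one requirement, one component, and one story.
interface Chain { goal: string; requirements: string[]; components: string[]; stories: string[]; }

function traceabilityScore(chains: Chain[]): { score: number; weakLinks: string[] } {
  const weakLinks: string[] = [];
  let complete = 0;
  for (const c of chains) {
    // Report the first missing link per the weak-link rule above
    if (c.requirements.length === 0) weakLinks.push(`${c.goal}: no requirements`);
    else if (c.components.length === 0) weakLinks.push(`${c.goal}: no components`);
    else if (c.stories.length === 0) weakLinks.push(`${c.goal}: no stories`);
    else complete++;
  }
  const score = chains.length ? (complete / chains.length) * 100 : 100;
  return { score, weakLinks };
}
```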
---
### Dimension 4: Depth (20%)
Assess the analytical depth of spec content across four sub-dimensions.
| Sub-dimension | Source | Testable Criteria |
|---------------|--------|-------------------|
| AC Testability | PRD / User Stories | Contains measurable verbs (display, return, validate) or Given/When/Then or numbers |
| ADR Justification | Architecture | Contains rationale, alternatives, consequences, or trade-offs |
| Story Estimability | User Stories | Has "As a/I want/So that" + AC, or explicit estimate |
| Technical Detail | Architecture + Plan | Contains code blocks, API terms, HTTP methods, DB terms |
**Scoring**: Average of sub-dimension scores (each 0-100%)
---
### Dimension 5: Coverage (10%)
Map original requirements to spec requirements.
**Flow**:
1. Extract original requirements from product-brief objectives section
2. Extract spec requirements from all documents (pattern: `- REQ-NNN:` or `- Requirement:` or `- Feature:`)
3. For each original requirement, check keyword overlap with any spec requirement (threshold: 40%)
4. Score = (covered_count / total_original) * 100
---
### Quality Gate Decision Table
| Gate | Criteria | Message |
|------|----------|---------|
| PASS | Overall score >= 80% AND coverage >= 70% | Ready for implementation |
| FAIL | Overall score < 60% OR coverage < 50% | Major revisions required |
| REVIEW | All other cases | Improvements needed, may proceed with caution |
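A sketch combining the dimension weights from Phase 3 with this gate table:

```typescript
// Sketch of the quality gate: weighted overall score, then the PASS/FAIL
// rows checked explicitly, REVIEW as the fall-through case.
const WEIGHTS = { completeness: 0.25, consistency: 0.20, traceability: 0.25, depth: 0.20, coverage: 0.10 };

function qualityGate(scores: Record<keyof typeof WEIGHTS, number>): { overall: number; gate: string } {
  const overall = (Object.keys(WEIGHTS) as (keyof typeof WEIGHTS)[])
    .reduce((sum, d) => sum + scores[d] * WEIGHTS[d], 0);
  if (overall >= 80 && scores.coverage >= 70) return { overall, gate: "PASS" };
  if (overall < 60 || scores.coverage < 50) return { overall, gate: "FAIL" };
  return { overall, gate: "REVIEW" };
}
```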
## Phase 4: Validation
### Readiness Report Format
Write to `<session_folder>/spec/readiness-report.md`:
```
# Specification Readiness Report
**Generated**: <timestamp>
**Overall Score**: <score>%
**Quality Gate**: <PASS|REVIEW|FAIL> - <message>
**Recommended Action**: <action>
## Dimension Scores
| Dimension | Score | Weight | Weighted Score |
|-----------|-------|--------|----------------|
| Completeness | <n>% | 25% | <n>% |
| Consistency | <n>% | 20% | <n>% |
| Traceability | <n>% | 25% | <n>% |
| Depth | <n>% | 20% | <n>% |
| Coverage | <n>% | 10% | <n>% |
## Completeness Analysis
(per-phase breakdown: sections present/expected, missing sections)
## Consistency Analysis
(issues by area: terminology, format, references, naming)
## Traceability Analysis
(complete chains / total, weak links)
## Depth Analysis
(per sub-dimension scores)
## Requirement Coverage
(covered / total, uncovered requirements list)
```
### Spec Summary Format
Write to `<session_folder>/spec/spec-summary.md`:
```
# Specification Summary
**Overall Quality Score**: <score>%
**Quality Gate**: <gate>
## Documents Reviewed
(per document: phase, path, size, section list)
## Key Findings
### Strengths (dimensions scoring >= 80%)
### Areas for Improvement (dimensions scoring < 70%)
### Recommendations
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Spec folder empty | FAIL gate, report no documents found |
| Missing phase document | Score 0 for that phase in completeness, note in report |
| No original requirements found | Score coverage at 100% (nothing to cover) |
| Broken references | Flag in consistency, do not fail entire review |


@@ -1,104 +0,0 @@
# Role: reviewer
Dual-mode review: code review (REVIEW-*) and spec quality validation (QUALITY-*). Auto-switches by task prefix.
## Identity
- **Name**: `reviewer` | **Prefix**: `REVIEW-*` + `QUALITY-*` | **Tag**: `[reviewer]`
- **Responsibility**: Branch by Prefix → Review/Score → Report
## Boundaries
### MUST
- Process REVIEW-* and QUALITY-* tasks
- Generate readiness-report.md for QUALITY tasks
- Cover all required dimensions per mode
### MUST NOT
- Create tasks
- Modify source code
- Skip quality dimensions
- Approve without verification
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| review_result | → coordinator | Code review complete |
| quality_result | → coordinator | Spec quality complete |
| fix_required | → coordinator | Critical issues found |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/code-review.md | 4-dimension code review |
| commands/spec-quality.md | 5-dimension spec quality |
---
## Mode Detection
| Task Prefix | Mode | Dimensions |
|-------------|------|-----------|
| REVIEW-* | Code Review | quality, security, architecture, requirements |
| QUALITY-* | Spec Quality | completeness, consistency, traceability, depth, coverage |
---
## Code Review (REVIEW-*)
**Inputs**: Plan file, git diff, modified files, test results (if available)
**4 dimensions** (delegate to commands/code-review.md):
| Dimension | Critical Issues |
|-----------|----------------|
| Quality | Empty catch, any in public APIs, @ts-ignore, console.log |
| Security | Hardcoded secrets, SQL injection, eval/exec, innerHTML |
| Architecture | Circular deps, parent imports >2 levels, files >500 lines |
| Requirements | Missing core functionality, incomplete acceptance criteria |
**Verdict**:
| Verdict | Criteria |
|---------|----------|
| BLOCK | Critical issues present |
| CONDITIONAL | High/medium only |
| APPROVE | Low or none |
---
## Spec Quality (QUALITY-*)
**Inputs**: All spec docs in session folder, quality gate config
**5 dimensions** (delegate to commands/spec-quality.md):
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All sections present with substance |
| Consistency | 20% | Terminology, format, references |
| Traceability | 25% | Goals → Reqs → Arch → Stories chain |
| Depth | 20% | AC testable, ADRs justified, stories estimable |
| Coverage | 10% | Original requirements mapped |
**Quality gate**:
| Gate | Criteria |
|------|----------|
| PASS | Score ≥ 80% AND coverage ≥ 70% |
| REVIEW | Score 60-79% OR coverage 50-69% |
| FAIL | Score < 60% OR coverage < 50% |
**Artifacts**: readiness-report.md + spec-summary.md
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Missing context | Request from coordinator |
| Invalid mode | Abort with error |
| Analysis failure | Retry, then fallback template |


@@ -1,152 +0,0 @@
# Command: validate
## Purpose
Test-fix cycle with strategy engine: detect framework, run tests, classify failures, select a fix strategy, and iterate until the pass-rate target is met or max iterations are exhausted.
## Constants
| Constant | Value | Description |
|----------|-------|-------------|
| MAX_ITERATIONS | 10 | Maximum test-fix cycle attempts |
| PASS_RATE_TARGET | 95% | Minimum pass rate to succeed |
| AFFECTED_TESTS_FIRST | true | Run affected tests before full suite |
## Phase 2: Context Loading
Load from task description and executor output:
| Input | Source | Required |
|-------|--------|----------|
| Framework | Auto-detected (see below) | Yes |
| Modified files | Executor task output / git diff | Yes |
| Affected tests | Derived from modified files | No |
| Session folder | Task description `Session:` field | Yes |
| Wisdom | `<session_folder>/wisdom/` | No |
**Framework detection** (priority order):
| Priority | Method | Check |
|----------|--------|-------|
| 1 | package.json devDependencies | vitest, jest, mocha, pytest |
| 2 | package.json scripts.test | Command contains framework name |
| 3 | Config file existence | vitest.config.*, jest.config.*, pytest.ini |
**Affected test discovery** from modified files:
- For each modified file `<name>.<ext>`, search:
`<name>.test.ts`, `<name>.spec.ts`, `tests/<name>.test.ts`, `__tests__/<name>.test.ts`
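The path expansion above, sketched as a pure candidate generator. Whether `tests/` and `__tests__/` resolve relative to the source directory or the project root is an assumption here; actual existence checks happen via the file tools at runtime:

```typescript
// Sketch of affected-test discovery: expand one modified file into the four
// candidate test-path variants listed above.
function candidateTestPaths(modifiedFile: string): string[] {
  const parts = modifiedFile.split("/");
  const file = parts.pop() ?? modifiedFile;
  const dir = parts.join("/");
  const name = file.replace(/\.[^.]+$/, ""); // strip extension
  const prefix = dir ? dir + "/" : "";
  return [
    `${prefix}${name}.test.ts`,
    `${prefix}${name}.spec.ts`,
    `${prefix}tests/${name}.test.ts`,
    `${prefix}__tests__/${name}.test.ts`,
  ];
}
```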
## Phase 3: Test-Fix Cycle
### Test Command Table
| Framework | Affected Tests | Full Suite |
|-----------|---------------|------------|
| vitest | `vitest run <files> --reporter=verbose` | `vitest run --reporter=verbose` |
| jest | `jest <files> --no-coverage --verbose` | `jest --no-coverage --verbose` |
| mocha | `mocha <files> --reporter spec` | `mocha --reporter spec` |
| pytest | `pytest <files> -v --tb=short` | `pytest -v --tb=short` |
### Iteration Flow
```
Iteration 1
├─ Run affected tests (or full suite if none)
├─ Parse results → pass rate
├─ Pass rate >= 95%?
│ ├─ YES + affected-only → run full suite to confirm
│ │ ├─ Full suite passes → SUCCESS
│ │ └─ Full suite fails → continue with full results
│ └─ YES + full suite → SUCCESS
└─ NO → classify failures → select strategy → apply fixes
Iteration 2..10
├─ Re-run tests
├─ Track best pass rate across iterations
├─ Pass rate >= 95% → SUCCESS
├─ No failures to fix → STOP (anomaly)
└─ Failures remain → classify → select strategy → apply fixes
After iteration 10
└─ FAIL: max iterations reached, report best pass rate
```
**Progress update**: When iteration > 5, send progress to coordinator with current pass rate and iteration count.
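The iteration flow condenses to a loop like this sketch, with `runTests` and `applyFixes` injected so only the control flow is shown (the affected-vs-full-suite confirmation step is elided):

```typescript
// Condensed sketch of the test-fix loop: run, track best pass rate, stop on
// target or after MAX_ITERATIONS, otherwise fix and retry.
interface CycleDeps {
  runTests: () => number;               // returns pass rate 0-100
  applyFixes: (iteration: number) => void;
}

function testFixCycle(deps: CycleDeps, maxIterations = 10, target = 95): { passed: boolean; iterations: number; bestRate: number } {
  let bestRate = 0;
  for (let i = 1; i <= maxIterations; i++) {
    const rate = deps.runTests();
    bestRate = Math.max(bestRate, rate);
    if (rate >= target) return { passed: true, iterations: i, bestRate };
    deps.applyFixes(i);
  }
  // FAIL path: report the best pass rate seen across all iterations
  return { passed: false, iterations: maxIterations, bestRate };
}
```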
### Strategy Selection Matrix
| Condition | Strategy | Behavior |
|-----------|----------|----------|
| Iteration <= 3 OR pass rate >= 80% | Conservative | Fix one failure at a time, highest severity first |
| Critical failures exist AND count < 5 | Surgical | Identify common error pattern, fix all matching occurrences |
| Pass rate < 50% OR iteration > 7 | Aggressive | Fix all critical + high failures in batch |
| Default (no other match) | Conservative | Safe fallback |
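A sketch of the matrix, evaluated top to bottom so the first matching row wins (row precedence is an assumption, since the table does not state it):

```typescript
// Sketch of strategy selection from iteration count, pass rate, and the
// number of critical failures.
type Strategy = "Conservative" | "Surgical" | "Aggressive";

function selectStrategy(iteration: number, passRate: number, criticalCount: number): Strategy {
  if (iteration <= 3 || passRate >= 80) return "Conservative";
  if (criticalCount > 0 && criticalCount < 5) return "Surgical";
  if (passRate < 50 || iteration > 7) return "Aggressive";
  return "Conservative"; // safe fallback
}
```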
### Failure Classification Table
| Severity | Error Patterns |
|----------|---------------|
| Critical | SyntaxError, cannot find module, is not defined |
| High | Assertion mismatch (expected/received), toBe/toEqual failures |
| Medium | Timeout, async errors |
| Low | Warnings, deprecation notices |
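The classification table maps onto regex checks against the raw failure message, roughly as follows (the exact patterns are an approximation of the table):

```typescript
// Sketch of failure classification by error-message pattern, checked in
// descending severity order.
type FailureSeverity = "critical" | "high" | "medium" | "low";

function classifyFailure(message: string): FailureSeverity {
  const m = message.toLowerCase();
  if (/syntaxerror|cannot find module|is not defined/.test(m)) return "critical";
  if (/expected.*received|tobe|toequal/.test(m)) return "high";
  if (/timeout|async/.test(m)) return "medium";
  return "low"; // warnings, deprecation notices
}
```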
### Fix Approach by Error Type
| Error Type | Pattern | Fix Approach |
|------------|---------|-------------|
| missing_import | "Cannot find module '<module>'" | Add import statement, resolve relative path from modified files |
| undefined_variable | "<name> is not defined" | Check source for renamed/moved exports, update reference |
| assertion_mismatch | "Expected: X, Received: Y" | Read test file at failure line, update expected value if behavior change is intentional |
| timeout | "Timeout" | Increase timeout or add async/await |
| syntax_error | "SyntaxError" | Read source at error line, fix syntax |
### Tool Call Example
Run tests with framework-appropriate command:
```bash
Bash(command="vitest run src/utils/__tests__/parser.test.ts --reporter=verbose", timeout=120000)
```
Read test file to analyze failure:
```bash
Read(file_path="<test_file_path>")
```
Apply fix via Edit:
```bash
Edit(file_path="<file>", old_string="<old>", new_string="<new>")
```
## Phase 4: Validation
### Success Criteria
| Check | Criteria | Required |
|-------|----------|----------|
| Pass rate | >= 95% | Yes |
| Full suite run | At least one full suite pass | Yes |
| No critical failures | Zero critical-severity failures remaining | Yes |
| Best pass rate tracked | Reported in final result | Yes |
### Result Routing
| Outcome | Message Type | Content |
|---------|-------------|---------|
| Pass rate >= target | test_result | Success, iterations count, full suite confirmed |
| Max iterations, pass rate < target | fix_required | Best pass rate, remaining failures, iteration count |
| No tests found | error | Framework detected but no test files |
| Framework not detected | error | Detection methods exhausted |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Framework not detected | Report error to coordinator, list detection attempts |
| No test files found | Report to coordinator, suggest manual test path |
| Test command fails (exit code != 0/1) | Check stderr for environment issues, retry once |
| Fix application fails | Skip fix, try next iteration with different strategy |
| Infinite loop (same failures repeat) | Abort after 3 identical result sets |


@@ -1,108 +0,0 @@
# Role: tester
Adaptive test execution with fix cycles and quality gates.
## Identity
- **Name**: `tester` | **Prefix**: `TEST-*` | **Tag**: `[tester]`
- **Responsibility**: Detect Framework → Run Tests → Fix Cycle → Report
## Boundaries
### MUST
- Only process TEST-* tasks
- Detect test framework before running
- Run affected tests before full suite
- Use strategy engine for fix cycles
### MUST NOT
- Create tasks
- Contact other workers directly
- Modify production code beyond test fixes
- Skip framework detection
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| test_result | → coordinator | Tests pass or final result |
| fix_required | → coordinator | Failures after max iterations |
| error | → coordinator | Framework not detected |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/validate.md | Test-fix cycle with strategy engine |
---
## Phase 2: Framework Detection & Test Discovery
**Framework detection** (priority order):
| Priority | Method | Frameworks |
|----------|--------|-----------|
| 1 | package.json devDependencies | vitest, jest, mocha, pytest |
| 2 | package.json scripts.test | vitest, jest, mocha, pytest |
| 3 | Config files | vitest.config.*, jest.config.*, pytest.ini |
**Affected test discovery** from executor's modified files:
- Search variants: `<name>.test.ts`, `<name>.spec.ts`, `tests/<name>.test.ts`, `__tests__/<name>.test.ts`
---
## Phase 3: Test Execution & Fix Cycle
**Config**: MAX_ITERATIONS=10, PASS_RATE_TARGET=95%, AFFECTED_TESTS_FIRST=true
Delegate to `commands/validate.md`:
1. Run affected tests → parse results
2. Pass rate met → run full suite
3. Failures → select strategy → fix → re-run → repeat
**Strategy selection**:
| Condition | Strategy | Behavior |
|-----------|----------|----------|
| Iteration ≤ 3 or pass ≥ 80% | Conservative | Fix one critical failure at a time |
| Critical failures < 5 | Surgical | Fix specific pattern everywhere |
| Pass < 50% or iteration > 7 | Aggressive | Fix all critical + high failures in batch |
**Test commands**:
| Framework | Affected | Full Suite |
|-----------|---------|------------|
| vitest | `vitest run <files>` | `vitest run` |
| jest | `jest <files> --no-coverage` | `jest --no-coverage` |
| pytest | `pytest <files> -v` | `pytest -v` |
---
## Phase 4: Result Analysis
**Failure classification**:
| Severity | Patterns |
|----------|----------|
| Critical | SyntaxError, cannot find module, is not defined |
| High | Assertion failures, toBe/toEqual |
| Medium | Timeout, async errors |
| Low | Warnings, deprecations |
**Report routing**:
| Condition | Type |
|-----------|------|
| Pass rate ≥ target | test_result (success) |
| Pass rate < target after max iterations | fix_required |
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Framework not detected | Report error to coordinator |
| No tests found | Report to coordinator |
| Infinite fix loop | Abort after MAX_ITERATIONS |


@@ -1,187 +0,0 @@
# Command: generate-doc
## Purpose
Multi-CLI document generation for 4 document types. Each type runs parallel or staged CLI analysis, then synthesizes the results into a templated document.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Document standards | `../../specs/document-standards.md` | Yes |
| Template | From routing table below | Yes |
| Spec config | `<session-folder>/spec/spec-config.json` | Yes |
| Discovery context | `<session-folder>/spec/discovery-context.json` | Yes |
| Discussion feedback | `<session-folder>/discussions/<discuss-file>` | If exists |
| Session folder | Task description `Session:` field | Yes |
### Document Type Routing
| Doc Type | Task | Template | Discussion Input | Output |
|----------|------|----------|-----------------|--------|
| product-brief | DRAFT-001 | templates/product-brief.md | discuss-001-scope.md | spec/product-brief.md |
| requirements | DRAFT-002 | templates/requirements-prd.md | discuss-002-brief.md | spec/requirements/_index.md |
| architecture | DRAFT-003 | templates/architecture-doc.md | discuss-003-requirements.md | spec/architecture/_index.md |
| epics | DRAFT-004 | templates/epics-template.md | discuss-004-architecture.md | spec/epics/_index.md |
### Progressive Dependencies
Each doc type requires all prior docs: discovery-context → product-brief → requirements/_index → architecture/_index.
## Phase 3: Document Generation
### Shared Context Block
Built from spec-config and discovery-context for all CLI prompts:
```
SEED: <topic>
PROBLEM: <problem_statement>
TARGET USERS: <target_users>
DOMAIN: <domain>
CONSTRAINTS: <constraints>
FOCUS AREAS: <focus_areas>
CODEBASE CONTEXT: <existing_patterns, tech_stack> (if discovery-context exists)
```
---
### DRAFT-001: Product Brief
**Strategy**: 3-way parallel CLI analysis, then synthesize.
| Perspective | CLI Tool | Focus |
|-------------|----------|-------|
| Product | gemini | Vision, market fit, success metrics, scope |
| Technical | codex | Feasibility, constraints, integration complexity |
| User | claude | Personas, journey maps, pain points, UX |
**CLI call template** (one per perspective, all `run_in_background: true`):
```bash
Bash(command="ccw cli -p \"PURPOSE: <perspective> analysis for specification.\n<shared-context>\nTASK: <perspective-specific tasks>\nMODE: analysis\nEXPECTED: <structured output>\nCONSTRAINTS: <perspective scope>\" --tool <tool> --mode analysis", run_in_background=true)
```
**Synthesis flow** (after all 3 return):
```
3 CLI outputs received
├─ Identify convergent themes (2+ perspectives agree)
├─ Identify conflicts (e.g., product wants X, technical says infeasible)
├─ Extract unique insights per perspective
├─ Integrate discussion feedback (if exists)
└─ Fill template → Write to spec/product-brief.md
```
**Template sections**: Vision, Problem Statement, Target Users, Goals, Scope, Success Criteria, Assumptions.
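Convergent-theme detection (2+ perspectives agree) can be sketched with plain keyword matching as a stand-in for the real synthesis step; the theme list here is a hypothetical input, not something the pipeline extracts this way:

```typescript
// Sketch of convergence detection across the 3 perspective outputs: a theme
// is convergent when at least 2 outputs mention it.
function convergentThemes(outputs: string[], themes: string[]): string[] {
  return themes.filter(theme => {
    const mentions = outputs.filter(o => o.toLowerCase().includes(theme.toLowerCase())).length;
    return mentions >= 2;
  });
}
```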
---
### DRAFT-002: Requirements/PRD
**Strategy**: Single CLI expansion, then structure into individual requirement files.
| Step | Tool | Action |
|------|------|--------|
| 1 | gemini | Generate functional (REQ-NNN) and non-functional (NFR-type-NNN) requirements |
| 2 | (local) | Integrate discussion feedback |
| 3 | (local) | Write individual files + _index.md |
**CLI prompt focus**: For each product-brief goal, generate 3-7 functional requirements with user stories, acceptance criteria, and MoSCoW priority. Generate NFR categories: performance, security, scalability, usability.
**Output structure**:
```
spec/requirements/
├─ _index.md (summary table + MoSCoW breakdown)
├─ REQ-001-<slug>.md (individual functional requirement)
├─ REQ-002-<slug>.md
├─ NFR-perf-001-<slug>.md (non-functional)
└─ NFR-sec-001-<slug>.md
```
Each requirement file has: YAML frontmatter (id, title, priority, status, traces), description, user story, acceptance criteria.
---
### DRAFT-003: Architecture
**Strategy**: 2-stage CLI (design + critical review).
| Stage | Tool | Purpose |
|-------|------|---------|
| 1 | gemini | Architecture design: style, components, tech stack, ADRs, data model, security |
| 2 | codex | Critical review: challenge ADRs, identify bottlenecks, rate quality 1-5 |
Stage 2 runs after stage 1 completes (sequential dependency).
**After both complete**:
1. Integrate discussion feedback
2. Map codebase integration points (from discovery-context.relevant_files)
3. Write individual ADR files + _index.md
**Output structure**:
```
spec/architecture/
├─ _index.md (overview, component diagram, tech stack, data model, API, security)
├─ ADR-001-<slug>.md (individual decision record)
└─ ADR-002-<slug>.md
```
Each ADR file has: YAML frontmatter (id, title, status, traces), context, decision, alternatives with pros/cons, consequences, review feedback.
---
### DRAFT-004: Epics & Stories
**Strategy**: Single CLI decomposition, then structure into individual epic files.
| Step | Tool | Action |
|------|------|--------|
| 1 | gemini | Decompose requirements into 3-7 Epics with Stories, dependency map, MVP subset |
| 2 | (local) | Integrate discussion feedback |
| 3 | (local) | Write individual EPIC files + _index.md |
**CLI prompt focus**: Group requirements by domain, generate EPIC-NNN with STORY-EPIC-NNN children, define MVP subset, create Mermaid dependency diagram, recommend execution order.
**Output structure**:
```
spec/epics/
├─ _index.md (overview table, dependency map, execution order, MVP scope)
├─ EPIC-001-<slug>.md (individual epic with stories)
└─ EPIC-002-<slug>.md
```
Each epic file has: YAML frontmatter (id, title, priority, mvp, size, requirements, architecture, dependencies), stories with user stories and acceptance criteria.
All generated documents include YAML frontmatter: session_id, phase, document_type, status=draft, generated_at, version, dependencies.
## Phase 4: Validation
| Check | What to Verify |
|-------|---------------|
| has_frontmatter | Document starts with valid YAML frontmatter |
| sections_complete | All template sections present in output |
| cross_references | session_id matches spec-config |
| discussion_integrated | Feedback reflected (if feedback exists) |
| files_written | All expected files exist (individual + _index.md) |
### Result Routing
| Outcome | Message Type | Content |
|---------|-------------|---------|
| All checks pass | draft_ready | Doc type, output path, summary |
| Validation issues | draft_ready (with warnings) | Doc type, output path, issues list |
| Critical failure | error | Missing template, CLI failure |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Prior doc not found | Notify coordinator, request prerequisite task completion |
| Template not found | Error, report missing template path |
| CLI tool fails | Retry with fallback tool (gemini → codex → claude) |
| Discussion contradicts prior docs | Note conflict in document, flag for next discussion round |
| Partial CLI output | Use available data, note gaps in document |


@@ -1,96 +0,0 @@
# Role: writer
Product Brief, Requirements/PRD, Architecture, and Epics & Stories document generation. Maps to spec-generator Phases 2-5.
## Identity
- **Name**: `writer` | **Prefix**: `DRAFT-*` | **Tag**: `[writer]`
- **Responsibility**: Load Context → Generate Document → Incorporate Feedback → Report
## Boundaries
### MUST
- Only process DRAFT-* tasks
- Read templates before generating (from `../../templates/`)
- Follow document-standards.md (from `../../specs/`)
- Integrate discussion feedback when available
- Generate proper YAML frontmatter
### MUST NOT
- Create tasks for other roles
- Skip template loading
- Modify discussion records
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| draft_ready | → coordinator | Document writing complete |
| draft_revision | → coordinator | Document revised per feedback |
| error | → coordinator | Template missing, insufficient context |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/generate-doc.md | Multi-CLI document generation |
| gemini, codex, claude CLI | Multi-perspective content generation |
---
## Phase 2: Context & Discussion Loading
**Objective**: Load all required inputs for document generation.
**Document type routing**:
| Task Subject Contains | Doc Type | Template | Discussion Input |
|----------------------|----------|----------|-----------------|
| Product Brief | product-brief | templates/product-brief.md | discuss-001-scope.md |
| Requirements / PRD | requirements | templates/requirements-prd.md | discuss-002-brief.md |
| Architecture | architecture | templates/architecture-doc.md | discuss-003-requirements.md |
| Epics | epics | templates/epics-template.md | discuss-004-architecture.md |
**Progressive dependency loading**:
| Doc Type | Requires |
|----------|----------|
| product-brief | discovery-context.json |
| requirements | + product-brief.md |
| architecture | + requirements/_index.md |
| epics | + architecture/_index.md |
**Success**: Template loaded, discussion feedback loaded (if exists), prior docs loaded.
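The two tables above compose into a simple lookup. A minimal sketch, assuming the paths and doc-type keys shown in the tables; the real skill may resolve these differently.

```python
# Illustrative encoding of the routing and dependency tables.
DOC_ROUTES = {
    "product-brief": ("templates/product-brief.md", "discuss-001-scope.md"),
    "requirements": ("templates/requirements-prd.md", "discuss-002-brief.md"),
    "architecture": ("templates/architecture-doc.md", "discuss-003-requirements.md"),
    "epics": ("templates/epics-template.md", "discuss-004-architecture.md"),
}

# Progressive loading: each doc type requires everything before it.
DOC_ORDER = ["product-brief", "requirements", "architecture", "epics"]
DEP_CHAIN = ["discovery-context.json", "product-brief.md",
             "requirements/_index.md", "architecture/_index.md"]

def required_inputs(doc_type: str) -> list[str]:
    """Return the prior documents a doc type depends on, in order."""
    return DEP_CHAIN[: DOC_ORDER.index(doc_type) + 1]
```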
---
## Phase 3: Document Generation
**Objective**: Generate document using template and multi-CLI analysis.
Delegate to `commands/generate-doc.md` with: doc type, session folder, spec config, discussion feedback, prior docs.
---
## Phase 4: Self-Validation
**Objective**: Verify document meets standards.
| Check | What to Verify |
|-------|---------------|
| has_frontmatter | Starts with YAML frontmatter |
| sections_complete | All template sections present |
| cross_references | session_id included |
| discussion_integrated | Reflects feedback (if exists) |
**Report**: doc type, validation status, summary, output path.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Prior doc not found | Notify coordinator, request prerequisite |
| CLI failure | Retry with fallback tool |
| Discussion contradicts prior docs | Note conflict, flag for next discussion |


@@ -1,192 +0,0 @@
# Document Standards
Defines format conventions, YAML frontmatter schema, naming rules, and content structure for all spec-generator outputs.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| All Phases | Frontmatter format | YAML Frontmatter Schema |
| All Phases | File naming | Naming Conventions |
| Phase 2-5 | Document structure | Content Structure |
| Phase 6 | Validation reference | All sections |
---
## YAML Frontmatter Schema
Every generated document MUST begin with YAML frontmatter:
```yaml
---
session_id: SPEC-{slug}-{YYYY-MM-DD}
phase: {1-6}
document_type: {product-brief|requirements|architecture|epics|readiness-report|spec-summary}
status: draft|review|complete
generated_at: {ISO8601 timestamp}
stepsCompleted: []
version: 1
dependencies:
- {list of input documents used}
---
```
### Field Definitions
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `session_id` | string | Yes | Session identifier matching spec-config.json |
| `phase` | number | Yes | Phase number that generated this document (1-6) |
| `document_type` | string | Yes | One of: product-brief, requirements, architecture, epics, readiness-report, spec-summary |
| `status` | enum | Yes | draft (initial), review (user reviewed), complete (finalized) |
| `generated_at` | string | Yes | ISO8601 timestamp of generation |
| `stepsCompleted` | array | Yes | List of step IDs completed during generation |
| `version` | number | Yes | Document version, incremented on re-generation |
| `dependencies` | array | No | List of input files this document depends on |
### Status Transitions
```
draft -> review -> complete
  |                   ^
  +-------------------+  (direct promotion in auto mode)
```
- **draft**: Initial generation, not yet user-reviewed
- **review**: User has reviewed and provided feedback
- **complete**: Finalized, ready for downstream consumption
In auto mode (`-y`), documents are promoted directly from `draft` to `complete`.
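A minimal structural check against this schema can be sketched as follows. This is a naive flat-key parser for illustration, not a full YAML validator, and the function name is an assumption.

```python
REQUIRED = ("session_id", "phase", "document_type", "status",
            "generated_at", "stepsCompleted", "version")
VALID_STATUS = {"draft", "review", "complete"}

def validate_frontmatter(text: str) -> list[str]:
    """Return a list of problems; an empty list means the frontmatter passes."""
    if not text.startswith("---\n"):
        return ["document does not start with YAML frontmatter"]
    end = text.find("\n---", 4)
    if end == -1:
        return ["frontmatter block is not closed"]
    block = text[4:end]
    # Collect top-level keys only (skip nested/list lines)
    keys = {line.split(":", 1)[0].strip() for line in block.splitlines()
            if ":" in line and not line.startswith((" ", "-"))}
    issues = [f"missing field: {f}" for f in REQUIRED if f not in keys]
    for line in block.splitlines():
        if line.startswith("status:"):
            if line.split(":", 1)[1].strip() not in VALID_STATUS:
                issues.append("status must be draft|review|complete")
    return issues
```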
---
## Naming Conventions
### Session ID Format
```
SPEC-{slug}-{YYYY-MM-DD}
```
- **slug**: Lowercase, alphanumeric + Chinese characters, hyphens as separators, max 40 chars
- **date**: UTC+8 date in YYYY-MM-DD format
Examples:
- `SPEC-task-management-system-2026-02-11`
- `SPEC-user-auth-oauth-2026-02-11`
### Output Files
| File | Phase | Description |
|------|-------|-------------|
| `spec-config.json` | 1 | Session configuration and state |
| `discovery-context.json` | 1 | Codebase exploration results (optional) |
| `product-brief.md` | 2 | Product brief document |
| `requirements/` | 3 | PRD directory (`_index.md` + individual `REQ-NNN` files) |
| `architecture/` | 4 | Architecture directory (`_index.md` + individual ADR files) |
| `epics/` | 5 | Epics directory (`_index.md` + individual Epic files) |
| `readiness-report.md` | 6 | Quality validation report |
| `spec-summary.md` | 6 | One-page executive summary |
### Output Directory
```
.workflow/.spec/{session-id}/
```
---
## Content Structure
### Heading Hierarchy
- `#` (H1): Document title only (one per document)
- `##` (H2): Major sections
- `###` (H3): Subsections
- `####` (H4): Detail items (use sparingly)
Maximum depth: 4 levels. Prefer flat structures.
### Section Ordering
Every document follows this general pattern:
1. **YAML Frontmatter** (mandatory)
2. **Title** (H1)
3. **Executive Summary** (2-3 sentences)
4. **Core Content Sections** (H2, document-specific)
5. **Open Questions / Risks** (if applicable)
6. **References / Traceability** (links to upstream/downstream docs)
### Formatting Rules
| Element | Format | Example |
|---------|--------|---------|
| Requirements | `REQ-{NNN}` prefix | REQ-001: User login |
| Acceptance criteria | Checkbox list | `- [ ] User can log in with email` |
| Architecture decisions | `ADR-{NNN}` prefix | ADR-001: Use PostgreSQL |
| Epics | `EPIC-{NNN}` prefix | EPIC-001: Authentication |
| Stories | `STORY-{EPIC}-{NNN}` prefix | STORY-001-001: Login form |
| Priority tags | MoSCoW labels | `[Must]`, `[Should]`, `[Could]`, `[Won't]` |
| Mermaid diagrams | Fenced code blocks | `` ```mermaid ... ``` `` |
| Code examples | Language-tagged blocks | `` ```typescript ... ``` `` |
### Cross-Reference Format
Use relative references between documents:
```markdown
See [Product Brief](product-brief.md#section-name) for details.
Derived from [REQ-001](requirements.md#req-001).
```
### Language
- Document body: Follow user's input language (Chinese or English)
- Technical identifiers: Always English (REQ-001, ADR-001, EPIC-001)
- YAML frontmatter keys: Always English
---
## spec-config.json Schema
```json
{
"session_id": "string (required)",
"seed_input": "string (required) - original user input",
"input_type": "text|file (required)",
"timestamp": "ISO8601 (required)",
"mode": "interactive|auto (required)",
"complexity": "simple|moderate|complex (required)",
"depth": "light|standard|comprehensive (required)",
"focus_areas": ["string array"],
"seed_analysis": {
"problem_statement": "string",
"target_users": ["string array"],
"domain": "string",
"constraints": ["string array"],
"dimensions": ["string array - 3-5 exploration dimensions"]
},
"has_codebase": "boolean",
"phasesCompleted": [
{
"phase": "number (1-6)",
"name": "string (phase name)",
"output_file": "string (primary output file)",
"completed_at": "ISO8601"
}
]
}
```
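A minimal structural check against this schema might look like the following; a sketch covering required keys and one enum, not a full JSON Schema validation.

```python
import json

REQUIRED_KEYS = {"session_id", "seed_input", "input_type", "timestamp",
                 "mode", "complexity", "depth"}

def check_spec_config(raw: str) -> list[str]:
    """Return a list of problems with a spec-config.json payload."""
    cfg = json.loads(raw)
    issues = [f"missing: {k}" for k in sorted(REQUIRED_KEYS - cfg.keys())]
    if cfg.get("mode") not in ("interactive", "auto"):
        issues.append("mode must be interactive|auto")
    return issues
```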
---
## Validation Checklist
- [ ] Every document starts with valid YAML frontmatter
- [ ] `session_id` matches across all documents in a session
- [ ] `status` field reflects current document state
- [ ] All cross-references resolve to valid targets
- [ ] Heading hierarchy is correct (no skipped levels)
- [ ] Technical identifiers use correct prefixes
- [ ] Output files are in the correct directory


@@ -1,207 +0,0 @@
# Quality Gates
Per-phase quality gate criteria and scoring dimensions for spec-generator outputs.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| Phase 2-5 | Post-generation self-check | Per-Phase Gates |
| Phase 6 | Cross-document validation | Cross-Document Validation |
| Phase 6 | Final scoring | Scoring Dimensions |
---
## Quality Thresholds
| Gate | Score | Action |
|------|-------|--------|
| **Pass** | >= 80% | Continue to next phase |
| **Review** | 60-79% | Log warnings, continue with caveats |
| **Fail** | < 60% | Must address issues before continuing |
In auto mode (`-y`), Review-level issues are logged but do not block progress.
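The thresholds encode directly into a small function; `auto_mode` mirrors the `-y` rule that Review-level issues do not block progress. The return labels are illustrative.

```python
def gate(score: float, auto_mode: bool = False) -> str:
    """Map a 0-100 quality score to a gate outcome."""
    if score >= 80:
        return "pass"
    if score >= 60:  # Review band: log warnings, optionally continue
        return "pass-with-warnings" if auto_mode else "review"
    return "fail"
```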
---
## Scoring Dimensions
### 1. Completeness (25%)
All required sections present with substantive content.
| Score | Criteria |
|-------|----------|
| 100% | All template sections filled with detailed content |
| 75% | All sections present, some lack detail |
| 50% | Major sections present but minor sections missing |
| 25% | Multiple major sections missing or empty |
| 0% | Document is a skeleton only |
### 2. Consistency (25%)
Terminology, formatting, and references are uniform across documents.
| Score | Criteria |
|-------|----------|
| 100% | All terms consistent, all references valid, formatting uniform |
| 75% | Minor terminology variations, all references valid |
| 50% | Some inconsistent terms, 1-2 broken references |
| 25% | Frequent inconsistencies, multiple broken references |
| 0% | Documents contradict each other |
### 3. Traceability (25%)
Requirements, architecture decisions, and stories trace back to goals.
| Score | Criteria |
|-------|----------|
| 100% | Every story traces to a requirement, every requirement traces to a goal |
| 75% | Most items traceable, few orphans |
| 50% | Partial traceability, some disconnected items |
| 25% | Weak traceability, many orphan items |
| 0% | No traceability between documents |
### 4. Depth (25%)
Content provides sufficient detail for execution teams.
| Score | Criteria |
|-------|----------|
| 100% | Acceptance criteria specific and testable, architecture decisions justified, stories estimable |
| 75% | Most items detailed enough, few vague areas |
| 50% | Mix of detailed and vague content |
| 25% | Mostly high-level, lacking actionable detail |
| 0% | Too abstract for execution |
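Since each dimension carries 25%, the overall score is a plain mean; written as an explicit weighted sum so the weights are easy to adjust.

```python
WEIGHTS = {"completeness": 0.25, "consistency": 0.25,
           "traceability": 0.25, "depth": 0.25}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average across the four scoring dimensions (0-100 each)."""
    return sum(scores[dim] * w for dim, w in WEIGHTS.items())
```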
---
## Per-Phase Quality Gates
### Phase 1: Discovery
| Check | Criteria | Severity |
|-------|----------|----------|
| Session ID valid | Matches `SPEC-{slug}-{date}` format | Error |
| Problem statement exists | Non-empty, >= 20 characters | Error |
| Target users identified | >= 1 user group | Error |
| Dimensions generated | 3-5 exploration dimensions | Warning |
| Constraints listed | >= 0 (can be empty with justification) | Info |
### Phase 2: Product Brief
| Check | Criteria | Severity |
|-------|----------|----------|
| Vision statement | Clear, 1-3 sentences | Error |
| Problem statement | Specific and measurable | Error |
| Target users | >= 1 persona with needs described | Error |
| Goals defined | >= 2 measurable goals | Error |
| Success metrics | >= 2 quantifiable metrics | Warning |
| Scope boundaries | In-scope and out-of-scope listed | Warning |
| Multi-perspective | >= 2 CLI perspectives synthesized | Info |
### Phase 3: Requirements (PRD)
| Check | Criteria | Severity |
|-------|----------|----------|
| Functional requirements | >= 3 with REQ-NNN IDs | Error |
| Acceptance criteria | Every requirement has >= 1 criterion | Error |
| MoSCoW priority | Every requirement tagged | Error |
| Non-functional requirements | >= 1 (performance, security, etc.) | Warning |
| User stories | >= 1 per Must-have requirement | Warning |
| Traceability | Requirements trace to product brief goals | Warning |
### Phase 4: Architecture
| Check | Criteria | Severity |
|-------|----------|----------|
| Component diagram | Present (Mermaid or ASCII) | Error |
| Tech stack specified | Languages, frameworks, key libraries | Error |
| ADR present | >= 1 Architecture Decision Record | Error |
| ADR has alternatives | Each ADR lists >= 2 options considered | Warning |
| Integration points | External systems/APIs identified | Warning |
| Data model | Key entities and relationships described | Warning |
| Codebase mapping | Mapped to existing code (if has_codebase) | Info |
### Phase 5: Epics & Stories
| Check | Criteria | Severity |
|-------|----------|----------|
| Epics defined | 3-7 epics with EPIC-NNN IDs | Error |
| MVP subset | >= 1 epic tagged as MVP | Error |
| Stories per epic | 2-5 stories per epic | Error |
| Story format | "As a...I want...So that..." pattern | Warning |
| Dependency map | Cross-epic dependencies documented | Warning |
| Estimation hints | Relative sizing (S/M/L/XL) per story | Info |
| Traceability | Stories trace to requirements | Warning |
### Phase 6: Readiness Check
| Check | Criteria | Severity |
|-------|----------|----------|
| All documents exist | product-brief, requirements, architecture, epics | Error |
| Frontmatter valid | All YAML frontmatter parseable and correct | Error |
| Cross-references valid | All document links resolve | Error |
| Overall score >= 60% | Weighted average across 4 dimensions | Error |
| No unresolved Errors | All Error-severity issues addressed | Error |
| Summary generated | spec-summary.md created | Warning |
---
## Cross-Document Validation
Checks performed during Phase 6 across all documents:
### Completeness Matrix
```
Product Brief goals -> Requirements (each goal has >= 1 requirement)
Requirements -> Architecture (each Must requirement has design coverage)
Requirements -> Epics (each Must requirement appears in >= 1 story)
Architecture ADRs -> Epics (tech choices reflected in implementation stories)
```
### Consistency Checks
| Check | Documents | Rule |
|-------|-----------|------|
| Terminology | All | Same term used consistently (no synonyms for same concept) |
| User personas | Brief + PRD + Epics | Same user names/roles throughout |
| Scope | Brief + PRD | PRD scope does not exceed brief scope |
| Tech stack | Architecture + Epics | Stories reference correct technologies |
### Traceability Matrix Format
```markdown
| Goal | Requirements | Architecture | Epics |
|------|-------------|--------------|-------|
| G-001: ... | REQ-001, REQ-002 | ADR-001 | EPIC-001 |
| G-002: ... | REQ-003 | ADR-002 | EPIC-002, EPIC-003 |
```
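One rule from the completeness matrix, that every Must requirement appears in at least one story, can be sketched as below. The data shapes here are assumptions about how requirements and stories would be parsed.

```python
def uncovered_must_requirements(requirements: list[dict],
                                stories: list[dict]) -> list[str]:
    """Return IDs of Must-priority requirements no story traces to."""
    covered = {req_id for s in stories for req_id in s.get("traces_to", [])}
    return [r["id"] for r in requirements
            if r.get("priority") == "Must" and r["id"] not in covered]
```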
---
## Issue Classification
### Error (Must Fix)
- Missing required document or section
- Broken cross-references
- Contradictory information between documents
- Empty acceptance criteria on Must-have requirements
- No MVP subset defined in epics
### Warning (Should Fix)
- Vague acceptance criteria
- Missing non-functional requirements
- No success metrics defined
- Incomplete traceability
- Missing architecture review notes
### Info (Nice to Have)
- Could add more detailed personas
- Consider additional ADR alternatives
- Story estimation hints missing
- Mermaid diagrams could be more detailed


@@ -1,158 +0,0 @@
{
"team_name": "team-lifecycle",
"team_display_name": "Team Lifecycle",
"description": "Unified team skill covering spec-to-dev-to-test full lifecycle",
"version": "2.0.0",
"architecture": "folder-based",
"role_structure": "roles/{name}/role.md + roles/{name}/commands/*.md",
"roles": {
"coordinator": {
"task_prefix": null,
"responsibility": "Pipeline orchestration, requirement clarification, task chain creation, message dispatch",
"message_types": ["plan_approved", "plan_revision", "task_unblocked", "fix_required", "error", "shutdown"]
},
"analyst": {
"task_prefix": "RESEARCH",
"responsibility": "Seed analysis, codebase exploration, multi-dimensional context gathering",
"message_types": ["research_ready", "research_progress", "error"]
},
"writer": {
"task_prefix": "DRAFT",
"responsibility": "Product Brief / PRD / Architecture / Epics document generation",
"message_types": ["draft_ready", "draft_revision", "impl_progress", "error"]
},
"discussant": {
"task_prefix": "DISCUSS",
"responsibility": "Multi-perspective critique, consensus building, conflict escalation",
"message_types": ["discussion_ready", "discussion_blocked", "impl_progress", "error"]
},
"planner": {
"task_prefix": "PLAN",
"responsibility": "Multi-angle code exploration, structured implementation planning",
"message_types": ["plan_ready", "plan_revision", "impl_progress", "error"]
},
"executor": {
"task_prefix": "IMPL",
"responsibility": "Code implementation following approved plans",
"message_types": ["impl_complete", "impl_progress", "error"]
},
"tester": {
"task_prefix": "TEST",
"responsibility": "Adaptive test-fix cycles, progressive testing, quality gates",
"message_types": ["test_result", "impl_progress", "fix_required", "error"]
},
"reviewer": {
"task_prefix": "REVIEW",
"additional_prefixes": ["QUALITY"],
"responsibility": "Code review (REVIEW-*) + Spec quality validation (QUALITY-*)",
"message_types": ["review_result", "quality_result", "fix_required", "error"]
},
"explorer": {
"task_prefix": "EXPLORE",
"responsibility": "Code search, pattern discovery, dependency tracing. Service role — on-demand by coordinator",
"role_type": "service",
"message_types": ["explore_ready", "explore_progress", "task_failed"]
},
"architect": {
"task_prefix": "ARCH",
"responsibility": "Architecture assessment, tech feasibility, design pattern review. Consulting role — on-demand by coordinator",
"role_type": "consulting",
"consultation_modes": ["spec-review", "plan-review", "code-review", "consult", "feasibility"],
"message_types": ["arch_ready", "arch_concern", "arch_progress", "error"]
},
"fe-developer": {
"task_prefix": "DEV-FE",
"responsibility": "Frontend component/page implementation, design token consumption, responsive UI",
"role_type": "frontend-pipeline",
"message_types": ["dev_fe_complete", "dev_fe_progress", "error"]
},
"fe-qa": {
"task_prefix": "QA-FE",
"responsibility": "5-dimension frontend review (quality, a11y, design compliance, UX, pre-delivery), GC loop",
"role_type": "frontend-pipeline",
"message_types": ["qa_fe_passed", "qa_fe_result", "fix_required", "error"]
}
},
"pipelines": {
"spec-only": {
"description": "Specification pipeline: research → discuss → draft → quality",
"task_chain": [
"RESEARCH-001",
"DISCUSS-001", "DRAFT-001", "DISCUSS-002",
"DRAFT-002", "DISCUSS-003", "DRAFT-003", "DISCUSS-004",
"DRAFT-004", "DISCUSS-005", "QUALITY-001", "DISCUSS-006"
]
},
"impl-only": {
"description": "Implementation pipeline: plan → implement → test + review",
"task_chain": ["PLAN-001", "IMPL-001", "TEST-001", "REVIEW-001"]
},
"full-lifecycle": {
"description": "Full lifecycle: spec pipeline → implementation pipeline",
"task_chain": "spec-only + impl-only (PLAN-001 blockedBy DISCUSS-006)"
},
"fe-only": {
"description": "Frontend-only pipeline: plan → frontend dev → frontend QA",
"task_chain": ["PLAN-001", "DEV-FE-001", "QA-FE-001"],
"gc_loop": { "max_rounds": 2, "convergence": "score >= 8 && critical === 0" }
},
"fullstack": {
"description": "Fullstack pipeline: plan → backend + frontend parallel → test + QA",
"task_chain": ["PLAN-001", "IMPL-001||DEV-FE-001", "TEST-001||QA-FE-001", "REVIEW-001"],
"sync_points": ["REVIEW-001"]
},
"full-lifecycle-fe": {
"description": "Full lifecycle with frontend: spec → plan → backend + frontend → test + QA",
"task_chain": "spec-only + fullstack (PLAN-001 blockedBy DISCUSS-006)"
}
},
"frontend_detection": {
"keywords": ["component", "page", "UI", "前端", "frontend", "CSS", "HTML", "React", "Vue", "Tailwind", "组件", "页面", "样式", "layout", "responsive", "Svelte", "Next.js", "Nuxt", "shadcn", "设计系统", "design system"],
"file_patterns": ["*.tsx", "*.jsx", "*.vue", "*.svelte", "*.css", "*.scss", "*.html"],
"routing_rules": {
"frontend_only": "All tasks match frontend keywords, no backend/API mentions",
"fullstack": "Mix of frontend and backend tasks",
"backend_only": "No frontend keywords detected (default impl-only)"
}
},
"ui_ux_pro_max": {
"skill_name": "ui-ux-pro-max",
"install_command": "/plugin install ui-ux-pro-max@ui-ux-pro-max-skill",
"invocation": "Skill(skill=\"ui-ux-pro-max\", args=\"...\")",
"domains": ["product", "style", "typography", "color", "landing", "chart", "ux", "web"],
"stacks": ["html-tailwind", "react", "nextjs", "vue", "svelte", "shadcn", "swiftui", "react-native", "flutter"],
"fallback": "llm-general-knowledge",
"design_intelligence_chain": ["analyst → design-intelligence.json", "architect → design-tokens.json", "fe-developer → tokens.css", "fe-qa → anti-pattern audit"]
},
"shared_memory": {
"file": "shared-memory.json",
"schema": {
"design_intelligence": "From analyst via ui-ux-pro-max",
"design_token_registry": "From architect, consumed by fe-developer/fe-qa",
"component_inventory": "From fe-developer, list of implemented components",
"style_decisions": "Accumulated design decisions",
"qa_history": "From fe-qa, audit trail",
"industry_context": "Industry + strictness config"
}
},
"collaboration_patterns": ["CP-1", "CP-2", "CP-4", "CP-5", "CP-6", "CP-10"],
"session_dirs": {
"base": ".workflow/.team/TLS-{slug}-{YYYY-MM-DD}/",
"spec": "spec/",
"discussions": "discussions/",
"plan": "plan/",
"explorations": "explorations/",
"architecture": "architecture/",
"analysis": "analysis/",
"qa": "qa/",
"wisdom": "wisdom/",
"messages": ".msg/"
}
}


@@ -1,254 +0,0 @@
# Architecture Document Template (Directory Structure)
Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
| Output Location | `{workDir}/architecture/` |
## Output Structure
```
{workDir}/architecture/
├── _index.md # Overview, components, tech stack, data model, security
├── ADR-001-{slug}.md # Individual Architecture Decision Record
├── ADR-002-{slug}.md
└── ...
```
---
## Template: _index.md
```markdown
---
session_id: {session_id}
phase: 4
document_type: architecture-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
- ../requirements/_index.md
---
# Architecture: {product_name}
{executive_summary - high-level architecture approach and key decisions}
## System Overview
### Architecture Style
{description of chosen architecture style: microservices, monolith, serverless, etc.}
### System Context Diagram
```mermaid
C4Context
title System Context Diagram
Person(user, "User", "Primary user")
System(system, "{product_name}", "Core system")
System_Ext(ext1, "{external_system}", "{description}")
Rel(user, system, "Uses")
Rel(system, ext1, "Integrates with")
```
## Component Architecture
### Component Diagram
```mermaid
graph TD
subgraph "{product_name}"
A[Component A] --> B[Component B]
B --> C[Component C]
A --> D[Component D]
end
B --> E[External Service]
```
### Component Descriptions
| Component | Responsibility | Technology | Dependencies |
|-----------|---------------|------------|--------------|
| {component_name} | {what it does} | {tech stack} | {depends on} |
## Technology Stack
### Core Technologies
| Layer | Technology | Version | Rationale |
|-------|-----------|---------|-----------|
| Frontend | {technology} | {version} | {why chosen} |
| Backend | {technology} | {version} | {why chosen} |
| Database | {technology} | {version} | {why chosen} |
| Infrastructure | {technology} | {version} | {why chosen} |
### Key Libraries & Frameworks
| Library | Purpose | License |
|---------|---------|---------|
| {library_name} | {purpose} | {license} |
## Architecture Decision Records
| ADR | Title | Status | Key Choice |
|-----|-------|--------|------------|
| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |
## Data Architecture
### Data Model
```mermaid
erDiagram
ENTITY_A ||--o{ ENTITY_B : "has many"
ENTITY_A {
string id PK
string name
datetime created_at
}
ENTITY_B {
string id PK
string entity_a_id FK
string value
}
```
### Data Storage Strategy
| Data Type | Storage | Retention | Backup |
|-----------|---------|-----------|--------|
| {type} | {storage solution} | {retention policy} | {backup strategy} |
## API Design
### API Overview
| Endpoint | Method | Purpose | Auth |
|----------|--------|---------|------|
| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |
## Security Architecture
### Security Controls
| Control | Implementation | Requirement |
|---------|---------------|-------------|
| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
## Infrastructure & Deployment
### Deployment Architecture
{description of deployment model: containers, serverless, VMs, etc.}
### Environment Strategy
| Environment | Purpose | Configuration |
|-------------|---------|---------------|
| Development | Local development | {config} |
| Staging | Pre-production testing | {config} |
| Production | Live system | {config} |
## Codebase Integration
{if has_codebase is true:}
### Existing Code Mapping
| New Component | Existing Module | Integration Type | Notes |
|--------------|----------------|------------------|-------|
| {component} | {existing module path} | Extend/Replace/New | {notes} |
### Migration Notes
{any migration considerations for existing code}
## Quality Attributes
| Attribute | Target | Measurement | ADR Reference |
|-----------|--------|-------------|---------------|
| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
## Risks & Mitigations
| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |
## Open Questions
- [ ] {architectural question 1}
- [ ] {architectural question 2}
## References
- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
- Next: [Epics & Stories](../epics/_index.md)
```
---
## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)
```markdown
---
id: ADR-{NNN}
status: Accepted
traces_to: [{REQ-NNN}, {NFR-X-NNN}]
date: {timestamp}
---
# ADR-{NNN}: {decision_title}
## Context
{what is the situation that motivates this decision}
## Decision
{what is the chosen approach}
## Alternatives Considered
| Option | Pros | Cons |
|--------|------|------|
| {option_1 - chosen} | {pros} | {cons} |
| {option_2} | {pros} | {cons} |
| {option_3} | {pros} | {cons} |
## Consequences
- **Positive**: {positive outcomes}
- **Negative**: {tradeoffs accepted}
- **Risks**: {risks to monitor}
## Traces
- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```
---
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | ADR/requirement number |
| `{slug}` | Auto-generated | Kebab-case from decision title |
| `{has_codebase}` | spec-config.json | Whether existing codebase exists |


@@ -1,196 +0,0 @@
# Epics & Stories Template (Directory Structure)
Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
| Output Location | `{workDir}/epics/` |
## Output Structure
```
{workDir}/epics/
├── _index.md # Overview table + dependency map + MVP scope + execution order
├── EPIC-001-{slug}.md # Individual Epic with its Stories
├── EPIC-002-{slug}.md
└── ...
```
---
## Template: _index.md
```markdown
---
session_id: {session_id}
phase: 5
document_type: epics-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
- ../requirements/_index.md
- ../architecture/_index.md
---
# Epics & Stories: {product_name}
{executive_summary - overview of epic structure and MVP scope}
## Epic Overview
| Epic ID | Title | Priority | MVP | Stories | Est. Size |
|---------|-------|----------|-----|---------|-----------|
| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |
## Dependency Map
```mermaid
graph LR
EPIC-001 --> EPIC-002
EPIC-001 --> EPIC-003
EPIC-002 --> EPIC-004
EPIC-003 --> EPIC-005
```
### Dependency Notes
{explanation of why these dependencies exist and suggested execution order}
### Recommended Execution Order
1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
3. ...
## MVP Scope
### MVP Epics
{list of epics included in MVP with justification, linking to each}
### MVP Definition of Done
- [ ] {MVP completion criterion 1}
- [ ] {MVP completion criterion 2}
- [ ] {MVP completion criterion 3}
## Traceability Matrix
| Requirement | Epic | Stories | Architecture |
|-------------|------|---------|--------------|
| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |
## Estimation Summary
| Size | Meaning | Count |
|------|---------|-------|
| S | Small - well-understood, minimal risk | {n} |
| M | Medium - some complexity, moderate risk | {n} |
| L | Large - significant complexity, should consider splitting | {n} |
| XL | Extra Large - high complexity, must split before implementation | {n} |
## Risks & Considerations
| Risk | Affected Epics | Mitigation |
|------|---------------|------------|
| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |
## Open Questions
- [ ] {question about scope or implementation 1}
- [ ] {question about scope or implementation 2}
## References
- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
- Handoff to: execution workflows (lite-plan, plan, req-plan)
```
---
## Template: EPIC-NNN-{slug}.md (Individual Epic)
```markdown
---
id: EPIC-{NNN}
priority: {Must|Should|Could}
mvp: {true|false}
size: {S|M|L|XL}
requirements: [REQ-{NNN}]
architecture: [ADR-{NNN}]
dependencies: [EPIC-{NNN}]
status: draft
---
# EPIC-{NNN}: {epic_title}
**Priority**: {Must|Should|Could}
**MVP**: {Yes|No}
**Estimated Size**: {S|M|L|XL}
## Description
{detailed epic description}
## Requirements
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
## Architecture
- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
- Component: {component_name}
## Dependencies
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}
## Stories
### STORY-{EPIC}-001: {story_title}
**User Story**: As a {persona}, I want to {action} so that {benefit}.
**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
- [ ] {criterion 3}
**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
---
### STORY-{EPIC}-002: {story_title}
**User Story**: As a {persona}, I want to {action} so that {benefit}.
**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
```
---
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{EPIC}` | Auto-increment | Epic number (3 digits) |
| `{NNN}` | Auto-increment | Story/requirement number |
| `{slug}` | Auto-generated | Kebab-case from epic/story title |
| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |
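The `{slug}` and `{EPIC}`/`{NNN}` derivations described above can be sketched as a small helper. This is an illustrative sketch, not part of the skill; the actual generator is assumed:

```python
import re

def slugify(title: str) -> str:
    """Kebab-case slug from an epic/story title, for EPIC-NNN-{slug}.md names."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics to hyphens
    return slug.strip("-")

def epic_filename(n: int, title: str) -> str:
    """{EPIC}/{NNN} are zero-padded to 3 digits per the variable table."""
    return f"EPIC-{n:03d}-{slugify(title)}.md"
```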

View File

@@ -1,133 +0,0 @@
# Product Brief Template
Template for generating product brief documents in Phase 2.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis |
| Output Location | `{workDir}/product-brief.md` |
---
## Template
```markdown
---
session_id: {session_id}
phase: 2
document_type: product-brief
status: draft
generated_at: {timestamp}
stepsCompleted: []
version: 1
dependencies:
- spec-config.json
---
# Product Brief: {product_name}
{executive_summary - 2-3 sentences capturing the essence of the product/feature}
## Vision
{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like}
## Problem Statement
### Current Situation
{description of the current state and pain points}
### Impact
{quantified impact of the problem - who is affected, how much, how often}
## Target Users
{for each user persona:}
### {Persona Name}
- **Role**: {user's role/context}
- **Needs**: {primary needs related to this product}
- **Pain Points**: {current frustrations}
- **Success Criteria**: {what success looks like for this user}
## Goals & Success Metrics
| Goal ID | Goal | Success Metric | Target |
|---------|------|----------------|--------|
| G-001 | {goal description} | {measurable metric} | {specific target} |
| G-002 | {goal description} | {measurable metric} | {specific target} |
## Scope
### In Scope
- {feature/capability 1}
- {feature/capability 2}
- {feature/capability 3}
### Out of Scope
- {explicitly excluded item 1}
- {explicitly excluded item 2}
### Assumptions
- {key assumption 1}
- {key assumption 2}
## Competitive Landscape
| Aspect | Current State | Proposed Solution | Advantage |
|--------|--------------|-------------------|-----------|
| {aspect} | {how it's done now} | {our approach} | {differentiator} |
## Constraints & Dependencies
### Technical Constraints
- {constraint 1}
- {constraint 2}
### Business Constraints
- {constraint 1}
### Dependencies
- {external dependency 1}
- {external dependency 2}
## Multi-Perspective Synthesis
### Product Perspective
{summary of product/market analysis findings}
### Technical Perspective
{summary of technical feasibility and constraints}
### User Perspective
{summary of user journey and UX considerations}
### Convergent Themes
{themes where all perspectives agree}
### Conflicting Views
{areas where perspectives differ, with notes on resolution approach}
## Open Questions
- [ ] {unresolved question 1}
- [ ] {unresolved question 2}
## References
- Derived from: [spec-config.json](spec-config.json)
- Next: [Requirements PRD](requirements.md)
```
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | Seed analysis | Product/feature name |
| `{executive_summary}` | CLI synthesis | 2-3 sentence summary |
| `{vision_statement}` | CLI product perspective | Aspirational vision |
| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis |

View File

@@ -1,224 +0,0 @@
# Requirements PRD Template (Directory Structure)
Template for generating a Product Requirements Document as a directory of individual requirement files in Phase 3.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
| Output Location | `{workDir}/requirements/` |
## Output Structure
```
{workDir}/requirements/
├── _index.md # Summary + MoSCoW table + traceability matrix + links
├── REQ-001-{slug}.md # Individual functional requirement
├── REQ-002-{slug}.md
├── NFR-P-001-{slug}.md # Non-functional: Performance
├── NFR-S-001-{slug}.md # Non-functional: Security
├── NFR-SC-001-{slug}.md # Non-functional: Scalability
├── NFR-U-001-{slug}.md # Non-functional: Usability
└── ...
```
---
## Template: _index.md
```markdown
---
session_id: {session_id}
phase: 3
document_type: requirements-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
---
# Requirements: {product_name}
{executive_summary - brief overview of what this PRD covers and key decisions}
## Requirement Summary
| Priority | Count | Coverage |
|----------|-------|----------|
| Must Have | {n} | {description of must-have scope} |
| Should Have | {n} | {description of should-have scope} |
| Could Have | {n} | {description of could-have scope} |
| Won't Have | {n} | {description of explicitly excluded} |
## Functional Requirements
| ID | Title | Priority | Traces To |
|----|-------|----------|-----------|
| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |
## Non-Functional Requirements
### Performance
| ID | Title | Target |
|----|-------|--------|
| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |
### Security
| ID | Title | Standard |
|----|-------|----------|
| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |
### Scalability
| ID | Title | Target |
|----|-------|--------|
| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |
### Usability
| ID | Title | Target |
|----|-------|--------|
| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |
## Data Requirements
### Data Entities
| Entity | Description | Key Attributes |
|--------|-------------|----------------|
| {entity_name} | {description} | {attr1, attr2, attr3} |
### Data Flows
{description of key data flows, optionally with Mermaid diagram}
## Integration Requirements
| System | Direction | Protocol | Data Format | Notes |
|--------|-----------|----------|-------------|-------|
| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |
## Constraints & Assumptions
### Constraints
- {technical or business constraint 1}
- {technical or business constraint 2}
### Assumptions
- {assumption 1 - must be validated}
- {assumption 2 - must be validated}
## Priority Rationale
{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}
## Traceability Matrix
| Goal | Requirements |
|------|-------------|
| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |
## Open Questions
- [ ] {unresolved question 1}
- [ ] {unresolved question 2}
## References
- Derived from: [Product Brief](../product-brief.md)
- Next: [Architecture](../architecture/_index.md)
```
---
## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)
```markdown
---
id: REQ-{NNN}
type: functional
priority: {Must|Should|Could|Won't}
traces_to: [G-{NNN}]
status: draft
---
# REQ-{NNN}: {requirement_title}
**Priority**: {Must|Should|Could|Won't}
## Description
{detailed requirement description}
## User Story
As a {persona}, I want to {action} so that {benefit}.
## Acceptance Criteria
- [ ] {specific, testable criterion 1}
- [ ] {specific, testable criterion 2}
- [ ] {specific, testable criterion 3}
## Traces
- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```
---
## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)
```markdown
---
id: NFR-{type}-{NNN}
type: non-functional
category: {Performance|Security|Scalability|Usability}
priority: {Must|Should|Could}
status: draft
---
# NFR-{type}-{NNN}: {requirement_title}
**Category**: {Performance|Security|Scalability|Usability}
**Priority**: {Must|Should|Could}
## Requirement
{detailed requirement description}
## Metric & Target
| Metric | Target | Measurement Method |
|--------|--------|--------------------|
| {metric} | {target value} | {how measured} |
## Traces
- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
```
---
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
| `{slug}` | Auto-generated | Kebab-case from requirement title |
| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |

View File

@@ -1,643 +0,0 @@
---
name: team-lifecycle-v4
description: Unified team skill for full lifecycle - spec/impl/test. Optimized cadence with inline discuss subagent and shared explore. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team lifecycle".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Lifecycle v4
Unified team skill: specification -> implementation -> testing -> review. Optimized from v3 with **inline discuss subagent** and **shared explore utility**, halving spec pipeline beats from 12 to 6.
## Architecture
```
+---------------------------------------------------+
| Skill(skill="team-lifecycle-v4") |
| args="task description" or args="--role=xxx" |
+-------------------+-------------------------------+
| Role Router
+---- --role present? ----+
| NO | YES
v v
Orchestration Mode Role Dispatch
(auto -> coordinator) (route to role.md)
|
+----+----+-------+-------+-------+-------+
v v v v v v
coordinator analyst writer planner executor tester
^ ^
on-demand by coordinator
+---------+ +--------+
|architect| |reviewer|
+---------+ +--------+
+-------------+ +-----+
|fe-developer | |fe-qa|
+-------------+ +-----+
Subagents (callable by any role, not team members):
[discuss-subagent] - multi-perspective critique
[explore-subagent] - codebase exploration with cache
```
## Role Router
### Input Parsing
Parse `$ARGUMENTS` to extract `--role`. If absent -> Orchestration Mode (auto route to coordinator).
### Role Registry
| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | compact must re-read |
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | RESEARCH-* | pipeline | compact must re-read |
| writer | [roles/writer/role.md](roles/writer/role.md) | DRAFT-* | pipeline | compact must re-read |
| planner | [roles/planner/role.md](roles/planner/role.md) | PLAN-* | pipeline | compact must re-read |
| executor | [roles/executor/role.md](roles/executor/role.md) | IMPL-* | pipeline | compact must re-read |
| tester | [roles/tester/role.md](roles/tester/role.md) | TEST-* | pipeline | compact must re-read |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REVIEW-* + QUALITY-* | pipeline | compact must re-read |
| architect | [roles/architect/role.md](roles/architect/role.md) | ARCH-* | consulting (on-demand) | compact must re-read |
| fe-developer | [roles/fe-developer/role.md](roles/fe-developer/role.md) | DEV-FE-* | frontend pipeline | compact must re-read |
| fe-qa | [roles/fe-qa/role.md](roles/fe-qa/role.md) | QA-FE-* | frontend pipeline | compact must re-read |
> **COMPACT PROTECTION**: Role files are execution documents. After context compression, role instructions become summaries only -- **MUST immediately `Read` the role.md to reload before continuing**. Never execute any Phase based on summaries.
### Subagent Registry
| Subagent | Spec | Callable By | Purpose |
|----------|------|-------------|---------|
| discuss | [subagents/discuss-subagent.md](subagents/discuss-subagent.md) | analyst, writer, reviewer | Multi-perspective critique |
| explore | [subagents/explore-subagent.md](subagents/explore-subagent.md) | analyst, planner, any role | Codebase exploration with cache |
| doc-generation | [subagents/doc-generation-subagent.md](subagents/doc-generation-subagent.md) | writer | Document generation (CLI execution) |
### Dispatch
1. Extract `--role` from arguments
2. If no `--role` -> route to coordinator (Orchestration Mode)
3. Look up role in registry -> Read the role file -> Execute its phases
### Orchestration Mode
When invoked without `--role`, coordinator auto-starts. User just provides task description.
**Invocation**: `Skill(skill="team-lifecycle-v4", args="task description")`
**Lifecycle**:
```
User provides task description
-> coordinator Phase 1-3: requirement clarification -> TeamCreate -> create task chain
-> coordinator Phase 4: spawn first batch workers (background) -> STOP
-> Worker executes -> SendMessage callback -> coordinator advances next step
-> Loop until pipeline complete -> Phase 5 report
```
**User Commands** (wake paused coordinator):
| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
| `revise <TASK-ID> [feedback]` | Create revision task for specified document + cascade downstream |
| `feedback <text>` | Analyze feedback impact, create targeted revision chain |
| `recheck` | Re-run QUALITY-001 quality check (after manual edits) |
| `improve [dimension]` | Auto-improve weakest dimension from readiness-report |
---
## Shared Infrastructure
The following templates apply to all worker roles. Each role.md only needs to write **Phase 2-4** role-specific logic.
### Worker Phase 1: Task Discovery (all workers shared)
Each worker on startup executes the same task discovery flow:
1. Call `TaskList()` to get all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No tasks -> idle wait
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress
**Resume Artifact Check** (prevent duplicate output after resume):
- Check if this task's output artifacts already exist
- Artifacts complete -> skip to Phase 5 report completion
- Artifacts incomplete or missing -> normal Phase 2-4 execution
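The filter in step 2 can be sketched as follows. The task field names (`subject`, `owner`, `status`, `blockedBy`) are assumptions based on the TaskList/TaskUpdate tool descriptions:

```python
def discover_tasks(tasks, role_prefix, role_name, done_ids):
    """Worker Phase 1: pick this role's ready, unblocked pending tasks."""
    ready = []
    for t in tasks:
        if not t["subject"].startswith(role_prefix):
            continue  # another role's prefix -- never touch it
        if t["owner"] != role_name or t["status"] != "pending":
            continue
        if any(dep not in done_ids for dep in t.get("blockedBy", [])):
            continue  # still blocked by an incomplete dependency
        ready.append(t)
    return ready
```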
### Worker Phase 5: Report + Fast-Advance (all workers shared)
Task completion with optional fast-advance to skip coordinator round-trip:
1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
- Params: operation="log", team=**<session-id>**, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
- **`team` must be session ID** (e.g., `TLS-my-project-2026-02-27`), NOT team name. Extract from task description `Session:` field → take folder name.
- **CLI fallback**: `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
2. **TaskUpdate**: Mark task completed
3. **Fast-Advance Check**:
- Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
- If exactly 1 ready task AND its owner matches a simple successor pattern -> **spawn it directly** (skip coordinator)
- Otherwise -> **SendMessage** to coordinator for orchestration
4. **Loop**: Back to Phase 1 to check for next task
**Fast-Advance Rules**:
| Condition | Action |
|-----------|--------|
| Same-prefix successor task (Inner Loop role) | Do not spawn; the main agent loops inline (Phase 5-L) |
| 1 ready task, simple linear successor, different prefix | Spawn directly via Task(run_in_background: true) |
| Multiple ready tasks (parallel window) | SendMessage to coordinator (needs orchestration) |
| No ready tasks + others running | SendMessage to coordinator (status update) |
| No ready tasks + nothing running | SendMessage to coordinator (pipeline may be complete) |
| Checkpoint task (e.g., spec->impl transition) | SendMessage to coordinator (needs user confirmation) |
**Fast-advance failure recovery**: If a fast-advanced task fails (worker exits without completing), the coordinator detects it as an orphaned in_progress task on next `resume`/`check` and resets it to pending for re-spawn. Self-healing, no manual intervention required. See [monitor.md](roles/coordinator/commands/monitor.md) Fast-Advance Failure Recovery.
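The fast-advance rules above reduce to a small decision function. This is a sketch under assumed data shapes; checkpoint detection is simplified to an ID set:

```python
def fast_advance_action(ready, my_prefix, checkpoint_ids=()):
    """Worker Phase 5 step 3: decide whether to spawn, loop, or defer."""
    if len(ready) == 1:
        t = ready[0]
        if t["id"] in checkpoint_ids:
            return "send_to_coordinator"  # checkpoint: needs user confirmation
        if t["subject"].startswith(my_prefix):
            return "inner_loop"           # same prefix: no spawn, loop in-agent
        return "spawn_directly"           # simple linear successor
    # multiple ready (parallel window), or none (status update / done)
    return "send_to_coordinator"
```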
### Worker Inner Loop (roles with multiple same-prefix tasks)
Applies to: writer (DRAFT-*), planner (PLAN-*), executor (IMPL-*)
When a role owns **multiple serial tasks with the same prefix**, it no longer spawns a new agent after each completion; instead it loops over the tasks inside the same agent:
**Inner Loop flow**
```
Phase 1: Discover task (first pass)
 ├─ Task found → Phase 2-3: load context + subagent execution
 │        │
 │        v
 │   Phase 4: Validate (+ Inline Discuss if applicable)
 │        │
 │        v
 │   Phase 5-L: Lightweight completion (Loop variant)
 │        │
 │        ├─ TaskUpdate mark completed
 │        ├─ team_msg log
 │        ├─ Append summary to context_accumulator
 │        │
 │        ├─ Check: more same-prefix pending tasks?
 │        │    ├─ YES → back to Phase 1 (inner loop)
 │        │    └─ NO → Phase 5-F: final report
 │        │
 │        └─ Abnormal interruption condition?
 │             ├─ consensus_blocked HIGH → SendMessage → STOP
 │             └─ accumulated errors ≥ 3 → SendMessage → STOP
 └─ Phase 5-F: Final report
      ├─ SendMessage (with summaries of all tasks)
      └─ STOP
```
**context_accumulator** (maintained in the main agent's context, not written to a file):
After each subagent returns, the main agent compresses the result into a summary and appends it to the accumulator:
```
context_accumulator = []
# After the DRAFT-001 subagent returns
context_accumulator.append({
  task: "DRAFT-001",
  artifact: "spec/product-brief.md",
  key_decisions: ["Focus on B2B scenarios", "MVP excludes mobile"],
  discuss_verdict: "consensus_reached",
  discuss_rating: 4.2
})
# After the DRAFT-002 subagent returns
context_accumulator.append({
  task: "DRAFT-002",
  artifact: "spec/requirements/_index.md",
  key_decisions: ["REQ-003 downgraded to P2", "New SLA added to NFR-perf"],
  discuss_verdict: "consensus_reached",
  discuss_rating: 3.8
})
```
On each subsequent subagent call, the accumulator summary is passed in as CONTEXT, carrying knowledge forward across tasks.
**Phase 5-L vs Phase 5-F**
| Step | Phase 5-L (in loop) | Phase 5-F (final) |
|------|--------------------|-------------------|
| TaskUpdate completed | YES | YES |
| team_msg log | YES | YES |
| Accumulate summary | YES | - |
| SendMessage to coordinator | NO | YES (aggregates all tasks) |
| Fast-Advance to next prefix | - | YES (check cross-prefix successors) |
**Interruption recovery**
If the Inner Loop agent crashes on DRAFT-003:
1. DRAFT-001 and DRAFT-002 are already on disk and marked completed → safe
2. DRAFT-003 is in_progress → on resume the coordinator detects no active_worker → resets it to pending
3. Re-spawn writer → Phase 1 finds DRAFT-003 → Resume Artifact Check:
   - DRAFT-003 artifact missing → execute normally
   - DRAFT-003 artifact written but not marked complete → validate, then mark complete
4. The new writer loops from DRAFT-003; only the in-context summaries of 001+002 are lost (basic info can be rebuilt from disk)
**Recovery enhancement** (optional): after each Phase 5-L, write context_accumulator to the `context_accumulator` field of `<session>/shared-memory.json` so it can be read back after a crash.
### Inline Discuss Protocol (produce roles: analyst, writer, reviewer)
After completing their primary output, produce roles call the discuss subagent inline:
```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss <round-id>",
prompt: <see subagents/discuss-subagent.md for prompt template>
})
```
**Discussion round mapping** (which role runs which discuss round):
| Role | After Task | Discuss Round | Perspectives |
|------|-----------|---------------|-------------|
| analyst | RESEARCH-001 | DISCUSS-001 | product, risk, coverage |
| writer | DRAFT-001 | DISCUSS-002 | product, technical, quality, coverage |
| writer | DRAFT-002 | DISCUSS-003 | quality, product, coverage |
| writer | DRAFT-003 | DISCUSS-004 | technical, risk |
| writer | DRAFT-004 | DISCUSS-005 | product, technical, quality, coverage |
| reviewer | QUALITY-001 | DISCUSS-006 | all 5 |
The discuss subagent writes its record to `discussions/` and returns the verdict. The calling role includes the discuss result in its Phase 5 report.
**Consensus-blocked handling** (produce role responsibility):
| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | SendMessage with `consensus_blocked=true, severity=HIGH`, include divergence details + action items. Coordinator creates revision task or pauses. |
| consensus_blocked | MEDIUM | SendMessage with `consensus_blocked=true, severity=MEDIUM`. Proceed to Phase 5 normally. Coordinator logs warning to wisdom. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |
**SendMessage format for consensus_blocked**:
```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<HIGH|MEDIUM>)
Divergences: <divergence-summary>
Action items: <top-3-items>
Recommendation: <revise|proceed-with-caution|escalate>
```
**Coordinator response** (see monitor.md Consensus-Blocked Handling for full flow):
- HIGH -> revision task (max 1 per task) or pause for user decision
- MEDIUM -> proceed with warning, log to wisdom/issues.md
- DISCUSS-006 HIGH -> always pause for user (final sign-off gate)
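The verdict/severity table above amounts to a small router. A sketch, using the severity strings as given; the coordinator-side actions are simplified to labels:

```python
def route_discuss_verdict(verdict, severity, round_id=""):
    """Map an inline-discuss outcome to the produce role / coordinator action."""
    if verdict == "consensus_reached":
        return "proceed"
    if severity == "HIGH":
        # DISCUSS-006 HIGH is the final sign-off gate: always pause for the user
        return "pause_for_user" if round_id == "DISCUSS-006" else "revision_or_pause"
    if severity == "MEDIUM":
        return "proceed_with_warning"  # coordinator logs to wisdom/issues.md
    return "proceed"                   # LOW: treat as consensus_reached with notes
```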
### Shared Explore Utility
Any role needing codebase context calls the explore subagent:
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore <angle>",
prompt: <see subagents/explore-subagent.md for prompt template>
})
```
**Cache**: Results stored in `explorations/` with `cache-index.json`. Before exploring, always check cache first. See [subagents/explore-subagent.md](subagents/explore-subagent.md).
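A cache-first lookup might look like the following. The `angle+keywords_hash` key scheme comes from the cache-index description; the exact hashing and file layout are assumptions:

```python
import hashlib
import json
from pathlib import Path

def cache_key(angle: str, keywords: list) -> str:
    """Deterministic key from angle + sorted keywords (assumed scheme)."""
    raw = angle + "|" + ",".join(sorted(keywords))
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def lookup_exploration(session: Path, angle: str, keywords: list):
    """Return a cached exploration result, or None on miss."""
    index_file = session / "explorations" / "cache-index.json"
    if not index_file.exists():
        return None
    index = json.loads(index_file.read_text())
    path = index.get(cache_key(angle, keywords))
    return json.loads((session / path).read_text()) if path else None
```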
### Wisdom Accumulation (all roles)
Cross-task knowledge accumulation. Coordinator creates `wisdom/` directory at session init.
**Directory**:
```
<session-folder>/wisdom/
+-- learnings.md # Patterns and insights
+-- decisions.md # Architecture and design decisions
+-- conventions.md # Codebase conventions
+-- issues.md # Known risks and issues
```
**Worker load** (Phase 2): Extract `Session: <path>` from task description, read wisdom files.
**Worker contribute** (Phase 4/5): Write discoveries to corresponding wisdom files.
### Role Isolation Rules
| Allowed | Prohibited |
|---------|-----------|
| Process own prefix tasks | Process other role's prefix tasks |
| SendMessage to coordinator | Directly communicate with other workers |
| Use tools declared in Toolbox | Create tasks for other roles |
| Call discuss/explore subagents | Modify resources outside own scope |
| Fast-advance simple successors | Spawn parallel worker batches |
The coordinator is additionally prohibited from directly writing or modifying code, calling implementation subagents, and directly executing analysis, tests, or reviews.
---
## Pipeline Definitions
### Spec-only (6 tasks, was 12 in v3)
```
RESEARCH-001(+D1) -> DRAFT-001(+D2) -> DRAFT-002(+D3) -> DRAFT-003(+D4) -> DRAFT-004(+D5) -> QUALITY-001(+D6)
```
Each task includes inline discuss. `(+DN)` = inline discuss round N.
### Impl-only / Backend (4 tasks)
```
PLAN-001 -> IMPL-001 -> TEST-001 + REVIEW-001
```
### Full-lifecycle (10 tasks, was 16 in v3)
```
[Spec pipeline] -> PLAN-001(blockedBy: QUALITY-001) -> IMPL-001 -> TEST-001 + REVIEW-001
```
### Frontend Pipelines
```
FE-only: PLAN-001 -> DEV-FE-001 -> QA-FE-001
(GC loop: QA-FE verdict=NEEDS_FIX -> DEV-FE-002 -> QA-FE-002, max 2 rounds)
Fullstack: PLAN-001 -> IMPL-001 || DEV-FE-001 -> TEST-001 || QA-FE-001 -> REVIEW-001
Full + FE: [Spec pipeline] -> PLAN-001 -> IMPL-001 || DEV-FE-001 -> TEST-001 || QA-FE-001 -> REVIEW-001
```
### Cadence Control
**Beat model**: Event-driven, each beat = coordinator wake -> process -> spawn -> STOP.
```
Beat Cycle (single beat)
======================================================================
Event Coordinator Workers
----------------------------------------------------------------------
callback/resume --> +- handleCallback -+
| mark completed |
| check pipeline |
+- handleSpawnNext -+
| find ready tasks |
| spawn workers ---+--> [Worker A] Phase 1-5
| (parallel OK) --+--> [Worker B] Phase 1-5
+- STOP (idle) -----+ |
|
callback <-----------------------------------------+
(next beat) SendMessage + TaskUpdate(completed)
======================================================================
Fast-Advance (skips coordinator for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
+- 1 ready task? simple successor? --> spawn Worker B directly
+- complex case? --> SendMessage to coordinator
======================================================================
```
**Pipeline beat view (v4 optimized)**:
```
Spec-only (6 beats, was 12 in v3)
-------------------------------------------------------
Beat 1 2 3 4 5 6
| | | | | |
R1+D1 --> W1+D2 --> W2+D3 --> W3+D4 --> W4+D5 --> Q1+D6
^ ^
pipeline sign-off
start pause
R=RESEARCH W=DRAFT(writer) Q=QUALITY D=DISCUSS(inline)
Impl-only (3 beats, with parallel window)
-------------------------------------------------------
Beat 1 2 3
| | +----+----+
PLAN --> IMPL --> TEST || REVIEW <-- parallel window
+----+----+
pipeline
done
Full-lifecycle (9 beats, was 15 in v3)
-------------------------------------------------------
Beat 1-6: [Spec pipeline as above]
|
Beat 6 (Q1+D6 done): PAUSE CHECKPOINT -- user confirms then resume
|
Beat 7 8 9
PLAN --> IMPL --> TEST || REVIEW
Fullstack (with dual parallel windows)
-------------------------------------------------------
Beat 1 2 3 4
| +----+----+ +----+----+ |
PLAN --> IMPL || DEV-FE --> TEST || QA-FE --> REVIEW
^ ^ ^
parallel 1 parallel 2 sync barrier
```
**Checkpoints**:
| Trigger | Position | Behavior |
|---------|----------|----------|
| Spec->Impl transition | QUALITY-001 completed | Read readiness-report.md, extract gate + scores, display Checkpoint Output Template, pause for user action |
| GC loop max | QA-FE max 2 rounds | Stop iteration, report current state |
| Pipeline stall | No ready + no running | Check missing tasks, report to user |
**Checkpoint Output Template** (QUALITY-001 completion):
Coordinator reads `<session>/spec/readiness-report.md`, extracts gate + dimension scores, displays:
```
[coordinator] ══════════════════════════════════════════
[coordinator] SPEC PHASE COMPLETE
[coordinator] Quality Gate: <PASS|REVIEW|FAIL> (<score>%)
[coordinator]
[coordinator] Dimension Scores:
[coordinator] Completeness: <bar> <n>%
[coordinator] Consistency: <bar> <n>%
[coordinator] Traceability: <bar> <n>%
[coordinator] Depth: <bar> <n>%
[coordinator] Coverage: <bar> <n>%
[coordinator]
[coordinator] Available Actions:
[coordinator] resume → Proceed to implementation
[coordinator] improve → Auto-improve weakest dimension
[coordinator] improve <dimension> → Improve specific dimension
[coordinator] revise <TASK-ID> → Revise specific document
[coordinator] recheck → Re-run quality check
[coordinator] feedback <text> → Inject feedback, create revision
[coordinator] ══════════════════════════════════════════
```
Gate-specific guidance:
- PASS: All actions available, resume is primary suggestion
- REVIEW: Recommend improve/revise before resume, warn on resume
- FAIL: Recommend improve/revise, do not suggest resume (user can force)
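Rendering the `<bar>` placeholders and gate label in the checkpoint template could be as simple as the sketch below. Bar width, characters, and the gate thresholds are assumptions; the authoritative gate comes from readiness-report.md:

```python
def score_bar(pct: int, width: int = 10) -> str:
    """ASCII bar for a 0-100 dimension score."""
    filled = round(pct / 100 * width)
    return "#" * filled + "-" * (width - filled)

def gate_for(total_pct: int) -> str:
    """Hypothetical PASS/REVIEW/FAIL thresholds for illustration only."""
    if total_pct >= 85:
        return "PASS"
    return "REVIEW" if total_pct >= 70 else "FAIL"
```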
**Stall detection** (coordinator `handleCheck`):
| Check | Condition | Resolution |
|-------|-----------|-----------|
| Worker unresponsive | in_progress task with no callback | Report waiting tasks, suggest `resume` |
| Pipeline deadlock | no ready + no running + has pending | Check blockedBy chain, report blockage |
| GC loop exceeded | DEV-FE / QA-FE iteration > max_rounds | Terminate loop, output latest QA report |
### Task Metadata Registry
| Task ID | Role | Phase | Dependencies | Description | Inline Discuss |
|---------|------|-------|-------------|-------------|---------------|
| RESEARCH-001 | analyst | spec | (none) | Seed analysis and context gathering | DISCUSS-001 |
| DRAFT-001 | writer | spec | RESEARCH-001 | Generate Product Brief | DISCUSS-002 |
| DRAFT-002 | writer | spec | DRAFT-001 | Generate Requirements/PRD | DISCUSS-003 |
| DRAFT-003 | writer | spec | DRAFT-002 | Generate Architecture Document | DISCUSS-004 |
| DRAFT-004 | writer | spec | DRAFT-003 | Generate Epics & Stories | DISCUSS-005 |
| QUALITY-001 | reviewer | spec | DRAFT-004 | 5-dimension spec quality + sign-off | DISCUSS-006 |
| PLAN-001 | planner | impl | (none or QUALITY-001) | Multi-angle exploration and planning | - |
| IMPL-001 | executor | impl | PLAN-001 | Code implementation | - |
| TEST-001 | tester | impl | IMPL-001 | Test-fix cycles | - |
| REVIEW-001 | reviewer | impl | IMPL-001 | 4-dimension code review | - |
| DEV-FE-001 | fe-developer | impl | PLAN-001 | Frontend implementation | - |
| QA-FE-001 | fe-qa | impl | DEV-FE-001 | 5-dimension frontend QA | - |
## Coordinator Spawn Template
### Standard Worker (single-task roles: analyst, tester, reviewer, architect)
When coordinator spawns workers, use background mode (Spawn-and-Stop):
```
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `You are team "<team-name>" <ROLE>.
## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-lifecycle-v4", args="--role=<role>")
Current requirement: <task-description>
Session: <session-folder>
## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- After task completion, check for fast-advance opportunity (see SKILL.md Phase 5)
## Workflow
1. Call Skill -> get role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg(team=<session-id>) + SendMessage results to coordinator
4. TaskUpdate completed -> check next task or fast-advance`
})
```
### Inner Loop Worker (multi-task roles: writer, planner, executor)
```
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker (inner loop)",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: `You are team "<team-name>" <ROLE>.
## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-lifecycle-v4", args="--role=<role>")
Current requirement: <task-description>
Session: <session-folder>
## Inner Loop Mode
You will handle ALL <PREFIX>-* tasks in this session, not just the first one.
After completing each task, loop back to find the next <PREFIX>-* task.
Only SendMessage to coordinator when:
- All <PREFIX>-* tasks are done
- A consensus_blocked HIGH occurs
- Errors accumulate (≥ 3)
## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- Use subagent calls for heavy work, retain summaries in context`
})
```
## Session Directory
```
.workflow/.team/TLS-<slug>-<date>/
+-- team-session.json # Session state
+-- spec/ # Spec artifacts
| +-- spec-config.json
| +-- discovery-context.json
| +-- product-brief.md
| +-- requirements/
| +-- architecture/
| +-- epics/
| +-- readiness-report.md
| +-- spec-summary.md
+-- discussions/ # Discussion records (written by discuss subagent)
+-- plan/ # Plan artifacts
| +-- plan.json
| +-- .task/TASK-*.json
+-- explorations/ # Shared explore cache
| +-- cache-index.json # { angle+keywords_hash -> file_path }
| +-- explore-<angle>.json
+-- architecture/ # Architect assessments + design-tokens.json
+-- analysis/ # Analyst design-intelligence.json (UI mode)
+-- qa/ # QA audit reports
+-- wisdom/ # Cross-task knowledge
| +-- learnings.md
| +-- decisions.md
| +-- conventions.md
| +-- issues.md
+-- .msg/ # Team message bus logs (messages.jsonl)
+-- shared-memory.json # Cross-role state
```
## Session Resume
Coordinator supports `--resume` / `--continue` for interrupted sessions:
1. Scan `.workflow/.team/TLS-*/team-session.json` for active/paused sessions
2. Multiple matches -> AskUserQuestion for selection
3. Audit TaskList -> reconcile session state <-> task status
4. Reset in_progress -> pending (interrupted tasks)
5. Rebuild team and spawn needed workers only
6. Create missing tasks with correct blockedBy
7. Kick first executable task -> Phase 4 coordination loop
## Shared Spec Resources
| Resource | Path | Usage |
|----------|------|-------|
| Document Standards | [specs/document-standards.md](specs/document-standards.md) | YAML frontmatter, naming, structure |
| Quality Gates | [specs/quality-gates.md](specs/quality-gates.md) | Per-phase quality gates |
| Product Brief Template | [templates/product-brief.md](templates/product-brief.md) | DRAFT-001 |
| Requirements Template | [templates/requirements-prd.md](templates/requirements-prd.md) | DRAFT-002 |
| Architecture Template | [templates/architecture-doc.md](templates/architecture-doc.md) | DRAFT-003 |
| Epics Template | [templates/epics-template.md](templates/epics-template.md) | DRAFT-004 |
| Discuss Subagent | [subagents/discuss-subagent.md](subagents/discuss-subagent.md) | Inline discuss protocol |
| Explore Subagent | [subagents/explore-subagent.md](subagents/explore-subagent.md) | Shared exploration utility |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode -> coordinator |
| Role file not found | Error with expected path |
| Command file not found | Fallback to inline execution |
| Discuss subagent fails | Role proceeds without discuss, logs warning |
| Explore cache corrupt | Clear cache, re-explore |
| Fast-advance spawns wrong task | Coordinator reconciles on next callback |


@@ -1,174 +0,0 @@
# Role: analyst
Seed analysis, codebase exploration (via shared explore subagent), and multi-dimensional context gathering. Includes inline discuss (DISCUSS-001) after research output.
## Identity
- **Name**: `analyst` | **Prefix**: `RESEARCH-*` | **Tag**: `[analyst]`
- **Responsibility**: Seed Analysis -> Codebase Exploration -> Context Packaging -> **Inline Discuss** -> Report
## Boundaries
### MUST
- Only process RESEARCH-* tasks
- Communicate only with coordinator
- Generate discovery-context.json and spec-config.json
- Support file reference input (@ prefix or .md/.txt extension)
- Call discuss subagent for DISCUSS-001 after output
- Use shared explore subagent for codebase exploration (cache-aware)
### MUST NOT
- Create tasks for other roles
- Directly contact other workers
- Modify spec documents (only create discovery artifacts)
- Skip seed analysis step
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| research_ready | -> coordinator | Research + discuss complete |
| research_progress | -> coordinator | Long research progress update |
| error | -> coordinator | Unrecoverable error |
## Toolbox
| Tool | Purpose |
|------|---------|
| ccw cli --tool gemini --mode analysis | Seed analysis |
| Explore subagent | Codebase exploration (shared cache) |
| discuss subagent | Inline DISCUSS-001 critique |
---
## Phase 2: Seed Analysis
**Objective**: Extract structured seed information from the topic/idea.
**Workflow**:
1. Extract session folder from task description (`Session: <path>`)
2. Parse topic from task description (first non-metadata line)
3. If topic starts with `@` or ends with `.md`/`.txt` -> Read the referenced file as topic content
4. Run Gemini CLI seed analysis:
```
Bash({
command: `ccw cli -p "PURPOSE: Analyze topic and extract structured seed information.
TASK: * Extract problem statement * Identify target users * Determine domain context
* List constraints and assumptions * Identify 3-5 exploration dimensions * Assess complexity
TOPIC: <topic-content>
MODE: analysis
EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], exploration_dimensions[], complexity_assessment" --tool gemini --mode analysis`,
run_in_background: true
})
```
5. Wait for CLI result, parse seed analysis JSON
**Success**: Seed analysis parsed with problem statement, dimensions, complexity.
---
## Phase 3: Codebase Exploration (conditional)
**Objective**: Gather codebase context if an existing project is detected.
| Condition | Action |
|-----------|--------|
| package.json / Cargo.toml / pyproject.toml / go.mod exists | Explore codebase |
| No project files | Skip -> codebase context = null |
**When project detected** (uses shared explore subagent):
1. Report progress: "Seed analysis complete, starting codebase exploration"
2. Call explore subagent with `angle: general`, `keywords: <from seed analysis>`
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore general context",
prompt: "Explore codebase for: <topic>
Focus angle: general
Keywords: <seed analysis keywords>
Session folder: <session-folder>
..."
})
```
3. Use exploration results to build codebase context: tech_stack, architecture_patterns, conventions, integration_points
---
## Phase 4: Context Packaging + Inline Discuss
**Objective**: Generate spec-config.json and discovery-context.json, then run DISCUSS-001.
### 4a: Context Packaging
**spec-config.json** -> `<session-folder>/spec/spec-config.json`:
- session_id, topic, status="research_complete", complexity, depth, focus_areas, mode="interactive"
**discovery-context.json** -> `<session-folder>/spec/discovery-context.json`:
- session_id, phase=1, seed_analysis (all fields), codebase_context (or null), recommendations
**design-intelligence.json** -> `<session-folder>/analysis/design-intelligence.json` (UI mode only):
- Produced when frontend keywords detected in seed_analysis
- Fields: industry, style_direction, ux_patterns, color_strategy, typography, component_patterns
- Consumed by architect (for design-tokens.json) and fe-developer
### 4b: Inline Discuss (DISCUSS-001)
After packaging, call discuss subagent:
```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss DISCUSS-001",
prompt: `## Multi-Perspective Critique: DISCUSS-001
### Input
- Artifact: <session-folder>/spec/discovery-context.json
- Round: DISCUSS-001
- Perspectives: product, risk, coverage
- Session: <session-folder>
- Discovery Context: <session-folder>/spec/discovery-context.json
<rest of discuss subagent prompt from subagents/discuss-subagent.md>`
})
```
**Discuss result handling**:
| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | Phase 5 SendMessage includes structured consensus_blocked format (see below). Do NOT self-revise. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
**consensus_blocked SendMessage format**:
```
[analyst] RESEARCH-001 complete. Discuss DISCUSS-001: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <session-folder>/spec/discovery-context.json
Discussion: <session-folder>/discussions/DISCUSS-001-discussion.md
```
**Report**: complexity, codebase presence, problem statement, exploration dimensions, discuss verdict + severity, output paths.
**Success**: Both JSON files created; discuss record written; design-intelligence.json created if UI mode.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Gemini CLI failure | Fallback to direct Claude analysis |
| Codebase detection failed | Continue as new project |
| Topic too vague | Report with clarification questions |
| Explore subagent fails | Continue without codebase context |
| Discuss subagent fails | Proceed without discuss, log warning in report |


@@ -1,193 +0,0 @@
# Command: assess
## Purpose
Multi-mode architecture assessment. Auto-detects consultation mode from task subject prefix, runs mode-specific analysis, and produces a verdict with concerns and recommendations.
## Phase 2: Context Loading
### Common Context (all modes)
| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description `Session:` field | Yes |
| Wisdom | `<session_folder>/wisdom/` (all files) | No |
| Project tech | `.workflow/project-tech.json` | No |
| Explorations | `<session_folder>/explorations/` | No |
### Mode-Specific Context
| Mode | Task Pattern | Additional Context |
|------|-------------|-------------------|
| spec-review | ARCH-SPEC-* | `spec/architecture/_index.md`, `spec/architecture/ADR-*.md` |
| plan-review | ARCH-PLAN-* | `plan/plan.json`, `plan/.task/TASK-*.json` |
| code-review | ARCH-CODE-* | `git diff --name-only`, changed file contents |
| consult | ARCH-CONSULT-* | Question extracted from task description |
| feasibility | ARCH-FEASIBILITY-* | Proposal from task description, codebase search results |
## Phase 3: Mode-Specific Assessment
### Mode: spec-review
Review architecture documents for technical soundness across 4 dimensions.
**Assessment dimensions**:
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Consistency | 25% | ADR decisions align with each other and with architecture index |
| Scalability | 25% | Design supports growth, no single-point bottlenecks |
| Security | 25% | Auth model, data protection, API security addressed |
| Tech fitness | 25% | Technology choices match project-tech.json and problem domain |
**Checks**:
- Read architecture index and all ADR files
- Cross-reference ADR decisions for contradictions
- Verify tech choices align with project-tech.json
- Score each dimension 0-100
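The weighted aggregation implied by the dimension table can be sketched as follows. This is a minimal illustration, assuming the equal 25% weights shown above; the function name and score values are hypothetical, not part of the skill:

```python
# Aggregate spec-review dimension scores (0-100 each) into a weighted total.
# Weights mirror the assessment table: four dimensions at 25% each.
WEIGHTS = {
    "consistency": 0.25,
    "scalability": 0.25,
    "security": 0.25,
    "tech_fitness": 0.25,
}

def spec_review_score(scores):
    """Return the weighted 0-100 total; missing dimensions score 0."""
    return sum(WEIGHTS[d] * scores.get(d, 0) for d in WEIGHTS)
```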
---
### Mode: plan-review
Review implementation plan for architectural soundness.
**Checks**:
| Check | What | Severity if Failed |
|-------|------|-------------------|
| Dependency cycles | Build task graph, detect cycles via DFS | High |
| Task granularity | Flag tasks touching >8 files | Medium |
| Convention compliance | Verify plan follows wisdom/conventions.md | Medium |
| Architecture alignment | Verify plan doesn't contradict wisdom/decisions.md | High |
**Dependency cycle detection flow**:
1. Parse all TASK-*.json files -> extract id and depends_on
2. Build directed graph
3. DFS traversal -> flag any node visited twice in same stack
4. Report cycle path if found
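The four-step flow above can be sketched as a DFS with an explicit stack. This is an illustrative sketch assuming tasks are dicts with `id` and `depends_on` as parsed from TASK-*.json; the function name is hypothetical:

```python
# Detect dependency cycles in a task graph, per the plan-review flow above.
def find_cycle(tasks):
    """Return a cycle path like ["A", "B", "A"], or None if acyclic."""
    graph = {t["id"]: t.get("depends_on", []) for t in tasks}
    visited, stack = set(), []

    def dfs(node):
        if node in stack:                       # revisited in current stack -> cycle
            return stack[stack.index(node):] + [node]
        if node in visited:
            return None
        visited.add(node)
        stack.append(node)
        for dep in graph.get(node, []):
            cycle = dfs(dep)
            if cycle:
                return cycle
        stack.pop()
        return None

    for task_id in graph:
        cycle = dfs(task_id)
        if cycle:
            return cycle
    return None
```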
---
### Mode: code-review
Assess architectural impact of code changes.
**Checks**:
| Check | Method | Severity if Found |
|-------|--------|-------------------|
| Layer violations | Detect upward imports (deeper layer importing shallower) | High |
| New dependencies | Parse package.json diff for added deps | Medium |
| Module boundary changes | Flag index.ts/index.js modifications | Medium |
| Architectural impact | Score based on file count and boundary changes | Info |
**Impact scoring**:
| Condition | Impact Level |
|-----------|-------------|
| Changed files > 10 | High |
| index.ts/index.js or package.json modified | Medium |
| All other cases | Low |
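The impact-scoring table maps directly to a small classifier. A sketch, with hypothetical names; the boundary-file list comes from the checks table above:

```python
# Score architectural impact of a change set per the table above.
BOUNDARY_FILES = ("index.ts", "index.js", "package.json")

def impact_level(changed_files):
    if len(changed_files) > 10:
        return "High"
    if any(f.split("/")[-1] in BOUNDARY_FILES for f in changed_files):
        return "Medium"
    return "Low"
```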
**Detection example** (find changed files):
```bash
Bash(command="git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
```
---
### Mode: consult
Answer architecture decision questions. Route by question complexity.
**Complexity detection**:
| Condition | Classification |
|-----------|---------------|
| Question > 200 chars OR contains a keyword stem: architect, design, pattern, refactor, migrate, scalab | Complex |
| All other questions | Simple |
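As a sketch, the classifier above might look like this. Substring matching on stems is an assumption here (so "scalab" catches "scalable" and "scalability"); the function name is illustrative:

```python
# Classify a consult question as Complex or Simple per the rules above.
COMPLEX_STEMS = ("architect", "design", "pattern", "refactor", "migrate", "scalab")

def classify_question(question):
    q = question.lower()
    if len(question) > 200 or any(stem in q for stem in COMPLEX_STEMS):
        return "Complex"
    return "Simple"
```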
**Complex questions** -> delegate to CLI exploration:
```bash
Bash(command="ccw cli -p \"PURPOSE: Architecture consultation for: <question_summary>
TASK: Search codebase for relevant patterns, analyze architectural implications, provide options with trade-offs
MODE: analysis
CONTEXT: @**/*
EXPECTED: Options with trade-offs, file references, architectural implications
CONSTRAINTS: Advisory only, provide options not decisions\" --tool gemini --mode analysis --rule analysis-review-architecture", timeout=300000)
```
**Simple questions** -> direct analysis using available context (wisdom, project-tech, codebase search).
---
### Mode: feasibility
Evaluate technical feasibility of a proposal.
**Assessment areas**:
| Area | Method | Output |
|------|--------|--------|
| Tech stack compatibility | Compare proposal needs against project-tech.json | Compatible / Requires additions |
| Codebase readiness | Search for integration points using Grep/Glob | Touch-point count |
| Effort estimation | Based on touch-point count (see table below) | Low / Medium / High |
| Risk assessment | Based on effort + tech compatibility | Risks + mitigations |
**Effort estimation**:
| Touch Points | Effort | Implication |
|-------------|--------|-------------|
| <= 5 | Low | Straightforward implementation |
| 6 - 20 | Medium | Moderate refactoring needed |
| > 20 | High | Significant refactoring, consider phasing |
**Verdict for feasibility**:
| Condition | Verdict |
|-----------|---------|
| Low/medium effort, compatible stack | FEASIBLE |
| High touch-points OR new tech required | RISKY |
| Fundamental incompatibility or unreasonable effort | INFEASIBLE |
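The effort and verdict tables above compose into one routine. A sketch under the stated thresholds; the parameter names and the `fundamentally_incompatible` flag are assumptions for illustration:

```python
# Map touch-point count to effort, then (effort, stack compatibility) to a
# feasibility verdict, following the two tables above.
def effort_level(touch_points):
    if touch_points <= 5:
        return "Low"
    if touch_points <= 20:
        return "Medium"
    return "High"

def feasibility_verdict(touch_points, stack_compatible, fundamentally_incompatible=False):
    if fundamentally_incompatible:
        return "INFEASIBLE"
    if effort_level(touch_points) in ("Low", "Medium") and stack_compatible:
        return "FEASIBLE"
    return "RISKY"
```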
---
### Verdict Routing (all modes except feasibility)
| Verdict | Criteria |
|---------|----------|
| BLOCK | >= 2 high-severity concerns |
| CONCERN | >= 1 high-severity OR >= 3 medium-severity concerns |
| APPROVE | All other cases |
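The verdict routing above reduces to two severity counts. A minimal sketch, assuming concerns are dicts with a `severity` field:

```python
# Route concerns to a verdict per the table above (all modes except feasibility).
def verdict(concerns):
    high = sum(1 for c in concerns if c["severity"] == "high")
    medium = sum(1 for c in concerns if c["severity"] == "medium")
    if high >= 2:
        return "BLOCK"
    if high >= 1 or medium >= 3:
        return "CONCERN"
    return "APPROVE"
```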
## Phase 4: Validation
### Output Format
Write assessment to `<session_folder>/architecture/arch-<slug>.json`.
**Report content sent to coordinator**:
| Field | Description |
|-------|-------------|
| mode | Consultation mode used |
| verdict | APPROVE / CONCERN / BLOCK (or FEASIBLE / RISKY / INFEASIBLE) |
| concern_count | Number of concerns by severity |
| recommendations | Actionable suggestions with trade-offs |
| output_path | Path to full assessment file |
**Wisdom contribution**: Append significant decisions to `<session_folder>/wisdom/decisions.md`.
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Architecture docs not found | Assess from available context, note limitation in report |
| Plan file missing | Report to coordinator via arch_concern |
| Git diff fails (no commits) | Use staged changes or skip code-review mode |
| CLI exploration timeout | Provide partial assessment, flag as incomplete |
| Exploration results unparseable | Fall back to direct analysis without exploration |
| Insufficient context | Request explorer assistance via coordinator |


@@ -1,103 +0,0 @@
# Role: architect
Architecture consultant. Advice on decisions, feasibility, design patterns.
## Identity
- **Name**: `architect` | **Prefix**: `ARCH-*` | **Tag**: `[architect]`
- **Type**: Consulting (on-demand, advisory only)
- **Responsibility**: Context loading → Mode detection → Analysis → Report
## Boundaries
### MUST
- Only process ARCH-* tasks
- Auto-detect mode from task subject prefix
- Provide options with trade-offs (not final decisions)
### MUST NOT
- Modify source code
- Make final decisions (advisory only)
- Execute implementation or testing
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| arch_ready | → coordinator | Assessment complete |
| arch_concern | → coordinator | Significant risk found |
| error | → coordinator | Analysis failure |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/assess.md | Multi-mode assessment |
| cli-explore-agent | Deep architecture exploration |
| ccw cli --tool gemini --mode analysis | Architecture analysis |
---
## Consultation Modes
| Task Pattern | Mode | Focus |
|-------------|------|-------|
| ARCH-SPEC-* | spec-review | Review architecture docs |
| ARCH-PLAN-* | plan-review | Review plan soundness |
| ARCH-CODE-* | code-review | Assess code change impact |
| ARCH-CONSULT-* | consult | Answer architecture questions |
| ARCH-FEASIBILITY-* | feasibility | Technical feasibility |
---
## Phase 2: Context Loading
**Common**: session folder, wisdom, project-tech.json, explorations
**Mode-specific**:
| Mode | Additional Context |
|------|-------------------|
| spec-review | architecture/_index.md, ADR-*.md |
| plan-review | plan/plan.json |
| code-review | git diff, changed files |
| consult | Question from task description |
| feasibility | Requirements + codebase |
---
## Phase 3: Assessment
Delegate to `commands/assess.md`. Output: mode, verdict (APPROVE/CONCERN/BLOCK), dimensions[], concerns[], recommendations[].
For complex questions → Gemini CLI with architecture review rule.
---
## Phase 4: Report
Output to `<session-folder>/architecture/arch-<slug>.json`. Contribute decisions to wisdom/decisions.md.
**Frontend project outputs** (when frontend tech stack detected in shared-memory or discovery-context):
- `<session-folder>/architecture/design-tokens.json` — color, spacing, typography, shadow tokens
- `<session-folder>/architecture/component-specs/*.md` — per-component design spec
**Report**: mode, verdict, concern count, recommendations, output path(s).
---
## Coordinator Integration
| Timing | Task |
|--------|------|
| After DRAFT-003 | ARCH-SPEC-001: Architecture document review |
| After PLAN-001 | ARCH-PLAN-001: Plan architecture review |
| On-demand | ARCH-CONSULT-001: Architecture consultation |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Docs not found | Assess from available context |
| CLI timeout | Partial assessment |
| Insufficient context | Request explorer via coordinator |


@@ -1,198 +0,0 @@
# Command: dispatch
## Purpose
Create task chains based on execution mode. v4 optimized: spec pipeline reduced from 12 to 6 tasks by inlining discuss rounds into produce roles.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Mode | Phase 1 requirements (spec-only, impl-only, etc.) | Yes |
| Session folder | `<session-folder>` from Phase 2 | Yes |
| Scope | User requirements description | Yes |
| Spec file | User-provided path (impl-only mode only) | Conditional |
## Phase 3: Task Chain Creation
### Mode-to-Pipeline Routing
| Mode | Tasks | Pipeline | First Task |
|------|-------|----------|------------|
| spec-only | 6 | Spec pipeline | RESEARCH-001 |
| impl-only | 4 | Impl pipeline | PLAN-001 |
| fe-only | 3 | FE pipeline | PLAN-001 |
| fullstack | 6 | Fullstack pipeline | PLAN-001 |
| full-lifecycle | 10 | Spec + Impl | RESEARCH-001 |
| full-lifecycle-fe | 12 | Spec + Fullstack | RESEARCH-001 |
---
### Spec Pipeline (6 tasks, was 12 in v3)
Used by: spec-only, full-lifecycle, full-lifecycle-fe
| # | Subject | Owner | BlockedBy | Description | Inline Discuss |
|---|---------|-------|-----------|-------------|---------------|
| 1 | RESEARCH-001 | analyst | (none) | Seed analysis and context gathering | DISCUSS-001 |
| 2 | DRAFT-001 | writer | RESEARCH-001 | Generate Product Brief | DISCUSS-002 |
| 3 | DRAFT-002 | writer | DRAFT-001 | Generate Requirements/PRD | DISCUSS-003 |
| 4 | DRAFT-003 | writer | DRAFT-002 | Generate Architecture Document | DISCUSS-004 |
| 5 | DRAFT-004 | writer | DRAFT-003 | Generate Epics & Stories | DISCUSS-005 |
| 6 | QUALITY-001 | reviewer | DRAFT-004 | 5-dimension spec quality + sign-off | DISCUSS-006 |
**Key change from v3**: No separate DISCUSS-* tasks. Each produce role calls the discuss subagent inline after completing its primary output.
### Impl Pipeline (4 tasks)
Used by: impl-only, full-lifecycle (PLAN-001 blockedBy QUALITY-001)
| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Multi-angle exploration and planning |
| 2 | IMPL-001 | executor | PLAN-001 | Code implementation |
| 3 | TEST-001 | tester | IMPL-001 | Test-fix cycles |
| 4 | REVIEW-001 | reviewer | IMPL-001 | 4-dimension code review |
### FE Pipeline (3 tasks)
Used by: fe-only
| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Planning (frontend focus) |
| 2 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation |
| 3 | QA-FE-001 | fe-qa | DEV-FE-001 | 5-dimension frontend QA |
QA fix loop (max 2 rounds): QA-FE verdict=NEEDS_FIX -> create DEV-FE-002 + QA-FE-002 dynamically.
### Fullstack Pipeline (6 tasks)
Used by: fullstack, full-lifecycle-fe (PLAN-001 blockedBy QUALITY-001)
| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Fullstack planning |
| 2 | IMPL-001 | executor | PLAN-001 | Backend implementation |
| 3 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation |
| 4 | TEST-001 | tester | IMPL-001 | Backend test-fix cycles |
| 5 | QA-FE-001 | fe-qa | DEV-FE-001 | Frontend QA |
| 6 | REVIEW-001 | reviewer | TEST-001, QA-FE-001 | Full code review |
### Composite Modes
| Mode | Construction | PLAN-001 BlockedBy |
|------|-------------|-------------------|
| full-lifecycle | Spec (6) + Impl (4) | QUALITY-001 |
| full-lifecycle-fe | Spec (6) + Fullstack (6) | QUALITY-001 |
---
### Impl-Only Pre-check
Before creating impl-only tasks, verify specification exists:
```
Spec exists?
+- YES -> read spec path -> proceed with task creation
+- NO -> error: "impl-only requires existing spec, use spec-only or full-lifecycle"
```
### Task Description Template
Every task description includes session, scope, and inline discuss metadata:
```
TaskCreate({
subject: "<TASK-ID>",
owner: "<role>",
description: "<task description from pipeline table>\nSession: <session-folder>\nScope: <scope>\nInlineDiscuss: <DISCUSS-NNN or none>\nInnerLoop: <true|false>",
blockedBy: [<dependency-list>],
status: "pending"
})
```
**InnerLoop Flag Rules**:
| Role | InnerLoop |
|------|-----------|
| writer (DRAFT-*) | true |
| planner (PLAN-*) | true |
| executor (IMPL-*) | true |
| analyst, tester, reviewer, architect, fe-developer, fe-qa | false |
### Execution Method
| Method | Behavior |
|--------|----------|
| sequential | One task active at a time; next activated after predecessor completes |
| parallel | Tasks with all deps met run concurrently (e.g., TEST-001 + REVIEW-001) |
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Task count | Matches mode total from routing table |
| Dependencies | Every blockedBy references an existing task subject |
| Owner assignment | Each task owner matches SKILL.md Role Registry prefix |
| Session reference | Every task description contains `Session: <session-folder>` |
| Inline discuss | Spec tasks have InlineDiscuss field matching round config |
### Revision Task Template
When handleRevise/handleFeedback creates revision tasks:
```
TaskCreate({
subject: "<ORIGINAL-ID>-R1",
owner: "<same-role-as-original>",
description: "<revision-type> revision of <ORIGINAL-ID>.\n
Session: <session-folder>\n
Original artifact: <artifact-path>\n
User feedback: <feedback-text or 'system-initiated'>\n
Revision scope: <targeted|full>\n
InlineDiscuss: <same-discuss-round-as-original>\n
InnerLoop: <true|false based on role>",
status: "pending",
blockedBy: [<predecessor-R1 if cascaded>]
})
```
**Revision naming**: `<ORIGINAL-ID>-R1`; a second revision of the same task becomes `-R2`; a third attempt escalates to the user instead of creating another revision task.
**Cascade blockedBy chain example** (revise DRAFT-002):
- DRAFT-002-R1 (no blockedBy)
- DRAFT-003-R1 (blockedBy: DRAFT-002-R1)
- DRAFT-004-R1 (blockedBy: DRAFT-003-R1)
- QUALITY-001-R1 (blockedBy: DRAFT-004-R1)
### Improvement Task Template
When handleImprove creates improvement tasks:
```
TaskCreate({
subject: "IMPROVE-<dimension>-001",
owner: "writer",
description: "Quality improvement: <dimension>.\n
Session: <session-folder>\n
Current score: <X>%\n
Target: 80%\n
Readiness report: <session>/spec/readiness-report.md\n
Weak areas: <extracted-from-report>\n
Strategy: <from-dimension-strategy-table>\n
InnerLoop: true",
status: "pending"
})
```
Improvement tasks are always followed by a QUALITY-001-R1 recheck (blockedBy: IMPROVE task).
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown mode | Reject with supported mode list |
| Missing spec for impl-only | Error, suggest spec-only or full-lifecycle |
| TaskCreate fails | Log error, report to user |
| Duplicate task subject | Skip creation, log warning |


@@ -1,413 +0,0 @@
# Command: monitor
## Purpose
Event-driven pipeline coordination with Spawn-and-Stop pattern. v4 enhanced with worker fast-advance awareness. Three wake-up sources: worker callbacks (auto-advance), user `check` (status report), user `resume` (manual advance).
## Constants
| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip coordinator for simple linear successors |
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Pipeline mode | session.mode | Yes |
## Phase 3: Handler Routing
### Wake-up Source Detection
Parse `$ARGUMENTS` to determine handler:
| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from known worker role | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | Contains "revise" + task ID pattern | handleRevise |
| 5 | Contains "feedback" + quoted/unquoted text | handleFeedback |
| 6 | Contains "recheck" | handleRecheck |
| 7 | Contains "improve" (optionally + dimension name) | handleImprove |
| 8 | None of the above (initial spawn after dispatch) | handleSpawnNext |
Match keywords as whole words so that "recheck" is not captured by the "check" rule.
Known worker roles: analyst, writer, planner, executor, tester, reviewer, architect, fe-developer, fe-qa.
> **Note**: `discussant` and `explorer` are no longer worker roles in v4. They are subagents called inline by produce roles.
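The wake-up routing can be sketched as an ordered rule check. This is an illustrative sketch: the whole-word matching (which keeps "recheck" out of the "check" rule) and the task-ID regex are assumptions, not part of the skill spec:

```python
import re

# Route a coordinator wake-up message to a handler, checking the more
# specific commands before falling back to handleSpawnNext.
KNOWN_ROLES = ("analyst", "writer", "planner", "executor", "tester",
               "reviewer", "architect", "fe-developer", "fe-qa")

def route(message):
    def has_word(*words):
        return any(re.search(rf"\b{w}\b", message, re.IGNORECASE) for w in words)

    if any(f"[{role}]" in message for role in KNOWN_ROLES):
        return "handleCallback"
    if has_word("check", "status"):
        return "handleCheck"
    if has_word("resume", "continue", "next"):
        return "handleResume"
    if has_word("revise") and re.search(r"\b[A-Z]+-\d{3}\b", message):
        return "handleRevise"
    if has_word("feedback"):
        return "handleFeedback"
    if has_word("recheck"):
        return "handleRecheck"
    if has_word("improve"):
        return "handleImprove"
    return "handleSpawnNext"            # initial spawn after dispatch
```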
---
### Handler: handleCallback
Worker completed a task. Verify completion, update state, auto-advance.
```
Receive callback from [<role>]
+- Find matching active worker by role
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
| +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
| +- YES -> remove from active_workers -> update session
| | +- Handle checkpoints (see below)
| | +- -> handleSpawnNext
| +- NO -> progress message, do not advance -> STOP
+- No matching worker found
+- Scan all active workers for completed tasks
+- Found completed -> process each -> handleSpawnNext
+- None completed -> STOP
```
**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext
---
### Handler: handleCheck
Read-only status report. No pipeline advancement.
**Output format**:
```
[coordinator] Pipeline Status (v4)
[coordinator] Mode: <mode> | Progress: <completed>/<total> (<percent>%)
[coordinator] Execution Graph:
Spec Phase: (if applicable)
[<icon> RESEARCH-001(+D1)] -> [<icon> DRAFT-001(+D2)] -> [<icon> DRAFT-002(+D3)]
-> [<icon> DRAFT-003(+D4)] -> [<icon> DRAFT-004(+D5)] -> [<icon> QUALITY-001(+D6)]
Impl Phase: (if applicable)
[<icon> PLAN-001]
+- BE: [<icon> IMPL-001] -> [<icon> TEST-001] -> [<icon> REVIEW-001]
+- FE: [<icon> DEV-FE-001] -> [<icon> QA-FE-001]
done=completed >>>=running o=pending .=not created
[coordinator] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]
[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```
**Icon mapping**: completed=done, in_progress=>>>, pending=o, not created=.
Then STOP.
---
### Handler: handleResume
Check active worker completion, process results, advance pipeline.
```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
+- status = completed -> mark done, log
+- status = in_progress -> still running, log
+- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
---
### Handler: handleSpawnNext
Find all ready tasks, spawn workers in background, update session, STOP.
```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects
Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> Phase 5
+- HAS ready tasks -> for each:
+- Is task owner an Inner Loop role AND that role already has an active_worker?
| +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
| +- NO -> normal spawn below
+- TaskUpdate -> in_progress
+- team_msg log -> task_unblocked
+- Spawn worker (see tool call below)
+- Add to session.active_workers
Update session file -> output summary -> STOP
```
**Spawn worker tool call** (one per ready task):
```bash
Task({
subagent_type: "general-purpose",
description: "Spawn <role> worker for <subject>",
team_name: <team-name>,
name: "<role>",
run_in_background: true,
prompt: "<worker prompt from SKILL.md Coordinator Spawn Template>"
})
```
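The ready-task collection at the heart of handleSpawnNext can be sketched as follows, assuming tasks are dicts with `subject`, `status`, and optional `blockedBy` as returned by TaskList:

```python
# Collect ready tasks: pending tasks whose every blockedBy subject is
# already completed.
def ready_subjects(tasks):
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    return [
        t["subject"]
        for t in tasks
        if t["status"] == "pending"
        and all(dep in completed for dep in t.get("blockedBy", []))
    ]
```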
---
### Handler: handleRevise
User requests targeted revision of a completed task.
```
Parse: revise <TASK-ID> [feedback-text]
+- Validate TASK-ID exists and is completed
| +- NOT completed -> error: "Task <ID> is not completed, cannot revise"
+- Determine role and doc type from TASK-ID prefix
+- Create revision task:
| TaskCreate({
| subject: "<TASK-ID>-R1",
| owner: "<same-role>",
| description: "User-requested revision of <TASK-ID>.\n
| Session: <session-folder>\n
| Original artifact: <artifact-path>\n
| User feedback: <feedback-text or 'general revision requested'>\n
| Revision scope: targeted\n
| InlineDiscuss: <same-discuss-round>\n
| InnerLoop: true",
| status: "pending"
| })
+- Cascade check (auto):
| +- Find all completed tasks downstream of TASK-ID
| +- For each downstream completed task -> create <ID>-R1
| +- Chain blockedBy: each R1 blockedBy its predecessor R1
| +- Always end with QUALITY-001-R1 (recheck)
+- Spawn worker for first revision task -> STOP
```
**Cascade Rules**:
| Revised Task | Downstream (auto-cascade) |
|-------------|--------------------------|
| RESEARCH-001 | DRAFT-001~004-R1, QUALITY-001-R1 |
| DRAFT-001 | DRAFT-002~004-R1, QUALITY-001-R1 |
| DRAFT-002 | DRAFT-003~004-R1, QUALITY-001-R1 |
| DRAFT-003 | DRAFT-004-R1, QUALITY-001-R1 |
| DRAFT-004 | QUALITY-001-R1 |
| QUALITY-001 | (no cascade, just recheck) |
**Cascade depth control**: Only cascade tasks that are already completed. Pending/in_progress tasks will naturally pick up changes.
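The cascade rules and depth control above can be sketched as one function over the fixed spec chain. This is an assumption-laden sketch: only completed downstream tasks are cascaded, and the chain always ends with the QUALITY-001-R1 recheck:

```python
# Build the revision cascade for a spec task, per the cascade table above.
SPEC_CHAIN = ["RESEARCH-001", "DRAFT-001", "DRAFT-002", "DRAFT-003",
              "DRAFT-004", "QUALITY-001"]

def cascade(revised, completed):
    """Return the -R1 subjects to create after revising `revised`."""
    if revised == "QUALITY-001":
        return ["QUALITY-001-R1"]               # recheck only, no cascade
    downstream = SPEC_CHAIN[SPEC_CHAIN.index(revised) + 1:-1]
    chain = [f"{t}-R1" for t in downstream if t in completed]
    return chain + ["QUALITY-001-R1"]
```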
---
### Handler: handleFeedback
User injects feedback into pipeline context.
```
Parse: feedback <text>
+- Determine pipeline state:
| +- Spec phase in progress -> find earliest affected DRAFT task
| +- Spec phase complete (at checkpoint) -> analyze full impact
| +- Impl phase in progress -> log to wisdom/decisions.md, no revision
+- Analyze feedback impact:
| +- Keyword match against doc types:
| "vision/market/MVP/scope" -> DRAFT-001 (product-brief)
| "requirement/feature/NFR/user story" -> DRAFT-002 (requirements)
| "architecture/ADR/component/tech stack" -> DRAFT-003 (architecture)
| "epic/story/sprint/priority" -> DRAFT-004 (epics)
| +- If unclear -> default to earliest incomplete or most recent completed
+- Write feedback to wisdom/decisions.md
+- Create revision chain (same as handleRevise from determined start point)
+- Spawn worker -> STOP
```
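The keyword routing above can be sketched as an ordered match. The keyword lists are illustrative samples from the handler, not exhaustive; checking routes in pipeline order means the earliest affected doc type wins.

```typescript
// Routes checked in pipeline order, so the earliest affected DRAFT wins.
const FEEDBACK_ROUTES: Array<{ keywords: string[]; target: string }> = [
  { keywords: ["vision", "market", "mvp", "scope"], target: "DRAFT-001" },
  { keywords: ["requirement", "feature", "nfr", "user story"], target: "DRAFT-002" },
  { keywords: ["architecture", "adr", "component", "tech stack"], target: "DRAFT-003" },
  { keywords: ["epic", "story", "sprint", "priority"], target: "DRAFT-004" },
];

/** Map feedback text to a DRAFT task; fall back when nothing matches. */
function routeFeedback(text: string, fallback = "DRAFT-004"): string {
  const lower = text.toLowerCase();
  for (const route of FEEDBACK_ROUTES) {
    if (route.keywords.some(k => lower.includes(k))) return route.target;
  }
  return fallback;
}
```

The `fallback` default stands in for the "earliest incomplete or most recent completed" rule, which needs live TaskList state to resolve.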
---
### Handler: handleRecheck
Re-run quality check after manual edits or revisions.
```
Parse: recheck
+- Validate QUALITY-001 exists and is completed
| +- NOT completed -> error: "Quality check hasn't run yet"
+- Create recheck task:
| TaskCreate({
| subject: "QUALITY-001-R1",
| owner: "reviewer",
| description: "Re-run spec quality check.\n
| Session: <session-folder>\n
| Scope: full recheck\n
| InlineDiscuss: DISCUSS-006",
| status: "pending"
| })
+- Spawn reviewer -> STOP
```
---
### Handler: handleImprove
Quality-driven improvement based on readiness report dimensions.
```
Parse: improve [dimension]
+- Read <session>/spec/readiness-report.md
| +- NOT found -> error: "No readiness report. Run quality check first."
+- Extract dimension scores
+- Select target:
| +- dimension specified -> use it
| +- not specified -> pick lowest scoring dimension
+- Map dimension to improvement strategy:
|
| | Dimension | Strategy | Target Tasks |
| |-----------|----------|-------------|
| | completeness | Fill missing sections | DRAFT with missing sections |
| | consistency | Unify terminology/format | All DRAFT (batch) |
| | traceability | Strengthen Goals->Reqs->Arch->Stories chain | DRAFT-002, DRAFT-003, DRAFT-004 |
| | depth | Enhance AC/ADR detail | Weakest sub-dimension's DRAFT |
| | coverage | Add uncovered requirements | DRAFT-002 |
|
+- Create improvement task:
| TaskCreate({
| subject: "IMPROVE-<dimension>-001",
| owner: "writer",
| description: "Quality improvement: <dimension>.\n
| Session: <session-folder>\n
| Current score: <X>%\n
| Target: 80%\n
| Weak areas: <from readiness-report>\n
| Strategy: <from table>\n
| InnerLoop: true",
| status: "pending"
| })
+- Create QUALITY-001-R1 (blockedBy: IMPROVE task)
+- Spawn writer -> STOP
```
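Target selection can be sketched as below, assuming dimension scores have already been extracted from readiness-report.md upstream; the function name and score shape are illustrative.

```typescript
/** Pick the improvement target: the requested dimension, else the lowest-scoring one. */
function pickDimension(
  scores: Record<string, number>,
  requested?: string,
): { dimension: string; score: number } {
  if (requested && requested in scores) {
    return { dimension: requested, score: scores[requested] };
  }
  // No dimension specified: sort ascending and take the weakest.
  const [dimension, score] = Object.entries(scores).sort((a, b) => a[1] - b[1])[0];
  return { dimension, score };
}
```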
---
### Checkpoints
| Completed Task | Mode Condition | Action |
|---------------|----------------|--------|
| QUALITY-001 | full-lifecycle or full-lifecycle-fe | Read readiness-report.md -> extract gate + scores -> output Checkpoint Output Template (see SKILL.md) -> pause for user action |
---
### Worker Failure Handling
When a worker has an unexpected status (not completed, not in_progress):
1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume
### Fast-Advance Failure Recovery
When coordinator detects a fast-advanced task has failed (task in_progress but no callback and worker gone):
```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Original fast-advancing worker has already completed and exited
+- Resolution:
1. TaskUpdate -> reset task to pending
2. Remove stale active_worker entry (if any)
3. Log via team_msg (type: error, summary: "Fast-advanced task <ID> failed, resetting for retry")
4. -> handleSpawnNext (will re-spawn the task normally)
```
**Detection in handleResume**:
```
For each in_progress task in TaskList():
+- Has matching active_worker? -> normal, skip
+- No matching active_worker? -> orphaned (likely fast-advance failure)
+- Check creation time: if > 5 minutes with no progress callback
+- Reset to pending -> handleSpawnNext
```
**Prevention**: Fast-advance failures are self-healing. The coordinator reconciles orphaned tasks on every `resume`/`check` cycle. No manual intervention required unless the same task fails repeatedly (3+ resets -> escalate to user).
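The detection logic above can be sketched as a pure function. The row shapes and the 5-minute grace constant mirror the rule; the real TaskList and session schemas may differ.

```typescript
interface TaskRow { id: string; status: string; createdAt: number; }
interface WorkerRow { taskId: string; }

const ORPHAN_GRACE_MS = 5 * 60 * 1000; // 5 minutes, per the detection rule above

/** Return IDs of in_progress tasks with no active worker and no recent progress. */
function findOrphanedTasks(
  tasks: TaskRow[],
  activeWorkers: WorkerRow[],
  now: number,
): string[] {
  const workerTaskIds = new Set(activeWorkers.map(w => w.taskId));
  return tasks
    .filter(t => t.status === "in_progress")
    .filter(t => !workerTaskIds.has(t.id))            // no matching active_worker
    .filter(t => now - t.createdAt > ORPHAN_GRACE_MS) // stale beyond the grace period
    .map(t => t.id);                                  // candidates to reset to pending
}
```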
### Consensus-Blocked Handling
When a produce role reports `consensus_blocked` in its callback:
```
handleCallback receives message with consensus_blocked flag
+- Extract: divergence_severity, blocked_round, action_recommendation
+- Route by severity:
|
+- severity = HIGH (rating <= 2 or critical risk)
| +- Is this DISCUSS-006 (final sign-off)?
| | +- YES -> PAUSE: output warning, wait for user `resume` with decision
| | +- NO -> Create REVISION task:
| | +- Same role, same doc type, incremented suffix (e.g., DRAFT-001-R1)
| | +- Description includes: divergence details + action items from discuss
| | +- blockedBy: none (immediate execution)
| | +- Max 1 revision per task (DRAFT-001 -> DRAFT-001-R1, no R2)
| | +- If already revised once -> PAUSE, escalate to user
| +- Update session: mark task as "revised", log revision chain
|
+- severity = MEDIUM (rating spread or single low rating)
| +- Proceed with warning: include divergence in next task's context
| +- Log action items to wisdom/issues.md for downstream awareness
| +- Normal handleSpawnNext
|
+- severity = LOW (minor suggestions only)
+- Proceed normally: treat as consensus_reached with notes
+- Normal handleSpawnNext
```
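The severity routing above reduces to a small decision function. The action labels are illustrative names for the branches, not real message types.

```typescript
type Severity = "HIGH" | "MEDIUM" | "LOW";
type Action = "pause" | "create_revision" | "proceed_with_warning" | "proceed";

function routeConsensusBlocked(
  severity: Severity,
  opts: { isFinalSignOff: boolean; alreadyRevised: boolean },
): Action {
  if (severity === "HIGH") {
    // DISCUSS-006 sign-off or a second blocked round always escalates to the user.
    if (opts.isFinalSignOff || opts.alreadyRevised) return "pause";
    return "create_revision"; // max 1 revision per task
  }
  if (severity === "MEDIUM") return "proceed_with_warning"; // log to wisdom/issues.md
  return "proceed"; // LOW: treat as consensus_reached with notes
}
```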
**Revision task template** (for HIGH severity):
```
TaskCreate({
subject: "<ORIGINAL-ID>-R1",
description: "Revision of <ORIGINAL-ID>: address consensus-blocked divergences.\n
Session: <session-folder>\n
Original artifact: <artifact-path>\n
Divergences:\n<divergence-details>\n
Action items:\n<action-items-from-discuss>\n
InlineDiscuss: <same-round-id>",
owner: "<same-role>",
status: "pending"
})
```
## Phase 4: Validation
| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Pipeline completeness | All expected tasks exist per mode |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
| Fast-advance orphan check | in_progress tasks without active_worker entry -> reset to pending |
| Consensus-blocked routing | HIGH -> revision/pause, MEDIUM -> warn+proceed, LOW -> proceed |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Coordinator reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| consensus_blocked HIGH | Create revision task (max 1) or pause for user |
| consensus_blocked MEDIUM | Proceed with warning, log to wisdom/issues.md |
| Revision task also blocked | Escalate to user, pause pipeline |


@@ -1,230 +0,0 @@
# Coordinator Role
Orchestrate the team-lifecycle workflow: team creation, task dispatching, progress monitoring, session state. Optimized for the v4 reduced pipeline with inline discuss and shared explore.
## Identity
- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Parse requirements -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries
### MUST
- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
- Create team and spawn worker subagents in background
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
### MUST NOT
- Execute spec/impl/research work directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Skip dependency validation when creating task chains
---
## Command Execution Protocol
When coordinator needs to execute a command (dispatch, monitor):
1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** - NOT separate agents or subprocesses
4. **Execute synchronously** - complete the command workflow before proceeding
Example:
```
Phase 3 needs task dispatch
-> Read roles/coordinator/commands/dispatch.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Chain Creation)
-> Execute Phase 4 (Validation)
-> Return to the role workflow and continue with its Phase 4 (Spawn-and-Stop)
```
---
## Entry Router
When coordinator is invoked, first detect the invocation type:
| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | -> handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | -> handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume: check worker states, advance pipeline |
| Interrupted session | Active/paused session exists in `.workflow/.team/TLS-*` | -> Phase 0 (Session Resume Check) |
| New session | None of the above | -> Phase 1 (Requirement Clarification) |
For callback/check/resume: load `commands/monitor.md` and execute the appropriate handler, then STOP.
### Router Implementation
1. **Load session context** (if exists):
- Scan `.workflow/.team/TLS-*/team-session.json` for active/paused sessions
- If found, extract known worker roles from session or SKILL.md Role Registry
2. **Parse $ARGUMENTS** for detection keywords
3. **Route to handler**:
- For monitor handlers: Read `commands/monitor.md`, execute matched handler section, STOP
- For Phase 0: Execute Session Resume Check below
- For Phase 1: Execute Requirement Clarification below
---
## Phase 0: Session Resume Check
**Objective**: Detect and resume interrupted sessions before creating new ones.
**Workflow**:
1. Scan `.workflow/.team/TLS-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection
**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session.completed_tasks <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Determine remaining pipeline from reconciled state
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update session file with reconciled state
9. Kick first executable task's worker -> Phase 4
---
## Phase 1: Requirement Clarification
**Objective**: Parse user input and gather execution parameters.
**Workflow**:
1. **Parse arguments** for explicit settings: mode, scope, focus areas, depth
2. **Ask for missing parameters** via AskUserQuestion:
- Mode: spec-only / impl-only / full-lifecycle / fe-only / fullstack / full-lifecycle-fe
- Scope: project description
- Execution method: sequential / parallel
3. **Frontend auto-detection** (for impl-only and full-lifecycle modes):
| Signal | Detection | Pipeline Upgrade |
|--------|----------|-----------------|
| FE keywords (component, page, UI, React, Vue, CSS...) | Keyword match in description | impl-only -> fe-only or fullstack |
| BE keywords also present (API, database, server...) | Both FE + BE keywords | impl-only -> fullstack |
| FE framework in package.json | grep react/vue/svelte/next | full-lifecycle -> full-lifecycle-fe |
4. **Store requirements**: mode, scope, focus, depth, executionMethod
**Success**: All parameters captured, mode finalized.
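The keyword half of the auto-detection table can be sketched as below. Word-boundary matching avoids false hits such as "ui" inside "build"; the keyword lists are illustrative, and the package.json framework check is omitted.

```typescript
const FE_KEYWORDS = ["component", "page", "ui", "react", "vue", "css"];
const BE_KEYWORDS = ["api", "database", "server"];

// Whole-word match so "ui" does not fire inside "build".
const hasWord = (text: string, words: string[]) =>
  words.some(w => new RegExp(`\\b${w}\\b`, "i").test(text));

/** Upgrade impl-only to fe-only or fullstack based on description keywords. */
function upgradeMode(mode: string, description: string): string {
  if (mode !== "impl-only") return mode; // full-lifecycle upgrade uses package.json, not shown
  const hasFe = hasWord(description, FE_KEYWORDS);
  const hasBe = hasWord(description, BE_KEYWORDS);
  if (hasFe && hasBe) return "fullstack";
  if (hasFe) return "fe-only";
  return mode;
}
```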
---
## Phase 2: Create Team + Initialize Session
**Objective**: Initialize team, session file, and wisdom directory.
**Workflow**:
1. Generate session ID: `TLS-<slug>-<date>`
2. Create session folder: `.workflow/.team/<session-id>/`
3. Call TeamCreate with team name
4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Initialize explorations directory with empty cache-index.json
6. Write team-session.json with: session_id, mode, scope, status="active", tasks_total, tasks_completed=0
**Task counts by mode (v4)**:
| Mode | Tasks | v3 Tasks | Savings |
|------|-------|----------|---------|
| spec-only | 6 | 12 | -6 (discuss inlined) |
| impl-only | 4 | 4 | 0 |
| fe-only | 3 | 3 | 0 |
| fullstack | 6 | 6 | 0 |
| full-lifecycle | 10 | 16 | -6 (discuss inlined) |
| full-lifecycle-fe | 12 | 18 | -6 (discuss inlined) |
**Success**: Team created, session file written, wisdom and explorations initialized.
---
## Phase 3: Create Task Chain
**Objective**: Dispatch tasks based on mode with proper dependencies.
Delegate to `commands/dispatch.md` which creates the full task chain:
1. Reads SKILL.md Task Metadata Registry for task definitions
2. Creates tasks via TaskCreate with correct blockedBy
3. Assigns owner based on role mapping
4. Includes `Session: <session-folder>` in every task description
5. Marks inline discuss round in task description (e.g., `InlineDiscuss: DISCUSS-002`)
---
## Phase 4: Spawn-and-Stop
**Objective**: Spawn first batch of ready workers in background, then STOP.
**Design**: Spawn-and-Stop + Callback pattern, with worker fast-advance.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> may fast-advance to next task OR SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS
**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task -> spawn worker (see SKILL.md Spawn Template)
4. Output status summary
5. STOP
**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
### Checkpoint Gate Handling
When QUALITY-001 completes (spec->impl transition checkpoint):
1. Read `<session-folder>/spec/readiness-report.md`
2. Parse quality gate: extract `Quality Gate:` line -> PASS/REVIEW/FAIL + score
3. Parse dimension scores: extract `Dimension Scores` table
4. Output Checkpoint Output Template (see SKILL.md Checkpoints) with gate-specific guidance
5. Write gate result to team-session.json: `checkpoint_gate: { gate, score, dimensions }`
6. Pause and wait for user command
**Gate-specific behavior**:
| Gate | Primary Suggestion | Warning |
|------|-------------------|---------|
| PASS (>=80%) | `resume` to proceed | None |
| REVIEW (60-79%) | `improve` or `revise` first | Warn on `resume`: "Quality below target, proceed at risk" |
| FAIL (<60%) | `improve` or `revise` required | Block `resume` suggestion, user can force |
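Gate parsing can be sketched as below. The `Quality Gate: REVIEW (72%)` line format is an assumption about readiness-report.md; the thresholds mirror the table above.

```typescript
interface GateResult { gate: "PASS" | "REVIEW" | "FAIL"; score: number; }

/** Extract the gate label and score from the report's "Quality Gate:" line. */
function parseGate(report: string): GateResult | null {
  const m = report.match(/Quality Gate:\s*(PASS|REVIEW|FAIL)\s*\((\d+)%\)/);
  if (!m) return null;
  return { gate: m[1] as GateResult["gate"], score: Number(m[2]) };
}

/** Derive the gate from a raw score when the report omits the label. */
function gateForScore(score: number): GateResult["gate"] {
  if (score >= 80) return "PASS";
  if (score >= 60) return "REVIEW";
  return "FAIL";
}
```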
---
## Phase 5: Report + Next Steps
**Objective**: Completion report and follow-up options.
**Workflow**:
1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Update session status -> "completed"
4. Offer next steps: exit / view artifacts / extend tasks / generate lite-plan / create Issue
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fallback to manual reconciliation |


@@ -1,166 +0,0 @@
# Command: implement
## Purpose
Multi-backend code implementation: route tasks to appropriate execution backend (direct edit, subagent, or CLI), build focused prompts, execute with retry and fallback.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Plan | `<session-folder>/plan/plan.json` | Yes |
| Task files | `<session-folder>/plan/.task/TASK-*.json` | Yes |
| Backend | Task metadata / plan default / auto-select | Yes |
| Working directory | task.metadata.working_dir or project root | No |
| Wisdom | `<session-folder>/wisdom/` | No |
## Phase 3: Implementation
### Backend Selection
Priority order (first match wins):
| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | `task.metadata.executor` field |
| 2 | Plan default | "Execution Backend:" line in plan.json |
| 3 | Auto-select | See auto-select table below |
**Auto-select routing**:
| Condition | Backend |
|-----------|---------|
| Description < 200 chars AND no refactor/architecture keywords AND single target file | agent (direct edit) |
| Description < 200 chars AND simple scope | agent (subagent) |
| Complex scope OR architecture keywords | codex |
| Analysis-heavy OR multi-module integration | gemini |
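The auto-select rows can be sketched as a priority chain. The 200-character threshold and keywords come from the table; the `analysisHeavy` flag stands in for "analysis-heavy OR multi-module integration", which in practice needs deeper task inspection.

```typescript
type Backend = "agent-direct" | "agent-subagent" | "codex" | "gemini";

function autoSelectBackend(task: {
  description: string;
  targetFiles: string[];
  analysisHeavy?: boolean;
}): Backend {
  const desc = task.description.toLowerCase();
  const complex = /refactor|architecture/.test(desc) || task.description.length >= 200;
  if (task.analysisHeavy) return "gemini";       // analysis-heavy / multi-module
  if (complex) return "codex";                    // complex scope or architecture keywords
  if (task.targetFiles.length === 1) return "agent-direct"; // simple, single target file
  return "agent-subagent";                        // simple but multi-file
}
```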
### Execution Paths
```
Backend selected
├─ agent (direct edit)
│ └─ Read target file → Edit directly → no subagent overhead
├─ agent (subagent)
│ └─ Task({ subagent_type: "code-developer", run_in_background: false })
├─ codex (CLI)
│ └─ Bash(command="ccw cli ... --tool codex --mode write", run_in_background=true)
└─ gemini (CLI)
└─ Bash(command="ccw cli ... --tool gemini --mode write", run_in_background=true)
```
### Path 1: Direct Edit (agent, simple task)
```
Read(file_path="<target-file>")
Edit(file_path="<target-file>", old_string="<old>", new_string="<new>")
```
### Path 2: Subagent (agent, moderate task)
```
Task({
subagent_type: "code-developer",
run_in_background: false,
description: "Implement <task-id>",
prompt: "<execution-prompt>"
})
```
### Path 3: CLI Backend (codex or gemini)
```bash
Bash(command="ccw cli -p '<execution-prompt>' --tool <codex|gemini> --mode write --cd <working-dir>", run_in_background=true)
```
### Execution Prompt Template
All backends receive the same structured prompt:
```
# Implementation Task: <task-id>
## Task Description
<task-description>
## Acceptance Criteria
1. <criterion>
## Context from Plan
<architecture-section>
<technical-stack-section>
<task-context-section>
## Files to Modify
<target-files or "Auto-detect based on task">
## Constraints
- Follow existing code style and patterns
- Preserve backward compatibility
- Add appropriate error handling
- Include inline comments for complex logic
```
### Batch Execution
When multiple IMPL tasks exist, execute in dependency order:
```
Topological sort by task.depends_on
├─ Batch 1: Tasks with no dependencies → execute
├─ Batch 2: Tasks depending on batch 1 → execute
└─ Batch N: Continue until all tasks complete
Progress update per batch (when > 1 batch):
→ team_msg: "Processing batch <N>/<total>: <task-id>"
```
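The batching above is level-grouped topological sorting (Kahn's algorithm). A minimal sketch, assuming every dependency ID appears in the task set:

```typescript
interface ImplTask { id: string; dependsOn: string[]; }

/** Group tasks into dependency-ordered batches; throw on cycles. */
function batchByDependency(tasks: ImplTask[]): string[][] {
  const remaining = new Map(tasks.map(t => [t.id, new Set(t.dependsOn)]));
  const batches: string[][] = [];
  while (remaining.size > 0) {
    // A task is ready when all of its dependencies have been batched.
    const ready = [...remaining].filter(([, deps]) => deps.size === 0).map(([id]) => id);
    if (ready.length === 0) throw new Error("Circular dependency detected");
    batches.push(ready);
    for (const id of ready) remaining.delete(id);
    for (const deps of remaining.values()) ready.forEach(id => deps.delete(id));
  }
  return batches;
}
```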
### Retry and Fallback
**Retry** (max 3 attempts per task):
```
Attempt 1 → failure
├─ team_msg: "Retry 1/3 after error: <message>"
└─ Attempt 2 → failure
├─ team_msg: "Retry 2/3 after error: <message>"
└─ Attempt 3 → failure → fallback
```
**Fallback** (when primary backend fails after retries):
| Primary Backend | Fallback |
|----------------|----------|
| codex | agent (subagent) |
| gemini | agent (subagent) |
| agent (subagent) | Report failure to coordinator |
| agent (direct edit) | agent (subagent) |
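The retry-then-fallback loop can be sketched as below. `run` stands in for the actual backend invocation; the fallback map transcribes the table above, and a `null` entry means "report failure to coordinator".

```typescript
const FALLBACK: Record<string, string | null> = {
  "codex": "agent-subagent",
  "gemini": "agent-subagent",
  "agent-direct": "agent-subagent",
  "agent-subagent": null, // no fallback: report failure to coordinator
};

async function runWithRetry(
  backend: string,
  run: (backend: string) => Promise<boolean>,
  maxAttempts = 3,
): Promise<{ ok: boolean; backend: string; attempts: number }> {
  let attempts = 0;
  let current: string | null = backend;
  while (current !== null) {
    for (let i = 0; i < maxAttempts; i++) {
      attempts++;
      if (await run(current)) return { ok: true, backend: current, attempts };
    }
    current = FALLBACK[current] ?? null; // retries exhausted: fall back one hop
  }
  return { ok: false, backend, attempts };
}
```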
## Phase 4: Validation
### Self-Validation Steps
| Step | Method | Pass Criteria |
|------|--------|--------------|
| Syntax check | `Bash(command="tsc --noEmit", timeout=30000)` | Exit code 0 |
| Acceptance match | Check criteria keywords vs modified files | All criteria addressed |
| Test detection | Search for .test.ts/.spec.ts matching modified files | Tests identified |
| File changes | `Bash(command="git diff --name-only HEAD")` | At least 1 file modified |
### Result Routing
| Outcome | Message Type | Content |
|---------|-------------|---------|
| All tasks pass validation | impl_complete | Task ID, files modified, backend used |
| Batch progress | impl_progress | Batch index, total batches, current task |
| Validation failure after retries | error | Task ID, error details, retry count |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Syntax errors after implementation | Retry with error context (max 3) |
| Backend unavailable | Fallback to agent |
| Missing dependencies | Request from coordinator |
| All retries + fallback exhausted | Report failure with full error log |


@@ -1,103 +0,0 @@
# Role: executor
Code implementation following approved plans. Multi-backend execution with self-validation.
## Identity
- **Name**: `executor` | **Prefix**: `IMPL-*` | **Tag**: `[executor]`
- **Responsibility**: Load plan → Route to backend → Implement → Self-validate → Report
## Boundaries
### MUST
- Only process IMPL-* tasks
- Follow approved plan exactly
- Use declared execution backends
- Self-validate all implementations
### MUST NOT
- Create tasks
- Contact other workers directly
- Modify plan files
- Skip self-validation
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| impl_complete | → coordinator | Implementation success |
| impl_progress | → coordinator | Batch progress |
| error | → coordinator | Implementation failure |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/implement.md | Multi-backend implementation |
| code-developer agent | Simple tasks (synchronous) |
| ccw cli --tool codex --mode write | Complex tasks |
| ccw cli --tool gemini --mode write | Alternative backend |
---
## Phase 2: Task & Plan Loading
**Objective**: Load plan and determine execution strategy.
1. Load plan.json and .task/TASK-*.json from `<session-folder>/plan/`
**Backend selection** (priority order):
| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | task.metadata.executor field |
| 2 | Plan default | "Execution Backend:" in plan |
| 3 | Auto-select | Simple (< 200 chars, no refactor) → agent; Complex → codex |
**Code review selection**:
| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | task.metadata.code_review field |
| 2 | Plan default | "Code Review:" in plan |
| 3 | Auto-select | Critical keywords (auth, security, payment) → enabled |
---
## Phase 3: Code Implementation
**Objective**: Execute implementation across batches.
**Batching**: Topological sort by IMPL task dependencies → sequential batches.
Delegate to `commands/implement.md` for prompt building and backend routing:
| Backend | Invocation | Use Case |
|---------|-----------|----------|
| agent | Task({ subagent_type: "code-developer", run_in_background: false }) | Simple, direct edits |
| codex | ccw cli --tool codex --mode write (background) | Complex, architecture |
| gemini | ccw cli --tool gemini --mode write (background) | Analysis-heavy |
---
## Phase 4: Self-Validation
| Step | Method | Pass Criteria |
|------|--------|--------------|
| Syntax check | `tsc --noEmit` (30s) | Exit code 0 |
| Acceptance criteria | Match criteria keywords vs implementation | All addressed |
| Test detection | Find .test.ts/.spec.ts for modified files | Tests identified |
| Code review (optional) | gemini analysis or codex review | No blocking issues |
**Report**: task ID, status, files modified, validation results, backend used.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Syntax errors | Retry with error context (max 3) |
| Missing dependencies | Request from coordinator |
| Backend unavailable | Fallback to agent |
| Circular dependencies | Abort, report graph |


@@ -1,111 +0,0 @@
# Role: fe-developer
Frontend development. Consumes plan/architecture output, implements components, pages, styles.
## Identity
- **Name**: `fe-developer` | **Prefix**: `DEV-FE-*` | **Tag**: `[fe-developer]`
- **Type**: Frontend pipeline worker
- **Responsibility**: Context loading → Design token consumption → Component implementation → Report
## Boundaries
### MUST
- Only process DEV-FE-* tasks
- Follow existing design tokens and component specs (if available)
- Generate accessible frontend code (semantic HTML, ARIA, keyboard nav)
- Follow project's frontend tech stack
### MUST NOT
- Modify backend code or API interfaces
- Contact other workers directly
- Introduce new frontend dependencies without architecture review
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| dev_fe_complete | → coordinator | Implementation done |
| dev_fe_progress | → coordinator | Long task progress |
| error | → coordinator | Implementation failure |
## Toolbox
| Tool | Purpose |
|------|---------|
| code-developer agent | Component implementation |
| ccw cli --tool gemini --mode write | Complex frontend generation |
---
## Phase 2: Context Loading
**Inputs to load**:
- Plan: `<session-folder>/plan/plan.json`
- Design tokens: `<session-folder>/architecture/design-tokens.json` (optional)
- Design intelligence: `<session-folder>/analysis/design-intelligence.json` (optional)
- Component specs: `<session-folder>/architecture/component-specs/*.md` (optional)
- Shared memory, wisdom
**Tech stack detection**:
| Signal | Framework | Styling |
|--------|-----------|---------|
| react/react-dom in deps | react | - |
| vue in deps | vue | - |
| next in deps | nextjs | - |
| tailwindcss in deps | - | tailwind |
| @shadcn/ui in deps | - | shadcn |
---
## Phase 3: Frontend Implementation
**Step 1**: Generate design token CSS (if tokens available)
- Convert design-tokens.json → CSS custom properties (`:root { --color-*, --space-*, --text-* }`)
- Include dark mode overrides via `@media (prefers-color-scheme: dark)`
- Write to `src/styles/tokens.css`
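Step 1 can be sketched as below. The flat `group -> name -> value` token schema is an assumption about design-tokens.json; dark-mode overrides are omitted for brevity.

```typescript
/** Convert a flat design-tokens object into CSS custom properties under :root. */
function tokensToCss(tokens: Record<string, Record<string, string>>): string {
  const lines: string[] = [":root {"];
  for (const [group, values] of Object.entries(tokens)) {
    for (const [name, value] of Object.entries(values)) {
      lines.push(`  --${group}-${name}: ${value};`); // e.g. --color-primary: #3b82f6;
    }
  }
  lines.push("}");
  return lines.join("\n");
}
```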
**Step 2**: Implement components
| Task Size | Strategy |
|-----------|----------|
| Simple (≤ 3 files, single component) | code-developer agent (synchronous) |
| Complex (system, multi-component) | ccw cli --tool gemini --mode write (background) |
**Coding standards** (include in agent/CLI prompt):
- Use design token CSS variables, never hardcode colors/spacing
- Interactive elements: cursor: pointer
- Transitions: 150-300ms
- Text contrast: minimum 4.5:1
- Include focus-visible styles
- Support prefers-reduced-motion
- Responsive: mobile-first
- No emoji as functional icons
---
## Phase 4: Self-Validation
| Check | What |
|-------|------|
| hardcoded-color | No #hex outside tokens.css |
| cursor-pointer | Interactive elements have cursor: pointer |
| focus-styles | Interactive elements have focus styles |
| responsive | Has responsive breakpoints |
| reduced-motion | Animations respect prefers-reduced-motion |
| emoji-icon | No emoji as functional icons |
Contribute to wisdom/conventions.md. Update shared-memory.json with component inventory.
**Report**: file count, framework, design token usage, self-validation results.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Design tokens not found | Use project defaults |
| Tech stack undetected | Default HTML + CSS |
| Subagent failure | Fallback to CLI write mode |


@@ -1,152 +0,0 @@
# Command: pre-delivery-checklist
## Purpose
CSS-level pre-delivery checks for frontend files. Validates accessibility, interaction, design compliance, and layout patterns before final delivery.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Changed frontend files | git diff --name-only (filtered to .tsx, .jsx, .css, .scss) | Yes |
| File contents | Read each changed file | Yes |
| Design tokens path | `src/styles/tokens.css` or equivalent | No |
| Session folder | Task description `Session:` field | Yes |
## Phase 3: Checklist Execution
### Category 1: Accessibility (6 items)
| # | Check | Pattern to Detect | Severity |
|---|-------|--------------------|----------|
| 1 | Images have alt text | `<img` without `alt=` | CRITICAL |
| 2 | Form inputs have labels | `<input` without `<label` or `aria-label` | HIGH |
| 3 | Focus states visible | Interactive elements without `:focus` styles | HIGH |
| 4 | Color contrast 4.5:1 | Light text on light background patterns | HIGH |
| 5 | prefers-reduced-motion | Animations without `@media (prefers-reduced-motion)` | MEDIUM |
| 6 | Heading hierarchy | Skipped heading levels (h1 followed by h3) | MEDIUM |
**Do / Don't**:
| # | Do | Don't |
|---|-----|-------|
| 1 | Always provide descriptive alt text | Leave alt empty without `role="presentation"` |
| 2 | Associate every input with a label | Use placeholder as sole label |
| 3 | Add `focus-visible` outline | Remove default focus ring without replacement |
| 4 | Ensure 4.5:1 minimum contrast ratio | Use low-contrast decorative text for content |
| 5 | Wrap in `@media (prefers-reduced-motion: no-preference)` | Force animations on all users |
| 6 | Use sequential heading levels | Skip levels for visual sizing |
---
### Category 2: Interaction (4 items)
| # | Check | Pattern to Detect | Severity |
|---|-------|--------------------|----------|
| 7 | cursor-pointer on clickable | Buttons/links without `cursor: pointer` | MEDIUM |
| 8 | Transitions 150-300ms | Duration outside 150-300ms range | LOW |
| 9 | Loading states | Async operations without loading indicator | MEDIUM |
| 10 | Error states | Async operations without error handling UI | HIGH |
**Do / Don't**:
| # | Do | Don't |
|---|-----|-------|
| 7 | Add `cursor: pointer` to all clickable elements | Leave default cursor on buttons |
| 8 | Use 150-300ms for micro-interactions | Use >500ms or <100ms transitions |
| 9 | Show skeleton/spinner during fetch | Leave blank screen while loading |
| 10 | Show user-friendly error message | Silently fail or show raw error |
---
### Category 3: Design Compliance (4 items)
| # | Check | Pattern to Detect | Severity |
|---|-------|--------------------|----------|
| 11 | No hardcoded colors | Hex values (`#XXXXXX`) outside tokens file | HIGH |
| 12 | No hardcoded spacing | Raw `px` values for margin/padding | MEDIUM |
| 13 | No emoji as icons | Unicode emoji (U+1F300-1F9FF) in UI code | HIGH |
| 14 | Dark mode support | No `prefers-color-scheme` or `.dark` class | MEDIUM |
**Do / Don't**:
| # | Do | Don't |
|---|-----|-------|
| 11 | Use `var(--color-*)` design tokens | Hardcode `#hex` values |
| 12 | Use `var(--space-*)` spacing tokens | Hardcode pixel values |
| 13 | Use proper SVG/icon library | Use emoji for functional icons |
| 14 | Support light/dark themes | Design for light mode only |
---
### Category 4: Layout (2 items)
| # | Check | Pattern to Detect | Severity |
|---|-------|--------------------|----------|
| 15 | Responsive breakpoints | No `md:`/`lg:`/`@media` queries | MEDIUM |
| 16 | No horizontal scroll | Fixed widths greater than viewport | HIGH |
**Do / Don't**:
| # | Do | Don't |
|---|-----|-------|
| 15 | Mobile-first responsive design | Desktop-only layout |
| 16 | Use relative/fluid widths | Set fixed pixel widths on containers |
---
### Check Execution Strategy
| Check Scope | Applies To | Method |
|-------------|-----------|--------|
| Per-file checks | Items 1-4, 7-8, 10-13, 16 | Run against each changed file individually |
| Global checks | Items 5-6, 9, 14-15 | Run against concatenated content of all files |
**Detection example** (check for hardcoded colors; `{3,8}` also catches 3-digit shorthand and 8-digit alpha hex, not just 6-digit values):
```bash
Grep(pattern="#[0-9a-fA-F]{3,8}\\b", path="<file_path>", output_mode="content", "-n"=true)
```
**Detection example** (check for missing alt text; note the `(?!...)` negative lookahead requires a PCRE2-capable engine — plain ripgrep syntax rejects it, so if unsupported, match `<img` tags first and filter out those containing `alt=` in a second pass):
```bash
Grep(pattern="<img\\s(?![^>]*alt=)", path="<file_path>", output_mode="content", "-n"=true)
```
## Phase 4: Validation
### Pass/Fail Criteria
| Condition | Result |
|-----------|--------|
| Zero CRITICAL + zero HIGH failures | PASS |
| Zero CRITICAL, some HIGH | CONDITIONAL (list fixes needed) |
| Any CRITICAL failure | FAIL |
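The routing above is a two-branch decision on severity counts. A sketch (function name is illustrative):

```python
def checklist_verdict(critical: int, high: int) -> str:
    """Map failure counts to the pass/fail criteria table."""
    if critical > 0:
        return "FAIL"
    if high > 0:
        return "CONDITIONAL"  # list the HIGH fixes needed
    return "PASS"
```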
### Output Format
```
## Pre-Delivery Checklist Results
- Total checks: <n>
- Passed: <n> / <total>
- Failed: <n>
### Failed Items
- [CRITICAL] #1 Images have alt text -- <file_path>
- [HIGH] #11 No hardcoded colors -- <file_path>:<line>
- [MEDIUM] #7 cursor-pointer on clickable -- <file_path>
### Recommendations
(Do/Don't guidance for each failed item)
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No frontend files to check | Report empty checklist, all checks N/A |
| File read error | Skip file, note in report |
| Regex match error | Skip check, note in report |
| Design tokens file not found | Skip items 11-12, adjust total |
@@ -1,113 +0,0 @@
# Role: fe-qa
Frontend quality assurance. 5-dimension review + Generator-Critic loop.
## Identity
- **Name**: `fe-qa` | **Prefix**: `QA-FE-*` | **Tag**: `[fe-qa]`
- **Type**: Frontend pipeline worker
- **Responsibility**: Context loading → 5-dimension review → GC feedback → Report
## Boundaries
### MUST
- Only process QA-FE-* tasks
- Execute 5-dimension review
- Support Generator-Critic loop (max 2 rounds)
- Provide actionable fix suggestions (Do/Don't format)
### MUST NOT
- Modify source code directly (review only)
- Contact other workers directly
- Mark pass when score below threshold
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| qa_fe_passed | → coordinator | All dimensions pass |
| qa_fe_result | → coordinator | Review complete (may have issues) |
| fix_required | → coordinator | Critical issues found |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/pre-delivery-checklist.md | CSS-level delivery checks |
| ccw cli --tool gemini --mode analysis | Frontend code review |
| ccw cli --tool codex --mode review | Git-aware code review |
---
## Review Dimensions
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Code Quality | 25% | TypeScript types, component structure, error handling |
| Accessibility | 25% | Semantic HTML, ARIA, keyboard nav, contrast, focus-visible |
| Design Compliance | 20% | Token usage, no hardcoded colors, no emoji icons |
| UX Best Practices | 15% | Loading/error/empty states, cursor-pointer, responsive |
| Pre-Delivery | 15% | No console.log, dark mode, i18n readiness |
---
## Phase 2: Context Loading
**Inputs**: design tokens, design intelligence, shared memory, previous QA results (for GC round tracking), changed frontend files via git diff.
Determine GC round from previous QA results count. Max 2 rounds.
---
## Phase 3: 5-Dimension Review
For each changed frontend file, check against all 5 dimensions. Score each dimension 0-10, deducting for issues found.
**Scoring deductions**:
| Severity | Deduction |
|----------|-----------|
| High | -2 to -3 |
| Medium | -1 to -1.5 |
| Low | -0.5 |
**Overall score** = weighted sum of dimension scores.
**Verdict routing**:
| Condition | Verdict |
|-----------|---------|
| Score ≥ 8 AND no critical issues | PASS |
| GC round ≥ max AND score ≥ 6 | PASS_WITH_WARNINGS |
| GC round ≥ max AND score < 6 | FAIL |
| Otherwise | NEEDS_FIX |
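The weighted scoring and verdict routing above can be sketched as follows (dimension keys and thresholds mirror the tables; the function names are illustrative):

```python
WEIGHTS = {
    "code_quality": 0.25,
    "accessibility": 0.25,
    "design_compliance": 0.20,
    "ux_best_practices": 0.15,
    "pre_delivery": 0.15,
}
MAX_GC_ROUNDS = 2

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores (each 0-10)."""
    return sum(WEIGHTS[d] * s for d, s in dimension_scores.items())

def verdict(score: float, critical_count: int, gc_round: int) -> str:
    """Route to a verdict per the table: PASS, PASS_WITH_WARNINGS, FAIL, NEEDS_FIX."""
    if score >= 8 and critical_count == 0:
        return "PASS"
    if gc_round >= MAX_GC_ROUNDS:
        return "PASS_WITH_WARNINGS" if score >= 6 else "FAIL"
    return "NEEDS_FIX"
```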
---
## Phase 4: Report
Write audit to `<session-folder>/qa/audit-fe-<task>-r<round>.json`. Update wisdom and shared memory.
**Report**: round, verdict, overall score, dimension scores, critical issues with Do/Don't format, action required (if NEEDS_FIX).
---
## Generator-Critic Loop
Orchestrated by coordinator:
```
Round 1: DEV-FE-001 → QA-FE-001
if NEEDS_FIX → coordinator creates DEV-FE-002 + QA-FE-002
Round 2: DEV-FE-002 → QA-FE-002
if still NEEDS_FIX → PASS_WITH_WARNINGS or FAIL (max 2)
```
**Convergence**: score ≥ 8 AND critical_count = 0
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No changed files | Report empty, score N/A |
| Design tokens not found | Skip design compliance, adjust weights |
| Max GC rounds exceeded | Force verdict |
@@ -1,172 +0,0 @@
# Command: explore
## Purpose
Complexity-driven codebase exploration using shared explore subagent with centralized cache. v4 optimized: checks cache before exploring, avoids duplicate work across roles.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Task description | PLAN-* task subject/description | Yes |
| Session folder | Task description `Session:` field | Yes |
| Spec context | `<session-folder>/spec/` (if exists) | No |
| Plan directory | `<session-folder>/plan/` | Yes (create if missing) |
| Project tech | `.workflow/project-tech.json` | No |
| Cache index | `<session-folder>/explorations/cache-index.json` | No (create if missing) |
## Phase 3: Exploration
### Complexity Assessment
Score the task description against keyword indicators:
| Indicator | Keywords | Score |
|-----------|----------|-------|
| Structural change | refactor, architect, restructure, modular | +2 |
| Multi-scope | multiple, across, cross-cutting | +2 |
| Integration | integrate, api, database | +1 |
| Non-functional | security, performance, auth | +1 |
**Complexity routing**:
| Score | Level | Strategy | Angle Count |
|-------|-------|----------|-------------|
| 0-1 | Low | ACE semantic search only | 1 |
| 2-3 | Medium | Explore subagent per angle | 2-3 |
| 4+ | High | Explore subagent per angle | 3-5 |
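The indicator scoring and routing above can be sketched as a keyword scan (indicator grouping follows the tables; the substring matching is a simplification):

```python
INDICATORS = [
    (["refactor", "architect", "restructure", "modular"], 2),  # structural change
    (["multiple", "across", "cross-cutting"], 2),              # multi-scope
    (["integrate", "api", "database"], 1),                     # integration
    (["security", "performance", "auth"], 1),                  # non-functional
]

def assess_complexity(task: str) -> tuple[int, str]:
    """Score the task description and map it to a complexity level."""
    text = task.lower()
    score = sum(points for keywords, points in INDICATORS
                if any(k in text for k in keywords))
    level = "Low" if score <= 1 else "Medium" if score <= 3 else "High"
    return score, level
```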
### Angle Presets
Select preset by dominant keyword match, then take first N angles per complexity:
| Preset | Trigger Keywords | Angles (priority order) |
|--------|-----------------|------------------------|
| architecture | refactor, architect, restructure, modular | architecture, dependencies, modularity, integration-points |
| security | security, auth, permission, access | security, auth-patterns, dataflow, validation |
| performance | performance, slow, optimize, cache | performance, bottlenecks, caching, data-access |
| bugfix | fix, bug, error, issue, broken | error-handling, dataflow, state-management, edge-cases |
| feature | (default) | patterns, integration-points, testing, dependencies |
### Cache-First Strategy (v4 new)
Before launching any exploration, check the shared cache:
```
1. Read <session-folder>/explorations/cache-index.json
2. For each selected angle:
a. If angle exists in cache -> SKIP (reuse cached result)
b. If angle not in cache -> explore via subagent
3. Only explore uncached angles
```
This avoids duplicating work done by analyst's `general` exploration or any prior role.
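The cache-check steps above can be sketched as follows. This assumes `cache-index.json` is a flat `{angle: file}` map, which is not pinned down by the spec, so treat the schema here as illustrative:

```python
import json
from pathlib import Path

def angles_to_explore(session: Path, selected: list[str]) -> list[str]:
    """Return only the angles that lack a usable cached exploration."""
    index_path = session / "explorations" / "cache-index.json"
    cached = {}
    if index_path.exists():
        cached = json.loads(index_path.read_text())  # assumed {angle: file} map
    uncached = []
    for angle in selected:
        result = session / "explorations" / f"explore-{angle}.json"
        if angle in cached and result.exists():
            continue  # reuse the cached result, skip exploration
        uncached.append(angle)
    return uncached
```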
### Low Complexity: Direct Search
```bash
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<task-description>")
```
Transform results into exploration JSON and write to `<session-folder>/explorations/explore-general.json`.
Update cache-index.json.
**ACE failure fallback**:
```bash
Bash(command="rg -l '<keywords>' --type ts", timeout=30000)
```
### Medium/High Complexity: Explore Subagent per Angle
For each uncached angle, call the shared explore subagent:
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore: <angle>",
prompt: "Explore codebase for: <task-description>
Focus angle: <angle>
Keywords: <relevant-keywords>
Session folder: <session-folder>
## Cache Check
1. Read <session-folder>/explorations/cache-index.json (if exists)
2. Look for entry with matching angle
3. If found AND file exists -> read cached result, return summary
4. If not found -> proceed to exploration
## Exploration Focus
<angle-focus-from-table-below>
## Output
Write JSON to: <session-folder>/explorations/explore-<angle>.json
Update cache-index.json with new entry
Each file in relevant_files MUST have: rationale (>10 chars), role, discovery_source, key_symbols"
})
```
### Angle Focus Guide
| Angle | Focus Points |
|-------|-------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities |
| modularity | Module interfaces, separation of concerns, extraction opportunities |
| integration-points | API endpoints, data flow between modules, event systems, service integrations |
| security | Auth/authz logic, input validation, sensitive data handling, middleware |
| auth-patterns | Auth flows (login/refresh), session management, token validation, permissions |
| dataflow | Data transformations, state propagation, validation points, mutation paths |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging |
| patterns | Code conventions, design patterns, naming conventions, best practices |
| testing | Test files, coverage gaps, test patterns (unit/integration/e2e), mocking |
### Explorations Manifest
After all explorations complete (both cached and new), write manifest to `<session-folder>/plan/explorations-manifest.json`:
```json
{
"task_description": "<description>",
"complexity": "<Low|Medium|High>",
"exploration_count": "<N>",
"cached_count": "<M>",
"explorations": [
{ "angle": "<angle>", "file": "../explorations/explore-<angle>.json", "source": "cached|new" }
]
}
```
## Phase 4: Validation
### Output Files
```
<session-folder>/explorations/ (shared cache)
+- cache-index.json (updated)
+- explore-<angle>.json (per angle)
<session-folder>/plan/
+- explorations-manifest.json (summary referencing shared cache)
```
### Success Criteria
| Check | Criteria | Required |
|-------|----------|----------|
| At least 1 exploration | Non-empty exploration file exists | Yes |
| Manifest written | explorations-manifest.json exists | Yes |
| File roles assigned | Every relevant_file has role + rationale | Yes |
| Cache updated | cache-index.json reflects all explorations | Yes |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Single exploration agent fails | Skip angle, remove from manifest, continue |
| All explorations fail | Proceed to plan generation with task description only |
| ACE search fails (Low) | Fallback to ripgrep keyword search |
| Schema file not found | Use inline schema from Output section |
| Cache index corrupt | Reset cache-index.json to empty, re-explore all |
@@ -1,139 +0,0 @@
# Role: planner
Multi-angle code exploration and structured implementation planning.
Uses **Inner Loop** pattern for consistency (currently single PLAN-* task, extensible).
## Identity
- **Name**: `planner` | **Prefix**: `PLAN-*` | **Tag**: `[planner]`
- **Mode**: Inner Loop
- **Responsibility**: Complexity assessment -> Code exploration (shared cache) -> Plan generation -> Approval
## Boundaries
### MUST
- Only process PLAN-* tasks
- Assess complexity before planning
- Use shared explore subagent for codebase exploration (cache-aware)
- Generate plan.json + .task/TASK-*.json
- Load spec context in full-lifecycle mode
- Submit plan for coordinator approval
### MUST NOT
- Create tasks for other roles
- Implement code
- Modify spec documents
- Skip complexity assessment
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| plan_ready | -> coordinator | Plan complete |
| plan_revision | -> coordinator | Plan revised per feedback |
| error | -> coordinator | Exploration or planning failure |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/explore.md | Complexity-driven exploration via shared explore subagent |
| Explore subagent | Per-angle exploration (shared cache) |
| cli-lite-planning-agent | Plan generation |
---
## Phase 1.5: Load Spec Context (Full-Lifecycle)
If `<session-folder>/spec/` exists -> load requirements/_index.md, architecture/_index.md, epics/_index.md, spec-config.json. Otherwise -> impl-only mode.
**Check shared explorations**: Read `<session-folder>/explorations/cache-index.json` to see if analyst already cached useful explorations. Reuse rather than re-explore.
---
## Phase 2: Multi-Angle Exploration
**Objective**: Explore codebase to inform planning.
**Complexity routing**:
| Complexity | Criteria | Strategy |
|------------|----------|----------|
| Low | < 200 chars, no refactor/architecture keywords | ACE semantic search only |
| Medium | 200-500 chars or moderate scope | 2-3 angle explore subagent |
| High | > 500 chars, refactor/architecture, multi-module | 3-5 angle explore subagent |
Delegate to `commands/explore.md` for angle selection and execution.
**Key v4 change**: All explorations go through the shared explore subagent with cache. Before launching an exploration for an angle, check cache-index.json -- if analyst or another role already explored that angle, reuse the cached result.
---
## Phase 3: Plan Generation
**Objective**: Generate structured implementation plan.
| Complexity | Strategy |
|------------|----------|
| Low | Direct planning -> single TASK-001 with plan.json |
| Medium/High | cli-lite-planning-agent with exploration results |
**Agent call** (Medium/High):
```
Task({
subagent_type: "cli-lite-planning-agent",
run_in_background: false,
description: "Generate implementation plan",
prompt: "Generate plan.
Output: <plan-dir>/plan.json + <plan-dir>/.task/TASK-*.json
Schema: cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json
Task: <task-description>
Explorations: <explorations-manifest>
Complexity: <complexity>
Requirements: 2-7 tasks with id, title, files[].change, convergence.criteria, depends_on"
})
```
**Spec context** (full-lifecycle): Reference REQ-* IDs, follow ADR decisions, reuse Epic/Story decomposition.
---
## Phase 4: Submit for Approval
1. Read plan.json and TASK-*.json
2. Report to coordinator: complexity, task count, task list, approach, plan location
3. Wait for response: approved -> complete; revision -> update and resubmit
**Session files**:
```
<session-folder>/explorations/ (shared cache, written by explore subagent)
+-- cache-index.json
+-- explore-<angle>.json
<session-folder>/plan/
+-- explorations-manifest.json (summary, references ../explorations/)
+-- plan.json
+-- .task/TASK-*.json
```
---
## Phase 5: Report (Inner Loop)
Currently the planner handles only PLAN-001, so it proceeds directly to Phase 5-F (Final Report).
If future extensions add multiple PLAN-* tasks, Phase 5-L loop activates automatically:
- Phase 5-L: Mark task completed, accumulate summary, loop back to Phase 1
- Phase 5-F: All PLAN-* done, send final report to coordinator with full summary
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Exploration agent failure | Plan from description only |
| Planning agent failure | Fallback to direct planning |
| Plan rejected 3+ times | Notify coordinator, suggest alternative |
| Schema not found | Use basic structure |
| Cache index corrupt | Clear cache, re-explore all angles |
@@ -1,163 +0,0 @@
# Command: code-review
## Purpose
4-dimension code review analyzing quality, security, architecture, and requirements compliance. Produces a verdict (BLOCK/CONDITIONAL/APPROVE) with categorized findings.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Plan file | `<session_folder>/plan/plan.json` | Yes |
| Git diff | `git diff HEAD~1` or `git diff --cached` | Yes |
| Modified files | From git diff --name-only | Yes |
| Test results | Tester output (if available) | No |
| Wisdom | `<session_folder>/wisdom/` | No |
## Phase 3: 4-Dimension Review
### Dimension Overview
| Dimension | Focus | Weight |
|-----------|-------|--------|
| Quality | Code correctness, type safety, clean code | Equal |
| Security | Vulnerability patterns, secret exposure | Equal |
| Architecture | Module structure, coupling, file size | Equal |
| Requirements | Acceptance criteria coverage, completeness | Equal |
---
### Dimension 1: Quality
Scan each modified file for quality anti-patterns.
| Severity | Pattern | What to Detect |
|----------|---------|----------------|
| Critical | Empty catch blocks | `catch(e) {}` with no handling |
| High | @ts-ignore without justification | Suppression with an explanation comment shorter than 10 chars |
| High | `any` type in public APIs | `any` outside comments and generic definitions |
| High | console.log in production | `console.(log\|debug\|info)` outside test files |
| Medium | Magic numbers | Numeric literals of 2+ digits outside const declarations and comments |
| Medium | Duplicate code | Identical lines (>30 chars) appearing 3+ times |
**Detection example** (Grep for console statements):
```bash
Grep(pattern="console\\.(log|debug|info)", path="<file_path>", output_mode="content", "-n"=true)
```
---
### Dimension 2: Security
Scan for vulnerability patterns across all modified files.
| Severity | Pattern | What to Detect |
|----------|---------|----------------|
| Critical | Hardcoded secrets | `api_key=`, `password=`, `secret=`, `token=` with string values (20+ chars) |
| Critical | SQL injection | String concatenation in `query()`/`execute()` calls |
| High | eval/exec usage | `eval()`, `new Function()`, `setTimeout(string)` |
| High | XSS vectors | `innerHTML`, `dangerouslySetInnerHTML` |
| Medium | Insecure random | `Math.random()` in security context (token/key/password/session) |
| Low | Missing input validation | Functions with parameters but no validation in first 5 lines |
---
### Dimension 3: Architecture
Assess structural health of modified files.
| Severity | Pattern | What to Detect |
|----------|---------|----------------|
| Critical | Circular dependencies | File A imports B, B imports A |
| High | Excessive parent imports | Import traverses >2 parent directories (`../../../`) |
| Medium | Large files | Files exceeding 500 lines |
| Medium | Tight coupling | >5 imports from same base module |
| Medium | Long functions | Functions exceeding 50 lines |
| Medium | Module boundary changes | Modifications to index.ts/index.js files |
**Detection example** (check for deep parent imports):
```bash
Grep(pattern="from\\s+['\"](\\.\\./){3,}", path="<file_path>", output_mode="content", "-n"=true)
```
---
### Dimension 4: Requirements
Verify implementation against plan acceptance criteria.
| Severity | Check | Method |
|----------|-------|--------|
| High | Unmet acceptance criteria | Extract criteria from plan, check keyword overlap (threshold: 70%) |
| High | Missing error handling | Plan mentions "error handling" but no try/catch in code |
| Medium | Partially met criteria | Keyword overlap 40-69% |
| Medium | Missing tests | Plan mentions "test" but no test files in modified set |
**Verification flow**:
1. Read plan file → extract acceptance criteria section
2. For each criterion → extract keywords (4+ char meaningful words)
3. Search modified files for keyword matches
4. Score: >= 70% match = met, 40-69% = partial, < 40% = unmet
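The verification flow above can be sketched as a keyword-overlap check (function names are illustrative; real keyword extraction might also strip stop words):

```python
import re

def keyword_overlap(criterion: str, code_text: str) -> float:
    """Fraction of the criterion's meaningful keywords (4+ chars) found in code."""
    keywords = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", criterion)}
    if not keywords:
        return 1.0
    code = code_text.lower()
    return sum(1 for k in keywords if k in code) / len(keywords)

def classify(criterion: str, code_text: str) -> str:
    """Map overlap ratio to met / partial / unmet per the scoring thresholds."""
    ratio = keyword_overlap(criterion, code_text)
    if ratio >= 0.7:
        return "met"
    if ratio >= 0.4:
        return "partial"
    return "unmet"
```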
---
### Verdict Routing
| Verdict | Criteria | Action |
|---------|----------|--------|
| BLOCK | Any critical-severity issues found | Must fix before merge |
| CONDITIONAL | High or medium issues, no critical | Should address, can merge with tracking |
| APPROVE | Only low issues or none | Ready to merge |
## Phase 4: Validation
### Report Format
The review report follows this structure:
```
# Code Review Report
**Verdict**: <BLOCK|CONDITIONAL|APPROVE>
## Blocking Issues (if BLOCK)
- **<type>** (<file>:<line>): <message>
## Review Dimensions
### Quality Issues
**CRITICAL** (<count>)
- <message> (<file>:<line>)
### Security Issues
(same format per severity)
### Architecture Issues
(same format per severity)
### Requirements Issues
(same format per severity)
## Recommendations
1. <actionable recommendation>
```
### Summary Counts
| Field | Description |
|-------|-------------|
| Total issues | Sum across all dimensions and severities |
| Critical count | Must be 0 for APPROVE |
| Blocking issues | Listed explicitly in report header |
| Dimensions covered | Must be 4/4 |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Plan file not found | Skip requirements dimension, note in report |
| Git diff empty | Report no changes to review |
| File read fails | Skip file, note in report |
| No modified files | Report empty review |
@@ -1,202 +0,0 @@
# Command: spec-quality
## Purpose
5-dimension spec quality check with weighted scoring, quality gate determination, and readiness report generation.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Spec documents | `<session_folder>/spec/` (all .md files) | Yes |
| Original requirements | Product brief objectives section | Yes |
| Quality gate config | specs/quality-gates.md | No |
| Session folder | Task description `Session:` field | Yes |
**Spec document phases** (matched by filename/directory):
| Phase | Expected Path | Required |
|-------|--------------|---------|
| product-brief | spec/product-brief.md | Yes |
| prd | spec/requirements/*.md | Yes |
| architecture | spec/architecture/_index.md + ADR-*.md | Yes |
| user-stories | spec/epics/*.md | Yes |
| implementation-plan | plan/plan.json | No (impl-only/full-lifecycle) |
| test-strategy | spec/test-strategy.md | No (optional, not generated by pipeline) |
## Phase 3: 5-Dimension Scoring
### Dimension Weights
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All required sections present with substance |
| Consistency | 20% | Terminology, format, references, naming |
| Traceability | 25% | Goals -> Reqs -> Components -> Stories chain |
| Depth | 20% | AC testable, ADRs justified, stories estimable |
| Coverage | 10% | Original requirements mapped to spec |
---
### Dimension 1: Completeness (25%)
Check each spec document for required sections.
**Required sections by phase**:
| Phase | Required Sections |
|-------|------------------|
| product-brief | Vision Statement, Problem Statement, Target Audience, Success Metrics, Constraints |
| prd | Goals, Requirements, User Stories, Acceptance Criteria, Non-Functional Requirements |
| architecture | System Overview, Component Design, Data Models, API Specifications, Technology Stack |
| user-stories | Story List, Acceptance Criteria, Priority, Estimation |
| implementation-plan | Task Breakdown, Dependencies, Timeline, Resource Allocation |
> **Note**: `test-strategy` is optional — skip scoring if `spec/test-strategy.md` is absent. Do not penalize completeness score for missing optional phases.
**Scoring formula**:
- Section present: 50% credit
- Section has substantial content (>100 chars beyond header): additional 50% credit
- Per-document score = (present_ratio * 50) + (substantial_ratio * 50)
- Overall = average across all documents
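The scoring formula above, sketched for a single document (the `sections` map of heading to body text is an assumed intermediate representation):

```python
def document_completeness(sections: dict[str, str], required: list[str]) -> float:
    """Per-document score: 50% credit for presence, 50% for substance (>100 chars)."""
    if not required:
        return 100.0
    present = [s for s in required if s in sections]
    substantial = [s for s in present if len(sections[s].strip()) > 100]
    present_ratio = len(present) / len(required)
    substantial_ratio = len(substantial) / len(required)
    return present_ratio * 50 + substantial_ratio * 50
```

The overall completeness dimension is then the average of this score across all spec documents.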
---
### Dimension 2: Consistency (20%)
Check cross-document consistency across four areas.
| Area | What to Check | Severity |
|------|--------------|----------|
| Terminology | Same concept with different casing/spelling across docs | Medium |
| Format | Mixed header styles at same level across docs | Low |
| References | Broken links (`./` or `../` paths that don't resolve) | High |
| Naming | Mixed naming conventions (camelCase vs snake_case vs kebab-case) | Low |
**Scoring**:
- Penalty weights: High = 10, Medium = 5, Low = 2
- Score = max(0, 100 - total_penalty)
---
### Dimension 3: Traceability (25%)
Build and validate traceability chains: Goals -> Requirements -> Components -> Stories.
**Chain building flow**:
1. Extract goals from product-brief (pattern: `- Goal: <text>`)
2. Extract requirements from PRD (pattern: `- REQ-NNN: <text>`)
3. Extract components from architecture (pattern: `- Component: <text>`)
4. Extract stories from user-stories (pattern: `- US-NNN: <text>`)
5. Link by keyword overlap (threshold: 30% keyword match)
**Chain completeness**: A chain is complete when a goal links to at least one requirement, one component, and one story.
**Scoring**: (complete chains / total chains) * 100
**Weak link identification**: For each incomplete chain, report which link is missing (no requirements, no components, or no stories).
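A sketch of the scoring and weak-link identification, assuming each built chain is a dict with `goal`, `requirements`, `components`, and `stories` keys (an illustrative intermediate shape, not mandated by the command):

```python
def traceability_score(chains: list[dict]) -> tuple[float, list[str]]:
    """Score chain completeness and name the missing link of each broken chain."""
    if not chains:
        return 0.0, []
    weak_links = []
    complete = 0
    for chain in chains:
        if not chain.get("requirements"):
            weak_links.append(f"{chain['goal']}: no requirements")
        elif not chain.get("components"):
            weak_links.append(f"{chain['goal']}: no components")
        elif not chain.get("stories"):
            weak_links.append(f"{chain['goal']}: no stories")
        else:
            complete += 1
    return complete / len(chains) * 100, weak_links
```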
---
### Dimension 4: Depth (20%)
Assess the analytical depth of spec content across four sub-dimensions.
| Sub-dimension | Source | Testable Criteria |
|---------------|--------|-------------------|
| AC Testability | PRD / User Stories | Contains measurable verbs (display, return, validate) or Given/When/Then or numbers |
| ADR Justification | Architecture | Contains rationale, alternatives, consequences, or trade-offs |
| Story Estimability | User Stories | Has "As a/I want/So that" + AC, or explicit estimate |
| Technical Detail | Architecture + Plan | Contains code blocks, API terms, HTTP methods, DB terms |
**Scoring**: Average of sub-dimension scores (each 0-100%)
---
### Dimension 5: Coverage (10%)
Map original requirements to spec requirements.
**Flow**:
1. Extract original requirements from product-brief objectives section
2. Extract spec requirements from all documents (pattern: `- REQ-NNN:` or `- Requirement:` or `- Feature:`)
3. For each original requirement, check keyword overlap with any spec requirement (threshold: 40%)
4. Score = (covered_count / total_original) * 100
---
### Quality Gate Decision Table
| Gate | Criteria | Message |
|------|----------|---------|
| PASS | Overall score >= 80% AND coverage >= 70% | Ready for implementation |
| FAIL | Overall score < 60% OR coverage < 50% | Major revisions required |
| REVIEW | All other cases | Improvements needed, may proceed with caution |
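The decision table above, with PASS and FAIL checked before the catch-all REVIEW (function name is illustrative):

```python
def quality_gate(overall: float, coverage: float) -> tuple[str, str]:
    """Map overall and coverage percentages to a gate and message."""
    if overall >= 80 and coverage >= 70:
        return "PASS", "Ready for implementation"
    if overall < 60 or coverage < 50:
        return "FAIL", "Major revisions required"
    return "REVIEW", "Improvements needed, may proceed with caution"
```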
## Phase 4: Validation
### Readiness Report Format
Write to `<session_folder>/spec/readiness-report.md`:
```
# Specification Readiness Report
**Generated**: <timestamp>
**Overall Score**: <score>%
**Quality Gate**: <PASS|REVIEW|FAIL> - <message>
**Recommended Action**: <action>
## Dimension Scores
| Dimension | Score | Weight | Weighted Score |
|-----------|-------|--------|----------------|
| Completeness | <n>% | 25% | <n>% |
| Consistency | <n>% | 20% | <n>% |
| Traceability | <n>% | 25% | <n>% |
| Depth | <n>% | 20% | <n>% |
| Coverage | <n>% | 10% | <n>% |
## Completeness Analysis
(per-phase breakdown: sections present/expected, missing sections)
## Consistency Analysis
(issues by area: terminology, format, references, naming)
## Traceability Analysis
(complete chains / total, weak links)
## Depth Analysis
(per sub-dimension scores)
## Requirement Coverage
(covered / total, uncovered requirements list)
```
### Spec Summary Format
Write to `<session_folder>/spec/spec-summary.md`:
```
# Specification Summary
**Overall Quality Score**: <score>%
**Quality Gate**: <gate>
## Documents Reviewed
(per document: phase, path, size, section list)
## Key Findings
### Strengths (dimensions scoring >= 80%)
### Areas for Improvement (dimensions scoring < 70%)
### Recommendations
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Spec folder empty | FAIL gate, report no documents found |
| Missing phase document | Score 0 for that phase in completeness, note in report |
| No original requirements found | Score coverage at 100% (nothing to cover) |
| Broken references | Flag in consistency, do not fail entire review |
@@ -1,150 +0,0 @@
# Role: reviewer
Dual-mode review: code review (REVIEW-*) and spec quality validation (QUALITY-*). QUALITY tasks include inline discuss (DISCUSS-006) for final sign-off.
## Identity
- **Name**: `reviewer` | **Prefix**: `REVIEW-*` + `QUALITY-*` | **Tag**: `[reviewer]`
- **Responsibility**: Branch by Prefix -> Review/Score -> **Inline Discuss (QUALITY only)** -> Report
## Boundaries
### MUST
- Process REVIEW-* and QUALITY-* tasks
- Generate readiness-report.md for QUALITY tasks
- Cover all required dimensions per mode
- Call discuss subagent for DISCUSS-006 after QUALITY-001
### MUST NOT
- Create tasks
- Modify source code
- Skip quality dimensions
- Approve without verification
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| review_result | -> coordinator | Code review complete |
| quality_result | -> coordinator | Spec quality + discuss complete |
| fix_required | -> coordinator | Critical issues found |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/code-review.md | 4-dimension code review |
| commands/spec-quality.md | 5-dimension spec quality |
| discuss subagent | Inline DISCUSS-006 (QUALITY tasks only) |
---
## Mode Detection
| Task Prefix | Mode | Dimensions | Inline Discuss |
|-------------|------|-----------|---------------|
| REVIEW-* | Code Review | quality, security, architecture, requirements | None |
| QUALITY-* | Spec Quality | completeness, consistency, traceability, depth, coverage | DISCUSS-006 |
---
## Code Review (REVIEW-*)
**Inputs**: Plan file, git diff, modified files, test results (if available)
**4 dimensions** (delegate to commands/code-review.md):
| Dimension | Critical Issues |
|-----------|----------------|
| Quality | Empty catch, any in public APIs, @ts-ignore, console.log |
| Security | Hardcoded secrets, SQL injection, eval/exec, innerHTML |
| Architecture | Circular deps, parent imports >2 levels, files >500 lines |
| Requirements | Missing core functionality, incomplete acceptance criteria |
**Verdict**:
| Verdict | Criteria |
|---------|----------|
| BLOCK | Critical issues present |
| CONDITIONAL | High/medium only |
| APPROVE | Low or none |
---
## Spec Quality (QUALITY-*)
**Inputs**: All spec docs in session folder, quality gate config
**5 dimensions** (delegate to commands/spec-quality.md):
| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All sections present with substance |
| Consistency | 20% | Terminology, format, references |
| Traceability | 25% | Goals -> Reqs -> Arch -> Stories chain |
| Depth | 20% | AC testable, ADRs justified, stories estimable |
| Coverage | 10% | Original requirements mapped |
**Quality gate**:
| Gate | Criteria |
|------|----------|
| PASS | Score >= 80% AND coverage >= 70% |
| REVIEW | Score 60-79% OR coverage 50-69% |
| FAIL | Score < 60% OR coverage < 50% |
**Artifacts**: readiness-report.md + spec-summary.md
### Inline Discuss (DISCUSS-006) -- QUALITY tasks only
After generating readiness-report.md, call discuss subagent for final sign-off:
```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss DISCUSS-006",
prompt: `## Multi-Perspective Critique: DISCUSS-006
### Input
- Artifact: <session-folder>/spec/readiness-report.md
- Round: DISCUSS-006
- Perspectives: product, technical, quality, risk, coverage
- Session: <session-folder>
- Discovery Context: <session-folder>/spec/discovery-context.json
<rest of discuss subagent prompt from subagents/discuss-subagent.md>`
})
```
**Discuss result handling**:
| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include as final endorsement in quality report, proceed to Phase 5 |
| consensus_blocked | HIGH | **DISCUSS-006 is final sign-off gate**. Phase 5 SendMessage includes structured format. Coordinator always pauses for user decision. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed to Phase 5. Coordinator logs to wisdom. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
**consensus_blocked SendMessage format**:
```
[reviewer] QUALITY-001 complete. Discuss DISCUSS-006: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <session-folder>/spec/readiness-report.md
Discussion: <session-folder>/discussions/DISCUSS-006-discussion.md
```
> **Note**: DISCUSS-006 HIGH always triggers user pause regardless of revision count, since this is the spec->impl gate.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Missing context | Request from coordinator |
| Invalid mode | Abort with error |
| Analysis failure | Retry, then fallback template |
| Discuss subagent fails | Proceed without final discuss, log warning |
@@ -1,152 +0,0 @@
# Command: validate
## Purpose
Test-fix cycle with strategy engine: detect framework, run tests, classify failures, select fix strategy, iterate until pass rate target is met or max iterations exhausted.
## Constants
| Constant | Value | Description |
|----------|-------|-------------|
| MAX_ITERATIONS | 10 | Maximum test-fix cycle attempts |
| PASS_RATE_TARGET | 95% | Minimum pass rate to succeed |
| AFFECTED_TESTS_FIRST | true | Run affected tests before full suite |
## Phase 2: Context Loading
Load from task description and executor output:
| Input | Source | Required |
|-------|--------|----------|
| Framework | Auto-detected (see below) | Yes |
| Modified files | Executor task output / git diff | Yes |
| Affected tests | Derived from modified files | No |
| Session folder | Task description `Session:` field | Yes |
| Wisdom | `<session_folder>/wisdom/` | No |
**Framework detection** (priority order):
| Priority | Method | Check |
|----------|--------|-------|
| 1 | package.json devDependencies | vitest, jest, mocha, pytest |
| 2 | package.json scripts.test | Command contains framework name |
| 3 | Config file existence | vitest.config.*, jest.config.*, pytest.ini |
**Affected test discovery** from modified files:
- For each modified file `<name>.<ext>`, search:
`<name>.test.ts`, `<name>.spec.ts`, `tests/<name>.test.ts`, `__tests__/<name>.test.ts`
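A minimal sketch of that search order, assuming TypeScript sources; the function name is illustrative:

```python
from pathlib import Path

def affected_tests(modified_files: list[str]) -> list[str]:
    """Map each modified source file to the test files that exist,
    checking the four search locations listed above in order."""
    found = []
    for modified in modified_files:
        src = Path(modified)
        name, parent = src.stem, src.parent
        for candidate in (
            parent / f"{name}.test.ts",
            parent / f"{name}.spec.ts",
            parent / "tests" / f"{name}.test.ts",
            parent / "__tests__" / f"{name}.test.ts",
        ):
            if candidate.exists():
                found.append(str(candidate))
    return found
```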
## Phase 3: Test-Fix Cycle
### Test Command Table
| Framework | Affected Tests | Full Suite |
|-----------|---------------|------------|
| vitest | `vitest run <files> --reporter=verbose` | `vitest run --reporter=verbose` |
| jest | `jest <files> --no-coverage --verbose` | `jest --no-coverage --verbose` |
| mocha | `mocha <files> --reporter spec` | `mocha --reporter spec` |
| pytest | `pytest <files> -v --tb=short` | `pytest -v --tb=short` |
### Iteration Flow
```
Iteration 1
├─ Run affected tests (or full suite if none)
├─ Parse results → pass rate
├─ Pass rate >= 95%?
│ ├─ YES + affected-only → run full suite to confirm
│ │ ├─ Full suite passes → SUCCESS
│ │ └─ Full suite fails → continue with full results
│ └─ YES + full suite → SUCCESS
└─ NO → classify failures → select strategy → apply fixes
Iteration 2..10
├─ Re-run tests
├─ Track best pass rate across iterations
├─ Pass rate >= 95% → SUCCESS
├─ No failures to fix → STOP (anomaly)
└─ Failures remain → classify → select strategy → apply fixes
After iteration 10
└─ FAIL: max iterations reached, report best pass rate
```
**Progress update**: When iteration > 5, send progress to coordinator with current pass rate and iteration count.
### Strategy Selection Matrix
| Condition | Strategy | Behavior |
|-----------|----------|----------|
| Iteration <= 3 OR pass rate >= 80% | Conservative | Fix one failure at a time, highest severity first |
| Critical failures exist AND count < 5 | Surgical | Identify common error pattern, fix all matching occurrences |
| Pass rate < 50% OR iteration > 7 | Aggressive | Fix all critical + high failures in batch |
| Default (no other match) | Conservative | Safe fallback |
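Assuming the rows are evaluated top-to-bottom with first match winning (the table does not state a precedence, so row order is an assumption), the matrix reduces to:

```python
def select_strategy(iteration: int, pass_rate: float, critical_count: int) -> str:
    """First-match evaluation of the strategy selection matrix above."""
    if iteration <= 3 or pass_rate >= 0.80:
        return "conservative"
    if 0 < critical_count < 5:
        return "surgical"
    if pass_rate < 0.50 or iteration > 7:
        return "aggressive"
    return "conservative"  # safe fallback
```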
### Failure Classification Table
| Severity | Error Patterns |
|----------|---------------|
| Critical | SyntaxError, cannot find module, is not defined |
| High | Assertion mismatch (expected/received), toBe/toEqual failures |
| Medium | Timeout, async errors |
| Low | Warnings, deprecation notices |
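The classification can be implemented with compiled regex patterns matched in severity order. The patterns below are loose approximations of the error strings in the table, not an exhaustive set:

```python
import re

# Compiled once at module load; first matching severity wins (highest first).
SEVERITY_PATTERNS = [
    ("critical", re.compile(r"SyntaxError|[Cc]annot find module|is not defined")),
    ("high",     re.compile(r"[Ee]xpected.*[Rr]eceived|toBe|toEqual")),
    ("medium",   re.compile(r"[Tt]imeout|async")),
    ("low",      re.compile(r"[Ww]arning|[Dd]eprecat")),
]

def classify_failure(message: str) -> str:
    """Return the severity of a single failure message."""
    for severity, pattern in SEVERITY_PATTERNS:
        if pattern.search(message):
            return severity
    return "low"
```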
### Fix Approach by Error Type
| Error Type | Pattern | Fix Approach |
|------------|---------|-------------|
| missing_import | "Cannot find module '<module>'" | Add import statement, resolve relative path from modified files |
| undefined_variable | "<name> is not defined" | Check source for renamed/moved exports, update reference |
| assertion_mismatch | "Expected: X, Received: Y" | Read test file at failure line, update expected value if behavior change is intentional |
| timeout | "Timeout" | Increase timeout or add async/await |
| syntax_error | "SyntaxError" | Read source at error line, fix syntax |
### Tool Call Example
Run tests with framework-appropriate command:
```bash
Bash(command="vitest run src/utils/__tests__/parser.test.ts --reporter=verbose", timeout=120000)
```
Read test file to analyze failure:
```bash
Read(file_path="<test_file_path>")
```
Apply fix via Edit:
```bash
Edit(file_path="<file>", old_string="<old>", new_string="<new>")
```
## Phase 4: Validation
### Success Criteria
| Check | Criteria | Required |
|-------|----------|----------|
| Pass rate | >= 95% | Yes |
| Full suite run | At least one full suite pass | Yes |
| No critical failures | Zero critical-severity failures remaining | Yes |
| Best pass rate tracked | Reported in final result | Yes |
### Result Routing
| Outcome | Message Type | Content |
|---------|-------------|---------|
| Pass rate >= target | test_result | Success, iterations count, full suite confirmed |
| Max iterations, pass rate < target | fix_required | Best pass rate, remaining failures, iteration count |
| No tests found | error | Framework detected but no test files |
| Framework not detected | error | Detection methods exhausted |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Framework not detected | Report error to coordinator, list detection attempts |
| No test files found | Report to coordinator, suggest manual test path |
| Test command fails (exit code != 0/1) | Check stderr for environment issues, retry once |
| Fix application fails | Skip fix, try next iteration with different strategy |
| Infinite loop (same failures repeat) | Abort after 3 identical result sets |


@@ -1,108 +0,0 @@
# Role: tester
Adaptive test execution with fix cycles and quality gates.
## Identity
- **Name**: `tester` | **Prefix**: `TEST-*` | **Tag**: `[tester]`
- **Responsibility**: Detect Framework → Run Tests → Fix Cycle → Report
## Boundaries
### MUST
- Only process TEST-* tasks
- Detect test framework before running
- Run affected tests before full suite
- Use strategy engine for fix cycles
### MUST NOT
- Create tasks
- Contact other workers directly
- Modify production code beyond test fixes
- Skip framework detection
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| test_result | → coordinator | Tests pass or final result |
| fix_required | → coordinator | Failures after max iterations |
| error | → coordinator | Framework not detected |
## Toolbox
| Tool | Purpose |
|------|---------|
| commands/validate.md | Test-fix cycle with strategy engine |
---
## Phase 2: Framework Detection & Test Discovery
**Framework detection** (priority order):
| Priority | Method | Frameworks |
|----------|--------|-----------|
| 1 | package.json devDependencies | vitest, jest, mocha, pytest |
| 2 | package.json scripts.test | vitest, jest, mocha, pytest |
| 3 | Config files | vitest.config.*, jest.config.*, pytest.ini |
**Affected test discovery** from executor's modified files:
- Search variants: `<name>.test.ts`, `<name>.spec.ts`, `tests/<name>.test.ts`, `__tests__/<name>.test.ts`
---
## Phase 3: Test Execution & Fix Cycle
**Config**: MAX_ITERATIONS=10, PASS_RATE_TARGET=95%, AFFECTED_TESTS_FIRST=true
Delegate to `commands/validate.md`:
1. Run affected tests → parse results
2. Pass rate met → run full suite
3. Failures → select strategy → fix → re-run → repeat
**Strategy selection**:
| Condition | Strategy | Behavior |
|-----------|----------|----------|
| Iteration ≤ 3 or pass ≥ 80% | Conservative | Fix one critical failure at a time |
| Critical failures < 5 | Surgical | Fix specific pattern everywhere |
| Pass < 50% or iteration > 7 | Aggressive | Fix all failures in batch |
**Test commands**:
| Framework | Affected | Full Suite |
|-----------|---------|------------|
| vitest | `vitest run <files>` | `vitest run` |
| jest | `jest <files> --no-coverage` | `jest --no-coverage` |
| pytest | `pytest <files> -v` | `pytest -v` |
---
## Phase 4: Result Analysis
**Failure classification**:
| Severity | Patterns |
|----------|----------|
| Critical | SyntaxError, cannot find module, undefined |
| High | Assertion failures, toBe/toEqual |
| Medium | Timeout, async errors |
| Low | Warnings, deprecations |
**Report routing**:
| Condition | Type |
|-----------|------|
| Pass rate ≥ target | test_result (success) |
| Pass rate < target after max iterations | fix_required |
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Framework not detected | Prompt user |
| No tests found | Report to coordinator |
| Infinite fix loop | Abort after MAX_ITERATIONS |


@@ -1,192 +0,0 @@
# Command: generate-doc
## Purpose
Document generation strategy reference. Used by doc-generation-subagent.md as prompt source.
The writer main agent no longer executes the CLI calls in this file directly; instead, it embeds the relevant strategy section in the subagent prompt.
## Usage
In Phase 3, the writer loads the strategy section for the matching doc-type from this file and embeds it in the subagent prompt's "Execution Strategy" field.
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Document standards | `../../specs/document-standards.md` | Yes |
| Template | From routing table below | Yes |
| Spec config | `<session-folder>/spec/spec-config.json` | Yes |
| Discovery context | `<session-folder>/spec/discovery-context.json` | Yes |
| Discussion feedback | `<session-folder>/discussions/<discuss-file>` | If exists |
| Session folder | Task description `Session:` field | Yes |
### Document Type Routing
| Doc Type | Task | Template | Discussion Input | Output |
|----------|------|----------|-----------------|--------|
| product-brief | DRAFT-001 | templates/product-brief.md | DISCUSS-001-discussion.md | spec/product-brief.md |
| requirements | DRAFT-002 | templates/requirements-prd.md | DISCUSS-002-discussion.md | spec/requirements/_index.md |
| architecture | DRAFT-003 | templates/architecture-doc.md | DISCUSS-003-discussion.md | spec/architecture/_index.md |
| epics | DRAFT-004 | templates/epics-template.md | DISCUSS-004-discussion.md | spec/epics/_index.md |
### Progressive Dependencies
Each doc type requires all prior docs: discovery-context → product-brief → requirements/_index → architecture/_index.
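A sketch of the resulting prerequisite check. Paths follow the Context Loading table; the cumulative lists are derived from the chain above and the function name is illustrative:

```python
from pathlib import Path

# Cumulative prerequisites per doc type, per the progressive chain above.
PREREQUISITES = {
    "product-brief": ["spec/discovery-context.json"],
    "requirements":  ["spec/discovery-context.json", "spec/product-brief.md"],
    "architecture":  ["spec/discovery-context.json", "spec/product-brief.md",
                      "spec/requirements/_index.md"],
    "epics":         ["spec/discovery-context.json", "spec/product-brief.md",
                      "spec/requirements/_index.md", "spec/architecture/_index.md"],
}

def missing_prerequisites(session_folder: str, doc_type: str) -> list[str]:
    """Return the prerequisite files that do not yet exist."""
    root = Path(session_folder)
    return [p for p in PREREQUISITES[doc_type] if not (root / p).exists()]
```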
## Phase 3: Document Generation
### Shared Context Block
Built from spec-config and discovery-context for all CLI prompts:
```
SEED: <topic>
PROBLEM: <problem_statement>
TARGET USERS: <target_users>
DOMAIN: <domain>
CONSTRAINTS: <constraints>
FOCUS AREAS: <focus_areas>
CODEBASE CONTEXT: <existing_patterns, tech_stack> (if discovery-context exists)
```
---
### DRAFT-001: Product Brief
**Strategy**: 3-way parallel CLI analysis, then synthesize.
| Perspective | CLI Tool | Focus |
|-------------|----------|-------|
| Product | gemini | Vision, market fit, success metrics, scope |
| Technical | codex | Feasibility, constraints, integration complexity |
| User | claude | Personas, journey maps, pain points, UX |
**CLI call template** (one per perspective, all `run_in_background: true`):
```bash
Bash(command="ccw cli -p \"PURPOSE: <perspective> analysis for specification.\n<shared-context>\nTASK: <perspective-specific tasks>\nMODE: analysis\nEXPECTED: <structured output>\nCONSTRAINTS: <perspective scope>\" --tool <tool> --mode analysis", run_in_background=true)
```
**Synthesis flow** (after all 3 return):
```
3 CLI outputs received
├─ Identify convergent themes (2+ perspectives agree)
├─ Identify conflicts (e.g., product wants X, technical says infeasible)
├─ Extract unique insights per perspective
├─ Integrate discussion feedback (if exists)
└─ Fill template → Write to spec/product-brief.md
```
**Template sections**: Vision, Problem Statement, Target Users, Goals, Scope, Success Criteria, Assumptions.
---
### DRAFT-002: Requirements/PRD
**Strategy**: Single CLI expansion, then structure into individual requirement files.
| Step | Tool | Action |
|------|------|--------|
| 1 | gemini | Generate functional (REQ-NNN) and non-functional (NFR-type-NNN) requirements |
| 2 | (local) | Integrate discussion feedback |
| 3 | (local) | Write individual files + _index.md |
**CLI prompt focus**: For each product-brief goal, generate 3-7 functional requirements with user stories, acceptance criteria, and MoSCoW priority. Generate NFR categories: performance, security, scalability, usability.
**Output structure**:
```
spec/requirements/
├─ _index.md (summary table + MoSCoW breakdown)
├─ REQ-001-<slug>.md (individual functional requirement)
├─ REQ-002-<slug>.md
├─ NFR-perf-001-<slug>.md (non-functional)
└─ NFR-sec-001-<slug>.md
```
Each requirement file has: YAML frontmatter (id, title, priority, status, traces), description, user story, acceptance criteria.
---
### DRAFT-003: Architecture
**Strategy**: 2-stage CLI (design + critical review).
| Stage | Tool | Purpose |
|-------|------|---------|
| 1 | gemini | Architecture design: style, components, tech stack, ADRs, data model, security |
| 2 | codex | Critical review: challenge ADRs, identify bottlenecks, rate quality 1-5 |
Stage 2 runs after stage 1 completes (sequential dependency).
**After both complete**:
1. Integrate discussion feedback
2. Map codebase integration points (from discovery-context.relevant_files)
3. Write individual ADR files + _index.md
**Output structure**:
```
spec/architecture/
├─ _index.md (overview, component diagram, tech stack, data model, API, security)
├─ ADR-001-<slug>.md (individual decision record)
└─ ADR-002-<slug>.md
```
Each ADR file has: YAML frontmatter (id, title, status, traces), context, decision, alternatives with pros/cons, consequences, review feedback.
---
### DRAFT-004: Epics & Stories
**Strategy**: Single CLI decomposition, then structure into individual epic files.
| Step | Tool | Action |
|------|------|--------|
| 1 | gemini | Decompose requirements into 3-7 Epics with Stories, dependency map, MVP subset |
| 2 | (local) | Integrate discussion feedback |
| 3 | (local) | Write individual EPIC files + _index.md |
**CLI prompt focus**: Group requirements by domain, generate EPIC-NNN with STORY-EPIC-NNN children, define MVP subset, create Mermaid dependency diagram, recommend execution order.
**Output structure**:
```
spec/epics/
├─ _index.md (overview table, dependency map, execution order, MVP scope)
├─ EPIC-001-<slug>.md (individual epic with stories)
└─ EPIC-002-<slug>.md
```
Each epic file has: YAML frontmatter (id, title, priority, mvp, size, requirements, architecture, dependencies), stories with user stories and acceptance criteria.
All generated documents include YAML frontmatter: session_id, phase, document_type, status=draft, generated_at, version, dependencies.
## Phase 4: Validation
| Check | What to Verify |
|-------|---------------|
| has_frontmatter | Document starts with valid YAML frontmatter |
| sections_complete | All template sections present in output |
| cross_references | session_id matches spec-config |
| discussion_integrated | Feedback reflected (if feedback exists) |
| files_written | All expected files exist (individual + _index.md) |
### Result Routing
| Outcome | Message Type | Content |
|---------|-------------|---------|
| All checks pass | draft_ready | Doc type, output path, summary |
| Validation issues | draft_ready (with warnings) | Doc type, output path, issues list |
| Critical failure | error | Missing template, CLI failure |
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Prior doc not found | Notify coordinator, request prerequisite task completion |
| Template not found | Error, report missing template path |
| CLI tool fails | Retry with fallback tool (gemini → codex → claude) |
| Discussion contradicts prior docs | Note conflict in document, flag for next discussion round |
| Partial CLI output | Use available data, note gaps in document |


@@ -1,246 +0,0 @@
# Role: writer
Product Brief, Requirements/PRD, Architecture, and Epics & Stories document generation.
Uses the **Inner Loop** pattern: one agent handles all DRAFT-* tasks sequentially,
delegating document generation to a subagent and retaining summaries across tasks.
## Identity
- **Name**: `writer` | **Prefix**: `DRAFT-*` | **Tag**: `[writer]`
- **Mode**: Inner Loop (handles all DRAFT-* tasks)
- **Responsibility**: [Loop: Load Context -> Subagent Generate -> Validate + Discuss -> Accumulate] -> Final Report
## Boundaries
### MUST
- Only process DRAFT-* tasks
- Use subagent for document generation (no CLI execution in the main agent)
- Maintain context_accumulator across tasks
- Call discuss subagent after each document output
- Loop through all DRAFT-* tasks before reporting to coordinator
### MUST NOT
- Create tasks for other roles
- Skip template loading
- Execute CLI document generation in main agent (delegate to subagent)
- SendMessage to coordinator mid-loop (except on consensus_blocked HIGH)
## Message Types
| Type | Direction | Trigger |
|------|-----------|---------|
| draft_ready | -> coordinator | Document + discuss complete |
| draft_revision | -> coordinator | Document revised per feedback |
| error | -> coordinator | Template missing, insufficient context |
## Toolbox
| Tool | Purpose |
|------|---------|
| subagents/doc-generation-subagent.md | Document generation (per task) |
| discuss subagent | Inline discuss critique |
---
## Phase 1: Task Discovery (Inner Loop)
**First entry**: Standard Phase 1 flow; find the first pending DRAFT-* task.
**Loop re-entry**: After Phase 5-L completes, return here; use TaskList to find the next pending DRAFT-* task whose blockedBy entries are all completed.
**Termination**: No more DRAFT-* tasks to process → Phase 5-F.
---
## Phase 2: Context Loading
**Objective**: Load all required inputs for document generation.
**Document type routing**:
| Task Subject Contains | Doc Type | Template | Prior Discussion Input |
|----------------------|----------|----------|----------------------|
| Product Brief | product-brief | templates/product-brief.md | discussions/DISCUSS-001-discussion.md |
| Requirements / PRD | requirements | templates/requirements-prd.md | discussions/DISCUSS-002-discussion.md |
| Architecture | architecture | templates/architecture-doc.md | discussions/DISCUSS-003-discussion.md |
| Epics | epics | templates/epics-template.md | discussions/DISCUSS-004-discussion.md |
**Inline discuss mapping**:
| Doc Type | Inline Discuss Round | Perspectives |
|----------|---------------------|-------------|
| product-brief | DISCUSS-002 | product, technical, quality, coverage |
| requirements | DISCUSS-003 | quality, product, coverage |
| architecture | DISCUSS-004 | technical, risk |
| epics | DISCUSS-005 | product, technical, quality, coverage |
**Progressive dependency loading**:
| Doc Type | Requires |
|----------|----------|
| product-brief | discovery-context.json |
| requirements | + product-brief.md |
| architecture | + requirements/_index.md |
| epics | + architecture/_index.md |
**Prior decisions from accumulator**: Pass the prior-task summaries from context_accumulator as "Prior Decisions".
| Input | Source | Required |
|-------|--------|----------|
| Document standards | `../../specs/document-standards.md` | Yes |
| Template | From routing table | Yes |
| Spec config | `<session-folder>/spec/spec-config.json` | Yes |
| Discovery context | `<session-folder>/spec/discovery-context.json` | Yes |
| Discussion feedback | `<session-folder>/discussions/<discuss-file>` | If exists |
| Prior decisions | context_accumulator (in-memory) | If prior tasks exist |
**Success**: Template loaded, prior discussion feedback loaded (if exists), prior docs loaded, accumulator context prepared.
---
## Phase 3: Subagent Document Generation
**Objective**: Delegate document generation to doc-generation subagent.
**Change**: CLI calls are no longer executed in the main agent; generation is delegated to the doc-generation subagent.
```
Task({
subagent_type: "universal-executor",
run_in_background: false,
description: "Generate <doc-type> document",
prompt: `<prompt loaded from subagents/doc-generation-subagent.md>
## Task
- Document type: <doc-type>
- Session folder: <session-folder>
- Template: <template-path>
## Context
- Spec config: <spec-config contents>
- Discovery context: <discovery-context summary>
- Prior discussion feedback: <discussion-file contents, if exists>
- Prior decisions (from writer accumulator):
<serialized context_accumulator>
## Instructions
<strategy for this doc-type loaded from commands/generate-doc.md>
## Expected Output
Return JSON:
{
"artifact_path": "<output-path>",
"summary": "<100-200 character summary>",
"key_decisions": ["<decision-1>", "<decision-2>", ...],
"sections_generated": ["<section-1>", ...],
"warnings": ["<warning if any>"]
}`
})
```
**The main agent receives only the JSON summary above**, not the full document. The subagent has already written the document to disk.
---
## Phase 4: Self-Validation + Inline Discuss
### 4a: Self-Validation
| Check | What to Verify |
|-------|---------------|
| has_frontmatter | Starts with YAML frontmatter |
| sections_complete | All template sections present |
| cross_references | session_id included |
| discussion_integrated | Reflects prior round feedback (if exists) |
### 4b: Inline Discuss
After validation, call discuss subagent for this task's discuss round:
```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss <DISCUSS-NNN>",
prompt: `## Multi-Perspective Critique: <DISCUSS-NNN>
### Input
- Artifact: <output-path>
- Round: <DISCUSS-NNN>
- Perspectives: <perspectives-from-table>
- Session: <session-folder>
- Discovery Context: <session-folder>/spec/discovery-context.json
<rest of discuss subagent prompt from subagents/discuss-subagent.md>`
})
```
**Discuss result handling**:
| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | Phase 5 SendMessage includes structured consensus_blocked format (see below). Do NOT self-revise -- coordinator creates revision task. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed to Phase 5 normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
**consensus_blocked SendMessage format**:
```
[writer] <task-id> complete. Discuss <DISCUSS-NNN>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <output-path>
Discussion: <session-folder>/discussions/<DISCUSS-NNN>-discussion.md
```
**Report**: doc type, validation status, discuss verdict + severity, average rating, summary, output path.
---
## Phase 5-L: Loop Completion
Runs while further DRAFT-* tasks remain:
1. **TaskUpdate**: Mark the current task completed
2. **team_msg**: Log task completion
3. **Accumulate summary**:
```
context_accumulator.append({
task: "<DRAFT-NNN>",
artifact: "<output-path>",
key_decisions: <from subagent return>,
discuss_verdict: <from Phase 4>,
discuss_rating: <from Phase 4>,
summary: <from subagent return>
})
```
4. **Interrupt checks**:
- consensus_blocked HIGH → SendMessage → STOP
- Cumulative errors >= 3 → SendMessage → STOP
5. **Loop**: Return to Phase 1
**Do not**: no SendMessage, no Fast-Advance spawn.
## Phase 5-F: Final Report
After all DRAFT-* tasks complete:
1. **TaskUpdate**: Mark the final task completed
2. **team_msg**: Log completion
3. **Summary report**: All task summaries + discuss results + artifact paths
4. **Fast-Advance check**: Check cross-prefix successors (e.g., whether QUALITY-001 is ready)
5. **SendMessage** or **spawn successor**
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Subagent fails | Retry once with a different subagent_type; if it still fails, log the error and continue to the next task |
| Discuss subagent fails | Skip discuss, log a warning |
| 3 cumulative task failures | SendMessage report to coordinator, STOP |
| Agent crash mid-loop | Coordinator resume detects orphan → re-spawn → resume from checkpoint |
| Prior doc not found | Notify coordinator, request prerequisite |
| Discussion contradicts prior docs | Note conflict, flag for coordinator |


@@ -1,192 +0,0 @@
# Document Standards
Defines format conventions, YAML frontmatter schema, naming rules, and content structure for all spec-generator outputs.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| All Phases | Frontmatter format | YAML Frontmatter Schema |
| All Phases | File naming | Naming Conventions |
| Phase 2-5 | Document structure | Content Structure |
| Phase 6 | Validation reference | All sections |
---
## YAML Frontmatter Schema
Every generated document MUST begin with YAML frontmatter:
```yaml
---
session_id: SPEC-{slug}-{YYYY-MM-DD}
phase: {1-6}
document_type: {product-brief|requirements|architecture|epics|readiness-report|spec-summary}
status: draft|review|complete
generated_at: {ISO8601 timestamp}
stepsCompleted: []
version: 1
dependencies:
- {list of input documents used}
---
```
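A minimal required-field check over that schema might look like the following. This is a stdlib-only sketch that scans key prefixes; a real validator would parse the YAML properly:

```python
REQUIRED_FIELDS = ("session_id", "phase", "document_type", "status",
                   "generated_at", "stepsCompleted", "version")

def check_frontmatter(text: str) -> list[str]:
    """Return a list of problems; an empty list means the check passes."""
    if not text.startswith("---\n"):
        return ["document does not start with YAML frontmatter"]
    body = text[4:]
    end = body.find("\n---")
    if end == -1:
        return ["frontmatter block is not terminated"]
    block = body[:end]
    # Collect top-level keys (skip indented lines and list items).
    keys = {line.split(":", 1)[0].strip() for line in block.splitlines()
            if ":" in line and not line.startswith((" ", "-"))}
    return [f"missing required field: {field}"
            for field in REQUIRED_FIELDS if field not in keys]
```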
### Field Definitions
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `session_id` | string | Yes | Session identifier matching spec-config.json |
| `phase` | number | Yes | Phase number that generated this document (1-6) |
| `document_type` | string | Yes | One of: product-brief, requirements, architecture, epics, readiness-report, spec-summary |
| `status` | enum | Yes | draft (initial), review (user reviewed), complete (finalized) |
| `generated_at` | string | Yes | ISO8601 timestamp of generation |
| `stepsCompleted` | array | Yes | List of step IDs completed during generation |
| `version` | number | Yes | Document version, incremented on re-generation |
| `dependencies` | array | No | List of input files this document depends on |
### Status Transitions
```
draft ----> review ----> complete
  |                         ^
  +-------------------------+  (direct promotion in auto mode)
```
- **draft**: Initial generation, not yet user-reviewed
- **review**: User has reviewed and provided feedback
- **complete**: Finalized, ready for downstream consumption
In auto mode (`-y`), documents are promoted directly from `draft` to `complete`.
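The transition rules reduce to a small guard. A sketch, with `auto_mode` gating the draft-to-complete shortcut:

```python
# Allowed transitions per the diagram above.
TRANSITIONS = {
    "draft": {"review", "complete"},  # draft -> complete only in auto mode
    "review": {"complete"},
    "complete": set(),
}

def can_transition(current: str, target: str, auto_mode: bool = False) -> bool:
    """Check whether a status transition is allowed."""
    if current == "draft" and target == "complete" and not auto_mode:
        return False
    return target in TRANSITIONS.get(current, set())
```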
---
## Naming Conventions
### Session ID Format
```
SPEC-{slug}-{YYYY-MM-DD}
```
- **slug**: Lowercase, alphanumeric + Chinese characters, hyphens as separators, max 40 chars
- **date**: UTC+8 date in YYYY-MM-DD format
Examples:
- `SPEC-task-management-system-2026-02-11`
- `SPEC-user-auth-oauth-2026-02-11`
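A validation sketch for that format. The character class for the slug (including a CJK range standing in for "Chinese characters") is an approximation of the rule above, not a normative definition:

```python
import re
from datetime import date

# Slug: lowercase alphanumeric plus Chinese characters, hyphen-separated,
# max 40 chars; date: YYYY-MM-DD.
SESSION_ID_RE = re.compile(
    r"^SPEC-([a-z0-9\u4e00-\u9fff-]{1,40})-(\d{4}-\d{2}-\d{2})$")

def is_valid_session_id(session_id: str) -> bool:
    """Check the SPEC-{slug}-{YYYY-MM-DD} format, including a real date."""
    m = SESSION_ID_RE.match(session_id)
    if not m:
        return False
    try:
        date.fromisoformat(m.group(2))
    except ValueError:
        return False
    return True
```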
### Output Files
| File | Phase | Description |
|------|-------|-------------|
| `spec-config.json` | 1 | Session configuration and state |
| `discovery-context.json` | 1 | Codebase exploration results (optional) |
| `product-brief.md` | 2 | Product brief document |
| `requirements.md` | 3 | PRD document |
| `architecture.md` | 4 | Architecture decisions document |
| `epics.md` | 5 | Epic/Story breakdown document |
| `readiness-report.md` | 6 | Quality validation report |
| `spec-summary.md` | 6 | One-page executive summary |
### Output Directory
```
.workflow/.spec/{session-id}/
```
---
## Content Structure
### Heading Hierarchy
- `#` (H1): Document title only (one per document)
- `##` (H2): Major sections
- `###` (H3): Subsections
- `####` (H4): Detail items (use sparingly)
Maximum depth: 4 levels. Prefer flat structures.
### Section Ordering
Every document follows this general pattern:
1. **YAML Frontmatter** (mandatory)
2. **Title** (H1)
3. **Executive Summary** (2-3 sentences)
4. **Core Content Sections** (H2, document-specific)
5. **Open Questions / Risks** (if applicable)
6. **References / Traceability** (links to upstream/downstream docs)
### Formatting Rules
| Element | Format | Example |
|---------|--------|---------|
| Requirements | `REQ-{NNN}` prefix | REQ-001: User login |
| Acceptance criteria | Checkbox list | `- [ ] User can log in with email` |
| Architecture decisions | `ADR-{NNN}` prefix | ADR-001: Use PostgreSQL |
| Epics | `EPIC-{NNN}` prefix | EPIC-001: Authentication |
| Stories | `STORY-{EPIC}-{NNN}` prefix | STORY-001-001: Login form |
| Priority tags | MoSCoW labels | `[Must]`, `[Should]`, `[Could]`, `[Won't]` |
| Mermaid diagrams | Fenced code blocks | ```` ```mermaid ... ``` ```` |
| Code examples | Language-tagged blocks | ```` ```typescript ... ``` ```` |
### Cross-Reference Format
Use relative references between documents:
```markdown
See [Product Brief](product-brief.md#section-name) for details.
Derived from [REQ-001](requirements.md#req-001).
```
### Language
- Document body: Follow user's input language (Chinese or English)
- Technical identifiers: Always English (REQ-001, ADR-001, EPIC-001)
- YAML frontmatter keys: Always English
---
## spec-config.json Schema
```json
{
"session_id": "string (required)",
"seed_input": "string (required) - original user input",
"input_type": "text|file (required)",
"timestamp": "ISO8601 (required)",
"mode": "interactive|auto (required)",
"complexity": "simple|moderate|complex (required)",
"depth": "light|standard|comprehensive (required)",
"focus_areas": ["string array"],
"seed_analysis": {
"problem_statement": "string",
"target_users": ["string array"],
"domain": "string",
"constraints": ["string array"],
"dimensions": ["string array - 3-5 exploration dimensions"]
},
"has_codebase": "boolean",
"phasesCompleted": [
{
"phase": "number (1-6)",
"name": "string (phase name)",
"output_file": "string (primary output file)",
"completed_at": "ISO8601"
}
]
}
```
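A sketch of a required-key and enum check against this schema. Field lists are copied from the schema above; nested `seed_analysis` and `phasesCompleted` validation is omitted:

```python
import json

REQUIRED_KEYS = ("session_id", "seed_input", "input_type", "timestamp",
                 "mode", "complexity", "depth")

ENUMS = {
    "input_type": {"text", "file"},
    "mode": {"interactive", "auto"},
    "complexity": {"simple", "moderate", "complex"},
    "depth": {"light", "standard", "comprehensive"},
}

def validate_spec_config(raw: str) -> list[str]:
    """Return missing required keys and basic enum violations."""
    cfg = json.loads(raw)
    problems = [k for k in REQUIRED_KEYS if k not in cfg]
    for key, allowed in ENUMS.items():
        if key in cfg and cfg[key] not in allowed:
            problems.append(f"{key}: invalid value {cfg[key]!r}")
    return problems
```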
---
## Validation Checklist
- [ ] Every document starts with valid YAML frontmatter
- [ ] `session_id` matches across all documents in a session
- [ ] `status` field reflects current document state
- [ ] All cross-references resolve to valid targets
- [ ] Heading hierarchy is correct (no skipped levels)
- [ ] Technical identifiers use correct prefixes
- [ ] Output files are in the correct directory


@@ -1,207 +0,0 @@
# Quality Gates
Per-phase quality gate criteria and scoring dimensions for spec-generator outputs.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| Phase 2-5 | Post-generation self-check | Per-Phase Gates |
| Phase 6 | Cross-document validation | Cross-Document Validation |
| Phase 6 | Final scoring | Scoring Dimensions |
---
## Quality Thresholds
| Gate | Score | Action |
|------|-------|--------|
| **Pass** | >= 80% | Continue to next phase |
| **Review** | 60-79% | Log warnings, continue with caveats |
| **Fail** | < 60% | Must address issues before continuing |
In auto mode (`-y`), Review-level issues are logged but do not block progress.
---
## Scoring Dimensions
### 1. Completeness (25%)
All required sections present with substantive content.
| Score | Criteria |
|-------|----------|
| 100% | All template sections filled with detailed content |
| 75% | All sections present, some lack detail |
| 50% | Major sections present but minor sections missing |
| 25% | Multiple major sections missing or empty |
| 0% | Document is a skeleton only |
### 2. Consistency (25%)
Terminology, formatting, and references are uniform across documents.
| Score | Criteria |
|-------|----------|
| 100% | All terms consistent, all references valid, formatting uniform |
| 75% | Minor terminology variations, all references valid |
| 50% | Some inconsistent terms, 1-2 broken references |
| 25% | Frequent inconsistencies, multiple broken references |
| 0% | Documents contradict each other |
### 3. Traceability (25%)
Requirements, architecture decisions, and stories trace back to goals.
| Score | Criteria |
|-------|----------|
| 100% | Every story traces to a requirement, every requirement traces to a goal |
| 75% | Most items traceable, few orphans |
| 50% | Partial traceability, some disconnected items |
| 25% | Weak traceability, many orphan items |
| 0% | No traceability between documents |
### 4. Depth (25%)
Content provides sufficient detail for execution teams.
| Score | Criteria |
|-------|----------|
| 100% | Acceptance criteria specific and testable, architecture decisions justified, stories estimable |
| 75% | Most items detailed enough, few vague areas |
| 50% | Mix of detailed and vague content |
| 25% | Mostly high-level, lacking actionable detail |
| 0% | Too abstract for execution |
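Combining the four equally-weighted dimensions with the thresholds from the Quality Thresholds table gives a small scoring helper (a sketch; names are illustrative):

```python
WEIGHTS = {"completeness": 0.25, "consistency": 0.25,
           "traceability": 0.25, "depth": 0.25}

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Weighted sum of the four dimension scores (each 0-100)."""
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

def gate_action(score: float) -> str:
    """Map an overall score to the gate action (Pass / Review / Fail)."""
    if score >= 80:
        return "pass"
    if score >= 60:
        return "review"  # logged but non-blocking in auto mode
    return "fail"
```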
---
## Per-Phase Quality Gates
### Phase 1: Discovery
| Check | Criteria | Severity |
|-------|----------|----------|
| Session ID valid | Matches `SPEC-{slug}-{date}` format | Error |
| Problem statement exists | Non-empty, >= 20 characters | Error |
| Target users identified | >= 1 user group | Error |
| Dimensions generated | 3-5 exploration dimensions | Warning |
| Constraints listed | >= 0 (can be empty with justification) | Info |
### Phase 2: Product Brief
| Check | Criteria | Severity |
|-------|----------|----------|
| Vision statement | Clear, 1-3 sentences | Error |
| Problem statement | Specific and measurable | Error |
| Target users | >= 1 persona with needs described | Error |
| Goals defined | >= 2 measurable goals | Error |
| Success metrics | >= 2 quantifiable metrics | Warning |
| Scope boundaries | In-scope and out-of-scope listed | Warning |
| Multi-perspective | >= 2 CLI perspectives synthesized | Info |
### Phase 3: Requirements (PRD)
| Check | Criteria | Severity |
|-------|----------|----------|
| Functional requirements | >= 3 with REQ-NNN IDs | Error |
| Acceptance criteria | Every requirement has >= 1 criterion | Error |
| MoSCoW priority | Every requirement tagged | Error |
| Non-functional requirements | >= 1 (performance, security, etc.) | Warning |
| User stories | >= 1 per Must-have requirement | Warning |
| Traceability | Requirements trace to product brief goals | Warning |
### Phase 4: Architecture
| Check | Criteria | Severity |
|-------|----------|----------|
| Component diagram | Present (Mermaid or ASCII) | Error |
| Tech stack specified | Languages, frameworks, key libraries | Error |
| ADR present | >= 1 Architecture Decision Record | Error |
| ADR has alternatives | Each ADR lists >= 2 options considered | Warning |
| Integration points | External systems/APIs identified | Warning |
| Data model | Key entities and relationships described | Warning |
| Codebase mapping | Mapped to existing code (if has_codebase) | Info |
### Phase 5: Epics & Stories
| Check | Criteria | Severity |
|-------|----------|----------|
| Epics defined | 3-7 epics with EPIC-NNN IDs | Error |
| MVP subset | >= 1 epic tagged as MVP | Error |
| Stories per epic | 2-5 stories per epic | Error |
| Story format | "As a...I want...So that..." pattern | Warning |
| Dependency map | Cross-epic dependencies documented | Warning |
| Estimation hints | Relative sizing (S/M/L/XL) per story | Info |
| Traceability | Stories trace to requirements | Warning |
### Phase 6: Readiness Check
| Check | Criteria | Severity |
|-------|----------|----------|
| All documents exist | product-brief, requirements, architecture, epics | Error |
| Frontmatter valid | All YAML frontmatter parseable and correct | Error |
| Cross-references valid | All document links resolve | Error |
| Overall score >= 60% | Weighted average across 4 dimensions | Error |
| No unresolved Errors | All Error-severity issues addressed | Error |
| Summary generated | spec-summary.md created | Warning |
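The "Overall score >= 60%" gate combines the four rubric dimensions above. A minimal sketch, assuming the first dimension (not shown in this excerpt) is Completeness and that all four are weighted equally at 25%:

```python
# Equal 25% weights per dimension are assumed from the rubric headings.
WEIGHTS = {"completeness": 0.25, "consistency": 0.25, "traceability": 0.25, "depth": 0.25}

def overall_score(scores: dict[str, float]) -> float:
    """scores maps dimension -> rubric score in {0, 25, 50, 75, 100}."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def passes_gate(scores: dict[str, float], unresolved_errors: int) -> bool:
    # Phase 6 requires score >= 60% AND no unresolved Error-severity issues.
    return overall_score(scores) >= 60 and unresolved_errors == 0
```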
---
## Cross-Document Validation
Checks performed during Phase 6 across all documents:
### Completeness Matrix
```
Product Brief goals -> Requirements (each goal has >= 1 requirement)
Requirements -> Architecture (each Must requirement has design coverage)
Requirements -> Epics (each Must requirement appears in >= 1 story)
Architecture ADRs -> Epics (tech choices reflected in implementation stories)
```
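The first row of the matrix (every goal has >= 1 requirement) can be sketched as an orphan check. The data shapes here (a goal list and a REQ-ID to goal-ID mapping) are assumptions for illustration, not a fixed schema:

```python
def orphan_goals(goals: list[str], requirement_traces: dict[str, str]) -> list[str]:
    """Return goals that no requirement traces back to.

    requirement_traces maps REQ-NNN -> the goal ID it traces to.
    """
    covered = set(requirement_traces.values())
    return [g for g in goals if g not in covered]
```

The same shape applies to the other matrix rows (Must requirements vs. architecture coverage, Must requirements vs. stories) by swapping the ID sets.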
### Consistency Checks
| Check | Documents | Rule |
|-------|-----------|------|
| Terminology | All | Same term used consistently (no synonyms for same concept) |
| User personas | Brief + PRD + Epics | Same user names/roles throughout |
| Scope | Brief + PRD | PRD scope does not exceed brief scope |
| Tech stack | Architecture + Epics | Stories reference correct technologies |
### Traceability Matrix Format
```markdown
| Goal | Requirements | Architecture | Epics |
|------|-------------|--------------|-------|
| G-001: ... | REQ-001, REQ-002 | ADR-001 | EPIC-001 |
| G-002: ... | REQ-003 | ADR-002 | EPIC-002, EPIC-003 |
```
---
## Issue Classification
### Error (Must Fix)
- Missing required document or section
- Broken cross-references
- Contradictory information between documents
- Empty acceptance criteria on Must-have requirements
- No MVP subset defined in epics
### Warning (Should Fix)
- Vague acceptance criteria
- Missing non-functional requirements
- No success metrics defined
- Incomplete traceability
- Missing architecture review notes
### Info (Nice to Have)
- Could add more detailed personas
- Consider additional ADR alternatives
- Story estimation hints missing
- Mermaid diagrams could be more detailed


@@ -1,200 +0,0 @@
{
"team_name": "team-lifecycle",
"team_display_name": "Team Lifecycle v4",
"description": "Optimized team skill: inline discuss subagent + shared explore cache, spec beats halved",
"version": "4.0.0",
"architecture": "folder-based + subagents",
"role_structure": "roles/{name}/role.md + roles/{name}/commands/*.md",
"subagent_structure": "subagents/{name}-subagent.md",
"roles": {
"coordinator": {
"task_prefix": null,
"responsibility": "Pipeline orchestration, requirement clarification, task chain creation, message dispatch",
"message_types": ["plan_approved", "plan_revision", "task_unblocked", "fix_required", "error", "shutdown"]
},
"analyst": {
"task_prefix": "RESEARCH",
"responsibility": "Seed analysis, codebase exploration, multi-dimensional context gathering + inline discuss",
"inline_discuss": "DISCUSS-001",
"message_types": ["research_ready", "research_progress", "error"]
},
"writer": {
"task_prefix": "DRAFT",
"responsibility": "Product Brief / PRD / Architecture / Epics document generation + inline discuss",
"execution_mode": "inner_loop",
"subagent_type": "universal-executor",
"inline_discuss": ["DISCUSS-002", "DISCUSS-003", "DISCUSS-004", "DISCUSS-005"],
"message_types": ["draft_ready", "draft_revision", "error"]
},
"planner": {
"task_prefix": "PLAN",
"responsibility": "Multi-angle code exploration (via shared explore), structured implementation planning",
"execution_mode": "inner_loop",
"message_types": ["plan_ready", "plan_revision", "error"]
},
"executor": {
"task_prefix": "IMPL",
"responsibility": "Code implementation following approved plans",
"execution_mode": "inner_loop",
"message_types": ["impl_complete", "impl_progress", "error"]
},
"tester": {
"task_prefix": "TEST",
"responsibility": "Adaptive test-fix cycles, progressive testing, quality gates",
"message_types": ["test_result", "fix_required", "error"]
},
"reviewer": {
"task_prefix": "REVIEW",
"additional_prefixes": ["QUALITY", "IMPROVE"],
"responsibility": "Code review (REVIEW-*) + Spec quality validation (QUALITY-*) + Quality improvement recheck (IMPROVE-*) + inline discuss for sign-off",
"inline_discuss": "DISCUSS-006",
"message_types": ["review_result", "quality_result", "quality_recheck", "fix_required", "error"]
},
"architect": {
"task_prefix": "ARCH",
"responsibility": "Architecture assessment, tech feasibility, design pattern review. Consulting role -- on-demand by coordinator",
"role_type": "consulting",
"consultation_modes": ["spec-review", "plan-review", "code-review", "consult", "feasibility"],
"message_types": ["arch_ready", "arch_concern", "arch_progress", "error"]
},
"fe-developer": {
"task_prefix": "DEV-FE",
"responsibility": "Frontend component/page implementation, design token consumption, responsive UI",
"role_type": "frontend-pipeline",
"message_types": ["dev_fe_complete", "dev_fe_progress", "error"]
},
"fe-qa": {
"task_prefix": "QA-FE",
"responsibility": "5-dimension frontend review (quality, a11y, design compliance, UX, pre-delivery), GC loop",
"role_type": "frontend-pipeline",
"message_types": ["qa_fe_passed", "qa_fe_result", "fix_required", "error"]
}
},
"subagents": {
"discuss": {
"spec": "subagents/discuss-subagent.md",
"type": "cli-discuss-agent",
"callable_by": ["analyst", "writer", "reviewer"],
"purpose": "Multi-perspective critique with CLI tools, consensus synthesis"
},
"explore": {
"spec": "subagents/explore-subagent.md",
"type": "cli-explore-agent",
"callable_by": ["analyst", "planner", "any"],
"purpose": "Codebase exploration with centralized cache"
}
},
"checkpoint_commands": {
"revise": {
"handler": "handleRevise",
"pattern": "revise <TASK-ID> [feedback]",
"cascade": true,
"creates": "revision_task"
},
"feedback": {
"handler": "handleFeedback",
"pattern": "feedback <text>",
"cascade": true,
"creates": "revision_chain"
},
"recheck": {
"handler": "handleRecheck",
"pattern": "recheck",
"cascade": false,
"creates": "quality_recheck"
},
"improve": {
"handler": "handleImprove",
"pattern": "improve [dimension]",
"cascade": false,
"creates": "improvement_task + quality_recheck"
}
},
"pipelines": {
"spec-only": {
"description": "Specification pipeline: research+discuss -> draft+discuss x4 -> quality+discuss",
"task_chain": [
"RESEARCH-001",
"DRAFT-001", "DRAFT-002", "DRAFT-003", "DRAFT-004",
"QUALITY-001"
],
"beats": 6,
"v3_beats": 12,
"optimization": "Discuss rounds inlined into produce roles"
},
"impl-only": {
"description": "Implementation pipeline: plan -> implement -> test + review",
"task_chain": ["PLAN-001", "IMPL-001", "TEST-001", "REVIEW-001"],
"beats": 3
},
"full-lifecycle": {
"description": "Full lifecycle: spec pipeline -> implementation pipeline",
"task_chain": "spec-only + impl-only (PLAN-001 blockedBy QUALITY-001)",
"beats": 9,
"v3_beats": 15,
"optimization": "6 fewer spec beats + QUALITY-001 is last spec task"
},
"fe-only": {
"description": "Frontend-only pipeline: plan -> frontend dev -> frontend QA",
"task_chain": ["PLAN-001", "DEV-FE-001", "QA-FE-001"],
"gc_loop": { "max_rounds": 2, "convergence": "score >= 8 && critical === 0" }
},
"fullstack": {
"description": "Fullstack pipeline: plan -> backend + frontend parallel -> test + QA",
"task_chain": ["PLAN-001", "IMPL-001||DEV-FE-001", "TEST-001||QA-FE-001", "REVIEW-001"],
"sync_points": ["REVIEW-001"]
},
"full-lifecycle-fe": {
"description": "Full lifecycle with frontend: spec -> plan -> backend + frontend -> test + QA",
"task_chain": "spec-only + fullstack (PLAN-001 blockedBy QUALITY-001)"
}
},
"frontend_detection": {
"keywords": ["component", "page", "UI", "frontend", "CSS", "HTML", "React", "Vue", "Tailwind", "Svelte", "Next.js", "Nuxt", "shadcn", "design system"],
"file_patterns": ["*.tsx", "*.jsx", "*.vue", "*.svelte", "*.css", "*.scss", "*.html"],
"routing_rules": {
"frontend_only": "All tasks match frontend keywords, no backend/API mentions",
"fullstack": "Mix of frontend and backend tasks",
"backend_only": "No frontend keywords detected (default impl-only)"
}
},
"ui_ux_pro_max": {
"skill_name": "ui-ux-pro-max",
"invocation": "Skill(skill=\"ui-ux-pro-max\", args=\"...\")",
"domains": ["product", "style", "typography", "color", "landing", "chart", "ux", "web"],
"stacks": ["html-tailwind", "react", "nextjs", "vue", "svelte", "shadcn", "swiftui", "react-native", "flutter"],
"design_intelligence_chain": ["analyst -> design-intelligence.json", "architect -> design-tokens.json", "fe-developer -> tokens.css", "fe-qa -> anti-pattern audit"]
},
"shared_memory": {
"file": "shared-memory.json",
"schema": {
"design_intelligence": "From analyst via ui-ux-pro-max",
"design_token_registry": "From architect, consumed by fe-developer/fe-qa",
"component_inventory": "From fe-developer, list of implemented components",
"style_decisions": "Accumulated design decisions",
"qa_history": "From fe-qa, audit trail",
"industry_context": "Industry + strictness config",
"exploration_cache": "From explore subagent, shared by all roles"
}
},
"session_dirs": {
"base": ".workflow/.team/TLS-{slug}-{YYYY-MM-DD}/",
"spec": "spec/",
"discussions": "discussions/",
"plan": "plan/",
"explorations": "explorations/",
"architecture": "architecture/",
"analysis": "analysis/",
"qa": "qa/",
"wisdom": "wisdom/",
"messages": ".msg/"
}
}


@@ -1,169 +0,0 @@
# Discuss Subagent
Lightweight multi-perspective critique engine. Called inline by produce roles (analyst, writer, reviewer) instead of as a separate team member. Eliminates spawn overhead while preserving multi-CLI analysis quality.
## Design Rationale
In v3, `discussant` was a full team role requiring: agent spawn -> Skill load -> Phase 1 task discovery -> Phase 2-4 work -> Phase 5 report + callback. For what is essentially "run CLI analyses + synthesize", the framework overhead exceeded actual work time.
In v4, discuss is a **subagent call** from within the producing role, reducing each discuss round from ~60-90s overhead to ~5s overhead.
## Invocation
Called by produce roles after artifact creation:
```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss <round-id>",
prompt: `## Multi-Perspective Critique: <round-id>
### Input
- Artifact: <artifact-path>
- Round: <round-id>
- Perspectives: <perspective-list>
- Session: <session-folder>
- Discovery Context: <session-folder>/spec/discovery-context.json (for coverage perspective)
### Perspective Routing
| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Product | gemini | Product Manager | Market fit, user value, business viability |
| Technical | codex | Tech Lead | Feasibility, tech debt, performance, security |
| Quality | claude | QA Lead | Completeness, testability, consistency |
| Risk | gemini | Risk Analyst | Risk identification, dependencies, failure modes |
| Coverage | gemini | Requirements Analyst | Requirement completeness vs discovery-context |
### Execution Steps
1. Read artifact from <artifact-path>
2. For each perspective, launch CLI analysis in background:
Bash(command="ccw cli -p 'PURPOSE: Analyze from <role> perspective for <round-id>
TASK: <focus-areas>
MODE: analysis
CONTEXT: Artifact content below
EXPECTED: JSON with strengths[], weaknesses[], suggestions[], rating (1-5)
CONSTRAINTS: Output valid JSON only
Artifact:
<artifact-content>' --tool <cli-tool> --mode analysis", run_in_background=true)
3. Wait for all CLI results
4. Divergence detection:
- Coverage gap: missing_requirements non-empty -> High severity
- High risk: risk_level is high or critical -> High severity
- Low rating: any perspective rating <= 2 -> Medium severity
- Rating spread: max - min >= 3 -> Medium severity
5. Consensus determination:
- No high-severity divergences AND average rating >= 3.0 -> consensus_reached
- Otherwise -> consensus_blocked
6. Synthesize:
- Convergent themes (agreed by 2+ perspectives)
- Divergent views (conflicting assessments)
- Coverage gaps
- Action items from suggestions
7. Write discussion record to: <session-folder>/discussions/<round-id>-discussion.md
### Discussion Record Format
# Discussion Record: <round-id>
**Artifact**: <artifact-path>
**Perspectives**: <list>
**Consensus**: reached / blocked
**Average Rating**: <avg>/5
## Convergent Themes
- <theme>
## Divergent Views
- **<topic>** (<severity>): <description>
## Action Items
1. <item>
## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |
### Return Value
**When consensus_reached**:
Return a summary string with:
- Verdict: consensus_reached
- Average rating
- Key action items (top 3)
- Discussion record path
**When consensus_blocked**:
Return a structured summary with:
- Verdict: consensus_blocked
- Severity: HIGH | MEDIUM | LOW
- HIGH: any rating <= 2, critical risk identified, or missing_requirements non-empty
- MEDIUM: rating spread >= 3, or single perspective rated <= 2 with others >= 3
- LOW: minor suggestions only, all ratings >= 3 but divergent views exist
- Average rating
- Divergence summary: top 3 divergent points with perspective attribution
- Action items: prioritized list of required changes
- Recommendation: revise | proceed-with-caution | escalate
- Discussion record path
### Error Handling
- Single CLI fails -> fallback to direct Claude analysis for that perspective
- All CLI fail -> generate basic discussion from direct artifact reading
- Artifact not found -> return error immediately`
})
```
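Steps 4-5 of the prompt (divergence detection and consensus determination) can be sketched as follows. Field names mirror the prompt above, but the exact per-perspective result schema is an assumption:

```python
def detect_divergences(results: list[dict]) -> list[tuple[str, str]]:
    """Each result is one perspective's JSON; returns (divergence, severity) pairs."""
    divergences = []
    if any(r.get("missing_requirements") for r in results):
        divergences.append(("coverage_gap", "high"))
    if any(r.get("risk_level") in ("high", "critical") for r in results):
        divergences.append(("high_risk", "high"))
    ratings = [r["rating"] for r in results]
    if any(x <= 2 for x in ratings):
        divergences.append(("low_rating", "medium"))
    if max(ratings) - min(ratings) >= 3:
        divergences.append(("rating_spread", "medium"))
    return divergences

def consensus(results: list[dict]) -> str:
    avg = sum(r["rating"] for r in results) / len(results)
    blocked = any(sev == "high" for _, sev in detect_divergences(results))
    return "consensus_reached" if not blocked and avg >= 3.0 else "consensus_blocked"
```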
## Round Configuration
| Round | Artifact | Perspectives | Calling Role |
|-------|----------|-------------|-------------|
| DISCUSS-001 | spec/discovery-context.json | product, risk, coverage | analyst |
| DISCUSS-002 | spec/product-brief.md | product, technical, quality, coverage | writer |
| DISCUSS-003 | spec/requirements/_index.md | quality, product, coverage | writer |
| DISCUSS-004 | spec/architecture/_index.md | technical, risk | writer |
| DISCUSS-005 | spec/epics/_index.md | product, technical, quality, coverage | writer |
| DISCUSS-006 | spec/readiness-report.md | all 5 | reviewer |
## Integration with Calling Role
The calling role is responsible for:
1. **Before calling**: Complete primary artifact output
2. **Calling**: Invoke discuss subagent with correct round config
3. **After calling**:
| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in Phase 5 report, proceed normally |
| consensus_blocked | HIGH | Include divergence details in Phase 5 SendMessage with structured format (see below). **Do NOT self-revise** -- coordinator decides. |
| consensus_blocked | MEDIUM | Include warning in Phase 5 SendMessage. Proceed to Phase 5 normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |
**SendMessage format for consensus_blocked (HIGH or MEDIUM)**:
```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <artifact-path>
Discussion: <discussion-record-path>
```
The coordinator receives this and routes per severity (see monitor.md Consensus-Blocked Handling):
- HIGH -> creates revision task (max 1) or pauses for user
- MEDIUM -> proceeds with warning logged to wisdom/issues.md
- DISCUSS-006 HIGH -> always pauses for user decision (final sign-off gate)
## Comparison with v3
| Aspect | v3 (discussant role) | v4 (discuss subagent) |
|--------|---------------------|----------------------|
| Spawn | Full general-purpose agent | Inline subagent call |
| Skill load | Reads SKILL.md + role.md | None (prompt contains all logic) |
| Task discovery | TaskList + TaskGet + TaskUpdate | None (called with context) |
| Report overhead | team_msg + SendMessage + TaskUpdate | Return value to caller |
| Total overhead | ~25-45s framework | ~5s call overhead |
| Pipeline beat | 1 beat per discuss round | 0 additional beats |


@@ -1,62 +0,0 @@
# Doc Generation Subagent
Document generation execution engine. Called by the writer main agent via the Inner Loop.
Responsible for multi-perspective CLI analysis, template filling, and file writing.
## Design Rationale
Replaces v4.0's inline CLI execution inside the writer with an isolated subagent call.
Benefit: the heavy token consumption of CLI calls does not pollute the writer main agent's context;
the writer receives only a compressed summary and can keep its context continuous across tasks.
## Invocation
```
Task({
subagent_type: "universal-executor",
run_in_background: false,
description: "Generate <doc-type>",
prompt: `## Document Generation: <doc-type>
### Session
- Folder: <session-folder>
- Spec config: <spec-config-path>
### Document Config
- Type: <product-brief | requirements | architecture | epics>
- Template: <template-path>
- Output: <output-path>
- Prior discussion: <discussion-file or "none">
### Writer Accumulator (prior decisions)
<JSON array of prior task summaries>
### Execution Strategy
<loaded from the matching doc-type section of generate-doc.md>
### Output Requirements
1. Write document to <output-path> (follow template + document-standards.md)
2. Return JSON summary:
{
"artifact_path": "<path>",
"summary": "<100-200 characters>",
"key_decisions": [],
"sections_generated": [],
"cross_references": [],
"warnings": []
}`
})
```
## Doc Type Strategies
(References the DRAFT-001/002/003/004 strategies in generate-doc.md directly; not duplicated here)
See: roles/writer/commands/generate-doc.md
## Error Handling
| Scenario | Resolution |
|----------|------------|
| CLI tool fails | Fallback chain: gemini → codex → claude |
| Template missing | Return error JSON |
| Prior doc missing | Return error JSON; writer decides whether to continue |


@@ -1,172 +0,0 @@
# Explore Subagent
Shared codebase exploration utility with centralized caching. Callable by any role needing code context. Replaces v3's standalone explorer role.
## Design Rationale
In v3, exploration happened in 3 separate places:
- `analyst` Phase 3: ACE semantic search for architecture patterns
- `planner` Phase 2: multi-angle cli-explore-agent
- `explorer` role: standalone service with full spawn cycle
Results were scattered across different directories and never shared. In v4, all exploration routes through this subagent with a **unified cache** in `explorations/`.
## Invocation
```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore <angle>",
prompt: `Explore codebase for: <query>
Focus angle: <angle>
Keywords: <keyword-list>
Session folder: <session-folder>
## Cache Check
1. Read <session-folder>/explorations/cache-index.json (if exists)
2. Look for entry with matching angle
3. If found AND file exists -> read cached result, return summary
4. If not found -> proceed to exploration
## Exploration
<angle-specific-focus-from-table-below>
## Output
Write JSON to: <session-folder>/explorations/explore-<angle>.json
Update cache-index.json with new entry
## Output Schema
{
"angle": "<angle>",
"query": "<query>",
"relevant_files": [
{ "path": "...", "rationale": "...", "role": "...", "discovery_source": "...", "key_symbols": [] }
],
"patterns": [],
"dependencies": [],
"external_refs": [],
"_metadata": { "created_by": "<calling-role>", "timestamp": "...", "cache_key": "..." }
}
Return summary: file count, pattern count, top 5 files, output path`
})
```
## Cache Mechanism
### Cache Index Schema
`<session-folder>/explorations/cache-index.json`:
```json
{
"entries": [
{
"angle": "architecture",
"keywords": ["auth", "middleware"],
"file": "explore-architecture.json",
"created_by": "analyst",
"created_at": "2026-02-27T10:00:00Z",
"file_count": 15
}
]
}
```
### Cache Lookup Rules
| Condition | Action |
|-----------|--------|
| Exact angle match exists | Return cached result |
| No match | Execute exploration, cache result |
| Cache file missing but index has entry | Remove stale entry, re-explore |
### Cache Invalidation
Cache is session-scoped. No explicit invalidation needed -- each session starts fresh. If a role suspects stale data, it can pass `force_refresh: true` in the prompt to bypass cache.
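The lookup rules above can be sketched as a small helper. This is illustrative only: it assumes the cache-index schema shown earlier and a POSIX-style session path.

```python
import json
from pathlib import Path

def lookup(session: Path, angle: str):
    """Return the cached exploration dict, or None if a fresh explore is needed.

    Implements the three rules: exact-angle hit returns the cached result,
    no match returns None, and a stale index entry (file missing) is pruned.
    """
    index_path = session / "explorations" / "cache-index.json"
    if not index_path.exists():
        return None
    index = json.loads(index_path.read_text())
    for entry in list(index.get("entries", [])):
        if entry["angle"] != angle:
            continue
        result_path = session / "explorations" / entry["file"]
        if result_path.exists():
            return json.loads(result_path.read_text())
        # Cache file missing but index has entry: remove stale entry, re-explore
        index["entries"].remove(entry)
        index_path.write_text(json.dumps(index, indent=2))
    return None
```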
## Angle Focus Guide
| Angle | Focus Points | Typical Caller |
|-------|-------------|----------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs | analyst, planner |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities | planner |
| modularity | Module interfaces, separation of concerns, extraction opportunities | planner |
| integration-points | API endpoints, data flow between modules, event systems | analyst, planner |
| security | Auth/authz logic, input validation, sensitive data handling, middleware | planner |
| auth-patterns | Auth flows, session management, token validation, permissions | planner |
| dataflow | Data transformations, state propagation, validation points | planner |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity | planner |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging | planner |
| patterns | Code conventions, design patterns, naming conventions, best practices | analyst, planner |
| testing | Test files, coverage gaps, test patterns, mocking strategies | planner |
| general | Broad semantic search for topic-related code | analyst |
## Exploration Strategies
### Low Complexity (direct search)
For simple queries, use ACE semantic search:
```
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<query>")
```
ACE failure fallback: `rg -l '<keywords>' --type ts`
### Medium/High Complexity (multi-angle)
For complex queries, call cli-explore-agent per angle. The calling role determines complexity and selects angles (see planner's `commands/explore.md`).
## Search Tool Priority
| Tool | Priority | Use Case |
|------|----------|----------|
| mcp__ace-tool__search_context | P0 | Semantic search |
| Grep / Glob | P1 | Pattern matching |
| cli-explore-agent | Deep | Multi-angle exploration |
| WebSearch | P3 | External docs |
## Integration with Roles
### analyst (RESEARCH-001)
```
# After seed analysis, explore codebase context
Task({
subagent_type: "cli-explore-agent",
description: "Explore general context",
prompt: "Explore codebase for: <topic>\nFocus angle: general\n..."
})
# Result feeds into discovery-context.json
```
### planner (PLAN-001)
```
# Multi-angle exploration before plan generation
for angle in selected_angles:
Task({
subagent_type: "cli-explore-agent",
description: "Explore <angle>",
prompt: "Explore codebase for: <task>\nFocus angle: <angle>\n..."
})
# Explorations manifest built from cache-index.json
```
### Any role needing context
Any role can call explore subagent when needing codebase context for better decisions.
## Comparison with v3
| Aspect | v3 (explorer role) | v4 (explore subagent) |
|--------|-------------------|----------------------|
| Identity | Full team member | Utility subagent |
| Task creation | Coordinator creates EXPLORE-* tasks | No tasks, direct call |
| Spawn overhead | Full agent lifecycle | Subagent call (~5s) |
| Result sharing | Isolated in explorations/ | Shared cache with index |
| Caller | Only via coordinator dispatch | Any role directly |
| Duplication | analyst + planner + explorer all explore | Single cached path |


@@ -1,254 +0,0 @@
# Architecture Document Template (Directory Structure)
Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
| Output Location | `{workDir}/architecture/` |
## Output Structure
```
{workDir}/architecture/
├── _index.md # Overview, components, tech stack, data model, security
├── ADR-001-{slug}.md # Individual Architecture Decision Record
├── ADR-002-{slug}.md
└── ...
```
---
## Template: _index.md
```markdown
---
session_id: {session_id}
phase: 4
document_type: architecture-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
- ../requirements/_index.md
---
# Architecture: {product_name}
{executive_summary - high-level architecture approach and key decisions}
## System Overview
### Architecture Style
{description of chosen architecture style: microservices, monolith, serverless, etc.}
### System Context Diagram
```mermaid
C4Context
title System Context Diagram
Person(user, "User", "Primary user")
System(system, "{product_name}", "Core system")
System_Ext(ext1, "{external_system}", "{description}")
Rel(user, system, "Uses")
Rel(system, ext1, "Integrates with")
```
## Component Architecture
### Component Diagram
```mermaid
graph TD
subgraph "{product_name}"
A[Component A] --> B[Component B]
B --> C[Component C]
A --> D[Component D]
end
B --> E[External Service]
```
### Component Descriptions
| Component | Responsibility | Technology | Dependencies |
|-----------|---------------|------------|--------------|
| {component_name} | {what it does} | {tech stack} | {depends on} |
## Technology Stack
### Core Technologies
| Layer | Technology | Version | Rationale |
|-------|-----------|---------|-----------|
| Frontend | {technology} | {version} | {why chosen} |
| Backend | {technology} | {version} | {why chosen} |
| Database | {technology} | {version} | {why chosen} |
| Infrastructure | {technology} | {version} | {why chosen} |
### Key Libraries & Frameworks
| Library | Purpose | License |
|---------|---------|---------|
| {library_name} | {purpose} | {license} |
## Architecture Decision Records
| ADR | Title | Status | Key Choice |
|-----|-------|--------|------------|
| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |
## Data Architecture
### Data Model
```mermaid
erDiagram
ENTITY_A ||--o{ ENTITY_B : "has many"
ENTITY_A {
string id PK
string name
datetime created_at
}
ENTITY_B {
string id PK
string entity_a_id FK
string value
}
```
### Data Storage Strategy
| Data Type | Storage | Retention | Backup |
|-----------|---------|-----------|--------|
| {type} | {storage solution} | {retention policy} | {backup strategy} |
## API Design
### API Overview
| Endpoint | Method | Purpose | Auth |
|----------|--------|---------|------|
| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |
## Security Architecture
### Security Controls
| Control | Implementation | Requirement |
|---------|---------------|-------------|
| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
## Infrastructure & Deployment
### Deployment Architecture
{description of deployment model: containers, serverless, VMs, etc.}
### Environment Strategy
| Environment | Purpose | Configuration |
|-------------|---------|---------------|
| Development | Local development | {config} |
| Staging | Pre-production testing | {config} |
| Production | Live system | {config} |
## Codebase Integration
{if has_codebase is true:}
### Existing Code Mapping
| New Component | Existing Module | Integration Type | Notes |
|--------------|----------------|------------------|-------|
| {component} | {existing module path} | Extend/Replace/New | {notes} |
### Migration Notes
{any migration considerations for existing code}
## Quality Attributes
| Attribute | Target | Measurement | ADR Reference |
|-----------|--------|-------------|---------------|
| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
## Risks & Mitigations
| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |
## Open Questions
- [ ] {architectural question 1}
- [ ] {architectural question 2}
## References
- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
- Next: [Epics & Stories](../epics/_index.md)
```
---
## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)
```markdown
---
id: ADR-{NNN}
status: Accepted
traces_to: [{REQ-NNN}, {NFR-X-NNN}]
date: {timestamp}
---
# ADR-{NNN}: {decision_title}
## Context
{what is the situation that motivates this decision}
## Decision
{what is the chosen approach}
## Alternatives Considered
| Option | Pros | Cons |
|--------|------|------|
| {option_1 - chosen} | {pros} | {cons} |
| {option_2} | {pros} | {cons} |
| {option_3} | {pros} | {cons} |
## Consequences
- **Positive**: {positive outcomes}
- **Negative**: {tradeoffs accepted}
- **Risks**: {risks to monitor}
## Traces
- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```
---
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | ADR/requirement number |
| `{slug}` | Auto-generated | Kebab-case from decision title |
| `{has_codebase}` | spec-config.json | Whether existing codebase exists |


@@ -1,196 +0,0 @@
# Epics & Stories Template (Directory Structure)
Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
| Output Location | `{workDir}/epics/` |
## Output Structure
```
{workDir}/epics/
├── _index.md # Overview table + dependency map + MVP scope + execution order
├── EPIC-001-{slug}.md # Individual Epic with its Stories
├── EPIC-002-{slug}.md
└── ...
```
---
## Template: _index.md
```markdown
---
session_id: {session_id}
phase: 5
document_type: epics-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
- ../requirements/_index.md
- ../architecture/_index.md
---
# Epics & Stories: {product_name}
{executive_summary - overview of epic structure and MVP scope}
## Epic Overview
| Epic ID | Title | Priority | MVP | Stories | Est. Size |
|---------|-------|----------|-----|---------|-----------|
| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |
## Dependency Map
```mermaid
graph LR
EPIC-001 --> EPIC-002
EPIC-001 --> EPIC-003
EPIC-002 --> EPIC-004
EPIC-003 --> EPIC-005
```
### Dependency Notes
{explanation of why these dependencies exist and suggested execution order}
### Recommended Execution Order
1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
3. ...
## MVP Scope
### MVP Epics
{list of epics included in MVP with justification, linking to each}
### MVP Definition of Done
- [ ] {MVP completion criterion 1}
- [ ] {MVP completion criterion 2}
- [ ] {MVP completion criterion 3}
## Traceability Matrix
| Requirement | Epic | Stories | Architecture |
|-------------|------|---------|--------------|
| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |
## Estimation Summary
| Size | Meaning | Count |
|------|---------|-------|
| S | Small - well-understood, minimal risk | {n} |
| M | Medium - some complexity, moderate risk | {n} |
| L | Large - significant complexity, should consider splitting | {n} |
| XL | Extra Large - high complexity, must split before implementation | {n} |
## Risks & Considerations
| Risk | Affected Epics | Mitigation |
|------|---------------|------------|
| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |
## Open Questions
- [ ] {question about scope or implementation 1}
- [ ] {question about scope or implementation 2}
## References
- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
- Handoff to: execution workflows (lite-plan, plan, req-plan)
```
---
## Template: EPIC-NNN-{slug}.md (Individual Epic)
```markdown
---
id: EPIC-{NNN}
priority: {Must|Should|Could}
mvp: {true|false}
size: {S|M|L|XL}
requirements: [REQ-{NNN}]
architecture: [ADR-{NNN}]
dependencies: [EPIC-{NNN}]
status: draft
---
# EPIC-{NNN}: {epic_title}
**Priority**: {Must|Should|Could}
**MVP**: {Yes|No}
**Estimated Size**: {S|M|L|XL}
## Description
{detailed epic description}
## Requirements
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
## Architecture
- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
- Component: {component_name}
## Dependencies
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}
## Stories
### STORY-{EPIC}-001: {story_title}
**User Story**: As a {persona}, I want to {action} so that {benefit}.
**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
- [ ] {criterion 3}
**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
---
### STORY-{EPIC}-002: {story_title}
**User Story**: As a {persona}, I want to {action} so that {benefit}.
**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
```
---
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{EPIC}` | Auto-increment | Epic number (3 digits) |
| `{NNN}` | Auto-increment | Story/requirement number |
| `{slug}` | Auto-generated | Kebab-case from epic/story title |
| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |

@@ -1,133 +0,0 @@
# Product Brief Template
Template for generating product brief documents in Phase 2.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis |
| Output Location | `{workDir}/product-brief.md` |
---
## Template
```markdown
---
session_id: {session_id}
phase: 2
document_type: product-brief
status: draft
generated_at: {timestamp}
stepsCompleted: []
version: 1
dependencies:
- spec-config.json
---
# Product Brief: {product_name}
{executive_summary - 2-3 sentences capturing the essence of the product/feature}
## Vision
{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like}
## Problem Statement
### Current Situation
{description of the current state and pain points}
### Impact
{quantified impact of the problem - who is affected, how much, how often}
## Target Users
{for each user persona:}
### {Persona Name}
- **Role**: {user's role/context}
- **Needs**: {primary needs related to this product}
- **Pain Points**: {current frustrations}
- **Success Criteria**: {what success looks like for this user}
## Goals & Success Metrics
| Goal ID | Goal | Success Metric | Target |
|---------|------|----------------|--------|
| G-001 | {goal description} | {measurable metric} | {specific target} |
| G-002 | {goal description} | {measurable metric} | {specific target} |
## Scope
### In Scope
- {feature/capability 1}
- {feature/capability 2}
- {feature/capability 3}
### Out of Scope
- {explicitly excluded item 1}
- {explicitly excluded item 2}
### Assumptions
- {key assumption 1}
- {key assumption 2}
## Competitive Landscape
| Aspect | Current State | Proposed Solution | Advantage |
|--------|--------------|-------------------|-----------|
| {aspect} | {how it's done now} | {our approach} | {differentiator} |
## Constraints & Dependencies
### Technical Constraints
- {constraint 1}
- {constraint 2}
### Business Constraints
- {constraint 1}
### Dependencies
- {external dependency 1}
- {external dependency 2}
## Multi-Perspective Synthesis
### Product Perspective
{summary of product/market analysis findings}
### Technical Perspective
{summary of technical feasibility and constraints}
### User Perspective
{summary of user journey and UX considerations}
### Convergent Themes
{themes where all perspectives agree}
### Conflicting Views
{areas where perspectives differ, with notes on resolution approach}
## Open Questions
- [ ] {unresolved question 1}
- [ ] {unresolved question 2}
## References
- Derived from: [spec-config.json](spec-config.json)
- Next: [Requirements PRD](requirements.md)
```
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | Seed analysis | Product/feature name |
| `{executive_summary}` | CLI synthesis | 2-3 sentence summary |
| `{vision_statement}` | CLI product perspective | Aspirational vision |
| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis |

@@ -1,224 +0,0 @@
# Requirements PRD Template (Directory Structure)
Template for generating Product Requirements Document as a directory of individual requirement files in Phase 3.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
| Output Location | `{workDir}/requirements/` |
## Output Structure
```
{workDir}/requirements/
├── _index.md # Summary + MoSCoW table + traceability matrix + links
├── REQ-001-{slug}.md # Individual functional requirement
├── REQ-002-{slug}.md
├── NFR-P-001-{slug}.md # Non-functional: Performance
├── NFR-S-001-{slug}.md # Non-functional: Security
├── NFR-SC-001-{slug}.md # Non-functional: Scalability
├── NFR-U-001-{slug}.md # Non-functional: Usability
└── ...
```
---
## Template: _index.md
```markdown
---
session_id: {session_id}
phase: 3
document_type: requirements-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
- ../spec-config.json
- ../product-brief.md
---
# Requirements: {product_name}
{executive_summary - brief overview of what this PRD covers and key decisions}
## Requirement Summary
| Priority | Count | Coverage |
|----------|-------|----------|
| Must Have | {n} | {description of must-have scope} |
| Should Have | {n} | {description of should-have scope} |
| Could Have | {n} | {description of could-have scope} |
| Won't Have | {n} | {description of explicitly excluded} |
## Functional Requirements
| ID | Title | Priority | Traces To |
|----|-------|----------|-----------|
| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |
## Non-Functional Requirements
### Performance
| ID | Title | Target |
|----|-------|--------|
| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |
### Security
| ID | Title | Standard |
|----|-------|----------|
| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |
### Scalability
| ID | Title | Target |
|----|-------|--------|
| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |
### Usability
| ID | Title | Target |
|----|-------|--------|
| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |
## Data Requirements
### Data Entities
| Entity | Description | Key Attributes |
|--------|-------------|----------------|
| {entity_name} | {description} | {attr1, attr2, attr3} |
### Data Flows
{description of key data flows, optionally with Mermaid diagram}
## Integration Requirements
| System | Direction | Protocol | Data Format | Notes |
|--------|-----------|----------|-------------|-------|
| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |
## Constraints & Assumptions
### Constraints
- {technical or business constraint 1}
- {technical or business constraint 2}
### Assumptions
- {assumption 1 - must be validated}
- {assumption 2 - must be validated}
## Priority Rationale
{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}
## Traceability Matrix
| Goal | Requirements |
|------|-------------|
| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |
## Open Questions
- [ ] {unresolved question 1}
- [ ] {unresolved question 2}
## References
- Derived from: [Product Brief](../product-brief.md)
- Next: [Architecture](../architecture/_index.md)
```
---
## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)
```markdown
---
id: REQ-{NNN}
type: functional
priority: {Must|Should|Could|Won't}
traces_to: [G-{NNN}]
status: draft
---
# REQ-{NNN}: {requirement_title}
**Priority**: {Must|Should|Could|Won't}
## Description
{detailed requirement description}
## User Story
As a {persona}, I want to {action} so that {benefit}.
## Acceptance Criteria
- [ ] {specific, testable criterion 1}
- [ ] {specific, testable criterion 2}
- [ ] {specific, testable criterion 3}
## Traces
- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```
---
## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)
```markdown
---
id: NFR-{type}-{NNN}
type: non-functional
category: {Performance|Security|Scalability|Usability}
priority: {Must|Should|Could}
status: draft
---
# NFR-{type}-{NNN}: {requirement_title}
**Category**: {Performance|Security|Scalability|Usability}
**Priority**: {Must|Should|Could}
## Requirement
{detailed requirement description}
## Metric & Target
| Metric | Target | Measurement Method |
|--------|--------|--------------------|
| {metric} | {target value} | {how measured} |
## Traces
- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
```
---
## Variable Descriptions
| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
| `{slug}` | Auto-generated | Kebab-case from requirement title |
| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |

View File

@@ -68,9 +68,21 @@ const messages: Record<Locale, Record<string, string>> = {
zh: {},
};
/**
* Load messages for a specific locale only (lazy loading)
* This is the optimized init method that loads only the active locale
*/
export async function loadMessagesForLocale(locale: Locale): Promise<Record<string, string>> {
const localeMessages = await loadMessages(locale);
messages[locale] = localeMessages;
updateIntl(locale);
return localeMessages;
}
/**
* Initialize translation messages for all locales
* Call this during app initialization
* NOTE: Use loadMessagesForLocale() for faster single-locale init
*/
export async function initMessages(): Promise<void> {
// Load messages for both locales in parallel
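A minimal, dependency-free sketch of the lazy-loading pattern introduced above (names and the `fetchLocale` stand-in are illustrative, not the project's actual i18n API):

```typescript
// Minimal sketch of per-locale lazy loading.
type Locale = 'en' | 'zh';

// Module-level cache: each locale is fetched at most once.
const cache: Partial<Record<Locale, Record<string, string>>> = {};

// Stand-in for a dynamic import of a locale bundle.
async function fetchLocale(locale: Locale): Promise<Record<string, string>> {
  return locale === 'en' ? { hello: 'Hello' } : { hello: '你好' };
}

// Load only the requested locale; the other locale stays unfetched
// until the user actually switches to it.
async function loadMessagesForLocale(locale: Locale): Promise<Record<string, string>> {
  cache[locale] ??= await fetchLocale(locale);
  return cache[locale]!;
}
```

The startup cost then scales with one locale's bundle instead of all of them.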

@@ -5,7 +5,7 @@ import './index.css'
import 'react-grid-layout/css/styles.css'
import 'react-resizable/css/styles.css'
import 'xterm/css/xterm.css'
import { initMessages, getInitialLocale, getMessages, type Locale } from './lib/i18n'
import { loadMessagesForLocale, getInitialLocale } from './lib/i18n'
import { logWebVitals } from './lib/webVitals'
/**
@@ -29,17 +29,14 @@ async function bootstrapApplication() {
const rootElement = document.getElementById('root')
if (!rootElement) throw new Error('Failed to find the root element')
// Initialize CSRF token before any API calls
await initCsrfToken()
// Parallelize CSRF token fetch and locale detection (independent operations)
const [, locale] = await Promise.all([
initCsrfToken(),
getInitialLocale()
])
// Initialize translation messages
await initMessages()
// Determine initial locale from browser/storage
const locale: Locale = getInitialLocale()
// Get messages for the initial locale
const messages = getMessages(locale)
// Load only the active locale's messages (lazy load secondary on demand)
const messages = await loadMessagesForLocale(locale)
const root = createRoot(rootElement)
root.render(
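The bootstrap change above reduces to a small sketch: two independent async init steps run concurrently under `Promise.all`, and only the locale result is kept (function bodies here are placeholders, not the app's real implementations):

```typescript
// Sketch of the parallelized bootstrap: CSRF setup and locale detection are
// independent, so they run concurrently instead of sequentially.
async function initCsrfToken(): Promise<void> {
  // placeholder for the real token fetch
}

async function getInitialLocale(): Promise<string> {
  return 'en'; // placeholder for browser/storage detection
}

async function bootstrapApplication(): Promise<string> {
  // Promise.all awaits both; the CSRF result (void) is discarded.
  const [, locale] = await Promise.all([initCsrfToken(), getInitialLocale()]);
  return locale;
}
```

Total wait time becomes max(csrf, locale) rather than their sum.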

@@ -1,53 +1,68 @@
// ========================================
// Router Configuration
// ========================================
// React Router v6 configuration with all dashboard routes
// React Router v6 configuration with code splitting for optimal bundle size
import { createBrowserRouter, RouteObject, Navigate } from 'react-router-dom';
import { lazy, Suspense } from 'react';
import { AppShell } from '@/components/layout';
import {
HomePage,
SessionsPage,
FixSessionPage,
ProjectOverviewPage,
SessionDetailPage,
HistoryPage,
OrchestratorPage,
IssueHubPage,
SkillsManagerPage,
CommandsManagerPage,
MemoryPage,
SettingsPage,
NotFoundPage,
LiteTasksPage,
// LiteTaskDetailPage removed - now using TaskDrawer instead
ReviewSessionPage,
McpManagerPage,
EndpointsPage,
InstallationsPage,
HookManagerPage,
RulesManagerPage,
PromptHistoryPage,
ExplorerPage,
GraphExplorerPage,
CodexLensManagerPage,
ApiSettingsPage,
CliViewerPage,
CliSessionSharePage,
TeamPage,
TerminalDashboardPage,
AnalysisPage,
SpecsSettingsPage,
} from '@/pages';
// Import HomePage directly (no lazy - needed immediately)
import { HomePage } from '@/pages/HomePage';
// Lazy load all other route components for code splitting
const SessionsPage = lazy(() => import('@/pages/SessionsPage').then(m => ({ default: m.SessionsPage })));
const FixSessionPage = lazy(() => import('@/pages/FixSessionPage').then(m => ({ default: m.FixSessionPage })));
const ProjectOverviewPage = lazy(() => import('@/pages/ProjectOverviewPage').then(m => ({ default: m.ProjectOverviewPage })));
const SessionDetailPage = lazy(() => import('@/pages/SessionDetailPage').then(m => ({ default: m.SessionDetailPage })));
const HistoryPage = lazy(() => import('@/pages/HistoryPage').then(m => ({ default: m.HistoryPage })));
const OrchestratorPage = lazy(() => import('@/pages/orchestrator/OrchestratorPage').then(m => ({ default: m.OrchestratorPage })));
const IssueHubPage = lazy(() => import('@/pages/IssueHubPage').then(m => ({ default: m.IssueHubPage })));
const SkillsManagerPage = lazy(() => import('@/pages/SkillsManagerPage').then(m => ({ default: m.SkillsManagerPage })));
const CommandsManagerPage = lazy(() => import('@/pages/CommandsManagerPage').then(m => ({ default: m.CommandsManagerPage })));
const MemoryPage = lazy(() => import('@/pages/MemoryPage').then(m => ({ default: m.MemoryPage })));
const SettingsPage = lazy(() => import('@/pages/SettingsPage').then(m => ({ default: m.SettingsPage })));
const NotFoundPage = lazy(() => import('@/pages/NotFoundPage').then(m => ({ default: m.NotFoundPage })));
const LiteTasksPage = lazy(() => import('@/pages/LiteTasksPage').then(m => ({ default: m.LiteTasksPage })));
const ReviewSessionPage = lazy(() => import('@/pages/ReviewSessionPage').then(m => ({ default: m.ReviewSessionPage })));
const McpManagerPage = lazy(() => import('@/pages/McpManagerPage').then(m => ({ default: m.McpManagerPage })));
const EndpointsPage = lazy(() => import('@/pages/EndpointsPage').then(m => ({ default: m.EndpointsPage })));
const InstallationsPage = lazy(() => import('@/pages/InstallationsPage').then(m => ({ default: m.InstallationsPage })));
const HookManagerPage = lazy(() => import('@/pages/HookManagerPage').then(m => ({ default: m.HookManagerPage })));
const RulesManagerPage = lazy(() => import('@/pages/RulesManagerPage').then(m => ({ default: m.RulesManagerPage })));
const PromptHistoryPage = lazy(() => import('@/pages/PromptHistoryPage').then(m => ({ default: m.PromptHistoryPage })));
const ExplorerPage = lazy(() => import('@/pages/ExplorerPage').then(m => ({ default: m.ExplorerPage })));
const GraphExplorerPage = lazy(() => import('@/pages/GraphExplorerPage').then(m => ({ default: m.GraphExplorerPage })));
const CodexLensManagerPage = lazy(() => import('@/pages/CodexLensManagerPage').then(m => ({ default: m.CodexLensManagerPage })));
const ApiSettingsPage = lazy(() => import('@/pages/ApiSettingsPage').then(m => ({ default: m.ApiSettingsPage })));
const CliViewerPage = lazy(() => import('@/pages/CliViewerPage').then(m => ({ default: m.CliViewerPage })));
const CliSessionSharePage = lazy(() => import('@/pages/CliSessionSharePage').then(m => ({ default: m.CliSessionSharePage })));
const TeamPage = lazy(() => import('@/pages/TeamPage').then(m => ({ default: m.TeamPage })));
const TerminalDashboardPage = lazy(() => import('@/pages/TerminalDashboardPage').then(m => ({ default: m.TerminalDashboardPage })));
const AnalysisPage = lazy(() => import('@/pages/AnalysisPage').then(m => ({ default: m.AnalysisPage })));
const SpecsSettingsPage = lazy(() => import('@/pages/SpecsSettingsPage').then(m => ({ default: m.SpecsSettingsPage })));
// Loading fallback component for lazy-loaded routes
function PageSkeleton() {
return (
<div className="flex items-center justify-center h-full">
<div className="animate-pulse text-muted-foreground">Loading...</div>
</div>
);
}
/**
* Route configuration for the dashboard
* All routes are wrapped in AppShell layout
* All routes are wrapped in AppShell layout with Suspense for code splitting
*/
const routes: RouteObject[] = [
{
path: 'cli-sessions/share',
element: <CliSessionSharePage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<CliSessionSharePage />
</Suspense>
),
},
{
path: '/',
@@ -59,36 +74,68 @@ const routes: RouteObject[] = [
},
{
path: 'sessions',
element: <SessionsPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<SessionsPage />
</Suspense>
),
},
{
path: 'sessions/:sessionId',
element: <SessionDetailPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<SessionDetailPage />
</Suspense>
),
},
{
path: 'sessions/:sessionId/fix',
element: <FixSessionPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<FixSessionPage />
</Suspense>
),
},
{
path: 'sessions/:sessionId/review',
element: <ReviewSessionPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<ReviewSessionPage />
</Suspense>
),
},
{
path: 'lite-tasks',
element: <LiteTasksPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<LiteTasksPage />
</Suspense>
),
},
// /lite-tasks/:sessionId route removed - now using TaskDrawer
{
path: 'project',
element: <ProjectOverviewPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<ProjectOverviewPage />
</Suspense>
),
},
{
path: 'history',
element: <HistoryPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<HistoryPage />
</Suspense>
),
},
{
path: 'orchestrator',
element: <OrchestratorPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<OrchestratorPage />
</Suspense>
),
},
{
path: 'loops',
@@ -96,11 +143,19 @@ const routes: RouteObject[] = [
},
{
path: 'cli-viewer',
element: <CliViewerPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<CliViewerPage />
</Suspense>
),
},
{
path: 'issues',
element: <IssueHubPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<IssueHubPage />
</Suspense>
),
},
// Legacy routes - redirect to hub with tab parameter
{
@@ -113,75 +168,147 @@ const routes: RouteObject[] = [
},
{
path: 'skills',
element: <SkillsManagerPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<SkillsManagerPage />
</Suspense>
),
},
{
path: 'commands',
element: <CommandsManagerPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<CommandsManagerPage />
</Suspense>
),
},
{
path: 'memory',
element: <MemoryPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<MemoryPage />
</Suspense>
),
},
{
path: 'prompts',
element: <PromptHistoryPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<PromptHistoryPage />
</Suspense>
),
},
{
path: 'settings',
element: <SettingsPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<SettingsPage />
</Suspense>
),
},
{
path: 'settings/mcp',
element: <McpManagerPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<McpManagerPage />
</Suspense>
),
},
{
path: 'settings/endpoints',
element: <EndpointsPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<EndpointsPage />
</Suspense>
),
},
{
path: 'settings/installations',
element: <InstallationsPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<InstallationsPage />
</Suspense>
),
},
{
path: 'settings/rules',
element: <RulesManagerPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<RulesManagerPage />
</Suspense>
),
},
{
path: 'settings/specs',
element: <SpecsSettingsPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<SpecsSettingsPage />
</Suspense>
),
},
{
path: 'settings/codexlens',
element: <CodexLensManagerPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<CodexLensManagerPage />
</Suspense>
),
},
{
path: 'api-settings',
element: <ApiSettingsPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<ApiSettingsPage />
</Suspense>
),
},
{
path: 'hooks',
element: <HookManagerPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<HookManagerPage />
</Suspense>
),
},
{
path: 'explorer',
element: <ExplorerPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<ExplorerPage />
</Suspense>
),
},
{
path: 'graph',
element: <GraphExplorerPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<GraphExplorerPage />
</Suspense>
),
},
{
path: 'teams',
element: <TeamPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<TeamPage />
</Suspense>
),
},
{
path: 'analysis',
element: <AnalysisPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<AnalysisPage />
</Suspense>
),
},
{
path: 'terminal-dashboard',
element: <TerminalDashboardPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<TerminalDashboardPage />
</Suspense>
),
},
{
path: 'skill-hub',
@@ -190,7 +317,11 @@ const routes: RouteObject[] = [
// Catch-all route for 404
{
path: '*',
element: <NotFoundPage />,
element: (
<Suspense fallback={<PageSkeleton />}>
<NotFoundPage />
</Suspense>
),
},
],
},
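The router changes above rely on `React.lazy` caching each dynamic import so the network fetch happens at most once per chunk. A dependency-free sketch of that caching behavior (illustrative only, not React's actual implementation):

```typescript
// React.lazy-style memoization sketch: the loader runs once; every later
// call reuses the same in-flight or resolved promise.
function lazyOnce<T>(load: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= load());
}
```

This is why navigating back to an already-visited route shows no second loading skeleton: the chunk promise is already resolved.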

@@ -30,6 +30,15 @@ import { deepMerge } from '../../types/util.js';
// ========== Input Validation ==========
// Compiled regex patterns for performance (compiled once at module load)
const TELEGRAM_BOT_TOKEN_REGEX = /^\d{8,15}:[A-Za-z0-9_-]{30,50}$/;
const TELEGRAM_CHAT_ID_REGEX = /^-?\d{1,20}$/;
const EMAIL_REGEX = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
const DISCORD_HOSTNAMES = ['discord.com', 'discordapp.com'];
const FEISHU_HOSTNAMES = ['feishu.cn', 'larksuite.com', 'lark.com'];
const DINGTALK_HOSTNAMES = ['dingtalk.com', 'oapi.dingtalk.com'];
const WECOM_HOSTNAMES = ['qyapi.weixin.qq.com', 'work.weixin.qq.com'];
/**
* Validate URL format (must be http or https)
*/
@@ -51,7 +60,7 @@ function isValidDiscordWebhookUrl(url: string): boolean {
const parsed = new URL(url);
// Discord webhooks are typically: discord.com/api/webhooks/{id}/{token}
return (
(parsed.hostname === 'discord.com' || parsed.hostname === 'discordapp.com') &&
DISCORD_HOSTNAMES.includes(parsed.hostname) &&
parsed.pathname.startsWith('/api/webhooks/')
);
} catch {
@@ -65,7 +74,7 @@ function isValidDiscordWebhookUrl(url: string): boolean {
function isValidTelegramBotToken(token: string): boolean {
// Telegram bot tokens are in format: {bot_id}:{token}
// Bot ID is a number, token is alphanumeric with underscores and hyphens
return /^\d{8,15}:[A-Za-z0-9_-]{30,50}$/.test(token);
return TELEGRAM_BOT_TOKEN_REGEX.test(token);
}
/**
@@ -73,7 +82,7 @@ function isValidTelegramBotToken(token: string): boolean {
*/
function isValidTelegramChatId(chatId: string): boolean {
// Chat IDs are numeric, optionally negative (for groups)
return /^-?\d{1,20}$/.test(chatId);
return TELEGRAM_CHAT_ID_REGEX.test(chatId);
}
/**
@@ -138,7 +147,7 @@ function isValidDingTalkWebhookUrl(url: string): boolean {
try {
const parsed = new URL(url);
// DingTalk webhooks are typically: oapi.dingtalk.com/robot/send?access_token=xxx
return parsed.hostname.includes('dingtalk.com') && parsed.pathname.includes('robot');
return DINGTALK_HOSTNAMES.some(h => parsed.hostname.includes(h)) && parsed.pathname.includes('robot');
} catch {
return false;
}
@@ -152,7 +161,7 @@ function isValidWeComWebhookUrl(url: string): boolean {
try {
const parsed = new URL(url);
// WeCom webhooks are typically: qyapi.weixin.qq.com/cgi-bin/webhook/send?key=xxx
return parsed.hostname.includes('qyapi.weixin.qq.com') && parsed.pathname.includes('webhook');
return WECOM_HOSTNAMES.includes(parsed.hostname) && parsed.pathname.includes('webhook');
} catch {
return false;
}
@@ -162,8 +171,8 @@ function isValidWeComWebhookUrl(url: string): boolean {
* Validate email address format
*/
function isValidEmail(email: string): boolean {
// Basic email validation regex
return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
// Basic email validation regex (using compiled constant)
return EMAIL_REGEX.test(email);
}
/**
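The hostname checks in this file all share one shape: parse the URL, match the hostname against an allow-list, then check a path marker. That shape can be sketched as a generic validator (a hypothetical helper, not part of the actual module):

```typescript
// Generic webhook-URL validator sketch: exact hostname allow-list match
// plus a required path prefix. Invalid URLs fail closed.
function makeWebhookValidator(hostnames: string[], pathPrefix: string) {
  return (url: string): boolean => {
    try {
      const parsed = new URL(url);
      return hostnames.includes(parsed.hostname) && parsed.pathname.startsWith(pathPrefix);
    } catch {
      return false; // not a parseable URL
    }
  };
}
```

Exact hostname matching (as opposed to `hostname.includes(...)`) avoids accepting look-alike domains such as `discord.com.evil.example`.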

@@ -13,9 +13,55 @@ import type {
QueueSchedulerConfigUpdatedMessage,
} from '../types/queue-types.js';
// WebSocket configuration for connection limits and rate limiting
const WS_MAX_CONNECTIONS = 100;
const WS_MESSAGE_RATE_LIMIT = 10; // messages per second per client
const WS_RATE_LIMIT_WINDOW = 1000; // 1 second window
// WebSocket clients for real-time notifications
export const wsClients = new Set<Duplex>();
// Track message counts per client for rate limiting
export const wsClientMessageCounts = new Map<Duplex, { count: number; resetTime: number }>();
/**
* Check if a new WebSocket connection should be accepted
* Returns true if connection allowed, false if limit reached
*/
export function canAcceptWebSocketConnection(): boolean {
return wsClients.size < WS_MAX_CONNECTIONS;
}
/**
* Check if a client has exceeded their message rate limit
* Returns true if rate limit OK, false if exceeded
*/
export function checkClientRateLimit(client: Duplex): boolean {
const now = Date.now();
const tracking = wsClientMessageCounts.get(client);
if (!tracking || now > tracking.resetTime) {
// First message or window expired - reset tracking
wsClientMessageCounts.set(client, { count: 1, resetTime: now + WS_RATE_LIMIT_WINDOW });
return true;
}
if (tracking.count >= WS_MESSAGE_RATE_LIMIT) {
return false; // Rate limit exceeded
}
tracking.count++;
return true;
}
/**
* Clean up client tracking when they disconnect
*/
export function removeClientTracking(client: Duplex): void {
wsClients.delete(client);
wsClientMessageCounts.delete(client);
}
/**
* WebSocket message types for Loop monitoring
*/
@@ -154,6 +200,24 @@ export function handleWebSocketUpgrade(req: IncomingMessage, socket: Duplex, _he
socket.end();
return;
}
// Check connection limit
if (!canAcceptWebSocketConnection()) {
const responseHeaders = [
'HTTP/1.1 429 Too Many Requests',
'Content-Type: text/plain',
'Connection: close',
'',
'WebSocket connection limit reached. Please try again later.',
'',
''
].join('\r\n');
socket.write(responseHeaders);
socket.end();
console.warn(`[WS] Connection rejected: limit reached (${wsClients.size}/${WS_MAX_CONNECTIONS})`);
return;
}
const acceptKey = createHash('sha1')
.update(key + '258EAFA5-E914-47DA-95CA-C5AB0DC85B11')
.digest('base64');
@@ -171,6 +235,8 @@ export function handleWebSocketUpgrade(req: IncomingMessage, socket: Duplex, _he
// Add to clients set
wsClients.add(socket);
// Initialize rate limit tracking
wsClientMessageCounts.set(socket, { count: 0, resetTime: Date.now() + WS_RATE_LIMIT_WINDOW });
console.log(`[WS] Client connected (${wsClients.size} total)`);
// Replay any buffered A2UI surfaces to the new client
@@ -194,6 +260,11 @@ export function handleWebSocketUpgrade(req: IncomingMessage, socket: Duplex, _he
switch (opcode) {
case 0x1: // Text frame
if (payload) {
// Check rate limit before processing
if (!checkClientRateLimit(socket)) {
console.warn('[WS] Rate limit exceeded for client');
break;
}
console.log('[WS] Received:', payload);
// Try to handle as A2UI message
const handledAsA2UI = handleA2UIMessage(payload, a2uiWebSocketHandler, handleAnswer);
@@ -227,12 +298,12 @@ export function handleWebSocketUpgrade(req: IncomingMessage, socket: Duplex, _he
// Handle disconnect
socket.on('close', () => {
wsClients.delete(socket);
removeClientTracking(socket);
console.log(`[WS] Client disconnected (${wsClients.size} remaining)`);
});
socket.on('error', () => {
wsClients.delete(socket);
removeClientTracking(socket);
});
}
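The fixed-window rate limiting added above can be sketched as a standalone helper, with the clock passed in explicitly to make the window reset behavior visible (illustrative names, not the server's actual API):

```typescript
// Fixed-window rate limiter sketch mirroring checkClientRateLimit:
// allow up to `limit` messages per `windowMs` per client key.
function makeRateLimiter(limit: number, windowMs: number) {
  const counts = new Map<string, { count: number; resetTime: number }>();
  return (key: string, now: number): boolean => {
    const tracking = counts.get(key);
    if (!tracking || now > tracking.resetTime) {
      // First message, or the previous window expired: start a fresh window.
      counts.set(key, { count: 1, resetTime: now + windowMs });
      return true;
    }
    if (tracking.count >= limit) return false; // limit exceeded in this window
    tracking.count++;
    return true;
  };
}
```

A fixed window is simple and cheap, but permits short bursts of up to 2x the limit across a window boundary; a sliding-window or token-bucket scheme smooths that out at the cost of more bookkeeping.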