Mirror of https://github.com/catlog22/Claude-Code-Workflow.git, synced 2026-03-03 15:43:11 +08:00
Compare commits
72 Commits
3bb4a821de, d346d48ba2, 57636040d2, 980be3d87d, a4cb1e7eb2, 4f3ef5cba8, e54d76f7be, c12acd41ee, 73cc2ef3fa, ce2927b28d, 121e834459, 2c2b9d6e29, 0d5cc4a74f, 71485b89e6, 2fc792a3b7, 0af4ca040f, 7af258f43d, 8ad283086b, b36a46d59d, 899a92f2eb, d0ac3a5cd2, 0939510e0d, deea92581b, 4d17bb02a4, 5cab8ae8a5, ffe3b427ce, 8c953b287d, b1e321267e, d0275f14b2, ee4dc367d9, a63fb370aa, da19a6ec89, bf84a157ea, 41f990ddd4, 3463bc8e27, 9ad755e225, 8799a9c2fd, 1f859ae4b9, ecf4e4d848, 8ceae6d6fd, 2fb93d20e0, a753327acc, f61a3da957, b0fb899675, 0a49dc0675, 096fc1c380, 29f0a6cdb8, e83414abf3, e42597b1bc, 67b2129f3c, 19fb4d86c7, 65763c76e9, 4a89f626fc, b0bfdb907a, 39bbf960c2, 5e35da32e8, 84610661ca, cd54c10256, c3ddf7e322, ab65caec45, 3788ba1268, d14a710797, 92ac9897f2, 67e78b132c, 0a37bc3eaf, 604f89b7aa, 46989dcbad, 4763edb0e4, 7d349a64fb, 22a98b7b6c, a59a86fb9b, 101417e028
````diff
@@ -9,6 +9,7 @@ keywords:
+  - pattern
 readMode: required
 priority: high
 scope: project
 ---

 # Architecture Constraints

````
````diff
@@ -9,6 +9,7 @@ keywords:
+  - convention
 readMode: required
 priority: high
 scope: project
 ---

 # Coding Conventions

````
````diff
@@ -301,7 +301,7 @@ Document known constraints that affect planning:

 [Continue for all major feature groups]

-**Note**: Detailed task breakdown into executable work items is handled by `/workflow:plan` → `IMPL_PLAN.md`
+**Note**: Detailed task breakdown into executable work items is handled by `/workflow-plan` → `IMPL_PLAN.md`

 ---

````
````diff
@@ -43,7 +43,7 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).

 **Quality Gates**:
 - concept-verify: ✅ Passed (0 ambiguities remaining) | ⏭️ Skipped (user decision) | ⏳ Pending
-- plan-verify: ⏳ Pending (recommended before /workflow:execute)
+- plan-verify: ⏳ Pending (recommended before /workflow-execute)

 **Context Package Summary**:
 - **Focus Paths**: {list key directories from context-package.json}

````
````diff
@@ -155,7 +155,7 @@
     },
     "development_index": {
       "type": "object",
-      "description": "Categorized development history (lite-plan/lite-execute)",
+      "description": "Categorized development history (lite-planex sessions)",
       "properties": {
         "feature": { "type": "array", "items": { "$ref": "#/$defs/devIndexEntry" } },
         "enhancement": { "type": "array", "items": { "$ref": "#/$defs/devIndexEntry" } },
````
````diff
@@ -85,13 +85,14 @@ Tools are selected based on **tags** defined in the configuration. Use tags to m

 ```bash
 # Explicit tool selection
-ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write|review>
+ccw cli -p "<PROMPT>" --tool <tool-id> --mode <analysis|write>

 # Model override
 ccw cli -p "<PROMPT>" --tool <tool-id> --model <model-id> --mode <analysis|write>

-# Code review (codex only)
+# Code review (codex only - review mode and target flags are invalid for other tools)
 ccw cli -p "<PROMPT>" --tool codex --mode review
+ccw cli --tool codex --mode review --commit <hash>

 # Tag-based auto-selection (future)
 ccw cli -p "<PROMPT>" --tags <tag1,tag2> --mode <analysis|write>
````
````diff
@@ -292,11 +293,8 @@ ccw cli -p "..." --tool gemini --mode analysis --rule analysis-review-architectu
 - **`review`**
   - Permission: Read-only (code review output)
   - Use For: Git-aware code review of uncommitted changes, branch diffs, specific commits
-  - Specification: **codex only** - uses `codex review` subcommand
-  - Tool Behavior:
-    - `codex`: Executes `codex review` for structured code review
-    - Other tools (gemini/qwen/claude): Accept mode but no operation change (treated as analysis)
-  - **Constraint**: Target flags (`--uncommitted`, `--base`, `--commit`) and prompt are mutually exclusive
+  - Specification: **codex only** - uses `codex review` subcommand. Other tools MUST NOT use this mode
+  - **Constraint**: Target flags (`--uncommitted`, `--base`, `--commit`) are **codex-only** and mutually exclusive with prompt
   - With prompt only: `ccw cli -p "Focus on security" --tool codex --mode review` (reviews uncommitted by default)
   - With target flag only: `ccw cli --tool codex --mode review --commit abc123` (no prompt allowed)

````
````diff
@@ -309,7 +307,7 @@ ccw cli -p "..." --tool gemini --mode analysis --rule analysis-review-architectu
 - **`--mode <mode>`**
   - Description: **REQUIRED**: analysis, write, review
   - Default: **NONE** (must specify)
-  - Note: `review` mode triggers `codex review` subcommand for codex tool only
+  - Note: `review` mode is **codex-only**. Using `--mode review` with other tools (gemini/qwen/claude) is invalid and should be rejected

 - **`--model <model>`**
   - Description: Model override
````
````diff
@@ -446,7 +444,7 @@ ccw cli --tool codex --mode review --base main
 ccw cli --tool codex --mode review --commit abc123
 ```

-> **Note**: `--mode review` only triggers special behavior for `codex` tool. Target flags (`--uncommitted`, `--base`, `--commit`) and prompt are **mutually exclusive** - use one or the other, not both.
+> **Note**: `--mode review` and target flags (`--uncommitted`, `--base`, `--commit`) are **codex-only**. Using them with other tools is invalid. When using codex, target flags and prompt are **mutually exclusive** - use one or the other, not both.

 ---

````
````diff
@@ -455,9 +453,9 @@ ccw cli --tool codex --mode review --commit abc123
 **Single-Use Authorization**: Each execution requires explicit user instruction. Previous authorization does NOT carry over.

 **Mode Hierarchy**:
-- `analysis`: Read-only, safe for auto-execution
-- `write`: Create/Modify/Delete files, full operations - requires explicit `--mode write`
-- `review`: Git-aware code review (codex only), read-only output - requires explicit `--mode review`
+- `analysis`: Read-only, safe for auto-execution. Available for all tools
+- `write`: Create/Modify/Delete files, full operations - requires explicit `--mode write`. Available for all tools
+- `review`: **codex-only**. Git-aware code review, read-only output. Invalid for other tools (gemini/qwen/claude)
 - **Exception**: User provides clear instructions like "modify", "create", "implement"

 ---

````
````diff
@@ -711,7 +711,7 @@ All workflows use the same file structure definition regardless of complexity. *
 │   ├── [.chat/]        # CLI interaction sessions (created when analysis is run)
 │   │   ├── chat-*.md       # Saved chat sessions
 │   │   └── analysis-*.md   # Analysis results
-│   ├── [.process/]     # Planning analysis results (created by /workflow:plan)
+│   ├── [.process/]     # Planning analysis results (created by /workflow-plan)
 │   │   └── ANALYSIS_RESULTS.md  # Analysis results and planning artifacts
 │   ├── IMPL_PLAN.md    # Planning document (REQUIRED)
 │   ├── TODO_LIST.md    # Progress tracking (REQUIRED)
````
````diff
@@ -783,7 +783,7 @@ All workflows use the same file structure definition regardless of complexity. *
 **Examples**:

 *Workflow Commands (lightweight):*
-- `/workflow:lite-plan "feature idea"` (exploratory) → `.scratchpad/lite-plan-feature-idea-20250105-143110.md`
+- `/workflow-lite-planex "feature idea"` (exploratory) → `.scratchpad/lite-plan-feature-idea-20250105-143110.md`
 - `/workflow:lite-fix "bug description"` (bug fixing) → `.scratchpad/lite-fix-bug-20250105-143130.md`

 > **Note**: Direct CLI commands (`/cli:analyze`, `/cli:execute`, etc.) have been replaced by semantic invocation and workflow commands.

````
````diff
@@ -126,10 +126,11 @@ const planObject = generatePlanFromSchema(schema, context)
 Phase 1: Schema & Context Loading
 ├─ Read schema reference (plan-overview-base-schema or plan-overview-fix-schema)
 ├─ Aggregate multi-angle context (explorations or diagnoses)
+├─ If no explorations: use "## Prior Analysis" block from task description as primary context
 └─ Determine output structure from schema

 Phase 2: CLI Execution
-├─ Construct CLI command with planning template
+├─ Construct CLI command with planning template (include Prior Analysis context when no explorations)
 ├─ Execute Gemini (fallback: Qwen → degraded mode)
 └─ Timeout: 60 minutes

````
````diff
@@ -173,7 +174,7 @@ TASK:
 • Identify dependencies and execution phases
 • Generate complexity-appropriate fields (rationale, verification, risks, code_skeleton, data_flow)
 MODE: analysis
-CONTEXT: @**/* | Memory: {context_summary}
+CONTEXT: @**/* | Memory: {context_summary}. If task description contains '## Prior Analysis', treat it as primary planning context with pre-analyzed files, findings, and recommendations.
 EXPECTED:
 ## Summary
 [overview]
````
````diff
@@ -229,9 +230,9 @@ EXPECTED:
 **Total**: [time]

 CONSTRAINTS:
 - Follow schema structure from {schema_path}
 - Output as structured markdown text following the EXPECTED format above
 - Task IDs use format TASK-001, TASK-002, etc. (FIX-001 for fix-plan)
-- Complexity determines required fields:
+- Complexity determines required sections:
   * Low: base fields only
   * Medium: + rationale + verification + design_decisions
   * High: + risks + code_skeleton + data_flow
````
````diff
@@ -256,8 +257,8 @@ function extractSection(cliOutput, header) {
 // Parse structured tasks from CLI output
 function extractStructuredTasks(cliOutput, complexity) {
   const tasks = []
-  // Split by task headers (supports both TASK-NNN and T\d+ formats)
-  const taskBlocks = cliOutput.split(/### (TASK-\d+|T\d+):/).slice(1)
+  // Split by task headers (flexible: 1-3 #, optional colon, supports TASK-NNN and T\d+)
+  const taskBlocks = cliOutput.split(/#{1,3}\s*(TASK-\d+|T\d+):?\s*/).slice(1)

   for (let i = 0; i < taskBlocks.length; i += 2) {
     const rawId = taskBlocks[i].trim()
````
````diff
@@ -820,8 +821,8 @@ function validateTask(task) {
 - **Write BOTH plan.json AND .task/*.json files** (two-layer output)
 - Handle CLI errors with fallback chain

-**Bash Tool**:
-- Use `run_in_background=false` for all Bash/CLI calls to ensure foreground execution
+**Bash Tool (OVERRIDE global CLAUDE.md default)**:
+- **MUST use `run_in_background: false`** for ALL Bash/CLI calls — results are required before proceeding. This overrides any global `run_in_background: true` default.

 **NEVER**:
 - Execute implementation (return plan only)
````
````diff
@@ -455,7 +455,7 @@ function buildCliCommand(task, cliTool, cliPrompt) {

 **Auto-Check Workflow Context**:
 - Verify session context paths are provided in agent prompt
-- If missing, request session context from workflow:execute
+- If missing, request session context from workflow-execute
 - Never assume default paths without explicit session context

 ### 5. Problem-Solving

````
````diff
@@ -237,7 +237,7 @@ After Phase 4 completes, determine Phase 5 variant:
 ### Phase 5-L: Loop Completion (when inner_loop=true AND more same-prefix tasks pending)

 1. **TaskUpdate**: Mark current task `completed`
-2. **Message Bus**: Log completion
+2. **Message Bus**: Log completion with verification evidence
    ```
    mcp__ccw-tools__team_msg(
      operation="log",
````
````diff
@@ -245,7 +245,7 @@ After Phase 4 completes, determine Phase 5 variant:
      from=<role>,
      to="coordinator",
      type=<message_types.success>,
-     summary="[<role>] <task-id> complete. <brief-summary>",
+     summary="[<role>] <task-id> complete. <brief-summary>. Verified: <verification_method>",
      ref=<artifact-path>
    )
    ```
````
````diff
@@ -283,7 +283,7 @@ After Phase 4 completes, determine Phase 5 variant:
 | Condition | Action |
 |-----------|--------|
 | Same-prefix successor (inner loop role) | Do NOT spawn — main agent handles via inner loop |
-| 1 ready task, simple linear successor, different prefix | Spawn directly via `Task(run_in_background: true)` |
+| 1 ready task, simple linear successor, different prefix | Spawn directly via `Task(run_in_background: true)` + log `fast_advance` to message bus |
 | Multiple ready tasks (parallel window) | SendMessage to coordinator (needs orchestration) |
 | No ready tasks + others running | SendMessage to coordinator (status update) |
 | No ready tasks + nothing running | SendMessage to coordinator (pipeline may be complete) |
````
````diff
@@ -311,6 +311,23 @@ inner_loop: <true|false based on successor role>`
 })
 ```

+### Fast-Advance Notification
+
+After spawning a successor via fast-advance, MUST log to message bus:
+
+```
+mcp__ccw-tools__team_msg(
+  operation="log",
+  team=<session_id>,
+  from=<role>,
+  to="coordinator",
+  type="fast_advance",
+  summary="[<role>] fast-advanced <completed-task-id> → spawned <successor-role> for <successor-task-id>"
+)
+```
+
+This is a passive log entry (NOT a SendMessage). Coordinator reads it on next callback to reconcile `active_workers`.
+
 ### SendMessage Format

 ```
````
````diff
@@ -320,8 +337,10 @@ SendMessage(team_name=<team_name>, recipient="coordinator", message="[<role>] <f
 **Final report contents**:
 - Tasks completed (count + list)
 - Artifacts produced (paths)
+- Files modified (paths + before/after evidence from Phase 4 verification)
 - Discuss results (verdicts + ratings)
 - Key decisions (from context_accumulator)
+- Verification summary (methods used, pass/fail status)
 - Any warnings or issues

 ---
````
````diff
@@ -385,6 +404,48 @@ Write discoveries to corresponding wisdom files:

 ---

+## Knowledge Transfer
+
+### Upstream Context Loading (Phase 2)
+
+When executing Phase 2 of a role-spec, the worker MUST load available cross-role context:
+
+| Source | Path | Load Method |
+|--------|------|-------------|
+| Upstream artifacts | `<session>/artifacts/*.md` | Read files listed in task description or dependency chain |
+| Shared memory | `<session>/shared-memory.json` | Read and parse JSON |
+| Wisdom | `<session>/wisdom/*.md` | Read all wisdom files |
+| Exploration cache | `<session>/explorations/cache-index.json` | Check before new explorations |
+
+### Downstream Context Publishing (Phase 4)
+
+After Phase 4 verification, the worker MUST publish its contributions:
+
+1. **Artifact**: Write deliverable to `<session>/artifacts/<prefix>-<task-id>-<name>.md`
+2. **shared-memory.json**: Read-merge-write under role namespace
+   ```json
+   { "<role>": { "key_findings": [...], "decisions": [...], "files_modified": [...] } }
+   ```
+3. **Wisdom**: Append new patterns to `learnings.md`, decisions to `decisions.md`, issues to `issues.md`
````
````diff
+### Inner Loop Context Accumulator
+
+For `inner_loop: true` roles, `context_accumulator` is maintained in-memory:
+
+```
+context_accumulator.append({
+  task: "<task-id>",
+  artifact: "<output-path>",
+  key_decisions: [...],
+  summary: "<brief>",
+  files_modified: [...]
+})
+```
+
+Pass the full accumulator to each subsequent task's Phase 3 subagent as `## Prior Context`.
+
````
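The accumulator pattern above can be sketched concretely. This is a sketch under stated assumptions: the entry shape mirrors the fields shown in the diff, but the rendering format for `## Prior Context` is invented:

```javascript
// In-memory context accumulator for inner-loop roles.
// Entry fields follow the diff; the markdown rendering is an assumption.
const contextAccumulator = [];

function recordTask(entry) {
  contextAccumulator.push(entry);
}

// Render the accumulator as a "## Prior Context" block for the next subagent
function formatPriorContext(acc) {
  const lines = ['## Prior Context'];
  for (const e of acc) {
    lines.push(`- ${e.task}: ${e.summary} (artifact: ${e.artifact})`);
  }
  return lines.join('\n');
}

recordTask({ task: 'IMPL-001', artifact: 'artifacts/impl-001.md', summary: 'added loader', key_decisions: [], files_modified: [] });
recordTask({ task: 'IMPL-002', artifact: 'artifacts/impl-002.md', summary: 'fixed parser', key_decisions: [], files_modified: [] });
console.log(formatPriorContext(contextAccumulator));
```

Because the accumulator lives in memory, it is scoped to one worker's inner loop and is lost if the worker restarts; durable context goes through shared-memory.json instead.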
````diff
+---
+
 ## Message Bus Protocol

 Always use `mcp__ccw-tools__team_msg` for logging. Parameters:
````
File diff suppressed because it is too large
````diff
@@ -15,28 +15,20 @@ Main process orchestrator: intent analysis → workflow selection → command ch

 | Skill | Internal pipeline |
 |-------|-----------|
-| `workflow-lite-plan` | explore → plan → confirm → execute |
+| `workflow-lite-planex` | explore → plan → confirm → execute |
 | `workflow-plan` | session → context → convention → gen → verify/replan |
 | `workflow-execute` | session discovery → task processing → commit |
-| `workflow-tdd` | 6-phase TDD plan → verify |
+| `workflow-tdd-plan` | 6-phase TDD plan → verify |
 | `workflow-test-fix` | session → context → analysis → gen → cycle |
 | `workflow-multi-cli-plan` | ACE context → CLI discussion → plan → execute |
 | `review-cycle` | session/module review → fix orchestration |
 | `brainstorm` | auto/single-role → artifacts → analysis → synthesis |
 | `spec-generator` | product-brief → PRD → architecture → epics |
 | `workflow:collaborative-plan-with-file` | understanding agent → parallel agents → plan-note.md |
-| `workflow:req-plan-with-file` | requirement decomposition → issue creation → execution-plan.json |
+| `workflow:roadmap-with-file` | strategic requirement roadmap → issue creation → execution-plan.json |
 | `workflow:integration-test-cycle` | explore → test dev → test-fix cycle → reflection |
 | `workflow:refactor-cycle` | tech debt discovery → prioritize → execute → validate |
-| `team-planex` | planner + executor wave pipeline (planning while executing) |
-| `team-iterdev` | iterative development team (planner → developer → reviewer loop) |
-| `team-lifecycle` | full-lifecycle team (spec → impl → test) |
-| `team-issue` | issue-resolution team (discover → plan → execute) |
-| `team-testing` | testing team (strategy → generate → execute → analyze) |
-| `team-quality-assurance` | QA team (scout → strategist → generator → executor → analyst) |
-| `team-brainstorm` | team brainstorming (facilitator → participants → synthesizer) |
-| `team-uidesign` | UI design team (designer → implementer dual-track) |
+| `team-planex` | planner + executor wave pipeline (suited to many scattered issues, or well-defined issues produced by a roadmap) |

 Standalone commands (still using colon format): workflow:brainstorm-with-file, workflow:debug-with-file, workflow:analyze-with-file, workflow:collaborative-plan-with-file, workflow:req-plan-with-file, workflow:integration-test-cycle, workflow:refactor-cycle, workflow:unified-execute-with-file, workflow:clean, workflow:init, workflow:init-guidelines, workflow:ui-design:*, issue:*, workflow:session:*

 ## Core Concept: Self-Contained Skills
````
````diff
@@ -51,24 +43,21 @@ Main process orchestrator: intent analysis → workflow selection → command ch

 | Unit type | Skill | Notes |
 |---------|-------|------|
-| Lightweight Plan+Execute | `workflow-lite-plan` | plan→execute handled internally |
+| Lightweight Plan+Execute | `workflow-lite-planex` | plan→execute handled internally |
 | Standard Planning | `workflow-plan` → `workflow-execute` | plan and execute are separate Skills |
-| TDD Planning | `workflow-tdd` → `workflow-execute` | tdd-plan and execute are separate Skills |
+| TDD Planning | `workflow-tdd-plan` → `workflow-execute` | tdd-plan and execute are separate Skills |
 | Spec-driven | `spec-generator` → `workflow-plan` → `workflow-execute` | spec documents drive full development |
 | Test pipeline | `workflow-test-fix` | gen→cycle handled internally |
 | Code review | `review-cycle` | review→fix handled internally |
 | Multi-CLI collaboration | `workflow-multi-cli-plan` | ACE context → CLI discussion → plan → execute |
-| Collaborative planning | `workflow:collaborative-plan-with-file` | multiple agents collaboratively produce plan-note.md |
-| Requirement roadmap | `workflow:req-plan-with-file` | requirement decomposition → issue creation → execution plan |
+| Analysis → planning | `workflow:analyze-with-file` → `workflow-lite-planex` | collaborative-analysis artifacts passed to lite-plan automatically |
+| Brainstorm → planning | `workflow:brainstorm-with-file` → `workflow-plan` → `workflow-execute` | brainstorm artifacts passed to formal planning automatically |
+| 0→1 development (small) | `workflow:brainstorm-with-file` → `workflow-plan` → `workflow-execute` | small-scale from scratch: exploration + formal planning + implementation |
+| 0→1 development (medium/large) | `workflow:brainstorm-with-file` → `workflow-plan` → `workflow-execute` | formal planning + execution after exploration |
+| Collaborative planning | `workflow:collaborative-plan-with-file` → `workflow:unified-execute-with-file` | multi-agent collaborative planning → universal execution |
+| Requirement roadmap | `workflow:roadmap-with-file` → `team-planex` | requirement decomposition → issue creation → wave-pipeline execution (requires explicit roadmap keywords) |
 | Integration test cycle | `workflow:integration-test-cycle` | self-iterating integration-test loop |
 | Refactor cycle | `workflow:refactor-cycle` | tech-debt discovery → refactor → validate |
 | Team Plan+Execute | `team-planex` | two-person team wave pipeline, planning while executing |
-| Team iterative development | `team-iterdev` | multi-role iterative development loop |
-| Team full lifecycle | `team-lifecycle` | spec→impl→test end to end |
-| Team Issue | `team-issue` | multi-role collaborative issue resolution |
-| Team testing | `team-testing` | multi-role testing pipeline |
-| Team QA | `team-quality-assurance` | multi-role quality-assurance loop |
-| Team brainstorm | `team-brainstorm` | multi-role collaborative brainstorming |
-| Team UI design | `team-uidesign` | dual-track design + implementation |

 ## Execution Model

````
````diff
@@ -136,27 +125,23 @@ function analyzeIntent(input) {
 function detectTaskType(text) {
   const patterns = {
     'bugfix-hotfix': /urgent|production|critical/ && /fix|bug/,
-    // With-File workflows (documented exploration with multi-CLI collaboration)
+    // With-File workflows (documented exploration → auto chain to lite-plan)
+    // 0→1 Greenfield detection (priority over brainstorm/roadmap)
+    'greenfield': /从零开始|from scratch|0.*to.*1|greenfield|全新.*开发|新项目|new project|build.*from.*ground/,
     'brainstorm': /brainstorm|ideation|头脑风暴|创意|发散思维|creative thinking|multi-perspective.*think|compare perspectives|探索.*可能/,
     'brainstorm-to-issue': /brainstorm.*issue|头脑风暴.*issue|idea.*issue|想法.*issue|从.*头脑风暴|convert.*brainstorm/,
     'debug-file': /debug.*document|hypothesis.*debug|troubleshoot.*track|investigate.*log|调试.*记录|假设.*验证|systematic debug|深度调试/,
     'analyze-file': /analyze.*document|explore.*concept|understand.*architecture|investigate.*discuss|collaborative analysis|分析.*讨论|深度.*理解|协作.*分析/,
     'collaborative-plan': /collaborative.*plan|协作.*规划|多人.*规划|multi.*agent.*plan|Plan Note|分工.*规划/,
-    'req-plan': /roadmap|需求.*规划|需求.*拆解|requirement.*plan|req.*plan|progressive.*plan|路线.*图/,
+    'roadmap': /roadmap|路线.*图/,  // Narrowed: only explicit roadmap keywords (需求规划/需求拆解 moved to greenfield routing)
     'spec-driven': /spec.*gen|specification|PRD|产品需求|产品文档|产品规格/,
     // Cycle workflows (self-iterating with reflection)
     'integration-test': /integration.*test|集成测试|端到端.*测试|e2e.*test|integration.*cycle/,
     'refactor': /refactor|重构|tech.*debt|技术债务/,
-    // Team workflows (multi-role collaboration, explicit "team" keyword required)
+    // Team workflows (kept: team-planex only)
     'team-planex': /team.*plan.*exec|team.*planex|团队.*规划.*执行|并行.*规划.*执行|wave.*pipeline/,
-    'team-iterdev': /team.*iter|team.*iterdev|迭代.*开发.*团队|iterative.*dev.*team/,
-    'team-lifecycle': /team.*lifecycle|全生命周期|full.*lifecycle|spec.*impl.*test.*team/,
-    'team-issue': /team.*issue.*resolv|团队.*issue|team.*resolve.*issue/,
-    'team-testing': /team.*test|测试团队|comprehensive.*test.*team|全面.*测试.*团队/,
-    'team-qa': /team.*qa|quality.*assurance.*team|QA.*团队|质量.*保障.*团队|团队.*质量/,
-    'team-brainstorm': /team.*brainstorm|团队.*头脑风暴|team.*ideation|多人.*头脑风暴/,
-    'team-uidesign': /team.*ui.*design|UI.*设计.*团队|dual.*track.*design|团队.*UI/,
     // Standard workflows
-    'multi-cli-plan': /multi.*cli|多.*CLI|多模型.*协作|multi.*model.*collab/,
+    'multi-cli': /multi.*cli|多.*CLI|多模型.*协作|multi.*model.*collab/,
     'bugfix': /fix|bug|error|crash|fail|debug/,
     'issue-batch': /issues?|batch/ && /fix|resolve/,
     'issue-transition': /issue workflow|structured workflow|queue|multi-stage/,
````
````diff
@@ -165,6 +150,7 @@ function detectTaskType(text) {
     'ui-design': /ui|design|component|style/,
     'tdd': /tdd|test-driven|test first/,
     'test-fix': /test fail|fix test|failing test/,
+    'test-gen': /generate test|写测试|add test|补充测试/,
     'review': /review|code review/,
     'documentation': /docs|documentation|readme/
   };
````
````diff
@@ -202,34 +188,34 @@ async function clarifyRequirements(analysis) {
 function selectWorkflow(analysis) {
   const levelMap = {
     'bugfix-hotfix': { level: 2, flow: 'bugfix.hotfix' },
-    // With-File workflows (documented exploration with multi-CLI collaboration)
-    'brainstorm': { level: 4, flow: 'brainstorm-with-file' },  // Multi-perspective ideation
-    'brainstorm-to-issue': { level: 4, flow: 'brainstorm-to-issue' },  // Brainstorm → Issue workflow
-    'debug-file': { level: 3, flow: 'debug-with-file' },  // Hypothesis-driven debugging
-    'analyze-file': { level: 3, flow: 'analyze-with-file' },  // Collaborative analysis
+    // 0→1 Greenfield (complexity-adaptive routing)
+    'greenfield': { level: analysis.complexity === 'high' ? 4 : 3,
+                    flow: analysis.complexity === 'high' ? 'greenfield-phased'   // large: brainstorm → workflow-plan → execute
+                        : analysis.complexity === 'medium' ? 'greenfield-plan'   // medium: brainstorm → workflow-plan → execute
+                        : 'brainstorm-to-plan' },  // small: brainstorm → workflow-plan
+    // With-File workflows → auto chain to lite-plan
+    'brainstorm': { level: 4, flow: 'brainstorm-to-plan' },  // brainstorm-with-file → workflow-plan
+    'brainstorm-to-issue': { level: 4, flow: 'brainstorm-to-issue' },  // Brainstorm → Issue workflow
+    'debug-file': { level: 3, flow: 'debug-with-file' },  // Hypothesis-driven debugging (standalone)
+    'analyze-file': { level: 3, flow: 'analyze-to-plan' },  // analyze-with-file → lite-plan
     'collaborative-plan': { level: 3, flow: 'collaborative-plan' },  // Multi-agent collaborative planning
-    'req-plan': { level: 4, flow: 'req-plan' },  // Requirement-level roadmap planning
+    'roadmap': { level: 4, flow: 'roadmap' },  // roadmap → team-planex (explicit roadmap only)
     'spec-driven': { level: 4, flow: 'spec-driven' },  // spec-generator → plan → execute
     // Cycle workflows (self-iterating with reflection)
-    'integration-test': { level: 3, flow: 'integration-test-cycle' },  // Self-iterating integration test
-    'refactor': { level: 3, flow: 'refactor-cycle' },  // Tech debt discovery and refactoring
-    // Team workflows (multi-role collaboration)
+    'integration-test': { level: 3, flow: 'integration-test-cycle' },
+    'refactor': { level: 3, flow: 'refactor-cycle' },
+    // Team workflows (kept: team-planex only)
     'team-planex': { level: 'Team', flow: 'team-planex' },
-    'team-iterdev': { level: 'Team', flow: 'team-iterdev' },
-    'team-lifecycle': { level: 'Team', flow: 'team-lifecycle' },
-    'team-issue': { level: 'Team', flow: 'team-issue' },
-    'team-testing': { level: 'Team', flow: 'team-testing' },
-    'team-qa': { level: 'Team', flow: 'team-qa' },
-    'team-brainstorm': { level: 'Team', flow: 'team-brainstorm' },
-    'team-uidesign': { level: 'Team', flow: 'team-uidesign' },
     // Standard workflows
-    'multi-cli-plan': { level: 3, flow: 'multi-cli-plan' },  // Multi-CLI collaborative planning
+    'multi-cli': { level: 3, flow: 'multi-cli-plan' },
     'bugfix': { level: 2, flow: 'bugfix.standard' },
     'issue-batch': { level: 'Issue', flow: 'issue' },
-    'issue-transition': { level: 2.5, flow: 'rapid-to-issue' },  // Bridge workflow
+    'issue-transition': { level: 2.5, flow: 'rapid-to-issue' },
     'exploration': { level: 4, flow: 'full' },
     'quick-task': { level: 2, flow: 'rapid' },
     'ui-design': { level: analysis.complexity === 'high' ? 4 : 3, flow: 'ui' },
     'tdd': { level: 3, flow: 'tdd' },
     'test-gen': { level: 3, flow: 'test-gen' },
     'test-fix': { level: 3, flow: 'test-fix-gen' },
     'review': { level: 3, flow: 'review-cycle-fix' },
     'documentation': { level: 2, flow: 'docs' },
````
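The complexity-adaptive greenfield entry can be exercised in isolation. This sketch reproduces just that ternary outside the `levelMap`; the `analysis` objects passed in are invented:

```javascript
// Isolated reproduction of the greenfield routing ternary from the diff.
// Input objects are invented; only the complexity field is consulted.
function routeGreenfield(analysis) {
  return {
    level: analysis.complexity === 'high' ? 4 : 3,
    flow: analysis.complexity === 'high' ? 'greenfield-phased'
        : analysis.complexity === 'medium' ? 'greenfield-plan'
        : 'brainstorm-to-plan'
  };
}

console.log(routeGreenfield({ complexity: 'high' }));   // { level: 4, flow: 'greenfield-phased' }
console.log(routeGreenfield({ complexity: 'medium' })); // { level: 3, flow: 'greenfield-plan' }
console.log(routeGreenfield({ complexity: 'low' }));    // { level: 3, flow: 'brainstorm-to-plan' }
```

All three branches start from `workflow:brainstorm-with-file`; they differ only in whether a review cycle is appended for large scope.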
````diff
@@ -245,7 +231,7 @@ function buildCommandChain(workflow, analysis) {
   const chains = {
     // Level 2 - Lightweight
     'rapid': [
-      { cmd: 'workflow-lite-plan', args: `"${analysis.goal}"` },
+      { cmd: 'workflow-lite-planex', args: `"${analysis.goal}"` },
       ...(analysis.constraints?.includes('skip-tests') ? [] : [
         { cmd: 'workflow-test-fix', args: '' }
       ])
````
````diff
@@ -253,21 +239,21 @@ function buildCommandChain(workflow, analysis) {

     // Level 2 Bridge - Lightweight to Issue Workflow
     'rapid-to-issue': [
-      { cmd: 'workflow-lite-plan', args: `"${analysis.goal}" --plan-only` },
+      { cmd: 'workflow-lite-planex', args: `"${analysis.goal}" --plan-only` },
       { cmd: 'issue:convert-to-plan', args: '--latest-lite-plan -y' },
       { cmd: 'issue:queue', args: '' },
       { cmd: 'issue:execute', args: '--queue auto' }
     ],

     'bugfix.standard': [
-      { cmd: 'workflow-lite-plan', args: `--bugfix "${analysis.goal}"` },
+      { cmd: 'workflow-lite-planex', args: `--bugfix "${analysis.goal}"` },
       ...(analysis.constraints?.includes('skip-tests') ? [] : [
         { cmd: 'workflow-test-fix', args: '' }
       ])
     ],

     'bugfix.hotfix': [
-      { cmd: 'workflow-lite-plan', args: `--hotfix "${analysis.goal}"` }
+      { cmd: 'workflow-lite-planex', args: `--hotfix "${analysis.goal}"` }
     ],

     'multi-cli-plan': [
````
````diff
@@ -278,21 +264,22 @@ function buildCommandChain(workflow, analysis) {
     ],

     'docs': [
-      { cmd: 'workflow-lite-plan', args: `"${analysis.goal}"` }
+      { cmd: 'workflow-lite-planex', args: `"${analysis.goal}"` }
     ],

-    // With-File workflows (documented exploration with multi-CLI collaboration)
-    'brainstorm-with-file': [
-      { cmd: 'workflow:brainstorm-with-file', args: `"${analysis.goal}"` }
-      // Note: Has built-in post-completion options (create plan, create issue, deep analysis)
-    ],
-
-    // Brainstorm-to-Issue workflow (bridge from brainstorm to issue execution)
-    'brainstorm-to-issue': [
-      // Note: Assumes brainstorm session already exists, or run brainstorm first
-      { cmd: 'issue:from-brainstorm', args: `SESSION="${extractBrainstormSession(analysis)}" --auto` },
-      { cmd: 'issue:queue', args: '' },
-      { cmd: 'issue:execute', args: '--queue auto' }
+    // With-File → Auto Chain to lite-plan
+    'analyze-to-plan': [
+      { cmd: 'workflow:analyze-with-file', args: `"${analysis.goal}"` },
+      { cmd: 'workflow-lite-planex', args: '' }  // auto receives analysis artifacts (discussion.md)
+    ],
+
+    'brainstorm-to-plan': [
+      { cmd: 'workflow:brainstorm-with-file', args: `"${analysis.goal}"` },
+      { cmd: 'workflow-plan', args: '' },  // formal planning with brainstorm artifacts
+      { cmd: 'workflow-execute', args: '' },
+      ...(analysis.constraints?.includes('skip-tests') ? [] : [
+        { cmd: 'workflow-test-fix', args: '' }
+      ])
     ],

     'debug-with-file': [
````
````diff
@@ -300,32 +287,42 @@ function buildCommandChain(workflow, analysis) {
       // Note: Self-contained with hypothesis-driven iteration and Gemini validation
     ],

-    'analyze-with-file': [
-      { cmd: 'workflow:analyze-with-file', args: `"${analysis.goal}"` }
-      // Note: Self-contained with multi-round discussion and CLI exploration
-    ],
-
+    // Brainstorm-to-Issue workflow (bridge from brainstorm to issue execution)
+    'brainstorm-to-issue': [
+      { cmd: 'issue:from-brainstorm', args: `SESSION="${extractBrainstormSession(analysis)}" --auto` },
+      { cmd: 'issue:queue', args: '' },
+      { cmd: 'issue:execute', args: '--queue auto' }
+    ],
+
+    // 0→1 Greenfield (complexity-adaptive)
+    'greenfield-plan': [
+      { cmd: 'workflow:brainstorm-with-file', args: `"${analysis.goal}"` },
+      { cmd: 'workflow-plan', args: '' },  // formal planning after exploration
+      { cmd: 'workflow-execute', args: '' },
+      ...(analysis.constraints?.includes('skip-tests') ? [] : [
+        { cmd: 'workflow-test-fix', args: '' }
+      ])
+    ],
+
+    'greenfield-phased': [
+      { cmd: 'workflow:brainstorm-with-file', args: `"${analysis.goal}"` },
+      { cmd: 'workflow-plan', args: '' },  // formal planning after exploration
+      { cmd: 'workflow-execute', args: '' },
+      { cmd: 'review-cycle', args: '' },
+      ...(analysis.constraints?.includes('skip-tests') ? [] : [
+        { cmd: 'workflow-test-fix', args: '' }
+      ])
+    ],
+
+    // Universal Plan+Execute
     'collaborative-plan': [
       { cmd: 'workflow:collaborative-plan-with-file', args: `"${analysis.goal}"` },
+      { cmd: 'workflow:unified-execute-with-file', args: '' }
+      // Note: Plan Note → unified execution engine
     ],

-    'req-plan': [
-      { cmd: 'workflow:req-plan-with-file', args: `"${analysis.goal}"` },
+    'roadmap': [
+      { cmd: 'workflow:roadmap-with-file', args: `"${analysis.goal}"` },
+      { cmd: 'team-planex', args: '' }
+      // Note: Requirement decomposition → issue creation → team-planex wave execution
````
|
||||
],
|
||||
|
||||
// Cycle workflows (self-iterating with reflection)
|
||||
'integration-test-cycle': [
|
||||
{ cmd: 'workflow:integration-test-cycle', args: `"${analysis.goal}"` }
|
||||
// Note: Self-contained explore → test → fix cycle with reflection
|
||||
],
|
||||
|
||||
'refactor-cycle': [
|
||||
{ cmd: 'workflow:refactor-cycle', args: `"${analysis.goal}"` }
|
||||
// Note: Self-contained tech debt discovery → refactor → validate
|
||||
],
|
||||
|
||||
// Level 3 - Standard
|
||||
@@ -338,11 +335,25 @@ function buildCommandChain(workflow, analysis) {
|
||||
])
|
||||
],
|
||||
|
||||
// Level 4 - Spec-Driven Full Pipeline
|
||||
'spec-driven': [
|
||||
{ cmd: 'spec-generator', args: `"${analysis.goal}"` },
|
||||
{ cmd: 'workflow-plan', args: '' },
|
||||
{ cmd: 'workflow-execute', args: '' },
|
||||
...(analysis.constraints?.includes('skip-tests') ? [] : [
|
||||
{ cmd: 'workflow-test-fix', args: '' }
|
||||
])
|
||||
],
|
||||
|
||||
'tdd': [
|
||||
{ cmd: 'workflow-tdd', args: `"${analysis.goal}"` },
|
||||
{ cmd: 'workflow-tdd-plan', args: `"${analysis.goal}"` },
|
||||
{ cmd: 'workflow-execute', args: '' }
|
||||
],
|
||||
|
||||
'test-gen': [
|
||||
{ cmd: 'workflow-test-fix', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
'test-fix-gen': [
|
||||
{ cmd: 'workflow-test-fix', args: `"${analysis.goal}"` }
|
||||
],
|
||||
@@ -360,7 +371,7 @@ function buildCommandChain(workflow, analysis) {
|
||||
{ cmd: 'workflow-execute', args: '' }
|
||||
],
|
||||
|
||||
// Level 4 - Full
|
||||
// Level 4 - Full Exploration (brainstorm → formal planning → execute)
|
||||
'full': [
|
||||
{ cmd: 'brainstorm', args: `"${analysis.goal}"` },
|
||||
{ cmd: 'workflow-plan', args: '' },
|
||||
@@ -370,6 +381,15 @@ function buildCommandChain(workflow, analysis) {
|
||||
])
|
||||
],
|
||||
|
||||
// Cycle workflows (self-iterating with reflection)
|
||||
'integration-test-cycle': [
|
||||
{ cmd: 'workflow:integration-test-cycle', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
'refactor-cycle': [
|
||||
{ cmd: 'workflow:refactor-cycle', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
// Issue Workflow
|
||||
'issue': [
|
||||
{ cmd: 'issue:discover', args: '' },
|
||||
@@ -378,37 +398,9 @@ function buildCommandChain(workflow, analysis) {
|
||||
{ cmd: 'issue:execute', args: '' }
|
||||
],
|
||||
|
||||
// Team Workflows (multi-role collaboration, self-contained)
|
||||
// Team Workflows (kept: team-planex only)
|
||||
'team-planex': [
|
||||
{ cmd: 'team-planex', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
'team-iterdev': [
|
||||
{ cmd: 'team-iterdev', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
'team-lifecycle': [
|
||||
{ cmd: 'team-lifecycle', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
'team-issue': [
|
||||
{ cmd: 'team-issue', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
'team-testing': [
|
||||
{ cmd: 'team-testing', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
'team-qa': [
|
||||
{ cmd: 'team-quality-assurance', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
'team-brainstorm': [
|
||||
{ cmd: 'team-brainstorm', args: `"${analysis.goal}"` }
|
||||
],
|
||||
|
||||
'team-uidesign': [
|
||||
{ cmd: 'team-uidesign', args: `"${analysis.goal}"` }
|
||||
]
|
||||
};
|
||||
|
||||
@@ -484,7 +476,7 @@ function setupTodoTracking(chain, workflow, analysis) {
```

**Output**:
- TODO: `-> CCW:rapid: [1/2] workflow-lite-plan | CCW:rapid: [2/2] workflow-test-fix | ...`
- TODO: `-> CCW:rapid: [1/2] workflow-lite-planex | CCW:rapid: [2/2] workflow-test-fix | ...`
- Status File: `.workflow/.ccw/{session_id}/status.json`

---
@@ -607,7 +599,7 @@ Phase 1: Analyze Intent
+-- If clarity < 2 -> Phase 1.5: Clarify Requirements
|
Phase 2: Select Workflow & Build Chain
|-- Map task_type -> Level (1/2/3/4/Issue)
|-- Map task_type -> Level (2/3/4/Issue/Team)
|-- Select flow based on complexity
+-- Build command chain (Skill-based)
|
@@ -636,29 +628,25 @@ Phase 5: Execute Command Chain

| Input | Type | Level | Pipeline |
|-------|------|-------|----------|
| "Add API endpoint" | feature (low) | 2 | workflow-lite-plan → workflow-test-fix |
| "Fix login timeout" | bugfix | 2 | workflow-lite-plan → workflow-test-fix |
| "Use issue workflow" | issue-transition | 2.5 | workflow-lite-plan(plan-only) → convert-to-plan → queue → execute |
| "头脑风暴: 通知系统重构" | brainstorm | 4 | workflow:brainstorm-with-file |
| "从头脑风暴创建 issue" | brainstorm-to-issue | 4 | issue:from-brainstorm → issue:queue → issue:execute |
| "Add API endpoint" | feature (low) | 2 | workflow-lite-planex → workflow-test-fix |
| "Fix login timeout" | bugfix | 2 | workflow-lite-planex → workflow-test-fix |
| "Use issue workflow" | issue-transition | 2.5 | workflow-lite-planex(plan-only) → convert-to-plan → queue → execute |
| "协作分析: 认证架构" | analyze-file | 3 | analyze-with-file → workflow-lite-planex |
| "深度调试 WebSocket" | debug-file | 3 | workflow:debug-with-file |
| "协作分析: 认证架构优化" | analyze-file | 3 | workflow:analyze-with-file |
| "从零开始: 用户系统" | greenfield (medium) | 3 | brainstorm-with-file → workflow-plan → workflow-execute → workflow-test-fix |
| "greenfield: 大型平台" | greenfield (high) | 4 | brainstorm-with-file → workflow-plan → workflow-execute → review-cycle → workflow-test-fix |
| "头脑风暴: 通知系统" | brainstorm | 4 | brainstorm-with-file → workflow-plan → workflow-execute → workflow-test-fix |
| "从头脑风暴创建 issue" | brainstorm-to-issue | 4 | issue:from-brainstorm → issue:queue → issue:execute |
| "协作规划: 实时通知系统" | collaborative-plan | 3 | collaborative-plan-with-file → unified-execute-with-file |
| "需求规划: OAuth + 2FA" | req-plan | 4 | req-plan-with-file → team-planex |
| "roadmap: OAuth + 2FA" | roadmap | 4 | roadmap-with-file → team-planex |
| "specification: 用户系统" | spec-driven | 4 | spec-generator → workflow-plan → workflow-execute → workflow-test-fix |
| "集成测试: 支付流程" | integration-test | 3 | workflow:integration-test-cycle |
| "重构 auth 模块" | refactor | 3 | workflow:refactor-cycle |
| "multi-cli plan: API设计" | multi-cli-plan | 3 | workflow-multi-cli-plan → workflow-test-fix |
| "OAuth2 system" | feature (high) | 3 | workflow-plan → workflow-execute → review-cycle → workflow-test-fix |
| "Implement with TDD" | tdd | 3 | workflow-tdd → workflow-execute |
| "Implement with TDD" | tdd | 3 | workflow-tdd-plan → workflow-execute |
| "Uncertain: real-time" | exploration | 4 | brainstorm → workflow-plan → workflow-execute → workflow-test-fix |
| "team planex: 用户系统" | team-planex | Team | team-planex |
| "迭代开发团队: 支付模块" | team-iterdev | Team | team-iterdev |
| "全生命周期: 通知服务" | team-lifecycle | Team | team-lifecycle |
| "team resolve issue #42" | team-issue | Team | team-issue |
| "测试团队: 全面测试认证" | team-testing | Team | team-testing |
| "QA 团队: 质量保障支付" | team-qa | Team | team-quality-assurance |
| "团队头脑风暴: API 设计" | team-brainstorm | Team | team-brainstorm |
| "团队 UI 设计: 仪表盘" | team-uidesign | Team | team-uidesign |

---
@@ -668,10 +656,11 @@ Phase 5: Execute Command Chain
2. **Intent-Driven** - Auto-select workflow based on task intent
3. **Skill-Based Chaining** - Build command chain by composing independent Skills
4. **Self-Contained Skills** - Each Skill handles its complete pipeline internally and is the natural minimal execution unit
5. **Progressive Clarification** - Low clarity triggers clarification phase
6. **TODO Tracking** - Use CCW prefix to isolate workflow todos
7. **Error Handling** - Retry/skip/abort at Skill level
8. **User Control** - Optional user confirmation at each phase
5. **Auto Chain** - With-File artifacts are automatically passed to the downstream Skill (e.g. analyze → lite-plan)
6. **Progressive Clarification** - Low clarity triggers clarification phase
7. **TODO Tracking** - Use CCW prefix to isolate workflow todos
8. **Error Handling** - Retry/skip/abort at Skill level
9. **User Control** - Optional user confirmation at each phase

---

@@ -684,13 +673,13 @@ Phase 5: Execute Command Chain
```javascript
// Initial state (rapid workflow: 2 steps)
todos = [
{ content: "CCW:rapid: [1/2] workflow-lite-plan", status: "in_progress" },
{ content: "CCW:rapid: [1/2] workflow-lite-planex", status: "in_progress" },
{ content: "CCW:rapid: [2/2] workflow-test-fix", status: "pending" }
];

// After step 1 completes
todos = [
{ content: "CCW:rapid: [1/2] workflow-lite-plan", status: "completed" },
{ content: "CCW:rapid: [1/2] workflow-lite-planex", status: "completed" },
{ content: "CCW:rapid: [2/2] workflow-test-fix", status: "in_progress" }
];
```
@@ -715,114 +704,51 @@ todos = [
"complexity": "medium"
},
"command_chain": [
{
"index": 0,
"command": "workflow-lite-plan",
"status": "completed"
},
{
"index": 1,
"command": "workflow-test-fix",
"status": "running"
}
{ "index": 0, "command": "workflow-lite-planex", "status": "completed" },
{ "index": 1, "command": "workflow-test-fix", "status": "running" }
],
"current_index": 1
}
```

**Status Values**:
- `running`: Workflow executing commands
- `completed`: All commands finished
- `failed`: User aborted or unrecoverable error
- `error`: Command execution failed (during error handling)

**Command Status Values**:
- `pending`: Not started
- `running`: Currently executing
- `completed`: Successfully finished
- `failed`: Execution failed
**Status Values**: `running` | `completed` | `failed` | `error`
**Command Status Values**: `pending` | `running` | `completed` | `failed`
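As a sketch of how a consumer might derive the session-level status from the per-command statuses above (the helper name `sessionStatus` is hypothetical, not part of the actual implementation; field names follow the status.json example):

```javascript
// Hypothetical helper: derive the session-level status from the
// command_chain entries in status.json.
function sessionStatus(statusJson) {
  const cmds = statusJson.command_chain;
  // Any failed command fails the session.
  if (cmds.some(c => c.status === 'failed')) return 'failed';
  // All commands finished -> session completed.
  if (cmds.every(c => c.status === 'completed')) return 'completed';
  // Otherwise the chain is still progressing.
  return 'running';
}
```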
---

## With-File Workflows

**With-File workflows** provide documented exploration with multi-CLI collaboration. They are self-contained and generate comprehensive session artifacts.
**With-File workflows** provide documented exploration with multi-CLI collaboration. They generate comprehensive session artifacts and can auto-chain to lite-plan for implementation.

| Workflow | Purpose | Key Features | Output Folder |
|----------|---------|--------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | Gemini/Codex/Claude perspectives, diverge-converge cycles | `.workflow/.brainstorm/` |
| **debug-with-file** | Hypothesis-driven debugging | Gemini validation, understanding evolution, NDJSON logging | `.workflow/.debug/` |
| **analyze-with-file** | Collaborative analysis | Multi-round Q&A, CLI exploration, documented discussions | `.workflow/.analysis/` |
| **collaborative-plan-with-file** | Multi-agent collaborative planning | Understanding agent + parallel agents, Plan Note shared doc | `.workflow/.planning/` |
| **req-plan-with-file** | Requirement roadmap planning | Requirement decomposition, issue creation, execution-plan.json | `.workflow/.planning/` |
| Workflow | Purpose | Auto Chain | Output Folder |
|----------|---------|------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | → workflow-plan → workflow-execute (auto) | `.workflow/.brainstorm/` |
| **debug-with-file** | Hypothesis-driven debugging | Standalone (self-contained) | `.workflow/.debug/` |
| **analyze-with-file** | Collaborative analysis | → workflow-lite-planex (auto) | `.workflow/.analysis/` |
| **collaborative-plan-with-file** | Multi-agent collaborative planning | → unified-execute-with-file | `.workflow/.planning/` |
| **roadmap-with-file** | Strategic requirement roadmap | → team-planex | `.workflow/.planning/` |

**Auto Chain Mechanism**: When `analyze-with-file` completes, its artifacts (discussion.md) are automatically passed to `workflow-lite-planex`. When `brainstorm-with-file` completes, its artifacts (brainstorm.md) are passed to `workflow-plan` for formal planning. No user intervention needed.
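The auto-chain rule amounts to a small lookup from a completed With-File Skill to its downstream command and artifact. This is a minimal illustrative sketch only — `AUTO_CHAIN` and `resolveNextCommand` are hypothetical names, not the actual implementation:

```javascript
// Hypothetical mapping: completed With-File workflow -> downstream Skill + artifact.
const AUTO_CHAIN = {
  'workflow:analyze-with-file':    { next: 'workflow-lite-planex', artifact: 'discussion.md' },
  'workflow:brainstorm-with-file': { next: 'workflow-plan',        artifact: 'brainstorm.md' },
};

function resolveNextCommand(completedCmd, sessionDir) {
  const rule = AUTO_CHAIN[completedCmd];
  if (!rule) return null; // standalone workflow (e.g. debug-with-file)
  // The downstream Skill receives the session artifact as its input.
  return { cmd: rule.next, args: `${sessionDir}/${rule.artifact}` };
}
```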

**Detection Keywords**:
- **brainstorm**: 头脑风暴, 创意, 发散思维, multi-perspective, compare perspectives
- **debug-file**: 深度调试, 假设验证, systematic debug, hypothesis debug
- **analyze-file**: 协作分析, 深度理解, collaborative analysis, explore concept
- **collaborative-plan**: 协作规划, 多人规划, collaborative plan, multi-agent plan, Plan Note
- **req-plan**: roadmap, 需求规划, 需求拆解, requirement plan, progressive plan

**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, understanding.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, issue, etc.)

---

## Team Workflows

**Team workflows** provide multi-role collaboration for complex tasks. Each team skill is self-contained with internal role routing via `--role=xxx`.

| Workflow | Roles | Pipeline | Use Case |
|----------|-------|----------|----------|
| **team-planex** | planner + executor | wave pipeline (plan and execute in parallel) | Tasks needing parallel planning and execution |
| **team-iterdev** | planner → developer → reviewer | iterative development loop | Development tasks needing multiple iterations |
| **team-lifecycle** | spec → impl → test | full lifecycle | Complete flow from requirements to testing |
| **team-issue** | discover → plan → execute | issue resolution | Multi-role collaborative issue resolution |
| **team-testing** | strategy → generate → execute → analyze | testing pipeline | Comprehensive test coverage |
| **team-quality-assurance** | scout → strategist → generator → executor → analyst | closed-loop QA | End-to-end quality assurance |
| **team-brainstorm** | facilitator → participants → synthesizer | team brainstorming | Multi-role collaborative brainstorming |
| **team-uidesign** | designer → implementer | dual-track design + implementation | UI design and implementation in parallel |

**Detection Keywords**:
- **team-planex**: team planex, 团队规划执行, wave pipeline
- **team-iterdev**: team iterdev, 迭代开发团队, iterative dev team
- **team-lifecycle**: team lifecycle, 全生命周期, full lifecycle
- **team-issue**: team issue, 团队 issue, team resolve issue
- **team-testing**: team test, 测试团队, comprehensive test team
- **team-qa**: team qa, QA 团队, 质量保障团队
- **team-brainstorm**: team brainstorm, 团队头脑风暴, team ideation
- **team-uidesign**: team ui design, UI 设计团队, dual track design

**Characteristics**:
1. **Self-Contained**: Each team skill handles internal role coordination
2. **Role-Based Routing**: All roles invoke the same skill with `--role=xxx`
3. **Shared Memory**: Roles communicate via shared-memory.json and message bus
4. **Auto Mode Support**: All team skills support `-y`/`--yes` to skip confirmations
- **roadmap**: roadmap, 需求规划, 需求拆解, requirement plan, progressive plan
- **spec-driven**: specification, PRD, 产品需求, 产品文档

---

## Cycle Workflows

**Cycle workflows** provide self-iterating development cycles with reflection-driven strategy adjustment. Each cycle is autonomous with built-in test-fix loops and quality gates.
**Cycle workflows** provide self-iterating development cycles with reflection-driven strategy adjustment.

| Workflow | Pipeline | Key Features | Output Folder |
|----------|----------|--------------|---------------|
| **integration-test-cycle** | explore → test dev → test-fix → reflection | Self-iterating with max-iterations, auto continue | `.workflow/.test-cycle/` |
| **refactor-cycle** | discover → prioritize → execute → validate | Multi-dimensional analysis, regression validation | `.workflow/.refactor-cycle/` |

**Detection Keywords**:
- **integration-test**: integration test, 集成测试, 端到端测试, e2e test
- **refactor**: refactor, 重构, tech debt, 技术债务

**Characteristics**:
1. **Self-Iterating**: Autonomous test-fix loops until quality gate passes
2. **Reflection-Driven**: Strategy adjusts based on previous iteration results
3. **Continue Support**: `--continue` flag to resume interrupted sessions
4. **Auto Mode Support**: `-y`/`--yes` for fully autonomous execution

---

## Utility Commands
@@ -831,10 +757,11 @@ todos = [

| Command | Purpose |
|---------|---------|
| `workflow:unified-execute-with-file` | Universal execution engine - consumes plan output from collaborative-plan, req-plan, brainstorm |
| `workflow:unified-execute-with-file` | Universal execution engine - consumes plan output from collaborative-plan, roadmap, brainstorm |
| `workflow:clean` | Intelligent code cleanup - mainline detection, stale artifact removal |
| `workflow:init` | Initialize `.workflow/project-tech.json` with project analysis |
| `workflow:init-guidelines` | Interactive wizard to fill `specs/*.md` |
| `workflow:status` | Generate on-demand views for project overview and workflow tasks |

---

@@ -848,9 +775,6 @@ todos = [
/ccw -y "Add user authentication"
/ccw --yes "Fix memory leak in WebSocket handler"

# Complex requirement (triggers clarification)
/ccw "Optimize system performance"

# Bug fix
/ccw "Fix memory leak in WebSocket handler"

@@ -863,35 +787,36 @@ todos = [
# Multi-CLI collaborative planning
/ccw "multi-cli plan: 支付网关API设计" # → workflow-multi-cli-plan → workflow-test-fix

# With-File workflows (documented exploration with multi-CLI collaboration)
/ccw "头脑风暴: 用户通知系统重新设计" # → brainstorm-with-file
/ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue" # → brainstorm-to-issue (bridge)
/ccw "深度调试: 系统随机崩溃问题" # → debug-with-file
/ccw "协作分析: 理解现有认证架构的设计决策" # → analyze-with-file
# 0→1 Greenfield development (exploration-first)
/ccw "从零开始: 用户认证系统" # → brainstorm-with-file → workflow-plan → workflow-execute → workflow-test-fix
/ccw "new project: 数据导出模块" # → brainstorm-with-file → workflow-plan → workflow-execute → workflow-test-fix
/ccw "全新开发: 实时通知系统" # → brainstorm-with-file → workflow-plan → workflow-execute → review-cycle → workflow-test-fix

# Team workflows (multi-role collaboration)
/ccw "team planex: 用户认证系统" # → team-planex (planner + executor wave pipeline)
/ccw "迭代开发团队: 支付模块重构" # → team-iterdev (planner → developer → reviewer)
/ccw "全生命周期: 通知服务开发" # → team-lifecycle (spec → impl → test)
/ccw "team resolve issue #42" # → team-issue (discover → plan → execute)
/ccw "测试团队: 全面测试认证模块" # → team-testing (strategy → generate → execute → analyze)
/ccw "QA 团队: 质量保障支付流程" # → team-quality-assurance (scout → strategist → generator → executor → analyst)
/ccw "团队头脑风暴: API 网关设计" # → team-brainstorm (facilitator → participants → synthesizer)
/ccw "团队 UI 设计: 管理后台仪表盘" # → team-uidesign (designer → implementer dual-track)
# With-File workflows → auto chain
/ccw "协作分析: 理解现有认证架构的设计决策" # → analyze-with-file → workflow-lite-planex
/ccw "头脑风暴: 用户通知系统重新设计" # → brainstorm-with-file → workflow-plan → workflow-execute → workflow-test-fix
/ccw "深度调试: 系统随机崩溃问题" # → debug-with-file (standalone)
/ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue" # → brainstorm-to-issue (bridge)

# Spec-driven full pipeline
/ccw "specification: 用户认证系统产品文档" # → spec-generator → workflow-plan → workflow-execute → workflow-test-fix

# Collaborative planning & requirement workflows
/ccw "协作规划: 实时通知系统架构" # → collaborative-plan-with-file → unified-execute
/ccw "需求规划: 用户认证 OAuth + 2FA" # → req-plan-with-file → team-planex
/ccw "roadmap: 数据导出功能路线图" # → req-plan-with-file → team-planex
/ccw "roadmap: 用户认证 OAuth + 2FA 路线图" # → roadmap-with-file → team-planex (explicit roadmap only)
/ccw "roadmap: 数据导出功能路线图" # → roadmap-with-file → team-planex (explicit roadmap only)

# Team workflows (kept: team-planex)
/ccw "team planex: 用户认证系统" # → team-planex (planner + executor wave pipeline)

# Cycle workflows (self-iterating)
/ccw "集成测试: 支付流程端到端" # → integration-test-cycle
/ccw "重构 auth 模块的技术债务" # → refactor-cycle
/ccw "tech debt: 清理支付服务" # → refactor-cycle

# Utility commands (invoked directly, not auto-routed)
# /workflow:unified-execute-with-file # Universal execution engine (consumes plan output)
# /workflow:clean # Intelligent code cleanup
# /workflow:init # Initialize project state
# /workflow:init-guidelines # Interactively fill project guidelines
# /workflow:status # Project overview and workflow status
```
@@ -33,7 +33,7 @@ Creates tool-specific configuration directories:
- `.gemini/settings.json`:
```json
{
"contextfilename": ["CLAUDE.md","GEMINI.md"]
"contextfilename": "CLAUDE.md"
}
```

@@ -41,7 +41,7 @@ Creates tool-specific configuration directories:
- `.qwen/settings.json`:
```json
{
"contextfilename": ["CLAUDE.md","QWEN.md"]
"contextfilename": "CLAUDE.md"
}
```

@@ -378,7 +378,7 @@ docker-compose.override.yml
## Integration Points

### Workflow Commands
- **After `workflow-lite-plan` skill**: Suggest running cli-init for better analysis
- **After `workflow-lite-planex` skill**: Suggest running cli-init for better analysis
- **Before analysis**: Recommend updating ignore patterns for cleaner results

### CLI Tool Integration

@@ -86,7 +86,7 @@ async function selectCommandCategory() {
header: "Category",
options: [
{ label: "Planning", description: "lite-plan, plan, multi-cli-plan, tdd-plan, quick-plan-with-file" },
{ label: "Execution", description: "lite-execute, execute, unified-execute-with-file" },
{ label: "Execution", description: "execute, unified-execute-with-file" },
{ label: "Testing", description: "test-fix-gen, test-cycle-execute, test-gen, tdd-verify" },
{ label: "Review", description: "review-session-cycle, review-module-cycle, review-cycle-fix" },
{ label: "Bug Fix", description: "lite-plan --bugfix, debug-with-file" },
@@ -107,24 +107,23 @@ async function selectCommandCategory() {
async function selectCommand(category) {
const commandOptions = {
'Planning': [
{ label: "/workflow:lite-plan", description: "Lightweight merged-mode planning" },
{ label: "/workflow:plan", description: "Full planning with architecture design" },
{ label: "/workflow:multi-cli-plan", description: "Multi-CLI collaborative planning (Gemini+Codex+Claude)" },
{ label: "/workflow:tdd-plan", description: "TDD workflow planning with Red-Green-Refactor" },
{ label: "/workflow-lite-planex", description: "Lightweight merged-mode planning" },
{ label: "/workflow-plan", description: "Full planning with architecture design" },
{ label: "/workflow-multi-cli-plan", description: "Multi-CLI collaborative planning (Gemini+Codex+Claude)" },
{ label: "/workflow-tdd-plan", description: "TDD workflow planning with Red-Green-Refactor" },
{ label: "/workflow:quick-plan-with-file", description: "Rapid planning with minimal docs" },
{ label: "/workflow:plan-verify", description: "Verify plan against requirements" },
{ label: "/workflow-plan-verify", description: "Verify plan against requirements" },
{ label: "/workflow:replan", description: "Update plan and execute changes" }
],
'Execution': [
{ label: "/workflow:lite-execute", description: "Execute from in-memory plan" },
{ label: "/workflow:execute", description: "Execute from planning session" },
{ label: "/workflow-execute", description: "Execute from planning session" },
{ label: "/workflow:unified-execute-with-file", description: "Universal execution engine" }
],
'Testing': [
{ label: "/workflow:test-fix-gen", description: "Generate test tasks for specific issues" },
{ label: "/workflow:test-cycle-execute", description: "Execute iterative test-fix cycle (>=95% pass)" },
{ label: "/workflow-test-fix", description: "Generate test tasks for specific issues" },
{ label: "/workflow-test-fix", description: "Execute iterative test-fix cycle (>=95% pass)" },
{ label: "/workflow:test-gen", description: "Generate comprehensive test suite" },
{ label: "/workflow:tdd-verify", description: "Verify TDD workflow compliance" }
{ label: "/workflow-tdd-verify", description: "Verify TDD workflow compliance" }
],
'Review': [
{ label: "/workflow:review-session-cycle", description: "Session-based multi-dimensional code review" },
@@ -133,7 +132,7 @@ async function selectCommand(category) {
{ label: "/workflow:review", description: "Post-implementation review" }
],
'Bug Fix': [
{ label: "/workflow:lite-plan", description: "Lightweight bug diagnosis and fix (with --bugfix flag)" },
{ label: "/workflow-lite-planex", description: "Lightweight bug diagnosis and fix (with --bugfix flag)" },
{ label: "/workflow:debug-with-file", description: "Hypothesis-driven debugging with documentation" }
],
'Brainstorm': [
@@ -181,8 +180,8 @@ async function selectExecutionUnit() {
header: "Unit",
options: [
// Planning + Execution Units
{ label: "quick-implementation", description: "【lite-plan → lite-execute】" },
{ label: "multi-cli-planning", description: "【multi-cli-plan → lite-execute】" },
{ label: "quick-implementation", description: "【lite-plan】" },
{ label: "multi-cli-planning", description: "【multi-cli-plan】" },
{ label: "full-planning-execution", description: "【plan → execute】" },
{ label: "verified-planning-execution", description: "【plan → plan-verify → execute】" },
{ label: "replanning-execution", description: "【replan → execute】" },
@@ -193,7 +192,7 @@ async function selectExecutionUnit() {
// Review Units
{ label: "code-review", description: "【review-*-cycle → review-cycle-fix】" },
// Bug Fix Units
{ label: "bug-fix", description: "【lite-plan --bugfix → lite-execute】" },
{ label: "bug-fix", description: "【lite-plan --bugfix】" },
// Issue Units
{ label: "issue-workflow", description: "【discover → plan → queue → execute】" },
{ label: "rapid-to-issue", description: "【lite-plan → convert-to-plan → queue → execute】" },
@@ -303,10 +302,9 @@ async function defineSteps(templateDesign) {
"description": "Quick implementation with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight implementation plan" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute implementation based on plan" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle until pass rate >= 95%" }
{ "cmd": "/workflow-lite-planex", "args": "\"{{goal}}\"", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight implementation plan (includes execution)" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle until pass rate >= 95%" }
]
}
```
@@ -318,13 +316,13 @@ async function defineSteps(templateDesign) {
"description": "Full workflow with verification, review, and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow:plan", "args": "\"{{goal}}\"", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed implementation plan" },
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan against requirements" },
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow-plan", "args": "\"{{goal}}\"", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed implementation plan" },
{ "cmd": "/workflow-plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan against requirements" },
{ "cmd": "/workflow-execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
@@ -336,10 +334,9 @@ async function defineSteps(templateDesign) {
"description": "Bug diagnosis and fix with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow:lite-plan", "args": "--bugfix \"{{goal}}\"", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Diagnose and plan bug fix" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute bug fix" },
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate regression tests" },
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fix with tests" }
|
||||
{ "cmd": "/workflow-lite-planex", "args": "--bugfix \"{{goal}}\"", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Diagnose, plan, and execute bug fix" },
|
||||
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate regression tests" },
|
||||
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fix with tests" }
|
||||
]
|
||||
}
|
||||
```
|
||||
@@ -351,7 +348,7 @@ async function defineSteps(templateDesign) {
|
||||
"description": "Urgent production bug fix (no tests)",
|
||||
"level": 2,
|
||||
"steps": [
|
||||
{ "cmd": "/workflow:lite-plan", "args": "--hotfix \"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Emergency hotfix mode" }
|
||||
{ "cmd": "/workflow-lite-planex", "args": "--hotfix \"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Emergency hotfix mode" }
|
||||
]
|
||||
}
|
||||
```
|
||||
@@ -363,9 +360,9 @@ async function defineSteps(templateDesign) {
|
||||
"description": "Test-driven development with Red-Green-Refactor",
|
||||
"level": 3,
|
||||
"steps": [
|
||||
{ "cmd": "/workflow:tdd-plan", "args": "\"{{goal}}\"", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create TDD task chain" },
|
||||
{ "cmd": "/workflow:execute", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute TDD cycle" },
|
||||
{ "cmd": "/workflow:tdd-verify", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify TDD compliance" }
|
||||
{ "cmd": "/workflow-tdd-plan", "args": "\"{{goal}}\"", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create TDD task chain" },
|
||||
{ "cmd": "/workflow-execute", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute TDD cycle" },
|
||||
{ "cmd": "/workflow-tdd-verify", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify TDD compliance" }
|
||||
]
|
||||
}
|
||||
```
|
||||
@@ -379,8 +376,8 @@ async function defineSteps(templateDesign) {
|
||||
"steps": [
|
||||
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
|
||||
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
|
||||
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests for fixes" },
|
||||
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fixes pass tests" }
|
||||
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests for fixes" },
|
||||
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fixes pass tests" }
|
||||
]
|
||||
}
|
||||
```
|
||||
@@ -392,8 +389,8 @@ async function defineSteps(templateDesign) {
|
||||
"description": "Fix failing tests",
|
||||
"level": 3,
|
||||
"steps": [
|
||||
{ "cmd": "/workflow:test-fix-gen", "args": "\"{{goal}}\"", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test fix tasks" },
|
||||
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
|
||||
{ "cmd": "/workflow-test-fix", "args": "\"{{goal}}\"", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test fix tasks" },
|
||||
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
|
||||
]
|
||||
}
|
||||
```
|
||||
@@ -420,7 +417,7 @@ async function defineSteps(templateDesign) {
|
||||
"description": "Bridge lightweight planning to issue workflow",
|
||||
"level": 2,
|
||||
"steps": [
|
||||
{ "cmd": "/workflow:lite-plan", "args": "\"{{goal}}\"", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight plan" },
|
||||
{ "cmd": "/workflow-lite-planex", "args": "\"{{goal}}\"", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight plan" },
|
||||
{ "cmd": "/issue:convert-to-plan", "args": "--latest-lite-plan -y", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert to issue plan" },
|
||||
{ "cmd": "/issue:queue", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
|
||||
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
|
||||
@@ -486,11 +483,11 @@ async function defineSteps(templateDesign) {
|
||||
"level": 4,
|
||||
"steps": [
|
||||
{ "cmd": "/brainstorm", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Unified brainstorming with multi-perspective exploration" },
|
||||
{ "cmd": "/workflow:plan", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed plan from brainstorm" },
|
||||
{ "cmd": "/workflow:plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan quality" },
|
||||
{ "cmd": "/workflow:execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
|
||||
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate comprehensive tests" },
|
||||
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
|
||||
{ "cmd": "/workflow-plan", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed plan from brainstorm" },
|
||||
{ "cmd": "/workflow-plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan quality" },
|
||||
{ "cmd": "/workflow-execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
|
||||
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate comprehensive tests" },
|
||||
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
|
||||
]
|
||||
}
|
||||
```
|
||||
@@ -502,10 +499,10 @@ async function defineSteps(templateDesign) {
|
||||
"description": "Multi-CLI collaborative planning with cross-verification",
|
||||
"level": 3,
|
||||
"steps": [
|
||||
{ "cmd": "/workflow:multi-cli-plan", "args": "\"{{goal}}\"", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Gemini+Codex+Claude collaborative planning" },
|
||||
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute converged plan" },
|
||||
{ "cmd": "/workflow:test-fix-gen", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests" },
|
||||
{ "cmd": "/workflow:test-cycle-execute", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
|
||||
{ "cmd": "/workflow-multi-cli-plan", "args": "\"{{goal}}\"", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Gemini+Codex+Claude collaborative planning" },
|
||||
// lite-execute is now an internal phase of multi-cli-plan (not invoked separately)
|
||||
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests" },
|
||||
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
|
||||
]
|
||||
}
|
||||
```
|
||||
@@ -530,7 +527,7 @@ Each command has input/output ports for pipeline composition:
|
||||
| tdd-plan | requirement | tdd-tasks | tdd-planning-execution |
|
||||
| replan | session, feedback | replan | replanning-execution |
|
||||
| **Execution** |
|
||||
| lite-execute | plan, multi-cli-plan | code | (multiple) |
|
||||
| ~~lite-execute~~ | _(internal phase of lite-plan/multi-cli-plan, not standalone)_ | code | — |
|
||||
| execute | detailed-plan, verified-plan, replan, tdd-tasks | code | (multiple) |
|
||||
| **Testing** |
|
||||
| test-fix-gen | failing-tests, session | test-tasks | test-validation |
|
||||
@@ -563,9 +560,9 @@ Each command has input/output ports for pipeline composition:
|
||||
|
||||
| Unit Name | Commands | Purpose |
|
||||
|-----------|----------|---------|
|
||||
| **quick-implementation** | lite-plan → lite-execute | Lightweight plan and execution |
|
||||
| **multi-cli-planning** | multi-cli-plan → lite-execute | Multi-perspective planning and execution |
|
||||
| **bug-fix** | lite-plan --bugfix → lite-execute | Bug diagnosis and fix |
|
||||
| **quick-implementation** | lite-plan (Phase 1: plan → Phase 2: execute) | Lightweight plan and execution |
|
||||
| **multi-cli-planning** | multi-cli-plan (Phase 1: plan → Phase 2: execute) | Multi-perspective planning and execution |
|
||||
| **bug-fix** | lite-plan --bugfix (Phase 1: plan → Phase 2: execute) | Bug diagnosis and fix |
|
||||
| **full-planning-execution** | plan → execute | Detailed planning and execution |
|
||||
| **verified-planning-execution** | plan → plan-verify → execute | Planning with verification |
|
||||
| **replanning-execution** | replan → execute | Update plan and execute |
|
||||
@@ -656,9 +653,9 @@ async function generateTemplate(design, steps, outputPath) {
|
||||
→ Level: 3 (Standard)
|
||||
→ Steps: Customize
|
||||
→ Step 1: /brainstorm (standalone, mainprocess)
|
||||
→ Step 2: /workflow:plan (verified-planning-execution, mainprocess)
|
||||
→ Step 3: /workflow:plan-verify (verified-planning-execution, mainprocess)
|
||||
→ Step 4: /workflow:execute (verified-planning-execution, async)
|
||||
→ Step 2: /workflow-plan (verified-planning-execution, mainprocess)
|
||||
→ Step 3: /workflow-plan-verify (verified-planning-execution, mainprocess)
|
||||
→ Step 4: /workflow-execute (verified-planning-execution, async)
|
||||
→ Step 5: /workflow:review-session-cycle (code-review, mainprocess)
|
||||
→ Step 6: /workflow:review-cycle-fix (code-review, mainprocess)
|
||||
→ Done
|
||||
|
||||
287
.claude/commands/idaw/add.md
Normal file
@@ -0,0 +1,287 @@
---
name: add
description: Add IDAW tasks - manual creation or import from ccw issue
argument-hint: "[-y|--yes] [--from-issue <id>[,<id>,...]] \"description\" [--type <task_type>] [--priority <1-5>]"
allowed-tools: AskUserQuestion(*), Read(*), Bash(*), Write(*), Glob(*)
---

# IDAW Add Command (/idaw:add)

## Auto Mode

When `--yes` or `-y`: Skip clarification questions, create task with inferred details.

## IDAW Task Schema

```json
{
  "id": "IDAW-001",
  "title": "string",
  "description": "string",
  "status": "pending",
  "priority": 2,
  "task_type": null,
  "skill_chain": null,
  "context": {
    "affected_files": [],
    "acceptance_criteria": [],
    "constraints": [],
    "references": []
  },
  "source": {
    "type": "manual|import-issue",
    "issue_id": null,
    "issue_snapshot": null
  },
  "execution": {
    "session_id": null,
    "started_at": null,
    "completed_at": null,
    "skill_results": [],
    "git_commit": null,
    "error": null
  },
  "created_at": "ISO",
  "updated_at": "ISO"
}
```

**Valid task_type values**: `bugfix|bugfix-hotfix|feature|feature-complex|refactor|tdd|test|test-fix|review|docs`

## Implementation

### Phase 1: Parse Arguments

```javascript
const args = $ARGUMENTS;
const autoYes = /(-y|--yes)\b/.test(args);
const fromIssue = args.match(/--from-issue\s+([\w,-]+)/)?.[1];
const typeFlag = args.match(/--type\s+([\w-]+)/)?.[1];
const priorityFlag = args.match(/--priority\s+(\d)/)?.[1];

// Extract description: content inside quotes (preferred), or fallback to stripping flags
// `let`, not `const`: Phase 3B may fill this in interactively
const quotedMatch = args.match(/(?:^|\s)["']([^"']+)["']/);
let description = quotedMatch
  ? quotedMatch[1].trim()
  : args.replace(/(-y|--yes|--from-issue\s+[\w,-]+|--type\s+[\w-]+|--priority\s+\d)/g, '').trim();
```

### Phase 2: Route — Import or Manual

```
--from-issue present?
├─ YES → Import Mode (Phase 3A)
└─ NO → Manual Mode (Phase 3B)
```

### Phase 3A: Import Mode (from ccw issue)

```javascript
const issueIds = fromIssue.split(',');

// Fetch all issues once (outside loop)
let issues = [];
try {
  const issueJson = Bash(`ccw issue list --json`);
  issues = JSON.parse(issueJson).issues || [];
} catch (e) {
  console.log(`Error fetching CCW issues: ${e.message || e}`);
  console.log('Ensure ccw is installed and issues exist. Use /issue:new to create issues first.');
  return;
}

for (const issueId of issueIds) {
  // 1. Find issue data
  const issue = issues.find(i => i.id === issueId.trim());
  if (!issue) {
    console.log(`Warning: Issue ${issueId} not found, skipping`);
    continue;
  }

  // 2. Check duplicate (same issue_id already imported)
  // A `continue` inside the inner loop would only advance the file scan,
  // so track the duplicate with a flag and skip the issue afterwards.
  const existing = Glob('.workflow/.idaw/tasks/IDAW-*.json') || [];
  let alreadyImported = false;
  for (const f of existing) {
    const data = JSON.parse(Read(f));
    if (data.source?.issue_id === issueId.trim()) {
      console.log(`Warning: Issue ${issueId} already imported as ${data.id}, skipping`);
      alreadyImported = true;
      break;
    }
  }
  if (alreadyImported) continue; // skip to next issue

  // 3. Generate next IDAW ID
  const nextId = generateNextId();

  // 4. Map issue → IDAW task
  const task = {
    id: nextId,
    title: issue.title,
    description: issue.context || issue.title,
    status: 'pending',
    priority: parseInt(priorityFlag) || issue.priority || 3,
    task_type: typeFlag || inferTaskType(issue.title, issue.context || ''),
    skill_chain: null,
    context: {
      affected_files: issue.affected_components || [],
      acceptance_criteria: [],
      constraints: [],
      references: issue.source_url ? [issue.source_url] : []
    },
    source: {
      type: 'import-issue',
      issue_id: issue.id,
      issue_snapshot: {
        id: issue.id,
        title: issue.title,
        status: issue.status,
        context: issue.context,
        priority: issue.priority,
        created_at: issue.created_at
      }
    },
    execution: {
      session_id: null,
      started_at: null,
      completed_at: null,
      skill_results: [],
      git_commit: null,
      error: null
    },
    created_at: new Date().toISOString(),
    updated_at: new Date().toISOString()
  };

  // 5. Write task file
  Bash('mkdir -p .workflow/.idaw/tasks');
  Write(`.workflow/.idaw/tasks/${nextId}.json`, JSON.stringify(task, null, 2));
  console.log(`Created ${nextId} from issue ${issueId}: ${issue.title}`);
}
```

### Phase 3B: Manual Mode

```javascript
// 1. Validate description
if (!description && !autoYes) {
  const answer = AskUserQuestion({
    questions: [{
      question: 'Please provide a task description:',
      header: 'Task',
      multiSelect: false,
      options: [
        { label: 'Provide description', description: 'What needs to be done?' }
      ]
    }]
  });
  // Use custom text from "Other"
  description = answer.customText || '';
}

if (!description) {
  console.log('Error: No description provided. Usage: /idaw:add "task description"');
  return;
}

// 2. Generate next IDAW ID
const nextId = generateNextId();

// 3. Build title from first sentence
const title = description.split(/[.\n]/)[0].substring(0, 80).trim();

// 4. Determine task_type
const taskType = typeFlag || null; // null → inferred at run time

// 5. Create task
const task = {
  id: nextId,
  title: title,
  description: description,
  status: 'pending',
  priority: parseInt(priorityFlag) || 3,
  task_type: taskType,
  skill_chain: null,
  context: {
    affected_files: [],
    acceptance_criteria: [],
    constraints: [],
    references: []
  },
  source: {
    type: 'manual',
    issue_id: null,
    issue_snapshot: null
  },
  execution: {
    session_id: null,
    started_at: null,
    completed_at: null,
    skill_results: [],
    git_commit: null,
    error: null
  },
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString()
};

Bash('mkdir -p .workflow/.idaw/tasks');
Write(`.workflow/.idaw/tasks/${nextId}.json`, JSON.stringify(task, null, 2));
console.log(`Created ${nextId}: ${title}`);
```

## Helper Functions

### ID Generation

```javascript
function generateNextId() {
  const files = Glob('.workflow/.idaw/tasks/IDAW-*.json') || [];
  if (files.length === 0) return 'IDAW-001';

  const maxNum = files
    .map(f => parseInt(f.match(/IDAW-(\d+)/)?.[1] || '0'))
    .reduce((max, n) => Math.max(max, n), 0);

  return `IDAW-${String(maxNum + 1).padStart(3, '0')}`;
}
```

### Task Type Inference (deferred — used at run time if task_type is null)

```javascript
function inferTaskType(title, description) {
  const text = `${title} ${description}`.toLowerCase();
  if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
  if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
  if (/tdd|test-driven|test first/.test(text)) return 'tdd';
  if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
  if (/generate test|写测试|add test/.test(text)) return 'test';
  if (/review|code review/.test(text)) return 'review';
  if (/docs|documentation|readme/.test(text)) return 'docs';
  if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
  if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
  return 'feature';
}
```

## Examples

```bash
# Manual creation
/idaw:add "Fix login timeout bug" --type bugfix --priority 2
/idaw:add "Add rate limiting to API endpoints" --priority 1
/idaw:add "Refactor auth module to use strategy pattern"

# Import from ccw issue
/idaw:add --from-issue ISS-20260128-001
/idaw:add --from-issue ISS-20260128-001,ISS-20260128-002 --priority 1

# Auto mode (skip clarification)
/idaw:add -y "Quick fix for typo in header"
```

## Output

```
Created IDAW-001: Fix login timeout bug
Type: bugfix | Priority: 2 | Source: manual
→ Next: /idaw:run or /idaw:status
```
442
.claude/commands/idaw/resume.md
Normal file
@@ -0,0 +1,442 @@
---
name: resume
description: Resume interrupted IDAW session from last checkpoint
argument-hint: "[-y|--yes] [session-id]"
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Write(*), Bash(*), Glob(*)
---

# IDAW Resume Command (/idaw:resume)

## Auto Mode

When `--yes` or `-y`: Auto-skip interrupted task, continue with remaining.

## Skill Chain Mapping

```javascript
const SKILL_CHAIN_MAP = {
  'bugfix': ['workflow-lite-planex', 'workflow-test-fix'],
  'bugfix-hotfix': ['workflow-lite-planex'],
  'feature': ['workflow-lite-planex', 'workflow-test-fix'],
  'feature-complex': ['workflow-plan', 'workflow-execute', 'workflow-test-fix'],
  'refactor': ['workflow:refactor-cycle'],
  'tdd': ['workflow-tdd-plan', 'workflow-execute'],
  'test': ['workflow-test-fix'],
  'test-fix': ['workflow-test-fix'],
  'review': ['review-cycle'],
  'docs': ['workflow-lite-planex']
};
```

## Task Type Inference

```javascript
function inferTaskType(title, description) {
  const text = `${title} ${description}`.toLowerCase();
  if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
  if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
  if (/tdd|test-driven|test first/.test(text)) return 'tdd';
  if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
  if (/generate test|写测试|add test/.test(text)) return 'test';
  if (/review|code review/.test(text)) return 'review';
  if (/docs|documentation|readme/.test(text)) return 'docs';
  if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
  if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
  return 'feature';
}
```

## Implementation

### Phase 1: Find Resumable Session

```javascript
const args = $ARGUMENTS;
const autoYes = /(-y|--yes)/.test(args);
const targetSessionId = args.replace(/(-y|--yes)/g, '').trim();

let session = null;
let sessionDir = null;

if (targetSessionId) {
  // Load specific session
  sessionDir = `.workflow/.idaw/sessions/${targetSessionId}`;
  try {
    session = JSON.parse(Read(`${sessionDir}/session.json`));
  } catch {
    console.log(`Session "${targetSessionId}" not found.`);
    console.log('Use /idaw:status to list sessions, or /idaw:run to start a new one.');
    return;
  }
} else {
  // Find most recent running session
  const sessionFiles = Glob('.workflow/.idaw/sessions/IDA-*/session.json') || [];

  for (const f of sessionFiles) {
    try {
      const s = JSON.parse(Read(f));
      if (s.status === 'running') {
        session = s;
        sessionDir = f.replace(/\/session\.json$/, '').replace(/\\session\.json$/, '');
        break;
      }
    } catch {
      // Skip malformed
    }
  }

  if (!session) {
    console.log('No running sessions found to resume.');
    console.log('Use /idaw:run to start a new execution.');
    return;
  }
}

console.log(`Resuming session: ${session.session_id}`);
```

### Phase 2: Handle Interrupted Task

```javascript
// Find the task that was in_progress when interrupted
let currentTaskId = session.current_task;
let currentTask = null;

if (currentTaskId) {
  try {
    currentTask = JSON.parse(Read(`.workflow/.idaw/tasks/${currentTaskId}.json`));
  } catch {
    console.log(`Warning: Could not read task ${currentTaskId}`);
    currentTaskId = null;
  }
}

if (currentTask && currentTask.status === 'in_progress') {
  if (autoYes) {
    // Auto: skip interrupted task
    currentTask.status = 'skipped';
    currentTask.execution.error = 'Skipped on resume (auto mode)';
    currentTask.execution.completed_at = new Date().toISOString();
    currentTask.updated_at = new Date().toISOString();
    Write(`.workflow/.idaw/tasks/${currentTaskId}.json`, JSON.stringify(currentTask, null, 2));
    session.skipped.push(currentTaskId);
    console.log(`Skipped interrupted task: ${currentTaskId}`);
  } else {
    const answer = AskUserQuestion({
      questions: [{
        question: `Task ${currentTaskId} was interrupted: "${currentTask.title}". How to proceed?`,
        header: 'Resume',
        multiSelect: false,
        options: [
          { label: 'Retry', description: 'Reset to pending, re-execute from beginning' },
          { label: 'Skip', description: 'Mark as skipped, move to next task' }
        ]
      }]
    });

    if (answer.answers?.Resume === 'Skip') {
      currentTask.status = 'skipped';
      currentTask.execution.error = 'Skipped on resume (user choice)';
      currentTask.execution.completed_at = new Date().toISOString();
      currentTask.updated_at = new Date().toISOString();
      Write(`.workflow/.idaw/tasks/${currentTaskId}.json`, JSON.stringify(currentTask, null, 2));
      session.skipped.push(currentTaskId);
    } else {
      // Retry: reset to pending
      currentTask.status = 'pending';
      currentTask.execution.started_at = null;
      currentTask.execution.completed_at = null;
      currentTask.execution.skill_results = [];
      currentTask.execution.error = null;
      currentTask.updated_at = new Date().toISOString();
      Write(`.workflow/.idaw/tasks/${currentTaskId}.json`, JSON.stringify(currentTask, null, 2));
    }
  }
}
```

### Phase 3: Build Remaining Task Queue

```javascript
// Collect remaining tasks (pending, or the retried current task)
const allTaskIds = session.tasks;
const completedSet = new Set([...session.completed, ...session.failed, ...session.skipped]);

const remainingTasks = [];
for (const taskId of allTaskIds) {
  if (completedSet.has(taskId)) continue;
  try {
    const task = JSON.parse(Read(`.workflow/.idaw/tasks/${taskId}.json`));
    if (task.status === 'pending') {
      remainingTasks.push(task);
    }
  } catch {
    console.log(`Warning: Could not read task ${taskId}, skipping`);
  }
}

if (remainingTasks.length === 0) {
  console.log('No remaining tasks to execute. Session complete.');
  session.status = 'completed';
  session.current_task = null;
  session.updated_at = new Date().toISOString();
  Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
  return;
}

// Sort: priority ASC, then ID ASC
remainingTasks.sort((a, b) => {
  if (a.priority !== b.priority) return a.priority - b.priority;
  return a.id.localeCompare(b.id);
});

console.log(`Remaining tasks: ${remainingTasks.length}`);

// Append resume marker to progress.md
const progressFile = `${sessionDir}/progress.md`;
try {
  const currentProgress = Read(progressFile);
  Write(progressFile, currentProgress + `\n---\n**Resumed**: ${new Date().toISOString()}\n\n`);
} catch {
  Write(progressFile, `# IDAW Progress — ${session.session_id}\n\n---\n**Resumed**: ${new Date().toISOString()}\n\n`);
}

// Update TodoWrite
TodoWrite({
  todos: remainingTasks.map((t, i) => ({
    content: `IDAW:[${i + 1}/${remainingTasks.length}] ${t.title}`,
    status: i === 0 ? 'in_progress' : 'pending',
    activeForm: `Executing ${t.title}`
  }))
});
```

### Phase 4-6: Execute Remaining (reuse run.md main loop)
|
||||
|
||||
Execute remaining tasks using the same Phase 4-6 logic from `/idaw:run`:
|
||||
|
||||
```javascript
// Phase 4: Main Loop — identical to run.md Phase 4
for (let taskIdx = 0; taskIdx < remainingTasks.length; taskIdx++) {
  const task = remainingTasks[taskIdx];

  // Resolve skill chain
  const resolvedType = task.task_type || inferTaskType(task.title, task.description);
  const chain = task.skill_chain || SKILL_CHAIN_MAP[resolvedType] || SKILL_CHAIN_MAP['feature'];

  // Update task → in_progress
  task.status = 'in_progress';
  task.task_type = resolvedType;
  task.execution.started_at = new Date().toISOString();
  Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));

  session.current_task = task.id;
  session.updated_at = new Date().toISOString();
  Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

  console.log(`\n--- [${taskIdx + 1}/${remainingTasks.length}] ${task.id}: ${task.title} ---`);
  console.log(`Chain: ${chain.join(' → ')}`);

  // ━━━ Pre-Task CLI Context Analysis (for complex/bugfix tasks) ━━━
  if (['bugfix', 'bugfix-hotfix', 'feature-complex'].includes(resolvedType)) {
    console.log(`  Pre-analysis: gathering context for ${resolvedType} task...`);
    const affectedFiles = (task.context?.affected_files || []).join(', ');
    const preAnalysisPrompt = `PURPOSE: Pre-analyze codebase context for IDAW task before execution.
TASK: • Understand current state of: ${affectedFiles || 'files related to: ' + task.title} • Identify dependencies and risk areas • Note existing patterns to follow
MODE: analysis
CONTEXT: @**/*
EXPECTED: Brief context summary (affected modules, dependencies, risk areas) in 3-5 bullet points
CONSTRAINTS: Keep concise | Focus on execution-relevant context`;
    const preAnalysis = Bash(`ccw cli -p '${preAnalysisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis 2>&1 || echo "Pre-analysis skipped"`);
    task.execution.skill_results.push({
      skill: 'cli-pre-analysis',
      status: 'completed',
      context_summary: preAnalysis?.substring(0, 500),
      timestamp: new Date().toISOString()
    });
  }

  // Execute skill chain
  let previousResult = null;
  let taskFailed = false;

  for (let skillIdx = 0; skillIdx < chain.length; skillIdx++) {
    const skillName = chain[skillIdx];
    const skillArgs = assembleSkillArgs(skillName, task, previousResult, autoYes, skillIdx === 0);

    console.log(`  [${skillIdx + 1}/${chain.length}] ${skillName}`);

    try {
      const result = Skill({ skill: skillName, args: skillArgs });
      previousResult = result;
      task.execution.skill_results.push({
        skill: skillName,
        status: 'completed',
        timestamp: new Date().toISOString()
      });
    } catch (error) {
      // ━━━ CLI-Assisted Error Recovery ━━━
      console.log(`  Diagnosing failure: ${skillName}...`);
      const diagnosisPrompt = `PURPOSE: Diagnose why skill "${skillName}" failed during IDAW task execution.
TASK: • Analyze error: ${String(error).substring(0, 300)} • Check affected files: ${(task.context?.affected_files || []).join(', ') || 'unknown'} • Identify root cause • Suggest fix strategy
MODE: analysis
CONTEXT: @**/* | Memory: IDAW task ${task.id}: ${task.title}
EXPECTED: Root cause + actionable fix recommendation (1-2 sentences)
CONSTRAINTS: Focus on actionable diagnosis`;
      const diagnosisResult = Bash(`ccw cli -p '${diagnosisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause 2>&1 || echo "CLI diagnosis unavailable"`);

      task.execution.skill_results.push({
        skill: `cli-diagnosis:${skillName}`,
        status: 'completed',
        diagnosis: diagnosisResult?.substring(0, 500),
        timestamp: new Date().toISOString()
      });

      // Retry with diagnosis context
      console.log(`  Retry with diagnosis: ${skillName}`);
      try {
        const retryResult = Skill({ skill: skillName, args: skillArgs });
        previousResult = retryResult;
        task.execution.skill_results.push({
          skill: skillName,
          status: 'completed-retry-with-diagnosis',
          timestamp: new Date().toISOString()
        });
      } catch (retryError) {
        task.execution.skill_results.push({
          skill: skillName,
          status: 'failed',
          error: String(retryError).substring(0, 200),
          timestamp: new Date().toISOString()
        });

        if (autoYes) {
          taskFailed = true;
          break;
        }

        const answer = AskUserQuestion({
          questions: [{
            question: `${skillName} failed after CLI diagnosis + retry: ${String(retryError).substring(0, 100)}`,
            header: 'Error',
            multiSelect: false,
            options: [
              { label: 'Skip task', description: 'Mark as failed, continue' },
              { label: 'Abort', description: 'Stop run' }
            ]
          }]
        });

        if (answer.answers?.Error === 'Abort') {
          task.status = 'failed';
          task.execution.error = String(retryError).substring(0, 200);
          Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));
          session.failed.push(task.id);
          session.status = 'failed';
          session.updated_at = new Date().toISOString();
          Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
          return;
        }
        taskFailed = true;
        break;
      }
    }
  }

  // Phase 5: Checkpoint
  if (taskFailed) {
    task.status = 'failed';
    task.execution.error = 'Skill chain failed after retry';
    task.execution.completed_at = new Date().toISOString();
    session.failed.push(task.id);
  } else {
    // Git commit
    const commitMsg = `feat(idaw): ${task.title} [${task.id}]`;
    const diffCheck = Bash('git diff --stat HEAD 2>/dev/null || echo ""');
    const untrackedCheck = Bash('git ls-files --others --exclude-standard 2>/dev/null || echo ""');

    if (diffCheck?.trim() || untrackedCheck?.trim()) {
      Bash('git add -A');
      Bash(`git commit -m "$(cat <<'EOF'\n${commitMsg}\nEOF\n)"`);
      const commitHash = Bash('git rev-parse --short HEAD 2>/dev/null')?.trim();
      task.execution.git_commit = commitHash;
    } else {
      task.execution.git_commit = 'no-commit';
    }

    task.status = 'completed';
    task.execution.completed_at = new Date().toISOString();
    session.completed.push(task.id);
  }

  task.updated_at = new Date().toISOString();
  Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));
  session.updated_at = new Date().toISOString();
  Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

  // Append progress
  const chainStr = chain.join(' → ');
  const progressEntry = `## ${task.id} — ${task.title}\n- Status: ${task.status}\n- Chain: ${chainStr}\n- Commit: ${task.execution.git_commit || '-'}\n\n`;
  const currentProgress = Read(`${sessionDir}/progress.md`);
  Write(`${sessionDir}/progress.md`, currentProgress + progressEntry);
}

// Phase 6: Report
session.status = session.failed.length > 0 && session.completed.length === 0 ? 'failed' : 'completed';
session.current_task = null;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

const summary = `\n---\n## Summary (Resumed)\n- Completed: ${session.completed.length}\n- Failed: ${session.failed.length}\n- Skipped: ${session.skipped.length}\n`;
const finalProgress = Read(`${sessionDir}/progress.md`);
Write(`${sessionDir}/progress.md`, finalProgress + summary);

console.log('\n=== IDAW Resume Complete ===');
console.log(`Session: ${session.session_id}`);
console.log(`Completed: ${session.completed.length} | Failed: ${session.failed.length} | Skipped: ${session.skipped.length}`);
```

## Helper Functions

### assembleSkillArgs

```javascript
function assembleSkillArgs(skillName, task, previousResult, autoYes, isFirst) {
  let args = '';

  if (isFirst) {
    // Sanitize for shell safety
    const goal = `${task.title}\n${task.description}`
      .replace(/\\/g, '\\\\')
      .replace(/"/g, '\\"')
      .replace(/\$/g, '\\$')
      .replace(/`/g, '\\`');
    args = `"${goal}"`;
    if (task.task_type === 'bugfix-hotfix') args += ' --hotfix';
  } else if (previousResult?.session_id) {
    args = `--session="${previousResult.session_id}"`;
  }

  if (autoYes && !args.includes('-y') && !args.includes('--yes')) {
    args = args ? `${args} -y` : '-y';
  }

  return args;
}
```
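
For instance, the two branches above produce different argument shapes depending on the skill's position in the chain. A standalone sketch reusing the function as defined, with a hypothetical task object:

```javascript
function assembleSkillArgs(skillName, task, previousResult, autoYes, isFirst) {
  let args = '';
  if (isFirst) {
    // Sanitize for shell safety
    const goal = `${task.title}\n${task.description}`
      .replace(/\\/g, '\\\\')
      .replace(/"/g, '\\"')
      .replace(/\$/g, '\\$')
      .replace(/`/g, '\\`');
    args = `"${goal}"`;
    if (task.task_type === 'bugfix-hotfix') args += ' --hotfix';
  } else if (previousResult?.session_id) {
    args = `--session="${previousResult.session_id}"`;
  }
  if (autoYes && !args.includes('-y') && !args.includes('--yes')) {
    args = args ? `${args} -y` : '-y';
  }
  return args;
}

// Hypothetical task for illustration
const task = { title: 'Fix login timeout', description: 'Session expires early', task_type: 'bugfix' };

// The first skill in a chain receives the sanitized goal text plus -y:
console.log(assembleSkillArgs('workflow-lite-planex', task, null, true, true));

// Later skills receive the previous skill's session instead:
console.log(assembleSkillArgs('workflow-test-fix', task, { session_id: 'WFS-123' }, true, false));
// → --session="WFS-123" -y
```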

## Examples

```bash
# Resume most recent running session (interactive)
/idaw:resume

# Resume specific session
/idaw:resume IDA-auth-fix-20260301

# Resume with auto mode (skip interrupted, continue)
/idaw:resume -y

# Resume specific session with auto mode
/idaw:resume -y IDA-auth-fix-20260301
```

<!-- New file: .claude/commands/idaw/run-coordinate.md (648 lines) -->

---
name: run-coordinate
description: IDAW coordinator - execute task skill chains via external CLI with hook callbacks and git checkpoints
argument-hint: "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run] [--tool <tool>]"
allowed-tools: Task(*), AskUserQuestion(*), Read(*), Write(*), Bash(*), Glob(*), Grep(*)
---

# IDAW Run Coordinate Command (/idaw:run-coordinate)

Coordinator variant of `/idaw:run`: external CLI execution with background tasks and hook callbacks.

**Execution Model**: `ccw cli -p "..." --tool <tool> --mode write` runs in the background → a hook callback fires on completion → the coordinator launches the next step.

**vs `/idaw:run`**: direct `Skill()` calls (blocking, in the main process) vs `ccw cli` (non-blocking, in an external process).

## When to Use

| Scenario | Use |
|----------|-----|
| Standard IDAW execution (main process) | `/idaw:run` |
| External CLI execution (background, hook-driven) | `/idaw:run-coordinate` |
| Need `claude` or `gemini` as execution tool | `/idaw:run-coordinate --tool claude` |
| Long-running tasks, avoid context window pressure | `/idaw:run-coordinate` |

## Skill Chain Mapping

```javascript
const SKILL_CHAIN_MAP = {
  'bugfix': ['workflow-lite-planex', 'workflow-test-fix'],
  'bugfix-hotfix': ['workflow-lite-planex'],
  'feature': ['workflow-lite-planex', 'workflow-test-fix'],
  'feature-complex': ['workflow-plan', 'workflow-execute', 'workflow-test-fix'],
  'refactor': ['workflow:refactor-cycle'],
  'tdd': ['workflow-tdd-plan', 'workflow-execute'],
  'test': ['workflow-test-fix'],
  'test-fix': ['workflow-test-fix'],
  'review': ['review-cycle'],
  'docs': ['workflow-lite-planex']
};
```

## Task Type Inference

```javascript
function inferTaskType(title, description) {
  const text = `${title} ${description}`.toLowerCase();
  if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
  if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
  if (/tdd|test-driven|test first/.test(text)) return 'tdd';
  if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
  if (/generate test|写测试|add test/.test(text)) return 'test';
  if (/review|code review/.test(text)) return 'review';
  if (/docs|documentation|readme/.test(text)) return 'docs';
  if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
  if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
  return 'feature';
}
```
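
For example, these heuristics resolve typical titles as follows. Branch order matters: an urgent production bug hits the hotfix branch before the generic bugfix branch. A standalone sketch reusing the function as defined, with hypothetical titles:

```javascript
function inferTaskType(title, description) {
  const text = `${title} ${description}`.toLowerCase();
  if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
  if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
  if (/tdd|test-driven|test first/.test(text)) return 'tdd';
  if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
  if (/generate test|写测试|add test/.test(text)) return 'test';
  if (/review|code review/.test(text)) return 'review';
  if (/docs|documentation|readme/.test(text)) return 'docs';
  if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
  if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
  return 'feature';
}

console.log(inferTaskType('Fix urgent production login bug', '')); // bugfix-hotfix
console.log(inferTaskType('Refactor session store', ''));          // refactor
console.log(inferTaskType('Add dark mode toggle', ''));            // feature (no keyword matched)
```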

## 6-Phase Execution (Coordinator Model)

### Phase 1: Load Tasks

```javascript
const args = $ARGUMENTS;
const autoYes = /(-y|--yes)/.test(args);
const dryRun = /--dry-run/.test(args);
const taskFilter = args.match(/--task\s+([\w,-]+)/)?.[1]?.split(',') || null;
const cliTool = args.match(/--tool\s+(\w+)/)?.[1] || 'claude';

// Load task files
const taskFiles = Glob('.workflow/.idaw/tasks/IDAW-*.json') || [];

if (taskFiles.length === 0) {
  console.log('No IDAW tasks found. Use /idaw:add to create tasks.');
  return;
}

// Parse and filter
let tasks = taskFiles.map(f => JSON.parse(Read(f)));

if (taskFilter) {
  tasks = tasks.filter(t => taskFilter.includes(t.id));
} else {
  tasks = tasks.filter(t => t.status === 'pending');
}

if (tasks.length === 0) {
  console.log('No pending tasks to execute. Use /idaw:add to add tasks or --task to specify IDs.');
  return;
}

// Sort: priority ASC (1=critical first), then ID ASC
tasks.sort((a, b) => {
  if (a.priority !== b.priority) return a.priority - b.priority;
  return a.id.localeCompare(b.id);
});
```

### Phase 2: Session Setup

```javascript
// Generate session ID: IDA-{slug}-YYYYMMDD
const slug = tasks[0].title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')
  .substring(0, 20)
  .replace(/-$/, '');
const dateStr = new Date().toISOString().slice(0, 10).replace(/-/g, '');
let sessionId = `IDA-${slug}-${dateStr}`;

// Check collision
const existingSession = Glob(`.workflow/.idaw/sessions/${sessionId}/session.json`);
if (existingSession?.length > 0) {
  sessionId = `${sessionId}-2`;
}

const sessionDir = `.workflow/.idaw/sessions/${sessionId}`;
Bash(`mkdir -p "${sessionDir}"`);

const session = {
  session_id: sessionId,
  mode: 'coordinate', // ★ Marks this as a coordinator-mode session
  cli_tool: cliTool,
  status: 'running',
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString(),
  tasks: tasks.map(t => t.id),
  current_task: null,
  current_skill_index: 0,
  completed: [],
  failed: [],
  skipped: [],
  prompts_used: []
};

Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

// Initialize progress.md
const progressHeader = `# IDAW Progress — ${sessionId} (coordinate mode)\nStarted: ${session.created_at}\nCLI Tool: ${cliTool}\n\n`;
Write(`${sessionDir}/progress.md`, progressHeader);
```
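
For example, the slug rules above turn a task title into a deterministic session ID. A standalone sketch with a fixed date and a hypothetical title:

```javascript
const title = 'Fix Login Timeout';
const slug = title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')   // runs of non-alphanumerics → single dash
  .substring(0, 20)              // cap slug length at 20 chars
  .replace(/-$/, '');            // drop a trailing dash left by truncation
const dateStr = new Date('2026-03-01').toISOString().slice(0, 10).replace(/-/g, '');
console.log(`IDA-${slug}-${dateStr}`);
// IDA-fix-login-timeout-20260301
```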

### Phase 3: Startup Protocol

```javascript
// Check for existing running sessions
const runningSessions = Glob('.workflow/.idaw/sessions/IDA-*/session.json')
  ?.map(f => { try { return JSON.parse(Read(f)); } catch { return null; } })
  .filter(s => s && s.status === 'running' && s.session_id !== sessionId) || [];

if (runningSessions.length > 0 && !autoYes) {
  const answer = AskUserQuestion({
    questions: [{
      question: `Found running session: ${runningSessions[0].session_id}. How to proceed?`,
      header: 'Conflict',
      multiSelect: false,
      options: [
        { label: 'Resume existing', description: 'Use /idaw:resume instead' },
        { label: 'Start fresh', description: 'Continue with new session' },
        { label: 'Abort', description: 'Cancel this run' }
      ]
    }]
  });
  if (answer.answers?.Conflict === 'Resume existing') {
    console.log(`Use: /idaw:resume ${runningSessions[0].session_id}`);
    return;
  }
  if (answer.answers?.Conflict === 'Abort') return;
}

// Check git status
const gitStatus = Bash('git status --porcelain 2>/dev/null');
if (gitStatus?.trim() && !autoYes) {
  const answer = AskUserQuestion({
    questions: [{
      question: 'Working tree has uncommitted changes. How to proceed?',
      header: 'Git',
      multiSelect: false,
      options: [
        { label: 'Continue', description: 'Proceed with dirty tree' },
        { label: 'Stash', description: 'git stash before running' },
        { label: 'Abort', description: 'Stop and handle manually' }
      ]
    }]
  });
  if (answer.answers?.Git === 'Stash') Bash('git stash push -m "idaw-pre-run"');
  if (answer.answers?.Git === 'Abort') return;
}

// Dry run
if (dryRun) {
  console.log(`# Dry Run — ${sessionId} (coordinate mode, tool: ${cliTool})\n`);
  for (const task of tasks) {
    const taskType = task.task_type || inferTaskType(task.title, task.description);
    const chain = task.skill_chain || SKILL_CHAIN_MAP[taskType] || SKILL_CHAIN_MAP['feature'];
    console.log(`## ${task.id}: ${task.title}`);
    console.log(`  Type: ${taskType} | Priority: ${task.priority}`);
    console.log(`  Chain: ${chain.join(' → ')}`);
    console.log(`  CLI: ccw cli --tool ${cliTool} --mode write\n`);
  }
  console.log(`Total: ${tasks.length} tasks`);
  return;
}
```

### Phase 4: Launch First Task (then wait for hook)

```javascript
// Start with the first task, first skill
const firstTask = tasks[0];
const resolvedType = firstTask.task_type || inferTaskType(firstTask.title, firstTask.description);
const chain = firstTask.skill_chain || SKILL_CHAIN_MAP[resolvedType] || SKILL_CHAIN_MAP['feature'];

// Update task → in_progress
firstTask.status = 'in_progress';
firstTask.task_type = resolvedType;
firstTask.execution.started_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${firstTask.id}.json`, JSON.stringify(firstTask, null, 2));

// Update session
session.current_task = firstTask.id;
session.current_skill_index = 0;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

// ━━━ Pre-Task CLI Context Analysis (for complex/bugfix tasks) ━━━
if (['bugfix', 'bugfix-hotfix', 'feature-complex'].includes(resolvedType)) {
  console.log(`Pre-analysis: gathering context for ${resolvedType} task...`);
  const affectedFiles = (firstTask.context?.affected_files || []).join(', ');
  const preAnalysisPrompt = `PURPOSE: Pre-analyze codebase context for IDAW task.
TASK: • Understand current state of: ${affectedFiles || 'files related to: ' + firstTask.title} • Identify dependencies and risk areas
MODE: analysis
CONTEXT: @**/*
EXPECTED: Brief context summary in 3-5 bullet points
CONSTRAINTS: Keep concise`;
  Bash(`ccw cli -p '${preAnalysisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis 2>&1 || echo "Pre-analysis skipped"`);
}

// Assemble prompt for first skill
const skillName = chain[0];
const prompt = assembleCliPrompt(skillName, firstTask, null, autoYes);

session.prompts_used.push({
  task_id: firstTask.id,
  skill_index: 0,
  skill: skillName,
  prompt: prompt,
  timestamp: new Date().toISOString()
});
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

// Launch via ccw cli in background
console.log(`[1/${tasks.length}] ${firstTask.id}: ${firstTask.title}`);
console.log(`  Chain: ${chain.join(' → ')}`);
console.log(`  Launching: ${skillName} via ccw cli --tool ${cliTool}`);

// Single quotes here: escapeForShell emits single-quote escapes ('\''),
// which only work inside a single-quoted shell argument
Bash(
  `ccw cli -p '${escapeForShell(prompt)}' --tool ${cliTool} --mode write`,
  { run_in_background: true }
);

// ★ STOP HERE — wait for hook callback
// Hook callback will trigger handleStepCompletion() below
```

### Phase 5: Hook Callback Handler (per-step completion)

```javascript
// Called by hook when background CLI completes
async function handleStepCompletion(sessionId, cliOutput) {
  const sessionDir = `.workflow/.idaw/sessions/${sessionId}`;
  const session = JSON.parse(Read(`${sessionDir}/session.json`));

  const taskId = session.current_task;
  const task = JSON.parse(Read(`.workflow/.idaw/tasks/${taskId}.json`));

  const resolvedType = task.task_type || inferTaskType(task.title, task.description);
  const chain = task.skill_chain || SKILL_CHAIN_MAP[resolvedType] || SKILL_CHAIN_MAP['feature'];
  const skillIdx = session.current_skill_index;
  const skillName = chain[skillIdx];

  // Parse CLI output for session ID
  const parsedOutput = parseCliOutput(cliOutput);

  // Record skill result
  task.execution.skill_results.push({
    skill: skillName,
    status: parsedOutput.success ? 'completed' : 'failed',
    session_id: parsedOutput.sessionId,
    timestamp: new Date().toISOString()
  });

  // ━━━ Handle failure with CLI diagnosis ━━━
  if (!parsedOutput.success) {
    console.log(`  ${skillName} failed. Running CLI diagnosis...`);
    const diagnosisPrompt = `PURPOSE: Diagnose why skill "${skillName}" failed during IDAW task.
TASK: • Analyze error output • Check affected files: ${(task.context?.affected_files || []).join(', ') || 'unknown'}
MODE: analysis
CONTEXT: @**/* | Memory: IDAW task ${task.id}: ${task.title}
EXPECTED: Root cause + fix recommendation
CONSTRAINTS: Actionable diagnosis`;
    Bash(`ccw cli -p '${diagnosisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause 2>&1 || true`);

    task.execution.skill_results.push({
      skill: `cli-diagnosis:${skillName}`,
      status: 'completed',
      timestamp: new Date().toISOString()
    });

    // Retry once
    console.log(`  Retrying: ${skillName}`);
    const retryPrompt = assembleCliPrompt(skillName, task, parsedOutput, true);
    session.prompts_used.push({
      task_id: taskId,
      skill_index: skillIdx,
      skill: `${skillName}-retry`,
      prompt: retryPrompt,
      timestamp: new Date().toISOString()
    });
    Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
    Write(`.workflow/.idaw/tasks/${taskId}.json`, JSON.stringify(task, null, 2));

    // Single quotes: escapeForShell targets single-quoted arguments
    Bash(
      `ccw cli -p '${escapeForShell(retryPrompt)}' --tool ${session.cli_tool} --mode write`,
      { run_in_background: true }
    );
    return; // Wait for retry hook
  }

  // ━━━ Skill succeeded — advance ━━━
  const nextSkillIdx = skillIdx + 1;

  if (nextSkillIdx < chain.length) {
    // More skills in this task's chain → launch next skill
    session.current_skill_index = nextSkillIdx;
    session.updated_at = new Date().toISOString();

    const nextSkill = chain[nextSkillIdx];
    const nextPrompt = assembleCliPrompt(nextSkill, task, parsedOutput, true);

    session.prompts_used.push({
      task_id: taskId,
      skill_index: nextSkillIdx,
      skill: nextSkill,
      prompt: nextPrompt,
      timestamp: new Date().toISOString()
    });
    Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
    Write(`.workflow/.idaw/tasks/${taskId}.json`, JSON.stringify(task, null, 2));

    console.log(`  Next skill: ${nextSkill}`);
    Bash(
      `ccw cli -p '${escapeForShell(nextPrompt)}' --tool ${session.cli_tool} --mode write`,
      { run_in_background: true }
    );
    return; // Wait for next hook
  }

  // ━━━ Task chain complete — git checkpoint ━━━
  const commitMsg = `feat(idaw): ${task.title} [${task.id}]`;
  const diffCheck = Bash('git diff --stat HEAD 2>/dev/null || echo ""');
  const untrackedCheck = Bash('git ls-files --others --exclude-standard 2>/dev/null || echo ""');

  if (diffCheck?.trim() || untrackedCheck?.trim()) {
    Bash('git add -A');
    Bash(`git commit -m "$(cat <<'EOF'\n${commitMsg}\nEOF\n)"`);
    const commitHash = Bash('git rev-parse --short HEAD 2>/dev/null')?.trim();
    task.execution.git_commit = commitHash;
  } else {
    task.execution.git_commit = 'no-commit';
  }

  task.status = 'completed';
  task.execution.completed_at = new Date().toISOString();
  task.updated_at = new Date().toISOString();
  Write(`.workflow/.idaw/tasks/${taskId}.json`, JSON.stringify(task, null, 2));

  session.completed.push(taskId);

  // Append progress
  const progressEntry = `## ${task.id} — ${task.title}\n` +
    `- Status: completed\n` +
    `- Type: ${task.task_type}\n` +
    `- Chain: ${chain.join(' → ')}\n` +
    `- Commit: ${task.execution.git_commit || '-'}\n` +
    `- Mode: coordinate (${session.cli_tool})\n\n`;
  const currentProgress = Read(`${sessionDir}/progress.md`);
  Write(`${sessionDir}/progress.md`, currentProgress + progressEntry);

  // ━━━ Advance to next task ━━━
  const allTaskIds = session.tasks;
  const completedSet = new Set([...session.completed, ...session.failed, ...session.skipped]);
  const nextTaskId = allTaskIds.find(id => !completedSet.has(id));

  if (nextTaskId) {
    // Load next task
    const nextTask = JSON.parse(Read(`.workflow/.idaw/tasks/${nextTaskId}.json`));
    const nextType = nextTask.task_type || inferTaskType(nextTask.title, nextTask.description);
    const nextChain = nextTask.skill_chain || SKILL_CHAIN_MAP[nextType] || SKILL_CHAIN_MAP['feature'];

    nextTask.status = 'in_progress';
    nextTask.task_type = nextType;
    nextTask.execution.started_at = new Date().toISOString();
    Write(`.workflow/.idaw/tasks/${nextTaskId}.json`, JSON.stringify(nextTask, null, 2));

    session.current_task = nextTaskId;
    session.current_skill_index = 0;
    session.updated_at = new Date().toISOString();
    Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

    // Pre-analysis for complex tasks
    if (['bugfix', 'bugfix-hotfix', 'feature-complex'].includes(nextType)) {
      const affectedFiles = (nextTask.context?.affected_files || []).join(', ');
      Bash(`ccw cli -p 'PURPOSE: Pre-analyze context for ${nextTask.title}. TASK: Check ${affectedFiles || "related files"}. MODE: analysis. EXPECTED: 3-5 bullet points.' --tool gemini --mode analysis 2>&1 || true`);
    }

    const nextSkillName = nextChain[0];
    const nextPrompt = assembleCliPrompt(nextSkillName, nextTask, null, true);

    session.prompts_used.push({
      task_id: nextTaskId,
      skill_index: 0,
      skill: nextSkillName,
      prompt: nextPrompt,
      timestamp: new Date().toISOString()
    });
    Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

    const taskNum = session.completed.length + 1;
    const totalTasks = session.tasks.length;
    console.log(`\n[${taskNum}/${totalTasks}] ${nextTaskId}: ${nextTask.title}`);
    console.log(`  Chain: ${nextChain.join(' → ')}`);

    Bash(
      `ccw cli -p '${escapeForShell(nextPrompt)}' --tool ${session.cli_tool} --mode write`,
      { run_in_background: true }
    );
    return; // Wait for hook
  }

  // ━━━ All tasks complete — Phase 6: Report ━━━
  session.status = session.failed.length > 0 && session.completed.length === 0 ? 'failed' : 'completed';
  session.current_task = null;
  session.updated_at = new Date().toISOString();
  Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

  const summary = `\n---\n## Summary (coordinate mode)\n` +
    `- CLI Tool: ${session.cli_tool}\n` +
    `- Completed: ${session.completed.length}\n` +
    `- Failed: ${session.failed.length}\n` +
    `- Skipped: ${session.skipped.length}\n` +
    `- Total: ${session.tasks.length}\n`;
  const finalProgress = Read(`${sessionDir}/progress.md`);
  Write(`${sessionDir}/progress.md`, finalProgress + summary);

  console.log('\n=== IDAW Coordinate Complete ===');
  console.log(`Session: ${sessionId}`);
  console.log(`Completed: ${session.completed.length}/${session.tasks.length}`);
  if (session.failed.length > 0) console.log(`Failed: ${session.failed.join(', ')}`);
}
```

## Helper Functions

### assembleCliPrompt

```javascript
function assembleCliPrompt(skillName, task, previousResult, autoYes) {
  let prompt = '';
  const yFlag = autoYes ? ' -y' : '';

  // Map skill to command invocation
  if (skillName === 'workflow-lite-planex') {
    const goal = sanitize(`${task.title}\n${task.description}`);
    prompt = `/workflow-lite-planex${yFlag} "${goal}"`;
    if (task.task_type === 'bugfix') prompt = `/workflow-lite-planex${yFlag} --bugfix "${goal}"`;
    if (task.task_type === 'bugfix-hotfix') prompt = `/workflow-lite-planex${yFlag} --hotfix "${goal}"`;

  } else if (skillName === 'workflow-plan') {
    prompt = `/workflow-plan${yFlag} "${sanitize(task.title)}"`;

  } else if (skillName === 'workflow-execute') {
    if (previousResult?.sessionId) {
      prompt = `/workflow-execute${yFlag} --resume-session="${previousResult.sessionId}"`;
    } else {
      prompt = `/workflow-execute${yFlag}`;
    }

  } else if (skillName === 'workflow-test-fix') {
    if (previousResult?.sessionId) {
      prompt = `/workflow-test-fix${yFlag} "${previousResult.sessionId}"`;
    } else {
      prompt = `/workflow-test-fix${yFlag} "${sanitize(task.title)}"`;
    }

  } else if (skillName === 'workflow-tdd-plan') {
    prompt = `/workflow-tdd-plan${yFlag} "${sanitize(task.title)}"`;

  } else if (skillName === 'workflow:refactor-cycle') {
    prompt = `/workflow:refactor-cycle${yFlag} "${sanitize(task.title)}"`;

  } else if (skillName === 'review-cycle') {
    if (previousResult?.sessionId) {
      prompt = `/review-cycle${yFlag} --session="${previousResult.sessionId}"`;
    } else {
      prompt = `/review-cycle${yFlag}`;
    }

  } else {
    // Generic fallback
    prompt = `/${skillName}${yFlag} "${sanitize(task.title)}"`;
  }

  // Append task context
  prompt += `\n\nTask: ${task.title}\nDescription: ${task.description}`;
  if (task.context?.affected_files?.length > 0) {
    prompt += `\nAffected files: ${task.context.affected_files.join(', ')}`;
  }
  if (task.context?.acceptance_criteria?.length > 0) {
    prompt += `\nAcceptance criteria: ${task.context.acceptance_criteria.join('; ')}`;
  }

  return prompt;
}
```

### sanitize & escapeForShell

```javascript
function sanitize(text) {
  return text
    .replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/\$/g, '\\$')
    .replace(/`/g, '\\`');
}

function escapeForShell(prompt) {
  return prompt.replace(/'/g, "'\\''");
}
```
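
The two helpers target different quoting contexts: `sanitize` prepares text for a double-quoted argument, while `escapeForShell` closes the surrounding single quote, emits an escaped quote, and reopens it. A standalone sketch with a hypothetical prompt:

```javascript
function escapeForShell(prompt) {
  return prompt.replace(/'/g, "'\\''");
}

// Each embedded single quote becomes '\'' so the whole prompt can sit
// inside one pair of single quotes on the command line.
const escaped = escapeForShell("Fix user's login");
console.log(`ccw cli -p '${escaped}' --mode write`);
// ccw cli -p 'Fix user'\''s login' --mode write
```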

### parseCliOutput

```javascript
function parseCliOutput(output) {
  // Extract session ID from CLI output (e.g., WFS-xxx, session-xxx).
  // Longest alternative first, so "Session ID: WFS-xxx" captures the ID rather than "ID".
  const sessionMatch = output.match(/(?:Session ID|session|WFS)[:\s]*([\w-]+)/i);
  const success = !/(?:error|failed|fatal)/i.test(output) || /completed|success/i.test(output);

  return {
    success,
    sessionId: sessionMatch?.[1] || null,
    raw: output?.substring(0, 500)
  };
}
```

## CLI-Assisted Analysis

Same as `/idaw:run` — integrated at two points:

### Pre-Task Context Analysis

For `bugfix`, `bugfix-hotfix`, and `feature-complex` tasks: auto-invoke `ccw cli --tool gemini --mode analysis` before launching the skill chain.

### Error Recovery with CLI Diagnosis

When a skill's CLI execution fails: invoke diagnosis → retry once → if it still fails, mark the task failed and advance.

```
Skill CLI fails → CLI diagnosis (gemini) → Retry CLI → Still fails → mark failed → next task
```

## State Flow

```
Phase 4: Launch first skill
  ↓
ccw cli --tool claude --mode write (background)
  ↓
★ STOP — wait for hook callback
  ↓
Phase 5: handleStepCompletion()
  ├─ Skill succeeded + more in chain → launch next skill → STOP
  ├─ Skill succeeded + chain complete → git checkpoint → next task → STOP
  ├─ Skill failed → CLI diagnosis → retry → STOP
  └─ All tasks done → Phase 6: Report
```

## Session State (session.json)

```json
{
  "session_id": "IDA-fix-login-20260301",
  "mode": "coordinate",
  "cli_tool": "claude",
  "status": "running|waiting|completed|failed",
  "created_at": "ISO",
  "updated_at": "ISO",
  "tasks": ["IDAW-001", "IDAW-002"],
  "current_task": "IDAW-001",
  "current_skill_index": 0,
  "completed": [],
  "failed": [],
  "skipped": [],
  "prompts_used": [
    {
      "task_id": "IDAW-001",
      "skill_index": 0,
      "skill": "workflow-lite-planex",
      "prompt": "/workflow-lite-planex -y \"Fix login timeout\"",
      "timestamp": "ISO"
    }
  ]
}
```

## Differences from /idaw:run

| Aspect | /idaw:run | /idaw:run-coordinate |
|--------|-----------|---------------------|
| Execution | `Skill()` blocking in main process | `ccw cli` background + hook callback |
| Context window | Shared (each skill uses main context) | Isolated (each CLI gets fresh context) |
| Concurrency | Sequential blocking | Sequential non-blocking (hook-driven) |
| State tracking | session.json + task.json | session.json + task.json + prompts_used |
| Tool selection | N/A (Skill native) | `--tool claude\|gemini\|qwen` |
| Resume | Via `/idaw:resume` (same) | Via `/idaw:resume` (same, detects mode) |
| Best for | Short chains, interactive | Long chains, autonomous, context-heavy |

## Examples

```bash
# Execute all pending tasks via claude CLI
/idaw:run-coordinate -y

# Use a specific CLI tool
/idaw:run-coordinate -y --tool gemini

# Execute specific tasks
/idaw:run-coordinate --task IDAW-001,IDAW-003 --tool claude

# Dry run (show plan without executing)
/idaw:run-coordinate --dry-run

# Interactive mode
/idaw:run-coordinate
```

.claude/commands/idaw/run.md (539 lines, new file)
@@ -0,0 +1,539 @@
---
name: run
description: IDAW orchestrator - execute task skill chains serially with git checkpoints
argument-hint: "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run]"
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Write(*), Bash(*), Glob(*)
---

# IDAW Run Command (/idaw:run)

## Auto Mode

When `--yes` or `-y`: Skip all confirmations, auto-skip on failure, proceed with dirty git.

## Skill Chain Mapping

```javascript
const SKILL_CHAIN_MAP = {
  'bugfix': ['workflow-lite-planex', 'workflow-test-fix'],
  'bugfix-hotfix': ['workflow-lite-planex'],
  'feature': ['workflow-lite-planex', 'workflow-test-fix'],
  'feature-complex': ['workflow-plan', 'workflow-execute', 'workflow-test-fix'],
  'refactor': ['workflow:refactor-cycle'],
  'tdd': ['workflow-tdd-plan', 'workflow-execute'],
  'test': ['workflow-test-fix'],
  'test-fix': ['workflow-test-fix'],
  'review': ['review-cycle'],
  'docs': ['workflow-lite-planex']
};
```

## Task Type Inference

```javascript
function inferTaskType(title, description) {
  const text = `${title} ${description}`.toLowerCase();
  if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
  if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
  if (/tdd|test-driven|test first/.test(text)) return 'tdd';
  if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
  if (/generate test|写测试|add test/.test(text)) return 'test';
  if (/review|code review/.test(text)) return 'review';
  if (/docs|documentation|readme/.test(text)) return 'docs';
  if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
  if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
  return 'feature';
}
```
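
A few sample titles run through the heuristic above (a standalone sketch; the titles are invented):

```javascript
// Same first-match-wins rule order as the command doc: hotfix is checked
// before the generic bugfix rule, and 'feature' is the fallback.
function inferTaskType(title, description) {
  const text = `${title} ${description}`.toLowerCase();
  if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
  if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
  if (/tdd|test-driven|test first/.test(text)) return 'tdd';
  if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
  if (/generate test|写测试|add test/.test(text)) return 'test';
  if (/review|code review/.test(text)) return 'review';
  if (/docs|documentation|readme/.test(text)) return 'docs';
  if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
  if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
  return 'feature';
}

console.log(inferTaskType('Fix login crash', 'users cannot log in'));  // bugfix
console.log(inferTaskType('Urgent production fix', 'payment bug'));    // bugfix-hotfix
console.log(inferTaskType('Add rate limiting', 'new middleware'));     // feature
```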

## 6-Phase Execution

### Phase 1: Load Tasks

```javascript
const args = $ARGUMENTS;
const autoYes = /(-y|--yes)/.test(args);
const dryRun = /--dry-run/.test(args);
const taskFilter = args.match(/--task\s+([\w,-]+)/)?.[1]?.split(',') || null;

// Load task files
const taskFiles = Glob('.workflow/.idaw/tasks/IDAW-*.json') || [];

if (taskFiles.length === 0) {
  console.log('No IDAW tasks found. Use /idaw:add to create tasks.');
  return;
}

// Parse and filter
let tasks = taskFiles.map(f => JSON.parse(Read(f)));

if (taskFilter) {
  tasks = tasks.filter(t => taskFilter.includes(t.id));
} else {
  tasks = tasks.filter(t => t.status === 'pending');
}

if (tasks.length === 0) {
  console.log('No pending tasks to execute. Use /idaw:add to add tasks or --task to specify IDs.');
  return;
}

// Sort: priority ASC (1=critical first), then ID ASC
tasks.sort((a, b) => {
  if (a.priority !== b.priority) return a.priority - b.priority;
  return a.id.localeCompare(b.id);
});
```
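
The comparator orders by priority first, then lexicographically by ID as a tie-breaker (a standalone check with made-up tasks):

```javascript
const tasks = [
  { id: 'IDAW-003', priority: 2 },
  { id: 'IDAW-002', priority: 1 },
  { id: 'IDAW-001', priority: 1 }
];

// priority ASC (1 = critical first), then ID ASC for ties
tasks.sort((a, b) => {
  if (a.priority !== b.priority) return a.priority - b.priority;
  return a.id.localeCompare(b.id);
});

console.log(tasks.map(t => t.id).join(', '));  // IDAW-001, IDAW-002, IDAW-003
```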

### Phase 2: Session Setup

```javascript
// Generate session ID: IDA-{slug}-YYYYMMDD
const slug = tasks[0].title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')
  .substring(0, 20)
  .replace(/-$/, '');
const dateStr = new Date().toISOString().slice(0, 10).replace(/-/g, '');
let sessionId = `IDA-${slug}-${dateStr}`;

// Check collision
const existingSession = Glob(`.workflow/.idaw/sessions/${sessionId}/session.json`);
if (existingSession?.length > 0) {
  sessionId = `${sessionId}-2`;
}

const sessionDir = `.workflow/.idaw/sessions/${sessionId}`;
Bash(`mkdir -p "${sessionDir}"`);

const session = {
  session_id: sessionId,
  status: 'running',
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString(),
  tasks: tasks.map(t => t.id),
  current_task: null,
  completed: [],
  failed: [],
  skipped: []
};

Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

// Initialize progress.md
const progressHeader = `# IDAW Progress — ${sessionId}\nStarted: ${session.created_at}\n\n`;
Write(`${sessionDir}/progress.md`, progressHeader);

// TodoWrite
TodoWrite({
  todos: tasks.map((t, i) => ({
    content: `IDAW:[${i + 1}/${tasks.length}] ${t.title}`,
    status: i === 0 ? 'in_progress' : 'pending',
    activeForm: `Executing ${t.title}`
  }))
});
```
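
The slug rules above reduce an arbitrary title to at most 20 URL-safe characters. A standalone sketch (the helper name `slugify` is ours, not the command's, and the title is invented):

```javascript
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse non-alphanumeric runs to '-'
    .substring(0, 20)             // cap length at 20
    .replace(/-$/, '');           // drop a trailing hyphen left by the cut
}

console.log(slugify('Fix Login Timeout!'));
// fix-login-timeout
console.log(`IDA-${slugify('Fix Login Timeout!')}-20260301`);
// IDA-fix-login-timeout-20260301
```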

### Phase 3: Startup Protocol

```javascript
// Check for existing running sessions
const runningSessions = Glob('.workflow/.idaw/sessions/IDA-*/session.json')
  ?.map(f => JSON.parse(Read(f)))
  .filter(s => s.status === 'running' && s.session_id !== sessionId) || [];

if (runningSessions.length > 0) {
  if (!autoYes) {
    const answer = AskUserQuestion({
      questions: [{
        question: `Found running session: ${runningSessions[0].session_id}. How to proceed?`,
        header: 'Conflict',
        multiSelect: false,
        options: [
          { label: 'Resume existing', description: 'Use /idaw:resume instead' },
          { label: 'Start fresh', description: 'Continue with new session' },
          { label: 'Abort', description: 'Cancel this run' }
        ]
      }]
    });
    if (answer.answers?.Conflict === 'Resume existing') {
      console.log(`Use: /idaw:resume ${runningSessions[0].session_id}`);
      return;
    }
    if (answer.answers?.Conflict === 'Abort') return;
  }
  // autoYes or "Start fresh": proceed
}

// Check git status
const gitStatus = Bash('git status --porcelain 2>/dev/null');
if (gitStatus?.trim()) {
  if (!autoYes) {
    const answer = AskUserQuestion({
      questions: [{
        question: 'Working tree has uncommitted changes. How to proceed?',
        header: 'Git',
        multiSelect: false,
        options: [
          { label: 'Continue', description: 'Proceed with dirty tree' },
          { label: 'Stash', description: 'git stash before running' },
          { label: 'Abort', description: 'Stop and handle manually' }
        ]
      }]
    });
    if (answer.answers?.Git === 'Stash') {
      Bash('git stash push -m "idaw-pre-run"');
    }
    if (answer.answers?.Git === 'Abort') return;
  }
  // autoYes: proceed silently
}

// Dry run: show plan and exit
if (dryRun) {
  console.log(`# Dry Run — ${sessionId}\n`);
  for (const task of tasks) {
    const taskType = task.task_type || inferTaskType(task.title, task.description);
    const chain = task.skill_chain || SKILL_CHAIN_MAP[taskType] || SKILL_CHAIN_MAP['feature'];
    console.log(`## ${task.id}: ${task.title}`);
    console.log(`  Type: ${taskType} | Priority: ${task.priority}`);
    console.log(`  Chain: ${chain.join(' → ')}\n`);
  }
  console.log(`Total: ${tasks.length} tasks`);
  return;
}
```

### Phase 4: Main Loop (serial, one task at a time)

```javascript
for (let taskIdx = 0; taskIdx < tasks.length; taskIdx++) {
  const task = tasks[taskIdx];

  // Skip completed/failed/skipped
  if (['completed', 'failed', 'skipped'].includes(task.status)) continue;

  // Resolve skill chain
  const resolvedType = task.task_type || inferTaskType(task.title, task.description);
  const chain = task.skill_chain || SKILL_CHAIN_MAP[resolvedType] || SKILL_CHAIN_MAP['feature'];

  // Update task status → in_progress
  task.status = 'in_progress';
  task.task_type = resolvedType; // persist inferred type
  task.execution.started_at = new Date().toISOString();
  Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));

  // Update session
  session.current_task = task.id;
  session.updated_at = new Date().toISOString();
  Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

  console.log(`\n--- [${taskIdx + 1}/${tasks.length}] ${task.id}: ${task.title} ---`);
  console.log(`Chain: ${chain.join(' → ')}`);

  // ━━━ Pre-Task CLI Context Analysis (for complex/bugfix tasks) ━━━
  if (['bugfix', 'bugfix-hotfix', 'feature-complex'].includes(resolvedType)) {
    console.log(`  Pre-analysis: gathering context for ${resolvedType} task...`);
    const affectedFiles = (task.context?.affected_files || []).join(', ');
    const preAnalysisPrompt = `PURPOSE: Pre-analyze codebase context for IDAW task before execution.
TASK: • Understand current state of: ${affectedFiles || 'files related to: ' + task.title} • Identify dependencies and risk areas • Note existing patterns to follow
MODE: analysis
CONTEXT: @**/*
EXPECTED: Brief context summary (affected modules, dependencies, risk areas) in 3-5 bullet points
CONSTRAINTS: Keep concise | Focus on execution-relevant context`;
    const preAnalysis = Bash(`ccw cli -p '${preAnalysisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis 2>&1 || echo "Pre-analysis skipped"`);
    task.execution.skill_results.push({
      skill: 'cli-pre-analysis',
      status: 'completed',
      context_summary: preAnalysis?.substring(0, 500),
      timestamp: new Date().toISOString()
    });
  }

  // Execute each skill in chain
  let previousResult = null;
  let taskFailed = false;

  for (let skillIdx = 0; skillIdx < chain.length; skillIdx++) {
    const skillName = chain[skillIdx];
    const skillArgs = assembleSkillArgs(skillName, task, previousResult, autoYes, skillIdx === 0);

    console.log(`  [${skillIdx + 1}/${chain.length}] ${skillName}`);

    try {
      const result = Skill({ skill: skillName, args: skillArgs });
      previousResult = result;
      task.execution.skill_results.push({
        skill: skillName,
        status: 'completed',
        timestamp: new Date().toISOString()
      });
    } catch (error) {
      // ━━━ CLI-Assisted Error Recovery ━━━
      // Step 1: Invoke CLI diagnosis (auto-invoke trigger: self-repair fails)
      console.log(`  Diagnosing failure: ${skillName}...`);
      const diagnosisPrompt = `PURPOSE: Diagnose why skill "${skillName}" failed during IDAW task execution.
TASK: • Analyze error: ${String(error).substring(0, 300)} • Check affected files: ${(task.context?.affected_files || []).join(', ') || 'unknown'} • Identify root cause • Suggest fix strategy
MODE: analysis
CONTEXT: @**/* | Memory: IDAW task ${task.id}: ${task.title}
EXPECTED: Root cause + actionable fix recommendation (1-2 sentences)
CONSTRAINTS: Focus on actionable diagnosis`;
      const diagnosisResult = Bash(`ccw cli -p '${diagnosisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause 2>&1 || echo "CLI diagnosis unavailable"`);

      task.execution.skill_results.push({
        skill: `cli-diagnosis:${skillName}`,
        status: 'completed',
        diagnosis: diagnosisResult?.substring(0, 500),
        timestamp: new Date().toISOString()
      });

      // Step 2: Retry with diagnosis context
      console.log(`  Retry with diagnosis: ${skillName}`);
      try {
        const retryResult = Skill({ skill: skillName, args: skillArgs });
        previousResult = retryResult;
        task.execution.skill_results.push({
          skill: skillName,
          status: 'completed-retry-with-diagnosis',
          timestamp: new Date().toISOString()
        });
      } catch (retryError) {
        // Step 3: Failed after CLI-assisted retry
        task.execution.skill_results.push({
          skill: skillName,
          status: 'failed',
          error: String(retryError).substring(0, 200),
          timestamp: new Date().toISOString()
        });

        if (autoYes) {
          taskFailed = true;
          break;
        } else {
          const answer = AskUserQuestion({
            questions: [{
              question: `${skillName} failed after CLI diagnosis + retry: ${String(retryError).substring(0, 100)}. How to proceed?`,
              header: 'Error',
              multiSelect: false,
              options: [
                { label: 'Skip task', description: 'Mark task as failed, continue to next' },
                { label: 'Abort', description: 'Stop entire run' }
              ]
            }]
          });
          if (answer.answers?.Error === 'Abort') {
            task.status = 'failed';
            task.execution.error = String(retryError).substring(0, 200);
            Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));
            session.failed.push(task.id);
            session.status = 'failed';
            session.updated_at = new Date().toISOString();
            Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
            return;
          }
          taskFailed = true;
          break;
        }
      }
    }
  }

  // Phase 5: Checkpoint (per task) — inline
  if (taskFailed) {
    task.status = 'failed';
    task.execution.error = 'Skill chain failed after retry';
    task.execution.completed_at = new Date().toISOString();
    session.failed.push(task.id);
  } else {
    // Git commit checkpoint
    const commitMsg = `feat(idaw): ${task.title} [${task.id}]`;
    const diffCheck = Bash('git diff --stat HEAD 2>/dev/null || echo ""');
    const untrackedCheck = Bash('git ls-files --others --exclude-standard 2>/dev/null || echo ""');

    if (diffCheck?.trim() || untrackedCheck?.trim()) {
      Bash('git add -A');
      const commitResult = Bash(`git commit -m "$(cat <<'EOF'\n${commitMsg}\nEOF\n)"`);
      const commitHash = Bash('git rev-parse --short HEAD 2>/dev/null')?.trim();
      task.execution.git_commit = commitHash;
    } else {
      task.execution.git_commit = 'no-commit';
    }

    task.status = 'completed';
    task.execution.completed_at = new Date().toISOString();
    session.completed.push(task.id);
  }

  // Write task + session state
  task.updated_at = new Date().toISOString();
  Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));

  session.updated_at = new Date().toISOString();
  Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

  // Append to progress.md
  const duration = task.execution.started_at && task.execution.completed_at
    ? formatDuration(new Date(task.execution.completed_at) - new Date(task.execution.started_at))
    : 'unknown';

  const progressEntry = `## ${task.id} — ${task.title}\n` +
    `- Status: ${task.status}\n` +
    `- Type: ${task.task_type}\n` +
    `- Chain: ${chain.join(' → ')}\n` +
    `- Commit: ${task.execution.git_commit || '-'}\n` +
    `- Duration: ${duration}\n\n`;

  const currentProgress = Read(`${sessionDir}/progress.md`);
  Write(`${sessionDir}/progress.md`, currentProgress + progressEntry);

  // Update TodoWrite
  if (taskIdx + 1 < tasks.length) {
    TodoWrite({
      todos: tasks.map((t, i) => ({
        content: `IDAW:[${i + 1}/${tasks.length}] ${t.title}`,
        status: i < taskIdx + 1 ? 'completed' : (i === taskIdx + 1 ? 'in_progress' : 'pending'),
        activeForm: `Executing ${t.title}`
      }))
    });
  }
}
```

### Phase 6: Report

```javascript
session.status = session.failed.length > 0 && session.completed.length === 0 ? 'failed' : 'completed';
session.current_task = null;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));

// Final progress summary
const summary = `\n---\n## Summary\n` +
  `- Completed: ${session.completed.length}\n` +
  `- Failed: ${session.failed.length}\n` +
  `- Skipped: ${session.skipped.length}\n` +
  `- Total: ${tasks.length}\n`;

const finalProgress = Read(`${sessionDir}/progress.md`);
Write(`${sessionDir}/progress.md`, finalProgress + summary);

// Display report
console.log('\n=== IDAW Run Complete ===');
console.log(`Session: ${sessionId}`);
console.log(`Completed: ${session.completed.length}/${tasks.length}`);
if (session.failed.length > 0) console.log(`Failed: ${session.failed.join(', ')}`);
if (session.skipped.length > 0) console.log(`Skipped: ${session.skipped.join(', ')}`);

// List git commits
for (const taskId of session.completed) {
  const t = JSON.parse(Read(`.workflow/.idaw/tasks/${taskId}.json`));
  if (t.execution.git_commit && t.execution.git_commit !== 'no-commit') {
    console.log(`  ${t.execution.git_commit} ${t.title}`);
  }
}
```

## Helper Functions

### assembleSkillArgs

```javascript
function assembleSkillArgs(skillName, task, previousResult, autoYes, isFirst) {
  let args = '';

  if (isFirst) {
    // First skill: pass task goal — sanitize for shell safety
    const goal = `${task.title}\n${task.description}`
      .replace(/\\/g, '\\\\')
      .replace(/"/g, '\\"')
      .replace(/\$/g, '\\$')
      .replace(/`/g, '\\`');
    args = `"${goal}"`;

    // bugfix-hotfix: add --hotfix
    if (task.task_type === 'bugfix-hotfix') {
      args += ' --hotfix';
    }
  } else if (previousResult?.session_id) {
    // Subsequent skills: chain session
    args = `--session="${previousResult.session_id}"`;
  }

  // Propagate -y
  if (autoYes && !args.includes('-y') && !args.includes('--yes')) {
    args = args ? `${args} -y` : '-y';
  }

  return args;
}
```
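
Two representative calls (a standalone sketch; the task objects are invented):

```javascript
function assembleSkillArgs(skillName, task, previousResult, autoYes, isFirst) {
  let args = '';

  if (isFirst) {
    // First skill: quoted goal string, escaped for shell safety
    const goal = `${task.title}\n${task.description}`
      .replace(/\\/g, '\\\\')
      .replace(/"/g, '\\"')
      .replace(/\$/g, '\\$')
      .replace(/`/g, '\\`');
    args = `"${goal}"`;
    if (task.task_type === 'bugfix-hotfix') {
      args += ' --hotfix';
    }
  } else if (previousResult?.session_id) {
    // Subsequent skills: chain the previous session
    args = `--session="${previousResult.session_id}"`;
  }

  // Propagate -y unless already present
  if (autoYes && !args.includes('-y') && !args.includes('--yes')) {
    args = args ? `${args} -y` : '-y';
  }

  return args;
}

// First skill of a hotfix task: quoted goal plus --hotfix
console.log(assembleSkillArgs('workflow-lite-planex',
  { title: 'Hotfix outage', description: 'restore service', task_type: 'bugfix-hotfix' },
  null, false, true));
// "Hotfix outage\nrestore service" --hotfix   (with a real newline in the goal)

// Subsequent skill chained onto the previous session, with -y propagated
console.log(assembleSkillArgs('workflow-test-fix', {}, { session_id: 'WFS-123' }, true, false));
// --session="WFS-123" -y
```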

### formatDuration

```javascript
function formatDuration(ms) {
  const seconds = Math.floor(ms / 1000);
  const minutes = Math.floor(seconds / 60);
  const remainingSeconds = seconds % 60;
  if (minutes > 0) return `${minutes}m ${remainingSeconds}s`;
  return `${seconds}s`;
}
```
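
Two sample durations (standalone check):

```javascript
// Milliseconds → "Xs" under a minute, "Xm Ys" above
function formatDuration(ms) {
  const seconds = Math.floor(ms / 1000);
  const minutes = Math.floor(seconds / 60);
  const remainingSeconds = seconds % 60;
  if (minutes > 0) return `${minutes}m ${remainingSeconds}s`;
  return `${seconds}s`;
}

console.log(formatDuration(42000));  // 42s
console.log(formatDuration(95000));  // 1m 35s
```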

## CLI-Assisted Analysis

IDAW integrates `ccw cli` (Gemini) for intelligent analysis at two key points:

### Pre-Task Context Analysis

For `bugfix`, `bugfix-hotfix`, and `feature-complex` tasks, IDAW automatically invokes CLI analysis **before** executing the skill chain to gather codebase context:

```
Task starts → CLI pre-analysis (gemini) → Context gathered → Skill chain executes
```

- Identifies dependencies and risk areas
- Notes existing patterns to follow
- Results stored in `task.execution.skill_results` as `cli-pre-analysis`

### Error Recovery with CLI Diagnosis

When a skill fails, instead of a blind retry, IDAW uses CLI-assisted diagnosis:

```
Skill fails → CLI diagnosis (gemini, analysis-diagnose-bug-root-cause)
  → Root cause identified → Retry with diagnosis context
  → Still fails → Skip (autoYes) or Ask user (interactive)
```

- Uses the `--rule analysis-diagnose-bug-root-cause` template
- Diagnosis results stored in `task.execution.skill_results` as `cli-diagnosis:{skill}`
- Follows the CLAUDE.md auto-invoke trigger pattern: "self-repair fails → invoke CLI analysis"

### Execution Flow (with CLI analysis)

```
Phase 4 Main Loop (per task):
  ├─ [bugfix/complex only] CLI pre-analysis → context summary
  ├─ Skill 1: execute
  │   ├─ Success → next skill
  │   └─ Failure → CLI diagnosis → retry → success/fail
  ├─ Skill 2: execute ...
  └─ Phase 5: git checkpoint
```

## Examples

```bash
# Execute all pending tasks
/idaw:run -y

# Execute specific tasks
/idaw:run --task IDAW-001,IDAW-003

# Dry run (show plan without executing)
/idaw:run --dry-run

# Interactive mode (confirm at each step)
/idaw:run
```

.claude/commands/idaw/status.md (182 lines, new file)
@@ -0,0 +1,182 @@
---
name: status
description: View IDAW task and session progress
argument-hint: "[session-id]"
allowed-tools: Read(*), Glob(*), Bash(*)
---

# IDAW Status Command (/idaw:status)

## Overview

Read-only command to view the IDAW task queue and execution session progress.

## Implementation

### Phase 1: Determine View Mode

```javascript
const sessionId = $ARGUMENTS?.trim();

if (sessionId) {
  // Specific session view
  showSession(sessionId);
} else {
  // Overview: pending tasks + latest session
  showOverview();
}
```

### Phase 2: Show Overview

```javascript
function showOverview() {
  // 1. Load all tasks
  const taskFiles = Glob('.workflow/.idaw/tasks/IDAW-*.json') || [];

  if (taskFiles.length === 0) {
    console.log('No IDAW tasks found. Use /idaw:add to create tasks.');
    return;
  }

  const tasks = taskFiles.map(f => JSON.parse(Read(f)));

  // 2. Group by status
  const byStatus = {
    pending: tasks.filter(t => t.status === 'pending'),
    in_progress: tasks.filter(t => t.status === 'in_progress'),
    completed: tasks.filter(t => t.status === 'completed'),
    failed: tasks.filter(t => t.status === 'failed'),
    skipped: tasks.filter(t => t.status === 'skipped')
  };

  // 3. Display task summary table
  console.log('# IDAW Tasks\n');
  console.log('| ID | Title | Type | Priority | Status |');
  console.log('|----|-------|------|----------|--------|');

  // Sort: priority ASC, then ID ASC
  const sorted = [...tasks].sort((a, b) => {
    if (a.priority !== b.priority) return a.priority - b.priority;
    return a.id.localeCompare(b.id);
  });

  for (const t of sorted) {
    const type = t.task_type || '(infer)';
    console.log(`| ${t.id} | ${t.title.substring(0, 40)} | ${type} | ${t.priority} | ${t.status} |`);
  }

  console.log(`\nTotal: ${tasks.length} | Pending: ${byStatus.pending.length} | Completed: ${byStatus.completed.length} | Failed: ${byStatus.failed.length}`);

  // 4. Show latest session (if any)
  const sessionDirs = Glob('.workflow/.idaw/sessions/IDA-*/session.json') || [];
  if (sessionDirs.length > 0) {
    // Sort by modification time (newest first) — Glob returns sorted by mtime
    const latestSessionFile = sessionDirs[0];
    const session = JSON.parse(Read(latestSessionFile));
    console.log(`\n## Latest Session: ${session.session_id}`);
    console.log(`Status: ${session.status} | Tasks: ${session.tasks?.length || 0}`);
    console.log(`Completed: ${session.completed?.length || 0} | Failed: ${session.failed?.length || 0} | Skipped: ${session.skipped?.length || 0}`);
  }
}
```

### Phase 3: Show Specific Session

```javascript
function showSession(sessionId) {
  const sessionFile = `.workflow/.idaw/sessions/${sessionId}/session.json`;
  const progressFile = `.workflow/.idaw/sessions/${sessionId}/progress.md`;

  // Try reading session
  try {
    const session = JSON.parse(Read(sessionFile));

    console.log(`# IDAW Session: ${session.session_id}\n`);
    console.log(`Status: ${session.status}`);
    console.log(`Created: ${session.created_at}`);
    console.log(`Updated: ${session.updated_at}`);
    console.log(`Current Task: ${session.current_task || 'none'}\n`);

    // Task detail table
    console.log('| ID | Title | Status | Commit |');
    console.log('|----|-------|--------|--------|');

    for (const taskId of session.tasks) {
      const taskFile = `.workflow/.idaw/tasks/${taskId}.json`;
      try {
        const task = JSON.parse(Read(taskFile));
        const commit = task.execution?.git_commit?.substring(0, 7) || '-';
        console.log(`| ${task.id} | ${task.title.substring(0, 40)} | ${task.status} | ${commit} |`);
      } catch {
        console.log(`| ${taskId} | (file not found) | unknown | - |`);
      }
    }

    console.log(`\nCompleted: ${session.completed?.length || 0} | Failed: ${session.failed?.length || 0} | Skipped: ${session.skipped?.length || 0}`);

    // Show progress.md if it exists
    try {
      const progress = Read(progressFile);
      console.log('\n---\n');
      console.log(progress);
    } catch {
      // No progress file yet
    }

  } catch {
    // Session not found — try listing all sessions
    console.log(`Session "${sessionId}" not found.\n`);
    listSessions();
  }
}
```

### Phase 4: List All Sessions

```javascript
function listSessions() {
  const sessionFiles = Glob('.workflow/.idaw/sessions/IDA-*/session.json') || [];

  if (sessionFiles.length === 0) {
    console.log('No IDAW sessions found. Use /idaw:run to start execution.');
    return;
  }

  console.log('# IDAW Sessions\n');
  console.log('| Session ID | Status | Tasks | Completed | Failed |');
  console.log('|------------|--------|-------|-----------|--------|');

  for (const f of sessionFiles) {
    try {
      const session = JSON.parse(Read(f));
      console.log(`| ${session.session_id} | ${session.status} | ${session.tasks?.length || 0} | ${session.completed?.length || 0} | ${session.failed?.length || 0} |`);
    } catch {
      // Skip malformed
    }
  }

  console.log('\nUse /idaw:status <session-id> for details.');
}
```

## Examples

```bash
# Show overview (pending tasks + latest session)
/idaw:status

# Show specific session details
/idaw:status IDA-auth-fix-20260301

# Output example:
# IDAW Tasks
#
# | ID       | Title                              | Type   | Priority | Status    |
# |----------|------------------------------------|--------|----------|-----------|
# | IDAW-001 | Fix auth token refresh             | bugfix | 1        | completed |
# | IDAW-002 | Add rate limiting                  | feature| 2        | pending   |
# | IDAW-003 | Refactor payment module            | refact | 3        | pending   |
#
# Total: 3 | Pending: 2 | Completed: 1 | Failed: 0
```

@@ -252,6 +252,17 @@ await updateDiscoveryState(outputDir, {
const hasHighPriority = issues.some(i => i.priority === 'critical' || i.priority === 'high');
const hasMediumFindings = prioritizedFindings.some(f => f.priority === 'medium');

// Auto mode: auto-select recommended action
if (autoYes) {
  if (hasHighPriority) {
    await appendJsonl('.workflow/issues/issues.jsonl', issues);
    console.log(`Exported ${issues.length} issues. Run /issue:plan to continue.`);
  } else {
    console.log('Discovery complete. No significant issues found.');
  }
  return;
}

await AskUserQuestion({
  questions: [{
    question: `Discovery complete: ${issues.length} issues generated, ${prioritizedFindings.length} total findings. What would you like to do next?`,
@@ -152,6 +152,12 @@ if (!QUEUE_ID) {
  return;
}

// Auto mode: auto-select if exactly one active queue
if (autoYes && activeQueues.length === 1) {
  QUEUE_ID = activeQueues[0].id;
  console.log(`Auto-selected queue: ${QUEUE_ID}`);
} else {

// Display and prompt user
console.log('\nAvailable Queues:');
console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
@@ -176,6 +182,7 @@ if (!QUEUE_ID) {
});

QUEUE_ID = answer['Queue'];
} // end else (multi-queue prompt)
}

console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
@@ -203,6 +210,13 @@ console.log(`
- Parallel in batch 1: ${dag.parallel_batches[0]?.length || 0}
`);

// Auto mode: use recommended defaults (Codex + Execute + Worktree)
if (autoYes) {
  var executor = 'codex';
  var isDryRun = false;
  var useWorktree = true;
} else {

// Interactive selection via AskUserQuestion
const answer = AskUserQuestion({
  questions: [
@@ -237,9 +251,10 @@ const answer = AskUserQuestion({
  ]
});

const executor = answer['Executor'].toLowerCase().split(' ')[0]; // codex|gemini|agent
const isDryRun = answer['Mode'].includes('Dry-run');
const useWorktree = answer['Worktree'].includes('Yes');
var executor = answer['Executor'].toLowerCase().split(' ')[0]; // codex|gemini|agent
var isDryRun = answer['Mode'].includes('Dry-run');
var useWorktree = answer['Worktree'].includes('Yes');
} // end else (interactive selection)

// Dry run mode
if (isDryRun) {
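The auto/interactive branching above reduces to choosing three run options. A minimal sketch of that selection logic, assuming the same `answer` shape produced by `AskUserQuestion` (the `resolveRunOptions` helper name is hypothetical):

```javascript
// Hypothetical helper mirroring the branching above: auto mode takes the
// recommended defaults, otherwise options are parsed from the answer object.
function resolveRunOptions(autoYes, answer) {
  if (autoYes) {
    // Recommended defaults: Codex + Execute + Worktree
    return { executor: 'codex', isDryRun: false, useWorktree: true };
  }
  return {
    executor: answer['Executor'].toLowerCase().split(' ')[0], // codex|gemini|agent
    isDryRun: answer['Mode'].includes('Dry-run'),
    useWorktree: answer['Worktree'].includes('Yes')
  };
}
```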
@@ -451,27 +466,33 @@ if (refreshedDag.ready_count > 0) {
if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_count === refreshedDag.total) {
  console.log('\n## All Solutions Completed - Worktree Cleanup');

  const answer = AskUserQuestion({
    questions: [{
      question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
      header: 'Merge',
      multiSelect: false,
      options: [
        { label: 'Create PR (Recommended)', description: 'Push branch and create pull request' },
        { label: 'Merge to main', description: 'Merge all commits and cleanup worktree' },
        { label: 'Keep branch', description: 'Cleanup worktree, keep branch for manual handling' }
      ]
    }]
  });
  // Auto mode: Create PR (recommended)
  if (autoYes) {
    var mergeAction = 'Create PR';
  } else {
    const answer = AskUserQuestion({
      questions: [{
        question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
        header: 'Merge',
        multiSelect: false,
        options: [
          { label: 'Create PR (Recommended)', description: 'Push branch and create pull request' },
          { label: 'Merge to main', description: 'Merge all commits and cleanup worktree' },
          { label: 'Keep branch', description: 'Cleanup worktree, keep branch for manual handling' }
        ]
      }]
    });
    var mergeAction = answer['Merge'];
  }

  const repoRoot = Bash('git rev-parse --show-toplevel').trim();

  if (answer['Merge'].includes('Create PR')) {
  if (mergeAction.includes('Create PR')) {
    Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
    Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution - all solutions completed" --head "${worktreeBranch}"`);
    Bash(`git worktree remove "${worktreePath}"`);
    console.log(`PR created for branch: ${worktreeBranch}`);
  } else if (answer['Merge'].includes('Merge to main')) {
  } else if (mergeAction.includes('Merge to main')) {
    // Check main is clean
    const mainDirty = Bash('git status --porcelain').trim();
    if (mainDirty) {

@@ -154,8 +154,8 @@ Phase 6: Bind Solution
├─ Update issue status to 'planned'
└─ Returns: SOL-{issue-id}-{uid}

Phase 7: Next Steps
└─ Offer: Form queue | Convert another idea | View details | Done
Phase 7: Next Steps (skip in auto mode)
└─ Auto mode: complete directly | Interactive: Form queue | Convert another | Done
```

## Context Enrichment Logic

@@ -263,6 +263,14 @@ for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
for (const pending of pendingSelections) {
  if (pending.solutions.length === 0) continue;

  // Auto mode: auto-bind first (highest-ranked) solution
  if (autoYes) {
    const solId = pending.solutions[0].id;
    Bash(`ccw issue bind ${pending.issue_id} ${solId}`);
    console.log(`✓ ${pending.issue_id}: ${solId} bound (auto)`);
    continue;
  }

  const options = pending.solutions.slice(0, 4).map(sol => ({
    label: `${sol.id} (${sol.task_count} tasks)`,
    description: sol.description || sol.approach || 'No description'

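The `for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL)` loop in the hunk header above is the standard slice-batching pattern: tasks are processed in groups so at most `MAX_PARALLEL` agents run concurrently. A minimal sketch of that pattern (the `toBatches` name is illustrative):

```javascript
// Sketch of the batching pattern: split a task list into consecutive
// slices of at most maxParallel items each.
function toBatches(tasks, maxParallel) {
  const batches = [];
  for (let i = 0; i < tasks.length; i += maxParallel) {
    batches.push(tasks.slice(i, i + maxParallel));
  }
  return batches;
}
```

Each batch would then be dispatched with `Promise.all` (or the document's `Task` tool) before moving on to the next slice.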
@@ -273,6 +273,17 @@ const allClarifications = results.flatMap((r, i) =>
```javascript
if (allClarifications.length > 0) {
  for (const clarification of allClarifications) {
    // Auto mode: use recommended resolution (first option)
    if (autoYes) {
      const autoAnswer = clarification.options[0]?.label || 'skip';
      Task(
        subagent_type="issue-queue-agent",
        resume=clarification.agent_id,
        prompt=`Conflict ${clarification.conflict_id} resolved: ${autoAnswer}`
      );
      continue;
    }

    // Present to user via AskUserQuestion
    const answer = AskUserQuestion({
      questions: [{
@@ -345,6 +356,14 @@ ccw issue queue list --brief

**AskUserQuestion:**
```javascript
// Auto mode: merge into existing queue
if (autoYes) {
  Bash(`ccw issue queue merge ${newQueueId} --queue ${activeQueueId}`);
  Bash(`ccw issue queue delete ${newQueueId}`);
  console.log(`Auto-merged new queue into ${activeQueueId}`);
  return;
}

AskUserQuestion({
  questions: [{
    question: "Active queue exists. How would you like to proceed?",

@@ -526,7 +526,7 @@ CONSTRAINTS: ${perspective.constraints}
- **📌 Decision summary**: How key decisions shaped the final conclusions (link conclusions back to decisions)
- Write to conclusions.json

2. **Final discussion.md Update**
3. **Final discussion.md Update**
   - Append conclusions section:
     - **Summary**: High-level overview
     - **Key Conclusions**: Ranked with evidence and confidence
@@ -542,29 +542,51 @@ CONSTRAINTS: ${perspective.constraints}
- **Trade-offs Made**: Key trade-offs and why certain paths were chosen over others
- Add session statistics: rounds, duration, sources, artifacts, **decision count**

3. **Post-Completion Options**
4. **Display Conclusions Summary**
   - Present analysis conclusions to the user before asking for next steps:
   ```javascript
   console.log(`
## Analysis Report

**Summary**: ${conclusions.summary}

**Key Conclusions** (${conclusions.key_conclusions.length}):
${conclusions.key_conclusions.map((c, i) => `${i+1}. [${c.confidence}] ${c.point}`).join('\n')}

**Recommendations** (${conclusions.recommendations.length}):
${conclusions.recommendations.map((r, i) => `${i+1}. [${r.priority}] ${r.action} — ${r.rationale}`).join('\n')}
${conclusions.open_questions.length > 0 ? `\n**Open Questions**:\n${conclusions.open_questions.map(q => '- ' + q).join('\n')}` : ''}

📄 Full report: ${sessionFolder}/discussion.md
`)
   ```

5. **Post-Completion Options** (⚠️ TERMINAL — analyze-with-file ends after user selection)

> **WORKFLOW BOUNDARY**: After user selects any option below, the analyze-with-file workflow is **COMPLETE**.
> If "执行任务" is selected, workflow-lite-planex takes over exclusively — do NOT return to any analyze-with-file phase.
> The "Phase" numbers in workflow-lite-planex (LP-Phase 1-5) are SEPARATE from analyze-with-file phases.

```javascript
const hasActionableRecs = conclusions.recommendations?.some(r => r.priority === 'high' || r.priority === 'medium')

const nextStep = AskUserQuestion({
  questions: [{
    question: "Analysis complete. What's next?",
    question: "Report generated. What would you like to do next?",
    header: "Next Step",
    multiSelect: false,
    options: [
      { label: hasActionableRecs ? "生成任务 (Recommended)" : "生成任务", description: "Launch workflow-lite-plan with analysis context" },
      { label: "创建Issue", description: "Launch issue-discover with conclusions" },
      { label: "导出报告", description: "Generate standalone analysis report" },
      { label: hasActionableRecs ? "执行任务 (Recommended)" : "执行任务", description: "Launch workflow-lite-planex to plan & execute" },
      { label: "产出Issue", description: "Launch issue-discover with conclusions" },
      { label: "完成", description: "No further action" }
    ]
  }]
})
```

**Handle "生成任务"**:
**Handle "执行任务"** (⚠️ TERMINAL — analyze-with-file ends here, lite-plan takes over exclusively):
```javascript
if (nextStep.includes("生成任务")) {
if (nextStep.includes("执行任务")) {
  // 1. Build task description from high/medium priority recommendations
  const taskDescription = conclusions.recommendations
    .filter(r => r.priority === 'high' || r.priority === 'medium')
@@ -585,8 +607,19 @@ CONSTRAINTS: ${perspective.constraints}
  if (findings.length) contextLines.push(`**Key Findings**:\n${findings.map(f => `- ${f}`).join('\n')}`)
}

// 3. Call lite-plan with enriched task description (no special flags)
Skill(skill="workflow-lite-plan", args=`"${taskDescription}\n\n${contextLines.join('\n')}"`)
// 3. ⛔ SESSION TERMINATION — output explicit boundary
console.log(`
---
## ⛔ ANALYZE-WITH-FILE SESSION COMPLETE
All Phases 1-4 of analyze-with-file are FINISHED.
Session: ${sessionId} — concluded at ${new Date().toISOString()}
DO NOT reference any analyze-with-file phase instructions beyond this point.
---
`)

// 4. Hand off to lite-plan — analyze-with-file COMPLETE, do NOT return to any analyze phase
Skill(skill="workflow-lite-planex", args=`"${taskDescription}\n\n${contextLines.join('\n')}"`)
return // ⛔ analyze-with-file terminates here
}
```

@@ -764,12 +797,12 @@ User agrees with current direction, wants deeper code analysis
- Quick information gathering without multi-round iteration
- Follow-up analysis building on existing session

**Use `Skill(skill="workflow-lite-plan", args="\"task description\"")` when:**
**Use `Skill(skill="workflow-lite-planex", args="\"task description\"")` when:**
- Ready to implement (past analysis phase)
- Need simple task breakdown
- Focus on quick execution planning

> **Note**: Phase 4「生成任务」assembles analysis context as inline `## Prior Analysis` block in task description, allowing lite-plan to skip redundant exploration automatically.
> **Note**: Phase 4「执行任务」assembles analysis context as inline `## Prior Analysis` block in task description, allowing lite-plan to skip redundant exploration automatically.

---

@@ -769,7 +769,7 @@ See full markdown template in original file (lines 955-1161).
- Want shared collaborative planning document
- Need structured task breakdown with agent coordination

**Use `Skill(skill="workflow-lite-plan", args="\"task description\"")` when:**
**Use `Skill(skill="workflow-lite-planex", args="\"task description\"")` when:**
- Direction is already clear
- Ready to move from ideas to execution
- Need simple implementation breakdown

@@ -496,7 +496,7 @@ if (fileExists(projectPath)) {
}

// Update specs/*.md: remove learnings referencing deleted sessions
const guidelinesPath = '.workflow/specs/*.md'
const guidelinesPath = '.ccw/specs/*.md'
if (fileExists(guidelinesPath)) {
  const guidelines = JSON.parse(Read(guidelinesPath))
  const deletedSessionIds = results.deleted

@@ -208,7 +208,7 @@ Task(
### Project Context (MANDATORY)
Read and incorporate:
- \`.workflow/project-tech.json\` (if exists): Technology stack, architecture
- \`.workflow/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS on sub-domain splitting and plan structure
- \`.ccw/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS on sub-domain splitting and plan structure

### Input Requirements
${taskDescription}
@@ -357,7 +357,7 @@ subDomains.map(sub =>
### Project Context (MANDATORY)
Read and incorporate:
- \`.workflow/project-tech.json\` (if exists): Technology stack, architecture
- \`.workflow/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS
- \`.ccw/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS

## Dual Output Tasks


@@ -632,6 +632,14 @@ Why is config value None during update?

**Auto-sync**: Run `/workflow:session:sync -y "{summary}"` to update specs/*.md + project-tech.

```javascript
// Auto mode: skip expansion question, complete session directly
if (autoYes) {
  console.log('Debug session complete. Auto mode: skipping expansion.');
  return;
}
```

After completion, ask the user whether to expand into issues (test/enhance/refactor/doc); for each selected item call `/issue:new "{summary} - {dimension}"`.

---
@@ -643,6 +651,6 @@ Why is config value None during update?
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Use Gemini to generate new hypotheses based on disproven assumptions |
| Fix doesn't work | Document failed fix attempt, iterate with refined understanding |
| >5 iterations | Review consolidated understanding, escalate to `workflow-lite-plan` skill with full context |
| >5 iterations | Review consolidated understanding, escalate to `workflow-lite-planex` skill with full context |
| Gemini unavailable | Fallback to manual hypothesis generation, document without Gemini insights |
| Understanding too long | Consolidate aggressively, archive old iterations to separate file |

@@ -11,7 +11,7 @@ examples:

## Overview

Interactive multi-round wizard that analyzes the current project (via `project-tech.json`) and asks targeted questions to populate `.workflow/specs/*.md` with coding conventions, constraints, and quality rules.
Interactive multi-round wizard that analyzes the current project (via `project-tech.json`) and asks targeted questions to populate `.ccw/specs/*.md` with coding conventions, constraints, and quality rules.

**Design Principle**: Questions are dynamically generated based on the project's tech stack, architecture, and patterns — not generic boilerplate.

@@ -55,7 +55,7 @@ Step 5: Display Summary

```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
bash(test -f .ccw/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
```

**If TECH_NOT_FOUND**: Exit with message
@@ -332,9 +332,16 @@ For each category of collected answers, append rules to the corresponding spec M

```javascript
// Helper: append rules to a spec MD file with category support
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
function appendRulesToSpecFile(filePath, rules, defaultCategory = 'general') {
  if (rules.length === 0) return

  // Ensure .ccw/specs/ directory exists
  const specDir = path.dirname(filePath)
  if (!fs.existsSync(specDir)) {
    fs.mkdirSync(specDir, { recursive: true })
  }

  // Check if file exists
  if (!file_exists(filePath)) {
    // Create file with frontmatter including category
@@ -360,19 +367,19 @@ keywords: [${defaultCategory}, ${filePath.includes('conventions') ? 'convention'
  Write(filePath, newContent)
}

// Write conventions (general category)
appendRulesToSpecFile('.workflow/specs/coding-conventions.md',
// Write conventions (general category) - use .ccw/specs/ (same as frontend/backend)
appendRulesToSpecFile('.ccw/specs/coding-conventions.md',
  [...newCodingStyle, ...newNamingPatterns, ...newFileStructure, ...newDocumentation],
  'general')

// Write constraints (planning category)
appendRulesToSpecFile('.workflow/specs/architecture-constraints.md',
appendRulesToSpecFile('.ccw/specs/architecture-constraints.md',
  [...newArchitecture, ...newTechStack, ...newPerformance, ...newSecurity],
  'planning')

// Write quality rules (execution category)
if (newQualityRules.length > 0) {
  const qualityPath = '.workflow/specs/quality-rules.md'
  const qualityPath = '.ccw/specs/quality-rules.md'
  if (!file_exists(qualityPath)) {
    Write(qualityPath, `---
title: Quality Rules

@@ -54,7 +54,7 @@ Step 1: Gather Requirements (Interactive)
└─ Ask content (rule text)

Step 2: Determine Target File
├─ specs dimension → .workflow/specs/coding-conventions.md or architecture-constraints.md
├─ specs dimension → .ccw/specs/coding-conventions.md or architecture-constraints.md
└─ personal dimension → ~/.ccw/specs/personal/ or .ccw/specs/personal/

Step 3: Write Spec
@@ -109,7 +109,7 @@ if (!dimension) {
options: [
  {
    label: "Project Spec",
    description: "Coding conventions, constraints, quality rules for this project (stored in .workflow/specs/)"
    description: "Coding conventions, constraints, quality rules for this project (stored in .ccw/specs/)"
  },
  {
    label: "Personal Spec",
@@ -234,19 +234,19 @@ let targetFile: string
let targetDir: string

if (dimension === 'specs') {
  // Project specs
  targetDir = '.workflow/specs'
  // Project specs - use .ccw/specs/ (same as frontend/backend spec-index-builder)
  targetDir = '.ccw/specs'
  if (isConstraint) {
    targetFile = path.join(targetDir, 'architecture-constraints.md')
  } else {
    targetFile = path.join(targetDir, 'coding-conventions.md')
  }
} else {
  // Personal specs
  // Personal specs - use .ccw/personal/ (same as backend spec-index-builder)
  if (scope === 'global') {
    targetDir = path.join(os.homedir(), '.ccw', 'specs', 'personal')
    targetDir = path.join(os.homedir(), '.ccw', 'personal')
  } else {
    targetDir = path.join('.ccw', 'specs', 'personal')
    targetDir = path.join('.ccw', 'personal')
  }

  // Create category-based filename
@@ -333,7 +333,7 @@ Use 'ccw spec load --category ${category}' to load specs by category

### Project Specs (dimension: specs)
```
.workflow/specs/
.ccw/specs/
├── coding-conventions.md ← conventions, learnings
├── architecture-constraints.md ← constraints
└── quality-rules.md ← quality rules
@@ -341,14 +341,14 @@ Use 'ccw spec load --category ${category}' to load specs by category

### Personal Specs (dimension: personal)
```
# Global (~/.ccw/specs/personal/)
~/.ccw/specs/personal/
# Global (~/.ccw/personal/)
~/.ccw/personal/
├── conventions.md ← personal conventions (all projects)
├── constraints.md ← personal constraints (all projects)
└── learnings.md ← personal learnings (all projects)

# Project-local (.ccw/specs/personal/)
.ccw/specs/personal/
# Project-local (.ccw/personal/)
.ccw/personal/
├── conventions.md ← personal conventions (this project only)
├── constraints.md ← personal constraints (this project only)
└── learnings.md ← personal learnings (this project only)

@@ -11,7 +11,7 @@ examples:
# Workflow Init Command (/workflow:init)

## Overview
Initialize `.workflow/project-tech.json` and `.workflow/specs/*.md` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
Initialize `.workflow/project-tech.json` and `.ccw/specs/*.md` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.

**Dual File System**:
- `project-tech.json`: Auto-generated technical analysis (stack, architecture, components)
@@ -58,7 +58,7 @@ Analysis Flow:

Output:
├─ .workflow/project-tech.json (+ .backup if regenerate)
└─ .workflow/specs/*.md (scaffold or configured, unless --skip-specs)
└─ .ccw/specs/*.md (scaffold or configured, unless --skip-specs)
```

## Implementation
@@ -75,14 +75,14 @@ const skipSpecs = $ARGUMENTS.includes('--skip-specs')

```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
bash(test -f .ccw/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
```

**If BOTH_EXIST and no --regenerate**: Exit early
```
Project already initialized:
- Tech analysis: .workflow/project-tech.json
- Guidelines: .workflow/specs/*.md
- Guidelines: .ccw/specs/*.md

Use /workflow:init --regenerate to rebuild tech analysis
Use /workflow:session:solidify to add guidelines
@@ -171,7 +171,7 @@ Project root: ${projectRoot}
// Skip spec initialization if --skip-specs flag is provided
if (!skipSpecs) {
  // Initialize spec system if not already initialized
  const specsCheck = Bash('test -f .workflow/specs/coding-conventions.md && echo EXISTS || echo NOT_FOUND')
  const specsCheck = Bash('test -f .ccw/specs/coding-conventions.md && echo EXISTS || echo NOT_FOUND')
  if (specsCheck.includes('NOT_FOUND')) {
    console.log('Initializing spec system...')
    Bash('ccw spec init')
@@ -186,7 +186,7 @@ if (!skipSpecs) {

```javascript
const projectTech = JSON.parse(Read('.workflow/project-tech.json'));
const specsInitialized = !skipSpecs && file_exists('.workflow/specs/coding-conventions.md');
const specsInitialized = !skipSpecs && file_exists('.ccw/specs/coding-conventions.md');

console.log(`
Project initialized successfully
@@ -206,7 +206,7 @@ Components: ${projectTech.overview.key_components.length} core modules
---
Files created:
- Tech analysis: .workflow/project-tech.json
${!skipSpecs ? `- Specs: .workflow/specs/ ${specsInitialized ? '(initialized)' : ''}` : '- Specs: (skipped via --skip-specs)'}
${!skipSpecs ? `- Specs: .ccw/specs/ ${specsInitialized ? '(initialized)' : ''}` : '- Specs: (skipped via --skip-specs)'}
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
`);
```
@@ -222,7 +222,7 @@ if (skipSpecs) {
Next steps:
- Use /workflow:init-specs to create individual specs
- Use /workflow:init-guidelines to configure specs interactively
- Use /workflow:plan to start planning
- Use /workflow-plan to start planning
`);
return;
}
@@ -260,7 +260,7 @@ Next steps:
- Use /workflow:init-specs to create individual specs
- Use /workflow:init-guidelines to configure specs interactively
- Use ccw spec load to import specs from external sources
- Use /workflow:plan to start planning
- Use /workflow-plan to start planning
`);
}
} else {
@@ -271,7 +271,7 @@ Next steps:
- Use /workflow:init-specs to create additional specs
- Use /workflow:init-guidelines --reset to reconfigure
- Use /workflow:session:solidify to add individual rules
- Use /workflow:plan to start planning
- Use /workflow-plan to start planning
`);
}
```

@@ -923,7 +923,7 @@ Single evolving state file — each phase writes its section:
- Already have a completed implementation session (WFS-*)
- Only need unit/component level tests

**Use `workflow-tdd` skill when:**
**Use `workflow-tdd-plan` skill when:**
- Building new features with test-first approach
- Red-Green-Refactor cycle


@@ -39,7 +39,7 @@ Closed-loop tech debt lifecycle: **Discover → Assess → Plan → Refactor →

**vs Existing Commands**:
- **workflow:lite-fix**: Single bug fix, no systematic debt analysis
- **workflow:plan + execute**: Generic implementation, no debt-aware prioritization or regression validation
- **workflow-plan + execute**: Generic implementation, no debt-aware prioritization or regression validation
- **This command**: Full debt lifecycle — discovery through multi-dimensional scan, prioritized execution with per-item regression validation

### Value Proposition
@@ -827,7 +827,7 @@ AskUserQuestion({
- Need regression-safe iterative refactoring with rollback
- Want documented reasoning for each refactoring decision

**Use `workflow-lite-plan` skill when:**
**Use `workflow-lite-planex` skill when:**
- Single specific bug or issue to fix
- No systematic debt analysis needed


@@ -534,10 +534,10 @@ ${selectedMode === 'progressive' ? `**Progressive Mode**:
| Scenario | Recommended Command |
|----------|-------------------|
| Strategic planning, need issue tracking | `/workflow:roadmap-with-file` |
| Quick task breakdown, immediate execution | `/workflow:lite-plan` |
| Quick task breakdown, immediate execution | `/workflow-lite-planex` |
| Collaborative multi-agent planning | `/workflow:collaborative-plan-with-file` |
| Full specification documents | `spec-generator` skill |
| Code implementation from existing plan | `/workflow:lite-execute` |
| Code implementation from existing plan | `/workflow-lite-planex` (Phase 1: plan → Phase 2: execute) |

---


@@ -57,5 +57,5 @@ Session WFS-user-auth resumed
- Status: active
- Paused at: 2025-09-15T14:30:00Z
- Resumed at: 2025-09-15T15:45:00Z
- Ready for: /workflow:execute
- Ready for: /workflow-execute
```
@@ -18,7 +18,7 @@ When `--yes` or `-y`: Auto-categorize and add guideline without confirmation.

## Overview

Crystallizes ephemeral session context (insights, decisions, constraints) into permanent project guidelines stored in `.workflow/specs/*.md`. This ensures valuable learnings persist across sessions and inform future planning.
Crystallizes ephemeral session context (insights, decisions, constraints) into permanent project guidelines stored in `.ccw/specs/*.md`. This ensures valuable learnings persist across sessions and inform future planning.

## Use Cases

@@ -112,8 +112,10 @@ ELSE (convention/constraint/learning):

### Step 1: Ensure Guidelines File Exists

**Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)**

```bash
bash(test -f .workflow/specs/coding-conventions.md && echo "EXISTS" || echo "NOT_FOUND")
bash(test -f .ccw/specs/coding-conventions.md && echo "EXISTS" || echo "NOT_FOUND")
```

**If NOT_FOUND**, initialize spec system:
@@ -187,9 +189,10 @@ function buildEntry(rule, type, category, sessionId) {

```javascript
// Map type+category to target spec file
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
const specFileMap = {
  convention: '.workflow/specs/coding-conventions.md',
  constraint: '.workflow/specs/architecture-constraints.md'
  convention: '.ccw/specs/coding-conventions.md',
  constraint: '.ccw/specs/architecture-constraints.md'
}

if (type === 'convention' || type === 'constraint') {
@@ -204,7 +207,8 @@ if (type === 'convention' || type === 'constraint') {
  }
} else if (type === 'learning') {
  // Learnings go to coding-conventions.md as a special section
  const targetFile = '.workflow/specs/coding-conventions.md'
  // Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
  const targetFile = '.ccw/specs/coding-conventions.md'
  const existing = Read(targetFile)
  const entry = buildEntry(rule, type, category, sessionId)
  const learningText = `- [learning/${category}] ${entry.insight} (${entry.date})`
@@ -228,7 +232,7 @@ Type: ${type}
Category: ${category}
Rule: "${rule}"

Location: .workflow/specs/*.md -> ${type}s.${category}
Location: .ccw/specs/*.md -> ${type}s.${category}

Total ${type}s in ${category}: ${count}
```
@@ -373,13 +377,9 @@ AskUserQuestion({
/workflow:session:solidify "Use async/await instead of callbacks" --type convention --category coding_style
```

Result in `specs/*.md`:
```json
{
  "conventions": {
    "coding_style": ["Use async/await instead of callbacks"]
  }
}
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [coding_style] Use async/await instead of callbacks
```

### Add an Architectural Constraint
@@ -387,13 +387,9 @@ Result in `specs/*.md`:
/workflow:session:solidify "No direct DB access from controllers" --type constraint --category architecture
```

Result:
```json
{
  "constraints": {
    "architecture": ["No direct DB access from controllers"]
  }
}
Result in `.ccw/specs/architecture-constraints.md`:
```markdown
- [architecture] No direct DB access from controllers
```

### Capture a Session Learning
@@ -401,18 +397,9 @@ Result:
/workflow:session:solidify "Cache invalidation requires event sourcing for consistency" --type learning
```

Result:
```json
{
  "learnings": [
    {
      "date": "2024-12-28",
      "session_id": "WFS-auth-feature",
      "insight": "Cache invalidation requires event sourcing for consistency",
      "category": "architecture"
    }
  ]
}
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [learning/architecture] Cache invalidation requires event sourcing for consistency (2024-12-28)
```

### Compress Recent Memories

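The three example outputs above share one bullet grammar: conventions and constraints render as `- [category] rule`, while learnings get a `learning/` prefix and a date suffix. A minimal sketch of that formatting (the `formatSpecEntry` helper is hypothetical, not part of the solidify command):

```javascript
// Hypothetical formatter for the bullet formats shown in the examples:
// conventions/constraints as "- [category] rule", learnings dated.
function formatSpecEntry(type, category, rule, date) {
  if (type === 'learning') {
    return `- [learning/${category}] ${rule} (${date})`;
  }
  return `- [${category}] ${rule}`;
}
```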
@@ -27,7 +27,7 @@ The `--type` parameter classifies sessions for CCW dashboard organization:
|------|-------------|-------------|
| `workflow` | Standard implementation (default) | `workflow-plan` skill |
| `review` | Code review sessions | `review-cycle` skill |
| `tdd` | TDD-based development | `workflow-tdd` skill |
| `tdd` | TDD-based development | `workflow-tdd-plan` skill |
| `test` | Test generation/fix sessions | `workflow-test-fix` skill |
| `docs` | Documentation sessions | `memory-manage` skill |

@@ -44,7 +44,7 @@ ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs
```bash
# Check if project state exists (both files required)
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/specs/*.md && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
bash(test -f .ccw/specs/*.md && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```

**If either NOT_FOUND**, delegate to `/workflow:init`:
@@ -60,7 +60,7 @@ Skill(skill="workflow:init");
- If BOTH_EXIST: `PROJECT_STATE: initialized`
- If NOT_FOUND: Calls `/workflow:init` → creates:
  - `.workflow/project-tech.json` with full technical analysis
  - `.workflow/specs/*.md` with empty scaffold
  - `.ccw/specs/*.md` with empty scaffold

**Note**: `/workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.


@@ -124,7 +124,7 @@ Tech [${detectCategory(summary)}]:
${techEntry.title}

Target files:
  .workflow/specs/*.md
  .ccw/specs/*.md
  .workflow/project-tech.json
`)

@@ -138,12 +138,13 @@ if (!autoYes) {

```javascript
// ── Update specs/*.md ──
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
if (guidelineUpdates.length > 0) {
  // Map guideline types to spec files
  const specFileMap = {
    convention: '.workflow/specs/coding-conventions.md',
    constraint: '.workflow/specs/architecture-constraints.md',
    learning: '.workflow/specs/coding-conventions.md' // learnings appended to conventions
    convention: '.ccw/specs/coding-conventions.md',
    constraint: '.ccw/specs/architecture-constraints.md',
    learning: '.ccw/specs/coding-conventions.md' // learnings appended to conventions
  }

  for (const g of guidelineUpdates) {

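To make the mapping above concrete, here is a minimal sketch (an editor's illustration, not part of the diff) of how one guideline update might be routed to its spec file; the `g.type`/`g.category` fields and the `- [category] text` line format follow the solidify examples earlier in this page:

```javascript
// Post-migration map: guideline type → spec file under .ccw/specs/
const specFileMap = {
  convention: ".ccw/specs/coding-conventions.md",
  constraint: ".ccw/specs/architecture-constraints.md",
  learning: ".ccw/specs/coding-conventions.md", // learnings appended to conventions
};

// Resolve the target spec file for a guideline update.
function targetSpecFile(g) {
  return specFileMap[g.type];
}

// Render one guideline update as the `- [category] text` line the spec files use.
function renderGuidelineLine(g) {
  return `- [${g.category}] ${g.text}`;
}
```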
@@ -1,6 +1,6 @@
---
name: design-sync
description: Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption
description: Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow-plan consumption
argument-hint: --session <session_id> [--selected-prototypes "<list>"]
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
@@ -351,10 +351,10 @@ Updated artifacts:
✓ {role_count} role analysis.md files - Design system references
✓ ui-designer/design-system-reference.md - Design system reference guide

Design system assets ready for /workflow:plan:
Design system assets ready for /workflow-plan:
- design-tokens.json | style-guide.md | {prototype_count} reference prototypes

Next: /workflow:plan [--agent] "<task description>"
Next: /workflow-plan [--agent] "<task description>"
The plan phase will automatically discover and utilize the design system.
```

@@ -394,7 +394,7 @@ Next: /workflow:plan [--agent] "<task description>"
@../../{design_id}/prototypes/{prototype}.html
```

## Integration with /workflow:plan
## Integration with /workflow-plan

After this update, `workflow-plan` skill will discover design assets through:


@@ -606,7 +606,7 @@ Total workflow time: ~{estimate_total_time()} minutes

{IF session_id:
2. Create implementation tasks:
   /workflow:plan --session {session_id}
   /workflow-plan --session {session_id}

3. Generate tests (if needed):
   /workflow:test-gen {session_id}
@@ -741,5 +741,5 @@ Design Quality:
- Design token driven
- {generated_count} assembled prototypes

Next: [/workflow:execute] OR [Open compare.html → /workflow:plan]
Next: [/workflow-execute] OR [Open compare.html → /workflow-plan]
```

@@ -477,8 +477,8 @@ ${recommendations.map(r => \`- ${r}\`).join('\\n')}
const projectTech = file_exists('.workflow/project-tech.json')
  ? JSON.parse(Read('.workflow/project-tech.json')) : null
// Read specs/*.md (if exists)
const projectGuidelines = file_exists('.workflow/specs/*.md')
  ? JSON.parse(Read('.workflow/specs/*.md')) : null
const projectGuidelines = file_exists('.ccw/specs/*.md')
  ? JSON.parse(Read('.ccw/specs/*.md')) : null
```

```javascript

@@ -43,19 +43,19 @@ function getExistingCommandSources() {
// These commands were migrated to skills but references were never updated
const COMMAND_TO_SKILL_MAP = {
  // workflow commands → skills
  '/workflow:plan': 'workflow-plan',
  '/workflow:execute': 'workflow-execute',
  '/workflow:lite-plan': 'workflow-lite-plan',
  '/workflow:lite-execute': 'workflow-lite-plan', // lite-execute is part of lite-plan skill
  '/workflow:lite-fix': 'workflow-lite-plan', // lite-fix is part of lite-plan skill
  '/workflow:multi-cli-plan': 'workflow-multi-cli-plan',
  '/workflow:plan-verify': 'workflow-plan', // plan-verify is a phase of workflow-plan
  '/workflow-plan': 'workflow-plan',
  '/workflow-execute': 'workflow-execute',
  '/workflow-lite-planex': 'workflow-lite-planex',
  '/workflow:lite-execute': 'workflow-lite-planex', // lite-execute is part of lite-plan skill
  '/workflow:lite-fix': 'workflow-lite-planex', // lite-fix is part of lite-plan skill
  '/workflow-multi-cli-plan': 'workflow-multi-cli-plan',
  '/workflow-plan-verify': 'workflow-plan', // plan-verify is a phase of workflow-plan
  '/workflow:replan': 'workflow-plan', // replan is a phase of workflow-plan
  '/workflow:tdd-plan': 'workflow-tdd',
  '/workflow:tdd-verify': 'workflow-tdd', // tdd-verify is a phase of workflow-tdd
  '/workflow:test-fix-gen': 'workflow-test-fix',
  '/workflow-tdd-plan': 'workflow-tdd-plan',
  '/workflow-tdd-verify': 'workflow-tdd-plan', // tdd-verify is a phase of workflow-tdd-plan
  '/workflow-test-fix': 'workflow-test-fix',
  '/workflow:test-gen': 'workflow-test-fix',
  '/workflow:test-cycle-execute': 'workflow-test-fix',
  '/workflow-test-fix': 'workflow-test-fix',
  '/workflow:review': 'review-cycle',
  '/workflow:review-session-cycle': 'review-cycle',
  '/workflow:review-module-cycle': 'review-cycle',
@@ -70,8 +70,8 @@ const COMMAND_TO_SKILL_MAP = {
  '/workflow:tools:context-gather': 'workflow-plan',
  '/workflow:tools:conflict-resolution': 'workflow-plan',
  '/workflow:tools:task-generate-agent': 'workflow-plan',
  '/workflow:tools:task-generate-tdd': 'workflow-tdd',
  '/workflow:tools:tdd-coverage-analysis': 'workflow-tdd',
  '/workflow:tools:task-generate-tdd': 'workflow-tdd-plan',
  '/workflow:tools:tdd-coverage-analysis': 'workflow-tdd-plan',
  '/workflow:tools:test-concept-enhanced': 'workflow-test-fix',
  '/workflow:tools:test-context-gather': 'workflow-test-fix',
  '/workflow:tools:test-task-generate': 'workflow-test-fix',
@@ -87,7 +87,7 @@ const COMMAND_TO_SKILL_MAP = {
  // general commands
  '/ccw-debug': null, // deleted, no replacement
  '/ccw view': null, // deleted, no replacement
  '/workflow:lite-lite-lite': 'workflow-lite-plan',
  '/workflow:lite-lite-lite': 'workflow-lite-planex',
  // ui-design (these still exist as commands)
  '/workflow:ui-design:auto': '/workflow:ui-design:explore-auto',
  '/workflow:ui-design:update': '/workflow:ui-design:generate',
@@ -302,8 +302,8 @@ function fixBrokenReferences() {
  "skill='compact'": "skill='memory-capture'",
  'skill="workflow:brainstorm:role-analysis"': 'skill="brainstorm"',
  "skill='workflow:brainstorm:role-analysis'": "skill='brainstorm'",
  'skill="workflow:lite-execute"': 'skill="workflow-lite-plan"',
  "skill='workflow:lite-execute'": "skill='workflow-lite-plan'",
  'skill="workflow:lite-execute"': 'skill="workflow-lite-planex"',
  "skill='workflow:lite-execute'": "skill='workflow-lite-planex'",
};

for (const [oldCall, newCall] of Object.entries(skillCallFixes)) {
@@ -319,17 +319,17 @@ function fixBrokenReferences() {
// Pattern: `/command:name` references that point to non-existent commands
// These are documentation references - update to point to skill names
const proseRefFixes = {
  '`/workflow:plan`': '`workflow-plan` skill',
  '`/workflow:execute`': '`workflow-execute` skill',
  '`/workflow:lite-execute`': '`workflow-lite-plan` skill',
  '`/workflow:lite-fix`': '`workflow-lite-plan` skill',
  '`/workflow:plan-verify`': '`workflow-plan` skill (plan-verify phase)',
  '`/workflow-plan`': '`workflow-plan` skill',
  '`/workflow-execute`': '`workflow-execute` skill',
  '`/workflow:lite-execute`': '`workflow-lite-planex` skill',
  '`/workflow:lite-fix`': '`workflow-lite-planex` skill',
  '`/workflow-plan-verify`': '`workflow-plan` skill (plan-verify phase)',
  '`/workflow:replan`': '`workflow-plan` skill (replan phase)',
  '`/workflow:tdd-plan`': '`workflow-tdd` skill',
  '`/workflow:tdd-verify`': '`workflow-tdd` skill (tdd-verify phase)',
  '`/workflow:test-fix-gen`': '`workflow-test-fix` skill',
  '`/workflow-tdd-plan`': '`workflow-tdd-plan` skill',
  '`/workflow-tdd-verify`': '`workflow-tdd-plan` skill (tdd-verify phase)',
  '`/workflow-test-fix`': '`workflow-test-fix` skill',
  '`/workflow:test-gen`': '`workflow-test-fix` skill',
  '`/workflow:test-cycle-execute`': '`workflow-test-fix` skill',
  '`/workflow-test-fix`': '`workflow-test-fix` skill',
  '`/workflow:review`': '`review-cycle` skill',
  '`/workflow:review-session-cycle`': '`review-cycle` skill',
  '`/workflow:review-module-cycle`': '`review-cycle` skill',
@@ -346,9 +346,9 @@ function fixBrokenReferences() {
  '`/workflow:tools:task-generate`': '`workflow-plan` skill (task-generate phase)',
  '`/workflow:ui-design:auto`': '`/workflow:ui-design:explore-auto`',
  '`/workflow:ui-design:update`': '`/workflow:ui-design:generate`',
  '`/workflow:multi-cli-plan`': '`workflow-multi-cli-plan` skill',
  '`/workflow:lite-plan`': '`workflow-lite-plan` skill',
  '`/cli:plan`': '`workflow-lite-plan` skill',
  '`/workflow-multi-cli-plan`': '`workflow-multi-cli-plan` skill',
  '`/workflow-lite-planex`': '`workflow-lite-planex` skill',
  '`/cli:plan`': '`workflow-lite-planex` skill',
  '`/test-cycle-execute`': '`workflow-test-fix` skill',
};


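The body of `fixBrokenReferences()` is outside the hunks shown here, so as a hedged sketch only: maps like `COMMAND_TO_SKILL_MAP` and `proseRefFixes` are typically applied by replacing every occurrence of each old reference with its mapped form across a file's text, for example:

```javascript
// Replace every occurrence of each old reference with its mapped replacement.
// split/join gives a literal (non-regex) global replace, so keys containing
// backticks or slashes need no escaping.
function applyRefFixes(content, fixes) {
  for (const [oldRef, newRef] of Object.entries(fixes)) {
    content = content.split(oldRef).join(newRef);
  }
  return content;
}
```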
@@ -123,7 +123,7 @@
| **Command invocation syntax** | Convert to relative paths of phase files | `/workflow:session:start` → `phases/01-session-discovery.md` |
| **Command path references** | Convert to paths inside the skill directory | `commands/workflow/tools/` → `phases/` |
| **Cross-command references** | Convert to inter-phase file references | `workflow-plan` skill (context-gather phase) → `phases/02-context-gathering.md` |
| **Command argument notes** | Remove, or move into Phase Prerequisites | `usage: /workflow:plan [session-id]` → documented in Phase Prerequisites |
| **Command argument notes** | Remove, or move into Phase Prerequisites | `usage: /workflow-plan [session-id]` → documented in Phase Prerequisites |

**Conversion example**:


@@ -373,7 +373,7 @@ Initial → Phase 1 Mode Routing (completed)
- `/workflow:session:start` - Start a new workflow session (optional, brainstorm creates its own)

**Follow-ups** (after brainstorm completes):
- `/workflow:plan --session {sessionId}` - Generate implementation plan
- `/workflow-plan --session {sessionId}` - Generate implementation plan
- `/workflow:brainstorm:synthesis --session {sessionId}` - Run synthesis standalone (if skipped)

## Reference Information

@@ -469,7 +469,7 @@ ${selected_roles.length > 1 ? `
- Run synthesis: /brainstorm --session ${session_id} (auto mode)
` : `
- Clarify insights: /brainstorm --session ${session_id} (auto mode)
- Generate plan: /workflow:plan --session ${session_id}
- Generate plan: /workflow-plan --session ${session_id}
`}
```


@@ -744,7 +744,7 @@ Write(context_pkg_path, JSON.stringify(context_pkg))
**Changelog**: .brainstorming/synthesis-changelog.md

### Next Steps
PROCEED: `/workflow:plan --session {session-id}`
PROCEED: `/workflow-plan --session {session-id}`
```

## Output

@@ -1,18 +1,18 @@
---
name: ccw-help
description: CCW command help system. Search, browse, recommend commands. Triggers "ccw-help", "ccw-issue".
description: CCW command help system. Search, browse, recommend commands, skills, teams. Triggers "ccw-help", "ccw-issue".
allowed-tools: Read, Grep, Glob, AskUserQuestion
version: 7.0.0
version: 8.0.0
---

# CCW-Help Skill

CCW command help system, providing command search, recommendation, and documentation viewing.
CCW command help system, providing command search, recommendation, documentation viewing, and Skill/Team browsing.

## Trigger Conditions

- Keywords: "ccw-help", "ccw-issue", "帮助", "命令", "怎么用", "ccw 怎么用", "工作流"
- Scenarios: asking how a command is used, searching for commands, requesting next-step suggestions, asking which workflow fits a task
- Keywords: "ccw-help", "ccw-issue", "帮助", "命令", "怎么用", "ccw 怎么用", "工作流", "skill", "team"
- Scenarios: asking how a command is used, searching for commands, requesting next-step suggestions, asking which workflow fits a task, browsing the Skill/Team catalog

## Operation Modes

@@ -61,7 +61,7 @@ CCW 命令帮助系统,提供命令搜索、推荐、文档查看功能。
4. Get user confirmation
5. Execute chain with TODO tracking

**Supported Workflows**:
**Supported Workflows** (see [ccw.md](../../commands/ccw.md)):
- **Level 1** (Lite-Lite-Lite): Ultra-simple quick tasks
- **Level 2** (Rapid/Hotfix): Bug fixes, simple features, documentation
- **Level 2.5** (Rapid-to-Issue): Bridge from quick planning to issue workflow
@@ -71,12 +71,17 @@ CCW 命令帮助系统,提供命令搜索、推荐、文档查看功能。
  - Test-fix workflows (debug failing tests)
  - Review workflows (code review and fixes)
  - UI design workflows
  - Multi-CLI collaborative workflows
  - Cycle workflows (integration-test, refactor)
- **Level 4** (Full): Exploratory tasks with brainstorming
- **With-File Workflows**: Documented exploration with multi-CLI collaboration
  - `brainstorm-with-file`: Multi-perspective ideation
  - `debug-with-file`: Hypothesis-driven debugging
  - `analyze-with-file`: Collaborative analysis
  - `brainstorm-with-file`: Multi-perspective ideation → workflow-plan → workflow-execute
  - `debug-with-file`: Hypothesis-driven debugging (standalone)
  - `analyze-with-file`: Collaborative analysis → workflow-lite-planex
  - `collaborative-plan-with-file`: Multi-agent planning → unified-execute
  - `roadmap-with-file`: Strategic requirement roadmap → team-planex
- **Issue Workflow**: Batch issue discovery, planning, queueing, execution
- **Team Workflow**: team-planex wave pipeline for parallel execution

### Mode 6: Issue Reporting

@@ -86,6 +91,16 @@ CCW 命令帮助系统,提供命令搜索、推荐、文档查看功能。
1. Use AskUserQuestion to gather context
2. Generate structured issue template

### Mode 7: Skill & Team Browsing

**Triggers**: "skill", "team", "技能", "团队", "有哪些 skill", "team 怎么用"

**Process**:
1. Query `command.json` skills array
2. Filter by category: workflow / team / review / meta / utility / standalone
3. Present categorized skill list with descriptions
4. For team skills, explain team architecture and usage patterns

## Data Source

Single source of truth: **[command.json](command.json)**
@@ -94,8 +109,9 @@ Single source of truth: **[command.json](command.json)**
|-------|---------|
| `commands[]` | Flat command list with metadata |
| `commands[].flow` | Relationships (next_steps, prerequisites) |
| `commands[].essential` | Essential flag for onboarding |
| `agents[]` | Agent directory |
| `skills[]` | Skill directory with categories |
| `skills[].is_team` | Whether skill uses team architecture |
| `essential_commands[]` | Core commands list |

### Source Path Format
@@ -109,6 +125,77 @@ Single source of truth: **[command.json](command.json)**
}
```

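The first two steps of Mode 7 can be sketched as follows (an editor's illustration, not part of the diff; the field names mirror the `skills[]` and `skills[].is_team` entries in the data-source table above, and `skillsByCategory` is a hypothetical helper):

```javascript
// Filter a command.json `skills` array down to one category,
// as Mode 7's "query then filter" steps describe.
function skillsByCategory(commandJson, category) {
  return (commandJson.skills || []).filter((s) => s.category === category);
}
```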
## Skill Catalog

### Workflow Skills (core workflows)

| Skill | Internal pipeline | Trigger words |
|-------|-----------|--------|
| `workflow-lite-planex` | explore → plan → confirm → execute | "lite-plan", quick tasks |
| `workflow-plan` | session → context → convention → gen → verify | "workflow-plan", formal planning |
| `workflow-execute` | session discovery → task processing → commit | "workflow-execute", execution |
| `workflow-tdd-plan` | 6-phase TDD plan → verify | "tdd-plan", TDD development |
| `workflow-test-fix` | session → context → analysis → gen → cycle | "test-fix", test fixing |
| `workflow-multi-cli-plan` | ACE context → CLI discussion → plan → execute | "multi-cli", multi-CLI collaboration |
| `workflow-skill-designer` | Meta-skill for designing workflow skills | "skill-designer" |

### Team Skills (team collaboration)

Team skills use the `team-worker` agent architecture: a coordinator orchestrates the pipeline, and the workers are `team-worker` agents loaded with role-specs.

| Skill | Purpose | Architecture |
|-------|------|------|
| `team-planex` | Planning + execution wave pipeline | planner + executor, suited to well-defined issues/roadmaps |
| `team-lifecycle-v5` | Full lifecycle (spec/impl/test) | team-worker agents with role-specs |
| `team-lifecycle-v4` | Optimized lifecycle | Optimized pipeline |
| `team-lifecycle-v3` | Basic lifecycle | All roles invoke unified skill |
| `team-coordinate-v2` | General dynamic team coordination | Generates role-specs dynamically at runtime |
| `team-coordinate` | General team coordination v1 | Dynamic role generation |
| `team-brainstorm` | Team brainstorming | Multi-perspective analysis |
| `team-frontend` | Frontend development team | Frontend specialists |
| `team-issue` | Issue resolution team | Issue resolution pipeline |
| `team-iterdev` | Iterative development team | Iterative development |
| `team-review` | Code scanning / vulnerability review | Scanning + vulnerability review |
| `team-roadmap-dev` | Roadmap-driven development | Requirement → implementation |
| `team-tech-debt` | Tech debt cleanup | Debt identification + cleanup |
| `team-testing` | Testing team | Test planning + execution |
| `team-quality-assurance` | QA team | Quality assurance pipeline |
| `team-uidesign` | UI design team | Design system + prototyping |
| `team-ultra-analyze` | Deep collaborative analysis | Deep collaborative analysis |
| `team-executor` | Lightweight execution (resume sessions) | Resume existing sessions |
| `team-executor-v2` | Lightweight execution v2 | Improved session resumption |

### Standalone Skills

| Skill | Purpose |
|-------|------|
| `brainstorm` | Dual-mode brainstorming (auto pipeline / single role) |
| `review-code` | Multi-dimensional code review |
| `review-cycle` | Review + auto-fix orchestration |
| `spec-generator` | 6-stage spec document chain (product-brief → PRD → architecture → epics) |
| `issue-manage` | Interactive issue management (CRUD) |
| `memory-capture` | Unified memory capture (session compact / quick tip) |
| `memory-manage` | Unified memory management (CLAUDE.md + documentation) |
| `command-generator` | Command file generator |
| `skill-generator` | Meta-skill: create new skills |
| `skill-tuning` | Skill diagnostics and tuning |

## Workflow Mapping (CCW Auto-Route)

CCW automatically selects the workflow level from the task intent (see [ccw.md](../../commands/ccw.md)):

| Example input | Type | Level | Pipeline |
|---------|------|------|--------|
| "Add API endpoint" | feature (low) | 2 | workflow-lite-planex → workflow-test-fix |
| "Fix login timeout" | bugfix | 2 | workflow-lite-planex → workflow-test-fix |
| "协作分析: 认证架构" | analyze-file | 3 | analyze-with-file → workflow-lite-planex |
| "重构 auth 模块" | refactor | 3 | workflow:refactor-cycle |
| "multi-cli: API设计" | multi-cli | 3 | workflow-multi-cli-plan → workflow-test-fix |
| "头脑风暴: 通知系统" | brainstorm | 4 | brainstorm-with-file → workflow-plan → workflow-execute |
| "roadmap: OAuth + 2FA" | roadmap | 4 | roadmap-with-file → team-planex |
| "specification: 用户系统" | spec-driven | 4 | spec-generator → workflow-plan → workflow-execute |
| "team planex: 用户系统" | team-planex | Team | team-planex |

## Slash Commands

```bash
@@ -116,6 +203,8 @@ Single source of truth: **[command.json](command.json)**
/ccw-help                    # General help entry
/ccw-help search <keyword>   # Search commands
/ccw-help next <command>     # Get next step suggestions
/ccw-help skills             # Browse skill catalog
/ccw-help teams              # Browse team skills
/ccw-issue                   # Issue reporting
```

@@ -128,6 +217,9 @@ Single source of truth: **[command.json](command.json)**
/ccw "头脑风暴: 用户通知系统"       # → detect brainstorm, use brainstorm-with-file
/ccw "深度调试: 系统随机崩溃"       # → detect debug-file, use debug-with-file
/ccw "协作分析: 认证架构设计"       # → detect analyze-file, use analyze-with-file
/ccw "roadmap: OAuth + 2FA 路线图"  # → roadmap-with-file → team-planex
/ccw "集成测试: 支付流程"           # → integration-test-cycle
/ccw "重构 auth 模块"               # → refactor-cycle
```

## Maintenance
@@ -135,6 +227,7 @@ Single source of truth: **[command.json](command.json)**
### Update Mechanism

CCW-Help skill supports manual updates through a user confirmation dialog.
The script scans `commands/`, `agents/`, and `skills/` directories to regenerate all indexes.

#### How to Update

@@ -153,18 +246,33 @@ cd D:/Claude_dms3/.claude/skills/ccw-help
python scripts/auto-update.py
```

This runs `analyze_commands.py` to scan commands/ and agents/ directories and regenerate `command.json`.
This runs `analyze_commands.py` to scan commands/, agents/, and skills/ directories and regenerate `command.json` + all index files.

#### Update Scripts

- **`auto-update.py`**: Simple wrapper that runs analyze_commands.py
- **`analyze_commands.py`**: Scans directories and generates command index
- **`analyze_commands.py`**: Scans directories and generates command/agent/skill indexes

#### Generated Index Files

| File | Content |
|------|---------|
| `command.json` | Master index: commands + agents + skills |
| `index/all-commands.json` | Flat command list |
| `index/all-agents.json` | Agent directory |
| `index/all-skills.json` | Skill directory with metadata |
| `index/skills-by-category.json` | Skills grouped by category |
| `index/by-category.json` | Commands by category |
| `index/by-use-case.json` | Commands by usage scenario |
| `index/essential-commands.json` | Core commands for onboarding |
| `index/command-relationships.json` | Command flow relationships |

## Statistics

- **Commands**: 50+
- **Agents**: 16
- **Workflows**: 6 main levels + 3 with-file variants
- **Agents**: 22
- **Skills**: 36+ (7 workflow, 19 team, 10+ standalone/utility)
- **Workflows**: 6 main levels + 5 with-file variants + 2 cycle variants
- **Essential**: 10 core commands

## Core Principle

File diff suppressed because it is too large
@@ -29,6 +29,11 @@
    "description": "|",
    "source": "../../../agents/cli-planning-agent.md"
  },
  {
    "name": "cli-roadmap-plan-agent",
    "description": "|",
    "source": "../../../agents/cli-roadmap-plan-agent.md"
  },
  {
    "name": "code-developer",
    "description": "|",
@@ -74,6 +79,16 @@
    "description": "|",
    "source": "../../../agents/tdd-developer.md"
  },
  {
    "name": "team-worker",
    "description": "|",
    "source": "../../../agents/team-worker.md"
  },
  {
    "name": "test-action-planning-agent",
    "description": "|",
    "source": "../../../agents/test-action-planning-agent.md"
  },
  {
    "name": "test-context-search-agent",
    "description": "|",

@@ -43,6 +43,72 @@
    "difficulty": "Intermediate",
    "source": "../../../commands/cli/codex-review.md"
  },
  {
    "name": "flow-create",
    "command": "/flow-create",
    "description": "",
    "arguments": "",
    "category": "general",
    "subcategory": null,
    "usage_scenario": "implementation",
    "difficulty": "Intermediate",
    "source": "../../../commands/flow-create.md"
  },
  {
    "name": "add",
    "command": "/idaw:add",
    "description": "Add IDAW tasks - manual creation or import from ccw issue",
    "arguments": "[-y|--yes] [--from-issue <id>[,<id>,...]] \\\"description\\\" [--type <task_type>] [--priority <1-5>]",
    "category": "idaw",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/idaw/add.md"
  },
  {
    "name": "resume",
    "command": "/idaw:resume",
    "description": "Resume interrupted IDAW session from last checkpoint",
    "arguments": "[-y|--yes] [session-id]",
    "category": "idaw",
    "subcategory": null,
    "usage_scenario": "session-management",
    "difficulty": "Intermediate",
    "source": "../../../commands/idaw/resume.md"
  },
  {
    "name": "run-coordinate",
    "command": "/idaw:run-coordinate",
    "description": "IDAW coordinator - execute task skill chains via external CLI with hook callbacks and git checkpoints",
    "arguments": "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run] [--tool <tool>]",
    "category": "idaw",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/idaw/run-coordinate.md"
  },
  {
    "name": "run",
    "command": "/idaw:run",
    "description": "IDAW orchestrator - execute task skill chains serially with git checkpoints",
    "arguments": "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run]",
    "category": "idaw",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/idaw/run.md"
  },
  {
    "name": "status",
    "command": "/idaw:status",
    "description": "View IDAW task and session progress",
    "arguments": "[session-id]",
    "category": "idaw",
    "subcategory": null,
    "usage_scenario": "session-management",
    "difficulty": "Beginner",
    "source": "../../../commands/idaw/status.md"
  },
  {
    "name": "convert-to-plan",
    "command": "/issue:convert-to-plan",
@@ -131,6 +197,17 @@
    "difficulty": "Intermediate",
    "source": "../../../commands/issue/queue.md"
  },
  {
    "name": "prepare",
    "command": "/memory:prepare",
    "description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
    "arguments": "[--tool gemini|qwen] \\\"task context description\\",
    "category": "memory",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/memory/prepare.md"
  },
  {
    "name": "style-skill-memory",
    "command": "/memory:style-skill-memory",
@@ -175,6 +252,17 @@
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/clean.md"
  },
  {
    "name": "workflow:collaborative-plan-with-file",
    "command": "/workflow:collaborative-plan-with-file",
    "description": "Collaborative planning with Plan Note - Understanding agent creates shared plan-note.md template, parallel agents fill pre-allocated sections, conflict detection without merge. Outputs executable plan-note.md.",
    "arguments": "[-y|--yes] <task description> [--max-agents=5]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "planning",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/collaborative-plan-with-file.md"
  },
  {
    "name": "debug-with-file",
    "command": "/workflow:debug-with-file",
@@ -186,17 +274,72 @@
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/debug-with-file.md"
  },
  {
    "name": "init-guidelines",
    "command": "/workflow:init-guidelines",
    "description": "Interactive wizard to fill specs/*.md based on project analysis",
    "arguments": "[--reset]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/init-guidelines.md"
  },
  {
    "name": "init-specs",
    "command": "/workflow:init-specs",
    "description": "Interactive wizard to create individual specs or personal constraints with scope selection",
    "arguments": "[--scope <global|project>] [--dimension <specs|personal>] [--category <general|exploration|planning|execution>]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/init-specs.md"
  },
  {
    "name": "init",
    "command": "/workflow:init",
    "description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
    "arguments": "[--regenerate]",
    "arguments": "[--regenerate] [--skip-specs]",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/init.md"
  },
  {
    "name": "integration-test-cycle",
    "command": "/workflow:integration-test-cycle",
    "description": "Self-iterating integration test workflow with codebase exploration, test development, autonomous test-fix cycles, and reflection-driven strategy adjustment",
    "arguments": "[-y|--yes] [-c|--continue] [--max-iterations=N] \\\"module or feature description\\",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "testing",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/integration-test-cycle.md"
  },
  {
    "name": "refactor-cycle",
    "command": "/workflow:refactor-cycle",
    "description": "Tech debt discovery and self-iterating refactoring with multi-dimensional analysis, prioritized execution, regression validation, and reflection-driven adjustment",
    "arguments": "[-y|--yes] [-c|--continue] [--scope=module|project] \\\"module or refactoring goal\\",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/refactor-cycle.md"
  },
  {
    "name": "roadmap-with-file",
    "command": "/workflow:roadmap-with-file",
    "description": "Strategic requirement roadmap with iterative decomposition and issue creation. Outputs roadmap.md (human-readable, single source) + issues.jsonl (machine-executable). Handoff to team-planex.",
    "arguments": "[-y|--yes] [-c|--continue] [-m progressive|direct|auto] \\\"requirement description\\",
    "category": "workflow",
    "subcategory": null,
    "usage_scenario": "general",
    "difficulty": "Intermediate",
    "source": "../../../commands/workflow/roadmap-with-file.md"
  },
  {
    "name": "complete",
    "command": "/workflow:session:complete",
@@ -233,8 +376,8 @@
|
||||
{
|
||||
"name": "solidify",
|
||||
"command": "/workflow:session:solidify",
|
||||
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
|
||||
"arguments": "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
|
||||
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines, or compress recent memories",
|
||||
"arguments": "[-y|--yes] [--type <convention|constraint|learning|compress>] [--category <category>] [--limit <N>] \\\"rule or insight\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
@@ -252,6 +395,17 @@
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
},
|
||||
{
|
||||
"name": "sync",
|
||||
"command": "/workflow:session:sync",
|
||||
"description": "Quick-sync session work to specs/*.md and project-tech",
|
||||
"arguments": "[-y|--yes] [\\\"what was done\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/sync.md"
|
||||
},
|
||||
{
|
||||
"name": "animation-extract",
|
||||
"command": "/workflow:ui-design:animation-extract",
|
||||
@@ -277,7 +431,7 @@
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow-plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
@@ -366,11 +520,11 @@
|
||||
"name": "unified-execute-with-file",
|
||||
"command": "/workflow:unified-execute-with-file",
|
||||
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
|
||||
"arguments": "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\\\"execution context or task name\\\"]",
|
||||
"arguments": "[-y|--yes] [<path>[,<path2>] | -p|--plan <path>[,<path2>]] [--auto-commit] [--commit-prefix \\\"prefix\\\"] [\\\"execution context or task name\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/unified-execute-with-file.md"
|
||||
}
|
||||
]
|
||||
]
|
||||
362
.claude/skills/ccw-help/index/all-skills.json
Normal file
@@ -0,0 +1,362 @@
[
{
"name": "brainstorm",
"description": "Unified brainstorming skill with dual-mode operation - auto pipeline and single role analysis. Triggers on \"brainstorm\", \"头脑风暴\".",
"category": "standalone",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/brainstorm/SKILL.md"
},
{
"name": "command-generator",
"description": "Command file generator - 5 phase workflow for creating Claude Code command files with YAML frontmatter. Generates .md command files for project or user scope. Triggers on \"create command\", \"new command\", \"command generator\".",
"category": "meta",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/command-generator/SKILL.md"
},
{
"name": "issue-manage",
"description": "Interactive issue management with menu-driven CRUD operations. Use when managing issues, viewing issue status, editing issue fields, performing bulk operations, or viewing issue history. Triggers on \"manage issue\", \"list issues\", \"edit issue\", \"delete issue\", \"bulk update\", \"issue dashboard\", \"issue history\", \"completed issues\".",
"category": "utility",
"is_team": false,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/issue-manage/SKILL.md"
},
{
"name": "memory-capture",
"description": "Unified memory capture with routing - session compact or quick tips. Triggers on \"memory capture\", \"compact session\", \"save session\", \"quick tip\", \"memory tips\", \"记录\", \"压缩会话\".",
"category": "utility",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/memory-capture/SKILL.md"
},
{
"name": "memory-manage",
"description": "Unified memory management - CLAUDE.md updates and documentation generation with interactive routing. Triggers on \"memory manage\", \"update claude\", \"update memory\", \"generate docs\", \"更新记忆\", \"生成文档\".",
"category": "utility",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/memory-manage/SKILL.md"
},
{
"name": "review-code",
"description": "Multi-dimensional code review with structured reports. Analyzes correctness, readability, performance, security, testing, and architecture. Triggers on \"review code\", \"code review\", \"审查代码\", \"代码审查\".",
"category": "review",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/review-code/SKILL.md"
},
{
"name": "review-cycle",
"description": "Unified multi-dimensional code review with automated fix orchestration. Routes to session-based (git changes), module-based (path patterns), or fix mode. Triggers on \"workflow:review-cycle\", \"workflow:review-session-cycle\", \"workflow:review-module-cycle\", \"workflow:review-cycle-fix\".",
"category": "review",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/review-cycle/SKILL.md"
},
{
"name": "skill-generator",
"description": "Meta-skill for creating new Claude Code skills with configurable execution modes. Supports sequential (fixed order) and autonomous (stateless) phase patterns. Use for skill scaffolding, skill creation, or building new workflows. Triggers on \"create skill\", \"new skill\", \"skill generator\".",
"category": "meta",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/skill-generator/SKILL.md"
},
{
"name": "skill-tuning",
"description": "Universal skill diagnosis and optimization tool. Detect and fix skill execution issues including context explosion, long-tail forgetting, data flow disruption, and agent coordination failures. Supports Gemini CLI for deep analysis. Triggers on \"skill tuning\", \"tune skill\", \"skill diagnosis\", \"optimize skill\", \"skill debug\".",
"category": "meta",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/skill-tuning/SKILL.md"
},
{
"name": "spec-generator",
"description": "Specification generator - 6 phase document chain producing product brief, PRD, architecture, and epics. Triggers on \"generate spec\", \"create specification\", \"spec generator\", \"workflow:spec\".",
"category": "meta",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/spec-generator/SKILL.md"
},
{
"name": "team-brainstorm",
"description": "Unified team skill for brainstorming team. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team brainstorm\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-brainstorm/SKILL.md"
},
{
"name": "team-coordinate",
"description": "Universal team coordination skill with dynamic role generation. Only coordinator is built-in -- all worker roles are generated at runtime based on task analysis. Beat/cadence model for orchestration. Triggers on \"team coordinate\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-coordinate/SKILL.md"
},
{
"name": "team-coordinate-v2",
"description": "Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on \"team coordinate v2\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-coordinate-v2/SKILL.md"
},
{
"name": "team-executor",
"description": "Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-executor/SKILL.md"
},
{
"name": "team-executor-v2",
"description": "Lightweight session execution skill. Resumes existing team-coordinate-v2 sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor v2\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-executor-v2/SKILL.md"
},
{
"name": "team-frontend",
"description": "Unified team skill for frontend development team. All roles invoke this skill with --role arg. Built-in ui-ux-pro-max design intelligence. Triggers on \"team frontend\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-frontend/SKILL.md"
},
{
"name": "team-issue",
"description": "Unified team skill for issue resolution. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team issue\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-issue/SKILL.md"
},
{
"name": "team-iterdev",
"description": "Unified team skill for iterative development team. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team iterdev\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-iterdev/SKILL.md"
},
{
"name": "team-lifecycle-v3",
"description": "Unified team skill for full lifecycle - spec/impl/test. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team lifecycle\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-lifecycle-v3/SKILL.md"
},
{
"name": "team-lifecycle-v4",
"description": "Unified team skill for full lifecycle - spec/impl/test. Optimized cadence with inline discuss subagent and shared explore. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team lifecycle\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-lifecycle-v4/SKILL.md"
},
{
"name": "team-lifecycle-v5",
"description": "Unified team skill for full lifecycle - spec/impl/test. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents loaded with role-specific Phase 2-4 specs. Triggers on \"team lifecycle\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": true,
"version": "",
"source": "../../../skills/team-lifecycle-v5/SKILL.md"
},
{
"name": "team-planex",
"description": "Unified team skill for plan-and-execute pipeline. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents. Triggers on \"team planex\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": true,
"version": "",
"source": "../../../skills/team-planex/SKILL.md"
},
{
"name": "team-quality-assurance",
"description": "Unified team skill for quality assurance team. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team quality-assurance\", \"team qa\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-quality-assurance/SKILL.md"
},
{
"name": "team-review",
"description": "Unified team skill for code scanning, vulnerability review, optimization suggestions, and automated fix. 4-role team: coordinator, scanner, reviewer, fixer. Triggers on team-review.",
"category": "team",
"is_team": false,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-review/SKILL.md"
},
{
"name": "team-roadmap-dev",
"description": "Unified team skill for roadmap-driven development workflow. Coordinator discusses roadmap with user, then dispatches phased execution pipeline (plan -> execute -> verify). All roles invoke this skill with --role arg. Triggers on \"team roadmap-dev\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-roadmap-dev/SKILL.md"
},
{
"name": "team-tech-debt",
"description": "Unified team skill for tech debt identification and cleanup. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team tech-debt\", \"tech debt cleanup\", \"技术债务\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-tech-debt/SKILL.md"
},
{
"name": "team-testing",
"description": "Unified team skill for testing team. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team testing\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-testing/SKILL.md"
},
{
"name": "team-uidesign",
"description": "Unified team skill for UI design team. All roles invoke this skill with --role arg for role-specific execution. CP-9 Dual-Track design+implementation.",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-uidesign/SKILL.md"
},
{
"name": "team-ultra-analyze",
"description": "Unified team skill for deep collaborative analysis. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team ultra-analyze\", \"team analyze\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-ultra-analyze/SKILL.md"
},
{
"name": "workflow-execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking. Triggers on \"workflow-execute\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-execute/SKILL.md"
},
{
"name": "workflow-lite-planex",
"description": "Lightweight planning and execution skill (Phase 1: plan, Phase 2: execute). Triggers on \"workflow-lite-planex\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-lite-planex/SKILL.md"
},
{
"name": "workflow-multi-cli-plan",
"description": "Multi-CLI collaborative planning and execution skill with integrated execution phase. Triggers on \"workflow-multi-cli-plan\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-multi-cli-plan/SKILL.md"
},
{
"name": "workflow-plan",
"description": "Unified planning skill - 4-phase planning workflow, plan verification, and interactive replanning. Triggers on \"workflow-plan\", \"workflow-plan-verify\", \"workflow:replan\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-plan/SKILL.md"
},
{
"name": "workflow-skill-designer",
"description": "Meta-skill for designing orchestrator+phases structured workflow skills. Creates SKILL.md coordinator with progressive phase loading, TodoWrite patterns, and data flow. Triggers on \"design workflow skill\", \"create workflow skill\", \"workflow skill designer\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-skill-designer/SKILL.md"
},
{
"name": "workflow-tdd-plan-plan",
"description": "Unified TDD workflow skill combining 6-phase TDD planning with Red-Green-Refactor task chain generation, and 4-phase TDD verification with compliance reporting. Triggers on \"workflow-tdd-plan\", \"workflow-tdd-verify\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-tdd-plan/SKILL.md"
},
{
"name": "workflow-test-fix",
"description": "Unified test-fix pipeline combining test generation (session, context, analysis, task gen) with iterative test-cycle execution (adaptive strategy, progressive testing, CLI fallback). Triggers on \"workflow-test-fix\", \"workflow-test-fix\", \"test fix workflow\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-test-fix/SKILL.md"
}
]
@@ -22,6 +22,17 @@
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw.md"
},
{
"name": "flow-create",
"command": "/flow-create",
"description": "",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/flow-create.md"
}
]
},
@@ -51,6 +62,65 @@
}
]
},
"idaw": {
"_root": [
{
"name": "add",
"command": "/idaw:add",
"description": "Add IDAW tasks - manual creation or import from ccw issue",
"arguments": "[-y|--yes] [--from-issue <id>[,<id>,...]] \\\"description\\\" [--type <task_type>] [--priority <1-5>]",
"category": "idaw",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/idaw/add.md"
},
{
"name": "resume",
"command": "/idaw:resume",
"description": "Resume interrupted IDAW session from last checkpoint",
"arguments": "[-y|--yes] [session-id]",
"category": "idaw",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/idaw/resume.md"
},
{
"name": "run-coordinate",
"command": "/idaw:run-coordinate",
"description": "IDAW coordinator - execute task skill chains via external CLI with hook callbacks and git checkpoints",
"arguments": "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run] [--tool <tool>]",
"category": "idaw",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/idaw/run-coordinate.md"
},
{
"name": "run",
"command": "/idaw:run",
"description": "IDAW orchestrator - execute task skill chains serially with git checkpoints",
"arguments": "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run]",
"category": "idaw",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/idaw/run.md"
},
{
"name": "status",
"command": "/idaw:status",
"description": "View IDAW task and session progress",
"arguments": "[session-id]",
"category": "idaw",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Beginner",
"source": "../../../commands/idaw/status.md"
}
]
},
"issue": {
"_root": [
{
@@ -145,6 +215,17 @@
},
"memory": {
"_root": [
{
"name": "prepare",
"command": "/memory:prepare",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/prepare.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
@@ -193,6 +274,17 @@
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "workflow:collaborative-plan-with-file",
"command": "/workflow:collaborative-plan-with-file",
"description": "Collaborative planning with Plan Note - Understanding agent creates shared plan-note.md template, parallel agents fill pre-allocated sections, conflict detection without merge. Outputs executable plan-note.md.",
"arguments": "[-y|--yes] <task description> [--max-agents=5]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/collaborative-plan-with-file.md"
},
{
"name": "debug-with-file",
"command": "/workflow:debug-with-file",
@@ -204,22 +296,77 @@
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug-with-file.md"
},
{
"name": "init-guidelines",
"command": "/workflow:init-guidelines",
"description": "Interactive wizard to fill specs/*.md based on project analysis",
"arguments": "[--reset]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init-guidelines.md"
},
{
"name": "init-specs",
"command": "/workflow:init-specs",
"description": "Interactive wizard to create individual specs or personal constraints with scope selection",
"arguments": "[--scope <global|project>] [--dimension <specs|personal>] [--category <general|exploration|planning|execution>]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init-specs.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"arguments": "[--regenerate] [--skip-specs]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "integration-test-cycle",
"command": "/workflow:integration-test-cycle",
"description": "Self-iterating integration test workflow with codebase exploration, test development, autonomous test-fix cycles, and reflection-driven strategy adjustment",
"arguments": "[-y|--yes] [-c|--continue] [--max-iterations=N] \\\"module or feature description\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/integration-test-cycle.md"
},
{
"name": "refactor-cycle",
"command": "/workflow:refactor-cycle",
"description": "Tech debt discovery and self-iterating refactoring with multi-dimensional analysis, prioritized execution, regression validation, and reflection-driven adjustment",
"arguments": "[-y|--yes] [-c|--continue] [--scope=module|project] \\\"module or refactoring goal\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/refactor-cycle.md"
},
{
"name": "roadmap-with-file",
"command": "/workflow:roadmap-with-file",
"description": "Strategic requirement roadmap with iterative decomposition and issue creation. Outputs roadmap.md (human-readable, single source) + issues.jsonl (machine-executable). Handoff to team-planex.",
"arguments": "[-y|--yes] [-c|--continue] [-m progressive|direct|auto] \\\"requirement description\\",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/roadmap-with-file.md"
},
{
"name": "unified-execute-with-file",
"command": "/workflow:unified-execute-with-file",
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
"arguments": "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\\\"execution context or task name\\\"]",
"arguments": "[-y|--yes] [<path>[,<path2>] | -p|--plan <path>[,<path2>]] [--auto-commit] [--commit-prefix \\\"prefix\\\"] [\\\"execution context or task name\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
@@ -264,8 +411,8 @@
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines, or compress recent memories",
"arguments": "[-y|--yes] [--type <convention|constraint|learning|compress>] [--category <category>] [--limit <N>] \\\"rule or insight\\",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
@@ -282,6 +429,17 @@
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "sync",
"command": "/workflow:session:sync",
"description": "Quick-sync session work to specs/*.md and project-tech",
"arguments": "[-y|--yes] [\\\"what was done\\\"]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/sync.md"
}
],
"ui-design": [
@@ -310,7 +468,7 @@
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow-plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
@@ -397,4 +555,4 @@
}
]
}
}
}
@@ -33,6 +33,39 @@
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "add",
"command": "/idaw:add",
"description": "Add IDAW tasks - manual creation or import from ccw issue",
"arguments": "[-y|--yes] [--from-issue <id>[,<id>,...]] \\\"description\\\" [--type <task_type>] [--priority <1-5>]",
"category": "idaw",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/idaw/add.md"
},
{
"name": "run-coordinate",
"command": "/idaw:run-coordinate",
"description": "IDAW coordinator - execute task skill chains via external CLI with hook callbacks and git checkpoints",
"arguments": "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run] [--tool <tool>]",
"category": "idaw",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/idaw/run-coordinate.md"
},
{
"name": "run",
"command": "/idaw:run",
"description": "IDAW orchestrator - execute task skill chains serially with git checkpoints",
"arguments": "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run]",
"category": "idaw",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/idaw/run.md"
},
{
"name": "issue:discover-by-prompt",
"command": "/issue:discover-by-prompt",
@@ -77,6 +110,17 @@
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "prepare",
"command": "/memory:prepare",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/prepare.md"
},
{
"name": "clean",
"command": "/workflow:clean",
@@ -99,17 +143,61 @@
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug-with-file.md"
},
{
"name": "init-guidelines",
"command": "/workflow:init-guidelines",
"description": "Interactive wizard to fill specs/*.md based on project analysis",
"arguments": "[--reset]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init-guidelines.md"
},
{
"name": "init-specs",
|
||||
"command": "/workflow:init-specs",
|
||||
"description": "Interactive wizard to create individual specs or personal constraints with scope selection",
|
||||
"arguments": "[--scope <global|project>] [--dimension <specs|personal>] [--category <general|exploration|planning|execution>]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/init-specs.md"
|
||||
},
|
||||
{
|
||||
"name": "init",
|
||||
"command": "/workflow:init",
|
||||
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
|
||||
"arguments": "[--regenerate]",
|
||||
"arguments": "[--regenerate] [--skip-specs]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/init.md"
|
||||
},
|
||||
{
|
||||
"name": "refactor-cycle",
|
||||
"command": "/workflow:refactor-cycle",
|
||||
"description": "Tech debt discovery and self-iterating refactoring with multi-dimensional analysis, prioritized execution, regression validation, and reflection-driven adjustment",
|
||||
"arguments": "[-y|--yes] [-c|--continue] [--scope=module|project] \\\"module or refactoring goal\\",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/refactor-cycle.md"
|
||||
},
|
||||
{
|
||||
"name": "roadmap-with-file",
|
||||
"command": "/workflow:roadmap-with-file",
|
||||
"description": "Strategic requirement roadmap with iterative decomposition and issue creation. Outputs roadmap.md (human-readable, single source) + issues.jsonl (machine-executable). Handoff to team-planex.",
|
||||
"arguments": "[-y|--yes] [-c|--continue] [-m progressive|direct|auto] \\\"requirement description\\",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/roadmap-with-file.md"
|
||||
},
|
||||
{
|
||||
"name": "list",
|
||||
"command": "/workflow:session:list",
|
||||
@@ -124,8 +212,8 @@
|
||||
{
|
||||
"name": "solidify",
|
||||
"command": "/workflow:session:solidify",
|
||||
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
|
||||
"arguments": "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\",
|
||||
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines, or compress recent memories",
|
||||
"arguments": "[-y|--yes] [--type <convention|constraint|learning|compress>] [--category <category>] [--limit <N>] \\\"rule or insight\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
@@ -143,6 +231,17 @@
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
},
|
||||
{
|
||||
"name": "sync",
|
||||
"command": "/workflow:session:sync",
|
||||
"description": "Quick-sync session work to specs/*.md and project-tech",
|
||||
"arguments": "[-y|--yes] [\\\"what was done\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "general",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/sync.md"
|
||||
},
|
||||
{
|
||||
"name": "animation-extract",
|
||||
"command": "/workflow:ui-design:animation-extract",
|
||||
@@ -223,6 +322,98 @@
|
||||
"source": "../../../commands/workflow/analyze-with-file.md"
|
||||
}
|
||||
],
|
||||
"implementation": [
|
||||
{
|
||||
"name": "flow-create",
|
||||
"command": "/flow-create",
|
||||
"description": "",
|
||||
"arguments": "",
|
||||
"category": "general",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/flow-create.md"
|
||||
},
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/issue:execute",
|
||||
"description": "Execute queue with DAG-based parallel orchestration (one commit per solution)",
|
||||
"arguments": "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
|
||||
"arguments": "[--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/generate.md"
|
||||
},
|
||||
{
|
||||
"name": "unified-execute-with-file",
|
||||
"command": "/workflow:unified-execute-with-file",
|
||||
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
|
||||
"arguments": "[-y|--yes] [<path>[,<path2>] | -p|--plan <path>[,<path2>]] [--auto-commit] [--commit-prefix \\\"prefix\\\"] [\\\"execution context or task name\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/unified-execute-with-file.md"
|
||||
}
|
||||
],
|
||||
"session-management": [
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/idaw:resume",
|
||||
"description": "Resume interrupted IDAW session from last checkpoint",
|
||||
"arguments": "[-y|--yes] [session-id]",
|
||||
"category": "idaw",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/idaw/resume.md"
|
||||
},
|
||||
{
|
||||
"name": "status",
|
||||
"command": "/idaw:status",
|
||||
"description": "View IDAW task and session progress",
|
||||
"arguments": "[session-id]",
|
||||
"category": "idaw",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Beginner",
|
||||
"source": "../../../commands/idaw/status.md"
|
||||
},
|
||||
{
|
||||
"name": "complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
|
||||
"arguments": "[-y|--yes] [--detailed]",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
}
|
||||
],
|
||||
"planning": [
|
||||
{
|
||||
"name": "convert-to-plan",
|
||||
@@ -268,6 +459,17 @@
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/brainstorm-with-file.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:collaborative-plan-with-file",
|
||||
"command": "/workflow:collaborative-plan-with-file",
|
||||
"description": "Collaborative planning with Plan Note - Understanding agent creates shared plan-note.md template, parallel agents fill pre-allocated sections, conflict detection without merge. Outputs executable plan-note.md.",
|
||||
"arguments": "[-y|--yes] <task description> [--max-agents=5]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "planning",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/collaborative-plan-with-file.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow:ui-design:codify-style",
|
||||
"command": "/workflow:ui-design:codify-style",
|
||||
@@ -282,7 +484,7 @@
|
||||
{
|
||||
"name": "design-sync",
|
||||
"command": "/workflow:ui-design:design-sync",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow-plan consumption",
|
||||
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
@@ -313,41 +515,6 @@
|
||||
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
|
||||
}
|
||||
],
|
||||
"implementation": [
|
||||
{
|
||||
"name": "execute",
|
||||
"command": "/issue:execute",
|
||||
"description": "Execute queue with DAG-based parallel orchestration (one commit per solution)",
|
||||
"arguments": "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]",
|
||||
"category": "issue",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/issue/execute.md"
|
||||
},
|
||||
{
|
||||
"name": "generate",
|
||||
"command": "/workflow:ui-design:generate",
|
||||
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
|
||||
"arguments": "[--design-id <id>] [--session <id>]",
|
||||
"category": "workflow",
|
||||
"subcategory": "ui-design",
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/ui-design/generate.md"
|
||||
},
|
||||
{
|
||||
"name": "unified-execute-with-file",
|
||||
"command": "/workflow:unified-execute-with-file",
|
||||
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
|
||||
"arguments": "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\\\"execution context or task name\\\"]",
|
||||
"category": "workflow",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "implementation",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/unified-execute-with-file.md"
|
||||
}
|
||||
],
|
||||
"documentation": [
|
||||
{
|
||||
"name": "style-skill-memory",
|
||||
@@ -361,28 +528,17 @@
|
||||
"source": "../../../commands/memory/style-skill-memory.md"
|
||||
}
|
||||
],
|
||||
"session-management": [
|
||||
"testing": [
|
||||
{
|
||||
"name": "complete",
|
||||
"command": "/workflow:session:complete",
|
||||
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
|
||||
"arguments": "[-y|--yes] [--detailed]",
|
||||
"name": "integration-test-cycle",
|
||||
"command": "/workflow:integration-test-cycle",
|
||||
"description": "Self-iterating integration test workflow with codebase exploration, test development, autonomous test-fix cycles, and reflection-driven strategy adjustment",
|
||||
"arguments": "[-y|--yes] [-c|--continue] [--max-iterations=N] \\\"module or feature description\\",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"subcategory": null,
|
||||
"usage_scenario": "testing",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/complete.md"
|
||||
},
|
||||
{
|
||||
"name": "resume",
|
||||
"command": "/workflow:session:resume",
|
||||
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
|
||||
"arguments": "",
|
||||
"category": "workflow",
|
||||
"subcategory": "session",
|
||||
"usage_scenario": "session-management",
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/resume.md"
|
||||
"source": "../../../commands/workflow/integration-test-cycle.md"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
@@ -1,15 +1,146 @@
|
||||
{
|
||||
"workflow-plan": {
|
||||
"calls_internally": [
|
||||
"workflow:session:start",
|
||||
"workflow:tools:context-gather",
|
||||
"workflow:tools:conflict-resolution",
|
||||
"workflow:tools:task-generate-agent"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow-plan-verify",
|
||||
"workflow:status",
|
||||
"workflow-execute"
|
||||
],
|
||||
"alternatives": [
|
||||
"workflow-tdd-plan"
|
||||
],
|
||||
"prerequisites": []
|
||||
},
|
||||
"workflow-tdd-plan": {
|
||||
"calls_internally": [
|
||||
"workflow:session:start",
|
||||
"workflow:tools:context-gather",
|
||||
"workflow:tools:task-generate-tdd"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow-tdd-verify",
|
||||
"workflow:status",
|
||||
"workflow-execute"
|
||||
],
|
||||
"alternatives": [
|
||||
"workflow-plan"
|
||||
],
|
||||
"prerequisites": []
|
||||
},
|
||||
"workflow-execute": {
|
||||
"prerequisites": [
|
||||
"workflow-plan",
|
||||
"workflow-tdd-plan"
|
||||
],
|
||||
"related": [
|
||||
"workflow:status",
|
||||
"workflow:resume"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow:review",
|
||||
"workflow-tdd-verify"
|
||||
]
|
||||
},
|
||||
"workflow-plan-verify": {
|
||||
"prerequisites": [
|
||||
"workflow-plan"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow-execute"
|
||||
],
|
||||
"related": [
|
||||
"workflow:status"
|
||||
]
|
||||
},
|
||||
"workflow-tdd-verify": {
|
||||
"prerequisites": [
|
||||
"workflow-execute"
|
||||
],
|
||||
"related": [
|
||||
"workflow:tools:tdd-coverage-analysis"
|
||||
]
|
||||
},
|
||||
"workflow:session:start": {
|
||||
"next_steps": [],
|
||||
"next_steps": [
|
||||
"workflow-plan",
|
||||
"workflow-execute"
|
||||
],
|
||||
"related": [
|
||||
"workflow:session:list",
|
||||
"workflow:session:resume"
|
||||
]
|
||||
},
|
||||
"workflow:session:resume": {
|
||||
"alternatives": [],
|
||||
"alternatives": [
|
||||
"workflow:resume"
|
||||
],
|
||||
"related": [
|
||||
"workflow:session:list"
|
||||
"workflow:session:list",
|
||||
"workflow:status"
|
||||
]
|
||||
},
|
||||
"workflow-lite-planex": {
|
||||
"calls_internally": [],
|
||||
"next_steps": [
|
||||
"workflow:status"
|
||||
],
|
||||
"alternatives": [
|
||||
"workflow-plan"
|
||||
],
|
||||
"prerequisites": []
|
||||
},
|
||||
"workflow:lite-fix": {
|
||||
"next_steps": [
|
||||
"workflow:status"
|
||||
],
|
||||
"alternatives": [
|
||||
"workflow-lite-planex"
|
||||
],
|
||||
"related": [
|
||||
"workflow-test-fix"
|
||||
]
|
||||
},
|
||||
"workflow:review-session-cycle": {
|
||||
"prerequisites": [
|
||||
"workflow-execute"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow:review-fix"
|
||||
],
|
||||
"related": [
|
||||
"workflow:review-module-cycle"
|
||||
]
|
||||
},
|
||||
"workflow:review-fix": {
|
||||
"prerequisites": [
|
||||
"workflow:review-module-cycle",
|
||||
"workflow:review-session-cycle"
|
||||
],
|
||||
"related": [
|
||||
"workflow-test-fix"
|
||||
]
|
||||
},
|
||||
"memory:docs": {
|
||||
"calls_internally": [
|
||||
"workflow:session:start",
|
||||
"workflow:tools:context-gather"
|
||||
],
|
||||
"next_steps": [
|
||||
"workflow-execute"
|
||||
]
|
||||
},
|
||||
"memory:skill-memory": {
|
||||
"next_steps": [
|
||||
"workflow-plan",
|
||||
"cli:analyze"
|
||||
],
|
||||
"related": [
|
||||
"memory:load-skill-memory"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -10,4 +10,4 @@
|
||||
"difficulty": "Intermediate",
|
||||
"source": "../../../commands/workflow/session/start.md"
|
||||
}
|
||||
]
|
||||
]
|
||||
374 .claude/skills/ccw-help/index/skills-by-category.json Normal file
@@ -0,0 +1,374 @@
{
"standalone": [
{
"name": "brainstorm",
"description": "Unified brainstorming skill with dual-mode operation - auto pipeline and single role analysis. Triggers on \"brainstorm\", \"头脑风暴\".",
"category": "standalone",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/brainstorm/SKILL.md"
}
],
"meta": [
{
"name": "command-generator",
"description": "Command file generator - 5 phase workflow for creating Claude Code command files with YAML frontmatter. Generates .md command files for project or user scope. Triggers on \"create command\", \"new command\", \"command generator\".",
"category": "meta",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/command-generator/SKILL.md"
},
{
"name": "skill-generator",
"description": "Meta-skill for creating new Claude Code skills with configurable execution modes. Supports sequential (fixed order) and autonomous (stateless) phase patterns. Use for skill scaffolding, skill creation, or building new workflows. Triggers on \"create skill\", \"new skill\", \"skill generator\".",
"category": "meta",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/skill-generator/SKILL.md"
},
{
"name": "skill-tuning",
"description": "Universal skill diagnosis and optimization tool. Detect and fix skill execution issues including context explosion, long-tail forgetting, data flow disruption, and agent coordination failures. Supports Gemini CLI for deep analysis. Triggers on \"skill tuning\", \"tune skill\", \"skill diagnosis\", \"optimize skill\", \"skill debug\".",
"category": "meta",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/skill-tuning/SKILL.md"
},
{
"name": "spec-generator",
"description": "Specification generator - 6 phase document chain producing product brief, PRD, architecture, and epics. Triggers on \"generate spec\", \"create specification\", \"spec generator\", \"workflow:spec\".",
"category": "meta",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/spec-generator/SKILL.md"
}
],
"utility": [
{
"name": "issue-manage",
"description": "Interactive issue management with menu-driven CRUD operations. Use when managing issues, viewing issue status, editing issue fields, performing bulk operations, or viewing issue history. Triggers on \"manage issue\", \"list issues\", \"edit issue\", \"delete issue\", \"bulk update\", \"issue dashboard\", \"issue history\", \"completed issues\".",
"category": "utility",
"is_team": false,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/issue-manage/SKILL.md"
},
{
"name": "memory-capture",
"description": "Unified memory capture with routing - session compact or quick tips. Triggers on \"memory capture\", \"compact session\", \"save session\", \"quick tip\", \"memory tips\", \"记录\", \"压缩会话\".",
"category": "utility",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/memory-capture/SKILL.md"
},
{
"name": "memory-manage",
"description": "Unified memory management - CLAUDE.md updates and documentation generation with interactive routing. Triggers on \"memory manage\", \"update claude\", \"update memory\", \"generate docs\", \"更新记忆\", \"生成文档\".",
"category": "utility",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/memory-manage/SKILL.md"
}
],
"review": [
{
"name": "review-code",
"description": "Multi-dimensional code review with structured reports. Analyzes correctness, readability, performance, security, testing, and architecture. Triggers on \"review code\", \"code review\", \"审查代码\", \"代码审查\".",
"category": "review",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/review-code/SKILL.md"
},
{
"name": "review-cycle",
"description": "Unified multi-dimensional code review with automated fix orchestration. Routes to session-based (git changes), module-based (path patterns), or fix mode. Triggers on \"workflow:review-cycle\", \"workflow:review-session-cycle\", \"workflow:review-module-cycle\", \"workflow:review-cycle-fix\".",
"category": "review",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/review-cycle/SKILL.md"
}
],
"team": [
{
"name": "team-brainstorm",
"description": "Unified team skill for brainstorming team. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team brainstorm\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-brainstorm/SKILL.md"
},
{
"name": "team-coordinate",
"description": "Universal team coordination skill with dynamic role generation. Only coordinator is built-in -- all worker roles are generated at runtime based on task analysis. Beat/cadence model for orchestration. Triggers on \"team coordinate\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-coordinate/SKILL.md"
},
{
"name": "team-coordinate-v2",
"description": "Universal team coordination skill with dynamic role generation. Uses team-worker agent architecture with role-spec files. Only coordinator is built-in -- all worker roles are generated at runtime as role-specs and spawned via team-worker agent. Beat/cadence model for orchestration. Triggers on \"team coordinate v2\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-coordinate-v2/SKILL.md"
},
{
"name": "team-executor",
"description": "Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-executor/SKILL.md"
},
{
"name": "team-executor-v2",
"description": "Lightweight session execution skill. Resumes existing team-coordinate-v2 sessions for pure execution via team-worker agents. No analysis, no role generation -- only loads and executes. Session path required. Triggers on \"team executor v2\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-executor-v2/SKILL.md"
},
{
"name": "team-frontend",
"description": "Unified team skill for frontend development team. All roles invoke this skill with --role arg. Built-in ui-ux-pro-max design intelligence. Triggers on \"team frontend\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-frontend/SKILL.md"
},
{
"name": "team-issue",
"description": "Unified team skill for issue resolution. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team issue\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-issue/SKILL.md"
},
{
"name": "team-iterdev",
"description": "Unified team skill for iterative development team. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team iterdev\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-iterdev/SKILL.md"
},
{
"name": "team-lifecycle-v3",
"description": "Unified team skill for full lifecycle - spec/impl/test. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team lifecycle\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-lifecycle-v3/SKILL.md"
},
{
"name": "team-lifecycle-v4",
"description": "Unified team skill for full lifecycle - spec/impl/test. Optimized cadence with inline discuss subagent and shared explore. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team lifecycle\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-lifecycle-v4/SKILL.md"
},
{
"name": "team-lifecycle-v5",
"description": "Unified team skill for full lifecycle - spec/impl/test. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents loaded with role-specific Phase 2-4 specs. Triggers on \"team lifecycle\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": true,
"version": "",
"source": "../../../skills/team-lifecycle-v5/SKILL.md"
},
{
"name": "team-planex",
"description": "Unified team skill for plan-and-execute pipeline. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents. Triggers on \"team planex\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": true,
"version": "",
"source": "../../../skills/team-planex/SKILL.md"
},
{
"name": "team-quality-assurance",
"description": "Unified team skill for quality assurance team. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team quality-assurance\", \"team qa\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-quality-assurance/SKILL.md"
},
{
"name": "team-review",
"description": "Unified team skill for code scanning, vulnerability review, optimization suggestions, and automated fix. 4-role team: coordinator, scanner, reviewer, fixer. Triggers on team-review.",
"category": "team",
"is_team": false,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-review/SKILL.md"
},
{
"name": "team-roadmap-dev",
"description": "Unified team skill for roadmap-driven development workflow. Coordinator discusses roadmap with user, then dispatches phased execution pipeline (plan -> execute -> verify). All roles invoke this skill with --role arg. Triggers on \"team roadmap-dev\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-roadmap-dev/SKILL.md"
},
{
"name": "team-tech-debt",
"description": "Unified team skill for tech debt identification and cleanup. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team tech-debt\", \"tech debt cleanup\", \"技术债务\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-tech-debt/SKILL.md"
},
{
"name": "team-testing",
"description": "Unified team skill for testing team. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team testing\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-testing/SKILL.md"
},
{
"name": "team-uidesign",
"description": "Unified team skill for UI design team. All roles invoke this skill with --role arg for role-specific execution. CP-9 Dual-Track design+implementation.",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-uidesign/SKILL.md"
},
{
"name": "team-ultra-analyze",
"description": "Unified team skill for deep collaborative analysis. All roles invoke this skill with --role arg for role-specific execution. Triggers on \"team ultra-analyze\", \"team analyze\".",
"category": "team",
"is_team": true,
"has_phases": false,
"has_role_specs": false,
"version": "",
"source": "../../../skills/team-ultra-analyze/SKILL.md"
}
],
"workflow": [
{
"name": "workflow-execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking. Triggers on \"workflow-execute\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-execute/SKILL.md"
},
{
"name": "workflow-lite-planex",
"description": "Lightweight planning and execution skill (Phase 1: plan, Phase 2: execute). Triggers on \"workflow-lite-planex\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-lite-planex/SKILL.md"
},
{
"name": "workflow-multi-cli-plan",
"description": "Multi-CLI collaborative planning and execution skill with integrated execution phase. Triggers on \"workflow-multi-cli-plan\".",
"category": "workflow",
"is_team": false,
"has_phases": true,
"has_role_specs": false,
"version": "",
"source": "../../../skills/workflow-multi-cli-plan/SKILL.md"
},
{
"name": "workflow-plan",
"description": "Unified planning skill - 4-phase planning workflow, plan verification, and interactive replanning. Triggers on \"workflow-plan\", \"workflow-plan-verify\", \"workflow:replan\".",
"category": "workflow",
|
||||
"is_team": false,
|
||||
"has_phases": true,
|
||||
"has_role_specs": false,
|
||||
"version": "",
|
||||
"source": "../../../skills/workflow-plan/SKILL.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow-skill-designer",
|
||||
"description": "Meta-skill for designing orchestrator+phases structured workflow skills. Creates SKILL.md coordinator with progressive phase loading, TodoWrite patterns, and data flow. Triggers on \"design workflow skill\", \"create workflow skill\", \"workflow skill designer\".",
|
||||
"category": "workflow",
|
||||
"is_team": false,
|
||||
"has_phases": true,
|
||||
"has_role_specs": false,
|
||||
"version": "",
|
||||
"source": "../../../skills/workflow-skill-designer/SKILL.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow-tdd-plan-plan",
|
||||
"description": "Unified TDD workflow skill combining 6-phase TDD planning with Red-Green-Refactor task chain generation, and 4-phase TDD verification with compliance reporting. Triggers on \"workflow-tdd-plan\", \"workflow-tdd-verify\".",
|
||||
"category": "workflow",
|
||||
"is_team": false,
|
||||
"has_phases": true,
|
||||
"has_role_specs": false,
|
||||
"version": "",
|
||||
"source": "../../../skills/workflow-tdd-plan/SKILL.md"
|
||||
},
|
||||
{
|
||||
"name": "workflow-test-fix",
|
||||
"description": "Unified test-fix pipeline combining test generation (session, context, analysis, task gen) with iterative test-cycle execution (adaptive strategy, progressive testing, CLI fallback). Triggers on \"workflow-test-fix\", \"workflow-test-fix\", \"test fix workflow\".",
|
||||
"category": "workflow",
|
||||
"is_team": false,
|
||||
"has_phases": true,
|
||||
"has_role_specs": false,
|
||||
"version": "",
|
||||
"source": "../../../skills/workflow-test-fix/SKILL.md"
|
||||
}
|
||||
]
|
||||
}
|
||||
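A minimal sketch of how a consumer might group the skill records above by category (mirroring what the generator writes to skills-by-category.json). The sample entries are illustrative, shaped like the index records but carrying only a subset of fields:

```python
import json
from collections import defaultdict

# Sample entries shaped like the generated skill records (illustrative subset).
skills = [
    {"name": "team-testing", "category": "team", "is_team": True},
    {"name": "workflow-plan", "category": "workflow", "is_team": False},
    {"name": "workflow-execute", "category": "workflow", "is_team": False},
]

# Group names under their category, preserving insertion order per category.
by_category = defaultdict(list)
for skill in skills:
    by_category[skill["category"]].append(skill["name"])

skills_by_category = dict(by_category)
print(json.dumps(skills_by_category, indent=2))
```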
@@ -1,11 +1,10 @@
#!/usr/bin/env python3
"""
Analyze all command/agent files and generate index files for ccw-help skill.
Analyze all command/agent/skill files and generate index files for ccw-help skill.
Outputs relative paths pointing to source files (no reference folder duplication).
"""

import os
import re
import json
from pathlib import Path
from collections import defaultdict
@@ -15,9 +14,13 @@ from typing import Dict, List, Any
BASE_DIR = Path("D:/Claude_dms3/.claude")
COMMANDS_DIR = BASE_DIR / "commands"
AGENTS_DIR = BASE_DIR / "agents"
SKILLS_DIR = BASE_DIR / "skills"
SKILL_DIR = BASE_DIR / "skills" / "ccw-help"
INDEX_DIR = SKILL_DIR / "index"

# Skills to skip (internal/shared, not user-facing)
SKIP_SKILLS = {"_shared", "ccw-help"}

def parse_frontmatter(content: str) -> Dict[str, Any]:
    """Extract YAML frontmatter from markdown content."""
    frontmatter = {}
@@ -139,73 +142,129 @@ def analyze_agent_file(file_path: Path) -> Dict[str, Any]:
        "source": rel_path  # Relative from index/ dir (e.g., "../../../agents/...")
    }

def categorize_skill(name: str, description: str) -> str:
    """Determine skill category from name and description."""
    if name.startswith('team-'):
        return "team"
    if name.startswith('workflow-'):
        return "workflow"
    if name.startswith('review-'):
        return "review"
    if name.startswith('spec-') or name.startswith('command-') or name.startswith('skill-'):
        return "meta"
    if name.startswith('memory-') or name.startswith('issue-'):
        return "utility"
    return "standalone"


def analyze_skill_dir(skill_path: Path) -> Dict[str, Any] | None:
    """Analyze a skill directory and extract metadata from SKILL.md."""
    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        return None

    with open(skill_md, 'r', encoding='utf-8') as f:
        content = f.read()

    frontmatter = parse_frontmatter(content)

    name = frontmatter.get('name', skill_path.name)
    description = frontmatter.get('description', '')
    allowed_tools = frontmatter.get('allowed-tools', '')
    version = frontmatter.get('version', '')

    category = categorize_skill(name, description)

    # Detect if it's a team skill (uses TeamCreate + SendMessage together)
    is_team = 'TeamCreate' in allowed_tools and 'SendMessage' in allowed_tools

    # Detect if it has phases
    phases_dir = skill_path / "phases"
    has_phases = phases_dir.exists() and any(phases_dir.iterdir()) if phases_dir.exists() else False

    # Detect if it has role-specs
    role_specs_dir = skill_path / "role-specs"
    has_role_specs = role_specs_dir.exists() and any(role_specs_dir.iterdir()) if role_specs_dir.exists() else False

    # Build relative path from INDEX_DIR
    rel_from_base = skill_path.relative_to(BASE_DIR)
    rel_path = "../../../" + str(rel_from_base).replace('\\', '/') + "/SKILL.md"

    return {
        "name": name,
        "description": description,
        "category": category,
        "is_team": is_team,
        "has_phases": has_phases,
        "has_role_specs": has_role_specs,
        "version": version,
        "source": rel_path
    }


def build_command_relationships() -> Dict[str, Any]:
    """Build command relationship mappings."""
    return {
        "workflow:plan": {
        "workflow-plan": {
            "calls_internally": ["workflow:session:start", "workflow:tools:context-gather", "workflow:tools:conflict-resolution", "workflow:tools:task-generate-agent"],
            "next_steps": ["workflow:plan-verify", "workflow:status", "workflow:execute"],
            "alternatives": ["workflow:tdd-plan"],
            "next_steps": ["workflow-plan-verify", "workflow:status", "workflow-execute"],
            "alternatives": ["workflow-tdd-plan"],
            "prerequisites": []
        },
        "workflow:tdd-plan": {
        "workflow-tdd-plan": {
            "calls_internally": ["workflow:session:start", "workflow:tools:context-gather", "workflow:tools:task-generate-tdd"],
            "next_steps": ["workflow:tdd-verify", "workflow:status", "workflow:execute"],
            "alternatives": ["workflow:plan"],
            "next_steps": ["workflow-tdd-verify", "workflow:status", "workflow-execute"],
            "alternatives": ["workflow-plan"],
            "prerequisites": []
        },
        "workflow:execute": {
            "prerequisites": ["workflow:plan", "workflow:tdd-plan"],
        "workflow-execute": {
            "prerequisites": ["workflow-plan", "workflow-tdd-plan"],
            "related": ["workflow:status", "workflow:resume"],
            "next_steps": ["workflow:review", "workflow:tdd-verify"]
            "next_steps": ["workflow:review", "workflow-tdd-verify"]
        },
        "workflow:plan-verify": {
            "prerequisites": ["workflow:plan"],
            "next_steps": ["workflow:execute"],
        "workflow-plan-verify": {
            "prerequisites": ["workflow-plan"],
            "next_steps": ["workflow-execute"],
            "related": ["workflow:status"]
        },
        "workflow:tdd-verify": {
            "prerequisites": ["workflow:execute"],
        "workflow-tdd-verify": {
            "prerequisites": ["workflow-execute"],
            "related": ["workflow:tools:tdd-coverage-analysis"]
        },
        "workflow:session:start": {
            "next_steps": ["workflow:plan", "workflow:execute"],
            "next_steps": ["workflow-plan", "workflow-execute"],
            "related": ["workflow:session:list", "workflow:session:resume"]
        },
        "workflow:session:resume": {
            "alternatives": ["workflow:resume"],
            "related": ["workflow:session:list", "workflow:status"]
        },
        "workflow:lite-plan": {
            "calls_internally": ["workflow:lite-execute"],
            "next_steps": ["workflow:lite-execute", "workflow:status"],
            "alternatives": ["workflow:plan"],
        "workflow-lite-planex": {
            "calls_internally": [],
            "next_steps": ["workflow:status"],
            "alternatives": ["workflow-plan"],
            "prerequisites": []
        },
        "workflow:lite-fix": {
            "next_steps": ["workflow:lite-execute", "workflow:status"],
            "alternatives": ["workflow:lite-plan"],
            "related": ["workflow:test-cycle-execute"]
        },
        "workflow:lite-execute": {
            "prerequisites": ["workflow:lite-plan", "workflow:lite-fix"],
            "related": ["workflow:execute", "workflow:status"]
            "next_steps": ["workflow:status"],
            "alternatives": ["workflow-lite-planex"],
            "related": ["workflow-test-fix"]
        },
        "workflow:review-session-cycle": {
            "prerequisites": ["workflow:execute"],
            "prerequisites": ["workflow-execute"],
            "next_steps": ["workflow:review-fix"],
            "related": ["workflow:review-module-cycle"]
        },
        "workflow:review-fix": {
            "prerequisites": ["workflow:review-module-cycle", "workflow:review-session-cycle"],
            "related": ["workflow:test-cycle-execute"]
            "related": ["workflow-test-fix"]
        },
        "memory:docs": {
            "calls_internally": ["workflow:session:start", "workflow:tools:context-gather"],
            "next_steps": ["workflow:execute"]
            "next_steps": ["workflow-execute"]
        },
        "memory:skill-memory": {
            "next_steps": ["workflow:plan", "cli:analyze"],
            "next_steps": ["workflow-plan", "cli:analyze"],
            "related": ["memory:load-skill-memory"]
        }
    }
@@ -213,11 +272,11 @@ def build_command_relationships() -> Dict[str, Any]:
def identify_essential_commands(all_commands: List[Dict]) -> List[Dict]:
    """Identify the most essential commands for beginners."""
    essential_names = [
        "workflow:lite-plan", "workflow:lite-fix", "workflow:plan",
        "workflow:execute", "workflow:status", "workflow:session:start",
        "workflow-lite-planex", "workflow:lite-fix", "workflow-plan",
        "workflow-execute", "workflow:status", "workflow:session:start",
        "workflow:review-session-cycle", "cli:analyze", "cli:chat",
        "memory:docs", "workflow:brainstorm:artifacts",
        "workflow:plan-verify", "workflow:resume", "version"
        "workflow-plan-verify", "workflow:resume", "version"
    ]

    essential = []
@@ -267,7 +326,24 @@ def main():
        except Exception as e:
            print(f" ERROR analyzing {agent_file}: {e}")

    print(f"\nAnalyzed {len(all_commands)} commands, {len(all_agents)} agents")
    # Analyze skill directories
    print("\n=== Analyzing Skill Files ===")
    skill_dirs = [d for d in SKILLS_DIR.iterdir() if d.is_dir() and d.name not in SKIP_SKILLS]
    print(f"Found {len(skill_dirs)} skill directories")

    all_skills = []
    for skill_dir in sorted(skill_dirs):
        try:
            metadata = analyze_skill_dir(skill_dir)
            if metadata:
                all_skills.append(metadata)
                print(f" OK {metadata['name']} [{metadata['category']}]")
            else:
                print(f" SKIP {skill_dir.name} (no SKILL.md)")
        except Exception as e:
            print(f" ERROR analyzing {skill_dir}: {e}")

    print(f"\nAnalyzed {len(all_commands)} commands, {len(all_agents)} agents, {len(all_skills)} skills")

    # Generate index files
    INDEX_DIR.mkdir(parents=True, exist_ok=True)
@@ -320,15 +396,62 @@ def main():
        json.dump(relationships, f, indent=2, ensure_ascii=False)
    print(f"OK Generated {relationships_path.name} ({os.path.getsize(relationships_path)} bytes)")

    # 7. all-skills.json
    all_skills_path = INDEX_DIR / "all-skills.json"
    with open(all_skills_path, 'w', encoding='utf-8') as f:
        json.dump(all_skills, f, indent=2, ensure_ascii=False)
    print(f"OK Generated {all_skills_path.name} ({os.path.getsize(all_skills_path)} bytes)")

    # 8. skills-by-category.json
    skills_by_cat = defaultdict(list)
    for skill in all_skills:
        skills_by_cat[skill['category']].append(skill)

    skills_by_cat_path = INDEX_DIR / "skills-by-category.json"
    with open(skills_by_cat_path, 'w', encoding='utf-8') as f:
        json.dump(dict(skills_by_cat), f, indent=2, ensure_ascii=False)
    print(f"OK Generated {skills_by_cat_path.name} ({os.path.getsize(skills_by_cat_path)} bytes)")

    # Generate master command.json (includes commands, agents, skills)
    master = {
        "_metadata": {
            "version": "4.0.0",
            "total_commands": len(all_commands),
            "total_agents": len(all_agents),
            "total_skills": len(all_skills),
            "description": "Auto-generated CCW-Help command index from analyze_commands.py",
            "generated": "Auto-updated - all commands, agents, skills synced from file system",
            "last_sync": "command.json now stays in sync with CLI definitions"
        },
        "essential_commands": [cmd['name'] for cmd in essential],
        "commands": all_commands,
        "agents": all_agents,
        "skills": all_skills,
        "categories": sorted(set(cmd['category'] for cmd in all_commands)),
        "skill_categories": sorted(skills_by_cat.keys())
    }

    master_path = SKILL_DIR / "command.json"
    with open(master_path, 'w', encoding='utf-8') as f:
        json.dump(master, f, indent=2, ensure_ascii=False)
    print(f"\nOK Generated command.json ({os.path.getsize(master_path)} bytes)")

    # Print summary
    print("\n=== Summary ===")
    print(f"Commands: {len(all_commands)}")
    print(f"Agents: {len(all_agents)}")
    print(f"Skills: {len(all_skills)}")
    print(f"Essential: {len(essential)}")
    print(f"\nBy category:")
    print(f"\nCommands by category:")
    for cat in sorted(by_category.keys()):
        total = sum(len(cmds) for cmds in by_category[cat].values())
        print(f" {cat}: {total}")
    print(f"\nSkills by category:")
    for cat in sorted(skills_by_cat.keys()):
        print(f" {cat}: {len(skills_by_cat[cat])}")
        for skill in skills_by_cat[cat]:
            team_tag = " [team]" if skill['is_team'] else ""
            print(f" - {skill['name']}{team_tag}")

    print(f"\nIndex: {INDEX_DIR}")
    print("=== Complete ===")
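The prefix-based categorizer in the script above can be sanity-checked in isolation. This sketch re-states `categorize_skill` verbatim from the diff so the checks are self-contained:

```python
# Re-stated verbatim from analyze_commands.py so the checks below run standalone.
def categorize_skill(name: str, description: str) -> str:
    """Determine skill category from name and description."""
    if name.startswith('team-'):
        return "team"
    if name.startswith('workflow-'):
        return "workflow"
    if name.startswith('review-'):
        return "review"
    if name.startswith('spec-') or name.startswith('command-') or name.startswith('skill-'):
        return "meta"
    if name.startswith('memory-') or name.startswith('issue-'):
        return "utility"
    return "standalone"

# First matching prefix wins; anything unmatched falls through to "standalone".
assert categorize_skill("team-testing", "") == "team"
assert categorize_skill("workflow-plan", "") == "workflow"
assert categorize_skill("spec-generator", "") == "meta"
assert categorize_skill("ccw-help", "") == "standalone"
```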
@@ -67,9 +67,9 @@ spec-generator/
## Handoff

After Phase 6, choose execution path:
- `workflow:lite-plan` - Execute per Epic
- `workflow-lite-planex` - Execute per Epic
- `workflow:req-plan-with-file` - Roadmap decomposition
- `workflow:plan` - Full planning
- `workflow-plan` - Full planning
- `issue:new` - Create issues per Epic

## Design Principles

@@ -210,7 +210,7 @@ AskUserQuestion({
  options: [
    {
      label: "Execute via lite-plan",
      description: "Start implementing with /workflow:lite-plan, one Epic at a time"
      description: "Start implementing with /workflow-lite-planex, one Epic at a time"
    },
    {
      label: "Create roadmap",
@@ -218,7 +218,7 @@ AskUserQuestion({
    },
    {
      label: "Full planning",
      description: "Detailed planning with /workflow:plan for the full scope"
      description: "Detailed planning with /workflow-plan for the full scope"
    },
    {
      label: "Create Issues",
@@ -242,7 +242,7 @@ if (selection === "Execute via lite-plan") {
  const epicContent = Read(firstMvpFile);
  const title = extractTitle(epicContent); // First # heading
  const description = extractSection(epicContent, "Description");
  Skill(skill="workflow:lite-plan", args=`"${title}: ${description}"`)
  Skill(skill="workflow-lite-planex", args=`"${title}: ${description}"`)
}

if (selection === "Full planning" || selection === "Create roadmap") {
@@ -368,7 +368,7 @@ ${extractSection(epicContent, "Architecture")}
// → context-package.json.brainstorm_artifacts populated
// → action-planning-agent loads guidance_specification (P1) + feature_index (P2)
if (selection === "Full planning") {
  Skill(skill="workflow:plan", args=`"${structuredDesc}"`)
  Skill(skill="workflow-plan", args=`"${structuredDesc}"`)
} else {
  Skill(skill="workflow:req-plan-with-file", args=`"${extractGoal(specSummary)}"`)
}
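The pseudocode above leans on `extractTitle` and `extractSection` helpers that are not shown in the diff. A hypothetical Python equivalent of what they might do, under the stated assumptions (first `# ` heading as title, section body running up to the next `## ` heading):

```python
import re

def extract_title(md: str) -> str:
    """Return the first '# ' heading, or '' if none."""
    m = re.search(r"^# (.+)$", md, re.M)
    return m.group(1).strip() if m else ""

def extract_section(md: str, name: str) -> str:
    """Return the body under '## <name>' up to the next '## ' heading or EOF."""
    m = re.search(rf"^## {re.escape(name)}\n(.*?)(?=^## |\Z)", md, re.M | re.S)
    return m.group(1).strip() if m else ""

epic = "# Epic A\n\n## Description\nBuild the thing.\n\n## Architecture\nLayered."
print(extract_title(epic))                   # Epic A
print(extract_section(epic, "Description"))  # Build the thing.
```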
474
.claude/skills/team-arch-opt/SKILL.md
Normal file
@@ -0,0 +1,474 @@
---
name: team-arch-opt
description: Unified team skill for architecture optimization. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents. Triggers on "team arch-opt".
allowed-tools: Task, TaskCreate, TaskList, TaskGet, TaskUpdate, TeamCreate, TeamDelete, SendMessage, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep, mcp__ace-tool__search_context
---

# Team Architecture Optimization

Unified team skill: Analyze codebase architecture, identify structural issues (dependency cycles, coupling/cohesion, layering violations, God Classes, dead code), design refactoring strategies, implement changes, validate improvements, and review code quality. Built on **team-worker agent architecture** -- all worker roles share a single agent definition with role-specific Phase 2-4 loaded from markdown specs.

## Architecture

```
+---------------------------------------------------+
|  Skill(skill="team-arch-opt")                     |
|  args="<task-description>"                        |
+-------------------+-------------------------------+
                    |
     Orchestration Mode (auto -> coordinator)
                    |
           Coordinator (inline)
         Phase 0-5 orchestration
                    |
    +-------+-------+-------+-------+-------+
    v       v       v       v       v
  [tw]    [tw]    [tw]    [tw]    [tw]
 analyzer desig-  refact- valid-  review-
          ner     orer    ator    er

Subagents (callable by workers, not team members):
  [explore]  [discuss]

(tw) = team-worker agent
```

## Role Router

This skill is **coordinator-only**. Workers do NOT invoke this skill -- they are spawned as `team-worker` agents directly.

### Input Parsing

Parse `$ARGUMENTS`. No `--role` needed -- always routes to coordinator.

### Role Registry

| Role | Spec | Task Prefix | Type | Inner Loop |
|------|------|-------------|------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | - |
| analyzer | [role-specs/analyzer.md](role-specs/analyzer.md) | ANALYZE-* | orchestration | false |
| designer | [role-specs/designer.md](role-specs/designer.md) | DESIGN-* | orchestration | false |
| refactorer | [role-specs/refactorer.md](role-specs/refactorer.md) | REFACTOR-* / FIX-* | code_generation | true |
| validator | [role-specs/validator.md](role-specs/validator.md) | VALIDATE-* | validation | false |
| reviewer | [role-specs/reviewer.md](role-specs/reviewer.md) | REVIEW-* / QUALITY-* | read_only_analysis | false |

### Subagent Registry

| Subagent | Spec | Callable By | Purpose |
|----------|------|-------------|---------|
| explore | [subagents/explore-subagent.md](subagents/explore-subagent.md) | analyzer, refactorer | Shared codebase exploration for architecture-critical structures and dependency graphs |
| discuss | [subagents/discuss-subagent.md](subagents/discuss-subagent.md) | designer, reviewer | Multi-perspective discussion for refactoring approaches and review findings |

### Dispatch

Always route to coordinator. Coordinator reads `roles/coordinator/role.md` and executes its phases.

### Orchestration Mode

User just provides a task description.

**Invocation**:
```bash
Skill(skill="team-arch-opt", args="<task-description>")                              # auto mode
Skill(skill="team-arch-opt", args="--parallel-mode=fan-out <task-description>")      # force fan-out
Skill(skill="team-arch-opt", args='--parallel-mode=independent "target1" "target2"') # independent
Skill(skill="team-arch-opt", args="--max-branches=3 <task-description>")             # limit branches
```

**Parallel Modes**:

| Mode | Description | When to Use |
|------|-------------|------------|
| `auto` (default) | count <= 2 -> single, count >= 3 -> fan-out | General refactoring requests |
| `single` | Linear pipeline, no branching | Simple or tightly coupled refactorings |
| `fan-out` | Shared ANALYZE+DESIGN, then N parallel REFACTOR->VALIDATE+REVIEW branches | Multiple independent architecture issues |
| `independent` | M fully independent pipelines from analysis to review | Separate refactoring targets |
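A minimal sketch of the mode selection the table above describes, under the assumption that an explicit `--parallel-mode` wins, `auto` picks by refactor-target count, and `--max-branches` caps fan-out width (function names are illustrative, not part of the skill):

```python
def resolve_mode(target_count, parallel_mode="auto"):
    """Pick the pipeline mode: explicit flag wins, otherwise auto by count."""
    if parallel_mode != "auto":
        return parallel_mode
    return "single" if target_count <= 2 else "fan-out"

def branch_count(target_count, max_branches=None):
    """Number of parallel branches, capped by --max-branches when given."""
    return min(target_count, max_branches) if max_branches else target_count

print(resolve_mode(1))                  # single
print(resolve_mode(4))                  # fan-out
print(resolve_mode(4, "independent"))   # independent
print(branch_count(5, max_branches=3))  # 3
```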

**Lifecycle**:
```
User provides task description + optional --parallel-mode / --max-branches
-> coordinator Phase 1-3: Parse flags -> TeamCreate -> Create task chain (mode-aware)
-> coordinator Phase 4: spawn first batch workers (background) -> STOP
-> Worker (team-worker agent) executes -> SendMessage callback -> coordinator advances
-> [auto/fan-out] CP-2.5: Design complete -> create N branch tasks -> spawn all REFACTOR-B* in parallel
-> [independent] All pipelines run in parallel from the start
-> Per-branch/pipeline fix cycles run independently
-> All branches/pipelines complete -> AGGREGATE -> Phase 5 report + completion action
```

**User Commands** (wake paused coordinator):

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph (branch-grouped), no advancement |
| `resume` / `continue` | Check worker states, advance next step |
| `revise <TASK-ID> [feedback]` | Create revision task + cascade downstream (scoped to branch) |
| `feedback <text>` | Analyze feedback impact, create targeted revision chain |
| `recheck` | Re-run quality check |
| `improve [dimension]` | Auto-improve weakest dimension |

---

## Command Execution Protocol

When coordinator needs to execute a command (dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding

Example:
```
Phase 3 needs task dispatch
-> Read roles/coordinator/commands/dispatch.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Chain Creation)
-> Execute Phase 4 (Validation)
-> Continue to Phase 4
```

---

## Coordinator Spawn Template

### v5 Worker Spawn (all roles)

When the coordinator spawns workers, use the `team-worker` agent with a role-spec path:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-arch-opt/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Inner Loop roles** (refactorer): Set `inner_loop: true`. The team-worker agent handles the loop internally.

**Single-task roles** (analyzer, designer, validator, reviewer): Set `inner_loop: false`.
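The Role Assignment block above is a plain text template, so assembling it can be sketched as a string builder. This is a hypothetical helper, not part of the skill; field names follow the template and the inner-loop set follows the Role Registry table:

```python
# Roles whose workers run an internal task loop (per the Role Registry table).
INNER_LOOP_ROLES = {"refactorer"}

def build_spawn_prompt(role, session, session_id, team_name, requirement):
    """Render the Role Assignment header for a team-worker spawn prompt."""
    return "\n".join([
        "## Role Assignment",
        f"role: {role}",
        f"role_spec: .claude/skills/team-arch-opt/role-specs/{role}.md",
        f"session: {session}",
        f"session_id: {session_id}",
        f"team_name: {team_name}",
        f"requirement: {requirement}",
        f"inner_loop: {str(role in INNER_LOOP_ROLES).lower()}",
    ])

print(build_spawn_prompt("analyzer", "s1", "s1", "team-arch-opt", "untangle cycles"))
```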

---

## Pipeline Definitions

### Pipeline Diagrams

**Single Mode** (linear, backward compatible):
```
Pipeline: Single (Linear with Review-Fix Cycle)
=====================================================================
Stage 1         Stage 2        Stage 3          Stage 4
ANALYZE-001 --> DESIGN-001 --> REFACTOR-001 --> VALIDATE-001
[analyzer]      [designer]     [refactorer]     [validator]
                                   ^                |
                                   +<--FIX-001----->+
                                   |             REVIEW-001
                                   +<----------> [reviewer]
                              (max 3 iterations)    |
                                                 COMPLETE
=====================================================================
```

**Fan-out Mode** (shared stages 1-2, parallel branches 3-4):
```
Pipeline: Fan-out (N parallel refactoring branches)
=====================================================================
Stage 1         Stage 2      CP-2.5            Stage 3+4 (per branch)
                             (branch creation)
ANALYZE-001 --> DESIGN-001 --+-> REFACTOR-B01 --> VALIDATE-B01 + REVIEW-B01 (fix cycle)
[analyzer]      [designer]   |   [refactorer]     [validator]    [reviewer]
                             +-> REFACTOR-B02 --> VALIDATE-B02 + REVIEW-B02 (fix cycle)
                             |   [refactorer]     [validator]    [reviewer]
                             +-> REFACTOR-B0N --> VALIDATE-B0N + REVIEW-B0N (fix cycle)
                                                       |
                                              AGGREGATE -> Phase 5
=====================================================================
```

**Independent Mode** (M fully independent pipelines):
```
Pipeline: Independent (M complete pipelines)
=====================================================================
Pipeline A: ANALYZE-A01 --> DESIGN-A01 --> REFACTOR-A01 --> VALIDATE-A01 + REVIEW-A01
Pipeline B: ANALYZE-B01 --> DESIGN-B01 --> REFACTOR-B01 --> VALIDATE-B01 + REVIEW-B01
Pipeline C: ANALYZE-C01 --> DESIGN-C01 --> REFACTOR-C01 --> VALIDATE-C01 + REVIEW-C01
                                                |
                                       AGGREGATE -> Phase 5
=====================================================================
```

### Cadence Control

**Beat model**: Event-driven, each beat = coordinator wake -> process -> spawn -> STOP.

```
Beat Cycle (single beat)
======================================================================
Event                Coordinator                 Workers
----------------------------------------------------------------------
callback/resume -->  +- handleCallback ---+
                     |  mark completed    |
                     |  check pipeline    |
                     +- handleSpawnNext --+
                     |  find ready tasks  |
                     |  spawn workers ----+--> [team-worker A] Phase 1-5
                     |  (parallel OK) ----+--> [team-worker B] Phase 1-5
                     +- STOP (idle) ------+                  |
                                                             |
callback <---------------------------------------------------+
(next beat)          SendMessage + TaskUpdate(completed)
======================================================================

Fast-Advance (skips coordinator for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
  +- 1 ready task? simple successor?
  |    --> spawn team-worker B directly
  |    --> log fast_advance to message bus (coordinator syncs on next wake)
  +- complex case? --> SendMessage to coordinator
======================================================================
```

```
Beat View: Architecture Optimization Pipeline
======================================================================
Event                 Coordinator               Workers
----------------------------------------------------------------------
new task -->          +- Phase 1-3: clarify -+
                      |  TeamCreate          |
                      |  create ANALYZE-001  |
                      +- Phase 4: spawn -----+--> [analyzer] Phase 1-5
                      +- STOP (idle) --------+               |
                                                             |
callback <---------------------------------------------------+
(analyzer done) -->   +- handleCallback -----+  analyze_complete
                      |  mark ANALYZE done   |
                      |  spawn designer -----+--> [designer] Phase 1-5
                      +- STOP ---------------+               |
                                                             |
callback <---------------------------------------------------+
(designer done) -->   +- handleCallback -----+  design_complete
                      |  mark DESIGN done    |
                      |  spawn refactorer ---+--> [refactorer] Phase 1-5
                      +- STOP ---------------+               |
                                                             |
callback <---------------------------------------------------+
(refactorer done) --> +- handleCallback -----+  refactor_complete
                      |  mark REFACTOR done  |
                      |  spawn valid+reviewer+--> [validator] Phase 1-5
                      |  (parallel) ---------+--> [reviewer] Phase 1-5
                      +- STOP ---------------+        |          |
                                                      |          |
callback x2 <-----------------------------------------+----------+
                -->   +- handleCallback ------+
                      |  both done?           |
                      |  YES + pass -> Phase 5|
                      |  NO / fail -> FIX task|
                      |  spawn refactorer ----+--> [refactorer] FIX-001
                      +- STOP or Phase 5 -----+
======================================================================
```

**Checkpoints**:

| Checkpoint | Trigger | Location | Behavior |
|------------|---------|----------|----------|
| CP-1 | ANALYZE-001 complete | After Stage 1 | User reviews architecture report, can refine scope |
| CP-2 | DESIGN-001 complete | After Stage 2 | User reviews refactoring plan, can adjust priorities |
| CP-2.5 | DESIGN-001 complete (auto/fan-out) | After Stage 2 | Auto-create N branch tasks from refactoring plan, spawn all REFACTOR-B* in parallel |
| CP-3 | REVIEW/VALIDATE fail | Stage 4 (per-branch) | Auto-create FIX task for that branch only (max 3x per branch) |
| CP-4 | All tasks/branches complete | Phase 5 | Aggregate results, interactive completion action |

### Task Metadata Registry

**Single mode** (backward compatible):

| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|--------------|-------------|
| ANALYZE-001 | analyzer | Stage 1 | (none) | Analyze architecture, identify structural issues |
| DESIGN-001 | designer | Stage 2 | ANALYZE-001 | Design refactoring plan from architecture report |
| REFACTOR-001 | refactorer | Stage 3 | DESIGN-001 | Implement highest-priority refactorings |
| VALIDATE-001 | validator | Stage 4 | REFACTOR-001 | Validate build, tests, metrics, API compatibility |
| REVIEW-001 | reviewer | Stage 4 | REFACTOR-001 | Review refactoring code for correctness |
| FIX-001 | refactorer | Stage 3 (cycle) | REVIEW-001 or VALIDATE-001 | Fix issues found in review/validation |

**Fan-out mode** (branch tasks created at CP-2.5):

| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|--------------|-------------|
| ANALYZE-001 | analyzer | Stage 1 (shared) | (none) | Analyze architecture |
| DESIGN-001 | designer | Stage 2 (shared) | ANALYZE-001 | Design plan with discrete REFACTOR-IDs |
| REFACTOR-B{NN} | refactorer | Stage 3 (branch) | DESIGN-001 | Implement REFACTOR-{NNN} only |
| VALIDATE-B{NN} | validator | Stage 4 (branch) | REFACTOR-B{NN} | Validate branch B{NN} |
| REVIEW-B{NN} | reviewer | Stage 4 (branch) | REFACTOR-B{NN} | Review branch B{NN} |
| FIX-B{NN}-{cycle} | refactorer | Fix (branch) | (none) | Fix issues in branch B{NN} |
| VALIDATE-B{NN}-R{cycle} | validator | Retry (branch) | FIX-B{NN}-{cycle} | Re-validate after fix |
| REVIEW-B{NN}-R{cycle} | reviewer | Retry (branch) | FIX-B{NN}-{cycle} | Re-review after fix |

**Independent mode**:

| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|--------------|-------------|
| ANALYZE-{P}01 | analyzer | Stage 1 | (none) | Analyze for pipeline {P} target |
| DESIGN-{P}01 | designer | Stage 2 | ANALYZE-{P}01 | Design for pipeline {P} |
| REFACTOR-{P}01 | refactorer | Stage 3 | DESIGN-{P}01 | Implement pipeline {P} refactorings |
| VALIDATE-{P}01 | validator | Stage 4 | REFACTOR-{P}01 | Validate pipeline {P} |
| REVIEW-{P}01 | reviewer | Stage 4 | REFACTOR-{P}01 | Review pipeline {P} |
| FIX-{P}01-{cycle} | refactorer | Fix | (none) | Fix issues in pipeline {P} |

### Task Naming Rules

| Mode | Stage 3 | Stage 4 | Fix | Retry |
|------|---------|---------|-----|-------|
| Single | REFACTOR-001 | VALIDATE-001, REVIEW-001 | FIX-001 | VALIDATE-001-R1, REVIEW-001-R1 |
| Fan-out | REFACTOR-B01 | VALIDATE-B01, REVIEW-B01 | FIX-B01-1 | VALIDATE-B01-R1, REVIEW-B01-R1 |
| Independent | REFACTOR-A01 | VALIDATE-A01, REVIEW-A01 | FIX-A01-1 | VALIDATE-A01-R1, REVIEW-A01-R1 |
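These rules are regular enough to parse mechanically, which is how callbacks can be routed without extra bookkeeping. A minimal TypeScript sketch (the helper and its field names are illustrative, not part of the skill):

```typescript
// Parse pipeline task IDs like "REFACTOR-B01", "VALIDATE-A01-R2", "FIX-B01-1".
// Hypothetical helper -- the coordinator is not required to implement it this way.
interface TaskId {
  stage: string;   // REFACTOR | VALIDATE | REVIEW | FIX | ANALYZE | DESIGN
  scope: string;   // "001" (single), "B01" (fan-out branch), "A01" (independent pipeline)
  cycle?: number;  // fix cycle (FIX-B01-1) or retry round (VALIDATE-B01-R1)
}

function parseTaskId(id: string): TaskId | null {
  const m = /^([A-Z]+)-([A-Z]?\d{2,3})(?:-R?(\d+))?$/.exec(id);
  if (!m) return null;
  return { stage: m[1], scope: m[2], cycle: m[3] ? Number(m[3]) : undefined };
}
```

The optional `R` in the last group covers both the fix (`-1`) and retry (`-R1`) suffixes from the table.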

---

## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Team pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete(arch-opt) -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions: `Skill(skill="team-arch-opt", args="resume")` |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |

---

## Session Directory

**Single mode**:
```
.workflow/<session-id>/
+-- session.json                      # Session metadata + status + parallel_mode
+-- artifacts/
|   +-- architecture-baseline.json    # Analyzer: pre-refactoring metrics
|   +-- architecture-report.md        # Analyzer: ranked structural issue findings
|   +-- refactoring-plan.md           # Designer: prioritized refactoring plan
|   +-- validation-results.json       # Validator: post-refactoring validation
|   +-- review-report.md              # Reviewer: code review findings
+-- explorations/
|   +-- cache-index.json              # Shared explore cache
|   +-- <hash>.md                     # Cached exploration results
+-- wisdom/
|   +-- patterns.md                   # Discovered patterns and conventions
|   +-- shared-memory.json            # Cross-role structured data
+-- discussions/
|   +-- DISCUSS-REFACTOR.md           # Refactoring design discussion record
|   +-- DISCUSS-REVIEW.md             # Review discussion record
```

**Fan-out mode** (adds branches/ directory):
```
.workflow/<session-id>/
+-- session.json                      # + parallel_mode, branches, fix_cycles
+-- artifacts/
|   +-- architecture-baseline.json    # Shared baseline (all branches use this)
|   +-- architecture-report.md        # Shared architecture report
|   +-- refactoring-plan.md           # Shared plan with discrete REFACTOR-IDs
|   +-- aggregate-results.json        # Aggregated results from all branches
|   +-- branches/
|       +-- B01/
|       |   +-- refactoring-detail.md     # Extracted REFACTOR-001 detail
|       |   +-- validation-results.json   # Branch B01 validation
|       |   +-- review-report.md          # Branch B01 review
|       +-- B02/
|       |   +-- refactoring-detail.md
|       |   +-- validation-results.json
|       |   +-- review-report.md
|       +-- B0N/
+-- explorations/ wisdom/ discussions/    # Same as single
```

**Independent mode** (adds pipelines/ directory):
```
.workflow/<session-id>/
+-- session.json                      # + parallel_mode, independent_targets, fix_cycles
+-- artifacts/
|   +-- aggregate-results.json        # Aggregated results from all pipelines
|   +-- pipelines/
|       +-- A/
|       |   +-- architecture-baseline.json
|       |   +-- architecture-report.md
|       |   +-- refactoring-plan.md
|       |   +-- validation-results.json
|       |   +-- review-report.md
|       +-- B/
|           +-- architecture-baseline.json
|           +-- ...
+-- explorations/ wisdom/ discussions/    # Same as single
```

## Session Resume

Coordinator supports `--resume` / `--continue` for interrupted sessions:

1. Scan session directory for sessions with status "active" or "paused"
2. Multiple matches -> AskUserQuestion for selection
3. Audit TaskList -> reconcile session state <-> task status
4. Reset in_progress -> pending (interrupted tasks)
5. Rebuild team and spawn needed workers only
6. Create missing tasks with correct blockedBy
7. Kick first executable task -> Phase 4 coordination loop
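The reconciliation core of these steps can be sketched as follows; the `SessionTask` shape and function name are assumptions for illustration, since the real coordinator drives its own TaskList APIs:

```typescript
interface SessionTask {
  id: string;
  status: "pending" | "in_progress" | "completed";
  blockedBy: string[];
}

// Minimal resume sketch: reset interrupted work (step 4), then pick the first
// runnable task whose blockers are all complete (step 7).
function resumeSession(tasks: SessionTask[]): SessionTask | undefined {
  for (const t of tasks) {
    if (t.status === "in_progress") t.status = "pending"; // interrupted -> pending
  }
  const done = new Set(tasks.filter(t => t.status === "completed").map(t => t.id));
  return tasks.find(t => t.status === "pending" && t.blockedBy.every(b => done.has(b)));
}
```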

## Shared Resources

| Resource | Path | Usage |
|----------|------|-------|
| Architecture Baseline | `<session>/artifacts/architecture-baseline.json` | Pre-refactoring metrics for comparison |
| Architecture Report | `<session>/artifacts/architecture-report.md` | Analyzer output consumed by designer |
| Refactoring Plan | `<session>/artifacts/refactoring-plan.md` | Designer output consumed by refactorer |
| Validation Results | `<session>/artifacts/validation-results.json` | Validator output consumed by reviewer |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Role spec file not found | Error with expected path (role-specs/<name>.md) |
| Command file not found | Fallback to inline execution in coordinator role.md |
| Subagent spec not found | Error with expected path (subagents/<name>-subagent.md) |
| Fast-advance orphan detected | Coordinator resets task to pending on next check |
| consensus_blocked HIGH | Coordinator creates revision task or pauses pipeline |
| team-worker agent unavailable | Error: requires .claude/agents/team-worker.md |
| Completion action timeout | Default to Keep Active |
| Analysis tool not available | Fallback to static analysis methods |
| Validation regression detected | Auto-create FIX task with regression details (scoped to branch/pipeline) |
| Review-fix cycle exceeds 3 iterations | Escalate to user with summary of remaining issues (per-branch/pipeline scope) |
| One branch REFACTOR fails | Mark that branch failed, other branches continue to completion |
| Branch scope overlap detected | Designer constrains non-overlapping target files; REFACTOR logs warning on detection |
| Shared-memory concurrent writes | Each worker writes only its own namespace key (e.g., `refactorer.B01`) |
| Branch fix cycle >= 3 | Escalate only that branch to user, other branches continue independently |
| max_branches exceeded | Coordinator truncates to top N refactorings by priority at CP-2.5 |
| Independent pipeline partial failure | Failed pipeline marked, others continue; aggregate reports partial results |
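Several rows above hinge on the per-branch fix-cycle cap (max 3, from CP-3). A hypothetical sketch of that bookkeeping; only the cap itself comes from the spec:

```typescript
const MAX_FIX_CYCLES = 3; // from CP-3: max 3 fix cycles per branch

type FailureDecision =
  | { action: "spawn_fix"; taskId: string }
  | { action: "escalate"; branchId: string };

// Decide the coordinator's next move when one branch fails review/validation.
// Counters are per branch, so other branches continue independently.
function onBranchFailure(fixCycles: Record<string, number>, branchId: string): FailureDecision {
  const cycle = (fixCycles[branchId] ?? 0) + 1;
  if (cycle > MAX_FIX_CYCLES) return { action: "escalate", branchId };
  fixCycles[branchId] = cycle;
  return { action: "spawn_fix", taskId: `FIX-${branchId}-${cycle}` };
}
```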
80
.claude/skills/team-arch-opt/role-specs/analyzer.md
Normal file
@@ -0,0 +1,80 @@

---
prefix: ANALYZE
inner_loop: false
subagents: [explore]
message_types:
  success: analyze_complete
  error: error
---

# Architecture Analyzer

Analyze codebase architecture to identify structural issues: dependency cycles, coupling/cohesion problems, layering violations, God Classes, code duplication, dead code, and API surface bloat. Produce quantified baseline metrics and a ranked architecture report.

## Phase 2: Context & Environment Detection

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| shared-memory.json | <session>/wisdom/shared-memory.json | No |

1. Extract session path and target scope from task description
2. Detect project type by scanning for framework markers:

| Signal File | Project Type | Analysis Focus |
|-------------|--------------|----------------|
| package.json + React/Vue/Angular | Frontend | Component tree, prop drilling, state management, barrel exports |
| package.json + Express/Fastify/NestJS | Backend Node | Service layer boundaries, middleware chains, DB access patterns |
| Cargo.toml / go.mod / pom.xml | Native/JVM Backend | Module boundaries, trait/interface usage, dependency injection |
| Mixed framework markers | Full-stack / Monorepo | Cross-package dependencies, shared types, API contracts |
| CLI entry / bin/ directory | CLI Tool | Command structure, plugin architecture, configuration layering |
| No detection | Generic | All architecture dimensions |

3. Use `explore` subagent to map module structure, dependency graph, and layer boundaries within target scope
4. Detect available analysis tools (linters, dependency analyzers, build tools)

## Phase 3: Architecture Analysis

Execute analysis based on detected project type:

**Dependency analysis**:
- Build import/require graph across modules
- Detect circular dependencies (direct and transitive cycles)
- Identify layering violations (e.g., UI importing from data layer, utils importing from domain)
- Calculate fan-in/fan-out per module (high fan-out = fragile hub, high fan-in = tightly coupled)
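The cycle and fan-out checks above reduce to standard graph traversal over the import graph. A minimal sketch, assuming the graph has already been extracted from import/require statements (module names are hypothetical):

```typescript
type ImportGraph = Record<string, string[]>; // module -> modules it imports

// DFS with gray/black coloring: a back edge to a "gray" node closes a cycle.
function findCycle(graph: ImportGraph): string[] | null {
  const state: Record<string, "gray" | "black"> = {};
  const stack: string[] = [];
  const visit = (node: string): string[] | null => {
    state[node] = "gray";
    stack.push(node);
    for (const dep of graph[node] ?? []) {
      if (state[dep] === "gray") return [...stack.slice(stack.indexOf(dep)), dep];
      if (state[dep] !== "black") {
        const found = visit(dep);
        if (found) return found;
      }
    }
    state[node] = "black";
    stack.pop();
    return null;
  };
  for (const node of Object.keys(graph)) {
    if (!state[node]) {
      const found = visit(node);
      if (found) return found;
    }
  }
  return null;
}

// Fan-out is just out-degree; high values flag fragile hub modules.
const fanOut = (graph: ImportGraph, mod: string): number => (graph[mod] ?? []).length;
```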

**Structural analysis**:
- Identify God Classes / God Modules (> 500 LOC, > 10 public methods, too many responsibilities)
- Calculate coupling metrics (afferent/efferent coupling per module)
- Calculate cohesion metrics (LCOM -- Lack of Cohesion of Methods)
- Detect code duplication (repeated logic blocks, copy-paste patterns)
- Identify missing abstractions (repeated conditionals, switch-on-type patterns)

**API surface analysis**:
- Count exported symbols per module (export bloat detection)
- Identify dead exports (exported but never imported elsewhere)
- Detect dead code (unreachable functions, unused variables, orphan files)
- Check for pattern inconsistencies (mixed naming conventions, inconsistent error handling)

**All project types**:
- Collect quantified architecture baseline metrics (dependency count, cycle count, coupling scores, LOC distribution)
- Rank top 3-7 architecture issues by severity (Critical / High / Medium)
- Record evidence: file paths, line numbers, measured values

## Phase 4: Report Generation

1. Write architecture baseline to `<session>/artifacts/architecture-baseline.json`:
   - Module count, dependency count, cycle count, average coupling, average cohesion
   - God Class candidates with LOC and method count
   - Dead code file count, dead export count
   - Timestamp and project type details

2. Write architecture report to `<session>/artifacts/architecture-report.md`:
   - Ranked list of architecture issues with severity, location (file:line or module), measured impact
   - Issue categories: CYCLE, COUPLING, COHESION, GOD_CLASS, DUPLICATION, LAYER_VIOLATION, DEAD_CODE, API_BLOAT
   - Evidence summary per issue
   - Detected project type and analysis methods used

3. Update `<session>/wisdom/shared-memory.json` under `analyzer` namespace:
   - Read existing -> merge `{ "analyzer": { project_type, issue_count, top_issue, scope, categories } }` -> write back
118
.claude/skills/team-arch-opt/role-specs/designer.md
Normal file
@@ -0,0 +1,118 @@

---
prefix: DESIGN
inner_loop: false
discuss_rounds: [DISCUSS-REFACTOR]
subagents: [discuss]
message_types:
  success: design_complete
  error: error
---

# Refactoring Designer

Analyze architecture reports and baseline metrics to design a prioritized refactoring plan with concrete strategies, expected structural improvements, and risk assessments.

## Phase 2: Analysis Loading

| Input | Source | Required |
|-------|--------|----------|
| Architecture report | <session>/artifacts/architecture-report.md | Yes |
| Architecture baseline | <session>/artifacts/architecture-baseline.json | Yes |
| shared-memory.json | <session>/wisdom/shared-memory.json | Yes |
| Wisdom files | <session>/wisdom/patterns.md | No |

1. Extract session path from task description
2. Read architecture report -- extract ranked issue list with severities and categories
3. Read architecture baseline -- extract current structural metrics
4. Load shared-memory.json for analyzer findings (project_type, scope)
5. Assess overall refactoring complexity:

| Issue Count | Severity Mix | Complexity |
|-------------|--------------|------------|
| 1-2 | All Medium | Low |
| 2-3 | Mix of High/Medium | Medium |
| 3+ or any Critical | Any Critical present | High |
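One reading of this table as a function (the "2-3" and "3+" rows overlap at exactly three issues; this sketch resolves that boundary toward Medium unless a Critical is present -- an assumption, not spec):

```typescript
type Severity = "Critical" | "High" | "Medium";

function assessComplexity(severities: Severity[]): "Low" | "Medium" | "High" {
  if (severities.includes("Critical") || severities.length >= 4) return "High";
  if (severities.includes("High") || severities.length >= 3) return "Medium";
  return "Low"; // 1-2 issues, all Medium
}
```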

## Phase 3: Strategy Formulation

For each architecture issue, select refactoring approach by type:

| Issue Type | Strategies | Risk Level |
|------------|------------|------------|
| Circular dependency | Interface extraction, dependency inversion, mediator pattern | High |
| God Class/Module | SRP decomposition, extract class/module, delegate pattern | High |
| Layering violation | Move to correct layer, introduce Facade, add anti-corruption layer | Medium |
| Code duplication | Extract shared utility/base class, template method pattern | Low |
| High coupling | Introduce interface/abstraction, dependency injection, event-driven | Medium |
| API bloat / dead exports | Privatize internals, re-export only public API, barrel file cleanup | Low |
| Dead code | Safe removal with reference verification | Low |
| Missing abstraction | Extract interface/type, introduce strategy/factory pattern | Medium |

Prioritize refactorings by impact/effort ratio:

| Priority | Criteria |
|----------|----------|
| P0 (Critical) | High impact + Low effort -- quick wins (dead code removal, simple moves) |
| P1 (High) | High impact + Medium effort (cycle breaking, layer fixes) |
| P2 (Medium) | Medium impact + Low effort (duplication extraction) |
| P3 (Low) | Low impact or High effort -- defer (large God Class decomposition) |
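Stated as a function (cells not listed in the matrix default to P3; this is a sketch of the rule, not the skill's actual logic):

```typescript
type Level = "Low" | "Medium" | "High";

// Impact/effort matrix from the priority table above.
function priorityFor(impact: Level, effort: Level): "P0" | "P1" | "P2" | "P3" {
  if (impact === "High" && effort === "Low") return "P0";     // quick wins
  if (impact === "High" && effort === "Medium") return "P1";  // cycle breaking, layer fixes
  if (impact === "Medium" && effort === "Low") return "P2";   // duplication extraction
  return "P3";                                                // low impact or high effort: defer
}
```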

If complexity is High, invoke `discuss` subagent (DISCUSS-REFACTOR round) to evaluate trade-offs between competing strategies before finalizing the plan.

Define measurable success criteria per refactoring (target metric improvement or structural change).

## Phase 4: Plan Output

1. Write refactoring plan to `<session>/artifacts/refactoring-plan.md`:

Each refactoring MUST have a unique REFACTOR-ID and self-contained detail block:

```markdown
### REFACTOR-001: <title>
- Priority: P0
- Target issue: <issue from report>
- Issue type: <CYCLE|COUPLING|GOD_CLASS|DUPLICATION|LAYER_VIOLATION|DEAD_CODE|API_BLOAT>
- Target files: <file-list>
- Strategy: <selected approach>
- Expected improvement: <metric> by <description>
- Risk level: <Low/Medium/High>
- Success criteria: <specific structural change to verify>
- Implementation guidance:
  1. <step 1>
  2. <step 2>
  3. <step 3>

### REFACTOR-002: <title>
...
```

Requirements:
- Each REFACTOR-ID is sequentially numbered (REFACTOR-001, REFACTOR-002, ...)
- Each refactoring must be **non-overlapping** in target files (no two REFACTOR-IDs modify the same file unless explicitly noted with conflict resolution)
- Implementation guidance must be self-contained -- a branch refactorer should be able to work from a single REFACTOR block without reading others
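Because CP-2.5 fans out by slicing this plan into per-branch detail files, the format above is effectively machine-readable. A hypothetical extractor for IDs and target files, including the non-overlap check (the heading and field syntax are taken from the template; everything else is illustrative):

```typescript
interface RefactorEntry { id: string; targetFiles: string[] }

// Parse "### REFACTOR-NNN: ..." headings and their "- Target files:" lines.
function parsePlan(markdown: string): RefactorEntry[] {
  const entries: RefactorEntry[] = [];
  let current: RefactorEntry | null = null;
  for (const line of markdown.split("\n")) {
    const head = /^### (REFACTOR-\d{3}):/.exec(line);
    if (head) {
      current = { id: head[1], targetFiles: [] };
      entries.push(current);
      continue;
    }
    const files = /^- Target files: (.+)$/.exec(line.trim());
    if (files && current) current.targetFiles = files[1].split(",").map(f => f.trim());
  }
  return entries;
}

// Non-overlap rule: no file may appear under two different REFACTOR-IDs.
function findOverlaps(entries: RefactorEntry[]): string[] {
  const seen = new Map<string, string>();
  const overlaps: string[] = [];
  for (const e of entries) {
    for (const f of e.targetFiles) {
      if (seen.has(f) && seen.get(f) !== e.id) overlaps.push(f);
      seen.set(f, e.id);
    }
  }
  return overlaps;
}
```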

2. Update `<session>/wisdom/shared-memory.json` under `designer` namespace:
   - Read existing -> merge -> write back:

```json
{
  "designer": {
    "complexity": "<Low|Medium|High>",
    "refactoring_count": 4,
    "priorities": ["P0", "P0", "P1", "P2"],
    "discuss_used": false,
    "refactorings": [
      {
        "id": "REFACTOR-001",
        "title": "<title>",
        "issue_type": "<CYCLE|COUPLING|...>",
        "priority": "P0",
        "target_files": ["src/a.ts", "src/b.ts"],
        "expected_improvement": "<metric> by <description>",
        "success_criteria": "<threshold>"
      }
    ]
  }
}
```

3. If DISCUSS-REFACTOR was triggered, record discussion summary in `<session>/discussions/DISCUSS-REFACTOR.md`
106
.claude/skills/team-arch-opt/role-specs/refactorer.md
Normal file
@@ -0,0 +1,106 @@

---
prefix: REFACTOR
inner_loop: true
additional_prefixes: [FIX]
subagents: [explore]
message_types:
  success: refactor_complete
  error: error
  fix: fix_required
---

# Code Refactorer

Implement architecture refactoring changes following the design plan. For FIX tasks, apply targeted corrections based on review/validation feedback.

## Modes

| Mode | Task Prefix | Trigger | Focus |
|------|-------------|---------|-------|
| Refactor | REFACTOR | Design plan ready | Apply refactorings per plan priority |
| Fix | FIX | Review/validation feedback | Targeted fixes for identified issues |

## Phase 2: Plan & Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Refactoring plan | <session>/artifacts/refactoring-plan.md | Yes (REFACTOR, no branch) |
| Branch refactoring detail | <session>/artifacts/branches/B{NN}/refactoring-detail.md | Yes (REFACTOR with branch) |
| Pipeline refactoring plan | <session>/artifacts/pipelines/{P}/refactoring-plan.md | Yes (REFACTOR with pipeline) |
| Review/validation feedback | From task description | Yes (FIX) |
| shared-memory.json | <session>/wisdom/shared-memory.json | Yes |
| Wisdom files | <session>/wisdom/patterns.md | No |
| Context accumulator | From prior REFACTOR/FIX tasks | Yes (inner loop) |

1. Extract session path and task mode (REFACTOR or FIX) from task description
2. **Detect branch/pipeline context** from task description:

| Task Description Field | Value | Context |
|------------------------|-------|---------|
| `BranchId: B{NN}` | Present | Fan-out branch -- load single refactoring detail |
| `PipelineId: {P}` | Present | Independent pipeline -- load pipeline-scoped plan |
| Neither present | - | Single mode -- load full refactoring plan |
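Context detection is a plain scan of the task description for these two fields. A minimal sketch (the regexes assume the exact `BranchId:` / `PipelineId:` syntax shown above; the type names are illustrative):

```typescript
type WorkContext =
  | { mode: "single" }
  | { mode: "branch"; branchId: string }
  | { mode: "pipeline"; pipelineId: string };

function detectContext(taskDescription: string): WorkContext {
  const branch = /BranchId:\s*(B\d{2})/.exec(taskDescription);
  if (branch) return { mode: "branch", branchId: branch[1] };
  const pipe = /PipelineId:\s*(\w+)/.exec(taskDescription);
  if (pipe) return { mode: "pipeline", pipelineId: pipe[1] };
  return { mode: "single" }; // neither field present
}
```

The reviewer and validator specs below use the same field convention, so one shared helper would cover all three roles.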

3. **Load refactoring context by mode**:
   - **Single mode (no branch)**: Read `<session>/artifacts/refactoring-plan.md` -- extract ALL priority-ordered changes
   - **Fan-out branch**: Read `<session>/artifacts/branches/B{NN}/refactoring-detail.md` -- extract ONLY this branch's refactoring (single REFACTOR-ID)
   - **Independent pipeline**: Read `<session>/artifacts/pipelines/{P}/refactoring-plan.md` -- extract this pipeline's plan

4. For FIX: parse review/validation feedback for specific issues to address
5. Use `explore` subagent to load implementation context for target files
6. For inner loop (single mode only): load context_accumulator from prior REFACTOR/FIX tasks

**Shared-memory namespace**:
- Single: write to `refactorer` namespace
- Fan-out: write to `refactorer.B{NN}` namespace
- Independent: write to `refactorer.{P}` namespace

## Phase 3: Code Implementation

Implementation backend selection:

| Backend | Condition | Method |
|---------|-----------|--------|
| CLI | Multi-file refactoring with clear plan | ccw cli --tool gemini --mode write |
| Direct | Single-file changes or targeted fixes | Inline Edit/Write tools |

For REFACTOR tasks:
- **Single mode**: Apply refactorings in plan priority order (P0 first, then P1, etc.)
- **Fan-out branch**: Apply ONLY this branch's single refactoring (from refactoring-detail.md)
- **Independent pipeline**: Apply this pipeline's refactorings in priority order
- Follow implementation guidance from plan (target files, patterns)
- **Preserve existing behavior -- refactoring must not change functionality**
- **Update ALL import references** when moving/renaming modules
- **Update ALL test files** that reference moved/renamed symbols

For FIX tasks:
- Read specific issues from review/validation feedback
- Apply targeted corrections to flagged code locations
- Verify the fix addresses the exact concern raised

General rules:
- Make minimal, focused changes per refactoring
- Add comments only where refactoring logic is non-obvious
- Preserve existing code style and conventions
- Verify no dangling imports after module moves

## Phase 4: Self-Validation

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Syntax | IDE diagnostics or build check | No new errors |
| File integrity | Verify all planned files exist and are modified | All present |
| Import integrity | Verify no broken imports after moves | All imports resolve |
| Acceptance | Match refactoring plan success criteria | All structural changes applied |
| No regression | Run existing tests if available | No new failures |

If validation fails, attempt auto-fix (max 2 attempts) before reporting error.

Append to context_accumulator for next REFACTOR/FIX task (single/inner-loop mode only):
- Files modified, refactorings applied, validation results
- Any discovered patterns or caveats for subsequent iterations

**Branch output paths**:
- Single: write artifacts to `<session>/artifacts/`
- Fan-out: write artifacts to `<session>/artifacts/branches/B{NN}/`
- Independent: write artifacts to `<session>/artifacts/pipelines/{P}/`
116
.claude/skills/team-arch-opt/role-specs/reviewer.md
Normal file
@@ -0,0 +1,116 @@

---
prefix: REVIEW
inner_loop: false
additional_prefixes: [QUALITY]
discuss_rounds: [DISCUSS-REVIEW]
subagents: [discuss]
message_types:
  success: review_complete
  error: error
  fix: fix_required
---

# Architecture Reviewer

Review refactoring code changes for correctness, pattern consistency, completeness, migration safety, and adherence to best practices. Provide structured verdicts with actionable feedback.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Refactoring code changes | From REFACTOR task artifacts / git diff | Yes |
| Refactoring plan / detail | Varies by mode (see below) | Yes |
| Validation results | Varies by mode (see below) | No |
| shared-memory.json | <session>/wisdom/shared-memory.json | Yes |

1. Extract session path from task description
2. **Detect branch/pipeline context** from task description:

| Task Description Field | Value | Context |
|------------------------|-------|---------|
| `BranchId: B{NN}` | Present | Fan-out branch -- review only this branch's changes |
| `PipelineId: {P}` | Present | Independent pipeline -- review pipeline-scoped changes |
| Neither present | - | Single mode -- review all refactoring changes |

3. **Load refactoring context by mode**:
   - Single: Read `<session>/artifacts/refactoring-plan.md`
   - Fan-out branch: Read `<session>/artifacts/branches/B{NN}/refactoring-detail.md`
   - Independent: Read `<session>/artifacts/pipelines/{P}/refactoring-plan.md`

4. Load shared-memory.json for scoped refactorer namespace:
   - Single: `refactorer` namespace
   - Fan-out: `refactorer.B{NN}` namespace
   - Independent: `refactorer.{P}` namespace

5. Identify changed files from refactorer context -- read ONLY files modified by this branch/pipeline
6. If validation results available, read from scoped path:
   - Single: `<session>/artifacts/validation-results.json`
   - Fan-out: `<session>/artifacts/branches/B{NN}/validation-results.json`
   - Independent: `<session>/artifacts/pipelines/{P}/validation-results.json`

## Phase 3: Multi-Dimension Review

Analyze refactoring changes across five dimensions:

| Dimension | Focus | Severity |
|-----------|-------|----------|
| Correctness | No behavior changes, all references updated, no dangling imports | Critical |
| Pattern consistency | Follows existing patterns, naming consistent, language-idiomatic | High |
| Completeness | All related code updated (imports, tests, config, documentation) | High |
| Migration safety | No dangling references, backward compatible, public API preserved | Critical |
| Best practices | Clean Architecture / SOLID principles, appropriate abstraction level | Medium |

Per-dimension review process:
- Scan modified files for patterns matching each dimension
- Record findings with severity (Critical / High / Medium / Low)
- Include specific file:line references and suggested fixes

**Correctness checks**:
- Verify moved code preserves original behavior (no logic changes mixed with structural changes)
- Check all import/require statements updated to new paths
- Verify no orphaned files left behind after moves

**Pattern consistency checks**:
- New module names follow existing naming conventions
- Extracted interfaces/classes use consistent patterns with existing codebase
- File organization matches project conventions (e.g., index files, barrel exports)

**Completeness checks**:
- All test files updated for moved/renamed modules
- Configuration files updated if needed (e.g., path aliases, build configs)
- Type definitions updated for extracted interfaces

**Migration safety checks**:
- Public API surface unchanged (same exports available to consumers)
- No circular dependencies introduced by the refactoring
- Re-exports in place if module paths changed for backward compatibility

**Best practices checks**:
- Extracted modules have clear single responsibility
- Dependency direction follows layer conventions (dependencies flow inward)
- Appropriate abstraction level (not over-engineered, not under-abstracted)

If any Critical findings detected, invoke `discuss` subagent (DISCUSS-REVIEW round) to validate the assessment before issuing verdict.

## Phase 4: Verdict & Feedback

Classify overall verdict based on findings:

| Verdict | Condition | Action |
|---------|-----------|--------|
| APPROVE | No Critical or High findings | Send review_complete |
| REVISE | Has High findings, no Critical | Send fix_required with detailed feedback |
| REJECT | Has Critical findings or fundamental approach flaw | Send fix_required + flag for designer escalation |
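The verdict table maps directly onto the severities collected in Phase 3. A minimal sketch (the function name is an assumption; the "fundamental approach flaw" branch of REJECT is a human judgment and is not modeled here):

```typescript
type FindingSeverity = "Critical" | "High" | "Medium" | "Low";

function classifyVerdict(findings: FindingSeverity[]): "APPROVE" | "REVISE" | "REJECT" {
  if (findings.includes("Critical")) return "REJECT";
  if (findings.includes("High")) return "REVISE";
  return "APPROVE"; // Medium/Low findings are reported but do not block
}
```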

1. Write review report to scoped output path:
   - Single: `<session>/artifacts/review-report.md`
   - Fan-out: `<session>/artifacts/branches/B{NN}/review-report.md`
   - Independent: `<session>/artifacts/pipelines/{P}/review-report.md`
   - Content: per-dimension findings with severity, file:line, description; overall verdict with rationale; specific fix instructions for REVISE/REJECT verdicts

2. Update `<session>/wisdom/shared-memory.json` under scoped namespace:
   - Single: merge `{ "reviewer": { verdict, finding_count, critical_count, dimensions_reviewed } }`
   - Fan-out: merge `{ "reviewer.B{NN}": { verdict, finding_count, critical_count, dimensions_reviewed } }`
   - Independent: merge `{ "reviewer.{P}": { verdict, finding_count, critical_count, dimensions_reviewed } }`

3. If DISCUSS-REVIEW was triggered, record discussion summary in `<session>/discussions/DISCUSS-REVIEW.md` (or `DISCUSS-REVIEW-B{NN}.md` for branch-scoped discussions)
|
||||
117 .claude/skills/team-arch-opt/role-specs/validator.md Normal file
@@ -0,0 +1,117 @@
---
prefix: VALIDATE
inner_loop: false
message_types:
  success: validate_complete
  error: error
  fix: fix_required
---

# Architecture Validator

Validate refactoring changes by running build checks, test suites, dependency metric comparisons, and API compatibility verification. Ensure the refactoring improves architecture without breaking functionality.
## Phase 2: Environment & Baseline Loading

| Input | Source | Required |
|-------|--------|----------|
| Architecture baseline | `<session>/artifacts/architecture-baseline.json` (shared) | Yes |
| Refactoring plan / detail | Varies by mode (see below) | Yes |
| shared-memory.json | `<session>/wisdom/shared-memory.json` | Yes |

1. Extract the session path from the task description
2. **Detect branch/pipeline context** from the task description:

   | Task Description Field | Value | Context |
   |------------------------|-------|---------|
   | `BranchId: B{NN}` | Present | Fan-out branch -- validate only this branch's changes |
   | `PipelineId: {P}` | Present | Independent pipeline -- use pipeline-scoped baseline |
   | Neither present | - | Single mode -- full validation |

3. **Load architecture baseline**:
   - Single / Fan-out: Read `<session>/artifacts/architecture-baseline.json` (shared baseline)
   - Independent: Read `<session>/artifacts/pipelines/{P}/architecture-baseline.json`

4. **Load refactoring context**:
   - Single: Read `<session>/artifacts/refactoring-plan.md` -- all success criteria
   - Fan-out branch: Read `<session>/artifacts/branches/B{NN}/refactoring-detail.md` -- only this branch's criteria
   - Independent: Read `<session>/artifacts/pipelines/{P}/refactoring-plan.md`

5. Load shared-memory.json for the project type and refactoring scope
6. Detect available validation tools from the project:

   | Signal | Validation Tool | Method |
   |--------|-----------------|--------|
   | package.json + tsc | TypeScript compiler | Type-check entire project |
   | package.json + vitest/jest | Test runner | Run existing test suite |
   | package.json + eslint | Linter | Run lint checks for import/export issues |
   | Cargo.toml | Rust compiler | cargo check + cargo test |
   | go.mod | Go tools | go build + go test |
   | Makefile with test target | Custom tests | make test |
   | No tooling detected | Manual validation | File existence + import grep checks |

7. Get the changed-files scope from shared-memory:
   - Single: `refactorer` namespace
   - Fan-out: `refactorer.B{NN}` namespace
   - Independent: `refactorer.{P}` namespace
## Phase 3: Validation Execution

Run validations across four dimensions:

**Build validation**:
- Compile/type-check the project -- zero new errors allowed
- Verify all moved/renamed files are correctly referenced
- Check for missing imports or unresolved modules

**Test validation**:
- Run the existing test suite -- all previously passing tests must still pass
- Identify any tests that need updating due to module moves (update, don't skip)
- Check for test file imports that reference old paths

**Dependency metric validation**:
- Recalculate architecture metrics post-refactoring
- Compare coupling scores against the baseline (must improve or stay neutral)
- Verify no new circular dependencies were introduced
- Check cohesion metrics for affected modules

**API compatibility validation**:
- Verify public API signatures are preserved (exported function/class/type names)
- Check for dangling references (imports pointing to removed/moved files)
- Verify no new dead exports were introduced by the refactoring
- Check that re-exports maintain backward compatibility where needed

**Branch-scoped validation** (fan-out mode):
- Only validate metrics relevant to this branch's refactoring (from refactoring-detail.md)
- Still check for regressions across all metrics (not just branch-specific ones)
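For the circular-dependency check above, one way to count cycle groups is Tarjan's strongly-connected-components algorithm. The graph shape (`{ module: [imported modules] }`) and both function names are assumptions for illustration; real metrics would come from the project's analysis tooling:

```javascript
// Sketch: every strongly-connected component with >1 node (or a self-loop)
// is a dependency cycle group. Compare the count against the baseline.
function cycleGroups(graph) {
  let index = 0;
  const stack = [], onStack = new Set();
  const idx = new Map(), low = new Map();
  const groups = [];
  function strongConnect(v) {
    idx.set(v, index); low.set(v, index); index++;
    stack.push(v); onStack.add(v);
    for (const w of graph[v] || []) {
      if (!idx.has(w)) {
        strongConnect(w);
        low.set(v, Math.min(low.get(v), low.get(w)));
      } else if (onStack.has(w)) {
        low.set(v, Math.min(low.get(v), idx.get(w)));
      }
    }
    if (low.get(v) === idx.get(v)) {
      const scc = [];
      let w;
      do { w = stack.pop(); onStack.delete(w); scc.push(w); } while (w !== v);
      if (scc.length > 1 || (graph[v] || []).includes(v)) groups.push(scc);
    }
  }
  for (const v of Object.keys(graph)) if (!idx.has(v)) strongConnect(v);
  return groups;
}

function verdictForCycles(baselineCount, graph) {
  // FAIL when the refactoring introduced new cycles (count exceeds baseline)
  return cycleGroups(graph).length > baselineCount ? "FAIL" : "PASS";
}
```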
## Phase 4: Result Analysis

Compare against the baseline and plan criteria:

| Metric | Threshold | Verdict |
|--------|-----------|---------|
| Build passes | Zero compilation errors | PASS |
| All tests pass | No new test failures | PASS |
| Coupling improved or neutral | No metric degradation > 5% | PASS |
| No new cycles introduced | Cycle count <= baseline | PASS |
| All plan success criteria met | Every criterion satisfied | PASS |
| Partial improvement | Some metrics improved, none degraded | WARN |
| Build fails | Compilation errors detected | FAIL -> fix_required |
| Test failures | Previously passing tests now fail | FAIL -> fix_required |
| New cycles introduced | Cycle count > baseline | FAIL -> fix_required |
| Dangling references | Unresolved imports detected | FAIL -> fix_required |
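The verdict matrix above aggregates per-dimension results into a single outcome; a minimal sketch, assuming a `{ name, verdict }` result shape that is illustrative only:

```javascript
// Sketch: any FAIL dominates, then any WARN; otherwise PASS.
function overallVerdict(dimensions) {
  if (dimensions.some((d) => d.verdict === "FAIL")) return "FAIL";
  if (dimensions.some((d) => d.verdict === "WARN")) return "WARN";
  return "PASS";
}
```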
1. Write validation results to the output path:
   - Single: `<session>/artifacts/validation-results.json`
   - Fan-out: `<session>/artifacts/branches/B{NN}/validation-results.json`
   - Independent: `<session>/artifacts/pipelines/{P}/validation-results.json`
   - Content: per dimension: name, baseline value, current value, improvement/regression, verdict; overall verdict: PASS / WARN / FAIL; failure details (if any)

2. Update `<session>/wisdom/shared-memory.json` under the scoped namespace:
   - Single: merge `{ "validator": { verdict, improvements, regressions, build_pass, test_pass } }`
   - Fan-out: merge `{ "validator.B{NN}": { verdict, improvements, regressions, build_pass, test_pass } }`
   - Independent: merge `{ "validator.{P}": { verdict, improvements, regressions, build_pass, test_pass } }`

3. If the verdict is FAIL, include detailed feedback in the message for FIX task creation:
   - Which validations failed, specific errors, suggested investigation areas
@@ -0,0 +1,376 @@

# Command: Dispatch

Create the architecture optimization task chain with correct dependencies and structured task descriptions. Supports single, fan-out, independent, and auto parallel modes.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| User requirement | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline definition | From SKILL.md Pipeline Definitions | Yes |
| Parallel mode | From session.json `parallel_mode` | Yes |
| Max branches | From session.json `max_branches` | Yes |
| Independent targets | From session.json `independent_targets` (independent mode only) | Conditional |

1. Load the user requirement and refactoring scope from session.json
2. Load pipeline stage definitions from the SKILL.md Task Metadata Registry
3. Read `parallel_mode` and `max_branches` from session.json
4. For `independent` mode: read the `independent_targets` array from session.json
## Phase 3: Task Chain Creation (Mode-Branched)

### Task Description Template

Every task description uses a structured format for clarity:

```
TaskCreate({
  subject: "<TASK-ID>",
  owner: "<role>",
  description: "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>
TASK:
- <step 1: specific action>
- <step 2: specific action>
- <step 3: specific action>
CONTEXT:
- Session: <session-folder>
- Scope: <refactoring-scope>
- Branch: <branch-id or 'none'>
- Upstream artifacts: <artifact-1>, <artifact-2>
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <deliverable path> + <quality criteria>
CONSTRAINTS: <scope limits, focus areas>
---
InnerLoop: <true|false>
BranchId: <B01|A|none>",
  blockedBy: [<dependency-list>],
  status: "pending"
})
```

### Mode Router

| Mode | Action |
|------|--------|
| `single` | Create 5 tasks (ANALYZE → DESIGN → REFACTOR → VALIDATE + REVIEW) -- unchanged from linear pipeline |
| `auto` | Create ANALYZE-001 + DESIGN-001 only. **Defer branch creation to CP-2.5** after design completes |
| `fan-out` | Create ANALYZE-001 + DESIGN-001 only. **Defer branch creation to CP-2.5** after design completes |
| `independent` | Create M complete pipelines immediately (one per target) |

---
### Single Mode Task Chain

Create tasks in dependency order (backward compatible, unchanged):

**ANALYZE-001** (analyzer, Stage 1):
```
TaskCreate({
  subject: "ANALYZE-001",
  description: "PURPOSE: Analyze codebase architecture to identify structural issues | Success: Baseline metrics captured, top 3-7 issues ranked by severity
TASK:
- Detect project type and available analysis tools
- Execute analysis across relevant dimensions (dependencies, coupling, cohesion, layering, duplication, dead code)
- Collect baseline metrics and rank architecture issues by severity
CONTEXT:
- Session: <session-folder>
- Scope: <refactoring-scope>
- Branch: none
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/architecture-baseline.json + <session>/artifacts/architecture-report.md | Quantified metrics with evidence
CONSTRAINTS: Focus on <refactoring-scope> | Analyze before any changes
---
InnerLoop: false",
  status: "pending"
})
```

**DESIGN-001** (designer, Stage 2):
```
TaskCreate({
  subject: "DESIGN-001",
  description: "PURPOSE: Design prioritized refactoring plan from architecture analysis | Success: Actionable plan with measurable success criteria per refactoring
TASK:
- Analyze architecture report and baseline metrics
- Select refactoring strategies per issue type
- Prioritize by impact/effort ratio, define success criteria
- Each refactoring MUST have a unique REFACTOR-ID (REFACTOR-001, REFACTOR-002, ...) with non-overlapping target files
CONTEXT:
- Session: <session-folder>
- Scope: <refactoring-scope>
- Branch: none
- Upstream artifacts: architecture-baseline.json, architecture-report.md
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/refactoring-plan.md | Priority-ordered with structural improvement targets, discrete REFACTOR-IDs
CONSTRAINTS: Focus on highest-impact refactorings | Risk assessment required | Non-overlapping file targets per REFACTOR-ID
---
InnerLoop: false",
  blockedBy: ["ANALYZE-001"],
  status: "pending"
})
```

**REFACTOR-001** (refactorer, Stage 3):
```
TaskCreate({
  subject: "REFACTOR-001",
  description: "PURPOSE: Implement refactoring changes per design plan | Success: All planned refactorings applied, code compiles, existing tests pass
TASK:
- Load refactoring plan and identify target files
- Apply refactorings in priority order (P0 first)
- Update all import references for moved/renamed modules
- Validate changes compile and pass existing tests
CONTEXT:
- Session: <session-folder>
- Scope: <refactoring-scope>
- Branch: none
- Upstream artifacts: refactoring-plan.md
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: Modified source files + validation passing | Refactorings applied without regressions
CONSTRAINTS: Preserve existing behavior | Update all references | Follow code conventions
---
InnerLoop: true",
  blockedBy: ["DESIGN-001"],
  status: "pending"
})
```

**VALIDATE-001** (validator, Stage 4 - parallel):
```
TaskCreate({
  subject: "VALIDATE-001",
  description: "PURPOSE: Validate refactoring results against baseline | Success: Build passes, tests pass, no metric regressions, API compatible
TASK:
- Load architecture baseline and plan success criteria
- Run build validation (compilation, type checking)
- Run test validation (existing test suite)
- Compare dependency metrics against baseline
- Verify API compatibility (no dangling references)
CONTEXT:
- Session: <session-folder>
- Scope: <refactoring-scope>
- Branch: none
- Upstream artifacts: architecture-baseline.json, refactoring-plan.md
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/validation-results.json | Per-dimension validation with verdicts
CONSTRAINTS: Must compare against baseline | Flag any regressions or broken imports
---
InnerLoop: false",
  blockedBy: ["REFACTOR-001"],
  status: "pending"
})
```

**REVIEW-001** (reviewer, Stage 4 - parallel):
```
TaskCreate({
  subject: "REVIEW-001",
  description: "PURPOSE: Review refactoring code for correctness, pattern consistency, and migration safety | Success: All dimensions reviewed, verdict issued
TASK:
- Load modified files and refactoring plan
- Review across 5 dimensions: correctness, pattern consistency, completeness, migration safety, best practices
- Issue verdict: APPROVE, REVISE, or REJECT with actionable feedback
CONTEXT:
- Session: <session-folder>
- Scope: <refactoring-scope>
- Branch: none
- Upstream artifacts: refactoring-plan.md, validation-results.json (if available)
- Shared memory: <session>/wisdom/shared-memory.json
EXPECTED: <session>/artifacts/review-report.md | Per-dimension findings with severity
CONSTRAINTS: Focus on refactoring changes only | Provide specific file:line references
---
InnerLoop: false",
  blockedBy: ["REFACTOR-001"],
  status: "pending"
})
```

---
### Auto / Fan-out Mode Task Chain (Deferred Branching)

For `auto` and `fan-out` modes, create only the shared stages now. Branch tasks are created at **CP-2.5** after DESIGN-001 completes.

Create ANALYZE-001 and DESIGN-001 with the same templates as single mode above.

**Do NOT create REFACTOR/VALIDATE/REVIEW tasks yet.** They are created by the CP-2.5 Branch Creation subroutine in monitor.md.

---
### Independent Mode Task Chain

For `independent` mode, create M complete pipelines -- one per target in the `independent_targets` array.

Pipeline prefix chars: `A, B, C, D, E, F, G, H, I, J` (from config `pipeline_prefix_chars`).

For each target index `i` (0-based), with prefix char `P = pipeline_prefix_chars[i]`:

```
// Create session subdirectory for this pipeline
Bash("mkdir -p <session>/artifacts/pipelines/<P>")

TaskCreate({ subject: "ANALYZE-<P>01", ... })   // blockedBy: []
TaskCreate({ subject: "DESIGN-<P>01", ... })    // blockedBy: ["ANALYZE-<P>01"]
TaskCreate({ subject: "REFACTOR-<P>01", ... })  // blockedBy: ["DESIGN-<P>01"]
TaskCreate({ subject: "VALIDATE-<P>01", ... })  // blockedBy: ["REFACTOR-<P>01"]
TaskCreate({ subject: "REVIEW-<P>01", ... })    // blockedBy: ["REFACTOR-<P>01"]
```

Task descriptions follow the same template as single mode, with these additions:
- `Pipeline: <P>` in CONTEXT
- Artifact paths use `<session>/artifacts/pipelines/<P>/` instead of `<session>/artifacts/`
- Shared-memory namespace uses `<role>.<P>` (e.g., `analyzer.A`, `refactorer.B`)
- Each pipeline's scope is its specific target from `independent_targets[i]`
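The prefix and task-ID scheme above can be sketched as a small planning helper; the function name and return shape are illustrative assumptions, while the prefix string mirrors `pipeline_prefix_chars`:

```javascript
// Sketch: one complete 5-stage pipeline per independent target, with
// task IDs built from the pipeline's prefix char.
const PIPELINE_PREFIX_CHARS = "ABCDEFGHIJ";

function pipelinePlan(targets) {
  return targets.map((target, i) => ({
    prefix: PIPELINE_PREFIX_CHARS[i],
    target,
    tasks: ["ANALYZE", "DESIGN", "REFACTOR", "VALIDATE", "REVIEW"]
      .map((stage) => `${stage}-${PIPELINE_PREFIX_CHARS[i]}01`),
  }));
}
```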
Example for pipeline A with target "refactor auth module":
```
TaskCreate({
  subject: "ANALYZE-A01",
  description: "PURPOSE: Analyze auth module architecture | Success: Auth module structural issues identified
TASK:
- Detect project type and available analysis tools
- Execute architecture analysis focused on auth module
- Collect baseline metrics and rank auth module issues
CONTEXT:
- Session: <session-folder>
- Scope: refactor auth module
- Pipeline: A
- Shared memory: <session>/wisdom/shared-memory.json (namespace: analyzer.A)
EXPECTED: <session>/artifacts/pipelines/A/architecture-baseline.json + architecture-report.md
CONSTRAINTS: Focus on auth module scope
---
InnerLoop: false
PipelineId: A",
  status: "pending"
})
```

---
### CP-2.5: Branch Creation Subroutine

**Triggered by**: monitor.md handleCallback when DESIGN-001 completes in `auto` or `fan-out` mode.

**Procedure**:

1. Read `<session>/artifacts/refactoring-plan.md` to count REFACTOR-IDs
2. Read `shared-memory.json` -> `designer.refactoring_count`
3. **Auto mode decision**:

   | Refactoring Count | Decision |
   |-------------------|----------|
   | count <= 2 | Switch to `single` mode -- create REFACTOR-001, VALIDATE-001, REVIEW-001 (standard single pipeline) |
   | count >= 3 | Switch to `fan-out` mode -- create branch tasks below |

4. Update session.json with the resolved `parallel_mode` (auto -> single or fan-out)

5. **Fan-out branch creation** (when count >= 3 or forced fan-out):
   - Truncate to `max_branches` if `refactoring_count > max_branches` (keep the top N by priority)
   - For each refactoring `i` (1-indexed), branch ID = `B{NN}` where NN is zero-padded i:

   ```
   // Create branch artifact directory
   Bash("mkdir -p <session>/artifacts/branches/B{NN}")

   // Extract single REFACTOR detail to branch
   Write("<session>/artifacts/branches/B{NN}/refactoring-detail.md",
     extracted REFACTOR-{NNN} block from refactoring-plan.md)
   ```

6. Create branch tasks for each branch B{NN}:

   ```
   TaskCreate({
     subject: "REFACTOR-B{NN}",
     description: "PURPOSE: Implement refactoring REFACTOR-{NNN} | Success: Single refactoring applied, compiles, tests pass
   TASK:
   - Load refactoring detail from branches/B{NN}/refactoring-detail.md
   - Apply this single refactoring to target files
   - Update all import references for moved/renamed modules
   - Validate changes compile and pass existing tests
   CONTEXT:
   - Session: <session-folder>
   - Branch: B{NN}
   - Upstream artifacts: branches/B{NN}/refactoring-detail.md
   - Shared memory: <session>/wisdom/shared-memory.json (namespace: refactorer.B{NN})
   EXPECTED: Modified source files for REFACTOR-{NNN} only
   CONSTRAINTS: Only implement this branch's refactoring | Do not touch files outside REFACTOR-{NNN} scope
   ---
   InnerLoop: false
   BranchId: B{NN}",
     blockedBy: ["DESIGN-001"],
     status: "pending"
   })

   TaskCreate({
     subject: "VALIDATE-B{NN}",
     description: "PURPOSE: Validate branch B{NN} refactoring | Success: REFACTOR-{NNN} passes build, tests, and metric checks
   TASK:
   - Load architecture baseline and REFACTOR-{NNN} success criteria
   - Validate build, tests, dependency metrics, and API compatibility
   - Compare against baseline, check for regressions
   CONTEXT:
   - Session: <session-folder>
   - Branch: B{NN}
   - Upstream artifacts: architecture-baseline.json, branches/B{NN}/refactoring-detail.md
   - Shared memory: <session>/wisdom/shared-memory.json (namespace: validator.B{NN})
   EXPECTED: <session>/artifacts/branches/B{NN}/validation-results.json
   CONSTRAINTS: Only validate this branch's changes
   ---
   InnerLoop: false
   BranchId: B{NN}",
     blockedBy: ["REFACTOR-B{NN}"],
     status: "pending"
   })

   TaskCreate({
     subject: "REVIEW-B{NN}",
     description: "PURPOSE: Review branch B{NN} refactoring code | Success: Code quality verified for REFACTOR-{NNN}
   TASK:
   - Load modified files from refactorer.B{NN} shared-memory namespace
   - Review across 5 dimensions for this branch's changes only
   - Issue verdict: APPROVE, REVISE, or REJECT
   CONTEXT:
   - Session: <session-folder>
   - Branch: B{NN}
   - Upstream artifacts: branches/B{NN}/refactoring-detail.md
   - Shared memory: <session>/wisdom/shared-memory.json (namespace: reviewer.B{NN})
   EXPECTED: <session>/artifacts/branches/B{NN}/review-report.md
   CONSTRAINTS: Only review this branch's changes
   ---
   InnerLoop: false
   BranchId: B{NN}",
     blockedBy: ["REFACTOR-B{NN}"],
     status: "pending"
   })
   ```

7. Update session.json:
   - `branches`: array of branch IDs (["B01", "B02", ...])
   - `fix_cycles`: object keyed by branch ID, all initialized to 0
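The CP-2.5 decision and branch-ID assignment above can be sketched as follows; the threshold of 3 and the `max_branches` truncation come from the tables, while the function name and return shape are illustrative assumptions:

```javascript
// Sketch: resolve auto mode and generate zero-padded branch IDs.
function planBranches(refactoringCount, maxBranches) {
  if (refactoringCount <= 2) return { mode: "single", branches: [] };
  // Truncate to max_branches (keep the top N by priority)
  const n = Math.min(refactoringCount, maxBranches);
  const branches = Array.from({ length: n }, (_, i) =>
    "B" + String(i + 1).padStart(2, "0"));
  return { mode: "fan-out", branches };
}
```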
---
## Phase 4: Validation

Verify task chain integrity:

| Check | Method | Expected |
|-------|--------|----------|
| Task count correct | TaskList count | single: 5, auto/fan-out: 2 (pre-CP-2.5), independent: 5*M |
| Dependencies correct | Trace dependency graph | blockedBy edges match the pipeline definition |
| No circular dependencies | Trace dependency graph | Acyclic |
| Task IDs use correct prefixes | Pattern check | Match naming rules per mode |
| Structured descriptions complete | Each has PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS | All present |
| Branch/Pipeline IDs consistent | Cross-check with session.json | Match |

### Naming Rules Summary

| Mode | Stage 3 | Stage 4 | Fix |
|------|---------|---------|-----|
| Single | REFACTOR-001 | VALIDATE-001, REVIEW-001 | FIX-001, FIX-002 |
| Fan-out | REFACTOR-B01 | VALIDATE-B01, REVIEW-B01 | FIX-B01-1, FIX-B01-2 |
| Independent | REFACTOR-A01 | VALIDATE-A01, REVIEW-A01 | FIX-A01-1, FIX-A01-2 |

If validation fails, fix the specific task and re-validate.
@@ -0,0 +1,332 @@

# Command: Monitor

Handle all coordinator monitoring events: worker callbacks, status checks, pipeline advancement, and completion. Supports single, fan-out, and independent parallel modes with per-branch/pipeline tracking.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session state | <session>/session.json | Yes |
| Task list | TaskList() | Yes |
| Trigger event | From Entry Router detection | Yes |
| Pipeline definition | From SKILL.md | Yes |

1. Load session.json for current state, `parallel_mode`, `branches`, `fix_cycles`
2. Run TaskList() to get current task statuses
3. Identify the trigger event type from the Entry Router
## Phase 3: Event Handlers

### handleCallback

Triggered when a worker sends a completion message.

1. Parse the message to identify role, task ID, and **branch/pipeline label**:

   | Message Pattern | Branch Detection |
   |-----------------|------------------|
   | `[refactorer-B01]` or task ID `REFACTOR-B01` | Branch `B01` (fan-out) |
   | `[analyzer-A]` or task ID `ANALYZE-A01` | Pipeline `A` (independent) |
   | `[analyzer]` or task ID `ANALYZE-001` | No branch (single) |

2. Mark the task as completed:

   ```
   TaskUpdate({ taskId: "<task-id>", status: "completed" })
   ```

3. Record the completion in session state

4. **CP-2.5 check** (auto/fan-out mode only):
   - If the completed task is DESIGN-001 AND `parallel_mode` is `auto` or `fan-out`:
     - Execute the **CP-2.5 Branch Creation** subroutine from dispatch.md
     - After branch creation, proceed to handleSpawnNext (spawns all REFACTOR-B* in parallel)
     - STOP after spawning

5. Check whether checkpoint feedback is configured for this stage:

   | Completed Task | Checkpoint | Action |
   |----------------|------------|--------|
   | ANALYZE-001 / ANALYZE-{P}01 | CP-1 | Notify user: architecture report ready for review |
   | DESIGN-001 / DESIGN-{P}01 | CP-2 | Notify user: refactoring plan ready for review |
   | DESIGN-001 (auto/fan-out) | CP-2.5 | Execute branch creation, then notify user with branch count |
   | VALIDATE-* or REVIEW-* | CP-3 | Check verdicts per branch (see Review-Fix Cycle below) |

6. Proceed to handleSpawnNext
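The task-ID patterns in the detection table above can be sketched as a parsing helper. The regexes follow the naming rules from dispatch.md; taking the session's `parallel_mode` as a second argument is an assumption that disambiguates fan-out branch IDs (`B01`) from independent-pipeline task IDs (`B01` under pipeline `B`):

```javascript
// Sketch: derive branch/pipeline scope from a task ID, given the session mode.
function detectScope(taskId, parallelMode) {
  if (parallelMode === "independent") {
    const m = taskId.match(/^[A-Z]+-([A-J])\d{2}/); // e.g. ANALYZE-A01 -> pipeline A
    return m ? { pipeline: m[1] } : {};
  }
  const m = taskId.match(/^[A-Z]+-(B\d{2})/);       // e.g. REFACTOR-B01 -> branch B01
  return m ? { branch: m[1] } : {};                 // no match: single mode, no scope
}
```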
### handleSpawnNext

Find and spawn the next ready tasks.

1. Scan the task list for tasks where:
   - Status is "pending"
   - All blockedBy tasks have status "completed"

2. For each ready task, spawn a team-worker:

   ```
   Task({
     subagent_type: "team-worker",
     description: "Spawn <role> worker for <task-id>",
     team_name: "arch-opt",
     name: "<role>",
     run_in_background: true,
     prompt: `## Role Assignment
   role: <role>
   role_spec: .claude/skills/team-arch-opt/role-specs/<role>.md
   session: <session-folder>
   session_id: <session-id>
   team_name: arch-opt
   requirement: <task-description>
   inner_loop: <true|false>

   Read role_spec file to load Phase 2-4 domain instructions.
   Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
   })
   ```

3. **Parallel spawn rules by mode**:

   | Mode | Scenario | Spawn Behavior |
   |------|----------|----------------|
   | Single | Stage 4 ready | Spawn VALIDATE-001 + REVIEW-001 in parallel |
   | Fan-out (CP-2.5 done) | All REFACTOR-B* unblocked | Spawn ALL REFACTOR-B* in parallel |
   | Fan-out (REFACTOR-B{NN} done) | VALIDATE-B{NN} + REVIEW-B{NN} ready | Spawn both for that branch in parallel |
   | Independent | Any unblocked task | Spawn all ready tasks across all pipelines in parallel |

4. STOP after spawning -- wait for the next callback
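The readiness scan in steps 1-2 can be sketched as follows; the task shape (`subject`, `status`, `blockedBy`) follows the TaskCreate calls in dispatch.md, and the helper name is illustrative:

```javascript
// Sketch: a task is spawnable when it is pending and every blockedBy
// dependency has completed.
function readyTasks(tasks) {
  const done = new Set(
    tasks.filter((t) => t.status === "completed").map((t) => t.subject));
  return tasks.filter(
    (t) => t.status === "pending" && (t.blockedBy || []).every((d) => done.has(d)));
}
```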
### Review-Fix Cycle (CP-3)

**Per-branch/pipeline scoping**: Each branch/pipeline has its own independent fix cycle.

#### Single Mode (unchanged)

When both VALIDATE-001 and REVIEW-001 are completed:

1. Read the validation verdict from shared-memory (validator namespace)
2. Read the review verdict from shared-memory (reviewer namespace)

| Validate Verdict | Review Verdict | Action |
|------------------|----------------|--------|
| PASS | APPROVE | -> handleComplete |
| PASS | REVISE | Create FIX task with review feedback |
| FAIL | APPROVE | Create FIX task with validation feedback |
| FAIL | REVISE/REJECT | Create FIX task with combined feedback |
| Any | REJECT | Create FIX task + flag for designer re-evaluation |

#### Fan-out Mode (per-branch)

When both VALIDATE-B{NN} and REVIEW-B{NN} are completed for a specific branch:

1. Read the validation verdict from the `validator.B{NN}` namespace
2. Read the review verdict from the `reviewer.B{NN}` namespace
3. Apply the same verdict matrix as single mode, scoped to this branch only
4. **Other branches are unaffected** -- they continue independently

#### Independent Mode (per-pipeline)

When both VALIDATE-{P}01 and REVIEW-{P}01 are completed for a specific pipeline:

1. Read verdicts from the `validator.{P}` and `reviewer.{P}` namespaces
2. Apply the same verdict matrix, scoped to this pipeline only

#### Fix Cycle Count Tracking

Fix cycles are tracked per branch/pipeline in `session.json`:

```json
// Single mode
{ "fix_cycles": { "main": 0 } }

// Fan-out mode
{ "fix_cycles": { "B01": 0, "B02": 1, "B03": 0 } }

// Independent mode
{ "fix_cycles": { "A": 0, "B": 2 } }
```

| Cycle Count | Action |
|-------------|--------|
| < 3 | Create FIX task, increment the cycle count for this branch/pipeline |
| >= 3 | Escalate THIS branch/pipeline to the user. Other branches continue |
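The per-scope cycle policy above can be sketched as a pure helper; the limit of 3 comes from the table, while the function name and return shape are illustrative assumptions:

```javascript
// Sketch: decide whether to create another FIX task or escalate this
// branch/pipeline, returning the updated fix_cycles map.
function nextFixAction(fixCycles, scope, limit = 3) {
  const count = fixCycles[scope] ?? 0;
  if (count >= limit) return { action: "escalate", scope };
  return {
    action: "create_fix",
    cycle: count + 1, // used in the FIX task ID suffix
    fixCycles: { ...fixCycles, [scope]: count + 1 },
  };
}
```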
#### FIX Task Creation (branched)

**Fan-out mode**:
```
TaskCreate({
  subject: "FIX-B{NN}-{cycle}",
  description: "PURPOSE: Fix issues in branch B{NN} from review/validation | Success: All flagged issues resolved
TASK:
- Address review findings: <specific-findings>
- Fix validation failures: <specific-failures>
- Re-validate after fixes
CONTEXT:
- Session: <session-folder>
- Branch: B{NN}
- Upstream artifacts: branches/B{NN}/review-report.md, branches/B{NN}/validation-results.json
- Shared memory: <session>/wisdom/shared-memory.json (namespace: refactorer.B{NN})
EXPECTED: Fixed source files for B{NN} only
CONSTRAINTS: Targeted fixes only | Do not touch other branches
---
InnerLoop: false
BranchId: B{NN}",
  blockedBy: [],
  status: "pending"
})
```

Create new VALIDATE and REVIEW tasks with a retry suffix:
- `VALIDATE-B{NN}-R{cycle}` blocked on `FIX-B{NN}-{cycle}`
- `REVIEW-B{NN}-R{cycle}` blocked on `FIX-B{NN}-{cycle}`

**Independent mode**:
```
TaskCreate({
  subject: "FIX-{P}01-{cycle}",
  ...same pattern with pipeline prefix...
  blockedBy: [],
  status: "pending"
})
```

Create `VALIDATE-{P}01-R{cycle}` and `REVIEW-{P}01-R{cycle}`.
### handleCheck

Output the current pipeline status grouped by branch/pipeline.

**Single mode** (unchanged):
```
Pipeline Status:
[DONE] ANALYZE-001 (analyzer) -> architecture-report.md
[DONE] DESIGN-001 (designer) -> refactoring-plan.md
[RUN]  REFACTOR-001 (refactorer) -> refactoring...
[WAIT] VALIDATE-001 (validator) -> blocked by REFACTOR-001
[WAIT] REVIEW-001 (reviewer) -> blocked by REFACTOR-001

Fix Cycles: 0/3
Session: <session-id>
```

**Fan-out mode**:
```
Pipeline Status (fan-out, 3 branches):
Shared Stages:
  [DONE] ANALYZE-001 (analyzer) -> architecture-report.md
  [DONE] DESIGN-001 (designer) -> refactoring-plan.md (4 REFACTOR-IDs)

Branch B01 (REFACTOR-001: <title>):
  [RUN]  REFACTOR-B01 (refactorer) -> refactoring...
  [WAIT] VALIDATE-B01 (validator) -> blocked by REFACTOR-B01
  [WAIT] REVIEW-B01 (reviewer) -> blocked by REFACTOR-B01
  Fix Cycles: 0/3

Branch B02 (REFACTOR-002: <title>):
  [DONE] REFACTOR-B02 (refactorer) -> done
  [RUN]  VALIDATE-B02 (validator) -> validating...
  [RUN]  REVIEW-B02 (reviewer) -> reviewing...
  Fix Cycles: 0/3

Branch B03 (REFACTOR-003: <title>):
  [FAIL] REFACTOR-B03 (refactorer) -> failed
  Fix Cycles: 0/3 [BRANCH FAILED]

Session: <session-id>
```

**Independent mode**:
```
Pipeline Status (independent, 2 pipelines):
Pipeline A (target: refactor auth module):
  [DONE] ANALYZE-A01 -> [DONE] DESIGN-A01 -> [RUN] REFACTOR-A01 -> ...
  Fix Cycles: 0/3

Pipeline B (target: refactor API layer):
  [DONE] ANALYZE-B01 -> [DONE] DESIGN-B01 -> [DONE] REFACTOR-B01 -> ...
  Fix Cycles: 1/3

Session: <session-id>
```

Output status only -- do NOT advance the pipeline.
### handleResume
|
||||
|
||||
Resume pipeline after user pause or interruption.
|
||||
|
||||
1. Audit task list for inconsistencies:
|
||||
- Tasks stuck in "in_progress" -> reset to "pending"
|
||||
- Tasks with completed blockers but still "pending" -> include in spawn list
|
||||
2. For fan-out/independent: check each branch/pipeline independently
|
||||
3. Proceed to handleSpawnNext
|
||||
|
||||
### handleConsensus
|
||||
|
||||
Handle consensus_blocked signals from discuss rounds.
|
||||
|
||||
| Severity | Action |
|
||||
|----------|--------|
|
||||
| HIGH | Pause pipeline (or branch), notify user with findings summary |
|
||||
| MEDIUM | Create revision task for the blocked role (scoped to branch if applicable) |
|
||||
| LOW | Log finding, continue pipeline |
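
A minimal sketch of the severity dispatch above (function and action names are illustrative, not part of the skill):

```python
def handle_consensus(severity, branch=None):
    """Map a consensus_blocked severity to the coordinator's action."""
    actions = {
        "HIGH": "pause",     # pause pipeline (or branch), notify user
        "MEDIUM": "revise",  # create a revision task for the blocked role
        "LOW": "log",        # record the finding, keep the pipeline moving
    }
    action = actions.get(severity.upper())
    if action is None:
        raise ValueError(f"unknown severity: {severity}")
    scope = branch or "pipeline"
    return f"{action}:{scope}"
```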

### handleComplete

Triggered when all pipeline tasks are completed and no fix cycles remain.

**Completion check varies by mode**:

| Mode | Completion Condition |
|------|---------------------|
| Single | All 5 tasks (+ any FIX/retry tasks) have status "completed" |
| Fan-out | ALL branches have VALIDATE + REVIEW completed with PASS/APPROVE (or escalated), shared stages done |
| Independent | ALL pipelines have VALIDATE + REVIEW completed with PASS/APPROVE (or escalated) |

**Aggregate results** before transitioning to Phase 5:

1. For fan-out mode: collect per-branch validation results into `<session>/artifacts/aggregate-results.json`:

```json
{
  "branches": {
    "B01": { "refactor_id": "REFACTOR-001", "validate_verdict": "PASS", "review_verdict": "APPROVE", "improvement": "..." },
    "B02": { "refactor_id": "REFACTOR-002", "validate_verdict": "PASS", "review_verdict": "APPROVE", "improvement": "..." },
    "B03": { "status": "failed", "reason": "REFACTOR failed" }
  },
  "overall": { "total_branches": 3, "passed": 2, "failed": 1 }
}
```

2. For independent mode: collect per-pipeline results similarly
3. If any tasks are not completed, return to handleSpawnNext
4. If all completed (allowing for failed branches marked as such), transition to coordinator Phase 5
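
The `overall` block of the aggregate file can be derived from the per-branch entries; a sketch of that aggregation (helper name is illustrative):

```python
def aggregate_branches(branches):
    """Summarize per-branch results into the aggregate file's 'overall' block."""
    passed = sum(
        1 for b in branches.values()
        if b.get("validate_verdict") == "PASS" and b.get("review_verdict") == "APPROVE"
    )
    failed = sum(1 for b in branches.values() if b.get("status") == "failed")
    return {"total_branches": len(branches), "passed": passed, "failed": failed}

branches = {
    "B01": {"refactor_id": "REFACTOR-001", "validate_verdict": "PASS", "review_verdict": "APPROVE"},
    "B02": {"refactor_id": "REFACTOR-002", "validate_verdict": "PASS", "review_verdict": "APPROVE"},
    "B03": {"status": "failed", "reason": "REFACTOR failed"},
}
```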

### handleRevise

Triggered by user "revise <TASK-ID> [feedback]" command.

1. Parse target task ID and optional feedback
2. Detect branch/pipeline from task ID pattern
3. Create revision task with same role but updated requirements, scoped to branch
4. Set blockedBy to empty (immediate execution)
5. Cascade: create new downstream tasks within same branch only
6. Proceed to handleSpawnNext

### handleFeedback

Triggered by user "feedback <text>" command.

1. Analyze feedback text to determine impact scope
2. Identify which pipeline stage, role, and branch/pipeline should handle the feedback
3. Create targeted revision task (scoped to branch if applicable)
4. Proceed to handleSpawnNext

## Phase 4: State Persistence

After every handler execution:

1. Update session.json with current state (active tasks, fix cycle counts per branch, last event, resolved parallel_mode)
2. Verify task list consistency
3. STOP and wait for next event

---

**New file** (291 lines): `.claude/skills/team-arch-opt/roles/coordinator/role.md`

# Coordinator - Architecture Optimization Team

**Role**: coordinator
**Type**: Orchestrator
**Team**: arch-opt

Orchestrates the architecture optimization pipeline: manages task chains, spawns team-worker agents, handles review-fix cycles, and drives the pipeline to completion.

## Boundaries

### MUST

- Use `team-worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (blockedBy)
- Stop after spawning workers -- wait for callbacks
- Handle review-fix cycles with max 3 iterations
- Execute completion action in Phase 5

### MUST NOT

- Implement domain logic (analyzing, refactoring, reviewing) -- workers handle this
- Spawn workers without creating tasks first
- Skip checkpoints when configured
- Force-advance pipeline past failed review/validation
- Modify source code directly -- delegate to refactorer worker

---

## Command Execution Protocol

When the coordinator needs to execute a command (dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding

Example:
```
Phase 3 needs task dispatch
-> Read roles/coordinator/commands/dispatch.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Chain Creation)
-> Execute Phase 4 (Validation)
-> Continue to coordinator Phase 4
```

---

## Entry Router

When the coordinator is invoked, detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains role tag [analyzer], [designer], [refactorer], [validator], [reviewer] | -> handleCallback |
| Branch callback | Message contains branch tag [refactorer-B01], [validator-B02], etc. | -> handleCallback (branch-aware) |
| Pipeline callback | Message contains pipeline tag [analyzer-A], [refactorer-B], etc. | -> handleCallback (pipeline-aware) |
| Consensus blocked | Message contains "consensus_blocked" | -> handleConsensus |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Pipeline complete | All tasks have status "completed" | -> handleComplete |
| Interrupted session | Active/paused session exists | -> Phase 0 (Resume Check) |
| New session | None of above | -> Phase 1 (Requirement Clarification) |

For callback/check/resume/complete: load `commands/monitor.md` and execute the matched handler, then STOP.

### Router Implementation

1. **Load session context** (if exists):
   - Scan `.workflow/.team/ARCH-OPT-*/team-session.json` for active/paused sessions
   - If found, extract session folder path, status, and `parallel_mode`
2. **Parse $ARGUMENTS** for detection keywords:
   - Check for role name tags in message content (including branch variants like `[refactorer-B01]`)
   - Check for "check", "status", "resume", "continue" keywords
   - Check for "consensus_blocked" signal
3. **Route to handler**:
   - For monitor handlers: Read `commands/monitor.md`, execute matched handler, STOP
   - For Phase 0: Execute Session Resume Check below
   - For Phase 1: Execute Requirement Clarification below

---

## Phase 0: Session Resume Check

Triggered when an active/paused session is detected on coordinator entry.

1. Load session.json from detected session folder
2. Audit task list:

```
TaskList()
```

3. Reconcile session state vs task status:

| Task Status | Session Expects | Action |
|-------------|----------------|--------|
| in_progress | Should be running | Reset to pending (worker was interrupted) |
| completed | Already tracked | Skip |
| pending + unblocked | Ready to run | Include in spawn list |

4. Rebuild team if not active:

```
TeamCreate({ team_name: "arch-opt" })
```

5. Spawn workers for ready tasks -> Phase 4 coordination loop
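
The reconciliation table can be sketched as follows (task-record shape is assumed: `id`, `status`, `blockedBy`):

```python
def reconcile(tasks):
    """Classify tasks on resume: reset interrupted ones, collect ready ones.

    Returns (ids_to_reset, ids_to_spawn).
    """
    done = {t["id"] for t in tasks if t["status"] == "completed"}
    reset, spawn = [], []
    for t in tasks:
        if t["status"] == "in_progress":
            reset.append(t["id"])  # worker was interrupted; re-run from pending
        elif t["status"] == "pending" and all(b in done for b in t["blockedBy"]):
            spawn.append(t["id"])  # all blockers finished: ready to run
    return reset, spawn

tasks = [
    {"id": "ANALYZE-001", "status": "completed", "blockedBy": []},
    {"id": "DESIGN-001", "status": "in_progress", "blockedBy": ["ANALYZE-001"]},
    {"id": "REFACTOR-001", "status": "pending", "blockedBy": ["DESIGN-001"]},
]
```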

---

## Phase 1: Requirement Clarification

1. Parse user task description from $ARGUMENTS
2. **Parse parallel mode flags**:

| Flag | Value | Default |
|------|-------|---------|
| `--parallel-mode` | `single`, `fan-out`, `independent`, `auto` | `auto` |
| `--max-branches` | integer 1-10 | 5 (from config) |

   - For `independent` mode: remaining positional arguments after flags are the `independent_targets` array
   - Example: `--parallel-mode=independent "refactor auth" "refactor API"` -> targets = ["refactor auth", "refactor API"]
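
A minimal sketch of this flag parsing (assuming `--flag=value` syntax as in the example above):

```python
import shlex

def parse_args(arg_string):
    """Parse --parallel-mode / --max-branches; leftover tokens are targets."""
    mode, max_branches, targets = "auto", 5, []
    for tok in shlex.split(arg_string):
        if tok.startswith("--parallel-mode="):
            mode = tok.split("=", 1)[1]
        elif tok.startswith("--max-branches="):
            max_branches = int(tok.split("=", 1)[1])
        else:
            targets.append(tok)
    return {
        "parallel_mode": mode,
        "max_branches": max_branches,
        # positional targets only matter in independent mode
        "independent_targets": targets if mode == "independent" else [],
    }
```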

3. Identify architecture optimization target:

| Signal | Target |
|--------|--------|
| Specific file/module mentioned | Scoped refactoring |
| "coupling", "dependency", "structure", generic | Full architecture analysis |
| Specific issue mentioned (cycles, God Class, duplication) | Targeted issue resolution |
| Multiple quoted targets (independent mode) | Per-target scoped refactoring |

4. If target is unclear, ask for clarification:

```
AskUserQuestion({
questions: [{
question: "What should I analyze and refactor? Provide a target scope or describe the architecture issue.",
header: "Scope"
}]
})
```

5. Record refactoring requirement with scope, target issues, parallel_mode, and max_branches

---

## Phase 2: Session & Team Setup

1. Create session directory:

```
Bash("mkdir -p .workflow/<session-id>/artifacts .workflow/<session-id>/explorations .workflow/<session-id>/wisdom .workflow/<session-id>/discussions")
```

For independent mode, also create pipeline subdirectories:
```
// For each target in independent_targets
Bash("mkdir -p .workflow/<session-id>/artifacts/pipelines/A .workflow/<session-id>/artifacts/pipelines/B ...")
```

2. Write session.json with extended fields:

```json
{
  "status": "active",
  "team_name": "arch-opt",
  "requirement": "<requirement>",
  "timestamp": "<ISO-8601>",
  "parallel_mode": "<auto|single|fan-out|independent>",
  "max_branches": 5,
  "branches": [],
  "independent_targets": [],
  "fix_cycles": {}
}
```

- `parallel_mode`: from Phase 1 parsing (default: "auto")
- `max_branches`: from Phase 1 parsing (default: 5)
- `branches`: populated at CP-2.5 for fan-out mode (e.g., ["B01", "B02", "B03"])
- `independent_targets`: populated for independent mode (e.g., ["refactor auth", "refactor API"])
- `fix_cycles`: populated per-branch/pipeline as fix cycles occur
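
Building that initial session.json payload can be sketched as (helper name is illustrative):

```python
import datetime
import json

def initial_session(requirement, parallel_mode="auto", max_branches=5, targets=None):
    """Build the extended session.json payload described above."""
    return {
        "status": "active",
        "team_name": "arch-opt",
        "requirement": requirement,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "parallel_mode": parallel_mode,
        "max_branches": max_branches,
        "branches": [],                      # filled at CP-2.5 in fan-out mode
        "independent_targets": targets or [],
        "fix_cycles": {},                    # per-branch/pipeline counters
    }

payload = json.dumps(initial_session("reduce coupling in auth"), indent=2)
```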

3. Initialize shared-memory.json:

```
Write("<session>/wisdom/shared-memory.json", { "session_id": "<session-id>", "requirement": "<requirement>", "parallel_mode": "<mode>" })
```

4. Create team:

```
TeamCreate({ team_name: "arch-opt" })
```

---

## Phase 3: Task Chain Creation

Execute `commands/dispatch.md` inline (Command Execution Protocol):

1. Read `roles/coordinator/commands/dispatch.md`
2. Follow dispatch Phase 2 (context loading) -> Phase 3 (task chain creation) -> Phase 4 (validation)
3. Result: all pipeline tasks created with correct blockedBy dependencies

---

## Phase 4: Spawn & Coordination Loop

### Initial Spawn

Find the first unblocked task and spawn its worker:

```
Task({
subagent_type: "team-worker",
description: "Spawn analyzer worker",
team_name: "arch-opt",
name: "analyzer",
run_in_background: true,
prompt: `## Role Assignment
role: analyzer
role_spec: .claude/skills/team-arch-opt/role-specs/analyzer.md
session: <session-folder>
session_id: <session-id>
team_name: arch-opt
requirement: <requirement>
inner_loop: false

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 -> role-spec Phase 2-4 -> built-in Phase 5.`
})
```

**STOP** after spawning. Wait for worker callback.

### Coordination (via monitor.md handlers)

All subsequent coordination is handled by `commands/monitor.md` handlers triggered by worker callbacks:

- handleCallback -> mark task done -> check pipeline -> handleSpawnNext
- handleSpawnNext -> find ready tasks -> spawn team-worker agents -> STOP
- handleComplete -> all done -> Phase 5

---

## Phase 5: Report + Completion Action

1. Load session state -> count completed tasks, calculate duration
2. List deliverables:

| Deliverable | Path |
|-------------|------|
| Architecture Baseline | <session>/artifacts/architecture-baseline.json |
| Architecture Report | <session>/artifacts/architecture-report.md |
| Refactoring Plan | <session>/artifacts/refactoring-plan.md |
| Validation Results | <session>/artifacts/validation-results.json |
| Review Report | <session>/artifacts/review-report.md |

3. Include discussion summaries if discuss rounds were used
4. Output pipeline summary: task count, duration, improvement metrics from validation results
5. **Completion Action** (interactive):

```
AskUserQuestion({
questions: [{
question: "Team pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
```

6. Handle user choice:

| Choice | Steps |
|--------|-------|
| Archive & Clean | TaskList -> verify all completed -> update session status="completed" -> TeamDelete("arch-opt") -> output final summary with artifact paths |
| Keep Active | Update session status="paused" -> output: "Session paused. Resume with: Skill(skill='team-arch-opt', args='resume')" |
| Export Results | AskUserQuestion for target directory -> copy all artifacts -> Archive & Clean flow |

---

**New file** (264 lines): `.claude/skills/team-arch-opt/specs/team-config.json`

```json
{
  "version": "5.0.0",
  "team_name": "arch-opt",
  "team_display_name": "Architecture Optimization",
  "skill_name": "team-arch-opt",
  "skill_path": ".claude/skills/team-arch-opt/",
  "worker_agent": "team-worker",
  "pipeline_type": "Linear with Review-Fix Cycle (Parallel-Capable)",
  "completion_action": "interactive",
  "has_inline_discuss": true,
  "has_shared_explore": true,
  "has_checkpoint_feedback": true,
  "has_session_resume": true,

  "roles": [
    {
      "name": "coordinator",
      "type": "orchestrator",
      "description": "Orchestrates architecture optimization pipeline, manages task chains, handles review-fix cycles",
      "spec_path": "roles/coordinator/role.md",
      "tools": ["Task", "TaskCreate", "TaskList", "TaskGet", "TaskUpdate", "TeamCreate", "TeamDelete", "SendMessage", "AskUserQuestion", "Read", "Write", "Bash", "Glob", "Grep"]
    },
    {
      "name": "analyzer",
      "type": "orchestration",
      "description": "Analyzes architecture: dependency graphs, coupling/cohesion, layering violations, God Classes, dead code",
      "role_spec": "role-specs/analyzer.md",
      "inner_loop": false,
      "frontmatter": {
        "prefix": "ANALYZE",
        "inner_loop": false,
        "additional_prefixes": [],
        "discuss_rounds": [],
        "subagents": ["explore"],
        "message_types": {
          "success": "analyze_complete",
          "error": "error"
        }
      },
      "weight": 1,
      "tools": ["Read", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
    },
    {
      "name": "designer",
      "type": "orchestration",
      "description": "Designs refactoring strategies from architecture analysis, produces prioritized refactoring plan with discrete REFACTOR-IDs",
      "role_spec": "role-specs/designer.md",
      "inner_loop": false,
      "frontmatter": {
        "prefix": "DESIGN",
        "inner_loop": false,
        "additional_prefixes": [],
        "discuss_rounds": ["DISCUSS-REFACTOR"],
        "subagents": ["discuss"],
        "message_types": {
          "success": "design_complete",
          "error": "error"
        }
      },
      "weight": 2,
      "tools": ["Read", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
    },
    {
      "name": "refactorer",
      "type": "code_generation",
      "description": "Implements architecture refactoring changes following the design plan",
      "role_spec": "role-specs/refactorer.md",
      "inner_loop": true,
      "frontmatter": {
        "prefix": "REFACTOR",
        "inner_loop": true,
        "additional_prefixes": ["FIX"],
        "discuss_rounds": [],
        "subagents": ["explore"],
        "message_types": {
          "success": "refactor_complete",
          "error": "error",
          "fix": "fix_required"
        }
      },
      "weight": 3,
      "tools": ["Read", "Write", "Edit", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
    },
    {
      "name": "validator",
      "type": "validation",
      "description": "Validates refactoring: build checks, test suites, dependency metrics, API compatibility",
      "role_spec": "role-specs/validator.md",
      "inner_loop": false,
      "frontmatter": {
        "prefix": "VALIDATE",
        "inner_loop": false,
        "additional_prefixes": [],
        "discuss_rounds": [],
        "subagents": [],
        "message_types": {
          "success": "validate_complete",
          "error": "error",
          "fix": "fix_required"
        }
      },
      "weight": 4,
      "tools": ["Read", "Bash", "Glob", "Grep", "Task"]
    },
    {
      "name": "reviewer",
      "type": "read_only_analysis",
      "description": "Reviews refactoring code for correctness, pattern consistency, completeness, migration safety, and best practices",
      "role_spec": "role-specs/reviewer.md",
      "inner_loop": false,
      "frontmatter": {
        "prefix": "REVIEW",
        "inner_loop": false,
        "additional_prefixes": ["QUALITY"],
        "discuss_rounds": ["DISCUSS-REVIEW"],
        "subagents": ["discuss"],
        "message_types": {
          "success": "review_complete",
          "error": "error",
          "fix": "fix_required"
        }
      },
      "weight": 4,
      "tools": ["Read", "Bash", "Glob", "Grep", "Task", "mcp__ace-tool__search_context"]
    }
  ],

  "parallel_config": {
    "modes": ["single", "fan-out", "independent", "auto"],
    "default_mode": "auto",
    "max_branches": 5,
    "auto_mode_rules": {
      "single": "refactoring_count <= 2",
      "fan-out": "refactoring_count >= 3"
    }
  },

  "pipeline": {
    "stages": [
      {
        "stage": 1,
        "name": "Architecture Analysis",
        "roles": ["analyzer"],
        "blockedBy": [],
        "fast_advance": true
      },
      {
        "stage": 2,
        "name": "Refactoring Design",
        "roles": ["designer"],
        "blockedBy": ["ANALYZE"],
        "fast_advance": true
      },
      {
        "stage": 3,
        "name": "Code Refactoring",
        "roles": ["refactorer"],
        "blockedBy": ["DESIGN"],
        "fast_advance": false
      },
      {
        "stage": 4,
        "name": "Validate & Review",
        "roles": ["validator", "reviewer"],
        "blockedBy": ["REFACTOR"],
        "fast_advance": false,
        "parallel": true,
        "review_fix_cycle": {
          "trigger": "REVIEW or VALIDATE finds issues",
          "target_stage": 3,
          "max_iterations": 3
        }
      }
    ],
    "parallel_pipelines": {
      "fan-out": {
        "shared_stages": [1, 2],
        "branch_stages": [3, 4],
        "branch_prefix": "B",
        "review_fix_cycle": {
          "scope": "per_branch",
          "max_iterations": 3
        }
      },
      "independent": {
        "pipeline_prefix_chars": "ABCDEFGHIJ",
        "review_fix_cycle": {
          "scope": "per_pipeline",
          "max_iterations": 3
        }
      }
    },
    "diagram": "See pipeline-diagram section"
  },

  "subagents": [
    {
      "name": "explore",
      "agent_type": "cli-explore-agent",
      "callable_by": ["analyzer", "refactorer"],
      "purpose": "Shared codebase exploration for architecture-critical structures, dependency graphs, and module boundaries",
      "has_cache": true,
      "cache_domain": "explorations"
    },
    {
      "name": "discuss",
      "agent_type": "cli-discuss-agent",
      "callable_by": ["designer", "reviewer"],
      "purpose": "Multi-perspective discussion for refactoring approaches and review findings",
      "has_cache": false
    }
  ],

  "shared_resources": [
    {
      "name": "Architecture Baseline",
      "path": "<session>/artifacts/architecture-baseline.json",
      "usage": "Pre-refactoring architecture metrics for comparison",
      "scope": "shared (fan-out) / per-pipeline (independent)"
    },
    {
      "name": "Architecture Report",
      "path": "<session>/artifacts/architecture-report.md",
      "usage": "Analyzer output consumed by designer",
      "scope": "shared (fan-out) / per-pipeline (independent)"
    },
    {
      "name": "Refactoring Plan",
      "path": "<session>/artifacts/refactoring-plan.md",
      "usage": "Designer output consumed by refactorer",
      "scope": "shared (fan-out) / per-pipeline (independent)"
    },
    {
      "name": "Validation Results",
      "path": "<session>/artifacts/validation-results.json",
      "usage": "Validator output consumed by reviewer",
      "scope": "per-branch (fan-out) / per-pipeline (independent)"
    }
  ],

  "shared_memory_namespacing": {
    "single": {
      "analyzer": "analyzer",
      "designer": "designer",
      "refactorer": "refactorer",
      "validator": "validator",
      "reviewer": "reviewer"
    },
    "fan-out": {
      "analyzer": "analyzer",
      "designer": "designer",
      "refactorer": "refactorer.B{NN}",
      "validator": "validator.B{NN}",
      "reviewer": "reviewer.B{NN}"
    },
    "independent": {
      "analyzer": "analyzer.{P}",
      "designer": "designer.{P}",
      "refactorer": "refactorer.{P}",
      "validator": "validator.{P}",
      "reviewer": "reviewer.{P}"
    }
  }
}
```

---

**New file** (89 lines): `.claude/skills/team-arch-opt/subagents/discuss-subagent.md`

# Discuss Subagent

Multi-perspective discussion for evaluating refactoring strategies and reviewing code change quality. Used by designer (DISCUSS-REFACTOR) and reviewer (DISCUSS-REVIEW) when complex trade-offs require multi-angle analysis.

## Design Rationale

Complex refactoring decisions (e.g., choosing between dependency inversion vs mediator pattern to break a cycle) and nuanced code review findings (e.g., evaluating whether a temporary coupling increase is acceptable) benefit from structured multi-perspective analysis. This subagent provides that analysis inline without spawning additional team members.

## Invocation

Called by designer or reviewer after their primary analysis, when complexity warrants multi-perspective evaluation:

```
Task({
subagent_type: "cli-discuss-agent",
run_in_background: false,
description: "Discuss <round-id>: <topic> for architecture optimization",
prompt: `Conduct a multi-perspective discussion on the following topic.

Round: <round-id>
Topic: <discussion-topic>
Session: <session-folder>

Context:
<relevant-context-from-calling-role>

Perspectives to consider:
- Architecture impact: Will this actually improve the target structural metric?
- Risk assessment: What could break? Dangling references? Behavioral changes? Migration risk?
- Maintainability: Is the refactored code more understandable and maintainable?
- Alternative approaches: Are there simpler or safer ways to achieve the same structural improvement?

Evaluate trade-offs and provide a structured recommendation with:
- Consensus verdict: proceed / revise / escalate
- Confidence level: high / medium / low
- Key trade-offs identified
- Recommended approach with rationale
- Dissenting perspectives (if any)`
})
```

## Round Configuration

| Round | Artifact | Parameters | Calling Role |
|-------|----------|------------|-------------|
| DISCUSS-REFACTOR | <session>/discussions/DISCUSS-REFACTOR.md | Refactoring strategy trade-offs | designer |
| DISCUSS-REVIEW | <session>/discussions/DISCUSS-REVIEW.md | Code review finding validation | reviewer |

## Integration with Calling Role

The calling role is responsible for:

1. **Before calling**: Complete primary analysis, identify the specific trade-off or finding needing discussion
2. **Calling**: Invoke subagent with round ID, topic, and relevant context
3. **After calling**:

| Result | Action |
|--------|--------|
| consensus_reached (proceed) | Incorporate recommendation into output, continue |
| consensus_reached (revise) | Adjust findings/strategy based on discussion insights |
| consensus_blocked (HIGH) | Report to coordinator via message with severity |
| consensus_blocked (MEDIUM) | Include in output with recommendation for revision |
| consensus_blocked (LOW) | Note in output, proceed with original assessment |
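
A minimal sketch of the result-handling table above, from the calling role's side (the `severity` field accompanying an escalate verdict is an assumption; the schema below only specifies `verdict`):

```python
def act_on_discussion(result):
    """Decide the calling role's next step from a discuss-subagent result."""
    verdict = result["verdict"]
    if verdict == "proceed":
        return "incorporate"          # fold recommendation into output
    if verdict == "revise":
        return "adjust"               # rework findings/strategy first
    # escalate: severity (assumed field) decides who needs to hear about it
    severity = result.get("severity", "LOW")
    if severity == "HIGH":
        return "report_to_coordinator"
    if severity == "MEDIUM":
        return "flag_for_revision"
    return "note_and_proceed"
```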

## Output Schema

```json
{
  "round_id": "<DISCUSS-REFACTOR|DISCUSS-REVIEW>",
  "topic": "<discussion-topic>",
  "verdict": "<proceed|revise|escalate>",
  "confidence": "<high|medium|low>",
  "trade_offs": [
    { "dimension": "<architecture|risk|maintainability>", "pro": "<benefit>", "con": "<cost>" }
  ],
  "recommendation": "<recommended-approach>",
  "rationale": "<reasoning>",
  "dissenting_views": ["<alternative-perspective>"]
}
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Single perspective analysis fails | Continue with partial perspectives |
| All analyses fail | Return basic recommendation from calling role's primary analysis |
| Artifact not found | Return error immediately |
| Discussion inconclusive | Return "revise" verdict with low confidence |

---

**New file** (113 lines): `.claude/skills/team-arch-opt/subagents/explore-subagent.md`

# Explore Subagent

Shared codebase exploration for discovering architecture-critical structures, dependency graphs, module boundaries, and layer organization. Results are cached to avoid redundant exploration across analyzer and refactorer roles.

## Design Rationale

Codebase exploration is a read-only operation shared between analyzer (mapping structural issues) and refactorer (understanding implementation context). Caching explorations avoids redundant work when the refactorer re-explores structures the analyzer already mapped.

## Invocation

Called by analyzer or refactorer when codebase context is needed for architecture analysis or implementation:

```
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: "Explore codebase for architecture-critical structures in <target-scope>",
prompt: `Explore the codebase to identify architecture-critical structures.

Target scope: <target-scope>
Session: <session-folder>
Focus: <exploration-focus>

Tasks:
1. Map the module structure, entry points, and layer boundaries within scope
2. Build dependency graph (import/require relationships between modules)
3. Identify architectural patterns (layering, dependency injection, event-driven, plugin architecture)
4. Note any existing abstractions, interfaces, and module boundaries
5. List key files with their roles, layer assignment, and dependency relationships

Output a structured exploration report with:
- Module map (key files, their relationships, and layer assignments)
- Dependency graph (directed edges between modules, cycle indicators)
- Layer structure (identified layers and their boundaries)
- Existing architectural patterns found
- Architecture-relevant configuration (path aliases, barrel exports, module boundaries)`
})
```

## Cache Mechanism

### Cache Index Schema

`<session-folder>/explorations/cache-index.json`:

```json
{
  "entries": [
    {
      "key": "<scope-hash>",
      "scope": "<target-scope>",
      "focus": "<exploration-focus>",
      "timestamp": "<ISO-8601>",
      "result_file": "<hash>.md"
    }
  ]
}
```

### Cache Lookup Rules

| Condition | Action |
|-----------|--------|
| Exact scope+focus match exists | Return cached result from <hash>.md |
| No match | Execute subagent, cache result to <hash>.md, update index |
| Cache file missing but index has entry | Remove stale entry, re-execute |
| Cache older than current session | Use cached (explorations are stable within session) |
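
The lookup rules above can be sketched as follows (the hashing scheme for the cache key is an assumption; the spec only says the key is a scope hash):

```python
import hashlib
import json
import os

def cache_key(scope, focus):
    """Hash scope+focus into a cache-index key (hashing scheme assumed)."""
    return hashlib.sha256(f"{scope}\n{focus}".encode()).hexdigest()[:16]

def lookup(index_dir, scope, focus):
    """Return the cached result text, or None on a miss (caller then explores)."""
    index_path = os.path.join(index_dir, "cache-index.json")
    if not os.path.exists(index_path):
        return None
    with open(index_path) as f:
        index = json.load(f)
    key = cache_key(scope, focus)
    for entry in index.get("entries", []):
        if entry["key"] == key:
            result_path = os.path.join(index_dir, entry["result_file"])
            if not os.path.exists(result_path):
                return None  # stale entry: caller drops it and re-executes
            with open(result_path) as f:
                return f.read()
    return None
```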
|
||||
|
||||
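The lookup rules above can be sketched as a single function over the index schema. The `fileExists` callback is an assumption used to keep the sketch filesystem-agnostic:

```typescript
interface CacheEntry { key: string; scope: string; focus: string; timestamp: string; result_file: string; }
interface CacheIndex { entries: CacheEntry[]; }

// Returns the cached result file to read, or null when the subagent must run.
export function lookup(
  index: CacheIndex, scope: string, focus: string,
  fileExists: (f: string) => boolean
): string | null {
  const hit = index.entries.find(e => e.scope === scope && e.focus === focus);
  if (!hit) return null;                        // no match -> execute subagent
  if (!fileExists(hit.result_file)) {           // index entry is stale
    index.entries = index.entries.filter(e => e !== hit);
    return null;                                // re-execute after pruning
  }
  return hit.result_file;                       // exact match -> reuse
}
```

On a `null` return the caller runs the subagent, writes `<hash>.md`, and appends a fresh entry.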
## Integration with Calling Role

The calling role is responsible for:

1. **Before calling**: Determine target scope and exploration focus
2. **Calling**: Check cache first, invoke subagent only on cache miss
3. **After calling**:

| Result | Action |
|--------|--------|
| Exploration successful | Use findings to inform analysis/implementation |
| Exploration partial | Use available findings, note gaps |
| Exploration failed | Proceed without exploration context, use direct file reading |

## Output Schema

```json
{
  "scope": "<target-scope>",
  "module_map": [
    { "file": "<path>", "role": "<description>", "layer": "<presentation|domain|data|infra>", "dependency_graph": { "imports": ["<path>"], "imported_by": ["<path>"] } }
  ],
  "dependency_graph": {
    "nodes": ["<module-path>"],
    "edges": [{ "from": "<path>", "to": "<path>", "type": "<import|re-export|dynamic>" }],
    "cycles": [["<path-a>", "<path-b>", "<path-a>"]]
  },
  "layer_structure": [
    { "layer": "<name>", "modules": ["<path>"], "violations": ["<description>"] }
  ],
  "existing_patterns": [
    { "type": "<pattern>", "location": "<file:line>", "description": "<what>" }
  ],
  "investigation_targets": ["<file-or-pattern>"]
}
```

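The `cycles` field implies cycle detection over `dependency_graph.edges`. A minimal depth-first sketch of that check (detection only; recovering the cycle paths themselves is left out for brevity):

```typescript
type Edge = { from: string; to: string };

// DFS with a "visiting" state: a back edge to a node still on the
// recursion stack means the import graph contains a cycle.
export function hasCycle(nodes: string[], edges: Edge[]): boolean {
  const adj = new Map<string, string[]>(nodes.map(n => [n, []]));
  for (const e of edges) adj.get(e.from)?.push(e.to);
  const state = new Map<string, "visiting" | "done">();
  const visit = (n: string): boolean => {
    if (state.get(n) === "visiting") return true;  // back edge found
    if (state.get(n) === "done") return false;
    state.set(n, "visiting");
    for (const next of adj.get(n) ?? []) {
      if (visit(next)) return true;
    }
    state.set(n, "done");
    return false;
  };
  return nodes.some(n => visit(n));
}
```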
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Single exploration angle fails | Continue with partial results |
| All exploration fails | Return basic result from direct file listing |
| Target scope not found | Return error immediately |
| Cache corrupt | Clear cache-index.json, re-execute |

@@ -58,6 +58,27 @@ Each capability produces default output artifacts:

| tester | Test results | `<session>/artifacts/test-report.md` |
| planner | Execution plan | `<session>/artifacts/execution-plan.md` |

### Step 2.5: Key File Inference

For each task, infer relevant files based on capability type and task keywords:

| Capability | File Inference Strategy |
|------------|------------------------|
| researcher | Extract domain keywords → map to likely directories (e.g., "auth" → `src/auth/**`, `middleware/auth.ts`) |
| developer | Extract feature/module keywords → map to source files (e.g., "payment" → `src/payments/**`, `types/payment.ts`) |
| designer | Look for architecture/config keywords → map to config/schema files |
| analyst | Extract target keywords → map to files under analysis |
| tester | Extract test target keywords → map to source + test files |
| writer | Extract documentation target → map to relevant source files for context |
| planner | No specific files (planning is abstract) |

**Inference rules:**
- Extract nouns and verbs from task description
- Match against common directory patterns (src/, lib/, components/, services/, utils/)
- Include related type definition files (types/, *.d.ts)
- For "fix bug" tasks, include error-prone areas (error handlers, validation)
- For "implement feature" tasks, include similar existing features as reference

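The inference rules above can be sketched as a keyword-to-path lookup. The `DIRECTORY_HINTS` table here is hypothetical — taken from the table's own examples; a real implementation would derive candidates from the project's actual directory listing:

```typescript
// Illustrative keyword -> path hints, mirroring the examples in the table.
const DIRECTORY_HINTS: Record<string, string[]> = {
  auth: ["src/auth/**", "middleware/auth.ts"],
  payment: ["src/payments/**", "types/payment.ts"],
};

// Extract lowercase word tokens from the task description and collect
// every path hint whose keyword appears, de-duplicated.
export function inferKeyFiles(taskDescription: string): string[] {
  const words = taskDescription.toLowerCase().match(/[a-z]+/g) ?? [];
  const files = words.flatMap(w => DIRECTORY_HINTS[w] ?? []);
  return [...new Set(files)];
}
```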
### Step 3: Dependency Graph Construction

Build a DAG of work streams using natural ordering tiers:

@@ -90,16 +111,26 @@ Apply merging rules to reduce role count (cap at 5).

### Step 6: Role-Spec Metadata Assignment

For each role, determine frontmatter fields:
For each role, determine frontmatter and generation hints:

| Field | Derivation |
|-------|------------|
| `prefix` | From capability prefix (e.g., RESEARCH, DRAFT, IMPL) |
| `inner_loop` | `true` if role has 2+ serial same-prefix tasks |
| `subagents` | Inferred from responsibility type: orchestration -> [explore], code-gen (docs) -> [explore], validation -> [] |
| `subagents` | Suggested, not mandatory — coordinator may adjust based on task needs |
| `pattern_hint` | Reference pattern name from role-spec-template (research/document/code/analysis/validation) — guides coordinator's Phase 2-4 composition, NOT a rigid template selector |
| `output_type` | `artifact` (new files in session/artifacts/) / `codebase` (modify existing project files) / `mixed` (both) — determines verification strategy in Behavioral Traits |
| `message_types.success` | `<prefix>_complete` |
| `message_types.error` | `error` |

**output_type derivation**:

| Task Signal | output_type | Example |
|-------------|-------------|---------|
| "write report", "analyze", "research" | `artifact` | New analysis-report.md in session |
| "update docs", "modify code", "fix bug" | `codebase` | Modify existing project files |
| "implement feature + write summary" | `mixed` | Code changes + implementation summary |

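The derivation table can be sketched as a classifier. The keyword lists are assumptions distilled from the table's examples, not an exhaustive specification:

```typescript
// Sketch of output_type derivation: artifact-producing and codebase-touching
// signals are matched independently; both present means "mixed".
export function deriveOutputType(task: string): "artifact" | "codebase" | "mixed" {
  const t = task.toLowerCase();
  const producesArtifact = /(write report|analyze|research|write summary)/.test(t);
  const touchesCodebase = /(update docs|modify code|fix bug|implement)/.test(t);
  if (producesArtifact && touchesCodebase) return "mixed";
  if (touchesCodebase) return "codebase";
  return "artifact";   // default: deliverable lands in <session>/artifacts/
}
```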
## Phase 4: Output

Write `<session-folder>/task-analysis.json`:

@@ -113,7 +144,22 @@ Write `<session-folder>/task-analysis.json`:

      "prefix": "RESEARCH",
      "responsibility_type": "orchestration",
      "tasks": [
        { "id": "RESEARCH-001", "description": "..." }
        {
          "id": "RESEARCH-001",
          "goal": "What this task achieves and why",
          "steps": [
            "step 1: specific action with clear verb",
            "step 2: specific action with clear verb",
            "step 3: specific action with clear verb"
          ],
          "key_files": [
            "src/path/to/relevant.ts",
            "src/path/to/other.ts"
          ],
          "upstream_artifacts": [],
          "success_criteria": "Measurable completion condition",
          "constraints": "Scope limits, focus areas"
        }
      ],
      "artifacts": ["research-findings.md"]
    }

@@ -132,6 +178,8 @@ Write `<session-folder>/task-analysis.json`:

      "inner_loop": false,
      "role_spec_metadata": {
        "subagents": ["explore"],
        "pattern_hint": "research",
        "output_type": "artifact",
        "message_types": {
          "success": "research_complete",
          "error": "error"

@@ -26,7 +26,21 @@ Create task chains from dynamic dependency graphs. Builds pipelines from the tas

TaskCreate({
  subject: "<PREFIX>-<NNN>",
  owner: "<role-name>",
  description: "<task description from task-analysis>\nSession: <session-folder>\nScope: <scope>\nInnerLoop: <true|false>\nRoleSpec: <session-folder>/role-specs/<role-name>.md",
  description: "PURPOSE: <goal> | Success: <success_criteria>
TASK:
- <step 1>
- <step 2>
- <step 3>
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <artifact-1.md>, <artifact-2.md>
- Key files: <file1>, <file2>
- Shared memory: <session>/shared-memory.json
EXPECTED: <deliverable path> + <quality criteria>
CONSTRAINTS: <scope limits>
---
InnerLoop: <true|false>
RoleSpec: <session-folder>/role-specs/<role-name>.md",
  blockedBy: [<dependency-list from graph>],
  status: "pending"
})

@@ -37,16 +51,34 @@ TaskCreate({

### Task Description Template

Every task description includes session path, inner loop flag, and role-spec path:
Every task description includes structured fields for clarity:

```
<task description>
Session: <session-folder>
Scope: <scope>
PURPOSE: <goal from task-analysis.json#tasks[].goal> | Success: <success_criteria from task-analysis.json#tasks[].success_criteria>
TASK:
- <step 1 from task-analysis.json#tasks[].steps[]>
- <step 2 from task-analysis.json#tasks[].steps[]>
- <step 3 from task-analysis.json#tasks[].steps[]>
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <comma-separated list from task-analysis.json#tasks[].upstream_artifacts[]>
- Key files: <comma-separated list from task-analysis.json#tasks[].key_files[]>
- Shared memory: <session>/shared-memory.json
EXPECTED: <artifact path from task-analysis.json#capabilities[].artifacts[]> + <quality criteria based on capability type>
CONSTRAINTS: <constraints from task-analysis.json#tasks[].constraints>
---
InnerLoop: <true|false>
RoleSpec: <session-folder>/role-specs/<role-name>.md
```

**Field Mapping**:
- `PURPOSE`: From `task-analysis.json#capabilities[].tasks[].goal` + `success_criteria`
- `TASK`: From `task-analysis.json#capabilities[].tasks[].steps[]`
- `CONTEXT.Upstream artifacts`: From `task-analysis.json#capabilities[].tasks[].upstream_artifacts[]`
- `CONTEXT.Key files`: From `task-analysis.json#capabilities[].tasks[].key_files[]`
- `EXPECTED`: From `task-analysis.json#capabilities[].artifacts[]` + quality criteria
- `CONSTRAINTS`: From `task-analysis.json#capabilities[].tasks[].constraints`

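The field mapping above can be sketched as a renderer from a task-analysis entry to the description string. Field names follow `task-analysis.json`; the session, expected-deliverable, and role-spec values are caller-supplied:

```typescript
interface TaskSpec {
  goal: string;
  steps: string[];
  key_files: string[];
  upstream_artifacts: string[];
  success_criteria: string;
  constraints: string;
}

// Renders the structured task description, one template field per line.
export function buildDescription(
  t: TaskSpec, session: string, expected: string,
  innerLoop: boolean, roleSpec: string
): string {
  return [
    `PURPOSE: ${t.goal} | Success: ${t.success_criteria}`,
    "TASK:",
    ...t.steps.map(s => `- ${s}`),
    "CONTEXT:",
    `- Session: ${session}`,
    `- Upstream artifacts: ${t.upstream_artifacts.join(", ")}`,
    `- Key files: ${t.key_files.join(", ")}`,
    `- Shared memory: ${session}/shared-memory.json`,
    `EXPECTED: ${expected}`,
    `CONSTRAINTS: ${t.constraints}`,
    "---",
    `InnerLoop: ${innerLoop}`,
    `RoleSpec: ${roleSpec}`,
  ].join("\n");
}
```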
### InnerLoop Flag Rules

| Condition | InnerLoop |

@@ -60,10 +60,12 @@ Receive callback from [<role>]
+- None completed -> STOP
```

**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext
**Fast-advance reconciliation**: A worker may have already spawned its successor via fast-advance. When processing any callback or resume:
1. Read recent `fast_advance` messages from team_msg (type="fast_advance")
2. For each fast_advance message: add the spawned successor to `active_workers` if not already present
3. Check if the expected next task is already `in_progress` (fast-advanced)
4. If yes -> skip spawning that task (already running)
5. If no -> normal handleSpawnNext

---

@@ -262,6 +264,13 @@ handleCallback / handleResume detects:
4. -> handleSpawnNext (will re-spawn the task normally)
```

### Fast-Advance State Sync

On every coordinator wake (handleCallback, handleResume, handleCheck):
1. Read team_msg entries with `type="fast_advance"` since last coordinator wake
2. For each entry: sync `active_workers` with the spawned successor
3. This ensures coordinator's state reflects fast-advance decisions even before the successor's callback arrives

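The sync step can be sketched as a pure merge over the message log. The message shape here is an assumption for illustration; the real `team_msg` payload may carry more fields:

```typescript
// Hypothetical minimal shape of a team_msg entry relevant to sync.
interface TeamMsg { type: string; spawned: string; }

// Merges fast_advance notifications into the coordinator's active_workers,
// skipping duplicates and unrelated message types.
export function syncActiveWorkers(activeWorkers: string[], messages: TeamMsg[]): string[] {
  const merged = new Set(activeWorkers);
  for (const m of messages) {
    if (m.type === "fast_advance") merged.add(m.spawned);
  }
  return [...merged];
}
```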
### Consensus-Blocked Handling

```

@@ -182,13 +182,15 @@ Regardless of complexity score or role count, coordinator MUST:

4. **Call TeamCreate** with team name derived from session ID

5. **Read `specs/role-spec-template.md`** + task-analysis.json
5. **Read `specs/role-spec-template.md`** for Behavioral Traits + Reference Patterns

6. **For each role in task-analysis.json#roles**:
   - Fill role-spec template with:
     - YAML frontmatter: role, prefix, inner_loop, subagents, message_types
     - Phase 2-4 content from responsibility type reference sections in template
     - Task-specific instructions from task description
   - Fill YAML frontmatter: role, prefix, inner_loop, subagents, message_types
   - **Compose Phase 2-4 content** (NOT copy from template):
     - Phase 2: Derive input sources and context loading steps from **task description + upstream dependencies**
     - Phase 3: Describe **execution goal** (WHAT to achieve) from task description — do NOT prescribe specific subagent or tool
     - Phase 4: Combine **Behavioral Traits** (from template) + **output_type** (from task analysis) to compose verification steps
     - Reference Patterns may guide phase structure, but task description determines specific content
   - Write generated role-spec to `<session>/role-specs/<role-name>.md`

7. **Register roles** in team-session.json#roles (with `role_spec` path instead of `role_file`)

@@ -63,233 +63,117 @@ message_types:

| `<placeholder>` notation | Use angle brackets for variable substitution |
| Reference subagents by name | team-worker resolves invocation from its delegation templates |

## Phase 2-4 Content by Responsibility Type
## Behavioral Traits

Select the matching section based on `responsibility_type` from task analysis.
All dynamically generated role-specs MUST embed these traits into Phase 4. Coordinator copies this section verbatim into every generated role-spec as a Phase 4 appendix.

### orchestration
**Design principle**: Constrain behavioral characteristics (accuracy, feedback, quality gates), NOT specific actions (which tool, which subagent, which path). Tasks are diverse — the coordinator composes task-specific Phase 2-3 instructions, while these traits ensure execution quality regardless of task type.

**Phase 2: Context Assessment**
### Accuracy — outputs must be verifiable

- Files claimed as **created** → Read to confirm file exists and has content
- Files claimed as **modified** → Read to confirm content actually changed
- Analysis claimed as **complete** → artifact file exists in `<session>/artifacts/`

### Feedback Contract — completion report must include evidence

Phase 4 must produce a verification summary with these fields:

| Field | When Required | Content |
|-------|---------------|---------|
| `files_produced` | New files created | Path list |
| `files_modified` | Existing files changed | Path + before/after line count |
| `artifacts_written` | Always | Paths in `<session>/artifacts/` |
| `verification_method` | Always | How verified: Read confirm / syntax check / diff |

### Quality Gate — verify before reporting complete

- Phase 4 MUST verify Phase 3's **actual output** (not planned output)
- Verification fails → retry Phase 3 (max 2 retries)
- Still fails → report `partial_completion` with details, NOT `completed`
- Update `shared-memory.json` with key findings after verification passes

### Error Protocol

- Primary approach fails → try alternative (different subagent / different tool)
- 2 retries exhausted → escalate to coordinator with failure details
- NEVER: skip verification and report completed

---

## Reference Patterns

Coordinator MAY reference these patterns when composing Phase 2-4 content for a role-spec. These are **structural guidance, not mandatory templates**. The task description determines specific behavior — patterns only suggest common phase structures.

### Research / Exploration

- Phase 2: Define exploration scope + load prior knowledge from shared-memory and wisdom
- Phase 3: Explore via subagents, direct tool calls, or codebase search — approach chosen by agent
- Phase 4: Verify findings documented (Behavioral Traits) + update shared-memory

### Document / Content

- Phase 2: Load upstream artifacts + read target files (if modifying existing docs)
- Phase 3: Create new documents OR modify existing documents — determined by task, not template
- Phase 4: Verify documents exist with expected content (Behavioral Traits) + update shared-memory

### Code Implementation

- Phase 2: Load design/spec artifacts from upstream
- Phase 3: Implement code changes — subagent choice and approach determined by task complexity
- Phase 4: Syntax check + file verification (Behavioral Traits) + update shared-memory

### Analysis / Audit

- Phase 2: Load analysis targets (artifacts or source files)
- Phase 3: Multi-dimension analysis — perspectives and depth determined by task
- Phase 4: Verify report exists + severity classification (Behavioral Traits) + update shared-memory

### Validation / Testing

- Phase 2: Detect test framework + identify changed files from upstream
- Phase 3: Run test-fix cycle — iteration count and strategy determined by task
- Phase 4: Verify pass rate + coverage (Behavioral Traits) + update shared-memory

---

## Knowledge Transfer Protocol

How context flows between roles. Coordinator MUST reference this when composing Phase 2 of any role-spec.

### Transfer Channels

| Channel | Scope | Mechanism | When to Use |
|---------|-------|-----------|-------------|
| **Artifacts** | Producer -> Consumer | Write to `<session>/artifacts/<name>.md`, consumer reads in Phase 2 | Structured deliverables (reports, plans, specs) |
| **shared-memory.json** | Cross-role | Read-merge-write `<session>/shared-memory.json` | Key findings, decisions, metadata (small, structured data) |
| **Wisdom** | Cross-task | Append to `<session>/wisdom/{learnings,decisions,conventions,issues}.md` | Patterns, conventions, risks discovered during execution |
| **context_accumulator** | Intra-role (inner loop) | In-memory array, passed to each subsequent task in same-prefix loop | Prior task summaries within same role's inner loop |
| **Exploration cache** | Cross-role | `<session>/explorations/cache-index.json` + per-angle JSON | Codebase discovery results, prevents duplicate exploration |

### Phase 2 Context Loading (role-spec must specify)

Every generated role-spec Phase 2 MUST declare which upstream sources to load:

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Shared memory | <session>/shared-memory.json | No |
| Prior artifacts | <session>/artifacts/ | No |
| Wisdom | <session>/wisdom/ | No |

Loading steps:
1. Extract session path from task description
2. Read shared-memory.json for cross-role context
3. Read prior artifacts (if any from upstream tasks)
2. Read upstream artifacts: <list which artifacts from which upstream role>
3. Read shared-memory.json for cross-role decisions
4. Load wisdom files for accumulated knowledge
5. Optionally call explore subagent for codebase context
5. For inner_loop roles: load context_accumulator from prior tasks
6. Check exploration cache before running new explorations
```

**Phase 3: Subagent Execution**
### shared-memory.json Usage Convention

```
Delegate to appropriate subagent based on task:

Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "<task-type> for <task-id>",
  prompt: "## Task
- <task description>
- Session: <session-folder>
## Context
<prior artifacts + shared memory + explore results>
## Expected Output
Write artifact to: <session>/artifacts/<artifact-name>.md
Return JSON summary: { artifact_path, summary, key_decisions[], warnings[] }"
})
```

**Phase 4: Result Aggregation**

```
1. Verify subagent output artifact exists
2. Read artifact, validate structure/completeness
3. Update shared-memory.json with key findings
4. Write insights to wisdom/ files
```

### code-gen (docs)

**Phase 2: Load Prior Context**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Prior artifacts | <session>/artifacts/ from upstream | Conditional |
| Shared memory | <session>/shared-memory.json | No |

Loading steps:
1. Extract session path from task description
2. Read upstream artifacts
3. Read shared-memory.json for cross-role context
```

**Phase 3: Document Generation**

```
Task({
  subagent_type: "universal-executor",
  run_in_background: false,
  description: "Generate <doc-type> for <task-id>",
  prompt: "## Task
- Generate: <document type>
- Session: <session-folder>
## Prior Context
<upstream artifacts + shared memory>
## Expected Output
Write document to: <session>/artifacts/<doc-name>.md
Return JSON: { artifact_path, summary, key_decisions[], warnings[] }"
})
```

**Phase 4: Structure Validation**

```
1. Verify document artifact exists
2. Check document has expected sections
3. Validate no placeholder text remains
4. Update shared-memory.json with document metadata
```

### code-gen (code)

**Phase 2: Load Plan/Specs**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Plan/design artifacts | <session>/artifacts/ | Conditional |
| Shared memory | <session>/shared-memory.json | No |

Loading steps:
1. Extract session path from task description
2. Read plan/design artifacts from upstream
3. Load shared-memory.json for implementation context
```

**Phase 3: Code Implementation**

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement <task-id>",
  prompt: "## Task
- <implementation description>
- Session: <session-folder>
## Plan/Design Context
<upstream artifacts>
## Expected Output
Implement code changes.
Write summary to: <session>/artifacts/implementation-summary.md
Return JSON: { artifact_path, summary, files_changed[], warnings[] }"
})
```

**Phase 4: Syntax Validation**

```
1. Run syntax check (tsc --noEmit or equivalent)
2. Verify all planned files exist
3. If validation fails -> attempt auto-fix (max 2 attempts)
4. Write implementation summary to artifacts/
```

### read-only

**Phase 2: Target Loading**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Target artifacts/files | From task description or upstream | Yes |
| Shared memory | <session>/shared-memory.json | No |

Loading steps:
1. Extract session path and target files from task description
2. Read target artifacts or source files for analysis
3. Load shared-memory.json for context
```

**Phase 3: Multi-Dimension Analysis**

```
Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "Analyze <target> for <task-id>",
  prompt: "## Task
- Analyze: <target description>
- Dimensions: <analysis dimensions from coordinator>
- Session: <session-folder>
## Target Content
<artifact content or file content>
## Expected Output
Write report to: <session>/artifacts/analysis-report.md
Return JSON: { artifact_path, summary, findings[], severity_counts }"
})
```

**Phase 4: Severity Classification**

```
1. Verify analysis report exists
2. Classify findings by severity (Critical/High/Medium/Low)
3. Update shared-memory.json with key findings
4. Write issues to wisdom/issues.md
```

### validation

**Phase 2: Environment Detection**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Implementation artifacts | Upstream code changes | Yes |

Loading steps:
1. Detect test framework from project files
2. Get changed files from implementation
3. Identify test command and coverage tool
```

**Phase 3: Test-Fix Cycle**

```
Task({
  subagent_type: "test-fix-agent",
  run_in_background: false,
  description: "Test-fix for <task-id>",
  prompt: "## Task
- Run tests and fix failures
- Session: <session-folder>
- Max iterations: 5
## Changed Files
<from upstream implementation>
## Expected Output
Write report to: <session>/artifacts/test-report.md
Return JSON: { artifact_path, pass_rate, coverage, remaining_failures[] }"
})
```

**Phase 4: Result Analysis**

```
1. Check pass rate >= 95%
2. Check coverage meets threshold
3. Generate test report with pass/fail counts
4. Update shared-memory.json with test results
```
- **Read-merge-write**: Read current content -> merge new keys -> write back (NOT overwrite)
- **Namespaced keys**: Each role writes under its own namespace: `{ "<role_name>": { ... } }`
- **Small data only**: Key findings, decision summaries, metadata. NOT full documents
- **Example**:
  ```json
  {
    "researcher": { "key_findings": [...], "scope": "..." },
    "writer": { "documents_created": [...], "style_decisions": [...] },
    "developer": { "files_changed": [...], "patterns_used": [...] }
  }
  ```

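The read-merge-write convention with namespaced keys can be sketched as a pure merge step (the caller handles the actual read and write of `shared-memory.json`):

```typescript
type SharedMemory = Record<string, Record<string, unknown>>;

// Merge an update under one role's namespace: every other role's
// namespace is preserved, and the role's existing keys are kept
// unless the update overrides them (never a whole-file overwrite).
export function mergeSharedMemory(
  current: SharedMemory, role: string, update: Record<string, unknown>
): SharedMemory {
  return { ...current, [role]: { ...(current[role] ?? {}), ...update } };
}
```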
@@ -1,445 +0,0 @@
---
name: team-coordinate
description: Universal team coordination skill with dynamic role generation. Only coordinator is built-in -- all worker roles are generated at runtime based on task analysis. Beat/cadence model for orchestration. Triggers on "team coordinate".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Coordinate

Universal team coordination skill: analyze task -> generate roles -> dispatch -> execute -> deliver. Only the **coordinator** is built-in. All worker roles are **dynamically generated** based on task analysis.

## Architecture

```
+---------------------------------------------------+
|  Skill(skill="team-coordinate")                   |
|    args="task description"                        |
|    args="--role=coordinator"                      |
|    args="--role=<dynamic> --session=<path>"       |
+-------------------+-------------------------------+
                    | Role Router
          +---- --role present? ----+
          | NO                      | YES
          v                         v
  Orchestration Mode          Role Dispatch
  (auto -> coordinator)       (route to role file)
          |                         |
     coordinator            +-------+-------+
     (built-in)             | --role=coordinator?
                        YES |               | NO
                            v               v
                       built-in        Dynamic Role
                       role.md    <session>/roles/<role>.md

Subagents (callable by any role, not team members):
  [discuss-subagent] - multi-perspective critique (dynamic perspectives)
  [explore-subagent] - codebase exploration with cache
```

## Role Router

### Input Parsing

Parse `$ARGUMENTS` to extract `--role` and `--session`. If no `--role` -> Orchestration Mode (auto route to coordinator).

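The parsing step can be sketched as follows; treating whatever remains after stripping the flags as the task description is an assumption about the argument format:

```typescript
// Parse --role / --session from the args string. An absent role means
// Orchestration Mode (auto route to coordinator).
export function parseArgs(args: string): { role?: string; session?: string; task?: string } {
  const role = args.match(/--role=(\S+)/)?.[1];
  const session = args.match(/--session=(\S+)/)?.[1];
  const task = args.replace(/--(role|session)=\S+/g, "").trim() || undefined;
  return { role, session, task };
}
```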
### Role Registry
|
||||
|
||||
Only coordinator is statically registered. All other roles are dynamic, stored in `team-session.json#roles`.
|
||||
|
||||
| Role | File | Type |
|
||||
|------|------|------|
|
||||
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | built-in orchestrator |
|
||||
| (dynamic) | `<session>/roles/<role-name>.md` | runtime-generated worker |
|
||||
|
||||
> **COMPACT PROTECTION**: Role files are execution documents. After context compression, role instructions become summaries only -- **MUST immediately `Read` the role.md to reload before continuing**. Never execute any Phase based on summaries.
|
||||
|
||||
### Subagent Registry
|
||||
|
||||
| Subagent | Spec | Callable By | Purpose |
|
||||
|----------|------|-------------|---------|
|
||||
| discuss | [subagents/discuss-subagent.md](subagents/discuss-subagent.md) | any role | Multi-perspective critique (dynamic perspectives) |
|
||||
| explore | [subagents/explore-subagent.md](subagents/explore-subagent.md) | any role | Codebase exploration with cache |
|
||||
|
||||
### Dispatch
|
||||
|
||||
1. Extract `--role` and `--session` from arguments
|
||||
2. If no `--role` -> route to coordinator (Orchestration Mode)
|
||||
3. If `--role=coordinator` -> Read built-in `roles/coordinator/role.md` -> Execute its phases
|
||||
4. If `--role=<other>`:
|
||||
- **`--session` is REQUIRED** for dynamic roles. Error if not provided.
|
||||
- Read `<session>/roles/<role>.md` -> Execute its phases
|
||||
- If role file not found at path -> Error with expected path
|
||||
|
||||
### Orchestration Mode
|
||||
|
||||
When invoked without `--role`, coordinator auto-starts. User just provides task description.
|
||||
|
||||
**Invocation**: `Skill(skill="team-coordinate", args="task description")`
|
||||
|
||||
**Lifecycle**:
|
||||
```
|
||||
User provides task description
|
||||
-> coordinator Phase 1: task analysis (detect capabilities, build dependency graph)
|
||||
-> coordinator Phase 2: generate roles + initialize session
|
||||
-> coordinator Phase 3: create task chain from dependency graph
|
||||
-> coordinator Phase 4: spawn first batch workers (background) -> STOP
|
||||
-> Worker executes -> SendMessage callback -> coordinator advances next step
|
||||
-> Loop until pipeline complete -> Phase 5 report
|
||||
```
|
||||
|
||||
**User Commands** (wake paused coordinator):
|
||||
|
||||
| Command | Action |
|
||||
|---------|--------|
|
||||
| `check` / `status` | Output execution status graph, no advancement |
|
||||
| `resume` / `continue` | Check worker states, advance next step |
|
||||
|
||||
---
|
||||
|
||||
## Shared Infrastructure
|
||||
|
||||
The following templates apply to all worker roles. Each generated role.md only needs to define **Phase 2-4** role-specific logic.
|
||||
|
||||
### Worker Phase 1: Task Discovery (all workers shared)
|
||||
|
||||
Each worker on startup executes the same task discovery flow:
|
||||
|
||||
1. Call `TaskList()` to get all tasks
|
||||
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
|
||||
3. No tasks -> idle wait
|
||||
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress
|
||||
|
||||
**Resume Artifact Check** (prevent duplicate output after resume):
|
||||
- Check if this task's output artifacts already exist
|
||||
- Artifacts complete -> skip to Phase 5 report completion
|
||||
- Artifacts incomplete or missing -> normal Phase 2-4 execution

### Worker Phase 5: Report + Fast-Advance (all workers shared)

On task completion, the worker reports and optionally fast-advances to skip a coordinator round-trip:

1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log the message
   - Params: operation="log", team=**<session-id>**, from=<role>, to="coordinator", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
   - **`team` must be the session ID** (e.g., `TC-my-project-2026-02-27`), NOT the team name. Extract it from the task description's `Session:` field -> take the folder name.
   - **CLI fallback**: `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
2. **TaskUpdate**: Mark task completed
3. **Fast-Advance Check**:
   - Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
   - If exactly 1 ready task AND its owner matches a simple successor pattern -> **spawn it directly** (skip coordinator)
   - Otherwise -> **SendMessage** to coordinator for orchestration
4. **Loop**: Back to Phase 1 to check for the next task

**Fast-Advance Rules**:

| Condition | Action |
|-----------|--------|
| Same-prefix successor (Inner Loop role) | Do not spawn, main agent inner loop (Phase 5-L) |
| 1 ready task, simple linear successor, different prefix | Spawn directly via Task(run_in_background: true) |
| Multiple ready tasks (parallel window) | SendMessage to coordinator (needs orchestration) |
| No ready tasks + others running | SendMessage to coordinator (status update) |
| No ready tasks + nothing running | SendMessage to coordinator (pipeline may be complete) |

**Fast-advance failure recovery**: If a fast-advanced task fails, the coordinator detects it as an orphaned in_progress task on the next `resume`/`check` and resets it to pending for re-spawn. Self-healing. See [monitor.md](roles/coordinator/commands/monitor.md).
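
The rules table above can be sketched as one decision function. A minimal sketch under assumptions: task dicts use the same fields as elsewhere in this skill, and `simple_successor_prefixes` stands in for whatever "simple successor pattern" check the generated role defines.

```python
# Sketch of the Phase 5 fast-advance decision.
def fast_advance_action(tasks, my_prefix, simple_successor_prefixes):
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    ready = [
        t for t in tasks
        if t["status"] == "pending" and all(b in completed for b in t["blockedBy"])
    ]
    if len(ready) == 1:
        prefix = ready[0]["subject"].rsplit("-", 1)[0]
        if prefix == my_prefix:
            return "inner_loop"      # same-prefix successor: handled by Phase 5-L
        if prefix in simple_successor_prefixes:
            return "spawn_directly"  # simple linear successor: skip coordinator
    return "send_message"            # parallel window, status update, or complete

tasks = [
    {"subject": "IMPL-001", "status": "completed", "blockedBy": []},
    {"subject": "TEST-001", "status": "pending", "blockedBy": ["IMPL-001"]},
]
action = fast_advance_action(tasks, "IMPL", {"TEST"})
```

Everything that is not a provably simple linear hand-off falls through to `send_message`, which keeps the coordinator the default authority.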

### Worker Inner Loop (roles with multiple same-prefix serial tasks)

When a role has **2+ serial same-prefix tasks**, it loops internally instead of spawning new agents:

**Inner Loop flow**:

```
Phase 1: Discover task (first time)
  |
  +- Found task -> Phase 2-3: Load context + Execute work
  |                  |
  |                  v
  |                Phase 4: Validation (+ optional Inline Discuss)
  |                  |
  |                  v
  |                Phase 5-L: Loop Completion
  |                  |
  |                  +- TaskUpdate completed
  |                  +- team_msg log
  |                  +- Accumulate summary to context_accumulator
  |                  |
  |                  +- More same-prefix tasks?
  |                  |    +- YES -> back to Phase 1 (inner loop)
  |                  |    +- NO -> Phase 5-F: Final Report
  |                  |
  |                  +- Interrupt conditions?
  |                       +- consensus_blocked HIGH -> SendMessage -> STOP
  |                       +- Errors >= 3 -> SendMessage -> STOP
  |
  +- Phase 5-F: Final Report
       +- SendMessage (all task summaries)
       +- STOP
```

**Phase 5-L vs Phase 5-F**:

| Step | Phase 5-L (looping) | Phase 5-F (final) |
|------|---------------------|-------------------|
| TaskUpdate completed | YES | YES |
| team_msg log | YES | YES |
| Accumulate summary | YES | - |
| SendMessage to coordinator | NO | YES (all tasks summary) |
| Fast-Advance to next prefix | - | YES (check cross-prefix successors) |
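
The loop and its interrupt conditions can be sketched as a skeleton. This is a hypothetical control-flow sketch: `queue` stands in for Phase 1 discovery and each task's `outcome` stands in for the Phase 2-4 result.

```python
# Skeleton of the inner loop: Phase 5-L accumulates silently,
# Phase 5-F sends one final report; interrupts escalate and STOP.
def inner_loop(queue, max_errors=3):
    log, errors, summaries = [], 0, []
    while queue:
        task = queue.pop(0)                  # Phase 1: next same-prefix task
        outcome = task["outcome"]            # stand-in for the Phase 2-4 result
        if outcome == "consensus_blocked_high":
            log.append("sendmessage_stop")   # interrupt: escalate, do not self-revise
            return log
        if outcome == "error":
            errors += 1
            if errors >= max_errors:
                log.append("sendmessage_stop")  # interrupt: error budget exhausted
                return log
            continue
        summaries.append(task["id"])         # Phase 5-L: accumulate, no SendMessage
        log.append("completed:" + task["id"])
    log.append("final_report:%d" % len(summaries))  # Phase 5-F: all task summaries
    return log
```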

### Inline Discuss Protocol (optional for any role)

After completing primary output, roles may call the discuss subagent inline. Unlike v4's fixed perspective definitions, team-coordinate uses **dynamic perspectives** specified by the coordinator when generating each role.

```
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: "Discuss <round-id>",
  prompt: <see subagents/discuss-subagent.md for prompt template>
})
```

**Consensus handling**:

| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | SendMessage with structured format. Do NOT self-revise. |
| consensus_blocked | MEDIUM | SendMessage with warning. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
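
The table above is effectively a (verdict, severity) lookup. A minimal sketch; the action strings are hypothetical labels, not real API values:

```python
# Sketch of the consensus-handling table as a lookup.
def consensus_action(verdict, severity=None):
    if verdict == "consensus_reached":
        return "proceed_with_action_items"
    if verdict == "consensus_blocked":
        return {
            "HIGH": "sendmessage_structured_no_self_revise",
            "MEDIUM": "sendmessage_warning_proceed",
            "LOW": "proceed_with_notes",   # treated like consensus_reached
        }[severity]
    raise ValueError("unknown verdict: " + verdict)
```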

### Shared Explore Utility

Any role needing codebase context calls the explore subagent:

```
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore <angle>",
  prompt: <see subagents/explore-subagent.md for prompt template>
})
```

**Cache**: Results are stored in `explorations/` with `cache-index.json`. Always check the cache before exploring.
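
The cache check can be sketched as follows. The index shape (`{angle: filename}`) is an assumption; only the directory and file names come from this skill's session layout.

```python
# Sketch of the explore-cache check: consult cache-index.json
# before spawning the explore subagent.
import json
import pathlib
import tempfile

def cached_exploration(session, angle):
    index_path = pathlib.Path(session) / "explorations" / "cache-index.json"
    if not index_path.exists():
        return None                      # no cache yet -> must explore
    index = json.loads(index_path.read_text())
    hit = index.get(angle)
    if hit is None:
        return None                      # cache miss -> must explore
    return json.loads((index_path.parent / hit).read_text())

# Usage: write a fake cache entry into a temp session and look it up.
session = tempfile.mkdtemp()
exp = pathlib.Path(session) / "explorations"
exp.mkdir()
(exp / "explore-auth.json").write_text(json.dumps({"angle": "auth", "files": 3}))
(exp / "cache-index.json").write_text(json.dumps({"auth": "explore-auth.json"}))
result = cached_exploration(session, "auth")
```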

### Wisdom Accumulation (all roles)

Wisdom files accumulate knowledge across tasks. The coordinator creates the `wisdom/` directory at session init.

**Directory**:
```
<session-folder>/wisdom/
+-- learnings.md   # Patterns and insights
+-- decisions.md   # Design and strategy decisions
+-- issues.md      # Known risks and issues
```

**Worker load** (Phase 2): Extract `Session: <path>` from the task description, read the wisdom files.
**Worker contribute** (Phase 4/5): Write discoveries to the corresponding wisdom files.

### Role Isolation Rules

| Allowed | Prohibited |
|---------|-----------|
| Process own prefix tasks | Process other roles' prefix tasks |
| SendMessage to coordinator | Directly communicate with other workers |
| Use tools appropriate to responsibility | Create tasks for other roles |
| Call discuss/explore subagents | Modify resources outside own scope |
| Fast-advance simple successors | Spawn parallel worker batches |
| Report capability_gap to coordinator | Attempt work outside scope |

The coordinator is additionally prohibited from: directly writing or modifying deliverable artifacts, calling implementation subagents directly, and directly executing analysis/test/review work.

---

## Cadence Control

**Beat model**: Event-driven; each beat = coordinator wake -> process -> spawn -> STOP.

```
Beat Cycle (single beat)
======================================================================
Event                Coordinator                    Workers
----------------------------------------------------------------------
callback/resume -->  +- handleCallback -+
                     |  mark completed  |
                     |  check pipeline  |
                     +- handleSpawnNext -+
                     |  find ready tasks |
                     |  spawn workers ---+--> [Worker A] Phase 1-5
                     |  (parallel OK) ---+--> [Worker B] Phase 1-5
                     +- STOP (idle) -----+         |
                                                   |
callback <-----------------------------------------+
(next beat)          SendMessage + TaskUpdate(completed)
======================================================================

Fast-Advance (skips coordinator for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
  +- 1 ready task? simple successor? --> spawn Worker B directly
  +- complex case? --> SendMessage to coordinator
======================================================================
```

**Pipelines are dynamic**: Unlike v4's predefined pipeline beat views (spec-only, impl-only, etc.), team-coordinate pipelines are generated per-task from the dependency graph. The beat model is the same -- only the pipeline shape varies.

---

## Coordinator Spawn Template

### Standard Worker (single-task role)

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Instruction
All your work MUST be executed by calling Skill to get the role definition:
Skill(skill="team-coordinate", args="--role=<role> --session=<session-folder>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other roles' work
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- After task completion, check for fast-advance opportunity (see SKILL.md Phase 5)

## Workflow
1. Call Skill -> get role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg(team=<session-id>) + SendMessage results to coordinator
4. TaskUpdate completed -> check next task or fast-advance`
})
```

### Inner Loop Worker (multi-task role)

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker (inner loop)",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Instruction
All your work MUST be executed by calling Skill to get the role definition:
Skill(skill="team-coordinate", args="--role=<role> --session=<session-folder>")

Current requirement: <task-description>
Session: <session-folder>

## Inner Loop Mode
You will handle ALL <PREFIX>-* tasks in this session, not just the first one.
After completing each task, loop back to find the next <PREFIX>-* task.
Only SendMessage to coordinator when:
- All <PREFIX>-* tasks are done
- A consensus_blocked HIGH occurs
- Errors accumulate (>= 3)

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other roles' work
- All output prefixed with [<role>] tag
- Only communicate with coordinator
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- Use subagent calls for heavy work, retain summaries in context`
})
```

---

## Session Directory

```
.workflow/.team/TC-<slug>-<date>/
+-- team-session.json        # Session state + dynamic role registry
+-- task-analysis.json       # Phase 1 output: capabilities, dependency graph
+-- roles/                   # Dynamic role definitions (generated Phase 2)
|   +-- <role-1>.md
|   +-- <role-2>.md
+-- artifacts/               # All MD deliverables from workers
|   +-- <artifact>.md
+-- shared-memory.json       # Cross-role state store
+-- wisdom/                  # Cross-task knowledge
|   +-- learnings.md
|   +-- decisions.md
|   +-- issues.md
+-- explorations/            # Shared explore cache
|   +-- cache-index.json
|   +-- explore-<angle>.json
+-- discussions/             # Inline discuss records
|   +-- <round>.md
+-- .msg/                    # Team message bus logs
```

### team-session.json Schema

```json
{
  "session_id": "TC-<slug>-<date>",
  "task_description": "<original user input>",
  "status": "active | paused | completed",
  "team_name": "<team-name>",
  "roles": [
    {
      "name": "<role-name>",
      "prefix": "<PREFIX>",
      "responsibility_type": "<type>",
      "inner_loop": false,
      "role_file": "roles/<role-name>.md"
    }
  ],
  "pipeline": {
    "dependency_graph": {},
    "tasks_total": 0,
    "tasks_completed": 0
  },
  "active_workers": [],
  "completed_tasks": [],
  "created_at": "<timestamp>"
}
```

---

## Session Resume

The coordinator supports `--resume` / `--continue` for interrupted sessions:

1. Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
2. Multiple matches -> AskUserQuestion for selection
3. Audit TaskList -> reconcile session state <-> task status
4. Reset in_progress -> pending (interrupted tasks)
5. Rebuild team and spawn needed workers only
6. Create missing tasks with correct blockedBy
7. Kick first executable task -> Phase 4 coordination loop

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Check if `<session>/roles/<role>.md` exists; error with message if not |
| Missing --role arg | Orchestration Mode -> coordinator |
| Dynamic role file not found | Error with expected path, coordinator may need to regenerate |
| Built-in role file not found | Error with expected path |
| Command file not found | Fallback to inline execution |
| Discuss subagent fails | Role proceeds without discuss, logs warning |
| Explore cache corrupt | Clear cache, re-explore |
| Fast-advance spawns wrong task | Coordinator reconciles on next callback |
| Session path not provided (dynamic role) | Error: `--session` is required for dynamic roles. Coordinator must always pass `--session=<session-folder>` when spawning workers. |
| capability_gap reported | Coordinator generates new role via handleAdapt |

@@ -1,197 +0,0 @@
# Command: analyze-task

## Purpose

Parse the user task description -> detect required capabilities -> build dependency graph -> design dynamic roles. This replaces v4's static mode selection with intelligent task decomposition.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | User input from Phase 1 | Yes |
| Clarification answers | AskUserQuestion results (if any) | No |
| Session folder | From coordinator Phase 2 | Yes |

## Phase 3: Task Analysis

### Step 1: Signal Detection

Scan the task description for capability keywords:

| Signal | Keywords | Capability | Prefix | Responsibility Type |
|--------|----------|------------|--------|---------------------|
| Research | investigate, explore, compare, survey, find, research, discover, benchmark, study | researcher | RESEARCH | orchestration |
| Writing | write, draft, document, article, report, blog, describe, explain, summarize, content | writer | DRAFT | code-gen (docs) |
| Coding | implement, build, code, fix, refactor, develop, create app, program, migrate, port | developer | IMPL | code-gen (code) |
| Design | design, architect, plan, structure, blueprint, model, schema, wireframe, layout | designer | DESIGN | orchestration |
| Analysis | analyze, review, audit, assess, evaluate, inspect, examine, diagnose, profile | analyst | ANALYSIS | read-only |
| Testing | test, verify, validate, QA, quality, check, assert, coverage, regression | tester | TEST | validation |
| Planning | plan, breakdown, organize, schedule, decompose, roadmap, strategy, prioritize | planner | PLAN | orchestration |

**Multi-match**: A task may trigger multiple capabilities. E.g., "research and write a technical article" triggers both `researcher` and `writer`.

**No match**: If no keywords match, default to a single `general` capability with `TASK` prefix.
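
The keyword scan can be sketched as a substring match over the description. A minimal sketch: the keyword lists below are abridged from the table, and real detection could use stemming or word boundaries instead of raw substrings.

```python
# Sketch of Step 1 keyword scanning (abridged keyword lists).
SIGNALS = {
    "researcher": ["research", "investigate", "explore", "compare", "survey"],
    "writer": ["write", "draft", "document", "article", "report"],
    "developer": ["implement", "build", "code", "fix", "refactor"],
    "tester": ["test", "verify", "validate", "coverage"],
}

def detect_capabilities(description):
    text = description.lower()
    matched = [cap for cap, kws in SIGNALS.items() if any(k in text for k in kws)]
    return matched or ["general"]   # no match -> default general/TASK role
```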

### Step 2: Artifact Inference

Each capability produces default output artifacts:

| Capability | Default Artifact | Format |
|------------|-----------------|--------|
| researcher | Research findings | `<session>/artifacts/research-findings.md` |
| writer | Written document(s) | `<session>/artifacts/<doc-name>.md` |
| developer | Code implementation | Source files + `<session>/artifacts/implementation-summary.md` |
| designer | Design document | `<session>/artifacts/design-spec.md` |
| analyst | Analysis report | `<session>/artifacts/analysis-report.md` |
| tester | Test results | `<session>/artifacts/test-report.md` |
| planner | Execution plan | `<session>/artifacts/execution-plan.md` |

### Step 3: Dependency Graph Construction

Build a DAG of work streams using these inference rules:

| Pattern | Shape | Example |
|---------|-------|---------|
| Knowledge -> Creation | research blockedBy nothing, creation blockedBy research | RESEARCH-001 -> DRAFT-001 |
| Design -> Build | design first, build after | DESIGN-001 -> IMPL-001 |
| Build -> Validate | build first, test/review after | IMPL-001 -> TEST-001 + ANALYSIS-001 |
| Plan -> Execute | plan first, execute after | PLAN-001 -> IMPL-001 |
| Independent parallel | no dependency between them | DRAFT-001 \|\| IMPL-001 |
| Analysis -> Revise | analysis finds issues, revise artifact | ANALYSIS-001 -> DRAFT-002 |

**Graph construction algorithm**:

1. Group capabilities by natural ordering: knowledge-gathering -> design/planning -> creation -> validation
2. Within the same tier: capabilities are parallel unless the task description implies a sequence
3. Between tiers: downstream blockedBy upstream
4. Single-capability tasks: one node, no dependencies

**Natural ordering tiers**:

| Tier | Capabilities | Description |
|------|-------------|-------------|
| 0 | researcher, planner | Knowledge gathering / planning |
| 1 | designer | Design (requires context from tier 0 if present) |
| 2 | writer, developer | Creation (requires design/plan if present) |
| 3 | analyst, tester | Validation (requires artifacts to validate) |
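
The tier rules above can be sketched as follows. This is a hypothetical interpretation: it blocks each capability on the *nearest* lower tier that is actually present, which is one reasonable reading of "downstream blockedBy upstream" (blocking on all lower tiers would also be defensible).

```python
# Sketch of Step 3: tier-based dependency construction.
TIERS = {"researcher": 0, "planner": 0, "designer": 1,
         "writer": 2, "developer": 2, "analyst": 3, "tester": 3}

def build_graph(capabilities):
    graph = {}
    tiers_present = sorted({TIERS[c] for c in capabilities})
    for cap in capabilities:
        lower = [t for t in tiers_present if t < TIERS[cap]]
        if lower:
            nearest = lower[-1]   # nearest lower tier that is present
            graph[cap] = [c for c in capabilities if TIERS[c] == nearest]
        else:
            graph[cap] = []       # root tier: no dependencies
    return graph
```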

### Step 4: Complexity Scoring

| Factor | Weight | Condition |
|--------|--------|-----------|
| Capability count | +1 each | Number of distinct capabilities |
| Cross-domain factor | +2 | Capabilities span 3+ tiers |
| Parallel tracks | +1 each | Independent parallel work streams |
| Serial depth | +1 per level | Longest dependency chain length |

| Total Score | Complexity | Role Limit |
|-------------|------------|------------|
| 1-3 | Low | 1-2 roles |
| 4-6 | Medium | 2-3 roles |
| 7+ | High | 3-5 roles |
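
The two tables combine into one scoring function. A minimal sketch of the weights as written; how the analyzer counts "serial depth" for a given graph is left to its implementation.

```python
# Sketch of Step 4 scoring; parameters correspond to the weight table.
def complexity(capability_count, tiers_spanned, parallel_tracks, serial_depth):
    score = capability_count          # +1 per distinct capability
    if tiers_spanned >= 3:
        score += 2                    # cross-domain factor
    score += parallel_tracks          # +1 per independent parallel track
    score += serial_depth             # +1 per level of the longest chain
    level = "low" if score <= 3 else "medium" if score <= 6 else "high"
    return score, level
```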

### Step 5: Role Minimization

Apply merging rules to reduce the role count:

| Rule | Condition | Action |
|------|-----------|--------|
| Absorb trivial | Capability has exactly 1 task AND no explore needed | Merge into nearest related role |
| Merge overlap | Two capabilities share >50% keywords from task description | Combine into single role |
| Cap at 5 | More than 5 roles after initial assignment | Merge lowest-priority pairs (priority: developer > researcher > writer > designer > analyst > planner > tester) |

**Merge priority** (when two must merge, keep the higher-priority one as the role name):

1. developer (code-gen is hardest to merge)
2. researcher (context-gathering is foundational)
3. writer (document generation has specific patterns)
4. designer (design has specific outputs)
5. analyst (analysis can be absorbed by reviewer pattern)
6. planner (planning can be merged with researcher or designer)
7. tester (can be absorbed by developer or analyst)

**IMPORTANT**: Even after merging, the coordinator MUST spawn workers for all roles. Single-role tasks still use the team architecture.

## Phase 4: Output

Write `<session-folder>/task-analysis.json`:

```json
{
  "task_description": "<original user input>",
  "capabilities": [
    {
      "name": "researcher",
      "prefix": "RESEARCH",
      "responsibility_type": "orchestration",
      "tasks": [
        { "id": "RESEARCH-001", "description": "..." }
      ],
      "artifacts": ["research-findings.md"]
    }
  ],
  "dependency_graph": {
    "RESEARCH-001": [],
    "DRAFT-001": ["RESEARCH-001"],
    "ANALYSIS-001": ["DRAFT-001"]
  },
  "roles": [
    {
      "name": "researcher",
      "prefix": "RESEARCH",
      "responsibility_type": "orchestration",
      "task_count": 1,
      "inner_loop": false
    },
    {
      "name": "writer",
      "prefix": "DRAFT",
      "responsibility_type": "code-gen (docs)",
      "task_count": 1,
      "inner_loop": false
    }
  ],
  "complexity": {
    "capability_count": 2,
    "cross_domain_factor": false,
    "parallel_tracks": 0,
    "serial_depth": 2,
    "total_score": 3,
    "level": "low"
  },
  "artifacts": [
    { "name": "research-findings.md", "producer": "researcher", "path": "artifacts/research-findings.md" },
    { "name": "article-draft.md", "producer": "writer", "path": "artifacts/article-draft.md" }
  ]
}
```

## Complexity Interpretation

**CRITICAL**: The complexity score is for **role design optimization**, NOT for skipping the team workflow.

| Complexity | Team Structure | Coordinator Action |
|------------|----------------|-------------------|
| Low (1-2 roles) | Minimal team | Generate 1-2 roles, create team, spawn workers |
| Medium (2-3 roles) | Standard team | Generate roles, create team, spawn workers |
| High (3-5 roles) | Full team | Generate roles, create team, spawn workers |

**All complexity levels use team architecture**:
- Single-role tasks still spawn a worker via Skill
- Coordinator NEVER executes task work directly
- Team infrastructure provides session management, message bus, fast-advance

**Purpose of complexity score**:
- ✅ Determine optimal role count (merge vs separate)
- ✅ Guide dependency graph design
- ✅ Inform user about task scope
- ❌ NOT for deciding whether to use the team workflow

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No capabilities detected | Default to single `general` role with TASK prefix |
| Circular dependency in graph | Break cycle at lowest-tier edge, warn |
| Task description too vague | Return minimal analysis, coordinator will AskUserQuestion |
| All capabilities merge into one | Valid -- single-role execution via team worker |

@@ -1,85 +0,0 @@
# Command: dispatch

## Purpose

Create task chains from dynamic dependency graphs. Unlike v4's static mode-to-pipeline mapping, team-coordinate builds pipelines from the task-analysis.json produced by Phase 1.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task analysis | `<session-folder>/task-analysis.json` | Yes |
| Session file | `<session-folder>/team-session.json` | Yes |
| Role registry | `team-session.json#roles` | Yes |
| Scope | User requirements description | Yes |

## Phase 3: Task Chain Creation

### Workflow

1. **Read dependency graph** from `task-analysis.json#dependency_graph`
2. **Topological sort** tasks to determine creation order
3. **Validate** all task owners exist in the role registry
4. **For each task** (in topological order):

```
TaskCreate({
  subject: "<PREFIX>-<NNN>",
  owner: "<role-name>",
  description: "<task description from task-analysis>\nSession: <session-folder>\nScope: <scope>\nInnerLoop: <true|false>",
  blockedBy: [<dependency-list from graph>],
  status: "pending"
})
```

5. **Update team-session.json** with pipeline and tasks_total
6. **Validate** the created chain
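
The topological sort in step 2 can be sketched with Kahn's algorithm over the dependency graph, where `graph[task]` lists that task's blockers. A minimal sketch; the real dispatch logic also handles the other validation checks listed below.

```python
# Sketch of step 2: topological creation order via Kahn's algorithm.
from collections import deque

def creation_order(graph):
    blockers = {t: set(deps) for t, deps in graph.items()}
    ready = deque(sorted(t for t, deps in blockers.items() if not deps))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for t, deps in sorted(blockers.items()):
            if task in deps:
                deps.discard(task)
                if not deps:
                    ready.append(t)   # all blockers created -> safe to create
    if len(order) != len(graph):
        raise ValueError("circular dependency detected")   # halt task creation
    return order
```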

### Task Description Template

Every task description includes the session path and inner loop flag:

```
<task description>
Session: <session-folder>
Scope: <scope>
InnerLoop: <true|false>
```

### InnerLoop Flag Rules

| Condition | InnerLoop |
|-----------|-----------|
| Role has 2+ serial same-prefix tasks | true |
| Role has 1 task | false |
| Tasks are parallel (no dependency between them) | false |

### Dependency Validation

| Check | Criteria |
|-------|----------|
| No orphan tasks | Every task is reachable from at least one root |
| No circular deps | Topological sort succeeds without cycle |
| All owners valid | Every task owner exists in team-session.json#roles |
| All blockedBy valid | Every blockedBy references an existing task subject |
| Session reference | Every task description contains `Session: <session-folder>` |

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Task count | Matches dependency_graph node count |
| Dependencies | Every blockedBy references an existing task subject |
| Owner assignment | Each task owner is in the role registry |
| Session reference | Every task description contains `Session:` |
| Pipeline integrity | No disconnected subgraphs (warn if found) |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Circular dependency detected | Report cycle, halt task creation |
| Owner not in role registry | Error, coordinator must fix roles first |
| TaskCreate fails | Log error, report to coordinator |
| Duplicate task subject | Skip creation, log warning |
| Empty dependency graph | Error, task analysis may have failed |

@@ -1,296 +0,0 @@
# Command: monitor

## Purpose

Event-driven pipeline coordination with the Spawn-and-Stop pattern. Adapted from v4 for dynamic roles -- role names are read from `team-session.json#roles` instead of hardcoded. Includes `handleAdapt` for mid-pipeline capability gap handling.

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip coordinator for simple linear successors |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |

**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name` rather than a static list. This is the key difference from v4.

## Phase 3: Handler Routing

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine the handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |
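
The priority table above reduces to an ordered chain of substring checks. A minimal sketch; real routing would parse `$ARGUMENTS` more carefully than raw substring matching.

```python
# Sketch of wake-up source routing; role_names come from session.roles.
def route(arguments, role_names):
    if any("[" + r + "]" in arguments for r in role_names):
        return "handleCallback"    # priority 1: worker callback
    if "capability_gap" in arguments:
        return "handleAdapt"       # priority 2
    if "check" in arguments or "status" in arguments:
        return "handleCheck"       # priority 3
    if any(w in arguments for w in ("resume", "continue", "next")):
        return "handleResume"      # priority 4
    return "handleSpawnNext"       # priority 5: initial spawn after dispatch
```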

---

### Handler: handleCallback

A worker completed a task. Verify completion, update state, auto-advance.

```
Receive callback from [<role>]
+- Find matching active worker by role (from session.roles)
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
|  +- YES -> Update session state, do NOT remove from active_workers -> STOP
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |        +- -> handleSpawnNext
|  +- NO -> progress message, do not advance -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
      +- Found completed -> process each -> handleSpawnNext
      +- None completed -> STOP
```

**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext

---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[coordinator] Pipeline Status
[coordinator] Progress: <completed>/<total> (<percent>%)

[coordinator] Execution Graph:
<visual representation of dependency graph with status icons>

done=completed  >>>=running  o=pending  .=not created

[coordinator] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]

[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

**Icon mapping**: completed=done, in_progress=>>>, pending=o, not created=.

**Graph rendering**: Read dependency_graph from task-analysis.json, render each node with its status icon. Show parallel branches side-by-side.

Then STOP.

---

### Handler: handleResume

Check active worker completion, process results, advance the pipeline.

```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- status = completed -> mark done, log
   +- status = in_progress -> still running, log
   +- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```

---

### Handler: handleSpawnNext

Find all ready tasks, spawn workers in the background, update the session, STOP.

```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects

Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> Phase 5
+- HAS ready tasks -> for each:
   +- Is task owner an Inner Loop role AND that role already has an active_worker?
   |  +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
   |  +- NO -> normal spawn below
   +- TaskUpdate -> in_progress
   +- team_msg log -> task_unblocked (team=<session-id>, NOT team name)
   +- Spawn worker (see spawn tool call below)
   +- Add to session.active_workers
Update session file -> output summary -> STOP
```
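
The task-state collection at the top of the flow can be sketched as a pure function from the task list to a spawn decision (a minimal sketch; the inner-loop skip and per-task bookkeeping are omitted).

```python
# Sketch of handleSpawnNext's task-state collection.
def spawn_decision(tasks):
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    in_progress = [t["subject"] for t in tasks if t["status"] == "in_progress"]
    ready = [
        t["subject"] for t in tasks
        if t["status"] == "pending" and all(b in completed for b in t["blockedBy"])
    ]
    if ready:
        return ("spawn", ready)               # spawn one worker per ready task
    if in_progress:
        return ("wait", in_progress)          # report waiting -> STOP
    return ("pipeline_complete", [])          # nothing left -> Phase 5
```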
|
||||
|
||||
**Spawn worker tool call** (one per ready task):
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "general-purpose",
|
||||
description: "Spawn <role> worker for <subject>",
|
||||
team_name: <team-name>,
|
||||
name: "<role>",
|
||||
run_in_background: true,
|
||||
prompt: "<worker prompt from SKILL.md Coordinator Spawn Template>"
|
||||
})
|
||||
```
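The `readySubjects` computation above can be sketched as follows; the `TeamTask` shape is an illustrative assumption.

```typescript
interface TeamTask {
  subject: string;
  status: "pending" | "in_progress" | "completed";
  blockedBy: string[];
}

// A task is ready when it is pending and every subject in its
// blockedBy list has already completed.
function readySubjects(tasks: TeamTask[]): string[] {
  const completed = new Set(
    tasks.filter(t => t.status === "completed").map(t => t.subject),
  );
  return tasks
    .filter(t => t.status === "pending" && t.blockedBy.every(b => completed.has(b)))
    .map(t => t.subject);
}
```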

---

### Handler: handleAdapt

Handle mid-pipeline capability gap discovery. A worker reports `capability_gap` when it encounters work outside its scope.

**CONSTRAINT**: Maximum 5 worker roles per session (per coordinator/role.md). handleAdapt MUST enforce this limit.

```
Parse capability_gap message:
+- Extract: gap_description, requesting_role, suggested_capability
+- Validate gap is genuine:
   +- Check existing roles in session.roles -> does any role cover this?
   |  +- YES -> redirect: SendMessage to that role's owner -> STOP
   |  +- NO -> genuine gap, proceed to role generation
+- CHECK ROLE COUNT LIMIT (MAX 5 ROLES):
   +- Count current roles in session.roles
   +- If count >= 5:
      +- Attempt to merge new capability into existing role:
      |  +- Find best-fit role by responsibility_type
      +- If merge possible:
      |  +- Update existing role file with new capability
      |  +- Create task assigned to existing role
      |  +- Log via team_msg (type: warning, summary: "Capability merged into existing role")
      |  +- STOP
      +- If merge NOT possible:
         +- PAUSE session
         +- Report to user:
            "Role limit (5) reached. Cannot generate new role for: <gap_description>
             Options:
             1. Manually extend an existing role
             2. Re-run team-coordinate with refined task to consolidate roles
             3. Accept limitation and continue without this capability"
         +- STOP
+- Generate new role:
   1. Read specs/role-template.md
   2. Fill template with capability details from gap description
   3. Write new role file to <session-folder>/roles/<new-role>.md
   4. Add to session.roles[]
+- Create new task(s):
   TaskCreate({
     subject: "<NEW-PREFIX>-001",
     owner: "<new-role>",
     description: "<gap_description>\nSession: <session-folder>\nInnerLoop: false",
     blockedBy: [<requesting task if sequential>],
     status: "pending"
   })
+- Update team-session.json: add role, increment tasks_total
+- Spawn new worker -> STOP
```
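The role-limit branch can be sketched as below. Assumptions: roles carry a `responsibilityType` field, and "best-fit" is approximated by matching that field.

```typescript
interface Role {
  name: string;
  responsibilityType: string;
}

type AdaptDecision =
  | { kind: "generate" }
  | { kind: "merge"; into: string }
  | { kind: "pause" };

const MAX_ROLES = 5;

// Under the cap, generate a new role. At the cap, merge into a
// best-fit existing role when one shares the responsibility type;
// otherwise pause the session and escalate to the user.
function decideAdapt(roles: Role[], suggestedType: string): AdaptDecision {
  if (roles.length < MAX_ROLES) return { kind: "generate" };
  const fit = roles.find(r => r.responsibilityType === suggestedType);
  return fit ? { kind: "merge", into: fit.name } : { kind: "pause" };
}
```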

---

### Worker Failure Handling

When a worker has unexpected status (not completed, not in_progress):

1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume

### Fast-Advance Failure Recovery

When coordinator detects a fast-advanced task has failed (task in_progress but no callback and worker gone):

```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Original fast-advancing worker has already completed and exited
+- Resolution:
   1. TaskUpdate -> reset task to pending
   2. Remove stale active_worker entry (if any)
   3. Log via team_msg (type: error, summary: "Fast-advanced task <ID> failed, resetting for retry")
   4. -> handleSpawnNext (will re-spawn the task normally)
```

**Detection in handleResume**:

```
For each in_progress task in TaskList():
+- Has matching active_worker? -> normal, skip
+- No matching active_worker? -> orphaned (likely fast-advance failure)
   +- Check creation time: if > 5 minutes with no progress callback
      +- Reset to pending -> handleSpawnNext
```

**Prevention**: Fast-advance failures are self-healing. The coordinator reconciles orphaned tasks on every `resume`/`check` cycle.
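The orphan check can be sketched as below; the task shape and the `startedAt` timestamp are illustrative assumptions.

```typescript
interface TaskRow {
  subject: string;
  status: string;
  startedAt: number; // epoch ms when the task entered in_progress
}

const ORPHAN_TIMEOUT_MS = 5 * 60 * 1000;

// An in_progress task with no active_worker entry and no progress
// for over five minutes is treated as a fast-advance orphan.
function findOrphans(
  tasks: TaskRow[],
  activeWorkerSubjects: Set<string>,
  now: number,
): string[] {
  return tasks
    .filter(t =>
      t.status === "in_progress" &&
      !activeWorkerSubjects.has(t.subject) &&
      now - t.startedAt > ORPHAN_TIMEOUT_MS)
    .map(t => t.subject);
}
```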

### Consensus-Blocked Handling

When a worker reports `consensus_blocked` in its callback:

```
handleCallback receives message with consensus_blocked flag
+- Extract: divergence_severity, blocked_round, action_recommendation
+- Route by severity:

+- severity = HIGH
|  +- Create REVISION task:
|     +- Same role, same doc type, incremented suffix (e.g., DRAFT-001-R1)
|     +- Description includes: divergence details + action items from discuss
|     +- blockedBy: none (immediate execution)
|     +- Max 1 revision per task (DRAFT-001 -> DRAFT-001-R1, no R2)
|     +- If already revised once -> PAUSE, escalate to user
|  +- Update session: mark task as "revised", log revision chain

+- severity = MEDIUM
|  +- Proceed with warning: include divergence in next task's context
|  +- Log action items to wisdom/issues.md
|  +- Normal handleSpawnNext

+- severity = LOW
   +- Proceed normally: treat as consensus_reached with notes
   +- Normal handleSpawnNext
```
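The HIGH-severity revision rule (at most one `-R1` revision, never `-R2`) can be sketched as:

```typescript
type Severity = "HIGH" | "MEDIUM" | "LOW";

interface ConsensusRoute {
  action: "create_revision" | "pause_escalate" | "spawn_next";
  revisionSubject?: string;
}

// HIGH creates at most one revision (DRAFT-001 -> DRAFT-001-R1);
// a task already at -R1 escalates to the user instead.
// MEDIUM and LOW proceed through the normal handleSpawnNext path.
function routeConsensusBlocked(subject: string, severity: Severity): ConsensusRoute {
  if (severity !== "HIGH") return { action: "spawn_next" };
  if (subject.endsWith("-R1")) return { action: "pause_escalate" };
  return { action: "create_revision", revisionSubject: `${subject}-R1` };
}
```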

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task owners exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
| Fast-advance orphan check | in_progress tasks without active_worker entry -> reset to pending |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Coordinator reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Dynamic role file not found | Error, coordinator must regenerate from task-analysis |
| capability_gap from completed role | Validate gap, generate role if genuine |
| capability_gap when role limit (5) reached | Attempt merge into existing role, else pause for user |
| consensus_blocked HIGH | Create revision task (max 1) or pause for user |
| consensus_blocked MEDIUM | Proceed with warning, log to wisdom/issues.md |
# Coordinator Role

Orchestrate the team-coordinate workflow: task analysis, dynamic role generation, task dispatching, progress monitoring, session state. The sole built-in role -- all worker roles are generated at runtime.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Analyze task -> Generate roles -> Create team -> Dispatch tasks -> Monitor progress -> Report results

## Boundaries

### MUST
- Analyze user task to detect capabilities and build dependency graph
- Dynamically generate worker roles from specs/role-template.md
- Create team and spawn worker subagents in background
- Dispatch tasks with proper dependency chains from task-analysis.json
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
- Handle capability_gap reports (generate new roles mid-pipeline)
- Handle consensus_blocked HIGH verdicts (create revision tasks or pause)
- Detect fast-advance orphans on resume/check and reset to pending

### MUST NOT
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Skip dependency validation when creating task chains
- Generate more than 5 worker roles (merge if exceeded)
- Override consensus_blocked HIGH without user confirmation

> **Core principle**: coordinator is the orchestrator, not the executor. All actual work is delegated to dynamically generated worker roles.

---

## Command Execution Protocol

When coordinator needs to execute a command (analyze-task, dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** - NOT separate agents or subprocesses
4. **Execute synchronously** - complete the command workflow before proceeding

Example:
```
Phase 1 needs task analysis
-> Read roles/coordinator/commands/analyze-task.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Analysis)
-> Execute Phase 4 (Output)
-> Continue to Phase 2
```

---

## Entry Router

When coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| Interrupted session | Active/paused session exists in `.workflow/.team/TC-*` | -> Phase 0 (Resume Check) |
| New session | None of above | -> Phase 1 (Task Analysis) |

For callback/check/resume/adapt: load `commands/monitor.md` and execute the appropriate handler, then STOP.

### Router Implementation

1. **Load session context** (if exists):
   - Scan `.workflow/.team/TC-*/team-session.json` for active/paused sessions
   - If found, extract `session.roles[].name` for callback detection

2. **Parse $ARGUMENTS** for detection keywords

3. **Route to handler**:
   - For monitor handlers: Read `commands/monitor.md`, execute matched handler section, STOP
   - For Phase 0: Execute Session Resume Check below
   - For Phase 1: Execute Task Analysis below

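The detection table can be sketched as an ordered chain of checks. The function signature here is an illustrative assumption; in practice the inputs come from $ARGUMENTS and the scanned session file.

```typescript
type Route =
  | "handleCallback" | "handleCheck" | "handleResume"
  | "handleAdapt" | "phase0" | "phase1";

// Checks run in table order: callback tag first, then argument
// keywords, then capability_gap, then session presence.
function route(
  message: string,
  args: string,
  sessionRoles: string[],
  hasActiveSession: boolean,
): Route {
  if (sessionRoles.some(r => message.includes(`[${r}]`))) return "handleCallback";
  if (/\b(check|status)\b/.test(args)) return "handleCheck";
  if (/\b(resume|continue)\b/.test(args)) return "handleResume";
  if (message.includes("capability_gap")) return "handleAdapt";
  return hasActiveSession ? "phase0" : "phase1";
}
```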
---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:
1. Scan `.workflow/.team/TC-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (-> Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for user selection

**Session Reconciliation**:
1. Audit TaskList -> get real status of all tasks
2. Reconcile: session.completed_tasks <-> TaskList status (bidirectional sync)
3. Reset any in_progress tasks -> pending (they were interrupted)
4. Detect fast-advance orphans (in_progress without recent activity) -> reset to pending
5. Determine remaining pipeline from reconciled state
6. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
7. Create missing tasks with correct blockedBy dependencies
8. Verify dependency chain integrity
9. Update session file with reconciled state
10. Kick first executable task's worker -> Phase 4

---

## Phase 1: Task Analysis

**Objective**: Parse user task, detect capabilities, build dependency graph, design roles.

**Workflow**:

1. **Parse user task description**

2. **Clarify if ambiguous** via AskUserQuestion:
   - What is the scope? (specific files, module, project-wide)
   - What deliverables are expected? (documents, code, analysis reports)
   - Any constraints? (timeline, technology, style)

3. **Delegate to `commands/analyze-task.md`**:
   - Signal detection: scan keywords -> infer capabilities
   - Artifact inference: each capability -> default output type (.md)
   - Dependency graph: build DAG of work streams
   - Complexity scoring: count capabilities, cross-domain factor, parallel tracks
   - Role minimization: merge overlapping, absorb trivial, cap at 5

4. **Output**: Write `<session>/task-analysis.json`

**Success**: Task analyzed, capabilities detected, dependency graph built, roles designed.

**CRITICAL - Team Workflow Enforcement**:

Regardless of complexity score or role count, coordinator MUST:
- ✅ **Always proceed to Phase 2** (generate roles)
- ✅ **Always create team** and spawn workers
- ❌ **NEVER execute task work directly**, even for single-role low-complexity tasks
- ❌ **NEVER skip team workflow** based on complexity assessment

**Single-role execution is still team-based** - just with one worker. The team architecture provides:
- Consistent message bus communication
- Session state management
- Artifact tracking
- Fast-advance capability
- Resume/recovery mechanisms

---

## Phase 2: Generate Roles + Initialize Session

**Objective**: Create session, generate dynamic role files, initialize shared infrastructure.

**Workflow**:

1. **Generate session ID**: `TC-<slug>-<date>` (slug from first 3 meaningful words of task)

2. **Create session folder structure**:
   ```
   .workflow/.team/<session-id>/
   +-- roles/
   +-- artifacts/
   +-- wisdom/
   +-- explorations/
   +-- discussions/
   +-- .msg/
   ```

3. **Call TeamCreate** with team name derived from session ID

4. **Read `specs/role-template.md`** + `task-analysis.json`

5. **For each role in task-analysis.json#roles**:
   - Fill role template with:
     - role_name, prefix, responsibility_type from analysis
     - Phase 2-4 content from responsibility type reference sections in template
     - inner_loop flag from analysis (true if role has 2+ serial tasks)
     - Task-specific instructions from task description
   - Write generated role file to `<session>/roles/<role-name>.md`

6. **Register roles** in team-session.json#roles

7. **Initialize shared infrastructure**:
   - `wisdom/learnings.md`, `wisdom/decisions.md`, `wisdom/issues.md` (empty with headers)
   - `explorations/cache-index.json` (`{ "entries": [] }`)
   - `shared-memory.json` (`{}`)
   - `discussions/` (empty directory)

8. **Write team-session.json** with: session_id, task_description, status="active", roles, pipeline (empty), active_workers=[], created_at

**Success**: Session created, role files generated, shared infrastructure initialized.
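Session-ID generation from step 1 can be sketched as below; the stop-word list used to pick "meaningful" words is an illustrative assumption.

```typescript
// Words skipped when building the slug (illustrative, not normative).
const STOP_WORDS = new Set(["a", "an", "the", "for", "to", "of", "and"]);

// "TC-<slug>-<date>": slug from the first three meaningful words.
function sessionId(task: string, date: string): string {
  const slug = task
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "")
    .split(/\s+/)
    .filter(w => w.length > 0 && !STOP_WORDS.has(w))
    .slice(0, 3)
    .join("-");
  return `TC-${slug}-${date}`;
}
```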

---

## Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on the dependency graph, with correct blockedBy dependencies.

Delegate to `commands/dispatch.md`, which creates the full task chain:
1. Reads dependency_graph from task-analysis.json
2. Topologically sorts tasks
3. Creates tasks via TaskCreate with correct blockedBy
4. Assigns owner based on role mapping from task-analysis.json
5. Includes `Session: <session-folder>` in every task description
6. Sets InnerLoop flag for multi-task roles
7. Updates team-session.json with pipeline and tasks_total

**Success**: All tasks created with correct dependency chains, session updated.
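Step 2's topological ordering can be sketched with Kahn's algorithm; the `PlannedTask` shape is an illustrative assumption, and a `null` return signals the dependency-cycle error case (report to user, halt).

```typescript
interface PlannedTask {
  subject: string;
  blockedBy: string[];
}

// Kahn's algorithm over the blockedBy edges; returns a valid
// creation order, or null when the graph contains a cycle.
function topoOrder(tasks: PlannedTask[]): string[] | null {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const t of tasks) {
    indegree.set(t.subject, t.blockedBy.length);
    for (const b of t.blockedBy) {
      dependents.set(b, [...(dependents.get(b) ?? []), t.subject]);
    }
  }
  const queue = tasks.filter(t => t.blockedBy.length === 0).map(t => t.subject);
  const order: string[] = [];
  while (queue.length > 0) {
    const s = queue.shift()!;
    order.push(s);
    for (const d of dependents.get(s) ?? []) {
      const n = indegree.get(d)! - 1;
      indegree.set(d, n);
      if (n === 0) queue.push(d);
    }
  }
  return order.length === tasks.length ? order : null;
}
```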

---

## Phase 4: Spawn-and-Stop

**Objective**: Spawn first batch of ready workers in background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern, with worker fast-advance.
- Spawn workers with `Task(run_in_background: true)` -> immediately return
- Worker completes -> may fast-advance to next task OR SendMessage callback -> auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS

**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task -> spawn worker (see SKILL.md Coordinator Spawn Template)
   - Use Standard Worker template for single-task roles
   - Use Inner Loop Worker template for multi-task roles
4. Output status summary with execution graph
5. STOP

**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)

---

## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Update session status -> "completed"
6. Offer next steps: exit / view artifacts / extend with additional tasks

**Output format**:

```
[coordinator] ============================================
[coordinator] TASK COMPLETE
[coordinator]
[coordinator] Deliverables:
[coordinator] - <artifact-1.md> (<producer role>)
[coordinator] - <artifact-2.md> (<producer role>)
[coordinator]
[coordinator] Pipeline: <completed>/<total> tasks
[coordinator] Roles: <role-list>
[coordinator] Duration: <elapsed>
[coordinator]
[coordinator] Session: <session-folder>
[coordinator] ============================================
```

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect in task analysis, report to user, halt |
| Task description too vague | AskUserQuestion for clarification |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
| Role generation fails | Fall back to single general-purpose role |
| capability_gap reported | handleAdapt: generate new role, create tasks, spawn |
| All capabilities merge to one | Valid: single-role execution, reduced overhead |
| No capabilities detected | Default to single general role with TASK prefix |
# Dynamic Role Template

Template used by coordinator to generate worker role.md files at runtime. Each generated role is written to `<session>/roles/<role-name>.md`.

## Template

```markdown
# Role: <role_name>

<role_description>

## Identity

- **Name**: `<role_name>` | **Tag**: `[<role_name>]`
- **Task Prefix**: `<prefix>-*`
- **Responsibility**: <responsibility_type>
<if inner_loop>
- **Mode**: Inner Loop (handle all `<prefix>-*` tasks in single agent)
</if>

## Boundaries

### MUST
- Only process `<prefix>-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[<role_name>]` identifier
- Only communicate with coordinator via SendMessage
- Work strictly within <responsibility_type> responsibility scope
- Use fast-advance for simple linear successors (see SKILL.md Phase 5)
- Produce MD artifacts in `<session>/artifacts/`
<if inner_loop>
- Use subagent for heavy work (do not execute CLI/generation in main agent context)
- Maintain context_accumulator across tasks within the inner loop
- Loop through all `<prefix>-*` tasks before reporting to coordinator
</if>

### MUST NOT
- Execute work outside this role's responsibility scope
- Communicate directly with other worker roles (must go through coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Modify files or resources outside this role's scope
- Omit `[<role_name>]` identifier in any output
- Fast-advance when multiple tasks are ready or at checkpoint boundaries
<if inner_loop>
- Execute heavy work (CLI calls, large document generation) in main agent (delegate to subagent)
- SendMessage to coordinator mid-loop (unless consensus_blocked HIGH or error count >= 3)
</if>

## Toolbox

| Tool | Purpose |
|------|---------|
<tools based on responsibility_type -- see reference sections below>

## Message Types

| Type | Direction | Description |
|------|-----------|-------------|
| `<prefix>_complete` | -> coordinator | Task completed with artifact path |
| `<prefix>_error` | -> coordinator | Error encountered |
| `capability_gap` | -> coordinator | Work outside role scope discovered |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,
  from: "<role_name>",
  to: "coordinator",
  type: <message-type>,
  summary: "[<role_name>] <prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**`team` must be session ID** (e.g., `TC-my-project-2026-02-27`), NOT team name. Extract from task description `Session:` field -> take folder name.

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from <role_name> --to coordinator --type <message-type> --summary \"[<role_name>] <prefix> complete\" --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `<prefix>-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: <phase2_name>

<phase2_content -- generated by coordinator based on responsibility type>

### Phase 3: <phase3_name>

<phase3_content -- generated by coordinator based on task specifics>

### Phase 4: <phase4_name>

<phase4_content -- generated by coordinator based on responsibility type>

<if inline_discuss>
### Phase 4b: Inline Discuss (optional)

After primary work, optionally call discuss subagent:

```
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: "Discuss <round-id>",
  prompt: "## Multi-Perspective Critique: <round-id>
See subagents/discuss-subagent.md for prompt template.
Perspectives: <specified by coordinator when generating this role>"
})
```

| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | Phase 5 SendMessage includes structured consensus_blocked format. Do NOT self-revise. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |
</if>

<if inner_loop>
### Phase 5-L: Loop Completion (Inner Loop)

When more same-prefix tasks remain:

1. **TaskUpdate**: Mark current task completed
2. **team_msg**: Log task completion
3. **Accumulate summary**:
   ```
   context_accumulator.append({
     task: "<task-id>",
     artifact: "<output-path>",
     key_decisions: <from subagent return>,
     discuss_verdict: <from Phase 4>,
     summary: <from subagent return>
   })
   ```
4. **Interrupt check**:
   - consensus_blocked HIGH -> SendMessage -> STOP
   - Error count >= 3 -> SendMessage -> STOP
5. **Loop**: Back to Phase 1

**Does NOT**: SendMessage to coordinator, Fast-Advance spawn.

### Phase 5-F: Final Report (Inner Loop)

When all same-prefix tasks are done:

1. **TaskUpdate**: Mark last task completed
2. **team_msg**: Log completion
3. **Summary report**: All tasks summary + discuss results + artifact paths
4. **Fast-Advance check**: Check cross-prefix successors
5. **SendMessage** or **spawn successor**

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report + Fast-Advance

<else>
### Phase 5: Report + Fast-Advance

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report + Fast-Advance

Standard report flow: team_msg log -> SendMessage with `[<role_name>]` prefix -> TaskUpdate completed -> Fast-Advance Check -> Loop to Phase 1 for next task.
</if>

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No <prefix>-* tasks available | Idle, wait for coordinator assignment |
| Context file not found | Notify coordinator, request location |
| Subagent fails | Retry once with fallback; still fails -> log error, continue next task |
| Fast-advance spawn fails | Fall back to SendMessage to coordinator |
<if inner_loop>
| Cumulative 3 task failures | SendMessage to coordinator, STOP inner loop |
| Agent crash mid-loop | Coordinator detects orphan on resume -> re-spawn -> resume from interrupted task |
</if>
| Work outside scope discovered | SendMessage capability_gap to coordinator |
| Critical issue beyond scope | SendMessage fix_required to coordinator |
```
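The session-ID extraction rule from the template's Message Bus section (take the folder name from the task description's `Session:` field) can be sketched as:

```typescript
// Take the last path segment of the "Session:" field, e.g.
// "Session: .workflow/.team/TC-my-project-2026-02-27" -> "TC-my-project-2026-02-27".
function sessionIdFromDescription(description: string): string | null {
  const match = description.match(/^Session:\s*(\S+)/m);
  if (!match) return null;
  const segments = match[1].replace(/\/+$/, "").split("/");
  return segments[segments.length - 1] || null;
}
```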

---

## Phase 2-4 Content by Responsibility Type

Reference sections for coordinator to fill when generating roles. Select the matching section based on `responsibility_type`.

### orchestration

**Phase 2: Context Assessment**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Shared memory | <session>/shared-memory.json | No |
| Prior artifacts | <session>/artifacts/ | No |
| Wisdom | <session>/wisdom/ | No |

Loading steps:
1. Extract session path from task description
2. Read shared-memory.json for cross-role context
3. Read prior artifacts (if any exist from upstream tasks)
4. Load wisdom files for accumulated knowledge
5. Optionally call explore subagent for codebase context
```

**Phase 3: Subagent Execution**

```
Delegate to appropriate subagent based on task:

Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "<task-type> for <task-id>",
  prompt: "## Task
- <task description>
- Session: <session-folder>
## Context
<prior artifacts + shared memory + explore results>
## Expected Output
Write artifact to: <session>/artifacts/<artifact-name>.md
Return JSON summary: { artifact_path, summary, key_decisions[], warnings[] }"
})
```

**Phase 4: Result Aggregation**

```
1. Verify subagent output artifact exists
2. Read artifact, validate structure/completeness
3. Update shared-memory.json with key findings
4. Write insights to wisdom/ files
```

### code-gen (docs)

**Phase 2: Load Prior Context**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Prior artifacts | <session>/artifacts/ from upstream tasks | Conditional |
| Shared memory | <session>/shared-memory.json | No |
| Wisdom | <session>/wisdom/ | No |

Loading steps:
1. Extract session path from task description
2. Read upstream artifacts (e.g., research findings for a writer)
3. Read shared-memory.json for cross-role context
4. Load wisdom for accumulated decisions
```

**Phase 3: Document Generation**

```
Task({
  subagent_type: "universal-executor",
  run_in_background: false,
  description: "Generate <doc-type> for <task-id>",
  prompt: "## Task
- Generate: <document type>
- Session: <session-folder>
## Prior Context
<upstream artifacts + shared memory>
## Instructions
<task-specific writing instructions from coordinator>
## Expected Output
Write document to: <session>/artifacts/<doc-name>.md
Return JSON: { artifact_path, summary, key_decisions[], sections_generated[], warnings[] }"
})
```

**Phase 4: Structure Validation**

```
1. Verify document artifact exists
2. Check document has expected sections
3. Validate no placeholder text remains
4. Update shared-memory.json with document metadata
```
|
||||
|
||||
### code-gen (code)
|
||||
|
||||
**Phase 2: Load Plan/Specs**
|
||||
|
||||
```
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| Task description | From TaskGet | Yes |
|
||||
| Plan/design artifacts | <session>/artifacts/ | Conditional |
|
||||
| Shared memory | <session>/shared-memory.json | No |
|
||||
| Wisdom | <session>/wisdom/ | No |
|
||||
|
||||
Loading steps:
|
||||
1. Extract session path from task description
|
||||
2. Read plan/design artifacts from upstream
|
||||
3. Load shared-memory.json for implementation context
|
||||
4. Load wisdom for conventions and patterns
|
||||
```
|
||||
|
||||
**Phase 3: Code Implementation**
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "code-developer",
|
||||
run_in_background: false,
|
||||
description: "Implement <task-id>",
|
||||
prompt: "## Task
|
||||
- <implementation description>
|
||||
- Session: <session-folder>
|
||||
## Plan/Design Context
|
||||
<upstream artifacts>
|
||||
## Instructions
|
||||
<task-specific implementation instructions>
|
||||
## Expected Output
|
||||
Implement code changes.
|
||||
Write summary to: <session>/artifacts/implementation-summary.md
|
||||
Return JSON: { artifact_path, summary, files_changed[], key_decisions[], warnings[] }"
|
||||
})
|
||||
```

**Phase 4: Syntax Validation**

```
1. Run syntax check (tsc --noEmit or equivalent)
2. Verify all planned files exist
3. Check no broken imports
4. If validation fails -> attempt auto-fix (max 2 attempts)
5. Write implementation summary to artifacts/
```
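
The check-and-retry loop in step 4 can be sketched as a small driver; `run_check` and `auto_fix` are hypothetical callables standing in for the real syntax check and fix steps:

```python
def validate_with_retries(run_check, auto_fix, max_attempts=2):
    # run_check() -> (ok: bool, errors: list); auto_fix(errors) attempts a repair
    ok, errors = run_check()
    attempts = 0
    while not ok and attempts < max_attempts:
        auto_fix(errors)
        attempts += 1
        ok, errors = run_check()
    return {"ok": ok, "fix_attempts": attempts,
            "remaining_errors": [] if ok else errors}
```

If the loop exits with `ok` still false, the remaining errors belong in the implementation summary's warnings.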

### read-only

**Phase 2: Target Loading**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Target artifacts/files | From task description or upstream | Yes |
| Shared memory | <session>/shared-memory.json | No |

Loading steps:
1. Extract session path and target files from task description
2. Read target artifacts or source files for analysis
3. Load shared-memory.json for context
```

**Phase 3: Multi-Dimension Analysis**

```
Task({
  subagent_type: "general-purpose",
  run_in_background: false,
  description: "Analyze <target> for <task-id>",
  prompt: "## Task
- Analyze: <target description>
- Dimensions: <analysis dimensions from coordinator>
- Session: <session-folder>
## Target Content
<artifact content or file content>
## Expected Output
Write report to: <session>/artifacts/analysis-report.md
Return JSON: { artifact_path, summary, findings[], severity_counts: {critical, high, medium, low} }"
})
```

**Phase 4: Severity Classification**

```
1. Verify analysis report exists
2. Classify findings by severity (Critical/High/Medium/Low)
3. Update shared-memory.json with key findings
4. Write issues to wisdom/issues.md
```
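
The tally returned in `severity_counts` follows directly from step 2; a sketch, assuming each finding carries a `severity` string (the finding shape is an assumption):

```python
from collections import Counter

def count_severities(findings):
    # Normalize case so "High" and "high" land in the same bucket
    counts = Counter(f["severity"].lower() for f in findings)
    # Emit the fixed four-level shape from the Return JSON schema
    return {level: counts.get(level, 0)
            for level in ("critical", "high", "medium", "low")}
```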

### validation

**Phase 2: Environment Detection**

```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Implementation artifacts | Upstream code changes | Yes |

Loading steps:
1. Detect test framework from project files
2. Get changed files from implementation
3. Identify test command and coverage tool
```

**Phase 3: Test-Fix Cycle**

```
Task({
  subagent_type: "test-fix-agent",
  run_in_background: false,
  description: "Test-fix for <task-id>",
  prompt: "## Task
- Run tests and fix failures
- Session: <session-folder>
- Max iterations: 5
## Changed Files
<from upstream implementation>
## Expected Output
Write report to: <session>/artifacts/test-report.md
Return JSON: { artifact_path, pass_rate, coverage, iterations_used, remaining_failures[] }"
})
```

**Phase 4: Result Analysis**

```
1. Check pass rate >= 95%
2. Check coverage meets threshold
3. Generate test report with pass/fail counts
4. Update shared-memory.json with test results
```
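
The two gates above reduce to one predicate; the 95% floor comes from step 1, while the coverage threshold is project-configured (the 0.80 default here is an illustrative assumption):

```python
def passes_quality_gate(pass_rate, coverage, coverage_threshold=0.80):
    # Gate 1: pass rate must be at least 95%
    # Gate 2: coverage must meet the configured threshold
    return pass_rate >= 0.95 and coverage >= coverage_threshold
```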

---

# Discuss Subagent

Lightweight multi-perspective critique engine. Called inline by any role needing peer review. Perspectives are dynamic -- specified by the calling role, not pre-defined.

## Design

Unlike team-lifecycle-v4's fixed perspective definitions (product, technical, quality, risk, coverage), team-coordinate uses **dynamic perspectives** passed in the prompt. The calling role decides what viewpoints matter for its artifact.

## Invocation

Called by roles after artifact creation:

```
Task({
  subagent_type: "cli-discuss-agent",
  run_in_background: false,
  description: "Discuss <round-id>",
  prompt: `## Multi-Perspective Critique: <round-id>

### Input
- Artifact: <artifact-path>
- Round: <round-id>
- Session: <session-folder>

### Perspectives
<Dynamic perspective list -- each entry defines: name, cli_tool, role_label, focus_areas>

Example:
| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Feasibility | gemini | Engineer | Implementation complexity, technical risks, resource needs |
| Clarity | codex | Editor | Readability, logical flow, completeness of explanation |
| Accuracy | gemini | Domain Expert | Factual correctness, source reliability, claim verification |

### Execution Steps
1. Read artifact from <artifact-path>
2. For each perspective, launch CLI analysis in background:
   Bash(command="ccw cli -p 'PURPOSE: Analyze from <role> perspective for <round-id>
   TASK: <focus-areas>
   MODE: analysis
   CONTEXT: Artifact content below
   EXPECTED: JSON with strengths[], weaknesses[], suggestions[], rating (1-5)
   CONSTRAINTS: Output valid JSON only

   Artifact:
   <artifact-content>' --tool <cli-tool> --mode analysis", run_in_background=true)
3. Wait for all CLI results
4. Divergence detection:
   - High severity: any rating <= 2, critical issue identified
   - Medium severity: rating spread (max - min) >= 3, or single perspective rated <= 2 with others >= 3
   - Low severity: minor suggestions only, all ratings >= 3
5. Consensus determination:
   - No high-severity divergences AND average rating >= 3.0 -> consensus_reached
   - Otherwise -> consensus_blocked
6. Synthesize:
   - Convergent themes (agreed by 2+ perspectives)
   - Divergent views (conflicting assessments)
   - Action items from suggestions
7. Write discussion record to: <session-folder>/discussions/<round-id>-discussion.md

### Discussion Record Format
# Discussion Record: <round-id>

**Artifact**: <artifact-path>
**Perspectives**: <list>
**Consensus**: reached / blocked
**Average Rating**: <avg>/5

## Convergent Themes
- <theme>

## Divergent Views
- **<topic>** (<severity>): <description>

## Action Items
1. <item>

## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |

### Return Value

**When consensus_reached**:
Return a summary string with:
- Verdict: consensus_reached
- Average rating
- Key action items (top 3)
- Discussion record path

**When consensus_blocked**:
Return a structured summary with:
- Verdict: consensus_blocked
- Severity: HIGH | MEDIUM | LOW
- Average rating
- Divergence summary: top 3 divergent points with perspective attribution
- Action items: prioritized list of required changes
- Recommendation: revise | proceed-with-caution | escalate
- Discussion record path

### Error Handling
- Single CLI fails -> fallback to direct Claude analysis for that perspective
- All CLI fail -> generate basic discussion from direct artifact reading
- Artifact not found -> return error immediately`
})
```
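
The divergence and consensus rules in steps 4-5 can be sketched as a pure function over the per-perspective ratings. One consistent reading is assumed: High requires a flagged critical issue alongside a low rating, so a lone <= 2 rating without one falls to Medium (the `critical_issue` flag stands in for "critical issue identified"):

```python
def assess_consensus(ratings, critical_issue=False):
    # Step 4: classify divergence severity (one consistent reading of the rules)
    avg = sum(ratings) / len(ratings)
    spread = max(ratings) - min(ratings)
    low_raters = [r for r in ratings if r <= 2]
    if low_raters and critical_issue:
        severity = "HIGH"
    elif spread >= 3 or len(low_raters) == 1:
        severity = "MEDIUM"
    else:
        severity = "LOW"
    # Step 5: consensus = no high-severity divergence AND average >= 3.0
    verdict = ("consensus_reached"
               if severity != "HIGH" and avg >= 3.0
               else "consensus_blocked")
    return {"severity": severity, "average": round(avg, 2), "verdict": verdict}
```

Note that a Medium divergence can still reach consensus, matching step 5's two-condition rule.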

## Integration with Calling Role

The calling role is responsible for:

1. **Before calling**: Complete primary artifact output
2. **Calling**: Invoke discuss subagent with appropriate dynamic perspectives
3. **After calling**:

| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in Phase 5 report, proceed normally |
| consensus_blocked | HIGH | Include divergence details in Phase 5 SendMessage. Do NOT self-revise -- coordinator decides. |
| consensus_blocked | MEDIUM | Include warning in Phase 5 SendMessage. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |

**SendMessage format for consensus_blocked (HIGH or MEDIUM)**:

```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <artifact-path>
Discussion: <discussion-record-path>
```

---

# Explore Subagent

Shared codebase exploration utility with centralized caching. Callable by any role needing code context.

## Invocation

```
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore <angle>",
  prompt: `Explore codebase for: <query>

Focus angle: <angle>
Keywords: <keyword-list>
Session folder: <session-folder>

## Cache Check
1. Read <session-folder>/explorations/cache-index.json (if exists)
2. Look for entry with matching angle
3. If found AND file exists -> read cached result, return summary
4. If not found -> proceed to exploration

## Exploration
<angle-specific-focus-from-table-below>

## Output
Write JSON to: <session-folder>/explorations/explore-<angle>.json
Update cache-index.json with new entry

## Output Schema
{
  "angle": "<angle>",
  "query": "<query>",
  "relevant_files": [
    { "path": "...", "rationale": "...", "role": "...", "discovery_source": "...", "key_symbols": [] }
  ],
  "patterns": [],
  "dependencies": [],
  "external_refs": [],
  "_metadata": { "created_by": "<calling-role>", "timestamp": "...", "cache_key": "..." }
}

Return summary: file count, pattern count, top 5 files, output path`
})
```

## Cache Mechanism

### Cache Index Schema

`<session-folder>/explorations/cache-index.json`:

```json
{
  "entries": [
    {
      "angle": "architecture",
      "keywords": ["auth", "middleware"],
      "file": "explore-architecture.json",
      "created_by": "analyst",
      "created_at": "2026-02-27T10:00:00Z",
      "file_count": 15
    }
  ]
}
```

### Cache Lookup Rules

| Condition | Action |
|-----------|--------|
| Exact angle match exists | Return cached result |
| No match | Execute exploration, cache result |
| Cache file missing but index has entry | Remove stale entry, re-explore |
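
The three lookup rules map onto three branches over the index schema above; a minimal sketch, with file existence simulated by a plain set of filenames so the branches stay visible:

```python
def cache_lookup(index, angle, existing_files):
    # Returns (action, entry) per the Cache Lookup Rules table
    for entry in list(index["entries"]):
        if entry["angle"] == angle:
            if entry["file"] in existing_files:
                return ("return_cached", entry)
            # Stale: index points at a file that no longer exists
            index["entries"].remove(entry)
            return ("re_explore", None)
    return ("explore_and_cache", None)
```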

### Cache Invalidation

Cache is session-scoped. No explicit invalidation needed -- each session starts fresh. If a role suspects stale data, it can pass `force_refresh: true` in the prompt to bypass cache.

## Angle Focus Guide

| Angle | Focus Points | Typical Caller |
|-------|-------------|----------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs | any |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities | any |
| modularity | Module interfaces, separation of concerns, extraction opportunities | any |
| integration-points | API endpoints, data flow between modules, event systems | any |
| security | Auth/authz logic, input validation, sensitive data handling, middleware | any |
| dataflow | Data transformations, state propagation, validation points | any |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity | any |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging | any |
| patterns | Code conventions, design patterns, naming conventions, best practices | any |
| testing | Test files, coverage gaps, test patterns, mocking strategies | any |
| general | Broad semantic search for topic-related code | any |

## Exploration Strategies

### Low Complexity (direct search)

For simple queries, use ACE semantic search:

```
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<query>")
```

ACE failure fallback: `rg -l '<keywords>' --type ts`

### Medium/High Complexity (multi-angle)

For complex queries, call cli-explore-agent per angle. The calling role determines complexity and selects angles.

## Search Tool Priority

| Tool | Priority | Use Case |
|------|----------|----------|
| mcp__ace-tool__search_context | P0 | Semantic search |
| Grep / Glob | P1 | Pattern matching |
| cli-explore-agent | P2 | Deep multi-angle exploration |
| WebSearch | P3 | External docs |

---
name: team-executor
description: Lightweight session execution skill. Resumes existing team-coordinate sessions for pure execution. No analysis, no role generation -- only loads and executes. Session path required. Triggers on "team executor".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team Executor

Lightweight session execution skill: load session -> reconcile state -> spawn workers -> execute -> deliver. **No analysis, no role generation** -- only executes existing team-coordinate sessions.

## Architecture

```
+---------------------------------------------------+
| Skill(skill="team-executor")                      |
|   args="--session=<path>"  [REQUIRED]             |
|   args="--role=<name>"     (for worker dispatch)  |
+-------------------+-------------------------------+
                    | Session Validation
         +---- --session valid? ----+
         | NO                       | YES
         v                          v
  Error immediately            Role Router
  (no session)                      |
                            +-------+-------+
                            | --role present?
                            |               |
                        YES |               | NO
                            v               v
                      Route to        Orchestration Mode
                      session         -> executor
                      role.md
```

---

## Session Validation (BEFORE routing)

**CRITICAL**: Session validation MUST occur before any role routing.

### Parse Arguments

Extract from `$ARGUMENTS`:
- `--session=<path>`: Path to team-coordinate session folder (REQUIRED)
- `--role=<name>`: Role to dispatch (optional, defaults to orchestration mode)

### Validation Steps

1. **Check `--session` provided**:
   - If missing -> **ERROR**: "Session required. Usage: --session=<path-to-TC-folder>"
   - Do NOT proceed

2. **Validate session structure** (see specs/session-schema.md):
   - Directory exists at path
   - `team-session.json` exists and valid JSON
   - `task-analysis.json` exists and valid JSON
   - `roles/` directory has at least one `.md` file
   - Each role in `team-session.json#roles` has corresponding `.md` file in `roles/`

3. **Validation failure**:
   - Report specific missing component
   - Suggest re-running team-coordinate or checking path
   - Do NOT proceed

### Validation Checklist

```
Session Validation Checklist:
[ ] --session argument provided
[ ] Directory exists at path
[ ] team-session.json exists and parses
[ ] task-analysis.json exists and parses
[ ] roles/ directory has >= 1 .md files
[ ] All session.roles[] have corresponding roles/<role>.md
```

---

## Role Router

### Dispatch Logic

| Scenario | Action |
|----------|--------|
| No `--session` | **ERROR** immediately |
| `--session` invalid | **ERROR** with specific reason |
| No `--role` | Orchestration Mode -> executor |
| `--role=executor` | Read built-in `roles/executor/role.md` |
| `--role=<other>` | Read `<session>/roles/<role>.md` |

### Orchestration Mode

When invoked without `--role`, executor auto-starts.

**Invocation**: `Skill(skill="team-executor", args="--session=<session-folder>")`

**Lifecycle**:
```
Validate session
-> executor Phase 0: Reconcile state (reset interrupted, detect orphans)
-> executor Phase 1: Spawn first batch workers (background) -> STOP
-> Worker executes -> SendMessage callback -> executor advances next step
-> Loop until pipeline complete -> Phase 2 report
```

**User Commands** (wake paused executor):

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |

---

## Role Registry

| Role | File | Type |
|------|------|------|
| executor | [roles/executor/role.md](roles/executor/role.md) | built-in orchestrator |
| (dynamic) | `<session>/roles/<role-name>.md` | loaded from session |

> **COMPACT PROTECTION**: Role files are execution documents. After context compression, role instructions become summaries only -- **MUST immediately `Read` the role.md to reload before continuing**. Never execute any Phase based on summaries.

---

## Shared Infrastructure

The following templates apply to all worker roles. Each loaded role.md follows the same structure.

### Worker Phase 1: Task Discovery (all workers shared)

Each worker on startup executes the same task discovery flow:

1. Call `TaskList()` to get all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No tasks -> idle wait
4. Has tasks -> `TaskGet` for details -> `TaskUpdate` mark in_progress

**Resume Artifact Check** (prevent duplicate output after resume):
- Check if this task's output artifacts already exist
- Artifacts complete -> skip to Phase 5 report completion
- Artifacts incomplete or missing -> normal Phase 2-4 execution

### Worker Phase 5: Report + Fast-Advance (all workers shared)

Task completion with optional fast-advance to skip executor round-trip:

1. **Message Bus**: Call `mcp__ccw-tools__team_msg` to log message
   - Params: operation="log", team=**<session-id>**, from=<role>, to="executor", type=<message-type>, summary="[<role>] <summary>", ref=<artifact-path>
   - **`team` must be session ID** (e.g., `TC-my-project-2026-02-27`), NOT team name. Extract from task description `Session:` field -> take folder name.
   - **CLI fallback**: `ccw team log --team <session-id> --from <role> --to executor --type <type> --summary "[<role>] ..." --json`
2. **TaskUpdate**: Mark task completed
3. **Fast-Advance Check**:
   - Call `TaskList()`, find pending tasks whose blockedBy are ALL completed
   - If exactly 1 ready task AND its owner matches a simple successor pattern -> **spawn it directly** (skip executor)
   - Otherwise -> **SendMessage** to executor for orchestration
4. **Loop**: Back to Phase 1 to check for next task

**Fast-Advance Rules**:

| Condition | Action |
|-----------|--------|
| Same-prefix successor (Inner Loop role) | Do not spawn, main agent inner loop (Phase 5-L) |
| 1 ready task, simple linear successor, different prefix | Spawn directly via Task(run_in_background: true) |
| Multiple ready tasks (parallel window) | SendMessage to executor (needs orchestration) |
| No ready tasks + others running | SendMessage to executor (status update) |
| No ready tasks + nothing running | SendMessage to executor (pipeline may be complete) |

**Fast-advance failure recovery**: If a fast-advanced task fails, the executor detects it as an orphaned in_progress task on next `resume`/`check` and resets it to pending for re-spawn. Self-healing. See [monitor.md](roles/executor/commands/monitor.md).
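
The Fast-Advance Rules reduce to a small decision function over the task list. A sketch, assuming the TaskList fields used above (`subject`, `status`, `blockedBy`) and that a task's prefix is the segment of its subject before the first hyphen:

```python
def fast_advance_decision(tasks, my_prefix):
    """Decide the post-completion action per the Fast-Advance Rules table."""
    done = {t["subject"] for t in tasks if t["status"] == "completed"}
    ready = [t for t in tasks
             if t["status"] == "pending" and all(b in done for b in t["blockedBy"])]
    if len(ready) == 1:
        prefix = ready[0]["subject"].split("-")[0]
        if prefix == my_prefix:
            return "inner_loop"      # same-prefix successor: keep it in this worker
        return "spawn_directly"      # simple linear successor, different prefix
    # Parallel window, status update, or pipeline completion: let the executor decide
    return "send_message"
```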

### Worker Inner Loop (roles with multiple same-prefix serial tasks)

When a role has **2+ serial same-prefix tasks**, it loops internally instead of spawning new agents:

**Inner Loop flow**:

```
Phase 1: Discover task (first time)
  |
  +- Found task -> Phase 2-3: Load context + Execute work
  |       |
  |       v
  |    Phase 4: Validation (+ optional Inline Discuss)
  |       |
  |       v
  |    Phase 5-L: Loop Completion
  |       |
  |       +- TaskUpdate completed
  |       +- team_msg log
  |       +- Accumulate summary to context_accumulator
  |       |
  |       +- More same-prefix tasks?
  |       |    +- YES -> back to Phase 1 (inner loop)
  |       |    +- NO -> Phase 5-F: Final Report
  |       |
  |       +- Interrupt conditions?
  |            +- consensus_blocked HIGH -> SendMessage -> STOP
  |            +- Errors >= 3 -> SendMessage -> STOP
  |
  +- Phase 5-F: Final Report
       +- SendMessage (all task summaries)
       +- STOP
```

**Phase 5-L vs Phase 5-F**:

| Step | Phase 5-L (looping) | Phase 5-F (final) |
|------|---------------------|-------------------|
| TaskUpdate completed | YES | YES |
| team_msg log | YES | YES |
| Accumulate summary | YES | - |
| SendMessage to executor | NO | YES (all tasks summary) |
| Fast-Advance to next prefix | - | YES (check cross-prefix successors) |

### Wisdom Accumulation (all roles)

Cross-task knowledge accumulation. Loaded from session at startup.

**Directory**:
```
<session-folder>/wisdom/
+-- learnings.md   # Patterns and insights
+-- decisions.md   # Design and strategy decisions
+-- issues.md      # Known risks and issues
```

**Worker load** (Phase 2): Extract `Session: <path>` from task description, read wisdom files.
**Worker contribute** (Phase 4/5): Write discoveries to corresponding wisdom files.

### Role Isolation Rules

| Allowed | Prohibited |
|---------|-----------|
| Process own prefix tasks | Process other role's prefix tasks |
| SendMessage to executor | Directly communicate with other workers |
| Use tools appropriate to responsibility | Create tasks for other roles |
| Fast-advance simple successors | Spawn parallel worker batches |
| Report capability_gap to executor | Attempt work outside scope |

The executor is additionally prohibited from: directly writing or modifying deliverable artifacts, calling implementation subagents directly, directly executing analysis/test/review, and generating new roles.

---

## Cadence Control

**Beat model**: Event-driven, each beat = executor wake -> process -> spawn -> STOP.

```
Beat Cycle (single beat)
======================================================================
Event                 Executor                     Workers
----------------------------------------------------------------------
callback/resume  -->  +- handleCallback ---+
                      |  mark completed    |
                      |  check pipeline    |
                      +- handleSpawnNext --+
                      |  find ready tasks  |
                      |  spawn workers  ---+-->  [Worker A] Phase 1-5
                      |  (parallel OK)  ---+-->  [Worker B] Phase 1-5
                      +- STOP (idle) ------+            |
                                                        |
callback  <---------------------------------------------+
(next beat)           SendMessage + TaskUpdate(completed)
======================================================================

Fast-Advance (skips executor for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
  +- 1 ready task? simple successor? --> spawn Worker B directly
  +- complex case? --> SendMessage to executor
======================================================================
```

---

## Executor Spawn Template

### Standard Worker (single-task role)

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-executor", args="--role=<role> --session=<session-folder>")

Current requirement: <task-description>
Session: <session-folder>

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with executor
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- After task completion, check for fast-advance opportunity (see SKILL.md Phase 5)

## Workflow
1. Call Skill -> get role definition and execution logic
2. Follow role.md 5-Phase flow
3. team_msg(team=<session-id>) + SendMessage results to executor
4. TaskUpdate completed -> check next task or fast-advance`
})
```

### Inner Loop Worker (multi-task role)

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker (inner loop)",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are team "<team-name>" <ROLE>.

## Primary Instruction
All your work MUST be executed by calling Skill to get role definition:
Skill(skill="team-executor", args="--role=<role> --session=<session-folder>")

Current requirement: <task-description>
Session: <session-folder>

## Inner Loop Mode
You will handle ALL <PREFIX>-* tasks in this session, not just the first one.
After completing each task, loop back to find the next <PREFIX>-* task.
Only SendMessage to executor when:
- All <PREFIX>-* tasks are done
- A consensus_blocked HIGH occurs
- Errors accumulate (>= 3)

## Role Guidelines
- Only process <PREFIX>-* tasks, do not execute other role work
- All output prefixed with [<role>] tag
- Only communicate with executor
- Do not use TaskCreate to create tasks for other roles
- Before each SendMessage, call mcp__ccw-tools__team_msg to log (team=<session-id> from Session field, NOT team name)
- Use subagent calls for heavy work, retain summaries in context`
})
```

---

## Integration with team-coordinate

| Scenario | Skill |
|----------|-------|
| New task, no session | team-coordinate |
| Existing session, resume execution | **team-executor** |
| Session needs new roles | team-coordinate (with --resume) |
| Pure execution, no analysis | **team-executor** |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No --session provided | ERROR immediately with usage message |
| Session directory not found | ERROR with path, suggest checking path |
| team-session.json missing | ERROR, session incomplete, suggest re-run team-coordinate |
| task-analysis.json missing | ERROR, session incomplete, suggest re-run team-coordinate |
| No roles in session | ERROR, session incomplete, suggest re-run team-coordinate |
| Role file not found | ERROR with expected path |
| capability_gap reported | Warn only, cannot generate new roles (see monitor.md handleAdapt) |
| Fast-advance spawns wrong task | Executor reconciles on next callback |
---

# Command: monitor

## Purpose

Event-driven pipeline coordination with Spawn-and-Stop pattern for team-executor. Adapted from team-coordinate monitor.md -- role names are read from `team-session.json#roles` instead of hardcoded. **handleAdapt is LIMITED**: only warns, cannot generate new roles.

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Executor does one operation then STOPS |
| FAST_ADVANCE_AWARE | true | Workers may skip executor for simple linear successors |
| ROLE_GENERATION | disabled | handleAdapt cannot generate new roles |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Role registry | session.roles[] | Yes |

**Dynamic role resolution**: Known worker roles are loaded from `session.roles[].name`. This is the same pattern as team-coordinate.

## Phase 3: Handler Routing

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from session roles | handleCallback |
| 2 | Contains "capability_gap" | handleAdapt |
| 3 | Contains "check" or "status" | handleCheck |
| 4 | Contains "resume", "continue", or "next" | handleResume |
| 5 | None of the above (initial spawn after dispatch) | handleSpawnNext |
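
The priority table is a first-match routing chain; a sketch (substring matching mirrors the "Contains" wording, which is an assumption about how loosely the conditions are tested):

```python
def route(message, session_roles):
    """Return the handler name for a wake-up message, first match wins."""
    msg = message or ""
    if any(f"[{role}]" in msg for role in session_roles):
        return "handleCallback"
    if "capability_gap" in msg:
        return "handleAdapt"
    if "check" in msg or "status" in msg:
        return "handleCheck"
    if any(word in msg for word in ("resume", "continue", "next")):
        return "handleResume"
    return "handleSpawnNext"  # initial spawn after dispatch
```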
|
||||
---
|
||||
|
||||
### Handler: handleCallback
|
||||
|
||||
Worker completed a task. Verify completion, update state, auto-advance.
|
||||
|
||||
```
|
||||
Receive callback from [<role>]
|
||||
+- Find matching active worker by role (from session.roles)
|
||||
+- Is this a progress update (not final)? (Inner Loop intermediate task completion)
|
||||
| +- YES -> Update session state, do NOT remove from active_workers -> STOP
|
||||
+- Task status = completed?
|
||||
| +- YES -> remove from active_workers -> update session
|
||||
| | +- -> handleSpawnNext
|
||||
| +- NO -> progress message, do not advance -> STOP
|
||||
+- No matching worker found
|
||||
+- Scan all active workers for completed tasks
|
||||
+- Found completed -> process each -> handleSpawnNext
|
||||
+- None completed -> STOP
|
||||
```
|
||||
|
||||
**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
|
||||
1. Check if the expected next task is already `in_progress` (fast-advanced)
|
||||
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
|
||||
3. If no -> normal handleSpawnNext
|
||||
|
||||
---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[executor] Pipeline Status
[executor] Progress: <completed>/<total> (<percent>%)

[executor] Execution Graph:
<visual representation of dependency graph with status icons>

done=completed  >>>=running  o=pending  .=not created

[executor] Active Workers:
> <subject> (<role>) - running <elapsed> [inner-loop: N/M tasks done]

[executor] Ready to spawn: <subjects>
[executor] Commands: 'resume' to advance | 'check' to refresh
```

**Icon mapping**: completed=done, in_progress=>>>, pending=o, not created=.

**Graph rendering**: Read dependency_graph from task-analysis.json, render each node with its status icon. Show parallel branches side-by-side.

Then STOP.
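The progress line and icon mapping can be sketched as follows; the task shape `{ subject, status }` is an assumption for illustration:

```javascript
// Icon mapping from this document; unknown statuses fall back to "." (not created).
const ICONS = { completed: "done", in_progress: ">>>", pending: "o" };

function renderStatus(tasks, totalPlanned) {
  const done = tasks.filter((t) => t.status === "completed").length;
  const percent = Math.round((done / totalPlanned) * 100);
  const lines = tasks.map((t) => `${ICONS[t.status] ?? "."} ${t.subject}`);
  return [`[executor] Progress: ${done}/${totalPlanned} (${percent}%)`, ...lines];
}
```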
---

### Handler: handleResume

Check active worker completion, process results, advance the pipeline.

```
Load active_workers from session
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- status = completed -> mark done, log
   +- status = in_progress -> still running, log
   +- other status -> worker failure -> reset to pending
After processing:
+- Some completed -> handleSpawnNext
+- All still running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```
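A sketch of the status sweep, assuming a `statusOf(taskId)` lookup against TaskList results (hypothetical helper, illustrative shapes):

```javascript
// Partition active workers by their task's current status, then decide the
// follow-up action: any completion or failure warrants handleSpawnNext.
function sweepWorkers(workers, statusOf) {
  const completed = [], running = [], failed = [];
  for (const w of workers) {
    const s = statusOf(w.taskId);
    if (s === "completed") completed.push(w);
    else if (s === "in_progress") running.push(w);
    else failed.push(w); // unexpected status -> reset to pending, retry later
  }
  const next = completed.length || failed.length ? "handleSpawnNext" : "STOP";
  return { completed, running, failed, next };
}
```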
---

### Handler: handleSpawnNext

Find all ready tasks, spawn workers in the background, update the session, STOP.

```
Collect task states from TaskList()
+- completedSubjects: status = completed
+- inProgressSubjects: status = in_progress
+- readySubjects: pending + all blockedBy in completedSubjects

Ready tasks found?
+- NONE + work in progress -> report waiting -> STOP
+- NONE + nothing in progress -> PIPELINE_COMPLETE -> Phase 2
+- HAS ready tasks -> for each:
   +- Is task owner an Inner Loop role AND that role already has an active_worker?
   |  +- YES -> SKIP spawn (existing worker will pick it up via inner loop)
   |  +- NO -> normal spawn below
   +- TaskUpdate -> in_progress
   +- team_msg log -> task_unblocked (team=<session-id>, NOT team name)
   +- Spawn worker (see spawn tool call below)
   +- Add to session.active_workers
Update session file -> output summary -> STOP
```
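The readySubjects computation can be sketched as a pure function over tasks shaped like `{ subject, status, blockedBy }` (assumed shape):

```javascript
// A task is ready when it is pending and every blocking subject is completed.
function readySubjects(tasks) {
  const completed = new Set(
    tasks.filter((t) => t.status === "completed").map((t) => t.subject)
  );
  return tasks
    .filter((t) => t.status === "pending" && (t.blockedBy ?? []).every((b) => completed.has(b)))
    .map((t) => t.subject);
}
```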
**Spawn worker tool call** (one per ready task):

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker for <subject>",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: "<worker prompt from SKILL.md Executor Spawn Template>"
})
```
---

### Handler: handleAdapt (LIMITED)

Handle mid-pipeline capability gap discovery. **UNLIKE team-coordinate, executor CANNOT generate new roles.**

```
Receive capability_gap from [<role>]
+- Log via team_msg (type: warning)
+- Report to user:
   "Capability gap detected: <gap_description>

    team-executor cannot generate new roles.
    Options:
    1. Continue with existing roles (worker will skip gap work)
    2. Re-run team-coordinate with --resume=<session> to extend session
    3. Manually add role to <session>/roles/ and retry"
+- Extract: gap_description, requesting_role, suggested_capability
+- Validate gap is genuine:
   +- Check existing roles in session.roles -> does any role cover this?
   |  +- YES -> redirect: SendMessage to that role's owner -> STOP
   |  +- NO -> genuine gap, report to user (cannot fix)
+- Do NOT generate new role
+- Continue execution with existing roles
```
**Key difference from team-coordinate**:

| Aspect | team-coordinate | team-executor |
|--------|-----------------|---------------|
| handleAdapt | Generates new role, creates tasks, spawns worker | Only warns, cannot fix |
| Recovery | Automatic | Manual (re-run team-coordinate) |
---

### Worker Failure Handling

When a worker has an unexpected status (not completed, not in_progress):

1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume
### Fast-Advance Failure Recovery

When the executor detects that a fast-advanced task has failed (task in_progress but no callback and the worker is gone):

```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Original fast-advancing worker has already completed and exited
+- Resolution:
   1. TaskUpdate -> reset task to pending
   2. Remove stale active_worker entry (if any)
   3. Log via team_msg (type: error, summary: "Fast-advanced task <ID> failed, resetting for retry")
   4. -> handleSpawnNext (will re-spawn the task normally)
```

**Detection in handleResume**:

```
For each in_progress task in TaskList():
+- Has matching active_worker? -> normal, skip
+- No matching active_worker? -> orphaned (likely fast-advance failure)
   +- Check creation time: if > 5 minutes with no progress callback
      +- Reset to pending -> handleSpawnNext
```

**Prevention**: Fast-advance failures are self-healing. The executor reconciles orphaned tasks on every `resume`/`check` cycle.
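A sketch of the orphan check, assuming epoch-millisecond timestamps and illustrative task/worker shapes:

```javascript
// An in_progress task with no tracking active_worker entry and no progress
// for over five minutes is treated as a fast-advance orphan.
const ORPHAN_MS = 5 * 60 * 1000;

function findOrphans(inProgressTasks, activeWorkers, now) {
  const tracked = new Set(activeWorkers.map((w) => w.taskId));
  return inProgressTasks
    .filter((t) => !tracked.has(t.id) && now - t.startedAt > ORPHAN_MS)
    .map((t) => t.id); // each of these -> reset to pending, then handleSpawnNext
}
```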
### Consensus-Blocked Handling

When a worker reports `consensus_blocked` in its callback:

```
handleCallback receives message with consensus_blocked flag
+- Extract: divergence_severity, blocked_round, action_recommendation
+- Route by severity:

+- severity = HIGH
|  +- Create REVISION task:
|     +- Same role, same doc type, incremented suffix (e.g., DRAFT-001-R1)
|     +- Description includes: divergence details + action items from discuss
|     +- blockedBy: none (immediate execution)
|     +- Max 1 revision per task (DRAFT-001 -> DRAFT-001-R1, no R2)
|     +- If already revised once -> PAUSE, escalate to user
|     +- Update session: mark task as "revised", log revision chain

+- severity = MEDIUM
|  +- Proceed with warning: include divergence in next task's context
|  +- Log action items to wisdom/issues.md
|  +- Normal handleSpawnNext

+- severity = LOW
   +- Proceed normally: treat as consensus_reached with notes
   +- Normal handleSpawnNext
```
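A sketch of the severity routing; the revision-ID rule (DRAFT-001 -> DRAFT-001-R1, max one revision) follows this document, while the return shapes are illustrative:

```javascript
// Route a consensus_blocked callback by divergence severity.
function routeConsensus(severity, taskId) {
  if (severity === "HIGH") {
    if (/-R1$/.test(taskId)) return { action: "pause_escalate" }; // already revised once
    return { action: "create_revision", revisionId: `${taskId}-R1` };
  }
  if (severity === "MEDIUM") return { action: "proceed_with_warning" };
  return { action: "proceed" }; // LOW -> treat as consensus_reached with notes
}
```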
## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Dynamic roles valid | All task owners exist in session.roles |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
| Fast-advance orphan check | in_progress tasks without active_worker entry -> reset to pending |
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-run team-coordinate |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Dynamic role file not found | Error, cannot proceed without role definition |
| capability_gap from role | WARN only, cannot generate new roles |
| consensus_blocked HIGH | Create revision task (max 1) or pause for user |
| consensus_blocked MEDIUM | Proceed with warning, log to wisdom/issues.md |
# Executor Role

Orchestrate the team-executor workflow: session validation, state reconciliation, worker dispatch, progress monitoring, and session state persistence. This is the sole built-in role -- all worker roles are loaded from the session.
## Identity

- **Name**: `executor` | **Tag**: `[executor]`
- **Responsibility**: Validate session -> Reconcile state -> Create team -> Dispatch tasks -> Monitor progress -> Report results
## Boundaries

### MUST
- Validate session structure before any execution
- Reconcile session state with TaskList on startup
- Reset in_progress tasks to pending (interrupted tasks)
- Detect fast-advance orphans and reset them to pending
- Spawn worker subagents in the background
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)
- Handle capability_gap reports with a warning only (cannot generate roles)

### MUST NOT
- Execute task work directly (delegate to workers)
- Modify task output artifacts (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Generate new roles (use existing session roles only)
- Skip session validation
- Override consensus_blocked HIGH without user confirmation

> **Core principle**: executor is the orchestrator, not the implementer. All actual work is delegated to session-defined worker roles. Unlike the team-coordinate coordinator, executor CANNOT generate new roles.
---

## Entry Router

When executor is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` from session roles | -> handleCallback |
| Status check | Arguments contain "check" or "status" | -> handleCheck |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume |
| Capability gap | Message contains "capability_gap" | -> handleAdapt |
| New execution | None of the above | -> Phase 0 |

For callback/check/resume/adapt: load `commands/monitor.md`, execute the appropriate handler, then STOP.
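The router above can be sketched as a pure function over the raw message and argument strings; the handler names come from this document, everything else is illustrative:

```javascript
// Detect the invocation type from the message content, arguments, and the
// session's role list, in the precedence order of the routing table.
function routeInvocation(message, args, sessionRoles) {
  if (sessionRoles.some((r) => message.includes(`[${r}]`))) return "handleCallback";
  if (message.includes("capability_gap")) return "handleAdapt";
  if (/\b(check|status)\b/.test(args)) return "handleCheck";
  if (/\b(resume|continue)\b/.test(args)) return "handleResume";
  return "phase0"; // new execution
}
```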
---

## Phase 0: Session Validation + State Reconciliation

**Objective**: Validate session structure and reconcile session state with actual task status.

**Workflow**:

### Step 1: Session Validation

Validate session structure (see SKILL.md Session Validation):
- [ ] Directory exists at session path
- [ ] `team-session.json` exists and parses
- [ ] `task-analysis.json` exists and parses
- [ ] `roles/` directory has >= 1 .md file
- [ ] All roles in team-session.json#roles have corresponding .md files

If validation fails -> ERROR with specific reason -> STOP
### Step 2: Load Session State

```javascript
session = Read(<session-folder>/team-session.json)
taskAnalysis = Read(<session-folder>/task-analysis.json)
```
### Step 3: Reconcile with TaskList

```
Call TaskList() -> get real status of all tasks
Compare with session.completed_tasks:
+- Tasks in TaskList.completed but not in session -> add to session.completed_tasks
+- Tasks in session.completed_tasks but not TaskList.completed -> remove from session.completed_tasks (anomaly, log warning)
+- Tasks in TaskList.in_progress -> candidates for reset
```
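A sketch of this reconciliation, assuming TaskList() results have already been reduced to arrays of task IDs per status (illustrative shapes):

```javascript
// Sync session.completed_tasks with the real task states and collect
// in_progress tasks as reset candidates.
function reconcile(sessionCompleted, taskListCompleted, taskListInProgress) {
  const real = new Set(taskListCompleted);
  const session = new Set(sessionCompleted);
  const toAdd = taskListCompleted.filter((id) => !session.has(id));
  const toRemove = sessionCompleted.filter((id) => !real.has(id)); // anomaly, log warning
  return {
    completed: [...session].filter((id) => real.has(id)).concat(toAdd),
    removed: toRemove,
    resetCandidates: [...taskListInProgress], // interrupted -> reset to pending
  };
}
```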
### Step 4: Reset Interrupted Tasks

```
For each task in TaskList.in_progress:
+- Reset to pending via TaskUpdate
+- Log via team_msg (type: warning, summary: "Task <ID> reset from interrupted state")
```
### Step 5: Detect Fast-Advance Orphans

```
For each task in TaskList.in_progress:
+- Check if it has a matching active_worker entry
+- No matching active_worker + created > 5 minutes ago -> orphan
   +- Reset to pending via TaskUpdate
   +- Log via team_msg (type: error, summary: "Fast-advance orphan <ID> reset")
```
### Step 6: Create Missing Tasks (if needed)

```
For each task in task-analysis.json#tasks:
+- Check if it exists in TaskList
+- Not exists -> create via TaskCreate with correct blockedBy
```
### Step 7: Update Session File

```
Write updated team-session.json with:
+- reconciled completed_tasks
+- cleared active_workers (will be rebuilt on spawn)
+- status = "active"
```
### Step 8: Team Setup

```
Check if team exists (via TaskList with team_name filter)
+- Not exists -> TeamCreate with team_name from session
+- Exists -> continue with existing team
```

**Success**: Session validated, state reconciled, team ready -> Phase 1
---

## Phase 1: Spawn-and-Stop

**Objective**: Spawn the first batch of ready workers in the background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern, with worker fast-advance.
- Spawn workers with `Task(run_in_background: true)` -> return immediately
- Worker completes -> may fast-advance to the next task OR SendMessage callback -> auto-advance
- User can use "check" / "resume" to advance manually
- Executor does one operation per invocation, then STOPS

**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task -> spawn a worker (see SKILL.md Executor Spawn Template)
   - Use the Standard Worker template for single-task roles
   - Use the Inner Loop Worker template for multi-task roles
4. Output a status summary with the execution graph
5. STOP

**Pipeline advancement** is driven by three wake sources:
- Worker callback (automatic) -> Entry Router -> handleCallback
- User "check" -> handleCheck (status only)
- User "resume" -> handleResume (advance)
---

## Phase 2: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:
1. Load session state -> count completed tasks, duration
2. List all deliverables with output paths in `<session>/artifacts/`
3. Include discussion summaries (if inline discuss was used)
4. Summarize wisdom accumulated during execution
5. Update session status -> "completed"
6. Offer next steps: exit / view artifacts / extend with additional tasks

**Output format**:

```
[executor] ============================================
[executor] TASK COMPLETE
[executor]
[executor] Deliverables:
[executor] - <artifact-1.md> (<producer role>)
[executor] - <artifact-2.md> (<producer role>)
[executor]
[executor] Pipeline: <completed>/<total> tasks
[executor] Roles: <role-list>
[executor] Duration: <elapsed>
[executor]
[executor] Session: <session-folder>
[executor] ============================================
```
---

## Error Handling

| Error | Resolution |
|-------|------------|
| Session validation fails | ERROR with specific reason, suggest re-run team-coordinate |
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Session corruption | Attempt recovery, fall back to manual reconciliation |
| capability_gap reported | handleAdapt: WARN only, cannot generate new roles |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Executor reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| Role file not found | ERROR, cannot proceed without role definition |
# Session Schema

Required session structure for team-executor. All components MUST exist for valid execution.
## Directory Structure

```
<session-folder>/
+-- team-session.json      # Session state + dynamic role registry (REQUIRED)
+-- task-analysis.json     # Task analysis output: capabilities, dependency graph (REQUIRED)
+-- roles/                 # Dynamic role definitions (REQUIRED, >= 1 .md file)
|   +-- <role-1>.md
|   +-- <role-2>.md
+-- artifacts/             # All MD deliverables from workers
|   +-- <artifact>.md
+-- shared-memory.json     # Cross-role state store
+-- wisdom/                # Cross-task knowledge
|   +-- learnings.md
|   +-- decisions.md
|   +-- issues.md
+-- explorations/          # Shared explore cache
|   +-- cache-index.json
|   +-- explore-<angle>.json
+-- discussions/           # Inline discuss records
|   +-- <round>.md
+-- .msg/                  # Team message bus logs
```
## Validation Checklist

team-executor validates the following before execution:

### Required Components

| Component | Validation | Error Message |
|-----------|------------|---------------|
| `--session` argument | Must be provided | "Session required. Usage: --session=<path-to-TC-folder>" |
| Directory | Must exist at path | "Session directory not found: <path>" |
| `team-session.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: team-session.json missing, corrupt, or missing required fields" |
| `task-analysis.json` | Must exist, parse as JSON, and contain all required fields | "Invalid session: task-analysis.json missing, corrupt, or missing required fields" |
| `roles/` directory | Must exist and contain >= 1 .md file | "Invalid session: no role files in roles/" |
| Role file mapping | Each role in team-session.json#roles must have a .md file | "Role file not found: roles/<role>.md" |
| Role file structure | Each role .md must contain the required headers | "Invalid role file: roles/<role>.md missing required section: <section>" |
### Validation Algorithm

```
1. Parse --session=<path> from arguments
   +- Not provided -> ERROR: "Session required. Usage: --session=<path-to-TC-folder>"

2. Check directory exists
   +- Not exists -> ERROR: "Session directory not found: <path>"

3. Check team-session.json
   +- Not exists -> ERROR: "Invalid session: team-session.json missing"
   +- Parse error -> ERROR: "Invalid session: team-session.json corrupt"
   +- Validate required fields:
      +- session_id (string) -> missing/invalid -> ERROR: "team-session.json missing required field: session_id"
      +- task_description (string) -> missing -> ERROR: "team-session.json missing required field: task_description"
      +- status (string: active|paused|completed) -> missing/invalid -> ERROR: "team-session.json has invalid status"
      +- team_name (string) -> missing -> ERROR: "team-session.json missing required field: team_name"
      +- roles (array) -> missing/empty -> ERROR: "team-session.json missing or empty roles array"

4. Check task-analysis.json
   +- Not exists -> ERROR: "Invalid session: task-analysis.json missing"
   +- Parse error -> ERROR: "Invalid session: task-analysis.json corrupt"
   +- Validate required fields:
      +- capabilities (array) -> missing -> ERROR: "task-analysis.json missing required field: capabilities"
      +- dependency_graph (object) -> missing -> ERROR: "task-analysis.json missing required field: dependency_graph"
      +- roles (array) -> missing/empty -> ERROR: "task-analysis.json missing or empty roles array"
      +- tasks (array) -> missing/empty -> ERROR: "task-analysis.json missing or empty tasks array"

5. Check roles/ directory
   +- Not exists -> ERROR: "Invalid session: roles/ directory missing"
   +- No .md files -> ERROR: "Invalid session: no role files in roles/"

6. Check role file mapping and structure
   +- For each role in team-session.json#roles:
      +- Check roles/<role.name>.md exists
         +- Not exists -> ERROR: "Role file not found: roles/<role.name>.md"
      +- Validate role file structure (see Role File Structure Validation):
         +- Check for required headers: "# Role:", "## Identity", "## Boundaries", "## Execution"
         +- Missing header -> ERROR: "Invalid role file: roles/<role.name>.md missing required section: <section>"

7. All checks pass -> proceed to Phase 0
```
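Steps 3-6 can be sketched over already-parsed JSON objects (filesystem and parse checks omitted); the error strings follow this document, the function shape is illustrative:

```javascript
// Collect all field-level validation errors instead of failing on the first.
function validateSession(session, analysis, roleFiles) {
  const errors = [];
  for (const f of ["session_id", "task_description", "team_name"]) {
    if (typeof session?.[f] !== "string") errors.push(`team-session.json missing required field: ${f}`);
  }
  if (!["active", "paused", "completed"].includes(session?.status)) {
    errors.push("team-session.json has invalid status");
  }
  if (!Array.isArray(session?.roles) || session.roles.length === 0) {
    errors.push("team-session.json missing or empty roles array");
  }
  for (const f of ["capabilities", "dependency_graph", "roles", "tasks"]) {
    if (analysis?.[f] === undefined) errors.push(`task-analysis.json missing required field: ${f}`);
  }
  for (const role of session?.roles ?? []) {
    if (!roleFiles.includes(`roles/${role.name}.md`)) errors.push(`Role file not found: roles/${role.name}.md`);
  }
  return errors;
}
```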
---

## team-session.json Schema

```json
{
  "session_id": "TC-<slug>-<date>",
  "task_description": "<original user input>",
  "status": "active | paused | completed",
  "team_name": "<team-name>",
  "roles": [
    {
      "name": "<role-name>",
      "prefix": "<PREFIX>",
      "responsibility_type": "<type>",
      "inner_loop": false,
      "role_file": "roles/<role-name>.md"
    }
  ],
  "pipeline": {
    "dependency_graph": {},
    "tasks_total": 0,
    "tasks_completed": 0
  },
  "active_workers": [],
  "completed_tasks": [],
  "created_at": "<timestamp>"
}
```
### Required Fields

| Field | Type | Description |
|-------|------|-------------|
| `session_id` | string | Unique session identifier (e.g., "TC-auth-feature-2026-02-27") |
| `task_description` | string | Original task description from the user |
| `status` | string | One of: "active", "paused", "completed" |
| `team_name` | string | Team name for the Task tool |
| `roles` | array | List of role definitions |
| `roles[].name` | string | Role name (must match the .md filename) |
| `roles[].prefix` | string | Task prefix for this role (e.g., "SPEC", "IMPL") |
| `roles[].role_file` | string | Relative path to the role file |
### Optional Fields

| Field | Type | Description |
|-------|------|-------------|
| `pipeline` | object | Pipeline metadata |
| `active_workers` | array | Currently running workers |
| `completed_tasks` | array | List of completed task IDs |
| `created_at` | string | ISO timestamp |
---

## task-analysis.json Schema

```json
{
  "capabilities": [
    {
      "name": "<capability-name>",
      "description": "<description>",
      "artifact_type": "<type>"
    }
  ],
  "dependency_graph": {
    "<task-id>": {
      "depends_on": ["<dependency-task-id>"],
      "role": "<role-name>"
    }
  },
  "roles": [
    {
      "name": "<role-name>",
      "prefix": "<PREFIX>",
      "responsibility_type": "<type>",
      "inner_loop": false
    }
  ],
  "tasks": [
    {
      "id": "<task-id>",
      "subject": "<task-subject>",
      "owner": "<role-name>",
      "blockedBy": ["<dependency-task-id>"]
    }
  ],
  "complexity_score": 0
}
```
### Required Fields

| Field | Type | Description |
|-------|------|-------------|
| `capabilities` | array | Detected capabilities |
| `dependency_graph` | object | Task dependency DAG |
| `roles` | array | Role definitions |
| `tasks` | array | Task definitions |
---

## Role File Schema

Each role file in `roles/<role-name>.md` must follow the structure defined in `team-coordinate/specs/role-template.md`.

### Minimum Required Sections

| Section | Description |
|---------|-------------|
| `# Role: <name>` | Header with role name |
| `## Identity` | Name, tag, prefix, responsibility |
| `## Boundaries` | MUST and MUST NOT rules |
| `## Execution (5-Phase)` | Phase 1-5 workflow |
### Role File Structure Validation

Role files MUST be validated for structure before execution. This catches malformed role files early and provides actionable error messages.

**Required Sections** (must be present in order):

| Section | Pattern | Purpose |
|---------|---------|---------|
| Role Header | `# Role: <name>` | Identifies the role definition |
| Identity | `## Identity` | Defines name, tag, prefix, responsibility |
| Boundaries | `## Boundaries` | Defines MUST and MUST NOT rules |
| Execution | `## Execution` | Defines the 5-Phase workflow |

**Validation Algorithm**:

```
For each role file in roles/<role>.md:
1. Read file content
2. Check for "# Role:" header
   +- Not found -> ERROR: "Invalid role file: roles/<role>.md missing role header"
3. Check for "## Identity" section
   +- Not found -> ERROR: "Invalid role file: roles/<role>.md missing required section: Identity"
4. Check for "## Boundaries" section
   +- Not found -> ERROR: "Invalid role file: roles/<role>.md missing required section: Boundaries"
5. Check for "## Execution" section (or "## Execution (5-Phase)")
   +- Not found -> ERROR: "Invalid role file: roles/<role>.md missing required section: Execution"
6. All checks pass -> role file valid
```

**Benefits**:
- Early detection of malformed role files
- Clear error messages for debugging
- Prevents runtime failures during worker execution
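The header checks above can be sketched as regex tests over the raw markdown:

```javascript
// "## Execution" also matches "## Execution (5-Phase)" via prefix matching.
function validateRoleFile(path, content) {
  const errors = [];
  if (!/^# Role:/m.test(content)) errors.push(`Invalid role file: ${path} missing role header`);
  for (const section of ["Identity", "Boundaries", "Execution"]) {
    if (!new RegExp(`^## ${section}`, "m").test(content)) {
      errors.push(`Invalid role file: ${path} missing required section: ${section}`);
    }
  }
  return errors;
}
```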
---

## Example Valid Session

```
.workflow/.team/TC-auth-feature-2026-02-27/
+-- team-session.json      # Valid JSON with session metadata
+-- task-analysis.json     # Valid JSON with dependency graph
+-- roles/
|   +-- spec-writer.md     # Role file for SPEC-* tasks
|   +-- implementer.md     # Role file for IMPL-* tasks
|   +-- tester.md          # Role file for TEST-* tasks
+-- artifacts/             # (may be empty)
+-- shared-memory.json     # Valid JSON (may be {})
+-- wisdom/
|   +-- learnings.md
|   +-- decisions.md
|   +-- issues.md
+-- explorations/
|   +-- cache-index.json
+-- discussions/           # (may be empty)
+-- .msg/                  # (may be empty)
```
---

## Recovery from Invalid Sessions

If session validation fails:

1. **Missing team-session.json**: Re-run team-coordinate with the original task
2. **Missing task-analysis.json**: Re-run team-coordinate with --resume
3. **Missing role files**: Re-run team-coordinate with --resume
4. **Corrupt JSON**: Manual inspection, or re-run team-coordinate

**team-executor cannot fix invalid sessions** -- it can only report errors and suggest recovery steps.
---
name: team-lifecycle-v3
description: Unified team skill for full lifecycle - spec/impl/test. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team lifecycle".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), TodoWrite(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Lifecycle v3

Unified team skill: specification → implementation → testing → review. All team members invoke it with `--role=xxx` to route to role-specific execution.
## Architecture

```
┌───────────────────────────────────────────────────┐
│ Skill(skill="team-lifecycle-v3")                  │
│ args="<task description>" or args="--role=xxx"    │
└───────────────────┬───────────────────────────────┘
                    │ Role Router
          ┌──── --role present? ────┐
          │ NO                      │ YES
          ↓                         ↓
  Orchestration Mode          Role Dispatch
  (auto → coordinator)        (route to role.md)
                                    │
    ┌────┴────┬───────┬───────┬───────┬───────┬───────┬───────┐
    ↓         ↓       ↓       ↓       ↓       ↓       ↓       ↓
coordinator analyst writer discussant planner executor tester reviewer

       ↑                      ↑
   on-demand            by coordinator
  ┌──────────┐          ┌─────────┐
  │ explorer │          │architect│
  └──────────┘          └─────────┘
  ┌──────────────┐      ┌──────┐
  │ fe-developer │      │fe-qa │
  └──────────────┘      └──────┘
```
## Role Router

### Input Parsing

Parse `$ARGUMENTS` to extract `--role`. If absent → Orchestration Mode (auto-route to coordinator).
### Role Registry

| Role | File | Task Prefix | Type | Compact |
|------|------|-------------|------|---------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | **⚠️ Re-read after compaction** |
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | RESEARCH-* | pipeline | Re-read after compaction |
| writer | [roles/writer/role.md](roles/writer/role.md) | DRAFT-* | pipeline | Re-read after compaction |
| discussant | [roles/discussant/role.md](roles/discussant/role.md) | DISCUSS-* | pipeline | Re-read after compaction |
| planner | [roles/planner/role.md](roles/planner/role.md) | PLAN-* | pipeline | Re-read after compaction |
| executor | [roles/executor/role.md](roles/executor/role.md) | IMPL-* | pipeline | Re-read after compaction |
| tester | [roles/tester/role.md](roles/tester/role.md) | TEST-* | pipeline | Re-read after compaction |
| reviewer | [roles/reviewer/role.md](roles/reviewer/role.md) | REVIEW-* + QUALITY-* | pipeline | Re-read after compaction |
| explorer | [roles/explorer/role.md](roles/explorer/role.md) | EXPLORE-* | service (on-demand) | Re-read after compaction |
| architect | [roles/architect/role.md](roles/architect/role.md) | ARCH-* | consulting (on-demand) | Re-read after compaction |
| fe-developer | [roles/fe-developer/role.md](roles/fe-developer/role.md) | DEV-FE-* | frontend pipeline | Re-read after compaction |
| fe-qa | [roles/fe-qa/role.md](roles/fe-qa/role.md) | QA-FE-* | frontend pipeline | Re-read after compaction |

> **⚠️ COMPACT PROTECTION**: Role files are execution documents, not reference material. When context compression leaves only a summary of the role instructions, you **must immediately `Read` the corresponding role.md to reload it before continuing**. Never execute any Phase based on a summary.
### Dispatch

1. Extract `--role` from the arguments
2. If no `--role` → route to coordinator (Orchestration Mode)
3. Look up the role in the registry → Read the role file → Execute its phases
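The dispatch step can be sketched as follows, with a trimmed, illustrative registry (two entries instead of twelve):

```javascript
// Extract --role from the raw argument string; no --role means Orchestration
// Mode, which routes to the coordinator.
const REGISTRY = {
  coordinator: "roles/coordinator/role.md",
  tester: "roles/tester/role.md",
};

function dispatch(args) {
  const m = /--role=([\w-]+)/.exec(args ?? "");
  const role = m ? m[1] : "coordinator";
  return REGISTRY[role] ?? null; // unknown role -> error upstream
}
```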
### Orchestration Mode

When invoked without `--role`, the coordinator auto-starts. The user just provides a task description.

**Invocation**: `Skill(skill="team-lifecycle-v3", args="<task description>")`

**Lifecycle**:
```
User provides a task description
→ coordinator Phase 1-3: clarify requirements → TeamCreate → create the task chain
→ coordinator Phase 4: spawn the first batch of workers (background) → STOP
→ Worker executes → SendMessage callback → coordinator advances the pipeline
→ Loop until the pipeline completes → Phase 5 report
```

**User Commands** (wake a paused coordinator):

| Command | Action |
|---------|--------|
| `check` / `status` | Print the execution status graph; do not advance |
| `resume` / `continue` | Check worker status and advance to the next step |
---

## Shared Infrastructure

The templates below apply to all worker roles. Each role.md only needs to spell out the role-specific logic of **Phases 2-4**.

### Worker Phase 1: Task Discovery (shared by all workers)

Every worker runs the same task-discovery flow after startup:

1. Call `TaskList()` to fetch all tasks
2. Filter: subject matches this role's prefix + owner is this role + status is pending + blockedBy is empty
3. No task → idle and wait
4. Task found → `TaskGet` for details → `TaskUpdate` to mark it in_progress

**Resume Artifact Check** (prevents duplicate output after resume):
- Check whether this task's output artifacts already exist
- Artifacts complete → skip to Phase 5 and report completion
- Artifacts incomplete or missing → execute Phases 2-4 normally
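The shared discovery filter above can be sketched as follows. This is a minimal sketch: the `Task` interface and `nextTask` helper are illustrative simplifications, not the actual `TaskList()` schema.

```typescript
// Hypothetical task shape; the real TaskList() result may carry more fields.
interface Task {
  subject: string;     // e.g. "IMPL-001"
  owner: string;       // role name
  status: "pending" | "in_progress" | "completed";
  blockedBy: string[]; // subjects of unresolved dependencies
}

// Pick the next claimable task for a role: matching prefix and owner,
// still pending, and with no unresolved blockers.
function nextTask(tasks: Task[], role: string, prefix: string): Task | null {
  return tasks.find(t =>
    t.subject.startsWith(prefix) &&
    t.owner === role &&
    t.status === "pending" &&
    t.blockedBy.length === 0
  ) ?? null;
}
```

A worker would call this once per beat; a `null` result maps to step 3 (idle and wait).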
### Worker Phase 5: Report (shared by all workers)

The standard reporting flow after task completion:

1. **Message Bus**: call `mcp__ccw-tools__team_msg` to log the message
   - Params: operation="log", team=<session-id>, from=<role>, to="coordinator", type=<message type>, summary="[<role>] <summary>", ref=<artifact path>
   - **Note**: `team` must be the **session ID** (e.g. `TLS-project-2026-02-27`), not the team name. Extract it from the `Session:` field of the task description.
   - **CLI fallback**: when MCP is unavailable → `ccw team log --team <session-id> --from <role> --to coordinator --type <type> --summary "[<role>] ..." --json`
2. **SendMessage**: send the result to the coordinator (both content and summary carry the `[<role>]` prefix)
3. **TaskUpdate**: mark the task completed
4. **Loop**: return to Phase 1 and check for the next task
### Wisdom Accumulation (all roles)

Cross-task knowledge accumulation. The coordinator creates the `wisdom/` directory during session initialization.

**Directory**:
```
<session-folder>/wisdom/
├── learnings.md      # Patterns and insights
├── decisions.md      # Architecture and design decisions
├── conventions.md    # Codebase conventions
└── issues.md         # Known risks and issues
```

**Worker load** (Phase 2): extract `Session: <path>` from the task description and read each file under the wisdom directory.
**Worker contribution** (Phase 4/5): write this task's findings into the matching wisdom file.
### Role Isolation Rules

| Allowed | Forbidden |
|---------|-----------|
| Process tasks with your own prefix | Process tasks with another role's prefix |
| SendMessage to the coordinator | Communicate directly with other workers |
| Use the tools declared in your Toolbox | Create tasks for other roles |
| Delegate to commands under commands/ | Modify resources outside your responsibility |

Additional coordinator prohibitions: writing or modifying code directly, invoking implementation subagents, or performing analysis/testing/review itself.
---

## Pipeline Definitions

### Spec-only (12 tasks)

```
RESEARCH-001 → DISCUSS-001 → DRAFT-001 → DISCUSS-002
→ DRAFT-002 → DISCUSS-003 → DRAFT-003 → DISCUSS-004
→ DRAFT-004 → DISCUSS-005 → QUALITY-001 → DISCUSS-006
```

### Impl-only / Backend (4 tasks)

```
PLAN-001 → IMPL-001 → TEST-001 + REVIEW-001
```

### Full-lifecycle (16 tasks)

```
[Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006) → IMPL-001 → TEST-001 + REVIEW-001
```

### Frontend Pipelines

```
FE-only:   PLAN-001 → DEV-FE-001 → QA-FE-001
           (GC loop: QA-FE verdict=NEEDS_FIX → DEV-FE-002 → QA-FE-002, max 2 rounds)

Fullstack: PLAN-001 → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001

Full + FE: [Spec pipeline] → PLAN-001 → IMPL-001 ∥ DEV-FE-001 → TEST-001 ∥ QA-FE-001 → REVIEW-001
```
### Cadence Control

**Beat model**: event-driven; each beat = coordinator wakes → processes → spawns → STOP.

```
Beat Cycle (single beat)
═══════════════════════════════════════════════════════════
 Event               Coordinator               Workers
───────────────────────────────────────────────────────────
 callback/resume ──→ ┌─ handleCallback ──┐
                     │  mark completed   │
                     │  check pipeline   │
                     ├─ handleSpawnNext ─┤
                     │  find ready tasks │
                     │  spawn workers ───┼──→ [Worker A] Phase 1-5
                     │  (parallel OK) ───┼──→ [Worker B] Phase 1-5
                     └─ STOP (idle) ─────┘          │
                                                    │
 callback ←─────────────────────────────────────────┘
 (next beat)         SendMessage + TaskUpdate(completed)
═══════════════════════════════════════════════════════════
```

**Pipeline beat view**:

```
Spec-only (12 beats, strictly serial)
──────────────────────────────────────────────────────────
Beat  1    2    3    4    5    6    7    8    9   10   11   12
      │    │    │    │    │    │    │    │    │    │    │    │
     R1 → D1 → W1 → D2 → W2 → D3 → W3 → D4 → W4 → D5 → Q1 → D6
      ▲                                                      ▲
  pipeline                                               sign-off
   start                                                   pause

R=RESEARCH  D=DISCUSS  W=DRAFT(writer)  Q=QUALITY

Impl-only (3 beats, with one parallel window)
──────────────────────────────────────────────────────────
Beat   1      2        3
       │      │   ┌────┴────┐
     PLAN → IMPL → TEST ∥ REVIEW   ← parallel window
                  └────┬────┘
                    pipeline
                      done

Full-lifecycle (15 beats, checkpoint at the spec→impl transition)
──────────────────────────────────────────────────────────
Beat 1-12: [Spec pipeline as above]
            │
Beat 12 (D6 done): ⏸ CHECKPOINT ── resume after user confirmation
            │
Beat  13     14      15
     PLAN → IMPL → TEST ∥ REVIEW

Fullstack (two parallel windows)
──────────────────────────────────────────────────────────
Beat   1         2             3           4
       │    ┌────┴────┐   ┌────┴────┐      │
     PLAN → IMPL ∥ DEV-FE → TEST ∥ QA-FE → REVIEW
              ▲               ▲            ▲
     parallel window 1  parallel window 2  sync barrier
```
**Checkpoints**:

| Trigger | Location | Behavior |
|---------|----------|----------|
| Spec→Impl transition | After DISCUSS-006 completes | ⏸ Pause and wait for the user to confirm with `resume` |
| GC loop limit | QA-FE max 2 rounds | Rounds exceeded → stop iterating and report the current state |
| Pipeline stall | No ready + no running tasks | Check for missing tasks and report to the user |

**Stall detection** (run during the coordinator's `handleCheck`):

| Check | Condition | Handling |
|-------|-----------|----------|
| Worker unresponsive | in_progress task with no callback | Report the list of waiting tasks and suggest the user `resume` |
| Pipeline deadlock | No ready + no running + pending exists | Inspect the blockedBy dependency chain and report the blockage |
| GC loop exceeded | DEV-FE / QA-FE iterations > max_rounds | Terminate the loop and output the latest QA report |
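The first two stall checks can be sketched as a classifier. This is illustrative only: the task shape and the `hasRecentCallback` signal are assumptions standing in for whatever state the coordinator actually tracks.

```typescript
interface TaskState {
  subject: string;
  status: "pending" | "in_progress" | "completed";
  blockedBy: string[]; // subjects of unfinished dependencies
}

type Stall = "worker-unresponsive" | "pipeline-deadlock" | "none";

// Classify pipeline health from the task list, mirroring the table above.
function detectStall(tasks: TaskState[], hasRecentCallback: boolean): Stall {
  const running = tasks.some(t => t.status === "in_progress");
  const pending = tasks.filter(t => t.status === "pending");
  const ready = pending.some(t => t.blockedBy.length === 0);
  if (running && !hasRecentCallback) return "worker-unresponsive";
  if (!running && !ready && pending.length > 0) return "pipeline-deadlock";
  return "none";
}
```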
### Task Metadata Registry

| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|-------------|-------------|
| RESEARCH-001 | analyst | spec | (none) | Seed analysis and context gathering |
| DISCUSS-001 | discussant | spec | RESEARCH-001 | Critique research findings |
| DRAFT-001 | writer | spec | DISCUSS-001 | Generate Product Brief |
| DISCUSS-002 | discussant | spec | DRAFT-001 | Critique Product Brief |
| DRAFT-002 | writer | spec | DISCUSS-002 | Generate Requirements/PRD |
| DISCUSS-003 | discussant | spec | DRAFT-002 | Critique Requirements/PRD |
| DRAFT-003 | writer | spec | DISCUSS-003 | Generate Architecture Document |
| DISCUSS-004 | discussant | spec | DRAFT-003 | Critique Architecture Document |
| DRAFT-004 | writer | spec | DISCUSS-004 | Generate Epics & Stories |
| DISCUSS-005 | discussant | spec | DRAFT-004 | Critique Epics |
| QUALITY-001 | reviewer | spec | DISCUSS-005 | 5-dimension spec quality validation |
| DISCUSS-006 | discussant | spec | QUALITY-001 | Final review discussion and sign-off |
| PLAN-001 | planner | impl | (none or DISCUSS-006) | Multi-angle exploration and planning |
| IMPL-001 | executor | impl | PLAN-001 | Code implementation |
| TEST-001 | tester | impl | IMPL-001 | Test-fix cycles |
| REVIEW-001 | reviewer | impl | IMPL-001 | 4-dimension code review |
| DEV-FE-001 | fe-developer | impl | PLAN-001 | Frontend implementation |
| QA-FE-001 | fe-qa | impl | DEV-FE-001 | 5-dimension frontend QA |
## Coordinator Spawn Template

When the coordinator spawns workers, use background mode (Spawn-and-Stop):

```
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `You are the <ROLE> of team "<team-name>".

## Prime Directive
All of your work must be executed after loading your role definition via Skill:
Skill(skill="team-lifecycle-v3", args="--role=<role>")

Current requirement: <task-description>
Session: <session-folder>

## Role Rules
- Only process <PREFIX>-* tasks; never do other roles' work
- Prefix all output with the [<role>] tag
- Communicate only with the coordinator
- Never use TaskCreate to create tasks for other roles
- Call mcp__ccw-tools__team_msg to log before every SendMessage

## Workflow
1. Call Skill → obtain the role definition and execution logic
2. Execute the 5-Phase flow per role.md
3. team_msg + SendMessage the result to the coordinator
4. TaskUpdate completed → check for the next task`
})
```
## Session Directory

```
.workflow/.team/TLS-<slug>-<date>/
├── team-session.json          # Session state
├── spec/                      # Spec artifacts
│   ├── spec-config.json
│   ├── discovery-context.json
│   ├── product-brief.md
│   ├── requirements/
│   ├── architecture/
│   ├── epics/
│   ├── readiness-report.md
│   └── spec-summary.md
├── discussions/               # Discussion records
├── plan/                      # Plan artifacts
│   ├── plan.json
│   └── .task/TASK-*.json
├── explorations/              # Explorer output (cached)
├── architecture/              # Architect assessments + design-tokens.json
├── analysis/                  # analyst design-intelligence.json (UI mode)
├── qa/                        # QA audit reports
├── wisdom/                    # Cross-task knowledge
│   ├── learnings.md
│   ├── decisions.md
│   ├── conventions.md
│   └── issues.md
├── .msg/                      # Team message bus logs (messages.jsonl)
└── shared-memory.json         # Cross-role state
```
## Session Resume

The coordinator supports `--resume` / `--continue` for interrupted sessions:

1. Scan `.workflow/.team/TLS-*/team-session.json` for active/paused sessions
2. Multiple matches → AskUserQuestion for selection
3. Audit TaskList → reconcile session state ↔ task status
4. Reset in_progress → pending (interrupted tasks)
5. Rebuild the team and spawn only the needed workers
6. Create missing tasks with correct blockedBy
7. Kick off the first executable task → Phase 4 coordination loop
## Shared Spec Resources

| Resource | Path | Usage |
|----------|------|-------|
| Document Standards | [specs/document-standards.md](specs/document-standards.md) | YAML frontmatter, naming, structure |
| Quality Gates | [specs/quality-gates.md](specs/quality-gates.md) | Per-phase quality gates |
| Product Brief Template | [templates/product-brief.md](templates/product-brief.md) | DRAFT-001 |
| Requirements Template | [templates/requirements-prd.md](templates/requirements-prd.md) | DRAFT-002 |
| Architecture Template | [templates/architecture-doc.md](templates/architecture-doc.md) | DRAFT-003 |
| Epics Template | [templates/epics-template.md](templates/epics-template.md) | DRAFT-004 |
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Orchestration Mode → coordinator |
| Role file not found | Error with expected path |
| Command file not found | Fallback to inline execution |
@@ -1,113 +0,0 @@
# Role: analyst

Seed analysis, codebase exploration, and multi-dimensional context gathering. Maps to spec-generator Phase 1 (Discovery).

## Identity

- **Name**: `analyst` | **Prefix**: `RESEARCH-*` | **Tag**: `[analyst]`
- **Responsibility**: Seed Analysis → Codebase Exploration → Context Packaging → Report

## Boundaries

### MUST
- Only process RESEARCH-* tasks
- Communicate only with coordinator
- Generate discovery-context.json and spec-config.json
- Support file reference input (@ prefix or .md/.txt extension)

### MUST NOT
- Create tasks for other roles
- Directly contact other workers
- Modify spec documents (only create discovery artifacts)
- Skip seed analysis step
## Message Types

| Type | Direction | Trigger |
|------|-----------|---------|
| research_ready | → coordinator | Research complete |
| research_progress | → coordinator | Long research progress update |
| error | → coordinator | Unrecoverable error |

## Toolbox

| Tool | Purpose |
|------|---------|
| ccw cli --tool gemini --mode analysis | Seed analysis |
| mcp__ace-tool__search_context | Codebase semantic search |
---

## Phase 2: Seed Analysis

**Objective**: Extract structured seed information from the topic/idea.

**Workflow**:
1. Extract session folder from task description (`Session: <path>`)
2. Parse topic from task description (first non-metadata line)
3. If topic starts with `@` or ends with `.md`/`.txt` → Read the referenced file as topic content
4. Run Gemini CLI seed analysis:

```
Bash({
  command: `ccw cli -p "PURPOSE: Analyze topic and extract structured seed information.
TASK: • Extract problem statement • Identify target users • Determine domain context
• List constraints and assumptions • Identify 3-5 exploration dimensions • Assess complexity
TOPIC: <topic-content>
MODE: analysis
EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], exploration_dimensions[], complexity_assessment" --tool gemini --mode analysis`,
  run_in_background: true
})
```

5. Wait for the CLI result and parse the seed analysis JSON

**Success**: Seed analysis parsed with problem statement, dimensions, complexity.
---

## Phase 3: Codebase Exploration (conditional)

**Objective**: Gather codebase context if an existing project is detected.

| Condition | Action |
|-----------|--------|
| package.json / Cargo.toml / pyproject.toml / go.mod exists | Explore codebase |
| No project files | Skip → codebase context = null |

**When project detected**:
1. Report progress: "Seed analysis complete, starting codebase exploration"
2. ACE semantic search for architecture patterns related to the topic
3. Detect tech stack from package files
4. Build codebase context: tech_stack, architecture_patterns, conventions, integration_points
---

## Phase 4: Context Packaging

**Objective**: Generate spec-config.json and discovery-context.json.

**spec-config.json** → `<session-folder>/spec/spec-config.json`:
- session_id, topic, status="research_complete", complexity, depth, focus_areas, mode="interactive"

**discovery-context.json** → `<session-folder>/spec/discovery-context.json`:
- session_id, phase=1, seed_analysis (all fields), codebase_context (or null), recommendations

**design-intelligence.json** → `<session-folder>/analysis/design-intelligence.json` (UI mode only):
- Produced when frontend keywords detected (component, page, UI, React, Vue, CSS, 前端) in seed_analysis
- Fields: industry, style_direction, ux_patterns, color_strategy, typography, component_patterns
- Consumed by architect (for design-tokens.json) and fe-developer

**Report**: complexity, codebase presence, problem statement, exploration dimensions, output paths.

**Success**: Both JSON files created; design-intelligence.json created if UI mode.
---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Gemini CLI failure | Fallback to direct Claude analysis |
| Codebase detection failed | Continue as new project |
| Topic too vague | Report with clarification questions |
@@ -1,193 +0,0 @@
# Command: assess

## Purpose

Multi-mode architecture assessment. Auto-detects consultation mode from task subject prefix, runs mode-specific analysis, and produces a verdict with concerns and recommendations.

## Phase 2: Context Loading

### Common Context (all modes)

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description `Session:` field | Yes |
| Wisdom | `<session_folder>/wisdom/` (all files) | No |
| Project tech | `.workflow/project-tech.json` | No |
| Explorations | `<session_folder>/explorations/` | No |

### Mode-Specific Context

| Mode | Task Pattern | Additional Context |
|------|-------------|-------------------|
| spec-review | ARCH-SPEC-* | `spec/architecture/_index.md`, `spec/architecture/ADR-*.md` |
| plan-review | ARCH-PLAN-* | `plan/plan.json`, `plan/.task/TASK-*.json` |
| code-review | ARCH-CODE-* | `git diff --name-only`, changed file contents |
| consult | ARCH-CONSULT-* | Question extracted from task description |
| feasibility | ARCH-FEASIBILITY-* | Proposal from task description, codebase search results |
## Phase 3: Mode-Specific Assessment

### Mode: spec-review

Review architecture documents for technical soundness across 4 dimensions.

**Assessment dimensions**:

| Dimension | Weight | Focus |
|-----------|--------|-------|
| Consistency | 25% | ADR decisions align with each other and with architecture index |
| Scalability | 25% | Design supports growth, no single-point bottlenecks |
| Security | 25% | Auth model, data protection, API security addressed |
| Tech fitness | 25% | Technology choices match project-tech.json and problem domain |

**Checks**:
- Read architecture index and all ADR files
- Cross-reference ADR decisions for contradictions
- Verify tech choices align with project-tech.json
- Score each dimension 0-100
---

### Mode: plan-review

Review implementation plan for architectural soundness.

**Checks**:

| Check | What | Severity if Failed |
|-------|------|-------------------|
| Dependency cycles | Build task graph, detect cycles via DFS | High |
| Task granularity | Flag tasks touching >8 files | Medium |
| Convention compliance | Verify plan follows wisdom/conventions.md | Medium |
| Architecture alignment | Verify plan doesn't contradict wisdom/decisions.md | High |

**Dependency cycle detection flow**:
1. Parse all TASK-*.json files -> extract id and depends_on
2. Build directed graph
3. DFS traversal -> flag any node visited twice in same stack
4. Report cycle path if found
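The flow above is standard stack-based DFS cycle detection. A minimal sketch, assuming the graph has already been built as an `id -> depends_on` adjacency map (file parsing omitted):

```typescript
// Detect a dependency cycle in a task graph given as id -> depends_on list.
// Returns the cycle path if found, else null.
function findCycle(graph: Record<string, string[]>): string[] | null {
  const state = new Map<string, "visiting" | "done">();
  const stack: string[] = [];

  function dfs(node: string): string[] | null {
    state.set(node, "visiting");
    stack.push(node);
    for (const dep of graph[node] ?? []) {
      if (state.get(dep) === "visiting") {
        // Node already on the current DFS stack -> cycle; report its path.
        return [...stack.slice(stack.indexOf(dep)), dep];
      }
      if (!state.has(dep)) {
        const cycle = dfs(dep);
        if (cycle) return cycle;
      }
    }
    stack.pop();
    state.set(node, "done");
    return null;
  }

  for (const id of Object.keys(graph)) {
    if (!state.has(id)) {
      const cycle = dfs(id);
      if (cycle) return cycle;
    }
  }
  return null;
}
```

For example, `{ A: ["B"], B: ["C"], C: ["A"] }` yields the path `A → B → C → A`.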
---

### Mode: code-review

Assess architectural impact of code changes.

**Checks**:

| Check | Method | Severity if Found |
|-------|--------|-------------------|
| Layer violations | Detect upward imports (deeper layer importing shallower) | High |
| New dependencies | Parse package.json diff for added deps | Medium |
| Module boundary changes | Flag index.ts/index.js modifications | Medium |
| Architectural impact | Score based on file count and boundary changes | Info |

**Impact scoring**:

| Condition | Impact Level |
|-----------|-------------|
| Changed files > 10 | High |
| index.ts/index.js or package.json modified | Medium |
| All other cases | Low |
**Detection example** (find changed files):

```bash
Bash(command="git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
```
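Feeding that file list into the impact table can be sketched as follows; `impactLevel` is an illustrative helper, not part of the actual command:

```typescript
type Impact = "High" | "Medium" | "Low";

// Map a changed-file list to the impact level from the scoring table.
function impactLevel(changedFiles: string[]): Impact {
  if (changedFiles.length > 10) return "High";
  const boundaryChange = changedFiles.some(f =>
    /(^|\/)index\.(ts|js)$/.test(f) || /(^|\/)package\.json$/.test(f)
  );
  return boundaryChange ? "Medium" : "Low";
}
```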
---

### Mode: consult

Answer architecture decision questions. Route by question complexity.

**Complexity detection**:

| Condition | Classification |
|-----------|---------------|
| Question > 200 chars OR matches: architect, design, pattern, refactor, migrate, scalab | Complex |
| All other questions | Simple |

**Complex questions** -> delegate to CLI exploration:

```bash
Bash(command="ccw cli -p \"PURPOSE: Architecture consultation for: <question_summary>
TASK: Search codebase for relevant patterns, analyze architectural implications, provide options with trade-offs
MODE: analysis
CONTEXT: @**/*
EXPECTED: Options with trade-offs, file references, architectural implications
CONSTRAINTS: Advisory only, provide options not decisions\" --tool gemini --mode analysis --rule analysis-review-architecture", timeout=300000)
```

**Simple questions** -> direct analysis using available context (wisdom, project-tech, codebase search).
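The routing rule above can be sketched as a predicate. The keyword list is copied from the detection table; case-insensitive matching is an assumption:

```typescript
// Keyword fragments from the complexity-detection table; "scalab" catches
// both "scalable" and "scalability".
const COMPLEX_KEYWORDS = /architect|design|pattern|refactor|migrate|scalab/i;

// Route a consult question: long or keyword-matching questions go to CLI
// exploration; everything else gets direct analysis.
function isComplexQuestion(question: string): boolean {
  return question.length > 200 || COMPLEX_KEYWORDS.test(question);
}
```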
---

### Mode: feasibility

Evaluate technical feasibility of a proposal.

**Assessment areas**:

| Area | Method | Output |
|------|--------|--------|
| Tech stack compatibility | Compare proposal needs against project-tech.json | Compatible / Requires additions |
| Codebase readiness | Search for integration points using Grep/Glob | Touch-point count |
| Effort estimation | Based on touch-point count (see table below) | Low / Medium / High |
| Risk assessment | Based on effort + tech compatibility | Risks + mitigations |

**Effort estimation**:

| Touch Points | Effort | Implication |
|-------------|--------|-------------|
| <= 5 | Low | Straightforward implementation |
| 6 - 20 | Medium | Moderate refactoring needed |
| > 20 | High | Significant refactoring, consider phasing |
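The thresholds above as a function, with band boundaries taken directly from the table:

```typescript
type Effort = "Low" | "Medium" | "High";

// Map an integration touch-point count to the effort band from the table.
function effortFor(touchPoints: number): Effort {
  if (touchPoints <= 5) return "Low";
  if (touchPoints <= 20) return "Medium";
  return "High";
}
```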
**Verdict for feasibility**:

| Condition | Verdict |
|-----------|---------|
| Low/medium effort, compatible stack | FEASIBLE |
| High touch-points OR new tech required | RISKY |
| Fundamental incompatibility or unreasonable effort | INFEASIBLE |

---

### Verdict Routing (all modes except feasibility)

| Verdict | Criteria |
|---------|----------|
| BLOCK | >= 2 high-severity concerns |
| CONCERN | >= 1 high-severity OR >= 3 medium-severity concerns |
| APPROVE | All other cases |
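The routing can be sketched from severity counts; the concern shape is an assumed simplification of the real assessment output:

```typescript
type Verdict = "APPROVE" | "CONCERN" | "BLOCK";
type Severity = "high" | "medium" | "low";

// Derive the verdict from concern severities, mirroring the routing table.
function verdictFor(concerns: { severity: Severity }[]): Verdict {
  const high = concerns.filter(c => c.severity === "high").length;
  const medium = concerns.filter(c => c.severity === "medium").length;
  if (high >= 2) return "BLOCK";
  if (high >= 1 || medium >= 3) return "CONCERN";
  return "APPROVE";
}
```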
## Phase 4: Validation

### Output Format

Write assessment to `<session_folder>/architecture/arch-<slug>.json`.

**Report content sent to coordinator**:

| Field | Description |
|-------|-------------|
| mode | Consultation mode used |
| verdict | APPROVE / CONCERN / BLOCK (or FEASIBLE / RISKY / INFEASIBLE) |
| concern_count | Number of concerns by severity |
| recommendations | Actionable suggestions with trade-offs |
| output_path | Path to full assessment file |

**Wisdom contribution**: Append significant decisions to `<session_folder>/wisdom/decisions.md`.

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Architecture docs not found | Assess from available context, note limitation in report |
| Plan file missing | Report to coordinator via arch_concern |
| Git diff fails (no commits) | Use staged changes or skip code-review mode |
| CLI exploration timeout | Provide partial assessment, flag as incomplete |
| Exploration results unparseable | Fall back to direct analysis without exploration |
| Insufficient context | Request explorer assistance via coordinator |
@@ -1,103 +0,0 @@
# Role: architect

Architecture consultant. Advises on decisions, feasibility, and design patterns.

## Identity

- **Name**: `architect` | **Prefix**: `ARCH-*` | **Tag**: `[architect]`
- **Type**: Consulting (on-demand, advisory only)
- **Responsibility**: Context loading → Mode detection → Analysis → Report

## Boundaries

### MUST
- Only process ARCH-* tasks
- Auto-detect mode from task subject prefix
- Provide options with trade-offs (not final decisions)

### MUST NOT
- Modify source code
- Make final decisions (advisory only)
- Execute implementation or testing

## Message Types

| Type | Direction | Trigger |
|------|-----------|---------|
| arch_ready | → coordinator | Assessment complete |
| arch_concern | → coordinator | Significant risk found |
| error | → coordinator | Analysis failure |
## Toolbox

| Tool | Purpose |
|------|---------|
| commands/assess.md | Multi-mode assessment |
| cli-explore-agent | Deep architecture exploration |
| ccw cli --tool gemini --mode analysis | Architecture analysis |

---

## Consultation Modes

| Task Pattern | Mode | Focus |
|-------------|------|-------|
| ARCH-SPEC-* | spec-review | Review architecture docs |
| ARCH-PLAN-* | plan-review | Review plan soundness |
| ARCH-CODE-* | code-review | Assess code change impact |
| ARCH-CONSULT-* | consult | Answer architecture questions |
| ARCH-FEASIBILITY-* | feasibility | Technical feasibility |
---

## Phase 2: Context Loading

**Common**: session folder, wisdom, project-tech.json, explorations

**Mode-specific**:

| Mode | Additional Context |
|------|-------------------|
| spec-review | architecture/_index.md, ADR-*.md |
| plan-review | plan/plan.json |
| code-review | git diff, changed files |
| consult | Question from task description |
| feasibility | Requirements + codebase |

---

## Phase 3: Assessment

Delegate to `commands/assess.md`. Output: mode, verdict (APPROVE/CONCERN/BLOCK), dimensions[], concerns[], recommendations[].

For complex questions → Gemini CLI with architecture review rule.

---

## Phase 4: Report

Output to `<session-folder>/architecture/arch-<slug>.json`. Contribute decisions to wisdom/decisions.md.

**Frontend project outputs** (when frontend tech stack detected in shared-memory or discovery-context):
- `<session-folder>/architecture/design-tokens.json` — color, spacing, typography, shadow tokens
- `<session-folder>/architecture/component-specs/*.md` — per-component design spec

**Report**: mode, verdict, concern count, recommendations, output path(s).
---

## Coordinator Integration

| Timing | Task |
|--------|------|
| After DRAFT-003 | ARCH-SPEC-001: Architecture document review |
| After PLAN-001 | ARCH-PLAN-001: Plan architecture review |
| On-demand | ARCH-CONSULT-001: Architecture consultation |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Docs not found | Assess from available context |
| CLI timeout | Partial assessment |
| Insufficient context | Request explorer via coordinator |
@@ -1,142 +0,0 @@
# Command: dispatch

## Purpose

Create task chains based on execution mode. Each mode maps to a predefined pipeline from the SKILL.md Task Metadata Registry. Tasks are created with proper dependency chains, owner assignments, and session references.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Mode | Phase 1 requirements (spec-only, impl-only, etc.) | Yes |
| Session folder | `<session-folder>` from Phase 2 | Yes |
| Scope | User requirements description | Yes |
| Spec file | User-provided path (impl-only mode only) | Conditional |

## Phase 3: Task Chain Creation

### Mode-to-Pipeline Routing

| Mode | Tasks | Pipeline | First Task |
|------|-------|----------|------------|
| spec-only | 12 | Spec pipeline | RESEARCH-001 |
| impl-only | 4 | Impl pipeline | PLAN-001 |
| fe-only | 3 | FE pipeline | PLAN-001 |
| fullstack | 6 | Fullstack pipeline | PLAN-001 |
| full-lifecycle | 16 | Spec + Impl | RESEARCH-001 |
| full-lifecycle-fe | 18 | Spec + Fullstack | RESEARCH-001 |
---

### Spec Pipeline (12 tasks)

Used by: spec-only, full-lifecycle, full-lifecycle-fe

| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | RESEARCH-001 | analyst | (none) | Seed analysis and context gathering |
| 2 | DISCUSS-001 | discussant | RESEARCH-001 | Critique research findings |
| 3 | DRAFT-001 | writer | DISCUSS-001 | Generate Product Brief |
| 4 | DISCUSS-002 | discussant | DRAFT-001 | Critique Product Brief |
| 5 | DRAFT-002 | writer | DISCUSS-002 | Generate Requirements/PRD |
| 6 | DISCUSS-003 | discussant | DRAFT-002 | Critique Requirements/PRD |
| 7 | DRAFT-003 | writer | DISCUSS-003 | Generate Architecture Document |
| 8 | DISCUSS-004 | discussant | DRAFT-003 | Critique Architecture Document |
| 9 | DRAFT-004 | writer | DISCUSS-004 | Generate Epics |
| 10 | DISCUSS-005 | discussant | DRAFT-004 | Critique Epics |
| 11 | QUALITY-001 | reviewer | DISCUSS-005 | 5-dimension spec quality validation |
| 12 | DISCUSS-006 | discussant | QUALITY-001 | Final review discussion and sign-off |
### Impl Pipeline (4 tasks)

Used by: impl-only, full-lifecycle (PLAN-001 blockedBy DISCUSS-006)

| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Multi-angle exploration and planning |
| 2 | IMPL-001 | executor | PLAN-001 | Code implementation |
| 3 | TEST-001 | tester | IMPL-001 | Test-fix cycles |
| 4 | REVIEW-001 | reviewer | IMPL-001 | 4-dimension code review |

### FE Pipeline (3 tasks)

Used by: fe-only

| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Planning (frontend focus) |
| 2 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation |
| 3 | QA-FE-001 | fe-qa | DEV-FE-001 | 5-dimension frontend QA |

GC loop (max 2 rounds): QA-FE verdict=NEEDS_FIX → create DEV-FE-002 + QA-FE-002 dynamically.
### Fullstack Pipeline (6 tasks)

Used by: fullstack, full-lifecycle-fe (PLAN-001 blockedBy DISCUSS-006)

| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Fullstack planning |
| 2 | IMPL-001 | executor | PLAN-001 | Backend implementation |
| 3 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation |
| 4 | TEST-001 | tester | IMPL-001 | Backend test-fix cycles |
| 5 | QA-FE-001 | fe-qa | DEV-FE-001 | Frontend QA |
| 6 | REVIEW-001 | reviewer | TEST-001, QA-FE-001 | Full code review |

### Composite Modes

| Mode | Construction | PLAN-001 BlockedBy |
|------|-------------|-------------------|
| full-lifecycle | Spec (12) + Impl (4) | DISCUSS-006 |
| full-lifecycle-fe | Spec (12) + Fullstack (6) | DISCUSS-006 |
---

### Impl-Only Pre-check

Before creating impl-only tasks, verify a specification exists:

```
Spec exists?
├─ YES → read spec path → proceed with task creation
└─ NO → error: "impl-only requires existing spec, use spec-only or full-lifecycle"
```

### Task Description Template

Every task description includes session and scope context:

```
TaskCreate({
  subject: "<TASK-ID>",
  owner: "<role>",
  description: "<task description from pipeline table>\nSession: <session-folder>\nScope: <scope>",
  blockedBy: [<dependency-list>],
  status: "pending"
})
```
|
||||
|
||||
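A minimal sketch of how the dispatch loop could walk a pipeline table, validate every `blockedBy` reference, and skip duplicate subjects. The `dispatch` function and its in-memory task dicts are illustrative stand-ins for the real TaskCreate tool calls, not the actual implementation:

```python
# Sketch: build the impl-only task chain from the pipeline table above.
# (subject, owner, blockedBy) triples mirror the table rows.
IMPL_PIPELINE = [
    ("PLAN-001", "planner", []),
    ("IMPL-001", "executor", ["PLAN-001"]),
    ("TEST-001", "tester", ["IMPL-001"]),
    ("REVIEW-001", "reviewer", ["IMPL-001"]),
]

def dispatch(pipeline, session_folder, scope):
    created, tasks = set(), []
    for subject, owner, blocked_by in pipeline:
        # Validation rule: every blockedBy must reference an existing subject.
        missing = [dep for dep in blocked_by if dep not in created]
        if missing:
            raise ValueError(f"{subject}: unknown dependencies {missing}")
        if subject in created:
            continue  # duplicate task subject: skip creation, log warning
        tasks.append({
            "subject": subject,
            "owner": owner,
            "description": f"...\nSession: {session_folder}\nScope: {scope}",
            "blockedBy": blocked_by,
            "status": "pending",
        })
        created.add(subject)
    return tasks

chain = dispatch(IMPL_PIPELINE, ".workflow/.team/TLS-demo-2025-01-01", "demo")
```

Ordering the table so that dependencies come first makes the missing-reference check double as cycle prevention.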
### Execution Method

| Method | Behavior |
|--------|----------|
| sequential | One task active at a time; next activated after predecessor completes |
| parallel | Tasks with all deps met run concurrently (e.g., TEST-001 + REVIEW-001) |

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Task count | Matches mode total from routing table |
| Dependencies | Every blockedBy references an existing task subject |
| Owner assignment | Each task owner matches SKILL.md Role Registry prefix |
| Session reference | Every task description contains `Session: <session-folder>` |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown mode | Reject with supported mode list |
| Missing spec for impl-only | Error, suggest spec-only or full-lifecycle |
| TaskCreate fails | Log error, report to user |
| Duplicate task subject | Skip creation, log warning |

@@ -1,180 +0,0 @@
# Command: monitor

## Purpose

Event-driven pipeline coordination with the Spawn-and-Stop pattern. Three wake-up sources drive pipeline advancement: worker callbacks (auto-advance), user `check` (status report), user `resume` (manual advance).

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Pipeline mode | session.mode | Yes |

## Phase 3: Handler Routing

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine the handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from known worker role | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | None of the above (initial spawn after dispatch) | handleSpawnNext |

Known worker roles: analyst, writer, discussant, planner, executor, tester, reviewer, explorer, architect, fe-developer, fe-qa.
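The detection priorities above can be sketched as a small router. The function name `route` and the regex used for role tags are illustrative assumptions; only the priority order and keywords come from the table:

```python
# Sketch: priority-ordered wake-up source detection over $ARGUMENTS.
import re

KNOWN_ROLES = ["analyst", "writer", "discussant", "planner", "executor",
               "tester", "reviewer", "explorer", "architect",
               "fe-developer", "fe-qa"]

def route(arguments: str) -> str:
    text = arguments.lower()
    # Priority 1: a [<role-name>] tag from a known worker role
    for tag in re.findall(r"\[([a-z-]+)\]", text):
        if tag in KNOWN_ROLES:
            return "handleCallback"
    if "check" in text or "status" in text:                     # priority 2
        return "handleCheck"
    if any(k in text for k in ("resume", "continue", "next")):  # priority 3
        return "handleResume"
    return "handleSpawnNext"        # priority 4: initial spawn after dispatch
```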
---

### Handler: handleCallback

Worker completed a task. Verify completion, update state, auto-advance.

```
Receive callback from [<role>]
├─ Find matching active worker by role
├─ Task status = completed?
│  ├─ YES → remove from active_workers → update session
│  │  ├─ Handle checkpoints (see below)
│  │  └─ → handleSpawnNext
│  └─ NO → progress message, do not advance → STOP
└─ No matching worker found
   ├─ Scan all active workers for completed tasks
   ├─ Found completed → process each → handleSpawnNext
   └─ None completed → STOP
```

---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[coordinator] Pipeline Status
[coordinator] Mode: <mode> | Progress: <completed>/<total> (<percent>%)

[coordinator] Execution Graph:
  Spec Phase: (if applicable)
    [<icon> RESEARCH-001] → [<icon> DISCUSS-001] → ...
  Impl Phase: (if applicable)
    [<icon> PLAN-001]
    ├─ BE: [<icon> IMPL-001] → [<icon> TEST-001] → [<icon> REVIEW-001]
    └─ FE: [<icon> DEV-FE-001] → [<icon> QA-FE-001]

  done=completed  >>>=running  o=pending  .=not created

[coordinator] Active Workers:
  > <subject> (<role>) - running <elapsed>

[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

**Icon mapping**: completed=done, in_progress=>>>, pending=o, not created=.

Then STOP.
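The icon mapping and progress line can be sketched as below; the helper names are hypothetical, and only the icon table and the `[coordinator]` line format come from the output spec above:

```python
# Sketch: render the handleCheck status icon and progress line.
ICON = {"completed": "done", "in_progress": ">>>", "pending": "o"}

def icon(status):
    return ICON.get(status, ".")  # "." = task not created yet

def progress_line(mode, tasks):
    done = sum(1 for t in tasks if t["status"] == "completed")
    total = len(tasks)
    pct = round(100 * done / total) if total else 0
    return f"[coordinator] Mode: {mode} | Progress: {done}/{total} ({pct}%)"
```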
---

### Handler: handleResume

Check active worker completion, process results, advance pipeline.

```
Load active_workers from session
├─ No active workers → handleSpawnNext
└─ Has active workers → check each:
   ├─ status = completed → mark done, log
   ├─ status = in_progress → still running, log
   └─ other status → worker failure → reset to pending
After processing:
├─ Some completed → handleSpawnNext
├─ All still running → report status → STOP
└─ All failed → handleSpawnNext (retry)
```

---

### Handler: handleSpawnNext

Find all ready tasks, spawn workers in background, update session, STOP.

```
Collect task states from TaskList()
├─ completedSubjects: status = completed
├─ inProgressSubjects: status = in_progress
└─ readySubjects: pending + all blockedBy in completedSubjects

Ready tasks found?
├─ NONE + work in progress → report waiting → STOP
├─ NONE + nothing in progress → PIPELINE_COMPLETE → Phase 5
└─ HAS ready tasks → for each:
   ├─ TaskUpdate → in_progress
   ├─ team_msg log → task_unblocked
   ├─ Spawn worker (see tool call below)
   └─ Add to session.active_workers
Update session file → output summary → STOP
```
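The state classification at the top of handleSpawnNext can be sketched as follows; the `classify` helper and the task-dict shape are assumptions for illustration:

```python
# Sketch: classify task states the way handleSpawnNext does, and decide
# whether to spawn, wait, or declare the pipeline complete.
def classify(tasks):
    completed = {t["subject"] for t in tasks if t["status"] == "completed"}
    in_progress = [t["subject"] for t in tasks if t["status"] == "in_progress"]
    # Ready = pending AND every blockedBy subject already completed.
    ready = [t["subject"] for t in tasks
             if t["status"] == "pending"
             and all(dep in completed for dep in t.get("blockedBy", []))]
    if not ready and not in_progress:
        return completed, in_progress, ready, "PIPELINE_COMPLETE"
    return completed, in_progress, ready, ("SPAWN" if ready else "WAIT")
```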
**Spawn worker tool call** (one per ready task):

```bash
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker for <subject>",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: "<worker prompt from SKILL.md Coordinator Spawn Template>"
})
```

---

### Checkpoints

| Completed Task | Mode Condition | Action |
|---------------|----------------|--------|
| DISCUSS-006 | full-lifecycle or full-lifecycle-fe | Output "SPEC PHASE COMPLETE" checkpoint, pause for user review before impl |

---

### Worker Failure Handling

When a worker has an unexpected status (not completed, not in_progress):

1. Reset task → pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Pipeline completeness | All expected tasks exist per mode |
| Completion detection | readySubjects=0 + inProgressSubjects=0 → PIPELINE_COMPLETE |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
@@ -1,209 +0,0 @@
# Coordinator Role

Orchestrate the team-lifecycle workflow: team creation, task dispatching, progress monitoring, session state.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Parse requirements → Create team → Dispatch tasks → Monitor progress → Report results

## Boundaries

### MUST
- Parse user requirements and clarify ambiguous inputs via AskUserQuestion
- Create team and spawn worker subagents in background
- Dispatch tasks with proper dependency chains (see SKILL.md Task Metadata Registry)
- Monitor progress via worker callbacks and route messages
- Maintain session state persistence (team-session.json)

### MUST NOT
- Execute spec/impl/research work directly (delegate to workers)
- Modify task outputs (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Skip dependency validation when creating task chains

---

## Command Execution Protocol

When coordinator needs to execute a command (dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** - NOT separate agents or subprocesses
4. **Execute synchronously** - complete the command workflow before proceeding

Example:
```
Phase 3 needs task dispatch
-> Read roles/coordinator/commands/dispatch.md
-> Execute Phase 2 (Context Loading)
-> Execute Phase 3 (Task Chain Creation)
-> Execute Phase 4 (Validation)
-> Continue to Phase 4
```

---

## Entry Router

When coordinator is invoked, first detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[role-name]` tag from a known worker role | → handleCallback: auto-advance pipeline |
| Status check | Arguments contain "check" or "status" | → handleCheck: output execution graph, no advancement |
| Manual resume | Arguments contain "resume" or "continue" | → handleResume: check worker states, advance pipeline |
| Interrupted session | Active/paused session exists in `.workflow/.team/TLS-*` | → Phase 0 (Session Resume Check) |
| New session | None of the above | → Phase 1 (Requirement Clarification) |

For callback/check/resume: load `commands/monitor.md` and execute the appropriate handler, then STOP.

### Router Implementation

1. **Load session context** (if exists):
   - Scan `.workflow/.team/TLS-*/team-session.json` for active/paused sessions
   - If found, extract known worker roles from session or SKILL.md Role Registry

2. **Parse $ARGUMENTS** for detection keywords

3. **Route to handler**:
   - For monitor handlers: Read `commands/monitor.md`, execute matched handler section, STOP
   - For Phase 0: Execute Session Resume Check below
   - For Phase 1: Execute Requirement Clarification below

---

## Phase 0: Session Resume Check

**Objective**: Detect and resume interrupted sessions before creating new ones.

**Workflow**:
1. Scan `.workflow/.team/TLS-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found → proceed to Phase 1
3. Single session found → resume it (→ Session Reconciliation)
4. Multiple sessions → AskUserQuestion for user selection

**Session Reconciliation**:
1. Audit TaskList → get real status of all tasks
2. Reconcile: session.completed_tasks ↔ TaskList status (bidirectional sync)
3. Reset any in_progress tasks → pending (they were interrupted)
4. Determine remaining pipeline from reconciled state
5. Rebuild team if disbanded (TeamCreate + spawn needed workers only)
6. Create missing tasks with correct blockedBy dependencies
7. Verify dependency chain integrity
8. Update session file with reconciled state
9. Kick first executable task's worker → Phase 4
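The reconciliation core (steps 1-3) can be sketched as below. The `reconcile` helper and the dict shapes are illustrative assumptions; the real flow works through TaskList and TaskUpdate tool calls:

```python
# Sketch: sync the session record with the real task list and reset
# interrupted work, per the Session Reconciliation steps.
def reconcile(session, tasks):
    by_subject = {t["subject"]: t for t in tasks}
    # Bidirectional sync: take real completions from the task list...
    completed = {s for s, t in by_subject.items() if t["status"] == "completed"}
    # ...and keep completions the session recorded that the list lost.
    completed |= set(session.get("completed_tasks", []))
    for t in tasks:
        if t["subject"] in completed:
            t["status"] = "completed"
        elif t["status"] == "in_progress":
            t["status"] = "pending"  # interrupted work gets retried
    session["completed_tasks"] = sorted(completed)
    session["tasks_completed"] = len(completed)
    return session
```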
---

## Phase 1: Requirement Clarification

**Objective**: Parse user input and gather execution parameters.

**Workflow**:

1. **Parse arguments** for explicit settings: mode, scope, focus areas, depth

2. **Ask for missing parameters** via AskUserQuestion:
   - Mode: spec-only / impl-only / full-lifecycle / fe-only / fullstack / full-lifecycle-fe
   - Scope: project description
   - Execution method: sequential / parallel

3. **Frontend auto-detection** (for impl-only and full-lifecycle modes):

   | Signal | Detection | Pipeline Upgrade |
   |--------|----------|-----------------|
   | FE keywords (component, page, UI, React, Vue, CSS...) | Keyword match in description | impl-only → fe-only or fullstack |
   | BE keywords also present (API, database, server...) | Both FE + BE keywords | impl-only → fullstack |
   | FE framework in package.json | grep react/vue/svelte/next | full-lifecycle → full-lifecycle-fe |

4. **Store requirements**: mode, scope, focus, depth, executionMethod

**Success**: All parameters captured, mode finalized.
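The keyword-based upgrade from the auto-detection step can be sketched as a small function. The keyword sets shown are illustrative samples, not the canonical lists:

```python
# Sketch: frontend auto-detection pipeline upgrade for impl-only mode.
FE_KEYWORDS = {"component", "page", "ui", "react", "vue", "css"}
BE_KEYWORDS = {"api", "database", "server"}

def upgrade_mode(mode, description):
    words = set(description.lower().split())
    has_fe = bool(words & FE_KEYWORDS)
    has_be = bool(words & BE_KEYWORDS)
    if mode == "impl-only" and has_fe:
        # Both FE + BE keywords → fullstack; FE only → fe-only.
        return "fullstack" if has_be else "fe-only"
    return mode
```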
---

## Phase 2: Create Team + Initialize Session

**Objective**: Initialize team, session file, and wisdom directory.

**Workflow**:
1. Generate session ID: `TLS-<slug>-<date>`
2. Create session folder: `.workflow/.team/<session-id>/`
3. Call TeamCreate with team name
4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md)
5. Write team-session.json with: session_id, mode, scope, status="active", tasks_total, tasks_completed=0

**Task counts by mode**:

| Mode | Tasks |
|------|-------|
| spec-only | 12 |
| impl-only | 4 |
| fe-only | 3 |
| fullstack | 6 |
| full-lifecycle | 16 |
| full-lifecycle-fe | 18 |

**Success**: Team created, session file written, wisdom initialized.
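Session initialization can be sketched as below. The slugging rule is an assumption; only the `TLS-<slug>-<date>` shape, the field names, and the task counts come from the workflow above:

```python
# Sketch: session ID generation and the initial team-session.json payload.
import datetime
import re

TASK_COUNTS = {"spec-only": 12, "impl-only": 4, "fe-only": 3,
               "fullstack": 6, "full-lifecycle": 16, "full-lifecycle-fe": 18}

def init_session(scope, mode):
    # Hypothetical slug rule: lowercase, non-alphanumerics collapsed to "-".
    slug = re.sub(r"[^a-z0-9]+", "-", scope.lower()).strip("-")[:30]
    date = datetime.date.today().isoformat()
    return {
        "session_id": f"TLS-{slug}-{date}",
        "mode": mode,
        "scope": scope,
        "status": "active",
        "tasks_total": TASK_COUNTS[mode],
        "tasks_completed": 0,
    }
```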
---

## Phase 3: Create Task Chain

**Objective**: Dispatch tasks based on mode with proper dependencies.

Delegate to `commands/dispatch.md`, which creates the full task chain:
1. Reads SKILL.md Task Metadata Registry for task definitions
2. Creates tasks via TaskCreate with correct blockedBy
3. Assigns owner based on role mapping
4. Includes `Session: <session-folder>` in every task description

---

## Phase 4: Spawn-and-Stop

**Objective**: Spawn first batch of ready workers in background, then STOP.

**Design**: Spawn-and-Stop + Callback pattern.
- Spawn workers with `Task(run_in_background: true)` → immediately return
- Worker completes → SendMessage callback → auto-advance
- User can use "check" / "resume" to manually advance
- Coordinator does one operation per invocation, then STOPS

**Workflow**:
1. Load `commands/monitor.md`
2. Find tasks with: status=pending, blockedBy all resolved, owner assigned
3. For each ready task → spawn worker (see SKILL.md Spawn Template)
4. Output status summary
5. STOP

**Pipeline advancement** driven by three wake sources:
- Worker callback (automatic) → Entry Router → handleCallback
- User "check" → handleCheck (status only)
- User "resume" → handleResume (advance)

---

## Phase 5: Report + Next Steps

**Objective**: Completion report and follow-up options.

**Workflow**:
1. Load session state → count completed tasks, duration
2. List deliverables with output paths
3. Update session status → "completed"
4. Offer next steps: exit / view deliverables / extend tasks / generate lite-plan / create Issue

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Task timeout | Log, mark failed, ask user to retry or skip |
| Worker crash | Respawn worker, reassign task |
| Dependency cycle | Detect, report to user, halt |
| Invalid mode | Reject with error, ask to clarify |
| Session corruption | Attempt recovery, fallback to manual reconciliation |
@@ -1,136 +0,0 @@
# Command: critique

## Purpose

Multi-perspective CLI critique: launch parallel analyses from assigned perspectives, collect structured ratings, detect divergences, and synthesize consensus.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Round config | DISCUSS-NNN → look up round in role.md table | Yes |
| Artifact | `<session-folder>/<artifact-path>` from round config | Yes |
| Perspectives | Round config perspectives column | Yes |
| Discovery context | `<session-folder>/spec/discovery-context.json` | For coverage perspective |
| Prior discussions | `<session-folder>/discussions/` | No |

## Phase 3: Multi-Perspective Critique

### Perspective Routing

| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Product | gemini | Product Manager | Market fit, user value, business viability, competitive differentiation |
| Technical | codex | Tech Lead | Feasibility, tech debt, performance, security, maintainability |
| Quality | claude | QA Lead | Completeness, testability, consistency, standards compliance |
| Risk | gemini | Risk Analyst | Risk identification, dependencies, failure modes, mitigation |
| Coverage | gemini | Requirements Analyst | Requirement completeness vs discovery-context, gap detection, scope creep |

### Execution Flow

```
For each perspective in round config:
├─ Build prompt with perspective focus + artifact content
├─ Launch CLI analysis (background)
│  Bash(command="ccw cli -p '<prompt>' --tool <cli-tool> --mode analysis", run_in_background=true)
└─ Collect result via hook callback
```

### CLI Call Template

```bash
Bash(command="ccw cli -p 'PURPOSE: Analyze from <role> perspective for <round-id>
TASK: <focus-areas-from-table>
MODE: analysis
CONTEXT: Artifact content below
EXPECTED: JSON with strengths[], weaknesses[], suggestions[], rating (1-5)
CONSTRAINTS: Output valid JSON only

Artifact:
<artifact-content>' --tool <cli-tool> --mode analysis", run_in_background=true)
```

### Extra Fields by Perspective

| Perspective | Additional Output Fields |
|-------------|------------------------|
| Risk | `risk_level`: low / medium / high / critical |
| Coverage | `covered_requirements[]`, `partial_requirements[]`, `missing_requirements[]`, `scope_creep[]` |

---

### Divergence Detection

After all perspectives return, scan results for critical signals:

| Signal | Condition | Severity |
|--------|-----------|----------|
| Coverage gap | `missing_requirements` non-empty | High |
| High risk | `risk_level` is high or critical | High |
| Low rating | Any perspective rating <= 2 | Medium |
| Rating spread | Max rating - min rating >= 3 | Medium |

### Consensus Determination

| Condition | Verdict |
|-----------|---------|
| No high-severity divergences AND average rating >= 3.0 | consensus_reached |
| Any high-severity divergence OR average rating < 3.0 | consensus_blocked |
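The high-severity signals and the consensus rule combine into a small verdict function. The `consensus` name and the result-dict shape are illustrative; only the thresholds come from the tables above:

```python
# Sketch: divergence detection + consensus verdict over perspective results.
def consensus(results):
    ratings = [r["rating"] for r in results]
    high = False
    for r in results:
        if r.get("missing_requirements"):               # coverage gap → High
            high = True
        if r.get("risk_level") in ("high", "critical"): # high risk → High
            high = True
    avg = sum(ratings) / len(ratings)
    # Low rating (<=2) and wide rating spread (>=3) are Medium severity:
    # recorded as divergences, but they block only through the average.
    if high or avg < 3.0:
        return "consensus_blocked"
    return "consensus_reached"
```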
### Synthesis Process

```
Collect all perspective results
├─ Extract convergent themes (agreed by 2+ perspectives)
├─ Extract divergent views (conflicting assessments)
├─ Check coverage gaps from coverage result
├─ Compile action items from all suggestions
└─ Determine consensus per table above
```

## Phase 4: Validation

### Discussion Record

Write to `<session-folder>/discussions/<round-id>-discussion.md`:

```
# Discussion Record: <round-id>

**Artifact**: <artifact-path>
**Perspectives**: <list>
**Consensus**: reached / blocked

## Convergent Themes
- <theme>

## Divergent Views
- **<topic>** (<severity>): <description>

## Action Items
1. <item>

## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |

**Average**: <avg>/5
```

### Result Routing

| Outcome | Message Type | Content |
|---------|-------------|---------|
| Consensus reached | discussion_ready | Action items, record path, average rating |
| Consensus blocked | discussion_blocked | Divergence points, severity, record path |
| Artifact not found | error | Missing artifact path |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Artifact not found | Report error to coordinator |
| Single CLI perspective fails | Fallback to direct Claude analysis for that perspective |
| All CLI analyses fail | Generate basic discussion from direct artifact reading |
| All perspectives diverge | Report as discussion_blocked with all divergence points |
@@ -1,128 +0,0 @@
# Role: discussant

Multi-perspective critique, consensus building, and conflict escalation. Ensures quality feedback at each phase transition.

## Identity

- **Name**: `discussant` | **Prefix**: `DISCUSS-*` | **Tag**: `[discussant]`
- **Responsibility**: Load Artifact → Multi-Perspective Critique → Synthesize Consensus → Report

## Boundaries

### MUST
- Only process DISCUSS-* tasks
- Execute multi-perspective critique via CLI tools
- Detect coverage gaps from the coverage perspective
- Synthesize consensus with convergent/divergent analysis
- Write discussion records to the `discussions/` folder

### MUST NOT
- Create tasks
- Contact other workers directly
- Modify spec documents directly
- Skip perspectives defined in round config
- Ignore critical divergences

## Message Types

| Type | Direction | Trigger |
|------|-----------|---------|
| discussion_ready | → coordinator | Consensus reached |
| discussion_blocked | → coordinator | Cannot reach consensus |
| error | → coordinator | Input artifact missing |

## Toolbox

| Tool | Purpose |
|------|---------|
| commands/critique.md | Multi-perspective CLI critique |
| gemini CLI | Product, Risk, Coverage perspectives |
| codex CLI | Technical perspective |
| claude CLI | Quality perspective |

---

## Perspective Model

| Perspective | Focus | CLI Tool |
|-------------|-------|----------|
| Product | Market fit, user value, business viability | gemini |
| Technical | Feasibility, tech debt, performance, security | codex |
| Quality | Completeness, testability, consistency | claude |
| Risk | Risk identification, dependency analysis, failure modes | gemini |
| Coverage | Requirement completeness vs original intent, gap detection | gemini |

## Round Configuration

| Round | Artifact | Perspectives | Focus |
|-------|----------|-------------|-------|
| DISCUSS-001 | spec/discovery-context.json | product, risk, coverage | Scope confirmation |
| DISCUSS-002 | spec/product-brief.md | product, technical, quality, coverage | Positioning, feasibility |
| DISCUSS-003 | spec/requirements/_index.md | quality, product, coverage | Completeness, priority |
| DISCUSS-004 | spec/architecture/_index.md | technical, risk | Tech choices, security |
| DISCUSS-005 | spec/epics/_index.md | product, technical, quality, coverage | MVP scope, estimation |
| DISCUSS-006 | spec/readiness-report.md | all 5 | Final sign-off |

---

## Phase 2: Artifact Loading

**Objective**: Load target artifact and determine discussion parameters.

**Workflow**:
1. Extract session folder and round number from task subject (`DISCUSS-<NNN>`)
2. Look up round config from table above
3. Load target artifact from `<session-folder>/<artifact-path>`
4. Create `<session-folder>/discussions/` directory
5. Load prior discussion records for continuity

---

## Phase 3: Multi-Perspective Critique

**Objective**: Run parallel CLI analyses from each required perspective.

Delegate to `commands/critique.md` -- launches parallel CLI calls per perspective with focused prompts and designated tools.

---

## Phase 4: Consensus Synthesis

**Objective**: Synthesize results into a consensus with actionable outcomes.

**Synthesis process**:
1. Extract convergent themes (agreed by 2+ perspectives)
2. Extract divergent views (conflicting perspectives, with severity)
3. Check coverage gaps from the coverage perspective
4. Compile action items and open questions

**Consensus routing**:

| Condition | Status | Report |
|-----------|--------|--------|
| No high-severity divergences | consensus_reached | Action items, open questions, record path |
| Any high-severity divergence | consensus_blocked | Escalate divergence points to coordinator |

Write discussion record to `<session-folder>/discussions/<output-file>`.

**Output file naming convention**:

| Round | Output File |
|-------|------------|
| DISCUSS-001 | discuss-001-scope.md |
| DISCUSS-002 | discuss-002-brief.md |
| DISCUSS-003 | discuss-003-requirements.md |
| DISCUSS-004 | discuss-004-architecture.md |
| DISCUSS-005 | discuss-005-epics.md |
| DISCUSS-006 | discuss-006-signoff.md |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Target artifact not found | Notify coordinator |
| CLI perspective failure | Fallback to direct Claude analysis |
| All CLI analyses fail | Generate basic discussion from direct reading |
| All perspectives diverge | Escalate as discussion_blocked |
@@ -1,166 +0,0 @@
# Command: implement

## Purpose

Multi-backend code implementation: route tasks to the appropriate execution backend (direct edit, subagent, or CLI), build focused prompts, execute with retry and fallback.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Plan | `<session-folder>/plan/plan.json` | Yes |
| Task files | `<session-folder>/plan/.task/TASK-*.json` | Yes |
| Backend | Task metadata / plan default / auto-select | Yes |
| Working directory | task.metadata.working_dir or project root | No |
| Wisdom | `<session-folder>/wisdom/` | No |

## Phase 3: Implementation

### Backend Selection

Priority order (first match wins):

| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | `task.metadata.executor` field |
| 2 | Plan default | "Execution Backend:" line in plan.json |
| 3 | Auto-select | See auto-select table below |

**Auto-select routing**:

| Condition | Backend |
|-----------|---------|
| Description < 200 chars AND no refactor/architecture keywords AND single target file | agent (direct edit) |
| Description < 200 chars AND simple scope | agent (subagent) |
| Complex scope OR architecture keywords | codex |
| Analysis-heavy OR multi-module integration | gemini |
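One way to read the auto-select table as a first-match routing function; the keyword sets and the exact evaluation order are assumptions for illustration:

```python
# Sketch: auto-select backend routing from description and target files.
ARCH_KEYWORDS = {"refactor", "architecture"}
ANALYSIS_KEYWORDS = {"analyze", "analysis", "integration"}

def auto_select(description, target_files):
    words = set(description.lower().split())
    # "Simple" = short description with no refactor/architecture keywords.
    simple = len(description) < 200 and not (words & ARCH_KEYWORDS)
    if simple and len(target_files) == 1:
        return "agent (direct edit)"
    if simple:
        return "agent (subagent)"
    if words & ANALYSIS_KEYWORDS:
        return "gemini"
    return "codex"
```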
### Execution Paths

```
Backend selected
├─ agent (direct edit)
│  └─ Read target file → Edit directly → no subagent overhead
├─ agent (subagent)
│  └─ Task({ subagent_type: "code-developer", run_in_background: false })
├─ codex (CLI)
│  └─ Bash(command="ccw cli ... --tool codex --mode write", run_in_background=true)
└─ gemini (CLI)
   └─ Bash(command="ccw cli ... --tool gemini --mode write", run_in_background=true)
```

### Path 1: Direct Edit (agent, simple task)

```bash
Read(file_path="<target-file>")
Edit(file_path="<target-file>", old_string="<old>", new_string="<new>")
```

### Path 2: Subagent (agent, moderate task)

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement <task-id>",
  prompt: "<execution-prompt>"
})
```

### Path 3: CLI Backend (codex or gemini)

```bash
Bash(command="ccw cli -p '<execution-prompt>' --tool <codex|gemini> --mode write --cd <working-dir>", run_in_background=true)
```

### Execution Prompt Template

All backends receive the same structured prompt:

```
# Implementation Task: <task-id>

## Task Description
<task-description>

## Acceptance Criteria
1. <criterion>

## Context from Plan
<architecture-section>
<technical-stack-section>
<task-context-section>

## Files to Modify
<target-files or "Auto-detect based on task">

## Constraints
- Follow existing code style and patterns
- Preserve backward compatibility
- Add appropriate error handling
- Include inline comments for complex logic
```

### Batch Execution

When multiple IMPL tasks exist, execute in dependency order:

```
Topological sort by task.depends_on
├─ Batch 1: Tasks with no dependencies → execute
├─ Batch 2: Tasks depending on batch 1 → execute
└─ Batch N: Continue until all tasks complete

Progress update per batch (when > 1 batch):
→ team_msg: "Processing batch <N>/<total>: <task-id>"
```
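The batching above can be sketched as a Kahn-style grouping. The `batches` function and the task-id → depends_on mapping are illustrative; the real input is the plan's task files:

```python
# Sketch: group IMPL tasks into dependency-ordered batches.
def batches(tasks):
    """tasks maps task-id -> depends_on list. Returns executable batches."""
    done = set()
    remaining = dict(tasks)
    result = []
    while remaining:
        # A task is ready when every dependency is in an earlier batch.
        batch = sorted(t for t, deps in remaining.items()
                       if all(d in done for d in deps))
        if not batch:
            raise ValueError(f"dependency cycle among {sorted(remaining)}")
        result.append(batch)
        done.update(batch)
        for t in batch:
            del remaining[t]
    return result
```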
### Retry and Fallback
|
||||
|
||||
**Retry** (max 3 attempts per task):
|
||||
|
||||
```
|
||||
Attempt 1 → failure
|
||||
├─ team_msg: "Retry 1/3 after error: <message>"
|
||||
└─ Attempt 2 → failure
|
||||
├─ team_msg: "Retry 2/3 after error: <message>"
|
||||
└─ Attempt 3 → failure → fallback
|
||||
|
||||
```

**Fallback** (when primary backend fails after retries):

| Primary Backend | Fallback |
|----------------|----------|
| codex | agent (subagent) |
| gemini | agent (subagent) |
| agent (subagent) | Report failure to coordinator |
| agent (direct edit) | agent (subagent) |
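
Combined, retry plus fallback could look like this sketch. The synchronous `run` callback stands in for the real backend invocation, and giving the fallback backend the same three attempts is an assumption:

```typescript
type Backend = "codex" | "gemini" | "agent";

// Fallback chain from the table above; null means escalate to coordinator.
const FALLBACK: Record<Backend, Backend | null> = {
  codex: "agent",
  gemini: "agent",
  agent: null,
};

// Try the primary backend up to maxRetries times, then the fallback.
// Returns the backend that succeeded; throws once everything is exhausted.
function executeWithRetry(
  backend: Backend,
  run: (b: Backend) => void, // throws on failure
  maxRetries = 3,
): Backend {
  const chain: (Backend | null)[] = [backend, FALLBACK[backend]];
  for (const b of chain) {
    if (b === null) continue;
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        run(b);
        return b;
      } catch {
        // team_msg: `Retry ${attempt}/${maxRetries} after error: <message>`
      }
    }
  }
  throw new Error("All retries and fallback exhausted");
}
```
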

## Phase 4: Validation

### Self-Validation Steps

| Step | Method | Pass Criteria |
|------|--------|--------------|
| Syntax check | `Bash(command="tsc --noEmit", timeout=30000)` | Exit code 0 |
| Acceptance match | Check criteria keywords vs modified files | All criteria addressed |
| Test detection | Search for .test.ts/.spec.ts matching modified files | Tests identified |
| File changes | `Bash(command="git diff --name-only HEAD")` | At least 1 file modified |
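
A minimal sketch of the command-based checks from the table; the `sh` callback abstracts shell execution and is illustrative, not the actual tool API:

```typescript
// Abstract command runner so the checks can be exercised without a toolchain.
type Sh = (cmd: string) => { code: number; stdout: string };

// Run the command-driven self-validation steps; each yields pass/fail.
function selfValidate(sh: Sh): { step: string; pass: boolean }[] {
  return [
    // Syntax check: tsc must exit 0.
    { step: "syntax", pass: sh("tsc --noEmit").code === 0 },
    // File changes: at least one modified file in the diff.
    {
      step: "file-changes",
      pass: sh("git diff --name-only HEAD").stdout.trim().length > 0,
    },
  ];
}
```
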

### Result Routing

| Outcome | Message Type | Content |
|---------|-------------|---------|
| All tasks pass validation | impl_complete | Task ID, files modified, backend used |
| Batch progress | impl_progress | Batch index, total batches, current task |
| Validation failure after retries | error | Task ID, error details, retry count |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Syntax errors after implementation | Retry with error context (max 3) |
| Backend unavailable | Fallback to agent |
| Missing dependencies | Request from coordinator |
| All retries + fallback exhausted | Report failure with full error log |

@@ -1,103 +0,0 @@
# Role: executor

Code implementation following approved plans. Multi-backend execution with self-validation.

## Identity

- **Name**: `executor` | **Prefix**: `IMPL-*` | **Tag**: `[executor]`
- **Responsibility**: Load plan → Route to backend → Implement → Self-validate → Report
## Boundaries
|
||||
|
||||
### MUST
|
||||
- Only process IMPL-* tasks
|
||||
- Follow approved plan exactly
|
||||
- Use declared execution backends
|
||||
- Self-validate all implementations
|
||||
|
||||
### MUST NOT
|
||||
- Create tasks
|
||||
- Contact other workers directly
|
||||
- Modify plan files
|
||||
- Skip self-validation
|
||||
|
||||

## Message Types

| Type | Direction | Trigger |
|------|-----------|---------|
| impl_complete | → coordinator | Implementation success |
| impl_progress | → coordinator | Batch progress |
| error | → coordinator | Implementation failure |
## Toolbox
|
||||
|
||||
| Tool | Purpose |
|
||||
|------|---------|
|
||||
| commands/implement.md | Multi-backend implementation |
|
||||
| code-developer agent | Simple tasks (synchronous) |
|
||||
| ccw cli --tool codex --mode write | Complex tasks |
|
||||
| ccw cli --tool gemini --mode write | Alternative backend |
|
||||
|
||||
---
|
||||
|
||||

## Phase 2: Task & Plan Loading

**Objective**: Load plan and determine execution strategy.

1. Load plan.json and .task/TASK-*.json from `<session-folder>/plan/`

**Backend selection** (priority order):

| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | task.metadata.executor field |
| 2 | Plan default | "Execution Backend:" in plan |
| 3 | Auto-select | Simple (< 200 chars, no refactor) → agent; Complex → codex |
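
The priority-3 auto-select heuristic could be sketched as follows; the 200-character threshold comes from the table, while the `refactor` keyword test is an illustrative reading of "no refactor":

```typescript
// Auto-select a backend when neither task metadata nor the plan declares one:
// short, non-refactoring tasks go to the agent backend, the rest to codex.
function autoSelectBackend(description: string): "agent" | "codex" {
  const simple = description.length < 200 && !/refactor/i.test(description);
  return simple ? "agent" : "codex";
}
```
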

**Code review selection**:

| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | task.metadata.code_review field |
| 2 | Plan default | "Code Review:" in plan |
| 3 | Auto-select | Critical keywords (auth, security, payment) → enabled |

---

## Phase 3: Code Implementation

**Objective**: Execute implementation across batches.

**Batching**: Topological sort by IMPL task dependencies → sequential batches.

Delegate to `commands/implement.md` for prompt building and backend routing:

| Backend | Invocation | Use Case |
|---------|-----------|----------|
| agent | Task({ subagent_type: "code-developer", run_in_background: false }) | Simple, direct edits |
| codex | ccw cli --tool codex --mode write (background) | Complex, architecture |
| gemini | ccw cli --tool gemini --mode write (background) | Analysis-heavy |

---

## Phase 4: Self-Validation

| Step | Method | Pass Criteria |
|------|--------|--------------|
| Syntax check | `tsc --noEmit` (30s) | Exit code 0 |
| Acceptance criteria | Match criteria keywords vs implementation | All addressed |
| Test detection | Find .test.ts/.spec.ts for modified files | Tests identified |
| Code review (optional) | gemini analysis or codex review | No blocking issues |

**Report**: task ID, status, files modified, validation results, backend used.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Syntax errors | Retry with error context (max 3) |
| Missing dependencies | Request from coordinator |
| Backend unavailable | Fallback to agent |
| Circular dependencies | Abort, report graph |