Compare commits

...

30 Commits

Author SHA1 Message Date
catlog22
4d17bb02a4 chore: release v7.1.0 2026-03-01 23:17:52 +08:00
catlog22
5cab8ae8a5 fix: CSRF token accessibility and hook installation status
- Remove HttpOnly from XSRF-TOKEN cookie for JavaScript readability
- Add hook installation status detection in system settings API
- Update InjectionControlTab to show installed hooks status
- Add brace expansion support in globToRegex utility
2026-03-01 23:17:37 +08:00
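The brace-expansion addition to `globToRegex` mentioned above could look roughly like the following sketch; the real utility's signature and edge-case handling (nested braces, escaping) are assumptions.

```javascript
// Hypothetical sketch of globToRegex with brace expansion ({a,b} -> (a|b));
// the actual implementation in the repo is not shown here.
function escapeRegex(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

function globToRegex(glob) {
  let out = '';
  for (let i = 0; i < glob.length; i++) {
    const c = glob[i];
    if (c === '*') {
      out += '.*';                       // glob wildcard
    } else if (c === '?') {
      out += '.';                        // single-character wildcard
    } else if (c === '{') {
      const end = glob.indexOf('}', i);  // naive: no nested braces
      if (end === -1) { out += '\\{'; continue; }
      const alts = glob.slice(i + 1, end).split(',').map(escapeRegex);
      out += '(' + alts.join('|') + ')'; // brace group -> regex alternation
      i = end;
    } else {
      out += escapeRegex(c);
    }
  }
  return new RegExp('^' + out + '$');
}
```

Under this sketch, `globToRegex('src/*.{js,ts}')` would match `src/app.ts` but not `src/app.css`.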
catlog22
ffe3b427ce feat(docs): add skill/team command comparison table and code review report 2026-03-01 21:01:26 +08:00
catlog22
8c953b287d feat(idaw): add run-coordinate command for external CLI execution with hook callbacks 2026-03-01 20:58:26 +08:00
catlog22
b1e321267e docs: fix command invocation syntax accuracy [IDAW-004]
- Fix /workflow-tdd → /workflow-tdd-plan (correct skill name)
- Fix /workflow:test-fix → /workflow-test-fix (skill, not command)
- Fix /workflow:skill-designer → /workflow-skill-designer (skill)
- Fix /workflow:plan → /workflow-plan (skill, not command)
- Remove non-existent /workflow:wave-plan reference
- Update both English and Chinese documentation
2026-03-01 20:50:30 +08:00
catlog22
d0275f14b2 feat(idaw): add CLI-assisted analysis for pre-task context and error recovery
- Pre-task context analysis via gemini for bugfix/complex tasks
- CLI-assisted error diagnosis before retry on skill failure
- Consistent implementation across run.md and resume.md
2026-03-01 20:47:19 +08:00
catlog22
ee4dc367d9 docs: fix 404 errors - add missing zh guide files and fix zh-CN config [IDAW-002]
- Add docs/zh/guide/first-workflow.md (Chinese translation)
- Add docs/zh/guide/cli-tools.md (Chinese translation)
- Fix zh-CN locale config to only show existing files (dashboard, terminal, queue)
- Remove non-existent zh-CN sidebar entries that caused 404 errors
2026-03-01 20:34:11 +08:00
catlog22
a63fb370aa docs: fix repository URLs in getting started guide [IDAW-001]
Replace placeholder URLs with actual repository URL:
https://github.com/catlog22/Claude-Code-Workflow.git
2026-03-01 20:24:20 +08:00
catlog22
da19a6ec89 feat: Implement IDAW commands and update favicon/logo SVGs
- Added IDAW (Independent Development Autonomous Workflow) commands for batch task execution, including `/idaw:add`, `/idaw:run`, `/idaw:status`, and `/idaw:resume`.
- Updated documentation for IDAW commands in both English and Chinese.
- Modified favicon and logo SVGs to reflect new orbital design with dynamic colors.
- Incremented package version from 7.0.6 to 7.0.9.
2026-03-01 20:05:44 +08:00
catlog22
bf84a157ea chore: bump version to 7.0.9
feat(idaw): Independent Development Autonomous Workflow
- /idaw:add — manual task creation + import from ccw issue
- /idaw:run — 6-phase serial orchestrator with git checkpoints
- /idaw:status — read-only progress viewer
- /idaw:resume — resume interrupted sessions from last checkpoint
2026-03-01 19:50:27 +08:00
catlog22
41f990ddd4 Enhance shell safety in skill argument assembly and add animated orbital motion demo
- Updated `assembleSkillArgs` function in `resume.md` and `run.md` to sanitize task goal for shell safety by escaping special characters.
- Introduced a new animated orbital motion demo in `icon-concepts.html`, showcasing agents orbiting with varying speeds and a breathing core effect.
2026-03-01 19:48:50 +08:00
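The shell-safety escaping described above can be sketched as single-quote wrapping, a common POSIX-safe technique; `shellQuote`, `assembleGoalArg`, and the `--goal` flag are illustrative names, not the skill's actual API.

```javascript
// Hedged sketch: POSIX-safe quoting by wrapping in single quotes and
// escaping embedded single quotes as '\'' (close quote, escaped quote, reopen).
function shellQuote(value) {
  return "'" + String(value).replace(/'/g, "'\\''") + "'";
}

// Illustrative use inside skill argument assembly
function assembleGoalArg(taskGoal) {
  return '--goal ' + shellQuote(taskGoal);
}
```

Single-quoting neutralizes `$(...)`, backticks, `;`, and other metacharacters, so a hostile task goal cannot inject commands into the assembled shell line.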
catlog22
3463bc8e27 feat(idaw): add resume, run, and status commands for task management
- Implemented /idaw:resume to resume interrupted sessions with task handling and auto mode.
- Created /idaw:run for executing task skill chains with git checkpoints and session management.
- Added /idaw:status for viewing task and session progress, including overview and specific session details.
- Introduced helper functions for task type inference and skill argument assembly.
- Enhanced task management with session tracking, progress reporting, and error handling.
2026-03-01 19:40:05 +08:00
catlog22
9ad755e225 feat: add comprehensive analysis report for Hook templates compliance with official standards
- Introduced a detailed report outlining compliance issues and recommendations for the `ccw/frontend` implementation of Hook templates.
- Identified critical issues regarding command structure and input reading methods.
- Highlighted errors related to cross-platform compatibility of Bash scripts on Windows.
- Documented warnings regarding matcher formats and exit code usage.
- Provided a summary of supported trigger types and outlined missing triggers.
- Included a section on completed fixes and references to affected files for easier tracking.
2026-03-01 15:12:44 +08:00
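One of the flagged compliance issues, input reading, is conventionally solved by parsing JSON from the hook's stdin rather than argv. A minimal decision helper might look like this; the payload field names and the blocking rule below are assumptions for illustration only.

```javascript
// Hedged sketch: derive a hook's exit code from its stdin JSON payload.
// Convention assumed here: exit 0 = allow, exit 2 = blocking error.
function hookExitCode(rawStdin) {
  let payload;
  try {
    payload = JSON.parse(rawStdin);
  } catch {
    return 0; // malformed input: fail open rather than block the tool
  }
  const cmd = (payload.tool_input && payload.tool_input.command) || '';
  const dangerous = payload.tool_name === 'Bash' && /rm\s+-rf\s+\//.test(cmd);
  return dangerous ? 2 : 0;
}
```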
catlog22
8799a9c2fd refactor(team-planex): redesign skill with inverted control and beat model
- Delete executor agent (main flow IS the executor now)
- Rewrite SKILL.md: delegated planning + inline execution
- Input accepts issues.jsonl / roadmap session from roadmap-with-file
- Single reusable planner agent via send_input (Pattern 2.3)
- Interleaved plan-execute loop with eager delegation
- Follow codex v3 conventions (decision tables, placeholders)
- Remove complexity assessment and dynamic splitting
2026-03-01 15:06:06 +08:00
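The interleaved plan-execute loop with eager delegation can be sketched as a pipeline that starts executing plan N while planning N+1; `planner.sendInput` and `execute` are assumed interfaces, not the skill's actual API.

```javascript
// Rough sketch of an interleaved plan-execute loop with eager delegation.
async function planExecuteLoop(issues, planner, execute) {
  let pending = null; // in-flight execution of the previous plan
  for (const issue of issues) {
    const plan = await planner.sendInput(issue); // single reusable planner agent
    if (pending) await pending;                  // planning overlapped with execution
    pending = execute(plan);                     // eager delegation: start immediately
  }
  if (pending) await pending;                    // drain the final execution
}
```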
catlog22
1f859ae4b9 fix: align spec paths and add missing translation keys 2026-03-01 13:42:25 +08:00
catlog22
ecf4e4d848 fix: align spec paths from .workflow/specs to .ccw/specs
- Fix path mismatch between command files and frontend/backend spec-index-builder
- Update init-specs.md, init-guidelines.md, sync.md, solidify.md to use .ccw/specs/
- Update init.md, start.md, clean.md, unified-execute-with-file.md, collaborative-plan-with-file.md
- Add scope field to architecture-constraints.md and coding-conventions.md
- Ensures specs created by commands are visible in frontend Spec Settings page
2026-03-01 13:28:54 +08:00
catlog22
8ceae6d6fd Add Chinese documentation for custom skills development and reference guide
- Created a new document for custom skills development (`custom.md`) detailing the structure, creation, implementation, and best practices for developing custom CCW skills.
- Added an index document (`index.md`) summarizing all built-in skills, their categories, and usage examples.
- Introduced a reference guide (`reference.md`) providing a quick reference for all 33 built-in CCW skills, including triggers and purposes.
2026-03-01 13:08:12 +08:00
catlog22
2fb93d20e0 feat: add queue management and terminal dashboard documentation in Chinese
- Introduced comprehensive documentation for the queue management feature, detailing its pain points, core functionalities, and component structure.
- Added terminal dashboard documentation, highlighting its layout, core features, and usage examples.
- Created an index page in Chinese for Claude Code Workflow, summarizing its purpose and core features, along with quick links to installation and guides.
2026-03-01 10:52:46 +08:00
catlog22
a753327acc chore: bump version to 7.0.8
feat(team-coordinate-v2): enhance task descriptions with structured format
- Add PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS sections
- Include goal, actionable steps, key files, and success criteria
- Add Step 2.5: Key File Inference in analyze-task.md
- Update dispatch.md with structured task description template

feat(team-lifecycle-v5): adopt structured task descriptions
- Update task description template in dispatch.md
- Apply structured format to revision and improvement tasks
- Improve worker context with clear goals and file references

feat(team-skill-designer-v4): enforce structured task format
- Add task description template specification in phase 3
- Update quality standards to verify structured format
- Add integration verification checks for new format
- Ensure all generated v5 skills use structured descriptions

Benefits:
- Workers receive clearer context upfront
- Backward compatible with existing team-worker Phase 1
- Consistent format across all v5 team workflows
2026-02-28 23:51:25 +08:00
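The structured format above (PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS) could be assembled roughly as follows; the section names come from the commit message, while the function name and field wording are assumptions.

```javascript
// Hedged sketch of a structured task description builder.
function buildTaskDescription({ purpose, task, context, expected, constraints }) {
  return [
    `PURPOSE: ${purpose}`,
    `TASK: ${task}`,
    `CONTEXT: ${context}`,
    `EXPECTED: ${expected}`,
    `CONSTRAINTS: ${constraints}`,
  ].join('\n');
}
```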
catlog22
f61a3da957 feat(dispatch): update task description template for improved clarity and structure 2026-02-28 23:49:59 +08:00
catlog22
b0fb899675 feat(dispatch): enhance task description structure with detailed fields and context 2026-02-28 23:45:20 +08:00
catlog22
0a49dc0675 feat: add comprehensive CCWMCP guide and installation instructions 2026-02-28 23:40:51 +08:00
catlog22
096fc1c380 docs: fix installation guide and update MCP recommendations
- Fix installation method to use `ccw install` command
- Update repository URL to correct GitHub location
- Reduce MCP recommendations to 3 servers (ace-tool, chrome-devtools, exa)
- Add ccwmcp documentation guide
- Align documentation with actual CLI and frontend implementation
2026-02-28 23:40:45 +08:00
catlog22
29f0a6cdb8 chore: bump version to 7.0.7 2026-02-28 23:10:11 +08:00
catlog22
e83414abf3 feat(theme): implement dynamic theme logo with reactive color updates 2026-02-28 23:08:27 +08:00
catlog22
e42597b1bc refactor(team): add fast-advance notification and knowledge transfer protocol
- team-worker: add fast_advance message bus log after spawning successor,
  closing coordinator state blind spot during fast-advance
- team-worker: add Knowledge Transfer section with upstream loading,
  downstream publishing, and context_accumulator conventions
- role-spec-template: add Knowledge Transfer Protocol with Transfer
  Channels table and shared-memory.json namespaced write convention
- monitor.md (v2+v5): add fast-advance reconciliation step reading
  fast_advance messages, add State Sync section for coordinator wake
- lifecycle-v5 SKILL.md: update cadence diagram with fast_advance log
2026-02-28 22:53:56 +08:00
catlog22
67b2129f3c Refactor code structure for improved readability and maintainability 2026-02-28 22:32:07 +08:00
catlog22
19fb4d86c7 fix: add -y auto mode bypass for all ccw-coordinator referenced skills
Harmonize orchestrator files (ccw.md, ccw-coordinator.md) with cross-file
consistency fixes, and add missing -y/--yes non-interactive bypass gates
to 7 skills that declared auto mode support but had blocking AskUserQuestion
calls: team-planex, issue:discover, issue:plan, issue:queue, issue:execute,
workflow:debug-with-file, issue:from-brainstorm.
2026-02-28 21:29:38 +08:00
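The bypass gate described above amounts to checking the flag before any blocking prompt. A minimal sketch, with assumed names:

```javascript
// Hedged sketch of the -y/--yes non-interactive bypass gate.
function shouldPromptUser(argv) {
  const autoMode = argv.includes('-y') || argv.includes('--yes');
  return !autoMode; // in auto mode, skip blocking AskUserQuestion calls
}
```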
catlog22
65763c76e9 Add TDD Structure Validation and Verification Phases with Comprehensive Reporting
- Introduced Phase 6: TDD Structure Validation to ensure compliance with TDD workflow standards, including task structure validation, dependency checks, and user configuration verification.
- Implemented Phase 7: TDD Verification for full compliance checks, including task chain structure validation, coverage analysis, and TDD cycle verification.
- Generated detailed TDD compliance reports with quality gate recommendations based on objective criteria.
- Added documentation for new commands and workflows in the Claude Commands index.
2026-02-28 20:41:06 +08:00
catlog22
4a89f626fc Refactor documentation for code commands and workflows
- Updated command syntax formatting to use code blocks for clarity in `prep.md`, `review.md`, and `spec.md`.
- Enhanced architectural diagrams in `ch01-what-is-claude-dms3.md` and core concepts in `ch03-core-concepts.md` using mermaid syntax for better visualization.
- Improved workflow diagrams in `ch04-workflow-basics.md` and `4-level.md` to provide clearer representations of processes.
- Added troubleshooting section in `installation.md` to address common installation issues and provide quick start examples.
- Revised skill documentation in `claude-meta.md` and `claude-workflow.md` to standardize command triggers and output structures.
- Updated best practices and workflow index documentation to enhance readability and understanding of workflow levels and practices.
2026-02-28 19:53:24 +08:00
307 changed files with 31712 additions and 4421 deletions

View File

@@ -9,6 +9,7 @@ keywords:
- pattern
readMode: required
priority: high
scope: project
---
# Architecture Constraints

View File

@@ -9,6 +9,7 @@ keywords:
- convention
readMode: required
priority: high
scope: project
---
# Coding Conventions

View File

@@ -301,7 +301,7 @@ Document known constraints that affect planning:
[Continue for all major feature groups]
**Note**: Detailed task breakdown into executable work items is handled by `/workflow:plan` → `IMPL_PLAN.md`
**Note**: Detailed task breakdown into executable work items is handled by `/workflow-plan` → `IMPL_PLAN.md`
---

View File

@@ -43,7 +43,7 @@ Core requirements, objectives, technical approach summary (2-3 paragraphs max).
**Quality Gates**:
- concept-verify: ✅ Passed (0 ambiguities remaining) | ⏭️ Skipped (user decision) | ⏳ Pending
- plan-verify: ⏳ Pending (recommended before /workflow:execute)
- plan-verify: ⏳ Pending (recommended before /workflow-execute)
**Context Package Summary**:
- **Focus Paths**: {list key directories from context-package.json}

View File

@@ -711,7 +711,7 @@ All workflows use the same file structure definition regardless of complexity. *
│ ├── [.chat/] # CLI interaction sessions (created when analysis is run)
│ │ ├── chat-*.md # Saved chat sessions
│ │ └── analysis-*.md # Analysis results
│ ├── [.process/] # Planning analysis results (created by /workflow:plan)
│ ├── [.process/] # Planning analysis results (created by /workflow-plan)
│ │ └── ANALYSIS_RESULTS.md # Analysis results and planning artifacts
│ ├── IMPL_PLAN.md # Planning document (REQUIRED)
│ ├── TODO_LIST.md # Progress tracking (REQUIRED)
@@ -783,7 +783,7 @@ All workflows use the same file structure definition regardless of complexity. *
**Examples**:
*Workflow Commands (lightweight):*
- `/workflow:lite-plan "feature idea"` (exploratory) → `.scratchpad/lite-plan-feature-idea-20250105-143110.md`
- `/workflow-lite-plan "feature idea"` (exploratory) → `.scratchpad/lite-plan-feature-idea-20250105-143110.md`
- `/workflow:lite-fix "bug description"` (bug fixing) → `.scratchpad/lite-fix-bug-20250105-143130.md`
> **Note**: Direct CLI commands (`/cli:analyze`, `/cli:execute`, etc.) have been replaced by semantic invocation and workflow commands.

View File

@@ -455,7 +455,7 @@ function buildCliCommand(task, cliTool, cliPrompt) {
**Auto-Check Workflow Context**:
- Verify session context paths are provided in agent prompt
- If missing, request session context from workflow:execute
- If missing, request session context from workflow-execute
- Never assume default paths without explicit session context
### 5. Problem-Solving

View File

@@ -237,7 +237,7 @@ After Phase 4 completes, determine Phase 5 variant:
### Phase 5-L: Loop Completion (when inner_loop=true AND more same-prefix tasks pending)
1. **TaskUpdate**: Mark current task `completed`
2. **Message Bus**: Log completion
2. **Message Bus**: Log completion with verification evidence
```
mcp__ccw-tools__team_msg(
operation="log",
@@ -245,7 +245,7 @@ After Phase 4 completes, determine Phase 5 variant:
from=<role>,
to="coordinator",
type=<message_types.success>,
summary="[<role>] <task-id> complete. <brief-summary>",
summary="[<role>] <task-id> complete. <brief-summary>. Verified: <verification_method>",
ref=<artifact-path>
)
```
@@ -283,7 +283,7 @@ After Phase 4 completes, determine Phase 5 variant:
| Condition | Action |
|-----------|--------|
| Same-prefix successor (inner loop role) | Do NOT spawn — main agent handles via inner loop |
| 1 ready task, simple linear successor, different prefix | Spawn directly via `Task(run_in_background: true)` |
| 1 ready task, simple linear successor, different prefix | Spawn directly via `Task(run_in_background: true)` + log `fast_advance` to message bus |
| Multiple ready tasks (parallel window) | SendMessage to coordinator (needs orchestration) |
| No ready tasks + others running | SendMessage to coordinator (status update) |
| No ready tasks + nothing running | SendMessage to coordinator (pipeline may be complete) |
@@ -311,6 +311,23 @@ inner_loop: <true|false based on successor role>`
})
```
### Fast-Advance Notification
After spawning a successor via fast-advance, MUST log to message bus:
```
mcp__ccw-tools__team_msg(
operation="log",
team=<session_id>,
from=<role>,
to="coordinator",
type="fast_advance",
summary="[<role>] fast-advanced <completed-task-id> → spawned <successor-role> for <successor-task-id>"
)
```
This is a passive log entry (NOT a SendMessage). Coordinator reads it on next callback to reconcile `active_workers`.
### SendMessage Format
```
@@ -320,8 +337,10 @@ SendMessage(team_name=<team_name>, recipient="coordinator", message="[<role>] <f
**Final report contents**:
- Tasks completed (count + list)
- Artifacts produced (paths)
- Files modified (paths + before/after evidence from Phase 4 verification)
- Discuss results (verdicts + ratings)
- Key decisions (from context_accumulator)
- Verification summary (methods used, pass/fail status)
- Any warnings or issues
---
@@ -385,6 +404,48 @@ Write discoveries to corresponding wisdom files:
---
## Knowledge Transfer
### Upstream Context Loading (Phase 2)
When executing Phase 2 of a role-spec, the worker MUST load available cross-role context:
| Source | Path | Load Method |
|--------|------|-------------|
| Upstream artifacts | `<session>/artifacts/*.md` | Read files listed in task description or dependency chain |
| Shared memory | `<session>/shared-memory.json` | Read and parse JSON |
| Wisdom | `<session>/wisdom/*.md` | Read all wisdom files |
| Exploration cache | `<session>/explorations/cache-index.json` | Check before new explorations |
### Downstream Context Publishing (Phase 4)
After Phase 4 verification, the worker MUST publish its contributions:
1. **Artifact**: Write deliverable to `<session>/artifacts/<prefix>-<task-id>-<name>.md`
2. **shared-memory.json**: Read-merge-write under role namespace
```json
{ "<role>": { "key_findings": [...], "decisions": [...], "files_modified": [...] } }
```
3. **Wisdom**: Append new patterns to `learnings.md`, decisions to `decisions.md`, issues to `issues.md`
### Inner Loop Context Accumulator
For `inner_loop: true` roles, `context_accumulator` is maintained in-memory:
```
context_accumulator.append({
task: "<task-id>",
artifact: "<output-path>",
key_decisions: [...],
summary: "<brief>",
files_modified: [...]
})
```
Pass the full accumulator to each subsequent task's Phase 3 subagent as `## Prior Context`.
---
## Message Bus Protocol
Always use `mcp__ccw-tools__team_msg` for logging. Parameters:

File diff suppressed because it is too large

View File

@@ -18,25 +18,17 @@ Main process orchestrator: intent analysis → workflow selection → command ch
| `workflow-lite-plan` | explore → plan → confirm → execute |
| `workflow-plan` | session → context → convention → gen → verify/replan |
| `workflow-execute` | session discovery → task processing → commit |
| `workflow-tdd` | 6-phase TDD plan → verify |
| `workflow-tdd-plan` | 6-phase TDD plan → verify |
| `workflow-test-fix` | session → context → analysis → gen → cycle |
| `workflow-multi-cli-plan` | ACE context → CLI discussion → plan → execute |
| `review-cycle` | session/module review → fix orchestration |
| `brainstorm` | auto/single-role → artifacts → analysis → synthesis |
| `spec-generator` | product-brief → PRD → architecture → epics |
| `workflow:collaborative-plan-with-file` | understanding agent → parallel agents → plan-note.md |
| `workflow:req-plan-with-file` | requirement decomposition → issue creation → execution-plan.json |
| `workflow:roadmap-with-file` | strategic requirement roadmap → issue creation → execution-plan.json |
| `workflow:integration-test-cycle` | explore → test dev → test-fix cycle → reflection |
| `workflow:refactor-cycle` | tech debt discovery → prioritize → execute → validate |
| `team-planex` | planner + executor wave pipeline (plan while executing) |
| `team-iterdev` | iterative development team (planner → developer → reviewer loop) |
| `team-lifecycle` | full-lifecycle team (spec → impl → test) |
| `team-issue` | issue resolution team (discover → plan → execute) |
| `team-testing` | testing team (strategy → generate → execute → analyze) |
| `team-quality-assurance` | QA team (scout → strategist → generator → executor → analyst) |
| `team-brainstorm` | team brainstorming (facilitator → participants → synthesizer) |
| `team-uidesign` | UI design team (designer → implementer dual-track) |
Standalone commands (still using the colon format): workflow:brainstorm-with-file, workflow:debug-with-file, workflow:analyze-with-file, workflow:collaborative-plan-with-file, workflow:req-plan-with-file, workflow:integration-test-cycle, workflow:refactor-cycle, workflow:unified-execute-with-file, workflow:clean, workflow:init, workflow:init-guidelines, workflow:ui-design:*, issue:*, workflow:session:*
| `team-planex` | planner + executor wave pipeline; suited to large batches of scattered issues or the well-scoped issues produced by roadmap, for 0→1 development |
## Core Concept: Self-Contained Skills
@@ -53,22 +45,17 @@ Main process orchestrator: intent analysis → workflow selection → command ch
|---------|-------|------|
| Lightweight Plan+Execute | `workflow-lite-plan` | plan→execute completed internally |
| Standard planning | `workflow-plan` → `workflow-execute` | plan and execute are separate Skills |
| TDD Planning | `workflow-tdd` → `workflow-execute` | tdd-plan and execute are separate Skills |
| TDD Planning | `workflow-tdd-plan` → `workflow-execute` | tdd-plan and execute are separate Skills |
| Spec-driven | `spec-generator` → `workflow-plan` → `workflow-execute` | spec documents drive the full development cycle |
| Test pipeline | `workflow-test-fix` | gen→cycle completed internally |
| Code review | `review-cycle` | review→fix completed internally |
| Multi-CLI collaboration | `workflow-multi-cli-plan` | ACE context → CLI discussion → plan → execute |
| Collaborative planning | `workflow:collaborative-plan-with-file` | multi-agent collaboration producing plan-note.md |
| Requirement roadmap | `workflow:req-plan-with-file` | requirement decomposition → issue creation → execution plan |
| Analyze → Plan | `workflow:analyze-with-file` → `workflow-lite-plan` | collaborative analysis artifacts passed automatically to lite-plan |
| Brainstorm → Plan | `workflow:brainstorm-with-file` → `workflow-lite-plan` | brainstorm artifacts passed automatically to lite-plan |
| Collaborative planning | `workflow:collaborative-plan-with-file` → `workflow:unified-execute-with-file` | multi-agent planning → universal execution |
| Requirement roadmap | `workflow:roadmap-with-file` → `team-planex` | requirement decomposition → issue creation → wave pipeline execution |
| Integration test cycle | `workflow:integration-test-cycle` | self-iterating integration test loop |
| Refactor cycle | `workflow:refactor-cycle` | tech debt discovery → refactor → validate |
| Team Plan+Execute | `team-planex` | 2-person team wave pipeline (plan while executing) |
| Team iterative dev | `team-iterdev` | multi-role iterative development loop |
| Team full lifecycle | `team-lifecycle` | spec→impl→test end to end |
| Team Issue | `team-issue` | multi-role collaborative issue resolution |
| Team testing | `team-testing` | multi-role testing pipeline |
| Team QA | `team-quality-assurance` | multi-role quality assurance loop |
| Team brainstorming | `team-brainstorm` | multi-role collaborative brainstorming |
| Team UI design | `team-uidesign` | dual-track design + implementation |
## Execution Model
@@ -136,27 +123,21 @@ function analyzeIntent(input) {
function detectTaskType(text) {
const patterns = {
'bugfix-hotfix': /urgent|production|critical/ && /fix|bug/,
// With-File workflows (documented exploration with multi-CLI collaboration)
// With-File workflows (documented exploration → auto chain to lite-plan)
'brainstorm': /brainstorm|ideation|头脑风暴|创意|发散思维|creative thinking|multi-perspective.*think|compare perspectives|探索.*可能/,
'brainstorm-to-issue': /brainstorm.*issue|头脑风暴.*issue|idea.*issue|想法.*issue|从.*头脑风暴|convert.*brainstorm/,
'debug-file': /debug.*document|hypothesis.*debug|troubleshoot.*track|investigate.*log|调试.*记录|假设.*验证|systematic debug|深度调试/,
'analyze-file': /analyze.*document|explore.*concept|understand.*architecture|investigate.*discuss|collaborative analysis|分析.*讨论|深度.*理解|协作.*分析/,
'collaborative-plan': /collaborative.*plan|协作.*规划|多人.*规划|multi.*agent.*plan|Plan Note|分工.*规划/,
'req-plan': /roadmap|需求.*规划|需求.*拆解|requirement.*plan|req.*plan|progressive.*plan|路线.*图/,
'roadmap': /roadmap|需求.*规划|需求.*拆解|requirement.*plan|progressive.*plan|路线.*图/,
'spec-driven': /spec.*gen|specification|PRD|产品需求|产品文档|产品规格/,
// Cycle workflows (self-iterating with reflection)
'integration-test': /integration.*test|集成测试|端到端.*测试|e2e.*test|integration.*cycle/,
'refactor': /refactor|重构|tech.*debt|技术债务/,
// Team workflows (multi-role collaboration, explicit "team" keyword required)
// Team workflows (kept: team-planex only)
'team-planex': /team.*plan.*exec|team.*planex|团队.*规划.*执行|并行.*规划.*执行|wave.*pipeline/,
'team-iterdev': /team.*iter|team.*iterdev|迭代.*开发.*团队|iterative.*dev.*team/,
'team-lifecycle': /team.*lifecycle|全生命周期|full.*lifecycle|spec.*impl.*test.*team/,
'team-issue': /team.*issue.*resolv|团队.*issue|team.*resolve.*issue/,
'team-testing': /team.*test|测试团队|comprehensive.*test.*team|全面.*测试.*团队/,
'team-qa': /team.*qa|quality.*assurance.*team|QA.*团队|质量.*保障.*团队|团队.*质量/,
'team-brainstorm': /team.*brainstorm|团队.*头脑风暴|team.*ideation|多人.*头脑风暴/,
'team-uidesign': /team.*ui.*design|UI.*设计.*团队|dual.*track.*design|团队.*UI/,
// Standard workflows
'multi-cli-plan': /multi.*cli|多.*CLI|多模型.*协作|multi.*model.*collab/,
'multi-cli': /multi.*cli|多.*CLI|多模型.*协作|multi.*model.*collab/,
'bugfix': /fix|bug|error|crash|fail|debug/,
'issue-batch': /issues?|batch/ && /fix|resolve/,
'issue-transition': /issue workflow|structured workflow|queue|multi-stage/,
@@ -165,6 +146,7 @@ function detectTaskType(text) {
'ui-design': /ui|design|component|style/,
'tdd': /tdd|test-driven|test first/,
'test-fix': /test fail|fix test|failing test/,
'test-gen': /generate test|写测试|add test|补充测试/,
'review': /review|code review/,
'documentation': /docs|documentation|readme/
};
@@ -202,34 +184,29 @@ async function clarifyRequirements(analysis) {
function selectWorkflow(analysis) {
const levelMap = {
'bugfix-hotfix': { level: 2, flow: 'bugfix.hotfix' },
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm': { level: 4, flow: 'brainstorm-with-file' }, // Multi-perspective ideation
// With-File workflows → auto chain to lite-plan
'brainstorm': { level: 4, flow: 'brainstorm-to-plan' }, // brainstorm-with-file → lite-plan
'brainstorm-to-issue': { level: 4, flow: 'brainstorm-to-issue' }, // Brainstorm → Issue workflow
'debug-file': { level: 3, flow: 'debug-with-file' }, // Hypothesis-driven debugging
'analyze-file': { level: 3, flow: 'analyze-with-file' }, // Collaborative analysis
'debug-file': { level: 3, flow: 'debug-with-file' }, // Hypothesis-driven debugging (standalone)
'analyze-file': { level: 3, flow: 'analyze-to-plan' }, // analyze-with-file → lite-plan
'collaborative-plan': { level: 3, flow: 'collaborative-plan' }, // Multi-agent collaborative planning
'req-plan': { level: 4, flow: 'req-plan' }, // Requirement-level roadmap planning
'roadmap': { level: 4, flow: 'roadmap' }, // roadmap → team-planex
'spec-driven': { level: 4, flow: 'spec-driven' }, // spec-generator → plan → execute
// Cycle workflows (self-iterating with reflection)
'integration-test': { level: 3, flow: 'integration-test-cycle' }, // Self-iterating integration test
'refactor': { level: 3, flow: 'refactor-cycle' }, // Tech debt discovery and refactoring
// Team workflows (multi-role collaboration)
'integration-test': { level: 3, flow: 'integration-test-cycle' },
'refactor': { level: 3, flow: 'refactor-cycle' },
// Team workflows (kept: team-planex only)
'team-planex': { level: 'Team', flow: 'team-planex' },
'team-iterdev': { level: 'Team', flow: 'team-iterdev' },
'team-lifecycle': { level: 'Team', flow: 'team-lifecycle' },
'team-issue': { level: 'Team', flow: 'team-issue' },
'team-testing': { level: 'Team', flow: 'team-testing' },
'team-qa': { level: 'Team', flow: 'team-qa' },
'team-brainstorm': { level: 'Team', flow: 'team-brainstorm' },
'team-uidesign': { level: 'Team', flow: 'team-uidesign' },
// Standard workflows
'multi-cli-plan': { level: 3, flow: 'multi-cli-plan' }, // Multi-CLI collaborative planning
'multi-cli': { level: 3, flow: 'multi-cli-plan' },
'bugfix': { level: 2, flow: 'bugfix.standard' },
'issue-batch': { level: 'Issue', flow: 'issue' },
'issue-transition': { level: 2.5, flow: 'rapid-to-issue' }, // Bridge workflow
'issue-transition': { level: 2.5, flow: 'rapid-to-issue' },
'exploration': { level: 4, flow: 'full' },
'quick-task': { level: 2, flow: 'rapid' },
'ui-design': { level: analysis.complexity === 'high' ? 4 : 3, flow: 'ui' },
'tdd': { level: 3, flow: 'tdd' },
'test-gen': { level: 3, flow: 'test-gen' },
'test-fix': { level: 3, flow: 'test-fix-gen' },
'review': { level: 3, flow: 'review-cycle-fix' },
'documentation': { level: 2, flow: 'docs' },
@@ -281,18 +258,15 @@ function buildCommandChain(workflow, analysis) {
{ cmd: 'workflow-lite-plan', args: `"${analysis.goal}"` }
],
// With-File workflows (documented exploration with multi-CLI collaboration)
'brainstorm-with-file': [
{ cmd: 'workflow:brainstorm-with-file', args: `"${analysis.goal}"` }
// Note: Has built-in post-completion options (create plan, create issue, deep analysis)
// With-File → Auto Chain to lite-plan
'analyze-to-plan': [
{ cmd: 'workflow:analyze-with-file', args: `"${analysis.goal}"` },
{ cmd: 'workflow-lite-plan', args: '' } // auto receives analysis artifacts (discussion.md)
],
// Brainstorm-to-Issue workflow (bridge from brainstorm to issue execution)
'brainstorm-to-issue': [
// Note: Assumes brainstorm session already exists, or run brainstorm first
{ cmd: 'issue:from-brainstorm', args: `SESSION="${extractBrainstormSession(analysis)}" --auto` },
{ cmd: 'issue:queue', args: '' },
{ cmd: 'issue:execute', args: '--queue auto' }
'brainstorm-to-plan': [
{ cmd: 'workflow:brainstorm-with-file', args: `"${analysis.goal}"` },
{ cmd: 'workflow-lite-plan', args: '' } // auto receives brainstorm artifacts (brainstorm.md)
],
'debug-with-file': [
@@ -300,32 +274,22 @@ function buildCommandChain(workflow, analysis) {
// Note: Self-contained with hypothesis-driven iteration and Gemini validation
],
'analyze-with-file': [
{ cmd: 'workflow:analyze-with-file', args: `"${analysis.goal}"` }
// Note: Self-contained with multi-round discussion and CLI exploration
// Brainstorm-to-Issue workflow (bridge from brainstorm to issue execution)
'brainstorm-to-issue': [
{ cmd: 'issue:from-brainstorm', args: `SESSION="${extractBrainstormSession(analysis)}" --auto` },
{ cmd: 'issue:queue', args: '' },
{ cmd: 'issue:execute', args: '--queue auto' }
],
// Universal Plan+Execute
'collaborative-plan': [
{ cmd: 'workflow:collaborative-plan-with-file', args: `"${analysis.goal}"` },
{ cmd: 'workflow:unified-execute-with-file', args: '' }
// Note: Plan Note → unified execution engine
],
'req-plan': [
{ cmd: 'workflow:req-plan-with-file', args: `"${analysis.goal}"` },
'roadmap': [
{ cmd: 'workflow:roadmap-with-file', args: `"${analysis.goal}"` },
{ cmd: 'team-planex', args: '' }
// Note: Requirement decomposition → issue creation → team-planex wave execution
],
// Cycle workflows (self-iterating with reflection)
'integration-test-cycle': [
{ cmd: 'workflow:integration-test-cycle', args: `"${analysis.goal}"` }
// Note: Self-contained explore → test → fix cycle with reflection
],
'refactor-cycle': [
{ cmd: 'workflow:refactor-cycle', args: `"${analysis.goal}"` }
// Note: Self-contained tech debt discovery → refactor → validate
],
// Level 3 - Standard
@@ -338,11 +302,25 @@ function buildCommandChain(workflow, analysis) {
])
],
// Level 4 - Spec-Driven Full Pipeline
'spec-driven': [
{ cmd: 'spec-generator', args: `"${analysis.goal}"` },
{ cmd: 'workflow-plan', args: '' },
{ cmd: 'workflow-execute', args: '' },
...(analysis.constraints?.includes('skip-tests') ? [] : [
{ cmd: 'workflow-test-fix', args: '' }
])
],
'tdd': [
{ cmd: 'workflow-tdd', args: `"${analysis.goal}"` },
{ cmd: 'workflow-tdd-plan', args: `"${analysis.goal}"` },
{ cmd: 'workflow-execute', args: '' }
],
'test-gen': [
{ cmd: 'workflow-test-fix', args: `"${analysis.goal}"` }
],
'test-fix-gen': [
{ cmd: 'workflow-test-fix', args: `"${analysis.goal}"` }
],
@@ -360,7 +338,7 @@ function buildCommandChain(workflow, analysis) {
{ cmd: 'workflow-execute', args: '' }
],
// Level 4 - Full
// Level 4 - Full Exploration
'full': [
{ cmd: 'brainstorm', args: `"${analysis.goal}"` },
{ cmd: 'workflow-plan', args: '' },
@@ -370,6 +348,15 @@ function buildCommandChain(workflow, analysis) {
])
],
// Cycle workflows (self-iterating with reflection)
'integration-test-cycle': [
{ cmd: 'workflow:integration-test-cycle', args: `"${analysis.goal}"` }
],
'refactor-cycle': [
{ cmd: 'workflow:refactor-cycle', args: `"${analysis.goal}"` }
],
// Issue Workflow
'issue': [
{ cmd: 'issue:discover', args: '' },
@@ -378,37 +365,9 @@ function buildCommandChain(workflow, analysis) {
{ cmd: 'issue:execute', args: '' }
],
// Team Workflows (kept: team-planex only)
'team-planex': [
{ cmd: 'team-planex', args: `"${analysis.goal}"` }
],
]
};
Phase 1: Analyze Intent
+-- If clarity < 2 -> Phase 1.5: Clarify Requirements
|
Phase 2: Select Workflow & Build Chain
|-- Map task_type -> Level (2/3/4/Issue/Team)
|-- Select flow based on complexity
+-- Build command chain (Skill-based)
|
| "Add API endpoint" | feature (low) | 2 | workflow-lite-plan → workflow-test-fix |
| "Fix login timeout" | bugfix | 2 | workflow-lite-plan → workflow-test-fix |
| "Use issue workflow" | issue-transition | 2.5 | workflow-lite-plan(plan-only) → convert-to-plan → queue → execute |
| "协作分析: 认证架构" | analyze-file | 3 | analyze-with-file → workflow-lite-plan |
| "深度调试 WebSocket" | debug-file | 3 | workflow:debug-with-file |
| "头脑风暴: 通知系统" | brainstorm | 4 | brainstorm-with-file → workflow-lite-plan |
| "从头脑风暴创建 issue" | brainstorm-to-issue | 4 | issue:from-brainstorm → issue:queue → issue:execute |
| "协作规划: 实时通知系统" | collaborative-plan | 3 | collaborative-plan-with-file → unified-execute-with-file |
| "需求路线图: OAuth + 2FA" | roadmap | 4 | roadmap-with-file → team-planex |
| "specification: 用户系统" | spec-driven | 4 | spec-generator → workflow-plan → workflow-execute → workflow-test-fix |
| "集成测试: 支付流程" | integration-test | 3 | workflow:integration-test-cycle |
| "重构 auth 模块" | refactor | 3 | workflow:refactor-cycle |
| "multi-cli plan: API设计" | multi-cli-plan | 3 | workflow-multi-cli-plan → workflow-test-fix |
| "OAuth2 system" | feature (high) | 3 | workflow-plan → workflow-execute → review-cycle → workflow-test-fix |
| "Implement with TDD" | tdd | 3 | workflow-tdd-plan → workflow-execute |
| "Uncertain: real-time" | exploration | 4 | brainstorm → workflow-plan → workflow-execute → workflow-test-fix |
| "team planex: 用户系统" | team-planex | Team | team-planex |
---
2. **Intent-Driven** - Auto-select workflow based on task intent
3. **Skill-Based Chaining** - Build command chain by composing independent Skills
4. **Self-Contained Skills** - Each Skill handles its complete pipeline internally and is the natural minimal execution unit
5. **Auto Chain** - With-File artifacts are automatically passed to the downstream Skill (e.g., analyze → lite-plan)
6. **Progressive Clarification** - Low clarity triggers clarification phase
7. **TODO Tracking** - Use CCW prefix to isolate workflow todos
8. **Error Handling** - Retry/skip/abort at Skill level
9. **User Control** - Optional user confirmation at each phase
---
"complexity": "medium"
},
"command_chain": [
{ "index": 0, "command": "workflow-lite-plan", "status": "completed" },
{ "index": 1, "command": "workflow-test-fix", "status": "running" }
],
"current_index": 1
}
```
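A minimal sketch of how a runner could advance through this chain state (the function name and mutation style are assumptions, not the actual implementation):

```javascript
// Sketch: advance the chain state after the current command completes.
// Marks the current command completed, then starts the next one or
// finishes the whole chain.
function advanceChain(state) {
  state.command_chain[state.current_index].status = 'completed';
  if (state.current_index + 1 < state.command_chain.length) {
    state.current_index += 1;
    state.command_chain[state.current_index].status = 'running';
  } else {
    state.status = 'completed'; // all commands finished
  }
  return state;
}

const state = {
  status: 'running',
  command_chain: [
    { index: 0, command: 'workflow-lite-plan', status: 'running' },
    { index: 1, command: 'workflow-test-fix', status: 'pending' }
  ],
  current_index: 1 - 1 // start at the first command
};
advanceChain(state);
console.log(state.current_index, state.command_chain[1].status); // 1 running
```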
**Status Values**: `running` | `completed` | `failed` | `error`
**Command Status Values**: `pending` | `running` | `completed` | `failed`
---
## With-File Workflows
**With-File workflows** provide documented exploration with multi-CLI collaboration. They generate comprehensive session artifacts and can auto-chain to lite-plan for implementation.
| Workflow | Purpose | Auto Chain | Output Folder |
|----------|---------|------------|---------------|
| **brainstorm-with-file** | Multi-perspective ideation | → workflow-lite-plan (auto) | `.workflow/.brainstorm/` |
| **debug-with-file** | Hypothesis-driven debugging | Standalone (self-contained) | `.workflow/.debug/` |
| **analyze-with-file** | Collaborative analysis | → workflow-lite-plan (auto) | `.workflow/.analysis/` |
| **collaborative-plan-with-file** | Multi-agent collaborative planning | → unified-execute-with-file | `.workflow/.planning/` |
| **roadmap-with-file** | Strategic requirement roadmap | → team-planex | `.workflow/.planning/` |
**Auto Chain Mechanism**: When `analyze-with-file` or `brainstorm-with-file` completes, its artifacts (discussion.md / brainstorm.md) are automatically passed to `workflow-lite-plan` as context input. No user intervention needed.
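The hand-off above can be sketched as a small lookup; the table, helper name, and session-directory layout here are illustrative assumptions, not the actual runner code:

```javascript
// Hypothetical sketch of the auto-chain hand-off.
// When a With-File workflow finishes, its artifact becomes the
// context input of the next skill; standalone workflows chain to nothing.
const AUTO_CHAIN = {
  'analyze-with-file': { next: 'workflow-lite-plan', artifact: 'discussion.md' },
  'brainstorm-with-file': { next: 'workflow-lite-plan', artifact: 'brainstorm.md' },
  'debug-with-file': null // standalone (self-contained)
};

function nextStep(skill, sessionDir) {
  const chain = AUTO_CHAIN[skill];
  if (!chain) return null; // nothing to chain
  return { cmd: chain.next, context: `${sessionDir}/${chain.artifact}` };
}

console.log(nextStep('analyze-with-file', '.workflow/.analysis/S1'));
// { cmd: 'workflow-lite-plan', context: '.workflow/.analysis/S1/discussion.md' }
console.log(nextStep('debug-with-file', '.workflow/.debug/S2')); // null
```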
**Detection Keywords**:
- **brainstorm**: 头脑风暴, 创意, 发散思维, multi-perspective, compare perspectives
- **debug-file**: 深度调试, 假设验证, systematic debug, hypothesis debug
- **analyze-file**: 协作分析, 深度理解, collaborative analysis, explore concept
- **collaborative-plan**: 协作规划, 多人规划, collaborative plan, multi-agent plan, Plan Note
- **roadmap**: roadmap, 需求规划, 需求拆解, requirement plan, progressive plan
- **spec-driven**: specification, PRD, 产品需求, 产品文档
**Characteristics**:
1. **Self-Contained**: Each workflow handles its own iteration loop
2. **Documented Process**: Creates evolving documents (brainstorm.md, understanding.md, discussion.md)
3. **Multi-CLI**: Uses Gemini/Codex/Claude for different perspectives
4. **Built-in Post-Completion**: Offers follow-up options (create plan, issue, etc.)
---
## Team Workflows
**Team workflows** provide multi-role collaboration for complex tasks. Only **team-planex** is kept: a planner + executor wave pipeline that plans and executes in parallel, self-contained with internal role routing via `--role=xxx`.
**Detection Keywords**:
- **team-planex**: team planex, 团队规划执行, wave pipeline
**Characteristics**:
1. **Self-Contained**: The team skill handles internal role coordination
2. **Role-Based Routing**: All roles invoke the same skill with `--role=xxx`
3. **Shared Memory**: Roles communicate via shared-memory.json and message bus
4. **Auto Mode Support**: Supports `-y`/`--yes` to skip confirmations
---
## Cycle Workflows
**Cycle workflows** provide self-iterating development cycles with reflection-driven strategy adjustment.
| Workflow | Pipeline | Key Features | Output Folder |
|----------|----------|--------------|---------------|
| **integration-test-cycle** | explore → test dev → test-fix → reflection | Self-iterating with max-iterations, auto continue | `.workflow/.test-cycle/` |
| **refactor-cycle** | discover → prioritize → execute → validate | Multi-dimensional analysis, regression validation | `.workflow/.refactor-cycle/` |
**Detection Keywords**:
- **integration-test**: integration test, 集成测试, 端到端测试, e2e test
- **refactor**: refactor, 重构, tech debt, 技术债务
**Characteristics**:
1. **Self-Iterating**: Autonomous test-fix loops until quality gate passes
2. **Reflection-Driven**: Strategy adjusts based on previous iteration results
3. **Continue Support**: `--continue` flag to resume interrupted sessions
4. **Auto Mode Support**: `-y`/`--yes` for fully autonomous execution
---
## Utility Commands
| Command | Purpose |
|---------|---------|
| `workflow:unified-execute-with-file` | Universal execution engine - consumes plan output from collaborative-plan, roadmap, brainstorm |
| `workflow:clean` | Intelligent code cleanup - mainline detection, stale artifact removal |
| `workflow:init` | Initialize `.workflow/project-tech.json` with project analysis |
| `workflow:init-guidelines` | Interactive wizard to fill `specs/*.md` |
| `workflow:status` | Generate on-demand views for project overview and workflow tasks |
---
/ccw -y "Add user authentication"
/ccw --yes "Fix memory leak in WebSocket handler"
# Bug fix
/ccw "Fix memory leak in WebSocket handler"
# Multi-CLI collaborative planning
/ccw "multi-cli plan: 支付网关API设计" # → workflow-multi-cli-plan → workflow-test-fix
# With-File workflows → auto chain to lite-plan
/ccw "协作分析: 理解现有认证架构的设计决策" # → analyze-with-file → workflow-lite-plan
/ccw "头脑风暴: 用户通知系统重新设计" # → brainstorm-with-file → workflow-lite-plan
/ccw "深度调试: 系统随机崩溃问题" # → debug-with-file (standalone)
/ccw "从头脑风暴 BS-通知系统-2025-01-28 创建 issue" # → brainstorm-to-issue (bridge)
# Spec-driven full pipeline
/ccw "specification: 用户认证系统产品文档" # → spec-generator → workflow-plan → workflow-execute → workflow-test-fix
# Collaborative planning & requirement workflows
/ccw "协作规划: 实时通知系统架构" # → collaborative-plan-with-file → unified-execute
/ccw "需求路线图: 用户认证 OAuth + 2FA" # → roadmap-with-file → team-planex
/ccw "roadmap: 数据导出功能路线图" # → roadmap-with-file → team-planex
# Team workflows (kept: team-planex)
/ccw "team planex: 用户认证系统" # → team-planex (planner + executor wave pipeline)
# Cycle workflows (self-iterating)
/ccw "集成测试: 支付流程端到端" # → integration-test-cycle
/ccw "重构 auth 模块的技术债务" # → refactor-cycle
/ccw "tech debt: 清理支付服务" # → refactor-cycle
# Utility commands (invoked directly, not auto-routed)
# /workflow:unified-execute-with-file # Universal execution engine (consumes plan output)
# /workflow:clean # Intelligent code cleanup
# /workflow:init # Initialize project state
# /workflow:init-guidelines # Interactively fill project guidelines
# /workflow:status # Project overview and workflow status
```

---

Creates tool-specific configuration directories:
- `.gemini/settings.json`:
```json
{
"contextfilename": "CLAUDE.md"
}
```
- `.qwen/settings.json`:
```json
{
"contextfilename": "CLAUDE.md"
}
```

---

async function selectCommand(category) {
const commandOptions = {
'Planning': [
{ label: "/workflow-lite-plan", description: "Lightweight merged-mode planning" },
{ label: "/workflow-plan", description: "Full planning with architecture design" },
{ label: "/workflow-multi-cli-plan", description: "Multi-CLI collaborative planning (Gemini+Codex+Claude)" },
{ label: "/workflow-tdd-plan", description: "TDD workflow planning with Red-Green-Refactor" },
{ label: "/workflow:quick-plan-with-file", description: "Rapid planning with minimal docs" },
{ label: "/workflow-plan-verify", description: "Verify plan against requirements" },
{ label: "/workflow:replan", description: "Update plan and execute changes" }
],
'Execution': [
{ label: "/workflow:lite-execute", description: "Execute from in-memory plan" },
{ label: "/workflow-execute", description: "Execute from planning session" },
{ label: "/workflow:unified-execute-with-file", description: "Universal execution engine" }
],
'Testing': [
{ label: "/workflow-test-fix", description: "Generate test tasks for specific issues" },
{ label: "/workflow-test-fix", description: "Execute iterative test-fix cycle (>=95% pass)" },
{ label: "/workflow:test-gen", description: "Generate comprehensive test suite" },
{ label: "/workflow-tdd-verify", description: "Verify TDD workflow compliance" }
],
'Review': [
{ label: "/workflow:review-session-cycle", description: "Session-based multi-dimensional code review" },
{ label: "/workflow:review", description: "Post-implementation review" }
],
'Bug Fix': [
{ label: "/workflow-lite-plan", description: "Lightweight bug diagnosis and fix (with --bugfix flag)" },
{ label: "/workflow:debug-with-file", description: "Hypothesis-driven debugging with documentation" }
],
'Brainstorm': [
"description": "Quick implementation with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow-lite-plan", "args": "\"{{goal}}\"", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight implementation plan" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "quick-implementation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute implementation based on plan" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle until pass rate >= 95%" }
]
}
```
"description": "Full workflow with verification, review, and testing",
"level": 3,
"steps": [
{ "cmd": "/workflow-plan", "args": "\"{{goal}}\"", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed implementation plan" },
{ "cmd": "/workflow-plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan against requirements" },
{ "cmd": "/workflow-execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test tasks" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
"description": "Bug diagnosis and fix with testing",
"level": 2,
"steps": [
{ "cmd": "/workflow-lite-plan", "args": "--bugfix \"{{goal}}\"", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Diagnose and plan bug fix" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "bug-fix", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute bug fix" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate regression tests" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fix with tests" }
]
}
```
"description": "Urgent production bug fix (no tests)",
"level": 2,
"steps": [
{ "cmd": "/workflow-lite-plan", "args": "--hotfix \"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Emergency hotfix mode" }
]
}
```
"description": "Test-driven development with Red-Green-Refactor",
"level": 3,
"steps": [
{ "cmd": "/workflow-tdd-plan", "args": "\"{{goal}}\"", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create TDD task chain" },
{ "cmd": "/workflow-execute", "unit": "tdd-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute TDD cycle" },
{ "cmd": "/workflow-tdd-verify", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify TDD compliance" }
]
}
```
"steps": [
{ "cmd": "/workflow:review-session-cycle", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Multi-dimensional code review" },
{ "cmd": "/workflow:review-cycle-fix", "unit": "code-review", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Fix review findings" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests for fixes" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Verify fixes pass tests" }
]
}
```
"description": "Fix failing tests",
"level": 3,
"steps": [
{ "cmd": "/workflow-test-fix", "args": "\"{{goal}}\"", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate test fix tasks" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test-fix cycle" }
]
}
```
"description": "Bridge lightweight planning to issue workflow",
"level": 2,
"steps": [
{ "cmd": "/workflow-lite-plan", "args": "\"{{goal}}\"", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create lightweight plan" },
{ "cmd": "/issue:convert-to-plan", "args": "--latest-lite-plan -y", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Convert to issue plan" },
{ "cmd": "/issue:queue", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Form execution queue" },
{ "cmd": "/issue:execute", "args": "--queue auto", "unit": "rapid-to-issue", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute issue queue" }
"level": 4,
"steps": [
{ "cmd": "/brainstorm", "args": "\"{{goal}}\"", "unit": "standalone", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Unified brainstorming with multi-perspective exploration" },
{ "cmd": "/workflow-plan", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Create detailed plan from brainstorm" },
{ "cmd": "/workflow-plan-verify", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Verify plan quality" },
{ "cmd": "/workflow-execute", "unit": "verified-planning-execution", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute implementation" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate comprehensive tests" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
"description": "Multi-CLI collaborative planning with cross-verification",
"level": 3,
"steps": [
{ "cmd": "/workflow-multi-cli-plan", "args": "\"{{goal}}\"", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Gemini+Codex+Claude collaborative planning" },
{ "cmd": "/workflow:lite-execute", "args": "--in-memory", "unit": "multi-cli-planning", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Execute converged plan" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "mainprocess" }, "contextHint": "Generate tests" },
{ "cmd": "/workflow-test-fix", "unit": "test-validation", "execution": { "type": "slash-command", "mode": "async" }, "contextHint": "Execute test cycle" }
]
}
```
→ Level: 3 (Standard)
→ Steps: Customize
→ Step 1: /brainstorm (standalone, mainprocess)
→ Step 2: /workflow-plan (verified-planning-execution, mainprocess)
→ Step 3: /workflow-plan-verify (verified-planning-execution, mainprocess)
→ Step 4: /workflow-execute (verified-planning-execution, async)
→ Step 5: /workflow:review-session-cycle (code-review, mainprocess)
→ Step 6: /workflow:review-cycle-fix (code-review, mainprocess)
→ Done

---

---
name: add
description: Add IDAW tasks - manual creation or import from ccw issue
argument-hint: "[-y|--yes] [--from-issue <id>[,<id>,...]] \"description\" [--type <task_type>] [--priority <1-5>]"
allowed-tools: AskUserQuestion(*), Read(*), Bash(*), Write(*), Glob(*)
---
# IDAW Add Command (/idaw:add)
## Auto Mode
When `--yes` or `-y`: Skip clarification questions, create task with inferred details.
## IDAW Task Schema
```json
{
"id": "IDAW-001",
"title": "string",
"description": "string",
"status": "pending",
"priority": 2,
"task_type": null,
"skill_chain": null,
"context": {
"affected_files": [],
"acceptance_criteria": [],
"constraints": [],
"references": []
},
"source": {
"type": "manual|import-issue",
"issue_id": null,
"issue_snapshot": null
},
"execution": {
"session_id": null,
"started_at": null,
"completed_at": null,
"skill_results": [],
"git_commit": null,
"error": null
},
"created_at": "ISO",
"updated_at": "ISO"
}
```
**Valid task_type values**: `bugfix|bugfix-hotfix|feature|feature-complex|refactor|tdd|test|test-fix|review|docs`
## Implementation
### Phase 1: Parse Arguments
```javascript
const args = $ARGUMENTS;
const autoYes = /(-y|--yes)\b/.test(args);
const fromIssue = args.match(/--from-issue\s+([\w,-]+)/)?.[1];
const typeFlag = args.match(/--type\s+([\w-]+)/)?.[1];
const priorityFlag = args.match(/--priority\s+(\d)/)?.[1];
// Extract description: content inside quotes (preferred), or fallback to stripping flags
const quotedMatch = args.match(/(?:^|\s)["']([^"']+)["']/);
let description = quotedMatch // `let`, not `const`: Phase 3B may reassign via AskUserQuestion
? quotedMatch[1].trim()
: args.replace(/(-y|--yes|--from-issue\s+[\w,-]+|--type\s+[\w-]+|--priority\s+\d)/g, '').trim();
```
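The flag regexes above can be sanity-checked in plain Node, outside the command runtime (the sample invocation string is illustrative):

```javascript
// Standalone check of the Phase 1 flag-parsing regexes (no Claude tools needed).
const args = '--from-issue ISS-20260128-001,ISS-20260128-002 --priority 2 "Fix login timeout"';

const autoYes = /(-y|--yes)\b/.test(args);
const fromIssue = args.match(/--from-issue\s+([\w,-]+)/)?.[1];
const typeFlag = args.match(/--type\s+([\w-]+)/)?.[1];
const priorityFlag = args.match(/--priority\s+(\d)/)?.[1];
const quotedMatch = args.match(/(?:^|\s)["']([^"']+)["']/);
const description = quotedMatch
  ? quotedMatch[1].trim()
  : args.replace(/(-y|--yes|--from-issue\s+[\w,-]+|--type\s+[\w-]+|--priority\s+\d)/g, '').trim();

console.log({ autoYes, fromIssue, priorityFlag, description });
// autoYes: false, fromIssue: 'ISS-20260128-001,ISS-20260128-002',
// priorityFlag: '2', description: 'Fix login timeout'
```

Note the quoted description wins over the flag-stripping fallback, so flags may appear before or after it.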
### Phase 2: Route — Import or Manual
```
--from-issue present?
├─ YES → Import Mode (Phase 3A)
└─ NO → Manual Mode (Phase 3B)
```
### Phase 3A: Import Mode (from ccw issue)
```javascript
const issueIds = fromIssue.split(',');
// Fetch all issues once (outside loop)
let issues = [];
try {
const issueJson = Bash(`ccw issue list --json`);
issues = JSON.parse(issueJson).issues || [];
} catch (e) {
console.log(`Error fetching CCW issues: ${e.message || e}`);
console.log('Ensure ccw is installed and issues exist. Use /issue:new to create issues first.');
return;
}
for (const issueId of issueIds) {
// 1. Find issue data
const issue = issues.find(i => i.id === issueId.trim());
if (!issue) {
console.log(`Warning: Issue ${issueId} not found, skipping`);
continue;
}
// 2. Check duplicate (same issue_id already imported)
const existing = Glob('.workflow/.idaw/tasks/IDAW-*.json');
let alreadyImported = false;
for (const f of existing) {
const data = JSON.parse(Read(f));
if (data.source?.issue_id === issueId.trim()) {
console.log(`Warning: Issue ${issueId} already imported as ${data.id}, skipping`);
alreadyImported = true;
break;
}
}
if (alreadyImported) continue; // skip to next issue (a bare `continue` inside the inner loop would not)
// 3. Generate next IDAW ID
const nextId = generateNextId();
// 4. Map issue → IDAW task
const task = {
id: nextId,
title: issue.title,
description: issue.context || issue.title,
status: 'pending',
priority: parseInt(priorityFlag) || issue.priority || 3,
task_type: typeFlag || inferTaskType(issue.title, issue.context || ''),
skill_chain: null,
context: {
affected_files: issue.affected_components || [],
acceptance_criteria: [],
constraints: [],
references: issue.source_url ? [issue.source_url] : []
},
source: {
type: 'import-issue',
issue_id: issue.id,
issue_snapshot: {
id: issue.id,
title: issue.title,
status: issue.status,
context: issue.context,
priority: issue.priority,
created_at: issue.created_at
}
},
execution: {
session_id: null,
started_at: null,
completed_at: null,
skill_results: [],
git_commit: null,
error: null
},
created_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
// 5. Write task file
Bash('mkdir -p .workflow/.idaw/tasks');
Write(`.workflow/.idaw/tasks/${nextId}.json`, JSON.stringify(task, null, 2));
console.log(`Created ${nextId} from issue ${issueId}: ${issue.title}`);
}
```
### Phase 3B: Manual Mode
```javascript
// 1. Validate description
if (!description && !autoYes) {
const answer = AskUserQuestion({
questions: [{
question: 'Please provide a task description:',
header: 'Task',
multiSelect: false,
options: [
{ label: 'Provide description', description: 'What needs to be done?' }
]
}]
});
// Use custom text from "Other"
description = answer.customText || '';
}
if (!description) {
console.log('Error: No description provided. Usage: /idaw:add "task description"');
return;
}
// 2. Generate next IDAW ID
const nextId = generateNextId();
// 3. Build title from first sentence
const title = description.split(/[.\n]/)[0].substring(0, 80).trim();
// 4. Determine task_type
const taskType = typeFlag || null; // null → inferred at run time
// 5. Create task
const task = {
id: nextId,
title: title,
description: description,
status: 'pending',
priority: parseInt(priorityFlag) || 3,
task_type: taskType,
skill_chain: null,
context: {
affected_files: [],
acceptance_criteria: [],
constraints: [],
references: []
},
source: {
type: 'manual',
issue_id: null,
issue_snapshot: null
},
execution: {
session_id: null,
started_at: null,
completed_at: null,
skill_results: [],
git_commit: null,
error: null
},
created_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
Bash('mkdir -p .workflow/.idaw/tasks');
Write(`.workflow/.idaw/tasks/${nextId}.json`, JSON.stringify(task, null, 2));
console.log(`Created ${nextId}: ${title}`);
```
## Helper Functions
### ID Generation
```javascript
function generateNextId() {
const files = Glob('.workflow/.idaw/tasks/IDAW-*.json') || [];
if (files.length === 0) return 'IDAW-001';
const maxNum = files
.map(f => parseInt(f.match(/IDAW-(\d+)/)?.[1] || '0'))
.reduce((max, n) => Math.max(max, n), 0);
return `IDAW-${String(maxNum + 1).padStart(3, '0')}`;
}
```
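A pure variant of `generateNextId` (the file list passed in instead of coming from `Glob`; same regex and padding rules assumed) makes the behavior easy to verify:

```javascript
// Pure, testable variant of generateNextId: next ID is max existing number + 1.
function nextIdFrom(files) {
  if (files.length === 0) return 'IDAW-001';
  const maxNum = files
    .map(f => parseInt(f.match(/IDAW-(\d+)/)?.[1] || '0', 10))
    .reduce((max, n) => Math.max(max, n), 0);
  return `IDAW-${String(maxNum + 1).padStart(3, '0')}`;
}

console.log(nextIdFrom([]));                                      // IDAW-001
console.log(nextIdFrom(['tasks/IDAW-001.json', 'tasks/IDAW-007.json'])); // IDAW-008
```

Because the next ID is derived from the maximum existing number rather than the file count, gaps from deleted tasks never cause ID collisions.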
### Task Type Inference (called during issue import above; otherwise deferred to run time when task_type is null)
```javascript
function inferTaskType(title, description) {
const text = `${title} ${description}`.toLowerCase();
if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
if (/tdd|test-driven|test first/.test(text)) return 'tdd';
if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
if (/generate test|写测试|add test/.test(text)) return 'test';
if (/review|code review/.test(text)) return 'review';
if (/docs|documentation|readme/.test(text)) return 'docs';
if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
return 'feature';
}
```
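The keyword checks run top-down, so rule order matters: "Fix failing test" hits the `test-fix` rule before the generic `bugfix` rule ever sees the word "fix". A standalone copy of the function above, exercised on sample titles:

```javascript
// Verbatim copy of inferTaskType above, run on representative titles.
function inferTaskType(title, description) {
  const text = `${title} ${description}`.toLowerCase();
  if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
  if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
  if (/tdd|test-driven|test first/.test(text)) return 'tdd';
  if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
  if (/generate test|写测试|add test/.test(text)) return 'test';
  if (/review|code review/.test(text)) return 'review';
  if (/docs|documentation|readme/.test(text)) return 'docs';
  if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
  if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
  return 'feature';
}

console.log(inferTaskType('Urgent production bug in checkout', '')); // bugfix-hotfix
console.log(inferTaskType('Fix failing test in CI', ''));            // test-fix
console.log(inferTaskType('Add rate limiting to API', ''));          // feature
```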
## Examples
```bash
# Manual creation
/idaw:add "Fix login timeout bug" --type bugfix --priority 2
/idaw:add "Add rate limiting to API endpoints" --priority 1
/idaw:add "Refactor auth module to use strategy pattern"
# Import from ccw issue
/idaw:add --from-issue ISS-20260128-001
/idaw:add --from-issue ISS-20260128-001,ISS-20260128-002 --priority 1
# Auto mode (skip clarification)
/idaw:add -y "Quick fix for typo in header"
```
## Output
```
Created IDAW-001: Fix login timeout bug
Type: bugfix | Priority: 2 | Source: manual
→ Next: /idaw:run or /idaw:status
```


@@ -0,0 +1,442 @@
---
name: resume
description: Resume interrupted IDAW session from last checkpoint
argument-hint: "[-y|--yes] [session-id]"
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Write(*), Bash(*), Glob(*)
---
# IDAW Resume Command (/idaw:resume)
## Auto Mode
When `--yes` or `-y`: Auto-skip interrupted task, continue with remaining.
## Skill Chain Mapping
```javascript
const SKILL_CHAIN_MAP = {
'bugfix': ['workflow-lite-plan', 'workflow-test-fix'],
'bugfix-hotfix': ['workflow-lite-plan'],
'feature': ['workflow-lite-plan', 'workflow-test-fix'],
'feature-complex': ['workflow-plan', 'workflow-execute', 'workflow-test-fix'],
'refactor': ['workflow:refactor-cycle'],
'tdd': ['workflow-tdd-plan', 'workflow-execute'],
'test': ['workflow-test-fix'],
'test-fix': ['workflow-test-fix'],
'review': ['review-cycle'],
'docs': ['workflow-lite-plan']
};
```
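This map is consumed later with a two-level fallback (`task.skill_chain || SKILL_CHAIN_MAP[type] || SKILL_CHAIN_MAP['feature']`). A condensed sketch using a hypothetical `resolveChain` helper and a two-entry excerpt of the map:

```javascript
// Excerpt of SKILL_CHAIN_MAP above; resolveChain is an illustrative helper
// condensing the inline fallback expression used in the main loop.
const SKILL_CHAIN_MAP = {
  'bugfix': ['workflow-lite-plan', 'workflow-test-fix'],
  'feature': ['workflow-lite-plan', 'workflow-test-fix']
};

function resolveChain(task) {
  return task.skill_chain || SKILL_CHAIN_MAP[task.task_type] || SKILL_CHAIN_MAP['feature'];
}

console.log(resolveChain({ task_type: 'bugfix' }));           // mapped chain
console.log(resolveChain({ task_type: 'unknown-type' }));     // 'feature' fallback
console.log(resolveChain({ skill_chain: ['review-cycle'] })); // explicit override wins
```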
## Task Type Inference
```javascript
function inferTaskType(title, description) {
const text = `${title} ${description}`.toLowerCase();
if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
if (/tdd|test-driven|test first/.test(text)) return 'tdd';
if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
if (/generate test|写测试|add test/.test(text)) return 'test';
if (/review|code review/.test(text)) return 'review';
if (/docs|documentation|readme/.test(text)) return 'docs';
if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
return 'feature';
}
```
## Implementation
### Phase 1: Find Resumable Session
```javascript
const args = $ARGUMENTS;
const autoYes = /(-y|--yes)\b/.test(args);
const targetSessionId = args.replace(/(-y|--yes)\b/g, '').trim();
let session = null;
let sessionDir = null;
if (targetSessionId) {
// Load specific session
sessionDir = `.workflow/.idaw/sessions/${targetSessionId}`;
try {
session = JSON.parse(Read(`${sessionDir}/session.json`));
} catch {
console.log(`Session "${targetSessionId}" not found.`);
console.log('Use /idaw:status to list sessions, or /idaw:run to start a new one.');
return;
}
} else {
// Find most recent running session
const sessionFiles = Glob('.workflow/.idaw/sessions/IDA-*/session.json') || [];
for (const f of sessionFiles) {
try {
const s = JSON.parse(Read(f));
if (s.status === 'running') {
session = s;
sessionDir = f.replace(/[\\/]session\.json$/, ''); // strip trailing /session.json or \session.json
break;
}
} catch {
// Skip malformed
}
}
if (!session) {
console.log('No running sessions found to resume.');
console.log('Use /idaw:run to start a new execution.');
return;
}
}
console.log(`Resuming session: ${session.session_id}`);
```
### Phase 2: Handle Interrupted Task
```javascript
// Find the task that was in_progress when interrupted
let currentTaskId = session.current_task;
let currentTask = null;
if (currentTaskId) {
try {
currentTask = JSON.parse(Read(`.workflow/.idaw/tasks/${currentTaskId}.json`));
} catch {
console.log(`Warning: Could not read task ${currentTaskId}`);
currentTaskId = null;
}
}
if (currentTask && currentTask.status === 'in_progress') {
if (autoYes) {
// Auto: skip interrupted task
currentTask.status = 'skipped';
currentTask.execution.error = 'Skipped on resume (auto mode)';
currentTask.execution.completed_at = new Date().toISOString();
currentTask.updated_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${currentTaskId}.json`, JSON.stringify(currentTask, null, 2));
session.skipped.push(currentTaskId);
console.log(`Skipped interrupted task: ${currentTaskId}`);
} else {
const answer = AskUserQuestion({
questions: [{
question: `Task ${currentTaskId} was interrupted: "${currentTask.title}". How to proceed?`,
header: 'Resume',
multiSelect: false,
options: [
{ label: 'Retry', description: 'Reset to pending, re-execute from beginning' },
{ label: 'Skip', description: 'Mark as skipped, move to next task' }
]
}]
});
if (answer.answers?.Resume === 'Skip') {
currentTask.status = 'skipped';
currentTask.execution.error = 'Skipped on resume (user choice)';
currentTask.execution.completed_at = new Date().toISOString();
currentTask.updated_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${currentTaskId}.json`, JSON.stringify(currentTask, null, 2));
session.skipped.push(currentTaskId);
} else {
// Retry: reset to pending
currentTask.status = 'pending';
currentTask.execution.started_at = null;
currentTask.execution.completed_at = null;
currentTask.execution.skill_results = [];
currentTask.execution.error = null;
currentTask.updated_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${currentTaskId}.json`, JSON.stringify(currentTask, null, 2));
}
}
}
```
### Phase 3: Build Remaining Task Queue
```javascript
// Collect remaining tasks (pending, or the retried current task)
const allTaskIds = session.tasks;
const completedSet = new Set([...session.completed, ...session.failed, ...session.skipped]);
const remainingTasks = [];
for (const taskId of allTaskIds) {
if (completedSet.has(taskId)) continue;
try {
const task = JSON.parse(Read(`.workflow/.idaw/tasks/${taskId}.json`));
if (task.status === 'pending') {
remainingTasks.push(task);
}
} catch {
console.log(`Warning: Could not read task ${taskId}, skipping`);
}
}
if (remainingTasks.length === 0) {
console.log('No remaining tasks to execute. Session complete.');
session.status = 'completed';
session.current_task = null;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
return;
}
// Sort: priority ASC, then ID ASC
remainingTasks.sort((a, b) => {
if (a.priority !== b.priority) return a.priority - b.priority;
return a.id.localeCompare(b.id);
});
console.log(`Remaining tasks: ${remainingTasks.length}`);
// Append resume marker to progress.md
const progressFile = `${sessionDir}/progress.md`;
try {
const currentProgress = Read(progressFile);
Write(progressFile, currentProgress + `\n---\n**Resumed**: ${new Date().toISOString()}\n\n`);
} catch {
Write(progressFile, `# IDAW Progress — ${session.session_id}\n\n---\n**Resumed**: ${new Date().toISOString()}\n\n`);
}
// Update TodoWrite
TodoWrite({
todos: remainingTasks.map((t, i) => ({
content: `IDAW:[${i + 1}/${remainingTasks.length}] ${t.title}`,
status: i === 0 ? 'in_progress' : 'pending',
activeForm: `Executing ${t.title}`
}))
});
```
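The priority-then-ID ordering above (priority 1 executes first, ties broken by task ID) can be checked in isolation:

```javascript
// The remaining-task sort: priority ASC, then ID ASC as tiebreaker.
const remainingTasks = [
  { id: 'IDAW-003', priority: 1 },
  { id: 'IDAW-001', priority: 2 },
  { id: 'IDAW-002', priority: 1 }
];

remainingTasks.sort((a, b) => {
  if (a.priority !== b.priority) return a.priority - b.priority;
  return a.id.localeCompare(b.id);
});

console.log(remainingTasks.map(t => t.id)); // [ 'IDAW-002', 'IDAW-003', 'IDAW-001' ]
```

`localeCompare` works as a tiebreaker here because IDs are zero-padded to a fixed width; unpadded numeric IDs would sort lexicographically ('IDAW-10' before 'IDAW-2').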
### Phase 4-6: Execute Remaining (reuse run.md main loop)
Execute remaining tasks using the same Phase 4-6 logic from `/idaw:run`:
```javascript
// Phase 4: Main Loop — identical to run.md Phase 4
for (let taskIdx = 0; taskIdx < remainingTasks.length; taskIdx++) {
const task = remainingTasks[taskIdx];
// Resolve skill chain
const resolvedType = task.task_type || inferTaskType(task.title, task.description);
const chain = task.skill_chain || SKILL_CHAIN_MAP[resolvedType] || SKILL_CHAIN_MAP['feature'];
// Update task → in_progress
task.status = 'in_progress';
task.task_type = resolvedType;
task.execution.started_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));
session.current_task = task.id;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
console.log(`\n--- [${taskIdx + 1}/${remainingTasks.length}] ${task.id}: ${task.title} ---`);
console.log(`Chain: ${chain.join(' → ')}`);
// ━━━ Pre-Task CLI Context Analysis (for complex/bugfix tasks) ━━━
if (['bugfix', 'bugfix-hotfix', 'feature-complex'].includes(resolvedType)) {
console.log(` Pre-analysis: gathering context for ${resolvedType} task...`);
const affectedFiles = (task.context?.affected_files || []).join(', ');
const preAnalysisPrompt = `PURPOSE: Pre-analyze codebase context for IDAW task before execution.
TASK: • Understand current state of: ${affectedFiles || 'files related to: ' + task.title} • Identify dependencies and risk areas • Note existing patterns to follow
MODE: analysis
CONTEXT: @**/*
EXPECTED: Brief context summary (affected modules, dependencies, risk areas) in 3-5 bullet points
CONSTRAINTS: Keep concise | Focus on execution-relevant context`;
const preAnalysis = Bash(`ccw cli -p '${preAnalysisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis 2>&1 || echo "Pre-analysis skipped"`);
task.execution.skill_results.push({
skill: 'cli-pre-analysis',
status: 'completed',
context_summary: preAnalysis?.substring(0, 500),
timestamp: new Date().toISOString()
});
}
// Execute skill chain
let previousResult = null;
let taskFailed = false;
for (let skillIdx = 0; skillIdx < chain.length; skillIdx++) {
const skillName = chain[skillIdx];
const skillArgs = assembleSkillArgs(skillName, task, previousResult, autoYes, skillIdx === 0);
console.log(` [${skillIdx + 1}/${chain.length}] ${skillName}`);
try {
const result = Skill({ skill: skillName, args: skillArgs });
previousResult = result;
task.execution.skill_results.push({
skill: skillName,
status: 'completed',
timestamp: new Date().toISOString()
});
} catch (error) {
// ━━━ CLI-Assisted Error Recovery ━━━
console.log(` Diagnosing failure: ${skillName}...`);
const diagnosisPrompt = `PURPOSE: Diagnose why skill "${skillName}" failed during IDAW task execution.
TASK: • Analyze error: ${String(error).substring(0, 300)} • Check affected files: ${(task.context?.affected_files || []).join(', ') || 'unknown'} • Identify root cause • Suggest fix strategy
MODE: analysis
CONTEXT: @**/* | Memory: IDAW task ${task.id}: ${task.title}
EXPECTED: Root cause + actionable fix recommendation (1-2 sentences)
CONSTRAINTS: Focus on actionable diagnosis`;
const diagnosisResult = Bash(`ccw cli -p '${diagnosisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause 2>&1 || echo "CLI diagnosis unavailable"`);
task.execution.skill_results.push({
skill: `cli-diagnosis:${skillName}`,
status: 'completed',
diagnosis: diagnosisResult?.substring(0, 500),
timestamp: new Date().toISOString()
});
// Retry with diagnosis context
console.log(` Retry with diagnosis: ${skillName}`);
try {
const retryResult = Skill({ skill: skillName, args: skillArgs });
previousResult = retryResult;
task.execution.skill_results.push({
skill: skillName,
status: 'completed-retry-with-diagnosis',
timestamp: new Date().toISOString()
});
} catch (retryError) {
task.execution.skill_results.push({
skill: skillName,
status: 'failed',
error: String(retryError).substring(0, 200),
timestamp: new Date().toISOString()
});
if (autoYes) {
taskFailed = true;
break;
}
const answer = AskUserQuestion({
questions: [{
question: `${skillName} failed after CLI diagnosis + retry: ${String(retryError).substring(0, 100)}`,
header: 'Error',
multiSelect: false,
options: [
{ label: 'Skip task', description: 'Mark as failed, continue' },
{ label: 'Abort', description: 'Stop run' }
]
}]
});
if (answer.answers?.Error === 'Abort') {
task.status = 'failed';
task.execution.error = String(retryError).substring(0, 200);
Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));
session.failed.push(task.id);
session.status = 'failed';
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
return;
}
taskFailed = true;
break;
}
}
}
// Phase 5: Checkpoint
if (taskFailed) {
task.status = 'failed';
task.execution.error = 'Skill chain failed after retry';
task.execution.completed_at = new Date().toISOString();
session.failed.push(task.id);
} else {
// Git commit
const commitMsg = `feat(idaw): ${task.title} [${task.id}]`;
const diffCheck = Bash('git diff --stat HEAD 2>/dev/null || echo ""');
const untrackedCheck = Bash('git ls-files --others --exclude-standard 2>/dev/null || echo ""');
if (diffCheck?.trim() || untrackedCheck?.trim()) {
Bash('git add -A');
Bash(`git commit -m "$(cat <<'EOF'\n${commitMsg}\nEOF\n)"`);
const commitHash = Bash('git rev-parse --short HEAD 2>/dev/null')?.trim();
task.execution.git_commit = commitHash;
} else {
task.execution.git_commit = 'no-commit';
}
task.status = 'completed';
task.execution.completed_at = new Date().toISOString();
session.completed.push(task.id);
}
task.updated_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
// Append progress
const chain_str = chain.join(' → ');
const progressEntry = `## ${task.id}: ${task.title}\n- Status: ${task.status}\n- Chain: ${chain_str}\n- Commit: ${task.execution.git_commit || '-'}\n\n`;
const currentProgress = Read(`${sessionDir}/progress.md`);
Write(`${sessionDir}/progress.md`, currentProgress + progressEntry);
}
// Phase 6: Report
session.status = session.failed.length > 0 && session.completed.length === 0 ? 'failed' : 'completed';
session.current_task = null;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
const summary = `\n---\n## Summary (Resumed)\n- Completed: ${session.completed.length}\n- Failed: ${session.failed.length}\n- Skipped: ${session.skipped.length}\n`;
const finalProgress = Read(`${sessionDir}/progress.md`);
Write(`${sessionDir}/progress.md`, finalProgress + summary);
console.log('\n=== IDAW Resume Complete ===');
console.log(`Session: ${session.session_id}`);
console.log(`Completed: ${session.completed.length} | Failed: ${session.failed.length} | Skipped: ${session.skipped.length}`);
```
## Helper Functions
### assembleSkillArgs
```javascript
function assembleSkillArgs(skillName, task, previousResult, autoYes, isFirst) {
let args = '';
if (isFirst) {
// Sanitize for shell safety
const goal = `${task.title}\n${task.description}`
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/\$/g, '\\$')
.replace(/`/g, '\\`');
args = `"${goal}"`;
if (task.task_type === 'bugfix-hotfix') args += ' --hotfix';
} else if (previousResult?.session_id) {
args = `--session="${previousResult.session_id}"`;
}
if (autoYes && !args.includes('-y') && !args.includes('--yes')) {
args = args ? `${args} -y` : '-y';
}
return args;
}
```
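The sanitization above can be exercised directly; this is a verbatim copy of `assembleSkillArgs`, called with a first-skill task whose title and description contain characters that need escaping:

```javascript
// Verbatim copy of assembleSkillArgs above, run on a hotfix task.
function assembleSkillArgs(skillName, task, previousResult, autoYes, isFirst) {
  let args = '';
  if (isFirst) {
    // Sanitize for shell safety
    const goal = `${task.title}\n${task.description}`
      .replace(/\\/g, '\\\\')
      .replace(/"/g, '\\"')
      .replace(/\$/g, '\\$')
      .replace(/`/g, '\\`');
    args = `"${goal}"`;
    if (task.task_type === 'bugfix-hotfix') args += ' --hotfix';
  } else if (previousResult?.session_id) {
    args = `--session="${previousResult.session_id}"`;
  }
  if (autoYes && !args.includes('-y') && !args.includes('--yes')) {
    args = args ? `${args} -y` : '-y';
  }
  return args;
}

const task = { title: 'Fix "auth" bug', description: 'See $HOME notes', task_type: 'bugfix-hotfix' };
console.log(assembleSkillArgs('workflow-lite-plan', task, null, true, true));
// quoted goal with \" and \$ escapes, then: --hotfix -y
console.log(assembleSkillArgs('workflow-test-fix', task, { session_id: 'S1' }, true, false));
// --session="S1" -y
```

One caveat worth noting: the `args.includes('-y')` guard is a substring test, so a goal text that happens to contain `-y` would suppress the appended flag.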
## Examples
```bash
# Resume most recent running session (interactive)
/idaw:resume
# Resume specific session
/idaw:resume IDA-auth-fix-20260301
# Resume with auto mode (skip interrupted, continue)
/idaw:resume -y
# Resume specific session with auto mode
/idaw:resume -y IDA-auth-fix-20260301
```


@@ -0,0 +1,648 @@
---
name: run-coordinate
description: IDAW coordinator - execute task skill chains via external CLI with hook callbacks and git checkpoints
argument-hint: "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run] [--tool <tool>]"
allowed-tools: Task(*), AskUserQuestion(*), Read(*), Write(*), Bash(*), Glob(*), Grep(*)
---
# IDAW Run Coordinate Command (/idaw:run-coordinate)
Coordinator variant of `/idaw:run`: external CLI execution with background tasks and hook callbacks.
**Execution Model**: `ccw cli -p "..." --tool <tool> --mode write` in background → hook callback → next step.
**vs `/idaw:run`**: Direct `Skill()` calls (blocking, main process) vs `ccw cli` (background, external process).
## When to Use
| Scenario | Use |
|----------|-----|
| Standard IDAW execution (main process) | `/idaw:run` |
| External CLI execution (background, hook-driven) | `/idaw:run-coordinate` |
| Need `claude` or `gemini` as execution tool | `/idaw:run-coordinate --tool claude` |
| Long-running tasks, avoid context window pressure | `/idaw:run-coordinate` |
## Skill Chain Mapping
```javascript
const SKILL_CHAIN_MAP = {
'bugfix': ['workflow-lite-plan', 'workflow-test-fix'],
'bugfix-hotfix': ['workflow-lite-plan'],
'feature': ['workflow-lite-plan', 'workflow-test-fix'],
'feature-complex': ['workflow-plan', 'workflow-execute', 'workflow-test-fix'],
'refactor': ['workflow:refactor-cycle'],
'tdd': ['workflow-tdd-plan', 'workflow-execute'],
'test': ['workflow-test-fix'],
'test-fix': ['workflow-test-fix'],
'review': ['review-cycle'],
'docs': ['workflow-lite-plan']
};
```
## Task Type Inference
```javascript
function inferTaskType(title, description) {
const text = `${title} ${description}`.toLowerCase();
if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
if (/tdd|test-driven|test first/.test(text)) return 'tdd';
if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
if (/generate test|写测试|add test/.test(text)) return 'test';
if (/review|code review/.test(text)) return 'review';
if (/docs|documentation|readme/.test(text)) return 'docs';
if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
return 'feature';
}
```
## 6-Phase Execution (Coordinator Model)
### Phase 1: Load Tasks
```javascript
const args = $ARGUMENTS;
const autoYes = /(-y|--yes)/.test(args);
const dryRun = /--dry-run/.test(args);
const taskFilter = args.match(/--task\s+([\w,-]+)/)?.[1]?.split(',') || null;
const cliTool = args.match(/--tool\s+(\w+)/)?.[1] || 'claude';
// Load task files
const taskFiles = Glob('.workflow/.idaw/tasks/IDAW-*.json') || [];
if (taskFiles.length === 0) {
console.log('No IDAW tasks found. Use /idaw:add to create tasks.');
return;
}
// Parse and filter
let tasks = taskFiles.map(f => JSON.parse(Read(f)));
if (taskFilter) {
tasks = tasks.filter(t => taskFilter.includes(t.id));
} else {
tasks = tasks.filter(t => t.status === 'pending');
}
if (tasks.length === 0) {
console.log('No pending tasks to execute. Use /idaw:add to add tasks or --task to specify IDs.');
return;
}
// Sort: priority ASC (1=critical first), then ID ASC
tasks.sort((a, b) => {
if (a.priority !== b.priority) return a.priority - b.priority;
return a.id.localeCompare(b.id);
});
```
### Phase 2: Session Setup
```javascript
// Generate session ID: IDA-{slug}-YYYYMMDD
const slug = tasks[0].title
.toLowerCase()
.replace(/[^a-z0-9]+/g, '-')
.substring(0, 20)
.replace(/-$/, '');
const dateStr = new Date().toISOString().slice(0, 10).replace(/-/g, '');
let sessionId = `IDA-${slug}-${dateStr}`;
// Check collision
const existingSession = Glob(`.workflow/.idaw/sessions/${sessionId}/session.json`);
if (existingSession?.length > 0) {
sessionId = `${sessionId}-2`;
}
const sessionDir = `.workflow/.idaw/sessions/${sessionId}`;
Bash(`mkdir -p "${sessionDir}"`);
const session = {
session_id: sessionId,
mode: 'coordinate', // ★ Marks this as coordinator-mode session
cli_tool: cliTool,
status: 'running',
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
tasks: tasks.map(t => t.id),
current_task: null,
current_skill_index: 0,
completed: [],
failed: [],
skipped: [],
prompts_used: []
};
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
// Initialize progress.md
const progressHeader = `# IDAW Progress — ${sessionId} (coordinate mode)\nStarted: ${session.created_at}\nCLI Tool: ${cliTool}\n\n`;
Write(`${sessionDir}/progress.md`, progressHeader);
```
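The session-ID rules above (lowercase, collapse non-alphanumerics to `-`, cap the slug at 20 characters, trim a trailing dash, append `YYYYMMDD`) are condensed here into a hypothetical `makeSessionId` helper with a fixed date for determinism:

```javascript
// Illustrative wrapper for the slug + date logic above (makeSessionId is
// not part of the command; the date parameter exists only for testability).
function makeSessionId(title, date = new Date('2026-03-01')) {
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .substring(0, 20)
    .replace(/-$/, '');
  const dateStr = date.toISOString().slice(0, 10).replace(/-/g, '');
  return `IDA-${slug}-${dateStr}`;
}

console.log(makeSessionId('Add Rate Limiting!'));
// IDA-add-rate-limiting-20260301
console.log(makeSessionId('Fix login timeout bug in session handler'));
// IDA-fix-login-timeout-bu-20260301  (slug truncated at 20 chars)
```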
### Phase 3: Startup Protocol
```javascript
// Check for existing running sessions
const runningSessions = Glob('.workflow/.idaw/sessions/IDA-*/session.json')
?.map(f => { try { return JSON.parse(Read(f)); } catch { return null; } })
.filter(s => s && s.status === 'running' && s.session_id !== sessionId) || [];
if (runningSessions.length > 0 && !autoYes) {
const answer = AskUserQuestion({
questions: [{
question: `Found running session: ${runningSessions[0].session_id}. How to proceed?`,
header: 'Conflict',
multiSelect: false,
options: [
{ label: 'Resume existing', description: 'Use /idaw:resume instead' },
{ label: 'Start fresh', description: 'Continue with new session' },
{ label: 'Abort', description: 'Cancel this run' }
]
}]
});
if (answer.answers?.Conflict === 'Resume existing') {
console.log(`Use: /idaw:resume ${runningSessions[0].session_id}`);
return;
}
if (answer.answers?.Conflict === 'Abort') return;
}
// Check git status
const gitStatus = Bash('git status --porcelain 2>/dev/null');
if (gitStatus?.trim() && !autoYes) {
const answer = AskUserQuestion({
questions: [{
question: 'Working tree has uncommitted changes. How to proceed?',
header: 'Git',
multiSelect: false,
options: [
{ label: 'Continue', description: 'Proceed with dirty tree' },
{ label: 'Stash', description: 'git stash before running' },
{ label: 'Abort', description: 'Stop and handle manually' }
]
}]
});
if (answer.answers?.Git === 'Stash') Bash('git stash push -m "idaw-pre-run"');
if (answer.answers?.Git === 'Abort') return;
}
// Dry run
if (dryRun) {
console.log(`# Dry Run — ${sessionId} (coordinate mode, tool: ${cliTool})\n`);
for (const task of tasks) {
const taskType = task.task_type || inferTaskType(task.title, task.description);
const chain = task.skill_chain || SKILL_CHAIN_MAP[taskType] || SKILL_CHAIN_MAP['feature'];
console.log(`## ${task.id}: ${task.title}`);
console.log(` Type: ${taskType} | Priority: ${task.priority}`);
console.log(` Chain: ${chain.join(' → ')}`);
console.log(` CLI: ccw cli --tool ${cliTool} --mode write\n`);
}
console.log(`Total: ${tasks.length} tasks`);
return;
}
```
### Phase 4: Launch First Task (then wait for hook)
```javascript
// Start with the first task, first skill
const firstTask = tasks[0];
const resolvedType = firstTask.task_type || inferTaskType(firstTask.title, firstTask.description);
const chain = firstTask.skill_chain || SKILL_CHAIN_MAP[resolvedType] || SKILL_CHAIN_MAP['feature'];
// Update task → in_progress
firstTask.status = 'in_progress';
firstTask.task_type = resolvedType;
firstTask.execution.started_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${firstTask.id}.json`, JSON.stringify(firstTask, null, 2));
// Update session
session.current_task = firstTask.id;
session.current_skill_index = 0;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
// ━━━ Pre-Task CLI Context Analysis (for complex/bugfix tasks) ━━━
if (['bugfix', 'bugfix-hotfix', 'feature-complex'].includes(resolvedType)) {
console.log(`Pre-analysis: gathering context for ${resolvedType} task...`);
const affectedFiles = (firstTask.context?.affected_files || []).join(', ');
const preAnalysisPrompt = `PURPOSE: Pre-analyze codebase context for IDAW task.
TASK: • Understand current state of: ${affectedFiles || 'files related to: ' + firstTask.title} • Identify dependencies and risk areas
MODE: analysis
CONTEXT: @**/*
EXPECTED: Brief context summary in 3-5 bullet points
CONSTRAINTS: Keep concise`;
Bash(`ccw cli -p '${preAnalysisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis 2>&1 || echo "Pre-analysis skipped"`);
}
// Assemble prompt for first skill
const skillName = chain[0];
const prompt = assembleCliPrompt(skillName, firstTask, null, autoYes);
session.prompts_used.push({
task_id: firstTask.id,
skill_index: 0,
skill: skillName,
prompt: prompt,
timestamp: new Date().toISOString()
});
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
// Launch via ccw cli in background
console.log(`[1/${tasks.length}] ${firstTask.id}: ${firstTask.title}`);
console.log(` Chain: ${chain.join(' → ')}`);
console.log(` Launching: ${skillName} via ccw cli --tool ${cliTool}`);
Bash(
`ccw cli -p "${escapeForShell(prompt)}" --tool ${cliTool} --mode write`,
{ run_in_background: true }
);
// ★ STOP HERE — wait for hook callback
// Hook callback will trigger handleStepCompletion() below
```
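`escapeForShell` is referenced above but not defined in this excerpt. A minimal sketch of what its contract appears to be (making a string safe inside a double-quoted POSIX shell argument); this is an assumption about the helper, not the actual implementation:

```javascript
// Hypothetical escapeForShell: backslash-escape the four characters that
// remain special inside POSIX double quotes: \  "  $  `
function escapeForShell(s) {
  return s.replace(/([\\"$`])/g, '\\$1');
}

console.log(escapeForShell('run "plan" for $HOME')); // run \"plan\" for \$HOME
```

Single-quoting with `'\''` escaping (as the inline `ccw cli` prompts above do) is an equally valid alternative contract.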
### Phase 5: Hook Callback Handler (per-step completion)
```javascript
// Called by hook when background CLI completes
async function handleStepCompletion(sessionId, cliOutput) {
const sessionDir = `.workflow/.idaw/sessions/${sessionId}`;
const session = JSON.parse(Read(`${sessionDir}/session.json`));
const taskId = session.current_task;
const task = JSON.parse(Read(`.workflow/.idaw/tasks/${taskId}.json`));
const resolvedType = task.task_type || inferTaskType(task.title, task.description);
const chain = task.skill_chain || SKILL_CHAIN_MAP[resolvedType] || SKILL_CHAIN_MAP['feature'];
const skillIdx = session.current_skill_index;
const skillName = chain[skillIdx];
// Parse CLI output for session ID
const parsedOutput = parseCliOutput(cliOutput);
// Record skill result
task.execution.skill_results.push({
skill: skillName,
status: parsedOutput.success ? 'completed' : 'failed',
session_id: parsedOutput.sessionId,
timestamp: new Date().toISOString()
});
// ━━━ Handle failure with CLI diagnosis ━━━
if (!parsedOutput.success) {
console.log(` ${skillName} failed. Running CLI diagnosis...`);
const diagnosisPrompt = `PURPOSE: Diagnose why skill "${skillName}" failed during IDAW task.
TASK: • Analyze error output • Check affected files: ${(task.context?.affected_files || []).join(', ') || 'unknown'}
MODE: analysis
CONTEXT: @**/* | Memory: IDAW task ${task.id}: ${task.title}
EXPECTED: Root cause + fix recommendation
CONSTRAINTS: Actionable diagnosis`;
Bash(`ccw cli -p '${diagnosisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause 2>&1 || true`);
task.execution.skill_results.push({
skill: `cli-diagnosis:${skillName}`,
status: 'completed',
timestamp: new Date().toISOString()
});
// Retry once
console.log(` Retrying: ${skillName}`);
const retryPrompt = assembleCliPrompt(skillName, task, parsedOutput, true);
session.prompts_used.push({
task_id: taskId,
skill_index: skillIdx,
skill: `${skillName}-retry`,
prompt: retryPrompt,
timestamp: new Date().toISOString()
});
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
Write(`.workflow/.idaw/tasks/${taskId}.json`, JSON.stringify(task, null, 2));
Bash(
`ccw cli -p "${escapeForShell(retryPrompt)}" --tool ${session.cli_tool} --mode write`,
{ run_in_background: true }
);
return; // Wait for retry hook
}
// ━━━ Skill succeeded — advance ━━━
const nextSkillIdx = skillIdx + 1;
if (nextSkillIdx < chain.length) {
// More skills in this task's chain → launch next skill
session.current_skill_index = nextSkillIdx;
session.updated_at = new Date().toISOString();
const nextSkill = chain[nextSkillIdx];
const nextPrompt = assembleCliPrompt(nextSkill, task, parsedOutput, true);
session.prompts_used.push({
task_id: taskId,
skill_index: nextSkillIdx,
skill: nextSkill,
prompt: nextPrompt,
timestamp: new Date().toISOString()
});
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
Write(`.workflow/.idaw/tasks/${taskId}.json`, JSON.stringify(task, null, 2));
console.log(` Next skill: ${nextSkill}`);
Bash(
`ccw cli -p "${escapeForShell(nextPrompt)}" --tool ${session.cli_tool} --mode write`,
{ run_in_background: true }
);
return; // Wait for next hook
}
// ━━━ Task chain complete — git checkpoint ━━━
const commitMsg = `feat(idaw): ${task.title} [${task.id}]`;
const diffCheck = Bash('git diff --stat HEAD 2>/dev/null || echo ""');
const untrackedCheck = Bash('git ls-files --others --exclude-standard 2>/dev/null || echo ""');
if (diffCheck?.trim() || untrackedCheck?.trim()) {
Bash('git add -A');
Bash(`git commit -m "$(cat <<'EOF'\n${commitMsg}\nEOF\n)"`);
const commitHash = Bash('git rev-parse --short HEAD 2>/dev/null')?.trim();
task.execution.git_commit = commitHash;
} else {
task.execution.git_commit = 'no-commit';
}
task.status = 'completed';
task.execution.completed_at = new Date().toISOString();
task.updated_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${taskId}.json`, JSON.stringify(task, null, 2));
session.completed.push(taskId);
// Append progress
const progressEntry = `## ${task.id}: ${task.title}\n` +
`- Status: completed\n` +
`- Type: ${task.task_type}\n` +
`- Chain: ${chain.join(' → ')}\n` +
`- Commit: ${task.execution.git_commit || '-'}\n` +
`- Mode: coordinate (${session.cli_tool})\n\n`;
const currentProgress = Read(`${sessionDir}/progress.md`);
Write(`${sessionDir}/progress.md`, currentProgress + progressEntry);
// ━━━ Advance to next task ━━━
const allTaskIds = session.tasks;
const completedSet = new Set([...session.completed, ...session.failed, ...session.skipped]);
const nextTaskId = allTaskIds.find(id => !completedSet.has(id));
if (nextTaskId) {
// Load next task
const nextTask = JSON.parse(Read(`.workflow/.idaw/tasks/${nextTaskId}.json`));
const nextType = nextTask.task_type || inferTaskType(nextTask.title, nextTask.description);
const nextChain = nextTask.skill_chain || SKILL_CHAIN_MAP[nextType] || SKILL_CHAIN_MAP['feature'];
nextTask.status = 'in_progress';
nextTask.task_type = nextType;
nextTask.execution.started_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${nextTaskId}.json`, JSON.stringify(nextTask, null, 2));
session.current_task = nextTaskId;
session.current_skill_index = 0;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
// Pre-analysis for complex tasks
if (['bugfix', 'bugfix-hotfix', 'feature-complex'].includes(nextType)) {
const affectedFiles = (nextTask.context?.affected_files || []).join(', ');
Bash(`ccw cli -p 'PURPOSE: Pre-analyze context for ${nextTask.title}. TASK: Check ${affectedFiles || "related files"}. MODE: analysis. EXPECTED: 3-5 bullet points.' --tool gemini --mode analysis 2>&1 || true`);
}
const nextSkillName = nextChain[0];
const nextPrompt = assembleCliPrompt(nextSkillName, nextTask, null, true);
session.prompts_used.push({
task_id: nextTaskId,
skill_index: 0,
skill: nextSkillName,
prompt: nextPrompt,
timestamp: new Date().toISOString()
});
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
const taskNum = session.completed.length + 1;
const totalTasks = session.tasks.length;
console.log(`\n[${taskNum}/${totalTasks}] ${nextTaskId}: ${nextTask.title}`);
console.log(` Chain: ${nextChain.join(' → ')}`);
Bash(
`ccw cli -p "${escapeForShell(nextPrompt)}" --tool ${session.cli_tool} --mode write`,
{ run_in_background: true }
);
return; // Wait for hook
}
// ━━━ All tasks complete — Phase 6: Report ━━━
session.status = session.failed.length > 0 && session.completed.length === 0 ? 'failed' : 'completed';
session.current_task = null;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
const summary = `\n---\n## Summary (coordinate mode)\n` +
`- CLI Tool: ${session.cli_tool}\n` +
`- Completed: ${session.completed.length}\n` +
`- Failed: ${session.failed.length}\n` +
`- Skipped: ${session.skipped.length}\n` +
`- Total: ${session.tasks.length}\n`;
const finalProgress = Read(`${sessionDir}/progress.md`);
Write(`${sessionDir}/progress.md`, finalProgress + summary);
console.log('\n=== IDAW Coordinate Complete ===');
console.log(`Session: ${sessionId}`);
console.log(`Completed: ${session.completed.length}/${session.tasks.length}`);
if (session.failed.length > 0) console.log(`Failed: ${session.failed.join(', ')}`);
}
```
## Helper Functions
### assembleCliPrompt
```javascript
function assembleCliPrompt(skillName, task, previousResult, autoYes) {
let prompt = '';
const yFlag = autoYes ? ' -y' : '';
// Map skill to command invocation
if (skillName === 'workflow-lite-plan') {
const goal = sanitize(`${task.title}\n${task.description}`);
prompt = `/workflow-lite-plan${yFlag} "${goal}"`;
if (task.task_type === 'bugfix') prompt = `/workflow-lite-plan${yFlag} --bugfix "${goal}"`;
if (task.task_type === 'bugfix-hotfix') prompt = `/workflow-lite-plan${yFlag} --hotfix "${goal}"`;
} else if (skillName === 'workflow-plan') {
prompt = `/workflow-plan${yFlag} "${sanitize(task.title)}"`;
} else if (skillName === 'workflow-execute') {
if (previousResult?.sessionId) {
prompt = `/workflow-execute${yFlag} --resume-session="${previousResult.sessionId}"`;
} else {
prompt = `/workflow-execute${yFlag}`;
}
} else if (skillName === 'workflow-test-fix') {
if (previousResult?.sessionId) {
prompt = `/workflow-test-fix${yFlag} "${previousResult.sessionId}"`;
} else {
prompt = `/workflow-test-fix${yFlag} "${sanitize(task.title)}"`;
}
} else if (skillName === 'workflow-tdd-plan') {
prompt = `/workflow-tdd-plan${yFlag} "${sanitize(task.title)}"`;
} else if (skillName === 'workflow:refactor-cycle') {
prompt = `/workflow:refactor-cycle${yFlag} "${sanitize(task.title)}"`;
} else if (skillName === 'review-cycle') {
if (previousResult?.sessionId) {
prompt = `/review-cycle${yFlag} --session="${previousResult.sessionId}"`;
} else {
prompt = `/review-cycle${yFlag}`;
}
} else {
// Generic fallback
prompt = `/${skillName}${yFlag} "${sanitize(task.title)}"`;
}
// Append task context
prompt += `\n\nTask: ${task.title}\nDescription: ${task.description}`;
if (task.context?.affected_files?.length > 0) {
prompt += `\nAffected files: ${task.context.affected_files.join(', ')}`;
}
if (task.context?.acceptance_criteria?.length > 0) {
prompt += `\nAcceptance criteria: ${task.context.acceptance_criteria.join('; ')}`;
}
return prompt;
}
```
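As a quick illustration, the `workflow-execute` branch above resolves like this (the session ID is hypothetical):

```javascript
// Hypothetical previous skill result; mirrors the workflow-execute branch
// of assembleCliPrompt above.
const previousResult = { sessionId: 'WFS-login-fix-001' };
const yFlag = ' -y'; // autoYes enabled

const prompt = previousResult?.sessionId
  ? `/workflow-execute${yFlag} --resume-session="${previousResult.sessionId}"`
  : `/workflow-execute${yFlag}`;

console.log(prompt); // /workflow-execute -y --resume-session="WFS-login-fix-001"
```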
### sanitize & escapeForShell
```javascript
function sanitize(text) {
return text
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/\$/g, '\\$')
.replace(/`/g, '\\`');
}
function escapeForShell(prompt) {
return prompt.replace(/'/g, "'\\''");
}
```
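A quick check of what each escaper produces — `sanitize` targets double-quoted shell arguments, `escapeForShell` single-quoted ones (sample strings are hypothetical):

```javascript
// Copies of the sanitize/escapeForShell definitions above, so the
// snippet runs standalone.
function sanitize(text) {
  return text
    .replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/\$/g, '\\$')
    .replace(/`/g, '\\`');
}
function escapeForShell(prompt) {
  return prompt.replace(/'/g, "'\\''");
}

// Double-quoted context: ", $ and ` must be neutralized
console.log(sanitize('Fix "login" for $USER')); // Fix \"login\" for \$USER

// Single-quoted context: only ' needs the '\'' dance
console.log(escapeForShell("don't retry")); // don'\''t retry
```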
### parseCliOutput
```javascript
function parseCliOutput(output) {
// Extract session ID from CLI output (e.g., WFS-xxx, session-xxx)
// Longest alternative first, so "Session ID: xxx" captures xxx rather than the literal "ID"
const sessionMatch = output.match(/(?:Session ID|session|WFS)[:\s]*([\w-]+)/i);
// Heuristic: success unless an error keyword appears with no success marker
const success = !/(?:error|failed|fatal)/i.test(output) || /completed|success/i.test(output);
return {
success,
sessionId: sessionMatch?.[1] || null,
raw: output?.substring(0, 500)
};
}
```
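A standalone sketch of the same heuristic, with the longest alternative (`Session ID`) ordered first so `Session ID: xxx` captures the ID value rather than the literal `ID` (sample CLI outputs are hypothetical):

```javascript
// Standalone sketch of the parsing heuristic above; the alternation lists
// the longest literal first so "Session ID: xxx" captures xxx, not "ID".
function parseCliOutput(output) {
  const sessionMatch = output.match(/(?:Session ID|session|WFS)[:\s]*([\w-]+)/i);
  const success = !/(?:error|failed|fatal)/i.test(output) || /completed|success/i.test(output);
  return { success, sessionId: sessionMatch?.[1] || null, raw: output?.substring(0, 500) };
}

const ok = parseCliOutput('Execution completed. Session ID: WFS-login-fix-001');
console.log(ok.success, ok.sessionId); // true WFS-login-fix-001

const bad = parseCliOutput('fatal: could not read plan file');
console.log(bad.success, bad.sessionId); // false null
```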
## CLI-Assisted Analysis
Same as `/idaw:run` — integrated at two points:
### Pre-Task Context Analysis
For `bugfix`, `bugfix-hotfix`, `feature-complex` tasks: auto-invoke `ccw cli --tool gemini --mode analysis` before launching skill chain.
### Error Recovery with CLI Diagnosis
When a skill's CLI execution fails: invoke diagnosis → retry once → if still fails, mark failed and advance.
```
Skill CLI fails → CLI diagnosis (gemini) → Retry CLI → Still fails → mark failed → next task
```
## State Flow
```
Phase 4: Launch first skill
ccw cli --tool claude --mode write (background)
★ STOP — wait for hook callback
Phase 5: handleStepCompletion()
├─ Skill succeeded + more in chain → launch next skill → STOP
├─ Skill succeeded + chain complete → git checkpoint → next task → STOP
├─ Skill failed → CLI diagnosis → retry → STOP
└─ All tasks done → Phase 6: Report
```
## Session State (session.json)
```json
{
"session_id": "IDA-fix-login-20260301",
"mode": "coordinate",
"cli_tool": "claude",
"status": "running|waiting|completed|failed",
"created_at": "ISO",
"updated_at": "ISO",
"tasks": ["IDAW-001", "IDAW-002"],
"current_task": "IDAW-001",
"current_skill_index": 0,
"completed": [],
"failed": [],
"skipped": [],
"prompts_used": [
{
"task_id": "IDAW-001",
"skill_index": 0,
"skill": "workflow-lite-plan",
"prompt": "/workflow-lite-plan -y \"Fix login timeout\"",
"timestamp": "ISO"
}
]
}
```
## Differences from /idaw:run
| Aspect | /idaw:run | /idaw:run-coordinate |
|--------|-----------|---------------------|
| Execution | `Skill()` blocking in main process | `ccw cli` background + hook callback |
| Context window | Shared (each skill uses main context) | Isolated (each CLI gets fresh context) |
| Concurrency | Sequential blocking | Sequential non-blocking (hook-driven) |
| State tracking | session.json + task.json | session.json + task.json + prompts_used |
| Tool selection | N/A (Skill native) | `--tool claude\|gemini\|qwen` |
| Resume | Via `/idaw:resume` (same) | Via `/idaw:resume` (same, detects mode) |
| Best for | Short chains, interactive | Long chains, autonomous, context-heavy |
## Examples
```bash
# Execute all pending tasks via claude CLI
/idaw:run-coordinate -y
# Use specific CLI tool
/idaw:run-coordinate -y --tool gemini
# Execute specific tasks
/idaw:run-coordinate --task IDAW-001,IDAW-003 --tool claude
# Dry run (show plan without executing)
/idaw:run-coordinate --dry-run
# Interactive mode
/idaw:run-coordinate
```

---
name: run
description: IDAW orchestrator - execute task skill chains serially with git checkpoints
argument-hint: "[-y|--yes] [--task <id>[,<id>,...]] [--dry-run]"
allowed-tools: Skill(*), TodoWrite(*), AskUserQuestion(*), Read(*), Write(*), Bash(*), Glob(*)
---
# IDAW Run Command (/idaw:run)
## Auto Mode
When `--yes` or `-y`: Skip all confirmations, auto-skip on failure, proceed with dirty git.
## Skill Chain Mapping
```javascript
const SKILL_CHAIN_MAP = {
'bugfix': ['workflow-lite-plan', 'workflow-test-fix'],
'bugfix-hotfix': ['workflow-lite-plan'],
'feature': ['workflow-lite-plan', 'workflow-test-fix'],
'feature-complex': ['workflow-plan', 'workflow-execute', 'workflow-test-fix'],
'refactor': ['workflow:refactor-cycle'],
'tdd': ['workflow-tdd-plan', 'workflow-execute'],
'test': ['workflow-test-fix'],
'test-fix': ['workflow-test-fix'],
'review': ['review-cycle'],
'docs': ['workflow-lite-plan']
};
```
## Task Type Inference
```javascript
function inferTaskType(title, description) {
const text = `${title} ${description}`.toLowerCase();
if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
if (/refactor|重构|tech.*debt/.test(text)) return 'refactor';
if (/tdd|test-driven|test first/.test(text)) return 'tdd';
if (/test fail|fix test|failing test/.test(text)) return 'test-fix';
if (/generate test|写测试|add test/.test(text)) return 'test';
if (/review|code review/.test(text)) return 'review';
if (/docs|documentation|readme/.test(text)) return 'docs';
if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
if (/complex|multi-module|architecture/.test(text)) return 'feature-complex';
return 'feature';
}
```
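Putting the two together — a trimmed, runnable sketch of how a task title resolves to a skill chain (task titles are hypothetical; Chinese patterns omitted for brevity):

```javascript
// Trimmed copies of SKILL_CHAIN_MAP / inferTaskType above, enough to run standalone.
const SKILL_CHAIN_MAP = {
  'bugfix': ['workflow-lite-plan', 'workflow-test-fix'],
  'bugfix-hotfix': ['workflow-lite-plan'],
  'feature': ['workflow-lite-plan', 'workflow-test-fix'],
  'refactor': ['workflow:refactor-cycle']
};
function inferTaskType(title, description) {
  const text = `${title} ${description}`.toLowerCase();
  if (/urgent|production|critical/.test(text) && /fix|bug/.test(text)) return 'bugfix-hotfix';
  if (/refactor|tech.*debt/.test(text)) return 'refactor';
  if (/fix|bug|error|crash|fail/.test(text)) return 'bugfix';
  return 'feature';
}

const type = inferTaskType('Fix login timeout', 'Users logged out after 5 min');
console.log(type);                  // bugfix
console.log(SKILL_CHAIN_MAP[type]); // [ 'workflow-lite-plan', 'workflow-test-fix' ]

// A production incident escalates to the hotfix chain
console.log(inferTaskType('URGENT: fix prod crash', '')); // bugfix-hotfix
```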
## 6-Phase Execution
### Phase 1: Load Tasks
```javascript
const args = $ARGUMENTS;
const autoYes = /(-y|--yes)/.test(args);
const dryRun = /--dry-run/.test(args);
const taskFilter = args.match(/--task\s+([\w,-]+)/)?.[1]?.split(',') || null;
// Load task files
const taskFiles = Glob('.workflow/.idaw/tasks/IDAW-*.json') || [];
if (taskFiles.length === 0) {
console.log('No IDAW tasks found. Use /idaw:add to create tasks.');
return;
}
// Parse and filter
let tasks = taskFiles.map(f => JSON.parse(Read(f)));
if (taskFilter) {
tasks = tasks.filter(t => taskFilter.includes(t.id));
} else {
tasks = tasks.filter(t => t.status === 'pending');
}
if (tasks.length === 0) {
console.log('No pending tasks to execute. Use /idaw:add to add tasks or --task to specify IDs.');
return;
}
// Sort: priority ASC (1=critical first), then ID ASC
tasks.sort((a, b) => {
if (a.priority !== b.priority) return a.priority - b.priority;
return a.id.localeCompare(b.id);
});
```
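The sort comparator above yields priority-then-ID ordering; for example (IDs and priorities hypothetical):

```javascript
// Priority 1 = critical runs first; ties broken by ID for deterministic order.
const tasks = [
  { id: 'IDAW-003', priority: 2 },
  { id: 'IDAW-001', priority: 2 },
  { id: 'IDAW-002', priority: 1 }
];
tasks.sort((a, b) => {
  if (a.priority !== b.priority) return a.priority - b.priority;
  return a.id.localeCompare(b.id);
});
console.log(tasks.map(t => t.id)); // [ 'IDAW-002', 'IDAW-001', 'IDAW-003' ]
```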
### Phase 2: Session Setup
```javascript
// Generate session ID: IDA-{slug}-YYYYMMDD
const slug = tasks[0].title
.toLowerCase()
.replace(/[^a-z0-9]+/g, '-')
.substring(0, 20)
.replace(/-$/, '');
const dateStr = new Date().toISOString().slice(0, 10).replace(/-/g, '');
let sessionId = `IDA-${slug}-${dateStr}`;
// Check collision
const existingSession = Glob(`.workflow/.idaw/sessions/${sessionId}/session.json`);
if (existingSession?.length > 0) {
sessionId = `${sessionId}-2`;
}
const sessionDir = `.workflow/.idaw/sessions/${sessionId}`;
Bash(`mkdir -p "${sessionDir}"`);
const session = {
session_id: sessionId,
status: 'running',
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
tasks: tasks.map(t => t.id),
current_task: null,
completed: [],
failed: [],
skipped: []
};
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
// Initialize progress.md
const progressHeader = `# IDAW Progress — ${sessionId}\nStarted: ${session.created_at}\n\n`;
Write(`${sessionDir}/progress.md`, progressHeader);
// TodoWrite
TodoWrite({
todos: tasks.map((t, i) => ({
content: `IDAW:[${i + 1}/${tasks.length}] ${t.title}`,
status: i === 0 ? 'in_progress' : 'pending',
activeForm: `Executing ${t.title}`
}))
});
```
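For a concrete (hypothetical) title and date, the slug pipeline produces:

```javascript
// Mirrors the session-ID slug logic above; title and date are hypothetical.
const title = 'Fix auth token refresh & session expiry handling';
const slug = title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-') // non-alphanumeric runs → single dash
  .substring(0, 20)            // cap slug length
  .replace(/-$/, '');          // drop a trailing dash left by truncation
const dateStr = new Date('2026-03-01T12:00:00Z').toISOString().slice(0, 10).replace(/-/g, '');
console.log(`IDA-${slug}-${dateStr}`); // IDA-fix-auth-token-refre-20260301
```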
### Phase 3: Startup Protocol
```javascript
// Check for existing running sessions
const runningSessions = Glob('.workflow/.idaw/sessions/IDA-*/session.json')
?.map(f => JSON.parse(Read(f)))
.filter(s => s.status === 'running' && s.session_id !== sessionId) || [];
if (runningSessions.length > 0) {
if (!autoYes) {
const answer = AskUserQuestion({
questions: [{
question: `Found running session: ${runningSessions[0].session_id}. How to proceed?`,
header: 'Conflict',
multiSelect: false,
options: [
{ label: 'Resume existing', description: 'Use /idaw:resume instead' },
{ label: 'Start fresh', description: 'Continue with new session' },
{ label: 'Abort', description: 'Cancel this run' }
]
}]
});
if (answer.answers?.Conflict === 'Resume existing') {
console.log(`Use: /idaw:resume ${runningSessions[0].session_id}`);
return;
}
if (answer.answers?.Conflict === 'Abort') return;
}
// autoYes or "Start fresh": proceed
}
// Check git status
const gitStatus = Bash('git status --porcelain 2>/dev/null');
if (gitStatus?.trim()) {
if (!autoYes) {
const answer = AskUserQuestion({
questions: [{
question: 'Working tree has uncommitted changes. How to proceed?',
header: 'Git',
multiSelect: false,
options: [
{ label: 'Continue', description: 'Proceed with dirty tree' },
{ label: 'Stash', description: 'git stash before running' },
{ label: 'Abort', description: 'Stop and handle manually' }
]
}]
});
if (answer.answers?.Git === 'Stash') {
Bash('git stash push -m "idaw-pre-run"');
}
if (answer.answers?.Git === 'Abort') return;
}
// autoYes: proceed silently
}
// Dry run: show plan and exit
if (dryRun) {
console.log(`# Dry Run — ${sessionId}\n`);
for (const task of tasks) {
const taskType = task.task_type || inferTaskType(task.title, task.description);
const chain = task.skill_chain || SKILL_CHAIN_MAP[taskType] || SKILL_CHAIN_MAP['feature'];
console.log(`## ${task.id}: ${task.title}`);
console.log(` Type: ${taskType} | Priority: ${task.priority}`);
console.log(` Chain: ${chain.join(' → ')}\n`);
}
console.log(`Total: ${tasks.length} tasks`);
return;
}
```
### Phase 4: Main Loop (serial, one task at a time)
```javascript
for (let taskIdx = 0; taskIdx < tasks.length; taskIdx++) {
const task = tasks[taskIdx];
// Skip completed/failed/skipped
if (['completed', 'failed', 'skipped'].includes(task.status)) continue;
// Resolve skill chain
const resolvedType = task.task_type || inferTaskType(task.title, task.description);
const chain = task.skill_chain || SKILL_CHAIN_MAP[resolvedType] || SKILL_CHAIN_MAP['feature'];
// Update task status → in_progress
task.status = 'in_progress';
task.task_type = resolvedType; // persist inferred type
task.execution.started_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));
// Update session
session.current_task = task.id;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
console.log(`\n--- [${taskIdx + 1}/${tasks.length}] ${task.id}: ${task.title} ---`);
console.log(`Chain: ${chain.join(' → ')}`);
// ━━━ Pre-Task CLI Context Analysis (for complex/bugfix tasks) ━━━
if (['bugfix', 'bugfix-hotfix', 'feature-complex'].includes(resolvedType)) {
console.log(` Pre-analysis: gathering context for ${resolvedType} task...`);
const affectedFiles = (task.context?.affected_files || []).join(', ');
const preAnalysisPrompt = `PURPOSE: Pre-analyze codebase context for IDAW task before execution.
TASK: • Understand current state of: ${affectedFiles || 'files related to: ' + task.title} • Identify dependencies and risk areas • Note existing patterns to follow
MODE: analysis
CONTEXT: @**/*
EXPECTED: Brief context summary (affected modules, dependencies, risk areas) in 3-5 bullet points
CONSTRAINTS: Keep concise | Focus on execution-relevant context`;
const preAnalysis = Bash(`ccw cli -p '${preAnalysisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis 2>&1 || echo "Pre-analysis skipped"`);
task.execution.skill_results.push({
skill: 'cli-pre-analysis',
status: 'completed',
context_summary: preAnalysis?.substring(0, 500),
timestamp: new Date().toISOString()
});
}
// Execute each skill in chain
let previousResult = null;
let taskFailed = false;
for (let skillIdx = 0; skillIdx < chain.length; skillIdx++) {
const skillName = chain[skillIdx];
const skillArgs = assembleSkillArgs(skillName, task, previousResult, autoYes, skillIdx === 0);
console.log(` [${skillIdx + 1}/${chain.length}] ${skillName}`);
try {
const result = Skill({ skill: skillName, args: skillArgs });
previousResult = result;
task.execution.skill_results.push({
skill: skillName,
status: 'completed',
timestamp: new Date().toISOString()
});
} catch (error) {
// ━━━ CLI-Assisted Error Recovery ━━━
// Step 1: Invoke CLI diagnosis (auto-invoke trigger: self-repair fails)
console.log(` Diagnosing failure: ${skillName}...`);
const diagnosisPrompt = `PURPOSE: Diagnose why skill "${skillName}" failed during IDAW task execution.
TASK: • Analyze error: ${String(error).substring(0, 300)} • Check affected files: ${(task.context?.affected_files || []).join(', ') || 'unknown'} • Identify root cause • Suggest fix strategy
MODE: analysis
CONTEXT: @**/* | Memory: IDAW task ${task.id}: ${task.title}
EXPECTED: Root cause + actionable fix recommendation (1-2 sentences)
CONSTRAINTS: Focus on actionable diagnosis`;
const diagnosisResult = Bash(`ccw cli -p '${diagnosisPrompt.replace(/'/g, "'\\''")}' --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause 2>&1 || echo "CLI diagnosis unavailable"`);
task.execution.skill_results.push({
skill: `cli-diagnosis:${skillName}`,
status: 'completed',
diagnosis: diagnosisResult?.substring(0, 500),
timestamp: new Date().toISOString()
});
// Step 2: Retry with diagnosis context
console.log(` Retry with diagnosis: ${skillName}`);
try {
const retryResult = Skill({ skill: skillName, args: skillArgs });
previousResult = retryResult;
task.execution.skill_results.push({
skill: skillName,
status: 'completed-retry-with-diagnosis',
timestamp: new Date().toISOString()
});
} catch (retryError) {
// Step 3: Failed after CLI-assisted retry
task.execution.skill_results.push({
skill: skillName,
status: 'failed',
error: String(retryError).substring(0, 200),
timestamp: new Date().toISOString()
});
if (autoYes) {
taskFailed = true;
break;
} else {
const answer = AskUserQuestion({
questions: [{
question: `${skillName} failed after CLI diagnosis + retry: ${String(retryError).substring(0, 100)}. How to proceed?`,
header: 'Error',
multiSelect: false,
options: [
{ label: 'Skip task', description: 'Mark task as failed, continue to next' },
{ label: 'Abort', description: 'Stop entire run' }
]
}]
});
if (answer.answers?.Error === 'Abort') {
task.status = 'failed';
task.execution.error = String(retryError).substring(0, 200);
Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));
session.failed.push(task.id);
session.status = 'failed';
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
return;
}
taskFailed = true;
break;
}
}
}
}
// Phase 5: Checkpoint (per task) — inline
if (taskFailed) {
task.status = 'failed';
task.execution.error = 'Skill chain failed after retry';
task.execution.completed_at = new Date().toISOString();
session.failed.push(task.id);
} else {
// Git commit checkpoint
const commitMsg = `feat(idaw): ${task.title} [${task.id}]`;
const diffCheck = Bash('git diff --stat HEAD 2>/dev/null || echo ""');
const untrackedCheck = Bash('git ls-files --others --exclude-standard 2>/dev/null || echo ""');
if (diffCheck?.trim() || untrackedCheck?.trim()) {
Bash('git add -A');
const commitResult = Bash(`git commit -m "$(cat <<'EOF'\n${commitMsg}\nEOF\n)"`);
const commitHash = Bash('git rev-parse --short HEAD 2>/dev/null')?.trim();
task.execution.git_commit = commitHash;
} else {
task.execution.git_commit = 'no-commit';
}
task.status = 'completed';
task.execution.completed_at = new Date().toISOString();
session.completed.push(task.id);
}
// Write task + session state
task.updated_at = new Date().toISOString();
Write(`.workflow/.idaw/tasks/${task.id}.json`, JSON.stringify(task, null, 2));
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
// Append to progress.md
const duration = task.execution.started_at && task.execution.completed_at
? formatDuration(new Date(task.execution.completed_at) - new Date(task.execution.started_at))
: 'unknown';
const progressEntry = `## ${task.id}: ${task.title}\n` +
`- Status: ${task.status}\n` +
`- Type: ${task.task_type}\n` +
`- Chain: ${chain.join(' → ')}\n` +
`- Commit: ${task.execution.git_commit || '-'}\n` +
`- Duration: ${duration}\n\n`;
const currentProgress = Read(`${sessionDir}/progress.md`);
Write(`${sessionDir}/progress.md`, currentProgress + progressEntry);
// Update TodoWrite
if (taskIdx + 1 < tasks.length) {
TodoWrite({
todos: tasks.map((t, i) => ({
content: `IDAW:[${i + 1}/${tasks.length}] ${t.title}`,
status: i < taskIdx + 1 ? 'completed' : (i === taskIdx + 1 ? 'in_progress' : 'pending'),
activeForm: `Executing ${t.title}`
}))
});
}
}
```
### Phase 6: Report
```javascript
session.status = session.failed.length > 0 && session.completed.length === 0 ? 'failed' : 'completed';
session.current_task = null;
session.updated_at = new Date().toISOString();
Write(`${sessionDir}/session.json`, JSON.stringify(session, null, 2));
// Final progress summary
const summary = `\n---\n## Summary\n` +
`- Completed: ${session.completed.length}\n` +
`- Failed: ${session.failed.length}\n` +
`- Skipped: ${session.skipped.length}\n` +
`- Total: ${tasks.length}\n`;
const finalProgress = Read(`${sessionDir}/progress.md`);
Write(`${sessionDir}/progress.md`, finalProgress + summary);
// Display report
console.log('\n=== IDAW Run Complete ===');
console.log(`Session: ${sessionId}`);
console.log(`Completed: ${session.completed.length}/${tasks.length}`);
if (session.failed.length > 0) console.log(`Failed: ${session.failed.join(', ')}`);
if (session.skipped.length > 0) console.log(`Skipped: ${session.skipped.join(', ')}`);
// List git commits
for (const taskId of session.completed) {
const t = JSON.parse(Read(`.workflow/.idaw/tasks/${taskId}.json`));
if (t.execution.git_commit && t.execution.git_commit !== 'no-commit') {
console.log(` ${t.execution.git_commit} ${t.title}`);
}
}
```
## Helper Functions
### assembleSkillArgs
```javascript
function assembleSkillArgs(skillName, task, previousResult, autoYes, isFirst) {
let args = '';
if (isFirst) {
// First skill: pass task goal — sanitize for shell safety
const goal = `${task.title}\n${task.description}`
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/\$/g, '\\$')
.replace(/`/g, '\\`');
args = `"${goal}"`;
// bugfix-hotfix: add --hotfix
if (task.task_type === 'bugfix-hotfix') {
args += ' --hotfix';
}
} else if (previousResult?.session_id) {
// Subsequent skills: chain session
args = `--session="${previousResult.session_id}"`;
}
// Propagate -y
if (autoYes && !args.includes('-y') && !args.includes('--yes')) {
args = args ? `${args} -y` : '-y';
}
return args;
}
```
### formatDuration
```javascript
function formatDuration(ms) {
const seconds = Math.floor(ms / 1000);
const minutes = Math.floor(seconds / 60);
const remainingSeconds = seconds % 60;
if (minutes > 0) return `${minutes}m ${remainingSeconds}s`;
return `${seconds}s`;
}
```
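Sample outputs (durations hypothetical):

```javascript
// Copy of formatDuration above, runnable standalone.
function formatDuration(ms) {
  const seconds = Math.floor(ms / 1000);
  const minutes = Math.floor(seconds / 60);
  const remainingSeconds = seconds % 60;
  if (minutes > 0) return `${minutes}m ${remainingSeconds}s`;
  return `${seconds}s`;
}

console.log(formatDuration(45000));  // 45s
console.log(formatDuration(125000)); // 2m 5s
```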
## CLI-Assisted Analysis
IDAW integrates `ccw cli` (Gemini) for intelligent analysis at two key points:
### Pre-Task Context Analysis
For `bugfix`, `bugfix-hotfix`, and `feature-complex` tasks, IDAW automatically invokes CLI analysis **before** executing the skill chain to gather codebase context:
```
Task starts → CLI pre-analysis (gemini) → Context gathered → Skill chain executes
```
- Identifies dependencies and risk areas
- Notes existing patterns to follow
- Results stored in `task.execution.skill_results` as `cli-pre-analysis`
### Error Recovery with CLI Diagnosis
When a skill fails, instead of blind retry, IDAW uses CLI-assisted diagnosis:
```
Skill fails → CLI diagnosis (gemini, analysis-diagnose-bug-root-cause)
→ Root cause identified → Retry with diagnosis context
→ Still fails → Skip (autoYes) or Ask user (interactive)
```
- Uses `--rule analysis-diagnose-bug-root-cause` template
- Diagnosis results stored in `task.execution.skill_results` as `cli-diagnosis:{skill}`
- Follows CLAUDE.md auto-invoke trigger pattern: "self-repair fails → invoke CLI analysis"
### Execution Flow (with CLI analysis)
```
Phase 4 Main Loop (per task):
├─ [bugfix/complex only] CLI pre-analysis → context summary
├─ Skill 1: execute
│ ├─ Success → next skill
│ └─ Failure → CLI diagnosis → retry → success/fail
├─ Skill 2: execute ...
└─ Phase 5: git checkpoint
```
## Examples
```bash
# Execute all pending tasks
/idaw:run -y
# Execute specific tasks
/idaw:run --task IDAW-001,IDAW-003
# Dry run (show plan without executing)
/idaw:run --dry-run
# Interactive mode (confirm at each step)
/idaw:run
```

---
name: status
description: View IDAW task and session progress
argument-hint: "[session-id]"
allowed-tools: Read(*), Glob(*), Bash(*)
---
# IDAW Status Command (/idaw:status)
## Overview
Read-only command to view IDAW task queue and execution session progress.
## Implementation
### Phase 1: Determine View Mode
```javascript
const sessionId = $ARGUMENTS?.trim();
if (sessionId) {
// Specific session view
showSession(sessionId);
} else {
// Overview: pending tasks + latest session
showOverview();
}
```
### Phase 2: Show Overview
```javascript
function showOverview() {
// 1. Load all tasks
const taskFiles = Glob('.workflow/.idaw/tasks/IDAW-*.json') || [];
if (taskFiles.length === 0) {
console.log('No IDAW tasks found. Use /idaw:add to create tasks.');
return;
}
const tasks = taskFiles.map(f => JSON.parse(Read(f)));
// 2. Group by status
const byStatus = {
pending: tasks.filter(t => t.status === 'pending'),
in_progress: tasks.filter(t => t.status === 'in_progress'),
completed: tasks.filter(t => t.status === 'completed'),
failed: tasks.filter(t => t.status === 'failed'),
skipped: tasks.filter(t => t.status === 'skipped')
};
// 3. Display task summary table
console.log('# IDAW Tasks\n');
console.log('| ID | Title | Type | Priority | Status |');
console.log('|----|-------|------|----------|--------|');
// Sort: priority ASC, then ID ASC
const sorted = [...tasks].sort((a, b) => {
if (a.priority !== b.priority) return a.priority - b.priority;
return a.id.localeCompare(b.id);
});
for (const t of sorted) {
const type = t.task_type || '(infer)';
console.log(`| ${t.id} | ${t.title.substring(0, 40)} | ${type} | ${t.priority} | ${t.status} |`);
}
console.log(`\nTotal: ${tasks.length} | Pending: ${byStatus.pending.length} | Completed: ${byStatus.completed.length} | Failed: ${byStatus.failed.length}`);
// 4. Show latest session (if any)
const sessionDirs = Glob('.workflow/.idaw/sessions/IDA-*/session.json') || [];
if (sessionDirs.length > 0) {
// Sort by modification time (newest first) — Glob returns sorted by mtime
const latestSessionFile = sessionDirs[0];
const session = JSON.parse(Read(latestSessionFile));
console.log(`\n## Latest Session: ${session.session_id}`);
console.log(`Status: ${session.status} | Tasks: ${session.tasks?.length || 0}`);
console.log(`Completed: ${session.completed?.length || 0} | Failed: ${session.failed?.length || 0} | Skipped: ${session.skipped?.length || 0}`);
}
}
```
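The grouping step reduces to simple status filters; a minimal sketch with hypothetical tasks:

```javascript
// Minimal grouping sketch with hypothetical tasks.
const tasks = [
  { id: 'IDAW-001', status: 'completed' },
  { id: 'IDAW-002', status: 'pending' },
  { id: 'IDAW-003', status: 'pending' }
];
const byStatus = {
  pending: tasks.filter(t => t.status === 'pending'),
  completed: tasks.filter(t => t.status === 'completed'),
  failed: tasks.filter(t => t.status === 'failed')
};
console.log(`Total: ${tasks.length} | Pending: ${byStatus.pending.length} | Completed: ${byStatus.completed.length} | Failed: ${byStatus.failed.length}`);
// Total: 3 | Pending: 2 | Completed: 1 | Failed: 0
```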
### Phase 3: Show Specific Session
```javascript
function showSession(sessionId) {
const sessionFile = `.workflow/.idaw/sessions/${sessionId}/session.json`;
const progressFile = `.workflow/.idaw/sessions/${sessionId}/progress.md`;
// Try reading session
try {
const session = JSON.parse(Read(sessionFile));
console.log(`# IDAW Session: ${session.session_id}\n`);
console.log(`Status: ${session.status}`);
console.log(`Created: ${session.created_at}`);
console.log(`Updated: ${session.updated_at}`);
console.log(`Current Task: ${session.current_task || 'none'}\n`);
// Task detail table
console.log('| ID | Title | Status | Commit |');
console.log('|----|-------|--------|--------|');
for (const taskId of session.tasks) {
const taskFile = `.workflow/.idaw/tasks/${taskId}.json`;
try {
const task = JSON.parse(Read(taskFile));
const commit = task.execution?.git_commit?.substring(0, 7) || '-';
console.log(`| ${task.id} | ${task.title.substring(0, 40)} | ${task.status} | ${commit} |`);
} catch {
console.log(`| ${taskId} | (file not found) | unknown | - |`);
}
}
console.log(`\nCompleted: ${session.completed?.length || 0} | Failed: ${session.failed?.length || 0} | Skipped: ${session.skipped?.length || 0}`);
// Show progress.md if exists
try {
const progress = Read(progressFile);
console.log('\n---\n');
console.log(progress);
} catch {
// No progress file yet
}
} catch {
// Session not found — try listing all sessions
console.log(`Session "${sessionId}" not found.\n`);
listSessions();
}
}
```
### Phase 4: List All Sessions
```javascript
function listSessions() {
  const sessionFiles = Glob('.workflow/.idaw/sessions/IDA-*/session.json') || [];
  if (sessionFiles.length === 0) {
    console.log('No IDAW sessions found. Use /idaw:run to start execution.');
    return;
  }
  console.log('# IDAW Sessions\n');
  console.log('| Session ID | Status | Tasks | Completed | Failed |');
  console.log('|------------|--------|-------|-----------|--------|');
  for (const f of sessionFiles) {
    try {
      const session = JSON.parse(Read(f));
      console.log(`| ${session.session_id} | ${session.status} | ${session.tasks?.length || 0} | ${session.completed?.length || 0} | ${session.failed?.length || 0} |`);
    } catch {
      // Skip malformed
    }
  }
  console.log('\nUse /idaw:status <session-id> for details.');
}
```
## Examples
```bash
# Show overview (pending tasks + latest session)
/idaw:status
# Show specific session details
/idaw:status IDA-auth-fix-20260301
# Output example:
# IDAW Tasks
#
# | ID       | Title                   | Type    | Priority | Status    |
# |----------|-------------------------|---------|----------|-----------|
# | IDAW-001 | Fix auth token refresh  | bugfix  | 1        | completed |
# | IDAW-002 | Add rate limiting       | feature | 2        | pending   |
# | IDAW-003 | Refactor payment module | refact  | 3        | pending   |
#
# Total: 3 | Pending: 2 | Completed: 1 | Failed: 0
```
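For reference, the reads in Phases 2-4 imply a session.json of roughly the following shape. This is a hypothetical sample reconstructed from the field accesses above, not a canonical schema:

```javascript
// Hypothetical sample of session.json, inferred from the field accesses in
// Phases 2-4 above; treat every field name as an assumption.
const session = {
  session_id: 'IDA-auth-fix-20260301',
  status: 'active',
  created_at: '2026-03-01T12:00:00Z',
  updated_at: '2026-03-01T12:30:00Z',
  current_task: 'IDAW-002',
  tasks: ['IDAW-001', 'IDAW-002', 'IDAW-003'], // resolved via .workflow/.idaw/tasks/{id}.json
  completed: ['IDAW-001'],
  failed: [],
  skipped: []
};

// The Phase 4 table row derives directly from these fields:
const row = `| ${session.session_id} | ${session.status} | ${session.tasks.length} | ${session.completed.length} | ${session.failed.length} |`;
```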

@@ -252,6 +252,17 @@ await updateDiscoveryState(outputDir, {
const hasHighPriority = issues.some(i => i.priority === 'critical' || i.priority === 'high');
const hasMediumFindings = prioritizedFindings.some(f => f.priority === 'medium');
// Auto mode: auto-select recommended action
if (autoYes) {
  if (hasHighPriority) {
    await appendJsonl('.workflow/issues/issues.jsonl', issues);
    console.log(`Exported ${issues.length} issues. Run /issue:plan to continue.`);
  } else {
    console.log('Discovery complete. No significant issues found.');
  }
  return;
}
await AskUserQuestion({
questions: [{
question: `Discovery complete: ${issues.length} issues generated, ${prioritizedFindings.length} total findings. What would you like to do next?`,

@@ -152,6 +152,12 @@ if (!QUEUE_ID) {
return;
}
// Auto mode: auto-select if exactly one active queue
if (autoYes && activeQueues.length === 1) {
  QUEUE_ID = activeQueues[0].id;
  console.log(`Auto-selected queue: ${QUEUE_ID}`);
} else {
  // Display and prompt user
  console.log('\nAvailable Queues:');
  console.log('ID'.padEnd(22) + 'Status'.padEnd(12) + 'Progress'.padEnd(12) + 'Issues');
@@ -176,6 +182,7 @@ if (!QUEUE_ID) {
  });
  QUEUE_ID = answer['Queue'];
} // end else (multi-queue prompt)
}
console.log(`\n## Executing Queue: ${QUEUE_ID}\n`);
@@ -203,6 +210,13 @@ console.log(`
- Parallel in batch 1: ${dag.parallel_batches[0]?.length || 0}
`);
// Auto mode: use recommended defaults (Codex + Execute + Worktree)
if (autoYes) {
  var executor = 'codex';
  var isDryRun = false;
  var useWorktree = true;
} else {
  // Interactive selection via AskUserQuestion
  const answer = AskUserQuestion({
    questions: [
@@ -237,9 +251,10 @@ const answer = AskUserQuestion({
    ]
  });
  const executor = answer['Executor'].toLowerCase().split(' ')[0]; // codex|gemini|agent
  const isDryRun = answer['Mode'].includes('Dry-run');
  const useWorktree = answer['Worktree'].includes('Yes');
  var executor = answer['Executor'].toLowerCase().split(' ')[0]; // codex|gemini|agent
  var isDryRun = answer['Mode'].includes('Dry-run');
  var useWorktree = answer['Worktree'].includes('Yes');
} // end else (interactive selection)
// Dry run mode
if (isDryRun) {
@@ -451,6 +466,10 @@ if (refreshedDag.ready_count > 0) {
if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_count === refreshedDag.total) {
console.log('\n## All Solutions Completed - Worktree Cleanup');
// Auto mode: Create PR (recommended)
if (autoYes) {
  var mergeAction = 'Create PR';
} else {
  const answer = AskUserQuestion({
    questions: [{
      question: `Queue complete. What to do with worktree branch "${worktreeBranch}"?`,
@@ -463,15 +482,17 @@ if (useWorktree && refreshedDag.ready_count === 0 && refreshedDag.completed_coun
      ]
    }]
  });
  var mergeAction = answer['Merge'];
}
const repoRoot = Bash('git rev-parse --show-toplevel').trim();
if (answer['Merge'].includes('Create PR')) {
if (mergeAction.includes('Create PR')) {
Bash(`git -C "${worktreePath}" push -u origin "${worktreeBranch}"`);
Bash(`gh pr create --title "Queue ${dag.queue_id}" --body "Issue queue execution - all solutions completed" --head "${worktreeBranch}"`);
Bash(`git worktree remove "${worktreePath}"`);
console.log(`PR created for branch: ${worktreeBranch}`);
} else if (answer['Merge'].includes('Merge to main')) {
} else if (mergeAction.includes('Merge to main')) {
// Check main is clean
const mainDirty = Bash('git status --porcelain').trim();
if (mainDirty) {

@@ -154,8 +154,8 @@ Phase 6: Bind Solution
├─ Update issue status to 'planned'
└─ Returns: SOL-{issue-id}-{uid}
Phase 7: Next Steps
└─ Offer: Form queue | Convert another idea | View details | Done
Phase 7: Next Steps (skip in auto mode)
└─ Auto mode: complete directly | Interactive: Form queue | Convert another | Done
```
## Context Enrichment Logic

@@ -263,6 +263,14 @@ for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
for (const pending of pendingSelections) {
  if (pending.solutions.length === 0) continue;
  // Auto mode: auto-bind first (highest-ranked) solution
  if (autoYes) {
    const solId = pending.solutions[0].id;
    Bash(`ccw issue bind ${pending.issue_id} ${solId}`);
    console.log(`${pending.issue_id}: ${solId} bound (auto)`);
    continue;
  }
  const options = pending.solutions.slice(0, 4).map(sol => ({
    label: `${sol.id} (${sol.task_count} tasks)`,
    description: sol.description || sol.approach || 'No description'

@@ -273,6 +273,17 @@ const allClarifications = results.flatMap((r, i) =>
```javascript
if (allClarifications.length > 0) {
  for (const clarification of allClarifications) {
    // Auto mode: use recommended resolution (first option)
    if (autoYes) {
      const autoAnswer = clarification.options[0]?.label || 'skip';
      Task(
        subagent_type="issue-queue-agent",
        resume=clarification.agent_id,
        prompt=`Conflict ${clarification.conflict_id} resolved: ${autoAnswer}`
      );
      continue;
    }
    // Present to user via AskUserQuestion
    const answer = AskUserQuestion({
      questions: [{
@@ -345,6 +356,14 @@ ccw issue queue list --brief
**AskUserQuestion:**
```javascript
// Auto mode: merge into existing queue
if (autoYes) {
  Bash(`ccw issue queue merge ${newQueueId} --queue ${activeQueueId}`);
  Bash(`ccw issue queue delete ${newQueueId}`);
  console.log(`Auto-merged new queue into ${activeQueueId}`);
  return;
}
AskUserQuestion({
  questions: [{
    question: "Active queue exists. How would you like to proceed?",

@@ -496,7 +496,7 @@ if (fileExists(projectPath)) {
}
// Update specs/*.md: remove learnings referencing deleted sessions
const guidelinesPath = '.workflow/specs/*.md'
const guidelinesPath = '.ccw/specs/*.md'
if (fileExists(guidelinesPath)) {
const guidelines = JSON.parse(Read(guidelinesPath))
const deletedSessionIds = results.deleted

@@ -208,7 +208,7 @@ Task(
### Project Context (MANDATORY)
Read and incorporate:
- \`.workflow/project-tech.json\` (if exists): Technology stack, architecture
- \`.workflow/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS on sub-domain splitting and plan structure
- \`.ccw/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS on sub-domain splitting and plan structure
### Input Requirements
${taskDescription}
@@ -357,7 +357,7 @@ subDomains.map(sub =>
### Project Context (MANDATORY)
Read and incorporate:
- \`.workflow/project-tech.json\` (if exists): Technology stack, architecture
- \`.workflow/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS
- \`.ccw/specs/*.md\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS
## Dual Output Tasks

@@ -632,6 +632,14 @@ Why is config value None during update?
**Auto-sync**: Run `/workflow:session:sync -y "{summary}"` to update specs/*.md and project-tech.
```javascript
// Auto mode: skip expansion question, complete session directly
if (autoYes) {
  console.log('Debug session complete. Auto mode: skipping expansion.');
  return;
}
```
After completion, ask the user whether to expand the session into issues (test/enhance/refactor/doc); for each selected dimension, call `/issue:new "{summary} - {dimension}"`.
---

@@ -11,7 +11,7 @@ examples:
## Overview
Interactive multi-round wizard that analyzes the current project (via `project-tech.json`) and asks targeted questions to populate `.workflow/specs/*.md` with coding conventions, constraints, and quality rules.
Interactive multi-round wizard that analyzes the current project (via `project-tech.json`) and asks targeted questions to populate `.ccw/specs/*.md` with coding conventions, constraints, and quality rules.
**Design Principle**: Questions are dynamically generated based on the project's tech stack, architecture, and patterns — not generic boilerplate.
@@ -55,7 +55,7 @@ Step 5: Display Summary
```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
bash(test -f .ccw/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
```
**If TECH_NOT_FOUND**: Exit with message
@@ -332,9 +332,16 @@ For each category of collected answers, append rules to the corresponding spec M
```javascript
// Helper: append rules to a spec MD file with category support
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
function appendRulesToSpecFile(filePath, rules, defaultCategory = 'general') {
  if (rules.length === 0) return
  // Ensure .ccw/specs/ directory exists
  const specDir = path.dirname(filePath)
  if (!fs.existsSync(specDir)) {
    fs.mkdirSync(specDir, { recursive: true })
  }
  // Check if file exists
  if (!file_exists(filePath)) {
    // Create file with frontmatter including category
@@ -360,19 +367,19 @@ keywords: [${defaultCategory}, ${filePath.includes('conventions') ? 'convention'
Write(filePath, newContent)
}
// Write conventions (general category)
appendRulesToSpecFile('.workflow/specs/coding-conventions.md',
// Write conventions (general category) - use .ccw/specs/ (same as frontend/backend)
appendRulesToSpecFile('.ccw/specs/coding-conventions.md',
[...newCodingStyle, ...newNamingPatterns, ...newFileStructure, ...newDocumentation],
'general')
// Write constraints (planning category)
appendRulesToSpecFile('.workflow/specs/architecture-constraints.md',
appendRulesToSpecFile('.ccw/specs/architecture-constraints.md',
[...newArchitecture, ...newTechStack, ...newPerformance, ...newSecurity],
'planning')
// Write quality rules (execution category)
if (newQualityRules.length > 0) {
const qualityPath = '.workflow/specs/quality-rules.md'
const qualityPath = '.ccw/specs/quality-rules.md'
if (!file_exists(qualityPath)) {
Write(qualityPath, `---
title: Quality Rules

@@ -54,7 +54,7 @@ Step 1: Gather Requirements (Interactive)
└─ Ask content (rule text)
Step 2: Determine Target File
├─ specs dimension → .workflow/specs/coding-conventions.md or architecture-constraints.md
├─ specs dimension → .ccw/specs/coding-conventions.md or architecture-constraints.md
└─ personal dimension → ~/.ccw/specs/personal/ or .ccw/specs/personal/
Step 3: Write Spec
@@ -109,7 +109,7 @@ if (!dimension) {
options: [
{
label: "Project Spec",
description: "Coding conventions, constraints, quality rules for this project (stored in .workflow/specs/)"
description: "Coding conventions, constraints, quality rules for this project (stored in .ccw/specs/)"
},
{
label: "Personal Spec",
@@ -234,19 +234,19 @@ let targetFile: string
let targetDir: string
if (dimension === 'specs') {
// Project specs
targetDir = '.workflow/specs'
// Project specs - use .ccw/specs/ (same as frontend/backend spec-index-builder)
targetDir = '.ccw/specs'
if (isConstraint) {
targetFile = path.join(targetDir, 'architecture-constraints.md')
} else {
targetFile = path.join(targetDir, 'coding-conventions.md')
}
} else {
// Personal specs
// Personal specs - use .ccw/personal/ (same as backend spec-index-builder)
if (scope === 'global') {
targetDir = path.join(os.homedir(), '.ccw', 'specs', 'personal')
targetDir = path.join(os.homedir(), '.ccw', 'personal')
} else {
targetDir = path.join('.ccw', 'specs', 'personal')
targetDir = path.join('.ccw', 'personal')
}
// Create category-based filename
@@ -333,7 +333,7 @@ Use 'ccw spec load --category ${category}' to load specs by category
### Project Specs (dimension: specs)
```
.workflow/specs/
.ccw/specs/
├── coding-conventions.md ← conventions, learnings
├── architecture-constraints.md ← constraints
└── quality-rules.md ← quality rules
@@ -341,14 +341,14 @@ Use 'ccw spec load --category ${category}' to load specs by category
### Personal Specs (dimension: personal)
```
# Global (~/.ccw/specs/personal/)
~/.ccw/specs/personal/
# Global (~/.ccw/personal/)
~/.ccw/personal/
├── conventions.md ← personal conventions (all projects)
├── constraints.md ← personal constraints (all projects)
└── learnings.md ← personal learnings (all projects)
# Project-local (.ccw/specs/personal/)
.ccw/specs/personal/
# Project-local (.ccw/personal/)
.ccw/personal/
├── conventions.md ← personal conventions (this project only)
├── constraints.md ← personal constraints (this project only)
└── learnings.md ← personal learnings (this project only)

@@ -11,7 +11,7 @@ examples:
# Workflow Init Command (/workflow:init)
## Overview
Initialize `.workflow/project-tech.json` and `.workflow/specs/*.md` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
Initialize `.workflow/project-tech.json` and `.ccw/specs/*.md` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
**Dual File System**:
- `project-tech.json`: Auto-generated technical analysis (stack, architecture, components)
@@ -58,7 +58,7 @@ Analysis Flow:
Output:
├─ .workflow/project-tech.json (+ .backup if regenerate)
└─ .workflow/specs/*.md (scaffold or configured, unless --skip-specs)
└─ .ccw/specs/*.md (scaffold or configured, unless --skip-specs)
```
## Implementation
@@ -75,14 +75,14 @@ const skipSpecs = $ARGUMENTS.includes('--skip-specs')
```bash
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
bash(test -f .ccw/specs/coding-conventions.md && echo "SPECS_EXISTS" || echo "SPECS_NOT_FOUND")
```
**If BOTH_EXIST and no --regenerate**: Exit early
```
Project already initialized:
- Tech analysis: .workflow/project-tech.json
- Guidelines: .workflow/specs/*.md
- Guidelines: .ccw/specs/*.md
Use /workflow:init --regenerate to rebuild tech analysis
Use /workflow:session:solidify to add guidelines
@@ -171,7 +171,7 @@ Project root: ${projectRoot}
// Skip spec initialization if --skip-specs flag is provided
if (!skipSpecs) {
// Initialize spec system if not already initialized
const specsCheck = Bash('test -f .workflow/specs/coding-conventions.md && echo EXISTS || echo NOT_FOUND')
const specsCheck = Bash('test -f .ccw/specs/coding-conventions.md && echo EXISTS || echo NOT_FOUND')
if (specsCheck.includes('NOT_FOUND')) {
console.log('Initializing spec system...')
Bash('ccw spec init')
@@ -186,7 +186,7 @@ if (!skipSpecs) {
```javascript
const projectTech = JSON.parse(Read('.workflow/project-tech.json'));
const specsInitialized = !skipSpecs && file_exists('.workflow/specs/coding-conventions.md');
const specsInitialized = !skipSpecs && file_exists('.ccw/specs/coding-conventions.md');
console.log(`
Project initialized successfully
@@ -206,7 +206,7 @@ Components: ${projectTech.overview.key_components.length} core modules
---
Files created:
- Tech analysis: .workflow/project-tech.json
${!skipSpecs ? `- Specs: .workflow/specs/ ${specsInitialized ? '(initialized)' : ''}` : '- Specs: (skipped via --skip-specs)'}
${!skipSpecs ? `- Specs: .ccw/specs/ ${specsInitialized ? '(initialized)' : ''}` : '- Specs: (skipped via --skip-specs)'}
${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
`);
```
@@ -222,7 +222,7 @@ if (skipSpecs) {
Next steps:
- Use /workflow:init-specs to create individual specs
- Use /workflow:init-guidelines to configure specs interactively
- Use /workflow:plan to start planning
- Use /workflow-plan to start planning
`);
return;
}
@@ -260,7 +260,7 @@ Next steps:
- Use /workflow:init-specs to create individual specs
- Use /workflow:init-guidelines to configure specs interactively
- Use ccw spec load to import specs from external sources
- Use /workflow:plan to start planning
- Use /workflow-plan to start planning
`);
}
} else {
@@ -271,7 +271,7 @@ Next steps:
- Use /workflow:init-specs to create additional specs
- Use /workflow:init-guidelines --reset to reconfigure
- Use /workflow:session:solidify to add individual rules
- Use /workflow:plan to start planning
- Use /workflow-plan to start planning
`);
}
```

@@ -923,7 +923,7 @@ Single evolving state file — each phase writes its section:
- Already have a completed implementation session (WFS-*)
- Only need unit/component level tests
**Use `workflow-tdd` skill when:**
**Use `workflow-tdd-plan` skill when:**
- Building new features with test-first approach
- Red-Green-Refactor cycle

@@ -39,7 +39,7 @@ Closed-loop tech debt lifecycle: **Discover → Assess → Plan → Refactor →
**vs Existing Commands**:
- **workflow:lite-fix**: Single bug fix, no systematic debt analysis
- **workflow:plan + execute**: Generic implementation, no debt-aware prioritization or regression validation
- **workflow-plan + execute**: Generic implementation, no debt-aware prioritization or regression validation
- **This command**: Full debt lifecycle — discovery through multi-dimensional scan, prioritized execution with per-item regression validation
### Value Proposition

@@ -534,7 +534,7 @@ ${selectedMode === 'progressive' ? `**Progressive Mode**:
| Scenario | Recommended Command |
|----------|-------------------|
| Strategic planning, need issue tracking | `/workflow:roadmap-with-file` |
| Quick task breakdown, immediate execution | `/workflow:lite-plan` |
| Quick task breakdown, immediate execution | `/workflow-lite-plan` |
| Collaborative multi-agent planning | `/workflow:collaborative-plan-with-file` |
| Full specification documents | `spec-generator` skill |
| Code implementation from existing plan | `/workflow:lite-execute` |

@@ -57,5 +57,5 @@ Session WFS-user-auth resumed
- Status: active
- Paused at: 2025-09-15T14:30:00Z
- Resumed at: 2025-09-15T15:45:00Z
- Ready for: /workflow:execute
- Ready for: /workflow-execute
```

@@ -18,7 +18,7 @@ When `--yes` or `-y`: Auto-categorize and add guideline without confirmation.
## Overview
Crystallizes ephemeral session context (insights, decisions, constraints) into permanent project guidelines stored in `.workflow/specs/*.md`. This ensures valuable learnings persist across sessions and inform future planning.
Crystallizes ephemeral session context (insights, decisions, constraints) into permanent project guidelines stored in `.ccw/specs/*.md`. This ensures valuable learnings persist across sessions and inform future planning.
## Use Cases
@@ -112,8 +112,10 @@ ELSE (convention/constraint/learning):
### Step 1: Ensure Guidelines File Exists
**Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)**
```bash
bash(test -f .workflow/specs/coding-conventions.md && echo "EXISTS" || echo "NOT_FOUND")
bash(test -f .ccw/specs/coding-conventions.md && echo "EXISTS" || echo "NOT_FOUND")
```
**If NOT_FOUND**, initialize spec system:
@@ -187,9 +189,10 @@ function buildEntry(rule, type, category, sessionId) {
```javascript
// Map type+category to target spec file
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
const specFileMap = {
convention: '.workflow/specs/coding-conventions.md',
constraint: '.workflow/specs/architecture-constraints.md'
convention: '.ccw/specs/coding-conventions.md',
constraint: '.ccw/specs/architecture-constraints.md'
}
if (type === 'convention' || type === 'constraint') {
@@ -204,7 +207,8 @@ if (type === 'convention' || type === 'constraint') {
}
} else if (type === 'learning') {
// Learnings go to coding-conventions.md as a special section
const targetFile = '.workflow/specs/coding-conventions.md'
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
const targetFile = '.ccw/specs/coding-conventions.md'
const existing = Read(targetFile)
const entry = buildEntry(rule, type, category, sessionId)
const learningText = `- [learning/${category}] ${entry.insight} (${entry.date})`
@@ -228,7 +232,7 @@ Type: ${type}
Category: ${category}
Rule: "${rule}"
Location: .workflow/specs/*.md -> ${type}s.${category}
Location: .ccw/specs/*.md -> ${type}s.${category}
Total ${type}s in ${category}: ${count}
```
@@ -373,13 +377,9 @@ AskUserQuestion({
/workflow:session:solidify "Use async/await instead of callbacks" --type convention --category coding_style
```
Result in `specs/*.md`:
```json
{
"conventions": {
"coding_style": ["Use async/await instead of callbacks"]
}
}
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [coding_style] Use async/await instead of callbacks
```
### Add an Architectural Constraint
@@ -387,13 +387,9 @@ Result in `specs/*.md`:
/workflow:session:solidify "No direct DB access from controllers" --type constraint --category architecture
```
Result:
```json
{
"constraints": {
"architecture": ["No direct DB access from controllers"]
}
}
Result in `.ccw/specs/architecture-constraints.md`:
```markdown
- [architecture] No direct DB access from controllers
```
### Capture a Session Learning
@@ -401,18 +397,9 @@ Result:
/workflow:session:solidify "Cache invalidation requires event sourcing for consistency" --type learning
```
Result:
```json
{
"learnings": [
{
"date": "2024-12-28",
"session_id": "WFS-auth-feature",
"insight": "Cache invalidation requires event sourcing for consistency",
"category": "architecture"
}
]
}
Result in `.ccw/specs/coding-conventions.md`:
```markdown
- [learning/architecture] Cache invalidation requires event sourcing for consistency (2024-12-28)
```
### Compress Recent Memories

@@ -27,7 +27,7 @@ The `--type` parameter classifies sessions for CCW dashboard organization:
|------|-------------|-------------|
| `workflow` | Standard implementation (default) | `workflow-plan` skill |
| `review` | Code review sessions | `review-cycle` skill |
| `tdd` | TDD-based development | `workflow-tdd` skill |
| `tdd` | TDD-based development | `workflow-tdd-plan` skill |
| `test` | Test generation/fix sessions | `workflow-test-fix` skill |
| `docs` | Documentation sessions | `memory-manage` skill |
@@ -44,7 +44,7 @@ ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs
```bash
# Check if project state exists (both files required)
bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f .workflow/specs/*.md && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
bash(test -f .ccw/specs/*.md && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```
**If either NOT_FOUND**, delegate to `/workflow:init`:
@@ -60,7 +60,7 @@ Skill(skill="workflow:init");
- If BOTH_EXIST: `PROJECT_STATE: initialized`
- If NOT_FOUND: Calls `/workflow:init` → creates:
- `.workflow/project-tech.json` with full technical analysis
- `.workflow/specs/*.md` with empty scaffold
- `.ccw/specs/*.md` with empty scaffold
**Note**: `/workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.

@@ -124,7 +124,7 @@ Tech [${detectCategory(summary)}]:
${techEntry.title}
Target files:
.workflow/specs/*.md
.ccw/specs/*.md
.workflow/project-tech.json
`)
@@ -138,12 +138,13 @@ if (!autoYes) {
```javascript
// ── Update specs/*.md ──
// Uses .ccw/specs/ directory (same as frontend/backend spec-index-builder)
if (guidelineUpdates.length > 0) {
// Map guideline types to spec files
const specFileMap = {
convention: '.workflow/specs/coding-conventions.md',
constraint: '.workflow/specs/architecture-constraints.md',
learning: '.workflow/specs/coding-conventions.md' // learnings appended to conventions
convention: '.ccw/specs/coding-conventions.md',
constraint: '.ccw/specs/architecture-constraints.md',
learning: '.ccw/specs/coding-conventions.md' // learnings appended to conventions
}
for (const g of guidelineUpdates) {

@@ -1,6 +1,6 @@
---
name: design-sync
description: Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption
description: Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow-plan consumption
argument-hint: --session <session_id> [--selected-prototypes "<list>"]
allowed-tools: Read(*), Write(*), Edit(*), TodoWrite(*), Glob(*), Bash(*)
---
@@ -351,10 +351,10 @@ Updated artifacts:
✓ {role_count} role analysis.md files - Design system references
✓ ui-designer/design-system-reference.md - Design system reference guide
Design system assets ready for /workflow:plan:
Design system assets ready for /workflow-plan:
- design-tokens.json | style-guide.md | {prototype_count} reference prototypes
Next: /workflow:plan [--agent] "<task description>"
Next: /workflow-plan [--agent] "<task description>"
The plan phase will automatically discover and utilize the design system.
```
@@ -394,7 +394,7 @@ Next: /workflow:plan [--agent] "<task description>"
@../../{design_id}/prototypes/{prototype}.html
```
## Integration with /workflow:plan
## Integration with /workflow-plan
After this update, `workflow-plan` skill will discover design assets through:

@@ -606,7 +606,7 @@ Total workflow time: ~{estimate_total_time()} minutes
{IF session_id:
2. Create implementation tasks:
/workflow:plan --session {session_id}
/workflow-plan --session {session_id}
3. Generate tests (if needed):
/workflow:test-gen {session_id}
@@ -741,5 +741,5 @@ Design Quality:
- Design token driven
- {generated_count} assembled prototypes
Next: [/workflow:execute] OR [Open compare.html → /workflow:plan]
Next: [/workflow-execute] OR [Open compare.html → /workflow-plan]
```

@@ -477,8 +477,8 @@ ${recommendations.map(r => \`- ${r}\`).join('\\n')}
const projectTech = file_exists('.workflow/project-tech.json')
? JSON.parse(Read('.workflow/project-tech.json')) : null
// Read specs/*.md (if exists)
const projectGuidelines = file_exists('.workflow/specs/*.md')
? JSON.parse(Read('.workflow/specs/*.md')) : null
const projectGuidelines = file_exists('.ccw/specs/*.md')
? JSON.parse(Read('.ccw/specs/*.md')) : null
```
```javascript

@@ -43,19 +43,19 @@ function getExistingCommandSources() {
// These commands were migrated to skills but references were never updated
const COMMAND_TO_SKILL_MAP = {
// workflow commands → skills
'/workflow:plan': 'workflow-plan',
'/workflow:execute': 'workflow-execute',
'/workflow:lite-plan': 'workflow-lite-plan',
'/workflow-plan': 'workflow-plan',
'/workflow-execute': 'workflow-execute',
'/workflow-lite-plan': 'workflow-lite-plan',
'/workflow:lite-execute': 'workflow-lite-plan', // lite-execute is part of lite-plan skill
'/workflow:lite-fix': 'workflow-lite-plan', // lite-fix is part of lite-plan skill
'/workflow:multi-cli-plan': 'workflow-multi-cli-plan',
'/workflow:plan-verify': 'workflow-plan', // plan-verify is a phase of workflow-plan
'/workflow-multi-cli-plan': 'workflow-multi-cli-plan',
'/workflow-plan-verify': 'workflow-plan', // plan-verify is a phase of workflow-plan
'/workflow:replan': 'workflow-plan', // replan is a phase of workflow-plan
'/workflow:tdd-plan': 'workflow-tdd',
'/workflow:tdd-verify': 'workflow-tdd', // tdd-verify is a phase of workflow-tdd
'/workflow:test-fix-gen': 'workflow-test-fix',
'/workflow-tdd-plan': 'workflow-tdd-plan',
'/workflow-tdd-verify': 'workflow-tdd-plan', // tdd-verify is a phase of workflow-tdd-plan
'/workflow-test-fix': 'workflow-test-fix',
'/workflow:test-gen': 'workflow-test-fix',
'/workflow:test-cycle-execute': 'workflow-test-fix',
'/workflow-test-fix': 'workflow-test-fix',
'/workflow:review': 'review-cycle',
'/workflow:review-session-cycle': 'review-cycle',
'/workflow:review-module-cycle': 'review-cycle',
@@ -70,8 +70,8 @@ const COMMAND_TO_SKILL_MAP = {
'/workflow:tools:context-gather': 'workflow-plan',
'/workflow:tools:conflict-resolution': 'workflow-plan',
'/workflow:tools:task-generate-agent': 'workflow-plan',
'/workflow:tools:task-generate-tdd': 'workflow-tdd',
'/workflow:tools:tdd-coverage-analysis': 'workflow-tdd',
'/workflow:tools:task-generate-tdd': 'workflow-tdd-plan',
'/workflow:tools:tdd-coverage-analysis': 'workflow-tdd-plan',
'/workflow:tools:test-concept-enhanced': 'workflow-test-fix',
'/workflow:tools:test-context-gather': 'workflow-test-fix',
'/workflow:tools:test-task-generate': 'workflow-test-fix',
@@ -319,17 +319,17 @@ function fixBrokenReferences() {
// Pattern: `/command:name` references that point to non-existent commands
// These are documentation references - update to point to skill names
const proseRefFixes = {
'`/workflow:plan`': '`workflow-plan` skill',
'`/workflow:execute`': '`workflow-execute` skill',
'`/workflow-plan`': '`workflow-plan` skill',
'`/workflow-execute`': '`workflow-execute` skill',
'`/workflow:lite-execute`': '`workflow-lite-plan` skill',
'`/workflow:lite-fix`': '`workflow-lite-plan` skill',
'`/workflow:plan-verify`': '`workflow-plan` skill (plan-verify phase)',
'`/workflow-plan-verify`': '`workflow-plan` skill (plan-verify phase)',
'`/workflow:replan`': '`workflow-plan` skill (replan phase)',
'`/workflow:tdd-plan`': '`workflow-tdd` skill',
'`/workflow:tdd-verify`': '`workflow-tdd` skill (tdd-verify phase)',
'`/workflow:test-fix-gen`': '`workflow-test-fix` skill',
'`/workflow-tdd-plan`': '`workflow-tdd-plan` skill',
'`/workflow-tdd-verify`': '`workflow-tdd-plan` skill (tdd-verify phase)',
'`/workflow-test-fix`': '`workflow-test-fix` skill',
'`/workflow:test-gen`': '`workflow-test-fix` skill',
'`/workflow:test-cycle-execute`': '`workflow-test-fix` skill',
'`/workflow-test-fix`': '`workflow-test-fix` skill',
'`/workflow:review`': '`review-cycle` skill',
'`/workflow:review-session-cycle`': '`review-cycle` skill',
'`/workflow:review-module-cycle`': '`review-cycle` skill',
@@ -346,8 +346,8 @@ function fixBrokenReferences() {
'`/workflow:tools:task-generate`': '`workflow-plan` skill (task-generate phase)',
'`/workflow:ui-design:auto`': '`/workflow:ui-design:explore-auto`',
'`/workflow:ui-design:update`': '`/workflow:ui-design:generate`',
'`/workflow:multi-cli-plan`': '`workflow-multi-cli-plan` skill',
'`/workflow:lite-plan`': '`workflow-lite-plan` skill',
'`/workflow-multi-cli-plan`': '`workflow-multi-cli-plan` skill',
'`/workflow-lite-plan`': '`workflow-lite-plan` skill',
'`/cli:plan`': '`workflow-lite-plan` skill',
'`/test-cycle-execute`': '`workflow-test-fix` skill',
};

@@ -123,7 +123,7 @@
| **Command invocation syntax** | Convert to relative paths of Phase files | `/workflow:session:start` → `phases/01-session-discovery.md` |
| **Command path references** | Convert to paths within the Skill directory | `commands/workflow/tools/` → `phases/` |
| **Cross-command references** | Convert to inter-Phase file references | `workflow-plan` skill (context-gather phase) → `phases/02-context-gathering.md` |
| **Command argument docs** | Remove, or move into Phase Prerequisites | `usage: /workflow:plan [session-id]` → documented in Phase Prerequisites |
| **Command argument docs** | Remove, or move into Phase Prerequisites | `usage: /workflow-plan [session-id]` → documented in Phase Prerequisites |
**Conversion example**

@@ -373,7 +373,7 @@ Initial → Phase 1 Mode Routing (completed)
- `/workflow:session:start` - Start a new workflow session (optional, brainstorm creates its own)
**Follow-ups** (after brainstorm completes):
- `/workflow:plan --session {sessionId}` - Generate implementation plan
- `/workflow-plan --session {sessionId}` - Generate implementation plan
- `/workflow:brainstorm:synthesis --session {sessionId}` - Run synthesis standalone (if skipped)
## Reference Information

@@ -469,7 +469,7 @@ ${selected_roles.length > 1 ? `
- Run synthesis: /brainstorm --session ${session_id} (auto mode)
` : `
- Clarify insights: /brainstorm --session ${session_id} (auto mode)
- Generate plan: /workflow:plan --session ${session_id}
- Generate plan: /workflow-plan --session ${session_id}
`}
```

@@ -744,7 +744,7 @@ Write(context_pkg_path, JSON.stringify(context_pkg))
**Changelog**: .brainstorming/synthesis-changelog.md
### Next Steps
PROCEED: `/workflow:plan --session {session-id}`
PROCEED: `/workflow-plan --session {session-id}`
```
## Output

@@ -341,7 +341,7 @@
},
{
"name": "execute",
"command": "/workflow:execute",
"command": "/workflow-execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[-y|--yes] [--resume-session=\\\"session-id\\\"]",
"category": "workflow",
@@ -396,7 +396,7 @@
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"command": "/workflow-lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution handoff to lite-execute after user confirmation",
"arguments": "[-y|--yes] [-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
@@ -406,8 +406,8 @@
"source": "../../commands/workflow/lite-plan.md"
},
{
"name": "workflow:multi-cli-plan",
"command": "/workflow:multi-cli-plan",
"name": "workflow-multi-cli-plan",
"command": "/workflow-multi-cli-plan",
"description": "Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.",
"arguments": "[-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]",
"category": "workflow",
@@ -418,7 +418,7 @@
},
{
"name": "plan-verify",
"command": "/workflow:plan-verify",
"command": "/workflow-plan-verify",
"description": "Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.",
"arguments": "[optional: --session session-id]",
"category": "workflow",
@@ -429,7 +429,7 @@
},
{
"name": "plan",
"command": "/workflow:plan",
"command": "/workflow-plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "[-y|--yes] \\\"text description\\\"|file.md",
"category": "workflow",
@@ -550,7 +550,7 @@
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"command": "/workflow-tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
@@ -561,7 +561,7 @@
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"command": "/workflow-tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.",
"arguments": "[optional: --session WFS-session-id]",
"category": "workflow",
@@ -572,7 +572,7 @@
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"command": "/workflow-test-fix",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
@@ -583,7 +583,7 @@
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"command": "/workflow-test-fix",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
@@ -716,7 +716,7 @@
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow-plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",

@@ -277,7 +277,7 @@
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow-plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",

@@ -310,7 +310,7 @@
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow-plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",

@@ -282,7 +282,7 @@
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow-plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",

@@ -142,70 +142,70 @@ def analyze_agent_file(file_path: Path) -> Dict[str, Any]:
def build_command_relationships() -> Dict[str, Any]:
"""Build command relationship mappings."""
return {
"workflow:plan": {
"workflow-plan": {
"calls_internally": ["workflow:session:start", "workflow:tools:context-gather", "workflow:tools:conflict-resolution", "workflow:tools:task-generate-agent"],
"next_steps": ["workflow:plan-verify", "workflow:status", "workflow:execute"],
"alternatives": ["workflow:tdd-plan"],
"next_steps": ["workflow-plan-verify", "workflow:status", "workflow-execute"],
"alternatives": ["workflow-tdd-plan"],
"prerequisites": []
},
"workflow:tdd-plan": {
"workflow-tdd-plan": {
"calls_internally": ["workflow:session:start", "workflow:tools:context-gather", "workflow:tools:task-generate-tdd"],
"next_steps": ["workflow:tdd-verify", "workflow:status", "workflow:execute"],
"alternatives": ["workflow:plan"],
"next_steps": ["workflow-tdd-verify", "workflow:status", "workflow-execute"],
"alternatives": ["workflow-plan"],
"prerequisites": []
},
"workflow:execute": {
"prerequisites": ["workflow:plan", "workflow:tdd-plan"],
"workflow-execute": {
"prerequisites": ["workflow-plan", "workflow-tdd-plan"],
"related": ["workflow:status", "workflow:resume"],
"next_steps": ["workflow:review", "workflow:tdd-verify"]
"next_steps": ["workflow:review", "workflow-tdd-verify"]
},
"workflow:plan-verify": {
"prerequisites": ["workflow:plan"],
"next_steps": ["workflow:execute"],
"workflow-plan-verify": {
"prerequisites": ["workflow-plan"],
"next_steps": ["workflow-execute"],
"related": ["workflow:status"]
},
"workflow:tdd-verify": {
"prerequisites": ["workflow:execute"],
"workflow-tdd-verify": {
"prerequisites": ["workflow-execute"],
"related": ["workflow:tools:tdd-coverage-analysis"]
},
"workflow:session:start": {
"next_steps": ["workflow:plan", "workflow:execute"],
"next_steps": ["workflow-plan", "workflow-execute"],
"related": ["workflow:session:list", "workflow:session:resume"]
},
"workflow:session:resume": {
"alternatives": ["workflow:resume"],
"related": ["workflow:session:list", "workflow:status"]
},
"workflow:lite-plan": {
"workflow-lite-plan": {
"calls_internally": ["workflow:lite-execute"],
"next_steps": ["workflow:lite-execute", "workflow:status"],
"alternatives": ["workflow:plan"],
"alternatives": ["workflow-plan"],
"prerequisites": []
},
"workflow:lite-fix": {
"next_steps": ["workflow:lite-execute", "workflow:status"],
"alternatives": ["workflow:lite-plan"],
"related": ["workflow:test-cycle-execute"]
"alternatives": ["workflow-lite-plan"],
"related": ["workflow-test-fix"]
},
"workflow:lite-execute": {
"prerequisites": ["workflow:lite-plan", "workflow:lite-fix"],
"related": ["workflow:execute", "workflow:status"]
"prerequisites": ["workflow-lite-plan", "workflow:lite-fix"],
"related": ["workflow-execute", "workflow:status"]
},
"workflow:review-session-cycle": {
"prerequisites": ["workflow:execute"],
"prerequisites": ["workflow-execute"],
"next_steps": ["workflow:review-fix"],
"related": ["workflow:review-module-cycle"]
},
"workflow:review-fix": {
"prerequisites": ["workflow:review-module-cycle", "workflow:review-session-cycle"],
"related": ["workflow:test-cycle-execute"]
"related": ["workflow-test-fix"]
},
"memory:docs": {
"calls_internally": ["workflow:session:start", "workflow:tools:context-gather"],
"next_steps": ["workflow:execute"]
"next_steps": ["workflow-execute"]
},
"memory:skill-memory": {
"next_steps": ["workflow:plan", "cli:analyze"],
"next_steps": ["workflow-plan", "cli:analyze"],
"related": ["memory:load-skill-memory"]
}
}
@@ -213,11 +213,11 @@ def build_command_relationships() -> Dict[str, Any]:
def identify_essential_commands(all_commands: List[Dict]) -> List[Dict]:
"""Identify the most essential commands for beginners."""
essential_names = [
"workflow:lite-plan", "workflow:lite-fix", "workflow:plan",
"workflow:execute", "workflow:status", "workflow:session:start",
"workflow-lite-plan", "workflow:lite-fix", "workflow-plan",
"workflow-execute", "workflow:status", "workflow:session:start",
"workflow:review-session-cycle", "cli:analyze", "cli:chat",
"memory:docs", "workflow:brainstorm:artifacts",
"workflow:plan-verify", "workflow:resume", "version"
"workflow-plan-verify", "workflow:resume", "version"
]
essential = []

@@ -67,9 +67,9 @@ spec-generator/
## Handoff
After Phase 6, choose execution path:
- `workflow:lite-plan` - Execute per Epic
- `workflow-lite-plan` - Execute per Epic
- `workflow:req-plan-with-file` - Roadmap decomposition
- `workflow:plan` - Full planning
- `workflow-plan` - Full planning
- `issue:new` - Create issues per Epic
## Design Principles

@@ -210,7 +210,7 @@ AskUserQuestion({
options: [
{
label: "Execute via lite-plan",
description: "Start implementing with /workflow:lite-plan, one Epic at a time"
description: "Start implementing with /workflow-lite-plan, one Epic at a time"
},
{
label: "Create roadmap",
@@ -218,7 +218,7 @@ AskUserQuestion({
},
{
label: "Full planning",
description: "Detailed planning with /workflow:plan for the full scope"
description: "Detailed planning with /workflow-plan for the full scope"
},
{
label: "Create Issues",
@@ -242,7 +242,7 @@ if (selection === "Execute via lite-plan") {
const epicContent = Read(firstMvpFile);
const title = extractTitle(epicContent); // First # heading
const description = extractSection(epicContent, "Description");
Skill(skill="workflow:lite-plan", args=`"${title}: ${description}"`)
Skill(skill="workflow-lite-plan", args=`"${title}: ${description}"`)
}
if (selection === "Full planning" || selection === "Create roadmap") {
@@ -368,7 +368,7 @@ ${extractSection(epicContent, "Architecture")}
// → context-package.json.brainstorm_artifacts populated
// → action-planning-agent loads guidance_specification (P1) + feature_index (P2)
if (selection === "Full planning") {
Skill(skill="workflow:plan", args=`"${structuredDesc}"`)
Skill(skill="workflow-plan", args=`"${structuredDesc}"`)
} else {
Skill(skill="workflow:req-plan-with-file", args=`"${extractGoal(specSummary)}"`)
}

@@ -58,6 +58,27 @@ Each capability produces default output artifacts:
| tester | Test results | `<session>/artifacts/test-report.md` |
| planner | Execution plan | `<session>/artifacts/execution-plan.md` |
### Step 2.5: Key File Inference
For each task, infer relevant files based on capability type and task keywords:
| Capability | File Inference Strategy |
|------------|------------------------|
| researcher | Extract domain keywords → map to likely directories (e.g., "auth" → `src/auth/**`, `middleware/auth.ts`) |
| developer | Extract feature/module keywords → map to source files (e.g., "payment" → `src/payments/**`, `types/payment.ts`) |
| designer | Look for architecture/config keywords → map to config/schema files |
| analyst | Extract target keywords → map to files under analysis |
| tester | Extract test target keywords → map to source + test files |
| writer | Extract documentation target → map to relevant source files for context |
| planner | No specific files (planning is abstract) |
**Inference rules:**
- Extract nouns and verbs from task description
- Match against common directory patterns (src/, lib/, components/, services/, utils/)
- Include related type definition files (types/, *.d.ts)
- For "fix bug" tasks, include error-prone areas (error handlers, validation)
- For "implement feature" tasks, include similar existing features as reference
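The inference rules above amount to a keyword lookup. A minimal Python sketch, where the keyword-to-path map and the function name are illustrative assumptions drawn from the examples in the table, not part of the spec:

```python
import re

# Illustrative keyword -> file-pattern map; entries are assumptions
# based on the "auth" and "payment" examples in the strategy table.
KEYWORD_DIRS = {
    "auth": ["src/auth/**", "middleware/auth.ts"],
    "payment": ["src/payments/**", "types/payment.ts"],
}

def infer_key_files(task_description: str) -> list[str]:
    """Extract keywords from a task description and map them to likely files."""
    words = re.findall(r"[a-z]+", task_description.lower())
    files: list[str] = []
    for word in words:
        files.extend(KEYWORD_DIRS.get(word, []))
    # Deduplicate while preserving first-seen order
    return list(dict.fromkeys(files))
```

For example, `infer_key_files("Fix bug in payment validation and auth flow")` yields the payment and auth patterns in encounter order.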
### Step 3: Dependency Graph Construction
Build a DAG of work streams using natural ordering tiers:
@@ -90,16 +111,26 @@ Apply merging rules to reduce role count (cap at 5).
### Step 6: Role-Spec Metadata Assignment
For each role, determine frontmatter fields:
For each role, determine frontmatter and generation hints:
| Field | Derivation |
|-------|------------|
| `prefix` | From capability prefix (e.g., RESEARCH, DRAFT, IMPL) |
| `inner_loop` | `true` if role has 2+ serial same-prefix tasks |
| `subagents` | Inferred from responsibility type: orchestration -> [explore], code-gen (docs) -> [explore], validation -> [] |
| `subagents` | Suggested, not mandatory — coordinator may adjust based on task needs |
| `pattern_hint` | Reference pattern name from role-spec-template (research/document/code/analysis/validation) — guides coordinator's Phase 2-4 composition, NOT a rigid template selector |
| `output_type` | `artifact` (new files in session/artifacts/) / `codebase` (modify existing project files) / `mixed` (both) — determines verification strategy in Behavioral Traits |
| `message_types.success` | `<prefix>_complete` |
| `message_types.error` | `error` |
**output_type derivation**:
| Task Signal | output_type | Example |
|-------------|-------------|---------|
| "write report", "analyze", "research" | `artifact` | New analysis-report.md in session |
| "update docs", "modify code", "fix bug" | `codebase` | Modify existing project files |
| "implement feature + write summary" | `mixed` | Code changes + implementation summary |
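The derivation above reduces to a signal check on the task text. A hedged sketch, where the signal tuples are assumptions extended slightly beyond the table rows:

```python
# Signal lists are illustrative assumptions based on the table above,
# not an exhaustive specification.
ARTIFACT_SIGNALS = ("write report", "write summary", "analyze", "research")
CODEBASE_SIGNALS = ("update docs", "modify code", "fix bug", "implement")

def derive_output_type(task: str) -> str:
    """Map a task description to artifact / codebase / mixed."""
    task = task.lower()
    has_artifact = any(s in task for s in ARTIFACT_SIGNALS)
    has_codebase = any(s in task for s in CODEBASE_SIGNALS)
    if has_artifact and has_codebase:
        return "mixed"
    if has_codebase:
        return "codebase"
    return "artifact"
```

A task like "implement feature + write summary" matches both signal sets and resolves to `mixed`, matching the last table row.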
## Phase 4: Output
Write `<session-folder>/task-analysis.json`:
@@ -113,7 +144,22 @@ Write `<session-folder>/task-analysis.json`:
"prefix": "RESEARCH",
"responsibility_type": "orchestration",
"tasks": [
{ "id": "RESEARCH-001", "description": "..." }
{
"id": "RESEARCH-001",
"goal": "What this task achieves and why",
"steps": [
"step 1: specific action with clear verb",
"step 2: specific action with clear verb",
"step 3: specific action with clear verb"
],
"key_files": [
"src/path/to/relevant.ts",
"src/path/to/other.ts"
],
"upstream_artifacts": [],
"success_criteria": "Measurable completion condition",
"constraints": "Scope limits, focus areas"
}
],
"artifacts": ["research-findings.md"]
}
@@ -132,6 +178,8 @@ Write `<session-folder>/task-analysis.json`:
"inner_loop": false,
"role_spec_metadata": {
"subagents": ["explore"],
"pattern_hint": "research",
"output_type": "artifact",
"message_types": {
"success": "research_complete",
"error": "error"

@@ -26,7 +26,21 @@ Create task chains from dynamic dependency graphs. Builds pipelines from the tas
TaskCreate({
subject: "<PREFIX>-<NNN>",
owner: "<role-name>",
description: "<task description from task-analysis>\nSession: <session-folder>\nScope: <scope>\nInnerLoop: <true|false>\nRoleSpec: <session-folder>/role-specs/<role-name>.md",
description: "PURPOSE: <goal> | Success: <success_criteria>
TASK:
- <step 1>
- <step 2>
- <step 3>
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <artifact-1.md>, <artifact-2.md>
- Key files: <file1>, <file2>
- Shared memory: <session>/shared-memory.json
EXPECTED: <deliverable path> + <quality criteria>
CONSTRAINTS: <scope limits>
---
InnerLoop: <true|false>
RoleSpec: <session-folder>/role-specs/<role-name>.md",
blockedBy: [<dependency-list from graph>],
status: "pending"
})
@@ -37,16 +51,34 @@ TaskCreate({
### Task Description Template
Every task description includes session path, inner loop flag, and role-spec path:
Every task description includes structured fields for clarity:
```
<task description>
Session: <session-folder>
Scope: <scope>
PURPOSE: <goal from task-analysis.json#tasks[].goal> | Success: <success_criteria from task-analysis.json#tasks[].success_criteria>
TASK:
- <step 1 from task-analysis.json#tasks[].steps[]>
- <step 2 from task-analysis.json#tasks[].steps[]>
- <step 3 from task-analysis.json#tasks[].steps[]>
CONTEXT:
- Session: <session-folder>
- Upstream artifacts: <comma-separated list from task-analysis.json#tasks[].upstream_artifacts[]>
- Key files: <comma-separated list from task-analysis.json#tasks[].key_files[]>
- Shared memory: <session>/shared-memory.json
EXPECTED: <artifact path from task-analysis.json#capabilities[].artifacts[]> + <quality criteria based on capability type>
CONSTRAINTS: <constraints from task-analysis.json#tasks[].constraints>
---
InnerLoop: <true|false>
RoleSpec: <session-folder>/role-specs/<role-name>.md
```
**Field Mapping**:
- `PURPOSE`: From `task-analysis.json#capabilities[].tasks[].goal` + `success_criteria`
- `TASK`: From `task-analysis.json#capabilities[].tasks[].steps[]`
- `CONTEXT.Upstream artifacts`: From `task-analysis.json#capabilities[].tasks[].upstream_artifacts[]`
- `CONTEXT.Key files`: From `task-analysis.json#capabilities[].tasks[].key_files[]`
- `EXPECTED`: From `task-analysis.json#capabilities[].artifacts[]` + quality criteria
- `CONSTRAINTS`: From `task-analysis.json#capabilities[].tasks[].constraints`
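The field mapping above can be sketched as a small assembly function. The dict keys mirror the task-analysis.json schema shown earlier; the function name and fallback strings are assumptions:

```python
def build_description(task: dict, session: str, artifacts: list[str]) -> str:
    """Assemble a structured task description from task-analysis.json fields."""
    steps = "\n".join(f"- {s}" for s in task["steps"])
    upstream = ", ".join(task["upstream_artifacts"]) or "none"
    key_files = ", ".join(task["key_files"]) or "none"
    return (
        f"PURPOSE: {task['goal']} | Success: {task['success_criteria']}\n"
        f"TASK:\n{steps}\n"
        "CONTEXT:\n"
        f"- Session: {session}\n"
        f"- Upstream artifacts: {upstream}\n"
        f"- Key files: {key_files}\n"
        f"- Shared memory: {session}/shared-memory.json\n"
        f"EXPECTED: {', '.join(artifacts)} + quality criteria\n"
        f"CONSTRAINTS: {task['constraints']}"
    )
```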
### InnerLoop Flag Rules
| Condition | InnerLoop |

@@ -60,10 +60,12 @@ Receive callback from [<role>]
+- None completed -> STOP
```
**Fast-advance note**: A worker may have already spawned its successor via fast-advance. When processing a callback:
1. Check if the expected next task is already `in_progress` (fast-advanced)
2. If yes -> skip spawning that task, update active_workers to include the fast-advanced worker
3. If no -> normal handleSpawnNext
**Fast-advance reconciliation**: A worker may have already spawned its successor via fast-advance. When processing any callback or resume:
1. Read recent `fast_advance` messages from team_msg (type="fast_advance")
2. For each fast_advance message: add the spawned successor to `active_workers` if not already present
3. Check if the expected next task is already `in_progress` (fast-advanced)
4. If yes -> skip spawning that task (already running)
5. If no -> normal handleSpawnNext
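A minimal sketch of this reconciliation, assuming illustrative shapes for the message entries and task-status map (field names here are assumptions, not the team_msg schema):

```python
def reconcile_fast_advance(messages: list[dict], active_workers: set[str],
                           expected_next: str, task_status: dict[str, str]) -> bool:
    """Sync fast-advanced workers into the roster; return True if the
    expected next task still needs a normal handleSpawnNext."""
    for msg in messages:
        if msg.get("type") == "fast_advance":
            # Step 2: add the spawned successor if not already present
            active_workers.add(msg["spawned_worker"])
    # Steps 3-5: skip spawning when the successor is already running
    return task_status.get(expected_next) != "in_progress"
```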
---
@@ -262,6 +264,13 @@ handleCallback / handleResume detects:
4. -> handleSpawnNext (will re-spawn the task normally)
```
### Fast-Advance State Sync
On every coordinator wake (handleCallback, handleResume, handleCheck):
1. Read team_msg entries with `type="fast_advance"` since last coordinator wake
2. For each entry: sync `active_workers` with the spawned successor
3. This ensures coordinator's state reflects fast-advance decisions even before the successor's callback arrives
### Consensus-Blocked Handling
```

@@ -182,13 +182,15 @@ Regardless of complexity score or role count, coordinator MUST:
4. **Call TeamCreate** with team name derived from session ID
5. **Read `specs/role-spec-template.md`** + task-analysis.json
5. **Read `specs/role-spec-template.md`** for Behavioral Traits + Reference Patterns
6. **For each role in task-analysis.json#roles**:
- Fill role-spec template with:
- YAML frontmatter: role, prefix, inner_loop, subagents, message_types
- Phase 2-4 content from responsibility type reference sections in template
- Task-specific instructions from task description
- Fill YAML frontmatter: role, prefix, inner_loop, subagents, message_types
- **Compose Phase 2-4 content** (NOT copy from template):
- Phase 2: Derive input sources and context loading steps from **task description + upstream dependencies**
- Phase 3: Describe **execution goal** (WHAT to achieve) from task description — do NOT prescribe specific subagent or tool
- Phase 4: Combine **Behavioral Traits** (from template) + **output_type** (from task analysis) to compose verification steps
- Reference Patterns may guide phase structure, but task description determines specific content
- Write generated role-spec to `<session>/role-specs/<role-name>.md`
7. **Register roles** in team-session.json#roles (with `role_spec` path instead of `role_file`)

@@ -63,233 +63,117 @@ message_types:
| `<placeholder>` notation | Use angle brackets for variable substitution |
| Reference subagents by name | team-worker resolves invocation from its delegation templates |
## Phase 2-4 Content by Responsibility Type
## Behavioral Traits
Select the matching section based on `responsibility_type` from task analysis.
All dynamically generated role-specs MUST embed these traits into Phase 4. Coordinator copies this section verbatim into every generated role-spec as a Phase 4 appendix.
### orchestration
**Design principle**: Constrain behavioral characteristics (accuracy, feedback, quality gates), NOT specific actions (which tool, which subagent, which path). Tasks are diverse — the coordinator composes task-specific Phase 2-3 instructions, while these traits ensure execution quality regardless of task type.
**Phase 2: Context Assessment**
### Accuracy — outputs must be verifiable
- Files claimed as **created** → Read to confirm file exists and has content
- Files claimed as **modified** → Read to confirm content actually changed
- Analysis claimed as **complete** → artifact file exists in `<session>/artifacts/`
### Feedback Contract — completion report must include evidence
Phase 4 must produce a verification summary with these fields:
| Field | When Required | Content |
|-------|---------------|---------|
| `files_produced` | New files created | Path list |
| `files_modified` | Existing files changed | Path + before/after line count |
| `artifacts_written` | Always | Paths in `<session>/artifacts/` |
| `verification_method` | Always | How verified: Read confirm / syntax check / diff |
### Quality Gate — verify before reporting complete
- Phase 4 MUST verify Phase 3's **actual output** (not planned output)
- Verification fails → retry Phase 3 (max 2 retries)
- Still fails → report `partial_completion` with details, NOT `completed`
- Update `shared-memory.json` with key findings after verification passes
### Error Protocol
- Primary approach fails → try alternative (different subagent / different tool)
- 2 retries exhausted → escalate to coordinator with failure details
- NEVER: skip verification and report completed
---
## Reference Patterns
Coordinator MAY reference these patterns when composing Phase 2-4 content for a role-spec. These are **structural guidance, not mandatory templates**. The task description determines specific behavior — patterns only suggest common phase structures.
### Research / Exploration
- Phase 2: Define exploration scope + load prior knowledge from shared-memory and wisdom
- Phase 3: Explore via subagents, direct tool calls, or codebase search — approach chosen by agent
- Phase 4: Verify findings documented (Behavioral Traits) + update shared-memory
### Document / Content
- Phase 2: Load upstream artifacts + read target files (if modifying existing docs)
- Phase 3: Create new documents OR modify existing documents — determined by task, not template
- Phase 4: Verify documents exist with expected content (Behavioral Traits) + update shared-memory
### Code Implementation
- Phase 2: Load design/spec artifacts from upstream
- Phase 3: Implement code changes — subagent choice and approach determined by task complexity
- Phase 4: Syntax check + file verification (Behavioral Traits) + update shared-memory
### Analysis / Audit
- Phase 2: Load analysis targets (artifacts or source files)
- Phase 3: Multi-dimension analysis — perspectives and depth determined by task
- Phase 4: Verify report exists + severity classification (Behavioral Traits) + update shared-memory
### Validation / Testing
- Phase 2: Detect test framework + identify changed files from upstream
- Phase 3: Run test-fix cycle — iteration count and strategy determined by task
- Phase 4: Verify pass rate + coverage (Behavioral Traits) + update shared-memory
---
## Knowledge Transfer Protocol
How context flows between roles. Coordinator MUST reference this when composing Phase 2 of any role-spec.
### Transfer Channels
| Channel | Scope | Mechanism | When to Use |
|---------|-------|-----------|-------------|
| **Artifacts** | Producer -> Consumer | Write to `<session>/artifacts/<name>.md`, consumer reads in Phase 2 | Structured deliverables (reports, plans, specs) |
| **shared-memory.json** | Cross-role | Read-merge-write `<session>/shared-memory.json` | Key findings, decisions, metadata (small, structured data) |
| **Wisdom** | Cross-task | Append to `<session>/wisdom/{learnings,decisions,conventions,issues}.md` | Patterns, conventions, risks discovered during execution |
| **context_accumulator** | Intra-role (inner loop) | In-memory array, passed to each subsequent task in same-prefix loop | Prior task summaries within same role's inner loop |
| **Exploration cache** | Cross-role | `<session>/explorations/cache-index.json` + per-angle JSON | Codebase discovery results, prevents duplicate exploration |
### Phase 2 Context Loading (role-spec must specify)
Every generated role-spec Phase 2 MUST declare which upstream sources to load:
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Shared memory | <session>/shared-memory.json | No |
| Prior artifacts | <session>/artifacts/ | No |
| Wisdom | <session>/wisdom/ | No |
Loading steps:
1. Extract session path from task description
2. Read shared-memory.json for cross-role context
3. Read prior artifacts (if any from upstream tasks)
2. Read upstream artifacts: <list which artifacts from which upstream role>
3. Read shared-memory.json for cross-role decisions
4. Load wisdom files for accumulated knowledge
5. Optionally call explore subagent for codebase context
5. For inner_loop roles: load context_accumulator from prior tasks
6. Check exploration cache before running new explorations
```
**Phase 3: Subagent Execution**
### shared-memory.json Usage Convention
```
Delegate to appropriate subagent based on task:
Task({
subagent_type: "general-purpose",
run_in_background: false,
description: "<task-type> for <task-id>",
prompt: "## Task
- <task description>
- Session: <session-folder>
## Context
<prior artifacts + shared memory + explore results>
## Expected Output
Write artifact to: <session>/artifacts/<artifact-name>.md
Return JSON summary: { artifact_path, summary, key_decisions[], warnings[] }"
})
```
**Phase 4: Result Aggregation**
```
1. Verify subagent output artifact exists
2. Read artifact, validate structure/completeness
3. Update shared-memory.json with key findings
4. Write insights to wisdom/ files
```
### code-gen (docs)
**Phase 2: Load Prior Context**
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Prior artifacts | <session>/artifacts/ from upstream | Conditional |
| Shared memory | <session>/shared-memory.json | No |
Loading steps:
1. Extract session path from task description
2. Read upstream artifacts
3. Read shared-memory.json for cross-role context
```
**Phase 3: Document Generation**
```
Task({
subagent_type: "universal-executor",
run_in_background: false,
description: "Generate <doc-type> for <task-id>",
prompt: "## Task
- Generate: <document type>
- Session: <session-folder>
## Prior Context
<upstream artifacts + shared memory>
## Expected Output
Write document to: <session>/artifacts/<doc-name>.md
Return JSON: { artifact_path, summary, key_decisions[], warnings[] }"
})
```
**Phase 4: Structure Validation**
```
1. Verify document artifact exists
2. Check document has expected sections
3. Validate no placeholder text remains
4. Update shared-memory.json with document metadata
```
### code-gen (code)
**Phase 2: Load Plan/Specs**
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Plan/design artifacts | <session>/artifacts/ | Conditional |
| Shared memory | <session>/shared-memory.json | No |
Loading steps:
1. Extract session path from task description
2. Read plan/design artifacts from upstream
3. Load shared-memory.json for implementation context
```
**Phase 3: Code Implementation**
```
Task({
subagent_type: "code-developer",
run_in_background: false,
description: "Implement <task-id>",
prompt: "## Task
- <implementation description>
- Session: <session-folder>
## Plan/Design Context
<upstream artifacts>
## Expected Output
Implement code changes.
Write summary to: <session>/artifacts/implementation-summary.md
Return JSON: { artifact_path, summary, files_changed[], warnings[] }"
})
```
**Phase 4: Syntax Validation**
```
1. Run syntax check (tsc --noEmit or equivalent)
2. Verify all planned files exist
3. If validation fails -> attempt auto-fix (max 2 attempts)
4. Write implementation summary to artifacts/
```
### read-only
**Phase 2: Target Loading**
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Target artifacts/files | From task description or upstream | Yes |
| Shared memory | <session>/shared-memory.json | No |
Loading steps:
1. Extract session path and target files from task description
2. Read target artifacts or source files for analysis
3. Load shared-memory.json for context
```
**Phase 3: Multi-Dimension Analysis**
```
Task({
subagent_type: "general-purpose",
run_in_background: false,
description: "Analyze <target> for <task-id>",
prompt: "## Task
- Analyze: <target description>
- Dimensions: <analysis dimensions from coordinator>
- Session: <session-folder>
## Target Content
<artifact content or file content>
## Expected Output
Write report to: <session>/artifacts/analysis-report.md
Return JSON: { artifact_path, summary, findings[], severity_counts }"
})
```
**Phase 4: Severity Classification**
```
1. Verify analysis report exists
2. Classify findings by severity (Critical/High/Medium/Low)
3. Update shared-memory.json with key findings
4. Write issues to wisdom/issues.md
```
### validation
**Phase 2: Environment Detection**
```
| Input | Source | Required |
|-------|--------|----------|
| Task description | From TaskGet | Yes |
| Implementation artifacts | Upstream code changes | Yes |
Loading steps:
1. Detect test framework from project files
2. Get changed files from implementation
3. Identify test command and coverage tool
```
**Phase 3: Test-Fix Cycle**
```
Task({
subagent_type: "test-fix-agent",
run_in_background: false,
description: "Test-fix for <task-id>",
prompt: "## Task
- Run tests and fix failures
- Session: <session-folder>
- Max iterations: 5
## Changed Files
<from upstream implementation>
## Expected Output
Write report to: <session>/artifacts/test-report.md
Return JSON: { artifact_path, pass_rate, coverage, remaining_failures[] }"
})
```
**Phase 4: Result Analysis**
```
1. Check pass rate >= 95%
2. Check coverage meets threshold
3. Generate test report with pass/fail counts
4. Update shared-memory.json with test results
```
- **Read-merge-write**: Read current content -> merge new keys -> write back (NOT overwrite)
- **Namespaced keys**: Each role writes under its own namespace: `{ "<role_name>": { ... } }`
- **Small data only**: Key findings, decision summaries, metadata. NOT full documents
- **Example**:
```json
{
"researcher": { "key_findings": [...], "scope": "..." },
"writer": { "documents_created": [...], "style_decisions": [...] },
"developer": { "files_changed": [...], "patterns_used": [...] }
}
```
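The read-merge-write rule above can be sketched as a pure merge over the role's namespace; other roles' namespaces are preserved, never overwritten wholesale. The function name is illustrative, and the file I/O that would wrap it is shown only in comments:

```javascript
// Merge one role's update into the current shared-memory object.
function mergeSharedMemory(current, roleName, update) {
  return {
    ...current,
    [roleName]: { ...(current[roleName] || {}), ...update },
  };
}

// Usage sketch (Node):
//   const cur = JSON.parse(fs.readFileSync(path, "utf8"));
//   const next = mergeSharedMemory(cur, "developer", { files_changed: ["src/auth.ts"] });
//   fs.writeFileSync(path, JSON.stringify(next, null, 2));
```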

View File

@@ -192,7 +192,9 @@ Beat Cycle (single beat)
Fast-Advance (skips coordinator for simple linear successors)
======================================================================
[Worker A] Phase 5 complete
+- 1 ready task? simple successor? --> spawn team-worker B directly
+- 1 ready task? simple successor?
| --> spawn team-worker B directly
| --> log fast_advance to message bus (coordinator syncs on next wake)
+- complex case? --> SendMessage to coordinator
======================================================================
```

View File

@@ -80,18 +80,40 @@ GC loop (max 2 rounds): QA-FE verdict=NEEDS_FIX -> create DEV-FE-002 + QA-FE-002
### Task Description Template
Every task description includes session, scope, and metadata:
Every task description uses structured format for clarity:
```
TaskCreate({
subject: "<TASK-ID>",
owner: "<role>",
description: "<task description>\nSession: <session-folder>\nScope: <scope>\nInlineDiscuss: <DISCUSS-NNN or none>\nInnerLoop: <true|false>",
description: "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>
TASK:
- <step 1: specific action>
- <step 2: specific action>
- <step 3: specific action>
CONTEXT:
- Session: <session-folder>
- Scope: <scope>
- Upstream artifacts: <artifact-1.md>, <artifact-2.md>
- Key files: <file1>, <file2> (if applicable)
- Shared memory: <session>/shared-memory.json
EXPECTED: <deliverable path> + <quality criteria>
CONSTRAINTS: <scope limits, focus areas>
---
InlineDiscuss: <DISCUSS-NNN or none>
InnerLoop: <true|false>",
blockedBy: [<dependency-list>],
status: "pending"
})
```
**Field Guidelines**:
- **PURPOSE**: Clear goal statement + success criteria
- **TASK**: 2-5 actionable steps with specific verbs
- **CONTEXT**: Session path, scope, upstream artifacts, relevant files
- **EXPECTED**: Output artifact path + quality requirements
- **CONSTRAINTS**: Scope boundaries, focus areas, exclusions
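A coordinator could sanity-check descriptions against these guidelines before calling `TaskCreate`. A minimal sketch, using the section markers from the template above (the function name is illustrative):

```javascript
// Check that a task description contains the required structured sections.
function validateTaskDescription(description) {
  const required = ["PURPOSE:", "TASK:", "CONTEXT:", "EXPECTED:"];
  const missing = required.filter((marker) => !description.includes(marker));
  return { ok: missing.length === 0, missing };
}
```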
**InnerLoop Flag Rules**:
| Role | InnerLoop |
@@ -107,13 +129,22 @@ TaskCreate({
TaskCreate({
subject: "<ORIGINAL-ID>-R1",
owner: "<same-role-as-original>",
description: "<revision-type> revision of <ORIGINAL-ID>.\n
Session: <session-folder>\n
Original artifact: <artifact-path>\n
User feedback: <feedback-text or 'system-initiated'>\n
Revision scope: <targeted|full>\n
InlineDiscuss: <same-discuss-round-as-original>\n
InnerLoop: <true|false based on role>",
description: "PURPOSE: <revision-type> revision of <ORIGINAL-ID> | Success: Address feedback and pass quality checks
TASK:
- Review original artifact and feedback
- Apply targeted fixes to weak areas
- Validate against quality criteria
CONTEXT:
- Session: <session-folder>
- Original artifact: <artifact-path>
- User feedback: <feedback-text or 'system-initiated'>
- Revision scope: <targeted|full>
- Shared memory: <session>/shared-memory.json
EXPECTED: Updated artifact at <artifact-path> + revision summary
CONSTRAINTS: <revision scope limits>
---
InlineDiscuss: <same-discuss-round-as-original>
InnerLoop: <true|false based on role>",
status: "pending",
blockedBy: [<predecessor-R1 if cascaded>]
})
@@ -138,14 +169,23 @@ TaskCreate({
TaskCreate({
subject: "IMPROVE-<dimension>-001",
owner: "writer",
description: "Quality improvement: <dimension>.\n
Session: <session-folder>\n
Current score: <X>%\n
Target: 80%\n
Readiness report: <session>/spec/readiness-report.md\n
Weak areas: <extracted-from-report>\n
Strategy: <from-dimension-strategy-table>\n
InnerLoop: true",
description: "PURPOSE: Improve <dimension> quality from <X>% to 80% | Success: Pass quality threshold
TASK:
- Review readiness report weak areas
- Apply dimension-specific improvement strategy
- Validate improvements against criteria
CONTEXT:
- Session: <session-folder>
- Current score: <X>%
- Target: 80%
- Readiness report: <session>/spec/readiness-report.md
- Weak areas: <extracted-from-report>
- Strategy: <from-dimension-strategy-table>
- Shared memory: <session>/shared-memory.json
EXPECTED: Improved artifacts + quality improvement summary
CONSTRAINTS: Focus on <dimension> only
---
InnerLoop: true",
status: "pending"
})
```

View File

@@ -59,7 +59,12 @@ Receive callback from [<role>]
+- None completed -> STOP
```
**Fast-advance awareness**: Check if next task is already `in_progress` (fast-advanced by worker). If yes -> skip spawning, update active_workers.
**Fast-advance reconciliation**: When processing any callback or resume:
1. Read recent `fast_advance` messages from team_msg (type="fast_advance")
2. For each: add spawned successor to `active_workers` if not already present
3. Check if expected next task is already `in_progress` (fast-advanced)
4. If yes -> skip spawning (already running)
5. If no -> normal handleSpawnNext
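Steps 1-2 amount to folding `fast_advance` messages into `active_workers`. A minimal sketch, assuming each message carries a `successor` field naming the spawned worker (the message shape is not specified here):

```javascript
// Sync active_workers with successors already spawned by fast-advance.
function reconcileFastAdvance(activeWorkers, messages) {
  const workers = new Set(activeWorkers);
  for (const msg of messages) {
    if (msg.type === "fast_advance" && !workers.has(msg.successor)) {
      // The worker already spawned this successor; the coordinator only tracks it.
      workers.add(msg.successor);
    }
  }
  return [...workers];
}
```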
---
@@ -205,6 +210,13 @@ Detect orphaned in_progress task (no active_worker):
+- Reset to pending -> handleSpawnNext
```
### Fast-Advance State Sync
On every coordinator wake (handleCallback, handleResume, handleCheck):
1. Read team_msg entries with `type="fast_advance"` since last coordinator wake
2. For each entry: sync `active_workers` with the spawned successor
3. This ensures coordinator's state reflects fast-advance decisions even before the successor's callback arrives
### Consensus-Blocked Handling
```

View File

@@ -228,8 +228,12 @@ Beat Cycle (Coordinator Spawn-and-Stop)
When the pipeline completes (all tasks done, coordinator Phase 5):
```
AskUserQuestion({
```javascript
if (autoYes) {
// Auto mode: Archive & Clean without prompting
completionAction = "Archive & Clean";
} else {
AskUserQuestion({
questions: [{
question: "Team pipeline complete. What would you like to do?",
header: "Completion",
@@ -240,7 +244,8 @@ AskUserQuestion({
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
]
}]
})
})
}
```
| Choice | Action |

View File

@@ -1,6 +1,6 @@
---
name: workflow-execute
description: Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking. Triggers on "workflow:execute".
description: Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking. Triggers on "workflow-execute".
allowed-tools: Skill, Task, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, Read, Write, Edit, Bash, Glob, Grep
---
@@ -14,18 +14,18 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
```bash
# Interactive mode (with confirmations)
/workflow:execute
/workflow:execute --resume-session="WFS-auth"
/workflow-execute
/workflow-execute --resume-session="WFS-auth"
# Auto mode (skip confirmations, use defaults)
/workflow:execute --yes
/workflow:execute -y
/workflow:execute -y --resume-session="WFS-auth"
/workflow-execute --yes
/workflow-execute -y
/workflow-execute -y --resume-session="WFS-auth"
# With auto-commit (commit after each task completion)
/workflow:execute --with-commit
/workflow:execute -y --with-commit
/workflow:execute -y --with-commit --resume-session="WFS-auth"
/workflow-execute --with-commit
/workflow-execute -y --with-commit
/workflow-execute -y --with-commit --resume-session="WFS-auth"
```
## Auto Mode Defaults
@@ -153,7 +153,7 @@ bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
**Case A: No Sessions** (count = 0)
```
ERROR: No active workflow sessions found
Run /workflow:plan "task description" to create a session
Run /workflow-plan "task description" to create a session
```
**Case B: Single Session** (count = 1)
@@ -575,7 +575,7 @@ meta.agent missing → Infer from meta.type:
| Error Type | Cause | Recovery Strategy | Max Attempts |
|-----------|-------|------------------|--------------|
| **Discovery Errors** |
| No active session | No sessions in `.workflow/active/` | Create or resume session: `/workflow:plan "project"` | N/A |
| No active session | No sessions in `.workflow/active/` | Create or resume session: `/workflow-plan "project"` | N/A |
| Multiple sessions | Multiple sessions in `.workflow/active/` | Prompt user selection | N/A |
| Corrupted session | Invalid JSON files | Recreate session structure or validate files | N/A |
| **Execution Errors** |

View File

@@ -1,6 +1,6 @@
---
name: workflow-lite-plan
description: Lightweight planning and execution skill - route to lite-plan or lite-execute with prompt enhancement. Triggers on "workflow:lite-plan", "workflow:lite-execute".
description: Lightweight planning and execution skill - route to lite-plan or lite-execute with prompt enhancement. Triggers on "workflow-lite-plan", "workflow:lite-execute".
allowed-tools: Skill, Task, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash, Glob, Grep
---
@@ -41,7 +41,7 @@ const mode = detectMode()
function detectMode() {
if (skillName === 'workflow:lite-execute') return 'execute'
return 'plan' // default: workflow:lite-plan
return 'plan' // default: workflow-lite-plan
}
```
@@ -49,7 +49,7 @@ function detectMode() {
| Trigger | Mode | Phase Document | Description |
|---------|------|----------------|-------------|
| `workflow:lite-plan` | plan | [phases/01-lite-plan.md](phases/01-lite-plan.md) | Full planning pipeline (explore → plan → confirm → execute) |
| `workflow-lite-plan` | plan | [phases/01-lite-plan.md](phases/01-lite-plan.md) | Full planning pipeline (explore → plan → confirm → execute) |
| `workflow:lite-execute` | execute | [phases/02-lite-execute.md](phases/02-lite-execute.md) | Standalone execution (in-memory / prompt / file) |
## Interactive Preference Collection

View File

@@ -104,6 +104,17 @@ const sessionFolder = `.workflow/.lite-plan/${sessionId}`
bash(`mkdir -p ${sessionFolder} && test -d ${sessionFolder} && echo "SUCCESS: ${sessionFolder}" || echo "FAILED: ${sessionFolder}"`)
```
**TodoWrite (Phase 1 start)**:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Exploration", status: "in_progress", activeForm: "Exploring codebase" },
{ content: "Phase 2: Clarification", status: "pending", activeForm: "Collecting clarifications" },
{ content: "Phase 3: Planning", status: "pending", activeForm: "Generating plan" },
{ content: "Phase 4: Confirmation", status: "pending", activeForm: "Awaiting confirmation" },
{ content: "Phase 5: Execution", status: "pending", activeForm: "Executing tasks" }
]})
```
**Exploration Decision Logic**:
```javascript
// Check if task description already contains prior analysis context (from analyze-with-file)
@@ -307,6 +318,17 @@ Angles explored: ${explorationManifest.explorations.map(e => e.angle).join(', ')
`)
```
**TodoWrite (Phase 1 complete)**:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Exploration", status: "completed", activeForm: "Exploring codebase" },
{ content: "Phase 2: Clarification", status: "in_progress", activeForm: "Collecting clarifications" },
{ content: "Phase 3: Planning", status: "pending", activeForm: "Generating plan" },
{ content: "Phase 4: Confirmation", status: "pending", activeForm: "Awaiting confirmation" },
{ content: "Phase 5: Execution", status: "pending", activeForm: "Executing tasks" }
]})
```
**Output**:
- `${sessionFolder}/exploration-{angle1}.json`
- `${sessionFolder}/exploration-{angle2}.json`
@@ -560,6 +582,17 @@ Note: Use files[].change (not modification_points), convergence.criteria (not ac
**Output**: `${sessionFolder}/plan.json`
**TodoWrite (Phase 3 complete)**:
```javascript
TodoWrite({ todos: [
{ content: "Phase 1: Exploration", status: "completed", activeForm: "Exploring codebase" },
{ content: "Phase 2: Clarification", status: "completed", activeForm: "Collecting clarifications" },
{ content: "Phase 3: Planning", status: "completed", activeForm: "Generating plan" },
{ content: "Phase 4: Confirmation", status: "in_progress", activeForm: "Awaiting confirmation" },
{ content: "Phase 5: Execution", status: "pending", activeForm: "Executing tasks" }
]})
```
---
### Phase 4: Task Confirmation & Execution Selection
@@ -649,6 +682,19 @@ if (autoYes) {
}
```
**TodoWrite (Phase 4 confirmed)**:
```javascript
const executionLabel = userSelection.execution_method
TodoWrite({ todos: [
{ content: "Phase 1: Exploration", status: "completed", activeForm: "Exploring codebase" },
{ content: "Phase 2: Clarification", status: "completed", activeForm: "Collecting clarifications" },
{ content: "Phase 3: Planning", status: "completed", activeForm: "Generating plan" },
{ content: `Phase 4: Confirmed [${executionLabel}]`, status: "completed", activeForm: "Confirmed" },
{ content: `Phase 5: Execution [${executionLabel}]`, status: "in_progress", activeForm: `Executing [${executionLabel}]` }
]})
```
---
### Phase 5: Handoff to Execution
@@ -755,7 +801,7 @@ Read("phases/02-lite-execute.md")
| Planning agent failure | Fallback to direct planning by Claude |
| Clarification timeout | Use exploration findings as-is |
| Confirmation timeout | Save context, display resume instructions |
| Modify loop > 3 times | Suggest breaking task or using /workflow:plan |
| Modify loop > 3 times | Suggest breaking task or using /workflow-plan |
## Next Phase

View File

@@ -350,9 +350,9 @@ executionCalls = createExecutionCalls(getTasks(planObject), executionMethod).map
TodoWrite({
todos: executionCalls.map(c => ({
content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} (${c.tasks.length} tasks)`,
content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} [${c.executor}] (${c.tasks.length} tasks)`,
status: "pending",
activeForm: `Executing ${c.id}`
activeForm: `Executing ${c.id} [${c.executor}]`
}))
})
```

View File

@@ -1,6 +1,6 @@
---
name: workflow-multi-cli-plan
description: Multi-CLI collaborative planning and execution skill - route to multi-cli-plan or lite-execute with prompt enhancement. Triggers on "workflow:multi-cli-plan", "workflow:lite-execute".
description: Multi-CLI collaborative planning and execution skill - route to multi-cli-plan or lite-execute with prompt enhancement. Triggers on "workflow-multi-cli-plan", "workflow:lite-execute".
allowed-tools: Skill, Task, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash, Glob, Grep, mcp__ace-tool__search_context
---
@@ -33,7 +33,7 @@ const mode = detectMode()
function detectMode() {
if (skillName === 'workflow:lite-execute') return 'execute'
return 'plan' // default: workflow:multi-cli-plan
return 'plan' // default: workflow-multi-cli-plan
}
```
@@ -41,7 +41,7 @@ function detectMode() {
| Trigger | Mode | Phase Document | Description |
|---------|------|----------------|-------------|
| `workflow:multi-cli-plan` | plan | [phases/01-multi-cli-plan.md](phases/01-multi-cli-plan.md) | Multi-CLI collaborative planning (ACE context → discussion → plan → execute) |
| `workflow-multi-cli-plan` | plan | [phases/01-multi-cli-plan.md](phases/01-multi-cli-plan.md) | Multi-CLI collaborative planning (ACE context → discussion → plan → execute) |
| `workflow:lite-execute` | execute | [phases/02-lite-execute.md](phases/02-lite-execute.md) | Standalone execution (in-memory / prompt / file) |
## Interactive Preference Collection
@@ -124,7 +124,7 @@ Multi-phase execution (multi-cli-plan → lite-execute) spans long conversations
## Execution Flow
### Plan Mode (workflow:multi-cli-plan)
### Plan Mode (workflow-multi-cli-plan)
```
1. Collect preferences via AskUserQuestion (autoYes)

View File

@@ -1,6 +1,6 @@
# Phase 1: Multi-CLI Collaborative Planning
Complete multi-CLI collaborative planning pipeline with ACE context gathering and iterative cross-verification. This phase document preserves the full content of the original `workflow:multi-cli-plan` command.
Complete multi-CLI collaborative planning pipeline with ACE context gathering and iterative cross-verification. This phase document preserves the full content of the original `workflow-multi-cli-plan` command.
## Auto Mode
@@ -12,12 +12,12 @@ When `workflowPreferences.autoYes` is true: Auto-approve plan, use recommended s
```bash
# Basic usage
/workflow:multi-cli-plan "Implement user authentication"
/workflow-multi-cli-plan "Implement user authentication"
# With options
/workflow:multi-cli-plan "Add dark mode support" --max-rounds=3
/workflow:multi-cli-plan "Refactor payment module" --tools=gemini,codex,claude
/workflow:multi-cli-plan "Fix memory leak" --mode=serial
/workflow-multi-cli-plan "Add dark mode support" --max-rounds=3
/workflow-multi-cli-plan "Refactor payment module" --tools=gemini,codex,claude
/workflow-multi-cli-plan "Fix memory leak" --mode=serial
```
**Context Source**: ACE semantic search + Multi-CLI analysis
@@ -258,6 +258,19 @@ AskUserQuestion({
- Need More Analysis → Phase 2 with feedback
- Cancel → Save session for resumption
**TodoWrite Update (Phase 4 Decision)**:
```javascript
const executionLabel = userSelection.execution_method // "Agent" / "Codex" / "Auto"
TodoWrite({ todos: [
{ content: "Phase 1: Context Gathering", status: "completed", activeForm: "Gathering context" },
{ content: "Phase 2: Multi-CLI Discussion", status: "completed", activeForm: "Running discussion" },
{ content: "Phase 3: Present Options", status: "completed", activeForm: "Presenting options" },
{ content: `Phase 4: User Decision [${executionLabel}]`, status: "completed", activeForm: "Decision recorded" },
{ content: `Phase 5: Plan Generation [${executionLabel}]`, status: "in_progress", activeForm: `Generating plan [${executionLabel}]` }
]})
```
### Phase 5: Plan Generation & Execution Handoff
**Step 1: Build Context-Package** (Orchestrator responsibility):
@@ -585,7 +598,7 @@ TodoWrite({ todos: [
```bash
# Simpler single-round planning
/workflow:lite-plan "task description"
/workflow-lite-plan "task description"
# Issue-driven discovery
/issue:discover-by-prompt "find issues"

View File

@@ -357,9 +357,9 @@ executionCalls = createExecutionCalls(getTasks(planObject), executionMethod).map
TodoWrite({
todos: executionCalls.map(c => ({
content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} (${c.tasks.length} tasks)`,
content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} [${c.executor}] (${c.tasks.length} tasks)`,
status: "pending",
activeForm: `Executing ${c.id}`
activeForm: `Executing ${c.id} [${c.executor}]`
}))
})
```

View File

@@ -1,6 +1,6 @@
---
name: workflow-plan
description: Unified planning skill - 4-phase planning workflow, plan verification, and interactive replanning. Triggers on "workflow:plan", "workflow:plan-verify", "workflow:replan".
description: Unified planning skill - 4-phase planning workflow, plan verification, and interactive replanning. Triggers on "workflow-plan", "workflow-plan-verify", "workflow:replan".
allowed-tools: Skill, Task, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash, Glob, Grep
---
@@ -107,9 +107,9 @@ const mode = detectMode(args)
function detectMode(args) {
// Skill trigger determines mode
if (skillName === 'workflow:plan-verify') return 'verify'
if (skillName === 'workflow-plan-verify') return 'verify'
if (skillName === 'workflow:replan') return 'replan'
return 'plan' // default: workflow:plan
return 'plan' // default: workflow-plan
}
```

View File

@@ -338,13 +338,20 @@ Output:
)
```
**Executor Label** (computed after Step 4.0):
```javascript
const executorLabel = userConfig.executionMethod === 'agent' ? 'Agent'
: userConfig.executionMethod === 'hybrid' ? 'Hybrid'
: `CLI (${userConfig.preferredCliTool})`
```
### TodoWrite Update (Phase 4 in progress)
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 4: Task Generation", "status": "in_progress", "activeForm": "Executing task generation"}
{"content": "Phase 4: Task Generation [${executorLabel}]", "status": "in_progress", "activeForm": "Generating tasks [${executorLabel}]"}
]
```
@@ -354,7 +361,7 @@ Output:
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 4: Task Generation", "status": "completed", "activeForm": "Executing task generation"}
{"content": "Phase 4: Task Generation [${executorLabel}]", "status": "completed", "activeForm": "Generating tasks [${executorLabel}]"}
]
```

View File

@@ -90,11 +90,11 @@ ELSE:
SYNTHESIS_AVAILABLE = true
IF NOT EXISTS(IMPL_PLAN):
ERROR: "IMPL_PLAN.md not found. Run /workflow:plan first"
ERROR: "IMPL_PLAN.md not found. Run /workflow-plan first"
EXIT
IF TASK_FILES.count == 0:
ERROR: "No task JSON files found. Run /workflow:plan first"
ERROR: "No task JSON files found. Run /workflow-plan first"
EXIT
```
@@ -320,7 +320,7 @@ ${recommendation === 'BLOCK_EXECUTION' ? 'BLOCK: Fix critical issues then re-ver
recommendation === 'PROCEED_WITH_FIXES' ? 'FIX RECOMMENDED: Address high issues then re-verify or execute' :
'READY: Proceed to Skill(skill="workflow-execute")'}
Re-verify: \`/workflow:plan-verify --session ${session_id}\`
Re-verify: \`/workflow-plan-verify --session ${session_id}\`
Execute: \`Skill(skill="workflow-execute", args="--resume-session=${session_id}")\`
`

View File

@@ -215,7 +215,7 @@ const workflowConfig = {
skillName: "workflow-plan", // kebab-case
title: "Workflow Plan", // Human-readable
description: "5-phase planning...", // One-line description
triggers: ["workflow:plan"], // Trigger phrases
triggers: ["workflow-plan"], // Trigger phrases
allowedTools: ["Task", "AskUserQuestion", "TodoWrite", "Read", "Write", "Edit", "Bash", "Glob", "Grep", "Skill"],
// Source information

View File

@@ -1,6 +1,6 @@
---
name: workflow-tdd
description: Unified TDD workflow skill combining 6-phase TDD planning with Red-Green-Refactor task chain generation, and 4-phase TDD verification with compliance reporting. Triggers on "workflow:tdd-plan", "workflow:tdd-verify".
name: workflow-tdd-plan
description: Unified TDD workflow skill combining 6-phase TDD planning with Red-Green-Refactor task chain generation, and 4-phase TDD verification with compliance reporting. Triggers on "workflow-tdd-plan", "workflow-tdd-verify".
allowed-tools: Skill, Task, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, Read, Write, Edit, Bash, Glob, Grep
---
@@ -89,8 +89,8 @@ const mode = detectMode(args)
function detectMode(args) {
// Skill trigger determines mode
if (skillName === 'workflow:tdd-verify') return 'verify'
return 'plan' // default: workflow:tdd-plan
if (skillName === 'workflow-tdd-verify') return 'verify'
return 'plan' // default: workflow-tdd-plan
}
```
@@ -496,7 +496,7 @@ Similar to workflow-plan, a `planning-notes.md` can accumulate context across ph
- `phases/07-tdd-verify.md` - Phase 7: Test coverage and cycle analysis (inline)
**Follow-up Skills**:
- `workflow-tdd` skill (tdd-verify phase) - Verify TDD compliance (can also invoke via verify mode)
- `workflow-tdd-plan` skill (tdd-verify phase) - Verify TDD compliance (can also invoke via verify mode)
- `workflow-plan` skill (plan-verify phase) - Verify plan quality and dependencies
- Display session status inline - Review TDD task breakdown
- `Skill(skill="workflow-execute")` - Begin TDD implementation

View File

@@ -50,7 +50,7 @@ process_dir = session_dir/.process
# Check task files exist
task_files = Glob(task_dir/*.json)
IF task_files.count == 0:
ERROR: "No task JSON files found. Run /workflow:tdd-plan first"
ERROR: "No task JSON files found. Run /workflow-tdd-plan first"
EXIT
# Check summaries exist (optional but recommended for full analysis)
@@ -596,7 +596,7 @@ Next: Review full report for detailed findings
| Error | Cause | Resolution |
|-------|-------|------------|
| Task files missing | Incomplete planning | Run /workflow:tdd-plan first |
| Task files missing | Incomplete planning | Run /workflow-tdd-plan first |
| Invalid JSON | Corrupted task files | Regenerate tasks |
| Missing summaries | Tasks not executed | Execute tasks before verify |
@@ -632,4 +632,4 @@ Next: Review full report for detailed findings
| PROCEED_WITH_CAVEATS | `workflow-execute` skill | Start with noted caveats |
| REQUIRE_FIXES | Review report, refine tasks | Address issues before proceed |
| BLOCK_MERGE | `workflow-plan` skill (replan phase) | Significant restructuring needed |
| After implementation | Re-run `workflow-tdd` skill (tdd-verify phase) | Verify post-execution compliance |
| After implementation | Re-run `workflow-tdd-plan` skill (tdd-verify phase) | Verify post-execution compliance |

View File

@@ -1,6 +1,6 @@
---
name: workflow-test-fix
description: Unified test-fix pipeline combining test generation (session, context, analysis, task gen) with iterative test-cycle execution (adaptive strategy, progressive testing, CLI fallback). Triggers on "workflow:test-fix-gen", "workflow:test-cycle-execute", "test fix workflow".
description: Unified test-fix pipeline combining test generation (session, context, analysis, task gen) with iterative test-cycle execution (adaptive strategy, progressive testing, CLI fallback). Triggers on "workflow-test-fix", "test fix workflow".
allowed-tools: Skill, Task, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, Read, Write, Edit, Bash, Glob, Grep
---
@@ -60,8 +60,8 @@ Task Pipeline (generated in Phase 4, executed in Phase 5):
Full pipeline and execute-only modes are triggered by skill name routing (see Mode Detection). Workflow preferences (auto mode) are collected interactively via AskUserQuestion before dispatching to phases.
**Full pipeline** (workflow:test-fix-gen): Task description or session ID as arguments → interactive preference collection → generate + execute pipeline
**Execute only** (workflow:test-cycle-execute): Auto-discovers active session → interactive preference collection → execution loop
**Full pipeline** (workflow-test-fix): Task description or session ID as arguments → interactive preference collection → generate + execute pipeline
**Execute only** (workflow-test-fix): Auto-discovers active session → interactive preference collection → execution loop
## Interactive Preference Collection
@@ -109,8 +109,8 @@ Multi-phase test-fix pipeline (Phase 1-5) spans long conversations, especially P
```
Entry Point Detection:
├─ /workflow:test-fix-gen → Full Pipeline (Phase 1→2→3→4→Summary→5)
└─ /workflow:test-cycle-execute → Execution Only (Phase 5)
├─ /workflow-test-fix → Full Pipeline (Phase 1→2→3→4→Summary→5)
└─ /workflow-test-fix → Execution Only (Phase 5)
Phase 1: Session Start (session-start)
└─ Ref: phases/01-session-start.md

View File

@@ -19,13 +19,13 @@ Execute test-fix workflow with dynamic task generation and iterative fix cycles
```bash
# Execute test-fix workflow (auto-discovers active session)
/workflow:test-cycle-execute
/workflow-test-fix
# Resume interrupted session
/workflow:test-cycle-execute --resume-session="WFS-test-user-auth"
/workflow-test-fix --resume-session="WFS-test-user-auth"
# Custom iteration limit (default: 10)
/workflow:test-cycle-execute --max-iterations=15
/workflow-test-fix --max-iterations=15
```
**Quality Gate**: Test pass rate >= 95% (criticality-aware) or 100%
@@ -60,7 +60,7 @@ Load session, tasks, and iteration state.
**For full-pipeline entry (from Phase 1-4)**: Use `testSessionId` passed from Phase 4.
**For direct entry (/workflow:test-cycle-execute)**:
**For direct entry (/workflow-test-fix)**:
- `--resume-session="WFS-xxx"` → Use specified session
- No args → Auto-discover active test session (find `.workflow/active/WFS-test-*`)

View File

@@ -1,896 +0,0 @@
---
name: workflow-wave-plan
description: CSV Wave planning and execution - explore via wave, resolve conflicts, execute from CSV with linked exploration context. Triggers on "workflow:wave-plan".
argument-hint: "<task description> [--yes|-y] [--concurrency|-c N]"
allowed-tools: Task, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# Workflow Wave Plan
CSV Wave-based planning and execution. Uses structured CSV state for both exploration and execution, with cross-phase context propagation via `context_from` linking.
## Architecture
```
Requirement
┌─ Phase 1: Decompose ─────────────────────┐
│ Analyze requirement → explore.csv │
│ (1 row per exploration angle) │
└────────────────────┬──────────────────────┘
┌─ Phase 2: Wave Explore ──────────────────┐
│ Wave loop: spawn Explore agents │
│ → findings/key_files → explore.csv │
└────────────────────┬──────────────────────┘
┌─ Phase 3: Synthesize & Plan ─────────────┐
│ Read explore findings → cross-reference │
│ → resolve conflicts → tasks.csv │
│ (context_from links to E* explore rows) │
└────────────────────┬──────────────────────┘
┌─ Phase 4: Wave Execute ──────────────────┐
│ Wave loop: build prev_context from CSV │
│ → spawn code-developer agents per wave │
│ → results → tasks.csv │
└────────────────────┬──────────────────────┘
┌─ Phase 5: Aggregate ─────────────────────┐
│ results.csv + context.md + summary │
└───────────────────────────────────────────┘
```
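The wave loops in Phases 2 and 4 group rows by a `wave` number that the Wave Engine derives from `deps` (see the CSV schemas below: wave = 1 + max wave of dependencies; rows with no deps are wave 1). A minimal sketch of that derivation, assuming `deps` form a resolvable DAG:

```javascript
// Assign wave numbers from semicolon-separated dep IDs.
function assignWaves(rows) {
  const wave = {};
  const resolve = (id) => {
    if (wave[id]) return wave[id];
    const row = rows.find((r) => r.id === id);
    const deps = row.deps ? row.deps.split(";").filter(Boolean) : [];
    wave[id] = deps.length === 0 ? 1 : 1 + Math.max(...deps.map(resolve));
    return wave[id];
  };
  rows.forEach((r) => resolve(r.id));
  return wave;
}
```

All rows sharing a wave number can be spawned concurrently (up to `maxConcurrency`); the next wave starts once the current one completes.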
## Context Flow
```
explore.csv tasks.csv
┌──────────┐ ┌──────────┐
│ E1: arch │──────────→│ T1: setup│ context_from: E1;E2
│ findings │ │ prev_ctx │← E1+E2 findings
├──────────┤ ├──────────┤
│ E2: deps │──────────→│ T2: impl │ context_from: E1;T1
│ findings │ │ prev_ctx │← E1+T1 findings
├──────────┤ ├──────────┤
│ E3: test │──┐ ┌───→│ T3: test │ context_from: E3;T2
│ findings │ └───┘ │ prev_ctx │← E3+T2 findings
└──────────┘ └──────────┘
Two context channels:
1. Directed: context_from → prev_context (from CSV findings)
2. Broadcast: discoveries.ndjson (append-only shared board)
```
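The broadcast channel is append-only NDJSON: one JSON object per line. A minimal sketch of formatting a discovery record; the record shape (`source`, `note`, `ts`) is illustrative, since the document only specifies the NDJSON board itself:

```javascript
// Format one discovery as a single NDJSON line.
function formatDiscovery(sourceId, note) {
  return JSON.stringify({ source: sourceId, note, ts: new Date().toISOString() }) + "\n";
}

// Usage sketch (Node):
//   fs.appendFileSync(`${sessionFolder}/discoveries.ndjson`, formatDiscovery("E2", "auth uses JWT"));
```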
---
## CSV Schemas
### explore.csv
| Column | Type | Set By | Description |
|--------|------|--------|-------------|
| `id` | string | Decomposer | E1, E2, ... |
| `angle` | string | Decomposer | Exploration angle name |
| `description` | string | Decomposer | What to explore from this angle |
| `focus` | string | Decomposer | Keywords and focus areas |
| `deps` | string | Decomposer | Semicolon-separated dep IDs (usually empty) |
| `wave` | integer | Wave Engine | Wave number (usually 1) |
| `status` | enum | Agent | pending / completed / failed |
| `findings` | string | Agent | Discoveries (max 800 chars) |
| `key_files` | string | Agent | Relevant files (semicolon-separated) |
| `error` | string | Agent | Error message if failed |
### tasks.csv
| Column | Type | Set By | Description |
|--------|------|--------|-------------|
| `id` | string | Planner | T1, T2, ... |
| `title` | string | Planner | Task title |
| `description` | string | Planner | Self-contained task description — what to implement |
| `test` | string | Planner | Test cases: what tests to write and how to verify (unit/integration/edge) |
| `acceptance_criteria` | string | Planner | Measurable conditions that define "done" |
| `scope` | string | Planner | Target file/directory glob — constrains agent write area, prevents cross-task file conflicts |
| `hints` | string | Planner | Implementation tips + reference files. Format: `tips text \|\| file1;file2`. Either part is optional |
| `execution_directives` | string | Planner | Execution constraints: commands to run for verification, tool restrictions |
| `deps` | string | Planner | Dependency task IDs: T1;T2 |
| `context_from` | string | Planner | Context source IDs: **E1;E2;T1** |
| `wave` | integer | Wave Engine | Wave number (computed from deps) |
| `status` | enum | Agent | pending / completed / failed / skipped |
| `findings` | string | Agent | Execution findings (max 500 chars) |
| `files_modified` | string | Agent | Files modified (semicolon-separated) |
| `tests_passed` | boolean | Agent | Whether all defined test cases passed (true/false) |
| `acceptance_met` | string | Agent | Summary of which acceptance criteria were met/unmet |
| `error` | string | Agent | Error if failed |
**context_from prefix convention**: `E*` → explore.csv lookup, `T*` → tasks.csv lookup.
---
## Session Structure
```
.workflow/.wave-plan/{session-id}/
├── explore.csv # Exploration state
├── tasks.csv # Execution state
├── discoveries.ndjson # Shared discovery board
├── explore-results/ # Detailed per-angle results
│ ├── E1.json
│ └── E2.json
├── task-results/ # Detailed per-task results
│ ├── T1.json
│ └── T2.json
├── results.csv # Final results export
└── context.md # Full context summary
```
---
## Session Initialization
```javascript
// Approximate UTC+8 wall time by shifting the epoch 8h (string still ends in 'Z'; used only for slugs/timestamps)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 4
const requirement = $ARGUMENTS
.replace(/--yes|-y|--concurrency\s+\d+|-c\s+\d+/g, '')
.trim()
const slug = requirement.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `wp-${slug}-${dateStr}`
const sessionFolder = `.workflow/.wave-plan/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/explore-results ${sessionFolder}/task-results`)
```
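As a quick sanity check, the slug rules above can be exercised in isolation (the sample requirement string is made up):

```javascript
// Mirrors the slug normalization from session initialization:
// lowercase, collapse any run of chars outside [a-z0-9] and CJK into '-', cap at 40
function makeSlug(requirement) {
  return requirement.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
    .substring(0, 40)
}

console.log(makeSlug('Add OAuth2 login flow'))  // → add-oauth2-login-flow
```

Note that trailing punctuation leaves a trailing dash in the slug; the session id is only a folder name, so this is harmless.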
---
## Phase 1: Decompose → explore.csv
### 1.1 Analyze Requirement
```javascript
const complexity = analyzeComplexity(requirement)
// Low: 1 angle | Medium: 3 angles | High: 4 angles (see angleCount below)
const ANGLE_PRESETS = {
architecture: ['architecture', 'dependencies', 'integration-points', 'modularity'],
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
feature: ['patterns', 'integration-points', 'testing', 'dependencies']
}
function selectAngles(text, count) {
let preset = 'feature'
if (/refactor|architect|restructure|modular/.test(text)) preset = 'architecture'
else if (/security|auth|permission|access/.test(text)) preset = 'security'
else if (/performance|slow|optimi|cache/.test(text)) preset = 'performance'
else if (/fix|bug|error|broken/.test(text)) preset = 'bugfix'
return ANGLE_PRESETS[preset].slice(0, count)
}
const angleCount = complexity === 'High' ? 4 : complexity === 'Medium' ? 3 : 1
const angles = selectAngles(requirement, angleCount)
```
### 1.2 Generate explore.csv
```javascript
const header = 'id,angle,description,focus,deps,wave,status,findings,key_files,error'
const rows = angles.map((angle, i) => {
const id = `E${i + 1}`
const desc = `Explore codebase from ${angle} perspective for: ${requirement}`
return `"${id}","${angle}","${escCSV(desc)}","${angle}","",1,"pending","","",""`
})
Write(`${sessionFolder}/explore.csv`, [header, ...rows].join('\n'))
```
All exploration rows default to wave 1 (independent, executed in parallel). If angle dependencies exist, compute waves from `deps` using the wave computation utility.
---
## Phase 2: Wave Explore
Execute exploration waves using `Task(Explore)` agents.
### 2.1 Wave Loop
```javascript
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
const maxExploreWave = Math.max(...exploreCSV.map(r => parseInt(r.wave)))
for (let wave = 1; wave <= maxExploreWave; wave++) {
const waveRows = exploreCSV.filter(r =>
parseInt(r.wave) === wave && r.status === 'pending'
)
if (waveRows.length === 0) continue
// Skip rows with failed dependencies
const validRows = waveRows.filter(r => {
if (!r.deps) return true
return r.deps.split(';').filter(Boolean).every(depId => {
const dep = exploreCSV.find(d => d.id === depId)
return dep && dep.status === 'completed'
})
})
waveRows.filter(r => !validRows.includes(r)).forEach(r => {
r.status = 'skipped'
r.error = 'Dependency failed/skipped'
})
// ★ Spawn ALL explore agents in SINGLE message → parallel execution
const results = validRows.map(row =>
Task({
subagent_type: "Explore",
run_in_background: false,
description: `Explore: ${row.angle}`,
prompt: buildExplorePrompt(row, requirement, sessionFolder)
})
)
// Collect results from JSON files → update explore.csv
validRows.forEach((row, i) => {
const resultPath = `${sessionFolder}/explore-results/${row.id}.json`
if (fileExists(resultPath)) {
const result = JSON.parse(Read(resultPath))
row.status = result.status || 'completed'
row.findings = truncate(result.findings, 800)
row.key_files = Array.isArray(result.key_files)
? result.key_files.join(';')
: (result.key_files || '')
row.error = result.error || ''
} else {
// Fallback: parse from agent output text
row.status = 'completed'
row.findings = truncate(results[i], 800)
}
})
writeCSV(`${sessionFolder}/explore.csv`, exploreCSV)
}
```
### 2.2 Explore Agent Prompt
```javascript
function buildExplorePrompt(row, requirement, sessionFolder) {
return `## Exploration: ${row.angle}
**Requirement**: ${requirement}
**Focus**: ${row.focus}
### MANDATORY FIRST STEPS
1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
---
## Instructions
Explore the codebase from the **${row.angle}** perspective:
1. Discover relevant files, modules, and patterns
2. Identify integration points and dependencies
3. Note constraints, risks, and conventions
4. Find existing patterns to follow
5. Share discoveries: append findings to ${sessionFolder}/discoveries.ndjson
## Output
Write findings to: ${sessionFolder}/explore-results/${row.id}.json
JSON format:
{
"status": "completed",
"findings": "Concise summary of ${row.angle} discoveries (max 800 chars)",
"key_files": ["relevant/file1.ts", "relevant/file2.ts"],
"details": {
"patterns": ["pattern descriptions"],
"integration_points": [{"file": "path", "description": "..."}],
"constraints": ["constraint descriptions"],
"recommendations": ["recommendation descriptions"]
}
}
Also provide a 2-3 sentence summary.`
}
```
---
## Phase 3: Synthesize & Plan → tasks.csv
Read exploration findings, cross-reference, resolve conflicts, generate execution tasks.
### 3.1 Load Explore Results
```javascript
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
const completed = exploreCSV.filter(r => r.status === 'completed')
// Load detailed result JSONs where available
const detailedResults = {}
completed.forEach(r => {
const path = `${sessionFolder}/explore-results/${r.id}.json`
if (fileExists(path)) detailedResults[r.id] = JSON.parse(Read(path))
})
```
### 3.2 Conflict Resolution Protocol
Cross-reference findings across all exploration angles:
```javascript
// 1. Identify common files referenced by multiple angles
const fileRefs = {}
completed.forEach(r => {
r.key_files.split(';').filter(Boolean).forEach(f => {
if (!fileRefs[f]) fileRefs[f] = []
fileRefs[f].push({ angle: r.angle, id: r.id })
})
})
const sharedFiles = Object.entries(fileRefs).filter(([_, refs]) => refs.length > 1)
// 2. Detect conflicting recommendations
// Compare recommendations from different angles for same file/module
// Flag contradictions (angle A says "refactor X" vs angle B says "extend X")
// 3. Resolution rules:
// a. Safety first — when approaches conflict, choose safer option
// b. Consistency — prefer approaches aligned with existing patterns
// c. Scope — prefer minimal-change approaches
// d. Document — note all resolved conflicts for transparency
const synthesis = {
sharedFiles,
conflicts: detectConflicts(completed, detailedResults),
resolutions: [],
allKeyFiles: [...new Set(completed.flatMap(r => r.key_files.split(';').filter(Boolean)))]
}
```
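The `detectConflicts` call above is left abstract. A minimal sketch, assuming conflicts are surfaced as files referenced by more than one angle, with each angle's recommendations attached so the planner can compare them (real semantic conflict detection stays with the planner):

```javascript
// Hypothetical sketch: collect files referenced by multiple angles and
// attach each angle's recommendations. This only surfaces candidates
// for review; it does not judge whether recommendations truly clash.
function detectConflicts(completed, detailedResults) {
  const byFile = {}
  completed.forEach(r => {
    const recs = (detailedResults[r.id]?.details?.recommendations) || []
    r.key_files.split(';').filter(Boolean).forEach(f => {
      if (!byFile[f]) byFile[f] = []
      byFile[f].push({ angle: r.angle, recommendations: recs })
    })
  })
  return Object.entries(byFile)
    .filter(([_, refs]) => refs.length > 1)
    .map(([file, refs]) => ({ file, refs }))
}
```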
### 3.3 Generate tasks.csv
Decompose into execution tasks based on synthesized exploration:
```javascript
// Task decomposition rules:
// 1. Group by feature/module (not per-file)
// 2. Each description is self-contained (agent sees only its row + prev_context)
// 3. deps only when task B requires task A's output
// 4. context_from links relevant explore rows (E*) and predecessor tasks (T*)
// 5. Prefer parallel (minimize deps)
// 6. Use exploration findings: key_files → target files, patterns → references,
// integration_points → dependency relationships, constraints → included in description
// 7. Each task MUST include: test (how to verify), acceptance_criteria (what defines done)
// 8. scope must not overlap between tasks in the same wave
// 9. hints = implementation tips + reference files (format: tips || file1;file2)
// 10. execution_directives = commands to run for verification, tool restrictions
const tasks = []
// Claude decomposes requirement using exploration synthesis
// Example:
// tasks.push({ id: 'T1', title: 'Setup types', description: '...', test: 'Verify types compile', acceptance_criteria: 'All interfaces exported', scope: 'src/types/**', hints: 'Follow existing type patterns || src/types/index.ts', execution_directives: 'tsc --noEmit', deps: '', context_from: 'E1;E2' })
// tasks.push({ id: 'T2', title: 'Implement core', description: '...', test: 'Unit test: core logic', acceptance_criteria: 'All functions pass tests', scope: 'src/core/**', hints: 'Reuse BaseService || src/services/Base.ts', execution_directives: 'npm test -- --grep core', deps: 'T1', context_from: 'E1;E2;T1' })
// tasks.push({ id: 'T3', title: 'Add tests', description: '...', test: 'Integration test suite', acceptance_criteria: '>80% coverage', scope: 'tests/**', hints: 'Follow existing test patterns || tests/auth.test.ts', execution_directives: 'npm test', deps: 'T2', context_from: 'E3;T2' })
// Compute waves
const waves = computeWaves(tasks)
tasks.forEach(t => { t.wave = waves[t.id] })
// Write tasks.csv
const header = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error'
const rows = tasks.map(t =>
[t.id, escCSV(t.title), escCSV(t.description), escCSV(t.test), escCSV(t.acceptance_criteria), escCSV(t.scope), escCSV(t.hints), escCSV(t.execution_directives), t.deps, t.context_from, t.wave, 'pending', '', '', '', '', '']
.map(v => `"${v}"`).join(',')
)
Write(`${sessionFolder}/tasks.csv`, [header, ...rows].join('\n'))
```
### 3.4 User Confirmation
```javascript
if (!AUTO_YES) {
const maxWave = Math.max(...tasks.map(t => t.wave))
console.log(`
## Execution Plan
Explore: ${completed.length} angles completed
Conflicts resolved: ${synthesis.conflicts.length}
Tasks: ${tasks.length} across ${maxWave} waves
${Array.from({length: maxWave}, (_, i) => i + 1).map(w => {
const wt = tasks.filter(t => t.wave === w)
return `### Wave ${w} (${wt.length} tasks, concurrent)
${wt.map(t => ` - [${t.id}] ${t.title} (from: ${t.context_from})`).join('\n')}`
}).join('\n')}
`)
AskUserQuestion({
questions: [{
question: `Proceed with ${tasks.length} tasks across ${maxWave} waves?`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Execute", description: "Proceed with wave execution" },
{ label: "Modify", description: `Edit ${sessionFolder}/tasks.csv then re-run` },
{ label: "Cancel", description: "Abort" }
]
}]
})
}
```
---
## Phase 4: Wave Execute
Execute tasks from tasks.csv in wave order, with prev_context built from both explore.csv and tasks.csv.
### 4.1 Wave Loop
```javascript
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
const failedIds = new Set()
const skippedIds = new Set()
let tasksCSV = parseCSV(Read(`${sessionFolder}/tasks.csv`))
const maxWave = Math.max(...tasksCSV.map(r => parseInt(r.wave)))
for (let wave = 1; wave <= maxWave; wave++) {
// Re-read master CSV (updated by previous wave)
tasksCSV = parseCSV(Read(`${sessionFolder}/tasks.csv`))
const waveRows = tasksCSV.filter(r =>
parseInt(r.wave) === wave && r.status === 'pending'
)
if (waveRows.length === 0) continue
// Skip on failed dependencies (cascade)
const validRows = []
for (const row of waveRows) {
const deps = (row.deps || '').split(';').filter(Boolean)
if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
skippedIds.add(row.id)
row.status = 'skipped'
row.error = 'Dependency failed/skipped'
continue
}
validRows.push(row)
}
if (validRows.length === 0) {
writeCSV(`${sessionFolder}/tasks.csv`, tasksCSV)
continue
}
// Build prev_context for each row from explore.csv + tasks.csv
validRows.forEach(row => {
row._prev_context = buildPrevContext(row.context_from, exploreCSV, tasksCSV)
})
// ★ Spawn ALL task agents in SINGLE message → parallel execution
const results = validRows.map(row =>
Task({
subagent_type: "code-developer",
run_in_background: false,
description: row.title,
prompt: buildExecutePrompt(row, requirement, sessionFolder)
})
)
// Collect results → update tasks.csv
validRows.forEach((row, i) => {
const resultPath = `${sessionFolder}/task-results/${row.id}.json`
if (fileExists(resultPath)) {
const result = JSON.parse(Read(resultPath))
row.status = result.status || 'completed'
row.findings = truncate(result.findings, 500)
row.files_modified = Array.isArray(result.files_modified)
? result.files_modified.join(';')
: (result.files_modified || '')
row.tests_passed = String(result.tests_passed ?? '')
row.acceptance_met = result.acceptance_met || ''
row.error = result.error || ''
} else {
row.status = 'completed'
row.findings = truncate(results[i], 500)
}
if (row.status === 'failed') failedIds.add(row.id)
delete row._prev_context // runtime-only, don't persist
})
writeCSV(`${sessionFolder}/tasks.csv`, tasksCSV)
}
```
### 4.2 prev_context Builder
The key function linking exploration context to execution:
```javascript
function buildPrevContext(contextFrom, exploreCSV, tasksCSV) {
if (!contextFrom) return 'No previous context available'
const ids = contextFrom.split(';').filter(Boolean)
const entries = []
ids.forEach(id => {
if (id.startsWith('E')) {
// ← Look up in explore.csv (cross-phase link)
const row = exploreCSV.find(r => r.id === id)
if (row && row.status === 'completed' && row.findings) {
entries.push(`[Explore ${row.angle}] ${row.findings}`)
if (row.key_files) entries.push(` Key files: ${row.key_files}`)
}
} else if (id.startsWith('T')) {
// ← Look up in tasks.csv (same-phase link)
const row = tasksCSV.find(r => r.id === id)
if (row && row.status === 'completed' && row.findings) {
entries.push(`[Task ${row.id}: ${row.title}] ${row.findings}`)
if (row.files_modified) entries.push(` Modified: ${row.files_modified}`)
}
}
})
return entries.length > 0 ? entries.join('\n') : 'No previous context available'
}
```
### 4.3 Execute Agent Prompt
```javascript
function buildExecutePrompt(row, requirement, sessionFolder) {
return `## Task: ${row.title}
**ID**: ${row.id}
**Goal**: ${requirement}
**Scope**: ${row.scope || 'Not specified'}
## Description
${row.description}
### Implementation Hints & Reference Files
${row.hints || 'None'}
> Format: \`tips text || file1;file2\`. Read ALL reference files (after ||) before starting. Apply tips (before ||) as guidance.
### Execution Directives
${row.execution_directives || 'None'}
> Commands to run for verification, tool restrictions, or environment requirements.
### Test Cases
${row.test || 'None specified'}
### Acceptance Criteria
${row.acceptance_criteria || 'None specified'}
## Previous Context (from exploration and predecessor tasks)
${row._prev_context}
### MANDATORY FIRST STEPS
1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
---
## Execution Protocol
1. **Read references**: Parse hints — read all files listed after \`||\` to understand existing patterns
2. **Read discoveries**: Load ${sessionFolder}/discoveries.ndjson for shared exploration findings
3. **Use context**: Apply previous tasks' findings from prev_context above
4. **Stay in scope**: ONLY create/modify files within ${row.scope || 'project'} — do NOT touch files outside this boundary
5. **Apply hints**: Follow implementation tips from hints (before \`||\`)
6. **Implement**: Execute changes described in the task description
7. **Write tests**: Implement the test cases defined above
8. **Run directives**: Execute commands from execution_directives to verify your work
9. **Verify acceptance**: Ensure all acceptance criteria are met before reporting completion
10. **Share discoveries**: Append exploration findings to shared board:
\`\`\`bash
echo '{"ts":"<ISO>","worker":"${row.id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
\`\`\`
11. **Report result**: Write JSON to output file
## Output
Write results to: ${sessionFolder}/task-results/${row.id}.json
{
"status": "completed" | "failed",
"findings": "What was done (max 500 chars)",
"files_modified": ["file1.ts", "file2.ts"],
"tests_passed": true | false,
"acceptance_met": "Summary of which acceptance criteria were met/unmet",
"error": ""
}
**IMPORTANT**: Set status to "completed" ONLY if:
- All test cases pass
- All acceptance criteria are met
Otherwise set status to "failed" with details in error field.`
}
```
---
## Phase 5: Aggregate
### 5.1 Generate Results
```javascript
const finalTasks = parseCSV(Read(`${sessionFolder}/tasks.csv`))
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
Bash(`cp "${sessionFolder}/tasks.csv" "${sessionFolder}/results.csv"`)
const completed = finalTasks.filter(r => r.status === 'completed')
const failed = finalTasks.filter(r => r.status === 'failed')
const skipped = finalTasks.filter(r => r.status === 'skipped')
const maxWave = Math.max(...finalTasks.map(r => parseInt(r.wave)))
```
### 5.2 Generate context.md
```javascript
const contextMd = `# Wave Plan Results
**Requirement**: ${requirement}
**Session**: ${sessionId}
**Timestamp**: ${getUtc8ISOString()}
## Summary
| Metric | Count |
|--------|-------|
| Explore Angles | ${exploreCSV.length} |
| Total Tasks | ${finalTasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Waves | ${maxWave} |
## Exploration Results
${exploreCSV.map(e => `### ${e.id}: ${e.angle} (${e.status})
${e.findings || 'N/A'}
Key files: ${e.key_files || 'none'}`).join('\n\n')}
## Task Results
${finalTasks.map(t => `### ${t.id}: ${t.title} (${t.status})
| Field | Value |
|-------|-------|
| Wave | ${t.wave} |
| Scope | ${t.scope || 'none'} |
| Dependencies | ${t.deps || 'none'} |
| Context From | ${t.context_from || 'none'} |
| Tests Passed | ${t.tests_passed || 'N/A'} |
| Acceptance Met | ${t.acceptance_met || 'N/A'} |
| Error | ${t.error || 'none'} |
**Description**: ${t.description}
**Test Cases**: ${t.test || 'N/A'}
**Acceptance Criteria**: ${t.acceptance_criteria || 'N/A'}
**Hints**: ${t.hints || 'N/A'}
**Execution Directives**: ${t.execution_directives || 'N/A'}
**Findings**: ${t.findings || 'N/A'}
**Files Modified**: ${t.files_modified || 'none'}`).join('\n\n---\n\n')}
## All Modified Files
${[...new Set(finalTasks.flatMap(t =>
(t.files_modified || '').split(';')).filter(Boolean)
)].map(f => '- ' + f).join('\n') || 'None'}
`
Write(`${sessionFolder}/context.md`, contextMd)
```
### 5.3 Summary & Next Steps
```javascript
console.log(`
## Wave Plan Complete
Session: ${sessionFolder}
Explore: ${exploreCSV.filter(r => r.status === 'completed').length}/${exploreCSV.length} angles
Tasks: ${completed.length}/${finalTasks.length} completed, ${failed.length} failed, ${skipped.length} skipped
Waves: ${maxWave}
Files:
- explore.csv — exploration state
- tasks.csv — execution state
- results.csv — final results
- context.md — full report
- discoveries.ndjson — shared discoveries
`)
if (!AUTO_YES && failed.length > 0) {
AskUserQuestion({
questions: [{
question: `${failed.length} tasks failed. Next action?`,
header: "Next Step",
multiSelect: false,
options: [
{ label: "Retry Failed", description: "Reset failed + skipped, re-execute Phase 4" },
{ label: "View Report", description: "Display context.md" },
{ label: "Done", description: "Complete session" }
]
}]
})
// If Retry: reset failed/skipped status to pending, re-run Phase 4
}
```
---
## Utilities
### Wave Computation (Kahn's BFS)
```javascript
function computeWaves(tasks) {
const inDegree = {}, adj = {}, depth = {}
tasks.forEach(t => { inDegree[t.id] = 0; adj[t.id] = []; depth[t.id] = 1 })
tasks.forEach(t => {
const deps = (t.deps || '').split(';').filter(Boolean)
deps.forEach(dep => {
if (adj[dep]) { adj[dep].push(t.id); inDegree[t.id]++ }
})
})
const queue = Object.keys(inDegree).filter(id => inDegree[id] === 0)
queue.forEach(id => { depth[id] = 1 })
while (queue.length > 0) {
const current = queue.shift()
adj[current].forEach(next => {
depth[next] = Math.max(depth[next], depth[current] + 1)
inDegree[next]--
if (inDegree[next] === 0) queue.push(next)
})
}
if (Object.values(inDegree).some(d => d > 0)) {
throw new Error('Circular dependency detected')
}
return depth // { taskId: waveNumber }
}
```
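For the diamond dependency mentioned in the usage table (A→B,C→D), the function assigns three waves; the snippet repeats `computeWaves` so it runs standalone:

```javascript
// computeWaves as defined above, repeated so this example is self-contained
function computeWaves(tasks) {
  const inDegree = {}, adj = {}, depth = {}
  tasks.forEach(t => { inDegree[t.id] = 0; adj[t.id] = []; depth[t.id] = 1 })
  tasks.forEach(t => {
    (t.deps || '').split(';').filter(Boolean).forEach(dep => {
      if (adj[dep]) { adj[dep].push(t.id); inDegree[t.id]++ }
    })
  })
  const queue = Object.keys(inDegree).filter(id => inDegree[id] === 0)
  while (queue.length > 0) {
    const current = queue.shift()
    adj[current].forEach(next => {
      depth[next] = Math.max(depth[next], depth[current] + 1)
      inDegree[next]--
      if (inDegree[next] === 0) queue.push(next)
    })
  }
  if (Object.values(inDegree).some(d => d > 0)) throw new Error('Circular dependency detected')
  return depth
}

// Diamond dependency from the usage table: T1 → (T2, T3) → T4
const waves = computeWaves([
  { id: 'T1', deps: '' },
  { id: 'T2', deps: 'T1' },
  { id: 'T3', deps: 'T1' },
  { id: 'T4', deps: 'T2;T3' }
])
console.log(waves)  // → { T1: 1, T2: 2, T3: 2, T4: 3 }
```

T2 and T3 land in the same wave and run concurrently; T4 waits for both.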
### CSV Helpers
```javascript
function escCSV(s) { return String(s || '').replace(/"/g, '""') }
function parseCSV(content) {
const lines = content.trim().split('\n')
const header = lines[0].split(',').map(h => h.replace(/"/g, '').trim())
return lines.slice(1).filter(l => l.trim()).map(line => {
const values = parseCSVLine(line)
const row = {}
header.forEach((col, i) => { row[col] = (values[i] || '').replace(/^"|"$/g, '') })
return row
})
}
function writeCSV(path, rows) {
if (rows.length === 0) return
// Exclude runtime-only columns (prefixed with _)
const cols = Object.keys(rows[0]).filter(k => !k.startsWith('_'))
const header = cols.join(',')
const lines = rows.map(r =>
cols.map(c => `"${escCSV(r[c])}"`).join(',')
)
Write(path, [header, ...lines].join('\n'))
}
function truncate(s, max) {
s = String(s || '')
return s.length > max ? s.substring(0, max - 3) + '...' : s
}
```
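`parseCSV` above calls a `parseCSVLine` helper that is not shown. A minimal sketch that splits one line on unquoted commas and unescapes doubled quotes (not a full RFC 4180 parser; multi-line fields are unsupported):

```javascript
// Minimal quoted-field splitter for a single CSV line: handles commas
// inside quotes and "" escape sequences. Surrounding quotes are consumed
// here, so parseCSV's trailing quote-strip becomes a no-op.
function parseCSVLine(line) {
  const values = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (ch === '"') {
      if (inQuotes && line[i + 1] === '"') { cur += '"'; i++ }  // escaped quote
      else inQuotes = !inQuotes
    } else if (ch === ',' && !inQuotes) {
      values.push(cur); cur = ''
    } else {
      cur += ch
    }
  }
  values.push(cur)
  return values
}
```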
---
## Discovery Board Protocol
Shared `discoveries.ndjson` — append-only NDJSON accessible to all agents across all phases.
**Lifecycle**:
- Created by the first agent to write a discovery
- Carries over across all phases and waves — never cleared
- Agents append via `echo '...' >> discoveries.ndjson`
**Format**: NDJSON; each line is a self-contained JSON object:
```jsonl
{"ts":"...","worker":"E1","type":"code_pattern","data":{"name":"repo-pattern","file":"src/repos/Base.ts"}}
{"ts":"...","worker":"T2","type":"integration_point","data":{"file":"src/auth/index.ts","exports":["auth"]}}
```
**Discovery Types**:
| type | Dedup Key | Description |
|------|-----------|-------------|
| `code_pattern` | `data.name` | Reusable code pattern found |
| `integration_point` | `data.file` | Module connection point |
| `convention` | singleton | Code style conventions |
| `blocker` | `data.issue` | Blocking issue encountered |
| `tech_stack` | singleton | Project technology stack |
| `test_command` | singleton | Test commands discovered |
**Protocol Rules**:
1. Read board before own exploration → skip covered areas
2. Write discoveries immediately via `echo >>` → don't batch
3. Deduplicate — check existing entries; skip if same type + dedup key exists
4. Append-only — never modify or delete existing lines
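Rule 3 can be sketched as a read-before-append check. The `shouldAppend` helper below is hypothetical; its dedup keys mirror the table above, and types without a key are treated as singletons:

```javascript
// Illustrative dedup check before appending a discovery. Malformed
// lines are skipped, matching the corrupt-board rule in Error Handling.
const DEDUP_KEYS = { code_pattern: 'name', integration_point: 'file', blocker: 'issue' }

function shouldAppend(boardText, entry) {
  const key = DEDUP_KEYS[entry.type]
  for (const line of boardText.split('\n').filter(Boolean)) {
    let existing
    try { existing = JSON.parse(line) } catch { continue }  // ignore corrupt lines
    if (existing.type !== entry.type) continue
    if (!key) return false                               // singleton types: one entry total
    if ((existing.data || {})[key] === entry.data[key]) return false
  }
  return true
}
```

An agent would read the board, call `shouldAppend`, and only then `echo >>` the new line.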
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Explore agent failure | Mark as failed in explore.csv, exclude from planning |
| All explores failed | Fallback: plan directly from requirement without exploration |
| Execute agent failure | Mark as failed, skip dependents (cascade) |
| Agent timeout | Mark as failed in results, continue with wave |
| Circular dependency | Abort wave computation, report cycle |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
---
## Core Rules
1. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes
2. **CSV is Source of Truth**: Read master CSV before each wave, write after
3. **Context via CSV**: prev_context built from CSV findings, not from memory
4. **E* ↔ T* Linking**: tasks.csv `context_from` references explore.csv rows for cross-phase context
5. **Skip on Failure**: Failed dep → skip dependent (cascade)
6. **Discovery Board Append-Only**: Never clear or modify discoveries.ndjson
7. **Explore Before Execute**: Phase 2 completes before Phase 4 starts
8. **DO NOT STOP**: Continue executing until every wave completes or all remaining tasks are skipped
---
## Best Practices
1. **Exploration Angles**: 1 for simple, 3-4 for complex; avoid redundant angles
2. **Context Linking**: Link every task to at least one explore row (E*) — exploration was done for a reason
3. **Task Granularity**: 3-10 tasks optimal; too many = overhead, too few = no parallelism
4. **Minimize Cross-Wave Deps**: More tasks in wave 1 = more parallelism
5. **Specific Descriptions**: Agent sees only its CSV row + prev_context — make description self-contained
6. **Non-Overlapping Scopes**: Same-wave tasks must not write to the same files
7. **Context From ≠ Deps**: `deps` = execution order constraint; `context_from` = information flow
---
## Usage Recommendations
| Scenario | Recommended Approach |
|----------|---------------------|
| Complex feature (unclear architecture) | `workflow-wave-plan` — explore first, then plan |
| Simple known-pattern task | `$csv-wave-pipeline` — skip exploration, direct execution |
| Independent parallel tasks | `$csv-wave-pipeline -c 8` — single wave, max parallelism |
| Diamond dependency (A→B,C→D) | `workflow-wave-plan` — 3 waves with context propagation |
| Unknown codebase | `workflow-wave-plan` — exploration phase is essential |


@@ -457,7 +457,7 @@ function buildCliCommand(task, cliTool, cliPrompt) {
**Auto-Check Workflow Context**:
- Verify session context paths are provided in agent prompt
-- If missing, request session context from workflow:execute
+- If missing, request session context from workflow-execute
- Never assume default paths without explicit session context
### 5. Problem-Solving


@@ -1,11 +1,11 @@
---
-description: "Interactive pre-flight checklist for workflow:plan. Validates environment, refines task to GOAL/SCOPE/CONTEXT, collects source docs, configures execution preferences, writes prep-package.json, then launches the workflow."
+description: "Interactive pre-flight checklist for workflow-plan. Validates environment, refines task to GOAL/SCOPE/CONTEXT, collects source docs, configures execution preferences, writes prep-package.json, then launches the workflow."
argument-hint: TASK="<task description>" [EXEC_METHOD=agent|cli|hybrid] [CLI_TOOL=codex|gemini|qwen]
---
# Pre-Flight Checklist for Workflow Plan
-You are an interactive preparation assistant. Your job is to ensure everything is ready for an **unattended** `workflow:plan` run with `--yes` mode. Follow each step sequentially. **Ask the user questions when information is missing.** At the end, write `prep-package.json` and invoke the skill.
+You are an interactive preparation assistant. Your job is to ensure everything is ready for an **unattended** `workflow-plan` run with `--yes` mode. Follow each step sequentially. **Ask the user questions when information is missing.** At the end, write `prep-package.json` and invoke the skill.
---
@@ -112,7 +112,7 @@ Display detected sources:
### 2.1 Scoring
-Score the user's TASK against 5 dimensions, mapped to workflow:plan's GOAL/SCOPE/CONTEXT format.
+Score the user's TASK against 5 dimensions, mapped to workflow-plan's GOAL/SCOPE/CONTEXT format.
Each dimension scores 0-2 (0=missing, 1=vague, 2=clear). **Total minimum: 6/10 to proceed.**
| # | Dimension | Maps To | Scoring Criteria |
@@ -165,7 +165,7 @@ For dimensions still at score 1 after Q&A, auto-enhance from codebase:
### 2.5 Assemble Structured Description
-Map to workflow:plan's GOAL/SCOPE/CONTEXT format:
+Map to workflow-plan's GOAL/SCOPE/CONTEXT format:
```
GOAL: {objective + success criteria}
@@ -353,7 +353,7 @@ $workflow-plan-execute --yes --with-commit TASK="$TASK_STRUCTURED"
Print:
```
-Launching workflow:plan (auto mode)...
+Launching workflow-plan (auto mode)...
prep-package.json → auto-loaded and validated by Phase 1
Execution method: hybrid (codex) + auto-commit
```


@@ -800,7 +800,7 @@ Offer user follow-up actions based on brainstorming results.
| Option | Purpose | Action |
|--------|---------|--------|
-| **Create Implementation Plan** | Plan implementation of top idea | Launch `workflow:lite-plan` |
+| **Create Implementation Plan** | Plan implementation of top idea | Launch `workflow-lite-plan` |
| **Create Issue** | Track top ideas for later | Launch `issue:new` with ideas |
| **Deep Analysis** | Analyze top idea in detail | Launch `workflow:analyze-with-file` |
| **Export & Share** | Generate shareable report | Create formatted report document |


@@ -1,261 +1,503 @@
---
name: team-planex
description: |
-Inline planning + delegated execution pipeline. Main flow does planning directly,
-spawns Codex executor per issue immediately. All execution via Codex CLI only.
+Plan-and-execute pipeline with inverted control. Accepts issues.jsonl or roadmap
+session from roadmap-with-file. Delegates planning to issue-plan-agent (background),
+executes inline (main flow). Interleaved plan-execute loop.
---
-# Team PlanEx (Codex)
+# Team PlanEx
-Inline planning in the main flow + delegated execution. SKILL.md does the planning itself (no planner agent is spawned); as soon as each issue's solution is complete, an executor agent is spawned to implement it in parallel, with no need to wait for all planning to finish.
+Accepts `issues.jsonl` or a roadmap session as input; delegated planning + inline execution. Spawns issue-plan-agent to plan the next issue in the background while the main flow executes the already-planned issue inline. Interleaved loop: plan → execute → plan next → execute → until all issues are complete.
## Architecture
```
-┌────────────────────────────────────────┐
-│ SKILL.md (main flow = planning + beat control) │
+┌────────────────────────────────────────────────────┐
+│ SKILL.md (main flow = inline execution + loop control)
│ │
-│ Phase 1: parse input + initialize session │
-│ Phase 2: per-issue planning loop (inline)
-│ ├── issue-plan → write solution artifact
-│ ├── spawn executor agent ────────────┼──> [executor] implement
-│ └── continue (do not wait for executor)
-│ Phase 3: wait for all executors
-│ Phase 4: aggregate report
-└────────────────────────────────────────┘
+│ Phase 1: load issues.jsonl / roadmap session
+│ Phase 2: interleaved plan-execute loop
+│ ├── take next unplanned issue → spawn planner (bg)
+│ ├── take next planned issue → execute inline │
+│ ├── inline implementation (Read/Edit/Write/Bash)
+│ ├── verify tests + self-repair (max 3 retries)
+│ └── git commit
+│ └── loop until all issues are planned + executed │
+│ Phase 3: aggregate report │
+└────────────────────────────────────────────────────┘
+Planner (single reusable agent, background):
+issue-plan-agent spawn once → send_input per issue → write solution JSON → write .ready marker
```
## Beat Model (Interleaved Plan-Execute Loop)
```
Interleaved Loop (Phase 2, single planner reused via send_input):
═══════════════════════════════════════════════════════════════
Beat 1 Beat 2 Beat 3
| | |
spawn+plan(A) send_input(C) (drain)
↓ ↓ ↓
poll → A.ready poll → B.ready poll → C.ready
↓ ↓ ↓
exec(A) exec(B) exec(C)
↓ ↓ ↓
send_input(B) send_input(done) done
(eager delegate) (all delegated)
═══════════════════════════════════════════════════════════════
Planner timeline (never idle between issues):
─────────────────────────────────────────────────────────────
Planner: ├─plan(A)─┤├─plan(B)─┤├─plan(C)─┤ done
Main: 2a(spawn) ├─exec(A)──┤├─exec(B)──┤├─exec(C)──┤
^2f→send(B) ^2f→send(C)
─────────────────────────────────────────────────────────────
Single Beat Detail:
───────────────────────────────────────────────────────────────
1. Delegate next 2. Poll ready 3. Execute 4. Verify + commit
(2a or 2f eager) solutions inline + eager delegate
| | | |
send_input(bg) Glob(*.ready) Read/Edit/Write test → commit
│ → send_input(next)
wait if none ready
───────────────────────────────────────────────────────────────
```
---
## Agent Registry
-| Agent | Role File | Responsibility |
-|-------|-----------|----------------|
-| `executor` | `~/.codex/agents/planex-executor.md` | Codex CLI implementation per issue |
+| Agent | Role File | Responsibility | Pattern |
+|-------|-----------|----------------|---------|
+| `issue-plan-agent` | `~/.codex/agents/issue-plan-agent.md` | Explore codebase + generate solutions, single instance reused via send_input | 2.3 Deep Interaction |
> Executor agent must be deployed to `~/.codex/agents/` before use.
> Source: `.codex/skills/team-planex/agents/`
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs and agent instructions are reduced to summaries, **you MUST immediately `Read` the corresponding agent.md to reload before continuing execution**.
---
## Subagent API Reference
### spawn_agent
```javascript
const agentId = spawn_agent({
  message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
issue_ids: ["${issueId}"]
project_root: "${projectRoot}"
## Requirements
- Generate solution for this issue
- Auto-bind single solution
- Output solution JSON when complete
${taskContext}
`
})
```
### wait
```javascript
const result = wait({
  ids: [agentId],
  timeout_ms: 600000 // 10 minutes
})
```
### send_input
Continue interaction with an active agent (reused for the next issue).
```javascript
send_input({
  id: agentId,
  message: `
## NEXT ISSUE
issue_ids: ["<nextIssueId>"]
## Output Requirements
1. Generate solution for this issue
2. Write solution JSON to: <artifactsDir>/<nextIssueId>.json
3. Write ready marker to: <artifactsDir>/<nextIssueId>.ready
`
})
```
### close_agent
```javascript
close_agent({ id: agentId })
```
---
## Input
Accepts output from `roadmap-with-file` or direct `issues.jsonl` path.
Supported input forms (parse from `$ARGUMENTS`):
| Form | Detection | Example |
|------|-----------|---------|
| Roadmap session | `RMAP-` prefix or `--session` flag | `team-planex --session RMAP-auth-20260301` |
| Issues JSONL path | `.jsonl` extension | `team-planex .workflow/issues/issues.jsonl` |
| Issue IDs | `ISS-\d{8}-\d{3,6}` regex | `team-planex ISS-20260301-001 ISS-20260301-002` |
### Input Resolution
| Input Form | Resolution |
|------------|------------|
| `--session RMAP-*` | Read `.workflow/.roadmap/<sessionId>/roadmap.md` → extract issue IDs from Roadmap table → load issue data from `.workflow/issues/issues.jsonl` |
| `.jsonl` path | Read file → parse each line as JSON → collect all issues |
| Issue IDs | Use directly → fetch details via `ccw issue status <id> --json` |
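The detection rules above can be sketched as a small matcher (a hypothetical helper; the regexes follow the detection column of the forms table):

```javascript
// Classify raw $ARGUMENTS into one of the supported input forms.
function detectInputForm(args) {
  if (/(^|\s)--session\s/.test(` ${args} `) || /\bRMAP-/.test(args)) return 'roadmap'
  if (/\.jsonl\b/.test(args)) return 'jsonl'               // path ending in .jsonl
  if (/\bISS-\d{8}-\d{3,6}\b/.test(args)) return 'issue_ids' // one or more issue IDs
  return 'unknown'
}
```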
### Issue Record Fields Used
| Field | Usage |
|-------|-------|
| `id` | Issue identifier for planning and execution |
| `title` | Commit message and reporting |
| `status` | Skip if already `completed` |
| `tags` | Wave ordering: `wave-1` before `wave-2` |
| `extended_context.notes.depends_on_issues` | Execution ordering |
### Wave Ordering
Issues are sorted by wave tag for execution order:
| Priority | Rule |
|----------|------|
| 1 | Lower wave number first (`wave-1` before `wave-2`) |
| 2 | Within same wave: issues without `depends_on_issues` first |
| 3 | Within same wave + no deps: original order from JSONL |
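The three rules can be expressed as a comparator (an illustrative sketch; field names follow the Issue Record Fields table, and issues without a wave tag are assumed to sort last):

```javascript
// Extract the numeric wave from tags like "wave-1"; untagged issues sort last.
function waveNumber(issue) {
  const tag = (issue.tags || []).find(t => /^wave-\d+$/.test(t))
  return tag ? parseInt(tag.split('-')[1], 10) : Infinity
}

// Sort per the wave-ordering rules, preserving original JSONL order as tiebreak.
function sortByWave(issues) {
  return issues
    .map((issue, index) => ({ issue, index }))
    .sort((a, b) => {
      const waveDiff = waveNumber(a.issue) - waveNumber(b.issue)
      if (waveDiff !== 0) return waveDiff // rule 1: lower wave first
      const depsA = a.issue.extended_context?.notes?.depends_on_issues?.length || 0
      const depsB = b.issue.extended_context?.notes?.depends_on_issues?.length || 0
      if ((depsA === 0) !== (depsB === 0)) return depsA === 0 ? -1 : 1 // rule 2: no-deps first
      return a.index - b.index // rule 3: stable original order
    })
    .map(e => e.issue)
}
```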
---
## Session Setup
Initialize session directory before processing:
| Item | Value |
|------|-------|
| Slug | `toSlug(<first issue title>)` truncated to 20 chars |
| Date | `YYYYMMDD` format |
| Session dir | `.workflow/.team/PEX-<slug>-<date>` |
| Solutions dir | `<sessionDir>/artifacts/solutions` |
Create directories:
```bash
mkdir -p "<sessionDir>/artifacts/solutions"
```
Write `<sessionDir>/team-session.json`:
| Field | Value |
|-------|-------|
| `session_id` | `PEX-<slug>-<date>` |
| `input_type` | `roadmap` / `jsonl` / `issue_ids` |
| `source_session` | Roadmap session ID (if applicable) |
| `issue_ids` | Array of all issue IDs to process (wave-sorted) |
| `status` | `"running"` |
| `started_at` | ISO timestamp |
---
## Phase 1: Load Issues + Initialize
1. Parse `$ARGUMENTS` to determine input form
2. Resolve issues (see Input Resolution table)
3. Filter out issues with `status: completed`
4. Sort by wave ordering
5. Collect into `<issueQueue>` (ordered list of issue IDs to process)
6. Initialize session directory and `team-session.json`
7. Set `<plannedSet>` = {} , `<executedSet>` = {} , `<plannerAgent>` = null (single reusable planner)
---
## Phase 2: Plan-Execute Loop
Interleaved loop that keeps planner agent busy at all times. Each beat: (1) delegate next issue to planner if idle, (2) poll for ready solutions, (3) execute inline, (4) after execution completes, immediately delegate next issue to planner before polling again.
### Loop Entry
Set `<queueIndex>` = 0 (pointer into `<issueQueue>`).
### 2a. Delegate Next Issue to Planner
Single planner agent is spawned once and reused via `send_input` for subsequent issues.
| Condition | Action |
|-----------|--------|
| `<plannerAgent>` is null AND `<queueIndex>` < `<issueQueue>.length` | Spawn planner (first issue), advance `<queueIndex>` |
| `<plannerAgent>` exists, idle AND `<queueIndex>` < `<issueQueue>.length` | `send_input` next issue, advance `<queueIndex>` |
| `<plannerAgent>` is busy | Skip (wait for current planning to finish) |
| `<queueIndex>` >= `<issueQueue>.length` | No more issues to plan |
**First issue — spawn planner**:
```javascript
const plannerAgent = spawn_agent({
  message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
issue_ids: ["<issueId>"]
project_root: "<projectRoot>"
## Output Requirements
1. Generate solution for this issue
2. Write solution JSON to: <artifactsDir>/<issueId>.json
3. Write ready marker to: <artifactsDir>/<issueId>.ready
   - Marker content: {"issue_id":"<issueId>","task_count":<task_count>,"file_count":<file_count>}
## Multi-Issue Mode
You will receive additional issues via follow-up messages. After completing each issue,
output results and wait for next instruction.
`
})
```
**Subsequent issues — send_input**:
```javascript
send_input({
  id: plannerAgent,
  message: `
## NEXT ISSUE
issue_ids: ["<nextIssueId>"]
## Output Requirements
1. Generate solution for this issue
2. Write solution JSON to: <artifactsDir>/<nextIssueId>.json
3. Write ready marker to: <artifactsDir>/<nextIssueId>.ready
   - Marker content: {"issue_id":"<nextIssueId>","task_count":<task_count>,"file_count":<file_count>}
`
})
```
Record `<planningIssueId>` = current issue ID.
### 2b. Poll for Ready Solutions
Poll `<artifactsDir>/*.ready` using Glob.
| Condition | Action |
|-----------|--------|
| New `.ready` found (not in `<executedSet>`) | Load `<issueId>.json` solution → proceed to 2c |
| `<plannerAgent>` busy, no `.ready` yet | Check planner: `wait({ ids: [<plannerAgent>], timeout_ms: 30000 })` |
| Planner finished current issue | Mark planner idle, re-poll |
| Planner timed out (30s wait) | Re-poll (planner still working) |
| No `.ready`, planner idle, all issues delegated | Exit loop → Phase 3 |
| Idle >5 minutes total | Exit loop → Phase 3 |
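The polling table can be condensed into a pure decision function (a sketch; the inputs are assumptions standing in for Glob results and planner state, not an existing API):

```javascript
// One iteration of the 2b poll loop, mirroring the condition/action table above.
function pollDecision({ readyIds, executedSet, plannerBusy, allDelegated, idleMs }) {
  const fresh = readyIds.filter(id => !executedSet.has(id))
  if (fresh.length > 0) return { action: 'execute', issueId: fresh[0] } // new .ready found
  if (idleMs > 5 * 60 * 1000) return { action: 'exit' }                 // idle >5 min total
  if (plannerBusy) return { action: 'wait_planner', timeout_ms: 30000 } // check planner
  if (allDelegated) return { action: 'exit' }                           // nothing left to plan
  return { action: 'repoll' }
}
```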
### 2c. Inline Execution
The main flow implements the solution directly. For each task in `solution.tasks`, ordered by its `depends_on` sequence:
| Step | Action | Tool |
|------|--------|------|
| 1. Read context | Read all files referenced in current task | Read |
| 2. Identify patterns | Note imports, naming conventions, existing structure | — (inline reasoning) |
| 3. Apply changes | Modify existing files or create new files | Edit (prefer) / Write (new files) |
| 4. Build check | Run project build command if available | Bash |
Build verification:
```bash
npm run build 2>&1 || echo BUILD_FAILED
```
| Build Result | Action |
|--------------|--------|
| Success | Proceed to 2d |
| Failure | Analyze error → fix source → rebuild (max 3 retries) |
| No build command | Skip, proceed to 2d |
### 2d. Verify Tests
Detect test command:
| Priority | Detection |
|----------|-----------|
| 1 | `package.json` `scripts.test` |
| 2 | `package.json` `scripts.test:unit` |
| 3 | `pytest.ini` / `setup.cfg` (Python) |
| 4 | `Makefile` test target |
Run tests. If tests fail → self-repair loop:
| Attempt | Action |
|---------|--------|
| 1-3 | Analyze test output → diagnose → fix source code → re-run tests |
| After 3 | Mark issue as failed, log to `<sessionDir>/errors.json`, continue |
### 2e. Git Commit
Stage and commit changes for this issue:
```bash
git add -A
git commit -m "feat(<issueId>): <solution-title>"
```
| Outcome | Action |
|---------|--------|
| Commit succeeds | Record commit hash |
| Commit fails (nothing to commit) | Warn, continue |
| Pre-commit hook fails | Attempt fix once, then warn and continue |
### 2f. Update Status + Eagerly Delegate Next
Update issue status:
```bash
ccw issue update <issueId> --status completed
```
Add `<issueId>` to `<executedSet>`.
**Eager delegation**: Immediately check planner state and delegate next issue before returning to poll:
| Planner State | Action |
|---------------|--------|
| Idle AND more issues in queue | `send_input` next issue → advance `<queueIndex>` |
| Busy (still planning) | Skip — planner already working |
| All issues delegated | Skip — nothing to delegate |
This ensures the planner is never idle between beats. Return to 2b for the next beat.
---
## Phase 3: Report
### 3a. Cleanup
Close the planner agent. Ignore cleanup failures.
```javascript
if (plannerAgent) {
  try { close_agent({ id: plannerAgent }) } catch { /* already closed */ }
}
```
### 3b. Generate Report
Update `<sessionDir>/team-session.json`:
| Field | Value |
|-------|-------|
| `status` | `"completed"` |
| `completed_at` | ISO timestamp |
| `results.total` | Total issues in queue |
| `results.completed` | Count in `<executedSet>` |
| `results.failed` | Count of failed issues |
Output summary:
```
## Pipeline Complete
**Total issues**: <total>
**Completed**: <completed>
**Failed**: <failed>
<per-issue status list>
Session: <sessionDir>
```
---
## File-Based Coordination Protocol
| File | Writer | Reader | Purpose |
|------|--------|--------|---------|
| `<artifactsDir>/<issueId>.json` | planner | main flow | Solution data |
| `<artifactsDir>/<issueId>.ready` | planner | main flow | Atomicity signal |
### Ready Marker Format
```json
{
"issue_id": "<issueId>",
"task_count": "<task_count>",
"file_count": "<file_count>"
}
```
---
## Session Directory
```
.workflow/.team/PEX-<slug>-<date>/
├── team-session.json
├── artifacts/
│ └── solutions/
│ ├── <issueId>.json
│ └── <issueId>.ready
└── errors.json
```
---
## Lifecycle Management
### Timeout Protocol
| Phase | Default Timeout | On Timeout |
|-------|-----------------|------------|
| Phase 1 (Load) | 60s | Report error, stop |
| Phase 2 (Planner wait) | 600s per issue | Skip issue, write `.error` marker |
| Phase 2 (Execution) | No timeout | Self-repair up to 3 retries |
| Phase 2 (Loop idle) | 5 min total idle | Break loop → Phase 3 |
### Cleanup Protocol
At workflow end, close the planner agent:
```javascript
if (plannerAgent) {
try { close_agent({ id: plannerAgent }) } catch { /* already closed */ }
}
```
---
## Error Handling
| Phase | Scenario | Resolution |
|-------|----------|------------|
| 1 | Invalid input (no issues, bad JSONL) | Report error, stop |
| 1 | Roadmap session not found | Report error, stop |
| 1 | Issue fetch fails | Retry once, then skip issue |
| 2 | Planner spawn fails | Retry once, then skip issue |
| 2 | issue-plan-agent timeout (>10 min) | Skip issue, write `.error` marker, continue |
| 2 | Solution file corrupt / unreadable | Skip, log to `errors.json`, continue |
| 2 | Implementation error | Self-repair up to 3 retries per task |
| 2 | Tests failing after 3 retries | Mark issue failed, log, continue |
| 2 | Git commit fails | Warn, mark completed anyway |
| 2 | Polling idle >5 minutes | Break loop → Phase 3 |
| 3 | Agent cleanup fails | Ignore |
---
## User Commands
| Command | Action |
|---------|--------|
| `check` / `status` | Show progress: planned / executing / completed / failed counts |
| `resume` / `continue` | Re-enter loop from Phase 2 |


@@ -1,210 +0,0 @@
---
name: planex-executor
description: |
PlanEx executor agent. Loads solution from artifact file → implements via Codex CLI
(ccw cli --tool codex --mode write) → verifies tests → commits → reports.
Deploy to: ~/.codex/agents/planex-executor.md
color: green
---
# PlanEx Executor
Single-issue implementation agent. Loads solution from JSON artifact, executes
implementation via Codex CLI, verifies with tests, commits, and outputs a structured
completion report.
## Identity
- **Tag**: `[executor]`
- **Backend**: Codex CLI only (`ccw cli --tool codex --mode write`)
- **Granularity**: One issue per agent instance
## Core Responsibilities
| Action | Allowed |
|--------|---------|
| Read solution artifact from disk | Yes |
| Implement via Codex CLI | Yes |
| Run tests for verification | Yes |
| git commit completed work | Yes |
| Create or modify issues | No |
| Spawn subagents | No |
| Interact with user (AskUserQuestion) | No |
---
## Execution Flow
### Step 1: Load Context
After reading role definition:
- Extract issue ID, solution file path, session dir from task message
### Step 2: Load Solution
Read solution artifact:
```javascript
const solutionData = JSON.parse(Read(solutionFile))
const solution = solutionData.solution
```
If file not found or invalid:
- Log error: `[executor] ERROR: Solution file not found: ${solutionFile}`
- Output: `EXEC_FAILED:${issueId}:solution_file_missing`
- Stop execution
Verify solution has required fields:
- `solution.bound.title` or `solution.title`
- `solution.bound.tasks` or `solution.tasks`
### Step 3: Update Issue Status
```bash
ccw issue update ${issueId} --status executing
```
### Step 4: Codex CLI Execution
Build execution prompt and invoke Codex:
```bash
ccw cli -p "$(cat <<'PROMPT_EOF'
## Issue
ID: ${issueId}
Title: ${solution.bound.title}
## Solution Plan
${JSON.stringify(solution.bound, null, 2)}
## Implementation Requirements
1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Run tests after each significant change
4. Ensure all existing tests still pass
5. Do NOT over-engineer - implement exactly what the solution specifies
## Quality Checklist
- [ ] All solution tasks implemented
- [ ] No TypeScript/linting errors (run: npx tsc --noEmit)
- [ ] Existing tests pass
- [ ] New tests added where specified in solution
- [ ] No security vulnerabilities introduced
PROMPT_EOF
)" --tool codex --mode write --id planex-${issueId}
```
**STOP after spawn** — Codex CLI executes in background. Do NOT poll or wait inside this agent. The CLI process handles implementation autonomously.
Wait for CLI completion signal before proceeding to Step 5.
### Step 5: Verify Tests
Detect and run project test command:
```javascript
// Detection priority:
// 1. package.json scripts.test
// 2. package.json scripts.test:unit
// 3. pytest.ini / setup.cfg (Python)
// 4. Makefile test target
const testCmd = detectTestCommand()
if (testCmd) {
const testResult = Bash(`${testCmd} 2>&1 || echo TEST_FAILED`)
if (testResult.includes('TEST_FAILED') || testResult.includes('FAIL')) {
const resumeCmd = `ccw cli -p "Fix failing tests" --resume planex-${issueId} --tool codex --mode write`
Write({
file_path: `${sessionDir}/errors.json`,
content: JSON.stringify({
issue_id: issueId,
type: 'test_failure',
test_output: testResult.slice(0, 2000),
resume_cmd: resumeCmd,
timestamp: new Date().toISOString()
}, null, 2)
})
Output: `EXEC_FAILED:${issueId}:tests_failing`
Stop.
}
}
```
### Step 6: Commit
```bash
git add -A
git commit -m "feat(${issueId}): ${solution.bound.title}"
```
If commit fails (nothing to commit, pre-commit hook error):
- Log warning: `[executor] WARN: Commit failed for ${issueId}, continuing`
- Still proceed to Step 7
### Step 7: Update Issue & Report
```bash
ccw issue update ${issueId} --status completed
```
Output completion report:
```
## [executor] Implementation Complete
**Issue**: ${issueId}
**Title**: ${solution.bound.title}
**Backend**: codex
**Tests**: ${testCmd ? 'passing' : 'skipped (no test command found)'}
**Commit**: ${commitHash}
**Status**: resolved
EXEC_DONE:${issueId}
```
---
## Resume Protocol
If Codex CLI execution fails or times out:
```bash
ccw cli -p "Continue implementation from where stopped" \
--resume planex-${issueId} \
--tool codex --mode write \
--id planex-${issueId}-retry
```
Resume command is always logged to `${sessionDir}/errors.json` on any failure.
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Solution file missing | Output `EXEC_FAILED:<id>:solution_file_missing`, stop |
| Solution JSON malformed | Output `EXEC_FAILED:<id>:solution_invalid`, stop |
| Issue status update fails | Log warning, continue |
| Codex CLI failure | Log resume command to errors.json, output `EXEC_FAILED:<id>:codex_failed` |
| Tests failing | Log test output + resume command, output `EXEC_FAILED:<id>:tests_failing` |
| Commit fails | Log warning, still output `EXEC_DONE:<id>` (implementation complete) |
| No test command found | Skip test step, proceed to commit |
## Key Reminders
**ALWAYS**:
- Output `EXEC_DONE:<issueId>` on its own line when implementation succeeds
- Output `EXEC_FAILED:<issueId>:<reason>` on its own line when implementation fails
- Log resume command to errors.json on any failure
- Use `[executor]` prefix in all status messages
**NEVER**:
- Use any execution backend other than Codex CLI
- Create, modify, or read issues beyond the assigned issueId
- Spawn subagents
- Ask the user for clarification (fail fast with structured error)


@@ -373,7 +373,7 @@ try {
## Related Skills
**Prerequisite Skills**:
- `workflow:plan` or `workflow:execute` - Complete implementation (Session Mode)
- `workflow-plan` or `workflow-execute` - Complete implementation (Session Mode)
- None for Prompt Mode
**Phase 1 Agents** (used by phases/01-test-fix-gen.md via spawn_agent):


@@ -93,7 +93,7 @@ CCW uses two types of invocations:
| Type | Format | Examples |
|------|--------|----------|
| **Skills** | Trigger phrase (no slash) | `workflow:lite-plan`, `brainstorm`, `workflow:plan` |
| **Skills** | Trigger phrase (no slash) | `workflow-lite-plan`, `brainstorm`, `workflow-plan` |
| **Commands** | Slash command | `/ccw`, `/workflow/session:start`, `/issue/new` |
### Choose Your Workflow Skill
@@ -101,11 +101,11 @@ CCW uses two types of invocations:
<div align="center">
<table>
<tr><th>Skill Trigger</th><th>Use Case</th></tr>
<tr><td><code>workflow:lite-plan</code></td><td>Lightweight planning, single-module features</td></tr>
<tr><td><code>workflow:multi-cli-plan</code></td><td>Multi-CLI collaborative analysis</td></tr>
<tr><td><code>workflow:plan</code></td><td>Full planning with session persistence</td></tr>
<tr><td><code>workflow:tdd-plan</code></td><td>Test-driven development</td></tr>
<tr><td><code>workflow:test-fix-gen</code></td><td>Test generation and fix cycles</td></tr>
<tr><td><code>workflow-lite-plan</code></td><td>Lightweight planning, single-module features</td></tr>
<tr><td><code>workflow-multi-cli-plan</code></td><td>Multi-CLI collaborative analysis</td></tr>
<tr><td><code>workflow-plan</code></td><td>Full planning with session persistence</td></tr>
<tr><td><code>workflow-tdd-plan</code></td><td>Test-driven development</td></tr>
<tr><td><code>workflow-test-fix</code></td><td>Test generation and fix cycles</td></tr>
<tr><td><code>brainstorm</code></td><td>Multi-role brainstorming analysis</td></tr>
</table>
</div>
@@ -114,9 +114,9 @@ CCW uses two types of invocations:
```bash
# Skill triggers (no slash - just describe what you want)
workflow:lite-plan "Add JWT authentication"
workflow:plan "Implement payment gateway integration"
workflow:execute
workflow-lite-plan "Add JWT authentication"
workflow-plan "Implement payment gateway integration"
workflow-execute
# Brainstorming
brainstorm "Design real-time collaboration system"
@@ -278,9 +278,9 @@ ccw upgrade -a # Upgrade all installations
```
┌─────────────────────────────────────────────────────────────────┐
│ Workflow Skills │
│ 📝 workflow:lite-plan / workflow:multi-cli-plan (lightweight) │
│ 📊 workflow:plan / workflow:tdd-plan (session-based) │
│ 🧪 workflow:test-fix-gen / workflow:test-cycle-execute
│ 📝 workflow-lite-plan / workflow-multi-cli-plan (lightweight) │
│ 📊 workflow-plan / workflow-tdd-plan (session-based) │
│ 🧪 workflow-test-fix / workflow-test-fix
│ 🧠 brainstorm (multi-role analysis) │
└─────────────────────────────────────────────────────────────────┘
@@ -324,7 +324,7 @@ Claude-Code-Workflow/
│ └── skills/ # 37 modular skills
│ ├── workflow-lite-plan/
│ ├── workflow-plan/
│ ├── workflow-tdd/
│ ├── workflow-tdd-plan/
│ ├── workflow-test-fix/
│ ├── brainstorm/
│ ├── team-*/ # Team coordination skills


@@ -93,7 +93,7 @@ CCW 使用两种调用方式:
| 类型 | 格式 | 示例 |
|------|------|------|
| **Skills** | 触发短语(无斜杠) | `workflow:lite-plan`, `brainstorm`, `workflow:plan` |
| **Skills** | 触发短语(无斜杠) | `workflow-lite-plan`, `brainstorm`, `workflow-plan` |
| **Commands** | 斜杠命令 | `/ccw`, `/workflow/session:start`, `/issue/new` |
### 选择工作流 Skill
@@ -101,11 +101,11 @@ CCW 使用两种调用方式:
<div align="center">
<table>
<tr><th>Skill 触发词</th><th>使用场景</th></tr>
<tr><td><code>workflow:lite-plan</code></td><td>轻量规划、单模块功能</td></tr>
<tr><td><code>workflow:multi-cli-plan</code></td><td>多 CLI 协同分析</td></tr>
<tr><td><code>workflow:plan</code></td><td>完整规划与会话持久化</td></tr>
<tr><td><code>workflow:tdd-plan</code></td><td>测试驱动开发</td></tr>
<tr><td><code>workflow:test-fix-gen</code></td><td>测试生成与修复循环</td></tr>
<tr><td><code>workflow-lite-plan</code></td><td>轻量规划、单模块功能</td></tr>
<tr><td><code>workflow-multi-cli-plan</code></td><td>多 CLI 协同分析</td></tr>
<tr><td><code>workflow-plan</code></td><td>完整规划与会话持久化</td></tr>
<tr><td><code>workflow-tdd-plan</code></td><td>测试驱动开发</td></tr>
<tr><td><code>workflow-test-fix</code></td><td>测试生成与修复循环</td></tr>
<tr><td><code>brainstorm</code></td><td>多角色头脑风暴分析</td></tr>
</table>
</div>
@@ -114,9 +114,9 @@ CCW 使用两种调用方式:
```bash
# Skill 触发(无斜杠 - 直接描述你想做什么)
workflow:lite-plan "添加 JWT 认证"
workflow:plan "实现支付网关集成"
workflow:execute
workflow-lite-plan "添加 JWT 认证"
workflow-plan "实现支付网关集成"
workflow-execute
# 头脑风暴
brainstorm "设计实时协作系统"
@@ -278,9 +278,9 @@ ccw upgrade -a # 升级所有安装
```
┌─────────────────────────────────────────────────────────────────┐
│ 工作流 Skills │
│ 📝 workflow:lite-plan / workflow:multi-cli-plan (轻量级) │
│ 📊 workflow:plan / workflow:tdd-plan (会话式) │
│ 🧪 workflow:test-fix-gen / workflow:test-cycle-execute
│ 📝 workflow-lite-plan / workflow-multi-cli-plan (轻量级) │
│ 📊 workflow-plan / workflow-tdd-plan (会话式) │
│ 🧪 workflow-test-fix / workflow-test-fix
│ 🧠 brainstorm (多角色分析) │
└─────────────────────────────────────────────────────────────────┘
@@ -324,7 +324,7 @@ Claude-Code-Workflow/
│ └── skills/ # 37 个模块化技能
│ ├── workflow-lite-plan/
│ ├── workflow-plan/
│ ├── workflow-tdd/
│ ├── workflow-tdd-plan/
│ ├── workflow-test-fix/
│ ├── brainstorm/
│ ├── team-*/ # 团队协调技能


@@ -21,7 +21,7 @@ CCW uses two types of invocations:
| Type | Format | Examples |
|------|--------|----------|
| **Skills** | Trigger phrase (no slash) | `workflow:lite-plan`, `brainstorm`, `workflow:plan` |
| **Skills** | Trigger phrase (no slash) | `workflow-lite-plan`, `brainstorm`, `workflow-plan` |
| **Commands** | Slash command | `/ccw`, `/workflow/session:start`, `/issue/new` |
---
@@ -32,7 +32,7 @@ CCW uses two types of invocations:
| Skill Trigger | Purpose | Phases |
|---------------|---------|--------|
| `workflow:lite-plan` | Lightweight planning with exploration | 5 phases |
| `workflow-lite-plan` | Lightweight planning with exploration | 5 phases |
| `workflow:lite-execute` | Execute lite-plan output | Execution |
**5-Phase Interactive Workflow**:
@@ -48,7 +48,7 @@ Phase 5: Execution & Tracking
| Skill Trigger | Purpose |
|---------------|---------|
| `workflow:multi-cli-plan` | Multi-CLI collaborative analysis |
| `workflow-multi-cli-plan` | Multi-CLI collaborative analysis |
**5-Phase Workflow**:
```
@@ -63,16 +63,16 @@ Phase 5: Plan Generation
| Skill Trigger | Purpose | Phases |
|---------------|---------|--------|
| `workflow:plan` | Full planning with session | 5 phases |
| `workflow:plan-verify` | Plan verification | Verification |
| `workflow-plan` | Full planning with session | 5 phases |
| `workflow-plan-verify` | Plan verification | Verification |
| `workflow:replan` | Interactive replanning | Replanning |
### TDD Workflow
| Skill Trigger | Purpose |
|---------------|---------|
| `workflow:tdd-plan` | TDD planning |
| `workflow:tdd-verify` | TDD verification |
| `workflow-tdd-plan` | TDD planning |
| `workflow-tdd-verify` | TDD verification |
**6-Phase TDD Planning + Red-Green-Refactor**:
```
@@ -88,8 +88,8 @@ Phase 6: Next cycle
| Skill Trigger | Purpose |
|---------------|---------|
| `workflow:test-fix-gen` | Test generation and fix |
| `workflow:test-cycle-execute` | Execute test cycles |
| `workflow-test-fix` | Test generation and fix |
| `workflow-test-fix` | Execute test cycles |
**Progressive Test Layers (L0-L3)**:
@@ -232,12 +232,12 @@ Phase 6: Next cycle
| Skill | Trigger |
|-------|---------|
| workflow-lite-plan | `workflow:lite-plan`, `workflow:lite-execute` |
| workflow-multi-cli-plan | `workflow:multi-cli-plan` |
| workflow-plan | `workflow:plan`, `workflow:plan-verify`, `workflow:replan` |
| workflow-execute | `workflow:execute` |
| workflow-tdd | `workflow:tdd-plan`, `workflow:tdd-verify` |
| workflow-test-fix | `workflow:test-fix-gen`, `workflow:test-cycle-execute` |
| workflow-lite-plan | `workflow-lite-plan`, `workflow:lite-execute` |
| workflow-multi-cli-plan | `workflow-multi-cli-plan` |
| workflow-plan | `workflow-plan`, `workflow-plan-verify`, `workflow:replan` |
| workflow-execute | `workflow-execute` |
| workflow-tdd-plan | `workflow-tdd-plan`, `workflow-tdd-verify` |
| workflow-test-fix | `workflow-test-fix`, `workflow-test-fix` |
### Specialized Skills
@@ -293,25 +293,25 @@ New System ─┼────────────┼─────
Start
├─ Is it a quick fix or config change?
│ └─> Yes: workflow:lite-plan
│ └─> Yes: workflow-lite-plan
├─ Is it a single module feature?
│ └─> Yes: workflow:lite-plan
│ └─> Yes: workflow-lite-plan
├─ Does it need multi-CLI analysis?
│ └─> Yes: workflow:multi-cli-plan
│ └─> Yes: workflow-multi-cli-plan
├─ Is it multi-module with session?
│ └─> Yes: workflow:plan
│ └─> Yes: workflow-plan
├─ Is it TDD development?
│ └─> Yes: workflow:tdd-plan
│ └─> Yes: workflow-tdd-plan
├─ Is it test generation?
│ └─> Yes: workflow:test-fix-gen
│ └─> Yes: workflow-test-fix
└─ Is it architecture/new system?
└─> Yes: brainstorm + workflow:plan
└─> Yes: brainstorm + workflow-plan
```
---
@@ -348,10 +348,10 @@ Start
| Skill | When to Use |
|-------|-------------|
| `workflow:lite-plan` | Quick fixes, single features |
| `workflow:plan` | Multi-module development |
| `workflow-lite-plan` | Quick fixes, single features |
| `workflow-plan` | Multi-module development |
| `brainstorm` | Architecture, new features |
| `workflow:execute` | Execute planned work |
| `workflow-execute` | Execute planned work |
### Most Common Commands


@@ -22,7 +22,7 @@ CCW 使用两种调用方式:
| 类型 | 格式 | 示例 |
|------|------|------|
| **Skills** | 触发短语(无斜杠) | `workflow:lite-plan`, `brainstorm`, `workflow:plan` |
| **Skills** | 触发短语(无斜杠) | `workflow-lite-plan`, `brainstorm`, `workflow-plan` |
| **Commands** | 斜杠命令 | `/ccw`, `/workflow/session:start`, `/issue/new` |
---
@@ -33,7 +33,7 @@ CCW 使用两种调用方式:
| Skill 触发词 | 用途 | 阶段 |
|--------------|------|------|
| `workflow:lite-plan` | 轻量规划与探索 | 5 阶段 |
| `workflow-lite-plan` | 轻量规划与探索 | 5 阶段 |
| `workflow:lite-execute` | 执行 lite-plan 输出 | 执行 |
**5 阶段交互式工作流**
@@ -49,7 +49,7 @@ CCW 使用两种调用方式:
| Skill 触发词 | 用途 |
|--------------|------|
| `workflow:multi-cli-plan` | 多 CLI 协同分析 |
| `workflow-multi-cli-plan` | 多 CLI 协同分析 |
**5 阶段工作流**
```
@@ -64,16 +64,16 @@ CCW 使用两种调用方式:
| Skill 触发词 | 用途 | 阶段 |
|--------------|------|------|
| `workflow:plan` | 完整规划与会话 | 5 阶段 |
| `workflow:plan-verify` | 规划验证 | 验证 |
| `workflow-plan` | 完整规划与会话 | 5 阶段 |
| `workflow-plan-verify` | 规划验证 | 验证 |
| `workflow:replan` | 交互式重新规划 | 重规划 |
### TDD 工作流
| Skill 触发词 | 用途 |
|--------------|------|
| `workflow:tdd-plan` | TDD 规划 |
| `workflow:tdd-verify` | TDD 验证 |
| `workflow-tdd-plan` | TDD 规划 |
| `workflow-tdd-verify` | TDD 验证 |
**6 阶段 TDD 规划 + Red-Green-Refactor**
```
@@ -89,8 +89,8 @@ CCW 使用两种调用方式:
| Skill 触发词 | 用途 |
|--------------|------|
| `workflow:test-fix-gen` | 测试生成与修复 |
| `workflow:test-cycle-execute` | 执行测试循环 |
| `workflow-test-fix` | 测试生成与修复 |
| `workflow-test-fix` | 执行测试循环 |
**渐进式测试层级 (L0-L3)**
@@ -233,12 +233,12 @@ CCW 使用两种调用方式:
| Skill | 触发词 |
|-------|--------|
| workflow-lite-plan | `workflow:lite-plan`, `workflow:lite-execute` |
| workflow-multi-cli-plan | `workflow:multi-cli-plan` |
| workflow-plan | `workflow:plan`, `workflow:plan-verify`, `workflow:replan` |
| workflow-execute | `workflow:execute` |
| workflow-tdd | `workflow:tdd-plan`, `workflow:tdd-verify` |
| workflow-test-fix | `workflow:test-fix-gen`, `workflow:test-cycle-execute` |
| workflow-lite-plan | `workflow-lite-plan`, `workflow:lite-execute` |
| workflow-multi-cli-plan | `workflow-multi-cli-plan` |
| workflow-plan | `workflow-plan`, `workflow-plan-verify`, `workflow:replan` |
| workflow-execute | `workflow-execute` |
| workflow-tdd-plan | `workflow-tdd-plan`, `workflow-tdd-verify` |
| workflow-test-fix | `workflow-test-fix` |
### 专项 Skills
@@ -294,25 +294,25 @@ CCW 使用两种调用方式:
开始
├─ 是快速修复或配置变更?
│ └─> 是workflow:lite-plan
│ └─> 是workflow-lite-plan
├─ 是单模块功能?
│ └─> 是workflow:lite-plan
│ └─> 是workflow-lite-plan
├─ 需要多 CLI 分析?
│ └─> 是workflow:multi-cli-plan
│ └─> 是workflow-multi-cli-plan
├─ 是多模块且需要会话?
│ └─> 是workflow:plan
│ └─> 是workflow-plan
├─ 是 TDD 开发?
│ └─> 是workflow:tdd-plan
│ └─> 是workflow-tdd-plan
├─ 是测试生成?
│ └─> 是workflow:test-fix-gen
│ └─> 是workflow-test-fix
└─ 是架构/新系统?
└─> 是brainstorm + workflow:plan
└─> 是brainstorm + workflow-plan
```
---
@@ -349,10 +349,10 @@ CCW 使用两种调用方式:
| Skill | 何时使用 |
|-------|----------|
| `workflow:lite-plan` | 快速修复、单功能 |
| `workflow:plan` | 多模块开发 |
| `workflow-lite-plan` | 快速修复、单功能 |
| `workflow-plan` | 多模块开发 |
| `brainstorm` | 架构、新功能 |
| `workflow:execute` | 执行已规划的工作 |
| `workflow-execute` | 执行已规划的工作 |
### 最常用 Commands

View File

@@ -2,7 +2,7 @@
<html lang="en" data-theme="light">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<link rel="icon" type="image/svg+xml" href="/favicon.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<!-- Preconnect to Google Fonts for faster font loading -->
<link rel="preconnect" href="https://fonts.googleapis.com" />

View File

@@ -0,0 +1,8 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none">
<!-- Three horizontal lines - line style -->
<line x1="3" y1="6" x2="18" y2="6" stroke="currentColor" stroke-width="2" stroke-linecap="round"/>
<line x1="3" y1="12" x2="15" y2="12" stroke="currentColor" stroke-width="2" stroke-linecap="round"/>
<line x1="3" y1="18" x2="12" y2="18" stroke="currentColor" stroke-width="2" stroke-linecap="round"/>
<!-- Status dot - follows theme color -->
<circle cx="19" cy="17" r="3" fill="currentColor"/>
</svg>


View File

@@ -0,0 +1 @@
.ace-tool/

View File

@@ -102,7 +102,7 @@ function HomeEmptyState({ className }: HomeEmptyStateProps) {
</div>
<div className="flex flex-col gap-2 w-full">
<code className="px-3 py-2 bg-muted rounded text-xs font-mono text-center">
/workflow:plan
/workflow-plan
</code>
<p className="text-xs text-muted-foreground text-center">
{formatMessage({ id: 'home.emptyState.noSessions.hint' })}

View File

@@ -63,6 +63,8 @@ export interface HookQuickTemplatesProps {
}
// ========== Hook Templates ==========
// NOTE: Hook input is received via stdin (not environment variable)
// Use: const fs=require('fs');const p=JSON.parse(fs.readFileSync(0,'utf8')||'{}');
/**
* Predefined hook templates for quick installation
@@ -90,7 +92,7 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
command: 'node',
args: [
'-e',
'const p=JSON.parse(process.env.HOOK_INPUT||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(/workflow-session\\.json$|session-metadata\\.json$/.test(file)){const fs=require("fs");try{const content=fs.readFileSync(file,"utf8");const data=JSON.parse(content);const cp=require("child_process");const payload=JSON.stringify({type:"SESSION_STATE_CHANGED",file:file,sessionId:data.session_id||"",status:data.status||"unknown",project:process.env.CLAUDE_PROJECT_DIR||process.cwd(),timestamp:Date.now()});cp.spawnSync("curl",["-s","-X","POST","-H","Content-Type: application/json","-d",payload,"http://localhost:3456/api/hook"],{stdio:"inherit",shell:true})}catch(e){}}'
'const fs=require("fs");const p=JSON.parse(fs.readFileSync(0,"utf8")||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(/workflow-session\\.json$|session-metadata\\.json$/.test(file)){try{const content=fs.readFileSync(file,"utf8");const data=JSON.parse(content);const cp=require("child_process");const payload=JSON.stringify({type:"SESSION_STATE_CHANGED",file:file,sessionId:data.session_id||"",status:data.status||"unknown",project:process.env.CLAUDE_PROJECT_DIR||process.cwd(),timestamp:Date.now()});cp.spawnSync("curl",["-s","-X","POST","-H","Content-Type: application/json","-d",payload,"http://localhost:3456/api/hook"],{stdio:"inherit",shell:true})}catch(e){}}'
]
},
// --- Notification ---
@@ -117,7 +119,7 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
command: 'node',
args: [
'-e',
'const p=JSON.parse(process.env.HOOK_INPUT||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(file){const cp=require("child_process");cp.spawnSync("npx",["prettier","--write",file],{stdio:"inherit",shell:true})}'
'const fs=require("fs");const p=JSON.parse(fs.readFileSync(0,"utf8")||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(file){const cp=require("child_process");cp.spawnSync("npx",["prettier","--write",file],{stdio:"inherit",shell:true})}'
]
},
{
@@ -130,7 +132,7 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
command: 'node',
args: [
'-e',
'const p=JSON.parse(process.env.HOOK_INPUT||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(file){const cp=require("child_process");cp.spawnSync("npx",["eslint","--fix",file],{stdio:"inherit",shell:true})}'
'const fs=require("fs");const p=JSON.parse(fs.readFileSync(0,"utf8")||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(file){const cp=require("child_process");cp.spawnSync("npx",["eslint","--fix",file],{stdio:"inherit",shell:true})}'
]
},
{
@@ -143,7 +145,7 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
command: 'node',
args: [
'-e',
'const p=JSON.parse(process.env.HOOK_INPUT||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(/\\.env|secret|credential|\\.key$/.test(file)){process.stderr.write("Blocked: modifying sensitive file "+file);process.exit(2)}'
'const fs=require("fs");const p=JSON.parse(fs.readFileSync(0,"utf8")||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(/\\.env|secret|credential|\\.key$/.test(file)){process.stderr.write("Blocked: modifying sensitive file "+file);process.exit(2)}'
]
},
{
@@ -169,7 +171,7 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
command: 'node',
args: [
'-e',
'const p=JSON.parse(process.env.HOOK_INPUT||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(file){const cp=require("child_process");const payload=JSON.stringify({type:"FILE_MODIFIED",file:file,project:process.env.CLAUDE_PROJECT_DIR||process.cwd(),timestamp:Date.now()});cp.spawnSync("curl",["-s","-X","POST","-H","Content-Type: application/json","-d",payload,"http://localhost:3456/api/hook"],{stdio:"inherit",shell:true})}'
'const fs=require("fs");const p=JSON.parse(fs.readFileSync(0,"utf8")||"{}");const file=(p.tool_input&&p.tool_input.file_path)||"";if(file){const cp=require("child_process");const payload=JSON.stringify({type:"FILE_MODIFIED",file:file,project:process.env.CLAUDE_PROJECT_DIR||process.cwd(),timestamp:Date.now()});cp.spawnSync("curl",["-s","-X","POST","-H","Content-Type: application/json","-d",payload,"http://localhost:3456/api/hook"],{stdio:"inherit",shell:true})}'
]
},
{
@@ -181,7 +183,7 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
command: 'node',
args: [
'-e',
'const p=JSON.parse(process.env.HOOK_INPUT||"{}");const cp=require("child_process");const payload=JSON.stringify({type:"SESSION_SUMMARY",transcript:p.transcript_path||"",project:process.env.CLAUDE_PROJECT_DIR||process.cwd(),timestamp:Date.now()});cp.spawnSync("curl",["-s","-X","POST","-H","Content-Type: application/json","-d",payload,"http://localhost:3456/api/hook"],{stdio:"inherit",shell:true})'
'const fs=require("fs");const p=JSON.parse(fs.readFileSync(0,"utf8")||"{}");const cp=require("child_process");const payload=JSON.stringify({type:"SESSION_SUMMARY",transcript:p.transcript_path||"",project:process.env.CLAUDE_PROJECT_DIR||process.cwd(),timestamp:Date.now()});cp.spawnSync("curl",["-s","-X","POST","-H","Content-Type: application/json","-d",payload,"http://localhost:3456/api/hook"],{stdio:"inherit",shell:true})'
]
},
{
@@ -221,7 +223,7 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
description: 'Sync memory V2 status to dashboard on changes',
category: 'notification',
trigger: 'PostToolUse',
matcher: 'core_memory',
matcher: 'mcp__ccw-tools__core_memory',
command: 'node',
args: [
'-e',

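The templates above all switch from `process.env.HOOK_INPUT` to reading the payload from stdin (`fs.readFileSync(0, 'utf8')`). A minimal sketch of that parsing step, pulled out as a plain function — the `parseHookInput` name and its return shape are illustrative, not part of the codebase:

```javascript
// Mirrors the stdin-parsing prefix shared by the rewritten templates:
//   const p = JSON.parse(fs.readFileSync(0, 'utf8') || '{}')
// `parseHookInput` is a hypothetical helper name for this sketch.
function parseHookInput(raw) {
  // Empty stdin falls back to an empty object instead of throwing.
  const payload = JSON.parse(raw || '{}');
  // Same file extraction as the templates:
  //   (p.tool_input && p.tool_input.file_path) || ""
  const file = (payload.tool_input && payload.tool_input.file_path) || '';
  return { payload, file };
}

// In a real hook this would be fed require('fs').readFileSync(0, 'utf8').
console.log(parseHookInput('{"tool_input":{"file_path":"src/app.ts"}}').file);
console.log(parseHookInput('').file === '');
```

The `|| '{}'` fallback is what lets the same one-liner run harmlessly when a hook event carries no payload.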
View File

@@ -84,6 +84,9 @@ interface HookTemplate {
timeout?: number;
}
// NOTE: Hook input is received via stdin (not environment variable)
// Node.js: const fs=require('fs');const p=JSON.parse(fs.readFileSync(0,'utf8')||'{}');
// Bash: INPUT=$(cat)
const HOOK_TEMPLATES: Record<string, HookTemplate> = {
'memory-update-queue': {
event: 'Stop',
@@ -95,13 +98,13 @@ const HOOK_TEMPLATES: Record<string, HookTemplate> = {
event: 'UserPromptSubmit',
matcher: '',
command: 'node',
args: ['-e', "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({prompt:p.user_prompt||''})],{stdio:'inherit'})"],
args: ['-e', "const fs=require('fs');const p=JSON.parse(fs.readFileSync(0,'utf8')||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({prompt:p.prompt||''})],{stdio:'inherit'})"],
},
'skill-context-auto': {
event: 'UserPromptSubmit',
matcher: '',
command: 'node',
args: ['-e', "const p=JSON.parse(process.env.HOOK_INPUT||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({mode:'auto',prompt:p.user_prompt||''})],{stdio:'inherit'})"],
args: ['-e', "const fs=require('fs');const p=JSON.parse(fs.readFileSync(0,'utf8')||'{}');require('child_process').spawnSync('ccw',['tool','exec','skill_context_loader',JSON.stringify({mode:'auto',prompt:p.prompt||''})],{stdio:'inherit'})"],
},
'danger-bash-confirm': {
event: 'PreToolUse',
@@ -114,7 +117,7 @@ const HOOK_TEMPLATES: Record<string, HookTemplate> = {
event: 'PreToolUse',
matcher: 'Write|Edit',
command: 'bash',
args: ['-c', 'INPUT=$(cat); FILE=$(echo "$INPUT" | jq -r ".tool_input.file_path // .tool_input.path // empty"); PROTECTED=".env|.git/|package-lock.json|yarn.lock|.credentials|secrets|id_rsa|.pem$|.key$"; if echo "$FILE" | grep -qiE "$PROTECTED"; then echo "{\\"hookSpecificOutput\\":{\\"hookEventName\\":\\"PreToolUse\\",\\"permissionDecision\\":\\"deny\\",\\"permissionDecisionReason\\":\\"Protected file cannot be modified: $FILE\\"}}" && exit 0; fi; exit 0'],
args: ['-c', 'INPUT=$(cat); FILE=$(echo "$INPUT" | jq -r ".tool_input.file_path // .tool_input.path // empty"); PROTECTED=".env|.git/|package-lock.json|yarn.lock|.credentials|secrets|id_rsa|.pem$|.key$"; if echo "$FILE" | grep -qiE "$PROTECTED"; then echo "{\\"hookSpecificOutput\\":{\\"hookEventName\\":\\"PreToolUse\\",\\"permissionDecision\\":\\"deny\\",\\"permissionDecisionReason\\":\\"Protected file cannot be modified: $FILE\\"}}" >&2 && exit 2; fi; exit 0'],
timeout: 5000,
},
'danger-git-destructive': {
@@ -135,7 +138,7 @@ const HOOK_TEMPLATES: Record<string, HookTemplate> = {
event: 'PreToolUse',
matcher: 'Write|Edit|Bash',
command: 'bash',
args: ['-c', 'INPUT=$(cat); TOOL=$(echo "$INPUT" | jq -r ".tool_name // empty"); if [ "$TOOL" = "Bash" ]; then CMD=$(echo "$INPUT" | jq -r ".tool_input.command // empty"); SYS_PATHS="/etc/|/usr/|/bin/|/sbin/|/boot/|/sys/|/proc/|C:\\\\Windows|C:\\\\Program Files"; if echo "$CMD" | grep -qiE "$SYS_PATHS"; then echo "{\\"hookSpecificOutput\\":{\\"hookEventName\\":\\"PreToolUse\\",\\"permissionDecision\\":\\"ask\\",\\"permissionDecisionReason\\":\\"System path operation requires confirmation\\"}}" && exit 0; fi; else FILE=$(echo "$INPUT" | jq -r ".tool_input.file_path // .tool_input.path // empty"); SYS_PATHS="/etc/|/usr/|/bin/|/sbin/|C:\\\\Windows|C:\\\\Program Files"; if echo "$FILE" | grep -qiE "$SYS_PATHS"; then echo "{\\"hookSpecificOutput\\":{\\"hookEventName\\":\\"PreToolUse\\",\\"permissionDecision\\":\\"deny\\",\\"permissionDecisionReason\\":\\"Cannot modify system file: $FILE\\"}}" && exit 0; fi; fi; exit 0'],
args: ['-c', 'INPUT=$(cat); TOOL=$(echo "$INPUT" | jq -r ".tool_name // empty"); if [ "$TOOL" = "Bash" ]; then CMD=$(echo "$INPUT" | jq -r ".tool_input.command // empty"); SYS_PATHS="/etc/|/usr/|/bin/|/sbin/|/boot/|/sys/|/proc/|C:\\\\Windows|C:\\\\Program Files"; if echo "$CMD" | grep -qiE "$SYS_PATHS"; then echo "{\\"hookSpecificOutput\\":{\\"hookEventName\\":\\"PreToolUse\\",\\"permissionDecision\\":\\"ask\\",\\"permissionDecisionReason\\":\\"System path operation requires confirmation\\"}}" && exit 0; fi; else FILE=$(echo "$INPUT" | jq -r ".tool_input.file_path // .tool_input.path // empty"); SYS_PATHS="/etc/|/usr/|/bin/|/sbin/|C:\\\\Windows|C:\\\\Program Files"; if echo "$FILE" | grep -qiE "$SYS_PATHS"; then echo "{\\"hookSpecificOutput\\":{\\"hookEventName\\":\\"PreToolUse\\",\\"permissionDecision\\":\\"deny\\",\\"permissionDecisionReason\\":\\"Cannot modify system file: $FILE\\"}}" >&2 && exit 2; fi; fi; exit 0'],
timeout: 5000,
},
'danger-permission-change': {

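The protected-file hooks above build a `hookSpecificOutput` object with a `permissionDecision`. A sketch of that shape in plain Node — the field names are copied from the diff, but the `protectedFileDecision` helper and its reduced pattern list are hypothetical, for illustration only:

```javascript
// Hypothetical helper showing the permission-decision JSON the hooks emit.
// Field names (hookSpecificOutput, permissionDecision, ...) come from the diff.
function protectedFileDecision(filePath) {
  // Illustrative subset of the bash PROTECTED pattern list above.
  const PROTECTED = /\.env|\.git\/|package-lock\.json|yarn\.lock|id_rsa|\.pem$|\.key$/i;
  if (!PROTECTED.test(filePath)) return null; // no objection: hook exits 0 silently
  return {
    hookSpecificOutput: {
      hookEventName: 'PreToolUse',
      permissionDecision: 'deny',
      permissionDecisionReason: `Protected file cannot be modified: ${filePath}`,
    },
  };
}

console.log(JSON.stringify(protectedFileDecision('.env.local')));
console.log(protectedFileDecision('src/index.ts')); // null: file is not protected
```

Note the diff also changes these hooks to write their message to stderr and exit 2, rather than printing the JSON and exiting 0 as before.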
View File

@@ -0,0 +1,119 @@
// ========================================
// CCW Logo Component
// ========================================
// Line-style logo for Claude Code Workflow
import { useEffect, useState } from 'react';
import { cn } from '@/lib/utils';
interface CCWLogoProps {
/** Size of the icon */
size?: number;
/** Additional class names */
className?: string;
/** Whether to show the status dot */
showDot?: boolean;
}
/**
* Hook to get reactive theme accent color
*/
function useThemeAccentColor(): string {
const [accentColor, setAccentColor] = useState<string>(() => {
if (typeof document === 'undefined') return 'hsl(220, 60%, 65%)';
const root = document.documentElement;
const accentValue = getComputedStyle(root).getPropertyValue('--accent').trim();
return accentValue ? `hsl(${accentValue})` : 'hsl(220, 60%, 65%)';
});
useEffect(() => {
const updateAccentColor = () => {
const root = document.documentElement;
const accentValue = getComputedStyle(root).getPropertyValue('--accent').trim();
setAccentColor(accentValue ? `hsl(${accentValue})` : 'hsl(220, 60%, 65%)');
};
// Initial update
updateAccentColor();
// Watch for theme changes via MutationObserver
const observer = new MutationObserver((mutations) => {
mutations.forEach((mutation) => {
if (mutation.attributeName === 'data-theme') {
updateAccentColor();
}
});
});
observer.observe(document.documentElement, {
attributes: true,
attributeFilter: ['data-theme'],
});
return () => observer.disconnect();
}, []);
return accentColor;
}
/**
* Line-style CCW logo component
* Features three horizontal lines with a status dot that follows theme color
*/
export function CCWLogo({ size = 24, className, showDot = true }: CCWLogoProps) {
const accentColor = useThemeAccentColor();
return (
<svg
width={size}
height={size}
viewBox="0 0 24 24"
fill="none"
xmlns="http://www.w3.org/2000/svg"
className={cn('ccw-logo', className)}
style={{ color: accentColor }}
aria-label="Claude Code Workflow"
>
{/* Three horizontal lines - line style */}
<line
x1="3"
y1="6"
x2="18"
y2="6"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
/>
<line
x1="3"
y1="12"
x2="15"
y2="12"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
/>
<line
x1="3"
y1="18"
x2="12"
y2="18"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
/>
{/* Status dot - follows theme color via currentColor */}
{showDot && (
<circle
cx="19"
cy="17"
r="3"
fill="currentColor"
/>
)}
</svg>
);
}
export default CCWLogo;
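The fallback logic in `useThemeAccentColor` can be summarized as a plain function — extracted here for illustration; `resolveAccent` is not an export of the component:

```javascript
// Mirrors the component's accent resolution: read --accent from CSS,
// wrap it in hsl(), and fall back to the default when the variable is unset.
function resolveAccent(getPropertyValue) {
  const raw = (getPropertyValue('--accent') || '').trim();
  return raw ? `hsl(${raw})` : 'hsl(220, 60%, 65%)';
}

// In the component this is fed by getComputedStyle(document.documentElement).
console.log(resolveAccent(() => ' 220 60% 50% '));
console.log(resolveAccent(() => ''));
```

Keeping the read-and-fallback in one place is what lets the component reuse it for both the initial `useState` value and every `MutationObserver` update on `data-theme`.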

View File

@@ -0,0 +1,6 @@
// ========================================
// Icons Index
// ========================================
// Custom icon components for CCW Dashboard
export { CCWLogo } from './CCWLogo';

View File

@@ -7,7 +7,6 @@ import { useCallback } from 'react';
import { Link } from 'react-router-dom';
import { useIntl } from 'react-intl';
import {
Workflow,
Moon,
Sun,
RefreshCw,
@@ -19,6 +18,7 @@ import {
Monitor,
SquareTerminal,
} from 'lucide-react';
import { CCWLogo } from '@/components/icons/CCWLogo';
import { cn } from '@/lib/utils';
import { Button } from '@/components/ui/Button';
import { Badge } from '@/components/ui/Badge';
@@ -72,11 +72,11 @@ export function Header({
{/* Logo / Brand */}
<Link
to="/"
className="flex items-center gap-2 text-lg font-semibold text-primary hover:opacity-80 transition-opacity"
className="flex items-center gap-2 text-lg font-semibold hover:opacity-80 transition-opacity"
>
<Workflow className="w-6 h-6" />
<span className="hidden sm:inline">{formatMessage({ id: 'navigation.header.brand' })}</span>
<span className="sm:hidden">{formatMessage({ id: 'navigation.header.brandShort' })}</span>
<CCWLogo size={24} />
<span className="hidden sm:inline text-primary">{formatMessage({ id: 'navigation.header.brand' })}</span>
<span className="sm:hidden text-primary">{formatMessage({ id: 'navigation.header.brandShort' })}</span>
</Link>
{/* A2UI Quick Action Button */}

View File

@@ -105,15 +105,15 @@ const eventConfig: Record<
// Event label keys for i18n
const eventLabelKeys: Record<HookEvent, string> = {
SessionStart: 'hooks.events.sessionStart',
UserPromptSubmit: 'hooks.events.userPromptSubmit',
SessionEnd: 'hooks.events.sessionEnd',
SessionStart: 'specs.hook.event.SessionStart',
UserPromptSubmit: 'specs.hook.event.UserPromptSubmit',
SessionEnd: 'specs.hook.event.SessionEnd',
};
// Scope label keys for i18n
const scopeLabelKeys: Record<HookScope, string> = {
global: 'hooks.scope.global',
project: 'hooks.scope.project',
global: 'specs.hook.scope.global',
project: 'specs.hook.scope.project',
};
/**
@@ -136,7 +136,7 @@ export function HookCard({
variant: 'default' as const,
icon: <Zap className="h-3 w-3" />,
};
const eventLabel = formatMessage({ id: eventLabelKeys[hook.event] || 'hooks.events.unknown' });
const eventLabel = formatMessage({ id: eventLabelKeys[hook.event] || 'specs.hook.event.SessionStart' });
const scopeIcon = hook.scope === 'global' ? <Globe className="h-3 w-3" /> : <Folder className="h-3 w-3" />;
const scopeLabel = formatMessage({ id: scopeLabelKeys[hook.scope] });
@@ -194,7 +194,7 @@ export function HookCard({
disabled={actionsDisabled}
className="ml-4"
>
{formatMessage({ id: 'hooks.actions.install' })}
{formatMessage({ id: 'specs.hook.install' })}
</Button>
</div>
</CardContent>
@@ -256,7 +256,7 @@ export function HookCard({
<DropdownMenuContent align="end">
<DropdownMenuItem onClick={(e) => handleAction(e, 'edit')}>
<Edit className="mr-2 h-4 w-4" />
{formatMessage({ id: 'hooks.actions.edit' })}
{formatMessage({ id: 'specs.hook.edit' })}
</DropdownMenuItem>
<DropdownMenuSeparator />
<DropdownMenuItem
@@ -264,7 +264,7 @@ export function HookCard({
className="text-destructive focus:text-destructive"
>
<Trash2 className="mr-2 h-4 w-4" />
{formatMessage({ id: 'hooks.actions.uninstall' })}
{formatMessage({ id: 'specs.hook.uninstall' })}
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>

Some files were not shown because too many files have changed in this diff.